I’m having some trouble with the Speech node, which uses the Microsoft speech recognition API. It works impressively well for the most part, but I’ve run into a problem.
At the moment, it responds well to clear, separate keywords. For example, I set the search term to “fiber” and it responds reliably when I clearly say “fiber”. Sometimes it even recognizes “fiber” in context, such as “we used a very special fiber in order to…”.
The problem comes up when the Microsoft speech recognizer fails to understand something preceding the keyword. Diction has to be absolutely perfect throughout for the recognizer to respond to the keyword: if it doesn’t understand “we used a very special”, it asks for a correction and stops listening. Even though “fiber” is said just fine, it hangs on the first part that it didn’t catch.
My question is: is there a way to disable corrections or commands, so that it just makes its best guess without hanging and waiting for a correction? I’m not looking for more accuracy, just a way to get it to give up on the parts it can’t understand. I can’t find any settings like this.
I’d really appreciate some help on this, as I’m really stuck. Any ideas floating out there?
Alternatively, is there a way to get Dragon NaturallySpeaking to input strings into V4? I got it working with the text-input fields in Processing, but there doesn’t seem to be any kind of dynamic text window in V4.
For more background: I’m looking to build a program that logs a speech delivered in front of a live audience and responds to keywords embedded within it. I would normally fake this with an on-site operator manually advancing things, but the presentation will be performed upwards of 100 times, so speech recognition is important for client usability.