Dear @Alessandro_Binetti,
though I'm Italian, I'll answer you in English for the benefit of the community.
First of all: why do you use two speech recognizers on the same screen? I guess they could interfere with each other. I normally use only one speech recognizer (in non-legacy mode) and it works "perfectly".
My Android version is 9, and I use a Lenovo pad M8 (but it also works on Mediacom, Samsung, and other tablets). Honestly, I don't know whether something has changed in newer Android versions, but with version 9 everything works fine.
I use my app in my car (I built a digital cockpit by interfacing the CAN bus to a tablet over a BT link) and I give commands to the app hands-free. In other words: I give the tablet voice commands without needing to touch the screen, so my hands never leave the steering wheel.
To achieve this I mute the beep-boop tones (the ones the speech recognizer emits when activated) by means of @Taifun's TaifunSettings extension. Then I intercept the errors, like you do, but silently, without showing them, and when error 3809 is raised I start a clock that, after 500 ms, restarts the speech recognizer autonomously. This delay is required to let the speech recognizer internally reset itself. There is therefore a "blind moment" (lasting approx. 500 ms) in which the recognizer can't hear you, but the overall feeling is that the speech recognizer works in a "continuous" mode, without the user needing to push any button.
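App Inventor expresses this in blocks rather than text, but the error-then-restart loop can be sketched language-neutrally. Below is a minimal Python simulation of that loop; the class and method names are my own invention, while the error code 3809 and the 500 ms delay come from the description above:

```python
RESTART_DELAY_MS = 500   # grace period so the recognizer can reset itself
NO_MATCH_ERROR = 3809    # error raised when the recognizer stops hearing anything

class ContinuousRecognizer:
    """Simulates the 'silent error -> wait 500 ms -> restart' loop."""

    def __init__(self):
        self.listening = True      # the recognizer is started at app launch
        self.restart_at_ms = None  # when the pending restart fires (None = no timer armed)

    def on_error(self, code, now_ms):
        # Swallow the error silently (no toast) and arm the restart clock.
        if code == NO_MATCH_ERROR:
            self.listening = False
            self.restart_at_ms = now_ms + RESTART_DELAY_MS

    def on_clock_tick(self, now_ms):
        # Clock fires: restart the recognizer once the blind moment has elapsed.
        if self.restart_at_ms is not None and now_ms >= self.restart_at_ms:
            self.restart_at_ms = None
            self.listening = True  # in App Inventor: call SpeechRecognizer.GetText again

r = ContinuousRecognizer()
r.on_error(3809, now_ms=1000)
assert not r.listening   # blind moment: recognizer is down
r.on_clock_tick(now_ms=1400)
assert not r.listening   # 500 ms not yet elapsed
r.on_clock_tick(now_ms=1500)
assert r.listening       # recognizer restarted autonomously
```

The key design point is that the restart is driven by the clock, not done inside the error event itself, which is what gives the recognizer time to reset.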
Now, if I linked my app to show you how it works in detail, it could turn into a nightmare for you: the app does so many things that finding the speech-recognizer details could be a real headache. But in the next days (unless you find the root cause of the malfunction on your own) I will write a simple app that does the job.
For now, then, Happy New Year!!!
Ciao, Ugo.
I'm working on an app for voice-guided logistics. In this scenario the app prompts the user to do something (using text-to-speech).
Here the "legacy" mode is almost perfect: the app has to recognize voice only after it prompts something like "Input quantity" or "Input lot number" ...
So I don't need to mute the prompts of the legacy voice input ...
Now I'm trying to introduce an "Alexa-style" voice recognition, in order to respond to spontaneous user questions. The user can ask the app something by using a keyword, like "Sistema orario", and the app answers "12:15 pm".
So I've used two recognizers: the input recognizer in legacy mode, and the Alexa-style one in non-legacy mode. Every time I use the legacy one, I stop the non-legacy recognizer; when the text is recognized, I restart it.
The app works fine, but sometimes I see the error messages generated by the non-legacy recognizer.
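The hand-over between the two recognizers described above can be sketched as a small state machine. This is a hypothetical Python simulation (the class and flag names are mine; the two flags stand in for the legacy and non-legacy SpeechRecognizer components):

```python
class TwoRecognizerCoordinator:
    """Hand-over between a continuous (non-legacy) recognizer and a
    legacy prompt recognizer, as described in the post above."""

    def __init__(self):
        self.continuous_listening = True  # Alexa-style recognizer runs by default
        self.legacy_listening = False

    def start_prompt(self):
        # Before prompting e.g. "Input quantity", stop the continuous recognizer...
        self.continuous_listening = False
        # ...then start the legacy one (in the real app, only after a short delay,
        # since stopping a recognizer is not instantaneous).
        self.legacy_listening = True

    def on_prompt_result(self, text):
        # The legacy recognizer returned its text: restart the continuous one.
        self.legacy_listening = False
        self.continuous_listening = True
        return text
```

One likely source of stray error messages is the window where the continuous recognizer is being stopped but has not yet fully shut down; that is why a delay between stop and start matters.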
Dear @Alessandro_Binetti,
if I understand your needs correctly: a continuous-mode recognizer should work like "Alexa", able to catch sudden enquiries, while when the user has to tell the app specific information (on the app's prompt) the other speech recognizer takes over (after the first one has been switched off). It doesn't seem so easy to me, but anyway, please always remember that switching a speech recognizer off or on does not happen immediately: I leave more than 500 ms (in the attached .aia, 1000 ms) between an off and an on command before being sure that it works.
Anyway, what I've attached works, and you can use it as an example.
Once started, saying "led on" makes a LED appear, and saying "led off" makes it disappear.
Saying "esci" or "finito" makes the app exit.
Saying "luce alta", "luce media" or "luce bassa" varies the brightness.
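The command handling in the attached .aia boils down to a keyword-to-action lookup. A hypothetical Python sketch of that dispatch (the action names and the brightness values are my assumptions, not taken from the .aia; the keywords are the ones listed above):

```python
# Hypothetical dispatch table mirroring the voice commands listed above.
# Values: (action, parameter); brightness levels are illustrative guesses.
COMMANDS = {
    "led on":     ("led", True),
    "led off":    ("led", False),
    "esci":       ("exit", None),
    "finito":     ("exit", None),
    "luce alta":  ("brightness", 1.0),
    "luce media": ("brightness", 0.5),
    "luce bassa": ("brightness", 0.2),
}

def dispatch(recognized_text):
    """Map the recognizer's result to an (action, value) pair, or None if unknown."""
    return COMMANDS.get(recognized_text.strip().lower())

assert dispatch("Led ON") == ("led", True)
assert dispatch("esci") == ("exit", None)
assert dispatch("hello") is None   # unrecognized phrases are simply ignored
```

Normalizing the recognized text (trim and lowercase) before the lookup matters, because the recognizer's capitalization is not guaranteed.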
Before using it in a big-bang mode, you'd better have a look at the blocks for more details.
Should you need any explanation, don't hesitate to write to me.
PS: please be aware that to enable the brightness settings you have to go to the app's permissions (in the Android settings) and enable this feature manually.
Also allow the app to use the microphone when it starts for the first time.
PPS: thanks to @Taifun and @WatermelonIce for the extensions I use in this example.
I expected the screen's error handler to catch all the exceptions ... but I still see the error toasts ... it seems that the error handler doesn't catch any error raised by the voice recognizer.
I've tried putting your code into another screen, not the main screen, and the errors appear exactly as if the error handler didn't exist. Same behaviour as my app.
I think it's an App Inventor bug ...
Fortunately the app works fine, even if the toast appears every now and then.