
I would like to develop two JavaScript scripts that can be initiated using the Bing Speech Recognition skill in Synthiam ARC. The first script is for a silent countdown timer, and the second is for an alarm that activates at a specified time, similar to an alarm clock. I frequently perform similar tasks with Alexa, but I'm encountering some challenges in implementing them within ARC using Bing Speech Recognition.
My wake word in Bing Speech Recognition is "Robot." I want to be able to say, "Robot, set a timer for 'X' minutes" or "X" seconds, or even "X" minutes and "X" seconds. Here, "X" would represent the desired countdown duration. I believe I need to create a script in Bing that utilizes the ControlCommand
function to initiate a separate standalone timer JavaScript. This script would also need to set a global variable to store the value of "X." The standalone timer script would then use this global variable to start the countdown. Once the countdown completes, a sound could be triggered from one of my ARC soundboards to notify me that the time is up.
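A minimal sketch of the parsing step described above: turning a spoken phrase like "set a timer for 5 minutes and 30 seconds" into a total number of seconds. This is pure JavaScript, so it should drop into an ARC script unchanged; the assumption is that the recognized phrase would come from the variable the Bing Speech Recognition skill fills in (commonly `$BingSpeech`, but verify the variable name in your skill's configuration).

```javascript
// Convert a spoken timer phrase into a total countdown in seconds.
// Handles "X minutes", "X seconds", and "X minutes and X seconds".
function phraseToSeconds(phrase) {
  var total = 0;
  var minMatch = phrase.match(/(\d+)\s*minute/i); // e.g. "5 minutes"
  var secMatch = phrase.match(/(\d+)\s*second/i); // e.g. "30 seconds"
  if (minMatch) total += parseInt(minMatch[1], 10) * 60;
  if (secMatch) total += parseInt(secMatch[1], 10);
  return total; // 0 if no duration was recognized
}
```

In the Bing script, the result could then be stored in a global (for example `setVar("$TimerSeconds", phraseToSeconds(getVar("$BingSpeech")))`) before using ControlCommand to start the standalone timer script; the variable name `$TimerSeconds` here is just an illustrative choice.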
The same concept applies to the alarm script. I am aware of the JavaScript command Utility.waitUntilTime(hour, minute), which might be suitable for the alarm script. However, if I want to use Bing Speech Recognition, would I still need to set a global variable with the desired hour and minute to trigger the alarm at the specified time? Ideally, I would like to say, "Robot, set an alarm for X:xx am (or pm)."
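For the alarm case, a sketch of how a spoken "X:xx am/pm" phrase could be converted into the 24-hour values that Utility.waitUntilTime(hour, minute) expects. Again this is plain JavaScript; where the phrase comes from (e.g. a `$BingSpeech` variable) is an assumption to check against your skill's configuration.

```javascript
// Parse "set an alarm for 7:30 pm" into { hour: 19, minute: 30 }.
// Returns null if no recognizable time is found in the phrase.
function parseAlarmTime(phrase) {
  var m = phrase.match(/(\d{1,2}):(\d{2})\s*(am|pm)/i);
  if (!m) return null;
  var hour = parseInt(m[1], 10) % 12;   // "12" becomes 0 here...
  var minute = parseInt(m[2], 10);
  if (m[3].toLowerCase() === "pm") hour += 12; // ...so 12 pm -> 12, 12 am -> 0
  return { hour: hour, minute: minute };
}
```

The result could then be stored in globals and passed to Utility.waitUntilTime in the standalone alarm script.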
Another challenge I face is how to capture a spoken time or time duration into a global variable using the Bing Speech Recognition skill. Is there a more efficient method to achieve this?
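One way to picture the hand-off between the two scripts is with globals: the Bing Speech script writes the parsed value, and the standalone timer script reads it back. ARC scripts typically share data through setVar()/getVar(); in this sketch those two functions are stubbed with a plain object so the pattern can run anywhere, but in ARC you would use the built-in versions and the variable name `$TimerSeconds` is only an example.

```javascript
// Stand-ins for ARC's global variable store, so this sketch is self-contained.
var globals = {};
function setVar(name, value) { globals[name] = value; }
function getVar(name) { return globals[name]; }

// --- Bing Speech script side ---
// Store the parsed duration, then (in ARC) start the timer script,
// e.g. with something like ControlCommand("Script", "ScriptStart", ...).
setVar("$TimerSeconds", 90);

// --- Standalone timer script side ---
// Read the duration back; the countdown loop (sleep + decrement) and the
// soundboard ControlCommand would follow from here.
var remaining = getVar("$TimerSeconds");
```

The same pattern would carry the alarm hour and minute across to the alarm script.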
Could you assist in determining the best way to structure these functions using Bing Speech Recognition and guide me on how to write the necessary JavaScript scripts?
@dark harvest, if you liked that - check out what we just did with the camera as well. The personality has been updated so you'll want to check it out for the changes. But the personality now includes the ability for it to request images and describe them. Look at this screenshot log where I asked it how many fingers I was holding up.
Wow @DJ. This is an amazing step. Thanks so much for the personal touch and work. I'll need to go through all this and get it implemented. I'll check back in and update my progress. Life is a bit busy right now so it may take a little while to get this all figured out. Thanks again!
Thanks Dave! I know this time of year is busy - hopefully you find some time to robot! I'm going to make a short video about the new features of the OpenAI ChatGPT skill and how it works now. This new feature is pretty wild and we've been doing some amazing stuff with it - including navigation and full conversation with movement and scripting.
Hi DJ, looking forward to the video. This integration opens up so many possibilities. I am just finishing my Roomba encoder modification to test with the BN. I rewired the Roomba's encoders and connected them directly to an Arduino to use the Wheel Encoder Counter robot skill. Let's see how it works out.
@DJ, I'm looking forward to watching your video on this skill upgrade. It does sound "wild". A video where I can watch how it's used and what it can do will help me wrap my head around this. It will really make it easier to implement and use. Thanks again for all your brilliant work.
I would like to see a video too. This does sound very interesting!