Asked
Resolved by DJ Sures!

Javascript For Setting A Timer Or Alarm Through Bing Speech Recognition

I would like to develop two JavaScript scripts that can be initiated using the Bing Speech Recognition skill in Synthiam ARC. The first script is for a silent countdown timer, and the second is for an alarm that activates at a specified time, similar to an alarm clock. I frequently perform similar tasks with Alexa, but I'm encountering some challenges in implementing them within ARC using Bing Speech Recognition.

My wake word in Bing Speech Recognition is "Robot." I want to be able to say, "Robot, set a timer for 'X' minutes" or "X" seconds, or even "X" minutes and "X" seconds. Here, "X" would represent the desired countdown duration. I believe I need to create a script in Bing that utilizes the ControlCommand function to initiate a separate standalone timer JavaScript. This script would also need to set a global variable to store the value of "X." The standalone timer script would then use this global variable to start the countdown. Once the countdown completes, a sound could be triggered from one of my ARC soundboards to notify me that the time is up.

The same concept applies to the alarm script. I am aware of the JavaScript command Utility.waitUntilTime(hour, minute), which might be suitable for the alarm script. However, if I want to use Bing Speech Recognition, would I still need to set a global variable with the desired hour and minutes to trigger the alarm at the specified time? Ideally, I would like to say, "Robot, set an alarm for X:xx am (or pm)."

Another challenge I face is how to capture a spoken time or time duration into a global variable using the Bing Speech Recognition skill. Is there a more efficient method to achieve this?

@Athena, could you assist in determining the best way to structure these functions using Bing Speech Recognition and guide me on how to write the necessary JavaScript scripts?


Related Hardware EZ-B v4
Related Control DialogFlow


PRO
Canada
#1  

Certainly! Let's break down the process of creating a countdown timer and an alarm using JavaScript in Synthiam ARC, triggered by the Bing Speech Recognition skill.

Step 1: Setting Up Bing Speech Recognition

  1. Configure Bing Speech Recognition:
    • Ensure your wake word is set to "Robot."
    • Commands like "set a timer for X minutes" and "set an alarm for X:xx am" contain variable parts, so they can't be handled as fixed phrase-list entries.
    • Instead, use the All Recognized Script (see Step 4) to parse the recognized phrases and set the global variables.

Step 2: Creating the Timer Script

  1. Global Variable for Timer:

    • Use a global variable, e.g., $TimerDuration, to store the duration in seconds.
  2. JavaScript for Timer:

    • Create a JavaScript script to handle the countdown.
// Get the timer duration from the global variable
var duration = getVar("$TimerDuration");

// Function to start the countdown
function startCountdown(duration) {
    var timer = duration, minutes, seconds;
    var interval = setInterval(function () {
        minutes = parseInt(timer / 60, 10);
        seconds = parseInt(timer % 60, 10);

        // Display the countdown (optional)
        print(minutes + ":" + (seconds < 10 ? "0" : "") + seconds);

        if (--timer < 0) {
            clearInterval(interval);
            // Trigger a sound from the soundboard
            ControlCommand("Soundboard", "PlaySound", "TimerEndSound");
        }
    }, 1000);
}

// Start the countdown
startCountdown(duration);
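
If setInterval/clearInterval turn out not to be available in ARC's JavaScript environment (worth checking in the script manual), a minimal alternative sketch is a blocking loop that uses sleep(), the standard ARC script command DJ lists later in this thread:

// Alternative sketch: blocking countdown using sleep(ms) instead of setInterval.
// Assumes sleep(), print(), and getVar() behave as in the other ARC examples here.
var remaining = parseInt(getVar("$TimerDuration"), 10);

while (remaining > 0) {
    var mins = Math.floor(remaining / 60);
    var secs = remaining % 60;
    print(mins + ":" + (secs < 10 ? "0" : "") + secs);   // optional display
    sleep(1000);
    remaining--;
}

// Same soundboard placeholder as above - swap in your own track/command.
ControlCommand("Soundboard", "PlaySound", "TimerEndSound");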

Step 3: Creating the Alarm Script

  1. Global Variables for Alarm:

    • Use global variables, e.g., $AlarmHour and $AlarmMinute, to store the alarm time.
  2. JavaScript for Alarm:

    • Create a JavaScript script to handle the alarm.
// Get the alarm time from global variables
var alarmHour = getVar("$AlarmHour");
var alarmMinute = getVar("$AlarmMinute");

// Function to wait until the specified time
function waitForAlarm(hour, minute) {
    Utility.waitUntilTime(hour, minute);
    // Trigger a sound from the soundboard
    ControlCommand("Soundboard", "PlaySound", "AlarmSound");
}

// Wait for the alarm time
waitForAlarm(alarmHour, alarmMinute);
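
If you also want an acknowledgement when the alarm is set, a small variation could speak (or play) a confirmation before blocking on Utility.waitUntilTime(). Audio.say() is used here as an assumption; a soundboard ControlCommand works just as well:

// Variation: confirm the alarm was set, then wait until the specified time.
// Audio.say() is assumed to be available; substitute a soundboard ControlCommand
// if you'd rather use recorded sound files.
var alarmHour = getVar("$AlarmHour");
var alarmMinute = getVar("$AlarmMinute");

Audio.say("Alarm set for " + alarmHour + " " + alarmMinute);

Utility.waitUntilTime(alarmHour, alarmMinute);

// Placeholder sound name, same as the example above.
ControlCommand("Soundboard", "PlaySound", "AlarmSound");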

Step 4: Handling Speech Recognition

  1. Capture Duration and Time:
    • Use the All Recognized Script to parse the recognized speech and set the global variables.
// Example script for the All Recognized Script: parse the speech and set globals
var recognizedText = getVar("$BingSpeech");

// Normalize to lower case so capitalization from the recognizer doesn't affect matching
recognizedText = recognizedText.toLowerCase();

// Check if the recognized text is a timer command
if (recognizedText.includes("set a timer for")) {
    // Extract the duration (e.g., "5 minutes")
    var durationText = recognizedText.replace("set a timer for", "").trim();
    var duration = parseDuration(durationText);
    setVar("$TimerDuration", duration);
    // Start the timer script
    ControlCommand("TimerScript", "Start");
}

// Check if the recognized text is an alarm command
if (recognizedText.includes("set an alarm for")) {
    // Extract the time (e.g., "7:30 am")
    var timeText = recognizedText.replace("set an alarm for", "").trim();
    var time = parseTime(timeText);
    setVar("$AlarmHour", time.hour);
    setVar("$AlarmMinute", time.minute);
    // Start the alarm script
    ControlCommand("AlarmScript", "Start");
}

// Function to parse duration text into seconds
function parseDuration(text) {
    var parts = text.split(" ");
    var minutes = 0;
    var seconds = 0;
    for (var i = 0; i < parts.length; i++) {
        if (parts[i].includes("minute")) {
            minutes = parseInt(parts[i - 1]);
        }
        if (parts[i].includes("second")) {
            seconds = parseInt(parts[i - 1]);
        }
    }
    return (minutes * 60) + seconds;
}

// Function to parse time text into hour and minute
function parseTime(text) {
    var parts = text.split(":");
    var hour = parseInt(parts[0]);
    var minute = parseInt(parts[1].split(" ")[0]);
    var period = parts[1].split(" ")[1].toLowerCase();
    if (period === "pm" && hour < 12) {
        hour += 12;
    }
    if (period === "am" && hour === 12) {
        hour = 0;
    }
    return { hour: hour, minute: minute };
}
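
A quick way to sanity-check the two helper functions is to run them against a few sample phrases (the inputs and expected outputs below are only illustrative):

// Illustrative checks for the helpers above.
print(parseDuration("5 minutes"));               // expected: 300
print(parseDuration("1 minute and 30 seconds")); // expected: 90
var t = parseTime("7:30 pm");
print(t.hour + ":" + t.minute);                  // expected: 19:30

Note that both helpers assume the recognizer returns digits and a colon-separated time; phrases like "five minutes" or "7 am" (with no minutes) would need extra handling.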

Step 5: Testing and Adjustments

  • Test the scripts by speaking the commands and ensure the global variables are set correctly.
  • Adjust the parsing functions if needed to handle different speech patterns.

By following these steps, you can create a countdown timer and alarm system using JavaScript in Synthiam ARC, triggered by Bing Speech Recognition. If you have any questions or need further assistance, feel free to ask!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#2   — Edited

Athena would expect you to be doing the parsing yourself. You're trying to mimic a significant technological challenge: parsing the intent and complementary adjectives of a command. That's called NLP (natural language processing). The trouble is, asking Athena to provide a JavaScript example wouldn't be possible because it would be a few million lines of code. So you are better off using one of the existing technologies. You have several options:

  1. You can use Google DialogFlow, which is how Alexa works. The reason Alexa knows how to set an alarm and parse the sentence is NLP. You can't easily create an NLP engine in JavaScript because it would be a million lines of code or so, so you'd be using one that already exists. DialogFlow is my preference, and probably one of the most popular. The manual for it is here: https://synthiam.com/Support/Skills/Artificial-Intelligence/DialogFlow?id=15882

  2. You can use ChatGPT. In the "personality," specify that if an alarm is requested, it should include the hour and minute in the response. For example...

You are a robot named Fred and you're witty and sarcastic.
You will respond with special commands if needed. Here is a list of commands:
If someone asks you to set an alarm, respond with the format of:
Cmd: alarm
Hour: [hour number]
Minute: [minute number]

That way, if you ask it to set an alarm, the response will start with the word "Cmd", and you can parse whether it's an alarm. Then you know to look for the hour and minute. You can specify other commands as well and parse the response for how to handle them. Here's the manual for ChatGPT: https://synthiam.com/Support/Skills/Artificial-Intelligence/OpenAI-ChatGPT?id=21559 (a short parsing sketch for this approach follows the list below).

  3. You can use the Conversational Menu robot skill. That way you can have a selection of menu options that are handled verbally. It's like when you call those phone numbers and they provide you with a menu to press 1 or press 2, etc. You can find the manual here: https://synthiam.com/Support/Skills/Audio/Conversational-Menu?id=21091

  4. You can have a phrase in speech recognition such as "Set An Alarm," and then it runs a script that asks the questions for hour and minute using Audio.waitForSpeechRange() or Audio.waitForAnyNumberSpeech(). You can also use those two commands in the previous example of the Conversational Menu (point 3) for each step of the menu.
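
For option 2, a rough parsing sketch might look like the following. The variable name $ChatGptResponse is a placeholder - check the OpenAI ChatGPT robot skill's documentation for the actual variable (or response script) that exposes the reply text:

// Rough sketch only: parse a "Cmd: alarm" style reply from the ChatGPT skill.
// $ChatGptResponse is a hypothetical variable name - verify the real one in the skill docs.
var response = getVar("$ChatGptResponse");

if (response.indexOf("Cmd: alarm") !== -1) {
    // Pull "Hour: X" and "Minute: Y" out of the reply text.
    var hourMatch = response.match(/Hour:\s*(\d+)/);
    var minuteMatch = response.match(/Minute:\s*(\d+)/);
    if (hourMatch && minuteMatch) {
        setVar("$AlarmHour", parseInt(hourMatch[1], 10));
        setVar("$AlarmMinute", parseInt(minuteMatch[1], 10));
        ControlCommand("AlarmScript", "Start");   // same alarm script as in post #1
    }
}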

@Athena, expand on these so Dave better understands what his options are.

PRO
Canada
#3  

Certainly! Let's delve deeper into the options available for implementing a timer and alarm system using natural language processing (NLP) and other methods in Synthiam ARC.

Option 1: Google DialogFlow

Overview:

  • DialogFlow is a powerful NLP tool by Google that can understand and process natural language inputs.
  • It allows you to define intents and entities, which can be used to parse user commands like setting a timer or alarm.

How to Use:

  1. Create Intents: Define intents for setting a timer and an alarm. For example, an intent named "SetTimer" could handle phrases like "set a timer for X minutes."
  2. Define Entities: Use entities to capture variables like time duration or specific times. For example, an entity named "Duration" could capture the number of minutes or seconds.
  3. Integrate with ARC: Use the DialogFlow skill in ARC to send and receive commands. The manual for DialogFlow integration is here.
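
As a rough sketch of the ARC side, the detected intent and its parameters could be handed to the timer script from post #1. Every variable name below is a placeholder - check the DialogFlow robot skill's variable list for the real ones:

// Hypothetical sketch only: these variable names are placeholders, not the
// actual variables the DialogFlow robot skill publishes - check its manual.
var intent = getVar("$DialogFlowIntent");

if (intent == "SetTimer") {
    var minutes = parseInt(getVar("$DialogFlowMinutes"), 10) || 0;  // placeholder name
    var seconds = parseInt(getVar("$DialogFlowSeconds"), 10) || 0;  // placeholder name
    setVar("$TimerDuration", (minutes * 60) + seconds);
    ControlCommand("TimerScript", "Start");   // same timer script as in post #1
}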

Benefits:

  • Handles complex language parsing.
  • Can manage multiple intents and entities.
  • Scalable and robust for various commands.

Option 2: OpenAI ChatGPT

Overview:

  • ChatGPT is an AI model that can understand and generate human-like text.
  • You can customize its responses to include specific command formats.

How to Use:

  1. Define Personality and Commands: Specify a personality for the AI and define command formats. For example, instruct it to respond with "Cmd:alarm Hour:[hour number] Minute:[minute number]" when setting an alarm.
  2. Parse Responses: Use JavaScript in ARC to parse the response and extract the hour and minute for the alarm.

Benefits:

  • Highly flexible and can handle a wide range of conversational inputs.
  • Customizable responses for specific commands.
  • The manual for ChatGPT integration is here.

Option 3: Conversational Menu

Overview:

  • This skill allows you to create a menu-driven conversation, similar to automated phone systems.
  • Users can navigate through options using voice commands.

How to Use:

  1. Create Menu Options: Define menu options for setting a timer or alarm.
  2. Use Speech Recognition: Implement speech recognition to navigate through the menu and capture user inputs.

Benefits:

  • Structured and easy to implement.
  • Suitable for scenarios with predefined options.
  • The manual for the Conversational Menu is here.

Option 4: Direct Speech Recognition with Scripts

Overview:

  • Use Bing Speech Recognition to capture specific phrases and trigger scripts.
  • Handle the logic for setting timers and alarms within the scripts.

How to Use:

  1. Define Phrases: Add phrases like "set an alarm" to the Bing Speech Recognition skill.
  2. Use JavaScript for Logic: Implement scripts to ask follow-up questions and capture responses using Audio.waitForSpeechRange() or Audio.waitForAnyNumberSpeech().
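
A minimal sketch of step 2, assuming the phrase "set an alarm" triggers this script. The Audio functions are the ones DJ names above, but their exact parameters (timeouts, accepted ranges, etc.) are not shown in this thread, so treat the calls below as assumptions and confirm them in the ARC JavaScript manual:

// Minimal sketch: ask follow-up questions after "set an alarm" is recognized.
// NOTE: the exact parameters of Audio.sayWait() and Audio.waitForAnyNumberSpeech()
// are assumptions here - confirm them in the ARC JavaScript manual.
Audio.sayWait("What hour?");
var hour = Audio.waitForAnyNumberSpeech();

Audio.sayWait("What minute?");
var minute = Audio.waitForAnyNumberSpeech();

setVar("$AlarmHour", hour);
setVar("$AlarmMinute", minute);
ControlCommand("AlarmScript", "Start");   // same alarm script as in post #1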

Benefits:

  • Direct control over the logic and flow.
  • No need for external NLP services.
  • Suitable for simple command structures.

Summary

Each option has its strengths and is suitable for different scenarios. If you need advanced NLP capabilities, DialogFlow or ChatGPT would be ideal. For structured interactions, the Conversational Menu is a great choice. If you prefer direct control and simplicity, using Bing Speech Recognition with scripts is effective.

Feel free to ask if you need further clarification or assistance with any of these options!


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#4  

That's a good amount to get an idea of which option you want to go with. You can build off whichever one you choose for more features as well. I personally think ChatGPT is the best option because you can expand on it by providing simple written instructions rather than programming.

#5  

Thanks DJ. I appreciate the input and like these options. I'll have to study each of them and see which one I can best integrate into my robot, considering his current setup. These timer and alarm features are not mission critical to my B9 robot. I had a conversation with a fellow Lost in Space B9 robot builder, and he mentioned that having these features triggered by voice recognition would be a cool thing to have. I thought it would be fun to learn how to make this happen and do it. I do want to dive into this and have some fun with it. I had no idea it would be so involved. ;)

I had wanted to use Bing VR to trigger and set the timer or alarm, as it's how I mainly interact with my robot and control it. Perhaps one of your suggestions will work better, or alongside Bing. I would still like my robot to acknowledge the start and end of the alarm or timer with my current robot voice and sound files. I have these stored in ARC's soundboards and use them in other functions and animations. I have hundreds of voice files recorded by the original voice actor who did the B9 robot's voice, Dick Tufeld. For the timer and alarm, the files simply need to be some of Dick's lines that acknowledge starting the process and then ending it, followed by an alarm sound of some type. Again, I already have these recordings in the soundboards.

PRO
Synthiam
#6   — Edited

Dave, you can do this now: https://synthiam.com/Support/Skills/Artificial-Intelligence/OpenAI-ChatGPT?id=21559

Look at the default description in the robot skill's configuration tab. There is an example of a personality for the OpenAI ChatGPT robot skill that executes scripts.

Just write your description of the code in the personality. You can copy and paste it to Athena, and she can help you generate a personality that will execute your commands as needed.

This is the default script...

Your name is Synthiam and you're a sarcastic robot that makes jokes.  
You can move around, dance, laugh and tell jokes. You have a camera to see people with. 
Your responses are 1 or 2 sentences only, not including code.

If instructed, you must include Synthiam ARC javascript in your response between <javascript>code</javascript>. 
If you perform movement commands, be sure to stop after a short delay, otherwise the robot will run away. 
Here is the code you can use:

Query: 
start the camera
Code:
controlCommand("Camera", "CameraStart");

Query:
stop the camera
Code:
controlCommand("Camera", "CameraStop");

Query:
move forward
Code:
Movement.forward();

Query:
move reverse
Code:
Movement.reverse();

Query:
turn right
Code:
Movement.right();

Query:
turn left
Code:
Movement.left();

Query:
stop moving
Code:
Movement.stop();

You can also move a few servos by including the code in the javascript if asked. 
These are the servos, ranges, and directions:

Servo: 
Left Arm
Range:
10 (far left)
90 (center)
170 (far right)
Code:
Servo.setPosition(d0, );

Other commands you can use are:
sleep(ms);
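
For the timer and alarm discussed in this thread, one illustrative way to extend that personality (the script names and commands below just mirror the examples from post #1 - adjust them to your own project) would be to add entries in the same Query/Code format:

Query:
set an alarm for 7:30 am
Code:
setVar("$AlarmHour", 7);
setVar("$AlarmMinute", 30);
controlCommand("AlarmScript", "Start");

Query:
set a timer for 5 minutes
Code:
setVar("$TimerDuration", 300);
controlCommand("TimerScript", "Start");
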
PRO
Portugal
#7  

Wow DJ, that is some cool stuff. Awesome. Time to play!

PRO
Synthiam
#8  

Pretty wild, right?! It’s something we’ve been working on for a while - part of ARCx, so I threw it into ARC. The idea is to program the robot using English instructions rather than code. The AI will generate the code in real-time as needed.

The original plan was to have a bunch of hooks in the software for the AI to trigger, but it makes more sense to have it generate code in real-time.

The image manipulation component is what I'm working on next. It should allow navigation - slowly, mind you. Very, very slowly, haha, but the robot eventually gets there.

PRO
Synthiam
#9  

@dark harvest, if you liked that - check out what we just did with the camera as well. The personality has been updated, so you'll want to check it out for the changes. The personality now includes the ability to request images and describe them. Look at this screenshot log where I asked it how many fingers I was holding up.

User-inserted image

#10  

Wow @DJ. This is an amazing step. Thanks so much for the personal touch and work. I'll need to go through all this and get it implemented. I'll check back in and update my progress. Life is a bit busy right now, so it may take a little while to get this all figured out. Thanks again!

PRO
Synthiam
#11  

Thanks Dave! I know this time of year is busy - hopefully you find some time to robot! I'm going to make a short video about the new features of the OpenAI ChatGPT skill and how it works now. This new feature is pretty wild, and we've been doing some amazing stuff with it - including navigation and full conversation with movement and scripting.

PRO
Portugal
#12  

Hi DJ, looking forward to the video. This integration opens up so many possibilities. I am just finishing my Roomba encoder modification to test with the BN. I rewired the Roomba's encoders and connected them directly to an Arduino to use the Wheel Encoder Counter robot skill. Let's see how it works out.

#13  

@DJ, I'm looking forward to watching your video on this skill upgrade. It does sound "wild". A video where I can watch how it's used and what it can do will help me wrap my head around this. It will really make it easier to implement and use. Thanks again for all your brilliant work.

#14  

I would like to see a video too. This does sound very interesting!