Asked
Resolved by DJ Sures!

Robot Skills Reach Daily Limit

I am new to ARC.  I purchased the premium subscription but was disappointed that it keeps kicking me out of skills as I'm building because I reach daily limits.  How do these limits work? Is there a way to get unlimited time on these skills? The latest issue was with Cognitive Face, but it also happened with Bing voice recognition.

I got around that by opening an Azure account and setting up Advanced Speech Recognition with my own API key... Not sure how to handle Cognitive Face, though...


Related Hardware EZ-B v4


PRO
USA
#1  

There shouldn't be daily limits for Pro and skills. Make sure you are connected to the internet so your subscription can update. Unlimited skill use in Pro... unless something changed?

PRO
Synthiam
#2   — Edited

I believe the user is asking about daily quota limits for 3rd-party transactions. The robot skills you have listed use 3rd-party services that normally cost money per transaction. Synthiam provides a courtesy daily quota, shared across all users, to experiment with those 3rd-party servers under your Pro subscription account. Every time you use one of those robot skills (Cognitive Face, Cognitive Vision, Cognitive Sentiment, Bing Speech), a paid transaction occurs with a Microsoft server. The daily cap ensures users are not overspending on our shared courtesy account. See the manual pages for those robot skills to read more.

In summary, those robot skills are given access to Microsoft's paid services as a courtesy for experimenting.

In the meantime, the one robot skill you have selected for the question is "Advanced Speech Recognition". You might have meant "Bing Speech Recognition", because they're similar but different. You will notice in the manuals for these robot skills that Advanced Speech Recognition requires your own license key registered with Microsoft. This allows unlimited transactions with their server (well, as many as your wallet can pay for :)).

If your application requires a dedicated license, let us know and we'll provide you with a robot skill where you can get your own license from Microsoft and have as many transactions as you require :D

PRO
Synthiam
#3  

Our support team just asked me about this question, which I believe you may also have sent through the support contact. I looked into it, and it appears that having your own license for speech recognition is super affordable. This is also because we do the VAD (voice activity detection) locally on the computer rather than sending it to the server. The pricing model is here: https://azure.microsoft.com/en-us/pricing/details/cognitive-services/speech-services/

It appears the cost is $1 USD per hour of speech recognition. If the average speech recognition phrase is 3 seconds long, that's 1,200 recognized phrases for $1 USD.
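(For the math: 3,600 seconds per hour ÷ 3 seconds per phrase = 1,200 phrases per dollar at that flat hourly rate.)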

#4  

Thank you for the feedback. I was able to go to Azure and get my own API Key for speech recognition, but was surprised at how quickly I was getting booted out of cognitive face...  Since I am still learning, it feels like I can only use it for a very short period of time and have to stop because of the limit. I may be doing something wrong where it checks cognitive face every single loop and reaches a maximum limit faster than it should...

This is literally my very first attempt at writing an EZ-Script so it wouldn't surprise me one bit :)

This is how I tried implementing Cognitive Face in my robot:

controlCommand("Cognitive Face", "Detect") if ($FaceEmotion_Happiness>80) controlCommand("Auto Position", "AutoPositionFrameJump", "Smile") elseif ($FaceEmotion_Sadness>80) controlCommand("Auto Position", "AutoPositionFrameJump", "Frown") else controlCommand("Auto Position", "AutoPositionFrameJump", "Neutral") endif

PRO
Synthiam
#5  

Where was that code inserted?

1. Was it in the "Tracking Start" of the camera skill?

2. Or was it in a script of its own, looping over and over?

My recommendation is to add it to the camera skill's "Tracking Start" script. Then, enable Face as a tracking type. That way, every time it sees a face, it will execute the script.

#6  

Thank you.  I had the script in the cognitive face config. I will move it over and try again!

PRO
Synthiam
#7   — Edited

Okay - don't copy the whole thing over. Just copy this part into the camera skill

controlCommand("Cognitive Face", "Detect")

Then, put this part in the cognitive face config...

if ($FaceEmotion_Happiness>80)
  controlCommand("Auto Position", "AutoPositionFrameJump", "Smile")
elseif ($FaceEmotion_Sadness>80)
  controlCommand("Auto Position", "AutoPositionFrameJump", "Frown")
else 
  controlCommand("Auto Position", "AutoPositionFrameJump", "Neutral")
endif

Then, turn on the FACE tracking type, so every time it sees a face it will instruct the Cognitive Face skill to detect.

#8  

This is great! You are helping me tremendously to understand how to work within this modular skills environment! I've done a lot of Arduino programming but never moved into this type of environment before. Another quick question to help me wrap my head around this... If the camera is running at 15 fps and the start script waits 15 frames before it executes, I think that means that Cognitive Face will receive a request every second unless it loses tracking. Is that a reasonable amount of requests, or will that still cause me to overuse the free service and get locked out fairly quickly? This is just a first test, so all I am trying to do is get the robot to mimic my expression... sad, happy, or neutral. I still can't get over how close I got in one lunch break! I can't wait to learn ARC better!

PRO
Synthiam
#9   — Edited

Glad it helps you :) It's difficult for me to type right now because I have lost a pretty big chunk of meat off my thumb the other day LOL. So it's not operating very well at the moment. This type of programming in ARC is event programming. It's how applications on multi-threaded operating systems work, such as Windows, Linux, Android, or iOS. The events are executed in their own threads and can trigger other events, etc...

Regarding the camera TRACKING START question, the start script will only run once per detection. It takes 15 frames for the detection to be valid, and then the script runs. The script will not run again unless tracking is lost. So it's not a loop for every frame - it runs just once, when the tracking has been "identified".

So in your case, the robot will need to lose sight of your face before it will track (and detect) again.

If you wanted the robot to detect you in real time, then what I would do is modify the camera Tracking Start script to loop every 5 seconds or so, because that script will stop when tracking has stopped. You can also add a camera TRACKING STOP script that speaks something like "hey, where did you go?"

If you wanted constant detection of your emotion once the face is detected, put this in the camera TRACKING START script... I'll give an EZ-Script example, but I generally prefer JavaScript or Blockly because it's much faster.


:loop
controlCommand("Cognitive Face", "Detect")
sleep(5000)
goto(loop)
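With the 5-second sleep, that's at most one Detect transaction every 5 seconds while a face is tracked (about 12 per minute, or 720 per hour), so the load on the shared courtesy quota - or your own Azure key - stays predictable. Stretch the sleep out if you want fewer transactions.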

And in the camera TRACKING STOP put something like this...


Say("Hey where did you go?")

Or, if you want it to talk out of the EZ-B's speaker, go...


SayEZB("Hey where did you go?")

#10  

Quote:

I have lost a pretty big chunk of meat off my thumb the other day
You should keep better track of things like that. Losing something like that is a hard thing to get over. :p

Actually I appreciate you overcoming your pain to help with this issue. I am learning also. I'll be installing a camera in my robot soon and this is very helpful. Thanks.

#11  

Thank you DJ. The issue is now resolved.