
PRO
Synthiam
#2  

The poster might be asking because he's coming from the mindset of writing a traditional program.

Much like newer design suites, you don't make programs with ARC. You use preexisting modules and link them together rather than reinventing the wheel each time.

So if you just want to move a servo, it can be done with the Python command found in Jeremie's link. Moving a servo by script can occasionally be useful, but generally it isn't necessary, because existing robot skills can move the servos for you - such as the Auto Position skill, or even the Camera robot skill for tracking an object with servos.
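For reference, a minimal sketch of that one-liner, assuming the Servo namespace from the ARC Python API - the port d0 and the position value are examples, so check them against your own project:

# move the servo on port D0 to position 90 (ARC servo positions range 1-180)
Servo.setPosition(d0, 90)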

I recommend the Getting Started guide because it explains more.

While Python and other languages exist in ARC, they are there to help fill the gaps between robot skills - for the specific programmatic things you need the robot to accomplish.

#3  

As per the previous helpful comments, we agree that it is an excellent time to start with the Getting Started Guide here: https://synthiam.com/Support/Get-Started/how-to-make-a-robot/plan-a-robot.

It will help you choose robot skills for your robot's configuration or help you plan a robot from scratch.

Australia
#4  

Thank you guys for the reply.

I have read all the topics you recommended but could not find how to link my Python script (with ARC Python API functions) with ARC. My understanding is that I have to write the script and somehow let ARC execute it. Where do I put my own script?

By the way, I have already used existing robot skills to move the servos. The reason I want to use Python is for specific robot tasks, but I do not know how to do it.

PRO
Synthiam
#5  

Python is different for every robot framework.

As Jeremie said, you could write Python in a Script robot skill if you need to accomplish something that doesn't already exist in another robot skill.

Can you tell us what your Python script does, or what you're wanting to achieve? Or paste the script for us to see?

The concept of ARC, like other creative design tools, is that you reuse existing modules to accomplish something rather than writing all the code for what already exists. You can use Python to link the logic between robot skills, as in the sketch below.
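A hedged illustration of that glue role - this sketch reuses two things that appear later in this thread ($FaceName, which a face-recognition robot skill sets, and Audio.say from the ARC audio API); treat the variable and skill specifics as placeholders for whatever your project actually uses:

# glue logic between robot skills:
# greet a person once a face-recognition skill has identified them
name = getVar("$FaceName")
if name != "":
    Audio.say("Hello " + name)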

Australia
#6  

Python as a programming language is the same for any computer, microcontroller, robot platform, etc. It is, as you know, a portable language. How you implemented it on the ARC framework is a different story. If I know Python programming, I do not need to learn Python for your platform. As I understand it, I need to know the ARC Python API and how to use it. I had a look at the Python API functions and found some of them I need.

Now, my "robot" description: I am creating a human head with speech and face recognition. It will make conversation with people around it. It is a big task. I spent a lot of time studying different robot platforms and software frameworks and found EZ-Robot and ARC the best for that task. The EZ-Robot hardware for my project is already on my desk, and ARC is on my PC. I have spent a week (not enough yet!) studying ARC and "playing" with the hardware, just to become familiar with the tools for my project.

What I need to do first (not priority 1) is to implement the camera skills so that my "head" can recognize a face among the faces stored in a PC database. Next, by talking to the recognized person, my "head" has to acquire crucial data about that person for further conversation when it meets that person again. That data has to be stored in the persons database.

As I said, the first step is to implement the face recognition task. I did that already on the Raspberry Pi with OpenCV and Python. That is probably what caused my confusion about the Python implementation in ARC. I just wanted to make a simple script to move servos for the sake of learning the Python API. My real task is face recognition using EZ-Robot hardware and ARC software. The final task is the intelligent head with EZ-Robot and ARC.

Hope it is enough for now. Also, I hope you will help me during my "journey". PS: The topic title "Python servo Control" now looks misleading (at least). Since I have disclosed my project, I think I should change it to "Humanoid head robot" or similar.

PRO
USA
#7  

Quote:

What I need to do first (not priority 1) is to implement the camera skills so that my "head" can recognize a face among the faces stored in a PC database.

https://www.ez-robot.com/learn-robotics-getting-started-humanoid-robot-kit.html

I don't see Python in the list, but it must be similar.

Camera Input

  1. Introduction to the EZ-B Camera
  2. Face Detection with RoboScratch
  3. Face Detection with Blockly
  4. Face Detection with EZ-Script
  5. Color Tracking with Servos
  6. Color Tracking with Movement
  7. Detecting Multiple Colors
  8. Line Following with Roli, AdventureBot and Shell-E
  9. Vision - Object Training & Recognition
  10. Glyphs to Control Robot Movement
  11. Detecting Glyphs & Augmented Reality
  12. QR Code Detect
  13. Microsoft Cognitive Emotion
  14. Microsoft Cognitive Vision
PRO
Synthiam
#8   — Edited

Great - thanks for sharing what your objective is. That can be achieved easily with a few mouse clicks, and I'll tell you how now.

But first, I want to clarify my statement that Python is different per robot framework, so there's no confusion. There are libraries and modules for Python to interact with physical hardware (i.e., servos, switches, cameras, etc.), and every framework uses a different method of communicating with the hardware. This has nothing to do with changing Python's syntax - rather, the API for each robot framework accepts different commands to move servos. My inquiry about your program was to understand which framework was being used to move servos, so the calls could be easily translated to ARC servo commands. However, what you're looking to achieve is quite simple and doesn't require much scripting - all hail re-usable robot skill modules. :)

ChatBot

The first thing you will want to do is simply experiment with which chatbot you'd like to use. There are a number of them on Synthiam's platform, as you can see in the skill store: https://synthiam.com/Products/Controls/Artificial-Intelligence.

I would probably recommend using the AIMLBot because it is very configurable and has a feature you require: knowing who is looking at the robot via the camera. So, install the AIMLBot from here: https://synthiam.com/Support/Skills/Artificial-Intelligence/AimlBot?id=16020.

Make Chatbot Speak

The chatbot won't speak by default; it'll display the output in the log window. Let's edit the chatbot and add some Python code to make it talk out of the PC speaker OR the EZB speaker - whichever you choose. View the AimlBot configuration and select the response script.

User-inserted image

Now, under the Python tab, add one of these, depending on whether you want the audio out of the PC or the EZB:


# speak out of the PC
Audio.say(getVar("$BotResponse"))

# speak out of the EZB
Audio.sayEZB(getVar("$BotResponse"))

User-inserted image

Speech Recognition

Now you need to speak to the robot. There are dozens of speech recognition modules, but Bing Speech Recognition is preferred; it is very reliable and configurable for things like this. You can install it here: https://synthiam.com/Support/Skills/Audio/Bing-Speech-Recognition?id=16209.

Connect Speech Recognition to Chatbot

Now you need to connect the speech recognition to the chatbot, so that when you speak, it pushes the detected phrase into the AIML chatbot. View the Bing Speech Recognition configuration screen and add this code to the All Recognized Scripts. Since you're using Python, I used the Python tab.

User-inserted image

User-inserted image


ControlCommand("AimlBot", "SetPhrase", getVar("$BingSpeech"));

Once you save that configuration setting, you can start talking to the robot, and the chatbot will print responses back.

PRO
Synthiam
#9   — Edited

Now that you have the chatbot working, let's add the camera stuff so the robot knows who it is seeing...

Add Camera Device

Add the camera device so the robot can see using the camera. Install it from here: https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120

  • Select your camera input, whether an EZB camera or USB
  • Select your resolution

User-inserted image

Detect and Remember a Face

There are a few ways to remember and recognize faces. The most popular is using Cognitive Face, which will remember the face along with emotions, age, etc. It will allow your robot to recognize how happy or sad someone is. So we'll go ahead with Cognitive Face. Add the Cognitive Face robot skill from here: https://synthiam.com/Support/Skills/Camera/Cognitive-Face?id=16210.

The other method is to train the face as an object using object training in the camera device. You will have to read the camera device manual for information on that: https://synthiam.com/Support/Skills/Camera/Camera-Device?id=16120.

Now, when you press the DETECT button, the information about the person will be displayed. If the robot does not know you yet, press the LEARN button to learn who you are.

User-inserted image

Now we can have the camera tell the chatbot who is talking to it. Press the configuration button on Cognitive Face and add this Python code to the script. It will send the currently detected face to the AimlBot chatbot and specify the NAME parameter.

User-inserted image


ControlCommand("AimlBot", "SetValue", "name", getVar("$FaceName"));

User-inserted image

Make Camera Device Detect Face

Now we need to tell the camera device to run the detection when a face is detected. On the camera device, switch to the tracking tab and select FACE.

User-inserted image

Now let's make a script run that tells Cognitive Face to detect the face - this is like pushing the DETECT button every time a face is detected. Edit the configuration of the camera device and add this script to TRACKING START.


ControlCommand("Cognitive Face", "Detect");

User-inserted image

PRO
Synthiam
#10   — Edited

There, now you're done with the AI.

The next step is to make the head move. Just use the Auto Position robot skill; that will be best. Install it from here: https://synthiam.com/Support/Skills/Servo/Auto-Position-Gait?id=20314

There's more than enough information in the Auto Position manual to explain how to create your animations.
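If you later want a script to trigger one of your animations instead of clicking it in the skill, here is a sketch. It assumes the AutoPositionAction ControlCommand described in the skill's manual, and "Nod" is a hypothetical action name - substitute an action you've defined in your own Auto Position skill:

# trigger a head animation from Python
# "Nod" is a placeholder action name
ControlCommand("Auto Position", "AutoPositionAction", "Nod")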

PRO
Synthiam
#11   — Edited

Here is the project all working: aiml chatbot.EZB. All you need to add next is the Auto Position skill for having the robot head servos move around. You can also configure the camera device to move the robot head's eyes to follow the face - those are just a few settings in the camera device.

Select the Track by relative position option if the camera is stationary and not moving. Here are the camera settings you need to have the robot's eyes move and follow the face.

User-inserted image

User-inserted image

Australia
#12  

Jees!!! What a great platform the ARC is!! I am going to try it right now!

PS I have a feeling that ARC Pro is needed for this project. Am I right D(r.)J Sures?

PRO
Synthiam
#13  

Most likely, because there will be a number of third-party robot skills. You can still code the whole thing yourself with Python and avoid upgrading - but Pro support is what helps us keep making the software better.

Thank you for the kind words :)

Australia
#14   — Edited

Before upgrading, a question: when I create "my" head, does it mean it will work only as long as I am subscribed to ARC Pro? In other words, if I do not renew my subscription, does it mean I can not run "my" head any more (the head "dies" :()?

In my last post, I forgot to thank you for all you have done so far. Thank you very much now :).

PRO
Synthiam
#15   — Edited

When you’re finished programming, use ARC runtime. It’ll run your project without needing a subscription. The subscription is for programming. It’s further documented on the ARC download page or the support section for subscriptions here: https://synthiam.com/Support/Install/subscription-plans/am-i-locked-in

Australia
#16  

It was sheer laziness on my part not to read the Subscription plan and licensing thoroughly. Sorry about it. I'll try my best not to rush to ask you a question before I read the ARC documentation thoroughly. I'll subscribe to have full ARC features.

PRO
Synthiam
#17  

Welcome to Pro :) - don't worry too much about questions. That's what keeps us on our toes.

Australia
#18  

Thank you very much. I have tried Pro. With the free version, I used a third-party USB camera (Sonix Microdia) and it worked fine. When I tried it with Pro, the following error is displayed:

Camera Disabled
Error Initializing Camera: USB Camera. Resolution: 320x240. Perhaps an unsupported resolution or unable to connect?
System.ApplicationException: Failed creating device object for moniker
  at EZ_B.Video.DirectShow.VideoCaptureDevice.XwPNh2EB32(Boolean )
  at EZ_B.Video.DirectShow.VideoCaptureDevice.Start()
  at EZ_B.Camera.StartCamera(ValuePair videoCaptureDevice, Int32 captureWidth, Int32 captureHeight)

It is not a big deal because I'm going to use the EZ-R head with its camera, and I hope it will work fine.

PRO
Synthiam
#19  

That error looks like the camera might already be in use by another program. Windows isn't great at providing detailed errors about hardware devices, but it says the moniker can't be created, which usually means it's unable to initialize.

OR

it means the resolution selected isn’t supported. You can try selecting a different resolution and see if it works

But lastly, I noticed you're still using Teams with a subscription. Teams is fine, but if you want the latest updates, use Early Access.

Early Access is great because you get quick software fixes and new features before everyone else :)

Australia
#20  

Yeah, I had used the VLC media player earlier, so that could be the problem. I'll try again next time I reboot my PC.

Regarding using Teams, does that mean I need to uninstall Teams and reinstall as Pro?

Australia
#21  

Updated to Early Access successfully :D