8. Vision
Let's choose your robot's vision system! If your robot does not have a camera, skip to the next step.
Robots with cameras gain navigation, tracking, and interactive capabilities. Synthiam ARC is committed to making robot programming easy, and that includes computer vision tracking. ARC includes a Camera Device robot skill that connects to WiFi, USB, or video capture devices. The camera device skill includes tracking types for objects, colors, motion, glyphs, faces, and more, and additional tracking types and computer vision modules are available from the skill store.
Choose the Camera Type
USB Camera: Connects directly to a computer with a USB cable. This type of camera can only be used in an embedded computer configuration, because the USB cable tethers the camera to the PC. Any USB camera can be used in ARC. The advantages of USB cameras are high resolution and higher framerates.
WiFi Camera: Connects wirelessly to a PC/SBC over a WiFi connection. Generally, this approach is only used in remote computer configurations. Few I/O controllers support wireless camera transmission over WiFi, because latency limits the usable resolution and radio interference can make the connection unreliable. For a wireless camera application, the most popular I/O controllers are the EZ-Robot EZ-B v4 and IoTiny.
Add Camera Device Robot Skill
Whichever camera type you choose, the robot skill needed to connect to the camera is the Camera Device Robot Skill. Add this robot skill to connect to the camera and begin viewing the video feed. Reading the manual for this robot skill, you will find many options for tracking objects.
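If you prefer to drive the camera from a script instead of the user interface, the Camera Device can also be started and its tracking results read from any ARC script. The following JavaScript sketch is only an example: it assumes the skill is titled "Camera" in your project and that it publishes the tracking variables listed in its manual (such as $CameraIsTracking and $CameraObjectCenterX/Y); confirm the exact names in the skill's Variables list.

// Start the Camera Device robot skill (assumed to be titled "Camera").
ControlCommand("Camera", "CameraStart");

// Poll the tracking variables the Camera Device publishes.
// Variable names below come from the skill manual; verify them in your project.
for (var i = 0; i < 10; i++) {
  if (getVar("$CameraIsTracking")) {
    print("Tracking at x=" + getVar("$CameraObjectCenterX") + ", y=" + getVar("$CameraObjectCenterY"));
  }
  sleep(1000);
}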
Additional Computer Vision Robot Skills
Now that you have the camera working on your robot, you may wish to add additional computer vision robot skills. Computer vision is a general term for extracting and processing information from images. The computer vision robot skills will use artificial intelligence and machine learning to track objects, detect colors, and recognize faces. There are even robot skills to detect facial expressions to determine your mood.
This skill will overlay an image on any detected object, face, color, or glyph. Any detectable tracking type in the ARC Camera skill can be used. Simply select your image and voila! It's best to use a transparent PNG. Main Window 1. Attach/Detach Button Attaches (or detaches) the loaded image to the first instance of the Camera skill. Once attached, the overlay will display on the detected area inside the camera skill. 2. Load Image Button This button loads an image. Browse to the location...
Add this skill to your ARC project and bring the camera to life. The video becomes interactive: you can click on objects to center the camera on them, and there are hot spots along the edges that move the camera as well. Main Window 1. Attach Button This button will add the Camera Click Servo functionality to the Camera skill in ARC. Settings 1. Horizontal Servo These settings configure the pan servo along the horizontal plane. 2. Vertical Servo These settings configure the tilt servo along the vertical...
This skill uses any camera installed on your PC or robot to combine computer vision tracking with movement and data acquisition. Computer vision is an experimental technology that requires a clean and bright environment to detect objects and colors accurately. If a camera is mounted on your robot, you can use this skill to track color objects, motion, human faces, or view the camera image. The attributes of the detected image can be adjusted manually in this skill to remove any false positives....
This skill only works with the Object tracking inside the Camera Device skill. This skill displays the detected object name on the Camera Device skill video stream. Main Window 1. Attach Button This button will add the Camera Overlay functionality to the Camera Device skill in ARC. How to Use the Camera Overlay Skill 1) Add a Camera Device skill to your ARC project (Project - Add Skill - Camera - Camera Device). 2) Add a Camera Overlay skill to your ARC project (Project - Add Skill - Camera -...
This skill will save snapshots from an active camera in the Camera Device skill. It saves a picture to your drive or device storage, in a folder called My Robot Pictures inside the Pictures directory. This skill can also take a snapshot at a time interval, which can be modified in the settings menu. You may also instruct the Camera Device to take photos programmatically through code. The script: controlCommand(Camera Snapshot, CameraSnapshot) can be called to instruct...
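For example, a short script can trigger the snapshot only when something is actually in view, rather than on a fixed interval. This sketch assumes the default skill titles ("Camera" and "Camera Snapshot") and the $CameraIsTracking variable from the Camera Device; adjust the names to match your project.

// Wait until the Camera Device reports it is tracking something,
// then ask the Camera Snapshot skill to save a picture.
while (!getVar("$CameraIsTracking")) {
  sleep(250);
}
ControlCommand("Camera Snapshot", "CameraSnapshot");
print("Snapshot saved to the My Robot Pictures folder.");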
Use an EZB that supports video as a camera source for recognition, recording and more.
Use a USB camera as a video source for recognition, recording and more.
Overlay image packs onto the camera and control them using a specified control variable. General Use: Choose an image pack from the drop-down menu next to the Overlay button. Set the x and y positions, the width and height, and the variable that will control this image pack. In the Auto Assign tab, set the min and max values of the control variable. Press Auto Assign. Start the camera. Press the Overlay button. Press the Start button. Image Packs: Image packs consist of a number of images. Each...
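Because the overlay is driven by an ordinary ARC global variable, any script can animate the image pack by writing to that variable. A minimal sketch, assuming you assigned a hypothetical variable named $MouthLevel to the image pack and auto-assigned it to a 0-100 range:

// Sweep the hypothetical $MouthLevel control variable through its range
// so the image pack steps through its images.
for (var level = 0; level <= 100; level += 10) {
  setVar("$MouthLevel", level);
  sleep(100);
}
setVar("$MouthLevel", 0); // return to the first image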
Use the Microsoft Cognitive Emotion cloud service to describe images. The images come from the Camera Device added to the project. This plugin requires an internet connection. Please consult the appropriate lessons in the learn section to configure your EZ-B to WiFi client mode or add a second USB WiFi adapter from this tutorial. Currently Disabled: This robot skill will only return an error because Microsoft has discontinued this service for their AI ethics....
Use the Cognitive Face cloud service to detect faces, describe emotions, guess age, and get the person's name from a worldwide database. The images come from the Camera Device robot skill added to the project. This plugin requires an internet connection. Please consult the appropriate lessons in the learn section to configure your EZ-B to WiFi client mode or add a second USB WiFi adapter. Currently Disabled: This robot skill will only return an error because Microsoft has discontinued this...
Use the Microsoft Cognitive Computer Vision cloud service to describe or read the text in images. The images come from the Camera Device added to the project. This plugin requires an internet connection. If you are using a WiFi-enabled robot controller (such as EZ-Robot EZ-B v4 or IoTiny), consult their manuals to configure WiFi client mode or add a second USB WiFi adapter. The Synthiam Cognitive Vision Robot Skill utilizes machine learning algorithms to enable robots...
You Only Look Once (YOLO) is a state-of-the-art, real-time object detection system. This skill uses Tiny YOLOv3, a very small model suited to constrained environments (CPU only, no GPU). Darknet YOLO website: https://pjreddie.com/darknet/yolo/ Requirements: You only need a camera control; the detection is done offline (no cloud services). 1) Start the camera. 2) Check the Running checkbox. The detection will run continuously; when the detection results change, an On Changes script is executed (check the...
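The On Changes script is a convenient place to react to new detections. The snippet below is illustrative only: $YoloDetectionCount is a placeholder name, so substitute the variable names listed in this robot skill's documentation.

// Example On Changes script. $YoloDetectionCount is a placeholder name;
// use the detection variables documented by this skill.
var count = getVar("$YoloDetectionCount");
if (count > 0) {
  print("YOLO currently sees " + count + " object(s).");
  Audio.say("I can see something.");
}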
Track faces from any of the ARC video sources.
This skill enables control of your robot's servos by moving the joints of your body, which are detected by a Microsoft Xbox 360 Kinect only. Servos can be assigned to each joint using the Settings window. The degrees to move each servo are automatically calculated for a joint relative to its connecting joint. For example, the wrist position in degrees is calculated based on the elbow position, and the elbow position in degrees is calculated based on the shoulder position. Each joint can be assigned to...
This control allows you to broadcast live audio and video from the camera control to the web. The live stream implements Apple's HLS protocol and works cross-browser. You will have to configure your router to access the live broadcast link from external networks. - If you would like to serve a webpage with an embedded video stream without audio, check out the Custom HTTP Server. - If you are looking to receive a live stream feed in the camera control, check out the Live Stream Receiver. *Icon...
This control listens for an incoming live stream connection from the web and plays back the video and audio stream inside ARC. With this control, you can open a web page (currently supported in Chrome and Firefox on desktop) from anywhere and start live streaming directly to the ARC camera control. Network configuration might be required to access the server. If you are looking to broadcast the camera feed to the web, check out the Broadcast Control. * Icon credit: Flat Icons
Omron HVC-P plugin for ARC (onboard computer). This is used in Rafiki. These are some case STL files so that you can protect this sensor while messing with it. OmronCase2.stl OmronCase1.stl OmronCaseextrusion.stl You need to install the following... Download Python_Installs.zip. Unzip it and install Python and PySerial. FTDI Friend - Adafruit is what I use to connect from the serial port to the Omron. The pin layout is as follows on the back of the Omron. Ground is the first pin toward the...
Required Download And Installation: Download Python_Install_Zip. Unzip it and install Python and PySerial. This is an updated version of the original plugin David Cochran created for the Omron HVC-P, which used the Omron EvaluationSoftware_v1.2.0 software for trained faces. My version of the plugin includes the updated EvaluationSoftware_rev.2.4.1 software for training faces. It works with both the original HVC-P and the HVC-P2. As with David Cochran's original plugin, you will need to use an ARC...
Omron HVC-P2 plugin for ARC (Windows, onboard PC required). This will also work with the original HVC-P. Required Download And Installation: Download Python_Install_Zip. Unzip it and install Python and PySerial. This plugin was created to be used with a second Omron HVC-P(2) camera if you are using two, as its variables have an _2 extension in ARC. Please note: per Omron technical support, it is recommended that each camera have its own album of saved user faces. It is not recommended to share album data...
Integrate state-of-the-art image generation capabilities directly into your robot apps and products. DALL-E 2 is a new AI system that can create realistic images and art from natural language descriptions. Have your robot programmatically generate images from descriptions provided by speech recognition robot skills. Or, have the image from the camera sent to DALL-E for its AI interpretation. Experience the wacky world of AI image generation with your Synthiam-powered robot or webcam. How Does It Work?...
This skill superimposes one camera video stream on top of another camera video stream. Main Window 1. Active Checkbox This checkbox will add the Source Camera Device video stream on top of the Destination Camera Device video stream. Configuration 1. Source Camera Device selection This selects the camera device video stream that will be overlaid onto the destination video stream. 2. Destination Camera Device selection This selects the destination stream that the source video stream will appear on....
Using the camera, this skill will allow you to add programming commands by holding up pieces of paper with images printed on them. The concept is to allow programming instructions using visual representations. Each card has an image that represents a specific command, such as move forward, turn right, turn left, or reverse. Using cue cards, the robot can be programmed for specific movements, for example to navigate a maze. The order in which the cards are shown to the robot is stored in memory...
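Conceptually, the skill builds up a queue of movement commands from the recognized cards and then replays them. The sketch below is not the skill's internal code; it only illustrates the replay idea using ARC's built-in Movement functions in JavaScript.

// Illustration only: replay an example card sequence as robot movements.
var cards = ["forward", "left", "forward", "stop"];
for (var i = 0; i < cards.length; i++) {
  if (cards[i] == "forward") Movement.forward();
  else if (cards[i] == "reverse") Movement.reverse();
  else if (cards[i] == "left") Movement.left();
  else if (cards[i] == "right") Movement.right();
  else Movement.stop();
  sleep(1000); // hold each command for one second
}
Movement.stop();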
The QR Code Generator will create a QR Code with the text you enter. By default, the QR Code text is Synthiam, and the QR Code graphic encodes the same. Scanning the graphic with your phone's QR Code scanner app will say Synthiam. This control works in conjunction with the Camera Control. QR codes, short for Quick Response codes, are two-dimensional barcodes that have become ubiquitous in various industries, including robotics. These codes consist of a pattern of black squares on a white background...
Capture the output display of a robot skill and send it to a camera device. Specify the robot skill to capture, and it will send the video to the selected camera device. For example, the Ultrasonic Radar Scan area can be captured and sent to the Camera Device as a video stream. Usage - You will need a camera device added to the project. Select the camera device from the configuration menu. - In the camera device, select the Custom option in the device list. Then press the start button....
Rubik's Cube Solving Robot skill. This skill is meant to be combined with a specific robot build; find the robot project here: https://www.thingiverse.com/thing:2471044 *** Version 5 *** Fix for ARC 2020.02.19.00 release *** First Calibrate Arms Grippers: Main Action: Demo:
Capture any area of the screen and send it to a camera device. Specify the screen area to capture, and it will send the video to the selected camera device. You will need a camera device added to the project. Select the camera device from the configuration menu. In the camera device, select the Custom option in the device list. Then press the start button. *Note: the display resolution scaling must be 100% for accurate capture area.
The Sighthound Cloud Detection API returns the location of any people and faces found in the robot camera video. Faces can be analyzed for gender, age, pose, or emotion; and a landmark detector can find the various facial features in the detected faces, including eyes, nose, and mouth, by fitting 68 landmark points to those features. *Requirement: This plugin requires ARC 2019.12.11.00 or higher. Variables are set with information that has been detected. This plugin requires a Camera control to be...
Stream all video sources from any video URI protocol or codec (RTMP, m3u, m3u8, Mkv, MOV, mpg, etc.). The video stream is sent to the selected camera device. This supports webcams or any type of video device/service that provides a video feed over a network. Protocol Types The URL can be a number of different protocol types that specify an end-point feeding a compatible codec. Some supported protocol types that can be specified in the URL are... - http://xxx.xxx.xxx.xxx:[port]/path -...
Select one of the included templates, or select your own. The images are translucent PNG files which are overlaid on each frame of the camera stream. Main Window 1. Attach/Detach Button This checkbox will add the Source Camera Device video stream on top of the Destination Camera Device video stream. 2. Status Field This field will display the status of the connection to the camera device skill and any errors that occur. 3. Load Image Button This button allows you to load your own custom target...
Display a variable on the processed camera device image. Specify the X/Y coordinates of the variable location and the variable name. There are ControlCommand() options for attaching the skill to a specific camera device, or it can use any available device.
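Since the skill simply renders a global variable onto the camera image, updating that variable from any script updates the overlay. A tiny sketch using a hypothetical variable named $Status:

// Update a hypothetical $Status variable that this skill has been
// configured to display on the camera image.
setVar("$Status", "Scanning for faces...");
sleep(5000);
setVar("$Status", "Face tracking active");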
Object detection is fundamental to computer vision: recognize the objects inside the robot camera view and where they are in the image. This robot skill attaches to the Camera Device robot skill to obtain the video feed for detection. Demo Directions 1) Add a Camera Device robot skill to the project. 2) Add this robot skill to the project. Check the robot skill's log view to ensure the robot skill has loaded the model correctly. 3) START the Camera Device robot skill so it displays a video stream. 4)...
To train objects in the camera device, the Train Object menu in the Camera Device skill is used. This menu normally requires human intervention to enter the object name and use the mouse. This skill allows your robot to learn objects in real time, triggered programmatically by ControlCommand() scripts. Take a look at the Cheat Sheet within the Script skill settings to see what ControlCommand() this skill accepts. Main Window 1. Beginning Learning Button This button will attach/detach...
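A typical use is to trigger learning hands-free, for example from a speech command. The exact ControlCommand() syntax comes from this skill's Cheat Sheet; the skill title and command name below are placeholders that only show the pattern.

// Placeholder names - copy the real skill title and command from the
// Cheat Sheet in the Script skill settings.
Audio.say("Show me the object you want me to learn.");
sleep(2000);
ControlCommand("Train Vision Object By Script", "StartLearning", "coffee cup");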
Record any video source to a local file.
*Note: Vuzix has decided to no longer support the Synthiam platform with their newer products. Therefore, this control is limited to the deprecated 920VR headset, which may still be found on eBay. There will be no further development on this control. For headset support, we recommend the Virtual Reality Robot. The Vuzix augmented reality control enables connectivity between your robot and the Vuzix VR glasses. When the VR module is included with the Vuzix glasses, this allows control of your robot's...