This skill uses any camera installed on your PC or robot to combine computer vision tracking with movement and data acquisition. Computer vision is an experimental technology that requires a clean and bright environment to accurately detect objects and colors. If a camera is mounted on your robot, this skill can be used to track colored objects, motion, or human faces, or simply to view the camera image. The attributes of the detected image can be adjusted manually in this skill to remove any false positives. Test in a bright room and adjust the image attributes until the desired object is solidly detected.
*Note: The included video tutorials are a little dated due to ARC's rapidly evolving interface and features. You should still be able to extract the necessary tracking information from the videos, even though the interface may not be identical to ARC today. :)
There are many sections on this page, here are some shortcuts:
- Device
- Tracking Type
- Color Tracking
- QR Code Tracking
- Multi Color Tracking
- Face Tracking
- Custom Haar Tracking
- Object Tracking
- Motion Tracking
- Glyph Tracking
- YCbCr Tracking
- Grid Overlay
- Servo/Movement Tracking
- Scripts Configuration
- Augmented Reality Configuration
- Detection Settings
- Media Configuration
- Advanced Configuration
- How to Use This Skill
- Video Resolutions and Performance
- Camera Code
- Requirements
Main Window - Device

This is the camera device that is connected to or built into your computer or robot. It can be an integrated webcam, USB camera, serial camera, wireless camera, etc.
When the camera is stopped (not running), additional camera devices may be selected or entered. The current EZ-B camera IP address will be displayed, as well as any other USB or local cameras. Camera devices will also be displayed for third-party supported robots, such as the AR Parrot Drone. Additionally, you may manually enter a JPEG Snapshot HTTP address.
JPEG Snapshot HTTP Video Device - a third-party camera or video source which provides JPEG images may be selected as a camera device. Simply enter the complete HTTP URL of the video source. If the HTTP video source requires authentication, consult the documentation of your device for the correct syntax. You may test the JPEG URL in a web browser to verify that it is valid and that only the JPEG is displayed, not any HTML. Example: http://192.168.100.2/cgi-bin/snapshot.cgi or http://192.168.100.2/image.jpg, etc.
EZB Camera Device - when the Camera skill is added to a project, the current EZ-B Index #0 IP address is displayed as the default device. If the EZ-B Index #0 IP address has been changed, to accommodate client mode for example, you will need to manually edit the IP address of the camera video device. The syntax is, for example: EZB://192.168.0.1 or EZB://192.168.0.1:24.
AR Parrot Drone Device - if using the AR Parrot Drone as a camera device, the AR Parrot Drone Movement Panel skill must be added, configured, and connected to the drone. Please view the AR Parrot Drone Movement Panel documentation for more information.
1. Full Screen/Restore Button
This button makes the Camera Device skill full screen and restores it back to its original size.
2. Camera Image Display
This section displays the camera image, which can be manipulated with the image adjustment sliders.
3. Processed Image Drop-down
This drop-down allows you to select whether the displayed image is processed (attributes manipulated) or original.
4. Pause detection Checkbox
When checked, this checkbox stops detection until it is unchecked.
5. Hide Settings Checkbox
When checked, this checkbox makes the camera image fill the camera skill; the options on the right side are hidden.
6. Tracking Frame Count
This displays how many frames per second are being tracked.
7. Refresh Button
This button refreshes the camera device list. It adds newly installed cameras to the list and removes unplugged devices.
8. Camera Device Drop-down
This drop-down lists the available camera devices that can be used by the camera device skill.
9. Network Scan Button
This button is used for finding a camera on the network that ARC is connected to.
10. Resolution Drop-down
This drop-down allows you to select the resolution of your camera. If your camera image doesn't appear when you click the Start/Stop button, you may have the wrong resolution selected.
11. Start/Stop Button
This button will begin or end the camera image display from the selected camera device. (A scripted equivalent of this button and the recording buttons is sketched after this list.)
12. Reset Button
This button resets the changes to the image adjustment sliders.
13. Image Adjustment Sliders
These sliders adjust 3 different attributes of the camera image (Brightness, Contrast, and Saturation).
14. Start/Stop & Pause/Resume Video Recording Buttons
These buttons begin/end or pause/resume video recording from the active camera device. The video is recorded in .wmv format and is saved in your "\Pictures\My Robot Pictures" folder.
15. Sharpen Image Checkbox
This checkbox will sharpen the camera image. The image will have less blur and look crisper.
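The camera feed, snapshots, and recordings can also be controlled from a script with ControlCommand(). Below is a minimal EZ-Script sketch, assuming the skill is titled "Camera"; the command names shown are typical, but verify the exact names in the Cheat Sheet tab when editing scripts.
# Start the video feed of the camera skill titled "Camera"
ControlCommand("Camera", CameraStart)
Sleep(2000)
# Take a still photo (saved to the folder chosen in the Media settings)
ControlCommand("Camera", CameraSnapshot)
# Record five seconds of video, then stop the feed
ControlCommand("Camera", CameraStartRecording)
Sleep(5000)
ControlCommand("Camera", CameraStopRecording)
ControlCommand("Camera", CameraStop)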
Main Window - Tracking

A tracking type defines what the camera should recognize and track. For example, to track the color RED, you would select only the Color tracking type. Each tracking type has configuration settings on its own tab. If you have Color checked as a tracking type, then the configuration for the Color tracking type is found in the tab labelled Color.
1. Tracking Type Checkboxes
These checkboxes will enable/disable the different tracking types. You can track more than one type at a time, although it's not often used this way.
Main Window - Color Tracking

This will track the specified predefined color from the Color tab. The color can be Red, Green, or Blue. The Color tab also gives you settings to adjust the brightness and size of the object. Hold the object in front of the camera while changing the settings. This is the basic color tracking method, which is used in RoboScratch's "wait for color".
1. Detection Boxes
These boxes show where the selected color has been detected on the camera image.
2. Color Drop-down
This drop-down gives you the selection of 3 colors (Red, Green, and Blue).
3. Minimum Object Size Slider
This slider allows you to select how large the detected colored object size must be before it will register as a detection.
4. Example Minimum Size Display
This display gives you a visual reference of how large (in pixel size) the detected object will have to be in the camera image display.
5. Object Brightness Slider
This slider will adjust the brightness of the detected color. Adjust until only the desired object is detected.
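Color tracking sets camera variables that scripts can read. The loop below is a sketch that reports the position of the detected color object; $CameraIsTracking, $CameraObjectX, and $CameraObjectY are the typical default variable names, and the full list for your project is shown on the Scripts tab of the configuration menu.
# Poll the tracking variables while Color tracking is enabled
:loop
if ($CameraIsTracking = 1)
  print("Color object at " + $CameraObjectX + "," + $CameraObjectY)
endif
Sleep(250)
goto(loop)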
Main Window - QR Code Tracking

You can use this tracking type to detect pre-encoded QR codes or you can use the QR Code Generator (Add Skill -> Camera -> QR Code Encoder) to create your own QR Codes and print them out. QR Code Tracking will detect QR Codes and set the $CameraQRCode variable, or execute the code within the QR Code. Keep in mind that QR Code recognition is CPU intensive and therefore slow. We do not recommend using QR Code for tracking. It is best used for identification and triggering events, not movement tracking. Note that the QR Code tracking type has additional settings in the Camera Device Settings.
1. QR code Detection Output
This is where the detected output from the QR code will be displayed.
2. QR Code
This is the QR code that will be detected on the camera image display.
3. QR Code Checkbox
This checkbox will need to be enabled to track QR codes.
4. Camera Frame Rate
This displays the camera frame rate in frames per second (fps) and also shows the number of skipped frames.
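Because detections populate the $CameraQRCode variable (noted above), a script can react to a specific code's text. A minimal sketch; "open door" is a hypothetical value encoded with the QR Code Encoder.
# Wait for a QR code to be detected, then act on its text
WaitForChange($CameraQRCode)
if ($CameraQRCode = "open door")
  SayWait("Opening the door")
endif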
Main Window - Multi Color Tracking

This is an advanced and more detailed tracking method than the generic Color tracking type. With Multi Color, you can specify and fine-tune your own color, as well as add multiple colors. An EZ-Script variable will be set to hold the name of the currently detected color. The multi color definitions can be trained in the Multi Color tab. If you name a custom color "Red", this does not override the basic predefined color tracking types. The basic color tracking (discussed above) has three predefined colors. This multi color tracking type is separate and has no relationship to those predefined colors.
1. Detection Boxes
These boxes show where the selected colors have been detected on the camera image.
2. Add Button
This button adds a color to the detection list.
3. Edit Button
This button will open the Custom HSL Color window to allow you to edit/adjust the color that you would like to detect. The main items that you will be adjusting are the color name field, the two color wheel wipers, the saturation min slider, and the luminance max slider. Here's how the Custom HSL Color window looks:

4. Enable Checkbox
This checkbox allows you to enable/disable the color that you have added to the detection list.
5. Color Name
This is a text display of the name that you have chosen in the Custom HSL Color window.
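The detected custom color's name is placed in a camera variable so scripts can react to it. A sketch, assuming the default variable name $CameraObjectColor (check the Scripts tab of the configuration menu) and a hypothetical trained color named "BrightOrange":
# Wait for multi color tracking to report a new color name
WaitForChange($CameraObjectColor)
if ($CameraObjectColor = "BrightOrange")
  SayWait("I see the orange ball")
endif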
Main Window - Face Tracking

Face tracking will attempt to detect faces within the image. It uses calculations to detect eyes and a mouth. This requires additional processing and will slow the frame rate. Also, a video stream containing many complicated objects will return false positives. Use Face tracking against a white wall or an uncomplicated background. If you wish to detect specific faces, use the Object tracking type, as you can train your face as an object.
1. Detection Box
This box shows where a face has been detected on the camera image.
2. Face Tracking Checkbox
This checkbox will need to be enabled to track faces.
3. Camera Frame Rate
This displays the camera frame rate in frames per second (fps) and also shows the number of skipped frames.
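A common use is having the robot greet a person when a face appears. The sketch below assumes Face tracking is enabled, $CameraIsTracking is the tracking-status variable (see the Scripts tab for the exact name), and the robot has an arm servo on port d0; the port and positions are placeholders.
# Wait until the camera reports it is tracking a face
WaitForChange($CameraIsTracking)
if ($CameraIsTracking = 1)
  SayWait("Hello there")
  # Wave a hypothetical arm servo on port d0
  Servo(d0, 120)
  Sleep(400)
  Servo(d0, 60)
  Sleep(400)
  Servo(d0, 90)
endif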
Main Window - Custom Haar Tracking

Unless you are experienced with computer vision, ignore this tracking type. In brief, it is intended for computer vision specialists who generate HAAR cascades. It is very CPU intensive and will most likely bring a slower computer to a halt. If the input cascade is not optimized, it will also greatly affect frame-rate processing performance. Leave this option to the experts :)
1. Detection Box
This box shows where a custom haar cascade has been detected on the camera image.
2. Custom Haar Tracking Checkbox
This checkbox will need to be enabled to track custom haar cascades.
3. Camera Frame Rate
This displays the camera frame rate in frames per second (fps) and also shows the number of skipped frames.
Link to OpenCV Haar Cascades: XML Files
Main Window - Object Tracking

This is an advanced computer vision learning tracking type that gives you the ability to teach the robot an object (such as a face, logo, etc.) and have it detect it. Computer vision learning is very experimental and requires patience and consistent lighting.
*Note: For best results with object training, consider that you are teaching the robot specific details of the object, not the object itself. This means the entire object does not need to be in the training square. Only details which are unique to that object need to be in the training square. For example, you may not wish to teach the robot a can of soda/cola by putting the entire can in the square. Merely put the logo in the square, or any other identifying features.
1. Detection Box
This box shows where a trained object has been detected on the camera image.
2. Train New Object Button
This button will open the Custom Object window to allow you to train new custom objects that you would like to detect. To train a new object, first enter a name for the object, then click in the camera image area to create a pink box. Place your object in front of the camera in such a way that a unique detail of your object completely fills the box, click the Learn Selected Area button, and then tilt your object slowly at different angles until the learning is complete. Here's how the Custom Object window looks:

3. Clear Memory Button
This button removes all the trained objects from the trained images window.
4. Learn While Tracking Checkbox
This checkbox enables the object learning module to keep learning the object being tracked. Be aware that if this checkbox is left enabled there is a chance that over time the module will start learning the objects around the tracked object.
5. Show Movement Tracer Checkbox
This checkbox enables the trajectory tracer to be displayed as the object is moved.
6. Trained Images Display
This displays the objects that have been trained in the train new object window.
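Once an object is trained, a script can react when it is recognized. A sketch, assuming the detected object's name is exposed in a variable such as $CameraObjectName; the exact variable name for your project is listed on the Scripts tab of the configuration menu.
# Announce each trained object as it is recognized
:watch
WaitForChange($CameraObjectName)
SayWait("I can see the " + $CameraObjectName)
goto(watch)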
Main Window - Motion Tracking

This observes changes within the camera image. Motion should not be confused with the "Movement" setting from the configuration menu; they are different things and should not be used together. The Motion tracking type will detect a change in the camera image and return the area of the change. For example, if your camera is stationary and you wave your hand in a small area of the camera image, you will see the motion display of your hand moving. If the robot moves during motion tracking, the entire image is considered changed, which is not useful for tracking. So, the Motion tracking type is only really useful for stationary cameras.
1. Detection Box
This box shows where motion has been detected on the camera image.
2. Motion Sensitivity Slider
This slider adjusts the amount of motion needed for motion to be detected.
3. Minimum Object Size Slider
This slider allows you to select how large the detected motion object size must be before it will register as a detection.
4. Example Minimum Size Display
This display gives you a visual reference of how large (in pixel size) the detected object will have to be in the camera image display.
5. Check Every (frames) Drop-down
This drop-down allows you to select how many frames to wait between detection samples.
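Motion tracking pairs well with snapshots for a simple stationary security camera. A sketch, reusing the CameraSnapshot command and $CameraIsTracking variable discussed in earlier sections (verify both names in your project):
# Take a photo each time motion is detected
:watch
WaitForChange($CameraIsTracking)
if ($CameraIsTracking = 1)
  ControlCommand("Camera", CameraSnapshot)
  # Pause so one motion event does not produce a burst of photos
  Sleep(5000)
endif
goto(watch)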
Main Window - Glyph Tracking

This tracking type will look for images that consist of black and white squares. There is a set of specific glyphs (1, 2, 3, and 4) which will be used for this tracking type. If you download and print the glyph PDF in the link below, this tracking type will detect those glyphs. You may also visit the Camera configuration to set up augmented reality overlays on the glyphs. This means the camera will superimpose a selected image over the detected glyph. When tracking glyph images, the glyph will only execute its respective tracking script once until another glyph has been detected or the ClearGlyph ControlCommand() has been called. To see all available ControlCommand() options, press the Cheat Sheet tab when editing scripts.
Glyph Downloads: Glyph PDF
1. Detection Box
This box shows where the glyph has been detected on the camera image.
2. Glyph Tracking Checkbox
This checkbox will need to be enabled to track glyphs.
3. Camera Frame Rate
This displays the camera frame rate in frames per second (fps) and also shows the number of skipped frames.
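Because a glyph's script fires only once until another glyph is seen or ClearGlyph is called (noted above), a per-glyph script can reset itself when finished. A minimal sketch; confirm the exact ClearGlyph command spelling in the Cheat Sheet tab.
# Example body for a glyph's tracking script
SayWait("Glyph one detected")
# Clear the glyph state so this script can trigger again on the next sighting
ControlCommand("Camera", ClearGlyph)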
Main Window - YCbCr Tracking

This tracking type is much like Multi Color: you can specify and fine-tune your own color, as well as add multiple colors. While Multi Color defines colors in the HSL color space (hue, saturation, luminance), YCbCr uses luma (Y) and blue-difference and red-difference chroma (Cb and Cr) components.
1. Detection Box
This box shows where the selected color has been detected on the camera image.
2. Add Button
This button adds a color to the detection list.
3. Edit Button
This button will open the Custom YCbCr Color window to allow you to edit/adjust the color that you would like to detect. The main items that you will be adjusting are the color name field, the Cb sliders, the Cr sliders, and the Y sliders. Here's how the Custom YCbCr Color window looks:

4. Enable Checkbox
This checkbox allows you to enable/disable the color that you have added to the detection list.
5. Color Name
This is a text display of the name that you have chosen in the Custom YCbCr Color window.
Main Window - Grid

These grid lines are used for movement and servo tracking. With tracking enabled, the robot can try to move itself to keep the detected object in the center of the grid lines. Move the sliders to adjust the position of the grid lines on the camera image. The field of detection can be narrowed or expanded with the grid lines.
1. Grid Line Overlay
These grid lines are overlaid on the camera image and are always there even if you make them completely transparent.
2. Grid Line Sliders
These sliders are for moving the grid lines. There are 4 sliders, one for each of the 4 grid lines.
3. Defaults Button
This button resets the grid lines back to their default position.
4. Grid Line Transparency Slider
This slider controls the transparency of the grid lines, from bold red to invisible.
Configuration - Servo/Movement Tracking

These settings set up how the robot reacts when an object is detected by one of the enabled tracking types. Servo, Movement, and Scripts are the 3 reaction types, which can be enabled individually or simultaneously. If servo tracking is checked, the skill assumes the camera is mounted on the specified servos. The servos will be moved left, right, up, and down to track the detected object based on the servo settings that you have provided. The Movement reaction type physically moves the entire robot toward an object. If Movement is checked, the robot will follow the detected object when it is visible, using the project's movement panel. The movement panel may be an Auto Position, H-Bridge, or more. The robot will move forward, left, or right to follow the desired object. On the Scripts tab of the camera device configuration menu are a Tracking Start and a Tracking End script. When an object is detected, the Tracking Start script will execute. When the camera is no longer tracking an object, the Tracking End script will execute.
1. Servo Tracking Checkbox
This checkbox will enable X and Y axis Servos to move in order to track a detected object.
2. Relative Position/Grid Line Checkboxes
These checkboxes allow the servos to track the detected object via relative position (moving toward the center of the camera image) or using the grid lines (moving to wherever the center of the grid lines is set). Using relative position assumes a stationary camera is being used, and using grid lines assumes a pan/tilt mechanism is being used.
3. Servo Setup Drop-downs
These drop-downs allow for setup of the X and Y axis servos. You can set the board index, port number, min and max position limits, and multiple servos. You can also invert the servo direction with the included checkbox.
4. Horizontal/Vertical Increment Steps Drop-downs
These drop-downs adjust the number of steps the servos will move when moving toward the detected object. The smaller the number, the more precise the movements will be.
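As an example of the Tracking Start and Tracking End scripts described above, the pair below gives audible and visual feedback when tracking begins and ends. These are sketches; the indicator LED on digital port d5 is a placeholder for whatever feedback your robot provides.
# Tracking Start script: runs when an object is first detected
Say("Target acquired")
Set(d5, on)

# Tracking End script: runs when the object is lost
Say("Target lost")
Set(d5, off)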
Configuration - Scripts

This section of the Scripts tab includes a checkbox to enable the execution of tracking scripts. You can add tracking start and end scripts, as well as use drop-downs for selecting how many frames to delay before the tracking start/end scripts execute and for sorting by detected object (blob) order.

This section includes all the variables that are used with the camera skill. You can change the variable names here if you'd like. These variables can be used throughout your scripts and applications, and you can view their values in real-time with the Variable Watch skill. There is also a drop-down for selecting the maximum number of objects to detect. Each object that's detected will append an underscore and a number to the end of the variables to distinguish one from another.
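For instance, with the maximum number of objects set to 2 or more, the suffixed variables can be read individually. A sketch, assuming the default variable names:
# With two objects detected, the _0 and _1 suffixes distinguish them
print("First object:  " + $CameraObjectX_0 + "," + $CameraObjectY_0)
print("Second object: " + $CameraObjectX_1 + "," + $CameraObjectY_1)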

This section executes scripts when the corresponding glyph is detected. Find more information about glyphs in the Glyph Tracking section.

In this section you can build a list of QR Code texts and enter a script to execute when each text is detected. You can also add a script that executes when a detected QR Code is not in your list. Buttons are included to manage your list of QR Codes.
Configuration - Augmented Reality

1. Select Image Buttons
These buttons assign an image to be displayed when the corresponding glyph is detected. Find more information about glyphs in the Glyph Tracking section.
Configuration - Detection

1. Load Button
This button loads custom haar cascades (in .XML format). You can download custom cascades from OpenCV on GitHub.
2. Minimum Size Drop-down
This drop-down sets the minimum size (in pixels) the detected face will have to be in order to register as a detection.
3. Maximum Size Drop-down
This drop-down sets the maximum size (in pixels) the detected face will have to be in order to register as a detection.
4. Minimum Detection Count Drop-down
This drop-down sets the minimum number of frames in which the face will have to be present in order to register as a detection. There's more information on Face tracking above.
Configuration - Media

1. Change Button
This button allows you to change the location on your PC where photos taken with the Camera Device skill are saved.
2. Snapshot Quality Drop-down
This drop-down sets the JPEG quality percentage (10 to 100%). The higher the percentage, the larger the file size will be.
3. Video Capture Codec Drop-down
This drop-down sets the video capture compression format (WMV1, WMV2, or H263P5).
4. Bit Rate Field
This value sets the codec bit rate, which determines the quality and file size of each video frame.
5. Insert Video Title Checkbox
If checked, this checkbox enables the video title text to be shown at the beginning of your recorded video.
6. Title Length Drop-down
This drop-down allows you to set how long the title will be displayed (in seconds) at the beginning of your recorded video.
7. Title Text Field
This field allows you to enter the text title you would like displayed at the beginning of your recorded video. The title can also be a script variable if you'd like.
Configuration - Advanced

1. Title Field
This field allows you to change the text title of the Camera Device skill. *Note: Changing the title here will also change the title used in the ControlCommand() associated with this skill.
2. Camera Image Orientation Drop-down
This drop-down sets the orientation of the camera image. If your camera is mounted in an odd orientation, you can use this drop-down to correct how the camera image displays on screen. There are 16 different orientation options.
3. Stop Camera on Error Checkbox
This checkbox is for plug-in troubleshooting. It allows the camera device to stop if an error occurs and gives you the ability to report the error shown in the debug window to Synthiam.
4. Frame Rate Initialization Field
This field allows you to adjust the frame rate when the camera first initializes. It's best to leave this value at -1.
How to Use Camera Device
1) Add the Camera Device skill to your ARC project (Project -> Add Skill -> Camera -> Camera Device).
2) Select the Camera you would like to use from the Video Device drop-down.
3) Select the resolution. If the camera image doesn't show up you may have selected an unsupported resolution for that camera.
4) Press the Start/Stop Button to begin transmitting camera images.
5) Select a tracking type in the tracking tab to begin using the camera image data to track objects.
6) Select the other settings tabs or the triple dots to configure the tracking type (if applicable).
Video Resolutions and Performance
Machine vision and computer recognition is a very CPU-intensive process. Cameras for computer vision use far lower resolutions than what you, as a human, would use for recording a birthday party. If you were to run computer vision to recognize objects and decode frames at HD quality, your computer would grind to a halt.
160x120 = 57,600 Bytes per frame = 1,152,000 Bytes per second
320x240 = 230,400 Bytes per frame = 4,608,000 Bytes per second
640x480 = 921,600 Bytes per frame = 18,432,000 Bytes per second
So at 320x240, your CPU is processing complex algorithms on 4,608,000 Bytes per second. As soon as you move to a mere 640x480, it's 18,432,000 Bytes per second.
To expand on that, 4,608,000 Bytes per second is just the data, not including the number of CPU instructions per step of the algorithm(s). Do not let television shows, such as Person of Interest, make you believe that real-time computer vision and CPU processing is that accessible, although many of us are working on it! We can put 4,608,000 Bytes into perspective by relating it to a 2 minute MP3 file. Imagine your computer processing a 2 minute MP3 file in less than 1 second - that is what vision processing for recognition is doing at 320x240 resolution. As soon as you increase the resolution, the CPU has to process an exponentially larger amount of data. Computer vision recognition does not require as much resolution as your human eyes, as it looks for patterns, colors, or shapes.
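For reference, the figures above work out from 3 bytes per pixel (24-bit color) at 20 frames per second. A quick EZ-Script check of the 320x240 numbers:
# 320 x 240 pixels x 3 bytes per pixel = 230,400 bytes per frame
$bytesPerFrame = 320 * 240 * 3
# At 20 frames per second, that is 4,608,000 bytes per second
$bytesPerSecond = $bytesPerFrame * 20
print("Per frame: " + $bytesPerFrame + "  Per second: " + $bytesPerSecond)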
If you want to record a birthday party, do not use a robot camera - buy a real camera. Synthiam software algorithms are designed for vision recognition with robotics.
Use Camera with Code
You may also instruct the Camera Device Skill to change settings programmatically through code. The ControlCommand() can be called in EZ-Script to change the Camera Device settings. Learn more about the ControlCommand() here.
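For example, tracking types can be toggled programmatically. A sketch, assuming the skill is titled "Camera"; the exact command names vary by version, so press the Cheat Sheet tab when editing scripts to see the list available in your project.
# Enable face tracking for ten seconds, then switch to motion tracking
ControlCommand("Camera", CameraFaceTrackingEnable)
Sleep(10000)
ControlCommand("Camera", CameraFaceTrackingDisable)
ControlCommand("Camera", CameraMotionTrackingEnable)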
Requirements
You will need at least one of the following video devices connected to your computer:
- JPEG Snapshot HTTP Camera
- Integrated Camera
- Wireless Camera
- USB Camera
- EZ-B Camera
This synthiam.com page can't be found. No web page was found for the web address: https://synthiam.com/Tutorials/images/80/Glyph.pdf
Can I get the sample PDF please?
Anyone seen this?
Camera Initialized: EZB://10.0.1.231:24 @ 320x240
EZ-B v4 Camera Error: System.Exception: Client disconnected
at EZ_B.EZBv4Video.aenVYfUEGW(EZTaskScheduler , Int32 , Object )
Camera Disabled
Thanks!
Color tracking works. Motion tracking works. Face tracking doesn't work. Version 2020.09.08.00.
1. I selected my device/camera.
2. I set the resolution to 640x480.
3. I clicked the green Start button.
4. Camera went on.
5. My face was there.
6. I selected the Tracking tab
7. I selected "Face".
Expected: bounding boxes around my face
Actual: No bounding boxes.
Is this a SOFTWARE DEFECT? Or user error?
I also tried Color and Motion. Both of those worked just fine.
Thomas Messerschmidt
Also note that this thread is the camera device manual, and this manual page contains the instructions for the face tracking settings.
I found the issue:
1. The face detection options' "Max Size" adjustment needs to be set at least as large as the height of the video resolution. For example, for a resolution of 650x360, the Max Size needs to be set at 360 or more. At 640x480, Max Size needs to be set at 480 or more. If I set the Max Size to less than the video's height, face tracking no longer works. Also, a resolution of 1280x720 doesn't work at all because Max Size has a preset maximum of 500.
2. It might be the fault of my camera, but I cannot "Start" the camera with resolutions of less than 650x360.
3. Also, ARC crashes and shuts down when I change the camera resolution more than 6 or 8 times.
Thanks for getting back to me so quickly.
Thomas
The documentation of this skill in particular is absolutely fantastic, but I have also noticed it in others. Videos are great for demonstration of capabilities, but when you want to learn how to set up and configure something, nothing beats a good user manual you can print out and mark up as needed....
Alan
(*) RTFM = Read the Friendly Manual