Use the Microsoft Cognitive Computer Vision cloud service to describe or read text in images. The images come from the Camera Device added to the project. This plugin requires an internet connection. If you are using a WiFi-enabled robot controller (such as the Synthiam EZ-B v4 or IoTiny), please consult its manual to configure WiFi client mode, or add a second USB WiFi adapter from this tutorial.
Details

The behavior control will detect objects using cognitive machine learning. The image is analyzed, and the width, height, location, and description of each detected object are stored in variable arrays. The image is also analyzed for adult content. Use the Variable Watcher to view the detected details in real time.
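Conceptually, the variable arrays behave like the parallel lists in this short sketch, which splits a detection result into per-object location, size, and description arrays. The JSON shape below is an illustrative assumption modeled on the Microsoft Computer Vision analyze response, not the plugin's exact output:

```python
# Sketch: turn a detection response into parallel arrays, one entry per object.
# The response shape here is an assumption for illustration only.
sample_response = {
    "objects": [
        {"rectangle": {"x": 10, "y": 20, "w": 120, "h": 80}, "object": "dog"},
        {"rectangle": {"x": 200, "y": 40, "w": 60, "h": 90}, "object": "person"},
    ],
    "adult": {"isAdultContent": False},
}

# Parallel arrays, analogous to the plugin's variable arrays.
object_x, object_y = [], []
object_width, object_height = [], []
object_description = []

for obj in sample_response["objects"]:
    rect = obj["rectangle"]
    object_x.append(rect["x"])
    object_y.append(rect["y"])
    object_width.append(rect["w"])
    object_height.append(rect["h"])
    object_description.append(obj["object"])

print(object_description)  # ['dog', 'person']
print(sample_response["adult"]["isAdultContent"])  # False
```

Each index across the arrays refers to the same detected object, which is how the width, height, location, and description of a given object stay associated.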
Educational Tutorial
This educational tutorial for using the Cognitive Vision behavior control was created by The Robot Program by Synthiam. The same procedure can be followed on any robot with a camera, or on a PC with a USB camera.
What Can You Do?
An easy way to use this control is to add a simple line of code to the control's configuration. The code will speak, out of the PC speaker, what the camera sees. Here's a sample project: testvision.EZB
Demo
DJ Sures from Synthiam created this demo using a Synthiam JD, combining this Cognitive Vision behavior control with Pandora Bot and speech recognition. He was able to have conversations with the robot, which is quite entertaining!
You will need a Camera Device and this plugin added to the project. It would look like this...

And add this simple line of code to the plugin configuration...
say("I am " + $VisionConfidence + " percent certain that i see " + $VisionDescription)
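For comparison, the same confidence-plus-description sentence could be composed outside of ARC from a describe-style result. The `result` dictionary below is a hypothetical stand-in for the values the plugin exposes as $VisionConfidence and $VisionDescription:

```python
# Compose the spoken sentence from a description result.
# The result shape is an assumption modeled on image-description services,
# not the plugin's actual data structure.
result = {"description": "a dog sitting on a couch", "confidence": 0.87}

sentence = "I am {} percent certain that I see {}".format(
    round(result["confidence"] * 100), result["description"]
)
print(sentence)  # I am 87 percent certain that I see a dog sitting on a couch
```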
*Notice the space after "words"
2) I tested it, and it reads text fine. Re-check your code and see if it works without spaces.
you can always stand nude in front of your robot to test it out hahaha
I did a little research earlier today, and it's possible to create custom object detection projects: you train a model, then run predictions. That makes it more useful for case-by-case robots.
...of course I got naked in front of the vision cognition... it said '100% sure you should put your clothes back on!' Lol.
I'll take a look at the custom detection part, although it is quite easy to do locally with the object tracking built into the camera control.
I believe you are asking about Cognitive Face and Cognitive Emotion? Those two report similar data, except Emotion doesn't report the face. There are slight differences in the returned data of those two. The skill that you replied to is Cognitive Vision, which is not related to either of those.