Emotions V1

CardboardHacker

Canada

Hey there!

Here is my latest project. This time it's an emotion-generating project: activated through speech recognition, your robot will respond with emotions. Happy, sad, angry or tired, they are all there.

It is built as a development project, so you guys can continue the development.

It works by having an always-running main script that checks variable values. Those values change based on the emotion you want. Happy? Set $happy to true and all the others to false. Sad? Set $sad to true and all the others to false.
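Roughly, a "be happy" voice command runs a small script along these lines (the variable and control names here are illustrative; check the project for the exact ones):

    # Hypothetical voice-command script for the happy emotion.
    # Set the requested emotion true and every other emotion false,
    # then restart the Emotion Manager so it picks up the change.
    $happy = true
    $sad = false
    $angry = false
    $tired = false
    ControlCommand("Script Manager", ScriptStart, "Emotion Manager")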

Currently, Emotions integrates the Personality Generator and RGB Animator to help bring personality to your robots. At the moment it is best suited to the JD Revolution robot, as JD has the RGB eyes that the animator drives.

If you're less comfortable with scripting and want something added, I can do further development for you.

Unlike my most recent project, Emotions is built entirely in EZ-Builder, nothing extra. Simply find the file in the EZ-Cloud or in this thread, download it, and you're off!

Adding new emotions is a little difficult, as the new emotion must be set to false by every script that wants a different emotion. As it stands, that means adding an extra variable to at least 8 other scripts, and you also have to add its check in the "Emotion Manager". But once that's done (a 10 minute ordeal at most), you're good to go!
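For illustration, the Emotion Manager's checking logic works along these lines (the names below are simplified placeholders, not a copy of the project's scripts); a new emotion such as "curious" gets its own branch here, and $curious = false also has to be added to every other emotion's voice-command script:

    # Hypothetical fragment of the Emotion Manager's checking logic.
    IF ($happy = true)
      ControlCommand("Script Manager", ScriptStart, "Happy")
    ELSEIF ($sad = true)
      ControlCommand("Script Manager", ScriptStart, "Sad")
    ELSEIF ($curious = true)
      # Newly added emotion: its script must also exist in the Script Manager.
      ControlCommand("Script Manager", ScriptStart, "Curious")
    ENDIF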

If you want to add actions to an emotion, simply click that emotion's edit button in the "Script Manager" and add the command that starts the action.
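For example, the "Happy" emotion script could be extended with something like this (the phrase and the D0 servo port are made-up examples, not part of the project):

    # Hypothetical actions added to the Happy emotion script.
    # Speak a happy phrase, then nod a head servo (assuming one is wired to port D0).
    Say("I am feeling great!")
    Servo(D0, 120)
    Sleep(500)
    Servo(D0, 90)

An RGB Animator sequence can be started the same way, using that control's ControlCommand.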

I hope you guys like Emotions, and I can't wait to see the robots using it!

EMOTIONSV1.EZB

Disclaimer: This project is untested with robots. It may not work correctly with your robot at first. Use at your own risk. The creator under the alias Technopro is not responsible for any damage as a result of using this project.

Enjoy your day!


Tech



United Kingdom
#1  

This looks interesting and sounds like a fun project, but I'm curious as to why you have posted it without testing it, as mentioned in your disclaimer?

#2  

What I meant to imply is that it hasn't been tested with a robot. I'll change the wording in the first post. Thanks.

#3  

Hi @Technopro, I'm a little confused about some of the coding and control choices, and I was wondering if you could help me understand them.

So, we have the voice commands to interact with the robot: each one sets a variable for the emotion and kicks off the "Emotion Manager" script, which then runs the specific emotion script, which in turn activates the RGB LED animator sequence. What I don't understand is what the Personality Generator is doing.

Did you choose to code all of the variables with true/false values in every voice command because it would not work any other way? I probably would have just set the specific emotional state to true, then, after the specific emotion has run, defaulted it back to false after a period of time. That would have been less coding, but it would also change the way it functions a little. With your coding, the robot stays in that emotional state until a new voice command is given to change it, correct?

Please don't take the questions as a negative on your code, I'm enjoying your project and appreciate you sharing it. I'm just looking to learn more about it. :)

#4  

Techno,

Good job. It would be nice if we could link it to Justin's Face Recognition and have it use the audio AND the video together to get a more accurate result.
Just a thought. Here is another idea... could you make it work as a lie detector based on voice frequencies, etc.?

:-)

#5  

@Justin

I am unsure if it could be done any other way, as the variable for the emotion must be set to true and the rest must be changed to avoid a conflict. As for a dynamic "auto emotion", I want to incorporate that eventually. What's nice is that the emotion can be changed at any time. And speaking of which, I think I've spotted a bug in the code that could prevent that.

@Moviemaker

Thanks! Good to hear from you again.

#6  

@Technopro, it looks like all the emotions are set to "false" by default, so if a voice command sets an emotion to "true", then once the emotion runs and completes it could set itself back to false. That does not appear to happen now, which is fine if that is intended.

When I think about my robot displaying emotions, I think of most of them being displayed for a limited amount of time, like a person would. Like if I saw an old friend I might be "surprised" and smile really big for a little bit, but my emotional display would go back to default or neutral.

#7  

The way I have it set up now is sort of hit or miss: the Emotion Manager restarts after every emotion change, and the loop at the start sets all the emotions to false, so if an emotion does get output it's only because setting the variable took long enough that the rest of the manager script could run and activate it. I'm changing the process.

#8  

And it's changed! Change notes can be found in the project. I've also updated the EZ-Cloud version; look for EMOTIONS V2 in the EZ-Cloud.

Download: EMOTIONSV2.EZB

@JustinRatliff Emotions now time out after a settable amount of time in the @period script.
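Roughly, the timeout works along these lines (the variable name and value here are illustrative; the project lets you set the actual amount):

    # Hypothetical timeout: after $EmotionTimeout milliseconds,
    # drop every emotion flag back to false (the neutral state).
    $EmotionTimeout = 10000
    Sleep($EmotionTimeout)
    $happy = false
    $sad = false
    $angry = false
    $tired = false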

#9  

And as for the limited number of functions: I wanted it to be easily editable so others can incorporate it into their robot. Each robot is different, so if I made servo movement actions in the scripts, another robot might spin around and break something off.

#10  

I did notice some servo routines in there. What do those servos do on your robot?

#11  

Servo speed was put in there because it can easily be set for other robots, or removed completely depending on the application, and it serves as an example. I didn't test it with any of my robots.
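For reference, a servo speed line in an emotion script is just a single statement that other builders can retune or delete; the port and values below are arbitrary examples, not taken from a real robot:

    # Hypothetical example: slow a head servo for a tired feel.
    # Higher ServoSpeed values make the servo move more slowly.
    ServoSpeed(D0, 3)
    Servo(D0, 60)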

Netherlands
#12  

Thanks for the project! I'm also working on an emotion model which will probably be finished in a few months. I'll try out your system on my Roli soon.

#13  

Yes, you're the one I actually got the inspiration from. I would have credited you for the idea, but I couldn't find the post to name you. I'm excited to see what you make. It is an external program, am I correct?

Netherlands
#14  

Correct. My project is pretty big so it takes a while. Warning: wall of text.

Currently the status is that:

  • ARC can send percepts to my Java based agent software. (using HTTPGet)
  • My agent software can send commands to ARC. (using Telnet)
  • When my agent wants to say something I can construct a wav file from a String, which in turn can be played on the EZBv4. (using Ivona's free developer's cloud service)
  • Using voice recognition software I can send arbitrary sentences to my agent. (e.g. Windows' free recognition software or Dragon NaturallySpeaking)
  • Using the Stanford natural language library I can parse said sentences to parse trees.
  • I can transform the parse trees into predicate format and send it to a Prolog engine. (SWI-Prolog, with the JPL library)

Currently I'm working on a Prolog library to transform the spoken sentence into a semantic representation such that it can be used as a query on the agent's knowledge/beliefbase, which will be a combination of Java objects, Semantic Web technology and Prolog. Using this speech-interface setup I don't have to pre-program all commands in ARC.

For example, now I can start my robot facing the wall. It will then try to find and greet me as an initial goal. This results in the agent software switching on the face recognition module and looking around with the camera; periodically it'll say "where are you?". If the robot can't find me, then I can say something like "I'm behind you" and the agent will reason that it has to turn the body to face me. Once the camera detects my face it waves at me, switches off face recognition and drops the goal.

Once I have the knowledge/beliefbase setup properly I will finally move on to the emotion modules. For this I'll use as a foundation Bas Steunebrink's formal model of emotions (see his thesis here) and combine it with my own models for runtime monitoring and control (I'm still writing my own thesis).

The emotion model has three separate subsystems: emotion elicitation, experience, and regulation. Emotion elicitation causes immediate emotions, such as happiness when I give the robot a toy. Emotion experience takes these immediate emotions as input and produces a mood. These are longer lasting effects, such as a happy or brooding mood. The mood will always slowly decay to neutral when no emotional input occurs over time. Finally, emotion regulation causes the agent to act upon emotion. E.g., using your project, a switch in mood can cause the eyes of a JD head (which I intend to buy for my Roli) to use a specific mood-associated animation.
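In EZ-Script terms, the decay part could look something like the sketch below, although this is only an illustration with made-up names and timing; my actual implementation lives in the Java/Prolog agent:

    # Hypothetical mood decay: $Mood drifts back toward 0 (neutral)
    # whenever no emotional input bumps it up or down.
    $Mood = 0
    :decay
    Sleep(5000)
    IF ($Mood > 0)
      $Mood = $Mood - 1
    ELSEIF ($Mood < 0)
      $Mood = $Mood + 1
    ENDIF
    Goto(decay)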

Using my quick prototypes I find it much easier to perform relatively complex behavior through my agent software rather than purely in EZ-Script. Especially something like emotions, which permeates many aspects of the robot, is hard to implement and maintain using only EZ-Script. So the framework is pretty complex, with a lot of subsystems, but once you get the hang of it, building A.I. for EZ-Robots (and other systems) becomes easier.

That's just the gist of it :D

#15  

@BasTesterink

Quote:

  • ARC can send percepts to my Java based agent software. (using HTTPGet)
  • My agent software can send commands to ARC. (using Telnet)

Very interesting. The work by Bas Steunebrink is fascinating, and I look forward to seeing your work as well. I was wondering, though: are the lines in the quote reversed concerning the mode of operation? That is to say, should it have been:

  • ARC can send percepts to my Java based agent software. (using Telnet)
  • My agent software can send commands to ARC. (using HTTPGet)

Thanks.

Netherlands
#16  

@WBS00001 To my knowledge, ARC cannot use Telnet from within EZ-Script to push information out of the system, but it can use HTTPGet in EZ-Script to push information out.

The other way around, you cannot use an HTTPGet command to set variables and start a script from outside of ARC, but you can use Telnet for that.

For instance:

When the agent wants to find me it sends the telnet command:

    ControlCommand("scripts", ScriptStart, "detectFace")
    

The EZ-Script is:

    $CameraObjectWidth = 0
    ControlCommand("Camera", CameraFaceTrackingEnable)
    
    REPEATUNTIL($CameraObjectWidth > 100)
      WaitForChange($CameraObjectWidth)
    ENDREPEATUNTIL
    
    ControlCommand("Camera", CameraFaceTrackingDisable)
    
    HTTPGet($PERCEPTURL + "faceDetected/user/")

(Comment: because face recognition produces many small false positives, the script uses a minimum camera object width so that it only counts a detection as positive when I'm fairly close to the camera. In the future I'll try other, more accurate face recognition software.)

The $PERCEPTURL variable is set by an init script and refers to my agent's internet address.
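For example, the init script only needs a single assignment like the one below, where the address is just a placeholder for wherever the agent happens to be listening:

    # Hypothetical init script: point the percept URL at the agent software.
    $PERCEPTURL = "http://192.168.1.50:8080/percept/"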

My design principle is to use EZ-Script only for specifying high-level observations and actions in small scripts, and to let the agent software do all the reasoning and decision making.

I hope that clarifies it a bit.

#17  

Hi Technopro,
I like the idea. Are you going to make only the head? It needs a lot of servos for expressions, and the EZ-B can only control 24 servos.

#18  

@dinesh.kra

I made this project as a template of sorts. I can't tailor it to one robot, or it won't be compatible with another. Have a look at the code, and if you want to add actions to an emotion, simply add them to that emotion's script. Simple!

#19  

When installing EZ-AI.exe with the installer, the program says I have to have .NET version 4.5. My computer tells me I already have it, but EZ-AI doesn't recognize it, and it won't let me install the version it has on your site.

Thanks!

#20  

My bad, I hit the wrong key and downloaded version 4.

#21  

I do have a problem. It tells me that the license for Visual Studio is invalid. I don't use Visual Studio; I use Code::Blocks to work with C++. Do I have to install Visual Studio? It is so big and bulky that it takes up a lot of space and slows the computer down, I guess. What do you suggest?

#22  

@Moviemaker, you are posting in the wrong thread. This is not David Cochran's EZ-AI thread, it's Technopro's Emotions thread.

#24  

Sorry, Alan. My bad, please forgive me.

Thanks, Richard, for letting me know.