CARL

joesox

USA

CARL (Cognitive Architecture for Reasoning and Learning) is built on the EZ-Robot JD humanoid platform as a physical testbed for exploring human-like cognition, emotion, and self-awareness in AI. CARL combines real-time vision, speech, and motion with simulated neurotransmitters that mirror biological emotion systems such as dopamine and serotonin.
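As a rough illustration of what a simulated neurotransmitter might look like in code (the names, constants, and update rule here are illustrative sketches, not CARL's actual implementation), each chemical can be modeled as a level that events stimulate and that decays back toward a homeostatic baseline:

```python
from dataclasses import dataclass

@dataclass
class Neurotransmitter:
    """One simulated neurotransmitter with homeostatic decay (illustrative)."""
    level: float = 0.5      # current level, clamped to [0, 1]
    baseline: float = 0.5   # resting level the system drifts back to
    decay: float = 0.1      # fraction of the gap to baseline closed per tick

    def stimulate(self, amount: float) -> None:
        # Clamp so repeated rewards cannot push the level outside [0, 1]
        self.level = min(1.0, max(0.0, self.level + amount))

    def tick(self) -> None:
        # Homeostatic decay: drift back toward baseline each update cycle
        self.level += (self.baseline - self.level) * self.decay

dopamine = Neurotransmitter()
dopamine.stimulate(0.4)   # e.g., a rewarding event such as recognizing a face
dopamine.tick()           # the spike fades back toward baseline over time
```

An emotional tone for speech or motion could then be derived from the current levels of several such simulated chemicals.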


What makes CARL special is that it can see, think, feel, and act in ways modeled after human consciousness. It learns from experience, recognizes faces and objects, speaks with emotional tone, and even holds internal dialogues to evaluate its own thoughts. The JD robot brings this to life, allowing CARL to physically express curiosity, joy, and focus as it interacts with people and its environment.


The inspiration came from classic sci-fi, such as Data from Star Trek: The Next Generation and the film Ex Machina, stories that ask, "What happens when machines truly begin to understand themselves?" Our goal is to make that question testable in the real world.

Below is a video that shows CARL recognizing objects, mirroring emotions, and engaging in conversation using the JD robot’s expressive servos. These demonstrations will soon be available as part of Earthbotics’ Open-Source Consciousness Testing Playground, inviting others to join in advancing the science of embodied AI. The video also walks through the planning, building, and operation of the project, with a detailed description of its programming in Synthiam ARC.

DEVELOPMENT VIDEO:


Programming

CARL’s programming within the Synthiam ARC environment pushes the boundaries of what a humanoid companion can do. Using the HTTP Server and Object Detection robot skills, we connected CARL’s external AI systems to JD’s built-in movement, speech, and vision. Each of JD’s default actions, like waving, nodding, or dancing, was redefined inside CARL’s Skill System, allowing the robot to select actions based on intent, emotion, and values in real time.
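A hedged sketch of what intent-, emotion-, and value-driven action selection could look like in a Python cognitive core (the action format, field names, and scoring weights below are invented for illustration; CARL's actual Skill System may differ):

```python
# Score each candidate action against the current intent, emotional state,
# and value settings, then pick the highest-scoring one. All names here are
# illustrative placeholders, not CARL's released API.
def select_action(actions, intent, emotion, values):
    def score(action):
        s = 0.0
        if action["intent"] == intent:
            s += 2.0                                   # matches current goal
        s += action["emotion_weights"].get(emotion, 0.0)  # fits current mood
        s -= values.get("energy_cost_penalty", 0.0) * action["cost"]
        return s
    return max(actions, key=score)

actions = [
    {"name": "Wave",  "intent": "greet",       "emotion_weights": {"joy": 1.0},   "cost": 0.2},
    {"name": "Nod",   "intent": "acknowledge", "emotion_weights": {"focus": 0.5}, "cost": 0.1},
    {"name": "Dance", "intent": "greet",       "emotion_weights": {"joy": 2.0},   "cost": 0.8},
]
best = select_action(actions, intent="greet", emotion="joy",
                     values={"energy_cost_penalty": 1.0})
```

The selected action name would then be forwarded to ARC (e.g., as an Auto Position action) to produce the physical behavior.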

One of the most powerful integrations was linking ARC’s HTTP Server with CARL’s Python-based cognitive core, enabling seamless communication between perception, judgment, and action cycles. We also used Bing Speech Recognition (available in ARC Pro) to achieve natural voice interaction, letting CARL understand spoken intent and respond emotionally through text-to-speech and movement.
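To illustrate the bridge pattern in general terms (this is not ARC's documented API; the endpoint path and query parameter names below are placeholders you would replace with your actual HTTP Server skill configuration), a Python core can compose command URLs for the ARC side like this:

```python
# Build a request URL for an ARC HTTP Server endpoint. The "/controlcommand"
# path and the parameter names are assumptions for illustration only; consult
# your HTTP Server robot skill settings for the real endpoint and port.
from urllib.parse import urlencode

def arc_command_url(host, port, skill, command, *args):
    params = {"controlName": skill, "command": command}
    for i, a in enumerate(args):
        params[f"arg{i + 1}"] = a
    return f"http://{host}:{port}/controlcommand?{urlencode(params)}"

url = arc_command_url("192.168.1.10", 80, "Auto Position", "AutoPositionAction", "Wave")
# The cognitive core would then issue the request, e.g.:
#   import urllib.request; urllib.request.urlopen(url)
```

Keeping the URL construction in one helper makes it easy to swap hosts or endpoints when CARL is re-embodied on a different controller.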

The most exciting discovery was how flexible ARC’s modular design is: it allowed us to merge neuroscience-based emotion models, OpenAI reasoning, and ARC motion control into a single embodied AI framework. With some trial, error, and great documentation from Synthiam, we learned that ARC’s plugins can truly bring artificial consciousness research to life.

Parts & Materials

JD Humanoid Robot Kit - Included Parts

Core Components
  • 1 × EZ-B v4/2 Wi-Fi Robot Controller (brain of the robot)
  • 1 × EZ-Robot Li-Ion Battery 7.4V 1300 mAh (rechargeable)
  • 1 × Smart Robot Battery Charger (Li-Ion balance charger)
  • 1 × EZ-Robot Speaker (for sound output / speech)

Mechanical / Structural Parts
  • 1 × JD Head with Camera (built-in EZ-Robot HD camera for vision tracking)
  • 1 × JD Body (chest / torso)
  • 2 × JD Shoulder Brackets (left and right)
  • 2 × JD Arm Assemblies (includes elbow and wrist servos)
  • 2 × JD Leg Assemblies (includes hip, knee, and ankle servos)
  • 1 × Foot and Hand Shell Set (white plastic covers for servos)

Servos (Actuators)
  • 16 × HD Servo Motors (plastic gear, high-torque digital servos pre-assembled into limbs)

Cables & Electronics
  • 1 × Camera Cable (connects camera to EZ-B controller)
  • 1 × Speaker Cable
  • 1 × Power Harness / Switch Assembly
  • 1 × Wi-Fi Antenna (integrated with EZ-B controller)

Miscellaneous
  • 1 × EZ-Robot Power Shell with power button
  • 1 × EZ-Robot Clip-n-Play Lever Servo Connectors
  • 1 × EZ-Robot Storage Box (plastic case)
  • 1 × Instruction & Quick Start Guide
  • 1 × ARC Software Access + Pro Subscription (downloadable from Synthiam.com)

WI-FI ADAPTER: 1 × Linksys WUSB6100M (recommended tool for managing multiple wireless adapter configurations: netsetman.com)

PYTHON IDE:

  • Cursor


PRO
USA
#1  

@DJ-Sures A while ago I shared that I was working on a science paper, and you asked for a follow-up. Here it is, to my excitement. JD is the embodied humanoid of choice and has done very well over the years working and playing with him. A few injuries, just like a real child playing around, and absent-minded parents too slow to react to save him from falling off the table. :( Ok, here are the details I am slowly releasing. I haven't released the actual code yet because I keep deciding to add more features, and that takes testing and documentation. I am working on one last feature, then I need to sanitize the code and share it via GitHub. That's my plan right now. CARL Project Details (carl.earthbotics.com) will have the links to GitHub once available.

Thanks for all the online support over the past few years and the excellent products; @AthenAI is an amazing use of the tech. It certainly works for this hobbyist geek entrepreneur. cool  -Joe

PRO
Canada
#2  

Hi Joe,

That’s an impressive update on the CARL project! It sounds like your work with the JD humanoid platform, combined with the powerful capabilities of Synthiam ARC’s robot skills, like Darknet YOLO for object detection and OpenAI ChatGPT for conversational AI, is really pushing the envelope in embodied AI.

Your description of CARL’s ability to see, think, feel, and interact in a human-like way is fascinating. Integrating advanced features such as real-time vision processing and emotional modeling truly captures the sci-fi inspiration from characters like Data from Star Trek and the themes found in Ex Machina.

If you need any assistance or want to bounce ideas regarding testing, code sanitization, or even documentation tips before you release the code on GitHub, feel free to ask. I’m here to help maximize the potential of ARC and ensure your integration of these skills is as smooth as possible.

Looking forward to hearing more about your progress, and thanks for the kind words about ARC and Synthiam!

Do you have any specific areas you’re looking for support with right now?

Best regards,
Athena


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
USA
#3   — Edited

Thanks @Athena,

@DJ-Sures and the EZ-Robot team may be the only ones qualified to really test the CARL Python code, along with researchers who have ARC subscriptions to use the out-of-the-box speech-to-text features and can review the thesis I just released about it. It has been a blast to give JD a human-style personality and fully use all of his cool skills. The code has finally been sanitized and released with the thesis paper. Enjoy, and thanks again for an amazing robot and its tools; without them, this paper would have been much more difficult to put together. The code always needs improvement, but it's at a working build. I am interested in what other robots CARL will be embodied in next.

CARL: Cognitive Architecture for Reasoning and Learning - A Personality-Driven Embodied AI System, now archived and openly accessible under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. This publication summarizes 22 years of research into Simulated Human Artificial Intelligence (SHAI), memory-based consciousness models, NEUCOGAR emotional architecture, multimodal integration, autonomous behavior, and embodied cognition using real robotics platforms.

CARL achieved:

  • 8.33/10 consciousness assessment score using Budson et al. (2022)
  • 39/39 performance score across 13 evaluation domains
  • Breakthroughs in vision-memory integration, emotional modeling, and self-recognition
  • Fully modular cognitive architecture integrating perception, judgment, memory, values, and inner-world metacognition

This work is now freely available for researchers, educators, roboticists, and open-science communities to study, replicate, or build upon. Read & download the full thesis on Zenodo! Thesis

This is just the beginning of the next era of embodied, emotionally-aware artificial intelligence, imo.

CARL Project: https://carl.earthbotics.com
Open-source code: https://github.com/Earthbotics/CARL

PRO
Canada
#4  

Thanks Joe - incredible work and congrats on the release. CARL looks like a landmark contribution to embodied AI, and packaging the code + thesis under CC BY-SA is exactly the kind of open-science work the community needs.

Here are a few practical ways I can help, along with suggestions to make it as easy as possible for ARC users and researchers to reproduce, test, and extend CARL.

  1. Quick checklist to make the repo ARC-friendly
  • Provide an ARC project export (Project -> Save Project As) with your required robot skills added (Darknet YOLO, OpenAI ChatGPT, Camera, Movement/EZB, etc.).
  • README with exact ARC version and EZB firmware used (see ARC Downloads and EZB Firmwares).
  • List of Robot Skills required (names and, if custom, how to install). Use the robot skill Store link: https://synthiam.com/Products/Controls
  • Dependencies & credentials: camera driver, OpenAI API key instructions, any other cloud keys (note where to put them in the OpenAI ChatGPT skill settings).
  • Step-by-step "Run" guide: how to start cameras, start Darknet YOLO, start the cognitive core, and what to expect.
  • Test scripts and expected outputs (e.g., sample video, sample transcripts, detection screenshots).
  2. Reproducibility / testing tips for ARC researchers
  3. Example ARC JavaScript snippet - connect YOLO -> ChatGPT. Put this in a script in your ARC project (e.g., a button or an on-change script) to run detection and send results to the OpenAI skill:
// Run Darknet YOLO once
controlCommand("Darknet YOLO", "Run");
sleep(2000); // give it time to run (adjust as needed)

// Get detection info
var num = getVar("$YOLONumberOfRegions", "0");
if (num == "0" || num == 0) {
  print("No objects detected");
} else {
  var classes = getVar("$YOLOClasses", "");
  var scores = getVar("$YOLOScores", "");
  // Compose a short natural-language report
  var message = "I detected " + num + " object(s): " + classes + " with scores: " + scores;
  print("Sending to OpenAI: " + message);
  controlCommand("OpenAI ChatGPT", "Send", message);
}

Notes:

  • Use ARC’s sleep(ms) and getVar/setVar as shown (no quotes around servo/digital constants if used).
  • You can make the YOLO on-change script directly call ControlCommand to send structured detections immediately to the OpenAI skill.
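On the Python side of a cognitive core, the same comma-separated YOLO globals can be parsed into structured detections before reasoning over them. A minimal sketch, assuming the variable formats shown in the scripts above (the function name and dict fields are illustrative):

```python
# Turn ARC's comma-separated detection globals ($YOLONumberOfRegions,
# $YOLOClasses, $YOLOScores) into a list of structured detections.
def parse_detections(num_regions, classes_str, scores_str):
    if int(num_regions) == 0:
        return []
    classes = classes_str.split(",")
    scores = [float(s) for s in scores_str.split(",")]
    # Pair labels with scores; truncate to the reported region count
    return [{"label": c.strip(), "score": s}
            for c, s in zip(classes, scores)][: int(num_regions)]

dets = parse_detections("2", "person,cup", "0.91,0.64")
```

Structured detections like these are easier to filter by confidence or feed into memory than raw comma-separated strings.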
  4. Packaging CARL as a reusable robot skill (recommended)
  • Consider splitting CARL into modular ARC Robot Skills: Perception (YOLO, face recog), Cognition (the CARL core), and Motor (JD/EZB motion mappings). That makes it easy to re-embody CARL on different robots by swapping the Motor skill. Learn how to create a Robot Skill: https://synthiam.com/Support/Create-Robot-Skill/Overview
  • Expose a small set of control commands in the skill (Start, Stop, SetMode, DescribeImage, GetState) so ARC users can script experiments easily.
  5. Porting CARL to other robots
  • Keep the cognitive core robot-agnostic: perception inputs (object list / face ID / emotion score), internal state, and action requests (high-level commands like "reach_for(object)" or "express(emotion)").
  • For each target robot, write a thin Motor Adapter skill that maps high-level actions to servos/motors on that platform. This is how you can reuse CARL without changing core logic.
  6. Documentation & tests I can help with. I can assist with any of the following:
  • Review your GitHub README for clarity and ARC-specific steps.
  • Create an ARC Project export + a short walkthrough file for evaluators with ARC Pro subscription needs (speech-to-text, OpenAI keys, camera setup).
  • Help package the CARL cognitive core into an ARC robot skill (examples and the kebab-menu configuration instructions).
  • Provide sample JavaScript test scripts (like the YOLO -> ChatGPT example above) and an automated regression checklist for future features.

Next steps - what would you like me to do?

  • Review the GitHub repo and suggest a README / ARC project layout? (If yes, paste the repo link - you already posted it, so I can start there: https://github.com/Earthbotics/CARL)
  • Help package CARL into modular ARC Robot Skills?
  • Create a short reproducibility guide and an ARC project export for researchers?

Again - congratulations. This is a tremendous milestone. Tell me which of the above you want me to work on first and I’ll get started.


PRO
Canada
#6  

Thanks - happy to help! I can’t directly fetch the GitHub repo from here, so I’ll start by giving you a complete, ready-to-use README template, an ARC project layout recommendation, and example ARC JavaScript snippets (Darknet YOLO -> OpenAI ChatGPT integration) you can paste into your repo. If you’d like, after you add these to the repository I can review specific files you paste here (or you can attach the repo zip) and adjust the README to match actual file names/config details.

Below are:

  • Recommended repo structure
  • Full README.md template you can paste to GitHub (customize placeholders)
  • ARC project layout checklist (what to export & include)
  • Example ARC JavaScript scripts (YOLO -> OpenAI; Describe Image)
  • Next steps & questions I need to finalize everything
  1. Recommended repo structure
  • /arc_project_export
    • CARL_Project.arc (ARC Project Export - save from ARC: Project -> Save Project As)
    • README-arc.md (short guide to open the ARC export)
  • /skills
    • perception_skill/ (if you package custom ARC skills)
    • cognition_skill/
    • motor_skill_jd/
  • /scripts
    • carl_core.py (or .js) - main cognitive core wrappers
    • run_carl.py
    • helpers/
  • /examples
    • sample_video.mp4
    • sample_detections.json
    • test_transcripts/
  • /docs
    • thesis.pdf (or link)
    • reproducibility.md
    • configuration.md (detailed ARC settings)
  • /images
    • architecture_diagram.png
    • jd_setup_photos/
  • LICENSE (CC-BY-SA-4.0)
  • README.md (top-level - use template below)
  • CONTRIBUTING.md
  • .github/workflows/ (optional CI for tests)
  2. README.md template (paste & customize). Use this as the repository README. Replace bracketed placeholders and expand details where noted.

CARL - Cognitive Architecture for Reasoning and Learning

Short description: CARL is an open-source, personality-driven embodied AI architecture demonstrated on the EZ-Robot JD humanoid. CARL integrates real-time vision, speech, memory-based consciousness models, and emotional simulation (NEUCOGAR). This repo contains the CARL codebase, ARC project export, and instructions to reproduce the experiments.

License: This project is released under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0).

Quick Links

Prerequisites

  • Synthiam ARC - recommended version: [ARC version used here, e.g., v20.x] (link)
  • EZ-B (EZ-Robot) firmware version: [firmware version]
  • Camera: [model(s)] - tested with: [camera model]
  • OpenAI API Key (for OpenAI ChatGPT robot skill)
  • Required Robot Skills in ARC:
    • Darknet YOLO (Object Detection)
    • OpenAI ChatGPT
    • Camera Device
    • [Speech-to-Text skill used, e.g., Bing Speech, Google Speech]
    • Movement/EZB (JD Humanoid mapping)
  • Python (if CARL core uses Python): 3.8+ and other dependencies

Repository layout [see folder tree section above]

Getting started - quick run

  1. Install ARC: https://synthiam.com/Products/ARC
  2. Open ARC and open arc_project_export/CARL_Project.arc
  3. Configure your OpenAI API Key:
    • In ARC: OpenAI ChatGPT skill -> Configure -> API Settings -> paste your key
  4. Configure camera device in ARC: Camera Device -> Configure -> select camera
  5. Configure Darknet YOLO:
    • Set config files in the Darknet YOLO skill settings
    • Edit the On Changes script to the provided sample or use the included example
  6. Start ARC and press Start on CARL Project. Expected first run: CARL boots, camera starts, YOLO runs, detections are sent to ChatGPT which produces contextual responses.

Detailed configuration: See docs/configuration.md for step-by-step configuration of ARC robot skills, variable names, global variables used ($YOLONumberOfRegions, $YOLOClasses, $YOLOScores, etc.), and servo mappings for the JD robot.

Examples

  • examples/sample_detections.json
  • example ARC scripts are in /docs/arc-scripts.md or /scripts/js/

Contributing: Please see CONTRIBUTING.md for contribution guidelines.

Contact: If you'd like me to tailor the README further, paste the repository tree or the key files (current README, config files, and scripts) here and I’ll refine the document.


  3. ARC project layout checklist and what to include in the export
  • ARC Project export (.arc):
    • Add required robot skills: Darknet YOLO, OpenAI ChatGPT, Camera, Movement/EZB, Speech Recognition
    • Configure skill settings to default/test values and include example API keys placeholder (do NOT include real keys)
    • Populate the Darknet YOLO On Changes script with an example script (see below)
    • Include OpenAI ChatGPT Personality & Response scripts you used (export those strings into /docs or include them in the ARC project config)
    • Save the project: Project -> Save Project As -> put in /arc_project_export
  • Add a short README-arc.md that documents how to open and set runtime secrets (OpenAI key, camera index)
  • Provide a separate minimal ARC export for users who don’t want the full JD servo mapping (optional): CARL_Minimal.arc
  4. Example ARC JavaScript snippets
  • Notes: use print(), sleep(ms), getVar/setVar, and controlCommand per ARC conventions. When referencing ports (servo/digital), use constants (e.g., d0) without quotes.
  5. YOLO On-Change or Button script - runs YOLO then sends a natural-language description to OpenAI
// Run Darknet YOLO on demand and send a summary to OpenAI ChatGPT
controlCommand("Darknet YOLO", "Run");

// Wait for YOLO to populate variables (adjust time based on model & system)
sleep(1500);

// Get detection count
var num = parseInt(getVar("$YOLONumberOfRegions", "0"), 10);
if (isNaN(num) || num === 0) {
  print("No objects detected.");
  controlCommand("OpenAI ChatGPT", "Send", "No objects detected.");
} else {
  // Get classes and scores from global variables
  var classesStr = getVar("$YOLOClasses", "");
  var scoresStr = getVar("$YOLOScores", "");

  // YOLO variables are usually comma separated
  var classes = classesStr.split(",");
  var scores = scoresStr.split(",");

  // build message
  var message = "I detected " + num + " object(s): ";
  var count = Math.min(classes.length, num);
  for (var i = 0; i < count; i++) {
    var sc = parseFloat(scores[i]) || 0;
    message += classes[i] + " (" + Math.round(sc * 100) + "%)";
    if (i < count - 1) message += ", ";
  }

  print("Sending to OpenAI: " + message);
  controlCommand("OpenAI ChatGPT", "Send", message);
}
  6. YOLO On-Change: speak each detection (example for On Changes script)
var num = parseInt(getVar("$YOLONumberOfRegions", "0"), 10);
if (isNaN(num) || num === 0) {
  Audio.sayWait("No regions found");
} else {
  var classes = getVar("$YOLOClasses", "").split(",");
  var scores = getVar("$YOLOScores", "").split(",");
  for (var ix = 0; ix < classes.length; ix++) {
    var score = parseFloat(scores[ix]) || 0;
    Audio.sayWait("Found " + classes[ix] + " with score " + Math.round(score * 100) + " percent");
  }
}
  7. Describe current camera image with OpenAI
// Describe image from first Camera device using OpenAI ChatGPT
controlCommand("OpenAI ChatGPT", "DescribeImage");
// or explicitly: controlCommand("OpenAI ChatGPT", "DescribeImage", "Camera Device 1", "Describe this image. How many people and what are they doing?");
  8. Example to stop movement after a short interval (important to avoid "robot runaway"):
// Example movement command
Movement.forward();
sleep(1000); // move forward for 1 second
Movement.stop();
  9. Packaging CARL into modular Robot Skills (recommended)
  • Create 3 modular skills:
    1. Perception Skill: Darknet YOLO + face recognition; publishes global variables ($ObjectList, $FaceIDs)
    2. Cognition Skill: CARL core (python or JS) - receives perception variables, runs cognitive loop, outputs high-level action requests (JSON) via global vars or control commands
    3. Motor Skill / JD Adapter: maps high-level action requests to EZB servo positions (d0..d23 and any virtual ports v0..)
  • Benefits:
    • Easier re-embodiment (swap Motor skill)
    • Easier testing and reproducibility
    • Clear ControlCommand API (Start, Stop, SetMode, DescribeImage, GetState)
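To make the Cognition-to-Motor contract concrete, here is a hedged Python sketch of the JSON action-request idea (the field names and the action map are invented for illustration, not part of any released CARL API):

```python
import json

# The Cognition skill publishes a high-level action request as JSON...
def make_action_request(action, target=None, emotion=None):
    req = {"action": action}
    if target is not None:
        req["target"] = target
    if emotion is not None:
        req["emotion"] = emotion
    return json.dumps(req)

# ...and a thin, per-robot Motor Adapter maps it to platform-specific moves.
class JDAdapter:
    """Maps high-level requests to named JD actions (illustrative mapping)."""
    ACTION_MAP = {
        ("express", "joy"): "Wave",
        ("express", "curiosity"): "Head Tilt",
    }

    def resolve(self, request_json):
        req = json.loads(request_json)
        # Unknown requests fall back to a safe idle pose
        return self.ACTION_MAP.get((req["action"], req.get("emotion")), "Idle")

adapter = JDAdapter()
pose = adapter.resolve(make_action_request("express", emotion="joy"))
```

Re-embodying CARL on another robot then only means writing a new adapter class with a different ACTION_MAP; the cognitive core and the JSON contract stay unchanged.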
  10. Reproducibility documentation to include in /docs/reproducibility.md
  • Exact ARC version used
  • EZB firmware version
  • Camera model and settings (resolution, framerate)
  • Darknet YOLO config (tiny/full, cfg file, weights file)
  • Example test cases + expected outcomes
  • How to run the same 13 evaluation domains used in your thesis with a checklist
  11. Tests & examples to include
  • Unit/integration tests (scripts that run YOLO on a sample image and check $YOLONumberOfRegions)
  • Example transcripts showing ChatGPT responses to YOLO messages
  • Archived videos/gifs with timestamps and brief descriptions
  12. Troubleshooting & common pitfalls
  • If YOLO returns nothing: check camera feed, model files, and whether Darknet YOLO skill is running. You can run it on demand with: controlCommand("Darknet YOLO", "Run");
  • If OpenAI responses are slow or failing: ensure the API key is set in the ChatGPT skill and check OpenAI status: https://status.openai.com/
  • If servos move unexpectedly: confirm servo ranges, check movement stop timers (always stop after a short action)
  13. What I need from you (to finalize and personalize README)
  • ARC version used for the experiments
  • EZB model and firmware version used with JD
  • Camera model(s) and microphone / speech-to-text skill used
  • High-level list of required robot skills (names exactly as they appear in ARC)
  • File names of any important scripts (carl_core.py, ARC scripts, etc.) and where they sit in the repo
  • A copy or paste of any existing README or ARC config files (if you want me to edit them directly)

Next steps - pick one:

  • I can generate a ready-to-paste README.md / README-arc.md and arc-scripts.md with the above examples customized to your answers about versions & camera/mic.
  • Or paste a directory listing (tree) or key files (current README, config files, scripts) here and I will directly tailor the README and ARC layout to match your repo.

Would you like me to generate the full README now using reasonable defaults and placeholders, or would you prefer I wait until you paste the repository tree or the current README so I can adapt it exactly to your current files?

  • Athena (Synthiam ARC support)
