Resolved by DJ Sures!

True Autonomous Find, Pick Up and Move

Rich mentioned today that it would be great to see a robot autonomously find an object, pick it up and move it. I agree! How the heck do you do that? I've been looking around at old posts but can't find anything to get me started.

I would like to do something like this with the robotic arm I just completed and eventually with my InMoov.

I suppose that a place to start would be to have it find an object (camera), then navigate to the object; moving it would be the easiest part. Anyway, just blue-skying here.

This, of course, would be a great feature for a robot to have to assist a person with limited mobility. Anyone have any thoughts on this?

Edit, Sorry about the title of this post. Why can't I edit that?




Actually seeing the object would be the easy part. The problem with autonomously retrieving an object is trying to detect that the robot has picked it up. However, you could use an IR/sonar sensor to detect that the object is in the hand. That would be a little funky looking to have a sensor on each hand, but that seems like the easiest way since we know that those work with the EZ-B. Alternatively, you could use a flex sensor or a small button in the palm/on the fingers. I don't think anyone really knows if those work on the EZ-B or not, but if you want to try and figure out how those work and find a good way to use them via EZ-Script, that might just be the best option.:D


The question needs to be more specific: what are you picking up, and what does the robot look like?

A distance sensor in the hand would be great to ensure you have the object.

Finding the object is actually really easy. The camera control stores the detected object's location in a variable your code can read.



Pretty sure he is talking about his InMoov.


I would like to be able to do this with the robotic arm I just completed (see my latest video) and do something like this:

Have a red, a green and a blue object placed randomly within the reach of the arm. Given a command, like "move the red object", the robot would find the red object, pick it up and move it.

It will be easy to add a camera and an ultrasonic sensor to the arm. The arm has 5 DOF.


Ah! No problem. We can help you with that:)

First you'll need to understand the relationship of the head servo positions.

Then use a multiplier on the head servo positions against the camera's object position - assuming the camera is in the robot's head.

From that calculation, the next step is to calculate the position of the arms to get to the object position

Actually, I have a real good idea! Let's start with a simplified exercise. Let's use the camera control to move servos based on the relative position of the camera's detected object, and point the fingers and arm at the object.

What we can do is use the X and Y servo settings (multiple servo mode) to move the arm to point to the direction of the object. Like he's pointing to it


I understand most of that (just barely). I will mount a camera on the robot and play with the control for it and the relative positioning. Thanks


I have an idea in case you want to pick up more than colored balls: you can use glyphs to mark an object, so that way the robot can get into the correct position to pick it up (so, like, a coffee cup has a glyph that tells the robot to grab the handle).


@Sudo, good idea but let's start simple then ramp it up.


I have mounted a camera on my robotic arm and have it tracking a red object. Now I am trying to get it to track by relative position - I have the "Track by relative position" box checked and all of the servos listed with their Min/Max settings in the Multi servo window. I get no movement when trying to track a red object. What am I missing? I am not clear on what the "Ratio" should be, or on the methodology of the Ratio setting, even after watching the tutorial video on relative servo movements. Can someone give me an example? Thanks


If you open the Six project and look at the Wii settings, you can see what I had done. The ratio is a multiplier for the servo to move relative to the position that you are specifying.

For example, the first servo is generally a ratio of 1. That means the servo will move exactly to the position specified relative to the object position.

If you have an elbow, you can specify the ratio of the elbow to be 1.5 which will move the elbow servo 1.5 times the position.
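To make the ratio concrete, here is a small EZ-Script sketch. The position value and the ratios are assumed values for illustration, not taken from the Six project:

```
# The ratio is a multiplier applied to the computed relative position
# before the result is clamped to each servo's min/max limits.
$relative = 60                   # assumed position derived from the object location
$shoulderPos = $relative * 1.0   # ratio 1   -> shoulder moves to 60
$elbowPos = $relative * 1.5      # ratio 1.5 -> elbow moves to 90
Print("Shoulder: " + $shoulderPos + " Elbow: " + $elbowPos)
```

In other words, a larger ratio exaggerates the movement of that joint relative to the tracked position.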

This video shows the elbow and shoulder servos in action, bending the arm in and out.

The checkbox for servo Relative Position in the Camera control will assume the camera is stationary. This assumes you are moving servos that the camera is not mounted on.

Ensure you have both servo Tracking enabled and Relative Servos enabled

User-inserted image

Next, ensure you have multiple servos specified - each with their max and min positions. A multiplier takes a bit of testing to get it right, unless you're going to sit down with a bunch of math:) I generally test at 1, then move to 1.5 or 2, etc..

User-inserted image


Thanks DJ, I will have a look at that Six Project.


When I enable both "Enable servo Tracking" and "Track by relative position", things go crazy. It avoids the object. I have tried inverting the servos but that doesn't change anything. I have 2 servos enabled in the "Multi servo" camera settings, but only one moves.


the relative servo tracking will move the servo into a position that is relative to the detected object in the camera view

For "Track Relative Positions", assume the servo has a set range of 180 positions (min is 1 and max is 180)...

  • If the detected object is in the far left of the camera view, the servo will move to 1.

  • If the object is in the center of the camera view, the servo will move to 90 degrees

  • If the object is in the far right of the camera view, the servo will move to 180 degrees

*Remember, the track relative positions setting assumes the camera is not moving. You are tracking the object and moving the servos based on the relative position of the object in the camera viewport.
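The mapping described above is a linear scale from the object's position in the frame to the servo's range. A rough EZ-Script sketch of the math (the 320-pixel frame width and servo port d0 are assumptions; ARC's built-in tracking does this for you, so this is only to illustrate):

```
# Map the object's horizontal position in the frame (0..319 assumed)
# onto the servo's range (min 1, max 180).
$servoPos = 1 + (($CameraObjectCenterX / 320) * 179)
Servo(d0, $servoPos)
# far left  -> about 1
# centered  -> about 90
# far right -> about 180
```

The vertical axis works the same way with $CameraObjectCenterY and the tilt servo.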

Here is an example of these robot eyes moving to the relative position of the tracked object in the view...


@Bob... Make sure your grid lines in the camera view are set back to default as well.... The camera won't track very well if the grid quadrants are zoomed in too far....


Here's a short video trying to get the relative tracking working. Not working. Any thoughts?


@Richard R the grid lines are not used for the Relative Position setting

@BHouston Again, the camera must be stationary for relative position.

The relative servo tracking will move the servo into a position that is relative to the detected object in the camera view. When using "Track Relative Position", assume the servo has a set range of 180 positions (min is 1 and max is 180)...

  • If the detected object is in the far left of the camera view, the servo will move to 1.

  • If the object is in the center of the camera view, the servo will move to 90 degrees

  • If the object is in the far right of the camera view, the servo will move to 180 degrees

*Remember, the track relative positions setting assumes the camera is not moving. You are tracking the object and moving the servos based on the relative position of the object in the camera viewport.

Here is an example of these robot eyes moving to the relative position of the tracked object in the view... Notice how the camera is stationary and not attached to a servo.

If you want the robot camera to be connected to the claw (which means it is not stationary), then disable relative servo tracking checkbox, setup your grid lines, and use that.


Thanks DJ. OK, yeah, the camera is moving, so I will set it up with relative servo tracking disabled. I guess now I will need a script to have the camera/arm scan for an object and, once detected, move the claw into position to pick it up. Is that how I should proceed?


If you keep the camera mounted on the end of the claw, then Relative servo tracking will not apply to you. Relative servo tracking is for stationary cameras.

Assuming the object that you are picking up is on the table, what you can do is a script that does something like this...

Step 1
Move to highest looking down position. So the arm is up high and the claw (camera) is looking at the table. Use an  AutoPosition Action to move the claw into this position. This will assume the object that you are going to place on the table is within the camera view...

Step 2
Wait for the camera to detect an object

Step 3
Wait for the arm to move the camera into the center of the quadrants
WaitFor($CameraVerticalQuadrant = "Middle" AND $CameraHorizontalQuadrant = "Middle")

Step 4
Disable the camera tracking

Step 5
Launch an  AutoPosition Action which bends the arm down to what would be the center of screen with the claw open, close the claw, pickup the object, bring it to the viewer and say "look i'm amazing":)

Again, the above logic assumes the item that you are placing on the table is within the camera view. If your goal is to place the object anywhere on the table, then a looping script to rotate the base would be needed, such as...

Step 1
Move to highest looking down position. So the arm is up high and the claw (camera) is looking at the table. Use an  AutoPosition Action to move the claw into this position. *Note: In this version, move the base servo all the way to the LEFT because we will scan across the table looking down.

Step 2
Enable Camera servo Tracking with the ControlCommand()

Step 3
Sleep(1000) to ensure the camera has stabilized and moved into position

Step 4
If ($CameraIsTracking = 1)
  Goto(Step 5)
EndIf
Disable Camera servo Tracking with ControlCommand()
Rotate Base a few degrees, i.e. ServoUp(d0, 10)
Goto(Step 2)

Step 5
Wait for the arm to move the camera into the center of the quadrants
WaitFor($CameraVerticalQuadrant = "Middle" AND $CameraHorizontalQuadrant = "Middle")

Step 6
Disable the camera tracking

Step 7
Launch an  AutoPosition Action which bends the arm down to what would be the center of screen with the claw open, close the claw, pickup the object, bring it to the viewer and say "look i'm amazing":)
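Putting the steps above together, the scanning variant might look something like this in EZ-Script. The base servo port d0, the control names, and the Auto Position action names are assumptions to adapt to your own project:

```
# Step 1 - arm up high, claw camera looking down, base rotated fully left
ControlCommand("Auto Position", AutoPositionAction, "Scan Start")

:Scan
# Step 2 - enable camera servo tracking
ControlCommand("Camera", CameraServoTrackEnable)
# Step 3 - let the camera stabilize
Sleep(1000)
# Step 4 - nothing tracked yet? rotate the base a few degrees and retry
If ($CameraIsTracking = 1)
  Goto(Found)
EndIf
ControlCommand("Camera", CameraServoTrackDisable)
ServoUp(d0, 10)
Goto(Scan)

:Found
# Step 5 - wait for the object to be centered in the quadrants
WaitFor($CameraVerticalQuadrant = "Middle" AND $CameraHorizontalQuadrant = "Middle")
# Step 6 - disable tracking before the arm bends down
ControlCommand("Camera", CameraServoTrackDisable)
# Step 7 - bend down to the frame center, grab, and present the object
ControlCommand("Auto Position", AutoPositionAction, "Object Pickup")
```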


Thanks DJ, You make it sound so EZ. I'll work on the scripting to do that. Everyone standby, you know I'm gonna need help.


Anytime:) Read what I wrote again because I just edited it with a Sleep() and example of using ServoUp() to rotate the servo.

The current WaitFor() command will wait forever... I have added a new command for the next release of ARC which will accept a TimeOut parameter. The new WaitFor() with a timeout parameter will be helpful to you. I will release it this evening once I have run tests back at the office.


This is giving me some great ideas for a modification of my Roli with EZ bits I already have on order.

Eventually what I want is for Roli to wander around and pick up the cat's toys and when it has collected a few, bring them to me. I have been trying to work out how to do it by just extending one of the arms, but now I am thinking it might be better to put an arm on top where the pan/tilt camera is and putting the camera on the end of the arm.



Alan, with Roli I would recommend keeping the camera on the neck, and I recommend using Relative servo Tracking for picking up objects. This is because you will receive accurate coordinates of the object's location - which can be translated into servo positions for the arm to reach out and grab the object.

When possible, I prefer to use Relative servo Tracking

With BHouston's configuration, that isn't possible because the camera is not stationary.


The issue I am currently having is that as Roli approaches an object, I need to keep moving the camera position to keep the object in view. I expect in either case my scripting will be quite complex, as I want to switch from moving Roli to moving the arm as I close on the object, but I thought putting the camera on the arm, and having it ride low and in front of Roli, might simplify it.

I'll probably try both ways. I do think the pan/tilt gives Roli a cute face and some personality I would lose in the other configuration.

I'll stop hijacking this thread for now and start my own if I need help or to show what I have done after the parts arrive and I have some time to work on it.



Update on this project: I have the arm "scanning" the table for a red object, and once it finds the object it stops scanning and centers the object in the middle of the camera frame. Thanks DJ for the script outline for that. My challenge now is to come up with a way to direct the claw to the center of the frame to pick up the object. If I use an Auto Position, it will of course go to that position, but that may not be the middle of the frame. I'm thinking I need to use some sort of variable based on the middle of the frame to move the claw into position, but I'm not sure how to do that, or if that's the way to accomplish it. Any thoughts?


The middle of the frame should always be the same position, no?

The idea is to have the robot center the object in the middle of the camera view. From that point, the robot arm simply needs to use an Auto Position to move to a position that would be the center of the frame.


Here's my code so far for this. It finds the object, but then whatever Auto Position I have it go to, it does - but it won't keep the object centered in the camera frame, and I need it to keep the object centered in the camera frame.


ControlCommand("Script Manager", ScriptStart, "Scan table")

:Step2
# Step 2 - enable servo tracking and track the color red
ControlCommand("Camera", CameraServoTrackEnable)
ControlCommand("Camera", CameraColorTracking, "red")

# Step 3 - give the camera a moment to stabilize
Sleep(1000)

# Step 4 - if nothing is being tracked yet, disable tracking and scan again
If ($CameraIsTracking = 1)
  Goto(Step5)
EndIf
ControlCommand("Camera", CameraServoTrackDisable)
Goto(Step2)

:Step5
ControlCommand("Script Manager", ScriptStop, "Scan table")

# wait (up to 1 second) for the object to be centered in the frame
WaitFor($CameraVerticalQuadrant = "Middle" AND $CameraHorizontalQuadrant = "Middle", 1000)

# Step 6 - stop tracking before moving the arm
ControlCommand("Camera", CameraServoTrackDisable)

# Step 7 - pick up the object, present it, then return to rest
ControlCommand("Auto Position 2", AutoPositionFrame, "Object pickup", 25, 3, 5)
ControlCommand("Auto Position", AutoPositionFrame, "Hand to me", 25, 3, 3)
ControlCommand("Camera", CameraColorTrackingDisable, "red")
ControlCommand("Auto Position", AutoPositionFrame, "Rest", 25, 3, 4)


The middle of the frame will always be the same position. The idea is to have the robot center the object in the middle of the camera view. From that point, the robot arm simply needs to use an Auto Position to move to a position that would be the center of the frame.

Perhaps your AutoPosition is also moving the base? It only needs to move the arm servos to guide to the center of the camera frame.


The Auto Position only moves 2 servos in the arm; the base servo and the servo that the camera is on are run by the camera tracking. I have tried several different Auto Positions, but if the object is not within the arc of that AP, it can't keep the object in the centre, even though the camera and base servos are still trying to keep it there.


Disable camera tracking while the Auto Position bends down to pick it up.

Once the object is in the center of the camera, it no longer needs to track. It simply needs to execute an Auto Position to pick up an object that "should be where the center is"


Here's a short video of the arm searching for and picking up an object. It's working pretty well, but as I explain in the video, if the object is outside of the arc that the Auto Position is set to go to, then it will miss it. Is it possible to set the grid lines of the camera in a script so they adjust automatically to the settings I need? Is it possible to use the $CameraObjectCenterY and $CameraObjectCenterX variables to direct the claw to the object once it has been found?


As you can see from your video, there is no need to continue scanning for the object. When it finds the object, disable the servo Tracking. Then the Auto Position merely needs to bend down to pick up what is in the center.

You do not want to adjust the grid lines - I can make a ControlCommand() for that, but that's not what you need to do. Once you have too much of the object on the camera, the tracking is useless because it fills up most of the screen and only parts of the color are detected due to shadows.

When the AutoPosition begins heading down to pick up the object, disable servo Tracking.


Thanks for all your help, DJ. Just tried that; it didn't make a difference. I'll work on it again tomorrow.


I asked this question a couple of days ago and didn't get an answer, so I'll post it again. Is it possible to use the $CameraObjectCenterY and $CameraObjectCenterX variables to direct the claw, with the related servos, to the object once it has been found?


You can, but that value changes as the arm moves toward the object. Because the camera will be seeing the object in a different location.

Also, as you have experienced, as you move closer to the object the color gets darker and isn't detected entirely - so you will receive X and Y values that are not for the entire object size.

Instead, disable the camera servo tracking and execute an Auto Position which moves the robot down to where the center of the camera view is and picks up the object. Once you have the object in view, you will know exactly where to bend down and pick it up. It will require a few Auto Position frames to bend the arm down toward the object, but you will know where it is because it was in the center of the camera view.


I am going to mark this as resolved. I don't have it working exactly as I had first envisioned it but with a little tweaking, I think I will be able to improve it. Earlier in the thread @DJ you say that a ControlCommand() could be made so that the camera grid lines could be set, I would find that very useful.

Thanks for everyone's input.


I searched the forum and came up with this post. I would like to possibly expand on the idea and request any ideas or help. A) My robot has head pan/tilt with a camera mounted in the eye. Do I use relative servo tracking or not? B) I would like the robot to pan the room until it finds its red ball, then move towards the red ball with a slight bias to the left drive track. C) The arm extends down in front of the left drive track and grabs the object.

I can do the extend arm part down and grab part fine. Where I believe I require assistance will be with commands for finding "red ball " and moving towards it a specific distance. Thanks for any ideas/help


I am interested in the same thing. I can track an object with the pan/tilt, or with the robot itself, but as the robot gets closer, I need to change the camera (tilt specifically, since once the pan is centered I want to turn the robot body, not the camera, unless I need to get around an obstacle, which is a whole different challenge) to keep the object in view, and coordinating the activities has been a challenge (not that I have put a huge amount of time into it, but when I did, I found myself quickly moving on to easier things...).




I don't want to hijack your question, so let me describe what I think we are both trying to do, and you can tell me if I am off base.

Let's say the robot is stationary but scanning the room with the pan/tilt camera. It sees its object of desire (red ball, fluffy mouse, etc.) at some distance, to the left side of the room. The robot centers the camera on the object, then the robot starts turning left as the camera turns right, keeping the object centered, until the camera and robot are both facing forward with the object dead ahead.

Then the robot starts to move forward towards the object, with the camera tilting down to keep the object centered. When the tilt reaches the point that indicates the object is within the claw's reach, the robot stops moving forward and picks up the object.

For yours, that pan/tilt point will be slightly off center to the left because your claw is over the left tread. For mine it will be dead center because my robot arm is on the right, but articulated enough that the claw centers in front of the robot.

Am I close?



Absolutely right on. It doesn't necessarily have to be the left arm , could be the right. That's a moot point though cause I don't want to add any unnecessary complexity to the already complex task. Someday it will evolve into which side of the robot the object is closer to.


Btw when you figure it out, please pass on your findings


OK. I don't have it smooth, and what I have is theoretical so far, but I think it can be done in little steps. I.e., center the camera on the object, then switch to moving the robot until the object is at the edge of the field of vision, then move the camera again, alternating back and forth between the two until the camera is facing forward, then move the robot forward a bit, adjust the camera tilt, move again, etc. It's going to be a complex script, and kind of jumpy, but I should be able to work it out.

I am working from home all next week so should have an extra 90 minutes a day I would spend commuting at least so may have some time to play with robots. I am using a Roli with an adaptation to make the arm longer, so it should be a good small scale model for your robot.



I will have to make a video on how to do this - as my posts say - it's super incredibly easy.

The EZ-Script variables from the camera control are available to you. All you have to do is identify the location of the object and keep it centered while the arm moves toward it.

Remember, as the programmer/robot designer, you know exactly where the robot arm is by the number of servo degrees. This means you know where the table top is. This means you know how many degrees to move in either direction to center the object. This means your code is in control.

It's real easy guys, i'll have to do a video one day in the future for the new activity website that we're launching in May. In the meantime, you can do it:D



I think the problem lies in the fact that we are speaking of a mobile robot, not a tabletop arm as in the original poster's post. I would like my machine to locate the object 10 feet or so away (scanning the room via head pan and tilt) and then physically roll over to the object, lower its arm, open its gripper and pick the object up. Again, scripting lower arm, open gripper, close gripper is all easy. Locating said object and moving the robot to its location is the issue. Another challenge for me is that my machine is now a 178 lb machine that will damage walls, so doing this safely is important to me as well. Any ideas are welcome and appreciated.

Alan, my thoughts follow yours exactly... Find object, drive towards it 3 seconds, stop, begin scanning again via pan and tilt to correct the robot's heading, drive forwards again, stop, scan, pick up item. Or something along those lines.


This is also something I'm looking forward to having CY be able to do. For my robot, it has the addition of a small tray in front for the arms to pick and/or place objects onto. Cy weighs in at 25 lbs, so I too want to make sure he doesn't crash into walls or run over someone's foot (don't ask, heheh):D



Challenge accepted!


Hmm. I thought I posted something here, I must have closed my browser without posting when I got interrupted by a phone call. Anyway...

DJ, Great to have your help on the project. I am sure the solution will be awesome now. If it would help to have a 4in1 sensor to tell the robot to move a specific number of degrees rather than a set amount of time or until the object reaches the edge of its vision, I have a few extra that I got in the Brookstone BoGo sale as spares that I would be happy to share with @kamaroman68 and @RoboHappy.



The biggest challenge isn't the code. Code is easy, fun, and anything can be done... if you know what your goal is.

I still haven't been able to identify a goal. Obviously a robot can't just drive around a room and pick up random stuff - there are philosophical conversations regarding motivation and free will to understand first:):)

However, to navigate across a room to identify a table and search for a specific object on that table, and pick it up. Now that's more like it.

Firstly, even a human can't just wander around a room "looking" for something without an idea of where that something should be looked for. As an example, if I told you to get my keys from my house, would you start looking in the sofa cushions first? Would you look behind the television and pick up the stove to look under it? Obviously some rules need to be defined.

Let's say it's

  1. find table
  2. look for object on table
  3. pickup object

Sure, that's easy. Of course what ezrobot is still missing is a method to identify its location and navigation waypoint within a home. Lidar is far too expensive for ezrobot to invest into as an ezbit. But the vacuum thing that everyone seems to care about is affordable... Just not easily integrated, yet. I have one, and should really spend some time with it.

However, if you saw the dev schedule that Jeremie and I are undertaking right now to get these ezbits shipping (8x8 rgb, inverted pendulum, rgb serial led, line follower, Ezb mini, etc...) geez we have been working every day and I think we're keeping our local pcb manufacturer very wealthy with daily prototype redesigns!

Anyway, that aside.... Identifying the table location is waypoint navigation. Can be done.

Identifying how high the table is... Well, I can try and use the InMoov... But the InMoov kinda sucks for any coordinated activity. Sure, it looks great, but it's slow as molasses and its hands are absolutely useless. I will say that the InMoov's hands are great for burning out servos... That's about all they can do. Picking something up? Ha, good luck!

Guess I'll have to put more thought into what we are trying to achieve here...


If you want to do this simply, you would only need to turn your robot until the servo position rotating the head is centred on the robot chassis. Then it's a matter of driving and picking up the object.
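That turn-until-the-head-is-centered idea could be sketched like this. The pan servo port d2, the center position of 90, and the tolerance are all assumptions, and camera servo tracking is assumed to be running so the head keeps pointing at the object while the body turns:

```
:Align
$pan = GetServo(d2)    # current head pan position while tracking runs
If ($pan > 92)
  # head is panned past center on one side - rotate the body toward the object
  Left()
  Sleep(100)
  Stop()
  Goto(Align)
EndIf
If ($pan < 88)
  Right()
  Sleep(100)
  Stop()
  Goto(Align)
EndIf
# head pan is centered on the chassis - drive toward the object
Forward()
```

Which direction ($pan high or low) corresponds to Left() versus Right() depends on how the pan servo is mounted, so that may need to be swapped.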


I think all three of us are looking for something on the floor, not a table, so that isn't the issue. I think we also can all come up with a way to wander around until we see the "thing". Rich's Ping Roam script is a good place to start if we can't. It is once the "thing" is spotted, getting the robot close enough to it and properly oriented that is the challenge. Like @kamaroman68 said, once we get close enough that we can treat the robot like a fixed arm on a table, the earlier example in this thread solves the getting the arm in place and picking up, it is the coordination of a pan/tilt camera and a mobile robot to get to something once seen that seems to be the harder part.

For simplicity, let's use a red ball as the item we all want to find and pick up. (Mine will actually be an assortment of cat toys, but each will be a previously trained object, and I'll only be looking for one at a time. i.e. "Roli, fetch the green bug toy").

Other parameters we can agree on to start is that the item will be within 10 feet or so, and a size that can be picked up by an EZ-Robot claw (kamaroman68 may be going for something bigger, but that should be easier, not harder, since a larger object is easier to identify from a distance). Let's also assume for now that the object is easily trainable and distinct from anything else in the room. A bright red ball in a room with no other red objects to confuse it. Once we get the basics working, getting more complex recognition can be an exercise for the students.

@kamaroman68 and @RoboHappy, feel free to correct my assumptions or provide additional guidance to help explain what you are looking for or to help simplify the task.



Hmmm, wait so this is fun! Could it be a game of hide and seek with the robot?

We can use Roli's as a good base to get started. Have the robot run around until it finds the red ball! I like it.

Also, how many of you would be interested in using the lidar scanner thingy that Dave Cochrane and Richard were talking about using, if I added a control for it?

I just haven't figured out the importance of it yet... or the purpose. Guess this is a good place to start!


If you know the room layout, you know the boundaries. This helps in the equation. I just haven't had time to get back to the LIDAR lately; I have soooo much going on. Help on the LIDAR would be awesome.


I am interested in the Neato Lidar if for nothing other than really good collision avoidance. The ping and ir sensors are OK, but the Lidar is much more precise.

And, yes, (not to be selfish, just working with what I know you have) a Roli with the arm either mounted to the front or extended (see the project MyRoliMKii for an example) is probably a good small scale model of what the other guys are doing.



Lidar is second on the add-ons list.

First, it is important to have a "dead reckoning" system, which implies wheel encoders. Roli does not have encoders, so the next question is how to add them.

When choosing an encoder, resolution is important; anything less than 1000 ticks per revolution can fall short.

This is my opinion, but I could be wrong. Can someone validate what should come first?


The thought I have is that even without encoders, the LIDAR can be used to tell you if you are going relatively straight or not, but used together you have a really accurate system of measurement.

If you know that an object is 7 feet away, and you move forward a distance, the object would then be 7 minus the distance moved. So, if the object is now 5 feet away and you moved toward the object, you know that you moved 2 feet. If you can't tell whether anything is in front of you and you move forward, the encoder solution is what would tell you the distance.

Without the LIDAR, the encoders would work great to tell you the distance traveled, but you lose the rest of the room.

The ultimate solution is a SLAM-based approach, where you build a map grid and know where you are and what has changed in the environment. The camera is then used to detect the object and to verify which direction you want to move in to get to it.

This is all just my opinion. I can't wait to be able to get back to working on it.


But the question is: can a robot perform navigation/localization without encoders?

If you have only lidar data, you will need to perform a lot of calculations to understand whether you're moving straight, or whether you are reaching a wall and an obstacle at the same time.


Yeah - the robot works better without wheel encoders. There is far too much slip when turning, specifically with tracks to be reliable with wheel encoders.

I refuse to use wheel encoders after having terrible past experiences. In theory, it's great... Wheels turn, controller counts and knows how to keep distances. But that's not the case when turning or driving over rough terrain.

If it's okay with everyone, I would prefer to focus on localized navigation with a lidar or similar approach rather than continue the wheel encoder discussion. I've had this discussion on this forum at length in the past and do not wish to revisit it again if possible:)


Oh, might also be worth mentioning that it wouldn't be too difficult to whip up a wheel encoder replacement with a gyro. I should consider that as an option...


Hey everyone, I'm glad to see that bringing up this old thread has sparked some interest. I love the idea of new technologies being implemented to come up with a solution to this problem (lidar, wheel encoders), but on a selfish side, I'm not sure how I would implement lidar, for example, into my machine. The other issue that I had to overcome in the past is that the construction of my robot is all aluminum. It forced a redesign when I received my EZ-B v4, as the wireless could not penetrate reliably. Sure, I could create a plastic "tower" of sorts to mount a gyro or 4-in-1 sensor (thanks, by the way, Alan). I'm still open to all ideas!


@kam, the soon-to-be-released /2 has a USB option. It will help you.


Yes, I'm waiting for that board release; however, it will again force a redesign on my part. Correct me if I'm wrong, but I thought you mentioned that with that board you lose the option of using the EZ camera when using the USB port. I will have to modify the eye sockets to accept another type of (hopefully small) wifi camera that ARC plays nicely with. That's all good though; I don't mind modifying when it's an upgrade. Have you given any more thought to the above task?

One other thing: in a previous conversation with me, you were thinking about adding "servo trim" for my Dynamixel MX-64T servos because they don't have full range like the AX-12A. Thanks for all the help, everyone!


You lose the camera port, but you can also connect the camera to the computer via USB, as the camera supports USB as well.

So no redesign is necessary for you.



Just to be clear: is the camera another USB camera, or is it possible to buy a USB adapter for the EZ-B camera?


USB adapter for existing cameras