
Sensors For Object Grasping By A Robotic Arm

I'm looking for simple sensors to make a robotic arm detect an object and then grasp it. I can't find adequate sensors capable of detecting and measuring the distance of small objects: sonars and IR sensors have large sensing beams and can't precisely detect small objects. Can somebody suggest adequate sensors for the purpose (other than a video camera)? I found a device with multiple IR sensors, named APDS-9960, from SparkFun, with an I2C interface, capable of measuring short distances (10-25 cm) and even of detecting movement direction. Mounted near the grasping hand, it could scan the object and give information to drive the arm servos as needed. Does anyone know it or have experience using it?



#1  

I don't think that would be a good sensor. If you look at how the video demonstrates it, it's really designed for gesture detection. If it were mounted on a robot arm that itself would need to move to grab something, I don't see a good way to use it.

To detect whether an object is in range, you can use sonar and IR sensors. To identify the object (for the robot to know a ball from a bat), you'd want to use a camera and have trained objects in software. To give your robot the ability to know if it has hold of something, you need a force feedback system; this could be IR sensors, touch sensors (switches), pressure sensors (pressure-sensitive resistors), or all of the above.
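For the pressure-sensor option, here's a minimal Arduino-style sketch of the idea, assuming an FSR in a voltage divider feeding A0 (the pin and threshold are illustrative, not from any particular build):

```cpp
const int FSR_PIN = A0;         // FSR + fixed resistor divider into A0
const int GRIP_THRESHOLD = 300; // raw ADC reading; tune for your divider

void setup() {
  Serial.begin(9600);
}

void loop() {
  int force = analogRead(FSR_PIN);        // 0-1023 on a 10-bit ADC
  bool holding = force > GRIP_THRESHOLD;  // crude "are we gripping something?" test
  Serial.print(force);
  Serial.println(holding ? "  (holding)" : "  (empty)");
  delay(100);
}
```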

#2  

I was thinking of using the gesture capability of the sensor to scan the object, so as to define its borders, mounting it on the hand's wrist. I'd perform the following steps:

  1. By manual commands, move the robot hand near the object (5-6 inches), checking the distance with the IR sensor.
  2. Move the hand, by the arm servos, right-left and up-down in front of the object, performing a sort of scan, so defining the object's borders in relation to the servo positions. For this I'd use the gesture capabilities of the sensor (a rough sketch of this scan loop is below).
  3. Position the arm so that the object is in the center of the open hand.
  4. Close the hand.

What about this procedure?
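Something like this is what I have in mind for step 2. A rough Arduino sketch, assuming two wrist servos and the SparkFun APDS-9960 library for proximity readings (the servo pins, sweep angles, and proximity threshold are guesses, not tested):

```cpp
#include <Wire.h>
#include <Servo.h>
#include <SparkFun_APDS9960.h>

SparkFun_APDS9960 apds;
Servo panServo, tiltServo;

void setup() {
  Serial.begin(9600);
  Wire.begin();
  panServo.attach(9);
  tiltServo.attach(10);
  apds.init();                        // configure the sensor's registers
  apds.enableProximitySensor(false);  // proximity mode, no interrupts
}

void loop() {
  long sumPan = 0, sumTilt = 0, hits = 0;
  // Raster-scan a small window in front of the hand.
  for (int tilt = 70; tilt <= 110; tilt += 5) {
    tiltServo.write(tilt);
    for (int pan = 60; pan <= 120; pan += 5) {
      panServo.write(pan);
      delay(50);  // let the servos settle
      uint8_t prox = 0;
      if (apds.readProximity(prox) && prox > 50) {  // object detected here
        sumPan += pan; sumTilt += tilt; hits++;
      }
    }
  }
  if (hits > 0) {
    // Centroid of the detections ~= where to center the open hand.
    Serial.print("center pan: ");  Serial.print(sumPan / hits);
    Serial.print("  tilt: ");      Serial.println(sumTilt / hits);
  } else {
    Serial.println("no object in scan window");
  }
  delay(2000);
}
```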
#3  

It sounds reasonable enough and plausible. I think it's important to remember that the APDS-9960 is an RGB light sensor that can see color (as clear, red, green, and blue channels), do proximity detection, and sense gestures (the motion of your hand). All of which sounds fancy until you realize the sensor is just a type of tiny array light sensor that can measure how light passes over it.

To my knowledge no one has tried to use a gesture sensor as a robot hand sensor, so I don't think anyone can say it 100% will not work, but in my judgment it's not likely to work very well in that role.

It does use I2C communication, so you'd either want to use an Arduino to translate between the sensor and the EZB or be prepared for a lot of programming and tinkering to get it to work natively.
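As a rough idea of the translator approach (assuming the SparkFun APDS-9960 Arduino library; the baud rate and one-value-per-line format are arbitrary choices, not an EZB requirement):

```cpp
#include <Wire.h>
#include <SparkFun_APDS9960.h>

SparkFun_APDS9960 apds;

void setup() {
  Serial.begin(9600);  // serial link the EZB can poll
  Wire.begin();
  if (!apds.init() || !apds.enableProximitySensor(false)) {
    Serial.println("APDS-9960 init failed");
  }
}

void loop() {
  uint8_t prox = 0;        // 0 (far) .. 255 (very close)
  if (apds.readProximity(prox)) {
    Serial.println(prox);  // one reading per line
  }
  delay(100);
}
```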

#4  

Plus, I don't see any examples of it being used for edge detection, or of returning those values in Arduino code. I think this is where the listed functions of the product can be misleading, in my opinion, because all those functions are there to support hand-gesture sensing, not general-purpose object sensing.

#5  

Before discovering the EZB, I wrote many programs using PIC MCUs, with compiled BASIC or assembly. I2C communication wouldn't be a problem; there are easy statements for it in BASIC. But I'd have to understand the many complex registers needed to interact with that sensor. I'm used to having PICs on board together with the EZB, in my robot, for example, to get a very fast and precise obstacle-avoidance system using 3 sonars. I wasn't able to get the same performance using the EZB only (the response was too slow, and the robot crashed into the wall!). I'm thinking of using the 9960 because sonar and IR sensors don't seem able to make a precise scan of small (1-2 inch) objects. Do you know sonar or IR sensors adequate for the purpose? I'd prefer to avoid a video camera, whose operation seems neither easy nor precise enough.
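To give an idea of the register work involved: a bare minimum proximity read over raw I2C, no library. The register addresses below are from the APDS-9960 datasheet (device 0x39, ENABLE 0x80, PDATA 0x9C); the same write/read sequence should translate to PIC BASIC I2C statements, and gesture mode needs many more registers than this.

```cpp
#include <Wire.h>

const uint8_t APDS_ADDR  = 0x39;
const uint8_t REG_ENABLE = 0x80;  // power / feature enable bits
const uint8_t REG_PDATA  = 0x9C;  // 8-bit proximity result

void writeReg(uint8_t reg, uint8_t val) {
  Wire.beginTransmission(APDS_ADDR);
  Wire.write(reg);
  Wire.write(val);
  Wire.endTransmission();
}

uint8_t readReg(uint8_t reg) {
  Wire.beginTransmission(APDS_ADDR);
  Wire.write(reg);
  Wire.endTransmission(false);  // repeated start
  Wire.requestFrom(APDS_ADDR, (uint8_t)1);
  return Wire.read();
}

void setup() {
  Serial.begin(9600);
  Wire.begin();
  writeReg(REG_ENABLE, 0x05);  // PON (bit 0) + PEN (bit 2): power on, proximity enabled
}

void loop() {
  Serial.println(readReg(REG_PDATA));
  delay(100);
}
```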

#7  

Thanks, Tony. Those sensors are good for detecting the presence and measuring the distance of an object in front of them, but I need to know where the object is with respect to the hand, and what its "contour" is. This is necessary in order to move the arm servos so that the object ends up in the center of the open hand, and then take it.
This is possible using a video camera, but I'm looking for a simpler solution.

#8  

Leonardo,

What you want does not exist; basically you want a sensor to do all the hard work.

More important than a camera is a depth sensor (a 3D camera, e.g. Kinect or RealSense): you need to "see" 3D space. A depth sensor provides a point cloud (multiple 3D points).

Then you will need a robot model (e.g. URDF), where you map (x, y, z + dimensions) the joints, the sensors (the 3D camera and others), and the static objects.

Using the model and the sensor information, you can calculate how far the 3D points are from a specific joint.

Using forward and/or inverse kinematics, you can calculate the joint angles needed to move the arm joints to a specific 3D position.
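To make the inverse-kinematics step concrete, here is a toy solver for a 2-link planar arm. Real arms with more joints need a proper solver (part of what MoveIt provides); the link lengths and target point are made up:

```cpp
#include <cmath>
#include <cstdio>

const double PI = 3.14159265358979;

// Inverse kinematics for a 2-link planar arm: given a target (x, y) and link
// lengths L1, L2, return shoulder/elbow angles in radians (elbow-down pose).
bool ik2Link(double x, double y, double L1, double L2,
             double &shoulder, double &elbow) {
  double c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2); // law of cosines
  if (c2 < -1.0 || c2 > 1.0) return false;  // target out of reach
  elbow = std::acos(c2);
  shoulder = std::atan2(y, x)
           - std::atan2(L2 * std::sin(elbow), L1 + L2 * std::cos(elbow));
  return true;
}

int main() {
  double s, e;
  if (ik2Link(15.0, 10.0, 12.0, 10.0, s, e))  // reach (15, 10) cm with 12 cm + 10 cm links
    std::printf("shoulder %.1f deg, elbow %.1f deg\n", s * 180 / PI, e * 180 / PI);
  return 0;
}
```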

It is not a simple problem.

The ROS MoveIt framework solves many similar problems. It is not simple to configure, and you will need to get into the ROS world, but once you have all the modules working together the results are fantastic.

A simple picture to enumerate the different issues:

[Image: MoveIt concepts diagram]

original url: http://moveit.ros.org/documentation/concepts/

BUT, you can do it yourself too...

What you see in most demos is a "script" action based on well-known positions: the robot arm expects something in a specific position.

You can use a camera to detect a color/object; using a preset calibration you can estimate distances (see the sketch below).

Adding a distance sensor (or several) allows you to measure actual distances.

Adding force sensors lets you control/measure the grip.

Depending on your code logic, the results can be positive.
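A minimal sketch of the preset-calibration idea, assuming a pinhole camera model and an object of known real width (all the numbers are made up for illustration):

```cpp
#include <cstdio>

// One-time calibration: an object of knownWidth (cm) placed at knownDistance
// (cm) appeared pixelWidth pixels wide -> focal length in pixels.
double focalPx(double knownWidth, double knownDistance, double pixelWidth) {
  return pixelWidth * knownDistance / knownWidth;
}

// Afterwards: the same object seen at pixelWidth pixels -> estimated distance.
double estimateDistance(double knownWidth, double focal, double pixelWidth) {
  return knownWidth * focal / pixelWidth;
}

int main() {
  double f = focalPx(5.0, 30.0, 120.0);  // 5 cm object, 120 px wide at 30 cm
  std::printf("estimated distance: %.1f cm\n",
              estimateDistance(5.0, f, 80.0));  // now it appears 80 px wide
  return 0;
}
```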

One good example is Richard R's InMoov butler video clip.

#9  

I'm frustrated at such complex job ! It would require a lot of work to learn , and to buy (probably costly) equipment and software, and might be beyond my technical skill. It's much more than what I wanted to do. I was looking for something much simpler. Richard's video shows how the collaboration of a human being (a girl) can make eveything simpler. The robot goes near her , and she does the more complex job, i.e. taking or putting a bottle in the robot's hand. I was imagining something like that, i.e. with the help by a human being. With the following steps: 1)I (human being) move manually the robot near a small object on the table, e.g. 5-6 inches in front of it. 2)the robot arm , moving by a scrip up-down and left-right, scans it , by simple sensors on the wrist (sonar ? I.R ? which one among the dozens available?) and centers the hand to the object. 3)the hand , by a script, checking the distance, is moved towards the object, so to get it inside. 4)then the hand is closed (it may be done without pressure sensors). I was looking for sensors to perform the task with the necessary precision.