John_S4x4
This looks AWESOME!
A new USB device, called The Leap, creates an 8-cubic-foot bubble of "interaction space" which detects your hand gestures down to an accuracy of 0.01 millimeters - about 200 times more accurate than "existing touch-free products and technologies," such as your smartphone's touchscreen or Microsoft Kinect.
The Leap is available to pre-order now for $70 and is expected to ship early next year. For now, Leap Motion is giving away free units and an SDK to developers here.
Come on, let's make sure that DJ Sures gets hold of 'The Leap'. I am sure he would love to get his hands on one. I also think $70 is a great price for what you get.
Although I can see this soon being built into mobile phones and keyboards, I can see a few other uses for it too, like 3D location mapping for robots with 0.01 mm depth accuracy! It seems quite small too (about the size of a small candy bar), so it's ideal to mount on bots.
If you were clever, you could mount one of these onto an HMD, like the Vuzix VR Glasses, pointing down at your hands/arms. You could then wave your hands and arms about or point fingers and control your robot. You could even render your own virtual hands/arms in your HMD at the same time... and perhaps use the Leap's 3D tracking to make your own version of TrackIR on the HMD.
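Just to show what I mean, here is a tiny Python sketch of that hand-waving idea, mapping a tracked palm position onto a servo angle. It assumes nothing about the real Leap SDK - the palm position could come from any 3D tracker, and the ranges here are made-up numbers.

def palm_x_to_servo(palm_x_mm, x_min=-150.0, x_max=150.0, servo_min=0, servo_max=180):
    # Clamp the palm position to the tracked range, then scale it to a 0-180 degree servo angle.
    clamped = max(x_min, min(x_max, palm_x_mm))
    fraction = (clamped - x_min) / (x_max - x_min)
    return int(servo_min + fraction * (servo_max - servo_min))

print(palm_x_to_servo(-150.0))  # 0   (arm fully left)
print(palm_x_to_servo(0.0))     # 90  (centred)
print(palm_x_to_servo(150.0))   # 180 (arm fully right)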
The drawback is that there does not yet seem to be any evidence that this is camera-based, so no video/photo capture etc. But who cares, because we all get a camera when we buy the EZ-Robot kit!
awesome!!!!
Just crapped my pants, then ordered 2. Six-month wait, but at 70 bucks you cannot lose! Really amazing technology with such a high degree of accuracy. Amazing.
OMG
I guess I've got to have one.
But I'll wait until release day or after.
I just pre-ordered one - can't wait!
I believe that they are already looking at using this for 3D scanning of objects and importing the meshes into whatever modeling software you use. You can see the 3D point cloud at 0:52 in the video.
If attached to a robot, then I am sure your bot could 'see' not only the object it might want to pick up, but also 'see' the robot's gripper/hand. Since it generates depth data, I presume you could use a depth (or overhead) map to work out whether the ends of the gripper have gone past the object. If they have, the object must be between the gripper fingers, so close the gripper and pick up the object. With 0.01 mm accuracy, I presume your robot could 'measure' an object before it has even picked it up, and know exactly how wide to open the gripper and how far to close it.
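To make that gripper idea a bit more concrete, here is a rough Python sketch of the 'measure the object, then check the gripper spans it' logic. The point cloud is made-up sample data standing in for whatever the Leap (or any depth sensor) would report - none of this is a real Leap or EZ-B API.

# Made-up object point cloud and gripper tip positions, all (x, y, z) in mm.
object_points = [(-21.0, 5.0, 300.0), (-18.5, 9.0, 302.0),
                 (20.7, 4.0, 301.0), (19.9, 8.0, 299.0)]
left_tip = (-30.0, 0.0, 300.0)
right_tip = (30.0, 0.0, 300.0)

xs = [p[0] for p in object_points]
object_width = max(xs) - min(xs)  # how wide to open / how far to close the gripper

# The gripper 'spans' the object once one tip is past each side of it,
# so it is safe to close the fingers and pick the object up.
spans_object = left_tip[0] < min(xs) and right_tip[0] > max(xs)

print("object width: %.1f mm" % object_width)
print("safe to close gripper:", spans_object)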
Or how about this: your robot has The Leap on it and it suddenly bumps into DJ Sures. The robot takes a point cloud of DJ Sures' head and calculates the width. The robot looks up the measurement in its 'head width' database and says "Hello - you must be DJ Sures. I recognize you, as you have an extra big head LOL"
OMG Brett, I'm driving to your house, have that spare bed ready buddy. LOL