Aw WBS? Poor guy. Just had a bad one of those myself last year that put me out of commission for a while too.
Anyway, interesting info @Mickey666Maus, thanks!
Now, if you got per-frame info out of 3ds Max, ported it directly through the Auto Positioner at an equivalent number of "frames", and used the UWP code output to run your board and command the servos (or in my case, NEMAs and servos), wouldn't that resolve your choppy issue?
Reasoning:
I think the final solution to full live replication and .fbx file servo programming is going to rely on essentially creating a script that sits on the backbone of 3ds Max but only selects certain values, then sorts that data in such a way that it can be coded to automatically set the positions at a rate equivalent to whatever the frame rate of the animation is. Honestly, I have no idea how to do that, but I'm glad I could help reignite the discussion. I can see this extrapolated in the future: people programming robotics through a hybrid animation/robot-programming suite, where one only has to design things like lights, sounds, and sensor responses in an animation and have a real machine carry out those instructions, without having to become a master electrical engineer and animator at the same time. And that would just be the start! Well, we're a ways away for now, but let's see if we can get that train chugging.
@Mickey666Maus you said you have the code to export as text. Could you share it? I'm an absolute novice and would love to compare notes, basically to see if we can optimize this method.
And once we get it established, then build the complexity from there. This is cool. Ha.
So I got some help from my friends in the Autodesk community. This is what we have so far.
Let's start with a simple animation first.
Then do the same thing, but with transforms instead of just positions:
-- data structures: one sampled transform per frame, and one record per joint
struct JointPos (time, tm)
struct JointData (node, name, transforms = #())

-- sample the world transform of every joint on every frame of the given range
fn collectJointPositions joints range: =
(
    if not iskindof range Interval do range = animationrange
    data = for joint in joints collect (JointData node:joint name:joint.name)
    for t = range.start to range.end do at time t
    (
        for d in data do append d.transforms (JointPos time:t tm:d.node.transform)
    )
    data
)
/* make a test scene **********/
delete objects
(
    p0 = point name:#hip pos:[0,0,100] wirecolor:red
    p1 = point name:#knee pos:[0,0,50] parent:p0 wirecolor:orange
    p2 = point name:#foot pos:[0,0,0] parent:p1 wirecolor:yellow

    s0 = p0.rotation.controller = rotation_script()
    s0.setexpression "eulerangles 0 (-(sin (t*0.1))*30) 0"
    c0 = p0.position.controller = position_script()
    c0.setexpression "[t*0.05,0,100 + sin(t*0.1)*10]"
    s1 = p1.rotation.controller = rotation_script()
    s1.setexpression "eulerangles 0 (-(sin (t*0.1))*90) 0"

    global root = p0
)
/* collect the data **********/
joints = join #() root -- joining a node into an array collects the node's whole hierarchy
data = collectJointPositions joints

/* reset scene and restore animation per frame *******/
for joint in joints do
(
    c = joint.controller = createinstance prs -- replace the script controllers with a plain PRS controller
    notifydependents c
)

/* restore data **************************************/
animate on for d in data do
(
    for p in d.transforms do at time p.time d.node.transform = p.tm
)
Quote:
Because we apply transforms in hierarchical order, we are good. But we can do everything in local space.
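Just to illustrate what the quote means, here is a tiny sketch of the local-space idea (my own illustration, not part of the code from the Autodesk thread):

-- sketch only: store each joint's transform relative to its parent instead of in world space
fn localTM node =
(
    if node.parent == undefined then node.transform
    else node.transform * (inverse node.parent.transform)
)
-- when restoring, rebuild world transforms parent-first, e.g.:
-- at time p.time d.node.transform = p.tm * d.node.parent.transform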
So here is how to store the data as CSV and then retrieve it:
data_file = (getdir #temp) + @"\joints_data.csv"

-- write one line per joint: the joint name, then 12 floats (a matrix3) per frame
fn writeDataToCSV data file =
(
    fs = createfile file
    for d in data do
    (
        format "%" d.name to:fs
        for p in d.transforms do
        (
            format "," to:fs
            for i=1 to 4 do for j=1 to 3 do format " %" p.tm[i][j] to:fs -- 12 floats matrix3 for every frame
        )
        format ";\n" to:fs
    )
    close fs
    file
)

writeDataToCSV data data_file
edit data_file -- show in editor to check
-- read the CSV back: one JointData per line, rebuilding a matrix3 from every 12 floats
fn readDataFromCSV file =
(
    if (iskindof file String) and (doesfileexist file) do
    (
        fs = openfile file
        local data = #()
        while not eof fs do
        (
            ss = readline fs
            ps = filterstring ss " ,;"
            d = JointData()
            d.name = ps[1]
            deleteitem ps 1
            t = 0
            n = 1
            /****** restore a matrix3 from 12 floats **********/
            for k=1 to ps.count/12 do
            (
                row1 = [execute ps[n], execute ps[n+1], execute ps[n+2]]; n += 3
                row2 = [execute ps[n], execute ps[n+1], execute ps[n+2]]; n += 3
                row3 = [execute ps[n], execute ps[n+1], execute ps[n+2]]; n += 3
                row4 = [execute ps[n], execute ps[n+1], execute ps[n+2]]; n += 3
                append d.transforms (JointPos time:t tm:(matrix3 row1 row2 row3 row4))
                t += 1
            )
            append data d
        )
        close fs -- done reading, close the file
        data
    )
)
dd = readDataFromCSV data_file

/* reset scene and restore animation per frame *******/
joints = join #() root
for joint in joints do
(
    c = joint.controller = createinstance prs
    notifydependents c
)

/* restore data **************************************/
animate on for d in dd do
(
    d.node = getnodebyname d.name -- the CSV only stores names, so look the node up again
    for p in d.transforms do at time p.time d.node.transform = p.tm
)
So this way you can also get positional data that, if the plugin is made with sufficient complexity, could seriously simplify the user process. I don't think this is the best way to go about this, but I've reached far beyond my technical ability and am frankly flailing about in empty dark space at this point lol.
What do you guys think?
Also, @Mickey666Maus, since you know MAXScript: does this follow the basic path you took? Any suggestions?
@Ramon3141 This is not at all my approach, but I am really happy about this...another point of view and another swing at it is more than appreciated!
The way I did it was basically just to set up Expose Transform helpers and wire parameters...then the Time Callback function was used to periodically send the retrieved values either to ARC or any other custom server to move the servos, or to write them to a .txt file to be used for playing back the animation!
I am not at home right now, but I will take a look at what the code does once I am!
Good to have you here, let's get things going!
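But roughly, the skeleton of that idea looks something like this (the helper name and output path below are only placeholders, and the property names are from memory, so treat it as a sketch rather than my actual script):

-- rough sketch only: sample an ExposeTm helper every time the scene time changes
fn logExposedJoint =
(
    local helper = getnodebyname "joint_exposed" -- placeholder name for the ExposeTm helper
    if helper != undefined do
    (
        local fs = openfile ((getdir #temp) + @"\servo_values.txt") mode:"a" -- append one sample per time change
        format "% % % %\n" currenttime helper.localEulerX helper.localEulerY helper.localEulerZ to:fs
        close fs
    )
)
registerTimeCallback logExposedJoint -- unregisterTimeCallback logExposedJoint removes it again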
@mickey666maus Glad to see you back. So basically we have a system on the EZ-B where we can connect a boatload of servos, but it works because not all servos are getting position data all at once?! Is it because 3ds Max is outputting data for every single frame for every single servo? And in Auto Position you are telling it to move servo 12 from 90 to 180 degrees at a particular speed, and that's the only data sent? I'm trying to wrap my head around why there is too much data. I'm thinking of it like mocap data, where you have data on every single frame?
The project I'm working on is connecting a Perception Neuron mocap suit to a robot to get it to move. PTP managed to take the mocap glove and get three joints of one finger to move 3 servos in real time. But what happens when I scale it up to 18 axes? And the Kinect plugin will get several axes to move live?! What is different in 3ds Max?
I think I need to fully understand why there is a data overflow. Any additional info would be appreciated.
@fxrtst that sounds great...and I guess for live control you guys will not run into too many difficulties! Because to eliminate jitter and smooth out the servo movements you will have to filter the incoming data anyway, and it will be as close to live action as possible, restricted only by how many frames you can get out of your system! Very interesting, I would like to see the setup!
The playback has the difficulty that, even at 30fps, the motion does not look smooth...I used a smoothing function to even it out...but it can also affect the motion at some point!
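To give an idea of the kind of smoothing I mean, here is a minimal sketch (just an illustration, not the exact function I used) of a simple exponential moving average over incoming servo values:

-- illustration only: exponential moving average over a list of servo values
fn smoothValues values alpha:0.3 =
(
    local out = #()
    local prev = values[1]
    for v in values do
    (
        prev = prev + alpha * (v - prev) -- lower alpha = smoother, but more lag
        append out prev
    )
    out
)
-- example: smoothValues #(90, 95, 130, 92, 91) alpha:0.3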
Also, the ARC approach has timing issues; I don't know if @DJSures has something in mind? Frame-by-frame animation is the very tricky part which has not been tackled yet, while a system like the Auto Positioner is a pose-to-pose animation system and works very well...but getting realistic motion is not really possible this way!
I will try to make a little clip of my progress and post it here; it works better visually, I guess...so you can see where I stand at the moment!
Frame-by-frame animation is still my ultimate goal!
Good to see we are back on track!
I'm with you on frame-by-frame animation @mickey666maus. I'd love live control, but with frame-by-frame animation we can create and tune completely natural, fluid movements. I used mocap data to control a biped and then keyframed my machine's movements to those of the biped; it means no fussing with insane kinematics math for a new configuration, no timing issues, etc.
@mickey666maus looking forward to you getting a chance to check out that code, for though my code skills are weak, I think the logic is the most straightforward workaround for the system. But then again, I may simply be confusing straightforward for primitive lol. We've gotten the script to work partially, but the trick seems to be in the nitty-gritty details. Let's see!
Thanks @mickey666maus, happy to join the community.
Yes, a video would be great!
And yes on filtering. Any mocap will have errors, and filtering is a must. My past experience was with a facial mocap program called FaceShift, which was a live setup for puppeteering characters.
There were three filters: small, medium, and large. The large one produced quite a lag in the live feed, but the animation was so smooth! But this used the Carmine 1.09 sensor and FaceShift to produce the data, which was super noisy. That has since been replaced with a good webcam and new software, which creates the same results as the sensor or even better. Instead of depth it uses 2D facial tracking.
My 30-year-long yearning has been for a live/record-and-playback system that can be used for animatronics and the film industry, and now for robots. This is really the only way to get realistic motion into robots. When the recorded and cleaned-up data is run through the robot, it would be great to have a MotionBuilder-type program that could splice two or more motions together on the fly. As an example, a robot detects that a person is sad. The robot could then call up the sad face and body mocap to reflect the user. It would take its current servo positions and merge/transition that with the "sad" mocap animation in real time.
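Just to sketch what I mean by that merge/transition (purely an illustration; the pose arrays and frame counts here are made up): take the current servo pose and ease it into the first frames of the target animation.

-- illustration only: blend a current servo pose into a target animation
-- current = #(pos1, pos2, ...)           current servo positions, one per servo
-- target  = #(#(pos1, pos2, ...), ...)   target animation, one pose per frame
fn blendIntoAnimation current target blendFrames:15 =
(
    local frames = #()
    for f = 1 to target.count do
    (
        local w = (f as float) / blendFrames
        if w > 1.0 do w = 1.0 -- clamp the blend weight at 1 once the transition is done
        local pose = for i = 1 to current.count collect (current[i] * (1.0 - w) + target[f][i] * w)
        append frames pose
    )
    frames
)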
Don't get me wrong, I think they are both related (live capture and keyframe animation). Be it live or hand-keyframe animated, I think the processes will be similar and piggyback on one another.
The video above with Maya, the Arduino, and Dan Thomson's plugin: he used IK to set up that animation. Even though it's live going back out to the Arduino, he still hand-animated that walk cycle and just played it back out. Live, recorded, or exported as a text file and loaded into ARC, it is all still the same basic technique. Everything we share here will help in all these endeavours.