Australia
Asked — Edited

Camera Head Tracking

I need to extend the camera from its circuit board and mount it in something like a cable dot on the side of the head, facing forward. At the moment she looks Borg. Not good. This is for department stores for women's fashion. Can I separate the camera with wires so it's just a tiny dot?

How do I do that please?

PS: The camera will definitely not be placed in the face of this particular mannequin head. It will be under the chin or facing forward from the ear.

User-inserted images



#1  

You can easily remove the camera from its housing and install it. ;)

Sorry, just read your post again... not sure if that can be done, but then again it could be.

From memory I'm not sure if there is a ribbon; if there is, then a ribbon extension cable would suffice.

Australia
#3  

I have a lot of camera cables. I was asking about the camera without the circuit board. Is it simple enough to desolder it, then run wires back to the board so the camera (without the circuit board) can be extended? I want a 1cm diameter camera on the side of the head, not a Borg attachment like it is now.

If it's too hard, can I buy a different camera that is compatible with the ezbv4 that I can control from ARC as normal?

User-inserted image

User-inserted image

#4  

I went and had a look at mine. Unless you know what you're doing I wouldn't tackle it, but if you do, then you would want a longer ribbon. It looks fiddly.

I don't see why the de-cased camera wouldn't fit in there; you would be left with the exact same look in the end.

It has a really good heat sink on it and will be facing towards the back, of course, so there's no chance of it heating up plastics, etc.

I'm currently finishing off the head on my project, and the de-cased camera fits in there easily. It's quite a small head, so you'll be fine. ;)

Australia
#5  

Thanks for checking, but I don't want the same look. At the moment I have a fashion mannequin for department stores with a tumour sticking out of the side of her head. It needs to be tiny - 1cm x 1cm x 1cm max.

#6  

Aha, I see. ;)

There are other cameras that are compatible, for sure. ;)

#8  

@NEWAGETOMY

Quote:

There are other cameras that are compatible, for sure.
To clarify, you can use other cameras with ARC, but only the ezbv4 camera plugs directly into the ezbv4 board. This is the only camera that will do this...

#9  

@Richard,

Thank you, I should have been more specific there. ;)

#11  

Zxen, is your goal to hide the camera? I have seen a few ways this can be done. All are mounted without the case.

ALAN, built by Will Huff, has the camera mounted in the upper head looking out through a hole in the center of the forehead. I have ANTONN with the camera looking out the nose. Others have mounted cameras by the ears. The image can be rotated if you need to mount the camera other than its normal way. Extension camera cables are available or can be built. DJ or someone from EZ-Robot could tell you the maximum length allowed. What is the final look you desire?

Edit:

You can mount it in the chin looking out a small hole. If tracking is to be used, see if relative tracking will do the job instead of grid tracking and maybe mount the camera in the neck or chest where it will be able to see and do what you want.

#12  

I know that DJ and Jeremie tested the maximum cable length at one point. I remember them having about 10 feet of extension cable without an issue.

https://synthiam.com/Community/Questions/4979 is the thread.

**Edit: DJ said in the thread that with about 8 feet of cable the camera was working well.

#13  

The camera needs to be in a mount on a store/shop display mannequin, no more than 1cm x 1cm... so he was asking if the actual lens can be detached from the circuit board and then be extended. ;)

#14  

Jeremie would need to reply to the camera sensor issue.

Zxen, could you give more information to allow possible options?

#15  

I'm sure we discussed placement of the camera with Zxen months ago. I don't have the thread in front of me, but I remember myself and others suggesting mounting the camera uncased on the inside of the robot and just having the lens peek out of a hole, maybe through a piece of jewelry or something. Perhaps that discussion got lost in the piles of Zxen's planning and building. ;) Perhaps there's no room inside the robot, but that's doubtful unless it's solid or no access is available. If mounted inside the robot, this solves the "Borg" issue. Lol.

At any rate, you can mount the camera on or in the head so it moves at the same rate as the head, or on or in the torso so it's stationary when the head moves. I've been told that you get better tracking if the camera is stationary and doesn't move when following an object. ;) I haven't tested this theory, but I've seen DJ mention this. :)

#16  

@DaveSchulpius,

I suggested the same in my first reply, but it's essential that the camera be in a certain location, and the head is no longer an option.

Tricky scenario! ;)

#17  

I guess I don't understand why the camera needs to be in a "certain location" as you say. Won't it fit anywhere else? What's stopping it from being relocated, or am I misreading this?

I'm suggesting that the camera be taken out of its case and placed inside the robot so it's completely out of sight except for the small dark lens peeking through a small hole. Reading the OP's remarks, it sounds like he wants the camera moved to a hidden spot and not hanging off the side of the head like a huge wart. Understandable. I don't think the camera lens needs to be removed; just place the small board with the lens still attached inside the robot. :)

#18  

Yep, I'm on the same page as you 100% and suggested the same thing.

If it had black eyes, then the camera could be fitted inside one of them, or even sunglasses, a neck chain with a pendant, etc.

But it seems the face needs to be clean with no signs of anything, just a blank face.

Worst-case scenario would be to not even install the camera on the mannequin, but rather somewhere behind it, either hidden on a wall or on a stand, etc.

Australia
#19  

Is it actually possible to do proper head tracking if the camera is not firmly facing out of the head and is stationary on a wall - if so, how would the head track people? Obviously if the explanation is ridiculously difficult I won't do it.

I know I could replace the white blank face with a different face that has eyes but that would be a different and much easier thing to achieve. I need to solve this one first. Blank face that can do facial tracking.

Can the camera be extended out of the board? Can a different camera be plugged into the board? Can a different board be used and the wires made compatible (easily) with ezbv4?

A small hole on the chin angled upwards to align with the mannequin's 'eye direction' is the most realistic suggestion I've heard so far, but I would rather not drill a hole in the chin - also, this would probably require a lens over a big hole to counteract depth constraints. I wonder if a lens covering a hole drilled into her forehead would look as ugly as a black dot - probably. The face has a good sheen as it is.

Please keep the ideas coming - preferably about the camera board modifications and compatibilities. I'm quite convinced the camera is going to need to remain over her 'ear', but much smaller. If necessary I will stick half the naked board out of the side of her head so she has one ear(!) but I'd rather it was small and round.

Also, if you know of any one-way white material that has been invented (like a one-way mirror), then that would solve it. I guess if her face were gold or silver or black this would be a lot easier.

Maybe a 5mm periscope...

#20  

How can a hole in the chin be the most realistic? I'm confused.

Sorry if I come across a little rude here, but the chin would be taking a huge leap backwards...

Sorry, I'm opting out; you're asking the same questions over again, which have all been answered.

Australia
#21  

The most viable. I agree the chin would look better without a hole in it.

Bye! Thanks for your contributions! Alright everyone, we have to stop thinking now. All questions have been answered, apparently.

#22  

@Zxen,

Sorry, by opting out I mean I can't help any further; I just don't know enough about it. Sorry again. :)

#23  

@Zxen

Quote:

Can the camera be extended out of the board? Can a different camera be plugged into the board? Can a different board be used and the wires made compatible (easily) with ezbv4?

  1. The camera is part of the board, so no...
  2. Already answered in post #9.
  3. Again, no... maybe possible, but it would be extremely difficult.
#24  

The hole required for the camera lens would be about 2mm in diameter. The camera would look through it from within the chin, neck, chest, eyes, etc.

#25  

Quote:

Is it actually possible to do proper head tracking if the camera is not firmly facing out of the head and is stationary on a wall - if so, how would the head track people? Obviously if the explanation is ridiculously difficult I won't do it.
It's not ridiculously difficult at all. Your camera control in ARC already supports this function. Click on the gear icon in the camera control. Near the top you'll see a few check boxes. Make sure Enable Servo Tracking is checked and specify the servo that will move the head to follow the object. Right below that is a box that says Track By Relative Position. When that is enabled, the control assumes you have a stationary camera and moves your servo into position relative to the object being tracked. Just like magic! :D Study the other settings in this config window, as they will help you fine-tune your tracking. Again, every time DJ comments on this subject he says this is the best way to track and get the best results.

Now you need to decide what needs to be tracked and set up and calibrate your camera accordingly. Of course you'll have to do this no matter which method you choose to use.

If you mount your camera in a stationary position I would not mount it on the wall away from the robot. The object will be tracked relative to where the camera is mounted and the head may not follow the object as well as needed. Try to place it on the torso as close to center as possible but if you mount it off center it should track just as well.

Your idea about mounting the camera lens under a material like a colored two way mirror is very interesting. That should work as long as the camera can clearly see the tracked object from all angles.

It needs to be stated that the quality of the image you will see in the camera control is not as important as you would think. The computer is not seeing the image like you and I do. Light and definition seem to be more important, but I'm moving out into the weeds now and know very little about this. Perhaps others that have used this control more than me can add to why this is. ;)
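
To make the relative-position idea concrete, here is a rough sketch of that logic in Python. This is not ARC code; the frame width, servo range, and function name are illustrative assumptions. The point is simply that the object's position in a stationary camera's frame can be mapped directly to an absolute servo position.

Code:

# Illustrative sketch of "Track By Relative Position" with a stationary camera.
# Assumptions (not from ARC): a 320-pixel-wide frame and a pan servo range of 1-180.

FRAME_WIDTH = 320
PAN_MIN, PAN_MAX = 1, 180

def pan_position_for(object_x: float) -> int:
    """Map the tracked object's x position in the frame to an absolute pan position."""
    fraction = max(0.0, min(1.0, object_x / FRAME_WIDTH))
    return round(PAN_MIN + fraction * (PAN_MAX - PAN_MIN))

# Example: a face detected in the left third of the frame vs. dead center.
print(pan_position_for(80))   # ~46 - the head turns toward that side
print(pan_position_for(160))  # ~90 - the head faces straight ahead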

Australia
#26  

Andy, thanks for telling me that ARC allows the image to be rotated. You inspired my latest design.

Newagetomy, sorry for fighting.

Richard, thanks for the clarity on board mods.

Dave, that would be amazing to have a stationary camera on a pole or wall, but my design most likely won't have one (she's going to be supported by the back of the neck and wear clothes). I still would be interested in what can be done if it results in a smoother motion. I assume a fish eye lens is required to see everywhere the head will look, though. 'It needs to be stated that the quality of the image you will see in the camera control is not as important as you would think.' If this is true then maybe a bit of white cotton material over the camera in her face would still allow facial tracking to work...

Here is what I have done to reduce the tumour. I was worried it would look like she had one ear, but I think it looks okay. If she was black or had eyes I would have put the camera in her face. Thanks again to Andy for telling me about screen rotation.

What do you guys think?

User-inserted image

User-inserted image

User-inserted image

#27  

Put a fake camera in the other ear for symmetry?

Australia
#28  

Before and after:

User-inserted image

User-inserted image

Australia
#29  

Fake camera for symmetry? I don't think so. Maybe if they were ears. My objective is to let people know it's a camera in this design, but keep it classy. I might alter the curves or colour a bit, but I think it looks pretty good. Maybe because I've seen the crazy alternatives with a dot on her face:

User-inserted image

#30  

@Zxen, I never suggested placing the camera on a wall or a post. That was your thought. I suggested installing it inside the torso.

It will be interesting to see if your tracking will work as expected with the camera mounted on the side of the head like that. The camera won't have a full view of anything coming at it from the other side until the object passes the nose.

If you decide to keep the camera mounted like this, perhaps you can put a wig on it and hide the camera in the hair.

If you're finding the movement to be jerky, it's because your camera is mounted on a swiveling base. The software is fighting to compensate for not only the movement of the object but also the swing of the camera. You may be able to smooth some of this out by adjusting your grid lines and some other fine-tune options in the settings area of the camera control. If you want a better shot at smoother motion, then my suggestion still stands: place the camera stationary in the torso and use the Relative Position Tracking option.

Good luck, and please post a video of your final results. I'm fully out of suggestions. ;)

#31  

One last thought is a piece of jewelry like a necklace or choker disguising the 2 - 3 mm hole in the neck where the lens looks out. I plan on mounting and hiding my facial recognition camera behind Antonn's Bow Tie.

A centrally mounted camera would be best, as Dave suggested; no additional lenses should be used.

Australia
#32  

It's a clothes mannequin. The torso is out of the question. This is for department stores. I have not tested the vision on the camera yet, but I will move it forward if necessary. The camera in previous versions did not see me out of the corner of its vision anyway. I had to be looking at the face. If you're looking at the face, then this camera would see you. If the camera were placed in the face of the head I'm using in this thread, given that the face is curved, the board would need to be placed back a bit, requiring a large hole in the face to prevent shrouding, although a material cover may be able to conceal it (but the camera feed would look ugly if it was used for other applications). I am still interested in extending a pole above and behind her if the tracking works best that way. It doesn't make logical sense to me unless the pole at least pans with the neck or has a fisheye lens. For example, if a person was to the left of the robot while it's looking left, out of sight of the camera, then nothing would be seen. This mannequin style is very popular in Australia at the moment - a faceless way to demonstrate the clothes. I'm not sure if you have this style in America, but it would look ridiculous with a wig. In faced versions, they do wear wigs, and I'll probably be okay with removing the entire iris if necessary to accommodate shrouding. Anyway, I'm pretty proud of myself. So long as it works, it looks okay for now.

Australia
#33  

If a robot is always going to wear the same clothes, then I could do that (hide a camera in a necklace or bow tie). I even considered separating the head from the body so the gap could accommodate the camera. But when she, like most humans, lowers her chin, it conceals the neck. Besides that, high neckline clothing is not uncommon. I also think that a good neck is important for her beauty, but I haven't solved that yet. I'm looking into making foam torsos at the moment so she can move like a human - her shoulders could be made more human, for example. Also consider that the neck is going to be bearing the weight of the entire dancing robot, and needs to click out easily for clothing changes (except in the case of torso only displays perhaps). The arms need to click out too, but that's a different topic.

User-inserted image

User-inserted image

User-inserted image

#34  

Ah, I have a better understanding now. After you do some camera trials for the tracking, I think you will see the direction many of us have been leading you in. Maybe try the remote stationary camera. Also realize a web/USB camera can also be used in the ARC software. This would be connected directly to a computer, which allows you to mount the camera anywhere. Also multiple cameras. A lot of options are available.

Australia
#35  

'Also realize a web / usb camera can also be used in the ARC software.' 'Also multiple cameras.'

Really?

That would be great. I can't think of how to apply it but I would love a link to a lesson or a project using interesting camera setups. I'd love the head being able to look people up and down and the robot says something like, 'Oh wow! Like, who are you wearing?' I might even put a microphone in her hand so she's like a red carpet journalist. Imagine if she could call people over based on colour recognition. 'Excuse me, you in the green. Come over here, I want to talk to you!' That would be considered AI. I also want her to mimic body movements using a Kinect sensor, which I consider an advanced mod - afterwards. I also want to give her a Pandorabots.com library of responses - I'll use an actress to voiceover her speech rather than rely on a computer voice.

Australia
#36  

I could place a stationary camera in a wall setup. Also, I could make part of her neck beam pan (rotate), and place a camera in that, so the software can figure out the tilt alignment necessary to see someone's face height if that works well. Please note that her entire body may rotate to show the clothing in 360 degrees. Also, it would be very cool to have one camera so that multiple robots can face something together - the birthday girl for example.

User-inserted image

#37  

Go back to the basic camera information in ARC, and click "?" as DJ would say. There is a LOT of information within ARC including many tutorials. "The Yellow Duck" is an example of color recognition with a response. Scripting can allow one camera to do many functions.

People here in the Community can give suggestions and advice, but only you know what your needs are. Learn the basics and you will begin to gain a better understanding of what the EZB controllers can do with a camera.

By the way, the Palette robot seems to have a dot in the upper neck which could be a camera.

#38  

There are many sensors available which will do what you want. There is a sensor/camera (by Omron) which detects gender, age, gaze, and emotion, and does facial recognition. It could return information which will allow your robot to ignore men and young children and target only women. You just need to learn the ARC capabilities and combine them with these sensors. (Information is already available in a plugin.)

There is a lot of cool stuff available and things you can do with an EZ-Robot controller. You have to look at the examples in the tutorials and on the community forum.

Australia
#39  

Yes, I did notice the Palette neck camera. I may rethink that if I decide to tell stores never to cover the neck with clothes, and if I can trust that the stationary system will work at close range (although it seems impossible) and that the robot face will look directly at the person's face no matter the proximity (which seems miraculous). I've completed the camera tutorials once and played with the controls, but that was before I spent months building the chassis, so I forgot / did not know how to apply it at the time.

Thanks for the Omron camera reference (http://www.ia.omron.com/products/category/sensors/vision-sensors_machine-vision-systems/smart-camera/index.html). I would have thought that was not compatible with the ezbv4 and such capabilities were driven by software. Having said that, I did speak about using a Kinect sensor, which has been unnecessarily difficult in my brief experiments (I guess I spent around 100 hours or so on it).

I found my other EZ-Robot camera, and its internals are different. I hope the new camera is the same design as the one I put in the side of her face, rather than this double-decker one:

User-inserted image

#40  

If you consider using a single board computer within the mannequin, you will gain a lot of options. I am installing a Latte Panda in my two robots (ANTONN and RAFIKI). This will allow me to use a lot of hardware directly connected to the computer.

Try playing with 2 stationary cameras. See if you can track center to left with one and center to right with the other. This way you may scan left to right until someone comes by. Just a thought. Edit: try a webcam.

OK, I will let you go organize your options and play with the EZB and cameras.

Australia
#41  

It makes more sense to use two cameras if there is no fisheye lens to see everything that a head could see in an arc, but this still leaves a LOT of logical problems. But how would you plug both of them in? When it hits centre, how does the first camera know when it's time to stop and the second to continue? How does a stationary camera know where the head is facing to begin with? I cannot just experiment wildly with every conceivable method. It would take forever. The theory needs to be tight before many hours of work. And as you must know, every brick wall is soul destroying, and often results in complete project abandonment (CPA). I do thank you for your suggestions, but I'm not going to experiment with every method, and that goes for all my inventions. If someone in a forum has a solution, it is worth asking in the first place. If people don't know, then it's usually better not to ask, because that can lead to getting confused and CPA. In fact, CPA is difficult to avoid.

#42  

It sounds like you need to spend a little time experimenting with the camera tracking, as opposed to trying to figure out a complete solution and fleshing it all out. Don't overthink it. As you experiment you will find solutions to your questions. Make sure the EZR tracking routines are even adequate for your application. You have the hardware: set it up, do some tracking, then figure out how to integrate it.

Incremental progress like that will prevent CPA as you put it.

PRO
Canada
#43  

@Zxen a couple of suggestions, if I may.

If you are having difficulty mounting inside of a curved head, I could suggest using the new version 2 EZ-Robot camera and peeling the lens away from the PCB to change the angle a little. It is just held against the PCB with double-sided tape.

User-inserted image

My second suggestion would be to make only a 1mm hole somewhere in the mannequin's head. A 1mm hole is all you need for the actual lens portion to see.

User-inserted image

Now, you may have to drill out a small cavity for the rest of the lens, but the opening only needs to be 1mm. Here's an example, created with my amazing MS Paint skills, LOL.

User-inserted image

I should also note that the camera v2 lens is interchangeable with fisheye lenses, which are somewhat difficult to find but do exist. But note that you will need a larger viewing hole to incorporate a larger lens; it would have to be about 10mm in diameter.

Oh, and yes, I can confirm that you can definitely extend the camera cable; we've gone as far as 10 feet successfully, but we've never really tested the maximum range.

Australia
#44  

Very good explanation Jeremie. I may eventually put a 1mm hole in the face.

I have spent hours testing the facial and colour tracking, and it's just like I remember: EXTREMELY SLOW AND UNRELIABLE REACTION TIME.

The motors in my robot's head are fast - she can nod and shake her head faster than a real human. But when she is tracking, she moves like a pigeon. I get the best results when I shrink the grid and move my face further away, but I basically need to move past at less than 1 km/h so she doesn't lose me.

Can someone please tell me how to fix this so it's smooth?

Is it slow because it requires WiFi? Video is being sent wirelessly to my computer fast enough, so I don't understand what's causing this delay that's so extreme she moves in slow motion. Are the calculations being done in the ARC software or on the board? She only moves smoothly if I tell her to move 5 or more steps at a time, but this often results in overshooting the mark, after which her head begins to thrash violently to get back to it.

User-inserted image

User-inserted image

User-inserted image

User-inserted image

PRO
USA
#45  

@Jeremie, is this camera (v2) released yet? If so, what are the changes?

@Zxen, I feel your pain. I've tried to get smooth motion from tracking, and the best thing to play with is the increment steps found in the camera setup where your servos are configured.

Australia
#46  

The newer camera is a single board instead of a double-decker board and is smaller - not sure about other changes. Do you know where the processing is done? Would it be more responsive to connect from the EZB to the computer with Ethernet or something? Do I need a spotlight on my face so it can see the face more easily? I found that colour detection works better when you increase the saturation of the video output. If it's being done inside the board, is there a way to make it process faster? Does anyone know the smoothest facial tracking demo video on this site?

Australia
#47  

This appears to work smoothly, so it's possible.

PRO
Canada
#48  

@fxrtst Yeah, for sure! Version 2 of the camera has been in the wild for some time now. I believe we had discussed it in the past when you were looking for dimensions, and we released it not long after that.

Some of the changes are:

  • Single-board design = smaller form factor
  • Added heatsink for on-board cooling and spacing inside JD's head
  • More even gain adjustment for lighting changes
  • Flexible and modular ribbon connector for the camera CCD + lens assembly
  • Adjustable and/or removable lens, for focusing or lens replacement
  • RGB LED feedback

Version 2 of the camera is meant to match the operation of version 1 but with some quality of life improvements from a hardware and firmware perspective.

PRO
Canada
#49  

@Zxen with a tiny bit of tweaking and some decent lighting you can definitely get some favorable results with ARC facial tracking. Check out this video I just posted:

I used the standard JD example project. I added a script that set ServoSpeed to 0 for the pan and tilt servos, D0 & D1 (the fastest speed). I also slowed down the camera frame rate a touch with the slider bar. The last thing that I changed was the vertical increment steps; I changed it to 2.

*Edit: I also adjusted the grid lines to be closer to center.

#50  

This is encouraging @Jeremie. Thanks for the suggestions. Your tracking is so smooth!

So just so I understand better: *Set servo speeds to zero so servos will move as fast as possible. *Slow down camera frame rate a little. (Why is this?) *Change the vertical increment steps. (Why is this, and will this setting be different for others?) *Tighten up the grid lines. I assume this is because tracking takes place in the two side boxes, so this gives the software more room to work trying to bring the object back to center?

Also, is the camera you're using in this demo stationary or mounted on the part of the robot that is panning back and forth?

Thanks again!:)

PRO
USA
#51  

Thanks for the info on the camera; not sure how I missed that, might have been while working on Guardians?! I have a box of old cameras I'll have to try and sell, then I'll pick up a few of the new ones!

Also, thanks for the tutorial on the tracking. I never thought about setting the speed first. That is fast and pretty smooth tracking. I look forward to trying it out with hard-core Alan this week.

PRO
Canada
#52  

@Dave Schulpius

Quote:

*Set servo speeds to zero so servos will move as fast as possible.
Correct, I wanted the servos to react as fast as possible.

Quote:

*Slow down camera frame rate a little. (Why is this?)
I slowed down the camera frame rate a little bit to decrease the chance of missed frames due to communication slowdowns over WiFi.

Quote:

*Change the vertical increment steps. (Why is this, and will this setting be different for others?)
Yes, you are correct. In the tracking settings I changed the vertical increment steps to move a little farther with each movement; this will likely be different for each robot/servo. Distance from the person being tracked will also affect this value.

Quote:

*Tighten up the grid lines. I assume this is because tracking takes place in the two side boxes, so this gives the software more room to work trying to bring the object back to center?
Yes, you are correct again, Dave. :) I tightened up the grid lines so JD would have more time to detect my face moving away from center. The servos will only move if my face is detected in one of the 8 "boxes" (left, right, upper right, top, etc.) surrounding the center box. To summarize, it gives JD more room and time to detect a change in face position.

Quote:

Also, is the camera you're using in this demo stationary or mounted on the part of the robot that is panning back and forth?
The camera I was using was inside JD's head and his pan and tilt (neck and head) servos were doing the tracking.
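
To illustrate the grid behaviour described above, here is a minimal sketch in Python. It is not ARC's actual implementation; the frame width, grid-line positions, increment size, and servo limits are assumed values chosen only to show the idea: the servo nudges by a fixed increment only when the face leaves the center box.

Code:

# Sketch of grid-based tracking: nudge the pan servo by a fixed increment
# only when the detected face falls outside the center "box" of the frame.
# Assumed values (not from ARC): 320-pixel-wide frame, grid lines at 40%/60%,
# increment of 3, servo limits 1-180.

FRAME_WIDTH = 320
GRID_LEFT, GRID_RIGHT = 0.40 * FRAME_WIDTH, 0.60 * FRAME_WIDTH
INCREMENT = 3
PAN_MIN, PAN_MAX = 1, 180

def next_pan_position(face_x: float, current_pan: int) -> int:
    """Return the new pan position given where the face was detected."""
    if face_x < GRID_LEFT:        # face drifted into the left-hand boxes
        current_pan -= INCREMENT
    elif face_x > GRID_RIGHT:     # face drifted into the right-hand boxes
        current_pan += INCREMENT
    # otherwise the face is still inside the center box: no movement
    return max(PAN_MIN, min(PAN_MAX, current_pan))

# Example: face near the left edge vs. centered, head currently at position 90.
print(next_pan_position(face_x=50, current_pan=90))   # 87 - a small nudge
print(next_pan_position(face_x=160, current_pan=90))  # 90 - no movement

Tightening the grid lines (moving GRID_LEFT and GRID_RIGHT toward the center) makes the servo react sooner, which is the effect Jeremie describes; a larger increment moves farther per frame but risks overshoot.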

Australia
#54  

Smooth tracking, Jeremie! I want to achieve that. How many incremental steps does he move at a time? 1 horizontal and 2 vertical, did you say? Do you have powerful WiFi in your house or something? What if you use the phone app instead of a computer? Does it need WiFi to work, or are the calculations done internally? The video of your face looks like real time - if I decrease the frame rate I get lag, while you do not. Is your WiFi router quality responsible for that?

Australia
#55  

I think I'm right - WiFi connection speed must be the answer. You're getting 10-14fps even after slowing down, and I'm only getting 2-3fps maximum. It's either WiFi or the video card on my computer. Does anyone know the reason for the frame rate discrepancy? (I have the new v2 communication board.) How close is your router to your EZB? What kind of graphics card do you have?

User-inserted image

PRO
Canada
#56  

@Zxen

I was using 3 horizontal incremental steps. I was connected via AP mode (no router involved), and I believe I was averaging 10fps. Have you tried AP mode instead of client mode (connected to a router)?

Australia
#57  

Oh, I see. If this is the answer, then the instructions should be updated. I have tried a direct connection many times, but it never works. Was it easy for you to achieve?

PRO
Canada
#58  

I'm a bit confused, what instructions are you referring to?

A direct connection (AP mode) is our default mode. It is very easy to connect via AP mode. Do you have anti-virus software that could be blocking your connection to ARC?

Australia
#59  

Maybe I am using AP mode. I guess that means my computer is slow or the antenna is weak? Why would I get such a slow frame rate?

PRO
Synthiam
#60  

Zxen, post your project so we can see what settings you have. The DEFAULT settings of the camera control are ideal. If you modified the default settings, something like this could happen.

AP mode means your computer is connected directly to the ezb. You learned this when using the learn section for your ezrobot product.

Client mode means your ezb is connected to a router. You can only know of client mode if you followed the tutorial on the learn section for your ezrobot product.

There is no need to update the connection tutorial. The last update of the tutorial was a detailed and fantastic video less than one week ago. There is no additional information. For client mode, only two pieces of information are necessary: SSID and PASSWORD. Client mode is incredibly easy.

However, client mode depends on the speed of your router. Ezrobot does not own your router. We have no idea of the speed of your router. The performance you experience in client mode has no relationship to ezrobot. The router speed is specific to the model and manufacturer, and not related to ezrobot.

If your computer is experiencing low frame rate with AP mode, then the pc speed may be the reason - OR the default settings have been changed.

  1. Can you verify that you have the latest ARC?

  2. What camera resolution setting are you using?

  3. Try adding a new camera control and enabling face tracking with the ezrobot camera. Do NOT change any other settings. Do you still get a low frame rate?

PRO
Synthiam
#61  

Okay - After viewing your diagnostic info, the answer is clear.

Quote:

20/06/2017 5:47:07 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Attempting connection on 192.168.1.1:23
20/06/2017 5:47:07 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Connected to 192.168.1.1:23
20/06/2017 5:47:07 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Reports EZB v4 OS With Comm 2
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  EZ-B v4 ID: 58-54-0-0-48-255-57-56-55-38-38-38
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Setting battery monitor voltage: 7
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Setting battery protection: True
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Setting i2c rate: 100000
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  EZ-B voltage is 7.249788618
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  EZ-B temperature is 24.78733292604518964C
20/06/2017 5:47:08 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Connected
20/06/2017 5:47:21 AM  C:\Users\user\Documents\EZ-Builder\My Projects\script manager 2017-02-01g.EZB  Camera Initialized: EZB://192.168.1.1 @ 640x480

First, every control has a ? (question mark). You can press that ? (question mark) to view help about that control. The control in question is the CAMERA CONTROL. Here is a direct link to the Camera Control manual page: https://synthiam.com/Tutorials/Help.aspx?id=80

The reason for visiting that page is a section about VIDEO RESOLUTIONS. Please read that section. It is incredibly important that you understand what increasing the resolution does for a computer.

By increasing the video resolution to 640x480, look at the number of Bytes per second your CPU must process.

Quote:

160x120 = 57,600 Bytes per frame = 1,152,000 Bytes per second
320x240 = 230,400 Bytes per frame = 4,608,000 Bytes per second
640x480 = 921,600 Bytes per frame = 18,432,000 Bytes per second

Lastly, I am curious about what other settings may have been modified in the camera control. Please visit my previous message and follow the instructions. Do not change any settings, and use the camera control with Face tracking.

Everything I have typed is presented in the Camera Control manual page.
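
For reference, the bandwidth figures quoted above follow directly from the pixel count. Below is a quick sanity check in Python, assuming 3 bytes per pixel and 20 frames per second, which are the values implied by the quoted numbers rather than stated outright.

Code:

# Reproduce the Bytes-per-frame / Bytes-per-second figures quoted above.
# Assumes 3 bytes per pixel and 20 frames per second (implied by the quoted numbers).

BYTES_PER_PIXEL = 3
FRAMES_PER_SECOND = 20

for width, height in [(160, 120), (320, 240), (640, 480)]:
    bytes_per_frame = width * height * BYTES_PER_PIXEL
    bytes_per_second = bytes_per_frame * FRAMES_PER_SECOND
    print(f"{width}x{height}: {bytes_per_frame:,} Bytes per frame, "
          f"{bytes_per_second:,} Bytes per second")

# 640x480 works out to 921,600 Bytes per frame and 18,432,000 Bytes per second -
# sixteen times the data of 160x120, which is why a high resolution can drag
# the frame rate down on a slower PC.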