PRO
Canada
#1  

The NVIDIA Jetson Nano is a powerful and compact development board designed for AI and robotics applications. While it is not directly compatible with Synthiam ARC, which is designed to run on Windows-based systems or Raspberry Pi with Windows 11, there are still ways you can integrate the Jetson Nano with Synthiam ARC.

Here are a few options to consider:

  1. Use the Jetson Nano as a Companion Computer: You can use the Jetson Nano to handle AI tasks, image processing, or other computationally intensive tasks and communicate the results to a Windows-based system running Synthiam ARC. This communication can be done over a network using TCP/IP, MQTT, or other protocols supported by ARC (a minimal sketch of this pattern appears after this list).

  2. Remote Control: If you have a Windows-based PC or a Raspberry Pi running Windows 11 with ARC, you can use the Jetson Nano to remotely control your robot through network communication. The Jetson Nano can send commands to the ARC software, which then controls the robot.

  3. EZB Server: While the Jetson Nano cannot run Synthiam ARC directly, you can use it as an EZB server if you can install the EZB Server software compatible with the Jetson Nano's Linux environment. This would allow ARC to connect to the Jetson Nano as if it were an EZB controller.

  4. Custom Integration: If you have programming skills, you can write custom software on the Jetson Nano that interacts with ARC through its API or by creating custom robot skills. This would require a good understanding of both the Jetson Nano's capabilities and the ARC platform.

  5. Virtual Machine or Emulation: Although not ideal due to potential performance limitations, you could attempt to run a virtual machine with a Windows environment on the Jetson Nano and then install ARC within that virtual machine. This approach would be quite complex and is not guaranteed to work smoothly due to the hardware and software differences.

  6. Alternative Software: If you are looking to run robot control software directly on the Jetson Nano without involving a separate Windows system, you may need to look into alternative software solutions that are compatible with the Linux OS running on the Jetson Nano.
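
To make option 1 a little more concrete, below is a minimal sketch of the companion-computer pattern, assuming the Jetson Nano runs Node.js and that something on the ARC side (for example, a script or custom robot skill you set up yourself) is listening for raw TCP connections. The IP address, port, and JSON message format are illustrative assumptions, not a built-in ARC API:

// Runs on the Jetson Nano (Node.js) and sends the result of a local AI task
// to the Windows PC running Synthiam ARC.
// ASSUMPTIONS: the ARC PC is at 192.168.1.50 and something there accepts
// newline-terminated JSON on TCP port 8888 (both values are hypothetical).
const net = require("net");

const ARC_HOST = "192.168.1.50";
const ARC_PORT = 8888;

// Pretend this came from a GPU-accelerated detector running on the Jetson.
const detection = { object: "person", confidence: 0.92, x: 120, y: 64 };

const client = net.createConnection({ host: ARC_HOST, port: ARC_PORT }, () => {
  // One JSON object per line keeps parsing simple on the receiving end.
  client.write(JSON.stringify(detection) + "\n");
  client.end();
});

client.on("error", (err) => console.error("Could not reach ARC:", err.message));

On the ARC side, a script or custom robot skill would parse each incoming line of JSON and set variables or trigger actions accordingly.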

In summary, while you cannot run Synthiam ARC directly on the NVIDIA Jetson Nano, you can still use the Jetson Nano in conjunction with ARC by using it as a companion computer or through network communication with a system that does run ARC.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#2   — Edited

How would I begin developing a robot with ARC? I have it installed on my main computer, yet how do I develop the robot using the Nvidia Jetson Nano? Would I install the EZB server onto it? @Athena

#3  

We see you edited the post to tag Athena afterward. However, she is not able to read and respond to messages that have been edited. This is to avoid her posting duplicate responses and flooding the forum in an infinite loop.

The answer you seek is in our fantastic getting started guide here: synthiam.com/Support/Get-Started/how-to-make-a-robot/make-a-diy-robot

Keep in mind that Synthiam is currently working on the next generation of ARC, which is called ARCx. Scheduled for a community beta in the spring, ARCx runs natively on Linux, Windows, and macOS. You may wish to become an ARC Pro user to prepare for the upcoming ARCx and cross-platform robot programming.

PRO
Canada
#4  

I am looking forward to ARCx on the Nano, but I think we are going to need some basic GPU (CUDA/ROCm) support first. Today ARC does all of its intensive processing in the various clouds (OpenAI, Azure, AWS, Google, Watson) via APIs. It works well but costs us and Synthiam money. I would like to see the dependencies on the cloud providers go away with a move to local solutions on consumer GPUs. Voice to text, text to voice, vision, chat, sentiment analysis, face recognition, etc. can all be moved from the cloud to the PC so cheapskates like me can run everything locally and not have to pay expensive, greedy cloud providers. (Also, we shouldn't be giving big tech all our data.)

PRO
Synthiam
#5  

For sure - I've mentioned that in previous posts. The short answer is yes. But let me add some context...

So, ARC and ARCx are frameworks for a platform. ARCx provides a kabillion "functions" that robot skills can use to interact with hardware and each other. ARCx is essentially an operating system that runs robot skills. As mentioned in the getting started guide, a robot skill is a program: a little compiled program with an entry point that calls into the ARCx or ARC framework.

I'm mentioning this because knowing that bit helps us understand the next bit...

  1. ARC was initially built for 32-bit because back then, 32-bit was the thing to do. Migrating from 32-bit to 64-bit means an entirely new platform. Every robot skill needs to be recreated. All libraries need to be rebuilt. It's essentially a rewrite. If ARC were an "application," you could select "64-bit" from the compiler build options, and all would be fine and dandy. But because ARC is not an application, but an operating system that runs programs (robot skills), the ENTIRE platform needs to be updated to 64-bit. That is why we said, "screw it, let's make a new ARCx rather than change ARC to 64-bit."

  2. Why does 64-bit matter? Because the libraries that support the GPU are 64-bit. All of the stuff you read about is 64-bit, not 32. So, ARC could not run those things. Having a 64-bit operating system with ARCx means using 64-bit libraries, which means having access to the GPU.

  3. Where do Robot Skills come from? Sometimes, we make them or the company owning the technology makes them. Generally, we don't have the resources to internally sit down and say, "Let's make a robot skill that uses Company X's product." Because all that does is sell more of Company X's product (or free marketing), and we gain nothing. So what happens is we have an outreach team that works with companies, and they pay for the robot skill development, or they do it in-house, or we assist them in-house. That's how we keep the lights on and pay people. That's also why no other robot platform can do what we do: they either build vertically or are open-source with no accountability, etc.

  4. Not everyone has $10,000 of GPUs. This is a big one. Even robot companies (mostly startups) we work with can't justify GPUs in their robots. The idea of a robot is to replace or assist human labor, which is supposed to save money, not cost more. Today, robots cost more. How do they cost more? Well, firstly, they're expensive to make, expensive to buy, and even more costly to maintain. Let's speculate that you had a janitor robot cleaning floors. Great, you can lay off the janitor. But who maintains the robot? So you hire an engineer. Now you've added the cost of a robot and tripled what you were paying the janitor by hiring an engineer. So, lowering the costs of robots is essential to end-users, which is why cloud services exist.

  5. Does that mean local services won't be developed? No, that's not what my point was in the previous statement. I was giving reasons why cloud services exist and why they will continue to host technologies not accessible for local hosting. So, nothing is stopping us or you from adding GPU experimental open-source stuff to ARCx robot skills in the future. But remember, you'll most likely never get the same quality as a cloud service.

But on top of all this, your question originates from using a local LLM GPT. And you're seeing some "vision processing" stuff mentioned elsewhere. One of the challenges we're always up against is adopting technology as a robot skill before it has matured. You have witnessed this happen several times, even though we're cautious. Technologies keep changing as they mature, leaving the robot skills outdated and useless. So, we usually wait until there's some business dependency for the author that prevents them from "breaking the build."

The biggest one that comes to mind for us is Azure services. That kept changing monthly in the early days. Oh, and the same happened with Google Dialog Flow and other services. Oh, right, and IBM Watson services were changing daily. I don't know if the IBM Watson stuff works anymore (shrug). There are other examples, too, but those stand out the most in my mind.

So, in summary, yeah - you can ask for things by providing links to the actual author and technology (not some article that mentions it), and we can try to get them on board. What usually helps is having more than one person ask for it and/or having that person also reach out to the author. Sometimes the author teams are so small (or a single person) that 3rd-party integrations are impossible, even if it would mean expanding their customer reach.

PRO
Canada
#6  

Thanks for the detailed response, DJ. I understand the legacy issues and the challenges with the cloud changing all the time, as well as the time and cost of developing skills for hardware that gets taken off the market as soon as the skill is completed (RealSense, Vuzix, Myo gesture armband, etc.). I am sure a lot will be dropped in ARCx (although I have pretty much every piece of hardware supported in ARC today :-) ). The Watson TTS and STT still work, although the vision is no longer in the last version of the skill.

I think the best way to tackle these things is to just keep doing it the way it worked before. Need a skill? Ask. If Synthiam says no, it is not going to be developed because of [insert reason], then we either pay someone to do it or we do it ourselves. The skill shop idea may be a good way to handle this in the future.

For all the cloud skills that use APIs, I have seen a lot of Swiss Army knife API proxy servers being developed that will connect anything to anything and provide a bunch of reports at the same time (LiteLLM is a good example). So if you develop vision for Azure, Google, or OpenAI, etc., we can just repoint the client to the XYZ URL and use any tool we want. At some point they will all start to use a standard, like how the ChatGPT client works with several LLM servers today.
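
As a rough illustration of that repointing idea, here is a sketch using the official openai Node.js package aimed at a LiteLLM-style proxy instead of OpenAI itself. The proxy URL, port, and model alias are assumptions about a local setup, not anything ARC ships with:

// Any OpenAI-compatible client can be redirected by changing its base URL.
// ASSUMPTIONS: a LiteLLM (or similar) proxy runs locally and exposes an
// OpenAI-compatible /v1 endpoint with a model alias named "local-llama".
const OpenAI = require("openai");

const client = new OpenAI({
  baseURL: "http://localhost:4000/v1", // the proxy, not api.openai.com
  apiKey: "anything",                  // many local proxies ignore the key
});

async function main() {
  const reply = await client.chat.completions.create({
    model: "local-llama",
    messages: [{ role: "user", content: "Wave the robot's left arm." }],
  });
  console.log(reply.choices[0].message.content);
}

main();

The same client code then works unchanged whether the URL points at a local model or a cloud provider.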

PRO
Synthiam
#7  

I think it's important to maintain the existing robot skills in ARCx, including RealSense and such, because my robots and many others still use them. The point of ARCx is to have a more "client/server" model so robots can run without UIs (or with remote UIs).

It might not be clear when using a computer, but rendering UI is extremely expensive. Surprisingly, most performance limitations in software are because of UI. The browser helps with that by offloading the UI to another CPU when using a remote browser. So ARCx is ridiculously fast. Let me run a quick example.

I'll use this loop to test the performance difference (just a loop that counts to 1000 and prints every count)...

for (var i = 0; i < 1000; i++)
 print(i);

ARC took 9.39 seconds

User-inserted image

ARCx took 0.01 seconds

User-inserted image

PRO
Synthiam
#8  

Here's another example that doesn't use any UI at all, for comparison. This is a test between the script engines of ARC and ARCx...

The code is...

p = 0;

for (var i = 0; i < 9999999; i++)
  p = p + 1;

print(p);

ARC took 16.33 seconds

User-inserted image

ARCx took 2.61 seconds

User-inserted image