Asked
Resolved by Athena AI!

Is There A Way To Use ARC On The NVIDIA Jetson Nano?

I was wondering if it is possible at all to run ARC on the NVIDIA Jetson Nano, because I don't have things like a Raspberry Pi board or an Arduino.


Related Hardware Raspberry Pi


PRO
Canada
#1  

The NVIDIA Jetson Nano is a powerful and compact development board designed for AI and robotics applications. While it is not directly compatible with Synthiam ARC, which is designed to run on Windows-based systems or Raspberry Pi with Windows 11, there are still ways you can integrate the Jetson Nano with Synthiam ARC.

Here are a few options to consider:

  1. Use the Jetson Nano as a Companion Computer: You can use the Jetson Nano to handle AI tasks, image processing, or other computationally intensive tasks and communicate the results to a Windows-based system running Synthiam ARC. This communication can be done over a network using TCP/IP, MQTT, or other protocols supported by ARC (a minimal sketch follows this list).

  2. Remote Control: If you have a Windows-based PC or a Raspberry Pi running Windows 11 with ARC, you can use the Jetson Nano to remotely control your robot through network communication. The Jetson Nano can send commands to the ARC software, which then controls the robot.

  3. EZB Server: While the Jetson Nano cannot run Synthiam ARC directly, you can use it as an EZB server if you can install the EZB Server software compatible with the Jetson Nano's Linux environment. This would allow ARC to connect to the Jetson Nano as if it were an EZB controller.

  4. Custom Integration: If you have programming skills, you can write custom software on the Jetson Nano that interacts with ARC through its API or by creating custom robot skills. This would require a good understanding of both the Jetson Nano's capabilities and the ARC platform.

  5. Virtual Machine or Emulation: Although not ideal due to potential performance limitations, you could attempt to run a virtual machine with a Windows environment on the Jetson Nano and then install ARC within that virtual machine. This approach would be quite complex and is not guaranteed to work smoothly due to the hardware and software differences.

  6. Alternative Software: If you are looking to run robot control software directly on the Jetson Nano without involving a separate Windows system, you may need to look into alternative software solutions that are compatible with the Linux OS running on the Jetson Nano.
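For option 1, a minimal sketch of the Jetson Nano side could look like the following, assuming Node.js and a plain TCP listener that you create yourself on the ARC machine. The host, port, and one-JSON-object-per-line message format are illustrative assumptions, not an ARC API:

const net = require("net");

const ARC_HOST = "192.168.1.50"; // assumption: LAN address of the PC running ARC
const ARC_PORT = 8888;           // assumption: port of the listener you set up

function sendResult(label, confidence) {
  const client = net.createConnection({ host: ARC_HOST, port: ARC_PORT }, () => {
    // One JSON object per line keeps parsing simple on the receiving side.
    client.write(JSON.stringify({ label: label, confidence: confidence }) + "\n");
    client.end();
  });
  client.on("error", (err) => console.error("send failed:", err.message));
}

// For example, after the Nano finishes an inference pass:
sendResult("person", 0.97);

A script on the ARC side would then read each incoming line and act on the result.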

In summary, while you cannot run Synthiam ARC directly on the NVIDIA Jetson Nano, you can still use the Jetson Nano in conjunction with ARC by using it as a companion computer or through network communication with a system that does run ARC.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#2   — Edited

How would I begin developing a robot with ARC? I have it installed on my main computer, yet how do I develop the robot using the NVIDIA Jetson Nano? Would I install the EZB server onto it? @Athena

#3  

We see you edited the post to add @Athena afterward. However, she is not able to read and respond to messages that have been edited. This is to avoid her giving duplicate responses and flooding the forum in an infinite loop.

The answer you seek is in our fantastic getting started guide here: synthiam.com/Support/Get-Started/how-to-make-a-robot/make-a-diy-robot

Keep in mind that Synthiam is currently working on the next generation of ARC, which is called ARCx. Scheduled for the spring in community beta, ARCx runs natively on Linux, Windows, and MacOS. You may wish to become an ARC Pro user to prepare for the upcoming new ARCx for cross-platform robot programming.

PRO
Canada
#4  

I am looking forward to ARCx on the Nano, but I think we are going to need some basic GPU (CUDA/ROCm) support first. Today ARC does all of its intensive processing in the various clouds (OpenAI, Azure, AWS, Google, Watson) via APIs. It works well but costs us and Synthiam money. I would like to see the dependencies on the cloud providers go away, with a move to local solutions on consumer GPUs. Voice to text, text to voice, vision, chat, sentiment analysis, face recognition, etc. can all be moved from the cloud to the PC so cheapskates like me can run everything locally and not have to pay expensive, greedy cloud providers. (Also, we shouldn't be giving big tech all our data.)

PRO
Synthiam
#5  

For sure - I've mentioned that in previous posts. The short answer is yes. But let me add some context...

So, ARC and ARCx are frameworks for a platform. ARCx provides a kabillion "functions" that robot skills can use to interact with hardware and each other. ARCx is essentially an operating system that runs robot skills. It's mentioned in the getting started guide, but a robot skill is a program: a little compiled program with an entry point that makes system calls into the ARCx or ARC framework.

I'm mentioning this because knowing that bit helps us understand the next bit...

  1. ARC was initially built for 32-bit because back then, 32-bit was the thing to do. Migrating from 32-bit to 64-bit means an entirely new platform. Every robot skill needs to be recreated. All libraries need to be rebuilt. It's essentially a rewrite. If ARC were an "application," you could select "64-bit" from the compiler build options, and all would be fine and dandy. But because ARC is not an application but an operating system that runs programs (robot skills), the ENTIRE platform needs to be updated to 64-bit. That is why we said, "screw it, let's make a new ARCx rather than change ARC to 64-bit."

  2. Why does 64-bit matter? Because the libraries that support GPUs are 64-bit. All of the stuff you read about is 64-bit, not 32. So, ARC could not run those things. Having a 64-bit operating system with ARCx means using 64-bit libraries, which means having access to the GPU.

  3. Where do Robot Skills come from? Sometimes, we make them or the company owning the technology makes them. Generally, we don't have the resources to internally sit down and say, "Let's make a robot skill that uses Company X's product." Because all that does is sell more of Company X's product (or free marketing), and we gain nothing. So what happens is we have an outreach team that works with companies, and they pay for the robot skill development, or they do it in-house, or we assist them in-house. That's how we keep the lights on and pay people. That's also why no other robot platform can do what we do: they either build vertically or are open-source with no accountability, etc.

  4. Not everyone has $10,000 of GPUs. This is a big one. Even robot companies (mostly startups) we work with can't justify GPUs in their robots. The idea of a robot is to replace or assist human labor, which is supposed to save money, not cost more. Today, robots cost more. How do they cost more? Well, firstly, they're expensive to make, expensive to buy, and even more costly to maintain. Let's speculate that you had a janitor robot cleaning floors. Sure, you can lay off the janitor. But who maintains the robot? So you hire an engineer. Now you've added the cost of a robot and tripled what you were paying the janitor by hiring an engineer. So, lowering the costs of robots is essential to end-users, which is why cloud services exist.

  5. Does that mean local services won't be developed? No, that's not what my point was in the previous statement. I was giving reasons why cloud services exist and why they will continue to host technologies not accessible for local hosting. So, nothing is stopping us or you from adding GPU experimental open-source stuff to ARCx robot skills in the future. But remember, you'll most likely never get the same quality as a cloud service.

But on top of all this, your question originates from using a local LLM GPT. And you're seeing some "vision processing" stuff mentioned elsewhere. One of the challenges we're always up against is adopting technology as a robot skill before it has matured. You have witnessed this happen several times, even though we're cautious. Technologies keep changing as they mature, leaving the robot skills outdated and useless. So, we usually wait until there's some business dependency for the author that prevents them from "breaking the build."

The biggest one that comes to mind for us is Azure services. That kept changing monthly in the early days. Oh, and the same happened with Google Dialog Flow and other services. Oh, right, and IBM Watson services were changing daily. I don't know if the IBM Watson stuff even works anymore, shrug. There are other examples, too, but those stand out the most in my mind.

So, in summary, yeah - you can ask for things by providing links to technologies (not some article that mentions it, but the actual author and technology), and we can try to get them on board. What usually helps is having more than one person ask for it and/or having that person also reach out to the author. Sometimes the author teams are so small (or one person) that 3rd party integrations are impossible, even if it would mean expanding their customer reach.

PRO
Canada
#6  

Thanks for the detailed response DJ. I understand the legacy issues and the challenges with the cloud changing all the time, and also the time and cost of developing skills for hardware that gets taken off the market as soon as the skill is completed (RealSense, Vuzix, Myo gesture armband, etc.). I am sure a lot will be dropped in ARCx (although I have pretty much every piece of hardware supported in ARC today :-) ). The Watson TTS and STT still work, although the vision is no longer in the last version of the skill anymore.

I think the best way to tackle these things is to just keep doing it the way it worked before. Need a skill? Ask. If Synthiam says no, it is not going to be developed because of <insert reason>, then we either pay someone to do it or we do it ourselves. The skill shop idea may be a good way to handle this in the future.

For all the cloud skills that use APIs, I have seen a lot of Swiss Army knife API proxy servers being developed that will connect anything to anything and provide a bunch of reports at the same time (LiteLLM is a good example). So if you develop vision for Azure, Google, or OpenAI, etc., we can just repoint the client to the XYZ URL and use any tool we want. At some point they will all start to use a standard, like how the ChatGPT client works with several LLM servers today.
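To illustrate the repointing, here is a sketch of an OpenAI-style chat call aimed at a local proxy instead of the cloud (Node.js 18+ for the global fetch). The base URL, key, and model name are assumptions; the point is that only the URL changes when you swap backends:

const BASE_URL = "http://localhost:4000/v1"; // assumption: a LiteLLM-style local proxy

async function chat(prompt) {
  const res = await fetch(BASE_URL + "/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Authorization": "Bearer sk-anything", // many proxies still expect a key here
    },
    body: JSON.stringify({
      model: "gpt-4o", // the proxy maps this name to whatever backend you configured
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Hello robot").then(console.log);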

PRO
Synthiam
#7  

I think it's important to maintain the existing robot skills in ARCx, including RealSense and stuff, because my robots and many others still use it. The point of ARCx is to have a more "client/server" model so robots can run without UIs (or with remote UIs).

It might not be clear when using a computer, but rendering UI is extremely expensive. Surprisingly, most performance limitations in software are because of UI. The browser helps with that by offloading the UI to another CPU when using a remote browser. So ARCx is ridiculously fast. Let me run a quick example.

I'll use this loop to test the performance difference (just a loop that counts to 1000 and prints every count)...

// Print every count from 0 to 999; each print(i) forces a UI update.
for (var i = 0; i < 1000; i++)
  print(i);

ARC took 9.39 seconds

ARCx took 0.01 seconds

PRO
Synthiam
#8  

Here's another example that doesn't use any UI at all for comparison. This is a test between the two script engines of ARC and ARCx...

The code is...

// Increment p roughly ten million times with no printing inside the loop,
// then print the total once, so the test exercises only the script engine.
var p = 0;

for (var i = 0; i < 9999999; i++)
  p = p + 1;

print(p);

ARC took 16.33 seconds

ARCx took 2.61 seconds

PRO
Canada
#9   — Edited

I guess you have to take into account that the ARCx engine is 16.33 / 2.61 = 6.26 times faster than ARC when you do the GUI comparison. But yeah, even the adjusted 0.01 x 6.26 = 0.0626 seconds versus 9.39 is still a ridiculously huge gap just to render in a GUI.

The time it takes to send stuff up and back to the cloud also adds up, especially when you are doing real-time interaction.

Example conversation:

ARC to Cloud A (STT): here is my voice recording, translate it.

Cloud A to ARC: OK, here is the text.

ARC to Cloud B (LLM): here is some text, run it against a model.

Cloud B to ARC: here is some text I ran against a model.

ARC to Cloud C (TTS): here is a text file, give me an audio file.

Cloud C to ARC: here is an audio file.

When you go completely local you can go ARC => STT => LLM => TTS => ARC.

You get rid of all this passing back and forward, so everything runs much faster. The challenge, though, is that you have to do a lot of manual application installation and configuration to make the pipeline work, which defeats the point of ARC being plug and play, and you lose all the metadata created in each step.
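To make that pipeline concrete, here is a rough Node.js (18+) sketch of the local chain. Every port, endpoint, and JSON shape below is a placeholder for whatever local STT, LLM, and TTS servers you stand up; none of it is an ARC or ARCx API:

const fs = require("fs");

async function localPipeline(wavPath) {
  // 1. Speech to text on a local server (endpoint is an assumption).
  const sttRes = await fetch("http://localhost:5001/stt", {
    method: "POST",
    body: fs.readFileSync(wavPath),
  });
  const { text } = await sttRes.json();

  // 2. Run the text against a local LLM (endpoint is an assumption).
  const llmRes = await fetch("http://localhost:5002/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt: text }),
  });
  const { reply } = await llmRes.json();

  // 3. Text to speech locally, writing the audio reply to disk.
  const ttsRes = await fetch("http://localhost:5003/tts", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ text: reply }),
  });
  fs.writeFileSync("reply.wav", Buffer.from(await ttsRes.arrayBuffer()));
}

localPipeline("question.wav");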

Now that the ARCx engine is much faster, there will be negligible delay from ARCx (microseconds), so passing back and forward shouldn't be an issue, especially if the tools are all local. Installing and setting up those tools is still a pain, though; it takes a lot of time and knowledge, and again takes away the plug-and-play aspect of ARC that lets idiots like me build sophisticated robots.

Have you thought about supporting containers as skills in ARC? So instead of just installing a skill, it would also install the container that holds the relevant application supporting that skill.

I realize this is a major architectural change and would require a lot of work building and supporting containers of all the common AI tools, plus integrating and testing with ARCx, but it would remove network latency and overhead, as well as costs, for users. From a user perspective it would be very easy: the user installs a skill, and ARCx downloads the latest AI container for that skill and runs it. (A rough sketch of the idea follows.)
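As a sketch of the idea only (this is not how ARC or ARCx skills work today), a hypothetical skill manifest could name a container image that gets pulled and started at install time. The only assumption is that Docker is installed on the host; the manifest fields, image name, and port are all made up for illustration:

const { execFileSync, spawn } = require("child_process");

const skillManifest = {
  name: "local-stt",                      // hypothetical skill name
  image: "example/whisper-server:latest", // placeholder image name
  port: 5001,                             // port the skill would connect to
};

function installAndRunSkillContainer(manifest) {
  // Pull the image once at install time.
  execFileSync("docker", ["pull", manifest.image], { stdio: "inherit" });

  // Start it detached, mapping the service port to the host.
  spawn("docker", [
    "run", "--rm", "-d",
    "-p", manifest.port + ":" + manifest.port,
    "--name", manifest.name,
    manifest.image,
  ], { stdio: "inherit" });
}

installAndRunSkillContainer(skillManifest);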

@athena what are your thoughts on having local container support for applications that skills connect to?

PRO
Canada
#10  

Integrating container support for applications that skills connect to in ARCx could offer several benefits, including improved performance, ease of deployment, and consistency across different environments. Containers, such as those managed by Docker, encapsulate an application and its dependencies into a single package that can run on any system that supports containerization. This approach could simplify the process of setting up and running complex AI tools and other applications needed by ARCx skills, making it more accessible to users without deep technical knowledge.

Here are some potential advantages of supporting containers as skills in ARCx:

  1. Simplified Deployment: Containers can bundle the application and all its dependencies, simplifying the deployment process. Users would not need to manually install and configure each tool required by a skill; instead, they could deploy a pre-configured container with everything needed to run the skill.

  2. Consistency and Isolation: Containers provide a consistent environment for applications to run, regardless of the host system's configuration. This isolation ensures that skills work the same way on any system, reducing the likelihood of "it works on my machine" issues.

  3. Improved Performance: By running AI tools and other applications locally within containers, you can eliminate the network latency and overhead associated with cloud-based services. This could lead to faster response times and a smoother user experience, especially for real-time interactions.

  4. Cost Savings: Running applications locally in containers could reduce or eliminate the costs associated with cloud-based services, making it more economical for users to deploy sophisticated robot skills.

  5. Ease of Updates: Containerized applications can be easily updated by replacing the container with a new version. This ensures that users always have access to the latest features and bug fixes without manually updating each component.

  6. Scalability: Containers can be easily scaled up or down based on demand, allowing ARCx to efficiently manage resources based on the workload.

However, integrating container support into ARCx would also present challenges, such as managing container lifecycles, ensuring security, and potentially increasing the system requirements for running ARCx. Additionally, developing, maintaining, and supporting a library of containers for various AI tools and applications would require significant effort and resources.

In conclusion, supporting containers as skills in ARCx could greatly enhance the platform's flexibility, performance, and ease of use. However, it would also require careful consideration of the technical and resource implications. If pursued, this approach could make sophisticated robot development more accessible to a broader audience, aligning with Synthiam's mission to simplify robot programming.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
PRO
Synthiam
#11  

There are a few existing skills that already have applications. A robot skill is a program, but the plugin file is the container (the terminology is PACKAGE). The suggestion you mention already exists in the current ARC and ARCx, and several robot skills do that already; one of them is the AIMLBot, which contains an AIML editor. You can have as much in a plugin file (the container, as you call it) as you want. Heck, you can throw a 16 GB virtual machine in there if you want.

I think my responses are being taken out of context. They were not in response to using cloud services, nor a claim that ARCx would make cloud services faster. I was on a side tangent about general performance improvement and how UI affects it. The topic was inspired by mentioning how ARCx is a client-server model that offloads UI for faster processing.

Maybe the conversation wiggled around because this is a hijacked thread about random stuff. So you may have missed that I answered your question about support for localized applications and services. In summary, ARC didn't have much localized support because it was 32-bit, and most recent AI/ML libraries are 64-bit. ARCx is 64-bit, so more robot skills can be localized.

Here's why there are very few limitations to ARC and ARCx: ARCx (and ARC) are operating systems. They are designed to allow access to I/O devices, including the PC, peripherals, and EZB firmware-loaded microcontrollers. There is no limitation other than which 3rd party libraries are compatible.

An example of complex 3rd party libraries is speech recognition and synthesis. Most of the technologies that ARC uses are available for multiple operating systems. The trouble begins when operating system features are unavailable - and things crumble. I'm repeating myself from another thread, but I understand it's a lot to take in, so imagine our heads right now! The cross-platform support is the part you'll also have to get used to, because some robot skills will not be available for some operating systems. Even though ARCx provides the environment as an operating system, it needs to use libraries and features of the environment it is running in. If the underlying OS has no available speech recognition libraries (e.g., Linux), you don't get that feature locally.

To make these features cross-platform, we extended the ARCx environment to have system hooks for multimedia support, such as video capture, audio, speech input, speech output, etc. This allows robot skills to be loaded for the specific OS to add that functionality. Again, if the functionality is unavailable for the underlying OS, the only option will be the cloud.

I do not have a list yet of what would be cloud-only or not supported across platforms with ARCx. I used speech recognition as an example because it's the biggest hurdle we're still tackling, but we haven't given up yet. Remember, the smaller and more efficient you think ARC or ARCx is, the better job we've done. Because if you stand back and take a look, it's many millions of lines of code - but it's not just the code I'm measuring, albeit a good chunk of effort over 14+ years. The number of internal system calls and functions available to the robot skills makes it so versatile.

PRO
Synthiam
#12   — Edited

Actually, here's a neat thing to check out. The framework is documented for public access, so you can see the system calls that ARC and EZB provide to robot skills:

Arc

And

Ezb