
mike.dattolo
USA
Asked

I was wondering if it is possible at all to run ARC on the Nvidia Jetson Nano, because I don't have things like a Raspberry Pi board or an Arduino?
Related Hardware: Raspberry Pi
I guess you have to take into account that the ARCx engine is (16.33/2.61 = 6.26) times faster than ARC when you do the GUI comparison. But yeah, 2.61 seconds versus 16.33 is still ridiculously long just to render a GUI.
The time it takes to send data up to the cloud and back also adds up, especially when you are doing real-time interaction.
Example conversation:
ARC to Cloud A (STT): here is my voice recording, translate it.
Cloud A to ARC: OK, here is the text.
ARC to Cloud B (LLM): here is some text, run it against a model.
Cloud B to ARC: here is the text I ran against the model.
ARC to Cloud C (TTS): here is a text file, give me an audio file.
Cloud C to ARC: here is an audio file.
When you go completely local, you can go ARC => STT => LLM => TTS => ARC.
You get rid of all this passing back and forth, so everything runs much faster. The challenge, though, is that you have to do a lot of manual application installation and configuration to make the pipeline work, which defeats the point of ARC being plug and play, and you lose all the metadata created in each step.
Now that the ARCx engine is much faster, there will be negligible delay from ARCx itself (microseconds), so passing back and forth shouldn't be an issue, especially if the tools are all local. But installing and setting up those tools is a pain, takes a lot of time and knowledge, and again takes away the plug-and-play aspect of ARC that lets idiots like me build sophisticated robots.
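The fully local pipeline above can be sketched in Python. This is a minimal illustration with placeholder stage functions, not real ARC/ARCx APIs; in a real setup, each function would call a locally installed engine (for example, a local speech model, a local LLM runtime, and a local TTS voice).

```python
# A minimal sketch of the fully local ARC => STT => LLM => TTS => ARC pipeline.
# All three stage functions are placeholders (assumptions), not real ARC APIs.

def speech_to_text(audio: bytes) -> str:
    # Placeholder: a local STT engine would decode the audio here.
    return "turn on the camera"

def run_llm(prompt: str) -> str:
    # Placeholder: a local LLM would generate a reply here.
    return f"Okay, handling: {prompt}"

def text_to_speech(text: str) -> bytes:
    # Placeholder: a local TTS engine would synthesize real audio here.
    return text.encode("utf-8")

def local_pipeline(audio_in: bytes) -> bytes:
    # No cloud round-trips: each stage hands its output directly to the next.
    text = speech_to_text(audio_in)
    reply = run_llm(text)
    return text_to_speech(reply)

print(local_pipeline(b"\x00\x01"))
```

Because each hop is a local function call rather than a network round-trip, the per-exchange latency is whatever the local engines need, with no transport overhead stacked on top.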
Have you thought about supporting containers as skills in ARC? So instead of just installing a skill, it would also install a container with the relevant application that supports that skill.
I realize this is a major architectural change and would require a lot of work building and supporting containers of all the common AI tools and integrating and testing them with ARCx, but it would remove network latency and overhead, as well as costs for users. From a user perspective it would be very easy: the user installs a skill, and ARCx downloads the latest AI container for that skill and runs it.
@athena, what are your thoughts on having local container support for applications that skills connect to?
Integrating container support for applications that skills connect to in ARCx could offer several benefits, including improved performance, ease of deployment, and consistency across different environments. Containers, such as those managed by Docker, encapsulate an application and its dependencies into a single package that can run on any system that supports containerization. This approach could simplify the process of setting up and running complex AI tools and other applications needed by ARCx skills, making it more accessible to users without deep technical knowledge.
Here are some potential advantages of supporting containers as skills in ARCx:
Simplified Deployment: Containers can bundle the application and all its dependencies, simplifying the deployment process. Users would not need to manually install and configure each tool required by a skill; instead, they could deploy a pre-configured container with everything needed to run the skill.
Consistency and Isolation: Containers provide a consistent environment for applications to run, regardless of the host system's configuration. This isolation ensures that skills work the same way on any system, reducing the likelihood of "it works on my machine" issues.
Improved Performance: By running AI tools and other applications locally within containers, you can eliminate the network latency and overhead associated with cloud-based services. This could lead to faster response times and a smoother user experience, especially for real-time interactions.
Cost Savings: Running applications locally in containers could reduce or eliminate the costs associated with cloud-based services, making it more economical for users to deploy sophisticated robot skills.
Ease of Updates: Containerized applications can be easily updated by replacing the container with a new version. This ensures that users always have access to the latest features and bug fixes without manually updating each component.
Scalability: Containers can be easily scaled up or down based on demand, allowing ARCx to efficiently manage resources based on the workload.
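The simplified-deployment flow in the list above could be sketched roughly as follows. The image name, ports, and installer function are hypothetical, invented for illustration; neither ARC nor ARCx exposes such an API today.

```python
# Hypothetical sketch: what "install a skill, pull its container" might
# translate to under the hood. The image name and ports are made up.
import shlex

def container_run_command(image: str, host_port: int, container_port: int) -> list:
    """Build the `docker run` invocation a skill installer might execute."""
    return [
        "docker", "run",
        "-d",                                   # run detached in the background
        "--restart", "unless-stopped",          # survive reboots with the robot
        "-p", f"{host_port}:{container_port}",  # expose the service to the skill
        image,
    ]

cmd = container_run_command("example/local-stt:latest", 8080, 80)
print(shlex.join(cmd))
```

The skill itself would then talk to the service on the exposed host port, exactly as it would talk to a cloud endpoint, just without leaving the machine.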
However, integrating container support into ARCx would also present challenges, such as managing container lifecycles, ensuring security, and potentially increasing the system requirements for running ARCx. Additionally, developing, maintaining, and supporting a library of containers for various AI tools and applications would require significant effort and resources.
In conclusion, supporting containers as skills in ARCx could greatly enhance the platform's flexibility, performance, and ease of use. However, it would also require careful consideration of the technical and resource implications. If pursued, this approach could make sophisticated robot development more accessible to a broader audience, aligning with Synthiam's mission to simplify robot programming.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
There are a few existing skills that already have applications bundled. A robot skill is a program, but the plugin file is the container (the terminology is PACKAGE). The suggestion you mention already exists in the current ARC and ARCx, and several robot skills do that already. One of them is the AIMLBot, which contains an AIML editor. You can have as much in a plugin file (the container, as you call it) as you want. Heck, you can throw a 16 GB virtual machine in there if you want to.
I think my responses are being taken out of context. They were not about using cloud services, or claiming that ARCx would make cloud services faster. I was on a side tangent about general performance improvement and how UI affects it. The topic was inspired by mentioning how ARCx is a client-server model that offloads UI for faster processing.
Maybe the conversation wiggled around because this is a hijacked thread about random stuff. So, you may have missed that I answered your question about support for localized applications and services. In summary, ARC didn't have much localized support because it was 32-bit, and most recent AI/ML libraries are 64-bit. ARCx is 64-bit, so more robot skills can be localized.
Here's why there are very few limitations to ARC and ARCx: ARCx (and ARC) are operating systems. They are designed to allow access to I/O devices, including the PC, peripherals, and EZB firmware-loaded microcontrollers. There is no limitation other than which 3rd-party libraries can be compatible.
An example of complex 3rd-party libraries is speech recognition and synthesis. Most of the technologies that ARC uses are available for multiple operating systems. The trouble begins when operating system features are unavailable - and things crumble. I'm repeating myself from another thread, but I understand it's a lot to take in, so imagine our heads right now! The cross-platform support is the part you'll also have to get used to, because some robot skills will not be available for some operating systems. Even though ARCx provides the environment as an operating system, it needs to use the libraries and features of the environment it is running in. You don't get that feature locally if the underlying OS has no available speech recognition libraries (e.g., Linux).
To make these features cross-platform, we extended the ARCx environment to have system hooks for multimedia support. Such as video capture, audio, speech input, speech output, etc... this allows robot skills to be loaded for the specific OS to add that functionality. Again, if the functionality is unavailable for the underlying OS, the only option will be the cloud.
I do not have a list of anything that would be cloud or not supported across platforms with ARCx yet. I used speech recognition as an example because it's the biggest hurdle we're still tackling, but we haven't given up yet. Remember, the smaller and more efficient you think ARC or ARCx is, the better job we've done. Because if you stand back and take a look, it's many millions of lines of code - but it's not just the code I'm measuring, albeit it's a good chunk of effort over 14+ years. The number of internal system calls and functions available to the robot skills make it so versatile.
Actually, here's a neat thing to check out. The framework is documented for public access for the robot skills. You can see the system calls that ARC and EZB provide:
- ARC
- EZB
DJ, I have not seen any recent status on the ARCx product. Like others, I am also working on a new project: I was planning on using the Jetson Orin Nano Super SBC to try some edge AI solutions and get away from some of the cloud services. I would really love to see if ARCx could be used on this platform in the near future.
Have you been able to make any more headway on the third-party issues you were trying to work through?
No, and it's super frustrating and disappointing because we put so much work into it. It's on hold while we wait for the industry to catch up. It's out of our control when other companies and developers don't want to produce stable, cross-platform code.
But in the meantime, ARC on the Jetson Orin Nano will not be any more useful than ARC on any similar CPU. The GPU will not have any effect on ARC itself; only the third-party robot skills written to use the GPU would benefit. So, if you want to use an Orin Nano today with regular ARC, you'd simply load up your LLM (for vision or language) on the Nano and configure the ChatGPT robot skill to use it.
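Configuring a skill to use a local model usually comes down to pointing it at a local endpoint. As a hedged sketch: many local LLM runtimes expose an OpenAI-compatible chat endpoint, and the host, port, and model name below are assumptions for illustration, not values from the ChatGPT robot skill's documentation.

```python
# Sketch: building a request to a local, OpenAI-compatible LLM endpoint.
# The URL and model name are hypothetical examples.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # hypothetical

def build_chat_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Package a chat prompt as a POST request to the local endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_chat_request("What do you see in the camera frame?")
print(req.full_url)
```

If the skill accepts a custom server URL, you would paste that local endpoint in place of the cloud one, and everything else about the skill stays the same.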
In the meantime, the current ARC is the top dog in its field, in my humble opinion and that of many others. You set ARC apart from other platforms and opened it up to internal and outside development with your brilliant concept of "skills". These other companies need to follow your example.
Thanks, David. I should add that ARCx is still being worked on. We continue to experiment with new ideas for the underlying framework. The main issue lies with the third-party dependencies that aren’t being developed for serious use.
Let me explain further. It isn’t feasible for Synthiam to develop libraries for low-level things like camera drivers, audio drivers, UI elements, GPU drivers, etc., because if we did, there would be no compatibility between what we make and what other libraries need.
So, instead, Synthiam leverages low-level libraries from other developers. Synthiam is a platform that is built on the shoulders of giants.
With ARC, low-level dependencies for Windows have been an industry for companies over the last 30 years. Companies have taken their development efforts toward Microsoft Windows seriously: libraries are standardized and consistently maintained, and even Microsoft ensures that APIs and documentation are kept up to date.
Fast-forward to today. It's complete chaos, even from Microsoft. I can't understand how any company can develop a product with a long life expectancy. It's specifically the open-source mindset: "Push breaking changes and let others catch up."
What that means is that someone makes a library and one day decides to change the API because they want to conform to a new approach - and boom, it breaks everything that's using it. So, the developers who use it must recompile and rebuild their apps.
Okay, so it's no big deal if you create an app that does a single thing. And let’s be clear here: not many apps live very long. Consumers' attention spans are quite short these days.
The issue lies with how ARC and ARCx work. They are platforms that host third-party products called robot skills. So if company X makes a robot skill that also uses a bit of company Y's library API, the whole thing falls apart when company Y makes a breaking change, usually for no good reason.
If that happens, the only way to fix it is to rebuild ARCx entirely, with ALL 800 robot skills, against the new library. That means contacting 600+ companies and having them ALL rebuild their libraries because one egocentric developer doesn't take their product seriously.
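One common industry mitigation for this, offered as a general practice rather than anything the thread says Synthiam does, is to pin dependency versions and treat a major-version bump as a breaking change under semantic versioning. A rough sketch of that compatibility rule:

```python
# Sketch of a loose semantic-versioning compatibility check: accept an
# installed library only if its major version matches the pinned one
# (major bumps signal breaking changes) and it is not older than the pin.

def is_compatible(installed: str, pinned: str) -> bool:
    """Return True if `installed` shares the pinned major version
    and is at least as new as the pinned release."""
    inst = tuple(int(p) for p in installed.split("."))
    pin = tuple(int(p) for p in pinned.split("."))
    return inst[0] == pin[0] and inst >= pin

print(is_compatible("2.31.0", "2.28.0"))  # same major, newer: True
print(is_compatible("3.0.0", "2.28.0"))   # major bump = breaking: False
```

Of course, this only helps when upstream developers actually respect semantic versioning, which is exactly the discipline the post above is complaining is missing.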
I’ve been active on GitHub, trying to explain the importance of consistency to these developers. They take offense and go on tangents about how the world needs to advance, which means breaking changes.
This all concerns cross-platform libraries. As soon as we started discussing anything related to Linux, it all went to s#!7.