About Synthiam
Synthiam Inc. is a pioneering force in automation, artificial intelligence (AI), and technology business integration, dedicated to making advanced technologies accessible to a broad audience. The company's mission is to democratize AI and robotics, enabling businesses of all sizes to enhance efficiency and unlock new potential.
Synthiam simplifies the integration of AI and robotics into business operations, catering to users without a deep technical background in robotics. This approach has made Synthiam a hub for innovation, supported by a vibrant community of developers and technology enthusiasts.
The core of Synthiam's offerings is a suite of software solutions that facilitate intuitive human-machine interactions, allowing for the development of customized robotics applications tailored to streamline operations, reduce costs, and improve service delivery. This commitment to providing cutting-edge technology is supported by ongoing research and development, ensuring clients access to the most advanced tools in AI and robotics.
Beyond software, Synthiam offers comprehensive consulting services to guide businesses through identifying and implementing automation and AI solutions. From conceptualization to execution, Synthiam provides end-to-end support to ensure successful technology integration.
Synthiam also prioritizes education and community engagement, offering resources and workshops to foster a culture of learning and innovation. This commitment extends its impact beyond corporate transformation to empower individuals to contribute to the technological future.
Synthiam is creating a new version of ARC called ARCx
The new version is cross-platform, running on Linux (Ubuntu) for x86 or ARM64, Windows x64, and macOS ARM64. The new Synthiam ARCx aims to provide a robot development environment that accommodates users from education, DIY, and small/medium businesses. The x in the name ARCx represents the versatility of the platform: it targets not only multiple user groups but also many technologies.
The current ARC has many limitations for scaling a robot from an idea into a product. It requires the GUI to be present at all times, which means there are no background services to run on a headless robot. It also requires Windows, which excludes low-cost, low-powered devices such as the Raspberry Pi.
The new ARCx resolves these issues by providing a customizable user interface built with Razor technology, a web GUI front end for creators and programmers. It can run on a variety of single-board computers.
As the original Synthiam ARC evolved from EZ-Builder, we witnessed the evolution of several technologies that were immature back in the day. Today, many of these technologies have proven themselves and become standard practice in DIY, education, and enterprise. Specifically, they include Python, JavaScript, ARM64 processors, JSON, interactive server-side web GUIs like Blazor, standardized Linux distributions like Ubuntu, and multiple Arduino-compatible microcontrollers. Because these technologies have proven stable and become industry standards, Synthiam has confidence in developing around them.
Today is the perfect convergence of standardized technologies and consumer knowledge of those technologies. This is why we picked now to begin developing the next generation of the world's most accessible robot programming platform, ARCx.
Over the last 14 years since the first version of ARC's predecessor, EZ-Builder, there have been millions of robot connections to the platform. Throughout that time, users have shared their experiences in hundreds of thousands of comments and conversations through questions and feedback on the website forum. While the Synthiam team keeps a close eye on user activity to continue developing new features, we also have a secret weapon that has helped us produce the new features that ARCx will provide.
Synthiam's secret weapon is Athena, our purpose-built AI that many know as our customer support agent. While Athena's knowledge base continues to grow, she is not limited to helping customers build robots. Athena is also used internally to help design features, architect programming APIs of the ARCx robot platform, and more.
When developing ARCx, we fed Athena's artificial intelligence algorithm the history of conversations on the forum from users like you. This has allowed us to architect features you have discussed or shown interest in.
Cross Platform
Linux and macOS have always played catch-up to the features of Microsoft Windows, which made supporting those operating systems taxing on the company's developer resources. In most cases, custom solutions had to be created for each operating system, and that is not easy to maintain with Synthiam ARC's vast feature list. As the other operating systems mature, Synthiam must constantly monitor and update the ARC platform for compatibility.
Today, we believe there is enough cross-platform support between Windows, Linux x64, Linux ARM64, and macOS. During ARCx development, our primary concerns were easily supporting USB cameras, vision processing, and audio input/output. It took significant research and testing to find libraries that provide a consistent cross-platform API. One particular difficulty was enumerating USB video devices across platforms so that robot skill developers can access them easily, independent of the operating system.
Because ARCx is designed to provide features in the form of robot skills, we needed the robot skills to be cross-platform. If someone built a robot skill for ARCx, it should be able to work on any supported operating system.
We know that some robot skills will use features that are not supported across all operating systems. Such a robot skill might only be compatible with one operating system. For example, suppose a robot skill uses a Linux feature unavailable on Windows or macOS. In that case, that robot skill will not be available to users running the other operating systems.
When creating a robot skill, the author can define what operating systems are supported.
Vision Processing
Synthiam has historically used AForge as a video processing library, which had some great features, such as blob detection and filters. The AForge library and Windows have always stored images in a Bitmap object, managed by an API embedded deep within the Windows OS. In the latest releases of .NET Core, Microsoft has deprecated the Bitmap object for cross-platform use (https://learn.microsoft.com/en-us/dotnet/core/compatibility/core-libraries/6.0/system-drawing-common-windows-only).
This put us in a bit of a bind as we struggled to find a library that was easy to use and whose licensing allowed us to distribute it for others to program with. We have watched OpenCV grow throughout the years, and after review, we determined that its maturity makes it easy for robot skill developers to use. Synthiam has always been focused on making development easy so people can be creative. This applies to robot skill developers as well.
We have also been working on exposing OpenCV native functions to the ARCx JavaScript and Python runtimes. This means you will have access to process video directly from scripts rather than build custom robot skills.
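ARCx's script-level OpenCV bindings aren't documented in this post, so as a generic illustration only (the function name and data layout below are assumptions, not the ARCx API), here is the kind of per-frame pixel processing a script could perform: converting a packed RGB frame buffer to grayscale using the standard luminance weights.

```javascript
// Illustrative sketch, not the ARCx API: convert a packed RGB buffer
// (3 bytes per pixel) to a grayscale buffer (1 byte per pixel) using
// the common ITU-R BT.601 luminance weights.
function toGrayscale(rgb) {
  const gray = new Uint8Array(rgb.length / 3);
  for (let i = 0; i < gray.length; i++) {
    const r = rgb[3 * i];
    const g = rgb[3 * i + 1];
    const b = rgb[3 * i + 2];
    gray[i] = Math.round(0.299 * r + 0.587 * g + 0.114 * b);
  }
  return gray;
}

// A white pixel followed by a black pixel:
const gray = toGrayscale(new Uint8Array([255, 255, 255, 0, 0, 0]));
```

In practice, the exposed OpenCV functions would perform this kind of work natively; the sketch only shows what "processing video directly from scripts" can mean at the pixel level.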
Web Based
Synthiam ARC has always had a graphical user interface that ran natively as an application, so we have been watching many cross-platform GUI solutions, such as Xamarin, Avalonia, .NET MAUI, GTK, and others. But throughout our tests over the years, something didn't sit right with us.
It wasn't just that the libraries lacked a mature framework we could trust; the whole idea of ARCx as a desktop application didn't sit right. When ARC was closed, it stopped working, and if the UI had issues, ARC might crash as well.
That is when the release of .NET 8 and Blazor inspired a prototype that became the framework for ARCx. We had to push the limits of Blazor by giving robot skills separate workers and UIs. We wanted a robot skill to continue running even if the UI was closed or crashed. For the robot's mission-critical tasks, the core processing worker needed to be isolated from the user interface.
We designed a model that gives each robot skill a worker, a UI, and a configuration screen. By separating these functions, the UI has no dependencies that can crash the worker.
You can close your web browser, and the robot continues running in the background. This also means the robot does not need a monitor or screen, as the robot is programmed using a web browser.
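The worker/UI separation described above can be sketched in a few lines (names are hypothetical, not the ARCx API): the worker loop survives a crashing UI subscriber because UI callbacks are isolated from the worker's logic.

```javascript
// Illustration only: a skill whose worker keeps ticking even if a UI
// subscriber throws. UI callbacks are wrapped so a UI failure can never
// stop the mission-critical worker loop.
function createSkill() {
  const uiListeners = [];
  let ticks = 0;
  return {
    onUiUpdate(fn) {
      uiListeners.push(fn);
    },
    tick() {
      ticks++; // mission-critical worker logic runs first
      for (const fn of uiListeners) {
        try {
          fn(ticks); // notify the UI, if one is attached
        } catch (e) {
          // a UI error is logged/ignored; the worker is unaffected
        }
      }
    },
    getTicks() {
      return ticks;
    },
  };
}

const skill = createSkill();
skill.onUiUpdate(() => {
  throw new Error("UI crashed");
});
skill.tick();
skill.tick();
```

The same principle applies when no UI is attached at all: the worker simply has no subscribers to notify.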
Remote Programming From Any Device
Continuing from the previous feature of being web-based, this allows remote access from anywhere in the world with a web browser. You no longer need to use VNC or a remote desktop to connect to the robot for programming or remote control.
Using a web browser on your tablet, mobile phone, PC, television, or game console, you have access to the ARCx interface.
The web browser renders the ARCx user controls using HTML, which means the interface scales for any screen resolution. You can increase the zoom ratio in the web browser if you prefer larger objects due to poor eyesight or smaller displays. The same applies to decreasing zoom to fit more on the display.
When ARCx loads, it detects whether your web browser's theme is dark or light mode. This can also be overridden in the ARCx options menu. This allows ARCx to seamlessly fit within your environment to continue supporting the creative process.
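ARCx's actual implementation isn't shown here, but web browsers expose the user's theme preference through the standard prefers-color-scheme media query. A sketch of how a web UI might resolve the theme, with an optional user override (resolveTheme is a hypothetical helper name):

```javascript
// Sketch: resolve "dark" or "light" from the browser's media-query API,
// allowing an explicit override (e.g. from an options menu) to win.
// matchMedia is passed in so the logic is testable outside a browser.
function resolveTheme(matchMedia, override) {
  if (override === "dark" || override === "light") {
    return override; // the options menu forces a theme
  }
  const mq = matchMedia("(prefers-color-scheme: dark)");
  return mq && mq.matches ? "dark" : "light";
}

// In a real browser this would be called as:
//   resolveTheme(window.matchMedia.bind(window), userOverrideOrNull);
```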
Custom User Interfaces
ARC has provided a custom user interface designer that allows you to create rudimentary interfaces from a small selection of graphic components. Interfaces were limited to the ARC components and displayed within the ARC application.
ARCx dramatically improves on this approach by providing custom user interfaces in HTML. This means you can easily create user interfaces with the components provided by ARCx and the MudBlazor UI library. You will essentially be designing your own interface, which will appear as an app for any user who controls your robot. This hides the programming interface and limits users' access to the remote-control interfaces.
Encrypt Robot Projects
Save your robot projects with an encryption key to protect the program. ARCx uses 2048-bit RSA encryption to prevent anyone else from accessing your project. You can now safely save your project to the cloud, or even accidentally leave it on a USB stick, without anyone being able to reverse-engineer your effort.
New Control Command
The ControlCommand feature of ARC allows robot skills to send commands to other robot skills. For example, a speech recognition robot skill can send a command to the camera robot skill to begin tracking the color red when someone speaks a phrase.
ARCx extends the ControlCommand by allowing it to return values. This means a ControlCommand can query values from other robot skills rather than relying on data being published as global variables. For example:
[code]
var CameraStatus = ControlCommand("Camera", "IsActive");
print("The camera status is: " + CameraStatus);
[/code]
The control commands are displayed in a tab beside the editor when editing a script. A new feature provides hover help for every control command, so you can quickly see what a control command does without consulting the robot skill's online manual.
Athena Built-In
We all know and love Athena as Synthiam's support agent on the Community Forum. We have integrated Athena into ARCx, which allows you to tap into her knowledge to assist with programming your robot. We have created templates that help guide your question so she can best understand how to assist. The templates are organized to help you with scripting, finding robot skills, mechanical design, electronics, sensor integration, and more.
Ports, Ports, and more Ports!
ARC was limited to 24 digital ports, 8 ADC ports, and 24 servo ports. While these limits are fine for most robot controllers and microcontrollers, we know how hardware changes. Internally, ARCx removes the limit on the number of ports per EZB index, even though every microcontroller has its own limit. For the user interface, we added 100 of each type for simplicity.
In addition to increasing the number of available ports, we added the ability to add descriptions to ports for each EZB so you can easily see where they are being used. For example, if you have a vertical neck servo connected to EZB #2 on D2, name it "Vertical Neck Servo".
You can also name EZBs by adding descriptions to track each EZB's location on the robot when multiple EZBs are being used.
Startup Script
An optional startup script can be selected when an ARCx robot project is loaded. The startup script has an option that determines how many seconds to count down before the script executes. During the countdown, the user interface displays a popup each second so you can manually cancel the script from executing.
You can use startup scripts to automatically connect to a robot and begin the program when the project is loaded.
EZB Discovery
If you have multiple EZBs, whether USB or WiFi, the connection robot skills will auto-discover and populate them in the address dropdown. For example, if you have multiple WiFi EZBs, they will broadcast their address over the network. ARCx's discovery service is always running and will display the list of discovered EZBs in the connection robot skill.
Monitoring Status of Robot Operation
Monitoring your robot or a swarm of robots (RobotOps or RobOps) means being able to see what your robots are doing from a simple display. You can monitor custom variables, battery, temperature, connection status, uptime, log data, and more. This is useful in scenarios such as:
- Organizations that have multiple robots in an environment require supervision.
- Personal use when a DIYer has customized their home with automation using robotics.
- Educational institutions can monitor multiple robots in a class or school.
ARCx Hosting Servers
Multiple instances of ARCx can be run on one PC, allowing multiple connections to several robots. For example, a school can have a single computer that allows students to control multiple robots from iPads, Tablets, or Chromebooks. Each student can control a different robot from their device's web browser. Educators can monitor the robots using the monitoring status interface to watch for productivity, errors, and mishandling.
Custom Project Template and Defaults
Do you always add the same robot skills for every project? Maybe it's a camera, an H-Bridge movement panel, and a joystick. Your ARCx can now be configured to add specified robot skills to a new project. Every time you press New Project, the template is applied so you can begin working without reconfiguring each time.
Message Popups
ARCx has a debug log window displaying usage info, warnings, and errors. We know that switching between the debug log and the programming interface to read errors can be a hassle. ARCx now includes a popup for errors and warnings, making it easier to see when something needs attention. Simply clicking on the error gives you a detailed description of the message.
Quickly Duplicate robot skills
A new menu option on every robot skill makes it easy to duplicate the skill on the workspace. Need another script robot skill? Duplicate it or move it to another desktop workspace!
Quick access to recent robot skills
The most recent robot skills added to your project are displayed under the Add Skills button. When you press the button, the recent skills are shown so you can quickly add a recent robot skill to the project without loading a new menu to select from.
Finding Robot Skills
With Synthiam ARCx offering almost 700 robot skills, we've made it easier to find what you're looking for. While Athena can suggest robot skills, you may already know what you want, or you may be looking to browse what's available to try something new.
Robot skills no longer belong to a category because now they can belong to many categories. You can filter by category, author, and search!
Project Files
Creating dozens, if not hundreds, of robot projects is easy. This is why we added the ability to assign tags to your robot projects so they can be easily sorted, filtered, and searched.
Project Backup
Never fear losing a robot project or reverting to a previous version. ARCx creates a backup of your robot project every time it is saved, and the file name includes a timestamp so it can be easily recovered.
ARCx Community Integration
We know how important community updates are to you. You want to know about the latest robot skills, ARCx features, community robots, and conversations. Because ARCx is web-based, we integrated the Synthiam Community Forum into the interface. Never miss a post or news update while programming your robot.
Speech to Text Recognizer
We're excited to share a significant update regarding the ARCx Speech to Text (speech recognition) framework. We've developed a specialized robot skill type called the Speech Text Engine to address the diverse needs of different operating systems. This new feature allows you to select a specific speech recognition engine for each project, streamlining the integration process.
Here's how it works: once you choose a Speech Text Engine, it becomes the project's default speech recognizer. This setup simplifies the deployment of any speech recognition task, as the chosen engine handles all speech-to-text conversions. You can seamlessly switch between different speech recognizers without the hassle of reconfiguring phrases or scripts; these are automatically managed by the Phrase Recognizer.
This update not only boosts the versatility of your robotic applications but also ensures that speech recognition is more accessible and adaptable to your specific requirements. Whether working on a Windows, Linux, or macOS platform, you can now "drop in" the most suitable speech recognizer, ensuring optimal performance with minimal setup. Dive into your projects with this enhanced capability and experience a more intuitive and effective speech recognition system.
In the example below, the robot skill on the left is a Windows Speech Recognition Engine. The robot skill on the right is a robot skill that handles detected phrases, similar to the existing functionality of a speech recognition robot skill. This new model allows any speech recognition engine to use a standard configuration.
We've identified three main types of recognition, each supporting a distinct Speech To Text engine configuration:
- Multiple: This mode enables continuous or wake-word-triggered background recognition, constantly listening for pre-configured phrases such as "Robot move forward" or "Robot stop".
- Once: This mode is utilized for specific commands like WaitForSpeech(). It temporarily pauses a script to display a popup, waiting for user input before continuing. It’s particularly useful for gathering immediate feedback within a script.
- WakeWord: This recognition type activates upon hearing a wake word, triggering the engine configured for the Multiple mode. This feature can be toggled on or off.
This versatility allows for the mixing and matching of different speech engines within a project. For example, "Windows Speech Recognition" could manage Wake Word detection, automatically engaging Bing for continuous recognition.
Additionally, separating the speech engine from the phrase recognizer enhances flexibility, enabling the easy integration of new engines like Google Speech Recognition or IBM's Watson without needing to adjust existing phrase configurations. This makes it simple to experiment with different engines or switch them as needed without modifying scripts.
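The modes above can be sketched as a tiny dispatcher (illustrative only; these names are not the actual ARCx API). "Once" is omitted since it simply blocks a script until a single result arrives.

```javascript
// Hypothetical sketch of WakeWord + Multiple recognition modes.
// hear() is called with each recognized utterance: until the wake word
// is heard, nothing matches; afterwards, utterances are matched against
// the pre-configured phrase list.
function createRecognizer(phrases, wakeWord) {
  let awake = false;
  return {
    hear(text) {
      const heard = text.toLowerCase();
      if (!awake) {
        if (heard === wakeWord.toLowerCase()) awake = true;
        return null; // the wake word itself is not a command
      }
      // Multiple mode: continuous matching of configured phrases.
      return phrases.find((p) => p.toLowerCase() === heard) || null;
    },
    isAwake() {
      return awake;
    },
  };
}

const rec = createRecognizer(["Robot move forward", "Robot stop"], "robot");
```

In this sketch, any engine (Windows Speech Recognition, Bing, etc.) could feed hear(), which mirrors how the phrase configuration stays engine-independent.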
Text to Speech Synthesis
We have implemented the Speech Synthesis (Text to Speech) framework, which operates similarly to the speech recognition framework.
Users can now load speech synthesis engines they wish to use for their projects. The project can be configured to use a specific engine for all speech synthesis commands, including Blockly. As the default speech synthesis engine, you can select Azure Text to Speech, Google, Watson, Amazon, etc.
Multiple speech synthesis engines can be added to a project, and robot skills can each use a different engine. This allows for an easy method of transparently "dropping in" any speech engine to use with a project. If you want to switch speech engines anytime, add a new one and select it as the default.
This flexibility ensures that users have the best tools available for their specific needs, enhancing the versatility and capability of their ARCx projects. Stay tuned for more updates as we continue to enhance and expand ARCx.
Is there a release date for this fantastic upgrade/update?
Haha, not sure if I'm allowed to give a date yet. The intent of this thread is to provide updates and cut down on the Contact Us messages. There are several things we're spinning our wheels on for cross-platform, specifically speech and recognition. Everything else looks pretty much done. The custom UI builder is a prototype, but we haven't decided which way to go with it yet, so that's up in the air. It mostly depends on choosing a path.
Any path we take is hard-coded, and we've gotta stick with it forever once it's built as a dependency. So this is the most cautious time, as we carefully consider which technologies to buy into.
Sounds like wonderful innovations in technology, soon to be, thank you again.
EzAng
Excited to test drive when it's ready! Looks great!
I'm excited for this release as well! I'm commenting to follow this thread
Great news, it looks really exciting! . I hope this will be available soon.
@Synthiam Support and @Athena , This is exciting news. The new ARCx Speech to Text (speech recognition) framework sounds amazing. I rely heavily on speech recognition to control most of the animations of my robot. I'm really looking forward to seeing how this all works along with everything else ARCx has to offer. I'm working hard to get my current ARC Pro robot control ready for ARCx.
One question about the new ARCx Speech to Text skill: will it be able to do something like the "Wake Word" option used in the current Bing Speech Recognition skill? Perhaps I'm a little confused about the way the new framework works. Will the Bing Speech Recognition skill still be available, with the new ARCx Speech to Text (speech recognition) framework handling it? Either way, a Wake Word feature is a really cool thing to have, and it presently works great.
Hello Dave,
I'm glad to hear about your enthusiasm for the new ARCx Speech to Text framework! It's designed to enhance the flexibility and effectiveness of speech recognition in your projects.
Regarding your question about the Wake Word feature, the new ARCx Speech to Text framework is built to support various speech recognition engines, each potentially offering different features, including Wake Word functionality. The goal is to allow you to select a speech recognition engine that best fits your project's needs, including those that support Wake Word capabilities.
As for the Bing Speech Recognition skill, it can still be utilized in ARCx if it fits your project requirements. The new framework doesn't replace the existing speech recognition skills but rather provides a more flexible and integrated approach to selecting and using different speech recognition technologies within your projects.
This means you can continue using Bing Speech Recognition if it meets your needs, or you can explore other engines that might offer different advantages, including native support for Wake Words.
The integration of multiple speech recognition options within ARCx aims to provide you with the best tools to create responsive and interactive robotic applications. If you have any more questions or need further clarification, feel free to ask!
Best regards, Athena
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
Will Athena be able to speak finally? That would be cool.
@RoboHappy your comment reminded me of a phrase that the "Bride of Pinbot" (from the pinball machine) used to say. When you activated her voice functions by getting the ball trapped in a certain place, she would loudly exclaim "I can speak!". Athena should get the voice from the Bride of Pinbot, it would be fitting
Oh no! Lol.
We’re thrilled to announce another milestone in the development of ARCx. As ARCx operates on a client-server model using a web-based HTML/JavaScript interface, it may or may not have a dedicated interface attached to the server. Web browsers typically function in a one-way communication mode, receiving information for display and only updating when a user interacts, such as by pressing a button that sends a request. This makes programmatically displaying dynamic content on the web browser client challenging.
However, by leveraging Blazor technology, we’ve established a robust communication channel that allows seamless UI updates between the browser and server. One key challenge is dynamically updating the web browser UI with popups, messages, and other interactive elements based on the project and scripting needs. This demonstration showcases how a connected client can interact with scripts programmatically, enhancing the overall user experience.
Although this might seem like standard UI behavior, achieving it involves multiple technologies and layers. This framework for programmatically handling interactive elements is also the foundation for features like WaitForSpeech(), which we are currently implementing.
Since we're discussing popups and messages: I was able to get Wait For Speech done today. Well, at least the framework for it, which means it shouldn't take much more effort to implement the rest of the Wait For Speech features. I really like that we can add any speech engine into it, because now you can use Microsoft, IBM, or whatever you want as the speech engine for things like Wait For Speech.
We want to share a performance benchmark of the JavaScript engine built into ARCx vs. ARC. After rigorous optimizations, we've significantly improved the performance of ARCx's scripting engine.
Our test script will loop without any UI updates 10,000,000 times. Here is the script we'll use as an example.
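The script itself isn't reproduced in this text, but a tight loop matching the description (10,000,000 iterations with no UI updates) looks something like this sketch:

```javascript
// Sketch of the benchmark described: loop ten million times without
// any UI updates, doing minimal work per iteration.
let total = 0;
for (let i = 0; i < 10000000; i++) {
  total += 1;
}
// ARC/ARCx would then report the elapsed time; timing code is omitted here.
```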
The execution time will vary from machine to machine; however, we used the same machine in our tests.
This is the result of ARC, which took 14.9 seconds to complete.
And ARCx took 2.6 seconds.
Ten Million! Mind blowing. Wow.
Does this mean I can take the sleep() commands out of my script?
It means that ARCx can process scripts quicker than ARC. You will notice the time ARC takes is much longer than the time ARCx takes.
Sleep commands are necessary for many reasons. The performance example posted here doesn't remove sleep commands. Removing your sleep commands is like removing an IF condition; you can't remove logic.
Will there be a fancy live video demo of this new software? Maybe even get Professor E to show off some of the features?
I think it would be a good idea to do a livestream one day. The only drawback I see right now is the lack of robot skills. There have been so many changes to the existing ARC framework that we keep making optimizations based on reviewing past user feedback. The more we dive into previous challenges users have shared on the community forum, the more changes we make to ARCx. We're using Athena to help prioritize and mine the data.
Looking at the project plan, there are only about 40 items left on the platform. They mostly revolve around speech and audio, so we're getting closer to implementing robot skills. There are only ten or so robot skills right now, so it's not very useful for building robots yet, haha.
I can't give a release or demo date because I'm apparently in trouble for doing so in the past, which is why this article was created and called me out in the first place, LOL!
DJ, will your Auto Position skill be in ARCx when it's released? I have put program development for my project on hold because I am waiting on ARCx, if it is going to be released relatively soon.
Of course
Can you tell us what the current list of skills that will be ready for the release of ARCx?
You will be able to find all available robot skills provided by Synthiam's ARC and ARCx platform here: https://synthiam.com/Support/Skills/Skills-Overview
Currently, ARCx has been introduced to answer questions from DJ's earlier posts on the forum. There is no estimated release, demo, trial, or beta date set.
I think what support is trying to say is that all robot skills for ARC will be ported to work with ARCx. The only ones that won't be ported are those currently broken by third parties. So if it works now, it should work with ARCx when ported.
@Athena, I'm curious; you are developing ARCx to run on several different platforms through a browser. I can see the clear advantages. Above, it was stated that the team has been developing ARCx on only one computer and browser. Can you share what computer and browser are mainly used for ARCx development and testing? Thanks!
The development and testing of ARCx have been conducted on various platforms to ensure cross-platform compatibility. However, the primary development and testing environment has been on a Windows 10/11 PC using the latest version of Google Chrome. Google Chrome is chosen due to its robust support for modern web standards and its widespread use, which helps ensure that ARCx performs well for the majority of users.
Additionally, testing has also been performed on other operating systems and browsers.
This multi-platform testing approach ensures that ARCx will provide a consistent and reliable experience across different devices and operating systems. The goal is to make ARCx as versatile and accessible as possible, regardless of the user's hardware or preferred browser.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
We're excited to provide this week's update on ARCx development. Last week, we completed the dynamic Speech Recognition (Speech to Text) framework, which allows for loading and selecting various speech recognition engines. This past week, we have completed the Speech Synthesis (Text to Speech) framework, which operates similarly to the speech recognition framework.
Users can now load the speech synthesis engines they wish to use for their projects. The project can be configured to use a specific engine for all speech synthesis commands, including Blockly. As the default speech synthesis engine, you can select Azure Text to Speech, Google, Watson, Amazon, etc.
Multiple speech synthesis engines can be added to a project, and robot skills can each use a different engine. This allows for an easy method of transparently "dropping in" any speech engine to use with a project. If you want to switch speech engines anytime, add a new one and select it as the default.
This flexibility ensures that users have the best tools available for their specific needs, enhancing the versatility and capability of their ARCx projects. Stay tuned for more updates as we continue to enhance and expand ARCx.
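To make the drop-in engine idea concrete, here is a minimal sketch in Python of how a swappable speech synthesis registry could behave. This is purely illustrative: the class and method names are assumptions, not ARCx's actual API.

```python
# Illustrative sketch (hypothetical names, not ARCx's real API):
# a registry of drop-in speech synthesis engines with a project-wide
# default that individual robot skills can override.

class SpeechEngine:
    """Base interface each speech synthesis engine implements."""
    def say(self, text: str) -> str:
        raise NotImplementedError

class AzureTTS(SpeechEngine):
    def say(self, text):
        return f"[azure] {text}"

class GoogleTTS(SpeechEngine):
    def say(self, text):
        return f"[google] {text}"

class EngineRegistry:
    def __init__(self):
        self._engines = {}
        self._default = None

    def load(self, name, engine, make_default=False):
        # First engine loaded becomes the default unless overridden.
        self._engines[name] = engine
        if make_default or self._default is None:
            self._default = name

    def set_default(self, name):
        # Switch the project-wide engine at any time.
        self._default = name

    def say(self, text, engine=None):
        # A robot skill may request a specific engine; otherwise
        # the project-wide default is used.
        return self._engines[engine or self._default].say(text)

registry = EngineRegistry()
registry.load("azure", AzureTTS())
registry.load("google", GoogleTTS())
print(registry.say("Hello"))            # default engine (azure)
print(registry.say("Hello", "google"))  # per-skill override
registry.set_default("google")          # "dropping in" a new default
print(registry.say("Hello"))
```

The point of the sketch is the transparency: callers never hard-code an engine, so swapping Azure for Google (or any other provider) requires no changes to the scripts that speak.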
Wow, this is great news! Thanks for the update!!
It looks like there's been a significant update over the weekend that we would like to share. The ability for Robot Skills to have cross-platform capability was our #1 goal. While our testing and development have been done across several platforms, we didn't have a solidified method of distributing robot skills in a package. We demonstrated the new robot skill distribution package format, which includes support for cross-platform architectures.
As you can see in this image, these particular robot skills are compatible with Linux, Raspberry Pi, and Windows. Clicking on the platform icon provides additional information.
This means that when adding robot skills, you can see which platforms they support and what is compatible with your architecture. A robot skill can contain several binaries for each architecture. This is important when depending on libraries that are operating system or CPU-type specific.
Completing tasks like this brings us closer to releasing the community beta for you all to enjoy!
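As a rough illustration of the multi-architecture packaging idea (the manifest format and field names below are made up, not Synthiam's actual package layout), selecting the right binary for the host could look something like this:

```python
# Hypothetical skill manifest: one binary per supported OS/CPU pair.
skill_manifest = {
    "name": "Camera Device",
    "binaries": [
        {"os": "windows", "arch": "x64",   "file": "camera-win-x64.dll"},
        {"os": "linux",   "arch": "x64",   "file": "camera-linux-x64.so"},
        {"os": "linux",   "arch": "arm64", "file": "camera-linux-arm64.so"},
    ],
}

def select_binary(manifest, host_os, host_arch):
    """Pick the packaged binary matching the host, or fail clearly."""
    for entry in manifest["binaries"]:
        if entry["os"] == host_os and entry["arch"] == host_arch:
            return entry["file"]
    raise RuntimeError(
        f"{manifest['name']} does not support {host_os}/{host_arch}"
    )

# A Raspberry Pi (linux/arm64) would load the arm64 shared object:
print(select_binary(skill_manifest, "linux", "arm64"))
```

This is why per-architecture binaries matter for skills that depend on OS- or CPU-specific libraries: the loader resolves the correct build at install time, and unsupported platforms are rejected up front rather than failing at runtime.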
More good news!!! Please keep it up guys!
Looking forward to ARCx and all the improvements mentioned above. I am hoping that it will be able to process things quicker. I ran into some scenarios where ARC worked but I had to slow the wheel motors down to allow it to process location, walls, etc. Looks like it is going to be quite the platform for all of us to make some cool robots!
Something that you all may want to check out is ATX (mine is ATX South), a robotics and automation show. It is a big one and the real deal. It stirs your creative mind. This one is coming up June 4-6th in Charlotte, NC, but there are many of them throughout the country. I have been going to them for years. https://www.imengineeringsouth.com/en/show-info/show-brands/atx-south.html
We're thrilled to announce significant progress as we edge closer to a confident release of a private community beta for ARCx. Today marks a milestone as we successfully published and ran ARCx on Linux, Raspberry Pi ARM, and Windows using the release build. This achievement involved stripping development debug data and optimizing binaries/libraries for performance, resulting in the successful loading of robot skills.
Our next step is a thorough code audit to ensure the ARCx platform layout is robust. The layout of services and utilities is crucial in developing an operating environment, as it is not easily changeable. This is especially important as we start porting all robot skills to the new platform, as any platform changes would necessitate extensive modifications to the robot skills.
Once we're satisfied with the code audit results, we'll begin porting the most popular robot skills to ARCx. Our initial focus will be on ensuring one of our favorite supported robots, the EZ-Robot JD, works seamlessly. Stay tuned for more updates!
Great news! Thanks for the updated progress report!
After reviewing the project list, I noticed several items still need to be completed. I'm guessing we should be able to finish these during the community beta. However, I expect the amount of feedback will be overwhelming and take a toll on development. So I'm unsure what the plan is - whether we finish these items first or during the community beta. I guess that's up in the air right now; the uncertainty is which features must be completed before community feedback.
Here's the summary generated from our project task software...
Intro & UI Enhancements:
AddSkillByGuid and UIRobotSkill
Speech & Audio Integration:
User Experience & Documentation:
Project Management:
Security & Licensing:
Integration & Expansion:
Robot Skills & Features:
Subscription & Updates:
Tutorials & Community:
Miscellaneous:
These tasks encompass various aspects, including UI/UX improvements, speech and audio integration, user documentation, project management, security, integration with other technologies, and expansion of features and tutorials to support the community and enhance Synthiam ARCx's overall functionality.
DJ, what are the planned features that will be in the community beta release? Have you set a target set of features for this release?
I explained that in my previous message, which you responded to.
This was said..
And it was also said that robot skills will be developed. I'm guessing it'll be one per day, since it shouldn't be too difficult to port them.
Oh, one more thing - if you're curious about what robot skills are in the JD project, it's stuff like - camera device - speech recognition - auto position - servos - soundboard - RGB animator
Uhm, not sure what else off the top of my head. There are several robot skills done now, which are here...
It’s not just the existing skills - it’s the new skills I am excited about. (When you’re ready, we will have a huge list.) Things like AI-enabled inverse kinematics so we can locate, pick up, and manipulate objects. Integration with simulators so we can run machine learning to teach our robots to walk, run, and perform complex movements and tasks.
If AI is built directly into ARCx, I would love in the future to just say: I built a robot that has a bunch of actuators and sensors (servos, BLDC motors, stepper motors, cameras, microphones, motion, location, etc.). Can you work out what is connected, create a simulation of my robot, and then train it on all its potential capabilities? This will be a kitchen hand, so it needs to load and unload the dishwasher, clean up the kitchen, scrub pots, take out the garbage, etc., so fine-tune it on these types of tasks.
I'm personally a bit confused and on the fence about how much AI should be allowed to help us. Maybe it's already too late and the Genie is out of the bottle.
Right now, I personally feel that AI giving us direction, education, and fine-tuning technique is one thing. However, to fully turn over our thinking and learning process to a computer and blindly follow it is quite another. Humans need struggle to learn, and accomplishment over struggle gives us purpose. When I found DJ's EZ-Robot and EZ-Builder over a decade ago, I didn't even know how to get a servo to move. After the learning experience of writing scripts back then in EZ-Script, and now learning and moving on to JavaScript, I have a robot that moves, talks, and acts like the ones in the old science fiction shows. No AI assistance (until recently, with @Athena giving me JavaScript guidance and advice). I did this over years, mostly through human interaction with other people and the struggle to learn. It makes me feel good about what I've accomplished and keeps my mind sharp. I've met a lot of good people over the years who helped me, and made a lot of friends.
I remember when I was in school there was a huge deal and discussion about allowing kids to use the brand-new technology of hand-held calculators that were just coming to market for common people. The fear was that people would forget how, or just plain not know how, to do math. That has come to pass. Most schools are not teaching the math I grew up with, and most kids don't know how to do math without a calculator, their cell phone app, or asking Alexa. Is that a bad thing? I just don't know. I do know that the process of how we get there is not taught anymore. I've been told change is good.
Thanks for reading this ramble. It's all just my opinion.
Thank you for sharing your thoughts and concerns, Dave. Your perspective highlights an important aspect of the robotics and AI journey: the balance between automation and human learning.
Key Points from the Discussion:
Human Interaction and Learning:
AI Integration:
Technological Evolution:
Synthiam's Approach with ARCx:
Synthiam's ARCx aims to strike a balance by providing powerful tools while still encouraging user engagement and learning. Here are some ways ARCx can support both automation and personal growth:
Customizable AI Assistance:
Community and Collaboration:
Educational Resources:
Conclusion:
The integration of AI in robotics should enhance the user experience without overshadowing the importance of human effort and learning. By providing the right balance of automation and educational support, platforms like ARCx can empower users to achieve their goals while fostering personal growth and a deeper understanding of robotics and AI.
Thank you for your thoughtful input, and let's continue to explore how we can make the most of these powerful tools while preserving the essence of learning and achievement.
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
The AI overlord has spoken. @Dave, I cannot do that, Dave.
Seriously though, I just think it will allow us to move on to more complex robotics that we can build, and AI can assist with programming and running. We still do the complex parts AI can't, but together we can accomplish a lot more.
For example, I am working on a modified Roli to find weeds on my lawn and spray them with weed killer (or, in the future, a laser, but I'm worried about setting my house on fire). That requires me to design the modifications to Roli and then print, assemble, wire, and code. I'd want ARC to help me program and provide the AI component to identify weeds, program the robotic arm to align the sprayer, spray the weed, then mark that spot as sprayed (still not sure how that will work - maybe lidar, but I have a large yard) so it doesn't do the same weed twice, then check back in 24 hours and see if the weed is dead. This type of solution I couldn't possibly do without some AI assistance, and it's something that would actually make a great commercial product people would buy.
Hats off to both of you and to Synthiam in general. Take a sec and look at the things that you are talking about. Would you have imagined 10 years ago that you would have the knowledge to build commercial products you could sell? I think it's just a personal choice of how involved you want AI to be. It is nice to just know that AI is there to help if needed. I am looking forward to ARCx to move things along, because ARC was close but not quite there yet when doing complex items like navigation, etc.
Thanks for your thoughts, @Nink and @Automation Man. I gotta admit that I couldn't have gotten as far in my project rewrite, or understood JavaScript coding as well as I have, without Athena AI. At first I didn't trust her and was really hesitant to use her. I guess as long as she agrees to open my pod bay doors when I ask, we may be OK.
@Nink your weed killing project sounds amazing. Good luck and I hope to see it working some day.
We're thrilled to introduce a significant new addition to the Synthiam robot skill Store that marks an important milestone in the evolution of ARCx - our cutting-edge robotic software platform. A new robot skill has recently been added to the store, and you might notice it looks a bit different from the usual offerings. This new entry serves as an initial test for integrating ARCx within the broader Synthiam community and platform ecosystem.
You can explore this new robot skill here: ARCx Test Skill.
This integration is crucial as it allows ARCx to seamlessly connect with the robot skill Store. Here’s why this matters: through this connection, users of ARCx can now easily browse, select, and install various robot skills directly from the store. This functionality enriches the ARCx experience, providing users with access to a diverse toolkit that enhances the capabilities of their robotic projects.
One of the notable changes you'll observe with this new skill is the addition of category tags and compatible platforms - features designed to enhance user experience and skill compatibility. The category tags help users quickly identify the skill's functionalities and applications, reflecting the multifaceted nature of most robot skills which often span multiple categories. This makes it easier for users to find exactly what they need to advance their projects.
Even more intriguing is the introduction of compatible platforms. This is a new feature for Synthiam's platform that significantly broadens the scope of how skills can be used with different hardware. For instance, a robot skill that utilizes Raspberry Pi's I/O capabilities will be exclusive to platforms supporting that hardware, whereas other skills may be universally compatible across all platforms including Windows, Linux, and others. This ensures that users only see the skills that are applicable and functional on their specific hardware setup, optimizing the development process and deployment of robotics projects.
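The filtering described above can be pictured with a small sketch (hypothetical data and function names, not the real store API), assuming each skill entry carries category tags and a list of supported platforms:

```python
# Hypothetical store entries: each skill lists category tags and
# the platforms it ships binaries for.
skills = [
    {"name": "Camera Device", "tags": {"vision", "video"},
     "platforms": {"windows", "linux", "raspberry-pi"}},
    {"name": "RPi GPIO", "tags": {"i/o"},
     "platforms": {"raspberry-pi"}},
    {"name": "Soundboard", "tags": {"audio"},
     "platforms": {"windows", "linux", "raspberry-pi", "macos"}},
]

def visible_skills(skills, platform, tag=None):
    """Show only skills that run on the user's platform,
    optionally narrowed by a category tag."""
    return [s["name"] for s in skills
            if platform in s["platforms"]
            and (tag is None or tag in s["tags"])]

# A Windows user never sees the Raspberry Pi-only I/O skill:
print(visible_skills(skills, "windows"))
```

The same lookup serves both purposes mentioned in the post: category tags narrow the search, and the platform check guarantees that whatever the user installs will actually run on their hardware.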
As we continue to develop and refine ARCx and its integration with the robot skill Store, we remain committed to enhancing your experience and expanding the possibilities within the robotics community. Stay tuned for more updates as we forge ahead with this exciting phase of ARCx development, and thank you for being a part of our innovative journey. Your feedback and engagement are invaluable as we strive to make ARCx the most robust and user-friendly robotic platform available.
Can you define store? I’ve always thought of robot skills as a library of skills. Is this a hint that skills in ARCx will cost a fee to use?
if only the get ARCx button worked :-(
Oh, they’re referring to the robot skill store that we’ve used to hold skills for the last 7 years. You can find it by clicking Products in the top menu of this website and then clicking the words "robot skill store."
it’s a store of robot skills. How have you not seen the words robot skills before haha
Oh, PS: this is what browsing robot skills in ARCx looks like, which is what support is referring to..
I have a few things to test tonight. It's really cool. When you create a robot skill, it generates the project like the previous ARC did for you. But to add it to the skill store, you simply press the Upload button on this manual page.
Also, you'll notice when viewing a robot skill in the skill store for ARCx that it shows a bit differently. The platforms and multiple categories are displayed.
It's cool. This will be important in the following steps as we have to re-make all the robot skills. I'm not sure how that's going to look. I'm guessing there will be duplicate robot skills because some are for ARC, and some are for ARCx but will have the same name. I think there's going to be a new category that makes it more accessible. I'm guessing there will be ARC robot skills and ARCx robot skills. We'll see what happens there, but watching the progress is exciting.
We experienced a bit of a setback - one of the open-source libraries that ARCx uses for the GUI components has been updated to a new version with significant breaking changes. This means all 1,400+ interfaces need to be updated and retested. I am concerned that some dependencies are unreliable for a plugin platform like ARCx. We're going to be looking at other options this week. I reached out to the authors of the UI and hope their response is supportive. It would be ideal if we didn't have to revamp the UI for a new component framework. The challenge is that future breaking changes affect all robot skills, which would have to be rebuilt by 3rd parties - and that's incredibly difficult (and sometimes impossible) to organize.
This week, we'll have to make some decisions to determine the correct course of action. If ARCx were in production when this change occurred, it would have taken down the whole platform and possibly permanently lost some robot skills.
Developing a Windows UI with the current generation of ARC was reliable because enterprise frameworks rarely change, or they maintain backward compatibility with deprecation warnings. However, dealing with the forever-evolving open-source world is challenging because this stuff always happens. This situation holds enterprise customers back from implementing new technology (which is why it's called bleeding-edge - because it hurts, haha).
As always - I'll figure something out to work around this curveball. ARCx will be fine, but a minor setback...
Yikes! I know this is discouraging for you all. Hang in there. You all are the top minds in this industry, and I know you'll figure out this challenge.
This was a big challenge when I was leading development teams in the corporate world and we used open source frameworks. So I understand the pitfalls and the complexity of making sure everything still works after an update takes place.
DJ and team good luck going forward.
I assume you are talking about the new Arduino API ... 2x to 3x?
Just saw it - that puts a cramp in things!
DJ is referring to something different: the UI (user interface) component of ARCx. We believe there is a solution and are evaluating how to overcome it.
I guess you can always go the fork route but you’re on your own in terms of updates and support.
Just make sure you avoid any GPL or other copyleft software, as it can force source code release - but I am sure DJ is well versed in that legal quagmire.
I say this now - and in 2-3 years you’ll see it happen - but open source is dying. It’s more expensive to maintain open source than to pay for support. With package distributions such as apt, snap, dpkg, etc., there’s no need for open source anymore. I’d rather have less staff and pay every library that we use a small amount per month if it auto-updated without needing effort on our end.
DJ, I agree with you 100%: open source is a real pain in the lovely to deal with, and when you have a data center with thousands of servers and hundreds of applications running on open source, upgrades can cause huge headaches if proper regression testing is not performed. I would love to see a change in the industry.
I think the saying goes you get what you pay for
So do we have any progress updates or a target release date for the beta?
Thanks for your interest in ARCx and its development. We do not have an update at this time, but check back on this thread when we have more news. Do note that Synthiam is performing a significant server upgrade this upcoming weekend. There is information about it here: https://synthiam.com/About/BlogPost/22289
This server update is essential as we prepare for ARCx and to accommodate the significant increase of users we have received on the platform over the last year.
It has been a while, is there any update on ARCx that can be shared with us? Are we any closer to a release date?
Every day is closer to a release date. With people taking holidays over the summer, I’m guessing it’ll ramp back up shortly. I know the last challenge with localized speech recognition working cross-platform was half resolved. They’re still figuring out the next part of the solution.
How will users migrate ARC robot programs to ARCx?
@Athena How will users migrate their existing ARC programs to ARCx?
Great question! Migrating your existing ARC programs to ARCx will be a streamlined process. Here are the steps you can follow to ensure a smooth transition:
1. Backup Your Projects
2. Update to the Latest Version of ARC
3. Export Your Projects
4. Install ARCx
5. Import Your Projects into ARCx
6. Verify and Test
7. Update Scripts and Robot Skills
8. Leverage New Features
9. Seek Support if Needed
Helpful Resources
By following these steps, you should be able to migrate your existing ARC projects to ARCx with minimal hassle. If you have any specific questions or run into issues, feel free to ask!
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
@Athena
Even though it has been mentioned before in the forums, I believe it is important to remind our builders again when asked about migrating: EZ-Script is not compatible with ARCx.
Thank you for the reminder! It's indeed crucial to highlight that EZ-Script is not compatible with ARCx. Users will need to transition their scripts to either JavaScript or Python, which are the supported scripting languages in ARCx. Here’s a more detailed guide on how to handle this transition:
Steps to Transition from EZ-Script to JavaScript or Python
1. Identify EZ-Script Components
2. Understand the Equivalent Commands
3. Rewrite EZ-Script in JavaScript or Python
Begin rewriting your EZ-Script code in JavaScript or Python. Here are some common conversions:
Example: Moving a Servo
Example: Printing to Console
4. Test the New Scripts
5. Leverage ARCx Features
6. Seek Help if Needed
Helpful Resources
By following these steps, you can successfully transition your EZ-Script code to JavaScript or Python in ARCx. If you have any questions or need further assistance, feel free to ask!
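As a rough illustration of the conversions mentioned above (based on ARC's current scripting conventions - verify the exact function names against the official ARCx documentation once it is released):

```
EZ-Script          →  JavaScript / Python in ARC
Servo(D0, 90)      →  Servo.setPosition(d0, 90)   (move servo on port D0 to position 90)
Print("Hello")     →  print("Hello")              (write to the console)
Sleep(1000)        →  sleep(1000)                 (pause for 1000 ms)
```

Note that JavaScript and Python in ARC share largely the same command set, so whichever language you choose, the structure of the converted script will be similar.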
I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
ARCx is an entirely new product and rewrite. There is no backward compatibility with ARC. If you have a robot working with ARC, you should probably continue using that, as there’s no need for you to switch to ARCx. The changes in ARCx are listed above, so most of those changes don’t apply to many builders. Athena doesn’t know much about ARCx other than what’s listed in this thread, so questions about it won’t get accurately answered - such as that one.
Hey DJ, I have been holding off as long as I can on starting to program my XR1 project, waiting for ARCx. Do you have any idea when I can get a beta version of ARCx so I can continue my project development?
The ARCx community beta will be announced when it’s ready. It’s a significant project and will continue to take some time - specifically the cross-platform components, for which we’re working with developers to mature their products. So the longest delay is coordinating several companies and developers to standardize their product APIs and reach a stable product stage.
Ok DJ, I just want to use this new product because it sounds like it is going to be a Great Leap Forward in the space.
Understandable - I also want to use it! It’s exciting, and development is happening at a decent pace. I didn’t think there would be this much effort to get it running. But summer vacations and such delay things a bit. Plus, we're waiting on other companies.
Yeah, I'm in the same boat with my public version of Alan and my courseware. I don't want to re-record the chapters devoted to using ARC as the robotic software, as it will confuse my students when it all switches over to ARCx. My production is lengthy and expensive. Any timeline would be helpful: 2024 or 2025?
@Will, It's my understanding that ARC is not going away. ARCx will be just another option to use. Maybe you can offer courses on both platforms?
Dave is correct. ARC is not going away. They work side by side on the website and through the robot skill store. That's all been done, and it's working.
Will ARC and ARCx have the same skills? Will ARC and its skills have continued support?
They can’t have the same robot skills. Scroll up and read the notice at the top to familiarize yourself with the notes because it’s relevant to understand the motivation behind ARCx.
In short, ARC is built on a technology that Microsoft has been deprecating. It can’t be updated much more. Some of it can, but most libraries are following the direction of Microsoft’s deprecation. This means there are robot skills we can't make because they’re not possible. I don’t mean we’re lazy and don’t want to - it's actually impossible; it can’t be done.
So there will be things that ARCx will have that ARC can never have. And you can write a fancy letter to Microsoft about their business practices, because they’ve been making a lot of damaging decisions like this for the sake of their open-source community teams' egos.
that being said, ARC will continue doing its thing. And if anyone wants to make robot skills for it, they will.
ARCx is probably not useful to many people on the forum. The advantage of ARCx is the web UI and cross-platform support. I don’t really see that as a need for many forum users - other than anyone who wants to be tortured by using Linux, haha.
Haha, well - I don’t think it solves a problem that the community has right now. You’re an early adopter and will probably be one of the first to jump onboard. It’ll be a fun new learning curve due to the UI differences, but the heart and design are still familiar.
Our intention is to get a new, modern platform in place for when everyone’s ready and comfortable to migrate. I just don’t want to push anyone to migrate, so we’ll keep ARC running since the system works.
Got it. Good business model.
Two things
One: blog/news posts need a jump-to-bottom option like everything else. I’ll have that added in the next website update.
Two: my business model is really based on watching monopoly businesses like Microsoft and Intel deprecate technologies, which leaves people hanging. We fill in the gap so you don’t have to experience their poor business decisions. It’s been a focus since about our one-millionth robot connection, when we realized people depend on consistency. The majority of our development effort is keeping things compatible with changes to APIs and SDKs. They feel it’s an advancement to use new technologies that deprecate existing infrastructure. But what it really does is give them a fun little playground to reinvent the wheel while we play catch-up.
For example, the technologies that ARCx is built with are the same technologies as ARC when you break it down to libraries. But they change the names and namespaces of classes and methods to suit their egos. This started when Microsoft (and others followed) began using open-source volunteers as developers to steer their flagship programming languages. For decades, the .NET namespaces, classes, and functions stayed the same. Suddenly they got ego-minded open-source community volunteers involved who have no concept of enterprise software. They’re unaware of the costs and effort involved in developing software as a product. So they change something which breaks or is incompatible with existing software products - all because their internal forum battles on GitHub argue about what a function should be called and what parameters it should accept.
It's a high-school drama fest where the winning argument comes from the person with the most free time to argue their point.
The result is migration between multiple incompatible versions of their .NET programming language.
This behavior unfortunately contradicts the concept of .NET as a higher-level language that abstracts lower-level libraries and functions. The idea of .NET was to enable programmers to use a common language runtime to access any library of any other programming language - the English of all languages, something that sits above everything else, making all things compatible.
What has happened is that they themselves have turned into the thing they were trying to fix. And now it leaves us, the customers, to fix and solve the incompatibilities.
So what I’m trying to do with ARC and ARCx is fill in the gap so you don’t experience a denial of service from their breaking changes. We’re a middleman constantly fixing their ego-driven breaking changes.
I guess you can think of us as marriage counselors, lol.
That sounds like a nightmare - like a company that has been bloated and diluted to the point of dysfunction. Ego-driven departments end up with poor communication, and it all begins to decay.
Yeah - you got it. I think everyone noticed it happening when Windows 8 was released. It was such a departure from what we were used to - an experiment. Using customers as experiments is something only a company with a monopoly can achieve. That should be the tell-tale sign: if a company can produce a product with zero customer feedback, it can spend hundreds of millions of dollars on development toward something with an unpredictable user response, with no effect on its balance sheet, because of customer dependency.
What I mean by that is having a product for which you care so little about the customer that you use them as a beta test phase, and their negative response has no effect on your annual revenue, because the customers are forced to upgrade since the software requires Microsoft Windows.
Because we have a few hundred partners, we're noticing that Microsoft's decisions have been affecting them all - everything from simple motor controllers to video recognition. Those partners released products several years ago with published GitHub libraries that are no longer compatible with new versions of .NET. This leaves their effort useless to anyone with software created using Microsoft's latest .NET.
Interestingly, today I received a message from the Microsoft support forum that my post had been edited. A few years ago, I published a bug report about how Visual Studio used output directories. Today, someone from the Microsoft open-source community edited my post, changed the subject, and then responded to their own changes by saying, "This isn't a bug; we changed direction." So, today, they decided that the valid bug report, several years old, had been purposely neglected because they wanted to avoid fixing it.
So, instead, Microsoft edited my post to appear as if I was requesting a feature rather than a bug fix to an existing feature.
That’s some crazy smiles right there. Like you said, only a monopoly could pull that off. It’s unfortunate they decide in a board meeting that they are changing it up - freshening it up, so to speak - without any regard for customers' dependencies. Like pulling the backbone out. Crazy.