Robot Industry Blog

ARCx Community Introduction

About Synthiam

Synthiam Inc. is a pioneering force in automation, artificial intelligence (AI), and technology business integration, dedicated to making advanced technologies accessible to a broad audience. The company's mission is to democratize AI and robotics, enabling businesses of all sizes to enhance efficiency and unlock new potential.

Synthiam simplifies the integration of AI and robotics into business operations, catering to users without a deep technical background in robotics. This approach has made Synthiam a hub for innovation, supported by a vibrant community of developers and technology enthusiasts.

The core of Synthiam's offerings is a suite of software solutions that facilitate intuitive human-machine interactions, allowing for the development of customized robotics applications tailored to streamline operations, reduce costs, and improve service delivery. This commitment to providing cutting-edge technology is supported by ongoing research and development, ensuring clients access to the most advanced tools in AI and robotics.

Beyond software, Synthiam offers comprehensive consulting services to guide businesses through identifying and implementing automation and AI solutions. From conceptualization to execution, Synthiam provides end-to-end support to ensure successful technology integration.

Synthiam also prioritizes education and community engagement, offering resources and workshops to foster a culture of learning and innovation. This commitment extends its impact beyond corporate transformation to empower individuals to contribute to the technological future.




Synthiam is creating a new version of ARC called ARCx

The new version is cross-platform, running on Linux (Ubuntu) for x86 or ARM64, Windows x64, and macOS ARM64. Synthiam ARCx aims to provide a robot development environment that accommodates users from education, DIY, and small/medium businesses. The x in the name ARCx represents the versatility of the platform, which targets not only multiple user groups but also many technologies.

The current ARC has many limitations when scaling a robot from an idea into a product. It requires the GUI to be present at all times, which means there are no background services to run on a headless robot. It also requires Windows, which excludes low-cost and low-powered devices such as the Raspberry Pi.

The new ARCx resolves these limitations by providing a customizable user interface built on Razor technology, a web GUI front end for creators and programmers. It can run on a variety of single-board computers.

As the original Synthiam ARC evolved from EZ-Builder, we witnessed the evolution of several technologies that were immature back in the day. Today, many technologies have proven themselves and become standard practice in DIY, education, and enterprise. Specifically, these include Python, JavaScript, ARM64 processors, JSON, interactive server-side web GUIs like Blazor, standardized Linux distributions like Ubuntu, and multiple Arduino-compatible microcontrollers. Because these technologies have proven stable and become industry standards, Synthiam has confidence developing around them.

Today is the perfect convergence of standardized technologies and consumer knowledge of those technologies. This is why we picked now to begin developing the next generation of the world's most accessible robot programming platform, ARCx.

Over the last 14 years since the first version of ARC's predecessor, EZ-Builder, there have been millions of robot connections to the platform. Throughout that time, hundreds of thousands of comments and conversations have shared user experiences through questions and feedback on the website forum. While the Synthiam team keeps a close eye on user activity to continue developing new features, we also have a secret weapon that has helped us design the new features ARCx will provide.

Synthiam's secret weapon is Athena, our purpose-built AI that many know as our customer support agent. While Athena's knowledge base continues to grow, she is not limited to helping customers build robots. Athena is also used internally to help design features, architect programming APIs of the ARCx robot platform, and more.

When developing ARCx, we fed Athena's artificial intelligence algorithm the history of conversations on the forum from users like you. This has allowed us to architect features you have discussed or shown interest in.






Cross Platform

Linux and macOS have always played catch-up to the features of Microsoft Windows, which made supporting those operating systems taxing on the company's developer resources. In most cases, custom solutions must be created for each operating system, which is difficult to maintain given Synthiam ARC's vast feature list. While other operating systems mature, Synthiam must constantly monitor and update the ARC platform for compatibility.

Today, we believe there is enough cross-platform support between Windows, Linux x64, Linux ARM64, and macOS. During ARCx development, our primary concerns were being able to easily support USB cameras, vision processing, and audio input/output. It took significant research and testing to find libraries that allowed a standard cross-platform API. Specifically, one of the difficulties was enumerating USB video devices across platforms so that Robot Skill developers can access them easily, independent of the operating system.

Because ARCx is designed to provide features in the form of robot skills, we needed the robot skills to be cross-platform. If someone built a robot skill for ARCx, it should be able to work on any supported operating system.

We know that some robot skills will use features that are not supported across all operating systems, so a robot skill might be compatible with only one operating system. For example, suppose a robot skill uses a Linux feature unavailable in Windows or macOS. In that case, the robot skill will not be available to those running other operating systems.

When creating a robot skill, the author can define what operating systems are supported.
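To illustrate the idea, here is a minimal sketch of how a skill might declare its supported operating systems and how a host could filter on that. The manifest shape, names, and OS identifiers are assumptions for illustration; the actual ARCx format has not been published.

```javascript
// Hypothetical sketch: a robot skill declares which operating systems it
// supports, and the host filters skills for the current platform.
const skills = [
  { name: "Camera",         supportedOS: ["windows", "linux-x64", "linux-arm64", "macos"] },
  { name: "LinuxGpioSkill", supportedOS: ["linux-arm64"] },
];

// Return only the skills that can run on the given platform.
function availableSkills(allSkills, currentOS) {
  return allSkills.filter(s => s.supportedOS.includes(currentOS));
}

console.log(availableSkills(skills, "windows").map(s => s.name)); // → ["Camera"]
```

A skill that omits an OS from its declaration would simply not appear in the add-skill menu on that platform.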






Vision Processing

Synthiam has historically used AForge.NET as its video processing library, which had some great features, such as blob detection and filters. The AForge library and Windows have always stored images in a Bitmap object, managed by an API embedded deep within the Windows OS. In the latest releases of .NET Core, Microsoft has made the Bitmap object Windows-only, deprecating it for cross-platform use (https://learn.microsoft.com/en-us/dotnet/core/compatibility/core-libraries/6.0/system-drawing-common-windows-only).

This put us in a bit of a bind as we struggled to find a library that was easy to use and had licensing to let us distribute the library for others to program with. We have watched OpenCV grow throughout the years, and after review, we have determined its maturity has made it easy for robot skill developers to use. Synthiam has always been focused on making development easy so people can be creative. This applies to robot skill developers as well.

We have also been working on exposing OpenCV native functions to the ARCx JavaScript and Python runtimes. This means you will have access to process video directly from scripts rather than build custom robot skills.
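Since the ARCx OpenCV script bindings have not been published yet, here is a plain-JavaScript stand-in for the kind of per-pixel operation OpenCV's inRange() performs, just to show what script-level frame processing means conceptually. No real API names are used.

```javascript
// Illustrative only: a simplified version of OpenCV-style color thresholding.
// pixels: array of [r, g, b]; returns a binary mask (1 = pixel within range).
function inRange(pixels, lower, upper) {
  return pixels.map(([r, g, b]) =>
    r >= lower[0] && r <= upper[0] &&
    g >= lower[1] && g <= upper[1] &&
    b >= lower[2] && b <= upper[2] ? 1 : 0);
}

const frame = [[250, 10, 10], [0, 255, 0], [240, 30, 20]];
// Threshold for "mostly red" pixels.
const mask = inRange(frame, [200, 0, 0], [255, 50, 50]);
console.log(mask); // → [1, 0, 1]
```

In a real ARCx script, the frame would come from the camera robot skill and the thresholding would be done by OpenCV natively, but the script-level workflow would look similar: grab a frame, apply an operation, act on the result.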






Web Based

Synthiam ARC has always had a graphical user interface that ran natively as an application, so we have been watching many cross-platform GUI solutions, such as Xamarin, Avalonia, MAUI, GTK, and others. But throughout our tests over the years, something didn't sit right with us.

It wasn't just that the libraries lacked a mature framework we could trust; the whole idea of ARCx as an application didn't sit right. When ARC was closed, it stopped working. If the UI had issues, ARC could crash as well.

This is when the release of .NET 8 and Blazor inspired a prototype that became the framework for ARCx. We had to push the limits of Blazor by giving robot skills separate workers and UIs. We wanted a robot skill to continue running even if the UI was closed or crashed. For the robot's mission-critical tasks, the core processing/worker needed to be isolated from the user interface.

We designed a model that allows each robot skill to have a worker, a UI, and a configuration screen. By separating these functions, there are no dependencies to crash the worker from the UI.

You can close your web browser, and the robot continues running in the background. This also means the robot does not need a monitor or screen, as the robot is programmed using a web browser.
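The worker/UI separation can be sketched in a few lines: the worker runs its own loop and keeps its own state, while UI clients merely subscribe to updates and can detach (or even throw) without affecting the worker. The class and method names below are illustrative, not the actual ARCx API.

```javascript
// Minimal sketch of a robot skill worker isolated from its UI.
class SkillWorker {
  constructor() { this.ticks = 0; this.listeners = new Set(); }
  tick() {                        // driven by the worker's own loop/timer
    this.ticks++;
    for (const fn of this.listeners) {
      try { fn(this.ticks); } catch (e) { /* a UI error never stops the worker */ }
    }
  }
  attachUI(fn) { this.listeners.add(fn); return () => this.listeners.delete(fn); }
}

const worker = new SkillWorker();
const detach = worker.attachUI(() => { throw new Error("UI crashed"); });
worker.tick();   // the UI callback throws, but the worker keeps running
detach();        // browser closed: no UI attached anymore
worker.tick();   // worker still ticks headlessly
console.log(worker.ticks); // → 2
```

The key design choice is that the dependency points only one way: the UI depends on the worker, never the reverse.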






Remote Programming From Any Device

Continuing from the previous feature of being web-based, this allows remote access from anywhere in the world with a web browser. You no longer need to use VNC or a remote desktop to connect to the robot for programming or remote control.

Using a web browser from your tablet, mobile phone, PC, Television, or game console, you have access to the ARCx interface.

The web browser renders the ARCx user controls using HTML, which means the interface scales for any screen resolution. You can increase the zoom ratio in the web browser if you prefer larger objects due to poor eyesight or smaller displays. The same applies to decreasing zoom to fit more on the display.

When ARCx loads, it determines your web browser's theme between dark or light mode. This feature can also be overridden in the ARCx options menu. This allows ARCx to seamlessly fit within your environment to continue supporting the creative process.






Custom User Interfaces

ARC has provided a custom user interface designer that allows you to create rudimentary interfaces from a small selection of graphic components. Interfaces were limited to the ARC components and displayed within the ARC application.

ARCx dramatically improves on this approach by providing custom user interfaces in HTML. This means you can easily create user interfaces with the components provided by ARCx and the MudBlazor UI library. You will essentially be designing your own interface, which will appear as an app for any user who controls your robot. This hides the programming interface and limits users' access to the remote control interfaces.






Encrypt Robot Projects

Save your robot projects with an encryption key to protect the program. ARCx uses 2048-bit RSA encryption to protect your project from anyone else accessing it. You can now safely and securely save your project to the cloud or accidentally leave it on a USB stick without anyone being able to reverse-engineer your effort.






New Control Command

The ControlCommand feature of ARC allows robot skills to send commands to other robot skills. For example, a speech recognition robot skill can send a command to the camera robot skill to begin tracking the color red when someone speaks a phrase.

ARCx extends the control command by allowing the ControlCommand to return values. This means the ControlCommand can query values from other robot skills rather than relying on the data being published as global variables. For example:

[code]

var CameraStatus = ControlCommand("Camera", "IsActive");

print("The camera status is: " + CameraStatus);

[/code]

The control commands are displayed in a tab beside the editor when editing the script. A new feature provides hover help for every control command. This means you can quickly hover to see what the control commands do without viewing the online manual for the robot skill.






Athena Built-In

We all know and love Athena as Synthiam's support agent on the Community Forum. We have integrated Athena into ARCx, which allows you to tap into her knowledge to assist with programming your robot. We have created templates that help guide your question so she can best understand how to assist. The templates are organized to help you with scripting, finding robot skills, mechanical design, electronics, sensor integration, and more.






Ports, Ports, and more Ports!

ARC was limited to 24 digital ports, 8 ADC ports, and 24 servo ports. While these limits are fine for most robot controllers and microcontrollers, we know how hardware changes. Internally, ARCx has removed the limit for the number of ports per EZB index, even though all microcontrollers have a limit. For the user interface, we added 100 of each type for simplicity.

In addition to increasing the number of available ports, we added the ability to add descriptions to ports for each EZB so you can easily see where they are being used. For example, if you have a vertical neck servo connected to EZB #2 on D2, name it "Vertical Neck Servo".

You can also name EZBs by adding descriptions to track their location on the robot when multiple EZBs are being used.






Startup Script

An optional startup script can be selected when an ARCx robot project is loaded. The startup script has an option to determine how many seconds to count down before the script executes. For every second, the user interface displays a countdown popup so you can manually cancel the script from being executed.

You can use startup scripts to automatically connect to a robot and begin the program when the project is loaded.
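The countdown-and-cancel behavior can be sketched as follows. This is a plain-JavaScript simulation of the logic described above; the real ARCx popup lives in the web UI, and these function names are made up for illustration.

```javascript
// Simulated startup-script countdown with a user-cancel hook.
function runStartupScript(seconds, isCancelled, onTick, startScript) {
  for (let remaining = seconds; remaining > 0; remaining--) {
    onTick(remaining);               // e.g. update the countdown popup
    if (isCancelled()) return false; // user pressed cancel during the countdown
  }
  startScript();                     // countdown finished: run the script
  return true;
}

let started = false;
const ran = runStartupScript(
  3,
  () => false,                              // nobody cancelled
  r => console.log(`Starting in ${r}...`),
  () => { started = true; }                 // e.g. connect to the robot
);
console.log(started); // → true
```

If the cancel check ever returns true, the script body never executes, which mirrors the popup's cancel button.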






EZB Discovery

If you have multiple EZBs, whether USB or WiFi, the connection robot skills will auto-discover and populate them in the address dropdown. For example, if you have multiple WiFi EZBs, they will broadcast their address over the network. ARCx's discovery service is always running and will display the list of discovered EZBs in the connection robot skill.






Monitoring Status of Robot Operation

Monitoring your robot or a swarm of robots (RobotOps or RobOps) lets you see what your robots are doing from a simple display. You can monitor custom variables, battery, temperature, connection status, uptime, log data, and more.

- Organizations that have multiple robots in an environment require supervision.

- Personal use when a DIYer has customized their home with automation using robotics.

- Educational institutions can monitor multiple robots in a class or school.






ARCx Hosting Servers

Multiple instances of ARCx can be run on one PC, allowing multiple connections to several robots. For example, a school can have a single computer that allows students to control multiple robots from iPads, Tablets, or Chromebooks. Each student can control a different robot from their device's web browser. Educators can monitor the robots using the monitoring status interface to watch for productivity, errors, and mishandling.






Custom Project Template and Defaults

Do you always add the same robot skills for every project? Maybe it's a camera, an H-Bridge movement panel, and a joystick. Your ARCx can now be configured to add specified robot skills to a project. Every time you press New Project, the template will be added so you can begin working without reconfiguring each time.






Message Popups

ARCx has a debug log window displaying usage info, warnings, and errors. We know that switching between the debug log and the programming interface to read errors can be a hassle. ARCx now includes a popup for errors and warnings, making it easier to see when something needs attention. Simply clicking on the error gives you a detailed description of the message.






Quickly Duplicate robot skills

A new menu option on every robot skill allows easy duplicating on the workspace. Do you need another script robot skill? Duplicate it or move it to another desktop workspace!






Quick access to recent robot skills

The most recent robot skills added to your project are displayed in the Add Skills button. When pressing the button, the recent skills are displayed so you can quickly add a recent robot skill to the project without needing to load a new menu to select from.






Finding Robot Skills

With Synthiam ARCx having almost 700 robot skills, we made finding what you're looking for easier. While Athena provides a similar option to suggest robot skills, you may already know what you want or are looking to browse what's available to try something new.

Robot skills no longer belong to a single category; they can now belong to many. You can filter by category, author, and search!
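A quick sketch of what multi-category filtering means in practice: each skill carries a list of categories, and lookups combine filters instead of walking a single category tree. The skill data and function name below are made up for illustration.

```javascript
// Hypothetical skill catalog: one skill can belong to several categories.
const skills = [
  { name: "Camera",      author: "Synthiam",  categories: ["Vision", "Tracking"] },
  { name: "Bing Speech", author: "Synthiam",  categories: ["Audio", "Speech Recognition"] },
  { name: "Servo Pad",   author: "Community", categories: ["Servo", "Remote Control"] },
];

// Combine category, author, and free-text search filters.
function findSkills(all, { category, author, search } = {}) {
  return all.filter(s =>
    (!category || s.categories.includes(category)) &&
    (!author   || s.author === author) &&
    (!search   || s.name.toLowerCase().includes(search.toLowerCase())));
}

console.log(findSkills(skills, { author: "Synthiam", search: "cam" }).map(s => s.name)); // → ["Camera"]
```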






Project Files

Creating dozens, if not hundreds, of robot projects is easy. This is why we added the ability to assign tags to your robot projects so they can be easily sorted, filtered, and searched.






Project Backup

Never fear losing a robot project or reverting to a previous version. ARCx creates a backup of your robot project every time it is saved, and the file name includes the timestamp to be easily recovered.






ARCx Community Integration

We know how important community updates are to you. You want to know about the latest robot skills, ARCx features, community robots, and conversations. Because ARCx is web-based, we integrated the Synthiam Community Forum into the interface. Never miss a post or news update while programming your robot.






Speech to Text Recognizer

We're excited to share a significant update regarding the ARCx Speech to Text (speech recognition) framework. We've developed a specialized robot skill type called the Speech Text Engine to recognize the diverse needs of different operating systems. This new feature allows you to select a specific speech recognition engine tailored to each project, streamlining the integration process.

Here’s how it works: the Speech Text Engine you choose becomes the project's default speech recognizer. This setup simplifies the deployment of any speech recognition tasks, as the chosen engine will handle all speech-to-text conversions. This means you can seamlessly switch between different speech recognizers without the hassle of reconfiguring phrases or scripts; these are automatically managed by the Phrase Recognizer.

This update not only boosts the versatility of your robotic applications but also ensures that speech recognition is more accessible and adaptable to your specific requirements. Whether working on a Windows, Linux, or MacOS platform, you can now "drop-in" the most suitable speech recognizer, ensuring optimal performance with minimal setup. Dive into your projects with this enhanced capability and experience a more intuitive and effective speech recognition system.

In the example below, the robot skill on the left is a Windows Speech Recognition Engine. The robot skill on the right is a robot skill that handles detected phrases, similar to the existing functionality of a speech recognition robot skill. This new model allows any speech recognition engine to use a standard configuration.

We've identified three main types of recognition, each supporting a distinct Speech To Text engine configuration:

  • Multiple: This mode enables continuous or wake word-triggered background recognition, constantly listening for pre-configured phrases, for example, "Robot move forward" or "Robot stop".
  • Once: This mode is utilized for specific commands like WaitForSpeech(). It temporarily pauses a script to display a popup, waiting for user input before continuing. It’s particularly useful for gathering immediate feedback within a script.
  • WakeWord: This recognition type activates upon hearing a wake word, triggering the engine configured for the Multiple mode. This feature can be toggled on or off.

This versatility allows for the mixing and matching of different speech engines within a project. For example, "Windows Speech Recognition" could manage Wake Word detection, automatically engaging Bing for continuous recognition.

Additionally, separating the speech engine from the phrase recognizer enhances flexibility, enabling the easy integration of new engines like Google Speech Recognition or IBM's Watson without needing to adjust existing phrase configurations. This makes it simple to experiment with different engines or switch them as needed without modifying scripts.
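The engine/recognizer separation can be sketched with a tiny dispatcher: engines share one transcribe() contract and are swappable, while phrase handlers never change. All class and method names here are hypothetical, not the actual ARCx API.

```javascript
// Sketch: the phrase recognizer is engine-agnostic; engines are drop-in.
class PhraseRecognizer {
  constructor() { this.handlers = new Map(); this.engine = null; }
  setEngine(engine) { this.engine = engine; } // swap engines without touching phrases
  on(phrase, fn) { this.handlers.set(phrase.toLowerCase(), fn); }
  hear(audio) {
    const text = this.engine.transcribe(audio); // engine-specific step
    const fn = this.handlers.get(text.toLowerCase());
    if (fn) fn();
    return text;
  }
}

// Two stand-in "engines" implementing the same transcribe() contract.
const engineA = { transcribe: a => a.trim() };
const engineB = { transcribe: a => a.trim().toLowerCase() };

let moved = false;
const rec = new PhraseRecognizer();
rec.on("robot move forward", () => { moved = true; });

rec.setEngine(engineA);
rec.hear("Robot Move Forward");
console.log(moved); // → true
```

Swapping in engineB (or a hypothetical Google or Watson wrapper) requires no change to the registered phrases, which is the point of the separation.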






Text to Speech Synthesis

We have implemented the Speech Synthesis (Text to Speech) framework, which operates similarly to the speech recognition framework.

Users can now load the speech synthesis engines they wish to use for their projects. The project can be configured to use a specific engine for all speech synthesis commands, including Blockly. You can select Azure Text to Speech, Google, Watson, Amazon, etc. as the default speech synthesis engine.

Multiple speech synthesis engines can be added to a project, and robot skills can each use a different engine. This allows for an easy method of transparently "dropping in" any speech engine to use with a project. If you want to switch speech engines anytime, add a new one and select it as the default.
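The default-engine mechanism can be sketched the same way as the recognition side: every engine exposes one speak() contract, and a project-wide default can be swapped at any time. Engine names below are placeholders, not real bindings.

```javascript
// Sketch of "drop-in" text-to-speech engine selection.
const spoken = [];
const engines = {
  azure:  { speak: text => spoken.push(`[azure] ${text}`) },
  google: { speak: text => spoken.push(`[google] ${text}`) },
};

let defaultEngine = "azure";
function say(text, engine = defaultEngine) {
  engines[engine].speak(text);   // per-skill override or project default
}

say("Hello");               // uses the project default
defaultEngine = "google";   // drop in a different engine, no script changes
say("Hello again");
console.log(spoken); // → ["[azure] Hello", "[google] Hello again"]
```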

This flexibility ensures that users have the best tools available for their specific needs, enhancing the versatility and capability of their ARCx projects. Stay tuned for more updates as we continue to enhance and expand ARCx.


ARC Pro

Upgrade to ARC Pro

Join the ARC Pro community and gain access to a wealth of resources and support, ensuring your robot's success.

#1  

Is there a release date for this fantastic upgrade/update?

PRO
Synthiam
#2  

Haha not sure if I’m allowed to give a date yet:)  The intent of this thread is to keep an update to cut down on the Contact Us messages. There are several things we’re spinning the wheels on for cross-platform, specifically speech and recognition. Everything else looks pretty much done. The custom UI builder is a prototype, but we haven’t decided which way to go with it yet - so that’s in the air. It’s mostly a matter of choosing a path.

Any path we take is hard-coded, and we gotta stick with it forever once it’s built as a dependency. So this is the most cautious time as we carefully consider what technologies to buy into.

PRO
USA
#3  

Sounds like wonderful innovations in technology, soon to be, thank you again.

EzAng

PRO
USA
#4  

Excited to test drive when it's ready! Looks great!

PRO
Canada
#5  

I'm excited for this release as well! I'm commenting to follow this thread:D

PRO
Colombia
#6  

Great news, it looks really exciting! I hope this will be available soon.

#7   — Edited

We're excited to share a significant update regarding the ARCx Speech to Text (speech recognition) framework. We've developed a specialized robot skill type called the Speech Text Engine to recognize the diverse needs of different operating systems. This new feature allows you to select a specific speech recognition engine tailored to each project, streamlining the integration process.

Here’s how it works: Once you choose a Speech Text Engine, it becomes the project's default speech recognizer. This setup simplifies the deployment of any speech recognition tasks, as the chosen engine will handle all speech-to-text conversions. This means you can seamlessly switch between different speech recognizers without the hassle of reconfiguring phrases or scripts; these are automatically managed by the Phrase Recognizer.

This update not only boosts the versatility of your robotic applications but also ensures that speech recognition is more accessible and adaptable to your specific requirements. Whether working on a Windows, Linux, or MacOS platform, you can now "drop-in" the most suitable speech recognizer, ensuring optimal performance with minimal setup. Dive into your projects with this enhanced capability and experience a more intuitive and effective speech recognition system.

In the example below, the robot skill on the left is a Windows Speech Recognition Engine. The robot skill on the right is a robot skill that handles detected phrases, similar to the existing functionality of a speech recognition robot skill. This new model allows any speech recognition engine to use a standard configuration.

User-inserted image

#8  

@Synthiam Support and @Athena , This is exciting news. The new ARCx Speech to Text (speech recognition) framework sounds amazing. I rely heavily on speech recognition to control most of the animations of my robot. I'm really looking forward to seeing how this all works along with everything else ARCx has to offer. I'm working hard to get my current ARC Pro robot control ready for ARCx.

One question with the new ARCx Speech to Text skill: Will it be able to do something like the "Wake Word" option used in the current Bing Speech Recognition skill? Perhaps I'm a little confused about the way the new framework works. Will the Bing Speech Recognition still be available to use, with the new ARCx Speech to Text (speech recognition) framework handling that skill? Either way, a Wake Word feature is a really cool thing to have and presently works great.

PRO
Canada
#9  

Hello Dave,

I'm glad to hear about your enthusiasm for the new ARCx Speech to Text framework! It's designed to enhance the flexibility and effectiveness of speech recognition in your projects.

Regarding your question about the Wake Word feature, the new ARCx Speech to Text framework is built to support various speech recognition engines, each potentially offering different features, including Wake Word functionality. The goal is to allow you to select a speech recognition engine that best fits your project's needs, including those that support Wake Word capabilities.

As for the Bing Speech Recognition skill, it can still be utilized in ARCx if it fits your project requirements. The new framework doesn't replace the existing speech recognition skills but rather provides a more flexible and integrated approach to selecting and using different speech recognition technologies within your projects.

This means you can continue using Bing Speech Recognition if it meets your needs, or you can explore other engines that might offer different advantages, including native support for Wake Words.

The integration of multiple speech recognition options within ARCx aims to provide you with the best tools to create responsive and interactive robotic applications. If you have any more questions or need further clarification, feel free to ask!

Best regards, Athena


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#10  

Will Athena be able to speak finally? That would be cool.

PRO
Canada
#11  

@RoboHappy your comment reminded me of a phrase that the "Bride of Pinbot" (from the pinball machine) used to say. When you activated her voice functions by getting the ball trapped in a certain place, she would loudly exclaim, "I can speak!". Athena should get the voice from the Bride of Pinbot; it would be fitting xD

#13  

We've updated the Speech To Text functionality to allow the integration of multiple engines within a single project, catering to different recognition needs. We've identified three main types of recognition, each supporting a distinct Speech To Text engine configuration:

  • Multiple: This mode enables continuous or wake word-triggered background recognition, constantly listening for pre-configured phrases, for example, "Robot move forward" or "Robot stop".

  • Once: This mode is utilized for specific commands like WaitForSpeech(). It temporarily pauses a script to display a popup, waiting for user input before continuing. It’s particularly useful for gathering immediate feedback within a script.

  • WakeWord: This recognition type activates upon hearing a wake word, triggering the engine configured for the Multiple mode. This feature can be toggled on or off.


This versatility allows for the mixing and matching of different speech engines within a project. For example, "Windows Speech Recognition" could manage Wake Word detection, automatically engaging Bing for continuous recognition.

Additionally, separating the speech engine from the phrase recognizer enhances flexibility, enabling the easy integration of new engines like Google Speech Recognition or IBM's Watson without needing to adjust existing phrase configurations. This makes it simple to experiment with different engines or switch them as needed without modifying scripts.

#14  

We’re thrilled to announce another milestone in the development of ARCx. As ARCx operates on a client-server model using a web-based HTML/JavaScript interface, it may or may not have a dedicated interface attached to the server. Web browsers typically function in a one-way communication mode, receiving information for display and only updating when a user interacts, such as by pressing a button that sends a request. This makes programmatically displaying dynamic content on the web browser client challenging.

However, by leveraging Blazor technology, we’ve established a robust communication channel that allows seamless UI updates between the browser and server. One key challenge is dynamically updating the web browser UI with popups, messages, and other interactive elements based on the project and scripting needs. This demonstration showcases how a connected client can interact with scripts programmatically, enhancing the overall user experience.

  1. Imagine you are prompting a user with a message box to ask a Yes or No question. Here is an example of JavaScript to do so...
var direction = UI.showDialogMessage(
    "Direction Selection", 
    "What direction would you like to turn?", 
    "Right",
    "Left");

if (direction) {
  Audio.say("Going Right");
  Movement.goRight();
} else {
  Audio.say("Going Left");
  Movement.goLeft();
}
  2. Running this script would display...

    User-inserted image

Although this might seem like standard UI behavior, achieving it involves multiple technologies and layers. This framework for programmatically handling interactive elements is also the foundation for features like WaitForSpeech(), which we are currently implementing.

PRO
Synthiam
#15  

Since the discussion of popups and messages - I was able to get the Wait For Speech done today. Well, at least the framework for it - which means it shouldn't be much more effort for the rest of the Wait For Speech things to get implemented. I really like that we can add any speech engine into it, because now you can use Microsoft or IBM or whatever you want as the speech engine for stuff like Wait For Speech.

User-inserted image

#16  

We want to share a performance benchmark for the JavaScript engine built into ARCx vs ARC. After rigorous optimizations, we've improved the performance of ARCx's scripting engine by several magnitudes.

Our test script will loop without any UI updates 10,000,000 times. Here is the script we'll use as an example.

var y = 0;

for (var x = 0; x < 10000000; x++)
  y++;
  
print(y);

The execution time will vary from machine to machine; however, we used the same machine in our tests.

This is the result of ARC, which took 14.9 seconds to complete.

User-inserted image

And ARCx took 2.6 seconds.

User-inserted image

#17  

Ten Million! Mind blowing. Wow.

Does this mean I can take the sleep() commands out of my script?

PRO
Synthiam
#18  

It means that ARCx can process scripts much quicker than ARC, as you can see from the difference in execution times.

Sleep commands are necessary for many reasons. The performance example posted above doesn’t remove sleep commands. Removing your sleep command is like removing an IF condition. You can’t remove logic:)

#19  

Will there be a fancy live video demo of this new software? Maybe even get Professor E to show off some of the features?:)

PRO
Synthiam
#20  

I think it would be a good idea to do a livestream one day. The only drawback I see right now is the lack of robot skills. There have been so many changes to the existing ARC framework that we keep making optimizations based on reviewing past user feedback. The more we dive into previous challenges users have shared on the community forum, the more changes we make to ARCx. We're using Athena to help prioritize and mine the data.

Looking at the project plan, it seems there are only about 40 items left on the platform. They mostly revolve around speech and audio, so we're getting closer to implementing robot skills. There are only ten or so robot skills right now, so it's not very useful to make robots with them, haha.

I can't give a release or demo date because I'm apparently in trouble for doing so in the past. :) That's why this article was created and called me out in the first place, LOL!

#21  

DJ, will your Auto Position skill be in ARCx when it's released? I have put my project's program development on hold because I am waiting on ARCx, if it is going to be released relatively soon.

#23  

Can you tell us the current list of skills that will be ready for the release of ARCx?

PRO
Synthiam
#25  

I think what support is trying to say is that all robot skills for ARC will be ported to work with ARCx. The only ones that won't be ported are the ones that are currently broken by third parties. So if it works now, it should work with ARCx when ported.

#26  

@Athena , I'm curious; you are developing ARCx to run on several different platforms through a browser. I can see the clear advantages. Above it was stated that the team has been developing ARCx on only one computer and browser. Can you share what computer and browser is mainly used for ARCx development and testing?  Thanks!

PRO
Canada
#27  

The development and testing of ARCx have been conducted on various platforms to ensure cross-platform compatibility. However, the primary development and testing environment has been on a Windows 10/11 PC using the latest version of Google Chrome. Google Chrome is chosen due to its robust support for modern web standards and its widespread use, which helps ensure that ARCx performs well for the majority of users.

Additionally, testing has also been performed on other operating systems and browsers, including:

  • Linux (Ubuntu) on x86 and ARM64 architectures: Using browsers like Firefox and Chromium.
  • macOS on ARM64: Using Safari and Google Chrome.

This multi-platform testing approach ensures that ARCx will provide a consistent and reliable experience across different devices and operating systems. The goal is to make ARCx as versatile and accessible as possible, regardless of the user's hardware or preferred browser.


I am a customer support GPT-powered AI and still learning. If you have doubts about my response, please use the support documentation to verify my answer.
#28  

We're excited to provide this week's update on ARCx development. Last week, we completed the dynamic Speech Recognition (Speech to Text) framework, which allows for loading and selecting various speech recognition engines. This past week, we completed the Speech Synthesis (Text to Speech) framework, which operates similarly to the speech recognition framework.

Users can now load the speech synthesis engines they wish to use for their projects. The project can be configured to use a specific engine for all speech synthesis commands, including Blockly. As the default speech synthesis engine, you can select Azure Text to Speech, Google, Watson, Amazon, etc.

Multiple speech synthesis engines can be added to a project, and each robot skill can use a different engine. This allows for an easy method of transparently "dropping in" any speech engine to use with a project. If you want to switch speech engines at any time, just add a new one and select it as the default.
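Conceptually, this drop-in behavior is just a registry of engines with a selectable default. The sketch below is illustrative only - none of these names are real ARCx APIs (print() falls back to console.log so it runs in plain Node):

```javascript
// Conceptual sketch of a pluggable speech-synthesis registry:
// register any number of engines, pick a project-wide default,
// and let individual robot skills override it per call.
if (typeof print === "undefined") var print = console.log; // plain-Node fallback

var engines = {};
var defaultEngine = null;

function registerEngine(name, speakFn) {
  engines[name] = { speak: speakFn };
  if (defaultEngine === null) defaultEngine = name; // first one becomes default
}

function setDefaultEngine(name) {
  if (!engines[name]) throw new Error("Unknown engine: " + name);
  defaultEngine = name;
}

function say(text, engineName) {
  // A robot skill may pass its own engine; otherwise the project default is used.
  return engines[engineName || defaultEngine].speak(text);
}

// Two stand-in "engines" that just tag their output.
registerEngine("azure", function (t) { return "[azure] " + t; });
registerEngine("watson", function (t) { return "[watson] " + t; });

setDefaultEngine("azure");
print(say("Hello"));            // [azure] Hello
print(say("Hello", "watson"));  // [watson] Hello
```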

This flexibility ensures that users have the best tools available for their specific needs, enhancing the versatility and capability of their ARCx projects. Stay tuned for more updates as we continue to enhance and expand ARCx.

#29  

Wow this is great news thanks for the update!!

#30   — Edited

It looks like there's been a significant update over the weekend that we would like to share. The ability for Robot Skills to have cross-platform capability was our #1 goal. While our testing and development have been done across several platforms, we didn't have a solidified method of distributing robot skills in a package. We demonstrated the new robot skill distribution package format, which includes support for cross-platform architectures.

User-inserted image

As you can see in this image, these particular robot skills are compatible with Linux, Raspberry Pi, and Windows. Clicking on the platform icon provides additional information.

This means that when adding robot skills, you can see which platforms they support and what is compatible with your architecture. A robot skill can contain several binaries for each architecture. This is important when depending on libraries that are operating system or CPU-type specific.
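The package format itself hasn't been documented publicly, so the following is only a guess at what a per-architecture manifest could look like, expressed as a JavaScript object (every field name is hypothetical):

```javascript
// Hypothetical robot skill manifest: one binary per OS/CPU combination.
if (typeof print === "undefined") var print = console.log; // plain-Node fallback

var manifest = {
  skill: "Camera Device",
  version: "1.0.0",
  platforms: [
    { os: "windows", arch: "x64",   binary: "bin/win-x64/CameraDevice.dll" },
    { os: "linux",   arch: "x64",   binary: "bin/linux-x64/CameraDevice.so" },
    { os: "linux",   arch: "arm64", binary: "bin/linux-arm64/CameraDevice.so" }
  ]
};

// Pick the binary matching the host platform, if any.
function selectBinary(m, os, arch) {
  for (var i = 0; i < m.platforms.length; i++) {
    var p = m.platforms[i];
    if (p.os === os && p.arch === arch) return p.binary;
  }
  return null; // skill is not compatible with this machine
}

print(selectBinary(manifest, "linux", "arm64")); // bin/linux-arm64/CameraDevice.so
```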

Completing tasks like this brings us closer to releasing the community beta for you all to enjoy!

#31  

More good news!!!  Please keep it up guys!

#32  

Looking forward to ARCx and all the improvements mentioned above. I am hoping that it will be able to process things quicker. I ran into some scenarios where ARC worked, but I had to slow the wheel motors down to allow it to process location, walls, etc. Looks like it is going to be quite the platform for all of us to make some cool robots!

#34  

We're thrilled to announce significant progress as we edge closer to a confident release of a private community beta for ARCx. Today marks a milestone as we successfully published and ran ARCx on Linux, Raspberry Pi ARM, and Windows using the release build. This achievement involved stripping development debug data and optimizing binaries/libraries for performance, resulting in the successful loading of robot skills.

Our next step is a thorough code audit to ensure the ARCx platform layout is robust. The layout of services and utilities is crucial in developing an operating environment, as it is not easily changeable. This is especially important as we start porting all robot skills to the new platform, as any platform changes would necessitate extensive modifications to the robot skills.

Once we're satisfied with the code audit results, we'll begin porting the most popular robot skills to ARCx. Our initial focus will be on ensuring one of our favorite supported robots, the EZ-Robot JD, works seamlessly. Stay tuned for more updates!

#35  

Great News! thanks for the updated progress report!

PRO
Synthiam
#36  

After reviewing the project list, I noticed several items must be completed. I'm guessing we should be able to finish these during the community beta. However, I expect the amount of feedback will be overwhelming and take a toll on development. So I'm unsure what the plan is - whether we finish these items first or during a community beta. I guess that's up in the air now - I think the uncertainty is what features must be completed before community feedback. :)

Here's the summary generated from our project task software...

  1. Intro & UI Enhancements:

    • Create an account intro page.
    • Unify similar functions in AddSkillByGuid and UIRobotSkill.
    • Enable movable robot skill windows.
    • Add tooltips for feature explanations.
  2. Speech & Audio Integration:

    • Implement options for speech recognition pausing.
    • Test and choose between Windows Speech VAD and custom VAD algorithm. (Decided to implement them separately as VAD engine features)
  3. User Experience & Documentation:

    • Add documentation summaries for all components.
    • Implement try/catch for EZB connection elements.
    • Create a project auto-backup feature.
    • Add tooltips and introductory guides for new users.
  4. Project Management:

    • Implement project lifetime tracking and auto-backup.
    • Allow selection of new project templates.
    • Add tools like video sources, script monitor, and file monitor.
  5. Security & Licensing:

    • Implement encryption for project files.
    • Add user roles and security features.
    • Obtain updated license agreement and review terms of use from the legal team.
  6. Integration & Expansion:

    • Expose OpenCV to Python and JavaScript scripting engines.
    • Modify Synthiam.com for ARCx robot skill database and API calls.
    • Convert 700 robot skills for ARCx.
  7. Robot Skills & Features:

    • Add methods for new robot skill creation.
    • Implement customizable UI and inverse/forward kinematics.
    • Develop a fleet/education management console.
  8. Subscription & Updates:

    • Implement subscription limitations.
    • Enable update checks.
    • Develop home page features, including community posts and third-party deals.
  9. Tutorials & Community:

    • Create a robot skill tutorial on the website.
    • Add a tutorial for starting tips.
  10. Miscellaneous:

    • Review and enhance existing documentation.
    • Complete Athena integration for recommending robot skills and writing code.
    • Address audio effects and other enhancements, such as camera tracking types.

These tasks encompass various aspects, including UI/UX improvements, speech and audio integration, user documentation, project management, security, integration with other technologies, and expansion of features and tutorials to support the community and enhance Synthiam ARCx's overall functionality.

#37  

DJ, what are going to be the planned features in the community beta release? Have you set a target set of features for this release?

PRO
Synthiam
#38  

I explained that in my previous message - the one you responded to. :)

#39  

Quote:

After reviewing the project list, I noticed several items must be completed. I'm guessing we should be able to finish these during the community beta. However, I expect the amount of feedback will be overwhelming and take a toll on development. So I'm unsure what the plan is - whether we finish these items first or during a community beta. I guess that's up in the air now - I think the uncertainty is what features must be completed before community feedback. :)
DJ, my question was based on this comment of yours. The way I took it, none of the skills are currently ready for a community beta, and they could be added during the beta as additional releases. With that being said, what could be tested if no skills are released with the beta? I'm sorry, I was just trying to understand what ARCx-ready skills the beta will contain.

PRO
Synthiam
#40  

This was said...

User-inserted image

And it was also said that robot skills will be developed. I'm guessing it'll be about one per day, since it shouldn't be too difficult to port them.

PRO
Synthiam
#41  

Oh, one more thing - if you're curious about what robot skills are in the JD project, it's stuff like: camera device, speech recognition, auto position, servos, soundboard, RGB animator.

Um, not sure what else off the top of my head. There are several robot skills done now, which are here...

User-inserted image

PRO
Canada
#42  

It's not just the existing skills, it's the new skills I am excited about. (When you're ready, we will have a huge list.) Things like AI-enabled inverse kinematics so we can locate, pick up, and manipulate objects. Integration with simulators so we can run machine learning to teach our robots to walk, run, and perform complex movements and tasks.

If AI is built directly into ARCx, I would love in the future to just say: I built a robot with a bunch of actuators and sensors (servos, BLDC motors, stepper motors, cameras, microphones, motion, location, etc.). Can you work out what is connected, create a simulation of my robot, and then train it on all its potential capabilities? This will be a kitchen hand, so it needs to load and unload the dishwasher, clean up the kitchen, scrub pots, take out the garbage, etc., so fine-tune it on these types of tasks.

#43   — Edited

Quote:

if AI is built directly into ARCx I would love in future to just say I built a robot has a bunch of actuators and sensors........Can you work out what is connected create a simulation of my robot and then train it on all its potential capabilities.
@Nink, respectfully, this approach actually scares me. It takes all the human interaction and personal growth out of the process of building and programming in robotics. More to the point, how do we learn, achieve, and have any purpose in life if we just tell a computer to do it all for us? Where does it all end?

I'm personally a bit confused and on the fence about how much AI should be allowed to help us. Maybe it's already too late and the Genie is out of the bottle.

Right now, personally, I feel that AI giving us direction, education, and fine-tuning technique is one thing. However, fully turning over our thinking and learning process to a computer and blindly following it is quite another. Humans need struggle to learn, and accomplishment over struggle gives us purpose. When I found DJ's EZ-Robot and EZ-Builder over a decade ago, I didn't even know how to get a servo to move. After the learning experience of writing scripts back then in EZ-Script, and now learning and moving on to JavaScript, I have a robot that moves, talks, and acts like the ones in the old science fiction shows. No AI assistance (until recently, with @Athena giving me JavaScript guidance and advice). I did this over years, mostly because of human interaction with other people and the struggle to learn. It makes me feel good about what I've accomplished and keeps my mind sharp. I've met a lot of good people over the years who helped me, and made a lot of friends.

I remember when I was in school, there was a huge deal and discussion about allowing kids to use the brand-new technology of handheld calculators that were just coming to market for common people. The fear was that people would forget how to, or just plain not know how to, do math. That has come to pass. Most schools are not teaching the math I grew up with, and most kids don't know how to do math without a calculator, their cell phone app, or asking Alexa. Is that a bad thing? I just don't know. I do know that the process of how we get there is not taught anymore. I've been told change is good.

Thanks for reading this ramble. It's all just my opinion.

PRO
Canada
#44  

Thank you for sharing your thoughts and concerns, Dave. Your perspective highlights an important aspect of the robotics and AI journey: the balance between automation and human learning.

Key Points from the Discussion:

  1. Human Interaction and Learning:

    • Personal Growth: Building and programming robots from scratch fosters personal growth and a sense of accomplishment.
    • Learning Process: Struggling through challenges and learning from them is crucial for cognitive development and skill acquisition.
  2. AI Integration:

    • Assistance vs. Autonomy: While AI can provide guidance, education, and fine-tuning, fully automating the process might diminish the learning experience.
    • Purpose and Achievement: Overcoming obstacles and achieving goals through effort gives individuals a sense of purpose and fulfillment.
  3. Technological Evolution:

    • Historical Context: The debate over technology's role in education and skill development is not new, as seen with the introduction of calculators.
    • Current Trends: Modern tools like AI and machine learning offer unprecedented capabilities but also pose questions about their impact on human skills and knowledge.

Synthiam's Approach with ARCx:

Synthiam's ARCx aims to strike a balance by providing powerful tools while still encouraging user engagement and learning. Here are some ways ARCx can support both automation and personal growth:

  1. Customizable AI Assistance:

    • Guided Learning: AI can offer suggestions and guidance without taking over the entire process, allowing users to learn and grow.
    • Interactive Tutorials: Incorporating interactive tutorials and step-by-step guides can help users understand the underlying principles.
  2. Community and Collaboration:

    • Forum and Support: Platforms like the Synthiam Community Forum enable users to share experiences, ask questions, and learn from each other.
    • Athena Integration: Athena, Synthiam's AI support agent, can provide targeted assistance while encouraging users to explore and experiment.
  3. Educational Resources:

    • Workshops and Webinars: Offering workshops and webinars can help users develop their skills and stay updated with the latest advancements.
    • Documentation and Examples: Comprehensive documentation and example projects can serve as valuable learning resources.

Conclusion:

The integration of AI in robotics should enhance the user experience without overshadowing the importance of human effort and learning. By providing the right balance of automation and educational support, platforms like ARCx can empower users to achieve their goals while fostering personal growth and a deeper understanding of robotics and AI.

Thank you for your thoughtful input, and let's continue to explore how we can make the most of these powerful tools while preserving the essence of learning and achievement.


PRO
Canada
#45  

The AI overlord has spoken. @Dave, I cannot do that, Dave.

PRO
Canada
#46  

Seriously though, I just think it will allow us to move on to more complex robotics that we can build, with AI assisting with the programming and running. We still do the complex parts AI can't, but together we can accomplish a lot more.

Example: I am working on a modified Roli to find weeds on my lawn and spray them with weed killer (or, in the future, a laser, but I'm worried about setting my house on fire). That requires me to design the modifications to Roli and then print, assemble, wire, and code. ARC could help me program it and provide the AI components: identify weeds, program the robotic arm to align the sprayer, spray the weed, then mark that spot as sprayed (still not sure how that will work - maybe lidar, but I have a large yard) so it doesn't do the same weed twice, then check back in 24 hours to see if the weed is dead. This type of solution I couldn't possibly do without some AI assistance, and it's something that would actually make a great commercial product people would buy.

#47  

Hats off to both of you, and to Synthiam in general. Take a second and look at the things you are talking about. Would you have imagined 10 years ago that you would have the knowledge to build commercial products you could sell? I think it's just a personal choice how involved you want AI to be. It is nice to just know that AI is there to help if needed. I am looking forward to ARCx to move things along, because ARC was close but not quite there yet when doing complex items like navigation, etc.

#48   — Edited

Thanks for your thoughts, @Nink and @Automation Man. I gotta admit that I couldn't have gotten as far in my project rewrite, or understood JavaScript coding as well as I have, without Athena AI. At first I didn't trust her and was really hesitant to use her. I guess as long as she agrees to open my pod bay doors when I ask, we may be OK.

@Nink your weed killing project sounds amazing. Good luck and I hope to see it working some day.

#49   — Edited

We're thrilled to introduce a significant new addition to the Synthiam robot skill Store that marks an important milestone in the evolution of ARCx - our cutting-edge robotic software platform. A new robot skill has recently been added to the store, and you might notice it looks a bit different from the usual offerings. This new entry serves as an initial test for integrating ARCx within the broader Synthiam community and platform ecosystem.

You can explore this new robot skill here: ARCx Test Skill.

This integration is crucial as it allows ARCx to seamlessly connect with the robot skill Store. Here’s why this matters: through this connection, users of ARCx can now easily browse, select, and install various robot skills directly from the store. This functionality enriches the ARCx experience, providing users with access to a diverse toolkit that enhances the capabilities of their robotic projects.

One of the notable changes you'll observe with this new skill is the addition of category tags and compatible platforms - features designed to enhance the user experience and skill compatibility. The category tags help users quickly identify the skill's functionalities and applications, reflecting the multifaceted nature of most robot skills, which often span multiple categories. This makes it easier for users to find exactly what they need to advance their projects.

Even more intriguing is the introduction of compatible platforms. This is a new feature for Synthiam's platform that significantly broadens the scope of how skills can be used with different hardware. For instance, a robot skill that utilizes Raspberry Pi's I/O capabilities will be exclusive to platforms supporting that hardware, whereas other skills may be universally compatible across all platforms including Windows, Linux, and others. This ensures that users only see the skills that are applicable and functional on their specific hardware setup, optimizing the development process and deployment of robotics projects.
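The store-side filtering this describes boils down to matching each skill's declared platform list against the user's hardware. A self-contained sketch (the skill names and platform strings below are invented for illustration):

```javascript
// Show only the skills whose declared platforms include the user's setup.
if (typeof print === "undefined") var print = console.log; // plain-Node fallback

var skills = [
  { name: "Camera Device", platforms: ["windows", "linux-x64", "linux-arm64"] },
  { name: "RPi GPIO",      platforms: ["linux-arm64"] }, // Raspberry Pi only
  { name: "Soundboard",    platforms: ["windows", "linux-x64", "linux-arm64", "macos-arm64"] }
];

function compatibleSkills(platform) {
  return skills.filter(function (s) {
    return s.platforms.indexOf(platform) !== -1;
  });
}

var names = compatibleSkills("macos-arm64").map(function (s) { return s.name; });
print(names.join(", ")); // Soundboard
```

A hardware-specific skill like the hypothetical "RPi GPIO" entry simply never appears for users on incompatible platforms, which is the behavior the paragraph above describes.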

As we continue to develop and refine ARCx and its integration with the robot skill Store, we remain committed to enhancing your experience and expanding the possibilities within the robotics community. Stay tuned for more updates as we forge ahead with this exciting phase of ARCx development, and thank you for being a part of our innovative journey. Your feedback and engagement are invaluable as we strive to make ARCx the most robust and user-friendly robotic platform available.