Hey everyone - this is a quick note that the Synthiam servers will be offline on Tuesday, April 18th, beginning at 11:00 PM Mountain Time for approximately 3-4 hours (hopefully less). This should not affect the operation of ARC because the local subscription cache takes priority. However, the website and community forum will be offline.
- Pacific Time: 10:00 PM
- Mountain Time: 11:00 PM
- Central Time: 12:00 AM (April 19th)
- Eastern Time: 1:00 AM (April 19th)
- UTC: 5:00 AM (April 19th)

Additional time zones can be calculated here: https://www.worldtimebuddy.com/mst-to-utc-converter
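If you'd rather compute the conversion yourself, a small Python sketch like the one below does the same thing. The year is not stated in the post; 2023 is assumed here only so the daylight-saving rules resolve.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

# The outage starts at 11:00 PM Mountain Time on April 18th.
# The year 2023 is an assumption, used only so DST rules resolve correctly.
start = datetime(2023, 4, 18, 23, 0, tzinfo=ZoneInfo("America/Denver"))

for label, tz in [("Pacific", "America/Los_Angeles"),
                  ("Central", "America/Chicago"),
                  ("Eastern", "America/New_York"),
                  ("UTC",     "UTC")]:
    print(f"{label}: {start.astimezone(ZoneInfo(tz)):%b %d %I:%M %p}")
```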
Affected Services
- Cognitive services (Bing speech recognition, vision, emotion, face)
- Project cloud storage
- ARC diagnostics and logging
- Downloadable 3D printing files
- Website (swag, purchases, documentation, community forum, etc.)
- Account creation (from ARC)
- Online servo profiles
We're migrating the infrastructure to a faster server cluster and expanding storage. For those interested in the size of our infrastructure, the Synthiam platform consists of 5.8 million files totaling 350 GB. This is mostly cloud projects and the historical revisions of each saved file. I was just as surprised as you when I was informed of the file count for the platform! Each current server (website, cloud authenticator, logger, exosphere, file manager) is 4-core with 16 GB of RAM. The new servers are 16-core with 40 GB of RAM, so we should see a significant performance improvement, as the current servers are running full-tilt with the increased usage we've been experiencing.
There will be an information banner on the website throughout the day on Tuesday as a reminder. Ideally, the offline servers should not affect anything other than visiting the website; the subscription cache should take care of authentication while they're down.
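For anyone curious how that kind of fallback generally works, here's a rough cache-first sketch in Python. The endpoint, cache file name, and validity window are purely illustrative assumptions - not Synthiam's actual implementation.

```python
import json, time, urllib.request

AUTH_URL = "https://auth.example.invalid/validate"  # placeholder, not the real endpoint
CACHE_FILE = "subscription_cache.json"
CACHE_TTL = 7 * 24 * 3600  # assumption: honor a cached subscription for a week

def subscription_is_valid() -> bool:
    # 1) Trust a recent local cache first, so an offline server doesn't block ARC.
    try:
        with open(CACHE_FILE) as f:
            cached = json.load(f)
        if time.time() - cached["checked_at"] < CACHE_TTL:
            return cached["valid"]
    except (OSError, KeyError, ValueError):
        pass
    # 2) Only if the cache is missing or stale, ask the server and refresh the cache.
    try:
        with urllib.request.urlopen(AUTH_URL, timeout=5) as resp:
            valid = resp.status == 200
        with open(CACHE_FILE, "w") as f:
            json.dump({"valid": valid, "checked_at": time.time()}, f)
        return valid
    except OSError:
        return False  # no cache and no server: treat as unauthenticated
```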
What time zone is this?
Updated the first post
I hope your upgrade goes smoothly. I think the model cloud providers use to charge for servers (bare metal/IaaS/PaaS, etc.) needs to change. The cost increase to go from 4-core/16 GB to servers with 16 cores and 40 GB of RAM is a really hard pill to swallow on a monthly basis, especially when the $2000 PC on your desk probably has similar specs. Sure, cloud hosting includes power, network, backup, etc., but I have to imagine a lot of companies are going to consider going back to on-premise models and just consume SaaS services as needed, especially now that all the commercial real estate is sitting empty because no one wants to go back to the office.
That does open an interesting discussion. I think security (physical security) is one of the biggest factors companies consider when choosing cloud.

Physical security is about more than theft; it includes fire, flood, electrical spikes, power outages, etc.
The next benefit would be the virtual physical locations that cloud provides. These cloud infrastructures have a fast, huge network that stretches across the world, which allows you to choose where your server is stored. Essentially, it's not stored in a single location anyway, because it's just a process that floats across multiple available resources. But the virtual physical location is determined by the IP (to clients), so you can appear anywhere - and that's good for SEO and performance.
Now, those two aside, the server costs can be quite high. But that also saves a company from hiring a network/server admin, so there's cost savings there. If we wanted a local server cluster, we'd need to hire someone to maintain it. From the outside, you'd probably think there isn't a lot going on behind the scenes for Synthiam's platform - and I like that we've hidden the complexities. I like that we look simple.

There are quite a few servers doing many things, especially since we work with so many universities and colleges for exosphere and telepresence hosting. Also, the cloud project and archival history is a big feature that you'd be surprised how often is used. Consider a company where all a person or student does throughout the day is program servos to do things. They're constantly saving project revisions 3-5 times per hour. In an 8-hour day, 5 days a week, multiplied by many users - it adds up.
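To put rough numbers on that, here's the back-of-the-envelope math; the save rate comes from the paragraph above, while the user count is an arbitrary illustration.

```python
# Back-of-the-envelope revision volume. The save rate is from the post above;
# the 200-user figure is a made-up illustration.
saves_per_hour = 4         # midpoint of the 3-5 saves per hour mentioned
hours_per_day = 8
days_per_week = 5
users = 200                # hypothetical number of active students/employees

revisions_per_week = saves_per_hour * hours_per_day * days_per_week * users
print(revisions_per_week)  # 32000 new file revisions per week at these rates
```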
I think the huge downfall I've noticed with cloud is that scalability disappears on older offerings - they make money from customers by deprecating products. For example, our cluster was put up in 2018. It's only been 5 years, yet Azure has deprecated the server types we're using. This means there's no seamless upgrade path. We can't just push a button to grow the hard drive capacity or add more CPUs. Instead, everything has to be migrated to entirely new VMs.
If we had our own server, drives could be added or swapped in the RAID configuration to expand storage. That would have prevented downtime.

In summary, I'm a fan of local storage - I think cloud costs are out of hand. There are no savings in the cloud like there once were. If you do go with local storage, an offsite backup should be a top priority. Even with our cloud, we have an offsite backup at 2 AM daily.
I agree there are pros and cons to cloud: there's no capital outlay, if you architect your solution to use stretch clusters that take advantage of multi-zone regions you never have an outage, and you can scale up and scale down based on demand. There are also huge advantages if you utilize containerized microservices and Kubernetes or serverless computing, so you only pay for the compute used on demand.
The other gotcha I see companies miss when upgrading their servers is software license costs. For example, you take a 2-core Oracle DB, throw it on a 16-core server with VMware or Hyper-V, pin it to 2 cores, and you get hit with a software audit telling you to pay for 16 cores despite the fact you only had 2 vCPUs assigned. You spin up dev/test/staging environments and the software companies say you have to license those as well. In some areas you get a win: for example, Windows Server Datacenter edition on a 4-core box is already licensed up to 16 cores, so you can actually reduce costs if you consolidate to a single server with multiple VMs.
Okay, 5.8 million files... ugh - we've been at this non-stop for the last week. I can't believe they deprecated the server package we had, which prevents us from simply adding more storage space and more CPUs. It's so much work because of that - so we figured out how to copy portions of the data based on how often the data is accessed. We're prioritizing, which means some stuff might not be available for a day or two as the files migrate. The only files that would affect anyone on this forum are the cloud history revisions. Outside of that, the rest is exosphere machine learning data, and that won't affect anyone here on the forum. Those customers have been informed.
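A minimal sketch of that access-based prioritization could look like the following; the source path and the 30-day hot/cold cutoff are assumptions for illustration, not the actual migration tooling.

```python
import os, time

# Classify files into a hot tier (recently accessed, migrate first) and a
# cold tier (archival, migrate later). Path and cutoff are placeholders.
SRC = r"D:\cloud-data"
HOT_CUTOFF = time.time() - 30 * 24 * 3600   # accessed within the last 30 days

hot, cold = [], []
for root, _dirs, files in os.walk(SRC):
    for name in files:
        path = os.path.join(root, name)
        (hot if os.stat(path).st_atime >= HOT_CUTOFF else cold).append(path)

print(f"migrate first (hot): {len(hot)} files")
print(f"migrate later (cold/archival): {len(cold)} files")
```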
Typically, when I had to architect cloud migrations, I would propose using a data transfer tool like Aspera, which provides real-time encryption and uses a UDP hybrid protocol called FASP to migrate the files. I believe Aspera is available in Azure, and I'm sure there are other alternatives that provide similar high-speed encrypted UDP data transfer capabilities. (And no, there isn't any data loss.)
We don't need encryption because it's all within our virtual network in the cluster. UDP also requires a CRC checksum during transfer, which is additional overhead - with the file count we have to move, SMB will be fine. We're not using Windows copy - I created a multi-threaded copy utility that uses the Windows file system API. It works fast enough that the I/O on the SSD is at full tilt. Funny, the data folder is still called "ez-robot uploads" because it's been that way since 2011.
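For the curious, a minimal multi-threaded copy looks roughly like this in Python. This is not the author's Windows file system API utility - just an illustration of parallelizing many small file copies, with placeholder paths and thread count.

```python
import os, shutil
from concurrent.futures import ThreadPoolExecutor

# Copy a tree with many small files using a pool of worker threads.
SRC, DST = r"D:\ez-robot uploads", r"E:\new-cluster\uploads"  # placeholders

def copy_one(path: str) -> None:
    rel = os.path.relpath(path, SRC)
    target = os.path.join(DST, rel)
    os.makedirs(os.path.dirname(target), exist_ok=True)
    shutil.copy2(path, target)            # preserves timestamps

files = [os.path.join(root, name)
         for root, _dirs, names in os.walk(SRC)
         for name in names]

# More threads than cores helps keep the SSD busy while individual copies
# wait on filesystem metadata operations.
with ThreadPoolExecutor(max_workers=16) as pool:
    list(pool.map(copy_one, files))       # consume results so errors surface
```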
We will migrate to our locally hosted servers next - because this is ridiculous. Ironically, one of our ex-employees was chatting with me last night. He reminded me that in 2018, when we spun up the current production servers, Azure did the same thing - deprecated the "server package" - and we had to copy and rebuild everything. So it appears Microsoft deprecates server packages to make people pay for migration.