Cloud Provider Comparisons: AWS vs Azure vs GCP – Artificial Intelligence and Machine Learning

Episode description

In Cloud Provider Comparisons, we take a look at the same cloud services across the three major public cloud providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). In this video, we focus on artificial intelligence (AI) and machine learning (ML). We compare the major ‘building block’ services (those you can use without having to know much about ML), explainability and bias services, and ML infrastructure and platforms. If you’re curious about how the AI and ML services of AWS, Azure and Google Cloud Platform match up, join ACG Senior Training Architect Scott Pletcher and watch on to find out more!

Timestamps:
0:00 Introduction
0:52 What is AI and ML (and what’s the difference)?
2:41 What this episode will cover
3:33 Machine learning building block services
7:56 Machine learning platforms
10:20 Machine learning infrastructure
12:30 Machine learning explainability and bias

Here’s some more detail on the AI and ML services we’ll cover in this episode:

ML building blocks:

  • Speech to text: Amazon Transcribe, Azure Speech to Text, GCP Speech to Text
  • Text to speech: Amazon Polly, Azure Text to Speech, GCP Text to Speech
  • Chatbots: Amazon Lex, Azure Language Understanding, GCP DialogFlow
  • Language translation: Amazon Translate, Azure Translator, Google Cloud Translation
  • Text analytics: Amazon Comprehend, Azure Text Analytics, GCP Natural Language
  • Document analysis: Amazon Textract, Azure Form Recognizer, GCP Document AI
  • Image & video analysis: Amazon Rekognition, Azure Computer Vision and Video Indexer, GCP Vision AI and Video AI
  • Anomaly detection: Amazon Lookout family and Fraud Detector, Azure Anomaly Detector and Metrics Advisor, GCP Cloud Inference
  • Personalization: Amazon Personalize, Azure Personalizer, GCP Recommendations AI

ML platforms:

  • ML tools (Jupyter Notebook) and learning frameworks (TensorFlow, MXNet, Keras, PyTorch, Chainer, scikit-learn)
  • Guided model development: Amazon SageMaker Autopilot, Azure Automated ML and Designer, GCP AutoML
  • Full ML workbench: Amazon SageMaker Studio, Azure Machine Learning Notebooks, GCP AI Platform
  • MLOps: Amazon SageMaker MLOps, Azure MLOps, GCP Pipeline

ML explainability and bias: Amazon SageMaker Clarify, Azure Responsible ML, GCP AI Explanations


We’re busy creating more in this series – subscribe to stay updated on when we drop a new video!
https://www.youtube.com/channel/UCp8lLM2JP_1pv6E0NQ38pqw/?sub_confirmation=1

Let us know in the comments what other cloud services you’d like to see us cover!

Sign up for a free ACG account! https://bit.ly/2R07VSz

Like us on Facebook! https://www.facebook.com/acloudguru

Follow us on Twitter! https://twitter.com/acloudguru

Join the conversation on Discord! https://discord.com/invite/acloudguru



Series description

In Cloud Provider Comparisons, we explore and compare the same cloud service across the three major public cloud providers - Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

Okay hello Cloud Gurus, Scott Pletcher here and welcome to Cloud Provider Comparisons. In this series, we take a look at the same cloud services across different cloud providers. We'll look at the similarities, the differences, and anything else that might be interesting. In this episode, we're going to dive into a space that's super dynamic for cloud providers: artificial intelligence and machine learning. Along with serverless, AI and ML might just be the killer app for the cloud, combining massive data handling with virtually limitless computing power and a pay-only-for-what-you-need economic model.

If you're curious about how the AI and ML services of Azure, AWS, and Google Cloud Platform match up, stick around. Before we get going, let me clear the air a bit. Artificial intelligence and machine learning are often used interchangeably by the popular press, but at least in the eyes of the AI community, they are not the same thing. Artificial intelligence is our pursuit of simulating human thought and decision-making in an automated fashion. Machine learning is, at least according to Arthur Samuel, the guy who coined the term back in 1959, the field of study that gives computers the ability to learn without being explicitly programmed. In other words, machine learning is one method we can use to try to achieve artificial intelligence.

Now that said, even some of our cloud providers make liberal use of the terms AI and ML, so just be aware that it's usually part marketing speak. In a broad sense, you'll need three things to get those machines a'learnin. First, you'll need lots and lots of data. Next, you'll need some way to apply computation or algorithms to that data.

And last but not least, you kind of need to know what you're doing. Not too long ago, the equipment and expertise needed to do anything close to machine learning or artificial intelligence were so prohibitively expensive and specialized that only governments and a few universities could afford them. Cloud computing has managed to bring these three things within reach of anyone who happens to have an internet connection. We can manage massive amounts of data and harness immense cloud computing power using point-and-click tools that the cloud providers have created, and only pay for specifically what we need. While knowledge is still important, cloud providers have created some turnkey services that let us make use of very powerful machine learning technology through a simple API call.
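To make that concrete, here's a minimal Python sketch (using boto3, AWS's SDK) that calls Amazon Comprehend, one of the building-block services covered below, to get the sentiment of a piece of text. It assumes your AWS credentials and default region are already configured; the input text is just an example.

```python
# Minimal sketch: sentiment analysis through a single managed-service call.
# Assumes AWS credentials and a default region are already configured for boto3.
import boto3

comprehend = boto3.client("comprehend")

response = comprehend.detect_sentiment(
    Text="The new dashboard is fantastic, but the login flow is confusing.",
    LanguageCode="en",
)

print(response["Sentiment"])        # e.g. "MIXED"
print(response["SentimentScore"])   # per-class confidence scores
```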

In this episode, we're going to compare the AI and ML offerings of AWS, GCP, and Azure across three different areas: machine learning platforms, machine learning infrastructure, and machine learning building blocks, which are the services you can use without having to know very much at all about machine learning. Now, a bit of a disclaimer here: cloud providers are investing massive amounts of resources into expanding their machine learning offerings, so the features and services are going to keep evolving, at least for the foreseeable future. This episode can hopefully give you some background on the services offered and the respective terminology used by cloud providers, but don't expect some declaration about one provider being better than the others. This space is just too volatile. We'll start with machine learning building blocks, as these are usually the most common way people get started with machine learning because the barrier to entry is so low.

They are ready-to-go services that are available as an API call or through the cloud provider's SDK. All the providers we're going to talk about here offer REST APIs for their machine learning services. Let's take a look at some common ML uses and see what we have. Speech to text and text to speech are things that we probably use daily and maybe take for granted, but there are some pretty complex things going on behind the scenes. Fortunately, our cloud providers have us covered.

For speech to text, AWS has a service called Amazon Transcribe, while both Azure and GCP opt for the most obvious name for this service, Speech to Text. For converting text to audible speech, Azure and GCP again win the Mr Obvious award by sticking with Text to Speech as the service name. AWS conjures up images of a parrot by calling their service Polly.
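For a sense of how little code these services need, here's a minimal boto3 sketch against Amazon Polly; the voice ID, output format, and file name are just example values, and AWS credentials are assumed to be configured.

```python
# Minimal sketch: convert text to an MP3 with Amazon Polly via boto3.
# Assumes AWS credentials and a default region are already configured.
import boto3

polly = boto3.client("polly")

response = polly.synthesize_speech(
    Text="Hello Cloud Gurus, welcome to Cloud Provider Comparisons.",
    OutputFormat="mp3",
    VoiceId="Joanna",  # example voice; many others are available
)

# The audio comes back as a streaming body we can write to disk.
with open("welcome.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```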

Like it or not, chatbots have started becoming more commonplace as a first line of customer support, and our cloud providers are doing their part to help chatbots be less disappointing. Azure calls their chatbot space Language Understanding, GCP uses DialogFlow as their service name, and AWS calls theirs Lex.

Anyone who's been around the internet since back in the day might recall a website called Babelfish. Babelfish was a free language translation website, and for the late nineties, I thought it was just about the most amazing slice of technology I had ever seen. Of course, language translation is more table stakes than novelty these days. Fortunately, all three of our cloud providers have chosen to stick to literal naming for their own translation services: GCP has Translation, Azure has Translator, and AWS has Translate.

Text analytics services can take natural language, meaning how we speak to one another, and extract certain themes, topics, and sentiments. GCP calls their text analytics service Natural Language, AWS calls theirs Amazon Comprehend, and Azure, ever the literalist, just calls theirs Text Analytics.
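For comparison with the Comprehend snippet earlier, here's roughly the same sentiment request sketched against GCP's Natural Language service using the google-cloud-language client library; it assumes Application Default Credentials are set up and a 2.x version of the library.

```python
# Minimal sketch: document sentiment with Google Cloud Natural Language.
# Assumes Application Default Credentials and the google-cloud-language 2.x client.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

document = language_v1.Document(
    content="The support team was wonderful, but the billing portal is a mess.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)

response = client.analyze_sentiment(request={"document": document})
sentiment = response.document_sentiment
print(sentiment.score)      # roughly -1.0 (negative) to 1.0 (positive)
print(sentiment.magnitude)  # overall strength of sentiment in the document
```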

An evolution of text analytics is document analysis, where machine learning can do stuff like summarize articles or detect information in forms. Azure's text analytics service can do some of this, and they also have Form Recognizer for, you guessed it, form data extraction. AWS has Amazon Textract and Google has Document AI.

On to image and video analysis services. These services can recognize objects and people in pictures, map faces, or detect potentially objectionable content. Azure gives us Computer Vision, Face, and Video Indexer services, while GCP calls their image and video services Vision and Video respectively. AWS bundles both image and video analysis under their Rekognition product, intentionally spelled with a K for some reason.
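Here's what a single Rekognition call looks like as a minimal boto3 sketch; the S3 bucket and object key are placeholders, and AWS credentials are assumed to be configured.

```python
# Minimal sketch: label detection on an image stored in S3 with Amazon Rekognition.
# Bucket and key below are placeholders; AWS credentials and region are assumed.
import boto3

rekognition = boto3.client("rekognition")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "my-example-bucket", "Name": "photos/ferret.jpg"}},
    MaxLabels=5,
    MinConfidence=80,
)

for label in response["Labels"]:
    print(f"{label['Name']}: {label['Confidence']:.1f}%")
```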

Computers are pretty good at detecting when things are out of the ordinary, but you normally have to tell them specifically what to watch. Cloud providers have used machine learning to create services that can just watch a stream of events or data and figure out what's different. This is called anomaly detection. GCP addresses this with Cloud Inference, AWS has the Amazon Lookout family of services and Fraud Detector, and Azure puts this to work with Anomaly Detector and Metrics Advisor.

Finally, recommendation engines are becoming a popular addition to e-commerce sites, and our cloud providers have tried to do the heavy lifting for us here. Azure has Personalizer, GCP calls theirs Recommendations AI, and AWS offers Amazon Personalize, which is based on the same technology they developed for their own commerce site. Now, one thing to keep in mind here is that your recommendations will only be as good as the transactional data you're able to feed in. In fact, that goes for nearly all of these services: if your source data is sketchy, your results may be disappointing.

When I talk about machine learning platforms, I'm referring to the workbench and tools that ML practitioners would use. It's analogous to a developer using an IDE and some libraries to write their code. For machine learning, Jupyter Notebook is the current de facto workbench for data scientists, so it's no surprise that all the cloud providers offer Jupyter Notebooks, or some slightly rebranded version, as part of their platforms.
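To give a feel for the kind of code that runs in these hosted notebooks, here's a small scikit-learn sketch using one of its built-in toy datasets; nothing about it is provider-specific, which is rather the point.

```python
# Minimal sketch: the kind of train/evaluate loop you'd run in a hosted notebook.
# Uses a scikit-learn toy dataset so it runs anywhere the framework is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```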

Another consistency is in the support of major machine learning frameworks: TensorFlow, MXNet, Keras, PyTorch, Chainer, scikit-learn, and several more are fully supported. Features such as security, collaboration, and data management are all well integrated by all the vendors, but the specifics of how you use these things vary by provider. For those who are just starting out on their machine learning journey, our cloud providers have invested in some gentle introductions. AWS calls their just-getting-started service SageMaker Autopilot. Azure has Automated ML and a neat drag-and-drop tool called Designer.

GCP has a line of guided model creation tools that they call AutoML. For the seasoned pro who doesn't need the training wheels, AWS offers SageMaker Studio, Azure has Machine Learning Notebooks, and GCP calls their main ML development platform simply AI Platform. Another feature getting lots of attention as of late is the DevOps equivalent for machine learning, so-called MLOps. Azure just calls their MLOps offering MLOps, AWS has SageMaker MLOps, and GCP accomplishes this via their Pipeline service. AWS has something that I haven't seen on the other platforms yet, but I'm sure that's just a matter of time: Augmented AI.

This is a way to enlist the reasoning power of teams of real live humans to help improve your machine learning service. Let's say you've determined that your machine learning model is about 95% accurate at identifying pictures of angry ferrets, but you must have 100% accuracy. For those cases where the ML model's confidence is low, you can direct that picture over to a live human, who will then make the determination of anger or not anger.

All of our cloud providers really, really like containers for their respective machine learning platforms, and this is for good reason. Containers are relatively lightweight, portable, and can be shuffled around without much hassle. All the providers offer push-button deployment of containers for specific versions of the ML frameworks, optimized for training, validation, and inference. If you're more of a do-it-yourself person, all the providers have platform-optimized virtual machines for all the major frameworks as well. Most people use this option if they already have a model trained on-premises. For example, if you already have a model created using PyTorch, you can just spin up a VM with that specific version of PyTorch in the cloud and copy your model out there.
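Here's a minimal PyTorch sketch of that bring-your-own-model pattern; the tiny SmallNet architecture and the model.pt checkpoint are placeholders for whatever you trained on-premises.

```python
# Minimal sketch: load a model trained elsewhere and run inference on a cloud VM.
# "SmallNet" and "model.pt" are placeholders for your own architecture and checkpoint.
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.layers(x)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = SmallNet()
model.load_state_dict(torch.load("model.pt", map_location=device))
model.to(device)
model.eval()

with torch.no_grad():
    sample = torch.randn(1, 10, device=device)  # stand-in for real input features
    print(model(sample))
```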

There is a bit of an arms race in machine learning-optimized hardware among the cloud providers, each claiming superior performance and economics. All of the providers offer various levels of CPU and GPU virtual machine types. Additionally, some have also invested in specialized hardware in the form of application-specific integrated circuits (ASICs) and field-programmable gate arrays (FPGAs). AWS offers Habana Gaudi ASIC instances and a custom processor they call AWS Trainium, optimized for model training. They also offer an ASIC called Inferentia for machine learning inference.
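Whichever instance family you land on, a quick sanity check is to ask your framework which accelerators the VM actually exposes; here's a minimal TensorFlow sketch (most frameworks have an equivalent call).

```python
# Minimal sketch: check which accelerators this VM exposes before training.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Found {len(gpus)} GPU(s):", [gpu.name for gpu in gpus])
else:
    print("No GPU visible; falling back to CPU.")
```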

GCP has long offered their custom tensor processing unit, or TPU, which is an ASIC optimized for the TensorFlow framework. Not to be outdone, Azure offers a line of FPGA-based virtual machines tuned specifically for machine learning workloads. Now, there is a tradeoff here. These specialized hardware platforms are really good at machine learning tasks, but they're not much good for anything else. Economically, CPU and GPU-based machines are much more flexible and generally what people use first as they develop and refine their ML models.

For all its promise and opportunity, developing quality machine learning models is really hard. If you get it wrong, the resulting ML-generated decisions can range anywhere from slightly embarrassing to downright immoral. For both ethical and sometimes regulatory reasons, we need to be able to explain how our machine learning models make their decisions. Practitioners call this explainability, and fortunately our cloud providers have tools to help us out in this area. AWS has SageMaker Clarify, which can help provide a view into how data elements influence the model generation process and evaluate fairness. Azure has this ability integrated into Responsible ML and its Fairness SDK. GCP provides this under the name AI Explanations.
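The provider tools differ in their interfaces, but the underlying idea is feature attribution: which inputs pushed the model toward its prediction. As a neutral illustration, here's a minimal sketch using the open-source SHAP library on a scikit-learn toy model rather than any one provider's tooling; Shapley-value attribution of this kind is the same family of technique some of these managed services build on.

```python
# Minimal sketch: per-feature attribution with the open-source SHAP library.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:200])  # shape: (samples, features)

# Rank features by their average absolute contribution to the predictions.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda p: -p[1]):
    print(f"{name}: {score:.2f}")
```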

Machine learning is a rapidly evolving and iterating space, and the cloud has just accelerated that even more. Now, if you're just getting started on your ML journey, check out Intro to Machine Learning on the ACG platform by yours truly. I use history, similes, and illustrations to demystify all those scary words. Once you have the basics, you can then pick your cloud and dive a bit deeper. We have courses and hands-on labs to let you dive deep into the ML offerings of AWS, GCP, and Azure. Links to all this stuff are down below.

Thanks for watching. Stay safe, take care of one another and keep being awesome Cloud Gurus.

