
The Cloud Dictionary of Pain: Five Of AWS's Toughest Cloud Topics

Jun 08, 2023 • 13 Minute Read

  • Cloud
  • Learning & Development
  • AWS

Good morning, everyone; class is in session! Please take your seats—and buckle up.

Today, we’re talking about terminology. Amazon, Google, and Microsoft have built enough cloud products that if you need to do something—anything—adjacent to your business, there’s a good chance you can do it on the cloud. But with that comprehensive set of products comes another challenge beyond just knowing where and when to use them: what they’re called and what they do in the first place.

Ensuring everyone on your team can “speak cloud,” or at least has a baseline cloud understanding, can be one of the biggest challenges companies face. We’re here to help you achieve that baseline cloud fluency with your team. To do that, we’ve built a cloud dictionary of some of the most challenging topics you’ll run into. Given the difficulty, we’ve decided to name it the Cloud Dictionary of Pain.


Get the full Cloud Dictionary of Pain
Speaking cloud doesn’t have to be hard. In our cloud guide, you’ll find succinct definitions of some of the most painful cloud concepts.


We analyzed 2.7 million responses to thousands of quiz questions on our platforms to identify some of the most challenging cloud infrastructure topics. We looked for questions where learners fell below a 60% success rate and, based on the topics in those questions, have outlined more than 20 challenging topics that learners can, at times, struggle to decrypt—at least in some specific scenarios.

Today we’ll be covering five of the most challenging topics we identified: Amazon SQS, Elastic Load Balancing, AWS VPCs, AWS Lambda, and subnets. This list is a subset of dozens of terms and topics we attacked across all three major cloud platforms: AWS, Microsoft Azure, and Google Cloud. You can find our complete walkthrough of Amazon’s thorny topics in the full Cloud Dictionary of Pain.

With all that out of the way, let’s get started!


AWS Virtual Private Clouds (VPCs)

WHAT ARE AWS VPCs?

Think of a VPC as a virtual data center in the cloud. Your data center may have public web servers that are directly accessible to the internet, and a more private set of servers that can only be accessed by direct connection or over a VPN. If you’re dealing with especially sensitive customer information (such as something regulated), you’d want to isolate that data even more. 

VPCs allow you to provision an isolated section of AWS where you can launch resources in a spot you define. So you can think of it as a way to do stuff with your private information without that confidential information touching the internet. VPCs are the place to put your database, your application servers, your back-end reporting processes: anything you don’t want directly exposed to anyone with an internet connection. 

VPCs allow you to set your IP range, create subnets, and configure route tables and network gateways. You could create a public-facing subnet connected to the web while your backend systems are isolated and not connected to the internet at all. Multiple layers of security can help you control access to each subnet, an approach known as “defense in depth.” Every AWS account gives you a default VPC out of the box, but you’ll likely want to set up your own for improved security and customization.
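To make that concrete, here’s a minimal boto3 sketch of exactly that setup: one VPC, a public subnet, a private subnet, an internet gateway, and a route table. The region and CIDR ranges are arbitrary assumptions for illustration, not recommendations.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# The VPC gets a /16 CIDR block: 65,536 private addresses to carve up.
vpc_id = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]["VpcId"]

# One subnet per availability zone; each CIDR is a slice of the VPC's range.
public_subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.1.0/24", AvailabilityZone="us-east-1a"
)["Subnet"]["SubnetId"]
private_subnet_id = ec2.create_subnet(
    VpcId=vpc_id, CidrBlock="10.0.2.0/24", AvailabilityZone="us-east-1b"
)["Subnet"]["SubnetId"]

# What makes a subnet "public" is a route table entry pointing 0.0.0.0/0
# at an internet gateway.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

rt_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
ec2.create_route(
    RouteTableId=rt_id, DestinationCidrBlock="0.0.0.0/0", GatewayId=igw_id
)
ec2.associate_route_table(RouteTableId=rt_id, SubnetId=public_subnet_id)

# The private subnet keeps the VPC's main route table: local traffic only,
# no route to the internet.
```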

WHY IS IT HARD?

You’re essentially trying to keep some information safe and off the internet while connecting internet-facing services to both that information and the web simultaneously. That’s complex! Just look at this flow chart:

[Flow chart: the many interconnected components of a custom VPC]

Okay, that’s a lot.

We’re just getting started. All those elements are there to defend your information. But with all those arrows, you can see there are many points where things can break down. The whole process can get particularly hairy when you’re configuring a custom VPC without much networking expertise, like deploying a serverless multi-region, active-active backend inside a VPC.

Just one example: there is no transitive peering between VPCs. You can have one VPC talk to another VPC, but if that VPC talks to a third one, the first one can’t speak to the third one. If VPC A can talk to VPC B, and B can talk to C, A still cannot talk to C! Increasingly, people also question whether they need a VPC in the first place. After all, the rise of “zero-trust” networking and serverless architectures has moved more workloads out of private networks and into reliance on good authorization strategies (like AWS IAM). 
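A quick sketch of that rule in boto3 (the VPC IDs are placeholders): peering A to B and B to C gives A no path to C, so a third connection has to be created explicitly.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder VPC IDs for illustration only.
vpc_a, vpc_b, vpc_c = "vpc-aaaa1111", "vpc-bbbb2222", "vpc-cccc3333"

# Peer A <-> B and B <-> C. Peering is NOT transitive,
# so this gives A no route to C.
ec2.create_vpc_peering_connection(VpcId=vpc_a, PeerVpcId=vpc_b)
ec2.create_vpc_peering_connection(VpcId=vpc_b, PeerVpcId=vpc_c)

# If A and C must talk, peer them directly (and update the route tables).
ec2.create_vpc_peering_connection(VpcId=vpc_a, PeerVpcId=vpc_c)
```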

Our learners missed tough questions related to VPCs 50.3% of the time.

Give me a hard one!

Your company wants the on-premises network of its nearby branch office to securely connect with the instances launched into the Amazon Virtual Private Cloud (VPC) environment of its headquarters. Which of the following proposed solutions is the correct one?

27% of learners got this one right. Next!

AWS Lambda and Serverless Tools

WHAT IS AWS LAMBDA?

AWS Lambda lets you run code on a tiny piece of compute called an “execution environment,” even more abstracted than an EC2 instance. (That’s why Lambda-based systems are often called “serverless”: there are still servers somewhere down there; you just don’t have to know or care about them.) 

You can trigger functions with events like the click of a button on your website or the upload of a photo (or perhaps an A Cloud Guru lecture). For example, when you talk to Alexa, your question might not be querying a specific relational database to check for the response. Instead, it may be passing a request through Amazon API Gateway to a Lambda function. And those Lambda functions can trigger other Lambda functions. Once again, in very classic AWS fashion, it’s Lambdas all the way down.
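For a sense of how small one of those functions can be, here’s a minimal sketch of a Python handler behind API Gateway, using the standard proxy-integration event shape (the handler name and greeting are arbitrary):

```python
import json

def handler(event, context):
    # API Gateway hands the HTTP request to Lambda as `event`;
    # the dict we return becomes the HTTP response.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```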

To be clear, this kind of scaling is different from auto-scaling users across multiple servers. A million users hitting your site may trigger a million Lambda events. The difference between 1 user and 1 million users calling that action is a function of cost, not speed. You only pay when your application executes code. 

WHY IS IT HARD?

It’s a whole different paradigm! 

Lambda functions have short runtime limits, rely heavily on event-driven programming, and don’t play well with some legacy technologies. And yet many people refer to serverless architecture as the future of applications. There are a lot of reasons why that’s the case. But from a business perspective (especially if you’re the one with the team corporate card), the primary reason is that serverless architectures can deliver extraordinary cost savings. 

Lambda calls are free up to 1 million requests per month, and only 20 cents per million requests after that. And that’s not counting all the sysadmin time you’re freeing up. To put this into perspective: by using serverless and Lambda on the back end, our compute costs to serve millions of users here at A Cloud Guru became a rounding error. If you see an opportunity to offload some of your operations to Lambda, there’s a good chance you should.
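A quick back-of-the-envelope sketch of that request-pricing math (duration charges, billed separately in GB-seconds, aren’t modeled here):

```python
def lambda_request_cost(requests: int) -> float:
    """Request charges only: first 1M free, then $0.20 per million."""
    billable = max(0, requests - 1_000_000)
    return billable / 1_000_000 * 0.20

for n in (500_000, 10_000_000, 1_000_000_000):
    print(f"{n:>13,} requests -> ${lambda_request_cost(n):,.2f}")
# 500,000 -> $0.00; 10,000,000 -> $1.80; 1,000,000,000 -> $199.80
```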

Our learners missed tough questions related to Lambda and serverless 48% of the time. 


Elastic Load Balancing

WHAT IS ELASTIC LOAD BALANCING?

It’s exactly what it sounds like: a physical or virtual device designed to help you balance network load across multiple servers. You have a variety of ways to plan out your auto-scaling, and you generally have three options (a short sketch follows the list): 

  1. Application load balancers can see inside your application and the requests it’s making. They’re great when you want a scaling engine that makes intelligent decisions for you. 
  2. Network load balancers work best when you need to focus on performance. These load balancers operate at the connection level. 
  3. Classic load balancers are just legacy elastic load balancers. This is a cheap option if you don’t care how AWS routes your traffic. (Amazon has largely deprecated these.) 
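Here’s the short sketch promised above: creating an application load balancer with boto3. The subnet IDs are placeholders, and switching Type to "network" would create a network load balancer instead (classic load balancers use the older elb API).

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# An internet-facing Application Load Balancer spanning two public subnets.
alb = elbv2.create_load_balancer(
    Name="demo-alb",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder subnet IDs
    Scheme="internet-facing",
    Type="application",  # "network" would create an NLB instead
)
print(alb["LoadBalancers"][0]["LoadBalancerArn"])
```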

WHY IS IT HARD?

Two reasons: 

  • You first have to select the best load balancer, and then you have a lot of specific features you can enable to make those load balancers more efficient. As always, there are going to be tradeoffs. The challenge is maximizing the value you get out of your cloud configurations. Application load balancers are best suited for HTTP and HTTPS traffic, operating at layer seven. Network load balancers balance TCP traffic, working at the connection level (layer four). Classic load balancers can use some layer-seven features for HTTP/HTTPS applications, but they aren’t application-aware. 
  • Once you’ve selected a load balancer, you’re going to have to address how you’d like to route your traffic to your various EC2 instances. These instances may be spread across multiple availability zones and tied to different load balancers. But it is possible to intelligently route traffic to optimize performance for each EC2 instance. 

Here are a few top-level configurations you’ll have to address (a boto3 sketch of each follows the list): 

  • STICKY SESSIONS: You may want to ensure that one user is only ever tied to one EC2 instance—such as if they’re saving information to disk. But you may also end up with unused EC2 instances by using sticky sessions.
  • CROSS-ZONE LOAD BALANCING: You may want to automatically balance your incoming traffic across all EC2 instances regardless of availability zone. But you may sacrifice some performance by sending traffic between availability zones if you use cross-zone load balancing.
  • PATH PATTERNS: You can route your traffic intelligently based on URL paths, such as myurl.com/images. But this may also not account for surges in traffic to specific pages—like, say, some celebrity posts an image on your images page.
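And here’s the promised sketch of how those knobs map onto the boto3 elbv2 API. All the ARNs are placeholders, and the cross-zone attribute shown is the opt-in used on network load balancers (application load balancers enable cross-zone by default).

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARNs: assume the load balancer, target groups, and listener
# were created elsewhere.
tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/app/1111"
images_tg_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/img/2222"
nlb_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/demo/3333"
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/demo/4444/5555"

# STICKY SESSIONS: pin each user to one target with a load-balancer cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn=tg_arn,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
    ],
)

# CROSS-ZONE LOAD BALANCING: opt-in on a network load balancer.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=nlb_arn,
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)

# PATH PATTERNS: send /images/* traffic to its own target group.
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/images/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": images_tg_arn}],
)
```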

Our learners missed tough questions related to load balancers 53% of the time.


Simple Queue Service (SQS)

WHAT IS AWS SQS?

SQS is one of AWS’s oldest services, first announced in 2004. It gives you access to a message queue to store messages while waiting for a computer to process them. Amazon SQS enables you to decouple components of an application and let them communicate asynchronously (on their own schedules). Each message is sent to a service like a Lambda function or EC2 instance.

Those decoupled components can store their messages in a fail-safe queue that will wait for computation. This can be really useful if you lose an availability zone for whatever reason. If an EC2 instance goes down, its messages remain available in the queue, enabling other EC2 instances to pick up the slack.
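The core loop is pleasantly small. A minimal sketch of a producer and consumer sharing a queue (the queue name and message body are arbitrary):

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# Producer: drop a message on the queue and move on.
sqs.send_message(QueueUrl=queue_url, MessageBody="resize image 42")

# Consumer: long-poll for work, process it, then delete the message so
# no other consumer picks it up again.
resp = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for msg in resp.get("Messages", []):
    print("processing:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```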

WHY IS IT HARD?

If you’re building on SQS, you’re by definition creating a distributed application: one that does work across a network via decoupled pieces. And that brings with it a whole new layer of design and operational considerations. You can choose whether you want to create a standard queue or a FIFO (first-in, first-out) queue. Both have advantages: standard queues ensure that your messages are delivered at least once and make a best effort to arrange them in the order received. But that’s a best effort, and you shouldn’t assume order! FIFO queues deliver messages in the exact order they were received.

With a FIFO queue, a message is delivered once and remains available until a consumer processes and deletes it; duplicates are not introduced into the queue. SQS also implements a visibility timeout: essentially, how long the queue waits for one consumer before making a message available to the next one. For example, say a message is made available to one EC2 instance and isn’t processed within that window. The message then becomes available for another EC2 instance to grab and run. But this also means SQS may deliver your message more than once.
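Both behaviors are just configuration. A sketch, with arbitrary names and timeout values: FIFO delivery is opted into at queue creation, and the visibility timeout is a queue attribute.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# FIFO queue: the name must end in ".fifo". Content-based deduplication
# keeps duplicates out; VisibilityTimeout (seconds) is how long one
# consumer gets before the message reappears for others.
fifo_url = sqs.create_queue(
    QueueName="demo-queue.fifo",
    Attributes={
        "FifoQueue": "true",
        "ContentBasedDeduplication": "true",
        "VisibilityTimeout": "60",
    },
)["QueueUrl"]

# FIFO messages need a group ID; ordering is guaranteed per group.
sqs.send_message(
    QueueUrl=fifo_url,
    MessageBody="charge card 42",
    MessageGroupId="customer-42",
)
```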

Our learners missed tough questions related to SQS 49% of the time. 

Subnets

WHAT ARE AWS SUBNETS?

Shorthand for subnetwork, a subnet is just a subsection of a network. When you create a VPC, you’ll carve it into a series of subnets, each tied to a specific availability zone. You’ll provision resources, like an EC2 instance or an RDS database, within particular subnets, which can be public or private. Public subnets have a route to the internet via an internet gateway. Public subnets can also talk to other public subnets. Private subnets do not have a path to the internet, but they can connect to public subnets within your VPC. 

WHY IS IT HARD?

You can’t have one subnet span multiple availability zones. You’ll probably hear something along the lines of “one subnet equals one availability zone.” Let’s say you’ve decided to launch a VPC within a particular region, and within that region, AWS offers a set of availability zones. If you’d like to keep some information private, such as a set of customer information in an RDS database, you would launch a private subnet within one availability zone.

However, suppose you want redundancy in a different availability zone. Your existing subnet can’t simply stretch there; a subnet will never span multiple availability zones, so you’ll need to provision a second subnet in that zone. This makes subnet layout an important consideration when figuring out how to handle disaster recovery.
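One common pattern for that redundancy, sketched here with placeholder subnet IDs: provision a private subnet per availability zone and hand them to RDS as a subnet group, so the database can fail over between zones.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# RDS requires a subnet group spanning at least two availability zones;
# each ID below is a separate private subnet pinned to its own AZ.
rds.create_db_subnet_group(
    DBSubnetGroupName="demo-private-subnets",
    DBSubnetGroupDescription="Private subnets in us-east-1a and us-east-1b",
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholder IDs
)
```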

Our learners missed tough questions related to subnets 49.1% of the time.

We hope after going through these examples, you’re getting a taste of just how hairy the cloud can be. It has its own custom dictionary to go along with its sprawling urban environment of products.

We can empathize with how complicated the cloud can be, and we hope we can provide some guidance. Then again, we didn’t make the rules. (Well, we wrote the questions—we are genuinely sorry about that.)

You can find our full walkthrough of Amazon’s thorny topics in the full Cloud Dictionary of Pain. You can also find their equivalents, and what makes Azure and GCP uniquely tortuous, in our Azure and Google Cloud Dictionaries of Pain. You’ll also find our full methodology there.


Make your career the definition of awesome

Looking to level up your cloud career? Get started with A Cloud Guru and see how getting hands-on can help you master modern tech skills.