AWS Certified Solutions Architect - Professional 2020


S3 anti-patterns

I was reading the AWS Storage Services whitepaper (2016) and came across S3 anti-patterns and alternative solutions.

One of the anti-pattern scenarios was rapidly changing data, for which the reasoning is described as follows:

Data that must be updated very frequently might be better served by storage solutions that take into account read and write latencies, such as Amazon EBS volumes, Amazon RDS, Amazon DynamoDB, Amazon EFS, or relational databases running on Amazon EC2.

Why is S3 not a viable solution for the above?

2 Answers

Hi Sai,

S3 is an object store, not block storage. You can't modify an object in place; every update means re-uploading the whole object. For this reason, AWS generally recommends that you don't use S3 for rapidly changing data, like a cache, for example. Plus, you're also charged per request when accessing your data, and that can add up. I've heard of many people getting a shock on their AWS bill when they tried to use S3 as the backing storage for a Nextcloud deployment, for example. Older versions of Nextcloud would touch files very frequently as part of their index management, which incurred a charge every time.
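To make that concrete, here's a rough sketch of the difference (the bucket, table, key, and attribute names are just placeholders): changing one field of a frequently updated record stored in S3 means rewriting the whole object, with a billable GET and PUT each time, whereas something like DynamoDB can update a single attribute in place.

```python
import json
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.client("dynamodb")

# With S3 there is no partial update: even changing one field means
# downloading the object, modifying it, and re-uploading the whole thing.
# Each GET and PUT is a billable request. (Names here are made up.)
obj = s3.get_object(Bucket="example-bucket", Key="session/user-123.json")
session = json.loads(obj["Body"].read())
session["last_seen"] = "2020-06-01T12:00:00Z"
s3.put_object(
    Bucket="example-bucket",
    Key="session/user-123.json",
    Body=json.dumps(session).encode("utf-8"),
)

# With DynamoDB, the same change is a single in-place update of one attribute,
# which is a better fit for a record that changes many times per minute.
dynamodb.update_item(
    TableName="example-sessions",
    Key={"user_id": {"S": "user-123"}},
    UpdateExpression="SET last_seen = :ts",
    ExpressionAttributeValues={":ts": {"S": "2020-06-01T12:00:00Z"}},
)
```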

–Scott

I’ll start by echoing Scott’s comments.

The one thing I’d add is that S3 is "eventually consistent". That means you could write to an S3 object and then read it immediately afterward, and either get a "not found" if you just created the object, or an old version if you overwrote it. The more frequently you update the data, the more likely this is to happen.
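Here's a minimal sketch of that overwrite-then-read pattern (bucket and key names are made up). Under the eventual-consistency model described above, the read at the end isn't guaranteed to see the new body yet.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "example-bucket", "counter.txt"  # placeholder names

# Overwrite an existing object...
s3.put_object(Bucket=bucket, Key=key, Body=b"version 2")

# ...and read it back immediately. Under the eventual-consistency model
# described above, this read may still return the previous version of the
# object rather than "version 2".
body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
print(body)  # could still print b"version 1" shortly after the overwrite
```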
