AWS Certified Solutions Architect - Professional 2020


How is SQS a better service than Kinesis for handling a “steady stream” of 1 GB BLOBs?

The quiz question says SQS is a better choice than Kinesis for the following: "We are designing an application where we need to accept a steady stream of large binary objects up to 1GB each. We want our architecture to allow for scaling out. What would you select as the best option for intake of the BLOBs?"

1 Answer

From here: https://aws.amazon.com/kinesis/data-streams/faqs/

The maximum size of a data blob (the data payload before Base64-encoding) within one record is 1 megabyte (MB).
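So a 1 GB object cannot go into Kinesis as a single record. As a rough illustration of what intake would look like anyway (a minimal sketch assuming the AWS SDK for Java v2, with a placeholder stream name and partition key), every blob would have to be chopped into records of 1 MB or less:

import java.util.Arrays;

import software.amazon.awssdk.core.SdkBytes;
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

public class KinesisBlobIntakeSketch {

    // Kinesis Data Streams caps each record's data blob at 1 MB (before Base64 encoding),
    // so staying a little under that keeps each put from being rejected.
    private static final int MAX_RECORD_BYTES = 1_000_000;

    public static void putBlob(KinesisClient kinesis, String streamName, byte[] blob) {
        // A 1 GB blob becomes roughly 1,000 records. With a single partition key they all
        // land on one shard, which only ingests 1 MB/s, so one blob would take on the order
        // of 17 minutes unless chunks are spread across shards and reassembled downstream.
        for (int offset = 0; offset < blob.length; offset += MAX_RECORD_BYTES) {
            byte[] chunk = Arrays.copyOfRange(blob, offset,
                    Math.min(offset + MAX_RECORD_BYTES, blob.length));

            kinesis.putRecord(PutRecordRequest.builder()
                    .streamName(streamName)
                    .partitionKey("blob-chunk")   // placeholder partition key for this sketch
                    .data(SdkBytes.fromByteArray(chunk))
                    .build());
        }
    }
}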

Ram D

Right! I did get confused by the 1,000 PUT records per second per shard limit. Even with those 1,000 records, a single shard only ingests 1 MB/s, and by default we could have up to 10 shards, restricting capacity to about 10 MB/s. A 1 GB object could be split into smaller chunks and streamed, but that shows scaling out is not easy, given that re-sharding is a manual step when more throughput is needed. On the other hand, even though SQS does not support "streaming data" per se, its Extended Client Library for Java supports payloads up to 2 GB, and scaling is managed automatically, so it is probably the better option. I also looked at the choices and I don't see Kinesis Firehose, which is probably better than either of these.
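For comparison, here is a minimal sketch of the SQS side using the Amazon SQS Extended Client Library for Java mentioned above. The bucket and queue names are placeholders, and the configuration method name has changed between library versions, so treat this as illustrative rather than exact: the client stores any oversized body in S3 and enqueues a small pointer message, which is how it accepts payloads up to 2 GB even though an SQS message itself is capped at 256 KB.

import com.amazon.sqs.javamessaging.AmazonSQSExtendedClient;
import com.amazon.sqs.javamessaging.ExtendedClientConfiguration;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;
import com.amazonaws.services.sqs.model.SendMessageRequest;

public class SqsBlobIntakeSketch {

    public static void send(String queueUrl, String largePayload) {
        // Bucket that holds the actual payloads; "blob-intake-bucket" is a placeholder name.
        AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();

        // Tell the extended client to offload message bodies to S3. Older releases of the
        // library call this withLargePayloadSupportEnabled, so check the version you use.
        ExtendedClientConfiguration config = new ExtendedClientConfiguration()
                .withPayloadSupportEnabled(s3, "blob-intake-bucket");

        // The extended client implements the normal AmazonSQS interface, so it is used
        // exactly like the standard client; SQS only ever sees a small pointer message,
        // while the blob itself sits in S3.
        AmazonSQS sqsExtended = new AmazonSQSExtendedClient(
                AmazonSQSClientBuilder.defaultClient(), config);

        // largePayload stands in for the BLOB contents; in practice you would stream the
        // 1 GB object rather than hold it in memory as a String.
        sqsExtended.sendMessage(new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody(largePayload));
    }
}

The consuming side would use the same extended client, so the pointer is resolved back into the S3 object transparently when the message is received.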
