Ref the question: ‘We are designing an application where we need to accept a steady stream of large binary objects up to 1GB each. We want our architecture to allow for scaling out. What would you select as the best option for intake of the BLOBs?’
SQS is suggested as the answer. What about Kinesis in this case? Admittedly, the producer and consumer would need to split and rejoin the payloads, but Kinesis would allow a solution that scales with sizes and volumes much better than SQS. Kinesis/Kafka can also deliver those streams in order.
When answering AWS exam questions, you really need to answer only what is given without trying to insert any other knowledge. I call this the "Practitioner's Curse", because it's really hard not to add "if we did this then…" and "in the real world…". This quiz question is testing your knowledge that the Kinesis maximum record size is 1 MB. If we had to divide 1 GB of data into 1 MB chunks just to use Kinesis, that would be a lot of work, plus room for corrupted messages in the event of a failure.
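To make that overhead concrete, here is a minimal sketch of the chunking a producer/consumer pair would have to implement around the 1 MB record limit. This is illustrative only, not real Kinesis producer code: the `split_blob`/`reassemble` helpers and the per-chunk metadata format are invented for this example.

```python
# Sketch of the split/rejoin burden when forcing a 1 GB blob through a
# 1 MB-per-record transport. Helper names and metadata layout are made up
# for illustration; no actual Kinesis API calls are shown.
import hashlib

CHUNK_SIZE = 1024 * 1024  # 1 MB, the Kinesis maximum record payload


def split_blob(blob_id: str, data: bytes):
    """Yield (metadata, chunk) pairs; every chunk must survive transit."""
    total = (len(data) + CHUNK_SIZE - 1) // CHUNK_SIZE
    for i in range(total):
        chunk = data[i * CHUNK_SIZE:(i + 1) * CHUNK_SIZE]
        meta = {
            "blob_id": blob_id,
            "seq": i,
            "total": total,
            "sha256": hashlib.sha256(chunk).hexdigest(),
        }
        yield meta, chunk


def reassemble(parts):
    """Rebuild the blob; one missing or corrupt chunk fails the whole object."""
    parts = sorted(parts, key=lambda p: p[0]["seq"])
    total = parts[0][0]["total"]
    if len(parts) != total:
        raise ValueError("missing chunks; whole blob must be retransmitted")
    for meta, chunk in parts:
        if hashlib.sha256(chunk).hexdigest() != meta["sha256"]:
            raise ValueError(f"chunk {meta['seq']} corrupted")
    return b"".join(chunk for _, chunk in parts)
```

Note that a 1 GB blob becomes roughly a thousand records that must all arrive intact and be stitched back together, which is exactly the failure surface the answer above is warning about.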
I’d also ask you to defend your assertion that Kinesis scales better than SQS.