AWS Certified Solutions Architect - Professional 2020


SA Pro Exam Simulator Question – Improving DynamoDB Performance

Can someone please explain why A is a correct answer but C is not? I know that for C the correct term is "global secondary index"; is that why it's not right? In reality, can you use a global or local secondary index to improve performance without increasing AWS cost?

Also, why is A correct? If you are archiving the data, why do you need to export it and create a new table? Can't you just set a TTL and move expired data to S3?

I included the exam question and right answer/explanation from the simulator below. Thanks!

=== Content removed ===

The intent of these forums is to aid students' learning through discussion and the sharing of ideas and opinions. Sometimes that involves posting a question to discuss a point. However, when you post a question we expect you to also include your diagnosis of the question, your opinion on the issue, and a question about the specific point you are having trouble with.

Simply dropping a question and its answers in the forum and expecting others to solve it for you is not considered acceptable. Please feel welcome to post the question again, but include your own analysis and a question about the issue you are struggling with.

Unfortunately, this stance has been necessitated by people abusing this site to support dishonest people profiting from question theft, in breach of the NDAs of this and other legitimate training sites.



1 Answer

Yes, you probably should have been using TTL and DynamoDB Streams to archive older data. However, the question states that we already have 1 TB of data in DynamoDB, which means a minimum of roughly 100 partitions. Even if you delete the older data now, you still have 100+ partitions, which really limits the throughput available to each partition. A global secondary index will add some cost, and according to the question, you already have an optimized query using the primary key.
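To see where the "100 partitions" figure comes from, here is a rough sketch of the partition math, assuming the commonly cited per-partition limits (about 10 GB of data, 3,000 RCU, and 1,000 WCU per partition); the exact limits are an assumption based on public DynamoDB documentation and may change over time:

```python
import math

def min_partitions(size_gb: float, rcu: int = 0, wcu: int = 0) -> int:
    """Estimate the minimum partition count DynamoDB needs for a table.

    Partitions are driven by whichever is larger: storage (~10 GB each)
    or provisioned throughput (~3,000 RCU / 1,000 WCU each).
    """
    by_size = math.ceil(size_gb / 10)                 # ~10 GB per partition
    by_throughput = math.ceil(rcu / 3000 + wcu / 1000)
    return max(by_size, by_throughput, 1)

# 1 TB (~1,000 GB) of data needs at least 100 partitions,
# regardless of how much of that data is stale.
print(min_partitions(1000))
```

The point of the exercise: deleting old items does not merge partitions back together, which is why the exported data ends up in a freshly created (and therefore smaller) table.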


There is some more info about partition keys here:


Thanks Ben! So the main issue is that our db is just too big and we need to look for any answer that reduces the database size?


Sorry, my response got cut off accidentally. I'm still confused about how answer "A – Export the data then import it into a newly created table" is valid. How does A reduce the database size? Or are we assuming that the exported data serves as a backup, and that the new table we create is a smaller table with less (and more current) data?


The key to this question is the last sentence, which says "done together". My interpretation of the question/answer is that you first archive the older data, and then you create a new table from the now-smaller data set.
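The "TTL + Streams" archive step discussed above could look roughly like the sketch below: a Lambda handler attached to the table's stream that copies items deleted by the TTL process into S3. The bucket name and key layout are made up for illustration; the one real detail is that TTL deletions show up in the stream as REMOVE events performed by the DynamoDB service principal:

```python
import json

ARCHIVE_BUCKET = "my-dynamodb-archive"  # hypothetical bucket name

def is_ttl_delete(record: dict) -> bool:
    """True if a stream record is a deletion performed by the TTL process.

    TTL deletions are REMOVE events whose userIdentity principal is
    the DynamoDB service itself, not a user or application.
    """
    return (
        record.get("eventName") == "REMOVE"
        and record.get("userIdentity", {}).get("principalId")
        == "dynamodb.amazonaws.com"
    )

def handler(event, context):
    # boto3 imported here so the filter above can be unit-tested offline
    import boto3

    s3 = boto3.client("s3")
    for record in event.get("Records", []):
        if is_ttl_delete(record):
            # OldImage holds the item as it was before TTL deleted it
            # (requires a stream view type that includes old images).
            item = record["dynamodb"]["OldImage"]
            keys = record["dynamodb"]["Keys"]
            s3.put_object(
                Bucket=ARCHIVE_BUCKET,
                Key=f"archive/{json.dumps(keys, sort_keys=True)}.json",
                Body=json.dumps(item),
            )
```

This only archives items going forward, which is exactly why the question still needs the export/re-import step for the 1 TB that is already there.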
