I’m sorry, but in my opinion this auto-scaling experiment is not very well presented. The expert-level challenge provides no information at all on what to do. Based on the very limited information given — that the need is for something that auto-scales for calculations every evening — I think the best approach would be AWS Batch. But that is not what is expected or demonstrated. Given the description, why was this challenge answered with an auto-scaling group and tested with a simple CPU stress? Why not AWS Batch?
I know there are different paths and we can choose our own, but I think this challenge could be improved by presenting it differently if the objective is to have us build an auto-scaling group.
As you probably know, there are lots of different ways to implement scalability. AWS Batch would probably be an option as would about five other ways I can think of to solve the limited scenario presented in the lab. I wanted the lab to demonstrate how auto-scaling groups work and how they can be driven by thresholds. I encourage you to build whatever solution you think will solve the presented issue.
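For anyone reading along who wants to see what "driven by thresholds" looks like in practice, here is a minimal CloudFormation sketch of an auto-scaling group with a target-tracking policy on average CPU. This is my own illustration, not code from the lab; the launch template name `calc-workers` and the 70% target are hypothetical placeholders.

```yaml
Resources:
  # Auto-scaling group: scales between 1 and 4 instances
  # based on the policy below (launch template name is assumed)
  CalcASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "1"
      MaxSize: "4"
      LaunchTemplate:
        LaunchTemplateName: calc-workers  # hypothetical
        Version: "1"
      AvailabilityZones:
        - us-east-1a

  # Target-tracking policy: adds/removes instances to keep
  # average CPU across the group near 70%
  CpuTargetPolicy:
    Type: AWS::AutoScaling::ScalingPolicy
    Properties:
      AutoScalingGroupName: !Ref CalcASG
      PolicyType: TargetTrackingScaling
      TargetTrackingConfiguration:
        PredefinedMetricSpecification:
          PredefinedMetricType: ASGAverageCPUUtilization
        TargetValue: 70.0
```

With a policy like this, running a CPU stress on the instances (as the lab does) pushes the average above the target and triggers a scale-out, which is exactly the behavior the lab is trying to demonstrate.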
Thank you for the answer. It is just that, the way it is presented, it felt to me that I was supposed to find one particular solution that I would then have to test against some code in your repo or something. I literally stared at the question for more than 15 minutes. I kind of guessed you wanted us to build an auto-scaling group, but then I was like "do I need to install web servers or something?"… and I totally gave up. Anyway, my point is not to criticize, just to give feedback on my personal experience, in case it helps you improve the challenge the next time you record it.