1 Answer
Personally, I would say A & B; the other answers look like nonsense to me ;o)
Faye, thank you for confirming this. I agree.
A. Ensure that file permissions for monitored files that allow the CloudWatch Logs agent to read the file have not been modified. — This could definitely break the delivery of logs. If the permissions were modified to "deny all", the logs would not be delivered.
B. Verify that the OS log rotation rules are compatible with the configuration requirements for agent streaming. — These incompatibilities could be problematic (see the sketch after this list).
C. Configure an Amazon Kinesis producer to first put the logs into Amazon Kinesis Streams. — Total overkill.
D. Create a CloudWatch Logs metric to isolate a value that changes at least once during the period before logging stops. — If the logs stopped delivering, I am not sure you could set up a metric to troubleshoot.
E. Use AWS CloudFormation to dynamically create and maintain the configuration file for the CloudWatch Logs agent. — This might be a great way to standardize the environment going forward, but it will not help with the current troubleshooting.
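To make A and B concrete, here is a minimal Python sketch of the kind of check you might run on the affected instance. The log path and agent user name are assumptions for illustration only; it simply verifies that the agent's user can still read the monitored file, and records the file's inode so you can tell whether log rotation replaced the file the agent was tailing.

```python
# Hypothetical troubleshooting sketch (Linux). Assumed path and agent user name.
import os
import pwd
import stat

MONITORED_FILE = "/var/log/app/application.log"  # assumed monitored log path
AGENT_USER = "cwagent"                           # assumed agent service user


def can_user_read(path: str, username: str) -> bool:
    """Check whether `username` can read `path` via owner/group/other bits.

    Simplified: ignores supplementary groups and ACLs.
    """
    st = os.stat(path)
    user = pwd.getpwnam(username)
    if st.st_uid == user.pw_uid:
        return bool(st.st_mode & stat.S_IRUSR)
    if st.st_gid == user.pw_gid:
        return bool(st.st_mode & stat.S_IRGRP)
    return bool(st.st_mode & stat.S_IROTH)


def rotation_hint(path: str) -> str:
    """Report the file's inode and size; if the inode changes between checks,
    rotation replaced the file rather than truncating it in place."""
    st = os.stat(path)
    return f"{path} inode={st.st_ino} size={st.st_size}"


if __name__ == "__main__":
    if not can_user_read(MONITORED_FILE, AGENT_USER):
        print(f"WARNING: {AGENT_USER} cannot read {MONITORED_FILE} (option A)")
    print(rotation_hint(MONITORED_FILE), "(compare across rotations, option B)")
```

If the permission warning prints, or the inode changes after each rotation, that points to A or B respectively as the likely cause.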
To me, A and B seem to be the right answers. If someone changes the file permissions, that will definitely cause an issue, and B can create problems if there are incompatibilities between the log rotation rules and the agent configuration. Thoughts?
The question has a twist: "What steps are necessary to identify the cause of this phenomenon?" It asks for the steps to identify the cause, rather than the cause itself.
Good point, Sam. Still, I think C, D, and E do not help with root cause analysis.
C) Using Kinesis for analysis is overkill.
D) A CloudWatch Logs metric would help, but not if it is set up after the fact.
E) CloudFormation is overkill as well; it is more useful for keeping deployments consistent than for troubleshooting an existing issue.