Logging has always been a tool for developers to debug their applications, from the simple `print("Here")` to more formal libraries such as log4j and Python's logging module. While effective for a solo developer reading logs to pinpoint issues, this type of "unstructured logging" does not scale when performing deeper analytics or building reports on how an application was used.
During a recent Twitter Space on building Serverless applications, the idea of structured logging was discussed as a way to scale logging efforts and provide a consistent way to consume logs.
What is structured logging?
Structured logging is a methodology for logging information in a consistent format that allows logs to be treated as data rather than text. Structured logs are often expressed in JSON, which lets developers efficiently store, retrieve, and analyze them.
Some of the main benefits that enable faster debugging include:
- Better search – By leveraging the JSON format, we can build queries on fields without having to rely on brittle regex patterns over raw text.
- Better integration – By using a consistent JSON format, applications can ingest the data easily for downstream tasks such as dashboards or analysis.
- Better readability – By leveraging a consistent format, consumers of logs such as system administrators can parse data much more effectively than reading raw text files.
In this post, I’ll walk through an example in Python highlighting the differences between unstructured and structured logging on a simple AWS Lambda Function.
Unstructured logging example
In this example, we will just use the default print statement. Simply printing "Here" lets us know that a given point in the function's code was reached.
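A minimal sketch of what that looks like in a Lambda handler (the handler body and return value here are illustrative, not from the original post):

```python
def lambda_handler(event, context):
    # The print output lands as a raw text line in the function's
    # CloudWatch log stream for this invocation.
    print("Here")
    return {"statusCode": 200}
```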
While simple, it allows for a quick, human-readable debugging process when looking through log events. In the context of AWS Lambda, we can use Amazon CloudWatch log groups to easily verify that our code ran.
While great for individual debugging of simple use cases, simply printing out text does not convey the context needed for deeper analysis, nor does it offer an efficient way to search. An example could be looking for logs from customers in a certain region, or from users performing an action on a specific page. Having to use text matching or regular expressions to find data in logs is not ideal for nuanced searching.
The logging module also suffers from the same problem as a standard print statement.
The data is still in a text format; even though the module offers additional information such as the log level, the log message is still treated as text rather than data.
Structured logging example
In order to treat logs as data we must create a structure that enables us to express logs as data. There are packages such as Python JSON Logger that provide a mechanism to transform logs into JSON, but you can also create a class to encapsulate your data.
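As one possible sketch of the hand-rolled approach (the class name, method names, and fields below are made up for illustration), a small wrapper that emits each log entry as a single JSON line:

```python
import json

class StructuredLog:
    """Treat each log entry as data: a dict serialized to one JSON line."""

    def __init__(self, level, message, **fields):
        self.record = {"level": level, "message": message, **fields}

    def emit(self):
        # CloudWatch captures stdout, so each call logs one JSON object
        print(json.dumps(self.record))

def lambda_handler(event, context):
    StructuredLog("INFO", "order processed",
                  customer_id="c-123", region="us-east-1").emit()
    return {"statusCode": 200}
```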
Now when we view the data in CloudWatch, we get nicely formatted JSON with any fields that we want.
Since the data is in JSON format, we can use CloudWatch Logs Insights to run queries against the custom fields we defined, without any additional configuration.
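For instance, because Logs Insights automatically discovers fields in JSON log events, a query over the hypothetical fields sketched above might look like this (the field names are assumptions from the sketch, not from the original post):

```
fields @timestamp, message, region
| filter region = "us-east-1"
| sort @timestamp desc
| limit 20
```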
For more examples of leveraging CloudWatch Logs, check out the AWS Compute blog.
When we export the logs, we also get an easy-to-consume JSON format that can be ingested by other applications.
The log message is now consumable as a JSON object in the structured example, versus the raw text in the other logging examples.
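As a quick illustration of that downstream consumption (the sample line is made up to match the earlier sketch):

```python
import json

# One exported line from the structured logging example
raw_line = '{"level": "INFO", "message": "order processed", "region": "us-east-1"}'

record = json.loads(raw_line)
# Fields are now addressable as data rather than substrings of text
print(record["region"])  # → us-east-1
```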
As you can see, structured logging is not a "hard requirement" that must be implemented in a certain way. It is simply a framework for treating logs as data, enabling a more robust debugging methodology for you, your team, and your organization.
About the Author
Banjo is a Senior Developer Advocate at AWS, where he helps builders get excited about using AWS. Banjo is passionate about operationalizing data and has started a podcast, a meetup, and open-source projects around utilizing data. When not building the next big thing, Banjo likes to relax by playing video games especially JRPGs and exploring events happening around him.