It’s no secret that containers and Docker are pretty awesome. Docker is an extremely powerful tool for running and managing containers (and earning a Docker certification is a great way to sharpen those skills). But an ad-hoc approach can lead to some serious Docker security slip-ups. Still, Docker and containers are nothing to be afraid of! You just need to be conscious of security.
Here are 10 Docker security best practices to help you steer clear of a Docker disaster.
1. Build minimal Docker images
Avoid existing images that bundle a little of what you need with a lot you might not. If you’re resource and security conscious (and you should at least try to be), consider building your own images from scratch. That way you can install only the bare minimum of what’s needed. This can drastically reduce the number of failure points. In addition to the security benefits, you’re also reducing the strain on the overall resources of the system.
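As a sketch of what "bare minimum" can look like, here is a hypothetical multi-stage build that compiles a static binary and ships it on an empty `scratch` base, so the final image contains nothing but the application itself (the binary name `app` and the Go toolchain are illustrative assumptions, not from the article):

```shell
# Write an illustrative multi-stage Dockerfile
cat > Dockerfile <<'EOF'
# Stage 1: build a static binary with the full toolchain
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Stage 2: ship only the binary on an empty base image
FROM scratch
COPY --from=build /app /app
ENTRYPOINT ["/app"]
EOF

docker build -t myorg/app:minimal .
```

Because the final stage starts from `scratch`, there is no shell, no package manager, and no extra libraries for an attacker to abuse.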
2. Ensure sources are secure and trusted
Ensure your image and resource sources are secure and trusted. One way to achieve this goal is to create your own private registry, containing only images you’ve made. You’ll need to secure and restrict access to the repositories you create and host.
This approach requires an investment of time and effort, but in the long run — especially for larger scale deployments — it pays off big time.
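A minimal private registry can be stood up with Docker's official `registry:2` image. This is only a local sketch; in production you would add TLS and authentication in front of it:

```shell
# Run a private registry on port 5000 (local, unauthenticated sketch)
docker run -d -p 5000:5000 --name registry registry:2

# Tag one of your own images for the private registry and push it
# (the image name myorg/app:minimal is an illustrative assumption)
docker tag myorg/app:minimal localhost:5000/myorg/app:minimal
docker push localhost:5000/myorg/app:minimal
```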
3. Run Docker with least-privilege users
This is good advice for any service running on the system. Always make sure that Docker is running as a user with the bare minimum of privileges. It only needs to be able to do what it needs to do and nothing else. This goes for all services that run inside the Docker image and for the Docker service itself.
It can be tempting to run things as root. It’s less hassle. But it’s also a really bad idea. Why? If any of your services is compromised and that service is running in the context of the root user, the damage to the system can be enormous. But should the process run in the context of a non-privileged user, then the damage will be contained to a single section of the system.
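There are a couple of places to enforce this. You can switch to an unprivileged user inside the Dockerfile, override the user at run time, and drop Linux capabilities the container doesn't need (the user and image names below are illustrative):

```shell
# Inside the Dockerfile, create and switch to an unprivileged user:
#   RUN addgroup --system app && adduser --system --ingroup app app
#   USER app

# Or override at run time with a specific UID:GID
docker run --user 1000:1000 myorg/app:minimal

# Drop all Linux capabilities, then add back only what's required
docker run --cap-drop ALL myorg/app:minimal
```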
4. Sign images and verify signatures
Signing images and verifying signatures upon retrieval is generally a good idea. This ensures the image hasn’t been tampered with. It also means that if an image was corrupted somewhere along the way, you can act in time to rectify the error. A container with a corrupt image can still run. Sometimes it might crash, sometimes it might simply misbehave. But it can run, and without performing a verification you won’t know. That corruption can cause unexpected and unpredictable behavior, making the container insecure and unstable.
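One built-in way to do this is Docker Content Trust, which is enabled through an environment variable. With it on, pushes are signed and pulls of unsigned or tampered tags fail (the image name is an illustrative assumption):

```shell
# Enable Docker Content Trust for this shell session
export DOCKER_CONTENT_TRUST=1

# With trust enabled, this pull only succeeds if the tag
# carries a valid signature
docker pull myorg/app:minimal
```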
5. Use active monitoring for known vulnerabilities
Perform regular vulnerability scans and implement an active monitoring solution to find any known vulnerabilities.
New vulnerabilities pop up daily, so you need to keep not only Docker but all the services running on the system up to date. Should a vulnerability exist for a service with no available fix, you’ll need to seriously consider terminating that service. It’s also wise to run only software that is actively maintained.
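There are several scanners that fit this role; as one hedged example, the open-source Trivy scanner can check an image for known CVEs from the command line (the image name is an illustrative assumption):

```shell
# Scan an image and report only high and critical findings
trivy image --severity HIGH,CRITICAL myorg/app:minimal
```

Running a scan like this in your CI pipeline on every build turns the one-off check into the active monitoring this section recommends.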
6. Set resource limits for containers
All containers should have resource limits set. For one, this limits the container’s ability to overconsume the system’s resources. It also forces you to think about the container and its purpose, and to plan ahead of time so you know exactly what it needs in terms of resources to work properly.
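Docker exposes these limits as flags on `docker run`. A sketch, with values and image name chosen purely for illustration:

```shell
# Cap the container at 512 MB of RAM, half a CPU core,
# and at most 100 processes
docker run \
  --memory 512m \
  --cpus 0.5 \
  --pids-limit 100 \
  myorg/app:minimal
```

The `--pids-limit` flag is an easy extra safeguard against fork bombs inside a compromised container.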
7. Properly configure and secure the host OS
Docker and its containers can’t be considered safe or secure if the underlying host operating system hasn’t been properly secured and configured. Whatever OS you end up using, you need to make sure to follow all security best practices. And make sure that you’ve properly secured access to the host server and properly configured the underlying OS.
8. Work with built-in, well-known security mechanisms
Virtually all operating systems have both mandatory access controls and discretionary access controls. Docker should be adjusted and configured to work with these mechanisms on every operating system on which it runs.
For example, on CentOS and Fedora you have SELinux, which allows for highly granular access control. And Docker is perfectly compatible with it. Even so, many opt to disable SELinux because some guides online advocate this. While it can make services easier to configure with SELinux disabled, this is not a good idea! Use what’s there and what’s been proven to work, and don’t disable things just because various guides tell you to. Instead, invest the time and effort to make it work. Go the extra mile, and your system will be far safer for it. In the long run it will pay off.
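On an SELinux host, making Docker cooperate with it boils down to two checks. This is a sketch of what that looks like (the daemon.json path is Docker's standard location):

```shell
# 1. Confirm SELinux is enforcing on the host
getenforce

# 2. Tell the Docker daemon to apply SELinux labels to containers
#    by setting this in /etc/docker/daemon.json:
# {
#   "selinux-enabled": true
# }
# then restart the daemon:
sudo systemctl restart docker
```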
9. Restrict system calls from within the image
All system calls can be allowed or denied. Not all system calls are needed for a container to run and function.
Keeping this in mind, you can get a list of all the system calls that are being made from within the container and only allow those and nothing else. Remember that different operating systems will have different names for system calls and will use different system calls to perform the same action, so you’ll need to adapt your configuration accordingly.
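In Docker this is done with a seccomp profile passed via `--security-opt`. The profile path, syscall list, and image name below are illustrative; in practice you would start from Docker's default profile and trim it down to what your container actually calls:

```shell
# Run a container with a custom seccomp profile that allows
# only an explicit list of system calls
docker run --security-opt seccomp=/path/to/profile.json myorg/app:minimal

# A profile entry looks roughly like:
# {
#   "defaultAction": "SCMP_ACT_ERRNO",
#   "syscalls": [
#     { "names": ["read", "write", "exit_group"],
#       "action": "SCMP_ACT_ALLOW" }
#   ]
# }
```

With `defaultAction` set to `SCMP_ACT_ERRNO`, any syscall not explicitly listed simply fails, which is exactly the deny-by-default posture this section describes.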
10. Limit network access
Network bandwidth can be treated like any other resource of the system. You have a limit on it, just like you have a limit on CPU processing power. It’s up to you to make allocations and specify how much incoming and outgoing traffic there can be per container. Some will need more, some less.
In addition, your firewall rules should only ever allow the traffic your containers need to work. For example, if you have an Apache server running in one container serving a website frontend, that container only needs to communicate on two ports (80 and 443) to the outside world. Internally, it should only be allowed to communicate with the container that runs the website’s backend, and not — for example — with the database server for the web application. There’s just no need for it. The web server communicates with the backend container, and the backend container communicates with the database. So you have predefined lines of communication. In short, only allow the bare minimum of what is needed.
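The web/backend/database topology above can be sketched with user-defined Docker networks, so the web container has no route to the database at all (container and image names are illustrative assumptions):

```shell
# Two isolated networks: one for web <-> backend, one for backend <-> db
docker network create frontend-net
docker network create backend-net

# Web tier: only on frontend-net, only ports 80/443 published
docker run -d --name web --network frontend-net -p 80:80 -p 443:443 httpd

# Backend: joins both networks, bridging the two tiers
docker run -d --name api --network frontend-net myorg/backend
docker network connect backend-net api

# Database: only on backend-net, unreachable from the web container
docker run -d --name db --network backend-net postgres
```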
Looking to level up your Docker container IQ?
Ready to get serious about container security? Check out my four-part learning path all around securing containers, starting with the Secure Container Host Operating System course.
Looking for more? You got it. A Cloud Guru has the Docker and Kubernetes content you need to go from container newcomer to container champion in no time flat.
Keep up with all things Kubernetes with our original series Kubernetes This Month, or check out some of our other container-related courses.
- Learn the basics of Amazon Elastic Kubernetes Service (EKS) or get an intro deploying containers with Google Kubernetes Engine (GKE).
- Come up to speed on Docker pronto with our Docker quick start course.
- Get in the weeds with a Docker deep dive.
- Learn the nitty-gritty around Kubernetes security.