What is Kubernetes?
Using containers makes life simpler. You can package an application together with everything it needs to run, then execute that package without the overhead of a full virtual machine. Containers are powerful, useful tools, but they are only part of the picture. With that much power, we need a way to control and manage them. So how do we do that? By orchestrating the containers with an orchestration tool such as Docker Swarm or, in this case, Kubernetes.
Kubernetes is an open-source system for managing containers: it provides tools for deploying, scaling, and operating containerized applications. Written in the Go programming language, it is not itself a Platform-as-a-Service but a foundational framework that a PaaS can sit on top of. It lets users choose whichever application frameworks and other tools they would like when building a platform that fits their needs.
How It Works: The Kubernetes Ecosystem
How it all works is fairly simple. In Kubernetes, the smallest deployable unit is called a pod. A pod is a group of one or more containers scheduled together on the same node (a virtual or physical machine) and designed for easy communication: containers in a pod share a network namespace and can reach each other over localhost. A cluster is a set of nodes that run containerized applications, and every cluster has at least one node. Worker nodes host the pods that make up the application, while the Control Plane manages the worker nodes and pods in the cluster. In production, the Control Plane typically runs across multiple nodes, providing high availability and reliability. Both the Control Plane and the nodes have components that are crucial to their functionality.
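As a minimal sketch of what a pod looks like in practice (the names and images here are illustrative, not from the article), a Pod grouping two containers on the same node might be declared like this:

```yaml
# pod.yaml -- a hypothetical Pod with two containers; both share the
# pod's network namespace and can talk to each other over localhost
apiVersion: v1
kind: Pod
metadata:
  name: web-pod
spec:
  containers:
  - name: web
    image: nginx:1.25        # illustrative image choice
    ports:
    - containerPort: 80
  - name: log-sidecar
    image: busybox:1.36      # illustrative sidecar that just stays alive
    command: ["sh", "-c", "tail -f /dev/null"]
```

You would create it with `kubectl apply -f pod.yaml`; in real workloads, pods are usually created indirectly through higher-level objects like Deployments rather than by hand.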
Kubernetes works on the principle of desired state versus actual state. The desired state is declared in a YAML or JSON definition. Kubernetes controllers then continuously compare the actual state of the cluster against that declared state and reconcile any difference.
For example, say the defined state calls for 3 pods and one of them is killed off. Will another pod start? Yes! Kubernetes will keep starting replacement containers until there are 3 running pods again, because it does not stop reconciling until the actual state matches the defined state, which is a powerful property to have. Kubernetes primitives also go beyond Pods: there are Deployments, Persistent Volume Claims, Services, routes, and more. You can map these primitives onto your own workloads and traditional IT environment.
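The 3-pod example above corresponds to a Deployment whose defined state requests three replicas. This hypothetical manifest is a sketch with illustrative names; the `replicas: 3` field is the defined state that Kubernetes keeps reconciling against:

```yaml
# deployment.yaml -- defined state: always three replicas of the app.
# If a pod dies, the Deployment's controller starts a replacement.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3                # the defined state Kubernetes enforces
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # illustrative image choice
```

Deleting one of the three pods (for example with `kubectl delete pod <name>`) triggers the controller to schedule a new one, bringing the actual state back to the defined state.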
Over time, Kubernetes will increase development velocity and reduce the complexity of operational duties. It can also help secure your applications, but security is not automatic: it must be built into each phase of the container life cycle: build, deploy, and runtime. Misconfigurations introduced at any of these phases can be costly. Common issues cover a variety of concerns, from vulnerable images that should be removed before the deploy phase to overly permissive cluster settings. The following tips and configurations help ensure that your Kubernetes cluster is set up for maximum coverage.
1. Use the latest version available
Security fixes ship with each Kubernetes release, which arrives roughly every few months. It is a best practice to run the latest version, or at most a version or two behind; the further behind you fall, the harder an upgrade becomes. Planning to upgrade with each release cycle keeps the process manageable. You can check your version with the following command:
$ kubectl version
2. Use namespaces to make a boundary
Creating separate namespaces is an important first level of isolation between system components. With different workloads living in separate namespaces, it becomes easy to apply Network Policies as security controls. You can check your current namespaces with the following command:
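A namespace can also be created declaratively. This minimal manifest is a sketch (the name `team-a` is an assumption for illustration):

```yaml
# namespace.yaml -- a dedicated namespace acting as a boundary
# between one team's workloads and the rest of the cluster
apiVersion: v1
kind: Namespace
metadata:
  name: team-a
  labels:
    team: a            # labels like this can later be matched by policies
```

Applying it with `kubectl apply -f namespace.yaml` creates the boundary; workloads are then deployed into it with `kubectl apply -n team-a -f <manifest>`.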
$ kubectl get ns
3. Create Cluster network policies
Network Policies let you control network access to and from your containerized applications. To use them, you need a networking provider that supports them; with Google Kubernetes Engine, or GKE, for example, you have to opt in to enable this resource. Once enabled, it is recommended to start with basic default policies, such as one that blocks traffic from other namespaces.
Once you know policies are supported and you have it set up, you can use the following to check it:
$ kubectl get networkpolicies --all-namespaces
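As a sketch of the "block traffic from other namespaces" default mentioned above, a deny-all-ingress policy for one namespace might look like this (the namespace name is illustrative). Once it is in place, pods in that namespace only receive traffic that a later, more specific policy explicitly allows:

```yaml
# networkpolicy.yaml -- default-deny ingress for every pod in team-a;
# an empty podSelector matches all pods in the namespace, and listing
# Ingress with no ingress rules means no inbound traffic is allowed
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a        # assumed namespace name for illustration
spec:
  podSelector: {}
  policyTypes:
  - Ingress
```

From there, you add narrower allow policies per workload, so cross-namespace traffic is opt-in rather than open by default.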
4. Harden Your Node Security
Securing your nodes is a key priority. The following tips will help you harden the nodes, the core of your system setup:
- Make sure the Host is secure and configured correctly.
- Limit access to sensitive ports via network blocks. In particular, block access to the kubelet ports 10250 and 10255, and allow only trusted networks to reach the Kubernetes API. These ports have frequently been abused by malicious users, for example to run crypto miners in clusters where the kubelet or Kubernetes API was left without authentication and authorization.
- Control admin access to the nodes. This should be restricted in general, as most tasks can be completed without access to the node.
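One concrete way to lock down the kubelet ports mentioned above is through the kubelet's own configuration. This fragment is a sketch, not a complete config: it disables anonymous access on port 10250 and turns off the unauthenticated read-only port 10255 entirely, on the assumption that the API server handles authentication and authorization via webhooks:

```yaml
# KubeletConfiguration fragment (sketch, not a full kubelet config)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false      # reject unauthenticated requests on port 10250
  webhook:
    enabled: true       # delegate authentication to the API server
authorization:
  mode: Webhook         # delegate authorization to the API server
readOnlyPort: 0         # disable the unauthenticated read-only port (10255)
```

Combine this with the network blocks above; the config hardens the kubelet itself, while the firewall rules keep untrusted networks from reaching it at all.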
The capabilities built into containers and Kubernetes make extremely secure applications possible, but getting the settings just right takes real work and can be a daunting task. Following best practices like the ones mentioned here when setting up and configuring the service puts you in the best position for strong security. Kubernetes, in general, is a great service to use to manage your containers; it has become the standard orchestrator for containers and microservices. By embracing DevOps principles and container technologies, businesses are transforming their organizations and making their services more valuable, strategic, and effective.