A Kernel? In a Container?



Recently I was blessed with the opportunity to attend KubeCon 2017 in Austin, TX, where I had the chance to get the inside scoop on Kubernetes, OpenShift (❤❤❤), Heptio, and a few other projects that I’ve come to enjoy since starting my career at Linux Academy. By a stroke of sheer luck, I happened to be present when the OpenStack Foundation announced a totally new project combining elements from Hyper’s runV and Intel Clear Containers, named Kata Containers.

A basic diagram of Kata Containers infrastructure – from katacontainers.io

According to their website, Kata Containers offers “…the speed of containers, [with] the security of VMs” by including a lightweight custom kernel in each container.

Wait. Wut.

Being perfectly honest, I was #shook but highly intrigued, as it seemed like the container craze had come full circle, back to regular ol’ VMs. I mean, doesn’t the main attraction of containers lie in their ability to run without their own kernel, making them lighter and faster than a traditional VM? Well, once I finished scoffing, I decided to check out a live demo by the Kata Containers team to help decide if this was a project worth paying attention to. Before I get into my opinion on the project, let’s quickly run through the technical specs.

Regular containers run in isolated namespaces on a host (often a virtual machine), sitting on top of it and sharing its kernel, binaries, and libraries – usually with read-only access to those shared resources. These containers are extremely lightweight, fast, and “portable,” meaning they can be easily used and shared across a variety of different deployments (think Docker Hub). Now, that all sounds great, but because of those shared resources, containers can potentially be used to exploit kernel vulnerabilities such as Dirty COW (CVE-2016-5195), which gave a lot of potential Container Cheerleaders in industries that require high security a bit of pause before jumping on the bandwagon (and rightfully so).
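To make the shared-kernel point concrete, here’s a minimal sketch (assuming a Linux host with Python 3) that prints the kernel release every process sees, plus the per-process namespace links that give a container its “isolated view” – isolation without a second kernel:

```python
import os

# Every process on a Linux host reports the same kernel release,
# because containers share the host kernel rather than booting their own.
print("kernel release:", os.uname().release)

# Namespaces are what give a container its isolated view. Each kind of
# isolation (pid, net, mnt, uts, ...) is a separate namespace, visible
# as symlinks under /proc/<pid>/ns/. Two processes in the same container
# share these links; processes in different containers do not.
for ns in sorted(os.listdir("/proc/self/ns")):
    print(ns, "->", os.readlink(f"/proc/self/ns/{ns}"))
```

Run the same script inside a Docker container and `os.uname().release` comes back identical to the host’s – that shared kernel is exactly the attack surface Dirty COW exploited.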

Containers run in isolated namespaces within a virtual machine, sharing the VM’s kernel.


In 2015, Intel engineers created an open source project named Clear Containers, utilizing Intel VT-x technology to “lock down” each container with its own minimal OS, kernel, and binaries, isolating it from the underlying hardware and all other containers on the host, while maintaining interoperability with popular container tooling such as Docker, Kubernetes, and rkt. Clear Containers sets itself apart from regular ol’ VMs by boasting a sub-100ms boot time, all without sacrificing the security provided by traditional virtual machines.

To maintain compliance with the Open Container Initiative (OCI), Clear Containers used the hypervisor-agnostic container runtime from Hyper, named runV, which provides technology-agnostic support for multi-architecture, multi-hypervisor, and multi-tenant k8s environments – just to name a few bonuses.
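That Docker interoperability works by registering Clear Containers as an alternative OCI runtime with the Docker daemon. A hedged sketch of the configuration (the `cc-runtime` name and binary path are assumptions based on the Clear Containers 3.x packaging – check the project’s install docs for your distro):

```json
{
  "runtimes": {
    "cc-runtime": {
      "path": "/usr/bin/cc-runtime"
    }
  }
}
```

With that in `/etc/docker/daemon.json` and the daemon restarted, something like `docker run --runtime=cc-runtime alpine uname -r` would launch the container inside its own minimal VM, so the kernel release reported inside differs from the host’s – unlike a namespace-only container.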

Each pod is an isolated hypervisor that offers the speed of a container with the security of a VM, integrating seamlessly with the container ecosystem & management layers.

By These Powers Combined…

Under the guidance of the OpenStack Foundation, the two code bases were merged to create Kata Containers, a completely new open source project independent of all other OpenStack projects, and with its own governing board, but with the full support of the OpenStack Community to aid in attracting contributors and driving technology adoption. Their goal is to retool virtualization to fit container-native applications without sacrificing the speed of containers or the security of virtual machines.

I want it now. Where is it?!

Well, Kata Containers isn’t quite ready for use yet, but if you can’t wait to get your hands on some super-fast, super-secure kernel’d containers, check out the Intel Clear Containers project on GitHub. While you wait, you can talk with other equally curious individuals on the #kata-dev IRC channel.

So What do you Think?

So, it took a full 24 hours for me to get past my own cynicism after the announcement, but after a great demo and a long conversation with a very kind and patient soul at the Intel booth, I’m ready for a cup of that Kata Containers-flavored Kool-Aid. I’ll be keeping a close eye on the project to see how it develops, and I suggest you do the same.

