Speaker: Ted Dunning, Chief Application Architect at MapR
Editor’s Note: Containerization of large-scale applications is widely used to ensure you can deploy in a repeatable and predictable way. In this short Whiteboard Walkthrough video, Ted Dunning, Chief Application Architect at MapR Technologies, takes you through the steps in the life cycle of Kubernetes, a leading software for orchestration of containerized applications, as you go from source code to service. Learn more about how you can take advantage of the MapR Clarity Program to help in your journey of developing containerized applications.
Hi. I'm Ted Dunning from MapR Technologies, and I'd like to talk today about the software life cycle that you have in a Kubernetes environment. In fact, we could call it the Kubernetes life cycle. Now, one of the interesting things about this life cycle is that it's very different from old-style software life cycles, and it's even different from the virtualization life cycle that you might be familiar with. It differs in a couple of aspects, and those aspects have to do with additional steps in the process, steps which help bring more controlled repeatability to the entire journey from source code to service.
Now, for a long time we have started with source code control: source code that's compiled to a binary. But it wasn't all that long ago, a few decades, that people would patch the output of a compiler. They would actually go in and hack on the binaries. In some cases, they would even hack on the binaries of a running program. Nowadays, that is considered absolutely insane, and in modern systems, it really would be.
When computers were tiny, when they only had a few bits, it was a much more plausible thing to do, but we've realized over time that the ability to lock down different steps, lock down exactly which source code we build from, lock down exactly what the output of the compiler is so that we can reconstruct those artifacts is now recognized as a really critical step.
Now, expanding that idea is the fundamental concept behind this entire life cycle. Not only do we have a compiler that freezes the actual executable binary, but we go further and package that binary into a container. The container contains all of the dependencies, so we've removed yet another degree of freedom, one more degree of flexibility. But the flexibility there is the flexibility to fail, which is not really good flexibility, so we're removing these flexible degrees of freedom in order to get control over what the software eventually does. Just putting it into a container, however, isn't good enough.
That leaves us with something a lot like the virtualization life cycle, the virtual machine sort of life cycle for software, in which we would actually log into these virtual machines and make changes: which software is installed there, which dependencies we use, whether we upgrade the operating system. Hacking on that virtual machine is actually very, very similar to hacking on the binary, just at a larger scale.
As more and more systems are more than one machine, it's become apparent that we needed something bigger than that, and we also needed to extend that control and repeatability into that process, so what we use now are containers which are lightweight, but they have the same property of locking down dependencies. So, we've removed quite a bit of variability from the system at that point, but as I said, we need to build these systems out of more than just one machine, more than one apparent machine, more than one container.
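To make the dependency lock-down concrete, here is a minimal sketch of an image definition that freezes the base operating system layer and the library versions at build time. The application name, base image, and file paths are illustrative assumptions, not something from the talk:

```dockerfile
# Pin the base image to an exact tag so every build starts from
# the same operating system layer (illustrative version).
FROM python:3.9-slim

# Install dependencies from a locked requirements file so the same
# library versions are baked into every image that is built.
COPY requirements.txt /app/requirements.txt
RUN pip install --no-cache-dir -r /app/requirements.txt

# Copy the packaged application itself (hypothetical layout).
COPY app/ /app/
WORKDIR /app

CMD ["python", "main.py"]
```

Once built, the resulting image is an immutable artifact: rebuilding from the same sources and the same locked dependency list reproduces the same behavior, which is exactly the repeatability being described.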
The next step is what we might call construction, following the Kubernetes terminology. From there, we build a system out of a bunch of container images, grouping individual containers together into single units that have to be resident on the same machine. These units are called pods, and then we group the pods into larger amalgamations as well. This is controlling the topology, the 'hip bone connected to the shin bone' sort of aspects of a system.
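In Kubernetes terms, that grouping looks something like the following minimal pod manifest. The names and images here are hypothetical, chosen only to show the co-residency idea:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-frontend            # hypothetical pod name
spec:
  containers:
    # Both containers in a pod are always scheduled onto the same
    # machine and can share localhost networking and volumes.
    - name: web-server
      image: nginx:1.21         # illustrative image and tag
    - name: log-forwarder
      image: fluent/fluentd:v1.14
```

Everything inside one pod lives and dies together on one node; anything that can run on a different machine belongs in a separate pod.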
Now, it still leaves some degrees of freedom, which are useful at the next step, but it freezes down all of the aspects of what connects to what, and what functions are in the system, and what versions of what functions do we have. We might have load balancers, and web servers, and databases, and things like that, but those are all now frozen in terms of what functions exist and how they connect and such. We call it the topology of the system. What's connected to what?
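A sketch of how that topology gets frozen in Kubernetes: a Service selects a set of pods by label, so 'what connects to what' is declared up front, while the number of replicas is deliberately left at its default for later. All names and images below are hypothetical:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                     # hypothetical service name
spec:
  selector:
    app: web                    # wires this service to matching pods
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                # the label the service selects on
    spec:
      containers:
        - name: web-server
          image: example/web:1.0   # illustrative image
```

Note that nothing here says how many copies of the web server run or where its data lives; those degrees of freedom are intentionally still open.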
What we do not have there are the secrets needed: certificates, tickets, all of those sorts of things that are actually needed in a runtime situation. We also do not have the actual scale of certain services; there may be one instance of a service or a hundred. That's left for later. We also have not connected the underlying storage layer at that point. The final step, then, is to deploy, and that's when our software, originally our source code, becomes a full-fledged service. At this point, we lock in all of the final runtime aspects: the certificates, the tickets, all of the secrets needed to function securely, as well as binding to storage systems and setting the scale.
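Those late-bound runtime aspects map onto standard Kubernetes objects: Secrets for credentials, a replica count for scale, and a PersistentVolumeClaim for the storage binding. A hypothetical sketch of a deployment with all three filled in:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # scale is chosen at deployment time
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web-server
          image: example/web:1.0     # illustrative image
          envFrom:
            - secretRef:
                name: web-credentials  # certificates/tickets injected as a Secret
          volumeMounts:
            - name: data
              mountPath: /var/data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: web-data      # binding to the storage layer
```

Because only these final values change between environments, the same images and topology can be deployed to development, staging, and production with different Secrets, replica counts, and storage claims.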
That allows us then to deploy that service at any time at any scale in any environment in a very, very repeatable way. So, from software, from source code, all the way to service, in the Kubernetes life cycle, we have what engineering is really all about, a highly repeatable process that gives us repeatable results. I'm Ted Dunning with MapR Technologies, and this has been the Kubernetes life cycle for software.