Kube: A Starting Point
Disclaimer: The loose terminology is intentional for the sake of simplicity. This is a beginner’s guide after all.
Kube. Kubernetes. K8s. So where do you start? Well, that depends on your background. But almost everyone has logged onto a Linux machine at some point. If you haven't, I recommend starting with Linode's documentation - set up a server, log in via SSH, and secure that server.
In the older days before containers and orchestrators, cloud providers, and in some cases even automation tools, apps were hosted in on-prem datacenters, installed on Linux servers, and manually managed by a SysAdmin (loose definition). This included everything, from the load balancers to the databases. Linux refers to these "apps" (HAProxy, Certbot, Nginx, your app, etc...) as services. Services are set up and managed by a service manager, for example `systemd` (which you drive with the `systemctl` command). The service manager uses configuration files to start each service and make sure it stays in a running state. It also provides the application with its startup configuration filepath. A multitude of settings are possible. Even when the system's package manager was used to install an application, if that app was meant to stay running, a `systemd` unit file was most likely created for it upon install.
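To make that concrete, here's a minimal sketch of what such a unit file might look like. The app name, binary, and config path are made up for illustration:

```ini
# /etc/systemd/system/myapp.service (hypothetical example)
[Unit]
Description=My example web app
After=network.target

[Service]
# Hand the app its startup configuration filepath
ExecStart=/usr/local/bin/myapp --config /etc/myapp/config.yml
# Keep it in a running state: restart if it ever exits
Restart=always
User=myapp

[Install]
WantedBy=multi-user.target
```

With that file in place, `sudo systemctl enable --now myapp` starts the service and keeps it coming back across reboots.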
But getting the application and all its supporting services running wasn't the end of it. You also had to make sure all the networking was done properly: opening specific firewall ports to specific IPs and ensuring connections were successfully established.
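As a rough sketch of what that looked like, here's a `ufw` example (the app port and IP are placeholders):

```sh
# Allow web traffic from anywhere
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Allow the app's port only from one specific IP (placeholder address)
sudo ufw allow from 203.0.113.10 to any port 8080 proto tcp
sudo ufw enable
```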
Of course, Ansible definitely helped make all this a lot easier, but it was still a very manual process. Let's just say: lots of logging onto servers to pull up logs. And then containers came along.
With the help of Docker, containers made it possible to run what feel like minimal VMs (minimal Linux instances - they aren't true VMs, but the analogy works for now) on any local machine. Naturally, this changed everything. All of a sudden it was possible to run multiple copies of the same application on the same machine and have them work together. Hence, something had to come about to help manage that. Enter Kubernetes. There were others before and after; read up on the K8s lore.
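For example, two copies of the same image running side by side on one machine (nginx used purely as a stand-in app):

```sh
# Two containers from the same image, mapped to different host ports
docker run -d --name web-1 -p 8081:80 nginx:alpine
docker run -d --name web-2 -p 8082:80 nginx:alpine
# Both show up as independent, running copies
docker ps
```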
Kube is referred to as a Container Orchestration platform. It orchestrates the creation, running, and networking of containers given specified parameters. It is able to scale these containers based on load and stress, and perform self-healing operations - like replacing failed containers - resolving incidents before you even notice them. If you're using a cloud provider, those operations can even include automatically increasing/decreasing the number of instances needed based on usage. So it does all the networking, managing of file systems, everything. You just give it a set of parameters to stay within during a given situation.
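To give a feel for what "a set of parameters" means, here's a minimal Deployment sketch - the names and image are placeholders, not anything from a real project. It says: keep three copies of this container running, and replace any that die.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                 # keep three copies running at all times
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:alpine   # placeholder image
          ports:
            - containerPort: 80
```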
It is firmly a container orchestration platform. But for the sake of this article, assuming we only know basic Linux, Kube can be looked at as an Operating System. Not unlike Linux (in fact, Kube nodes are typically Linux instances), Kube is a place to install applications and services, network them, and run them. If we look at it this way, some pieces start to fall into place. For example, to serve an application to the web, you still need NGINX, or a webserver in general. And luckily, NGINX has a prebuilt ingress controller that works with Kube right out of the box.
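One common way to install it is via helm using the community chart - this is the publicly documented repo, but double-check the project's docs for current instructions:

```sh
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx
```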
The next question is "How does it do all of this?". Through a "controller node" (Kubernetes calls this the control plane). This controller sits on a separate VM and exposes an API server for interacting with the Operating System that is Kube. The controller is provided with a set of "worker nodes" - which can be considered Linux instances themselves - that are used to run all of these services and applications. Once the controller has a list of worker nodes, it installs an agent on each of them (the kubelet) so they can report back to the controller, installs a container runtime like containerd or Docker for running applications, and performs a few more tasks to ensure the worker node is secure. And as a note, in a cloud-provider-managed Kubernetes environment like the one below, this controller node is completely managed by the cloud provider.
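Once your kubeconfig points at a cluster, you can see those workers directly. On a managed cluster (like Linode's LKE), the control plane itself won't show up in the list:

```sh
kubectl get nodes -o wide
```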
What does this leave us with? An operating system that you interact with through an API. And that API accepts yaml files as its data. We make API calls, provide the necessary data, and it performs the task. As opposed to Linux, where you'd have to log onto the server and manually perform those tasks, or use an automation tool like Ansible to do essentially the same thing but quicker. The handy tool recommended for interacting with K8s, `kubectl`, helps us make these API calls with ease.
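You can even peek underneath kubectl's convenience layer - these two commands hit roughly the same endpoint:

```sh
# The friendly version
kubectl get pods --namespace default

# The raw API path kubectl is calling under the hood
kubectl get --raw /api/v1/namespaces/default/pods
```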
So with that background, take a look at this example Linode Kube project. Set it up and try it out.
Here’s a few hints to help along the way:
- `manifests/` contains the yaml files created using `helm template`.
- `site/` contains the static site, but more importantly the `Dockerfile` (first use of NGINX).
- `charts/site` contains the helm chart for installing the site. This is just a templated way of installing all the necessary pieces that allow Kube to run the app. Look at each template for what it's installing and piece together its responsibility.
- Another nginx hint: it uses an "ingress" and an "ingress controller" (see the sketch after this list).
  - The ingress allows traffic in on a specified IP. If used in a cloud provider, creating this typically results in the creation of a LoadBalancer.
  - The ingress controller is what routes all that traffic to the apps and services you have running.
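Here's a sketch of what such an ingress might look like - the hostname and service name are placeholders, not the actual values from the project:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: site
spec:
  ingressClassName: nginx    # hand this resource to the NGINX ingress controller
  rules:
    - host: example.com      # placeholder hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: site   # placeholder Service name
                port:
                  number: 80
```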
One final note before you start your journey. Instead of using Minikube locally as most guides recommend, a cloud provider will allow you to interact with Kubernetes over the web, like in a production setting. This includes the ability to set up load balancers, route traffic to your cluster, and secure your applications. I recommend Linode here because of its simplicity, and for that reason only. I would not recommend this setup as production grade.
And Minikube is freaking awesome! But it's easier to work with once you've explored Kube in a more native setting, where it's right at home.