Posted on 02 Mar 2020
This is the fourth article of the Getting Started with Kubernetes article series. In this article, I want to explain how I run my applications on a Kubernetes cluster using a simple project based on Vagrant and VirtualBox. In order to test the cluster, we will create a “Hello K8s” application for Kubernetes.
Almost all the tutorials on the Internet suggest starting with Minikube, a single-node version of Kubernetes whose goal is to make life easier for those approaching the platform. The problem with Minikube is that it doesn't let you experience the essence of Kubernetes, that is, orchestration across multiple nodes. With Minikube you can't see what happens to your Pods when a node goes down.
Then there are tools like Kubespray that, thanks to virtualization tools like Vagrant and VirtualBox, allow you to run a cluster with multiple nodes on your development machine. This option is certainly valid when you are already familiar with Kubernetes, but in the beginning it abstracts away many activities and does not let you understand which components your application really needs and how they are installed.
For this reason, I use the k8s-cluster project, which allows you to create a cluster on your development machine thanks to Vagrant and VirtualBox. Here are the commands to create the cluster:
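A minimal sketch of the steps, assuming you clone the project from its Git repository (the repository URL and directory name below are placeholders):

```bash
# Clone the k8s-cluster project (replace the URL with the actual repository)
git clone https://github.com/<your-account>/k8s-cluster.git
cd k8s-cluster

# Create and provision the virtual machines on VirtualBox
vagrant up
```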
The idea behind k8s-cluster is to have a YAML configuration file in which you describe the number of desired nodes and their characteristics. By default, we have 3 nodes with 2 CPUs and 2 GB of RAM each, running Ubuntu 16.04 Xenial as the operating system.
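A hypothetical sketch of what such a configuration file might look like (the field names are illustrative; the real file in the project may differ):

```yaml
# Hypothetical cluster configuration: one master and two workers,
# each with 2 CPUs and 2 GB of RAM running Ubuntu 16.04 Xenial.
- name: k8s-master
  box: ubuntu/xenial64
  cpus: 2
  memory: 2048
- name: k8s-node-1
  box: ubuntu/xenial64
  cpus: 2
  memory: 2048
- name: k8s-node-2
  box: ubuntu/xenial64
  cpus: 2
  memory: 2048
```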
The YAML file is read by the Vagrantfile, which instantiates on VirtualBox the number of nodes reported in the file with the described characteristics. Each node is then configured with three scripts: configure_box.sh, configure_master.sh, and configure_worker.sh.
The Vagrantfile is very simple: it reads the YAML file and, for each entry, generates a Server type object. It is important that the first entry is always the master. At this point, Vagrant loops over each Server object and instantiates a node on VirtualBox. If the node is the master, it executes the configure_box.sh and configure_master.sh scripts, otherwise configure_box.sh and configure_worker.sh.
This script installs the following components on all three nodes: the Docker engine, kubectl, kubeadm, and kubelet.
Here is the code to install the Docker engine:
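What follows is a sketch of a typical Docker installation on Ubuntu 16.04; the actual script in the project may use slightly different commands:

```bash
# Install the packages needed to add the Docker apt repository
apt-get update
apt-get install -y apt-transport-https ca-certificates curl software-properties-common

# Add Docker's official GPG key and the apt repository for Xenial
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu xenial stable"

# Install the Docker engine
apt-get update
apt-get install -y docker-ce
```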
Then the script adds the vagrant user to the docker group, so that it can run docker commands without sudo.
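For example:

```bash
# Allow the vagrant user to run docker commands without sudo
usermod -aG docker vagrant
```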
The script installs kubectl, kubeadm, and kubelet using the following code:
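A sketch of the standard kubeadm installation on Ubuntu; the real script may pin specific package versions:

```bash
# Add the Kubernetes apt repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

# Install the Kubernetes tools
apt-get update
apt-get install -y kubelet kubeadm kubectl
```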
This script performs the following actions: it initializes the cluster, configures the vagrant user to use kubectl commands, installs the Calico network plugin, generates the joining script for the worker nodes, and enables ssh password authentication.
This is the code to initialize the cluster:
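A sketch of the initialization, assuming the master advertises its private network IP and that the Pod network CIDR matches the one expected by Calico (both values below are placeholders):

```bash
# Initialize the control plane on the master node
kubeadm init \
  --apiserver-advertise-address=<master-private-ip> \
  --pod-network-cidr=172.16.0.0/16
```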
The second step configures the vagrant user to use kubectl commands:
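These are the usual post-init steps suggested by kubeadm, adapted here for the vagrant user:

```bash
# Copy the admin kubeconfig into the vagrant user's home
mkdir -p /home/vagrant/.kube
cp /etc/kubernetes/admin.conf /home/vagrant/.kube/config
chown -R vagrant:vagrant /home/vagrant/.kube
```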
The third step installs the Calico network plugin:
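Something along these lines, where the exact manifest URL depends on the Calico version in use:

```bash
# Install the Calico network plugin (manifest URL is version dependent)
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
```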
Then the script generates the joining script to run on the worker nodes:
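A common way to do this is with kubeadm token create; the output file path here is only an assumption:

```bash
# Save the join command so that worker nodes can fetch and run it
kubeadm token create --print-join-command > /etc/kubeadm_join_cmd.sh
chmod +x /etc/kubeadm_join_cmd.sh
```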
Finally, the script configures ssh to enable password authentication:
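For example, by patching sshd_config and restarting the ssh daemon (the actual script may do it differently):

```bash
# Enable password authentication so workers can scp the join script
sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config
systemctl restart ssh
```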
On the worker nodes, the only step performed is copying the joining script from the master node and executing it to let the worker node join the cluster.
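A minimal sketch, assuming sshpass is available, the master's IP is known, and the script path matches the one used above:

```bash
# Copy the join script from the master node and run it
sshpass -p "vagrant" scp -o StrictHostKeyChecking=no \
  vagrant@<master-private-ip>:/etc/kubeadm_join_cmd.sh .
sh ./kubeadm_join_cmd.sh
```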
Currently, to manage the cluster you need to access your Vagrant machines via ssh in order to use kubectl commands. You can avoid this by installing kubectl on your local machine and using it to control your cluster.
To do that you need to install kubectl on your machine following this guide. Then you need to copy the Kubernetes credentials from your remote host:
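For example, assuming the master's private IP and the kubeconfig location used earlier:

```bash
# Copy the kubeconfig generated on the master to your local machine
mkdir -p ~/.kube
scp vagrant@<master-private-ip>:/home/vagrant/.kube/config ~/.kube/config
```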
Running the kubectl get nodes command, you should see the cluster nodes.
This is your first Kubernetes “Hello World” application. It is an Nginx web server that listens on port 80; when you connect to it with your browser, a “Hello World!” message appears along with the hostname and the image version. This is useful to understand which Pod responded to a browser request and which version of the application is currently in use.
In the following article, I created a Hello World application for Docker that we will reuse for Kubernetes with small changes. Here is the Dockerfile.
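Since the exact file lives in the linked repository, what follows is only a sketch of what it might look like; the base image, package versions, and paths are assumptions:

```dockerfile
# Hypothetical Dockerfile for the hello-k8s application
FROM ubuntu:16.04

# Install Nginx and PHP-FPM to serve the index.php page
RUN apt-get update && \
    apt-get install -y nginx php7.0-fpm && \
    rm -rf /var/lib/apt/lists/*

# Copy the application page and the startup script
COPY index.php /var/www/html/index.php
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

EXPOSE 80
ENTRYPOINT ["/entrypoint.sh"]
```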
As you can notice, in this Dockerfile we install PHP in addition to Nginx in order to run the index.php file in the www-data folder. The reason we use a PHP file instead of an HTML one is that we want to print the application version and the hostname, so we know which version of the application we are running and on which Pod.
The ENTRYPOINT of the Docker container is the entrypoint.sh script, which sets the right permissions on the /var/log/nginx folder and starts the php7.0-fpm and nginx services. You can check out the source code here.
The Docker image of this application is available on my Docker Hub account as sasadangelo/hello-k8s.
Kubernetes allows you to run a containerized application with three approaches: generators, imperative, and declarative. The first two methods are achieved via the kubectl CLI, while the third is achieved by declaring the desired state in a YAML configuration file. In all cases, the result is the application running on the cluster.
Let’s analyze all these methods in detail.
This is the easiest method and is achieved using the kubectl run and kubectl expose commands. It is useful when you want to run a quick test just to check whether the application works. Since no Deployment is created behind the scenes, you cannot scale the Pod.
The command to run the application is:
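A sketch of the command, assuming the Pod is named hello-k8s (the name is an arbitrary choice):

```bash
# Run a single Pod from the image on Docker Hub
kubectl run hello-k8s --image=sasadangelo/hello-k8s --port=80
```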
Check if the Pod is running by typing the kubectl get pods command. In order to connect with the browser from your host machine, you need to expose the Pod via a Service using the following command:
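For example, exposing the Pod through a NodePort Service so it is reachable from outside the cluster (again assuming the name hello-k8s):

```bash
# Expose the Pod on a NodePort so the host machine can reach it
kubectl expose pod hello-k8s --type=NodePort --port=80
```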
You can now type in your browser the URL IP:PORT, where IP is the 192.168.x.x address of one of the two worker nodes (k8s-node-1 or k8s-node-2) and PORT is the one you get by typing the command:
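```bash
# The NodePort is shown in the PORT(S) column of the Service
kubectl get services
```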
Clean up the configuration using the commands:
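Assuming the names used above:

```bash
# Remove the Service and the Pod created above
kubectl delete service hello-k8s
kubectl delete pod hello-k8s
```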
This method is achieved using the kubectl create and kubectl expose commands. The first command creates a Deployment behind the scenes, so you can scale the Pods as you prefer. The command to deploy and run the application is:
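A sketch, again assuming the Deployment is named hello-k8s:

```bash
# Create a Deployment that manages the hello-k8s Pods
kubectl create deployment hello-k8s --image=sasadangelo/hello-k8s
```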
Check if the Deployment is created and the Pod is running by typing the kubectl get deployments and kubectl get pods commands.
In order to connect with the browser from your host machine, you need to expose the Deployment via a Service using the following command:
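For example, using a NodePort Service as before:

```bash
# Expose the Deployment on a NodePort
kubectl expose deployment hello-k8s --type=NodePort --port=80
```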
You can now type in your browser the URL IP:PORT, where IP is the 192.168.x.x address of one of the two worker nodes (k8s-node-1 or k8s-node-2) and PORT is the one you get by typing the command:
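```bash
# The NodePort is shown in the PORT(S) column of the Service
kubectl get services
```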
Scale the application to 5 pods with the following command:
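Assuming the Deployment name used above:

```bash
# Scale the Deployment up to 5 replicas
kubectl scale deployment hello-k8s --replicas=5
```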
See the 5 pods running using the kubectl get pods command. If you press your browser's Reload button repeatedly, you will notice the hostname change from time to time because different Pods respond to the requests. Attention! You may have to press Reload many times before you see the hostname change, due to Pod affinity.
Clean up the configuration using the commands:
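Assuming the names used above:

```bash
# Remove the Service and the Deployment
kubectl delete service hello-k8s
kubectl delete deployment hello-k8s
```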
This method is achieved using the kubectl apply command. This command uses a deployment file where the Deployment and Service resource objects are defined.
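A sketch of what such a file might contain, assuming the names used so far (the real file in the repository may differ):

```yaml
# Hypothetical hello-k8s.yml: a Deployment with 5 replicas and a NodePort Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-k8s
spec:
  replicas: 5
  selector:
    matchLabels:
      app: hello-k8s
  template:
    metadata:
      labels:
        app: hello-k8s
    spec:
      containers:
      - name: hello-k8s
        image: sasadangelo/hello-k8s
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s
spec:
  type: NodePort
  selector:
    app: hello-k8s
  ports:
  - port: 80
    targetPort: 80
```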
The command to deploy and run the application is:
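Assuming the file is called hello-k8s.yml:

```bash
# Apply the desired state described in the YAML file
kubectl apply -f hello-k8s.yml
```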
You can see the 5 pods running using the kubectl get pods command. You can now type in your browser the URL IP:PORT, where IP is the 192.168.x.x address of one of the two worker nodes (k8s-node-1 or k8s-node-2) and PORT is the one you get by typing the command:
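```bash
# The NodePort is shown in the PORT(S) column of the Service
kubectl get services
```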
Clean up the configuration using the command:
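Assuming the same YAML file used above:

```bash
# Delete the Service and the Deployment defined in the YAML file
kubectl delete -f hello-k8s.yml
```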
In this article, we started playing with Kubernetes by creating our own cluster and deploying a “Hello World” application using different approaches. In the next articles, we will explore Kubernetes further by running more complex applications.