Deploying Docker and Kubernetes on a Raspberry Pi 3 Model B+
$ nano hostname_and_ip.sh
and insert the script:
#!/bin/sh
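# Sketch of one way to set the hostname and a static IP on Raspbian; the eth0
# interface name and the /24 netmask are assumptions, so adapt them to your network.
hostname=$1   # new hostname, e.g. k8s-master
ip=$2         # new static IP, e.g. 192.168.1.100
dns=$3        # router IP, used here as gateway and DNS, e.g. 192.168.1.1

# Replace the old hostname in /etc/hosts and /etc/hostname
sudo sed -i "s/$(hostname)/$hostname/g" /etc/hosts
echo "$hostname" | sudo tee /etc/hostname > /dev/null

# Append a static configuration for eth0 to /etc/dhcpcd.conf
sudo tee -a /etc/dhcpcd.conf > /dev/null <<EOF

interface eth0
static ip_address=$ip/24
static routers=$dns
static domain_name_servers=$dns
EOF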
Now run the script. An example of my naming and IP convention can be seen below, but adapt it to your liking.
First argument: the new hostname
Second argument: the new static IP
Third argument: the IP of your Router
master: 192.168.1.100
$ sh hostname_and_ip.sh k8s-master 192.168.1.100 192.168.1.1
worker-01: 192.168.1.101
$ sh hostname_and_ip.sh k8s-worker-01 192.168.1.101 192.168.1.1
worker-02: 192.168.1.102
$ sh hostname_and_ip.sh k8s-worker-02 192.168.1.102 192.168.1.1
worker-03: 192.168.1.103
$ sh hostname_and_ip.sh k8s-worker-03 192.168.1.103 192.168.1.1
Now, reboot the Pi. You should be able to access the Pi over SSH as follows:
$ ssh pi@k8s-master.local (or k8s-worker-01.local etc.)
Verify that your Pi now also has a new static IP by running ifconfig.
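For example, assuming the wired interface is named eth0:
$ ifconfig eth0
The inet field should show the static address you configured above.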
Installing the prerequisites
Now that the static networking and naming are in place, we need to install some software on the Raspberry Pi.
Therefore, create a new file on the Raspberry Pi:
$ nano install.sh
Copy and insert the following script.
#!/bin/sh
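# Sketch of the usual steps on Raspbian: install Docker, disable swap (which
# kubeadm requires), add the Kubernetes apt repository and install kubeadm.
# Package names and the repository suffix may differ for your OS release.

# Install Docker via the convenience script and let the pi user run it
curl -sSL get.docker.com | sh
sudo usermod -aG docker pi

# Disable swap permanently
sudo dphys-swapfile swapoff
sudo dphys-swapfile uninstall
sudo update-rc.d dphys-swapfile remove

# Add the Kubernetes apt repository and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update -q
sudo apt-get install -qy kubeadm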
Execute the script
$ sh install.sh
This will install and configure Docker, disable swap, and install kubeadm.
Reboot the machine, and repeat this process for all your Raspberry Pis.
Initialize the Kubernetes master
So, we are now ready to set up Kubernetes. To do this, we are going to use the awesome tool called kubeadm, which makes it pretty easy to spin up a Kubernetes cluster: you basically run kubeadm init on the master node and kubeadm join on the worker nodes.
One of the purposes of this cluster is going to be demoing Kubernetes stuff. One example could be to pull the network cable out of one of the worker nodes and demo how Kubernetes deals with this situation by rescheduling the pods from the lost node.
Therefore, we would like to change one of the arguments to the kube-controller-manager, namely pod-eviction-timeout, which defaults to 5 minutes. That's a long time to wait in a presentation, so we want to change it to 10s instead. You may also want to change the time that Kubernetes allows a node to be unresponsive; it defaults to 40 seconds. To change this, add the following argument to the master configuration: node-monitor-grace-period: 10s.
Changing arguments passed to the different Kubernetes core components by kubeadm is pretty simple. We just have to pass a YAML configuration file specifying the arguments we want to change. Let’s do that.
Create the configuration file:
$ nano kubeadm_conf.yaml
Copy and insert the following:
apiVersion: kubeadm.k8s.io/v1alpha1
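# A minimal sketch, assuming the v1alpha1 MasterConfiguration schema accepted
# by older kubeadm releases; both flags are passed to the kube-controller-manager.
kind: MasterConfiguration
controllerManagerExtraArgs:
  pod-eviction-timeout: 10s
  node-monitor-grace-period: 10s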
Save and run:
$ sudo kubeadm init --config kubeadm_conf.yaml
This takes a couple of minutes. Once the process finishes, you should see something similar to:
...
Follow the instructions in the output:
$ mkdir -p $HOME/.kube
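# kubeadm's output typically also asks you to copy the admin kubeconfig into
# place; take the exact lines from your own output, but they usually look like:
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config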
You can verify that your master node is up and running.
pi@k8s-master:~ $ kubectl get nodes
Don’t mind the status being NotReady. In order for the master node to become ready, we need to install a container network. But before we do that, let’s add some more nodes to the cluster first.
Setting up the worker nodes
Alright, next up, we need to spin up some workers to complete the cluster.
Assuming you have already set up the prerequisites mentioned above, you basically only need to run kubeadm join on each of your worker nodes. As shown above, kubeadm outputs the command that you need to run on all your worker nodes.
$ sudo kubeadm join --token TOKEN 192.168.1.100:6443 --discovery-token-ca-cert-hash HASH
Repeat for every node.
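If you no longer have the join command at hand, recent kubeadm versions can print a fresh one on the master (a convenience I am assuming your kubeadm version supports):
$ sudo kubeadm token create --print-join-command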
Set up Weave Net as the container network
Nodes are not able to communicate without a container network, which is something you have to provide. Therefore, the final piece of the puzzle is to add one. We will be using weave-net for this. On the master node, run the following command:
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
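You can watch the Weave pods come up on every node with the command below; kube-system is the namespace the manifest installs into by default:
$ kubectl get pods -n kube-system -o wide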
All set and done…
That was it. You should now have a fully functioning Raspberry Pi Kubernetes cluster. Verify your setup by running:
pi@k8s-master:~ $ kubectl get nodes