This is part one of a few posts about Kubernetes… If you don’t know what Kubernetes is, then you probably came here by mistake… I’ll make a simple setup, with one master and one worker node, connected by a WireGuard VPN (it seems that it still isn’t finished/stable/tested enough, but it should probably be OK for this setup?), using Flannel for semi-automatic network configuration between the nodes in the cluster (though, as you’ll see below, I ended up with Weave Net). For more info check their GitHub page and the Kubernetes page about cluster networking.

I installed this on Hetzner Cloud, CX11 instances with Ubuntu 18.04, but all steps should be similar for other Linux versions… The setup is simple: one master node and one worker node. These steps are compiled from the official tutorial and bits and pieces from around the web…

On the master node, you need to install wireguard (first add its PPA), docker.io, kubeadm and the NFS server packages…

add-apt-repository ppa:wireguard/wireguard
apt install wireguard linux-headers-$(uname -r) \
		apt-transport-https ca-certificates \
		docker.io nfs-kernel-server

And enable the docker service (there is no need to start it, kubeadm will do that):

systemctl enable docker

For kubeadm, you need to add a custom repo (check if a bionic repo is available by now)…

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt install kubeadm

I decided that the master and worker node(s) should communicate over a VPN tunnel, so wireguard needs to be configured. First, generate the private and public keys:

wg genkey | tee privatekey | wg pubkey > publickey

And in /etc/wireguard/wg0.conf put:

[Interface]
Address = <desired IP>
ListenPort = <some high port>
PrivateKey = <content of privatekey generated before>
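As a concrete sketch: the 10.10.0.1 address matches the kubeadm init call further down, and 51820 is just the usual WireGuard port, both are my choices, not something you have to use:

[Interface]
Address = 10.10.0.1/24
ListenPort = 51820
PrivateKey = <content of privatekey generated before>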

Now, wireguard can be enabled and started:

systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0
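A quick way to check that the tunnel interface actually came up:

wg show
ip addr show wg0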

If you want kubernetes to listen only on the wireguard interface, you’ll need to specify that in the /etc/default/kubelet file:

KUBELET_EXTRA_ARGS=--node-ip=<ip from wg0 config>
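For example, with the wg0 address used in this post (10.10.0.1 on the master), the file would contain just:

KUBELET_EXTRA_ARGS=--node-ip=10.10.0.1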

All that is left is to run kubernetes initialization:

kubeadm init --apiserver-advertise-address 10.10.0.1 \
            --service-cidr 10.96.0.0/16 \
            --pod-network-cidr 10.244.0.0/16 \
            --ignore-preflight-errors=NumCPU
Some explanation:

  • apiserver-advertise-address - the IP address the API server will advertise and bind to. I chose the IP of the wireguard (wg0) interface.
  • service-cidr - use an alternative range of IP addresses for service VIPs.
  • pod-network-cidr - specify the range of IP addresses for the pod network. If set, the control plane will automatically allocate CIDRs for every node.

More details can be found on the kubeadm init reference page.

When init finishes, it will print the command that needs to be run on the nodes to join them to the cluster. Resist the temptation to run it now, there is some more work left on the master node…. But you do need to copy the config file to ~/.kube on your workstation (or on the server itself) so that the kubectl command can communicate with kubernetes. Since I have wireguard up and running on my laptop, I copied it there…
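If you run kubectl on the master itself, the copy boils down to the usual snippet kubeadm prints at the end of init (for a remote workstation, scp the same admin.conf over the wg0 address instead):

mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config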

Now, what is needed is to define the networking model that kubernetes will use for node and pod (container) communication. This is, for me, a complicated field; I tried flannel first, but it didn’t work for me with ufw (maybe some ports just need to be opened…).

The next one I tried was Weave Net, and it worked…

Just follow their guide for kubernetes:

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

This will create everything needed for the weave net service to work, and all new nodes will be automatically attached to the network…
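You can check that the weave pods came up (one per node, so just one for now) before joining any workers:

kubectl get pods -n kube-system | grep weave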


Now, the worker nodes need to be configured so that they can join the kubernetes cluster…

First, install needed packages:

add-apt-repository ppa:wireguard/wireguard
apt install wireguard linux-headers-$(uname -r) \
		apt-transport-https ca-certificates \
		docker.io nfs-common

Yes, the same as for the master node, except we only need nfs-common, not the NFS server. And enable the docker service (there is no need to start it, kubeadm will do that):

systemctl enable docker

Next is kubeadm:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt install kubeadm

When using wireguard, the configuration on the client is similar to the one on the server:

wg genkey | tee privatekey | wg pubkey > publickey

In /etc/wireguard/wg0.conf put:

[Interface]
Address = <desired IP>
ListenPort = <some high port>
PrivateKey = <content of privatekey generated before>

[Peer]
PublicKey = <public key generated on master node>
AllowedIPs = <all IP's allowed on interface>
Endpoint = <public IP of master node>:<port>
PersistentKeepalive = 25
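For illustration, assuming the master sits on 10.10.0.1, the worker takes 10.10.0.2, and both use port 51820 (all of these are my assumptions, adjust to your setup), the worker config could look like:

[Interface]
Address = 10.10.0.2/24
ListenPort = 51820
PrivateKey = <worker privatekey>

[Peer]
PublicKey = <master publickey>
AllowedIPs = 10.10.0.0/24
Endpoint = <public IP of master node>:51820
PersistentKeepalive = 25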

And back on the master node, add to wg0.conf:

[Peer]
PublicKey = <public key generated on worker node>
AllowedIPs = <all IP's allowed on interface>
Endpoint = <public IP of worker node>:<port>
PersistentKeepalive = 25
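Since wg0 is already up on the master, the new peer won’t be picked up until the config is re-read; the simplest (if slightly brute-force) way is to restart the interface there:

systemctl restart wg-quick@wg0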

Now, wireguard can be enabled and started on the worker as well:

systemctl enable wg-quick@wg0
systemctl start wg-quick@wg0
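At this point the tunnel can be tested from the worker by pinging the master’s wg0 address (10.10.0.1 in my example):

ping -c 3 10.10.0.1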

If you want kubernetes to listen only on the wireguard interface, you’ll need to specify that in the /etc/default/kubelet file:

KUBELET_EXTRA_ARGS=--node-ip=<ip from wg0 config>
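Again as an example, if the worker’s wg0 address is 10.10.0.2 as assumed above:

KUBELET_EXTRA_ARGS=--node-ip=10.10.0.2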

Now it is finally time to join the worker node to the master node. On the master, run the following command:

kubeadm token create --print-join-command

This will generate a token and print the complete command that needs to be executed on the worker node.

 kubeadm join ......................

On your workstation (or on the master node) run

kubectl get nodes

and you should get a list of all nodes in the cluster (including the master one).
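The output looks roughly like this (names, ages and versions will of course differ):

NAME      STATUS   ROLES    AGE   VERSION
master    Ready    master   20m   v1.13.1
worker1   Ready    <none>   2m    v1.13.1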

That’s basically it. You now have kubernetes up and running. It’s not accessible from the outside, but for that you need to install an ingress controller or some other proxy…

What is left is to enable the firewall. Since I’m on Ubuntu, ufw is what I use. There are some ports to be opened, and I allow all communication on the virtual ethernet devices, since in this case they run over the VPN…

For ssh access, you need to leave port 22 open (or whatever port your sshd is listening on) on the external interface, and open TCP 6783 plus UDP 6783/6784 (the weave net ports, in my case) on the internal interfaces if you are not allowing all traffic on them.

ufw allow 22/tcp
ufw allow 6783/tcp
ufw allow 6783/udp
ufw allow 6784/udp
ufw allow <port for wg>/udp
ufw enable

Or, if you want to allow everything on the virtual interfaces, use:

ufw allow 22/tcp
ufw allow in on wg0
ufw allow in on weave
ufw allow <port for wg>/udp
ufw enable

So, choose either block of commands; the same needs to be run on all nodes in the cluster, master and workers!

That’s it for a basic cluster that works and does nothing! :-D