I just decided that Proxmox is not challenging or interesting enough to keep me trying stuff and investing learning time in it. Yeah, a lot of Proxmox lovers will hate me for saying that, but the time for a new platform and a new way of doing things is upon us, and as you may have already noticed, I don't do VMware stuff anymore.
I just want to run Kubernetes, a lot of Kubernetes clusters, for testing solutions like Cilium and others. My real issue with Proxmox is the integration with automation tools like Terraform (there is no official provider for it), and the storage plugins for consuming Proxmox storage from Kubernetes are terrible. I have always liked Harvester, and the latest integration with downstream clusters running on Harvester and deployed with Rancher is awesome!
So, this is the plan.
I am currently running a 3-node Proxmox cluster with Ceph and a few NVMe and Intel SSD drives that are good enough for decent storage performance. I will remove 2 nodes from that cluster and convert them to Harvester, since there is now support for a 2-node cluster + witness node. The witness node will run on the stand-alone Proxmox host (Harvester doesn't support USB pass-through yet and I run unRAID on that host).
The two HP Z440 machines will be my new 2-node Harvester cluster.
Each machine will have 4 drives, 2 NVMe and 2 Intel SSD. On the RAM side there will be 256 GB available, plus enough CPU to run at least 3 or 4 RKE2 clusters for testing and production stuff.
At this point, we know how to install Cilium and create a BGP peering with our routers. Now we need to let the outside world reach our Kubernetes apps.
When using Cilium, you can reach an application using the Pod IP address or a LoadBalancer IP assigned to a Service. In the previous article we only advertised the Pod addresses to our BGP neighbors, so let's add more pieces to get closer to a real deployment.
If you have already cloned the repo, do a pull so you get the new config files and the other additions to the Makefile, or better yet, do a fresh clone of the repo and start from scratch; that's the idea of the repo!
New stuff in this lab:
serviceSubnet in cluster.yaml (10.2.0.0/16)
serviceSelector in the CiliumBGPPeeringPolicy (service = public); this is useful to identify which LoadBalancer Services will be announced by this peering policy (see the sketch after this list).
public-pool.yaml with the configuration for the LoadBalancer IP Pool.
If you look at the topo.yaml file, you will find a new Linux node (client0) for testing. It is based on alpine:latest, and we will test reachability to our LoadBalancer IP from this container, which is connected to tor1 with IP address 10.0.100.2/24.
The Bookinfo application, so we have something to reach from client0.
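To make the moving pieces concrete, here is a minimal sketch of how the serviceSelector and the IP pool fit together. The ASNs, peer address, and CIDR below are placeholders I made up for illustration; the real values (and the exact field names, which can vary between Cilium versions) live in the repo's config files.

apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  virtualRouters:
  - localASN: 65010                 # placeholder ASN for the Cilium nodes
    exportPodCIDR: true             # keep announcing the Pod CIDRs, as in the previous article
    serviceSelector:
      matchLabels:
        service: public             # only Services carrying this label get announced
    neighbors:
    - peerAddress: "10.0.1.1/32"    # placeholder address of a tor router
      peerASN: 65000                # placeholder ASN of the tor routers
---
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: public
spec:
  cidrs:
  - cidr: "10.0.200.0/24"           # placeholder pool for LoadBalancer IPs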
Now let's build the environment. Just like before, running make will create a KinD cluster with 4 nodes (1 control plane and 3 workers), a containerlab topology with 3 routers (FRR), and 1 client (Alpine). I decided to leave the Cilium installation as a separate step, done manually or with make cilium, in case there is a need to do something different with the same KinD cluster or to add another option to Cilium at install time.
This is the result of running make, as you can see in the image. Now you can go and install Cilium in whatever way you like the most; in this case, I will use these options:
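The exact values I used are the ones shown above; as a reference, enabling the BGP control plane at install time looks roughly like this (the version and any extra flags are up to you):

# bgpControlPlane.enabled turns on the BGP control plane used by CiliumBGPPeeringPolicy
# ipam.mode=kubernetes makes Cilium use the PodCIDRs that KinD assigns to each node
cilium install \
  --set bgpControlPlane.enabled=true \
  --set ipam.mode=kubernetes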
You can validate the configurations with the following commands.
kubectl get -f cilium-bgp-peering-policies.yaml -oyaml
kubectl get -f public-pool.yaml -oyaml
Our lab environment is ready to assign IPs to LoadBalancer Services; let's check the existing ones first.
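A quick way to do that is to list every Service and look at the TYPE and EXTERNAL-IP columns; at this point nothing should be of type LoadBalancer yet.

kubectl get svc -A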
It's time to deploy our test application.
Now there is an app in the repo: you can deploy the Bookinfo application, which Istio uses for some of its demos. I just cloned it and added a Service to pick up an address from our IP pool and have it advertised to the tor0 and tor1 routers.
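The key detail is the label that matches the serviceSelector in the peering policy. A sketch of what that Service can look like follows; the name, port, and traffic policy here are illustrative, the real manifest is in the repo.

apiVersion: v1
kind: Service
metadata:
  name: productpage-public
  labels:
    service: public                # matched by the serviceSelector in the CiliumBGPPeeringPolicy
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster   # with Cluster, every node announces the LoadBalancer IP
  selector:
    app: productpage               # Bookinfo's productpage pods
  ports:
  - port: 80
    targetPort: 9080               # productpage listens on 9080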
Also, from Cilium itself, we can validate that this address is being announced by the virtualRouters.
The Cilium CLI has a subcommand called bgp (hard to pass up!!), and with it we can validate a few things.
cilium bgp routes advertised ipv4 unicast
Our four nodes are announcing the same address to the upstream routers; this is because of the traffic policy assigned to the Service.
It's time to reach our app.
We need to get into the client0 container; it is an Alpine container, so ash is the shell.
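Containerlab prefixes container names with the lab name, so check docker ps for the exact one; the clab-bgp prefix below is an assumption, use whatever your topology is called.

docker exec -it clab-bgp-client0 ash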
Installing curl and Lynx. In case you don't know what Lynx is, it's a console browser; this feels like traveling back to a past where whoever spent the most time in the console was the strongest.
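Inside the container, both come straight from the Alpine repositories:

apk add curl lynx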
We can see that curl is reaching the app, but this way it is hard to interact with the application. Now with Lynx!
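For reference, the interaction looks roughly like this, with 10.0.200.1 standing in for whatever EXTERNAL-IP your Service actually got:

curl -s http://10.0.200.1/productpage | head   # plain HTML, hard to read
lynx http://10.0.200.1/productpage             # browsable from the console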
Hello again!
So now you want to learn about the Cilium BGP functionality. For me this is one of the most exciting features of Cilium; maybe the reason is the addiction I already have to BGP, who knows (AS207036). Back to the point: with Cilium you can establish a BGP session with your routers (ToR, border, or core, you decide) and announce PodCIDRs or LoadBalancer IPs for Services.
For this learning exercise, we will use KinD and other tools to run a K8s cluster locally on any Linux, Windows, or macOS machine. There is a lot of info on the internet on how to get KinD up and running and even how to install Cilium, so I decided to build a collection of Cilium labs (ciliumlabs) to speed up the process of getting a Cilium testing environment up and running.
First, go and clone the repo. All the information is in the README of each lab type; in this case, bgp/README.md has the steps to get this ready, but first we need to install the prerequisites, which are listed in the main README file. In my environment all of this is already met, so I can proceed with the lab creation.
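As a rough outline (the repo URL is left as a placeholder here since it lives in the article's links, and the bgp directory name is taken from the README mentioned above):

git clone <ciliumlabs-repo-url>
cd ciliumlabs/bgp
make    # builds the KinD cluster and the containerlab topology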
First, let's define the need for GNU Stow (Stow from here on). If you work across different machines, whether laptops (Linux or macOS), desktops (Linux or macOS), or Linux servers (who uses macOS on servers?), you may have noticed how uncomfortable it is to have a well-customized shell configuration and then, when you SSH into another machine, miss all those aliases and other utilities. Stow is the solution for carrying these customizations to other *nix environments.
In my case, I have two machines for daily use, a MacBook Pro M1 and a Mac Mini M2. On both I use zsh as my shell and I have the same customizations, since I have installed the same packages.
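A minimal sketch of how that works with Stow, assuming a ~/dotfiles directory with one sub-directory ("package") per tool; the layout and package names here are illustrative:

# layout: ~/dotfiles/zsh/.zshrc, ~/dotfiles/zsh/.zsh_aliases, ~/dotfiles/git/.gitconfig
cd ~/dotfiles
stow zsh git    # symlinks the files into $HOME (the parent of ~/dotfiles)
stow -D zsh     # removes (unstows) the zsh symlinks again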
It has been 3 years since the Harvester project was released to the public. As soon as I saw the announcement I went to read the documentation, and I was very excited to use the platform because it is based on Kubernetes and was being developed by Rancher Labs.
What does this mean? Harvester uses all the available mechanisms to guarantee that a VM is always available. Other components, such as Longhorn and Kube-VIP, let this hypervisor offer storage and LoadBalancer IPs. The most important component is KubeVirt.
Personally, the most interesting thing I see in Harvester is the integration with Rancher. From the Rancher manager we can connect directly to Harvester, giving us a central point of administration.