HomeLAB is going almost 100% HarvesterHCI

 


I just decided that Proxmox is no longer challenging or interesting enough to keep me experimenting and investing learning time in it. A lot of Proxmox lovers will hate me for saying that, but the time for a new platform and a new way of doing things has arrived; and as you may have noticed, I don’t do VMware stuff anymore.

I just want to run Kubernetes: a lot of Kubernetes clusters for testing solutions like Cilium and others. My real issue with Proxmox is the integration with automation tools like Terraform (there is no official provider for it), and the storage plugins for consuming Proxmox storage from Kubernetes are terrible. I have always liked Harvester, and the latest integration with downstream clusters running on Harvester and deployed with Rancher is awesome!

So, this is the plan.

I’m currently running a 3-node Proxmox cluster with Ceph and a few NVMe and Intel SSD drives that are good enough for decent storage performance. I will remove two nodes from that cluster and convert them to Harvester, since there is now support for a 2-node cluster plus a witness node. The witness node will run on the stand-alone Proxmox host (Harvester doesn’t have USB pass-through yet, and I run unRAID on that host).

The two HP Z440s will be my new 2-node Harvester cluster.

Each machine will have four drives (two NVMe and two Intel SSD), 256GB of RAM, and enough CPU to run at least 3 or 4 RKE2 clusters for testing and production workloads.

I’ll update on the progress…

 

Cilium BGP Lab with LoadBalancing and more!

 

At this point, we know how to install Cilium and create a BGP peering with our routers. Now we need to let the outside world reach our Kubernetes apps.

If you don’t have the KinD cluster with Cilium yet, go to https://arielantigua.com/weblog/2024/07/cilium-bgp-lab-locally/

When using Cilium, you can reach an application using the Pod IP address or a LoadBalancer IP assigned to a Service. In the previous article we only advertised the Pod addresses to our BGP neighbors; let’s add more pieces so we get closer to a real deployment.

If you have already cloned the repo, do a pull to get the new config files and the other additions in the Makefile; or better yet, do a fresh clone of the repo and start from scratch. That’s the idea of the repo!

New stuff in this lab:

      • serviceSubnet in cluster.yaml (10.2.0.0/16).
      • serviceSelector in the CiliumBGPPeeringPolicy (service = public); this is useful for identifying which LoadBalancers will be announced by this peering policy.
      • public-pool.yaml with the configuration for the LoadBalancer IP pool.
      • A new Linux node (client0) in topo.yaml for testing. It is based on alpine:latest and is connected to tor1 with IP address 10.0.100.2/24; we will test reachability to our LoadBalancer IP from this container.
      • The Bookinfo application, so we have something to reach from client0.
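The serviceSelector is the piece that ties a LoadBalancer to the peering session. As a rough sketch of what that stanza in cilium-bgp-peering-policies.yaml might look like (the field names follow the cilium.io/v2alpha1 CRD, but the policy name, ASNs, and peer address below are placeholders of mine, not the repo’s actual values):

```shell
# Sketch only: placeholder ASNs/addresses; check the repo's file for the real ones.
cat > bgp-policy-sketch.yaml <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: tor-peering            # placeholder name
spec:
  virtualRouters:
  - localASN: 65001            # placeholder ASN for the Kubernetes nodes
    exportPodCIDR: true        # keep announcing PodCIDRs, as in the last article
    serviceSelector:           # only Services labeled service=public get
      matchLabels:             # their LoadBalancer IP announced
        service: public
    neighbors:
    - peerAddress: "10.0.1.1/32"   # placeholder ToR address
      peerASN: 65000               # placeholder ToR ASN
EOF
```

With a policy like this, a Service only needs the label service: public for its LoadBalancer IP to be announced to the neighbors.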

Now let’s build the environment. Just like before, running make will create a KinD cluster with 4 nodes (1 control-plane and 3 workers), a containerlab topology with 3 routers (FRR), and 1 client (Alpine). I decided to leave the Cilium install as a separate step, either manual or with make cilium, in case you need to do something different with the same KinD cluster or pass extra options to Cilium at install time.

[Screenshot: output of make]

This is the result of running make. As you can see in the image, you can now install Cilium in whatever way you like the most; in this case, I will use these options:

cilium install --version=1.15 \
--helm-set ipam.mode=kubernetes \
--helm-set tunnel-protocol=vxlan \
--helm-set ipv4NativeRoutingCIDR="10.0.0.0/8" \
--helm-set bgpControlPlane.enabled=true \
--helm-set k8s.requireIPv4PodCIDR=true

The fastest way is to do this: make cilium

[Screenshots: Cilium install output and node status]

The nodes are ready for workloads.

Now it’s time to apply both the CiliumBGPPeeringPolicy and the CiliumLoadBalancerIPPool.

You can do it with make, or the official way with kubectl.

kubectl apply -f cilium-bgp-peering-policies.yaml 

kubectl apply -f public-pool.yaml
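For reference, public-pool.yaml defines a CiliumLoadBalancerIPPool. A minimal sketch of what such a pool can look like — the pool name and CIDR here are my own assumptions, chosen so the range contains the 10.0.10.1 address we will see assigned later; the repo’s file is authoritative:

```shell
# Sketch of a LoadBalancer IP pool; newer Cilium releases rename the
# "cidrs" field to "blocks", but both spellings are still accepted.
cat > public-pool-sketch.yaml <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: public                 # placeholder pool name
spec:
  cidrs:
  - cidr: "10.0.10.0/24"       # assumed range containing 10.0.10.1
  serviceSelector:             # optional: hand out IPs only to Services
    matchLabels:               # labeled service=public
      service: public
EOF
```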

[Screenshot: kubectl apply output]

You can validate the configurations with the following commands.

kubectl get -f cilium-bgp-peering-policies.yaml -oyaml

kubectl get -f public-pool.yaml -oyaml

Our lab environment is ready to assign IPs to LoadBalancer services; let’s check the existing ones first.

It’s time to deploy our test application.

There is now an app in the repo: you can deploy the Bookinfo application, which Istio uses for some of its demos. I just cloned it and added a Service so it picks up an address from our IP pool and advertises it to the tor0 and tor1 routers.

https://github.com/aredan/ciliumlabs/tree/main/bookinfo/kustomize

kubectl apply -k kustomize
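The extra Service is what makes Bookinfo reachable from outside. A hypothetical version of it — the Service name below is my own invention (check the repo’s kustomize directory for the real one); the app selector and port 9080 come from the standard Bookinfo productpage deployment:

```shell
# Sketch: a LoadBalancer Service in front of Bookinfo's productpage,
# labeled service=public so the BGP peering policy announces its IP.
cat > productpage-lb-sketch.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: productpage-lb         # hypothetical name; see the repo's kustomize dir
  labels:
    service: public            # matched by the serviceSelector in the policy
spec:
  type: LoadBalancer           # LB-IPAM assigns an IP from the public pool
  selector:
    app: productpage           # Bookinfo's front-end pods
  ports:
  - port: 80
    targetPort: 9080           # productpage listens on 9080
EOF
```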

[Screenshot: kubectl apply -k output]

Let’s check the Services that we have now.

[Screenshot: kubectl get svc output]

There is our LoadBalancer IP address (10.0.10.1) along with the ClusterIP services; the LoadBalancer is the one we will be testing from client0.

[Screenshot: Service details]

We can see there is an IP assigned to the Service, but it’s better if we validate that this address is being announced to the ToR routers.

docker exec -it clab-bgp-cplane-router0 vtysh -c 'show bgp ipv4'

[Screenshot: BGP table on router0]

Also, from Cilium itself we can validate that this address is being announced by the virtualRouters.

The Cilium CLI has a subcommand called bgp (hard to pass up!), and with it we can validate a few things.

[Screenshots: cilium bgp peers output]

cilium bgp routes advertised ipv4 unicast

[Screenshot: advertised routes]

All four nodes are announcing the same address to the upstream routers; this is because of the trafficPolicy assigned to the Service.

It’s time to reach our app.

We need to get into the client0 container; it’s an Alpine container, so ash is the shell.

Let’s install curl and Lynx. In case you don’t know, Lynx is a console browser; this feels like traveling back to the days when whoever lasted longest in the console was the strongest.

[Screenshots: installing curl and Lynx, curl output]

We can see that curl is reaching the app, but this way it’s hard to interact with the application. Now with Lynx!

lynx http://10.0.10.1

[Screenshot: Bookinfo rendered in Lynx]

 

Isovalent (the creators of Cilium) announced new support for advertising ClusterIP services over BGP!

Cilium BGP Lab, locally!

Maybe you already know about Cilium. You don’t?
Go read https://docs.cilium.io/en/stable/overview/intro/ and come back!

Hello again!
So now you want to learn about Cilium BGP functionality, for me this is one of the most exciting features of Cilium, maybe the reason is the addiction that I already have for BGP, who knows (AS207036). Back to the point, with Cilium you can establish a BGP session with your routers (Tor, border, or core, you decide.) and announce PodCIDR or LoadBalance for services.

For this learning exercise we will use KinD and other tools to run a K8s cluster locally on any Linux, Windows, or macOS machine. There is a lot of info on the internet on how to get KinD up and running, and even how to install Cilium; I decided to build a collection of Cilium labs (ciliumlabs) to speed up the process of getting a Cilium testing environment running.

First, go and clone the repo. All the information is in the README of each lab type; in this case, bgp/README.md has the steps to get this ready, but we first need to install the prerequisites, which are listed in the main README file. In my environment they are all met, so I can proceed with the lab creation.


tkg-bootstrap – a VM to bootstrap a Tanzu cluster.

After reading the documentation on everything needed to bootstrap TKG on vSphere, I thought: why not build a packaged VM that comes with everything required? Just log into the VM and run the procedure to install Tanzu on vSphere.

Everything is in a GitHub repo; for now only the code is available, so you can build the VM (.OVA) yourself and import it into a vSphere environment (or into Workstation).

From that VM you can run the procedure to initialize the TKG creation; currently I think the only limitation is that we have to keep that VM around for future updates of the deployed environment.

https://github.com/aredan/tkg-bootstrap

All the necessary information is in the repo’s README.md.

kubeconfig with direnv: multiple Kubernetes clusters.

From the Kubernetes documentation:

Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with a cluster’s API server.

Note: a file used to configure access to clusters is called a kubeconfig file. This is a generic way of referring to the configuration files; it does not mean there is a file literally named kubeconfig.

By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.

When we have a single k8s cluster, it is quite easy to connect to it with kubectl by just placing the config file in .kube (as the text above says). But what happens when we have several k8s clusters that we want to interact with for different tasks?
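This is where direnv comes in. A minimal sketch of the idea, using a hypothetical clusters/lab directory of my own: each cluster gets its own directory holding its kubeconfig plus a .envrc, and direnv exports KUBECONFIG automatically whenever you cd into it:

```shell
# One directory per cluster; the path and file names are just an example.
mkdir -p clusters/lab
cat > clusters/lab/.envrc <<'EOF'
# direnv evaluates this file every time you enter the directory
export KUBECONFIG=$PWD/kubeconfig
EOF
# Approve it once (direnv refuses to load unreviewed files):
#   cd clusters/lab && direnv allow .
# From then on, kubectl inside clusters/lab talks only to that cluster.
```

Leaving the directory unsets the variable again, so there is no risk of running kubectl against the wrong cluster by accident.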
