HomeLAB is going almost 100% HarvesterHCI

 


I've decided that Proxmox is no longer challenging or interesting enough to keep me trying stuff and investing learning time in it. Yeah, a lot of Proxmox lovers will hate me for saying that, but the time for a new platform and a new way of doing things is upon us, and as you may have noticed, I don't do VMware stuff anymore.

I just want to run Kubernetes, lots of Kubernetes clusters, for testing solutions like Cilium and others. My real issues with Proxmox are the integration with automation tools like Terraform (there is no official provider for it) and the storage plugins for consuming Proxmox storage from Kubernetes, which are terrible. I have always liked Harvester, and the latest integration with downstream clusters running on Harvester and deployed with Rancher is awesome!

So, this is the plan.

I'm currently running a 3-node Proxmox cluster with Ceph and a few NVMe and Intel SSD drives that are good enough for decent storage performance. I will remove two nodes from that cluster and convert them to Harvester, since there is now support for a 2-node cluster plus a witness node. The witness node will run on the remaining stand-alone Proxmox host (Harvester doesn't have USB pass-through yet, and I run unRAID on that host).

The two HP Z440s will be my new 2-node Harvester cluster.

These machines will have four drives each (two NVMe and two Intel SSD); on the RAM side, there will be 256 GB available, and enough CPU to run at least 3 or 4 RKE2 clusters for testing and production workloads.

I’ll update on the progress…

 

Cilium BGP Lab with LoadBalancing and more!

 

At this point, we know how to install Cilium and create a BGP peering with our routers. Now we need to let the outside world reach our Kubernetes apps.

If you don't have the kind cluster with Cilium yet, go to https://arielantigua.com/weblog/2024/07/cilium-bgp-lab-locally/

When using Cilium, you can reach an application using the Pod IP address or a LoadBalancer IP assigned to a Service. In the previous article we only advertised the Pod addresses to our BGP neighbors, so let's add more pieces to get closer to a real deployment.
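Here is a minimal sketch of the LoadBalancer side (the pool name and CIDR are made up, not the repo's actual values, and on older Cilium releases the pool field is cidrs instead of blocks); the BGP peering policy also needs a serviceSelector so these IPs actually get announced:

```
# Create a pool of IPs that Cilium can assign to Services of type LoadBalancer
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lab-pool
spec:
  blocks:
  - cidr: "172.18.255.0/24"   # illustrative range, routable from the lab routers
EOF

# Expose a test app; Cilium hands it an IP from the pool,
# and BGP advertises that IP to the configured neighbors
kubectl create deployment web --image=nginx
kubectl expose deployment web --port=80 --type=LoadBalancer
kubectl get svc web   # EXTERNAL-IP should come from 172.18.255.0/24
```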

If you have already cloned the repo, do a pull so you get the new config files and the other additions to the Makefile; or better yet, do a fresh clone of the repo and start from scratch. That's the idea of the repo!
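Something along these lines (assuming the repo already sits in a ciliumlabs directory; the path and URL are illustrative):

```
# Update an existing checkout...
git -C ciliumlabs pull

# ...or start from scratch with a fresh clone
git clone https://github.com/aredan/ciliumlabs.git
```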

Continue reading «Cilium BGP Lab with LoadBalancing and more!»

Cilium BGP Lab, locally!

Maybe you already know about Cilium. You don't?
Go read https://docs.cilium.io/en/stable/overview/intro/ and come back!!

Hello again!
So now you want to learn about Cilium's BGP functionality. For me this is one of the most exciting features of Cilium; maybe the reason is the addiction I already have to BGP, who knows (AS207036). Back to the point: with Cilium you can establish a BGP session with your routers (ToR, border, or core; you decide) and announce PodCIDRs or LoadBalancer IPs for Services.
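As a minimal sketch of what that peering looks like (the ASNs and neighbor address are invented for illustration; the lab's real manifests live in the repo):

```
# Tell Cilium which nodes peer, with whom, and what to announce
kubectl apply -f - <<'EOF'
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: lab-bgp
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux    # peer from every Linux node
  virtualRouters:
  - localASN: 65001              # ASN the nodes speak as (illustrative)
    exportPodCIDR: true          # announce each node's PodCIDR
    neighbors:
    - peerAddress: "172.18.0.5/32"   # the lab router (illustrative)
      peerASN: 65000
EOF
```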

For this learning exercise, we will use kind and other tools to run a K8s cluster locally on any Linux (or Windows/macOS) machine. There is a lot of info on the internet on how to get kind up and running, and even how to install Cilium, but I decided to build a collection of Cilium labs (ciliumlabs) to speed up the process of getting a Cilium testing environment up and running.

First, go and clone the repo. All the information is in the README of each lab type; in this case, bgp/README.md has the steps to get this ready, but we first need to install the prerequisites, which are listed in the main README file. In my environment they are all met, so I can proceed with the lab creation.
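Roughly like this (the repo URL is an assumption based on the lab collection's name; bgp/README.md has the authoritative steps):

```
# Clone the labs collection
git clone https://github.com/aredan/ciliumlabs.git
cd ciliumlabs

# Sanity-check the usual prerequisites for a kind + Cilium lab
for tool in docker kind kubectl cilium helm; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done

# Then follow bgp/README.md for the lab-specific steps
```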

Continue reading «Cilium BGP Lab, locally!»

tkg-bootstrap – a VM to bootstrap a Tanzu cluster.

After reading the documentation of everything needed to stand up TKG on vSphere, I thought: why not create a packaged VM that has everything required? Just log in to the VM and run the procedure to install Tanzu on vSphere.

Everything is in a GitHub repo; currently only the code is available, so you can build the VM (.OVA) yourself and import it into a vSphere environment (or into Workstation).
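For example, with govc (the file name, VM name, and vCenter details are placeholders for illustration):

```
# Point govc at the target vCenter
export GOVC_URL='https://vcenter.example.local'
export GOVC_USERNAME='administrator@vsphere.local'
export GOVC_PASSWORD='...'

# Import the OVA built from the repo
govc import.ova -name tkg-bootstrap ./tkg-bootstrap.ova
```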

From that VM you can run the procedure to initialize the TKG deployment; at the moment, I think the only limitation is that we have to keep that VM around for future updates of the deployed environment.

https://github.com/aredan/tkg-bootstrap

All the necessary information is in the repo's README.md.

kubeconfig with direnv, multiple Kubernetes clusters.

From the Kubernetes documentation:

Use kubeconfig files to organize information about clusters, users, namespaces, and authentication mechanisms. The kubectl command-line tool uses kubeconfig files to find the information it needs to choose a cluster and communicate with a cluster's API server.

Note: A file that is used to configure access to clusters is called a kubeconfig file. This is a generic way of referring to configuration files. It does not mean that there is a file named kubeconfig.

By default, kubectl looks for a file named config in the $HOME/.kube directory. You can specify other kubeconfig files by setting the KUBECONFIG environment variable or by setting the --kubeconfig flag.

When we have a single k8s cluster, it's quite easy to connect to it with kubectl by just placing the config file in .kube (as the text above says). But what happens when we have several k8s clusters we want to interact with to perform certain tasks?
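A minimal sketch of the direnv approach (paths are illustrative): give each cluster its own directory with a kubeconfig and an .envrc, and direnv swaps KUBECONFIG for you as you move between them.

```
# ~/clusters/lab/.envrc — direnv runs this when you enter the directory
export KUBECONFIG="$PWD/kubeconfig.yaml"
```

After a one-time direnv allow in each directory, kubectl follows your current directory: enter ~/clusters/lab and you talk to the lab cluster, enter ~/clusters/prod and you talk to prod.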

Continue reading «kubeconfig with direnv, multiple Kubernetes clusters.»