Platform9 KubeVirt solution.

 

This post is my opinion of the platform.

A few weeks ago, Platform9 announced a Hands-on Lab for their KubeVirt implementation. After using Harvester to run VMs, mainly for deploying Rancher RKE clusters, I got my hands on this platform, and the differences are huge.

First, Platform9 keeps its offering very close to the upstream project. What does this mean? It looks as if you had installed KubeVirt manually in your K8s cluster, and that is a good thing: you stay familiar with the upstream solution, and when the time comes to move to another KubeVirt offering, the changes will be minimal.

As you may know, Kubernetes goes first. PMK (Platform9 Managed Kubernetes) needs to be installed.

https://platform9.com/docs/kubernetes/get-started-bare-metal

pf9ctl is the tool used to create a K8s cluster managed from PMK. In the previous link, you can see how easy it is to create a cluster with just one Master node (for testing, of course!) and one Worker, which was the scenario of the Hands-on Lab.

The prep-node option of pf9ctl will install an agent and begin promoting the server to a PMK node that can be used to build a cluster. This progress can be monitored in the Infrastructure -> Nodes section of the platform.
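For reference, the node preparation can be sketched from the CLI roughly like this. This is a sketch based on the public pf9ctl documentation; exact subcommands and prompts may vary by pf9ctl version:

```shell
# Point pf9ctl at your PMK account (interactive; the account URL and
# credentials are prompted for).
pf9ctl config set

# Install the Platform9 agent on this host and promote it to a PMK
# node that can later be assigned to a cluster.
pf9ctl prep-node
```

Once prep-node finishes, the host shows up under Infrastructure -> Nodes and can be picked when building a cluster.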

[Screenshot: Infrastructure -> Nodes showing the node being prepared]

[Screenshot: the two nodes listed with their assigned roles]

These two nodes are already assigned to a cluster; there you can see the role assigned to each of them.

With a K8s cluster already running, it's time to add KubeVirt. Platform9 provides this as an add-on; it can be installed with just one click!

From Infrastructure -> Clusters -> Managed, a list of managed clusters will appear; there we select the one intended for KubeVirt.

[Screenshot: Infrastructure -> Clusters -> Managed cluster list]

[Screenshot: details of the selected cluster]

There are some similarities with the Nodes section under Infrastructure, but here the information is about Kubernetes. Let's click Add-ons and search for KubeVirt. In this cluster the add-on is already active, but as I said, it's just one click away.

[Screenshot: the cluster Add-ons list with KubeVirt active]

In the Platform9 KubeVirt documentation, the steps are detailed for a cluster with the KubeVirt add-on enabled at build time, which is the fastest way for a new cluster. If the cluster already exists, the add-on can be added without issues. One dependency for KubeVirt is Luigi, a network plugin operator.
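Since the offering tracks upstream closely, the usual upstream health checks should work once the add-on is enabled. A sketch, assuming kubectl access to the cluster and the upstream default namespace (the exact namespace in a Platform9 deployment is an assumption here):

```shell
# Core KubeVirt components: virt-operator, virt-api, virt-controller,
# and virt-handler should all be Running.
kubectl get pods -n kubevirt

# The KubeVirt custom resource reports the overall phase;
# "Deployed" means the installation finished.
kubectl get kubevirt -n kubevirt
```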

KubeVirt section.

[Screenshot: the KubeVirt dashboard]

A lot of information here. In the Virtual Machines section you can easily see the totals: running VMs and the VMs being migrated.

Virtual Machine creation.

Still in the KubeVirt section of the platform, we need to go to Virtual Machines. There we have three areas of interest: All VMs, Live Migrations, and Instance Types.

All VMs is where every created VM will appear. In the top right, we have Add Virtual Machine.

[Screenshot: the All VMs list with the Add Virtual Machine button]

Clicking Create using wizard brings up this page:

[Screenshot: the VM creation wizard]

The best part is that while we select the desired options for our VM, the YAML on the right side of the wizard keeps updating itself!

That's a great feature: this way we can start learning the YAML version of the VM creation process and maybe wire up some CI/CD to automagically get VMs.
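For comparison, a minimal upstream-style VirtualMachine manifest looks roughly like this. The name, image, and sizes are illustrative; the wizard generates the Platform9-specific equivalent on the right-hand pane:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm                 # hypothetical name
spec:
  runStrategy: Always          # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 1Gi
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/containerdisks/fedora:latest  # illustrative image
```

Applying a manifest like this with kubectl should end up with the same result as clicking through the wizard, which is exactly what makes the YAML pane useful for automation.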

What can we do with VMs on this implementation of KubeVirt?

From the Virtual Machines -> All VMs section, the list of available VMs will appear; there we can manage those VMs.

[Screenshot: VM management options in the All VMs list]

Selecting a VM gives us more information and a lot of other parameters to modify, like disk size, memory size, and networking.
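The UI drives the same operations that upstream KubeVirt exposes through virtctl. Assuming a VM named testvm (a hypothetical name), the CLI equivalents look roughly like this:

```shell
virtctl start testvm      # power the VM on
virtctl stop testvm       # graceful shutdown
virtctl restart testvm    # reboot the guest
virtctl console testvm    # attach to the serial console
virtctl migrate testvm    # live-migrate to another schedulable node
```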

[Screenshot: details of the selected VM]

There is a lot more to talk about. I'm planning to keep digging into the Platform9 KubeVirt solution and do a comparison with Harvester!

While creating our cluster, we selected an older version of Kubernetes; the idea is to run an upgrade and see how things are handled for our VMs.

In Infrastructure -> Clusters -> Managed, we can select the cluster to be upgraded; in my case there is only one.

[Screenshot: selecting the cluster to upgrade]

[Screenshot: the available upgrade options]

Here I selected Patch and clicked Upgrade Now.

[Screenshot: upgrade confirmation]

[Screenshot: upgrade progress]

The steps for the upgrade are very similar to the initial install.

While upgrading, I noticed that the VMs were first moved to the Worker node. This is expected: the first nodes to be upgraded in K8s are the Master nodes.
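That movement can be watched from the CLI while the upgrade runs. A sketch, assuming kubectl access to the cluster; vmi and vmim are the upstream short names for VirtualMachineInstance and VirtualMachineInstanceMigration:

```shell
# The NODE column shows where each running VM currently lives.
kubectl get vmi -A -o wide

# Live migration objects created while nodes are drained/upgraded.
kubectl get vmim -A
```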

Now we are at 1.26.14-pmk.

[Screenshot: the cluster now running Kubernetes 1.26.14-pmk]

Of course, a cluster with just one Master and one Worker is not production-ready, and upgrading it will cause connectivity loss and other issues.

Next, I will try to get PMK access to build a cluster in my homelab, where I will be testing more things related to Storage and Networking, MetalLB being the most interesting one!

Just like the OpenStack HoL version, there will be some videos on YouTube, so stay tuned!

 

I want to see more KubeVirt out there…

KubeVirt · GitHub

Let's do this.
I think this is the second English post in the entire history of this blog. Be gentle with me.

KubeVirt is a known name among people already doing containers and a regular player in the OpenShift world. But why, if you come from other environments (VMware?), does it seem like it doesn't exist or is simply being ignored?

If you can read Spanish and go back through the post history of this site, you can see that when it came to VMs, VMware was the only option. Not so with Kubernetes: I have been playing with K8s since 2018, and at that time VMware didn't have an offering that could compete with K8s, so I stuck with K8s installed directly with kubeadm or with Rancher's offerings (RKE, RKE2, and K3s).

Why the little recap?
Because it's time to start trying new solutions to manage our VMs.

I love Harvester. I love that it can connect to the Rancher UI and be managed from the same place where I already have my containers, but it is so resource-intensive! The same host that ran Proxmox and ESXi can't cope with the demand. I hope this solution keeps growing, and I want to get my hands on others! (Looking at you, Platform9!)

My next steps: read more about plain KubeVirt to learn how this thing works behind the nice UI provided by Rancher or OpenShift… wish me luck!

 

Kubernetes – Explorando un Cluster de Kubernetes con VMware Octant – Update.

It's been a while, maybe too long, since I talked about this desktop client for connecting to a k8s cluster. At the time I looked at it and wasn't very convinced of the benefits of using something like that, mainly because I was very excited about the Rancher UI.

Continue reading «Kubernetes – Explorando un Cluster de Kubernetes con VMware Octant – Update.»

RPKI con GoRTR de Cloudflare – en Kubernetes!

A while ago I tried RPKI; in those days I was taking some MANRS trainings. With RPKI we can make sure that the prefixes we receive via BGP belong to whoever should really own them. In other words, if for some reason a malicious third party gets hold of a block of IP addresses that doesn't belong to them, but that block has a ROA enabled, RPKI's job is to prevent those IP blocks from being inserted into our routing table.

https://github.com/cloudflare/gortr

Continue reading «RPKI con GoRTR de Cloudflare – en Kubernetes!»

Kubernetes – Respaldando un cluster de k8s con Kasten K10.

Kasten K10 is a backup solution for Kubernetes. I have already covered backups twice with two different tools; K10 has very clear advantages over Longhorn's backups or Velero.

The first point in the tool's favor is how easy it is to install. However, K10 makes use of functionality that is not enabled by default, and we have to apply CRDs to the cluster. K10 relies on Kubernetes VolumeSnapshots, which until a few versions ago were beta, and even in version 1.19 some CRDs must be added manually.

Another point to keep in mind is that the storage solution enabled in k8s must support CSI snapshots. In my "production" environment I am using Longhorn 1.0.2, and this option is only available from version 1.1.0 onwards, which means the installation described in that article was done on a cluster set up for this purpose, since for unrelated reasons I still can't upgrade Longhorn to the latest version.

Continue reading «Kubernetes – Respaldando un cluster de k8s con Kasten K10.»