Creating a Linux VM with Harvester HCI.

In a previous article, we saw how to integrate Harvester into the Rancher UI, and from there we were able to request a new K8s cluster with just a few clicks. Now it's Virtual Machine time: how fast can we deploy a Linux VM?

For installing Harvester, look at https://arielantigua.com/weblog/2023/12/harvester-hci-en-el-homelab/.

Linux VM.

This is easier than expected. You just need an .img or .qcow2 file imported into Harvester. Navigate to Images and click Create.

For Ubuntu, grab a cloud image from https://cloud-images.ubuntu.com/.
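Behind the scenes, the Images form creates a VirtualMachineImage resource. A minimal sketch of the equivalent object, based on my reading of the Harvester docs (the name is hypothetical, and treat the exact field names as an assumption):

    apiVersion: harvesterhci.io/v1beta1
    kind: VirtualMachineImage
    metadata:
      name: ubuntu-jammy          # hypothetical image name
      namespace: default
    spec:
      displayName: jammy-server-cloudimg-amd64
      sourceType: download        # Harvester downloads the image for you
      url: https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img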

Once you have an image or ISO file imported, go to Virtual Machines and click Create.

If we analyze this screen, almost everything is the same as in other virtual machine platforms; one thing that stands out is Namespace. This is a concept from Kubernetes: with this option we can logically separate VMs belonging to different projects or owners.
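A namespace is just a plain Kubernetes object, so the one you pick in the dropdown could equally be created ahead of time with kubectl (the name here is a hypothetical example):

    apiVersion: v1
    kind: Namespace
    metadata:
      name: lab-vms    # hypothetical namespace for my lab VMs

Apply it with kubectl apply -f namespace.yaml and it becomes available when creating the VM.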

At this point, we can click Create and a new VM will be available. But let's first click on Advanced Options and see what else we can add to this VM.

We can use a Cloud Config to customize our VM: not only can we insert configuration parameters in this area, but we can also create templates that can easily be reused.

This Cloud Config template is for my Rancher VMs (K8s clusters with RKE2 and K3s), but it can be used for normal Ubuntu VMs too.
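As a reference, here is a minimal cloud-init user-data sketch in the same spirit (not my exact template; the hostname and key are placeholders):

    #cloud-config
    hostname: ubuntu-vm01                  # placeholder hostname
    package_update: true
    packages:
      - qemu-guest-agent                   # lets Harvester report the VM's IP, etc.
    ssh_authorized_keys:
      - ssh-ed25519 AAAA... user@laptop    # placeholder public key
    runcmd:
      - systemctl enable --now qemu-guest-agent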

Now it's time to click Create and get our new VM.

Interacting with the Virtual Machine.

As with other VM platforms, Harvester offers a console interface so we can go into the VM and configure or validate anything configuration-related.
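Since Harvester runs KubeVirt under the hood, the same console should also be reachable from a workstation with the virtctl CLI; a sketch, assuming a VM named ubuntu-vm01 in the default namespace:

    virtctl console ubuntu-vm01 -n default    # serial console
    virtctl vnc ubuntu-vm01 -n default        # VNC session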

The IP address assigned comes from a DHCP server outside the Harvester environment. This is possible because the selected network is VLAN-tagged and bridged to the VM, so the packets are injected directly into VLAN 10.
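For context, a Harvester VLAN network of this kind boils down to a NetworkAttachmentDefinition using the bridge CNI. Roughly like the sketch below, though the exact bridge name depends on your cluster network, so treat these values as assumptions:

    apiVersion: k8s.cni.cncf.io/v1
    kind: NetworkAttachmentDefinition
    metadata:
      name: vlan10             # hypothetical network name
      namespace: default
    spec:
      config: '{"cniVersion":"0.3.1","type":"bridge","bridge":"mgmt-br","promiscMode":true,"vlan":10,"ipam":{}}'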

This is just a quick way to get a VM running on Harvester. Next, we are going to play with Windows VMs and VM creation from ISO images.

 

Ceph on Proxmox as Storage Provider.

For a few months I've been reading about Ceph and how it works. I love distributed stuff; maybe the reason is that I can have multiple machines, and the idea of clustering has always fascinated me. In Ceph, the more the better!

If you have multiple machines with lots of SSD/NVMe drives, Ceph performance will be very different from a 3-node cluster with only one OSD per node. The latter is my case, and the solution has been working well.

Installing Ceph on Proxmox is just a few clicks away; it's already documented at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster.
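For the CLI-inclined, the wizard corresponds roughly to the pveceph commands below; the subnet, device, and pool name are hypothetical values for a setup like mine:

    pveceph install                          # install the Ceph packages
    pveceph init --network 10.10.10.0/24     # hypothetical 10GbE replication subnet
    pveceph mon create                       # repeat per node for 3 monitors
    pveceph osd create /dev/nvme0n1          # one NVMe OSD per node
    pveceph pool create vm-pool              # RBD pool for VM disks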

At first I had only two nodes, and the state of Ceph was faulty.

The crush_map created by Proxmox assumes a 3-host configuration, with each host contributing at least one OSD to the cluster; at this point there were only 2 hosts with 1 OSD each.

After the third node was added, Ceph started replicating data across all OSDs to meet the crush_map policy.
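You can follow the same recovery from the CLI; a few read-only commands that show it:

    ceph -s         # overall health plus recovery/rebalance progress
    ceph osd tree   # the CRUSH hierarchy: 3 hosts, 1 OSD each
    ceph pg stat    # placement group states while data moves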

Here you can see the PGs getting moved across the OSDs.

One thing I didn't like about storage usage on Proxmox is that thin provisioning is nothing like VMware VMFS! It depends on the backend and the format of the virtual disk. I need to get used to this.
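On a Ceph RBD backend the images are thin by nature; to compare provisioned versus actually used space per disk, you can ask Ceph directly (the pool and image names are hypothetical):

    rbd du vm-pool/vm-100-disk-0    # prints PROVISIONED vs USED for the image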

This is the state of the storage side of the Proxmox cluster. I need to move more VMs onto this storage and see how Ceph performs with more IOPS-demanding VMs.
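Moving an existing disk over can be done from the CLI as well; a sketch with a hypothetical VM ID, disk, and storage name:

    qm move_disk 100 scsi0 vm-pool --delete 1    # move the disk to Ceph, drop the source copy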

The hardware used in this cluster is documented here: https://arielantigua.com/weblog/2024/03/ceph-as-my-storage-provider/

 

Ceph as my storage provider?

 

Ceph is the future of storage; where traditional systems fail to deliver, Ceph is designed to excel. Leverage your data for better business decisions and achieve operational excellence through scalable, intelligent, reliable, and highly available storage software. Ceph supports object, block and file storage, all in one unified storage system.

That's the official definition from the Ceph website. Is it true?

I don't know. I want to find out!

For a few weeks now I've been in the planning stage to install and configure Ceph on a 3-node cluster, everything done via the Proxmox UI. One of the main issues with this solution: the storage devices. How's that?

Well... it doesn't like consumer SSDs/disks/NVMe drives.

BoM:

  • Supermicro X9SRL with Xeon E5-2680v2 + 128GB of RAM + Intel P3600 1.6TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 128GB of RAM + Intel P3600 1.2TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 64GB of RAM + Intel P3600 1.2TB PCIe NVMe

Note: the storage listed here will be used for Ceph OSDs; there is a dual 10GbE card in each host for replication.

I have a pair of Samsung 970 EVO Plus (1TB) drives that were working fine with vSAN ESA, but I decided to move to Intel enterprise NVMe because a lot of information around the web points to bad performance with this type of consumer NVMe.

The Supermicro machine is already running Proxmox, so let the Ceph adventure begin!

This picture is one of the Z440s; it's full in there!

 

I want to see more KubeVirt out there…

Let’s do this.
I think this is the second English post in the entire history of this blog. Be gentle with me.

KubeVirt is a known name for people already doing containers, and a normal player for those in the OpenShift world. But why, if you come from other environments (VMware?), does it seem like it doesn't exist or is just being ignored?

If you can read Spanish and go back through the post history of this site, you can see that when it came to VMs, VMware was the only option. Not so with Kubernetes: I started playing with K8s in 2018, and by that time VMware didn't have an offering that could compete with K8s, so I stuck with K8s installed directly with kubeadm or with Rancher's offerings (RKE, RKE2 and K3s).

Why did I do a little recap?
Because it's time to start trying new solutions to manage our VMs.

I love Harvester. I love the fact that it can connect to the Rancher UI and be managed from the same place where I already have my containers, but it is so resource-intensive! The same hosts that were running Proxmox and ESXi can't cope with the demand. I hope this solution keeps growing, and I want to get my hands on others! (Looking at you, Platform9!)

My next step: read more about plain KubeVirt to learn how this thing works behind the nice UI provided by Rancher or OpenShift… wish me luck!
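To get a feel for it, the plain-KubeVirt version of "create a VM" is just another manifest. A minimal sketch following the upstream examples (the container disk is the public demo image):

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      name: testvm
    spec:
      running: false                  # start it later with: virtctl start testvm
      template:
        spec:
          domain:
            devices:
              disks:
                - name: rootdisk
                  disk:
                    bus: virtio
            resources:
              requests:
                memory: 1Gi
          volumes:
            - name: rootdisk
              containerDisk:
                image: quay.io/kubevirt/cirros-container-disk-demo

Apply it with kubectl apply -f testvm.yaml, then virtctl start testvm and virtctl console testvm.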