Platform9 KubeVirt solution.

 

This is my opinion about this platform.

A few weeks ago, Platform9 announced a hands-on lab for their KubeVirt implementation. After using Harvester to run VMs, mainly for deploying Rancher RKE clusters, I got my hands on this platform, and the differences are huge.

First, Platform9 keeps its offering very close to the upstream project. What does this mean? It looks as if you had installed KubeVirt manually in your K8s cluster, and this is good: you stay familiar with the upstream solution, and when the time comes to move to another KubeVirt offering, the changes will be minimal.
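As a quick sanity check that the cluster really exposes the standard upstream API, you can query the kubevirt.io resources directly. Here is a minimal sketch with the official Kubernetes Python client; it assumes a working kubeconfig and should behave the same against Platform9, Harvester, or a manual KubeVirt install:

```python
# Sketch: list KubeVirt VirtualMachines through the upstream kubevirt.io API.
# Assumes kubeconfig access to the cluster; namespaces and VM names are
# whatever exists in your environment.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

vms = api.list_cluster_custom_object(
    group="kubevirt.io",
    version="v1",
    plural="virtualmachines",
)

for vm in vms.get("items", []):
    meta = vm["metadata"]
    status = vm.get("status", {})
    print(f'{meta["namespace"]}/{meta["name"]}: ready={status.get("ready", False)}')
```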

Continue reading «Platform9 KubeVirt solution.»

Creating a Linux VM with Harvester HCI.

In a previous article, we saw how to integrate Harvester into the Rancher UI, and from there we were able to request a new K8s cluster with just a few clicks. Now it is Virtual Machine time. How fast can we deploy a Linux VM?

For installing Harvester, see https://arielantigua.com/weblog/2023/12/harvester-hci-en-el-homelab/

Linux VM.

This is easier than expected. You just need an img or qcow2 file imported into Harvester. Navigate to Images and click Create.
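For reference, what the UI form does behind the scenes is create a Harvester VirtualMachineImage custom resource. Here is a rough sketch using the official Kubernetes Python client; the harvesterhci.io/v1beta1 API group, the spec field names, and the Ubuntu cloud image URL are assumptions to illustrate the idea, so verify them against your cluster (e.g. with `kubectl explain virtualmachineimage`):

```python
# Sketch: create a Harvester image the same way the Images -> Create form does.
# Assumptions: kubeconfig points at the Harvester cluster, the CRD is served at
# harvesterhci.io/v1beta1, and the URL below is just an example qcow2/img source.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

image = {
    "apiVersion": "harvesterhci.io/v1beta1",
    "kind": "VirtualMachineImage",
    "metadata": {"name": "ubuntu-22-04", "namespace": "default"},
    "spec": {
        "displayName": "ubuntu-22.04-server-cloudimg-amd64",
        "sourceType": "download",  # download from a URL, like the UI option
        "url": "https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img",
    },
}

api.create_namespaced_custom_object(
    group="harvesterhci.io",
    version="v1beta1",
    namespace="default",
    plural="virtualmachineimages",
    body=image,
)
```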

Continue reading «Creating a Linux VM with Harvester HCI.»

Ceph on Proxmox as Storage Provider.

For a few months, I've been reading about Ceph and how it works. I love distributed stuff; maybe the reason is that I can have multiple machines, and the idea of clustering has always fascinated me. With Ceph, the more the better!

If you have multiple machines with lots of SSDs/NVMe drives, Ceph performance will be very different from a 3-node cluster with only one OSD per node. The latter is my case, and the solution has been working well.

Installing Ceph on Proxmox is just a few clicks away; it is already documented at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
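If you prefer to script those clicks, the same steps are available as pveceph commands. The sketch below wraps them in a small Python helper so they can be repeated on each node; the cluster network CIDR and the NVMe device path are assumptions for my hardware and must be adjusted, and `pveceph init` only needs to run once, on the first node:

```python
# Sketch: the CLI equivalent of the Proxmox Ceph wizard, run as root on a node.
# The network CIDR and device path are placeholders and must match your setup.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["pveceph", "install"])                              # install the Ceph packages
run(["pveceph", "init", "--network", "10.10.10.0/24"])   # only once, on the first node
run(["pveceph", "mon", "create"])                        # one monitor per node
run(["pveceph", "osd", "create", "/dev/nvme0n1"])        # one OSD on the Intel P3600
```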

At first, I had only two nodes, and the state of Ceph was faulty.

[Screenshot: the Ceph health warning shown in the Proxmox UI]

The CRUSH map created by Proxmox assumes a 3-host configuration, with each host adding at least one OSD to the cluster; in this picture there were only 2 hosts with 1 OSD each, so the default replication rule could not place all three copies of the data.
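A quick way to confirm that the warning comes from the replication rule rather than from a broken OSD is to compare the pool's replica size with the number of hosts in the CRUSH tree. A small sketch, shelling out to the standard ceph tools (the pool name is an example; list yours with `ceph osd pool ls`):

```python
# Sketch: show why a 2-host cluster with default settings reports HEALTH_WARN.
# With pool size 3 and a host failure domain, two hosts can never hold all
# three replicas. Assumes the ceph CLI is available on the Proxmox node.
import json
import subprocess

def ceph(*args: str) -> dict:
    out = subprocess.run(["ceph", *args, "--format", "json"],
                         check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

status = ceph("status")
print("health:", status["health"]["status"])

tree = ceph("osd", "tree")
hosts = [n["name"] for n in tree["nodes"] if n["type"] == "host"]
print("hosts in the CRUSH tree:", hosts)

size = ceph("osd", "pool", "get", "vm-pool", "size")  # pool name is an example
print("replica size:", size["size"])  # 3 replicas but only 2 hosts -> HEALTH_WARN
```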

Continue reading «Ceph on Proxmox as Storage Provider.»

Ceph as my storage provider?

 


Ceph is the future of storage; where traditional systems fail to deliver, Ceph is designed to excel. Leverage your data for better business decisions and achieve operational excellence through scalable, intelligent, reliable, and highly available storage software. Ceph supports object, block and file storage, all in one unified storage system.

That's the official definition from the Ceph website. Is it true?

I don't know. I want to find out!

For a few weeks I've been in the planning stage to install and configure Ceph on a 3-node cluster, everything done via the Proxmox UI. One of the main issues with this solution is the storage devices. How's that?

Well... Ceph doesn't like consumer SSDs/disks/NVMe drives.

BoM:

  • Supermicro X9SRL with Xeon E5-2680 v2 + 128GB of RAM + Intel P3600 1.6TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 128GB of RAM + Intel P3600 1.2TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 64GB of RAM + Intel P3600 1.2TB PCIe NVMe

Note: The storage listed here will be used for the Ceph OSDs, and there is a dual 10GbE card in each host for replication traffic.

I have a pair of Samsung 970 EVO Plus (1TB) drives that were working fine with vSAN ESA, but I decided to move to Intel enterprise NVMe because a lot of information around the web points to bad performance with this type of consumer NVMe.

The Supermicro machine is already running Proxmox, so let the Ceph adventure begin!

This picture is one of the Z440s; it's full in there!

[Photo: a close-up of the inside of one of the Z440 workstations]