Ceph on Proxmox as Storage Provider.

For a few months I’ve been reading about Ceph and how it works. I love distributed stuff; maybe it’s because I can have multiple machines, and the idea of clustering has always fascinated me. With Ceph, the more the better!

If you have multiple machines packed with SSD/NVMe drives, Ceph’s performance will be very different from a 3-node cluster with only one OSD per node, since Ceph spreads I/O across every OSD in parallel. The latter is my case, and the solution has been working well.

Installing Ceph on Proxmox is just a few clicks away; it is already documented at https://pve.proxmox.com/wiki/Deploy_Hyper-Converged_Ceph_Cluster
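
The wiki covers the GUI steps; roughly the same can be done from a node’s shell with the pveceph tool. A minimal sketch (the network CIDR and the device path are placeholders, not my actual values):

    # Install the Ceph packages on this node
    pveceph install

    # Initialize the cluster-wide Ceph config, pointing it at the storage network
    pveceph init --network 10.10.10.0/24

    # Create a monitor and a manager on this node
    pveceph mon create
    pveceph mgr create

    # Turn a raw disk into an OSD
    pveceph osd create /dev/nvme0n1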

At first I had only two nodes, and the state of Ceph was faulty.

[Screenshot: Ceph health status on the two-node cluster]

The CRUSH map created by Proxmox assumes a 3-host configuration, with each host contributing at least one OSD to the cluster; in this picture there were only 2 hosts with 1 OSD each.
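
The reason is visible in the CRUSH map itself: the default replicated rule places each copy on a different host, and Proxmox creates pools with size 3 by default, so with two hosts the third replica has nowhere to go. Roughly what the decompiled default rule looks like (export it with ceph osd getcrushmap -o crush.bin, then crushtool -d crush.bin -o crush.txt):

    rule replicated_rule {
            id 0
            type replicated
            step take default                     # start from the whole cluster
            step chooseleaf firstn 0 type host    # one replica per *host*
            step emit
    }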


Ceph as my storage provider?



Ceph is the future of storage; where traditional systems fail to deliver, Ceph is designed to excel. Leverage your data for better business decisions and achieve operational excellence through scalable, intelligent, reliable, and highly available storage software. Ceph supports object, block and file storage, all in one unified storage system.

That’s the official definition from the Ceph website. Is it true?

I don’t know. I want to find out!
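
The “unified” part of that pitch is at least concrete: the same cluster exposes block, object, and file interfaces. A quick sketch of what that looks like from a client (the pool name, monitor address, and paths are made-up examples):

    # Block: create a 10 GiB RBD image in a pool named 'rbd'
    rbd create rbd/demo-disk --size 10G

    # Object: store and fetch a raw object with rados
    rados -p rbd put demo-object ./notes.txt
    rados -p rbd get demo-object ./notes-copy.txt

    # File: mount CephFS like any other filesystem
    mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret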

For a few weeks I’ve been in the planning stage to install and configure Ceph on a 3-node cluster, everything done via the Proxmox UI. One of the main issues with this solution is the storage devices. How’s that?

Well… it doesn’t like consumer SSDs/disks/NVMe.


  • Supermicro X9SRL with Xeon E5-2680 v2 + 128GB of RAM + Intel P3600 1.6TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 128GB of RAM + Intel P3600 1.2TB PCIe NVMe
  • HP Z440 with Xeon E5-2640 v4 + 64GB of RAM + Intel P3600 1.2TB PCIe NVMe

Note: the storage listed here will be used for the Ceph OSDs; there is also a dual-port 10GbE card on each host for replication.
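
Those 10GbE links matter because Ceph can split client traffic from replication traffic across two networks; a sketch of the relevant ceph.conf settings (the subnets are examples, not mine):

    [global]
        # Clients and monitors talk on the public network
        public_network = 192.168.1.0/24
        # OSD replication and heartbeat traffic stays on the 10GbE links
        cluster_network = 10.10.10.0/24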

I have a pair of Samsung 970 EVO Plus (1TB) drives that were working fine with vSAN ESA, but I decided to move to Intel enterprise NVMe because a lot of information around the web points to bad Ceph performance with this type of consumer NVMe.
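
Most of that bad press comes down to synchronous writes: the OSD journal/WAL issues small sync writes, and consumer drives without power-loss protection handle those poorly. A common way to check a drive with fio (this writes to the raw device, so it is destructive, and the device path is just an example):

    # Single-threaded 4K sync writes at queue depth 1,
    # roughly the pattern a Ceph journal/WAL generates
    fio --name=sync-write-test --filename=/dev/nvme0n1 \
        --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based

Drives with power-loss protection reportedly sustain tens of thousands of IOPS on this test, while consumer NVMe often drops to a few hundred.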

The Supermicro machine is already running Proxmox, so let the Ceph adventure begin!

This picture shows one of the Z440s; it’s full in there!

[Photo: inside one of the HP Z440 workstations]