
Sebastian Baszcyj - 14.02.2023

How to create a Ceph Cluster on a Single Machine


Ceph is a software-defined storage platform that is hardware-agnostic and well suited to data analytics, AI/ML and other data-intensive workloads. Because it is usually deployed at large scale, simpler setups for testing or learning tend to go unnoticed.

Yet there are plenty of situations where you only need to test a command, observe software behaviour or validate an integration. When a quick Ceph cluster is needed for learning or practice, or to connect to a test OpenShift or OpenStack cluster, and only limited hardware is available, it is possible to create a single-machine Ceph cluster with a few adjustments.

To create a single-machine Ceph cluster, all that’s required is a virtual or physical machine with 4 CPU cores, 8GB of RAM and at least three disks for Ceph, plus one additional disk for the operating system.

The cluster can be created using either Red Hat Ceph Storage or the Ceph Community edition. This guide uses Red Hat Ceph Storage 5; if you are using the Community edition, Pacific is the equivalent release. It’s worth noting that, other than enabling the repository, all commands remain the same in the Community edition.

It’s important to remember that this setup is intended for study or learning purposes, and should not be expected to provide high performance or data resiliency.

System Configuration

A Linux distribution needs to be installed on the machine. This example uses Red Hat Enterprise Linux (RHEL) 8.7, but any Ceph-supported distribution can be used. 

Four disks will be used in this example: one for the operating system and the remaining three for Ceph Object Storage Daemons (OSDs). The demonstrated setup runs in an Oracle VirtualBox VM.
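If you are also using VirtualBox, the three extra disks can be created and attached from the host with VBoxManage before booting the VM. The snippet below is only a sketch: the VM name (rhel-ceph), the storage controller name (SATA) and the port numbers are assumptions, so adjust them to match your own setup.

# Create and attach three 100GB disks (VM name "rhel-ceph", controller "SATA" and ports 1-3 are assumed)
for i in 1 2 3; do
  VBoxManage createmedium disk --filename ceph-osd${i}.vdi --size 102400
  VBoxManage storageattach "rhel-ceph" --storagectl "SATA" --port ${i} --device 0 --type hdd --medium ceph-osd${i}.vdi
done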

The following presents the configuration of the server: 

[root@rhel-ceph ~]# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.7 (Ootpa)

[root@rhel-ceph ~]# lsmem
RANGE                                  SIZE  STATE REMOVABLE BLOCK
0x0000000000000000-0x00000000dfffffff  3.5G online       yes  0-27
0x0000000100000000-0x000000021fffffff  4.5G online       yes 32-67

Memory block size:       128M
Total online memory:       8G
Total offline memory:      0B

[root@rhel-ceph ~]# lscpu
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              4
On-line CPU(s) list: 0-3
Thread(s) per core:  1
Core(s) per socket:  4
Socket(s):           1
NUMA node(s):        1
Vendor ID:           GenuineIntel
CPU family:          6
Model:               154
Model name:          12th Gen Intel(R) Core(TM) i7-12700H
Stepping:            3
CPU MHz:             2688.028
BogoMIPS:            5376.05
Hypervisor vendor:   KVM
Virtualisation type: full
L1d cache:           48K
L1i cache:           32K
L2 cache:            1280K
L3 cache:            24576K
NUMA node0 CPU(s):   0-3
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase bmi1 avx2 bmi2 invpcid rdseed clflushopt md_clear flush_l1d arch_capabilities

[root@rhel-ceph ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   50G  0 disk
├─sda1          8:1    0    1G  0 part /boot
└─sda2          8:2    0   49G  0 part
  ├─rhel-root 253:0    0   44G  0 lvm  /
  └─rhel-swap 253:1    0    5G  0 lvm  [SWAP]
sdb             8:16   0  100G  0 disk
sdc             8:32   0  100G  0 disk
sdd             8:48   0  100G  0 disk
sr0            11:0    1 1024M  0 rom

Installation

After installing Linux and attaching the three disks, the next step is to enable the Ceph repositories. On RHEL, the following command enables the Red Hat Ceph Storage 5 tools repository:

subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms
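If you are on the Ceph Community edition instead, there is no subscription to attach; you point dnf at the upstream Pacific repositories. The repo file below is only a sketch based on the upstream download.ceph.com layout, so verify the paths against the Ceph documentation for your distribution:

# Sketch of an upstream Pacific repo file for el8 (Community edition only)
cat > /etc/yum.repos.d/ceph.repo <<'EOF'
[ceph]
name=Ceph packages for x86_64
baseurl=https://download.ceph.com/rpm-pacific/el8/x86_64
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-pacific/el8/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
EOF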

In case you have not come across it, cephadm is the newer tool from the Ceph project for provisioning clusters, and it deploys every daemon as a container. Alongside the cephadm package itself, you will also need Podman to manage the containers, ceph-common for the common Ceph commands and ceph-base for the more advanced tools. The following command installs these packages:

dnf install podman cephadm ceph-common ceph-base -y 
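Once the installation finishes, a quick sanity check confirms the tooling is in place (the exact versions reported will depend on your repositories):

# Confirm cephadm, the Ceph CLI and Podman are installed
cephadm version
ceph --version
podman --version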

Once all the packages are installed, we are ready to bootstrap the cluster. Make sure you know the machine’s network address (if not, ip addr sh will help).
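For instance, a compact view of the configured IPv4 addresses can be obtained with the command below; pick the address of the interface your cluster will use:

# Brief listing of interfaces and their IPv4 addresses
ip -brief -4 addr show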

The following command needs a few words of explanation. 

To create a new Ceph cluster, the first step is to execute the cephadm bootstrap command on the host of the Ceph cluster. This command creates the cluster’s monitor daemon, which requires an IP address, so the IP address of the single Ceph cluster host must be passed to cephadm bootstrap. In my case, the server has the static IP address 192.168.50.137/24, hence --mon-ip is set to this address. Also make sure you have a Red Hat account so the containers can be pulled from registry.redhat.io; it is easy to create one on www.redhat.com.

cephadm bootstrap \
  --cluster-network 192.168.50.0/24 \
  --mon-ip 192.168.50.137 \
  --registry-url registry.redhat.io \
  --registry-username 'your_rh_user' \
  --registry-password 'your_rh_pass' \
  --dashboard-password-noupdate \
  --initial-dashboard-user admin \
  --initial-dashboard-password ceph \
  --allow-fqdn-hostname \
  --single-host-defaults

A successful bootstrap should finish with output similar to the following:

Ceph Dashboard is now available at:

       URL: https://rhel-ceph.example.net:8443/
      User: admin
  Password: ceph

Enabling client.admin keyring and conf on hosts with "admin" label
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:

  sudo /usr/sbin/cephadm shell --fsid 3f7804be-925e-11ed-a0ff-08002785a452 -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring

Or, if you are only running a single cluster on this host:

  sudo /usr/sbin/cephadm shell

Please consider enabling telemetry to help improve Ceph:

  ceph telemetry on

For more information see:

  https://docs.ceph.com/en/pacific/mgr/telemetry/

Bootstrap complete.
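At this point the monitor, manager and monitoring stack should already be running as containers. Before moving on, you can optionally ask the orchestrator what it has deployed:

# List the services cephadm has scheduled and the hosts it manages
ceph orch ls
ceph orch host ls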

Ceph Configuration

With the cluster now up and running, the next step is to add Object Storage Daemons (OSDs), which are needed before any disks, file systems or buckets can be created; one OSD is required per backing disk. The output of the ceph -s command below shows that at least two OSDs are needed (or three if the cluster was bootstrapped without the --single-host-defaults option).

[root@rhel-ceph ~]# ceph -s
  cluster:
    id:     3f7804be-925e-11ed-a0ff-08002785a452
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 2

  services:
    mon: 1 daemons, quorum rhel-ceph.example.net (age 11m)
    mgr: rhel-ceph.example.net.mcsjrx(active, since 7m), standbys: rhel-ceph.rpnvua
    osd: 0 osds: 0 up, 0 in

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:
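Before applying an OSD specification, it is worth confirming that the orchestrator can see the three empty disks and reports them as available:

# sdb, sdc and sdd should be listed as available devices
ceph orch device ls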

Use the following command to create OSDs on all available disks (we attached three):

[root@rhel-ceph ~]# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...

Monitor the progress with the ceph -s command:

[root@rhel-ceph ~]# ceph -s
  cluster:
    id:     3f7804be-925e-11ed-a0ff-08002785a452
    health: HEALTH_OK

  services:
    mon: 1 daemons, quorum rhel-ceph.example.net (age 14m)
    mgr: rhel-ceph.example.net.mcsjrx(active, since 10m), standbys: rhel-ceph.rpnvua
    osd: 3 osds: 1 up (since 3s), 3 in (since 17s)

  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   4.8 MiB used, 100 GiB / 100 GiB avail
    pgs:

Note the health of the cluster in the output above (HEALTH_OK).
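For a per-OSD view rather than the cluster summary, the usual OSD queries work as well, for example:

# Show the CRUSH tree and per-OSD utilisation
ceph osd tree
ceph osd df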

You can also verify the status using the following command: 

[root@rhel-ceph ~]# ceph orch ps
NAME                              HOST                   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION           IMAGE ID      CONTAINER ID
alertmanager.rhel-ceph            rhel-ceph.example.net  *:9093,9094  running (11m)     3s ago  14m    23.7M        -                    2de2e7d63e1b  291ff6ca84cf
crash.rhel-ceph                   rhel-ceph.example.net               running (14m)     3s ago  14m    6979k        -  16.2.10-94.el8cp  34880245f74a  f7f25bc08cef
grafana.rhel-ceph                 rhel-ceph.example.net  *:3000       running (11m)     3s ago  12m    59.8M        -  8.3.5             bf676a29bcc5  272a970fabf9
mgr.rhel-ceph.example.net.mcsjrx  rhel-ceph.example.net  *:9283       running (15m)     3s ago  15m     434M        -  16.2.10-94.el8cp  34880245f74a  7ec955793058
mgr.rhel-ceph.rpnvua              rhel-ceph.example.net  *:8443       running (12m)     3s ago  12m     387M        -  16.2.10-94.el8cp  34880245f74a  148d40031067
mon.rhel-ceph.example.net         rhel-ceph.example.net               running (15m)     3s ago  15m    72.7M    2048M  16.2.10-94.el8cp  34880245f74a  af8aefba4729
node-exporter.rhel-ceph           rhel-ceph.example.net  *:9100       running (11m)     3s ago  11m    21.9M        -                    6c8570b1928b  6893ea15579d
osd.0                             rhel-ceph.example.net               running (83s)     3s ago  83s    27.1M    4096M  16.2.10-94.el8cp  34880245f74a  2fe598d25b78
osd.1                             rhel-ceph.example.net               running (78s)     3s ago  78s    27.3M    4096M  16.2.10-94.el8cp  34880245f74a  138abd7068a4
osd.2                             rhel-ceph.example.net               running (74s)     3s ago  74s    27.6M    4096M  16.2.10-94.el8cp  34880245f74a  4c1607cdf31b
prometheus.rhel-ceph              rhel-ceph.example.net  *:9095       running (11m)     3s ago  11m    56.4M        -                    39847ff1cddf  0dc108db7a31

Another way of checking if the OSDs are available is by verifying the containers: 

[root@rhel-ceph ~]# podman ps -f name=osd --format "{{.Names}} {{.Status}}"
ceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-0 Up 9 minutes ago
ceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-1 Up 9 minutes ago
ceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-2 Up 9 minutes ago

Test

Ceph’s block storage is referred to as RBD, which stands for RADOS Block Device. Before any disks can be created, a pool configured for RBD must exist. The following commands create a pool called ‘rbd’ and then enable it for RBD usage:

[root@rhel-ceph ~]# ceph osd pool create rbd

[root@rhel-ceph ~]# ceph osd pool stats
pool device_health_metrics id 1
  nothing is going on

pool rbd id 2
  nothing is going on

Activate the pool: 

[root@rhel-ceph ~]# ceph osd pool application enable rbd rbd
enabled application 'rbd' on pool 'rbd'
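As an alternative to enabling the application by hand, recent Ceph releases also ship rbd pool init, which prepares a pool for RBD in a single step; either approach leaves the pool ready for images:

# Optional one-step initialisation of the pool for RBD use
rbd pool init rbd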

Create an RBD image:

rbd create test_rbd_file --size 2G 

And verify the config: 

[root@rhel-ceph ~]# rbd pool stats
Total Images: 1
Total Snapshots: 0
Provisioned Size: 2 GiB

[root@rhel-ceph ~]# rbd list
test_rbd_file

[root@rhel-ceph ~]# rbd --image test_rbd_file info
rbd image 'test_rbd_file':
  size 2 GiB in 512 objects
  order 22 (4 MiB objects)
  snapshot_count: 0
  id: 5e5063f6a66d
  block_name_prefix: rbd_data.5e5063f6a66d
  format: 2
  features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
  op_features:
  flags:
  create_timestamp: Fri Jan 13 12:20:39 2023
  access_timestamp: Fri Jan 13 12:20:39 2023
  modify_timestamp: Fri Jan 13 12:20:39 2023

Mounting RBD

We can quickly test this on the Ceph cluster node itself.

Run the following command to map the image to a block device: 

rbd map test_rbd_file 
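The mapping can be confirmed with rbd showmapped, which lists each mapped image and the /dev/rbdX device it received:

# List mapped RBD images and their block devices
rbd showmapped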

Create the file system over a new device: 

mkfs.xfs /dev/rbd/rbd/test_rbd_file

Create a mountpoint: 

mkdir -p /mnt/ceph-block-device 

Mount the filesystem: 

mount /dev/rbd/rbd/test_rbd_file /mnt/ceph-block-device/ 
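A quick df confirms the 2 GiB filesystem is mounted:

# Verify the new filesystem is mounted and shows roughly 2 GiB of capacity
df -h /mnt/ceph-block-device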

Some performance testing: 

[root@rhel-ceph]# cd /mnt/ceph-block-device/
[root@rhel-ceph ceph-block-device]# dd if=/dev/zero of=./test1.img bs=1G count=1 oflag=dsync
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.93031 s, 120 MB/s
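When you have finished experimenting, the test image can be removed in reverse order. Removing the pool itself is optional and additionally requires the mon_allow_pool_delete setting to be enabled, so treat that last step with care:

# Tear down the test image (leave the mountpoint first, otherwise umount reports the target as busy)
cd /
umount /mnt/ceph-block-device
rbd unmap test_rbd_file
rbd rm test_rbd_file

# Optional: remove the pool as well (requires mon_allow_pool_delete=true)
ceph config set mon mon_allow_pool_delete true
ceph osd pool rm rbd rbd --yes-i-really-really-mean-it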

Although Ceph is a powerful platform capable of providing many storage types, it can be daunting to think of it only as a single immense cluster with many devices and multiple petabytes of data. Fortunately, the same capabilities allow it to be used at a small scale for learning purposes. Don’t hesitate to contact us for more information on Ceph clusters.
