{"id":17708,"date":"2023-02-14T06:18:02","date_gmt":"2023-02-14T06:18:02","guid":{"rendered":"https:\/\/www.insentragroup.com\/nz\/?p=17708"},"modified":"2024-12-13T01:57:16","modified_gmt":"2024-12-13T01:57:16","slug":"how-to-create-a-ceph-cluster-on-a-single-machine","status":"publish","type":"post","link":"https:\/\/www.insentragroup.com\/nz\/insights\/geek-speak\/modern-workplace\/how-to-create-a-ceph-cluster-on-a-single-machine\/","title":{"rendered":"How to create a Ceph Cluster on a Single Machine"},"content":{"rendered":"\n<p>Ceph is a hardware-agnostic, software-defined storage platform, well-suited to data analytics, AI\/ML and other data-intensive workloads. Because it is most often deployed at large scale, its suitability for simpler setups used in testing or learning can easily go unnoticed.&nbsp;<\/p>\n\n\n\n<p>Yet there are situations where you need to test a command, a piece of software behaviour or an integration. When a quick Ceph cluster is needed for learning or practice, or to connect to a test OpenShift or OpenStack cluster, and only limited hardware is available, it is possible to create a single-machine Ceph cluster with some adjustments.&nbsp;<\/p>\n\n\n\n<p>All that&#8217;s required is a machine, virtual or physical, with 4 CPU cores, 8GB of RAM and at least 3 disks (plus one additional disk for the operating system).&nbsp;<\/p>\n\n\n\n<p>The cluster can be created using either <a href=\"https:\/\/www.insentragroup.com\/nz\/services\/professional-services\/cloud-and-modern-data-centre\/\" target=\"_blank\" rel=\"noreferrer noopener\">Red Hat Ceph<\/a> or the Ceph Community edition. 
This guide uses Red Hat Ceph 5, but if you&#8217;re using the Ceph Community edition, the Pacific version would be the equivalent. It&#8217;s worth noting that other than activating the repository, all other commands remain the same in the Ceph Community edition.&nbsp;<\/p>\n\n\n\n<p>It&#8217;s important to remember that this setup is intended for study or learning purposes, and should not be expected to provide high performance or data resiliency.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">System Configuration<\/h2>\n\n\n\n<p>A Linux distribution needs to be installed on the machine. This example uses Red Hat Enterprise Linux (RHEL) 8.7, but any Ceph-supported distribution can be used.&nbsp;<\/p>\n\n\n\n<p>Four disks will be used in this example: one for the operating system and the remaining three for Ceph Object Storage Daemons (OSDs). The demonstrated setup runs in an Oracle VirtualBox virtual machine.&nbsp;<\/p>\n\n\n\n<p>The server is configured as follows:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# cat \/etc\/redhat-release \n\nRed Hat Enterprise Linux release 8.7 (Ootpa) \n\n&#91;root@rhel-ceph ~]# lsmem \n\nRANGE                                  SIZE  STATE REMOVABLE BLOCK \n\n0x0000000000000000-0x00000000dfffffff  3.5G online       yes  0-27 \n\n0x0000000100000000-0x000000021fffffff  4.5G online       yes 32-67 \n\n \n\nMemory block size:       128M \n\nTotal online memory:       8G \n\nTotal offline memory:      0B \n\n&#91;root@rhel-ceph ~]# lscpu \n\nArchitecture:        x86_64 \n\nCPU op-mode(s):      32-bit, 64-bit \n\nByte Order:          Little Endian \n\nCPU(s):              4 \n\nOn-line CPU(s) list: 0-3 \n\nThread(s) per core:  1 \n\nCore(s) per socket:  4 \n\nSocket(s):           1 \n\nNUMA node(s):        1 \n\nVendor ID:           GenuineIntel \n\nCPU family:          6 \n\nModel:               154 \n\nModel name:          12th Gen Intel(R) Core(TM) i7-12700H \n\nStepping:            3 \n\nCPU MHz:     
        2688.028 \n\nBogoMIPS:            5376.05 \n\nHypervisor vendor:   KVM \n\nVirtualization type: full \n\nL1d cache:           48K \n\nL1i cache:           32K \n\nL2 cache:            1280K \n\nL3 cache:            24576K \n\nNUMA node0 CPU(s):   0-3 \n\nFlags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single pti fsgsbase bmi1 avx2 bmi2 invpcid rdseed clflushopt md_clear flush_l1d arch_capabilities \n\n&#91;root@rhel-ceph ~]# lsblk \n\nNAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT \n\nsda             8:0    0   50G  0 disk \n\n\u251c\u2500sda1          8:1    0    1G  0 part \/boot \n\n\u2514\u2500sda2          8:2    0   49G  0 part \n\n  \u251c\u2500rhel-root 253:0    0   44G  0 lvm  \/ \n\n  \u2514\u2500rhel-swap 253:1    0    5G  0 lvm  &#91;SWAP] \n\nsdb             8:16   0  100G  0 disk \n\nsdc             8:32   0  100G  0 disk \n\nsdd             8:48   0  100G  0 disk \n\nsr0            11:0    1 1024M  0 rom   <\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Installation<\/h2>\n\n\n\n<p>After installing Linux and attaching the three disks, the next step is to add or enable the Ceph repositories. On RHEL, the following command enables the Ceph repository:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>subscription-manager repos --enable=rhceph-5-tools-for-rhel-8-x86_64-rpms <\/code><\/pre>\n\n\n\n<p>An alternative approach to provisioning the Ceph cluster is to use cephadm, a newer tool from the Ceph project, which deploys the entire cluster as containers. 
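As a side note on the container-based layout: cephadm names every container it launches after the cluster fsid and the daemon it runs, following the pattern ceph-&lt;fsid&gt;-&lt;daemon&gt; (visible in the podman output later in this post). A minimal sketch of recovering the fsid from such a name with plain shell parameter expansion, using a sample name taken from this post's own cluster:

```shell
# cephadm container names follow ceph-<fsid>-<daemon> (here the daemon is osd-0)
name='ceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-0'

# strip the leading "ceph-" prefix, then the trailing "-osd-0" daemon suffix
fsid=${name#ceph-}
fsid=${fsid%-osd-*}
echo "$fsid"
```

The same fsid is what the cephadm shell command printed at the end of the bootstrap expects via its --fsid flag.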
In this case, along with that package (cephadm), you will also need Podman to manage the containers, ceph-common for common Ceph commands and ceph-base for advanced tools. The following command installs these packages:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>dnf install podman cephadm ceph-common ceph-base -y <\/code><\/pre>\n\n\n\n<p>Once all the packages are installed, we are ready to bootstrap the cluster. Ensure you know your network address (if not, ip addr sh will help).&nbsp;<\/p>\n\n\n\n<p>The following command needs a few words of explanation.&nbsp;<\/p>\n\n\n\n<p>To create a new Ceph cluster, the first step is to execute the cephadm bootstrap command on the cluster host. This command creates the &#8220;monitor daemon&#8221; of the Ceph cluster, which must be bound to an IP address, so you need to know the host&#8217;s IP address beforehand. In my case, the server has the static IP address 192.168.50.137\/24, hence --mon-ip is set to this address. Make sure you have a Red Hat account so you can pull the containers from registry.redhat.io. 
It is easy to create one on www.redhat.com.&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>cephadm bootstrap \\ \n\n--cluster-network 192.168.50.0\/24 \\ \n\n--mon-ip 192.168.50.137 \\ \n\n--registry-url registry.redhat.io \\ \n\n--registry-username 'your_rh_user' \\ \n\n--registry-password 'your_rh_pass' \\ \n\n--dashboard-password-noupdate \\ \n\n--initial-dashboard-user admin \\ \n\n--initial-dashboard-password ceph \\ \n\n--allow-fqdn-hostname \\ \n\n--single-host-defaults <\/code><\/pre>\n\n\n\n<p>A successful bootstrap ends with output similar to the following:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>Ceph Dashboard is now available at: \n\n\n\n     URL: https:\/\/rhel-ceph.example.net:8443\/ \n\n    User: admin \n\nPassword: ceph \n\n\n\nEnabling client.admin keyring and conf on hosts with \"admin\" label \n\nEnabling autotune for osd_memory_target \n\nYou can access the Ceph CLI as following in case of multi-cluster or non-default config: \n\n\n\nsudo \/usr\/sbin\/cephadm shell --fsid 3f7804be-925e-11ed-a0ff-08002785a452 -c \/etc\/ceph\/ceph.conf -k \/etc\/ceph\/ceph.client.admin.keyring \n\n\n\nOr, if you are only running a single cluster on this host: \n\n\n\nsudo \/usr\/sbin\/cephadm shell \n\n\n\nPlease consider enabling telemetry to help improve Ceph: \n \n\nceph telemetry on \n\n\n\nFor more information see: \n\n\n\nhttps:&#47;&#47;docs.ceph.com\/en\/pacific\/mgr\/telemetry\/ \n\n\nBootstrap complete. <\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Ceph Configuration<\/h2>\n\n\n\n<p>With the cluster now up and running, the next step is to add Object Storage Daemons (OSDs), which store the data behind any disks, file systems or buckets you create. One OSD is required for each data disk. 
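Before adding the OSDs, it is worth estimating what the disks will actually yield. Because every object is stored twice under the osd_pool_default_size of 2 that --single-host-defaults configures, usable capacity is roughly raw capacity divided by the replica count. A back-of-the-envelope sketch for the three 100 GiB disks used here:

```shell
# Rough usable-capacity estimate for a replicated Ceph pool:
# three 100 GiB OSD disks, two replicas per object
osds=3
osd_gib=100
replicas=2

raw=$((osds * osd_gib))
usable=$((raw / replicas))
echo "raw: ${raw} GiB, usable: ~${usable} GiB"
```

This ignores Ceph's own metadata overhead and the near-full safety margins, so treat the result as an upper bound.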
The output of the ceph -s command shows that at least two OSDs are needed, or three if the cluster was bootstrapped without the --single-host-defaults option.&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph -s \n\n  cluster: \n\n    id:     3f7804be-925e-11ed-a0ff-08002785a452 \n\n    health: HEALTH_WARN \n\n            OSD count 0 &lt; osd_pool_default_size 2 \n\n  \n  services: \n\n    mon: 1 daemons, quorum rhel-ceph.example.net (age 11m) \n\n    mgr: rhel-ceph.example.net.mcsjrx(active, since 7m), standbys: rhel-ceph.rpnvua \n\n    osd: 0 osds: 0 up, 0 in \n\n\n\n  data: \n\n    pools:   0 pools, 0 pgs \n\n    objects: 0 objects, 0 B \n\n    usage:   0 B used, 0 B \/ 0 B avail \n\n    pgs:      <\/code><\/pre>\n\n\n\n<p>Use the following command to add OSDs on all available disks (we attached three):&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph orch apply osd --all-available-devices \n\nScheduled osd.all-available-devices update... 
<\/code><\/pre>\n\n\n\n<p>Monitor the progress with the ceph -s command:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph -s \n\n  cluster: \n\n    id:     3f7804be-925e-11ed-a0ff-08002785a452 \n\n    health: HEALTH_OK \n\n  \n\n  services: \n\n    mon: 1 daemons, quorum rhel-ceph.example.net (age 14m) \n\n    mgr: rhel-ceph.example.net.mcsjrx(active, since 10m), standbys: rhel-ceph.rpnvua \n\n    osd: 3 osds: 1 up (since 3s), 3 in (since 17s) \n\n  \n\n  data: \n\n    pools:   0 pools, 0 pgs \n\n    objects: 0 objects, 0 B \n\n    usage:   4.8 MiB used, 100 GiB \/ 100 GiB avail \n\n    pgs:      <\/code><\/pre>\n\n\n\n<p>Note the health of the cluster (HEALTH_OK) in the output above.&nbsp;<\/p>\n\n\n\n<p>You can also verify the status using the following command:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph orch ps \n\nNAME                              HOST                   PORTS        STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION           IMAGE ID      CONTAINER ID   \n\nalertmanager.rhel-ceph            rhel-ceph.example.net  *:9093,9094  running (11m)     3s ago  14m    23.7M        -                    2de2e7d63e1b  291ff6ca84cf   \n\ncrash.rhel-ceph                   rhel-ceph.example.net               running (14m)     3s ago  14m    6979k        -  16.2.10-94.el8cp  34880245f74a  f7f25bc08cef   \n\ngrafana.rhel-ceph                 rhel-ceph.example.net  *:3000       running (11m)     3s ago  12m    59.8M        -  8.3.5             bf676a29bcc5  272a970fabf9   \n\nmgr.rhel-ceph.example.net.mcsjrx  rhel-ceph.example.net  *:9283       running (15m)     3s ago  15m     434M        -  16.2.10-94.el8cp  34880245f74a  7ec955793058   \n\nmgr.rhel-ceph.rpnvua              rhel-ceph.example.net  *:8443       running (12m)     3s ago  12m     387M        -  16.2.10-94.el8cp  34880245f74a  148d40031067   \n\nmon.rhel-ceph.example.net         rhel-ceph.example.net               running 
(15m)     3s ago  15m    72.7M    2048M  16.2.10-94.el8cp  34880245f74a  af8aefba4729   \n\nnode-exporter.rhel-ceph           rhel-ceph.example.net  *:9100       running (11m)     3s ago  11m    21.9M        -                    6c8570b1928b  6893ea15579d   \n\nosd.0                             rhel-ceph.example.net               running (83s)     3s ago  83s    27.1M    4096M  16.2.10-94.el8cp  34880245f74a  2fe598d25b78   \n\nosd.1                             rhel-ceph.example.net               running (78s)     3s ago  78s    27.3M    4096M  16.2.10-94.el8cp  34880245f74a  138abd7068a4   \n\nosd.2                             rhel-ceph.example.net               running (74s)     3s ago  74s    27.6M    4096M  16.2.10-94.el8cp  34880245f74a  4c1607cdf31b   \n\nprometheus.rhel-ceph              rhel-ceph.example.net  *:9095       running (11m)     3s ago  11m    56.4M        -                    39847ff1cddf  0dc108db7a31   <\/code><\/pre>\n\n\n\n<p>Another way of checking if the OSDs are available is by verifying the containers:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# podman ps -f name=osd --format \"{{.Names}} {{.Status}}\" \n\nceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-0 Up 9 minutes ago \n\nceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-1 Up 9 minutes ago \n\nceph-3f7804be-925e-11ed-a0ff-08002785a452-osd-2 Up 9 minutes ago <\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Test<\/h2>\n\n\n\n<p>Ceph&#8217;s block storage is referred to as RBD, which stands for RADOS block device. To create disks, a pool that is configured to work with RBD must be enabled. 
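The pool-creation command coming up omits a PG (placement group) count and leaves it to the autoscaler, which is on by default in Pacific and is the right choice here. If you ever need to size it manually, the classic rule of thumb is roughly (OSDs &#215; 100) divided by the replica count, rounded to a power of two; a sketch of that arithmetic (the rounding direction is a judgment call, and rounding down keeps a small test cluster light):

```shell
# Classic PG-count rule of thumb: (OSDs * 100) / replicas,
# rounded to a power of two (rounded down here for a small test cluster)
osds=3
replicas=2
target=$((osds * 100 / replicas))   # 150

pgs=1
while [ $((pgs * 2)) -le "$target" ]; do
  pgs=$((pgs * 2))
done
echo "$pgs"
```

For the three-OSD, two-replica cluster built in this post, that suggests 128 PGs, which is in the same ballpark the autoscaler converges on for a single busy pool.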
The following commands create a pool called &#8216;rbd&#8217; and list the pool stats:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph osd pool create rbd \n\n \n\n \n\n&#91;root@rhel-ceph ~]# ceph osd pool stats \n\npool device_health_metrics id 1 \n\n  nothing is going on \n\n \n\npool rbd id 2 \n\n  nothing is going on <\/code><\/pre>\n\n\n\n<p>Activate the pool for RBD usage:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# ceph osd pool application enable rbd rbd \n\nenabled application 'rbd' on pool 'rbd' <\/code><\/pre>\n\n\n\n<p>Create an RBD disk (image):&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>rbd create test_rbd_file --size 2G <\/code><\/pre>\n\n\n\n<p>And verify the config:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph ~]# rbd pool stats \n\nTotal Images: 1 \n\nTotal Snapshots: 0 \n\nProvisioned Size: 2 GiB \n\n&#91;root@rhel-ceph ~]# rbd list \n\ntest_rbd_file \n\n \n\n \n\n&#91;root@rhel-ceph ~]# rbd --image test_rbd_file info \n\nrbd image 'test_rbd_file': \n\nsize 2 GiB in 512 objects \n\norder 22 (4 MiB objects) \n\nsnapshot_count: 0 \n\nid: 5e5063f6a66d \n\nblock_name_prefix: rbd_data.5e5063f6a66d \n\nformat: 2 \n\nfeatures: layering, exclusive-lock, object-map, fast-diff, deep-flatten \n\nop_features: \n\nflags: \n\ncreate_timestamp: Fri Jan 13 12:20:39 2023 \n\naccess_timestamp: Fri Jan 13 12:20:39 2023 \n\nmodify_timestamp: Fri Jan 13 12:20:39 2023 <\/code><\/pre>\n\n\n\n<h2 class=\"wp-block-heading\">Mounting RBD<\/h2>\n\n\n\n<p>We can quickly test this on the Ceph cluster node itself.&nbsp;<\/p>\n\n\n\n<p>Run the following command to map the image to a block device:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>rbd map test_rbd_file <\/code><\/pre>\n\n\n\n<p>Create a file system on the new device:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mkfs.xfs \/dev\/rbd\/rbd\/test_rbd_file <\/code><\/pre>\n\n\n\n<p>Create a mountpoint:&nbsp;<\/p>\n\n\n\n<pre 
class=\"wp-block-code\"><code>mkdir -p \/mnt\/ceph-block-device <\/code><\/pre>\n\n\n\n<p>Mount the filesystem:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>mount \/dev\/rbd\/rbd\/test_rbd_file \/mnt\/ceph-block-device\/ <\/code><\/pre>\n\n\n\n<p>Some quick performance testing:&nbsp;<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>&#91;root@rhel-ceph]# cd \/mnt\/ceph-block-device\/ \n\n&#91;root@rhel-ceph ceph-block-device]# dd if=\/dev\/zero of=.\/test1.img bs=1G count=1 oflag=dsync \n\n1+0 records in \n\n1+0 records out \n\n1073741824 bytes (1.1 GB, 1.0 GiB) copied, 8.93031 s, 120 MB\/s <\/code><\/pre>\n\n\n\n<p>Although Ceph is a powerful platform capable of serving many storage types, it can feel daunting when pictured only as a single immense cluster with many devices and multiple petabytes of data. Fortunately, its flexibility also allows it to be employed at small scale for educational purposes. Don&#8217;t hesitate to <a href=\"https:\/\/www.insentragroup.com\/nz\/contact\/\" data-type=\"URL\" data-id=\"https:\/\/www.insentragroup.com\/au\/contact\/\" target=\"_blank\" rel=\"noreferrer noopener\">contact us<\/a> for more information on Ceph Clusters.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\">Related Article&nbsp;&nbsp;<\/h2>\n\n\n\n<p><a href=\"https:\/\/www.insentragroup.com\/nz\/insights\/geek-speak\/modern-workplace\/insentras-red-hat-capability\/\" target=\"_blank\" rel=\"noreferrer noopener\">Insentra\u2019s Red Hat Capability<\/a><\/p>\n\n\n\n<style>\nbody .wp-block-code>code {\n    font-family: Menlo,Consolas,monaco,monospace;\n    color: #000;\n    padding: 30px 40px;\n    border: none;\n    border-radius: 4px;\n    background: #ddd;\n}\n<\/style>\n","protected":false},"excerpt":{"rendered":"<p>Learn how to create a single-machine Ceph cluster for testing or learning purposes with this step-by-step guide. Get started with a machine, 4 CPU cores, 8GB RAM &#038; at least 4 disks today. Read on now! 
<\/p>\n","protected":false},"author":67,"featured_media":17709,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"content-type":"","footnotes":""},"categories":[19],"tags":[],"class_list":["post-17708","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-modern-workplace","entry"],"_links":{"self":[{"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/posts\/17708","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/users\/67"}],"replies":[{"embeddable":true,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/comments?post=17708"}],"version-history":[{"count":5,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/posts\/17708\/revisions"}],"predecessor-version":[{"id":17806,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/posts\/17708\/revisions\/17806"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/media\/17709"}],"wp:attachment":[{"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/media?parent=17708"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/categories?post=17708"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.insentragroup.com\/nz\/wp-json\/wp\/v2\/tags?post=17708"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}