Ceph Storage on CentOS

This post is about Ceph Storage on CentOS. Welcome.

Introduction

Ceph can be installed on any Linux distribution, but it requires a recent kernel and other up-to-date libraries to run properly. In this tutorial, we will be using CentOS with a minimal package installation.

Whether you want to provide Ceph Object Storage and Ceph Block Device services to Cloud Platforms, deploy a Ceph File System or use Ceph for another purpose, all Ceph Storage Cluster deployments begin with setting up each Ceph Node, your network, and the Ceph Storage Cluster. A Ceph Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also required when running Ceph File System clients.
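Before going further, it helps to confirm that those daemons are already running. A quick status check from the cluster's admin/monitor node (shown here with the illustrative hostname admin, and assuming the cluster itself is already deployed) looks like this:

# sanity check: the services section should list at least one mon, one mgr and one or more OSDs
[root@admin ~]# ceph -s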

Create an SSH key
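If the admin node does not already have a key pair, generate one first. A minimal example (the empty passphrase is only for convenience in a lab environment):

# generate an RSA key pair if none exists yet
[root@admin ~]# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa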

Transfer the public key to the client

[root@admin ~]# ssh-copy-id node1

Install required packages

[root@admin ~]# ssh node1 "yum -y install centos-release-ceph-nautilus"
[root@admin ~]# ssh node1 "yum -y install ceph-common"
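Optionally, confirm that the client tools are now available on node1 (the exact version string depends on the Nautilus point release installed):

# optional: verify the ceph CLI was installed on the client
[root@admin ~]# ssh node1 "ceph --version"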

Transfer required files to the client host

[root@admin ~]# scp /etc/ceph/ceph.conf node1:/etc/ceph/
ceph.conf                                     100%  195    98.1KB/s   00:00
[root@admin ~]# scp /etc/ceph/ceph.client.admin.keyring node1:/etc/ceph/
ceph.client.admin.keyring                     100%  151    71.5KB/s   00:00
[root@admin ~]# ssh node1 "chown ceph. /etc/ceph/ceph.*"
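With the configuration file and admin keyring in place, node1 should now be able to reach the cluster. An optional check (this assumes the monitors are reachable from node1):

# optional: confirm the client can talk to the cluster with the copied keyring
[root@admin ~]# ssh node1 "ceph -s"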

Create a block device and mount it on the client host

A block is a sequence of bytes (often 512 bytes long). Block-based storage interfaces are a mature and common way to store data on media including HDDs, SSDs, CDs, floppy disks, and even tape. The ubiquity of block device interfaces makes them a perfect fit for interacting with mass data storage, including Ceph.

Ceph block devices are thin-provisioned, resizable, and store data striped over multiple OSDs. Ceph block devices leverage RADOS capabilities including snapshots, replication, and strong consistency. Ceph block storage clients communicate with Ceph clusters through kernel modules or the librbd library.

Ceph block devices deliver high performance with vast scalability to kernel clients and to virtualization platforms such as KVM/QEMU, as well as to cloud-based computing systems like OpenStack and CloudStack that rely on libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster to operate the Ceph RADOS Gateway, the Ceph File System, and Ceph block devices simultaneously.

Create a default RBD pool

[root@node1 ~]# ceph osd pool create rbd 128
pool 'rbd' created
# enable Placement Groups auto scale mode
[root@node1 ~]# ceph osd pool set rbd pg_autoscale_mode on
set pool 1 pg_autoscale_mode to on
# initialize the pool
[root@node1 ~]# rbd pool init rbd
[root@node1 ~]# ceph osd pool autoscale-status
POOL   SIZE TARGET SIZE RATE RAW CAPACITY  RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE
rbd      6               3.0       239.9G 0.0000                               1.0    128         32 on
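rbd pool init also tags the pool with the rbd application so the cluster knows what it is used for. If you want, this can be verified as well (output shown is approximate; the command is available on Nautilus and later):

# optional: confirm the pool is tagged with the rbd application
[root@node1 ~]# ceph osd pool application get rbd
{
    "rbd": {}
}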

Create a 10G block device

[root@node1 ~]# rbd create --size 10G --pool rbd rbd01
# confirm
[root@node1 ~]# rbd ls -l
NAME  SIZE   PARENT FMT PROT LOCK
rbd01 10 GiB          2
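rbd info shows the image details, including which features are currently enabled (the default feature set depends on the client configuration):

# optional: inspect the image and its enabled features
[root@node1 ~]# rbd info rbd01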

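# disable features that the kernel RBD client does not support, otherwise mapping the image may fail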
[root@node1 ~]# rbd feature disable rbd01 object-map fast-diff deep-flatten
# map the block device
[root@node1 ~]# rbd map rbd01
/dev/rbd0
# confirm
[root@node1 ~]# rbd showmapped
id pool namespace image snap device
0  rbd            rbd01 -    /dev/rbd0
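Once mapped, the image behaves like any other block device, so standard tools can be used to inspect it:

# optional: the mapped image shows up as a regular block device
[root@node1 ~]# lsblk /dev/rbd0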

Format the block device with XFS

[root@node1 ~]# mkfs.xfs /dev/rbd0
Discarding blocks...Done.
meta-data=/dev/rbd0              isize=512    agcount=16, agsize=163840 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=0, sparse=0
data     =                       bsize=4096   blocks=2621440, imaxpct=25
         =                       sunit=1024   swidth=1024 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=8 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

[root@node1 ~]# mount /dev/rbd0 /tmp
[root@node1 ~]# df -hT
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  1.9G     0  1.9G   0% /dev
tmpfs                   tmpfs     1.9G     0  1.9G   0% /dev/shm
tmpfs                   tmpfs     1.9G  8.6M  1.9G   1% /run
tmpfs                   tmpfs     1.9G     0  1.9G   0% /sys/fs/cgroup
/dev/mapper/rhel-root   xfs        26G  1.8G   25G   7% /
/dev/vda1               xfs      1014M  319M  696M  32% /boot
tmpfs                   tmpfs     379M     0  379M   0% /run/user/0
/dev/rbd0               xfs        10G   33M  10G   1% /tmp
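Note that the mapping and mount above do not survive a reboot. One common way to make them persistent is the rbdmap service shipped with ceph-common; a rough sketch, reusing the admin keyring copied earlier (entry format: pool/image id=client-id,keyring=path):

# map the image automatically at boot via the rbdmap service
[root@node1 ~]# echo "rbd/rbd01 id=admin,keyring=/etc/ceph/ceph.client.admin.keyring" >> /etc/ceph/rbdmap
[root@node1 ~]# systemctl enable rbdmap.service

A matching /etc/fstab entry with the noauto option can then take care of the mount; see the rbdmap(8) man page for the exact behaviour on your release.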