Minimal Ceph Deployment

Ceph single-node deployment on CentOS 7

This is a minimal Ceph deployment. Even though we're doing a single-node deployment, the 'ceph-deploy' tool expects to be able to SSH into the local host as root without password prompts. So before starting, make sure to install SSH keys and edit /etc/ssh/sshd_config to set PermitRootLogin to yes (a sketch of that setup is shown below). Everything that follows should also be run as root.
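As a rough sketch (the key type and paths are only examples; adjust as needed), enabling passwordless root SSH to the local host might look like:

# ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa        # skip if root already has a key pair
# ssh-copy-id root@$(hostname)
# sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
# systemctl restart sshd
# ssh root@$(hostname) true                           # should return without prompting for a password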

First, we need the 'ceph-deploy' tool installed:

# pip install ceph-deploy
# ceph-deploy --version
2.0.0

ceph-deploy will create some config files in the local directory, so it is best to create a directory to hold them and run it from there:

# mkdir ceph-deploy
# cd ceph-deploy

Add the Ceph repository:

# cat /etc/yum.repos.d/ceph.repo
[ceph]
name=ceph repo
baseurl=http://download.ceph.com/rpm-luminous/el7/x86_64
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
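Depending on how yum has been used before, it may help to refresh the package metadata so the new repository is picked up; something like:

# yum clean all
# yum makecache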

Make sure that the hostname for the local machine is resolvable, both fully qualified and unqualified. If it is not, add entries to /etc/hosts so that it resolves.
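A quick check might look like this (the address and names in the /etc/hosts line are purely illustrative):

# getent hosts $(hostname) $(hostname -f)
# echo "192.168.1.10 mycentos7.example.com mycentos7" >> /etc/hosts   # only if the lookups above fail

With name resolution in place, the first step simply creates the basic config file for ceph-deploy: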

# export CEPH_HOST=`hostname`
# ceph-deploy new $CEPH_HOST

Since this will be a single-node deployment, there are two critical additions that must be made to the ceph.conf that was just created in the current directory: 'osd crush chooseleaf type = 0' lets CRUSH place data at the OSD level so a single node is sufficient, and 'osd pool default size = 1' keeps a single copy of the data, since there is only one OSD.

# echo "osd crush chooseleaf type = 0" >> ceph.conf
# echo "osd pool default size = 1" >> ceph.conf

Now tell ceph-deploy to actually install the main Ceph packages; --no-adjust-repos makes it use the repository configured above instead of setting up its own.

# ceph-deploy install --no-adjust-repos $CEPH_HOST

With the software installed, the monitor and manager daemons can be created and started, the keys gathered, and the admin keyring put where the ceph CLI expects it:

# ceph-deploy mon create $CEPH_HOST
# ceph-deploy gatherkeys $CEPH_HOST
# ceph-deploy mgr create $CEPH_HOST
# cp ceph.client.admin.keyring  /etc/ceph/
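At this point the monitor should be up and in quorum; an optional sanity check is something like:

# ceph mon stat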

Add an OSD. The steps below do this by hand: generate a UUID and a cephx secret for the new OSD, copy the bootstrap-osd keyring into place so the new OSD can be registered, register it with 'ceph osd new' (which prints the new OSD id), create the OSD's data directory and keyring, initialise its data store with 'ceph-osd --mkfs', fix ownership, and finally enable and start the systemd unit.

# UUID=$(uuidgen)
# OSD_SECRET=$(ceph-authtool --gen-print-key)
# cp ceph.bootstrap-osd.keyring  /var/lib/ceph/bootstrap-osd/ceph.keyring
# ID=$(echo "{\"cephx_secret\": \"$OSD_SECRET\"}" | ceph osd new $UUID -i -  -n client.bootstrap-osd -k /var/lib/ceph/bootstrap-osd/ceph.keyring)
# mkdir /var/lib/ceph/osd/ceph-$ID
# ceph-authtool --create-keyring /var/lib/ceph/osd/ceph-$ID/keyring --name osd.$ID --add-key $OSD_SECRET
# ceph-osd -i $ID --mkfs --osd-uuid $UUID
# chown -R ceph:ceph /var/lib/ceph/osd/ceph-$ID
# systemctl enable ceph-osd@$ID
# systemctl start ceph-osd@$ID

Assuming all of that completed without error, the cluster should now be up and usable.

# ceph -s
 cluster:
   id:     47e7781d-d663-49eb-8cd3-5a1b258fa11d
   health: HEALTH_WARN
           mon mycentos7 is low on available space

 services:
   mon: 1 daemons, quorum mycentos7
   mgr: mycentos7(active)
   osd: 1 osds: 1 up, 1 in

 data:
   pools:   0 pools, 0 pgs
   objects: 0 objects, 0 bytes
   usage:   29739 MB used, 9457 MB / 39196 MB avail
   pgs:

Now create the pools for yig:

# ceph osd pool create rabbit 16 16
# ceph osd pool create tiger 16 16
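To confirm the pools were created, something like:

# ceph osd pool ls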