cluster client configuration #136

Closed
cornelius-keller opened this issue Nov 10, 2014 · 14 comments

@cornelius-keller
Contributor

Hi all,

Can anyone provide some documentation on how to set up a client for this cluster?
I am trying to test some rbd use cases, like creating, mounting, and cloning rbd images, against a local cluster set up with Vagrant.

I tried copying the monitor configuration from mon1:/etc/ceph/* to /etc/ceph/ on my host machine.
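
Concretely, the copy amounts to roughly the following (hostname as above; the admin keyring file name is the ceph-ansible default and an assumption here):

scp root@mon1:/etc/ceph/ceph.conf /etc/ceph/
scp root@mon1:/etc/ceph/ceph.client.admin.keyring /etc/ceph/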

With that in place, ceph health outputs:

root@jck2:~# ceph health
HEALTH_OK

ceph -w

root@jck2:~# ceph -w
    cluster 4a158d27-f750-41d5-9e7f-26ce4c9d2d45
     health HEALTH_OK
     monmap e1: 3 mons at {ceph-mon0=192.168.42.10:6789/0,ceph-mon1=192.168.42.11:6789/0,ceph-mon2=192.168.42.12:6789/0}, election epoch 6, quorum 0,1,2 ceph-mon0,ceph-mon1,ceph-mon2
     osdmap e14: 6 osds: 6 up, 6 in
      pgmap v24: 192 pgs, 3 pools, 0 bytes data, 0 objects
            207 MB used, 65126 MB / 65333 MB avail
                 192 active+clean

2014-11-10 12:59:34.364045 mon.0 [INF] pgmap v24: 192 pgs: 192 active+clean; 0 bytes data, 207 MB used, 65126 MB / 65333 MB avail

ceph osd tree:

root@jck2:~# ceph osd tree
# id    weight  type name   up/down reweight
-1  0.05997 root default
-2  0.01999     host ceph-osd0
0   0.009995            osd.0   up  1   
3   0.009995            osd.3   up  1   
-3  0.01999     host ceph-osd1
2   0.009995            osd.2   up  1   
4   0.009995            osd.4   up  1   
-4  0.01999     host ceph-osd2
1   0.009995            osd.1   up  1   
5   0.009995            osd.5   up  1   

Creating rbd images works!

root@jck2:~# rbd create --image-format 2 --size 512 test --id admin
root@jck2:~# rbd ls
test

However, if I try to map and mount an image I get the following error:

root@jck2:~# rbd map test --id admin
rbd: add failed: (95) Operation not supported

Can anyone tell me why map fails? If someone provides me with some hints on setting up the client, I am willing to contribute to ceph-ansible by writing a playbook that configures clients and documenting the setup for other users.

Thanks in advance

  • Cornelius
@leseb
Member

leseb commented Nov 10, 2014

What you did is correct.
Which client do you use? Please look at dmesg.
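
For example, something along these lines on the client usually shows whether the kernel hit a feature or authentication mismatch:

dmesg | grep -iE 'libceph|rbd'   # kernel client messages, if any
uname -r                         # the running kernel determines which RBD/cephx features are supported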

@cornelius-keller
Contributor Author

Hi Leseb,

Since you said that what I did is correct, I invested some more time in trying to solve the problem on my own, but I failed.
In my fork of ceph-ansible I added a client machine and role to the Vagrant and Ansible setup, and I use the created client VM for testing.

Unfortunately dmesg says almost nothing. The only thing I see on the client is:

[13085.346189] Key type ceph registered
[13085.346858] libceph: loaded (mon/osd proto 15/24)
[13085.350742] rbd: loaded rbd (rados block device)

On the monitors and OSDs I can't see anything suspicious.

@cornelius-keller
Contributor Author

Hi Leseb,

I finally got it. The reason for my problem was that the rbd kernel module does not support cephx authentication with signatures.

Changing

cephx_require_signatures: true

to

cephx_require_signatures: false

in roles/ceph-common/vars/main.yml solved my problem.

I finally found the solution in this thread on the ceph-users mailing list: http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-February/007797.html
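
For anyone hitting the same thing, the rough sequence after flipping the variable was (inventory and playbook names are the repository defaults and an assumption here; the mon id comes from this Vagrant setup):

# re-render /etc/ceph/ceph.conf on all nodes with the new value
ansible-playbook -i hosts site.yml
# on a monitor, check the effective setting via the admin socket
sudo ceph --admin-daemon /var/run/ceph/ceph-mon.ceph-mon0.asok config show | grep require_signatures
# mapping should now succeed and print a /dev/rbdX device
rbd map test --id admin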

While debugging I wanted to make sure that an old version of Ceph or Ubuntu was not the reason for my problems, so in my fork of ceph-ansible I now use trusty for the client OS and the giant stable release of Ceph.

Also, there is one client started and configured that one can use immediately for things like testing rbd:

Example:

jck@jck2:~$ cd git/ceph-ansible/
jck@jck2:~/git/ceph-ansible$ vagrant ssh client
Welcome to Ubuntu 14.04 LTS (GNU/Linux 3.13.0-29-generic x86_64)

 * Documentation:  https://help.ubuntu.com/

  System information as of Wed Nov 12 20:18:35 UTC 2014

  System load:  0.0               Processes:           75
  Usage of /:   2.7% of 39.34GB   Users logged in:     0
  Memory usage: 67%               IP address for eth0: 10.0.2.15
  Swap usage:   0%                IP address for eth1: 192.168.42.40

  Graph this data and manage this system at:
    https://landscape.canonical.com/

  Get cloud support with Ubuntu Advantage Cloud Guest:
    http://www.ubuntu.com/business/services/cloud

0 packages can be updated.
0 updates are security updates.


Last login: Wed Nov 12 20:21:34 2014 from 10.0.2.2
vagrant@client:~$ sudo su -
root@client:~# ceph health
HEALTH_WARN clock skew detected on mon.ceph-mon1
root@client:~# ceph osd lspools
0 rbd,
root@client:~# rbd create test --image-format 2 --size 512 --id admin
root@client:~# rbd ls
test
root@client:~# rbd map test
/dev/rbd1
root@client:~# ceph --version
ceph version 0.87 (c51c8f9d80fa4e0168aa52685b8de40e42758578)
root@client:~# mkfs.ext4 /dev/rbd/rbd/test 
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=1024 blocks, Stripe width=1024 blocks
32768 inodes, 131072 blocks
6553 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=134217728
4 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks: 
    32768, 98304

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

root@client:~# mount /dev/rbd/rbd/test /mnt/
root@client:~# 

@cornelius-keller
Contributor Author

Hi Leseb,

I also just found a minor issue in https://github.com/ceph/ceph-ansible/blob/master/roles/ceph-common/templates/ceph.conf.j2#L9 .

There, cephx_require_signatures is used instead of cephx_cluster_require_signatures.
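
For reference, the spot can be located with a quick grep (exact line numbers may have shifted since):

grep -n require_signatures roles/ceph-common/templates/ceph.conf.j2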

@leseb
Member

leseb commented Nov 13, 2014

Thanks for reporting this! I'll make this clearer in the playbook.

@rootfs
Member

rootfs commented Dec 3, 2015

For CentOS 7 (kernel 3.10), this is still an issue. For now I use the following patch:

diff --git a/roles/ceph-common/defaults/main.yml b/roles/ceph-common/defaults/main.yml
index c34a3ce..5411cb1 100644
--- a/roles/ceph-common/defaults/main.yml
+++ b/roles/ceph-common/defaults/main.yml
@@ -105,8 +105,8 @@ ceph_dev_redhat_distro: centos7
 #
 fsid: "{{ cluster_uuid.stdout }}"
 cephx: true
-cephx_require_signatures: true # Kernel RBD does NOT support signatures for Kernels < 3.18!
-cephx_cluster_require_signatures: true
+cephx_require_signatures: false # Kernel RBD does NOT support signatures for Kernels < 3.18!
+cephx_cluster_require_signatures: false
 cephx_service_require_signatures: false
 max_open_files: 131072
 disable_in_memory_logs: true # set this to false while enabling the options below

@leseb
Member

leseb commented Dec 3, 2015

@rootfs you should probably override this variable in group_vars/all instead, so your change won't be tracked in git :)
I was wondering whether I should set this to 'false' by default anyway... Feel free to PR the change and I'll merge it. ;)
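
A minimal sketch of that override, assuming the standard layout with a group_vars/all file at the top of the repository:

# append the overrides so the role defaults stay untouched
mkdir -p group_vars
cat >> group_vars/all <<'EOF'
# kernel RBD on kernels < 3.18 does not support cephx signatures
cephx_require_signatures: false
cephx_cluster_require_signatures: false
EOF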

@rootfs
Member

rootfs commented Dec 3, 2015

awesome, will ping you later

@leseb
Member

leseb commented Dec 3, 2015

@rootfs sure :)

@rokka-n

rokka-n commented Apr 10, 2016

Have you guys had a chance to validate basic client operations on CentOS 7.1 with cephx enabled?
I'm still seeing the same problem, although the OS has a newer kernel that supposedly works with cephx.
I'd like to make it work properly.

[vagrant@ceph-mon0 ~]$ uname -a
Linux ceph-mon0 3.10.0-229.el7.x86_64 #1 SMP Fri Mar 6 11:36:42 UTC 2015 x86_64 x86_64 x86_64 GNU/Linux
[vagrant@ceph-mon0 ~]$ sudo rbd create --image-format 2 --size 512 test2 --id admin
[vagrant@ceph-mon0 ~]$ sudo rbd map  test2
rbd: sysfs write failed
rbd: map failed: (95) Operation not supported

Edited: sorry, I missed that it's 3.10, so cephx signatures wouldn't work. But on the other hand, when I upgraded the kernel to 3.18 on Ubuntu trusty, I got the same error.

An additional question: how do I restart services on CentOS nodes?
systemctl doesn't seem to work with them. For example, on mon0:

[vagrant@ceph-mon0 ~]$ sudo systemctl list-unit-files | grep ceph
ceph-create-keys@.service                   static
ceph-disk@.service                          static
ceph-mds@.service                           disabled
ceph-mon@.service                           enabled
ceph-osd@.service                           disabled
ceph-radosgw@.service                       disabled
ceph.target                                 disabled

@leseb
Member

leseb commented Apr 10, 2016

@rokka-n

For a monitor: systemctl restart ceph-mon@<hostname>.service
For an OSD: systemctl restart ceph-osd@<id>.service
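
For example, with the node names used earlier in this thread (mon id and OSD id assumed):

sudo systemctl restart ceph-mon@ceph-mon0.service
sudo systemctl restart ceph-osd@0.service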

@subhashchand

We want to set up Ceph cluster integration with OpenNebula 5.2, but we are facing some issues: after creating the Ceph cluster and trying to integrate it with the OpenNebula frontend node, the datastore does not come up and does not show usable disk space.

[cephuser@storage1 ~]$ ceph -s
    cluster dd8ea244-d2e4-4948-b3b8-ec2ddedeff80
     health HEALTH_OK
     monmap e1: 3 mons at {storage1=XX:XX:XX:137:6789/0,storage2=XX.XX.XX.138:6789/0,storage3=XX.XX.XX.139:6789/0}
            election epoch 70, quorum 0,1,2 storage1,storage2,storage3
     osdmap e204: 3 osds: 3 up, 3 in
            flags sortbitwise,require_jewel_osds
      pgmap v312560: 2112 pgs, 3 pools, 14641 kB data, 24 objects
            19914 MB used, 11104 GB / 11123 GB avail
                 2112 active+clean

[oneadmin@node1 ~]$ cat cephds.conf
NAME = "cephds"
DS_MAD = ceph
TM_MAD = ceph
DISK_TYPE = RBD
POOL_NAME = data
CEPH_HOST = storage1-storage2-storage3
CEPH_USER = oneadmin
CEPH_SECRET = "AQCU1GxYHmbkBxAA6TlD6q+PyMTL+/a+AB+2bg==2dsf"
BRIDGE_LIST = storage1

Please advise.

@subhashchand

datastore

@leseb
Member

leseb commented Jan 12, 2017

@subhashchand what are the features enabled on that test image?

Can you run an rbd info <pool>/<image>?
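
For context, Jewel-era images are created with a default feature set that includes exclusive-lock, object-map, fast-diff and deep-flatten, which older kernel clients cannot map. A rough check and workaround, with the pool name taken from the datastore config above and a purely hypothetical image name:

rbd info data/one-test                       # check the "features:" line
# disable the features the kernel client cannot handle (image name is hypothetical)
rbd feature disable data/one-test exclusive-lock object-map fast-diff deep-flatten
sudo rbd map data/one-test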
