

4. Ceph Storage Usage Workflow


I. The three Ceph storage interfaces

File system storage: CephFS, which depends on the MDS service
Block storage: RBD
Object storage: RGW

II. File system storage (CephFS)

1. Deploy MDS in the Ceph cluster

[root@node01 ceph]# ceph-deploy mds create node01 node02 node03
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy mds create node01 node02 node03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc36a1277a0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function mds at 0x7fc36a16c050>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  mds                           : [('node01', 'node01'), ('node02', 'node02'), ('node03', 'node03')]
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node01:node01 node02:node02 node03:node03
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] mds keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path if it doesn't exist
[node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node01 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node01/keyring
[node01][INFO  ] Running command: systemctl enable ceph-mds@node01
[node01][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node01][INFO  ] Running command: systemctl start ceph-mds@node01
[node01][INFO  ] Running command: systemctl enable ceph.target
[node02][DEBUG ] connected to host: node02 
[node02][DEBUG ] detect platform information from remote host
[node02][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node02
[node02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node02][WARNIN] mds keyring does not exist yet, creating one
[node02][DEBUG ] create a keyring file
[node02][DEBUG ] create path if it doesn't exist
[node02][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node02 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node02/keyring
[node02][INFO  ] Running command: systemctl enable ceph-mds@node02
[node02][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node02][INFO  ] Running command: systemctl start ceph-mds@node02
[node02][INFO  ] Running command: systemctl enable ceph.target
[node03][DEBUG ] connected to host: node03 
[node03][DEBUG ] detect platform information from remote host
[node03][DEBUG ] detect machine type
[ceph_deploy.mds][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node03
[node03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node03][WARNIN] mds keyring does not exist yet, creating one
[node03][DEBUG ] create a keyring file
[node03][DEBUG ] create path if it doesn't exist
[node03][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node03 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node03/keyring
[node03][INFO  ] Running command: systemctl enable ceph-mds@node03
[node03][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node03][INFO  ] Running command: systemctl start ceph-mds@node03
[node03][INFO  ] Running command: systemctl enable ceph.target
[root@node01 ceph]# netstat -tunlp | grep mds
tcp        0      0 192.168.140.10:6805     0.0.0.0:*               LISTEN      2228/ceph-mds  
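
The MDS daemons can also be checked from the cluster side. Before any file system is created they should all report as standby; a quick check (the exact output format may vary slightly by release):

[root@node01 ceph]# ceph mds stat
, 3 up:standby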

2. Create the storage pools

[root@node01 ceph]# ceph osd pool create db_data 128            // stores the actual data
pool 'db_data' created

[root@node01 ceph]# ceph osd pool create db_metadata 64        // stores the metadata
pool 'db_metadata' created
[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node01(active), standbys: node02, node03
    mds: mydata-1/1/1 up  {0=node01=up:active}, 2 up:standby
    osd: 3 osds: 3 up, 3 in
 
  data:
    pools:   2 pools, 192 pgs
    objects: 22  objects, 2.5 KiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     192 active+clean
[root@node01 ceph]# ceph osd pool ls
db_data
db_metadata

Notes:
A file system needs two RADOS pools: one to store the actual data and one to store the metadata.
The two pools created above are named db_data and db_metadata.
Their PG counts are set to 128 and 64 respectively.

Rule-of-thumb PG counts per pool:
Fewer than 5 OSDs: 128 PGs
5-10 OSDs: 512 PGs
10-50 OSDs: 1024 PGs
With more OSDs, work the value out yourself

PG calculation formula:
Total PGs = ((Total_number_of_OSD * 100) / max_replication_count) / pool_count
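
As a worked example for this lab cluster (3 OSDs, the default replica count of 3, and the 2 CephFS pools planned above):

Total PGs = ((3 * 100) / 3) / 2 = 50

Rounding up to the nearest power of two gives 64 PGs per pool; the 128/64 values chosen above simply follow the small-cluster rule of thumb from the table instead.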

3. Create the file system

[root@node01 ceph]# ceph fs new mydata db_metadata db_data 
new fs with metadata pool 2 and data pool 1
[root@node01 ceph]# ceph fs ls
name: mydata, metadata pool: db_metadata, data pools: [db_data ]
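
Once the file system exists, one of the standby MDS daemons should become active, which can be confirmed with:

[root@node01 ceph]# ceph mds stat
mydata-1/1/1 up  {0=node01=up:active}, 2 up:standby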

4. Mount and use CephFS on the application server

The Ceph cluster enables cephx authentication by default, so an application server must present a valid key before it can mount and use CephFS.

4.1 Export the authentication key and copy it to the application server

[root@node01 ceph]# cat ceph.client.admin.keyring
[client.admin]
    key = AQBI5Gtm/aabIxAAc5MEuV05QyvjFNYhUJmnyA==
    caps mds = "allow *"
    caps mgr = "allow *"
    caps mon = "allow *"
    caps osd = "allow *"
[root@node01 ceph]# ceph-authtool -p ceph.client.admin.keyring > /root/client.keyring
[root@node01 ceph]# scp /root/client.keyring [email protected]:/root/
client.keyring                                                                                      100%   41    51.5KB/s   00:00    
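
The mount in step 4.2 reads the plain-text secret from /opt/admin.key on the application server, so the copied key still needs to land at that path. A minimal sketch of this intermediate step (the /opt/admin.key path and /test1 mount point are taken from the commands below):

[root@app ~]# mkdir -p /test1
[root@app ~]# cp /root/client.keyring /opt/admin.key    # plain key produced by ceph-authtool -p above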

4.2 Mount CephFS on the application server

[root@app ~]# yum install -y ceph-fuse 
[root@app ~]# mount -t ceph node01:6789:/  /test1  -o name=admin,secretfile=/opt/admin.key
[root@app ~]# df -hT 
Filesystem              Type      Size  Used Avail Use% Mounted on
devtmpfs                devtmpfs  475M     0  475M   0% /dev
tmpfs                   tmpfs     487M     0  487M   0% /dev/shm
tmpfs                   tmpfs     487M  7.6M  479M   2% /run
tmpfs                   tmpfs     487M     0  487M   0% /sys/fs/cgroup
/dev/mapper/centos-root xfs        18G  1.8G   16G  11% /
/dev/sda1               xfs       497M  161M  336M  33% /boot
tmpfs                   tmpfs      98M     0   98M   0% /run/user/0
192.168.140.10:6789:/   ceph       18G     0   18G   0% /test1
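
To make the mount survive a reboot, an entry can be added to /etc/fstab; a sketch assuming the same paths as above:

node01:6789:/    /test1    ceph    name=admin,secretfile=/opt/admin.key,_netdev,noatime    0 0

The _netdev option delays the mount until the network is up.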

5. Delete the file system storage

5.1 Unmount on the application server
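
For example:

[root@app ~]# umount /test1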

5.2 Edit ceph.conf to allow pool deletion, then push the configuration to all nodes

[root@node01 ceph]# tail -n 1 ceph.conf
mon_allow_pool_delete = true
[root@node01 ceph]# ceph-deploy --overwrite-conf admin node01 node02 node03
[root@node01 ceph]# systemctl restart ceph-mon.target
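
On recent releases the same flag can usually also be injected at runtime, avoiding the monitor restart; a sketch:

[root@node01 ceph]# ceph tell mon.\* injectargs '--mon-allow-pool-delete=true'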

5.3 Stop the MDS service on all cluster nodes

[root@node01 ceph]# systemctl stop ceph-mds.target

5.4 Delete the file system

[root@node01 ceph]# ceph fs ls
name: mydata, metadata pool: db_metadata, data pools: [db_data ]
[root@node01 ceph]# ceph fs rm mydata --yes-i-really-mean-it
[root@node01 ceph]# ceph fs ls
No filesystems enabled

5.5 Delete the pools backing the file system

[root@node01 ceph]# ceph osd pool ls
db_data
db_metadata
[root@node01 ceph]# ceph osd pool delete db_metadata db_metadata --yes-i-really-really-mean-it
pool 'db_metadata' removed
[root@node01 ceph]# ceph osd pool delete db_data db_data --yes-i-really-really-mean-it
pool 'db_data' removed

III. Using block storage (RBD)

1. Push the Ceph configuration to the application server

[root@node01 ceph]# ceph-deploy admin app
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy admin app
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f7567f307e8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['app']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x7f7568c46320>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to app
The authenticity of host 'app (192.168.140.13)' can't be established.
ECDSA key fingerprint is SHA256:bdjFNp2/Yyt+F2YvGN2prmrdzemgYD0YJ/CEFvGobN4.
ECDSA key fingerprint is MD5:3b:db:a0:e6:76:03:16:8f:8b:d7:cb:1f:d0:65:77:f0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'app' (ECDSA) to the list of known hosts.
[app][DEBUG ] connected to host: app 
[app][DEBUG ] detect platform information from remote host
[app][DEBUG ] detect machine type
[app][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

All of the following operations are performed on the application server.

2. Create and initialize a storage pool

[root@app ~]# ceph osd pool create block_pool 128
pool 'block_pool' created
[root@app ~]# rbd pool init block_pool

3. Create a volume

[root@app ~]# rbd create db_volume --pool block_pool --size 5000
[root@app ~]# rbd ls block_pool 
db_volume
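
Before mapping, the image's size and enabled features can be inspected with rbd info; a sketch with the output abbreviated:

[root@app ~]# rbd info block_pool/db_volume
rbd image 'db_volume':
    size 4.9 GiB in 1250 objects
    ...
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten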

4. Map the block device

[root@app ~]# rbd map block_pool/db_volume
rbd: sysfs write failed
RBD image feature set mismatch. You can disable features unsupported by the kernel with "rbd feature disable block_pool/db_volume object-map fast-diff deep-flatten".
In some cases useful info is found in syslog - try "dmesg | tail".
rbd: map failed: (6) No such device or address
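
This failure is expected on CentOS 7: its 3.10 kernel's rbd module does not support the object-map, fast-diff, and deep-flatten image features that newer Ceph releases enable by default, so those features have to be disabled before the image can be mapped.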

[root@app ~]# rbd feature disable block_pool/db_volume object-map fast-diff deep-flatten
[root@app ~]# rbd map block_pool/db_volume
/dev/rbd0
[root@app ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part 
  ├─centos-root 253:0    0 17.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1  9.5G  0 rom  
rbd0            252:0    0  4.9G  0 disk 

5. Store data on the Ceph block device

[root@app ~]# mkfs -t xfs /dev/rbd0
[root@app ~]# mount /dev/rbd0 /test1/
[root@app ~]# df -hT | grep test1
/dev/rbd0               xfs       4.9G   33M  4.9G   1% /test1

[root@app ~]# touch /test1/{1..10}
[root@app ~]# ls /test1/
1  10  2  3  4  5  6  7  8  9
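
To have the image re-mapped automatically at boot, the rbdmap service shipped with ceph-common can be used; a sketch assuming the admin keyring pushed earlier:

[root@app ~]# echo 'block_pool/db_volume    id=admin,keyring=/etc/ceph/ceph.client.admin.keyring' >> /etc/ceph/rbdmap
[root@app ~]# systemctl enable rbdmap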

6. Grow the volume

[root@app ~]# rbd resize --size 8000 block_pool/db_volume 
Resizing image: 100% complete...done.

[root@app ~]# xfs_growfs /test1/
[root@app ~]# df -hT | grep test1
/dev/rbd0               xfs       7.9G   33M  7.8G   1% /test1

Note: xfs_growfs applies to XFS file systems; for ext4 use resize2fs instead.
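
For an ext4-formatted image the grow step would instead be (a sketch):

[root@app ~]# resize2fs /dev/rbd0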

7. Shrink the volume

The volume must be unmounted before shrinking.
XFS does not support shrinking in place, so back the data up, shrink and re-format the image, then restore the data.

[root@app ~]# mkdir /backup
[root@app ~]# cp -ra /test1/* /backup/
[root@app ~]# umount /test1 
[root@app ~]# rbd resize --size 4000 block_pool/db_volume --allow-shrink
Resizing image: 100% complete...done.

[root@app ~]# mkfs -t xfs -f /dev/rbd0
[root@app ~]# mount /dev/rbd0 /test1/
[root@app ~]# cp -ra /backup/* /test1/
[root@app ~]# ls /test1/
1  10  2  3  4  5  6  7  8  9

8. Delete the block storage

[root@app ~]# umount /test1 
[root@app ~]# rbd unmap /dev/rbd0
[root@app ~]# lsblk 
NAME            MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda               8:0    0   20G  0 disk 
├─sda1            8:1    0  500M  0 part /boot
└─sda2            8:2    0 19.5G  0 part 
  ├─centos-root 253:0    0 17.5G  0 lvm  /
  └─centos-swap 253:1    0    2G  0 lvm  [SWAP]
sr0              11:0    1  9.5G  0 rom  

[root@app ~]# ceph osd pool delete block_pool block_pool --yes-i-really-really-mean-it 
pool 'block_pool' removed

IV. Object storage

Object-based storage treats every file as an object.
Once stored, each object is reachable through its own unique download URL.

Typical use case: unstructured data (images, video, audio, animation).

Object storage relies on the RGW (RADOS Gateway) service.

1. Create the RGW service

[root@node01 ceph]# ceph-deploy rgw create node01
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (2.0.1): /usr/bin/ceph-deploy rgw create node01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  rgw                           : [('node01', 'rgw.node01')]
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  subcommand                    : create
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fa364d58fc8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  func                          : <function rgw at 0x7fa365a2d140>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts node01:rgw.node01
[node01][DEBUG ] connected to host: node01 
[node01][DEBUG ] detect platform information from remote host
[node01][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO  ] Distro info: CentOS Linux 7.9.2009 Core
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to node01
[node01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node01][WARNIN] rgw keyring does not exist yet, creating one
[node01][DEBUG ] create a keyring file
[node01][DEBUG ] create path recursively if it doesn't exist
[node01][INFO  ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.node01 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.node01/keyring
[node01][INFO  ] Running command: systemctl enable [email protected]
[node01][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node01][INFO  ] Running command: systemctl start [email protected]
[node01][INFO  ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO  ] The Ceph Object Gateway (RGW) is now running on host node01 and default port 7480
[root@node01 ceph]# ceph -s
  cluster:
    id:     e2010562-0bae-4999-9247-4017f875acc8
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum node01,node02,node03
    mgr: node01(active), standbys: node02, node03
    osd: 3 osds: 3 up, 3 in
    rgw: 1 daemon active
 
  data:
    pools:   4 pools, 32 pgs
    objects: 187  objects, 1.1 KiB
    usage:   3.0 GiB used, 57 GiB / 60 GiB avail
    pgs:     32 active+clean
 

[root@node01 ceph]# netstat -tunlp | grep 7480
tcp        0      0 0.0.0.0:7480            0.0.0.0:*               LISTEN      7409/radosgw      
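
An anonymous HTTP request is a quick way to confirm the gateway answers; it returns an empty bucket listing for the anonymous user (output abbreviated):

[root@node01 ceph]# curl http://192.168.140.10:7480
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult>...</ListAllMyBucketsResult>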

The following tests of the object storage are run on the client.

2. Install the s3cmd test tool

[root@app ~]# yum install -y s3cmd 

3. Generate the access key (AK) and secret key (SK) needed to connect to the object store

[root@app ~]#  radosgw-admin user create --uid="testuser" --display-name="first user" | grep -E "access_key|secret_key"
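
The command prints the new user's description in JSON; the grep keeps only the two key lines, whose values are what goes into .s3cfg below. A sketch of the expected output:

            "access_key": "AOFSKMEGLZKG8CY9B0NH",
            "secret_key": "VIQjjaxvdYHrYQv7Qv5QAI0z81cNf0oQCdi7EFxS"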

4. Create the .s3cfg configuration file with the gateway's connection details

[root@app ~]# cat .s3cfg
[default]
access_key = AOFSKMEGLZKG8CY9B0NH
secret_key = VIQjjaxvdYHrYQv7Qv5QAI0z81cNf0oQCdi7EFxS
host_base = 192.168.140.10:7480
host_bucket = 192.168.140.10:7480/%(bucket)
cloudfront_host = 192.168.140.10:7480
use_https = False

5. Create a bucket

[root@app ~]# s3cmd mb s3://test
Bucket 's3://test/' created
[root@app ~]# s3cmd ls
2024-06-17 07:42  s3://test

6. Test file upload and download

[root@app ~]# s3cmd put /etc/fstab s3://test/fstab 
upload: '/etc/fstab' -> 's3://test/fstab'  [1 of 1]
 466 of 466   100% in    1s   360.49 B/s  done
[root@app ~]# s3cmd get s3://test/fstab
download: 's3://test/fstab' -> './fstab'  [1 of 1]
 466 of 466   100% in    0s    11.24 KB/s  done
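
Since each stored object is addressable by its own URL (see the introduction to this section), an object can also be exposed for plain HTTP download; a sketch using s3cmd's ACL support:

[root@app ~]# s3cmd setacl s3://test/fstab --acl-public
s3://test/fstab: ACL set to Public  [1 of 1]
[root@app ~]# curl -O http://192.168.140.10:7480/test/fstab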

Source: reposted from https://blog.csdn.net/u010198709/article/details/139731962. Copyright belongs to the original author, Martin_wjc.
