

mdadm (RAID Array Management) Command Explained

The name mdadm is short for "multiple devices admin". It is the standard tool for managing software RAID arrays on Linux, covering the full life cycle of an array: creating, adjusting, monitoring, and deleting.

Syntax: mdadm [options] device

Common options

-C  Create a new RAID array

-D  Display detailed information about a RAID device

-A  Assemble a previously defined RAID array

-l  Specify the RAID level

-n  Specify the number of active devices in the array

-f  Mark a member as faulty so that it can be removed

-r  Remove a member from the RAID device

-a  Add a member to the RAID device

-S  Stop the RAID device and release all of its resources

-x  Specify the number of spare devices in the initial array
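As a quick orientation before the walkthrough below, the options combine roughly like this (a sketch only; /dev/md0 and /dev/sdb1-/dev/sdb4 are placeholder names, substitute your own devices):

  mdadm -C /dev/md0 -l 5 -n 3 -x 1 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4   # create a RAID 5 array
  mdadm -D /dev/md0                                    # show its details
  mdadm /dev/md0 -f /dev/sdb1                          # mark a member faulty
  mdadm /dev/md0 -r /dev/sdb1                          # remove the faulty member
  mdadm /dev/md0 -a /dev/sdb1                          # add it back as a spare
  mdadm -S /dev/md0                                    # stop the array
  mdadm -A /dev/md0 /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4   # assemble the previously defined array again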

First create four partitions to serve as RAID members (three 2 GB partitions and one 4 GB partition in this walkthrough).
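The partitions themselves can be made with any partitioning tool. A minimal sketch with parted, assuming an empty 10 GB disk /dev/sde (sizes chosen to match the lsblk output further down):

  parted -s /dev/sde mklabel gpt                  # new, empty partition table
  parted -s /dev/sde mkpart primary 1MiB 2GiB     # sde1, ~2 GB
  parted -s /dev/sde mkpart primary 2GiB 4GiB     # sde2, ~2 GB
  parted -s /dev/sde mkpart primary 4GiB 6GiB     # sde3, ~2 GB
  parted -s /dev/sde mkpart primary 6GiB 10GiB    # sde4, ~4 GB

With the partitions in place, build a RAID 5 array (-l 5) named /dev/md7 with three active members (-n 3) and one spare (-x 1):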

  [root@compute ~]# mdadm -C /dev/md7 -n 3 -l 5 -x 1 /dev/sde{1,2,3,4}
  mdadm: /dev/sde1 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:24:53 2023
  mdadm: /dev/sde2 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:24:53 2023
  mdadm: /dev/sde3 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:24:53 2023
  mdadm: /dev/sde4 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:24:53 2023
  mdadm: largest drive (/dev/sde4) exceeds size (2094080K) by more than 1%
  Continue creating array? y
  mdadm: Fail create md7 when using /sys/module/md_mod/parameters/new_array
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md7 started.

lsblk confirms that all four partitions are now members of /dev/md7 (output trimmed to /dev/sde):

  sde        8:64   0  10G  0  disk
  ├─sde1     8:65   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  ├─sde2     8:66   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  ├─sde3     8:67   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  └─sde4     8:68   0   4G  0  part
    └─md7    9:7    0   4G  0  raid5

  [root@compute ~]# mdadm -D /dev/md7
  /dev/md7:
      Version : 1.2
      Creation Time : Tue Mar 7 17:29:25 2023
      Raid Level : raid5
      Array Size : 4188160 (3.99 GiB 4.29 GB)
      Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Tue Mar 7 17:29:36 2023
      State : clean
      Active Devices : 3
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 1

      Layout : left-symmetric
      Chunk Size : 512K
      Consistency Policy : resync

      Name : compute:7 (local to host compute)
      UUID : dcdfacfb:9b2f17cd:ce176b58:9e7c56a4
      Events : 18

      Number   Major   Minor   RaidDevice   State
         0       8       65        0        active sync   /dev/sde1
         1       8       66        1        active sync   /dev/sde2
         4       8       67        2        active sync   /dev/sde3
         3       8       68        -        spare         /dev/sde4
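At this point the array behaves like any other block device. A minimal sketch of putting it to use, assuming an ext4 filesystem and a mount point of /mnt/raid5 (both are arbitrary choices, not part of the original example):

  mkfs.ext4 /dev/md7          # create a filesystem on the array
  mkdir -p /mnt/raid5
  mount /dev/md7 /mnt/raid5
  df -h /mnt/raid5            # roughly 4 GB usable: (3 - 1) data members x 2 GB for RAID 5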
  mdadm -S /dev/md7    # stop the array and release all of its member disks in one step

The array is then re-created on a different set of partitions, /dev/sdd1-/dev/sdd4:
  [root@compute ~]# mdadm -C /dev/md7 -x 1 -n 3 -l 5 /dev/sdd{1..4}
  mdadm: /dev/sdd1 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:25:51 2023
  mdadm: /dev/sdd2 appears to be part of a raid array:
         level=raid5 devices=3 ctime=Tue Mar 7 17:25:51 2023
  mdadm: largest drive (/dev/sdd4) exceeds size (2094080K) by more than 1%
  Continue creating array? yes
  mdadm: Fail create md7 when using /sys/module/md_mod/parameters/new_array
  mdadm: Defaulting to version 1.2 metadata
  mdadm: array /dev/md7 started.

lsblk shows the new members (output trimmed; the first line belongs to another disk's LVM volume on this host):

  └─cinder--volumes-cinder--volumes--pool  253:5   0  69G  0  lvm
  sdd        8:48   0  10G  0  disk
  ├─sdd1     8:49   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  ├─sdd2     8:50   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  ├─sdd3     8:51   0   2G  0  part
  │ └─md7    9:7    0   4G  0  raid5
  └─sdd4     8:52   0   4G  0  part
    └─md7    9:7    0   4G  0  raid5
  sde        8:64   0  10G  0  disk
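The "appears to be part of a raid array" prompts in both create runs come from old RAID superblocks still sitting on the partitions. If you prefer to reuse members without being asked, the stale metadata can be wiped first; a sketch (run it only on partitions whose array has already been stopped):

  mdadm -S /dev/md7                         # make sure the old array is stopped
  mdadm --zero-superblock /dev/sdd{1..4}    # erase the leftover RAID metadata from each member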

  [root@compute ~]# mdadm -D /dev/md7
  /dev/md7:
      Version : 1.2
      Creation Time : Wed Mar 8 14:52:00 2023
      Raid Level : raid5
      Array Size : 4188160 (3.99 GiB 4.29 GB)
      Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Wed Mar 8 14:52:11 2023
      State : clean
      Active Devices : 3
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 1

      Layout : left-symmetric
      Chunk Size : 512K
      Consistency Policy : resync

      Name : compute:7 (local to host compute)
      UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
      Events : 18

      Number   Major   Minor   RaidDevice   State
         0       8       49        0        active sync   /dev/sdd1
         1       8       50        1        active sync   /dev/sdd2
         4       8       51        2        active sync   /dev/sdd3
         3       8       52        -        spare         /dev/sdd4
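To have the array re-assembled automatically after a reboot, its definition is normally recorded in the mdadm configuration file; a common follow-up step (the file is /etc/mdadm.conf on RHEL/CentOS-style systems like the host above, /etc/mdadm/mdadm.conf on Debian/Ubuntu):

  mdadm -D --scan >> /etc/mdadm.conf    # append an ARRAY line describing /dev/md7
  cat /proc/mdstat                      # quick overview of all running md arrays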

Next, simulate a disk failure by marking /dev/sdd1 as faulty (-f); the spare /dev/sdd4 takes over automatically and the rebuild starts:

  [root@compute ~]# mdadm /dev/md7 -f /dev/sdd1
  mdadm: set /dev/sdd1 faulty in /dev/md7
  [root@compute ~]# mdadm -D /dev/md7
  /dev/md7:
      Version : 1.2
      Creation Time : Wed Mar 8 14:52:00 2023
      Raid Level : raid5
      Array Size : 4188160 (3.99 GiB 4.29 GB)
      Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Wed Mar 8 14:59:00 2023
      State : clean, degraded, recovering
      Active Devices : 2
      Working Devices : 3
      Failed Devices : 1
      Spare Devices : 1

      Layout : left-symmetric
      Chunk Size : 512K
      Consistency Policy : resync

      Rebuild Status : 66% complete

      Name : compute:7 (local to host compute)
      UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
      Events : 30

      Number   Major   Minor   RaidDevice   State
         3       8       52        0        spare rebuilding   /dev/sdd4
         1       8       50        1        active sync        /dev/sdd2
         4       8       51        2        active sync        /dev/sdd3
         0       8       49        -        faulty             /dev/sdd1
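The rebuild onto the spare can also be followed live from /proc/mdstat, and mdadm itself can run as a monitoring daemon that reports failure and rebuild events (a sketch; the mail address is a placeholder):

  watch -n 2 cat /proc/mdstat                                # live rebuild progress
  mdadm --monitor --scan --daemonise --mail=root@localhost   # background monitor that mails on events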

Once the rebuild has finished, hot-remove (-r) the faulty member from the array:

  [root@compute ~]# mdadm /dev/md7 -r /dev/sdd1
  mdadm: hot removed /dev/sdd1 from /dev/md7
  [root@compute ~]# mdadm -D /dev/md7
  /dev/md7:
      Version : 1.2
      Creation Time : Wed Mar 8 14:52:00 2023
      Raid Level : raid5
      Array Size : 4188160 (3.99 GiB 4.29 GB)
      Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
      Total Devices : 3
      Persistence : Superblock is persistent

      Update Time : Wed Mar 8 15:00:22 2023
      State : clean
      Active Devices : 3
      Working Devices : 3
      Failed Devices : 0
      Spare Devices : 0

      Layout : left-symmetric
      Chunk Size : 512K
      Consistency Policy : resync

      Name : compute:7 (local to host compute)
      UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
      Events : 38

      Number   Major   Minor   RaidDevice   State
         3       8       52        0        active sync   /dev/sdd4
         1       8       50        1        active sync   /dev/sdd2
         4       8       51        2        active sync   /dev/sdd3
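Marking a member faulty and removing it can also be chained in one manage-mode call, which should be equivalent to the two separate steps shown above:

  mdadm /dev/md7 -f /dev/sdd1 -r /dev/sdd1    # fail, then hot-remove, in a single command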

The removed partition can then be added back (-a); it rejoins the array as the new spare:

  [root@compute ~]# mdadm /dev/md7 -a /dev/sdd1
  mdadm: added /dev/sdd1
  [root@compute ~]# mdadm -D /dev/md7
  /dev/md7:
      Version : 1.2
      Creation Time : Wed Mar 8 14:52:00 2023
      Raid Level : raid5
      Array Size : 4188160 (3.99 GiB 4.29 GB)
      Used Dev Size : 2094080 (2045.00 MiB 2144.34 MB)
      Raid Devices : 3
      Total Devices : 4
      Persistence : Superblock is persistent

      Update Time : Wed Mar 8 15:01:29 2023
      State : clean
      Active Devices : 3
      Working Devices : 4
      Failed Devices : 0
      Spare Devices : 1

      Layout : left-symmetric
      Chunk Size : 512K
      Consistency Policy : resync

      Name : compute:7 (local to host compute)
      UUID : 700d9a77:0babc26d:3b5a6768:223d47a8
      Events : 39

      Number   Major   Minor   RaidDevice   State
         3       8       52        0        active sync   /dev/sdd4
         1       8       50        1        active sync   /dev/sdd2
         4       8       51        2        active sync   /dev/sdd3
         5       8       49        -        spare         /dev/sdd1
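The re-added /dev/sdd1 now sits idle as a spare. If you would rather use all four members actively, a RAID 5 array can be grown so that a spare becomes a data disk; a sketch (reshaping rewrites the whole array and can take a long time, so keep a backup at hand):

  mdadm -G /dev/md7 -n 4     # grow from 3 to 4 active devices; the spare is pulled into the array
  cat /proc/mdstat           # watch the reshape progress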

