Creating and Managing a RAID Array in SUSE Linux Enterprise

45 minutes
7 Learning Objectives

About this Hands-on Lab

In this hands-on lab, we will create partitions on additional drives so we can build a software RAID. Once we have a mirror configured and operating, we will fail a drive and remove it from the array, add a new drive to the array, and rebuild the mirror. This is a common set of tasks in the enterprise, as RAID mirrors are used for fault tolerance to prevent data loss due to drive failure.

Learning Objectives

Successfully complete this lab by achieving the following learning objectives:

Provision the Disks with a Primary Partition So They Can Be Added to the RAID Array
  1. Get the names of the drives with no partitions

    lsblk
  2. Your output should look like the following (the drives you’ll be using later are nvme0n1, nvme1n1, and nvme2n1):

    NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    nvme0n1     259:0    0   2G  0 disk
    nvme1n1     259:1    0   2G  0 disk
    nvme2n1     259:2    0   2G  0 disk
    nvme3n1     259:3    0  10G  0 disk
    ├─nvme3n1p1 259:4    0   2M  0 part
    ├─nvme3n1p2 259:5    0  20M  0 part /boot/efi
    └─nvme3n1p3 259:6    0  10G  0 part /
  3. Create partitions on the drives, performing the following for each drive (a scripted alternative is sketched after step 5):

    sudo fdisk /dev/nvme0n1

    At the fdisk prompts:
      • Press n for a new partition
      • Press p for a primary partition
      • Press 1 for the partition number
      • Press Enter to accept the default first sector
      • Press Enter to accept the default last sector
      • Press w to write the changes and exit fdisk
  4. Prompt the kernel to reread the partition table for the drive:

    sudo partprobe /dev/nvme0n1
  5. Repeat the steps for all three drives.
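
If you would rather script this step than walk through fdisk interactively, a loop over sfdisk can create the same single, full-size partition on each drive. This is a sketch, not a required part of the lab, and it assumes the three drives are empty as shown above:

    # Create one Linux partition spanning each whole drive
    # (',,L' = default start, all remaining space, Linux type),
    # then prompt the kernel to reread the partition table.
    for disk in /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1; do
        echo ',,L' | sudo sfdisk "$disk"
        sudo partprobe "$disk"
    done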

Verify mdadm Is Installed and Create a RAID 1 (`/dev/md0`) from Two of the Drives
  1. Verify that mdadm is installed:

    rpm -q mdadm
  2. Create the RAID 1 using two of the drives:

    sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1p1 /dev/nvme1n1p1
  3. Watch it being created:

    sudo watch cat /proc/mdstat
  4. Hit Ctrl+C to quit.

  5. Now that the array is complete, check the status:

    sudo mdadm --detail /dev/md0
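
If you want the array to be assembled automatically at boot, you can append its definition to the mdadm configuration file. On SUSE Linux Enterprise this is typically /etc/mdadm.conf (other distributions use /etc/mdadm/mdadm.conf), so treat the exact path as an assumption to verify on your system:

    # Capture the current array definition and append it to mdadm.conf.
    sudo mdadm --detail --scan | sudo tee -a /etc/mdadm.conf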
Create an XFS Filesystem on the Array and Mount It to `/mnt/raid1`
  1. Create the filesystem:

    sudo mkfs.xfs /dev/md0
  2. Create a directory and then mount the RAID volume into the directory:

    sudo mkdir /mnt/raid1
    sudo mount /dev/md0 /mnt/raid1
  3. Verify that the RAID size is correct and that the mount point is displayed:

    lsblk
  4. Note that your array is shown in the output, including the mount point:

    nvme0n1     259:0    0   2G  0 disk
    └─nvme0n1p1 259:9    0   2G  0 part
      └─md0       9:0    0   2G  0 raid1 /mnt/raid1
    nvme1n1     259:1    0   2G  0 disk
    └─nvme1n1p1 259:10   0   2G  0 part
      └─md0       9:0    0   2G  0 raid1 /mnt/raid1
    nvme2n1     259:2    0   2G  0 disk
    └─nvme2n1p1 259:7    0   2G  0 part
    nvme3n1     259:3    0  10G  0 disk
    ├─nvme3n1p1 259:4    0   2M  0 part
    ├─nvme3n1p2 259:5    0  20M  0 part  /boot/efi
    └─nvme3n1p3 259:6    0  10G  0 part  /
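
Note that a mount made this way does not survive a reboot. To make it persistent, you could add an entry to /etc/fstab; the line below is a minimal sketch (mounting by the UUID reported by blkid is a more robust alternative):

    # Show the filesystem UUID in case you prefer UUID-based mounting.
    sudo blkid /dev/md0

    # Example /etc/fstab entry (one line) for the RAID 1 volume.
    /dev/md0  /mnt/raid1  xfs  defaults  0  0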
Compress `/var/log/messages` Using `tar` and Place It in the `/mnt/raid1` Directory
  1. Compress the messages file and place the archive in the newly created /mnt/raid1 directory (tar will note that it is removing the leading "/" from member names; that is expected):

    sudo tar -czvf /mnt/raid1/messages.tar.gz /var/log/messages
  2. Verify the data is in place and the file is not empty:

    ls -l /mnt/raid1
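
To prove later that the mirror preserved this data through a drive failure, it can help to record a checksum of the archive now. This is optional and not part of the lab's steps; /root/messages.sha256 is just an example location:

    # Record the archive's checksum for comparison after the rebuild.
    sha256sum /mnt/raid1/messages.tar.gz | sudo tee /root/messages.sha256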
Simulate a Failure of One of the Drives and Then Remove It from the Array
  1. Mark one drive as faulty:

    sudo mdadm --manage --set-faulty /dev/md0 /dev/nvme1n1p1
  2. Confirm that the drive is in a faulty state:

    sudo mdadm --detail /dev/md0
  3. Remove the faulty drive from the array:

    sudo mdadm --manage --remove /dev/md0 /dev/nvme1n1p1
  4. Confirm that the drive has been removed:

    sudo mdadm --detail /dev/md0
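
Besides mdadm --detail, the kernel exposes array health through /proc and sysfs, which is handy for scripted checks. For example (the sysfs path assumes the array is /dev/md0):

    # A '_' in the [U_] status marker means a mirror half is missing.
    cat /proc/mdstat

    # Count of missing member devices (0 when the array is healthy).
    cat /sys/block/md0/md/degraded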
Add the Third Drive to the Array, Let It Rebuild, and Verify the Data Is in Place
  1. Add the drive to the array:

    sudo mdadm --manage --add /dev/md0 /dev/nvme2n1p1
  2. Watch the rebuild:

    sudo watch cat /proc/mdstat
  3. Hit Ctrl+C to quit.

  4. Once the rebuild is complete, confirm that the array shows all drives synced:

    sudo mdadm --detail /dev/md0
  5. Confirm that the data is still present:

    ls -l /mnt/raid1
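
If you recorded a checksum earlier, this is the moment to compare; an "OK" result shows the rebuilt mirror reproduced the archive bit for bit (the path matches the earlier example):

    # Compare against the checksum recorded before the failure.
    sudo sha256sum -c /root/messages.sha256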
Add the Unused Disk Back to the Array as a Spare in Case of Another Failure
  1. List the block devices to identify the unused partition:

    lsblk
  2. Note that the nvme1n1p1 partition is not currently linked to the array; that’s the one to add back in as a spare:

    NAME        MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
    nvme0n1     259:0    0   2G  0 disk
    └─nvme0n1p1 259:9    0   2G  0 part
      └─md0       9:0    0   2G  0 raid1 /mnt/raid1
    nvme1n1     259:1    0   2G  0 disk
    └─nvme1n1p1 259:10   0   2G  0 part
    nvme2n1     259:2    0   2G  0 disk
    └─nvme2n1p1 259:7    0   2G  0 part
      └─md0       9:0    0   2G  0 raid1 /mnt/raid1
    nvme3n1     259:3    0  10G  0 disk
    ├─nvme3n1p1 259:4    0   2M  0 part
    ├─nvme3n1p2 259:5    0  20M  0 part  /boot/efi
    └─nvme3n1p3 259:6    0  10G  0 part  /
  3. Add the drive to the array as a spare:

    sudo mdadm --manage /dev/md0 --add /dev/nvme1n1p1

    The resulting output should be:

    mdadm: added /dev/nvme1n1p1
  4. Verify that the drive is added to the array as a spare:

    sudo mdadm --detail /dev/md0
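
A spare is only promoted automatically when md detects a failure, so in production you would typically run mdadm in monitor mode to be alerted when that happens. A minimal sketch (many distributions also ship an mdmonitor service that does this for you; the mail recipient is a placeholder):

    # Watch all arrays in the background and mail root on events.
    sudo mdadm --monitor --scan --daemonise --mail=root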

Additional Resources

There is a server in your organization that has had some drive failures in the past.

You have been asked to provide a proof of concept of the benefits of a RAID array for mirroring the backups on the production server. You have been provided a SUSE Linux Enterprise server with three extra drives to show the correct process for changing a disk when one fails.

You will need to:

  • Provision the disks with a primary partition so they can be added to the RAID array.
  • Verify mdadm is installed, and then create a RAID 1 (/dev/md0) from two of the drives.
  • Create an XFS filesystem on the array and mount it to /mnt/raid1.
  • Compress /var/log/messages using tar, and place it in the /mnt/raid1 directory.
  • Simulate a failure of one of the drives, and then remove it from the array.
  • Add the third drive to the array, let it rebuild, and verify the data is in place.
  • Add the unused disk back to the array as a spare in case of another failure.
