
The purpose of this HOW-TO is to show how you can migrate an existing openSUSE server onto a mirrored RAID1 setup.
This is not an easy migration; a mistake can leave your server in a non-booting and unrecoverable state.
We strongly suggest you back up all your data before following this documentation.
This documentation sets up two disks as a mirrored array, meaning both disks will hold the same data.

Partition Scheme:

/dev/sda - Installed non-RAID system disk
/dev/sda1 - swap partition
/dev/sda2 - root partition
/dev/sdb - First empty disk for the RAID mirror
/dev/md0 - Mirrored swap partition
/dev/md1 - Mirrored root partition

* Prepare the non-RAID Disk

- Back up everything (a minimal example follows the partition listing below)
- Make sure that both disks are the same size:

  # cat /proc/partitions

     8     0    2097152 sda
     8     1     514048 sda1
     8     2    1582402 sda2
     8    16    2097152 sdb
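
For the backup, a minimal sketch, assuming an external disk is already mounted at /mnt/backup (a hypothetical mount point), is to archive the whole root filesystem; the --one-file-system option keeps tar from descending into /proc, /sys and other mounts:

  # tar --one-file-system -czpf /mnt/backup/rootfs-backup.tar.gz /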

It is recommended that the migration be performed in run level 1 to minimize the chance of data corruption while the system is being copied.
Switch to run level 1 now, and also change the default run level to 1 so the system stays there across the reboots that follow.

  # init 1
  # vi /etc/inittab
  id:1:initdefault:
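
You can confirm the current run level before continuing (the exact output varies):

  # who -r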

Make sure that your devices do not have labels and that you are referencing the disks by device name.

  # cat /etc/fstab
  /dev/sda2  /               reiserfs   defaults              1 1
  /dev/sda1  swap            swap       pri=42                0 0

  # cat /boot/grub/menu.lst
  default 0
  timeout 8
  title openSUSE 10.2
      root (hd0,1)
      kernel /boot/vmlinuz-2.6.16.46-0.12-default root=/dev/sda2 resume=/dev/sda1 showopts
      initrd /boot/initrd-2.6.16.46-0.12-default.orig
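
If fstab or menu.lst use labels or other persistent names instead, rewrite them to plain device names before continuing; a sketch (the label name here is hypothetical):

  LABEL=rootfs  /  reiserfs  defaults  1 1      becomes      /dev/sda2  /  reiserfs  defaults  1 1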

Change the partition type on the existing non-RAID disk to type ‘fd’ (Linux raid autodetect).

  # fdisk /dev/sda

  Command (m for help): p

  Disk /dev/sda: 2147 MB, 2147483648 bytes
  255 heads, 63 sectors/track, 261 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1               1          64      514048+  82  Linux swap
  /dev/sda2   *          65         261     1582402+  83  Linux

  Command (m for help): t
  Partition number (1-4): 1
  Hex code (type L to list codes): fd
  Changed system type of partition 1 to fd (Linux raid autodetect)

  Command (m for help): t
  Partition number (1-4): 2
  Hex code (type L to list codes): fd
  Changed system type of partition 2 to fd (Linux raid autodetect)

  Command (m for help): p

  Disk /dev/sda: 2147 MB, 2147483648 bytes
  255 heads, 63 sectors/track, 261 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes

     Device Boot      Start         End      Blocks   Id  System
  /dev/sda1               1          64      514048+  fd  Linux raid autodetect
  /dev/sda2   *          65         261     1582402+  fd  Linux raid autodetect

  Command (m for help): w
  The partition table has been altered!

  Calling ioctl() to re-read partition table.

  WARNING: Re-reading the partition table failed with error 16: Device or
  resource busy.
  The kernel still uses the old table.
  The new table will be used at the next reboot.
  Syncing disks.

* Copy the non-RAID disk’s partition table to the empty disk.

  # sfdisk -d /dev/sda > partitions.txt
  # sfdisk /dev/sdb < partitions.txt
  Checking that no-one is using this disk right now ...
  OK

  Disk /dev/sdb: 261 cylinders, 255 heads, 63 sectors/track

  sfdisk: ERROR: sector 0 does not have an msdos signature
   /dev/sdb: unrecognized partition
  Old situation:
  No partitions found
  New situation:
  Units = sectors of 512 bytes, counting from 0

     Device Boot    Start       End   #sectors  Id  System
  /dev/sdb1            63   1028159    1028097  fd  Linux raid autodetect
  /dev/sdb2   *   1028160   4192964    3164805  fd  Linux raid autodetect
  /dev/sdb3             0         -          0   0  Empty
  /dev/sdb4             0         -          0   0  Empty
  Successfully wrote the new partition table

  Re-reading the partition table ...

  If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
  to zero the first 512 bytes:  dd if=/dev/zero of=/dev/foo7 bs=512 count=1
  (See fdisk(8).)

  # cat /proc/partitions
  major minor  #blocks  name

     8     0    2097152 sda
     8     1     514048 sda1
     8     2    1582402 sda2
     8    16    2097152 sdb
     8    17     514048 sdb1
     8    18    1582402 sdb2

* Reboot the server.

* Create a Degraded Array

Create the degraded array on the empty disk, but leave out the existing system disk for now.

  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing
  mdadm: array /dev/md0 started.

  # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb2 missing
  mdadm: array /dev/md1 started.

  # cat /proc/mdstat
  Personalities : [raid1]
  md1 : active raid1 sdb2[0]
        1582336 blocks [2/1] [U_]

  md0 : active raid1 sdb1[0]
        513984 blocks [2/1] [U_]

  unused devices: <none>
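
Before formatting, you can confirm that each array is running degraded with only its /dev/sdb member present:

  # mdadm --detail /dev/md0
  # mdadm --detail /dev/md1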

* Format the degraded array devices.

  # mkswap /dev/md0
  Setting up swapspace version 1, size = 526315 kB

  # mkreiserfs /dev/md1
  --snip--
  ReiserFS is successfully created on /dev/md1.

Back up the original initrd.

  # ls -l /boot/initrd*
  lrwxrwxrwx 1 root root      29 Apr  2 12:03 initrd -> initrd-2.6.16.46-0.12-default
  -rw-r--r-- 1 root root 2930512 Apr  2 10:15 initrd-2.6.16.46-0.12-default
  # mv /boot/initrd-2.6.16.46-0.12-default /boot/initrd-2.6.16.46-0.12-default.orig

Recreate the initrd with software RAID1 support.
Add "raid1" to your INITRD_MODULES list.

  # vi /etc/sysconfig/kernel
  INITRD_MODULES="raid1 mptscsih reiserfs"

  # mkinitrd

  Root device:    /dev/sda2 (mounted on / as reiserfs)
  Module list:    raid1 mptspi reiserfs
  Kernel image:   /boot/vmlinuz-2.6.5-7.311-default
  Initrd image:   /boot/initrd-2.6.5-7.311-default
  Shared libs:    lib/ld-2.3.3.so lib/libblkid.so.1.0 lib/libc.so.6
  lib/libpthread.so.0 lib/libselinux.so.1 lib/libuuid.so.1.2
  Modules:        kernel/drivers/scsi/scsi_mod.ko
  kernel/drivers/scsi/sd_mod.ko
  kernel/drivers/md/raid1.ko
  kernel/drivers/message/fusion/mptbase.ko
  kernel/drivers/message/fusion/mptscsih.ko
  kernel/drivers/message/fusion/mptspi.ko
  kernel/fs/reiserfs/reiserfs.ko
  Including:      udev raidautorun

Finish with a second mkinitrd run that explicitly adds the md feature, so the RAID devices are detected and assembled on the next boot.

  # mkinitrd -f md -d /dev/md0

  Root device:    /dev/sda2 (mounted on / as reiserfs)
  Module list:    piix mptspi ide-generic processor thermal fan reiserfs edd
   (xennet xenblk)
  Kernel image:   /boot/vmlinuz-2.6.16.46-0.12-default
  Initrd image:   /boot/initrd-2.6.16.46-0.12-default
  Shared libs:    lib/ld-2.4.so lib/libacl.so.1.1.0 lib/libattr.so.1.1.0
  lib/libc-2.4.so lib/libdl-2.4.so lib/libhistory.so.5.1 lib/libncurses.so.5.5
  lib/libpthread-2.4.so lib/libreadline.so.5.1 lib/librt-2.4.so
  lib/libuuid.so.1.2 lib/libnss_files-2.4.so lib/libnss_files.so.2
  lib/libgcc_s.so.1
  Driver modules: ide-core ide-disk scsi_mod sd_mod piix scsi_transport_spi
  mptbase mptscsih mptspi ide-generic processor thermal fan edd raid0 raid1
  xor raid5 linear libata ahci ata_piix
  Filesystem modules:     reiserfs
  Including:      initramfs mdadm fsck.reiserfs
  Bootsplash:     SuSE-SLES (800x600)
  13481 blocks

  # ls -l /boot/initrd*
  lrwxrwxrwx 1 root root      29 Apr  2 12:15 initrd -> initrd-2.6.16.46-0.12-default
  -rw-r--r-- 1 root root 3045558 Apr  2 12:15 initrd-2.6.16.46-0.12-default
  -rw-r--r-- 1 root root 2930512 Apr  2 10:15 initrd-2.6.16.46-0.12-default.orig

If you attempt to boot the degraded array with an initrd that does not contain the raid1 driver or raidautorun,
you will get a message that the /dev/md1 device cannot be found, and the server will hang. If this happens, reboot into the non-RAID configuration and rebuild the initrd properly.
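
Before rebooting, you can check that the raid1 module actually made it into the new initrd; a quick sanity check, assuming the usual gzip-compressed cpio initrd format used by openSUSE 10.2:

  # zcat /boot/initrd-2.6.16.46-0.12-default | cpio -it | grep raid1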

* Set up GRUB so it can boot your new RAID1 system.

vi /boot/grub/menu.lst

  title openSUSE 10.2 LinuxRAID1
      root (hd0,1)
      kernel /boot/vmlinuz-2.6.16.46-0.12-default root=/dev/md1 resume=/dev/md0 showopts
      initrd /boot/initrd-2.6.16.46-0.12-default
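
Until the RAID setup has been verified, it can also help to keep a non-RAID fallback entry that uses the backed-up initrd (device and file names taken from the listings above); remove it once the .orig initrd is deleted at the end of the migration:

  title openSUSE 10.2 (non-RAID fallback)
      root (hd0,1)
      kernel /boot/vmlinuz-2.6.16.46-0.12-default root=/dev/sda2 resume=/dev/sda1 showopts
      initrd /boot/initrd-2.6.16.46-0.12-default.orig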

* Copy the System Disk to the Degraded Array

Mount the degraded array system device to a temporary mount point on the non-RAID system disk.
Copy the non-RAID system disk to the degraded array file system.

  # mkdir -p /mnt/newroot
  # mount /dev/md1 /mnt/newroot
  # mount | grep newroot
  # cp -ax / /mnt/newroot
  # chkconfig mdadmd on
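
As a rough sanity check on the copy, compare the used space on the source and the new root (the numbers will not match exactly, but should be close):

  # df -h / /mnt/newroot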

* Modify the /mnt/newroot/etc/fstab file on the degraded array so the system boots properly.

  # cat /mnt/newroot/etc/fstab
  /dev/sda2  /               reiserfs   defaults              1 1
  /dev/sda1  swap            swap       pri=42                0 0

  # vi /mnt/newroot/etc/fstab

  /dev/md1   /               reiserfs   defaults              1 1
  /dev/md0   swap            swap       pri=42                0 0

* Reboot and select the degraded array entry "LinuxRAID1".

* Build the Finished Array

At this point you should be running your system from the degraded array, and the non-RAID disk is not even mounted.

  # mount
  /dev/md1 on / type reiserfs (rw,acl,user_xattr)
  proc on /proc type proc (rw)
  sysfs on /sys type sysfs (rw)
  debugfs on /sys/kernel/debug type debugfs (rw)
  udev on /dev type tmpfs (rw)
  devpts on /dev/pts type devpts (rw,mode=0620,gid=5)

  # mdadm --detail /dev/md1

Create a RAID configuration file.

  # cat << EOF > /etc/mdadm.conf
  > DEVICE /dev/sdb1 /dev/sdb2 /dev/sda1 /dev/sda2
  > ARRAY /dev/md0 devices=/dev/sdb1,/dev/sda1
  > ARRAY /dev/md1 devices=/dev/sdb2,/dev/sda2
  > EOF

  # cat /etc/mdadm.conf
  DEVICE /dev/sdb1 /dev/sdb2 /dev/sda1 /dev/sda2
  ARRAY /dev/md0 devices=/dev/sdb1,/dev/sda1
  ARRAY /dev/md1 devices=/dev/sdb2,/dev/sda2
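
As an alternative to typing the ARRAY lines by hand, mdadm can generate them itself (in a UUID-based format rather than the device list shown above):

  # echo "DEVICE /dev/sda1 /dev/sda2 /dev/sdb1 /dev/sdb2" > /etc/mdadm.conf
  # mdadm --detail --scan >> /etc/mdadm.conf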

Add the non-RAID disk partitions into their respective RAID array.

  # mdadm /dev/md0 -a /dev/sda1
  mdadm: hot added /dev/sda1

  # mdadm /dev/md1 -a /dev/sda2
  mdadm: hot added /dev/sda2

Now watch the RAID1 arrays mirror from one disk to the other. /proc/mdstat shows the percentage completed; depending on the size of your disks, the resync can take a while.

  # cat /proc/mdstat
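
To follow the rebuild continuously instead of re-running the command by hand (assuming watch(1) is installed):

  # watch -n 5 cat /proc/mdstat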

Install GRUB onto both disks. You will need to do this manually.

  # grub

  grub> find /boot/grub/stage1
   (hd0,1)
   (hd1,1)

  grub> root (hd0,1)
  grub> setup (hd0)

  grub> root (hd1,1)
  grub> setup (hd1)

If you do not reinstall GRUB, you will get a GRUB error after rebooting. If that happens, boot from the disk that still has a working boot loader and, once the system is up, follow the steps above to install GRUB onto both drives.

* Remove the original initrd. It is useless at this point.

  # rm /boot/initrd-2.6.16.46-0.12-default.orig

Change back to your default run level.

  # vi /etc/inittab
  id:3:initdefault:
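
To return to the default run level right away, without another reboot:

  # init 3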

Congratulations! You are now using your openSUSE server with RAID1.

  # df -h
  Filesystem            Size  Used Avail Use% Mounted on
  /dev/md1              1.6G  421M  1.1G  28% /
  tmpfs                 126M  8.0K  126M   1% /dev/shm