Ubuntu: Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?



Question:

I created a RAID with:

sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2
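
A freshly created RAID1 mirror performs an initial resync in the background; its progress can be watched with something like this (a sketch):

watch cat /proc/mdstat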

sudo mdadm --detail --scan returns:

ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

I appended this output to /etc/mdadm/mdadm.conf; see below:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays

# This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
# by mkconf $Id$
ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
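
As an aside, those two ARRAY lines can be appended in one step rather than copied by hand, for example (a sketch, assuming the Debian/Ubuntu config path):

sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf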

cat /proc/mdstat returns:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
      208629632 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>

ls -la /dev | grep md returns:

brw-rw----   1 root disk      9,   1 Oct 30 11:06 md1
brw-rw----   1 root disk      9,   2 Oct 30 11:06 md2
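
As an extra sanity check at this point, each array can also be inspected directly, e.g.:

sudo mdadm --detail /dev/md1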

So I think all is good and I reboot.


After the reboot, /dev/md1 is now /dev/md127 and /dev/md2 is now /dev/md126!

sudo mdadm --detail --scan returns:

ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
      208629632 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>

ls -la /dev | grep md returns:

drwxr-xr-x   2 root root          80 Oct 30 11:18 md
brw-rw----   1 root disk      9, 126 Oct 30 11:18 md126
brw-rw----   1 root disk      9, 127 Oct 30 11:18 md127
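
Note that the new /dev/md directory contains name-based symlinks, so even under the md126/md127 numbering each array can still be identified by its name and UUID (a sketch):

ls -l /dev/md
sudo mdadm --detail /dev/md126 | grep -E 'Name|UUID'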

All is not lost. I ran:

sudo mdadm --stop /dev/md126
sudo mdadm --stop /dev/md127
sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2

and verified everything:

sudo mdadm --detail --scan returns:

ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb2[0] sdc2[1]
      208629632 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>

ls -la /dev | grep md returns:

brw-rw----   1 root disk      9,   1 Oct 30 11:26 md1
brw-rw----   1 root disk      9,   2 Oct 30 11:26 md2

So once again, I think all is good and I reboot.


Again, after the reboot, /dev/md1 is /dev/md127 and /dev/md2 is /dev/md126!

sudo mdadm --detail --scan returns:

ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

cat /proc/mdstat returns:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md126 : active raid1 sdc2[1] sdb2[0]
      208629632 blocks super 1.2 [2/2] [UU]

md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
      767868736 blocks super 1.2 [2/2] [UU]

unused devices: <none>

ls -la /dev | grep md returns:

drwxr-xr-x   2 root root          80 Oct 30 11:42 md
brw-rw----   1 root disk      9, 126 Oct 30 11:42 md126
brw-rw----   1 root disk      9, 127 Oct 30 11:42 md127

What am I missing here?


Solution:1

I found the answer here: RAID starting at md127 instead of md0. In short, I trimmed my /etc/mdadm/mdadm.conf definitions from:

ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

to:

ARRAY /dev/md1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
ARRAY /dev/md2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb

and ran:

sudo update-initramfs -u  

I am far from an expert on this, but my understanding is this:

The arrays get assembled early in the boot process, from the initramfs, before the normal array-assembly step runs. That early assembly does not see the updated /etc/mdadm/mdadm.conf on the root filesystem, so the arrays come up under fallback names counting down from md127. And since the arrays had already been assembled, the normal assembly step, which does use mdadm.conf, was skipped.

Running sudo update-initramfs -u rebuilds the initramfs so that it picks up the current system configuration, including the updated mdadm.conf, and the arrays are assembled under the right names at the next boot.
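
To verify that the rebuilt initramfs actually carries a copy of the updated configuration, its contents can be listed (a sketch, assuming Debian/Ubuntu initramfs-tools):

lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm.conf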

I am sure someone with better knowledge will correct me / elaborate on this.

Use the following command to update the initrd for every kernel that exists on your system:

sudo update-initramfs -k all -u  


Solution:2

sudo update-initramfs -u  

was all I needed to fix that. I did not edit anything in /etc/mdadm/mdadm.conf.


Solution:3

I had the same issue.

This solution solved my problem: http://aubreykloppers.wordpress.com/2012/07/06/mdadm-devmd127/


Solution:4

I managed to replicate the issue in the following manner:

When "Software Updater" asked if i wanted to update packages (including Ubuntu base" and kernel, i said:OK. The newly installed kernel used the current kernel's/system's settings. I then created the array. Only the currently running kernel got updated with the new RAID settings. Once i rebooted, the new kernel knew nothing about the raid, and gave it an md127 name!

