
I'm having trouble figuring out where the problem is. I can see my two drives... I removed a failed drive and installed an empty one. I thought I had added it to the RAID, and it said it was rebuilding.
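
For reference, this is roughly what I ran to add the new drive (typing from memory, so the exact device name may be off):

sudo mdadm --manage /dev/md127 --add /dev/sdc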

joshua@tree-nas:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdc[2](S) sdb[0]
      1953513424 blocks super 1.2 [2/1] [U_]

unused devices: <none>



joshua@tree-nas:~$ sudo mdadm --examine /dev/sdc
/dev/sdc:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 40b712ad:e014e9fb:db87b836:e4b78943
           Name : tree-nas2:STORE
  Creation Time : Fri Aug 23 07:04:04 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
     Array Size : 1953513424 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 3907026848 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 26e9cad3:049f0638:7401a39a:f81da93d

    Update Time : Fri Mar 27 05:02:02 2015
       Checksum : 56132f6a - correct
         Events : 22632


   Device Role : spare
   Array State : A. ('A' == active, '.' == missing)
joshua@tree-nas:~$ sudo mdadm --examine /dev/sdb
/dev/sdb:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : 40b712ad:e014e9fb:db87b836:e4b78943
           Name : tree-nas2:STORE
  Creation Time : Fri Aug 23 07:04:04 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 3907027120 (1863.02 GiB 2000.40 GB)
     Array Size : 1953513424 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 3907026848 (1863.02 GiB 2000.40 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : d999fb65:e8c350d9:873eeb53:b33e27b6

    Update Time : Fri Mar 27 05:02:02 2015
       Checksum : bb5462cd - correct
         Events : 22632


   Device Role : Active device 0
   Array State : A. ('A' == active, '.' == missing)

Would anybody mind helping me understand the output of these two --examine commands? Please excuse my ignorance.

I guess another thing I may need to do is remove the old drive from the array completely...
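
If it matters, this is what I was planning to try for that (not sure it's the right approach, since the failed drive is already physically out of the machine):

sudo mdadm --manage /dev/md127 --remove failed
sudo mdadm --manage /dev/md127 --remove detached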

Output of mdadm --detail /dev/md127:

joshua@tree-nas:~$ sudo mdadm --detail /dev/md127
/dev/md127:
        Version : 1.2
  Creation Time : Fri Aug 23 07:04:04 2013
     Raid Level : raid1
     Array Size : 1953513424 (1863.02 GiB 2000.40 GB)
  Used Dev Size : 1953513424 (1863.02 GiB 2000.40 GB)
   Raid Devices : 2
  Total Devices : 2
    Persistence : Superblock is persistent

    Update Time : Fri Mar 27 05:02:02 2015
          State : clean, degraded
 Active Devices : 1
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 1

           Name : tree-nas2:STORE
           UUID : 40b712ad:e014e9fb:db87b836:e4b78943
         Events : 22632

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       0        0        1      removed

       2       8       32        -      spare   /dev/sdc

1 Answer


This question deals with the same problem; there it's suggested that the cause could be bad blocks on the disk. I had the same problem when I ran a RAID1 md array long ago: the sync would stop halfway through with I/O errors in the kernel log (visible with dmesg).
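
If you want to check whether that's what is happening here, the kernel log and the drive's SMART data should show it (smartctl comes from the smartmontools package; substitute your own device name):

dmesg | grep -i error
sudo smartctl -a /dev/sdc     # look at Reallocated_Sector_Ct and Current_Pending_Sector
sudo badblocks -sv /dev/sdc   # read-only surface scan; slow, but does not modify data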