
With expensive commercial software I could read the data, but I did not buy it because it is too expensive for me. I am convinced that I can read the data easily without having to reformat the hard drives again.

I am about to do the following:

  • Read the data from 2 disks (out of a 4-disk RAID 10 hardware array) that were physically moved into a Linux desktop PC.
  • Find a software RAID solution that can do that. I hope mdadm can (see the sketch after this list).
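
If the original md superblocks on the two members are still intact, a less destructive first attempt than --create would be a read-only assemble. A minimal sketch, assuming the NAS wrote standard mdadm metadata to sdd2/sdf2 (the array name md_27tb is just an example):

# mdadm --examine /dev/sdd2 /dev/sdf2   # show the existing superblocks: level, chunk size, layout
# mdadm --assemble --readonly --run /dev/md/md_27tb /dev/sdd2 /dev/sdf2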

Creating the RAID works:

# mdadm --create /dev/md/md_27tb --level=0 --raid-devices=2 /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sun Nov 3 01:19:11 2019
mdadm: /dev/sdd2 appears to be part of a raid array:
       level=raid10 devices=4 ctime=Sun Nov 3 01:19:11 2019
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md/md_27tb started.

Unfortunately, I was unable to read the data; it was not possible to mount it.

Then I read "mount: wrong fs type, bad option, bad superblock" and tried:

# mount -t ext4 /dev/md/md_27tb /mnt/md_27tb

mount: /mnt/md_27tb: wrong fs type, bad option, bad superblock on /dev/md126, missing codepage or helper program, or other error.
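
Before forcing -t ext4, it may help to ask the low-level probing tools what signature (if any) they see on the assembled device. A small sketch:

# blkid -p /dev/md/md_27tb    # low-level probe, bypasses the blkid cache
# file -s /dev/md/md_27tb     # prints the detected filesystem, or just "data" if nothing matches
# wipefs /dev/md/md_27tb      # without -a it only lists filesystem/RAID/LVM signatures, it wipes nothing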

# fsck /dev/md/md_27tb

fsck from util-linux 2.34
e2fsck 1.45.4 (23-Sep-2019)
ext2fs_open2: Bad magic number in super-block
fsck.ext2: Superblock invalid, trying backup blocks...
fsck.ext2: Bad magic number in super-block while trying to open /dev/md126
The superblock could not be read or does not describe a valid ext2/ext3/ext4
filesystem.  If the device is valid and it really contains an ext2/ext3/ext4
filesystem (and not swap or ufs or something else), then the superblock
is corrupt, and you might try running e2fsck with an alternate superblock:
    e2fsck -b 8193 <device>
 or
    e2fsck -b 32768 <device>

# e2fsck -b 32768 /dev/md/md_27tb

e2fsck: Bad magic number in super-block 

# e2fsck -b 8193 /dev/md/md_27tb

e2fsck: Bad magic number in super-block 
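
The suggested positions 8193 and 32768 assume particular block sizes; the actual backup superblock locations can be listed without writing anything via a dry run of mke2fs. A sketch, assuming the filesystem really is ext2/3/4 and was created with default parameters:

# mke2fs -n /dev/md/md_27tb        # -n = dry run, only prints where the superblock backups would be
# e2fsck -b <backup block> /dev/md/md_27tb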

Maybe I used the wrong file system.

I read here:

https://forums.lenovo.com/t5/Lenovo-Iomega-Networking-Storage/File-system-error-on-IX4-300d/td-p/1407487

Maybe it's ext3?

UPDATE, important information:

# file -sk /dev/md/md_27tb
/dev/md/md_27tb: symbolic link to ../md126

# fdisk -l /dev/md/md_27tb
Disk /dev/md/md_27tb: 5.43 TiB, 5957897682944 bytes, 11636518912 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 524288 bytes / 1048576 bytes

# file -skL /dev/md/md_27tb
/dev/md/md_27tb: data
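
file reporting plain "data" suggests that nothing recognizable starts at the beginning of the new array, so it may be worth comparing the metadata on the member partitions with the parameters of the array as it is currently assembled. A sketch:

# mdadm --examine /dev/sdd2 /dev/sdf2   # per-member superblock: level, chunk size, layout, data offset
# mdadm --detail /dev/md/md_27tb        # parameters of the running array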

UPDATE, more important information:

vgscan scans all supported LVM block devices in the system for VGs. 

# vgscan
  Reading volume groups from cache.
  Found volume group "md0_vg" using metadata type lvm2

# pvscan
  PV /dev/md127   VG md0_vg          lvm2 [19.98 GiB / 0    free]
  Total: 1 [19.98 GiB] / in use: 1 [19.98 GiB] / in no VG: 0 [0   ]


# vgdisplay
  --- Volume group ---
  VG Name               md0_vg
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No 3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               19.98 GiB
  PE Size               4.00 MiB
  Total PE              5115
  Alloc PE / Size       5115/19.98 GiB
  Free  PE / Size       0/0   
  VG UUID               fYicLg-jFJr-trfJ-3HvH-LWl4-tCci-fI
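
md0_vg (19.98 GiB) lives on the small 20 GiB partitions, not on the 2.7 TiB data partitions; the large data VG can only appear once sdd2/sdf2 are assembled into an array. After that, a rescan should reveal it. A sketch:

# pvscan --cache   # rescan block devices for physical volumes
# vgscan           # look for newly visible volume groups
# lvscan           # list the logical volumes inside them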

UPDATE Wed, Nov. 27, 22:30

After a very long automatic reconstruction of about one day, on Wed, Nov. 27, at 22:30 I got the message "Data protection reconstruction has completed." Now I am sure that the data on these two disks is consistent again, and I can continue trying.

# mdadm --assemble --force /dev/md/md_27tb /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
[seeh-pc seeh]# 
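
"busy" usually means the member partitions are already claimed by another (possibly inactive, auto-assembled) md array. The usual way out is to check /proc/mdstat for the device that holds them, stop it, and then re-assemble; a sketch (that md127 is the holder is confirmed further below):

# cat /proc/mdstat            # see which md device currently holds sdd2/sdf2
# mdadm --stop /dev/md127     # release the members (nothing on it may be mounted or in use)
# mdadm --assemble --run /dev/md127 /dev/sdd2 /dev/sdf2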

UPDATE 191128-090222 (by the way, sde is certainly not of interest here): I would prefer that md126 were not there.

$ lsblk
....

sdd           8:48   0   2.7T  0 disk  
├─sdd1        8:49   0    20G  0 part  
│ └─md126     9:126  0    20G  0 raid1 
│   ├─md0_vg-BFDlv
│   │       253:0    0     4G  0 lvm   
│   └─md0_vg-vol1
│           253:1    0    16G  0 lvm   
└─sdd2        8:50   0   2.7T  0 part  
sde           8:64   0 931.5G  0 disk  
├─sde1        8:65   0  60.6G  0 part  
└─sde2        8:66   0   871G  0 part  
sdf           8:80   0   2.7T  0 disk  
├─sdf1        8:81   0    20G  0 part  
│ └─md126     9:126  0    20G  0 raid1 
│   ├─md0_vg-BFDlv
│   │       253:0    0     4G  0 lvm   
│   └─md0_vg-vol1
│           253:1    0    16G  0 lvm   
└─sdf2        8:82   0   2.7T  0 part 
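
md126 is the small 20 GiB RAID 1 that carries the NAS system LVs (md0_vg). If it is in the way, it can be released by deactivating that volume group first and then stopping the array; a sketch:

# vgchange -an md0_vg        # deactivate the two LVs that sit on md126
# mdadm --stop /dev/md126    # now the raid1 can be stopped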

UPDATE 191128-123821: raid1 looks wrong to me:

$ cat /proc/mdstat
Personalities : [raid1] 
md126 : active (auto-read-only) raid1 sdd1[0] sdf1[2]
      20955008 blocks super 1.1 [2/2] [UU]

md127 : inactive sdd2[6](S) sdf2[7](S)
      5818261504 blocks super 1.1

UPDATE 191128-144807:

It looks like a success.

# mdadm --assemble --force /dev/md127 /dev/sdf2 /dev/sdd2
mdadm: /dev/sdf2 is busy - skipping
mdadm: /dev/sdd2 is busy - skipping
[seeh-pc seeh]# mdadm --stop md127
mdadm: stopped md127
[seeh-pc seeh]# mdadm --assemble md127 --run /dev/sdf2 /dev/sdd2
mdadm: /dev/md/md127 has been started with 2 drives (out of 4).
[seeh-pc seeh]# 


# cat /proc/mdstat
Personalities : [raid1] [raid10] 
md1 : active raid10 sdd2[6] sdf2[7]
      5818260480 blocks super 1.1 512K chunks 2 near-copies [4/2] [_U_U]

md126 : active (auto-read-only) raid1 sdd1[0] sdf1[2]
      20955008 blocks super 1.1 [2/2] [UU]


$ lsblk
....

sdd            8:48   0   2.7T  0 disk   
├─sdd1         8:49   0    20G  0 part   
│ └─md126      9:126  0    20G  0 raid1  
│   ├─md0_vg-BFDlv
│   │        253:0    0     4G  0 lvm    
│   └─md0_vg-vol1
│            253:1    0    16G  0 lvm    
└─sdd2         8:50   0   2.7T  0 part   
  └─md1        9:1    0   5.4T  0 raid10 
    └─3760fd40_vg-lv2111e672
             253:2    0   5.4T  0 lvm    
sde            8:64   0 931.5G  0 disk   
├─sde1         8:65   0  60.6G  0 part   
└─sde2         8:66   0   871G  0 part   
sdf            8:80   0   2.7T  0 disk   
├─sdf1         8:81   0    20G  0 part   
│ └─md126      9:126  0    20G  0 raid1  
│   ├─md0_vg-BFDlv
│   │        253:0    0     4G  0 lvm    
│   └─md0_vg-vol1
│            253:1    0    16G  0 lvm    
└─sdf2         8:82   0   2.7T  0 part   
  └─md1        9:1    0   5.4T  0 raid10 
    └─3760fd40_vg-lv2111e672
             253:2    0   5.4T  0 lvm  
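
With md1 assembled (degraded, but [_U_U] means one copy of every chunk is present) and an LVM volume visible on it, the last step would be to activate that volume group and mount the logical volume read-only. A sketch; the VG/LV names are taken from the lsblk output above, and the exact mapper path is an assumption:

# vgchange -ay 3760fd40_vg
# mount -o ro /dev/mapper/3760fd40_vg-lv2111e672 /mnt/md_27tb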