
I am having trouble with an unstable RAID array; it keeps disappearing at random. At creation time, warnings about a lost / meaningless partition table are displayed, even though a partition, not a whole disk, is used as the RAID member. The output clearly suggests an existing partition table within the partition, and indeed a closer look at /dev/sdd* reveals a whole partition table inside the /dev/sdd1 partition!

I don't know for a fact, but I suspect the disappearing-RAID-array issue might be related to this mysterious extra partition table. I certainly did not put it there; the 'main' partition table has been overwritten many times over, yet the table internal to sdd1 somehow survives. Is there a way to wipe / erase / get rid of this extra partition table and re-purpose the disk as a RAID array member? Do you think that's what's causing the issue? What kind of nested-partition-table voodoo is this, and how do I proceed?

The relevant outputs are below. Many thanks in advance.

mdadm --create --verbose --homehost=any --level=1 --force --raid-devices=1 --name=md127 /dev/md/md127 /dev/sdd1
mdadm: partition table exists on /dev/sdd1
mdadm: partition table exists on /dev/sdd1 but will be lost or
       meaningless after creating array
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
mdadm: size set to 1953381440K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? n
mdadm: create aborted.

fdisk -l /dev/sdd*
Disk /dev/sdd: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ASM1153E        
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33553920 bytes
Disklabel type: dos
Disk identifier: 0x2966153e

Device     Boot Start        End    Sectors  Size Id Type
/dev/sdd1        2048 3907029167 3907027120  1.8T fd Linux raid autodetect


Disk /dev/sdd1: 1.84 TiB, 2000397885440 bytes, 3907027120 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 33553920 bytes
Disklabel type: dos
Disk identifier: 0x6e697373

Device      Boot      Start        End    Sectors   Size Id Type
/dev/sdd1p1      1936269394 3772285809 1836016416 875.5G 4f QNX4.x 3rd part
/dev/sdd1p2      1917848077 2462285169  544437093 259.6G 73 unknown
/dev/sdd1p3      1818575915 2362751050  544175136 259.5G 2b unknown
/dev/sdd1p4      2844524554 2844579527      54974  26.9M 61 SpeedStor

Partition table entries are not in disk order.

1 Answer

What kind of nested-partition-table voodoo is this?

Googling for 1936269394 3772285809 1836016416 reveals that the same garbage "partition table" can show up when tools look at NTFS. You probably used to have NTFS at the very same offset (sector 2048 of /dev/sdd).

A similar problem, where an NTFS VBR "disguised" itself as an MBR, came up here: Windows does not mount USB NTFS superfloppy. In your case this happens inside a partition, and the traces of NTFS are just an artifact. My answer there compares the MBR to the NTFS VBR and explains that sometimes you cannot really tell which one you're dealing with. Tools that expect a partition table may get fooled by NTFS, and vice versa.
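To see how such garbage entries arise, here is a small illustrative sketch (my own, not part of the original diagnosis): an MBR partition entry is just 16 bytes at a fixed offset, so whatever bytes happen to sit there, including ASCII boot-code message text of the kind NTFS boot sectors contain, parse as a "partition". The 16-byte string below is a hypothetical stand-in for such message text; note that your fdisk output even shows Disk identifier 0x6e697373, whose bytes are the ASCII characters "ssin".

```python
import struct

# Layout of one 16-byte MBR partition entry: status byte, CHS start (3 bytes),
# partition type byte, CHS end (3 bytes), LBA start (uint32 LE), sector count
# (uint32 LE).
def parse_entry(entry):
    status, type_id = entry[0], entry[4]
    lba_start, sectors = struct.unpack_from("<II", entry, 8)
    return status, type_id, lba_start, sectors

# 16 bytes of plain ASCII, standing in for boot-code message text that
# happens to land where a partition table entry is expected:
fake = b"A disk read erro"
print(parse_entry(fake))   # -> (65, 115, 543449445, 1869771365)
```

The "type" comes out as 0x73 (115), which is exactly the "73 unknown" Id that fdisk printed for one of the phantom partitions, and the start/count fields decode to the kind of huge, overlapping sector numbers seen above.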

I guess that if Windows has access to the partition, it may read the NTFS signature and try to mount the alleged filesystem. I don't know whether Windows would then alter whatever data is there, but I wouldn't be surprised if it did. If so, such interference from Windows may be why your RAID array "keeps on disappearing in random fashion".


Is there a way to wipe / erase / get rid of this extra partition table?

wipefs is a tool to wipe various signatures from a device. This command

wipefs -a /dev/sdd1

will remove all signatures the tool can find. But ask yourself if you want to remove all possible signatures; read the whole answer before proceeding.
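One useful property of wipefs is that, run with no options, it only lists the signatures it finds and erases nothing, so you can see in advance what -a would remove. As a rehearsal sketch (my addition): scratch.img is a throwaway file, and the swap signature is just a hypothetical stand-in for the NTFS leftovers on the real partition.

```shell
# Dry run on a scratch file instead of the real /dev/sdd1.
truncate -s 1M scratch.img    # empty 1 MiB file
mkswap -q scratch.img         # stand-in signature (hypothetical example)
wipefs scratch.img            # no options: list signatures, erase nothing
wipefs -a scratch.img         # -a: erase every signature found
wipefs scratch.img            # prints nothing now
```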

In your case writing zeros to the first 512 bytes of the partition should also work, so this is an alternative solution:

dd if=/dev/zero of=/dev/sdd1 bs=512 count=1

Modern mdadm uses a version 1.2 superblock by default, which is stored 4 KiB from the start of the device. The above dd command will overwrite neither the superblock (if any) nor the data that follows. I'm not sure about wipefs -a; that's why I said "ask yourself if you want to remove all possible signatures". If I were you, I would go with dd.
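If you want to rehearse the dd approach before touching the real partition, the same one-sector overwrite can be tried on a scratch file. This is a minimal sketch of my own; scratch.img is a made-up filename standing in for /dev/sdd1.

```shell
truncate -s 1M scratch.img                                   # empty 1 MiB file
printf 'leftover signature' | dd of=scratch.img conv=notrunc 2>/dev/null
dd if=/dev/zero of=scratch.img bs=512 count=1 conv=notrunc 2>/dev/null
head -c 512 scratch.img | tr -d '\0' | wc -c                 # 0: first sector is all zeros
```

The count=1 with bs=512 is what limits the damage to the first sector; dropping count would zero the whole target.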

After this mdadm should not complain.