
TLDR:

  • Encrypted LUKS drive unlocks during the boot sequence, but then the system fails to boot.
  • Drops to (initramfs) on boot.
  • Logical volumes are messed up; data-root (the system drive) is not found.
  • I was using GParted and Disks to partition and format external drives prior to the boot issue.
  • I was installing Qubes on a USB drive prior to the boot issue.
  • Somewhere between partitioning, formatting, and booting the Qubes installer, the logical volumes got screwed up.

Background:

I was formatting and partitioning external hard drives using GParted and Disks. I had to repeat the process several times for the same drive, because I had messed up the scheme I wanted.

My Disks sidebar showed a host of drives that no longer exist. (I suspect the mapper got messed up during the partitioning.)

After copying the Qubes ISO to a USB stick and trying to boot from it, it showed as corrupted.

I rebooted to my primary drive (ElementaryOS)... and that's when the boot issue showed up.

Boot error:

Here is the error:

[screenshot of the boot error]

/dev/sda: open failed: No medium found

/dev/mapper/data-root does not exist.

I tried:

Booting to a USB ElementaryOS, and accessing the primary drive (to see if I could fix the mapper).

Using Files to unlock the primary encrypted drive, I get the error:

Error unlocking /dev/nvme0n1p2: Failed to activate device: Operation not permitted

[screenshot of the unlock error dialog]

After the error, the drive disappears from the devices list in Files.

I cloned the drive to a USB drive and tried mounting the encrypted clone. Same error on the clone.
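
(For reference, the clone was a raw block-for-block copy, roughly along the lines below – dd is shown only as an example, and /dev/sdX stands in for the external target drive.)

    # Raw, sector-for-sector clone of the encrypted NVMe disk to an external drive.
    # /dev/sdX is a placeholder for the target disk; double-check device names with lsblk first.
    sudo dd if=/dev/nvme0n1 of=/dev/sdX bs=4M status=progress conv=fsync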

What I suspect:

The mapper is screwed up: when the drive is opened at boot it decrypts, but the mapping on top of it is broken.

Question:

Do you think it is indeed a mapper problem? Or something else? How would I fix the mapper in this situation?

If I can't fix the mapper, how would I access the data on the encrypted drive, and copy it to another backup?

UPDATE:

Thanks to user1686, I've been able to determine that the problem seems to be that no logical volumes are found once LUKS decrypts the device.

I cloned the drive to an external HD. I unlocked the primary drive (nvme0n1p2) using cryptsetup luksOpen in a terminal, with test7 as the mapping name. It decrypts.

I ran pvscan and got "No matching volumes found".

I repeated the process for the cloned drive, using test8. Same result.
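
Roughly, the commands looked like this (test7 and test8 are just arbitrary mapping names, and /dev/sdX2 stands in for the LUKS partition on the clone):

    # Unlock the original LUKS partition and the clone, then scan for LVM physical volumes.
    sudo cryptsetup luksOpen /dev/nvme0n1p2 test7   # prompts for the passphrase, unlocks fine
    sudo pvscan                                     # "No matching volumes found"
    sudo cryptsetup luksOpen /dev/sdX2 test8        # same steps on the cloned drive
    sudo pvscan                                     # same result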

So, if I'm able to decrypt the device but there are no logical volumes inside it, how would I access the data?

(At this point, I just want to get the data, and then reinstall a fresh system.)

Here's what I'm seeing in terminal:

[screenshot of the terminal output]

Emily
  • 161

1 Answer


Mappings are ephemeral, like mounts, and the "mapper" isn't a property of the encrypted disk – it's a component of the currently running OS. Similarly, the devtmpfs filesystem mounted at /dev always reflects the devices (and mappings) of the currently running OS – your physical disk shouldn't even contain a /dev/mapper, or indeed anything else inside /dev.
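
You can see this on any running system – the output below comes from the running kernel's device-mapper state, not from anything stored on the disk:

    # Device-mapper state belongs to the running kernel, not to the disk:
    sudo dmsetup ls        # mappings currently set up on this OS
    ls -l /dev/mapper      # the corresponding device nodes exposed under /dev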

So if you connect the same disk to a completely different Linux system (with its own DM modules and everything) and still cannot access the disk, the problem is almost certainly not with device-mapper but with something else that is part of the disk.

For example, the LUKS metadata header on the disk could've gotten corrupted (it's what is actually used to set up the DM mapping). This is somewhat unlikely, as cryptsetup prompts for the passphrase without complaints – but it's also possible (if unlikely) that the installer set up two LUKS volumes with the same passphrase, and the first one unlocks successfully while the second doesn't.
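
You can sanity-check that header without modifying anything, e.g.:

    # Read-only inspection of the LUKS header on the partition:
    sudo cryptsetup luksDump /dev/nvme0n1p2
    # A healthy header lists the LUKS version, cipher, and at least one active key slot.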

The "No medium found" error message for /dev/sda is somewhat concerning. The only time it should normally show up is if the device is in fact just a drive/reader without any media inserted, e.g. a SD card reader with no card in it1 . Use lsblk -S or ls -l /sys/class/block/sda to find out. (If the message ever shows up for an HDD or SSD or USB stick, it pretty much means that device is dead.) That being said, from Elementary OS it seems that your system disk is NVMe, so /dev/sda isn't that.

¹ (CD/DVD drives are 'sr', not 'sd', so even if you have one, it's not the cause of the error.)
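
To find out what /dev/sda actually is, for example:

    # Identify what kind of device is behind /dev/sda:
    lsblk -S                       # transport (usb/sata/...), vendor and model of each device
    ls -l /sys/class/block/sda     # the symlink shows which bus/controller it hangs off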

I would suggest starting with lsblk (alone; with -f; with -S) from another working Linux system, finding your disk there, and trying to unlock each LUKS volume manually through the command line, i.e. using cryptsetup open. From the screenshot it seems that you're supposed to have LVM inside LUKS, so after unlocking the LUKS volumes, run lsblk -f again to see if the contents are recognized, and pvscan followed by vgchange -ay (or whatever commands LVM uses) to see if the LVM metadata is working. (It seems that the boot process first opens a LUKS volume and names the mapping "cryptdata", then expects LVM to set up a logical-volume mapping named "data-root" on top of that.)
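
Put together, and assuming the LUKS partition really is /dev/nvme0n1p2 as in your update, the whole check would look roughly like this ("cryptdata" is just an example mapping name):

    # 1. Survey the disks: partitions, filesystem signatures, transports.
    lsblk
    lsblk -f
    lsblk -S

    # 2. Unlock the LUKS volume by hand (the mapping name is arbitrary).
    sudo cryptsetup open /dev/nvme0n1p2 cryptdata

    # 3. See whether anything is recognized inside the unlocked volume.
    lsblk -f /dev/mapper/cryptdata

    # 4. Have LVM scan for physical volumes and activate any volume groups it finds.
    sudo pvscan
    sudo vgchange -ay

    # 5. If the "data" volume group comes back, data-root should appear here:
    ls -l /dev/mapper/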

grawity
  • 501,077