I have an SLES 12 SP1 system running on AWS. As a customer requirement, I need to "wipe" its EBS volumes before deprovisioning them. The data is commercially sensitive only (no TLAs involved). Is shred an appropriate tool?
I see there are several helpful posts on SO about shred and scrub. However, they typically quote a caveat from the shred man page: it may not work reliably on journaled filesystems, but it may work better when applied to the device itself. This is somewhat confusing, and I need help figuring out whether it will work in my case. I use both standard hard drives and SSDs.
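To make the distinction concrete, by "applied to the device itself" I understand something like the following, run against the raw block device rather than against files (so, as I read the man page, the journaling caveat should not matter at this layer, since the journal itself gets overwritten along with everything else):

# One random pass plus a final zero pass over the whole block device
shred -v -n 1 -z /dev/xvdf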
Q1. How can I tell whether the filesystem is journaled? mount and pvdisplay show the following. I am tempted to assume it is not journaled (and that I am lucky!), but is there a way to check explicitly? (I sketch the check I was considering below the output.)
/dev/mapper/vgdb-lvdbdata on /db/data type xfs (rw,noatime,nodiratime,attr2,nobarrier,inode64,logbsize=256k,sunit=512,swidth=1536,noquota)
pvdisplay
  --- Physical volume ---
  PV Name               /dev/xvdf
  VG Name               vgdb
  PV Size               1.00 TiB / not usable 4.00 MiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              262143
  Free PE               255
  Allocated PE          261888
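For Q1, this is the check I was planning to run; I am going by the man pages of xfs_info and dumpe2fs, so correct me if they do not show what I think they show (the /dev/XXX below is a placeholder):

# Filesystem type per block device
lsblk -f

# XFS: xfs_info on the mount point prints a "log =" section, which is the journal
xfs_info /db/data

# ext3/ext4 (not my case, but for the record): look for the has_journal feature flag
dumpe2fs -h /dev/XXX | grep -i has_journal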
Q2. Is shred inadequate for SSDs? Elsewhere on SO a method using hdparm is recommended, but I don't have enough specifics about what I actually get with an EBS SSD volume.
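For concreteness, the hdparm method I keep seeing quoted is the ATA secure-erase sequence below; whether a virtualized EBS volume exposes the ATA security feature set at all is exactly the part I can't tell:

# Does the device report the ATA security feature set? (I suspect not on EBS)
hdparm -I /dev/xvdf | grep -i -A8 security

# The usually-quoted sequence: set a temporary password, then issue the erase
hdparm --user-master u --security-set-pass tmppass /dev/xvdf
hdparm --user-master u --security-erase tmppass /dev/xvdf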
Q3. What is the idiomatic way of doing this? I was thinking of stopping my EC2 instance, detaching the volumes, launching a small server, attaching the volumes to it, and running the wiper (sketched below). Is there a simpler way?
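The plan, sketched with the AWS CLI (all IDs and the device name below are placeholders I made up):

# Stop the instance that owns the volume, then detach the volume
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# Attach it to a small scratch instance as /dev/sdf
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sdf

# On the scratch instance (SLES sees it as /dev/xvdf): one overwrite pass
shred -v -n 1 /dev/xvdf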
EDIT:
Some have replied (here and elsewhere on SO) that AWS wipes the drive before making it available to a new user. We are all aware of that assertion and don't doubt it, but the statement deserves a careful reading, since we are, after all, in risk territory. There is potentially a time lag between one user releasing a drive and another acquiring it, and another potential lag between the drive being marked for destruction and its actual destruction.
I am not paranoid - I just want to do the job well if I am going to do it at all.