Background
For 7 years now I've had a personal file/media server running Fedora (initially 21, currently 30). The main storage array comprises 3TB drives in an mdadm RAID6 (originally, RAID5). Over the years, a few drives have died and been replaced; I've grown the array; etc. Most recently, it was 6 drives + a hot spare in RAID6 (12TB usable space). There had never been any issue with failover or rebuilding the array; everything just kept on chugging like you'd expect from a redundant RAID array. But, unfortunately, due to the size of the array, I never had a backup and relied on the RAID for resilience.
The Incident
Until last week. I was randomly browsing the server and noticed a ton of my files were no longer there. (A bunch of other files were untouched, though, including some backups of my other computers, funnily enough.) Upon noticing that, I immediately checked /proc/mdstat and found that one of the drives had failed and the hot spare had been subbed in. Great, I thought, there must have been some filesystem snafu while the failed drive was in its death throes.
I then unmounted the array before any more damage could be done, then physically removed the failed drive from the system (after confirming with smartctl that it really was dead). And that was basically the last smart thing I did. I ran fsck and it kept finding issues ("ref count 2, expected 1") and asking if I wanted them fixed. TL;DR, 48 hours later fsck had finished having its way with my array. I re-mounted it and immediately ran df, which showed 5.8T used out of 11T (it used to be in the 9.5T-used range). Obviously I'm a bit upset about probably losing 4TB of data for, as far as I can tell, no reason, so I started researching options for data recovery. However, since I don't have any other volume that's even close in size, I appeared to be SOL. So I resigned myself to picking through lost+found and finding out what data had managed to survive.
Current State
The array is now a healthy six-drive RAID6 (no more hot spare) with a 12TB ext4 filesystem that also appears to be fine: it mounts and unmounts without issue, dmesg shows no more errors, and fsck has no complaints. Of the 5.8TB of data reportedly still present, roughly 500GB is where it was before and the remaining 5+TB is in lost+found.
My Question
Before I start actually going through these files and trying to put them back where they belong, is there anything else I can try to recover the 4TB of "missing" data? (Is it possible that it's not really gone?) Or am I just totally hosed because I ran fsck too hastily?
And does anyone have any "tricks" for handling the lost+found recovery? Based on other posts I've read, such as https://unix.stackexchange.com/questions/177691/restore-from-lostfound, I'm expecting to basically use file in a script to segregate the files by type, and then, if I'm lucky, I'll be able to use baked-in metadata to help automate sorting most of them.
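In case it helps frame answers, here's a rough sketch of the kind of script I had in mind (the mount point and destination paths are placeholders, not my real layout): it asks file(1) for each entry's MIME type and moves the file into a per-type directory.

    #!/usr/bin/env python3
    # Rough sketch: sort lost+found entries into per-type directories
    # based on the MIME type reported by file(1). Paths are placeholders.
    import shutil
    import subprocess
    from pathlib import Path

    LOST_FOUND = Path("/mnt/array/lost+found")   # placeholder mount point
    DEST = Path("/mnt/array/sorted")             # placeholder destination

    for entry in LOST_FOUND.iterdir():
        if not entry.is_file():
            continue  # recovered directories would need separate handling
        # e.g. "video/x-matroska", "image/jpeg", "application/octet-stream"
        mime = subprocess.run(
            ["file", "--brief", "--mime-type", str(entry)],
            capture_output=True, text=True, check=True,
        ).stdout.strip()
        target_dir = DEST / mime.replace("/", "_")
        target_dir.mkdir(parents=True, exist_ok=True)
        # move rather than copy: same filesystem, and no space for duplicates
        shutil.move(str(entry), str(target_dir / entry.name))

From there I'm hoping metadata like EXIF dates on photos and ID3 tags on music will get most files back into a sensible directory layout, but I'd love to hear if there's a better-established approach.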