BTRFS RAID-6 is (though still experimental) actually pretty stable now. The current version can recover from many typical failure scenarios, including replacing a failed/missing drive.
Like ZFS, BTRFS does checksumming, which means you can always (and you should, periodically) run a scrub to verify your data. If data or metadata on a drive is damaged, BTRFS will detect the errors, and if it has redundancy (BTRFS RAID-6), it can fix the affected files. Afterwards, it knows whether the repaired files are correct, because it has checksums.
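For example, on a BTRFS filesystem mounted at /mnt/data (a hypothetical mount point), a scrub looks like this:

```
# Start a scrub in the background: BTRFS reads all data and metadata
# and verifies everything against the stored checksums.
btrfs scrub start /mnt/data

# Check progress and the error counters.
btrfs scrub status /mnt/data
```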
Classic RAID systems, whether hardware RAID or MD software RAID, do not have checksums. They rely on parity only, so an unfortunate combination of errors on several drives can very well lead to corruption. Since there are no checksums, the RAID system (like md) cannot verify after a scrub that all errors are actually gone, i.e. that the files are correct. There are examples (including some videos) that demonstrate data corruption on a classic RAID system.
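For comparison, here is roughly what a scrub looks like on an MD array (assuming it is md0). Note that MD can only compare data blocks against parity; it has no checksums to tell which side is the correct one:

```
# Trigger a parity check ("scrub") on the array.
echo check > /sys/block/md0/md/sync_action

# Watch progress.
cat /proc/mdstat

# Number of mismatches found - but MD cannot tell you whether
# the data or the parity was the corrupted side.
cat /sys/block/md0/md/mismatch_cnt
```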
It's important to be notified as soon as the first parity error is detected and to fix the issue promptly (scrub). Also, since parity is the only (and not very reliable) way for the RAID system to know whether your data is ok, battery backups should be used to prevent losing this valuable parity data to the write hole when the power goes out.
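For MD, notifications can be set up via the mdadm monitor (a sketch with a hypothetical mail address; many distributions run this as a service configured in /etc/mdadm.conf):

```
# Run the MD monitor as a daemon and send mail on events
# such as degraded arrays or failed devices.
mdadm --monitor --scan --daemonise --mail=admin@example.com
```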
Now, if you use an advanced filesystem like BTRFS that does checksumming as a single filesystem (without redundancy of its own) on top of a dumb RAID-6 system, it's entirely up to that RAID system to detect and fix errors: once errors have corrupted the data the RAID layer presents, BTRFS has no redundant copy and will NOT be able to fix them. It will still detect the errors and help you decide what to restore from your backup by telling you which files (paths) are corrupt, but by then it is too late for repair. This is why this setup might not be such a good idea after all.
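BTRFS will at least point you at the casualties: scrub results end up in the kernel log, and for data errors the messages include the path of the affected file. A sketch, again assuming a mount at /mnt/data:

```
# Run the scrub in the foreground (-B blocks until it is done).
btrfs scrub start -B /mnt/data

# Checksum failures show up in the kernel log; for data errors
# the messages include the path of the corrupt file.
dmesg | grep -i btrfs
```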
If you use BTRFS the way it's supposed to be used, by creating a BTRFS RAID-6 filesystem (without MD RAID) with direct access to your drives, it will be able to fix errors reliably, and it will know whether the errors are actually gone because it has checksums. It will also tell you on which drives the errors occurred, so you know which drive is bad and can replace it using btrfs commands. The point is: whether a drive is dead/missing and has to be replaced, or a drive is partially corrupted (because it's about to die), BTRFS will detect the errors reliably. Of course, periodic scrubs are as important as with other RAID systems, to catch silent corruption (hint: cronjob).
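As a sketch with hypothetical device names, creating such a filesystem, replacing a drive, and scheduling scrubs could look like this:

```
# Create a RAID-6 filesystem, for data (-d) and metadata (-m),
# with direct access to the individual drives.
mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# Replace a failing drive (/dev/sdc) with a new one (/dev/sdf)
# while the filesystem stays mounted. (For a completely missing
# drive, give its devid instead of /dev/sdc.)
btrfs replace start /dev/sdc /dev/sdf /mnt/data
btrfs replace status /mnt/data

# Weekly scrub cronjob, e.g. in /etc/cron.d/btrfs-scrub:
# 0 3 * * 0  root  btrfs scrub start -Bq /mnt/data
```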
So again, BTRFS RAID-6 is still considered experimental in some ways, but by having checksums it already offers a reliable way to detect (and then fix) errors. Being experimental, it could still crash in certain cases; if it does, try a newer kernel version, which is the fix for many BTRFS issues. Make sure to stay up to date with your kernel (4.3 at the time of writing; don't use anything older than that for RAID-6). But the typical RAID use case (just storing lots of data and at some point replacing a drive; a multi-drive failure might be different) already works with BTRFS.
You've tagged your question with zfs. ZFS isn't included in the Linux kernel, so it has to be installed manually - the ZFSonLinux port works very well. It might need to be reinstalled or otherwise fixed up after a kernel update, but that's a minor point. Of course, some things work differently in ZFS. Also, unlike a BTRFS filesystem, a ZFS RAIDZ2 (comparable to RAID-6) zpool cannot simply be resized: there are "tricks", but the bottom line is that a raidz2 vdev cannot be grown by adding a drive, so an existing system cannot easily be expanded with more drives. However, as far as stability is concerned, ZFS is probably the best choice of all. With checksums, it offers the same reliable data protection as described above, and it is mature enough to handle pretty much everything: multiple drive failures, a shaky controller - ZFS survives almost anything and can protect/fix your data as long as you have enough good drives. Given that (like BTRFS) ZFS needs to access your drives individually to be able to fix errors, you definitely should not use it on top of an MD RAID volume unless you have a good reason.
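A minimal sketch with hypothetical device names; note that the only ways to grow the pool are adding a whole new vdev or replacing every drive in the existing vdev with a larger one:

```
# Create a RAIDZ2 pool (double parity, comparable to RAID-6).
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde

# A single drive cannot be added to the raidz2 vdev above;
# the pool can only grow by a complete additional vdev:
zpool add tank raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi

# Checksummed scrub, analogous to a BTRFS scrub.
zpool scrub tank
zpool status tank
```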