16 GB/s? Bah, we can do better. Fill a truck with microSD cards and drive down the road to your neighbor. There, you've got a bandwidth on the order of PB/s. And nobody minds waiting millions of milliseconds to see if they managed to move their cursor to the correct button, right?
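If you want to see where "on the order of PB/s" and "millions of milliseconds" come from, here's a quick back-of-envelope sketch in Python. Every figure in it (truck load, card capacity, trip time) is a made-up assumption for illustration:

```python
# Rough numbers for the "truck full of microSD cards" link.
# All figures below are assumptions, not measurements.

card_capacity_bytes = 1 * 10**12      # assume 1 TB per microSD card
cards_per_truck = 10_000_000          # assume ~10 million cards loaded (well under the volume limit)
trip_seconds = 30 * 60                # assume a 30-minute drive to the neighbor

payload_bytes = card_capacity_bytes * cards_per_truck
bandwidth = payload_bytes / trip_seconds

print(f"payload   : {payload_bytes / 10**18:.0f} EB")      # 10 EB
print(f"bandwidth : {bandwidth / 10**15:.1f} PB/s")         # ~5.6 PB/s
print(f"latency   : {trip_seconds * 1000:,} ms one way")    # 1,800,000 ms
```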
Okay, so that's an extreme example, but it demonstrates what happens when you focus only on bandwidth and ignore the other half of data transfer performance: latency.
There's a nice breakdown of the latency of various data accesses over on StackOverflow. The important takeaway is that RAM latency is measured in hundreds of nanoseconds, whereas SSD latency is measured in tens of microseconds. So instead of waiting ~100 clock cycles when your 1 GHz CPU needs something that isn't in a cache, it would have to wait 10,000 or more. That's a lot of dead time to fill with other work.
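To make the cycle counts concrete, here's a trivial sketch using the low ends of those latency ranges (ballpark assumptions, not measurements):

```python
# Convert access latency into stalled clock cycles for a 1 GHz CPU.

cpu_hz = 1 * 10**9                    # 1 GHz, so one cycle = 1 ns

latencies_ns = {
    "RAM (hundreds of ns)": 100,      # low end of "hundreds of nanoseconds"
    "SSD (tens of µs)":     10_000,   # low end of "tens of microseconds"
}

for name, ns in latencies_ns.items():
    cycles = ns * cpu_hz / 10**9      # nanoseconds * cycles-per-nanosecond
    print(f"{name:<22} -> ~{int(cycles):,} stalled cycles")
```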
And then there's the fact that M.2 SSDs don't actually have as much bandwidth as you think. The M.2 slot only supports up to x4 PCI-E lanes, which, under the PCI-E 4.0 standard, caps bandwidth at ~7.9 GB/s; the far-future PCI-E 5.0 standard will likely raise that to ~15.8 GB/s.
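Here's a rough sketch of where those x4 figures come from. The per-lane transfer rates and the 128b/130b encoding are as I understand the PCI-E spec; treat the result as an approximation of raw link bandwidth, not what a drive actually delivers:

```python
# Approximate raw link bandwidth for an x4 M.2 slot per PCI-E generation.

LANES = 4                              # M.2 tops out at x4
ENCODING = 128 / 130                   # 128b/130b line code used since PCI-E 3.0

per_lane_gt_s = {"PCI-E 4.0": 16.0, "PCI-E 5.0": 32.0}

for gen, gt_s in per_lane_gt_s.items():
    # each transfer carries 1 bit per lane; divide by 8 for bytes
    gb_s = gt_s * ENCODING * LANES / 8
    print(f"{gen} x{LANES}: ~{gb_s:.1f} GB/s")   # ~7.9 and ~15.8 GB/s
```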
As for virtual memory: yes, just as with an HDD, we can use an SSD for virtual memory, but keep in mind it acts as an extension to RAM, not a replacement.
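As a loose illustration of "extension, not replacement", here's a Python sketch that memory-maps an SSD-backed file so the OS pages it in and out on demand, which is roughly the mechanism swap relies on. (A real swap file or partition is configured through the OS, not like this, and the temp-file location landing on the SSD is just an assumption.)

```python
# Memory-map a file on disk: touched pages live in RAM, cold pages stay on
# the SSD, and the OS shuffles them as memory pressure demands.
import mmap, os, tempfile

size = 256 * 1024 * 1024                 # 256 MiB backing file, assumed to be on the SSD
fd, path = tempfile.mkstemp()
os.ftruncate(fd, size)

with mmap.mmap(fd, size) as mm:
    mm[0:4] = b"warm"                    # touching a page faults it into RAM...
    print(mm[0:4])                       # ...subsequent accesses run at RAM speed
    # untouched pages never leave the drive

os.close(fd)
os.remove(path)
```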
Something else of interest: AMD announced a GPU with a pair of PCI-E x4 M.2 SSDs on board in RAID-0. This isn't a replacement for the GPU's RAM (which has a bandwidth measured in hundreds of GB/s), but rather a storage drive (it's presented as such to the OS). The main benefit is that the GPU can get data from that drive without any overhead from the motherboard's PCI-E interface. This resulted in a bump from 900 MB/s talking to the system drive to ~4 GB/s talking to the onboard drive, although it's not specified whether the system drive was also RAID-0.