
For the sake of drive longevity (primarily SSDs), do there exist management algorithms, whether built into the drive's controller or provided at the operating-system level (built-in or third-party), that avoid writing to the same physical block of memory many times? Something like remapping a logical block that has already been overwritten many times to a different, less-worn physical location.

Specifically, I want to know how badly I underestimated the danger of rewriting the same config file about 100 times while trying to get Arch Linux working on an SSD. Thanks in advance!

EDIT: My concern: if I have a 250 GB SSD rated at 150 TBW, then every piece of memory has an expected number of safe full overwrites equal to 150 × 1000 / 250 = 600 (about 614 if the rating uses binary terabytes, 150 × 1024 / 250). So did I just burn through roughly 17% (100 / 600) of the life of the blocks holding my config file?
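To make the worst case in the question explicit, here is a small sketch of the same arithmetic. It assumes, as the question does, that every rewrite lands on the same physical cells; a real SSD controller's wear leveling spreads writes across the drive, so this is an upper bound on the damage, not a prediction.

```python
# Back-of-envelope SSD endurance estimate (worst case: no wear leveling).
# Numbers are the ones from the question, not measurements.

DRIVE_CAPACITY_GB = 250      # drive size in GB
RATED_ENDURANCE_TBW = 150    # manufacturer's terabytes-written rating

# Full-drive overwrites implied by the rating (decimal units: 1 TB = 1000 GB).
full_drive_writes = RATED_ENDURANCE_TBW * 1000 / DRIVE_CAPACITY_GB
print(full_drive_writes)     # 600.0

# Fraction of that per-block budget consumed by 100 rewrites,
# IF every rewrite really hit the same physical block.
rewrites = 100
worst_case_fraction = rewrites / full_drive_writes
print(worst_case_fraction)   # ~0.167, i.e. about 17%
```

With binary units (1 TB = 1024 GB) the budget comes out to 614.4 overwrites instead of 600, which barely changes the conclusion.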
