71

My 128 GB SSD is about a year and a half old now, and I've since upgraded to another drive.

I'd like to clean up the old SSD to ...

  • restore its performance to near-new levels

  • rehabilitate it and generally give it a health check

How do I go about doing this?

Jeff Atwood
  • 24,402

10 Answers

56

On Linux, simply run

hdparm --trim-sector-ranges start:count /dev/sda

passing the block range you want to TRIM in place of start and count, and your SSD device in place of /dev/sda. This has the advantage of being fast and of not writing zeros to the drive. Rather, it simply sends TRIM commands to the SSD controller, letting it know that you don't care about the data in those blocks so it can freely treat them as unused in its garbage collection algorithm.

You probably need to run this command as root. Since this command is extremely dangerous and can immediately cause major data loss, you also need to pass the --please-destroy-my-drive argument to hdparm (I haven't included it in the command line above, to prevent accidental data loss from copy and paste).

In the above command line, /dev/sda should be replaced with the SSD device you want to send TRIM commands to. start is the address of the first block (sector) to TRIM, and count is the number of blocks to mark as free from that starting address. You can pass multiple ranges to the command.
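
For example, a hypothetical invocation that TRIMs two ranges in one call (sectors 0-7 and 1000-1015; these numbers are only placeholders, and you still need to add --please-destroy-my-drive before running it):

hdparm --trim-sector-ranges 0:8 1000:16 /dev/sda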

Having personally done it with hdparm v9.32 on Ubuntu 11.04 on my laptop with a 128GB Crucial RealSSD C300, I have to point out an issue: I was not able to pass the total number of disk blocks (0:250069680) as a single range, presumably because a single TRIM range entry cannot cover that many sectors. I manually (essentially "binary searched" by hand) found a block count that worked (40000) and was able to issue TRIM commands as a sequence of 40000-sector ranges to free up the entire disk. It's possible to do so with a simple shell script like this (tested on Ubuntu 11.04 as root):

 # fdisk -lu /dev/sda

 Disk /dev/sda: 128.0 GB, 128035676160 bytes
 255 heads, 63 sectors/track, 15566 cylinders, total 250069680 sectors
 ...  

To erase the entire drive, take that total number of sectors, replace 250069680 in the following line with it, and run (remembering to add --please-destroy-my-drive):

 # i=0; while [ $i -lt 250069680 ]; do echo $i:40000; i=$((i+40000)); done \
 | hdparm --trim-sector-ranges-stdin /dev/sda

And you're done! You can try reading the raw contents of the disk with hexedit /dev/sda before and after and verify that the drive has discarded the data.
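
If you don't have hexedit handy, a quick spot check of a few sectors works too (the offset 2048 here is just an arbitrary example):

 # dd if=/dev/sda bs=512 skip=2048 count=8 2>/dev/null | hexdump -C

On drives that return zeros for trimmed sectors, the before/after difference is obvious: the output collapses to a single run of zero bytes.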


Of course, even if you don't want to use Linux as the primary OS of the machine, you can leverage this trick by booting off a live CD and running it on the drive.

mmx
  • 3,209
24

First off, let's understand just what causes the performance degradation. Without knowing this, many people will suggest inadequate solutions (as I already see happening). The crux of this entire predicament comes down to the following fact, cited from Wikipedia. Remember it, it's important:

With NAND flash memory, read and programming operations must be performed page-at-a-time while unlocking and erasing must happen in block-wise fashion.

SSDs are made up of NAND flash, and flash consists of "blocks". Each block contains many "pages". For the sake of simplicity, let's imagine we just purchased a shiny new SSD that contains a whopping single block of memory, and that block consists of 4 empty pages.

For the sake of clarity, I differentiate between empty pages, used pages, and deleted pages with ∅, 1, and X. The key is that there is a difference between each of these from the controller's perspective! It is not as simple as 1's and 0's. So, to start, the pages on our fresh drive look like so:

∅, ∅, ∅, ∅ (all empty)

Now, we go to write some data to the drive, and it ends up getting stored in that first page, thus:

1, ∅, ∅, ∅

Next, we write a bit more data, only this time enough that it requires two pages and so it ends up being stored in the 2nd and 3rd page:

1, 1, 1, ∅

We are running out of space! We decide we don't really need the initial data we wrote, so let's delete it to make room:

X, 1, 1, ∅

Finally, we have another large set of data that we need to store which will consume the remaining two pages. THIS IS WHERE THE PERFORMANCE HIT OCCURS IN DRIVES WITHOUT TRIM!! Going from our last state to this:

1, 1, 1, 1

...requires more work than most people realize. Again, this is due to the fact that flash can only erase in a block-wise fashion, not page-wise, which is precisely what the final transition above calls for. The differentiator between TRIM and non-TRIM-based SSDs is when the following work is performed!

Since we need to make use of an empty page and a deleted page, the SSD needs to first read the contents of the entire block into some external storage/memory, erase the original block, modify the contents, and then write those contents back into the block. It's not as simple as a "write"; instead it has become a "read-erase-write". This is a big change, and having it happen while we are writing lots of data is probably the most inopportune time for it to occur. It could all be avoided if that "deleted" page were recovered ahead of time, which is precisely what TRIM is intended to do. With TRIM, the SSD either recovers our deleted pages immediately after the delete or at some other opportune time that its TRIM algorithms deem appropriate. The important part is that with TRIM it doesn't happen while we are in the middle of a write!

Without TRIM, we ultimately can't avoid the above scenario as we fill up our drives with data. Thankfully, some newer SSDs go beyond just TRIM and effectively do the same thing in the background at a hardware level, without needing the ATA commands (some call this garbage collection). But for those of us unlucky enough to have neither, it is important to know that writing zeros to the entire drive is not sufficient for reclaiming original performance! Writing all zeros to the drive does not indicate to the controller that the pages in flash are free for writing. The only way to do that on a drive which does not support TRIM is to invoke the ATA secure erase command using a tool such as HDDErase (via Wayback Machine).
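
If you're unsure whether your drive even advertises TRIM support, one quick way to check under Linux (assuming hdparm is available) is to query the drive's identify data:

 # hdparm -I /dev/sda | grep -i trim

A TRIM-capable drive reports a line like "Data Set Management TRIM supported".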

I believe there were some early SSD's that only supported TRIM upon deleting partitions or upon such things as Windows 7's "diskpart clean all", and not upon the deletion of individual files. This may be a reason why an older drive appeared to regain performance upon executing that command. This seems a bit hazy to me though...

Much of my knowledge of SSD's and hardware/gadgets in general comes from anandtech.com. I thought he had a great writeup explaining all of this but for the life of me I cannot find it!

James
  • 369
17

Apparently the standard recommendation is to do a full-drive write of all zeros. I'm not entirely sure why this helps (don't lots of writes eventually kill SSDs?), but it does seem to be endorsed by the major SSD vendors' support forums.

So, to do that in Windows:

  • start a command prompt with Administrator privileges
  • execute the command diskpart

Once in the utility you'll see a DISKPART> prompt; issue the following commands:

DISKPART> list disk
DISKPART> select disk x

Obviously, MAKE SURE YOU HAVE SELECTED THE CORRECT SSD DRIVE before proceeding!

DISKPART> clean all
DISKPART> create partition primary
DISKPART> format quick fs=NTFS 

The magic here is clean all, which writes all zeros to the drive:

If you specify the all parameter, each and every sector can be zeroed, and all data that is contained on the drive can be deleted.

After doing this, I can confirm that disk performance went up substantially.
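
As a related check, on Windows 7 and later you can confirm whether the OS is sending TRIM at all from the same elevated prompt:

fsutil behavior query DisableDeleteNotify

A result of DisableDeleteNotify = 0 means TRIM is enabled; 1 means it is disabled.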

Jeff Atwood
  • 24,402
16

I also found a tool, SSD Life Pro. It has some bad news for me.

SSDLife Pro -- drive health is bad!

As to how it calculates that, it apparently tries to make a prediction based on the drive's S.M.A.R.T. data:

  • The lifetime of flash memory, on which SSDs are based, is limited to 10,000 writes per cell
  • most of the drives also show data about written and/or read information in their S.M.A.R.T. parameters

This is tricky because it also needs to know when the data was written in order to estimate, but here's the underlying data:

01 Read Error Rate   7
09 Power-on Hours Count  7085
0C Power Cycle Count   318
B8 Initial Bad Block Count   15
C3 Program Failure Block Count   0
C4 Erase Failure Block Count   0
C5 Read Failure Block Count  0
C6 Read Sectors   5468243171
C7 Write Sectors  41640920876
C8 Read Commands  100482453
C9 Write Commands   417315851
CA Error Bits from Flash  345270
CB Read Sectors with Correctable Bit Error  340001
CC Bad Block Full Flag   0
CD Maximum P/E Count Specification   5000
CE Minimum Erase Count   3774
CF Maximum Erase Count   65348
D0 Average Erase Count   4837
D1 Remaining Drive Life  4

The scary number there is Remaining Drive Life, which is 4... percent!
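
If the tool is deriving this from the erase counts, the figure is easy to sanity-check: an average erase count of 4837 against the rated maximum of 5000 P/E cycles leaves roughly 1 - 4837/5000 ≈ 3%, which is in the same ballpark as the reported value.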

And the resulting calculations:

Model: CRUCIAL_CT128M225
Size: 128 GB
Serial number: xxxxxxxxxxxxxxxxx456
Firmware: 2030
Powered on times: 318    
TRIM support in drive/OS: enabled/enabled
Worked time: 9 months 16 days 5 hours
Total data read: 2607.46 GB
written: 19855.94 GB

For the record, this drive was originally purchased in October 2009, so it's a little over a year and a half old.

Jeff Atwood
  • 24,402
6

I've found that writing zeros across the drive is not the best approach. While it might help in the short run, it definitely didn't restore my drive to its full performance (I have a rather old non-TRIM-enabled Intel SSD). After a year or so of fairly heavy usage, I started running into 1-2 second freezes whenever the SSD would attempt to write to any file, even after zeroing the SSD.

The only thing I've found that fully restores the performance is a secure erase using hdparm. I have made it a habit to secure-erase my SSD every 6-12 months when it starts to experience some minor hiccups. Someone on MacRumors made a Mac-specific tutorial on how to do this for Mac devices.

According to all the claims I've seen, a secure erase sends a special command to the SSD that causes it to set all the sectors to zero at a much lower level than just using dd or something similar.
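
For reference, the sequence with hdparm on Linux looks roughly like this (replace /dev/sdX with your SSD; the password "p" is a throwaway that only exists because the ATA security commands require one):

 # hdparm -I /dev/sdX | grep -i frozen
 # hdparm --user-master u --security-set-pass p /dev/sdX
 # hdparm --user-master u --security-erase p /dev/sdX

The first command should report "not frozen" before you go any further (suspending and resuming the machine usually clears a frozen state); the last command destroys everything on the drive, so double-check the device name.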

Bora
  • 801
5

On the Mac, check out the digilloydTools DiskTester. There are also some interesting data points there to see the effects of reconditioning on drive performance.

jerwood
  • 1,527
4

ThinkPads have a hidden BIOS menu (enable it with http://www-307.ibm.com/pc/support/site.wss/MIGR-68369.html) that resets your SSD.

3

To check how much life is left on an SSD (solid-state drive), you will need to install the smartmontools package. It contains two utility programs (smartctl and smartd) to control and monitor storage systems using the Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) built into most modern ATA and SCSI hard disks.

For Ubuntu, Mint, or Debian based distributions

# apt-get install smartmontools

For Fedora, CentOS, or Red Hat based distributions

# yum install smartmontools

The Media_Wearout_Indicator is what you are looking for. A value of 100 means your SSD has 100% of its life left; a lower number means less life remaining.

# smartctl -a /dev/sda | grep Media_Wearout_Indicator

Output from my laptop

233 Media_Wearout_Indicator 0x0032 100 100 000 Old_age Always - 0

If you want to see more details and full attributes from your drive, you can run

# smartctl -d ata -A /dev/sda

Source: namhuy.net/1024/how-to-check-ssd-life-left.html

2

There's now a much better answer for Linux systems, compared to the @LeakyCode answer:

sudo fstrim -v /boot

The "fstrim" command from "util-linx" will run through the filesystem and issue TRIM commands for all unused space. On distributions such as Ubuntu this is disabled by default except for a select list of "known safe" drives from Intel and Samsung. But the command can be run manually on any partition for any drive.

Bryce
  • 2,375
-1

On Windows, simply format the disk (this means erasing it!). A quick format will TRIM the entire drive.
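
For example, from an elevated command prompt (assuming the SSD is volume E:; double-check the drive letter first):

format E: /FS:NTFS /Q

Since Windows 7, formatting a volume on a TRIM-capable SSD sends TRIM for the whole volume.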

Dwedit
  • 238