74

At jobs I worked almost two decades ago, IT experts would keep the size of Windows' main partition (the C: drive) extremely small compared to the other partitions. They argued this kept the PC running at optimum speed without slowing down over time.

But the downside is that a small C: drive fills up easily, and soon you can't install new software because it has run out of space. Even if I install software on the D: drive, part of it is always copied to C:, which fills it up.

My question: is this practice still good? Why is it done? What are its main advantages, if any? One obvious one is that if the primary partition crashes, your data is safe on the secondary.

The reason I am asking is that I am trying to update Visual Studio and I can't, because I have only 24 MB left on the primary partition.

TheTechGuy
  • 2,054

14 Answers

90

At jobs I worked almost two decades ago, IT experts would keep the size of Windows' main partition (the C: drive) extremely small compared to the other partitions. They argued this kept the PC running at optimum speed without slowing down over time. [...] My question: is this practice still good?

In general: No.

In older Windows versions, there were performance problems with large drives (more accurately: with large filesystems), mainly because the FAT filesystem used by Windows did not support large filesystems well. However, all modern Windows installations use NTFS instead, which solved these problems. See for example Does NTFS performance degrade significantly in volumes larger than five or six TB?, which explains that even terabyte-sized partitions are not usually a problem.

Nowadays, there is generally no reason not to use a single, large C: partition. Microsoft's own installer defaults to creating a single, large C: drive. If there were good reasons to create a separate data partition, the installer would offer it - why would Microsoft let you install Windows in a way that creates problems?

The main reason against multiple partitions is that they increase complexity - which is always bad in IT. Complexity creates new problems, such as:

  • you need to decide which files to put onto which drive (and change settings appropriately, click stuff in installers etc.)
  • some (badly written) software may not like being installed on a drive other than C:
  • you can end up with too little free space on one partition, while the other still has free space, which can be difficult to fix

There are some special cases where multiple partitions still make sense:

  • If you want to dual-boot, you (usually) need separate partitions for each OS install (but still only one partition per install).
  • If you have more than one drive (particularly drives with different characteristics, such as SSD & HD), you may want to pick and choose what goes where - in that case it can make sense to e.g. put drive C: on the SSD and D: on the HD.

To address some arguments often raised in favor of small/separate partitions:

  • small partitions are easier to backup

You should really back up all your data anyway, so splitting it across partitions does not really help. Also, if you really need to do it, all backup software I know lets you selectively back up a part of a partition.

  • if one partition is damaged, the other partition may still be ok

While this is theoretically true, there is no guarantee damage will nicely limit itself to one partition (and it's even harder to check and make sure of this in case of problems), so this provides only a limited guarantee. Plus, if you have good, redundant backups, the added safety is usually too small to be worth the bother. And if you don't have backups, you have much bigger problems...

  • if you put all user data on a data partition, you can wipe and reinstall / not backup the OS partition because there is no user data there

While this may be true in theory, in practice many programs will write settings and other important data to drive C: (because they are unfortunately hardcoded to do that, or because you accidentally forgot to change their settings). Therefore IMHO it is very risky to rely on this. Plus, you need good backups anyway (see above), so after reinstallation you can restore the backups, which will give you the same result (just more safely). Modern Windows versions already keep user data in a separate directory (user profile directory), so selectively restoring is possible.


See also Will you install software on the same partition as Windows system? for more information.

The Vee
  • 190
  • 9
sleske
  • 23,525
24

The historical reason for this practice is most likely rooted in the performance properties of rotating magnetic HDDs. The areas on spinning disks with the highest sequential access speed are the outermost tracks (which map to the start of the drive).

If you use the whole drive for your operating system, sooner or later (through updates etc.) your OS files would be spread out all over the disk surface. So, to make sure the OS files physically stay in the fastest disk area, you would create a small system partition at the beginning of the drive, and split the rest of the drive into as many data partitions as you like.

Seek latency also partly depends on how far the heads have to move, so keeping all the small files somewhat near each other also has an advantage on rotational drives.
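
A rough way to see this effect yourself -- a minimal sketch only, assuming Windows, administrator rights, a raw device path such as \\.\PhysicalDrive0 (hypothetical first disk; adjust for your system), and a drive large enough for the offsets used. OS caching and other activity can skew the numbers, so treat the output as an indication only:

    # Compare sequential read throughput near the start of a disk with a region much further in.
    import time

    DEVICE = r"\\.\PhysicalDrive0"   # assumed: first physical disk, needs admin rights
    BLOCK = 4 * 1024 * 1024          # 4 MiB per read (sector-aligned)
    TOTAL = 256 * 1024 * 1024        # read 256 MiB per sample

    def read_speed(offset):
        with open(DEVICE, "rb", buffering=0) as disk:
            disk.seek(offset)
            remaining = TOTAL
            start = time.perf_counter()
            while remaining > 0:
                chunk = disk.read(min(BLOCK, remaining))
                if not chunk:
                    break
                remaining -= len(chunk)
            elapsed = time.perf_counter() - start
        return (TOTAL - remaining) / elapsed / (1024 * 1024)  # MiB/s

    print("outer tracks (start of disk): %6.1f MiB/s" % read_speed(0))
    print("further in (500 GiB offset) : %6.1f MiB/s" % read_speed(500 * 1024**3))

On a rotational drive the first number typically comes out noticeably higher; on an SSD both should be roughly equal.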

This practice has lost all its reason with the advent of SSD storage.

Peter Cordes
  • 6,345
WooShell
  • 458
5

Short answer: Not any more.

In my experience (20+ years of IT administration work), the primary reason for this practice (others are listed below) is that users basically didn't trust Windows with their data and hard drive space.

This is no longer needed. The war is over.

  • Roughly since Win7, Windows has been trustworthy enough to keep your data in the Microsoft-provided folder hierarchy
  • Secondary reasons to have multiple partitions also no longer apply, or only apply in niche cases

Details:

Windows has long been notoriously bad at staying stable over time, cleaning up after itself, keeping the system partition healthy and providing convenient access to user data on it. So users preferred to reject the filesystem hierarchy that Windows provided and roll their own outside of it. The system partition also acted as a ghetto to deny Windows the means to wreak havoc outside of its confines.

  • There are lots of products, including those from Microsoft, that don't uninstall cleanly and/or cause compatibility and stability issues (the most prominent manifestation is leftover files and registry entries all around, and DLL Hell in all of its incarnations). Many files created by the OS are not cleaned up afterwards (logs, Windows updates etc.), leading to the OS taking up more and more space as time goes on. In the Windows 95 and even XP era, advice went as far as suggesting a clean reinstall of the OS once in a while. Reinstalling the OS required the ability to guarantee wiping the OS and its partition (to also clean up any bogus data in the filesystem) -- impossible without multiple partitions. And splitting the drive without losing data is only possible with specialized programs (which may have their own nasty surprises, like bailing out and leaving the data in an unusable state upon encountering a bad sector). Various "clean up" programs alleviated the problem, but, their logic being based on reverse engineering and observed behaviour, they were even more likely to cause a major malfunction that would force a reinstall (e.g. MS's own RegClean utility was called off after the Office 2007 release broke the assumptions about the registry it was based on). The fact that many programs saved their data into arbitrary places made separating user and OS data even harder, pushing users to install programs outside of the OS hierarchy as well.
    • Microsoft tried a number of ways to enhance stability, with varying degrees of success (shared DLLs, Windows File Protection and its successor TrustedInstaller, Side-by-side subsystem, a separate repository for .NET modules with storage structure that prevents version and vendor conflicts). The latest versions of Windows Installer even have rudimentary dependency checking (probably the last major package manager in general use to include that feature).
    • With regard to 3rd-party software compliance with best practices, they maneuvered between maintaining compatibility with sloppily-written but sufficiently used software (otherwise, its users would not upgrade to a new Windows version) -- which led to a mind-boggling amount of kludges and workarounds in the OS, including undocumented API behavior, live patching of 3rd-party programs to fix bugs in them and a few levels of registry and filesystem virtualization -- and forcing 3rd-party vendors into compliance with measures like a certification logo program and a driver signing program (made compulsory starting with Vista).
  • User data being buried under a long path under the user's profile made it inconvenient to browse for and specify paths to it. The paths also used long names, had spaces (a bane of command shells everywhere) and national characters (a major problem for programming languages except very recent ones that have comprehensive Unicode support) and were locale-specific (!) and unobtainable without winapi access (!!) (killing any internationalization efforts in scripts), all of which didn't help matters, either.
    So having your data in the root dir of a separate drive was seen as a more convenient data structure than what Windows provided.
    • This was only fixed in very recent Windows releases. The paths themselves were fixed in Vista, with shorter names and no spaces or localized names. The browsing problem was fixed in Win7, which provided Start Menu entries for both the root of the user profile and most other directories under it, plus things like persistent "Favorite" folders in file selection dialogs, with sensible defaults like Downloads, to save the need to browse for them each time.
  • All in all, MS' efforts bore fruit in the end. Roughly since Win7, the OS, stock and 3rd-party software, including cleanup utilities, have been stable and well-behaved enough, and HDDs large enough, for the OS not to require reinstallation for the entire life of a typical workstation. And the stock hierarchy is usable and accessible enough to actually accept and use it in day-to-day practice.

Secondary reasons are:

  • Early software (filesystem and partitioning support in the BIOS and OSes) was lagging behind hard drives in supporting large volumes of data, necessitating splitting a hard drive into parts to be able to use its full capacity.
    • This was primarily an issue in DOS and Windows 95 times. With the advent of FAT32 (Windows 98) and NTFS (Windows NT 3.1), the problem was largely solved for the time being.
    • The 2TB barrier that emerged recently was fixed by the recent generation of filesystems (ext4 and recent versions of NTFS), GPT and 4k disks.
  • Various attempts to optimize performance. Rotational hard drives are slightly (about 1.5 times) faster at reading data from the outer tracks (which map to the starting sectors) than the inner ones, suggesting locating frequently accessed files like OS libraries and the pagefile near the start of the disk.
    • Since user data is also accessed very often and head repositioning has an even larger impact on performance, outside of very specific workloads, the improvement in real-life use is marginal at best.
  • Multiple physical disks. This is a non-typical setup for a workstation since a modern HDD is often sufficiently large by itself and laptops don't even have space for a 2nd HDD. Most if not all stations I've seen with this setup are desktops that (re)use older HDDs that are still operational and add up to the necessary size -- otherwise, either a RAID should be used, or one of the drives should hold backups and not be in regular use.
    • This is probably the sole case where one gets a real gain from splitting system and data into separate volumes: since they are physically on different hardware, they can be accessed in parallel (unless it's two PATA drives on the same cable) and there's no performance hit on head repositioning when switching between them.
      • To reuse the Windows directory structure, I typically move C:\Users to the data drive (a minimal sketch of the mechanism follows after this list). Moving just a single profile or even just Documents, Downloads and Desktop proved to be inferior, because other parts of the profile and Public can also grow uncontrollably (see the "separate configuration and data" setup below).
    • Though the disks can be consolidated into a spanned volume, I don't use or recommend this because Dynamic Volumes are a proprietary technology that 3rd-party tools have trouble working with and because if any of the drives fails, the entire volume is lost.
  • An M.2 SSD + HDD.
    • In this case, I rather recommend using SSD solely as a cache: this way, you get the benefit of an SSD for your entire array of data rather than just some arbitrary part of it, and what is accelerated is determined automagically by what you actually access in practice.
    • In any case, this setup in a laptop is inferior to just a single SSD, because HDDs are also intolerant of external shock and vibration, which are very real occurrences for laptops.
  • Dual boot scenarios. Generally, two OSes can't coexist on a single partition. This is the only scenario that I know of that warrants multiple partitions on a workstation. And use cases for that are vanishingly rare nowadays anyway because every workstation is now powerful enough to run VMs.
  • On servers, there are a number of other valid scenarios -- but none of them applies to Super User's domain.
    • E.g. one can separate persistent data (programs and configuration) from changing data (app data and logs) to prevent a runaway app from breaking the entire system. There are also various special needs (e.g. in an embedded system, persistent data often resides on a EEPROM while work data on a RAM drive). Linux's Filesystem Hierarchy Standard lends itself nicely to tweaks of this kind.
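
As a rough illustration of the "move C:\Users to the data drive" idea mentioned above -- a minimal sketch of the mechanism only, using stock Windows tools (robocopy, mklink) driven from Python. The paths are assumptions, and this must not be run on a live system with logged-in profiles; in practice it is done offline or from another OS instance:

    # Sketch: relocate C:\Users to a data drive via copy + directory junction.
    import subprocess

    SRC = r"C:\Users"   # assumed source
    DST = r"D:\Users"   # assumed destination on the data drive

    # 1. Copy the profiles, preserving attributes and ACLs.
    #    (robocopy uses exit codes 0-7 for success, so don't treat nonzero as failure.)
    subprocess.run(["robocopy", SRC, DST, "/E", "/COPYALL", "/XJ"])

    # 2. After renaming or removing the original C:\Users (not shown, and only possible
    #    while no profile is in use), create a junction so the old path transparently
    #    points at the new location.
    subprocess.run(["cmd", "/c", "mklink", "/J", SRC, DST], check=True)
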
ivan_pozdeev
  • 1,973
4

Is there a reason to keep Windows' primary partition / drive C: small?

Here are a few reasons to do that:

  1. All system files and the OS itself are on the primary partition. It is better to keep those files separated from other software, personal data and files, simply because constantly meddling in the bootable partition and mixing your files there might occasionally lead to mistakes, like deleting system files or folders by accident. Organization is important. This is why the size of the primary partition is kept low -- to discourage users from dumping all their data in there.
  2. Backups - it's a lot easier, faster, and more effective to back up and recover a smaller partition than a bigger one, depending on the purpose of the system. As noted by @computercarguy in the comments, it is better to back up specific folders and files than to back up a whole partition, unless needed.
  3. It could improve performance, though in a hardly noticeable manner. On NTFS filesystems, there is a so-called Master File Table (MFT) on each partition, which contains metadata about all the files on the partition:

    Describes all files on the volume, including file names, timestamps, stream names, and lists of cluster numbers where data streams reside, indexes, security identifiers, and file attributes like "read only", "compressed", "encrypted", etc.

This might introduce an advantage, though an unnoticeable one, so it can be ignored, as it really doesn't make a difference. @WooShell's answer is more related to the performance issue, even though it is still negligible.

Another thing to note is that, in the case of having an SSD + HDD, it is way better to store your OS on the SSD and all your personal files/data on the HDD. You most likely wouldn't need the performance boost of an SSD for most of your personal files, and consumer-grade solid-state drives usually do not have much space on them, so you'd rather not fill them up with personal files.

Can someone explain why this practice is done and is it still valid?

I have described some of the reasons why this is done. And yes, it is still valid, though it no longer seems to be good practice. The most notable downsides are that end users have to keep track of where applications suggest installing their files and change that location (possible during almost any software installation, especially if an expert/advanced install is an option) so the bootable partition doesn't fill up, since the OS does need to update at times; another downside is that moving files from one partition to another actually copies the data, whereas moving them within the same partition just updates the MFT and the metadata and does not rewrite the whole files.

Some of these unfortunately can introduce more problems:

  1. It does increase the complexity of the structure, which makes it harder and more time-consuming to manage.
  2. Some applications still write files/meta-data to the system partition (file associations, context menus, etc.), even if installed on another partition, which makes it harder to back up and might introduce failures in syncing between partitions. (thanks to @Bob's comment)

To avoid the problem you're having, you need to:

  1. Always try to install applications on the other partitions (change the default installation location).
  2. Make sure to install only important software in your bootable partition. Other not-so-needed and unimportant software should be kept outside of it.
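
To keep an eye on how full each partition is getting (so the bootable partition doesn't run out of space like in the question), here is a minimal sketch using only the Python standard library:

    # List every mounted drive letter with its total/used/free space.
    import os
    import shutil
    import string

    GIB = 1024 ** 3
    for letter in string.ascii_uppercase:
        drive = letter + ":\\"
        if os.path.exists(drive):
            total, used, free = shutil.disk_usage(drive)
            print("%s  total %7.1f GiB  used %7.1f GiB  free %7.1f GiB"
                  % (drive, total / GIB, used / GIB, free / GIB))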

I am also not saying that having multiple partitions with a small primary one is the best idea. It all depends on the purpose of the system, and although it introduces a better way to organize your files, it comes with its downsides, which on current Windows systems outweigh the pros.

Note: And as you've mentioned yourself, it does keep the data on the separate partitions safe in case a failure of the bootable partition occurs.

Fanatique
  • 5,153
4

I'm a software developer, but also have spent time doing "regular" / back-office IT work. I typically keep the OS and applications on drive C:, and my personal files on drive D:. These don't necessarily need to be separate physical drives, but currently I am using a relatively small SSD as my "system" drive (C:) and a "traditional" disk drive (i.e. with rotating magnetic platters) as my "home" drive (D:).

All filesystems are subject to fragmentation. With SSDs this is basically a non-issue, but it is still an issue with traditional disk drives.

I have found that fragmentation can significantly degrade system performance. For example, I've found that a full build of a large software project improved by over 50% after defragmenting my drive -- and the build in question took the better part of an hour, so this was not a trivial difference.
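
If you want a quick look at how fragmented a volume actually is before bothering to defragment, Windows' built-in defrag tool has an analysis-only mode. A minimal sketch driving it from Python -- the drive letters are assumptions to adjust, the /A (analyze) and /V (verbose) switches are the ones I believe the tool accepts, and it needs an elevated prompt:

    # Analyze fragmentation on the system and data volumes without defragmenting them.
    import subprocess

    for volume in ("C:", "D:"):                       # assumed drive letters
        subprocess.run(["defrag", volume, "/A", "/V"])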

Keeping my personal files on a separate volume means, I have found, that:

  • the system volume doesn't get fragmented nearly as quickly (or severely);
  • it is much faster to defragment the two separate volumes than a single volume with everything on it -- each volume takes 20%-25% as long as the combined volume would.

I've observed this on several generations of PCs, with several versions of Windows.

(As a commenter pointed out, this also tends to facilitate making backups.)

I should note that the development tools I use tend to generate a large number of temporary files, which seem to be a significant contributor to the fragmentation issue. So the severity of this issue will vary according to the software you use; you may not notice a difference, or as much of one. (But there are other activities -- for example video / audio composition and editing -- which are I/O intensive, and depending on the software used, may generate large numbers of temporary / intermediate files. My point being, don't write this off as something that only affects one class of users.)

Caveat: with newer versions of Windows (from 8 onward), this has become much more difficult, because user folders on a volume other than C: are no longer officially supported. I can tell you that I was unable to perform an in-place upgrade from Windows 7 to Windows 10, but YMMV (there are a number of different ways to [re]locate a user folder, I don't know which are affected).

One additional note: if you maintain two separate volumes on a traditional drive, you may want to set up a page file on the D: volume. For the reasons described in WooShell's answer, this will reduce seek time when writing to the page file.

David
  • 381
3

The period nearly two decades ago would have been dominated by Windows 98 through XP, including NT4 and 2000 on the workstation/server side.

All hard drives would also be PATA or SCSI cabled magnetic storage, as SSDs cost more than the computer, and SATA did not exist.

As WooShell's answer says, the lower logical sectors of the drive (on the outside of the platter) tend to be the fastest. My 1 TB WDC Velociraptor drives start out at 215 MB/s, but drop down to 125 MB/s towards the end of the drive (the inner sectors), a 40% drop. And this is a 2.5" platter drive, so most 3.5" drives generally see an even larger drop in performance, greater than 50%. This is the primary reason for keeping the main partition small, but it only applies where the partition is small relative to the size of the drive.

The other main reason to keep the partition small was if you were using FAT32 as the file system, which Windows would not format at sizes larger than 32 GB. If you were using NTFS, partitions up to 2 TB were supported prior to Windows 2000, and up to 256 TB afterwards.

If your partition was too small relative to the amount of data that would be written, it was easier for it to become fragmented, and more difficult to defragment. Or you could simply run out of space, like what happened to you. If you had too many files relative to the partition and cluster sizes, managing the file table could be problematic, and it could affect performance. If you are using dynamic volumes for redundancy, keeping the redundant volumes as small as necessary will save space on the other disks.

Today things are different, client storage is dominated by flash SSDs or flash accelerated magnetic drives. Storage is generally plentiful, and it is easy to add more to a workstation, whereas in the PATA days, you might have only had a single unused drive connection for additional storage devices.

So is this still a good idea, or does it have any benefit? That depends on the data you keep and how you manage it. My workstation C: is only 80GB, but the computer itself has well over 12TB of storage, spread across multiple drives. Each partition only contains a certain type of data, and the cluster size is matched to both the data type and the partition size, which keeps fragmentation near 0, and keeps the MFT from being unreasonably large.

The downside is that there is unused space, but the performance increase more than compensates, and if I want more storage I add more drives. C: contains the operating system and frequently used applications. P: contains less commonly used applications, and is a 128 GB SSD with a lower write-durability rating than C:. T: is on a smaller SLC SSD, and contains user and operating system temporary files, including the browser cache. Video and audio files go on magnetic storage, as do virtual machine images, backups, and archived data; these generally have 16 KB or larger cluster sizes, and reads/writes are dominated by sequential access. I run defrag only once a year on partitions with high write volume, and it takes about 10 minutes to do the whole system.
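
For reference, matching the cluster (allocation unit) size to a partition's workload is chosen at format time. A minimal sketch calling PowerShell's Format-Volume from Python -- the drive letter, label and the 16 KB unit size are assumptions for a volume holding large, sequentially accessed files, and formatting of course erases the volume:

    # WARNING: this erases the target volume. Drive letter and label are assumptions.
    import subprocess

    subprocess.run([
        "powershell", "-Command",
        "Format-Volume -DriveLetter V -FileSystem NTFS "
        "-AllocationUnitSize 16384 -NewFileSystemLabel 'Video'"
    ], check=True)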

My laptop only has a single 128 GB SSD and a different use case, so I cannot do the same thing, but I still separate it into 3 partitions: C: (80 GB, OS and programs), T: (8 GB, temp), and F: (24 GB, user files), which does a good job of controlling fragmentation without wasting space, and the laptop will be replaced long before I run out of space. It also makes it much easier to back up, as F: contains the only important data that changes regularly.

Richie Frame
  • 1,980
2

I'm wondering if your decades-old IT department was concerned about backup. Since C: is a boot/OS partition, it would be typical to use some type of image backup, but for a data/program partition, an incremental file + folder backup could be used. Reducing the space used on the C: partition would reduce the time and space needed to back up a system.


A comment on my personal usage of the C: partition. I have a multi-boot system including Win 7 and Win 10 and I don't have any OS on the C: partition, just the boot files. I use Windows system image backup for both Win 7 and Win 10, and Windows system image backup always includes the C: (boot) partition, in addition to the Win 7 or Win 10 partition, so this is another scenario where reducing the amount of data and programs on the C: partition reduces the time and space needed for a system image backup (or restore if needed).
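
For completeness, that kind of system image backup can also be driven from an elevated command line. A minimal sketch from Python, assuming wbadmin (the command-line counterpart of the Windows image backup feature) and a target drive letter that you would adjust:

    # Create a system image backup of the critical (boot/OS) volumes to drive E:.
    # Requires an elevated prompt; the target drive letter is an assumption.
    import subprocess

    subprocess.run([
        "wbadmin", "start", "backup",
        "-backupTarget:E:",   # where the image is stored
        "-allCritical",       # include all volumes required to boot
        "-quiet",             # don't prompt for confirmation
    ], check=True)
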


I'm leaving this section in my answer because of the comments below.

Since my system is multi-boot, rebooting into a different OS makes backing up the data/program partitions simpler, since there's no activity on the partition(s) while they are being backed up. I wrote a simple backup program that does a folder + file copy along with security and reparse info, but it doesn't quite work for Win 7 or Win 10 OS partitions, so I'm using system image backup for the C:, Win 7 and Win 10 partitions.

rcgldr
  • 187
  • 4
2

I used to do some IT work, and here is what I know and remember.

In the past, as others have said, there was a real benefit to having a small C: partition at the start of the disk. Even today on some lower-end laptops this could still be true. Essentially, by having a smaller partition you have less fragmentation, and by keeping it at the start of the disk you have better seek and thus read times. This is still valid today with laptops (usually) and slower "green" hard drives.

Another great benefit that I still use today is having "data" and "OS" on separate drives, or if I can't manage that, separate partitions. There is no real speed increase if using an SSD, or even faster magnetic drives, but there is a huge "easy fix" option when the OS eventually tanks. Just swap the drive or re-ghost that partition. The user's data is intact. When properly set up, between a D: drive and "roaming profiles", reinstalling Windows is a 5-minute non-issue. That makes it a good step one for a level 1 tech.

coteyr
  • 140
2

Here is one reason, but I don't believe it is a valid reason for today's (modern) computers.

This goes back to Windows 95/98 and XP. It probably doesn't apply to Vista and later, but it was a hardware limitation, so running a newer OS on old hardware would still have to deal with the limitation.

I believe the limitation was 2 GB, but there could have been a 1 GB limitation (or perhaps others) at an earlier time.

The issue was (something like) this: the BOOT partition had to be within the first 2 GB (perhaps 1 GB earlier) of the physical space on the drive. It could have been that 1) the START of the BOOT partition had to be within the bounds of the limit, or 2) the ENTIRE boot partition had to be within the bounds of the limit. It's possible that at various times each of those cases applied, but if #2 applied, it was probably short-lived, so I'll assume it's #1.

So, with #1, the START of the BOOT partition had to be within the first 2 GB of physical space. This would not preclude making one big partition for the boot/OS. But the issue was dual/multi-boot. If it ever seemed possible that you might want to dual/multi-boot the drive, there had to be space available below the 2 GB mark to create other bootable partitions on the drive. Since it might not be known at install time whether the drive would ever need another boot partition, say for Linux or some bootable debug/troubleshoot/recovery partition, it was often recommended (and often without knowing why) to install onto a "small" OS boot partition.

Kevin Fegan
  • 4,997
2

No, not with Windows and its major software suites insisting on ties to System: despite installing them to Programs:. (It's an institutionalized necessity the way most OSes are built.) A Data: volume makes sense, but a separate removable drive for your data (or NAS, or selective or incremental backups to such a removable drive) makes even more sense.

Partitioning for multi-OS systems also makes sense, but each partition forces you to select a hard upper storage limit. Generally it's better with separate drives even in this case.

And today, Virtual Machines and Cloud drives supplement many of these choices.

2

There is one particular reason — using volume snapshots.

A volume snapshot is a backup of the whole partition. When you restore from such kind of backup, you rewrite the whole partition, effectively rolling back the system to the previous state.

A system administrator might create such snapshots on a regular basis in preparation for any kind of software failures. They can even store them on another partition of the same drive. That's why you want the system partition to be relatively small.

When using this scheme, users are encouraged to store their data on a network drive. In case of any software problem, a system administrator can just roll back the system to the working state. That would be extremely time-efficient compared to manually investigating the cause of the problem and fixing it.

enkryptor
  • 741
0

I have been programming for nearly half a century. Another reply says the reason is historical, and another long reply mentions multiple physical disks.

I want to emphasize that multiple physical disks is most likely what began the recommendation. More than half a century ago, back when there were no such things as partitions, it was extremely common to use a separate physical drive for the system. The primary reason for that is the physical movement of the heads and the spinning of the drives. Those advantages do not exist for partitions when the physical drive is used often for other things.

Also note that Unix separates the system and the data into separate partitions. There are many good reasons to do that, as explained in many other answers, but for performance, separate physical drives is the primary justification.

Sam Hobbs
  • 119
0

Yes, there is a big benefit for a separate Windows primary partition, and that is the possibility of imaging.

After installing Windows, adding antivirus, and doing some basic configuration, I create an image of the entire partition.

When I want to have a fresh copy of Windows, instead of installing it, I just restore it from that image.

This has the benefit that Windows is already activated, updates are all installed, Windows is configured not to include games, etc., plus the antivirus and two other utilities are already installed and activated. Your special hardware drivers are working. Also, any SSH keys are in place, and account credentials are configured as well. Your registry changes are also restored. Your custom certificates are working. Your desktop, language and other customizations are in place.

You can also keep multiple versions as images. E.g. you can have one vanilla Windows image, one with an antivirus, etc.

An alternative to this imaging approach is creating unattended (automated) installs. That also works well, but you need much more knowledge to achieve the same result than by simply doing one attended install and reusing it as an image. It is not surprising that cloud providers also use image-based deployments.
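
As an illustration of what the capture and restore steps can look like -- a sketch only, assuming you are booted into WinPE (or the Windows partition is otherwise offline), driving DISM from Python, with hypothetical paths and image name:

    # Capture the prepared Windows partition into a reusable WIM image,
    # and (later) apply it back onto a freshly formatted partition.
    # Typically run from WinPE; paths and image name are assumptions.
    import subprocess

    # Capture C:\ into an image file on another drive.
    subprocess.run([
        "dism", "/Capture-Image",
        "/ImageFile:D:\\images\\baseline.wim",
        "/CaptureDir:C:\\",
        "/Name:Baseline",
    ], check=True)

    # Restore: apply the image onto the (re)formatted target partition.
    subprocess.run([
        "dism", "/Apply-Image",
        "/ImageFile:D:\\images\\baseline.wim",
        "/Index:1",
        "/ApplyDir:C:\\",
    ], check=True)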

You can also have a hybrid backup strategy with

  • Having your baseline OS partition imaged and
  • Creating subsequent backups using regular backup tools - (although imaging is also "regular" in Windows, it is part of the OS tools.)

Final three points:

  • You can always remap (most of the) paths on drive C to any other location, even on another partition or drive.
  • After a successful image restore, you can resize that partition if needed.
  • If your imaging program is smart (that is, knows what sector has valid content and what does not, e.g. the default Windows imaging program knows this) OR you zero-fill free space, after compression, the image backup will be much smaller than your 100 GB partition.

I use this image-based flow on AWS, but also locally on physical hardware and in local virtualized instances in VMware Workstation.

TFuto
  • 222
-1

The reason we used to make two partitions was viruses. Some viruses used to overwrite the boot sector and the beginning of the disk.

Backups on users' computers used to boil down to copying the whole program onto a floppy disk (in actuality, not really a backup).

So when a virus "ate up" the beginning of the disk, usually only the system had to be reinstalled.

And if there were no backups, then the recovery of data was easier if the second partition was intact.

So if you have backups, this reason is not valid.