
EDIT4: We now have an answer. Thank you to everyone participating in the test, especially Jamie. Since the answer was deleted, here's the short summary: Windows 10 introduces memory compression, making this kind of testing difficult and partially pointless. If (on Windows 8 x64) you disable the pagefile and write a test app to allocate memory, you'll likely run into allocation failures long before RAM is exhausted (the commit limit is reached first). What Jamie did was write an app that performs millions of small allocations, which did in fact succeed in using every last scrap of RAM with no low memory warning. So the mechanism simply does not exist on Windows 8 anymore; if you disable the pagefile, the first warning you will get is a crash.

As for failing a "normal size" memory allocation while having plenty of commit charge left: that is probably due to address space fragmentation.


With an 8GB or 6GB Windows 8.1 x64 machine, you get a low memory warning if your free RAM drops below about 20% of total system RAM (1.6GB and 1.2GB, respectively) AND there is no more space in the pagefile. If pagefile space is available, memory will be paged out to the pagefile to keep that 20% of RAM in reserve. So if you're playing Skyrim with a lot of mods and get a low memory warning, you'll probably see the pagefile completely full and a bit under 20% of RAM available.

Has anyone tested what the limit is on a 16GB Windows machine? Does it scale without bound, i.e. would you receive a low memory warning at 3.2GB free?

The easiest way to test this is to disable the pagefile altogether (or set it to a low value, like 1GB) and then start several apps with high memory use, and/or just use this little utility: http://www.soft.tahionic.com/download-memalloc/

I'd test this myself, but I have no access to a PC with 16GB (or more!) of RAM.

In Windows 8.1 the actual memory use figure is a bit harder to see, as Performance Monitor does not show you pagefile usage. But Task Manager gives you the "Committed" value, which shows total memory in use (including the pagefile).

Edit: Process Explorer's System Information pane is probably the best way to monitor how memory is being used. Commit Charge and Commit Limit are the relevant bits here; if you have no pagefile, Commit Limit = RAM, and you should get a low memory warning at around 81% commit charge.

Edit2: To make it even more unambiguous, here is pseudocode for the two cases I'm asking about (a runnable sketch of the same checks follows below).

Case A: there is no limit to how large the minimum free memory (available commit charge) can grow before a warning is issued; the reserve simply scales with total memory:

if (CC/CL) > 0.8 then print "low memory warning"

Case B: the minimum free memory (available commit charge) is capped at some absolute value, and no warning is issued before that cap is crossed:

if (CC/CL) > 0.8 and (CL-CC) < 2048MB then print "low memory warning"
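For the record, here's a minimal Win32 sketch (C++) of how those two rules could be checked from user mode. GlobalMemoryStatusEx reports the commit limit (ullTotalPageFile) and the commit space still available (ullAvailPageFile), so the commit charge is their difference. The 0.8 ratio and the 2048MB floor are just the hypothetical thresholds from the pseudocode, not documented Windows values.

    // Minimal sketch: evaluate the two hypothetical warning rules above.
    // The 0.8 ratio and 2048 MB floor come from the pseudocode, not from
    // any documented Windows threshold.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        if (!GlobalMemoryStatusEx(&ms))
            return 1;

        // ullTotalPageFile is the commit limit (RAM + pagefile);
        // ullAvailPageFile is the commit space still available.
        const unsigned long long limit  = ms.ullTotalPageFile;
        const unsigned long long charge = limit - ms.ullAvailPageFile;
        const double ratio = static_cast<double>(charge) / static_cast<double>(limit);

        printf("Commit charge: %llu MB / limit: %llu MB (%.0f%%)\n",
               charge >> 20, limit >> 20, ratio * 100.0);

        // Case A: purely proportional threshold.
        if (ratio > 0.8)
            printf("Case A would warn: commit charge > 80%% of commit limit\n");

        // Case B: proportional threshold AND an absolute headroom floor.
        if (ratio > 0.8 && (limit - charge) < 2048ull * 1024 * 1024)
            printf("Case B would warn: also < 2048 MB of commit headroom\n");
    }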

Edit3: It turns out Windows 10 compresses memory when it runs low enough on actual RAM. This naturally makes the test harder to perform. You can still exhaust the available RAM to be sure, but Windows will compress zero-filled allocations very efficiently, so the test allocations need to contain incompressible data (see the sketch below). On Windows 8.1 x64 and earlier it's a simple task.
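For anyone who wants to repeat the experiment, here's a rough sketch of an exhaustion tester (my own illustration, not Jamie's actual code). It commits memory in 1MB chunks and fills each chunk with pseudo-random bytes so the Windows 10 compressed store can't shrink it. Obviously, run it at your own risk; it is designed to drive the machine into the low memory condition.

    // Rough sketch of an exhaustion test: commit 1 MB chunks and fill each
    // with pseudo-random bytes so Windows 10 memory compression cannot
    // shrink them. WARNING: this deliberately starves the system of memory.
    // Build as x64 to be able to commit more than ~2 GB.
    #include <windows.h>
    #include <cstdio>

    int main()
    {
        const SIZE_T chunkSize = 1 << 20;          // 1 MB per allocation
        unsigned long long total = 0;
        unsigned int seed = 0x12345678;

        for (;;)
        {
            void* p = VirtualAlloc(nullptr, chunkSize,
                                   MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
            if (!p)
                break;                             // commit limit reached

            // Touch every page with xorshift noise; zero-filled pages
            // would compress to almost nothing and defeat the test.
            unsigned int* q = static_cast<unsigned int*>(p);
            for (SIZE_T i = 0; i < chunkSize / sizeof(unsigned int); ++i)
            {
                seed ^= seed << 13; seed ^= seed >> 17; seed ^= seed << 5;
                q[i] = seed;
            }
            total += chunkSize;
        }

        printf("Allocation failed after committing %llu MB\n", total >> 20);
        getchar();   // hold the memory until a key is pressed
        return 0;
    }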

Update:

I'm currently having the misfortune of using a 4GB Windows 7 x64 box. On this system Windows tries to keep ~800MB of physical memory available. This is, of course, the familiar 20% slice. And it hurts a lot worse than the 1.6GB "reserve" on the 8GB box.

I see a moderator deleted my answer in which I summarized Jamie's findings, made with a bespoke program written to exhaust RAM. Thanks for that.

Barleyman

2 Answers


Unfortunately, there are wrong answers to questions like this one all over the Internet. Any answer that doesn't point out the difference between memory that has been allocated and RAM that is being used will be completely wrong: this warning can be produced with any amount of free RAM, and you can find plenty of reports of people getting low memory warnings even though they have lots of RAM free.

The low memory warning has nothing whatsoever to do with how much RAM is free. You can have lots of free RAM and still get the warning, because that RAM is (indirectly) reserved to back allocations that have already been made but have not yet used any RAM; it therefore cannot be used to back subsequent allocations.

For example, suppose you have a Windows 8.1 x64 machine with 16GB of physical RAM and no pagefile. Then imagine you run a program that allocates 15GB but doesn't use any of it yet. If the OS allows the allocation, it will begin giving low virtual memory warnings (because it cannot permit further allocations of backed memory to succeed), even though almost all of the 16GB of RAM is still free.

You have to be very careful to separate used RAM from allocation requests for virtual memory.

Windows will give you that low memory warning when it may need to fail allocations of virtual memory that might require backing store. This can occur regardless of how much free RAM the system has because that RAM can become constrained due to prior allocations that also might require backing store.

For example, if you perform a normal memory allocation of 8GB but haven't touched that allocation yet, essentially no RAM is used by it. But if you have no pagefile, 8GB of free RAM is now under a constraint: it must remain discardable so that it can be used to back that allocation later, should the allocation need it.
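A quick way to see this for yourself (a minimal sketch of mine, not anything from the OS): commit a large block without touching it and watch available commit drop while free physical RAM barely moves; then touch the pages and watch free RAM fall too. The 2GB size is arbitrary.

    // Sketch: committing memory consumes commit charge immediately, but
    // physical RAM is only consumed when the pages are actually touched.
    // Build as x64.
    #include <windows.h>
    #include <cstdio>

    static void report(const char* label)
    {
        MEMORYSTATUSEX ms;
        ms.dwLength = sizeof(ms);
        GlobalMemoryStatusEx(&ms);
        printf("%-10s free RAM: %6llu MB, available commit: %6llu MB\n",
               label, ms.ullAvailPhys >> 20, ms.ullAvailPageFile >> 20);
    }

    int main()
    {
        report("before:");

        // Commit 2 GB but do not touch it: commit charge rises, free RAM doesn't.
        const SIZE_T size = 2ull << 30;
        char* p = static_cast<char*>(
            VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
        if (!p) { printf("allocation refused: out of commit\n"); return 1; }
        report("committed:");

        // Now touch every page: only at this point is RAM actually used.
        for (SIZE_T i = 0; i < size; i += 4096)
            p[i] = 1;
        report("touched:");
    }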

In effect, RAM is like money in the bank and memory allocations are like checks. You can have plenty of money left in the bank, but you may be unable to write any more checks because people might cash checks you've already written. A person can be unable to buy anything no matter how much money they have left in the bank. (Page files are like a line of credit in this analogy.)

It's not possible to understand how memory works on Windows in terms as simple as those in your question. You have to understand the distinction between allocating memory and using RAM.

That said, there is some threshold, possibly a fraction of total RAM, that triggers this warning. But it has nothing to do with whether free RAM is less than that threshold.


There's some misunderstanding here which I'd like to clear up, for the OP's sake.

@David Schwartz's answer, while not complete, is certainly accurate; however, I'd like to add to what he said.

@OP: in 2011 my employer tasked me with finding an answer to this question. After some three months of testing hardware and extensive research, I did find it.

It's nothing to do with the pagefile or application malloc/vmalloc allocations. Mostly the issue is an outdated API and a broken D3D implementation.

The really short answer:

WDDM 2.0 + D3D 11.2 + 4GB GPUs

The missing 2 or 4GB of RAM has been reserved for the GPU. The CPU cannot touch it, therefore it effectively doesn't exist. Regardless of whether the VRAM is used or not, it is reserved and mapped into the GPU's address space.

RAM which is GPU-reserved doesn't show up against the system Commit Limit, because it's not available to the CPU. Nor does it appear against the Commit Charge, because it's not allocated, only reserved.

^^ Russinovich actually talks about this anomaly in Windows Internals, 7th Edition. It's simply an issue with the resource usage API, nothing more.

I read the book back to front trying to figure out why my missing memory was always equal to the amount of VRAM on the GPU.

Beginning around DX11.2, WDDM 2.0 supports unified addressing between CPU RAM and GPU VRAM, meaning the GPU can map RAM into its own address space for zero-copy paging, tiled resources, or buffering.

This is where it all goes south. Dynamic resource allocation was meant to be supported in 8.1, but it didn't get implemented until W10. Dynamic resource allocation is a DX11.x feature which allows the GPU-reserved system memory to be dynamically resized and given back to the CPU during gaming. The "dynamic" part never made it, but reserving system memory did.

What happens is this: with a GPU that has 4GB of VRAM on 8.1, 4GB of system RAM is sliced off and reserved for the GPU, leaving only 4GB for the entire rest of the system.
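There's no user-mode API on 8.1 that I know of to show this reservation directly, but on Windows 10 (DXGI 1.4 / WDDM 2.0) the budgets are queryable. Here's a sketch that reads how much dedicated VRAM and how much shared system RAM the graphics stack has budgeted for adapter 0; the interface doesn't exist on 8.1, so treat it as illustrative only.

    // Sketch (Windows 10 / DXGI 1.4 only): query the dedicated-VRAM and
    // shared-system-RAM budgets that the graphics stack has claimed.
    #include <dxgi1_4.h>
    #include <cstdio>
    #pragma comment(lib, "dxgi.lib")

    int main()
    {
        IDXGIFactory4* factory = nullptr;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
            return 1;

        IDXGIAdapter* adapter = nullptr;
        if (FAILED(factory->EnumAdapters(0, &adapter)))
            return 1;

        IDXGIAdapter3* adapter3 = nullptr;
        if (FAILED(adapter->QueryInterface(IID_PPV_ARGS(&adapter3))))
            return 1;

        DXGI_QUERY_VIDEO_MEMORY_INFO local = {}, nonLocal = {};
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &local);
        adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_NON_LOCAL, &nonLocal);

        // "Local" is dedicated VRAM; "non-local" is system RAM shared with the GPU.
        printf("dedicated VRAM budget:    %llu MB\n", local.Budget >> 20);
        printf("shared system RAM budget: %llu MB\n", nonLocal.Budget >> 20);

        adapter3->Release();
        adapter->Release();
        factory->Release();
    }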

It's fine to run with the pagefile disabled on 8.1/DX11; just remember to add some extra RAM depending on how much VRAM you have.

The other irony here is that because DX9 is 32-bit, those games don't support over 4GB of address space. So 4GB of RAM is reserved, but games like Fallout NV can't even make use of it anyway... lol.

We do quite a bit of platform testing where I work. A rule of thumb I find works is 16GB of RAM with a 4GB GPU; that leaves ~12GB free for DX12 games, which eat RAM.

You could go to W10 (ugh), which doesn't suffer from these issues... :P

Btw, there's also a page in the MSDN D3D library which covers DX9 GPU memory.

"Now, true, when [the memory manager] notices it's short on RAM, it will try to recover some: by paging out long-idle processes, and also not-recently-accessed pages of all processes." - Jamie Hanrahan

That's not entirely correct. An idle process doesn't get paged out; only its working set is trimmed (if possible). Any idle excess pages are then flushed to disk, but there is a minimum working set size which always resides in RAM.

Trimming the working set is a last resort though, and a sign of insufficient RAM. Normally, memory holding mapped/cached files gets released first, from the Standby List.

Btw, on a side note, the Standby List consists almost entirely of files cached from the HDD into RAM. Check the cache after a defrag, or after reading through your 200GB music collection; there will be no free memory left either. :)

OP, if you like, I can send some screenshots/results/conclusions from testing games and other apps, notes, etc. Maybe 8-9 games on half as many platforms... Let me know.

PS: I wrote all of the above from memory, since the testing happened 4-5 years ago, so it's possible (hopefully not) that a couple of the minor points I made aren't exactly 100% word for word as written in the quoted sources.

There is something else I forgot to mention, which is your question of free memory vs. available memory. There is a substantial difference between what is available and what is free; I will cover this in more depth when I have time. But rest assured, having no free memory WILL result in severe performance degradation if a memory-intensive program such as Skyrim is running with ~25GB worth of mods. Processes on 64-bit Windows are limited to 8GB for the working set, but the total address space available to a single process is 8TB. This is called a section object, and it's how AWE works.

Paging still takes place, but it happens entirely within RAM (using pointers, I believe). Whenever pages in the standby list are referenced, a page fault occurs, which is why page faults still happen with no pagefile.

Page faults occur whenever a referenced page is in the standby list; whether the page's actual location is on the HDD or in RAM doesn't really come into it...
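You can actually watch soft faults happen from user mode. Below is a minimal sketch (mine, purely illustrative): GetProcessMemoryInfo's PageFaultCount climbs as freshly committed pages are first touched, even with no pagefile and no disk I/O. These particular faults are demand-zero faults rather than standby-list faults, but both kinds are resolved without touching the pagefile.

    // Sketch: page faults occur even with no pagefile. Touching freshly
    // committed pages raises PageFaultCount via demand-zero (soft) faults,
    // with no disk I/O involved.
    #include <windows.h>
    #include <psapi.h>
    #include <cstdio>
    #pragma comment(lib, "psapi.lib")

    static DWORD faults()
    {
        PROCESS_MEMORY_COUNTERS pmc = {};
        pmc.cb = sizeof(pmc);
        GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc));
        return pmc.PageFaultCount;
    }

    int main()
    {
        const SIZE_T size = 64ull << 20;   // 64 MB
        char* p = static_cast<char*>(
            VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE));
        if (!p) return 1;

        DWORD before = faults();
        for (SIZE_T i = 0; i < size; i += 4096)   // touch each 4 KB page
            p[i] = 1;
        DWORD after = faults();

        printf("soft page faults from touching 64 MB: %lu\n",
               static_cast<unsigned long>(after - before));
    }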

Also, when it comes to a disabled pagefile, there is no virtual address space; there is only address space. Pointers are still used but always point to real memory addresses (well, ideally, but not always), and the commit limit is the same as installed RAM. :)

Journeyman Geek