To start, I know this question is quite old, but in case someone stumbles upon it via Google:
Shared GPU memory is simply the memory a GPU can use once it runs out of dedicated GPU memory. On Windows, this limit is set to 50% of the available RAM. However, keep in mind that "available RAM" excludes memory reserved for hardware or for an integrated GPU, so it is often less than the total physical RAM installed.
For example, if you have a dedicated GPU with 16 GiB of VRAM and 32 GiB of available RAM, your GPU can use up to 16 GiB + (32 GiB / 2) = 32 GiB of memory. Exceeding the VRAM limit (in this case, 16 GiB) incurs a hefty performance penalty when shared memory is used, so it's usually not practical for gaming, though it may still be useful for other compute workloads.
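The arithmetic above can be sketched as a small helper (the function name and units are illustrative; the 50% figure is Windows' default shared-memory limit as described above):

```python
def gpu_memory_budget(dedicated_vram_gib: float, available_ram_gib: float) -> float:
    """Total memory a GPU can address on Windows: its dedicated VRAM
    plus the shared GPU memory limit, which is half the available RAM."""
    shared_limit = available_ram_gib / 2
    return dedicated_vram_gib + shared_limit

# The example from above: 16 GiB of VRAM and 32 GiB of available RAM
print(gpu_memory_budget(16, 32))  # 16 + 16 = 32 GiB total
```

Remember that only the first 16 GiB here are fast VRAM; anything spilling into the shared portion goes over the PCIe bus and is much slower.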
Integrated graphics, on the other hand, reserve a small amount of RAM, typically 128, 256, or 512 MB, that is dedicated solely to the integrated GPU’s purposes. This is often why Task Manager shows a RAM value smaller than what you physically installed. Since integrated graphics have such limited dedicated memory, they rely heavily on shared GPU memory for anything beyond the simplest tasks.
It's important to note that shared GPU memory is not exclusively reserved for the GPU. As the name suggests, it is shared between the GPU and the CPU, and either side can allocate from it as needed. Also note that the shared GPU memory figure is a limit, a maximum amount, not a reservation: a GPU that exceeds its dedicated memory doesn't automatically receive the entire shared amount, nor is the allocation all-or-nothing.
For most users, there is little to no reason to modify this value. If you aren't running GPU-intensive tasks that need it, the shared memory simply won't be used by the GPU, and it isn't reserved either; it doesn't sit unused just in case the GPU might need it.