Pagefile size on Windows Server with dynamic RAM?
-
Is there a best practice for the page file setting on a Windows Server with dynamic RAM? I have it set to system managed, and it's working OK, except that Zabbix reports "Lack of free swap space on <servername>". I could just turn that trigger off for the servers with dynamic RAM, but I wondered what others were doing.
-
What server OS and services is this server running?
-
Also, I assume this is a 64-bit server OS. Once we know what this server runs and what the other settings are (like the crash dump file size), a recommendation can be made.
-
About halfway down this page is a table of the minimum and maximum settings up to Server 2012 R2.
-
It's running Server 2012 R2 64-bit. It's an RDS server that starts at 6GB of RAM and can go to 80GB.
According to that chart, I'd be looking at a 240GB pagefile.
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
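(As a rough sanity check of that chart, the 240GB figure implies an upper bound of 3x RAM at the 80GB dynamic ceiling — a hypothetical helper, not anything from the chart itself:)

```python
# Rough sanity check of the chart's maximum pagefile recommendation.
# Assumption: the chart's upper bound is 3x installed RAM (240GB / 80GB = 3x).
def max_pagefile_gb(ram_gb, multiplier=3):
    """Upper-bound pagefile size (GB) from the chart's rule of thumb."""
    return ram_gb * multiplier

print(max_pagefile_gb(80))  # 240 GB at the dynamic-RAM ceiling
print(max_pagefile_gb(6))   # 18 GB at the starting allocation
```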
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
It's running Server 2012 R2 64-bit. It's an RDS server that starts at 6GB of RAM and can go to 80GB.
According to that chart, I'd be looking at a 240GB pagefile.
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
The 240GB assumes you want the maximum size. I certainly wouldn't start there. Why not start with something way smaller, like 6GB?
-
Also, your crash dump settings need to be determined. What are they configured for?
-
If this system is currently configured for a small memory dump, there is no reason to provide 240GB of space for the dump, as it would never be filled.
If, however, you have this system configured for a full memory dump (and it can scale up to 80GB), then you'd want to provide up to triple the storage space for the crash logs.
Of course, we need to know what the dump settings are.
-
The system is set to "Automatic memory dump".
From that article:
The Automatic memory dump setting at first selects a small paging file size that would accommodate the kernel memory most of the time. If the system crashes again within four weeks, the Automatic memory dump feature selects the page file size as either the RAM size or 32 GB, whichever is smaller.
So I guess I can set it to 6GB or something and then up it if it ever crashes. Thanks for the pointers.
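(The rule quoted above can be sketched like this — a hypothetical helper with sizes in GB, not any real Windows API; the initial "small" size is system-determined, so it's modeled as a nominal value here:)

```python
# Sketch of the Automatic memory dump page file sizing rule quoted above.
# Hypothetical helper, not a real Windows API; sizes are in GB.
def automatic_dump_pagefile_gb(ram_gb, crashed_within_four_weeks):
    """Return the page file size the Automatic memory dump setting would pick."""
    if crashed_within_four_weeks:
        # After a repeat crash: the RAM size or 32 GB, whichever is smaller.
        return min(ram_gb, 32)
    # Initially: a small page file sized for kernel memory; the exact size
    # is system-determined, so it is modeled here as a nominal small value.
    return 6

print(automatic_dump_pagefile_gb(80, crashed_within_four_weeks=True))  # 32
print(automatic_dump_pagefile_gb(6, crashed_within_four_weeks=True))   # 6
```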
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
Also, if you disable the dumps, you'd have nothing to investigate with ;-). So don't go turning them off...
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
The system is set to "Automatic memory dump".
From that article:
The Automatic memory dump setting at first selects a small paging file size that would accommodate the kernel memory most of the time. If the system crashes again within four weeks, the Automatic memory dump feature selects the page file size as either the RAM size or 32 GB, whichever is smaller.
So I guess I can set it to 6GB or something and then up it if it ever crashes. Thanks for the pointers.
You're welcome.
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying: scale appropriately, and don't do something insane like committing 240GB to the PF off the bat.
-
@DustinB3403 said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying: scale appropriately, and don't do something insane like committing 240GB to the PF off the bat.
With a properly configured host reserve, if you follow the (Peak Commit Charge – Physical Memory + some buffer) formula on the host, you'll find that you don't need a page file.
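(As a sketch of how that formula works out — the numbers below are hypothetical, and a zero result means RAM already covers the peak commit charge:)

```python
# Sketch of the (Peak Commit Charge - Physical Memory + buffer) sizing formula.
# All values in GB; the example numbers are hypothetical.
def pagefile_needed_gb(peak_commit_gb, physical_ram_gb, buffer_gb=2):
    """Page file needed to cover peak commit beyond physical RAM, plus a buffer.

    If RAM alone covers the peak commit charge (plus buffer), no page file
    is strictly needed for commit purposes, so return 0.
    """
    return max(0, peak_commit_gb - physical_ram_gb + buffer_gb)

print(pagefile_needed_gb(70, 80))  # 0 -> RAM covers peak commit; no PF needed
print(pagefile_needed_gb(85, 80))  # 7 -> 5 GB over RAM plus a 2 GB buffer
```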
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@DustinB3403 said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying: scale appropriately, and don't do something insane like committing 240GB to the PF off the bat.
With a properly configured host reserve, if you follow the (Peak Commit Charge – Physical Memory + some buffer) formula on the host, you'll find that you don't need a page file.
No, that is page filing for the host, not for the VMs.
-
Think of it like this: separation of the hypervisor and VMs means a VM crash should never affect the host. That's why separation between the two is critical, among the other issues.
The page file needs to be targeted at the VM level first. Then the host must have its own PF settings configured, in case it crashes.
-
OK, I'm totally new to dynamic RAM assignment.
@Mike-Davis what do you gain by using dynamic RAM assignment in this case, unless you're overallocating on the server? You mention that your RDS server can spike to 80 GB of RAM. Do you find that the loss in performance when running with less than that is worth using dynamic RAM (i.e. when you have too much RAM that's not in use on a VM, you can have performance issues on that VM)?
This is an educational question for me.
-
@Dashrender said in pagefile size on Windows Server with dynamic RAM?:
OK, I'm totally new to dynamic RAM assignment.
@Mike-Davis what do you gain by using dynamic RAM assignment in this case, unless you're overallocating on the server? You mention that your RDS server can spike to 80 GB of RAM. Do you find that the loss in performance when running with less than that is worth using dynamic RAM (i.e. when you have too much RAM that's not in use on a VM, you can have performance issues on that VM)?
This is an educational question for me.
The point of dynamic RAM allocation is that he'll have peaks and valleys of users on the RDS system. Keeping the memory static on any VM is almost wasteful, as other VMs could likely use that memory when they have to process reports or whatever else.