pagefile size on Windows Server with dynamic RAM?
-
It's running Server 2012 R2 64bit. It's a RDS server that starts at 6GB RAM and can go to 80GB.
According to that chart, I'd be looking at a 240GB pagefile.
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
It's running Server 2012 R2 64bit. It's a RDS server that starts at 6GB RAM and can go to 80GB.
According to that chart, I'd be looking at a 240GB pagefile.
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
The 240GB is assuming you want to have the maximum size. I certainly wouldn't start there. Why not start with something way smaller, like 6GB?
-
Also, your crash dump settings need to be determined. What are those configured for?
-
If this system is currently configured for a small memory dump, there is no reason to provide 240GB of space for the dump, as it would never be filled.
If, however, you have this system configured for a full memory dump (and it can scale up to 80GB), then you'd want to provide up to triple the storage space for the crash logs.
Of course, we need to know what the dump settings are.
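To put rough numbers on how the dump type drives pagefile size, here's a sketch in Python. The thresholds follow the reasoning in this thread (minidumps are tiny, a full dump needs a RAM-sized file, and the "triple" figure covers keeping several dumps around); they are illustrative assumptions, not an official Microsoft sizing table:

```python
def pagefile_for_dump(dump_type: str, max_ram_gb: int) -> int:
    """Rough pagefile sizing (GB) based on the configured crash-dump type.

    Numbers are assumptions taken from this thread, not official guidance:
    - small:  minidumps are ~256 KB, so a token pagefile suffices
    - kernel: roughly a third of RAM is a common rule of thumb
    - full:   one full dump is the size of all of RAM
    """
    if dump_type == "small":
        return 1
    if dump_type == "kernel":
        return max(1, max_ram_gb // 3)
    if dump_type == "full":
        return max_ram_gb
    raise ValueError(f"unknown dump type: {dump_type}")

print(pagefile_for_dump("small", 80))   # 1
print(pagefile_for_dump("full", 80))    # 80
```

So at 80GB of dynamic RAM, the dump type alone swings the answer from ~1GB to 80GB and up.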
-
The system is set to "Automatic memory dump".
From that article:
The Automatic memory dump setting at first selects a small paging file size, which would accommodate the kernel memory most of the time. If the system crashes again within four weeks, the Automatic memory dump feature sets the page file size to either the RAM size or 32 GB, whichever is smaller.
So I guess I can set it to 6GB or something and then up it if it ever crashes. Thanks for the pointers.
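The quoted rule boils down to simple arithmetic; a sketch of the behavior described in the quote (the 6GB initial size is a placeholder matching this server, not something Windows fixes):

```python
def automatic_dump_pagefile_gb(ram_gb: float, crashed_within_4_weeks: bool,
                               small_default_gb: float = 6.0) -> float:
    """Page file size picked by 'Automatic memory dump', per the quote above.

    Windows starts with a small page file (small_default_gb is just a
    placeholder for this server); after a recent crash it grows the page
    file to min(RAM, 32 GB).
    """
    if crashed_within_4_weeks:
        return min(ram_gb, 32.0)
    return small_default_gb

print(automatic_dump_pagefile_gb(80, False))  # 6.0  (no recent crash)
print(automatic_dump_pagefile_gb(80, True))   # 32.0 (min of 80 and 32)
```

In other words, even after a crash, Windows itself never asks for more than 32GB here, which is a long way from 240GB.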
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
Maybe I don't need crash dumps until the day the thing starts crashing and I need to figure out why... thoughts?
Also if you disable the dumps, you'd have nothing to investigate with ;-). So don't go turning them off...
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
The system is set to "Automatic memory dump".
From that article:
The Automatic memory dump setting at first selects a small paging file size, which would accommodate the kernel memory most of the time. If the system crashes again within four weeks, the Automatic memory dump feature sets the page file size to either the RAM size or 32 GB, whichever is smaller.
So I guess I can set it to 6GB or something and then up it if it ever crashes. Thanks for the pointers.
You're welcome.
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
-
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying scale appropriately; don't do something insane like committing 240GB to the PF off the bat.
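The failure mode in that quote is just commit accounting: an allocation succeeds only if it fits under the commit limit (RAM + page file). A toy model, assuming roughly 25 MB of free commit (a made-up figure consistent with the quote's "first two 10 MB allocations succeed" example):

```python
class CommitLimit:
    """Toy model of Windows commit accounting: an allocation succeeds only
    if it fits within the remaining commit (RAM + page file)."""

    def __init__(self, free_commit_mb: float):
        self.free = free_commit_mb

    def alloc(self, mb: float) -> bool:
        if mb <= self.free:
            self.free -= mb
            return True
        return False  # "out of virtual memory"

# Assume ~25 MB of free commit, as implied by the quote:
vm = CommitLimit(25)
print(vm.alloc(50))                      # False: one 50 MB request fails
print([vm.alloc(10) for _ in range(4)])  # [True, True, False, False]
```

Which is why the floor for the PF is your largest single allocation, not some multiple of RAM.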
-
@DustinB3403 said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying scale appropriately; don't do something insane like committing 240GB to the PF off the bat.
With a properly configured host reserve, if you follow the (Peak Commit Charge – Physical Memory + some buffer) formula on the host, you'll find that you don't need a page file.
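That formula is a one-liner; a sketch (variable names and the 2GB default buffer are mine, not from the article):

```python
def pagefile_needed_gb(peak_commit_gb: float, physical_ram_gb: float,
                       buffer_gb: float = 2.0) -> float:
    """Peak Commit Charge - Physical Memory + buffer, clamped at zero.

    If peak commit never exceeds physical RAM (plus whatever headroom you
    want), the formula says no page file is required at all.
    """
    return max(0.0, peak_commit_gb - physical_ram_gb + buffer_gb)

print(pagefile_needed_gb(40, 80))   # 0.0  -> no page file needed
print(pagefile_needed_gb(90, 80))   # 12.0 -> 90 - 80 + 2
```

The catch, as the next posts point out, is which box you run that formula on: host or guest.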
-
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@DustinB3403 said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis said in pagefile size on Windows Server with dynamic RAM?:
@IRJ said in pagefile size on Windows Server with dynamic RAM?:
@Mike-Davis Check this article out.
Thanks. That was helpful as well.
Basically you don't need a paging file.
That is not at all what the article says...
Minimum PF should be large enough to cover the memory demands of your largest process. In the case of an application allocating 50 MB of memory, the allocation request will fail with an out-of-virtual-memory error. When the application allocates 10 MB chunks, the first two allocations will go through while the rest will fail, as there's no virtual memory (either RAM or PF) to accommodate them.
Which is the same as saying scale appropriately; don't do something insane like committing 240GB to the PF off the bat.
With a properly configured host reserve, if you follow the (Peak Commit Charge – Physical Memory + some buffer) formula on the host, you'll find that you don't need a page file.
No, that is page filing for the host, not for the VMs.
-
Think of it like this: separation of the hypervisor and VMs means a VM crash should never affect the host. That's why separation between the two is critical, among the other issues.
The page file needs to be targeted at the VM level first. Then the host must have its own PF settings configured, in case it crashes.
-
OK I'm totally new to dynamic RAM assignment.
@Mike-Davis what do you gain by using Dynamic RAM assignment in this case, unless you're overallocating on the server? You mention that your RDS server can spike to 80 GB of RAM. Do you find that the loss in performance when running with less than that is worth using dynamic RAM (i.e. when you have too much RAM that's not in use on a VM, you can have performance issues on that VM)?
This is an educational question for me.
-
@Dashrender said in pagefile size on Windows Server with dynamic RAM?:
OK I'm totally new to dynamic RAM assignment.
@Mike-Davis what do you gain by using Dynamic RAM assignment in this case, unless you're overallocating on the server? You mention that your RDS server can spike to 80 GB of RAM. Do you find that the loss in performance when running with less than that is worth using dynamic RAM (i.e. when you have too much RAM that's not in use on a VM, you can have performance issues on that VM)?
This is an educational question for me.
The point of Dynamic RAM allocation is that he'll have peaks and valleys of users on the RDS system. Keeping the memory static on any host is almost wasteful, as other hosts could likely use that memory as well when they have to process reports or whatever else.
-
@Dashrender said in pagefile size on Windows Server with dynamic RAM?:
OK I'm totally new to dynamic RAM assignment.
@Mike-Davis what do you gain by using Dynamic RAM assignment in this case, unless you're overallocating on the server? You mention that your RDS server can spike to 80 GB of RAM. Do you find that the loss in performance when running with less than that is worth using dynamic RAM (i.e. when you have too much RAM that's not in use on a VM, you can have performance issues on that VM)?
This is an educational question for me.
@Dashrender That's a good question. Basically I have two RDS servers: a primary, and one that can be put into service just by setting the IP address. The host doesn't have enough RAM for me to allocate both of them the most RAM they would ever use, but by assigning dynamic RAM to both, they can both stay online and I can keep them patched up to date, etc. It was the fastest way I could think of to be able to put the second RDS server into service should the primary one have an issue.
I'm open to suggestions if anyone has any other ideas.
-
Do you already have a connection broker configured for these hosts?
-
And the reason I ask about the connection broker is that you could just have both RDS servers going at the same time, and have the connection broker send the bulk of the connection requests to your primary RDS; if it hits a threshold, it sends new incoming connections to the backup RDS server.
Since both are configured for dynamic RAM allocation, this would be a better approach overall. It also means you could take down an RDS server, and the connection broker would send all of the connection requests to the backup RDS server.
-
Why do you have two RDS servers on the same VM host? If it's because you only have one VM host and you want to be able to do patches while the system is up and running, I understand that.
In this case where you are bouncing live users from one VM to the other, I understand the use of Dynamic RAM.
I do like Dustin's suggestion on the broker, though I'm not familiar enough with them to know if they can do what you want or not.