Big Virtual Fileservers
-
There is really nothing bad about this from an IT point of view. I mean it creates complexity, which is bad, but it also makes management easier, which is good.
As long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
-
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant on server names and IP addresses. So whatever I do on the back end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
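For anyone setting this up from scratch, the general shape of it in PowerShell looks something like this. This is only a rough sketch, assuming the DFSN cmdlets are available (DFS Namespaces role or RSAT tools) on a domain-joined server; the domain, server, and share names here are made up for illustration.

```powershell
# Hypothetical names throughout -- substitute your own domain, servers, and shares.
# Create a domain-based (Windows Server 2008 mode) namespace root.
New-DfsnRoot -Path "\\corp.example\files" `
             -TargetPath "\\FS01\files" `
             -Type DomainV2

# Publish a share under the namespace; users browse \\corp.example\files\Engineering
# and never see the FS01 server name behind it.
New-DfsnFolder -Path "\\corp.example\files\Engineering" `
               -TargetPath "\\FS01\Engineering"

# Later, when the data moves to a new server, add the new target and drop
# the old one -- the user-facing path never changes.
New-DfsnFolderTarget    -Path "\\corp.example\files\Engineering" -TargetPath "\\FS02\Engineering"
Remove-DfsnFolderTarget -Path "\\corp.example\files\Engineering" -TargetPath "\\FS01\Engineering"
```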
-
@tim_g said in Big Virtual Fileservers:
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant on server names and IP addresses. So whatever I do on the back end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
Yeah, I cannot see any downside to this other than the added complexity of more servers. But from the sounds of it, the improved management will more than offset that added complexity.
-
I'm a fan of smaller vhdx files attached to as few guests as possible (Windows world here).
So licensing is the biggest item I wrestle with.
In terms of total storage attached to a single VM, I tend to stop around 30 TB total. Backups and everything else just become more complicated without good reason.
XenServer (dead ATM) has a 2 TB minus 4 GB limit per vhdx, so to get to 30 TB you need 15 of them.
That's a lot to manage in and of itself.
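Back-of-envelope, the disk count works out like this (plain arithmetic, nothing XenServer-specific):

```powershell
$perDiskGB = 2048 - 4            # XenServer cap: 2 TB minus 4 GB = 2044 GB per virtual disk
$disks     = 15
$totalTB   = $disks * $perDiskGB / 1024
$totalTB                          # ~29.94 TB -- 15 disks gets you just shy of 30 TB
```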
-
@dustinb3403 said in Big Virtual Fileservers:
So licensing is the biggest item I wrestle with.
The host in question has DC licensing with SA, so licensing and all that isn't an issue there. But it could easily be a huge deal breaker otherwise.
-
@jaredbusch said in Big Virtual Fileservers:
@tim_g said in Big Virtual Fileservers:
@jaredbusch said in Big Virtual Fileservers:
long as you are already using good namespace design for users accessing the shares, I can see nothing wrong with this at all.
I've been doing everything with DFS Namespaces to keep shares non-reliant of server names and IP addresses. So whatever I do on the back-end, users won't even know or notice. It's strictly to make IT's life easier and to cause less downtime.
Yeah, I cannot see any downside to this other than the added complexity of more servers. But from the sounds of it, the improved management will more than offset that added complexity.
Yeah, that's my line of thinking too... I don't really see a reason to consider other options, then, if you and others are on board with it. The positive will definitely outweigh the added complexity in my situation. I think once I have it set up, it won't be so bad. It's more of a set-it-and-forget-it thing once replication and backups are in place. Then I test restores and such occasionally, as with everything else.
-
Have you looked into compression, deduplication, and file minification before even considering splitting servers? Especially that last one; it's overlooked most of the time, but it can give you some impressive storage gains. I did a project once where, with minification alone, we went from 1.5 TB down to a little over 200 GB. Of course, it all depends on what kind of files you're dealing with, but if you have typical users, I wouldn't be surprised to see 80 MB PowerPoint presentations. These can easily minify down to 3-4 MB.
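If you want to gauge whether minification is even worth chasing, a quick way to surface the oversized Office files on a share is something like this (the share path is illustrative, and the 25 MB threshold is just a guess at "worth looking at"):

```powershell
# Find the biggest Office documents on a share (path is hypothetical).
Get-ChildItem -Path "\\FS01\Shares" -Recurse -File -Include *.pptx,*.docx,*.xlsx |
    Where-Object Length -gt 25MB |
    Sort-Object Length -Descending |
    Select-Object FullName, @{ Name = 'SizeMB'; Expression = { [math]::Round($_.Length / 1MB, 1) } }
```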
-
I haven't seen the impressive savings that @marcinozga has, but I've seen Server 2012's dedupe feature run about a 30% savings (from 1.5 TB down to ~1 TB).
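For reference, turning it on is only a couple of cmdlets. A rough sketch, assuming Server 2012 or later with the dedup feature available; the `D:` volume is an assumption:

```powershell
# One-time: add the deduplication feature to the server.
Install-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the data volume (D: here is hypothetical).
Enable-DedupVolume -Volume "D:"

# Kick off an optimization pass, then check the savings afterwards.
Start-DedupJob -Volume "D:" -Type Optimization
Get-DedupStatus -Volume "D:" | Select-Object Volume, SavedSpace, OptimizedFilesCount
```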
-
@dafyre I've tried 2012's dedupe feature and had it slowly cause corruption, little by little, to the point where I had to restore the share (internal IT department share).
-
@dustinb3403 said in Big Virtual Fileservers:
@dafyre I've tried 2012's dedupe feature and had it slowly cause corruption, little by little, to the point where I had to restore the share (internal IT department share).
Eww, that's no fun. Never had that issue.
-
@dafyre it wasn't a huge deal, as our share was mostly static: MSIs and flat documentation.
So reverting the share to the previous backup wasn't an issue. We just disabled dedupe on the drive and the problem was gone.
-
@marcinozga said in Big Virtual Fileservers:
Have you looked into compression, deduplication, and file minification before even considering splitting servers? Especially that last one; it's overlooked most of the time, but it can give you some impressive storage gains. I did a project once where, with minification alone, we went from 1.5 TB down to a little over 200 GB. Of course, it all depends on what kind of files you're dealing with, but if you have typical users, I wouldn't be surprised to see 80 MB PowerPoint presentations. These can easily minify down to 3-4 MB.
I am already taking advantage of space-saving technologies where it makes sense.