SAN in an Inverted Pyramid Architecture for Fourteen Physical Servers
-
@Dashrender said:
I'll bow out of this conversation until the OP returns.
I suspect he may run away screaming and never return
-
Partially why I created a new thread, this was way too divergent from the original topic. Although I do feel that it is important to mention that he may be missing "enterprise-like" opportunities within his existing budget.
-
It would be nice to know the specifics of his environment... 14 servers... wow. He said that some of them are running Hyper-V or ESXi... What are the specs on those machines?
What about the rest of his machines?
Could he not potentially build out his own shared storage (a la StarWind) by filling two of his existing servers with large enough drives and using Starwind to replicate between the two?
That helps to mitigate the IPOD (via redundancy), as well as grants him the shared storage.
There's a number of other spins on that that I can think of... Let's hope we haven't scared the OP off, lol.
-
@dafyre said:
Could he not potentially build out his own shared storage (a la StarWind) by filling two of his existing servers with large enough drives and using Starwind to replicate between the two?
Sure, that would support my SAN theory, like I said. It would, however, be a major reduction in guests from 14 to 12, making a SAN or SAN cluster far less attractive as you start to head towards harder and harder to justify numbers of guests.
-
@scottalanmiller said:
Sure, that would support my SAN theory, like I said. It would, however, be a major reduction in guests from 14 to 12, making a SAN or SAN cluster far less attractive as you start to head towards harder and harder to justify numbers of guests.
How would that be a reduction of guests? Let's say he has 7 Hyper-V Servers... he could P2V 2 of his Non-Hyper-V servers and use those for his SAN... That would increase the number of Guest VMs he currently is operating.
Let's assume you meant to write hosts instead of guests. Okay, so he goes from 14 physical servers down to 12 physical servers. However, to build his (2-node) SAN, and take steps towards redundancy, he's only having to buy hard drives, instead of a full-boat SAN system from a vendor.
So let's go crazy and say we're looking at $5,000 worth of hard drives (split evenly between his 2 servers that were taken for storage). That would still be a far cry cheaper than having to purchase 2 x Nimble SAN units for replication.
I see that as a win-win. Because A) It gets him closer to where he wants to go and B) The company only had to spend $5k instead of 35k... (Prices arbitrarily pulled from magic hat).
-
@dafyre said:
How would that be a reduction of guests?
He currently has 14 guests. Converting two of them from guests to storage means 14 - 2 = 12.
-
@dafyre said:
That would increase the number of Guest VMs he currently is operating.
VMs are irrelevant to SAN discussions. It is physical hosts alone that are the determining factor. VMs are a red herring.
-
@dafyre said:
Let's assume you meant to write hosts instead of guests.
I meant physical guests of the SAN; I'm not talking about virtualization or VMs in any way in this discussion.
-
@dafyre said:
Okay, so he goes from 14 physical servers down to 12 physical servers. However, to build his (2-node) SAN, and take steps towards redundancy, he's only having to buy hard drives, instead of a full-boat SAN system from a vendor.
So this is built on the assumption that he can consolidate and that the 14 servers are not needed. The thing is, the idea of using the SAN only makes sense if he can't consolidate and so this would not be an option. If he can consolidate then we are in a completely different discussion.
Two servers as a SAN cluster is definitely an option. But be aware that you will need to replicate the disks, so it is buying twice the disks, and it assumes that his current machines have the necessary storage capacity to do double storage duty. A single SAN approach is only 1x the disk purchase, and it is specifically the cost reduction of the storage that makes a SAN viable. He does not have replicated storage today; the assumption is that he does not need it tomorrow.
-
@dafyre said:
So let's go crazy and say we're looking at $5,000 worth of hard drives (split evenly between his 2 servers that were taken for storage). That would still be a far cry cheaper than having to purchase 2 x Nimble SAN units for replication.
$5,000 is probably low, but we have no idea. For enterprise drives, which is probably what we would be using, that's only 10-20 drives total. Since we need to replicate, that's likely low.
No need for a dual enterprise SAN. Going to a dual enterprise SAN assumes a need for HA, which cannot be achieved with any of the other assumptions in this thread (the need for 14 hosts), since you'd have HA storage and no HA elsewhere. So we are looking only at a single SAN here.
So yes, a single SAN is $30K+. But we are assuming no consolidation ability (or the 14 servers count is the issue) and no need for HA (or we have other issues that can't be addressed here anyway without buying more servers.)
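As a quick sanity check on the "10 - 20 drives" figure above, here is a minimal sketch. The per-drive prices are assumptions pulled for illustration only, not vendor quotes:

```python
# Rough check: how many enterprise drives does a $5,000 budget buy?
# Per-drive prices below are illustrative assumptions, not quotes.
budget = 5_000
high_end_price = 500   # assumed cost of a pricier enterprise drive
low_end_price = 250    # assumed cost of a cheaper enterprise drive

min_drives = budget // high_end_price   # fewest drives the budget covers
max_drives = budget // low_end_price    # most drives the budget covers
print(min_drives, max_drives)           # 10 20
```

Remember that with replication between two StarWind nodes, only half of those drives' capacity ends up usable.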
-
Ah, OK. I read that as guest = VM, lol. To me we are talking about both, as the shared storage would primarily be for his Hyper-V servers (thus my confusion on the term guest).
That being said, I still stand by my win/win statement because it gets him closer to where he wants to be in moving towards redundancy. It is a good business decision because the cost of the hard drives would be negligible, as long as he already has 2 servers that can be recycled as StarWind nodes (especially when compared with the cost of 2 x SAN units in Network RAID 1).
-
@dafyre said:
To me we are talking about both, as the shared storage would primarily be for his Hyper-V servers (thus my confusion on the term guest).
The "shared" storage is for EVERY physical box no matter what role it plays because it is by having so many physical boxes no longer needing their own storage that you get cost savings.
-
@dafyre said:
That being said, I still stand by my win/win statement because it gets him closer to where he wants to be in moving towards redundancy. It is a good business decision because the cost of the hard drives would be negligible, as long as he already has 2 servers that can be recycled as StarWind nodes (especially when compared with the cost of 2 x SAN units in Network RAID 1).
This is only viable if you also assume that the basic premise, the need for 14 servers, is wrong. Starting with "the question is wrong" as a way to get to an answer is never right. I get what you are saying, but this thread is based on a SAN for 14 servers, not a SAN for 12 servers when we have two unused storage servers available.
Like I said, if we can assume that 14 servers are not necessary then none of this SAN discussion makes sense at all. You have to assume that 14 servers is the necessity.
So for your example you would need two new servers at a minimum of $5K each (likely), plus the disks to go in them. So that is suddenly $15K rather than $5K, triple the cost. And I still feel that $5K of drives is likely low, but let's go with it; it is close enough.
Then we have a minimum of $1,500 in Windows licensing to be able to use StarWind. So we are up to $16.5K. Still nowhere near $30K for an enterprise SAN, but you can see how much closer the numbers get.
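The back-of-the-envelope comparison above can be sketched out. All figures are the thread's rough assumptions (server, drive, and licensing prices), not real quotes:

```python
# DIY two-node StarWind build vs. a single enterprise SAN.
# All dollar figures are the thread's illustrative assumptions.

def starwind_build_cost(servers=2, server_cost=5_000,
                        drive_budget=5_000, windows_licensing=1_500):
    """Two new servers, replicated drives, and Windows licenses for StarWind."""
    return servers * server_cost + drive_budget + windows_licensing

enterprise_san_cost = 30_000  # thread's ballpark for one enterprise SAN

diy = starwind_build_cost()
print(f"DIY StarWind build: ${diy:,}")                        # $16,500
print(f"Enterprise SAN:     ${enterprise_san_cost:,}")
print(f"Difference:         ${enterprise_san_cost - diy:,}")  # $13,500
```

The gap is still meaningful, but nothing like the $5K-vs-$30K comparison that started the subthread, which is the point being made here.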
-
Not knowing the capacity or IOPS numbers makes this a lot harder to hypothesize.
-
@scottalanmiller said:
So yes, a single SAN is $30K+. But we are assuming no consolidation ability (or the 14 servers count is the issue) and no need for HA (or we have other issues that can't be addressed here anyway without buying more servers.)
So where is the savings with a $30K SAN? Not to mention the dedicated 1 Gb switches (actually at least two of them), if not 10 Gb or FC switches. Oh, and the adapters for those.
-
@Dashrender said:
So where is the savings with a $30K SAN? Not to mention the dedicated 1 Gb switches (actually at least two of them), if not 10 Gb or FC switches. Oh, and the adapters for those.
Well let's do some standard math and see where we fall. We have to make some assumptions, of course, but for the point of the scenario....
It is all about wasted capacity. How much is in each server? Let's say each has 6x 1TB drives in RAID 6. That's 4TB usable in each host. So 84 drives purchased with 56TB usable.
1/3rd of all drives are for parity. Performance is split up between 14 hosts.
Now how much capacity are we likely using? Chances are a huge percentage is wasted on each server. Typically we use less than half. Chances are we could get by with something like 20TB usable on RAID 6 on a single SAN. And the machines get to split the performance.
A single 24TB SAN might be cheaper than 84TB of raw storage across all boxes while still being just as flexible. That's where the potential cost savings come from and why SANs require a lot of physical hosts connected to them to begin to pay off - it is about getting to a scale where the drives become cheaper through a combination of thin provisioning, storage consolidation and better RAID overhead.
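The wasted-capacity math in this scenario can be laid out explicitly. The numbers (14 hosts, 6 x 1 TB drives each, RAID 6 everywhere) are the illustrative assumptions from the post above:

```python
# Wasted-capacity math: 14 hosts each running 6 x 1 TB in RAID 6,
# versus one consolidated RAID 6 array on a single SAN.
# All figures are the thread's illustrative assumptions.

def raid6_usable_tb(drives, drive_tb=1):
    """RAID 6 loses two drives' worth of capacity to parity."""
    return (drives - 2) * drive_tb

hosts = 14
drives_per_host = 6

total_drives = hosts * drives_per_host               # 84 drives purchased
usable_per_host = raid6_usable_tb(drives_per_host)   # 4 TB usable per host
total_usable = hosts * usable_per_host               # 56 TB usable overall
parity_share = 2 / drives_per_host                   # 1/3 of drives are parity

# If each host actually uses under half its space, ~20 TB usable may do.
# A single RAID 6 array with 24 TB usable needs far fewer 1 TB drives:
san_drives = 24 + 2                                  # 26 drives vs 84
print(total_drives, total_usable, san_drives)        # 84 56 26
```

Consolidating from 84 drives to roughly 26 (plus headroom) is where the SAN's economy of scale comes from, and why it only kicks in with many attached hosts.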
-
Ok, that works if and when it's time to replace a huge percentage of that storage all at once, otherwise you're needing to spend a rather substantial CapEx and wasting the remaining life of the old but not obsolete equipment. I suppose he could sell it and recoup some of that expense, but assuming he keeps the servers and only sells the HDs, what kind of return can he really expect on those?
-
@Dashrender said:
Ok, that works if and when it's time to replace a huge percentage of that storage all at once, otherwise you're needing to spend a rather substantial CapEx and wasting the remaining life of the old but not obsolete equipment.
The other choice is repeatedly investing in technical debt. It remains substantial CapEx in both cases, one is just lower and "at once" instead of larger and "spread out."
It's why good design before purchase is so important; the often-overlooked technical debt component can be crippling.
-
@scottalanmiller said:
@Dashrender said:
Ok, that works if and when it's time to replace a huge percentage of that storage all at once, otherwise you're needing to spend a rather substantial CapEx and wasting the remaining life of the old but not obsolete equipment.
The other choice is repeatedly investing in technical debt. It remains substantial CapEx in both cases, one is just lower and "at once" instead of larger and "spread out."
It's why good design before purchase is so important; the often-overlooked technical debt component can be crippling.
Very true, but the organic growth of many businesses also makes this difficult at best.
You also didn't mention the network infrastructure and adapters needed to run the SAN in your cost analysis, but I'm sure you didn't forget them.
-
@Dashrender said:
You also didn't mention the network infrastructure and adapters needed to run the SAN in your cost analysis, but I'm sure you didn't forget them.
For 14 hosts you typically have a lot of options and not very expensive ones. Lower end GigE switches will normally do the trick just fine. It is a relatively minor cost in most cases where we are not talking about extreme HA but just "SA" IPOD setups.