XenServer 6.5 + Xen Orchestra w/ HA & SAN
-
Thank you for all the awesome details here.
The company does not like downtime. If we have to send employees home, it costs the company thousands. It's high-volume work: billing services, collecting money for clients, as well as an in-house call center.
We have VMs for 2X (Parallels 2X Gateway) hosting apps company-wide.
Just weighing the benefits. For some reason I just like the idea of a SAN for storage.
Yes, I know we would need two XenServer hosts & two SAN units. EMC would be awesome... how is it now under Dell, though?
Or just do DRBD on XenServer using HA-Lizard for 6.5; would this be fast enough? Probably spec out a 2U server with 10-12 drives and a hardware RAID controller... SSD caching??
Why HA, you ask? Well, we don't want a single point of failure. We could have a single server hosting all the VMs, but then we are at high risk from any failure. We can have redundant PSUs and split power, but if the server happens to have a hardware failure, then we are down until a replacement arrives. With a secondary server, the VMs would roll over.
I suppose this is why I was looking at Scale Computing, as it's an already-built solution/package deal with support.
-
@ntoxicator said:
The company does not like downtime.
No company does, but that is emotional. Emotional decision making is a sign of an unhealthy management team: they aren't using their brains, just going on fear. That's bad. The more a company has this fear, the more likely I've found that HA is not for them, because companies that are fearful rarely actually have the need.
-
@ntoxicator said:
Yes, I know we would need two XenServer hosts & two SAN units. EMC would be awesome...
Why would that be awesome? That sounds downright sad to me. What a horrible setup. That's like setting money on fire and getting nothing for it.
-
@ntoxicator said:
Why HA, you ask? Well, we don't want a single point of failure.
Not the same thing. HA doesn't mean not having a single point of failure. A single point of failure can still be HA (EMC VMAX, IBM Z series, Oracle M5000, HP SuperDome, etc.)
Not wanting a SPOF is still an emotional, not logical reaction.
What a company should want is what is profitable. That would be expressed as a cost of downtime and then mitigated by a cost-effective strategy. Nothing more, nothing less... ever. Any deviation from that is an emotional response and likely to waste money (wasted money is no different from downtime).
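To put that reasoning in concrete terms, here is a minimal sketch of the comparison; every number in it is a made-up placeholder for illustration, not anything from this thread:

```python
# Illustrative only: weigh expected annual downtime cost against the cost
# of mitigating it. All figures are hypothetical placeholders.

def expected_downtime_cost(outages_per_year, hours_per_outage, cost_per_hour):
    """Expected annual loss from downtime."""
    return outages_per_year * hours_per_outage * cost_per_hour

# Assumed: one 8-hour outage every two years at $5,000/hour of impact.
risk_cost = expected_downtime_cost(0.5, 8, 5_000)   # $20,000/year

# Assumed: a second host plus replication, amortized to $7,000/year.
mitigation_cost = 7_000

if mitigation_cost < risk_cost:
    print("Mitigation is the profitable choice")
else:
    print("Accepting the risk is the profitable choice")
```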
-
@ntoxicator said:
Just weighing the benefits. For some reason I just like the idea of a SAN for storage.
You should not... again, emotional. A SAN literally adds no benefits. None. Paying for a SAN here is the same as having downtime later. Both are just "money loss" events. If you have the reaction that you want a SAN, you can't have the reaction that you want HA... these are conflicting emotional messages. One says "I want to lose money just to spend it" and the other says "I'm afraid of losing money."
-
@ntoxicator said:
If we have to send employees home, it costs the company thousands. It's high-volume work: billing services, collecting money for clients, as well as an in-house call center.
Of course, but you lose thousands if you buy a SAN too, tens of thousands. So the reaction to "maybe" losing thousands should never be to definitely lose tens of thousands.
The analogy we use here is: Shooting myself in the face today to avoid maybe getting a headache tomorrow.
-
@ntoxicator said:
Or just do DRBD on XenServer using HA-Lizard for 6.5; would this be fast enough? Probably spec out a 2U server with 10-12 drives and a hardware RAID controller... SSD caching??
Fast enough? It's the fastest possible HA option. If it isn't fast enough, you can't even think of mentioning a SAN, which is slower.
Remember, a SAN is just a server with local disks... but one that is far away. So take EVERY fear you have of not having a SAN... then add on the fear of extra networking, extra boxes to fail, extra cost, extra latency, extra bottlenecks....
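To make the latency point concrete, here is a minimal sketch comparing the write and read paths; the latency figures are assumptions picked for illustration, not measurements, and the model is deliberately simplified:

```python
# Simplified latency model in milliseconds; all numbers are assumed.
DISK_WRITE = 5.0   # assumed array write latency (7200 RPM spindles)
DISK_READ = 8.0    # assumed array read latency
NET_RTT = 0.2      # assumed round trip on a dedicated replication/storage link

# DRBD (synchronous replication): a write completes once the local disk and
# the peer, reached over the replication link, have both acknowledged it.
drbd_write = max(DISK_WRITE, NET_RTT + DISK_WRITE)
drbd_read = DISK_READ                 # reads are served from the local copy

# SAN: every read and write has to cross the storage network first.
san_write = NET_RTT + DISK_WRITE
san_read = NET_RTT + DISK_READ

print(f"Replicated local: write {drbd_write:.1f} ms, read {drbd_read:.1f} ms")
print(f"SAN:              write {san_write:.1f} ms, read {san_read:.1f} ms")
```

Under these assumptions the SAN never comes out ahead: at best it ties on writes, and it pays the network round trip on every read, before even counting the extra boxes that can fail.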
-
@ntoxicator said:
I suppose this is why I was looking at Scale Computing, as it's an already-built solution/package deal with support.
Yes, same basics as the XenServer + DRBD setup but with scalability and top-to-bottom integrated support. You can get it (soon) with pure SSD as well (we have one).
-
With one, two, or three hosts, SAN and NAS cannot enter the conversation. It's physically impossible for them to have any place. They act against every possible interest of the design. They are more expensive because of the additional hardware they require, more risky because they add failure points and links to the failure chain while removing no risk at all, and slower by simple physics; that makes them big problems.
When you have four to twenty physical servers, there are niche cases where a SAN might make sense. But very niche, and only to save money.
With twenty or more servers, a SAN will likely save you money, so it is worth considering whether the performance and risk penalties are acceptable.
-
Thanks Scott, makes sense and I understand.
So again, I just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would need well over 5 TB to be safe.
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
-
@ntoxicator said:
Thanks Scott, makes sense and I understand.
So again, I just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would need well over 5 TB to be safe.
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
Is there a specific reason you mention the height of a server? Do you have limitations in your cabinet that restrict how tall your servers can be?
In my office I have one full-height cabinet, 42U worth of space. Could I have gotten away with a half cabinet? Definitely, but I have what I have. I'm using 8U for UPSs, 4U (two 2U servers) for a 10-year-old EHR, 4U (two 2U servers) for hypervisor hosts, 2U for a Drobo, 1U for a network switch, and 1U for a KVM panel. Grand total: 20U. I still have over half the rack left over for expansion. If I were looking at new servers, the height of the server would be the least of my concerns. Granted, you can get 2U servers today that hold nearly 20 disks, but it wouldn't matter to me if it was 4U because I have the space.
The size of the drives you buy will depend on a few factors. What do you need for IOPS? If you have low IOPS needs, why not buy 4 TB drives? Four of them in RAID 10 would give you 8 TB of usable space. If you need higher IOPS, perhaps eight 2 TB drives in RAID 10 would be better, still leaving you with 8 TB of usable space.
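A minimal sketch of that RAID 10 arithmetic (the drive counts and sizes are the ones mentioned above):

```python
# RAID 10 mirrors pairs of drives, so usable capacity is half the raw total,
# while random IOPS scale roughly with the number of spindles.

def raid10_usable_tb(drive_count, drive_tb):
    return (drive_count // 2) * drive_tb

print(raid10_usable_tb(4, 4))   # four 4 TB drives  -> 8 TB usable, 4 spindles
print(raid10_usable_tb(8, 2))   # eight 2 TB drives -> 8 TB usable, 8 spindles
```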
To determine your IOPS requirement, you could get a Dell DPACK run against your system. You just have to ignore the salespeople trying to sell you a SAN and remember, Dell isn't trying to be your friend, they are trying to extract money from you. Ignore their SAN recommendation, and post the results to a place like ML to get help/suggestions on what to get.
Another option would be to hire a firm to do all of this speccing for you. They will run the tools and then recommend a system. This is a situation where you are paying someone for their opinion, preferably someone who isn't trying to sell you anything else. This way they understand that they are making money on their opinion/suggestion, not on the hope of selling you hardware.
-
@ntoxicator said:
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
The spindle speed is just spindle speed. If 7200 RPM drives have the IOPS that you need, they are just as good (or better, since they are cheaper and more reliable).
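If it helps to attach numbers to "the IOPS that you need", here is a minimal sketch using common rule-of-thumb per-drive figures (rough approximations, not vendor specs) and the standard RAID 10 write penalty of 2:

```python
# Rough random IOPS per spinning drive (rule-of-thumb values, not specs).
IOPS_PER_DRIVE = {"7200rpm": 75, "10k": 125, "15k": 175}

def raid10_effective_iops(drive_count, drive_type, write_fraction=0.5):
    """Approximate front-end IOPS of a RAID 10 array for a given read/write mix.
    Each write costs two back-end operations (one per mirror)."""
    raw = drive_count * IOPS_PER_DRIVE[drive_type]
    return raw / ((1 - write_fraction) + 2 * write_fraction)

print(round(raid10_effective_iops(8, "7200rpm")))  # ~400 with a 50/50 mix
print(round(raid10_effective_iops(8, "15k")))      # ~930 with the same layout
```

If a DPACK-style measurement comes back well under the 7200 RPM figure, the cheaper drives are the right answer; spindle speed only matters when the workload actually demands it.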
-
Question:
The HA-Lizard HA-iSCSI interface between the two (2) servers...
A bonded GigE link?
Or a single DA (direct attach) 10GbE cable between the hosts? I suppose bonded GigE is plenty sufficient given disk write speeds (non-SSD)?
-
Can't bond SAN links.
-
I could be incorrect on the terminology or reference. I was reading one of their posted documents.
This would be the DRBD interface / IP link between the two (2) nodes.
-
@ntoxicator said:
This would be the DRBD interface / IP link between the two (2) nodes.
DRBD is the protocol there, not iSCSI. It's not a SAN or anything like that.
-
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per the documentation). Am I wrong here?
So for the Ethernet link between the two nodes, I'm sure GigE is plenty of bandwidth? Or would 10 GigE not hurt?
-
@ntoxicator said:
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per the documentation). Am I wrong here?
So for the Ethernet link between the two nodes, I'm sure GigE is plenty of bandwidth? Or would 10 GigE not hurt?
Well, 10 GbE never hurts...
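As a back-of-the-envelope check on the original question, the replication link only has to keep up with what the array can actually write; the array throughput below is an assumed figure for a spinning-disk RAID 10, purely for illustration:

```python
# Can the replication link keep up with sustained sequential writes?
GIGE_MB_S = 125        # ~1 Gb/s expressed in MB/s, before protocol overhead
TEN_GIGE_MB_S = 1250   # ~10 Gb/s expressed in MB/s

ARRAY_WRITE_MB_S = 400  # assumed sustained write rate of the local array

for name, link_mb_s in (("GigE", GIGE_MB_S), ("10GbE", TEN_GIGE_MB_S)):
    verdict = "keeps up" if link_mb_s >= ARRAY_WRITE_MB_S else "is the bottleneck"
    print(f"{name}: {link_mb_s} MB/s vs {ARRAY_WRITE_MB_S} MB/s -> link {verdict}")
```

Under those assumptions a single GigE link caps large sequential writes well below what the array can absorb, which is why a dedicated 10GbE or direct-attach link between the two nodes is the comfortable choice even though random-I/O workloads rarely saturate GigE.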
-
@ntoxicator said:
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per the documentation). Am I wrong here?
That seems fine. DRBD works differently than iSCSI. They are not related protocols.