XenServer 6.5 + Xen Orchestra w. HA & SAN
-
@ntoxicator said:
Or just do DRBD on XenServer using HA-Lizard for 6.5; would this be fast enough? Probably spec out a 2U server with 10-12 drives with a hardware RAID controller... SSD caching??
Fast enough? It's the fastest possible HA option. If it isn't fast enough, you can't even think of mentioning a SAN, which is slower.
Remember a SAN is just a server with local disks... but one that is far away. So take EVERY fear you have of not having a SAN... then add on the fear of extra networking, extra boxes to fail, extra cost, extra latency, extra bottlenecks....
-
@ntoxicator said:
I suppose this is why I was looking at Scale Computing; As its a already built solution/package deal with support.
Yes, same basics as the XenServer + DRBD but with scalability and top to bottom integrated support. You can get it (soon) with pure SSD as well (we have one.)
-
With one, two or three hosts, SAN and NAS cannot enter the conversation. It's physically impossible for them to have any place; they act against every possible interest of the design. They are more expensive (the additional hardware is unavoidable), riskier (they add failure points and links in the failure chain while removing no risk at all), and slower by simple physics, so they become big problems.
When you have four to twenty physical servers, there are niche cases where a SAN might make sense. But very niche and only to save money.
Twenty or more servers, likely a SAN will save you money, so it is worth considering if the performance and risk penalties are acceptable.
-
@scottalanmiller said:
Thanks Scott, makes sense and I understand.
So again, just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would need well over 5 TB to be safe.
Could use 600GB or larger SAS drives with a hardware RAID controller. Or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
-
@ntoxicator said:
@scottalanmiller said:
Thanks Scott, makes sense and I understand.
So again, just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would need well over 5 TB to be safe.
Could use 600GB or larger SAS drives with a hardware RAID controller. Or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
Is there a specific reason you mention the height of a server? Do you have limitations in your cabinet that limit how tall your servers are?
In my office I have 1 full-height cabinet, 42U worth of space. Could I have gotten away with a half cabinet? Definitely, but I have what I have. I'm using 8U for UPSs, 4U (two 2U servers) for a 10-year-old EHR, 4U (two 2U servers) for hypervisor hosts, 2U for a Drobo, 1U for a network switch and 1U for a KVM panel. Grand total: 20U. I still have over half the rack left over for expansion.
If I were looking at new servers, the height of the server would be the least of my concerns. Granted, you can get 2U servers today that hold nearly 20 disks, but it wouldn't matter to me if it was 4U because I have the space.
The size of the drives you buy will depend on a few factors. What do you need for IOPS? If you have low IOPS needs, why not buy 4 TB drives? Four of them in RAID 10 would give you 8 TB of usable space. If you need higher IOPS, perhaps eight 2 TB drives in RAID 10 would be better, still leaving you with 8 TB of usable space.
To determine your IOPS requirement, you could get a Dell DPACK run against your system. You just have to ignore the salespeople trying to sell you a SAN and remember: Dell isn't trying to be your friend, they are trying to extract money from you. Ignore their SAN recommendation and post the results to a place like ML to get help/suggestions on what to get.
Another option would be to hire a firm to do all of this speccing for you. They will run the tools and then recommend a system. This is a situation where you are paying someone for their opinion, preferably someone who isn't trying to sell you anything else. That way they understand that they are making money on their opinion/suggestion, not on the hope of selling you hardware.
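The RAID 10 arithmetic above can be sanity-checked with a short sketch. The per-drive IOPS figure and the write penalty of 2 are common rules of thumb, not measurements from any specific drive:

```python
# Rough RAID 10 sizing math. Per-drive IOPS is a ballpark assumption:
# roughly 75-100 for a 7200 RPM drive, ~175 for a 15K SAS drive.

def raid10_usable_tb(drives: int, size_tb: float) -> float:
    """RAID 10 mirrors pairs, so usable capacity is half the raw total."""
    return drives * size_tb / 2

def raid10_read_iops(drives: int, per_drive_iops: int) -> int:
    """Reads can be served from every spindle in the array."""
    return drives * per_drive_iops

def raid10_write_iops(drives: int, per_drive_iops: int) -> int:
    """Each write hits both halves of a mirror pair (write penalty of 2)."""
    return drives * per_drive_iops // 2

# Four 4 TB drives vs. eight 2 TB drives: same usable space,
# but the eight-drive array doubles the spindle count.
print(raid10_usable_tb(4, 4.0), raid10_usable_tb(8, 2.0))  # 8.0 8.0
print(raid10_read_iops(8, 75), raid10_write_iops(8, 75))   # 600 300
```

This is why, at equal capacity, more smaller drives wins on IOPS for spinning disks.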
-
@ntoxicator said:
Could use 600GB or larger SAS drives with a hardware RAID controller. Or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
The spindle speed is just spindle speed. If 7200 RPM drives have the IOPS that you need, they are just as good (or better, since they are cheaper and more reliable).
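As a rough sketch of why spindle speed only matters through IOPS: a spinning disk's random IOPS can be estimated from average seek time plus half a rotation. The seek times below are assumed, typical vendor-class figures, not specs for any particular drive:

```python
# Back-of-envelope IOPS estimate for a spinning disk:
#   IOPS ~= 1 / (average seek time + average rotational latency)
# where rotational latency averages half a revolution.

def est_iops(rpm: int, avg_seek_ms: float) -> float:
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return 1000 / (avg_seek_ms + rotational_latency_ms)

print(round(est_iops(7200, 8.5)))    # ~79 for a typical 7200 RPM drive
print(round(est_iops(15000, 3.5)))   # ~182 for a typical 15K SAS drive
```

So a 15K drive buys you roughly 2x the random IOPS per spindle; if the 7200 RPM number already covers your workload, the faster spindle adds cost without benefit.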
-
Question
The HA-Lizard HA-iSCSI interface between the two (2) servers:
a bonded GigE link?
Or a single DA 10GbE cable between the hosts? I suppose bonded GigE is plenty sufficient given (non-SSD) disk write speeds?
-
Can't bond SAN links.
-
I could be incorrect on the terminology or reference; I was reading one of their posted documents.
This would be the DRBD interface / IP link between the two (2) nodes.
-
-
@ntoxicator said:
This would be the DRBD interface / IP link between the two (2) nodes.
DRBD is the protocol there, not iSCSI. It's not a SAN or anything like that.
-
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per documentation). Am I wrong here?
So for the Ethernet link between the two nodes, I'm sure GigE is plenty of bandwidth? Or would 10GigE not hurt?
-
@ntoxicator said:
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per documentation). Am I wrong here?
So for the Ethernet link between the two nodes, I'm sure GigE is plenty of bandwidth? Or would 10GigE not hurt?
Well, 10 GbE never hurts...
-
@ntoxicator said:
OK. Their documentation says the DRBD interface is to be bonded from within XenCenter (per documentation). Am I wrong here?
That seems fine. DRBD works differently than iSCSI. They are not related protocols.
-
@ntoxicator said:
So for the Ethernet link between the two nodes, I'm sure GigE is plenty of bandwidth? Or would 10GigE not hurt?
What is the bandwidth of the storage? You will be limited to GigE throughput, 1 Gb/s, for writes. That's a fraction of what SATA and SAS can do.
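A quick sketch of the bandwidth gap being described, using nominal line rates and ignoring protocol overhead:

```python
# Why a single GigE link caps synchronously replicated write speed:
# a write cannot complete faster than the link can carry it.
# Figures are nominal line rates per link, overhead ignored.

GBIT = 1_000_000_000  # bits per second

def mb_per_sec(bits_per_sec: int) -> float:
    """Convert a nominal bit rate to megabytes per second."""
    return bits_per_sec / 8 / 1e6

links = {
    "GigE":     1 * GBIT,   # single replication link
    "10GbE":   10 * GBIT,
    "SATA III": 6 * GBIT,   # per-disk interface ceiling
    "SAS-3":   12 * GBIT,
}

for name, rate in links.items():
    print(f"{name:8s} ~{mb_per_sec(rate):6.0f} MB/s")
```

GigE tops out around 125 MB/s, which even a single modern spinning disk can approach on sequential writes, so the network, not the array, becomes the write ceiling.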
-
Are you sure that all of this makes sense in your environment? This is two orders of magnitude from where you have been in the past. It isn't normal to have an LA (low availability) environment and get by for a long time and suddenly leap to HA. Why not just go to standard availability? It's a full order of magnitude safer than where you have been in the past, almost zero effort (and no risk from that lack of effort... simple is your friend) and less than half the price of doing HA.
SA is the only clear win... tons safer, tons cheaper. HA is tons safer for sure, but costs more and doesn't make sense given what was deemed acceptable in the past.
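The "order of magnitude" framing can be made concrete: each added nine of availability cuts allowed downtime tenfold. A quick calculation (calendar-year minutes, for illustration):

```python
# Availability in concrete terms: each extra "nine" is an order of
# magnitude less downtime per year.

MIN_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def downtime_minutes(availability: float) -> float:
    """Minutes of allowed downtime per year at a given availability."""
    return MIN_PER_YEAR * (1 - availability)

for a in (0.99, 0.999, 0.9999):
    print(f"{a:.2%} uptime -> {downtime_minutes(a):8.1f} min/year down")
```

The availability tiers (LA, SA, HA) used here are loose shorthand from the discussion, not the formal nines, but the tenfold steps are the same idea.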
-
Looking to develop hardware costs and quotes for new equipment. The company wants to grow to 500+ employees by the year 2020. We need to have reliable servers hosting VMs.
If the primary XenServer host fails... then what? Do we have a day+ of downtime waiting for the server to come back online?
Looking to future-proof so it can stay in production for the next 5 years. I may not be with the company in 5 years, so I want to leave behind a good setup.
-
@scottalanmiller said:
Are you sure that all of this makes sense in your environment? This is two orders of magnitude from where you have been in the past. It isn't normal to have an LA (low availability) environment and get by for a long time and suddenly leap to HA. Why not just go to standard availability? It's a full order of magnitude safer than where you have been in the past, almost zero effort (and no risk from that lack of effort... simple is your friend) and less than half the price of doing HA.
SA is the only clear win... tons safer, tons cheaper. HA is tons safer for sure, but costs more and doesn't make sense given what was deemed acceptable in the past.
Also, to mention: I've been pushing an HA setup to management for a while now, for peace of mind and resting easy at night. Yes, we've been getting along with a low-availability type setup so far. But as resource usage increases, I feel the need for an HA setup with dual nodes.
-
@ntoxicator said:
Looking to develop hardware costs and quotes for new equipment. The company wants to grow to 500+ employees by the year 2020. We need to have reliable servers hosting VMs.
That's fine. But that doesn't suggest HA at all.
-
@ntoxicator said:
If the primary XenServer host fails... then what? Do we have a day+ of downtime waiting for the server to come back online?
This is not how you discuss risk. This tells me that HA is not needed. This isn't how a "we need HA" discussion would start.