XenServer 6.5 + Xen Orchestra w/ HA & SAN
-
@ntoxicator said:
However, for a DRBD setup I would have to spec the server nodes with large drives. Would have to be a nice 2U server with 12+ drives.
If you don't, you'd have to spec two additional NAS units with large drives instead. So while you can present this as a negative, it's actually a positive.
-
@ntoxicator said:
As of right now, we have over 1.5 TB of data being stored. I see this growing much larger over the coming years.
That's tiny.
-
@ntoxicator said:
Not to mention needing a place locally to store snapshots or backup data... and other NFS shares (system imaging, ISO store, misc.).
ISO store you can throw on a cheap NAS or any desktop. Don't use production HA resources for that.
-
@Dashrender said:
I guess first things first... do you need HA? What are your RTO and RPO (recovery time objective, recovery point objective)?
That's definitely the first question. HA with only 200 users? Possible, but very unlikely to be justified. Has someone run the numbers on this?
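Running the numbers is straightforward, even if every input starts as a guess. A minimal sketch, where every figure is a made-up placeholder to be swapped for your actual outage history, hourly loss, and vendor quotes:

```python
# Back-of-the-envelope "run the numbers" sketch for HA.
# Every figure is a hypothetical placeholder -- substitute your own
# outage history, hourly loss, and vendor quotes.

USERS = 200
LOSS_PER_USER_HOUR = 25.0     # assumed revenue lost per idle user-hour
OUTAGES_PER_YEAR = 0.5        # assumed host failures per year
HOURS_DOWN_STANDALONE = 8.0   # assumed restore time without HA
HOURS_DOWN_HA = 0.1           # assumed failover window with HA
HA_PREMIUM_PER_YEAR = 8000.0  # assumed extra annual cost of the HA build

def expected_annual_loss(outages: float, hours_down: float) -> float:
    """Expected yearly downtime cost: frequency x duration x hourly loss."""
    return outages * hours_down * USERS * LOSS_PER_USER_HOUR

risk_standalone = expected_annual_loss(OUTAGES_PER_YEAR, HOURS_DOWN_STANDALONE)
risk_ha = expected_annual_loss(OUTAGES_PER_YEAR, HOURS_DOWN_HA)

print(f"Expected downtime loss, standalone: ${risk_standalone:,.0f}/yr")
print(f"Expected downtime loss, with HA:    ${risk_ha:,.0f}/yr")
print(f"HA justified: {risk_standalone - risk_ha > HA_PREMIUM_PER_YEAR}")
```

If the avoided downtime doesn't clear the HA premium, the "need" for HA is emotional, not financial.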
-
@ntoxicator said:
So I was thinking of a redundant HA setup with some high-end Synology 2U units... using NFS storage as the SR (rather than iSCSI).
Not possible. Synology offers file server HA but not VM backing HA. Synology can't fail over fast enough to keep the VMs from failing. You will be looking at a cluster of EMC or 3PAR type units to be able to do NAS or SAN with HA. Anything less and you generally can't do HA for storage.
-
@ntoxicator said:
However, I was also considering, rather than doing network storage (as I know @scottalanmiller has said otherwise), that I can do a DRBD setup using HA-Lizard on XenServer 6.5.
However, for a DRBD setup I would have to spec the server nodes with large drives. Would have to be a nice 2U server with 12+ drives.
As of right now, we have over 1.5 TB of data being stored. I see this growing much larger over the coming years.
1.5 TB isn't very much data today. You could easily get that with four 500 GB SSDs in RAID 5 (yep, RAID 5 is good again, as long as you're only using it on SSDs).
An 8-drive array of 500 GB SSDs would give you 3.5 TB of usable space, and most 2U chassis can do this pretty easily. If that's a bit over budget, and assuming you still get the IOPS you need, you could go with eight 1 TB drives in RAID 10 and have 4 TB of usable space (the capacity math is sketched at the end of this post).
Is HA really needed? If your business needs really warrant it, i.e., if waiting out the four-hour response time from a tier 1 server vendor costs you thousands or tens of thousands, then definitely scope it out.
In that case you could go with two servers, each with eight 2 TB drives in RAID 10, replicated with DRBD (or StarWind).
I'm guessing that the cost of two NAS/SAN devices and two switches will likely outweigh the cost of two 2U servers that hold 8+ drives, and that two-NAS/SAN solution is still less reliable than the two-server setup.
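For reference, the capacity figures in this post are easy to sanity-check with a simplified model (ignoring hot spares and base-10 vs. base-2 marketing sizes):

```python
# Quick sanity check of the usable-capacity figures above.
# Simplified model: RAID 5 loses one drive to parity, RAID 10 loses half.

def usable_tb(drives: int, size_tb: float, level: str) -> float:
    if level == "raid5":
        return (drives - 1) * size_tb
    if level == "raid10":
        return drives / 2 * size_tb
    raise ValueError(f"unsupported level: {level}")

print(usable_tb(4, 0.5, "raid5"))   # 1.5 TB  (four 500 GB SSDs)
print(usable_tb(8, 0.5, "raid5"))   # 3.5 TB  (eight 500 GB SSDs)
print(usable_tb(8, 1.0, "raid10"))  # 4.0 TB  (eight 1 TB drives)
print(usable_tb(8, 2.0, "raid10"))  # 8.0 TB  (eight 2 TB drives)
```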
-
@scottalanmiller said:
@ntoxicator said:
So I was thinking of a redundant HA setup with some high-end Synology 2U units... using NFS storage as the SR (rather than iSCSI).
Not possible. Synology offers file server HA but not VM backing HA. Synology can't fail over fast enough to keep the VMs from failing. You will be looking at a cluster of EMC or 3PAR type units to be able to do NAS or SAN with HA. Anything less and you generally can't do HA for storage.
Is this because the storage syncing is too slow?
-
@Dashrender said:
@scottalanmiller said:
@ntoxicator said:
So I was thinking of a redundant HA setup with some high-end Synology 2U units... using NFS storage as the SR (rather than iSCSI).
Not possible. Synology offers file server HA but not VM backing HA. Synology can't fail over fast enough to keep the VMs from failing. You will be looking at a cluster of EMC or 3PAR type units to be able to do NAS or SAN with HA. Anything less and you generally can't do HA for storage.
Is this because the storage syncing is too slow?
No, syncing has to be real-time for any of these technologies to work. Synology is DRBD just like Xen would be. The issue is the failover time: the delay in failing over is too great, and the VMs have already failed by the time the failover happens.
HA at one level does not mean it's useful at the next; that's one of the many caveats of the term HA. Synology NFS is perfectly viable HA when used for a /home directory automount, but not for a VM that needs to keep running.
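To make the distinction concrete, here is a minimal sketch: VMs survive only if the storage takeover finishes inside the guest OS disk I/O timeout. The timeout and the per-product takeover windows below are illustrative assumptions, not measured values:

```python
# Illustrative only: whether VMs survive a storage failover depends on
# the takeover window fitting inside the guest OS block I/O timeout.
# All numbers here are assumptions for the example, not vendor specs.

GUEST_DISK_TIMEOUT_S = 30  # assumed guest OS disk I/O timeout

failover_window_s = {
    "local DRBD pair (HA-Lizard style)": 5,
    "dual-controller enterprise array": 2,
    "NAS file-server HA (heartbeat takeover)": 120,
}

for setup, window in failover_window_s.items():
    verdict = "VMs keep running" if window < GUEST_DISK_TIMEOUT_S else "VMs crash"
    print(f"{setup}: ~{window}s takeover -> {verdict}")
```

The exact windows vary by product and configuration; the point is only that a takeover longer than the guest's timeout kills the VM no matter how perfect the replication was.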
-
Thank you for all the awesome details here.
The company does not like downtime... if we have to send employees home, it costs the company thousands. It's high-volume work: billing services, collecting money for clients, as well as an in-house call center.
We have VMs for 2X (Parallels 2X Gateway) hosting apps company-wide.
Just weighing the benefits. I just for some reason like the idea of a SAN for storage.
Yes, I know we would need 2 XenServer hosts & 2 SAN units. EMC would be awesome... how is it now under Dell, though?
Or just do DRBD on XenServer using HA-Lizard for 6.5; would this be fast enough? Probably spec out a 2U server with 10-12 drives and a hardware RAID controller... SSD caching??
Why HA, you ask? Well, we don't want a single point of failure. We could have a single server hosting all the VMs, but then we're at high risk from any failure. We can have redundant PSUs and split power... but if the server happens to have a hardware failure, then we are down until replacement. With a secondary server, the VMs would roll over.
I suppose this is why I was looking at Scale Computing, as it's an already-built solution/package deal with support.
-
@ntoxicator said:
The company does not like downtime...
No company does, but that is emotional. Emotional decision-making is the sign of an unhealthy management team: they aren't using their brains, just going on fear. That's bad. The more a company runs on this fear, the more likely, I've found, that HA is not for them, because companies that are fearful rarely actually have the need.
-
@ntoxicator said:
Yes, I know we would need 2 XenServer hosts & 2 SAN units. EMC would be awesome...
Why would that be awesome? That sounds downright sad to me. What a horrible setup. That's like setting money on fire and getting nothing for it.
-
@ntoxicator said:
Why HA, you ask? Well, we don't want a single point of failure.
Not the same thing. HA doesn't mean not having a single point of failure, and a single point of failure can still be HA (EMC VMAX, IBM Z series, Oracle M5000, HP SuperDome, etc.).
Not wanting a SPOF is still an emotional, not logical, reaction.
What a company should want is what is profitable. That would be expressed as a cost of downtime and then mitigated by a cost-effective strategy. Nothing more, nothing less... ever. Any deviation from that is an emotional response and likely to waste money (wasted money is no different from downtime).
-
@ntoxicator said:
Just weighing the benefits. I just for some reason like the idea of a SAN for storage.
You should not... again, emotional. A SAN literally adds no benefits here. None. Paying for a SAN here is the same as having downtime later; both are just "money loss" events. If you have the reaction that you want a SAN, you can't have the reaction that you want HA... these are conflicting emotional messages. One says "I want to lose money just to spend it" and the other says "I'm afraid of losing money."
-
@ntoxicator said:
...if we have to send employees home, it costs the company thousands. It's high-volume work: billing services, collecting money for clients, as well as an in-house call center.
Of course, but you lose thousands if you buy a SAN too, tens of thousands even. So the reaction to "maybe" losing thousands should never be to definitely lose tens of thousands.
The analogy we use here is: Shooting myself in the face today to avoid maybe getting a headache tomorrow.
-
@ntoxicator said:
Or just do DRBD on XenServer using HA-Lizard for 6.5; would this be fast enough? Probably spec out a 2U server with 10-12 drives and a hardware RAID controller... SSD caching??
Fast enough? It's the fastest possible HA option. If it isn't fast enough, you can't even think of mentioning a SAN, which is slower.
Remember, a SAN is just a server with local disks... but one that is far away. So take EVERY fear you have about not having a SAN... then add on the fear of extra networking, extra boxes to fail, extra cost, extra latency, extra bottlenecks...
-
@ntoxicator said:
I suppose this is why I was looking at Scale Computing, as it's an already-built solution/package deal with support.
Yes, same basics as the XenServer + DRBD setup but with scalability and top-to-bottom integrated support. You can get it (soon) with pure SSD as well (we have one).
-
With one, two or three hosts, SAN and NAS cannot enter the conversation. It's physically impossible for them to have any place; they act against every possible interest of the design. Because they are more expensive (the additional hardware is unavoidable), more risky (they add failure points and links in the failure chain while removing no risk at all), and slower (by simple physics), they become big problems.
When you have four to twenty physical servers, there are niche cases where a SAN might make sense. But very niche, and only to save money.
With twenty or more servers, a SAN will likely save you money, so it is worth considering whether the performance and risk penalties are acceptable.
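As a rough illustration of why host count drives the decision: replicated local storage scales per host, while a redundant SAN is a big fixed cost spread across many hosts. Every price in this sketch is a placeholder, not a quote:

```python
# Hypothetical cost model for the host-count guideline above.
# Every price is a placeholder; plug in real quotes before deciding.

SAN_FIXED_COST = 60000.0         # assumed redundant SAN pair + switches
SAN_PER_HOST = 500.0             # assumed HBA/cabling cost per host
LOCAL_STORAGE_PER_HOST = 5000.0  # assumed local disks + replication license

def san_cheaper(hosts: int) -> bool:
    """Compare total cost of shared SAN vs. replicated local storage."""
    san = SAN_FIXED_COST + hosts * SAN_PER_HOST
    local = hosts * LOCAL_STORAGE_PER_HOST
    return san < local

for hosts in (3, 10, 20, 40):
    print(f"{hosts:>2} hosts: SAN cheaper? {san_cheaper(hosts)}")
```

With these placeholder prices the break-even lands in the low teens of hosts, which is why the SAN conversation only starts well above the two-to-three host scale.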
-
Thanks, Scott. Makes sense and I understand.
So again, we just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would want well over 5 TB to be safe.
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
-
@ntoxicator said:
Thanks, Scott. Makes sense and I understand.
So again, we just have to spec out a 2U server (I assume 2U) with the required disk space, which would hold us out for 5+ years. I am going to say we would want well over 5 TB to be safe.
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
Is there a specific reason you mention the height of the server? Do you have limitations in your cabinet that restrict how tall your servers can be?
In my office I have one full-height cabinet, 42U worth of space. Could I have gotten away with a half cabinet? Definitely, but I have what I have. I'm using 8U for UPSs, 4U (two 2U servers) for a 10-year-old EHR, 4U (two 2U servers) for hypervisor hosts, 2U for a Drobo, 1U for a network switch, and 1U for a KVM panel. Grand total: 20U. I still have over half the rack left over for expansion.
If I were looking at new servers, the height of the server would be the least of my concerns. Granted, you can get 2U servers today that hold nearly 20 disks, but it wouldn't matter to me if it was 4U because I have the space.
The size of the drives you buy will depend on a few factors. What do you need for IOPS? If you have low IOPS needs, why not buy 4 TB drives? Four of them in RAID 10 would give you 8 TB of usable space. If you need higher IOPS, perhaps eight 2 TB drives in RAID 10 would be better, still leaving you with 8 TB usable.
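To see why the read/write mix matters as much as spindle count, here is a rough sketch using a rule-of-thumb per-drive figure (~75 IOPS for a 7200 RPM spindle) and the standard RAID write penalties; treat the numbers as assumptions, not guarantees:

```python
# Rough array IOPS estimate from spindle count, read/write mix, and the
# standard RAID write penalty (RAID 10 = 2, RAID 5 = 4). Rule-of-thumb
# figures only; an actual workload measurement beats this math every time.

def array_iops(drives: int, iops_per_drive: float,
               write_penalty: int, read_fraction: float) -> float:
    raw = drives * iops_per_drive
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# Eight 7200 RPM drives (~75 IOPS each) at a 70/30 read/write mix:
print(round(array_iops(8, 75, 2, 0.7)))  # RAID 10: ~462 IOPS
print(round(array_iops(8, 75, 4, 0.7)))  # RAID 5:  ~316 IOPS
```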
To determine your IOPS requirement, you could have a Dell DPACK run against your system. You just have to ignore the salespeople trying to sell you a SAN and remember: Dell isn't trying to be your friend, they are trying to extract money from you. Ignore their SAN recommendation and post the results to a place like ML to get help/suggestions on what to buy.
Another option would be to hire a firm to do all of this spec'ing for you. They will run the tools and then recommend a system. This is a situation where you are paying someone for their opinion, preferably someone who isn't trying to sell you anything else. That way they understand they are making money on their opinion/suggestion, not on the hope of selling you hardware.
-
@ntoxicator said:
We could use 600 GB or larger SAS drives with a hardware RAID controller, or some enterprise-level 7200 RPM drives? I'm unsure how folks feel about those.
The spindle speed is just spindle speed. If 7200 RPM drives give you the IOPS that you need, they are just as good (or better, since they are cheaper and more reliable).
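The mechanics behind that: a random I/O costs roughly one average seek plus half a rotation, so per-spindle IOPS falls out of simple arithmetic. The seek times below are commonly cited ballpark values, not datasheet figures:

```python
# Per-spindle random IOPS ~= 1 / (average seek time + average rotational
# latency), where rotational latency averages half a revolution.
# Seek times are assumed typical values for illustration only.

def spindle_iops(rpm: int, avg_seek_ms: float) -> float:
    half_rotation_ms = (60000 / rpm) / 2
    return 1000 / (avg_seek_ms + half_rotation_ms)

print(round(spindle_iops(7200, 8.5)))   # ~80 IOPS
print(round(spindle_iops(10000, 4.5)))  # ~133 IOPS
print(round(spindle_iops(15000, 3.5)))  # ~182 IOPS
```

So a 15K drive buys you roughly double the IOPS of a 7200 RPM drive per spindle; if the slower drives already meet the requirement, the extra spindle speed is just extra cost.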