XenServer hyperconverged
-
@fateknollogee said in XenServer hyperconverged:
Are the hosts shown in your example using HW or software RAID?
What is preferred, HW or software RAID?
@dustinb3403 said in XenServer hyperconverged:
@fateknollogee said in XenServer hyperconverged:
Are the hosts shown in your example using HW or software RAID?
What is preferred, HW or software RAID?
Based on the blog post I'm guessing HW RAID
It's not that easy to answer. Phase III will bring multi-disk capability on each host (and even tiering), so you could use any number of disks on each host for an inception-like scenario (replication at the host level plus the cluster level). But obviously, hardware RAID is perfectly fine too.
-
When a host goes down and writes are briefly paused, are those writes cached and then written once the system determines what to do?
Or are those writes lost?
-
@olivier Thank you for clarifying. I'm assuming this applies much the same to a 2-node cluster? One node goes down, writes are briefly suspended, writes resume on the active node, the failed node is replaced, then the rebuild/healing process continues on the new node. How long do you expect rebuilds to take? I'm sure that's a loaded question because it's data dependent.....
-
@dustinb3403 No writes are lost; it's handled at the VM level (the VM's OS waits for an "ack" from the virtual HDD, and since it isn't answering, it keeps waiting). Basically, the cluster says: "write commands won't be acknowledged until we've figured it out".
So it's safe
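The behavior described above can be modeled with a toy sketch (this is an illustration of the blocking semantics, not XOSAN's actual code; all names are made up): while the cluster is deciding, a write neither fails nor gets lost, the caller simply blocks until the store acknowledges again.

```python
import threading

class PausableStore:
    """Toy model of the described behavior: while the cluster is
    "figuring it out", writes are neither lost nor rejected --
    the writer simply blocks until the cluster acks again."""
    def __init__(self):
        self._available = threading.Event()
        self._available.set()
        self.data = {}

    def pause(self):
        # A host went down; the cluster stops acking writes.
        self._available.clear()

    def resume(self):
        # The cluster has decided; pending writes proceed.
        self._available.set()

    def write(self, key, value):
        self._available.wait()  # the VM's OS waits here for the "ack"
        self.data[key] = value
        return "ack"
```

A writer started during a pause just waits and completes after `resume()`, which is why no data is lost from the VM's point of view.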
-
@r3dpand4 This is a good question. We chose to use "sharding", which means splitting your data into 512MB blocks to be replicated or spread.
So the heal time is the time needed to fetch all new/missing 512MB blocks written since the node went down. It's pretty fast in the tests I've done.
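A quick back-of-the-envelope sketch of why sharding keeps heals fast (the 512 MB shard size comes from the post above; the function is purely illustrative): only shards touched while the node was down need to be re-fetched, so the heal cost scales with the amount of changed data, not the volume size.

```python
SHARD_SIZE_MB = 512  # shard size mentioned in the thread

def shards_to_heal(changed_bytes):
    """Estimate how many 512 MB shards a rejoining node must
    re-fetch: only shards modified during the outage need healing."""
    shard_bytes = SHARD_SIZE_MB * 1024 * 1024
    return -(-changed_bytes // shard_bytes)  # ceiling division

# e.g. 10 GiB written while the node was down:
print(shards_to_heal(10 * 1024**3))  # -> 20 shards
```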
-
@olivier So essentially just deduplication?
-
@r3dpand4 That has nothing to do with deduplication. There are just chunks of files that are replicated or distributed-replicated (or even dispersed, in disperse mode).
By the way, nobody talks about this mode, but it's my favorite. It's perfect, especially for large HDDs, thanks to the ability to lose any n disks in your cluster. E.g. with 6 nodes:
This is disperse 6 with redundancy 2 (like RAID6, if you prefer). Any 2 XenServer hosts can be destroyed and it will continue to work as usual:
And in this case (6 with redundancy of 2), you'll be able to address 4/6ths of your total disk space!
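The capacity math from the disperse example above can be written down directly (function names are just for illustration): with n nodes and redundancy r, (n - r) nodes' worth of space holds data and r nodes' worth holds parity.

```python
def disperse_usable_fraction(nodes, redundancy):
    """Usable fraction of raw capacity in a disperse volume:
    (n - r) of every n bricks carry data, r carry redundancy."""
    return (nodes - redundancy) / nodes

def disperse_usable_space(nodes, redundancy, per_node_gb):
    """Total addressable space for the cluster."""
    return (nodes - redundancy) * per_node_gb

# Disperse 6 with redundancy 2, as in the post:
print(disperse_usable_fraction(6, 2))       # 4/6ths of raw capacity
print(disperse_usable_space(6, 2, 100))     # 400 GB usable from 600 GB raw
```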
-
Here it is with improved pics of XOSAN; I suppose it's clearer now:
What do you think?
-
@olivier That picture makes it much clearer.
Each server provides 100GB, and the servers are either standalone systems (disperse) or paired (dist. repl).
-
@dustinb3403 That's it, indeed:
- first picture: you can lose up to 2 hosts (any of them)
- second picture: you can lose up to 3 hosts (1 per pair)
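The difference in failure tolerance between the two layouts can be summed up in a small sketch (illustrative only; names are mine): disperse tolerates *any* r host failures, while distributed-replicated pairs tolerate up to one host per pair, so the worst case is losing both hosts of a single pair.

```python
def disperse_tolerance(redundancy):
    """Disperse: any `redundancy` hosts may fail, no matter which."""
    return redundancy

def dist_replicated_tolerance(pairs):
    """Distributed-replicated (replica 2): best case one host per
    pair survives, but losing both hosts of one pair loses data,
    so only 1 arbitrary failure is guaranteed safe."""
    return {"best_case": pairs, "guaranteed": 1}

# 6 hosts: disperse 6 + redundancy 2 vs. 3 mirrored pairs
print(disperse_tolerance(2))          # any 2 hosts can fail
print(dist_replicated_tolerance(3))   # up to 3 (1 per pair), 1 guaranteed
```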
-
What is the difference in performance between the two options?
-
@fateknollogee said in XenServer hyperconverged:
What is the difference in performance between the two options?
Disperse requires more compute because it's a complex algorithm (based on Reed-Solomon), so it's slower than replication, but that's not a big deal if you are using HDDs.
However, with SSDs, disperse will be the bottleneck, so it's better to go with replication.
The ideal solution? Disperse for large storage space on HDDs, and replicated on SSDs… at the same time (using tiering, which will be available soon). Chunks that are read often will be promoted to the replicated SSD storage automatically (until it's almost full). If more frequently accessed chunks appear later, some chunks will be demoted to the "slower" tier and replaced by the new hot ones.
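The promote/demote policy described above can be sketched as a toy model (this is my own illustration of the idea, not XOSAN's implementation; the class and heat-counting scheme are assumptions): hot chunks fill the fast tier until it's almost full, then a hotter chunk evicts the coldest one.

```python
class TierCache:
    """Toy model of the tiering policy described: frequently read
    chunks are promoted to a fixed-size fast (SSD) tier; once it is
    full, a hotter chunk demotes the coldest resident chunk."""
    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.heat = {}          # chunk_id -> access count
        self.fast_tier = set()  # chunk_ids currently on the SSD tier

    def access(self, chunk_id):
        self.heat[chunk_id] = self.heat.get(chunk_id, 0) + 1
        if chunk_id in self.fast_tier:
            return "fast"
        if len(self.fast_tier) < self.capacity:
            self.fast_tier.add(chunk_id)          # free space: promote
        else:
            coldest = min(self.fast_tier, key=lambda c: self.heat[c])
            if self.heat[chunk_id] > self.heat[coldest]:
                self.fast_tier.remove(coldest)    # demote cold chunk
                self.fast_tier.add(chunk_id)      # promote hot chunk
        return "slow"  # this access was still served by the slow tier
```

Repeated reads of the same chunk raise its heat until it displaces a colder resident, which matches the "promoted until almost full, then demote the cold ones" behavior described.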
-
We validated our first provider: https://xen-orchestra.com/blog/xosan-on-10gbps-io/
Next? Probably a hardware provider
-
@olivier said in XenServer hyperconverged:
We validated our first provider: https://xen-orchestra.com/blog/xosan-on-10gbps-io/
Next? Probably a hardware provider
Congrats