Should I bother to learn Windows Storage Spaces and what about Glances export?
-
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
SOFS does that. But you can use a large range of RAID or RAIN systems to handle it. Gluster, CEPH, Starwind, DRBD, HAST, etc.
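As an aside for anyone following along: on a Windows failover cluster, that "patch and reboot without downtime" flow is typically just a drain and resume on each node in turn. A minimal PowerShell sketch, assuming a hypothetical node name:

```powershell
# Drain roles (live migrate VMs and move CSV ownership) off the node:
Suspend-ClusterNode -Name "HV-NODE1" -Drain -Wait

# ...patch and reboot HV-NODE1 here...

# Bring the node back into rotation and fail roles back:
Resume-ClusterNode -Name "HV-NODE1" -Failback Immediate
```

Repeat per node, and the cluster as a whole never goes down.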
-
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
Only if you are looking for HA/Failover. On a single host with local storage you wouldn't need any of this.
He already put in the "to get around" piece.
-
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
Only if you are looking for HA/Failover. On a single host with local storage you wouldn't need any of this.
That is true. Most cars have 4 tires. Also true.
The difference being that while most cars should have four tires, most workloads should not have zero-downtime maintenance. Some workloads should, but not the majority.
-
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
This is the 2-node shared SAS Hyper-V/Storage Spaces cluster mentioned above that runs a 15-18 seat accounting firm.
There are two types of virtual disks set up on Storage Spaces: one with a 64KB interleave and the storage stack configured to match, and the other with the standard 256KB interleave and storage stack defaults. There are six to eight server-based virtual machines and at least two or three desktop virtual machines running on the cluster at any given time.
EDIT: There are multiple virtual disks set up as Cluster Shared Volumes.
@PhlipElder Cool, cool... so what happens if that DataON unit fails outright?
Your client would be dead in the water, no?
The unit is fully redundant all the way through to the disk. If we have a complete system failure, we have Veeam and the ability to spin the VMs up in short order.
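For reference, the two interleave flavors described above could be reproduced with something like the following PowerShell. This is a hedged sketch, not the actual build script; "Pool01" and the virtual disk names are hypothetical, and the point is carrying the 64KB interleave through to a matching allocation unit size:

```powershell
# 64KB-interleave mirror, with the rest of the stack matched to it:
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VD-64K" `
    -ResiliencySettingName Mirror -Interleave 64KB -UseMaximumSize

# Standard flavor: the default interleave is 256KB:
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "VD-256K" `
    -ResiliencySettingName Mirror -UseMaximumSize

# Format the 64K disk with a matching allocation unit size:
Get-VirtualDisk -FriendlyName "VD-64K" | Get-Disk |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -UseMaximumSize |
    Format-Volume -FileSystem NTFS -AllocationUnitSize 64KB

# Hand the disk to the cluster as a Cluster Shared Volume:
Get-ClusterAvailableDisk | Add-ClusterDisk | Add-ClusterSharedVolume
```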
-
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
This is the 2-node shared SAS Hyper-V/Storage Spaces cluster mentioned above that runs a 15-18 seat accounting firm.
There are two types of virtual disks set up on Storage Spaces: one with a 64KB interleave and the storage stack configured to match, and the other with the standard 256KB interleave and storage stack defaults. There are six to eight server-based virtual machines and at least two or three desktop virtual machines running on the cluster at any given time.
EDIT: There are multiple virtual disks set up as Cluster Shared Volumes.
@PhlipElder Cool, cool... so what happens if that DataON unit fails outright?
Your client would be dead in the water, no?
Yeah, sounds like a traditional IPOD. Maybe we missed something; are there two DataON units?
What is the purpose of the DataON there? Why have that extra hardware? With just two nodes, you get WAY higher reliability without having it at all.
-
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
This is the 2-node shared SAS Hyper-V/Storage Spaces cluster mentioned above that runs a 15-18 seat accounting firm.
There are two types of virtual disks set up on Storage Spaces: one with a 64KB interleave and the storage stack configured to match, and the other with the standard 256KB interleave and storage stack defaults. There are six to eight server-based virtual machines and at least two or three desktop virtual machines running on the cluster at any given time.
EDIT: There are multiple virtual disks set up as Cluster Shared Volumes.
@PhlipElder Cool, cool... so what happens if that DataON unit fails outright?
Your client would be dead in the water, no?
Yeah, sounds like a traditional IPOD. Maybe we missed something; are there two DataON units?
What is the purpose of the DataON there? Why have that extra hardware? With just two nodes, you get WAY higher reliability without having it at all.
We call that Storage Spaces Direct.
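For anyone unfamiliar, standing up that "two nodes, no external chassis" alternative is short work in PowerShell. A minimal sketch, assuming hypothetical cluster and node names and internal disks eligible for pooling:

```powershell
# Build the cluster with no shared storage, then enable S2D on the
# nodes' internal disks:
New-Cluster -Name "S2D-CL" -Node "NODE1","NODE2" -NoStorage
Enable-ClusterStorageSpacesDirect -CimSession "S2D-CL"

# Carve a mirrored Cluster Shared Volume out of the auto-created pool:
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV01" `
    -FileSystem CSVFS_ReFS -Size 2TB
```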
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
This is the 2-node shared SAS Hyper-V/Storage Spaces cluster mentioned above that runs a 15-18 seat accounting firm.
There are two types of virtual disks set up on Storage Spaces: one with a 64KB interleave and the storage stack configured to match, and the other with the standard 256KB interleave and storage stack defaults. There are six to eight server-based virtual machines and at least two or three desktop virtual machines running on the cluster at any given time.
EDIT: There are multiple virtual disks set up as Cluster Shared Volumes.
@PhlipElder Cool, cool... so what happens if that DataON unit fails outright?
Your client would be dead in the water, no?
The unit is fully redundant all the way through to the disk.
That's what every SAN vendor has always claimed for "single box magic." Not saying that it isn't decently reliable, but any redundancy you get in there, you can get without it, and with fewer total points of failure. And therefore lower potential cost, too.
Given that we can meet and beat any reliability here simply by removing the DataON, what purpose is it serving?
And if the DataON fails (no single chassis is ever fully redundant, it just can't be), you will quickly see the single point of failure. Just turn it off; if turning it off makes things go down, it wasn't redundant.
This looks like going back to the traditional inverted pyramid design. Other than using software RAID instead of hardware RAID (which isn't new either), what's different about this from the standard, textbook "what not to do" design? Too costly, too risky. Exactly the same design we just saw fail in the other thread.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@DustinB3403 said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
It's like that IPOD situation I dealt with yesterday.
What about all of the server maintenance that can be done without any downtime? Or didn't they use it for that, strictly for redundancy?
In a cluster setting (SOFS) this is a moot point since nodes can be patched and rebooted without any downtime.
Including the SOFS nodes, you mean. That's the important part. It fixes the single maintenance point of the SAN.
You'd need S2D (or similar tech, like SW vSAN) to get around the single maintenance point of SAN / DAS.
This is the 2-node shared SAS Hyper-V/Storage Spaces cluster mentioned above that runs a 15-18 seat accounting firm.
There are two types of virtual disks set up on Storage Spaces: one with a 64KB interleave and the storage stack configured to match, and the other with the standard 256KB interleave and storage stack defaults. There are six to eight server-based virtual machines and at least two or three desktop virtual machines running on the cluster at any given time.
EDIT: There are multiple virtual disks set up as Cluster Shared Volumes.
@PhlipElder Cool, cool... so what happens if that DataON unit fails outright?
Your client would be dead in the water, no?
Yeah, sounds like a traditional IPOD. Maybe we missed something; are there two DataON units?
What is the purpose of the DataON there? Why have that extra hardware? With just two nodes, you get WAY higher reliability without having it at all.
We call that Storage Spaces Direct.
S2D doesn't require an IPOD design, though. Just as RAID can be done in a reliable design model or in an IPOD, so can S2D.
-
@scottalanmiller We add two more units and indeed we have enclosure resilience.
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
The shared SAS setup is now considered legacy. The only place we're deploying it now is archival storage, with up to eight 102-bay JBODs loaded with 12TB drives stacked behind three nodes and full resilience across the board.
HCI or disaggregate with Hyper-V and SOFS S2D are the way we're deploying now. So, the whole conversation is essentially moot.
The converged setup in the blog post is about three years old now. It was the best bang for the dollar as far as insurance against downtime at the time.
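For context, the nested resilience mentioned above is built with storage tiers on a two-node S2D cluster, following Microsoft's documented tier approach. A rough sketch; the tier name, volume name, and size are hypothetical:

```powershell
# Nested two-way mirror keeps four data copies across the two nodes,
# surviving a node failure plus a drive failure on the surviving node:
New-StorageTier -StoragePoolFriendlyName "S2D*" -FriendlyName "NestedMirror" `
    -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4

# Build a volume on that tier:
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "CSV-Nested" `
    -StorageTierFriendlyNames "NestedMirror" -StorageTierSizes 500GB
```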
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller We add two more units and indeed we have enclosure resilience.
That's what was being missed, then; that makes sense.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
Yes, HyperConverged would do that. But having external chassis removes the hyperconvergence and takes us back to having additional points of failure to worry about. Having chassis redundancy provides a lot of protection there. But we could move the external chassis into the server chassis for HC to reduce cost and points of failure: two nodes instead of four. With external chassis for storage, it's still the old HA SAN model (as opposed to the IPOD SAN model), but it lacks the benefit of bringing everything into the chassis like HC does.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
HCI or disaggregate with Hyper-V and SOFS S2D are the way we're deploying now. So, the whole conversation is essentially moot.
Not really HCI as described with the DataON. That's just a software RAID version of the non-HC model.
HC has always meant physical convergence.
-
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
Yes, HyperConverged would do that. But having external chassis removes the hyperconvergence and takes us back to having additional points of failure to worry about. Having chassis redundancy provides a lot of protection there. But we could move the external chassis into the server chassis for HC to reduce cost and points of failure: two nodes instead of four. With external chassis for storage, it's still the old HA SAN model (as opposed to the IPOD SAN model), but it lacks the benefit of bringing everything into the chassis like HC does.
The biggest limiting factors there, then, are storage capacity and storage capacity scaling. That's harder to manage if you stick to just a two-node hardware setup.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
The converged setup in the blog post is about three years old now. It was the best bang for the dollar as far as insurance against downtime at the time.
The blog post was the polar opposite of HC. IPOD is the farthest that you can get from HC. There are steps in between, like your four-node, fully redundant setup, but that's still not HC until you actually converge.
Remember that software RAID was an assumption from day one; it predates hardware RAID. So RAID and RAIN systems running in software don't move us towards convergence; software RAID was always a part of unconverged systems.
RAIN improves on the poor RAID systems of the past. But it's an improvement, not a change of architecture.
-
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
Yes, HyperConverged would do that. But having external chassis removes the hyperconvergence and takes us back to having additional points of failure to worry about. Having chassis redundancy provides a lot of protection there. But we could move the external chassis into the server chassis for HC to reduce cost and points of failure: two nodes instead of four. With external chassis for storage, it's still the old HA SAN model (as opposed to the IPOD SAN model), but it lacks the benefit of bringing everything into the chassis like HC does.
The biggest limiting factors there, then, are storage capacity and storage capacity scaling. That's harder to manage if you stick to just a two-node hardware setup.
Harder than a one-storage-node setup?
It's a myth that HC limits you in scale-out. It makes it easier, in fact, in most cases. You aren't LIMITED to two nodes; it's simply that you don't need any more. In the original IPOD design, you are limited to one node. In the "multi-DataON" models, you can scale out more or less without limit, same as with HC designs. No HC design has a two-node limit (that I am aware of), and most go really big; the biggest, Starwind, using RAID not RAIN, is limited only by the platform, not the storage layer.
-
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
Yes, HyperConverged would do that. But having external chassis removes the hyperconvergence and takes us back to having additional points of failure to worry about. Having chassis redundancy provides a lot of protection there. But we could move the external chassis into the server chassis for HC to reduce cost and points of failure: two nodes instead of four. With external chassis for storage, it's still the old HA SAN model (as opposed to the IPOD SAN model), but it lacks the benefit of bringing everything into the chassis like HC does.
The biggest limiting factors there, then, are storage capacity and storage capacity scaling. That's harder to manage if you stick to just a two-node hardware setup.
Harder than a one-storage-node setup?
It's a myth that HC limits you in scale-out. It makes it easier, in fact, in most cases. You aren't LIMITED to two nodes; it's simply that you don't need any more. In the original IPOD design, you are limited to one node. In the "multi-DataON" models, you can scale out more or less without limit, same as with HC designs. No HC design has a two-node limit (that I am aware of), and most go really big; the biggest, Starwind, using RAID not RAIN, is limited only by the platform, not the storage layer.
I meant from the point you made of sticking to only two nodes with internal storage. Once that storage is full, you either have to add more nodes or add storage boxes... and therefore you no longer have only two hardware nodes.
-
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
HCI or disaggregate with Hyper-V and SOFS S2D are the way we're deploying now. So, the whole conversation is essentially moot.
Not really HCI as described with the DataON. That's just a software RAID version of the non-HC model.
HC has always meant physical convergence.
I believe I referred to the DataON setup as "Converged" or sometimes "Asymmetric," not Hyper-Converged, which is what Storage Spaces Direct is when running both Storage Spaces and Hyper-V on the nodes.
Disaggregate is where we have two clusters: one running SOFS (it could be a setup similar to the DataON one, as was done in the past, or S2D in SOFS-only mode, which is our way forward) and the other running Hyper-V.
In both S2D and disaggregate setups, we run RDMA over Converged Ethernet (RoCE) via Mellanox kit for our ultra-low-latency fabric.
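For reference, the host side of a RoCE fabric for SMB Direct usually comes down to PFC/ETS configuration. A hedged sketch of the common DCB setup; the adapter names are hypothetical and the physical switches are assumed to be configured to match:

```powershell
# Tag SMB Direct (port 445) traffic with priority 3 and enable priority
# flow control on that class only:
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7
Enable-NetAdapterQos -Name "SLOT 3 Port 1","SLOT 3 Port 2"

# Reserve bandwidth for the SMB class with ETS:
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS

# Verify RDMA is live on the adapters:
Get-NetAdapterRdma | Format-Table Name, Enabled
```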
-
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@Obsolesce said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Hyper-Converged with nested resilience in Storage Spaces Direct takes care of all of these single points of failure in a neat package.
Yes, HyperConverged would do that. But having external chassis removes the hyperconvergence and takes us back to having additional points of failure to worry about. Having chassis redundancy provides a lot of protection there. But we could move the external chassis into the server chassis for HC to reduce cost and points of failure: two nodes instead of four. With external chassis for storage, it's still the old HA SAN model (as opposed to the IPOD SAN model), but it lacks the benefit of bringing everything into the chassis like HC does.
The biggest limiting factors there, then, are storage capacity and storage capacity scaling. That's harder to manage if you stick to just a two-node hardware setup.
Harder than a one-storage-node setup?
It's a myth that HC limits you in scale-out. It makes it easier, in fact, in most cases. You aren't LIMITED to two nodes; it's simply that you don't need any more. In the original IPOD design, you are limited to one node. In the "multi-DataON" models, you can scale out more or less without limit, same as with HC designs. No HC design has a two-node limit (that I am aware of), and most go really big; the biggest, Starwind, using RAID not RAIN, is limited only by the platform, not the storage layer.
I meant from the point you made of sticking to only two nodes with internal storage. Once that storage is full, you either have to add more nodes or add storage boxes... and therefore you no longer have only two hardware nodes.
I know, and my point was that that isn't an applicable concern. Yes, sticking to two nodes would limit you to the capacity of two nodes. But there is no such limit in either design. Whether you are doing the HC design or the traditional "multi-storage-node" design, you can just add more nodes and get bigger on the storage side.
The only time you are limited is when you go with the IPOD and CAN only have one storage node; then you are much more limited.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@scottalanmiller said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
HCI or disaggregate with Hyper-V and SOFS S2D are the way we're deploying now. So, the whole conversation is essentially moot.
Not really HCI as described with the DataON. That's just a software RAID version of the non-HC model.
HC has always meant physical convergence.
I believe I referred to the DataON setup as "Converged" or sometimes "Asymmetric," not Hyper-Converged, which is what Storage Spaces Direct is when running both Storage Spaces and Hyper-V on the nodes.
I see. Asymmetric is a decent term. What about it is converged, though? It seems "unconverged", if you will. Other than the software RAID running on the storage nodes.
-
@PhlipElder said in Should I bother to learn Windows Storage Spaces and what about Glances export?:
Disaggregate is where we have two clusters: one running SOFS (it could be a setup similar to the DataON one, as was done in the past, or S2D in SOFS-only mode, which is our way forward) and the other running Hyper-V.
Not a term that I usually see, but it makes sense. This is what we would normally just call "traditional, legacy HA design" or "HA external storage." It's the standard non-IPOD setup we were seeing in the early 2000s. It's decent, but doesn't require new terminology. Then again, it never really had a term, so maybe one is needed, new or otherwise.
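For completeness, the SOFS half of that disaggregated model is typically stood up like this. A minimal sketch; the role name, share path, and host computer accounts are hypothetical:

```powershell
# Add the Scale-Out File Server role to the existing storage cluster:
Add-ClusterScaleOutFileServerRole -Name "SOFS01"

# Publish a continuously available share for the Hyper-V hosts:
New-Item -ItemType Directory -Path "C:\ClusterStorage\Volume1\Shares\VMStore01"
New-SmbShare -Name "VMStore01" -Path "C:\ClusterStorage\Volume1\Shares\VMStore01" `
    -FullAccess 'DOMAIN\HV1$','DOMAIN\HV2$' -ContinuouslyAvailable:$true
```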