StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far)
-
@Dashrender said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@scottalanmiller - where is your - hypervisors are not basket and eggs - post?
-
@Dashrender this is not about the basket/eggs thing. Consolidation is well and good, but HCI adds a massive load on each host, and the resources for that load have to come from somewhere. SDS is not easy and it demands CPU, RAM, and network resources. SDN is just as bad. Lump it all onto the same host and you've got nowhere left to run VMs adequately - that's my point.
There's a very old joke - a man is pulled over by a policeman, as he was driving with one hand and hugging his girlfriend with the other. The policeman says "Sir, you are doing two things and both of them badly". This is exactly why HCI is wrong. Yes, if all you have is a single machine, you'll be lumping all your workloads onto it. But if you are building a real datacenter, you had better do the networking stack properly, using the right hardware: even if it's going to be some open-source SDN like Calico and not a suitcase of money sent to Cisco, you should dedicate correctly spec'd hardware to it. The same goes for the storage stack - you want to run on commodity hardware using open-source SDS software, be my guest, but dedicate those hosts to SDS and spec them out to fit the task. And the same goes for the workload-bearing machines, whether they will be KVM hypervisors, a Docker swarm, or an overpriced VMware cluster - that's immaterial. If you do the HCI thing, you cannot spec the hardware to the task; you end up running all of those services and workloads on the same set of hosts, and all those tasks will share that hardware, either competing for resources or keeping unutilized resources away from where they could be needed.
Yes, the nicer HCI systems can try to keep the data they serve balanced so that it is at least partially local to the workload, but in a properly built virtual DC this is not a problem. InfiniBand, FC, and even FCoE make latency moot, and throughputs can be much higher than over local SAS or even NVMe channels.
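To put rough numbers on that, here's a quick back-of-envelope on raw link throughput (a sketch using nominal signaling rates and line encodings only - these are theoretical ceilings, not benchmarks, and real-world figures will be lower once protocol overhead is counted):

```python
# Ballpark link throughput from nominal signaling rate and line encoding.
# Theoretical ceilings only -- protocol overhead will reduce all of these.
links = {
    # name: (lanes, signaling rate per lane in Gbit/s, encoding efficiency)
    "SAS-3 x4 wide port":  (4, 12.0,     8 / 10),     # 8b/10b
    "NVMe (PCIe 3.0 x4)":  (4, 8.0,      128 / 130),  # 128b/130b
    "32G Fibre Channel":   (1, 28.05,    64 / 66),    # 64b/66b
    "EDR InfiniBand (4x)": (4, 25.78125, 64 / 66),    # 64b/66b
}

for name, (lanes, gbit_per_lane, efficiency) in links.items():
    gbytes_per_s = lanes * gbit_per_lane * efficiency / 8
    print(f"{name:22s} ~{gbytes_per_s:4.1f} GB/s")
```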
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
a man is pulled over by a policeman, as he was driving with one hand and hugging his girlfriend with the other. The policeman says "Sir, you are doing two things and both of them badly"
That's a joke?
-
@DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
a man is pulled over by a policeman, as he was driving with one hand and hugging his girlfriend with the other. The policeman says "Sir, you are doing two things and both of them badly"
That's a joke?
LOL, yes.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
but HCI adds a massive load on each host,
This is the myth. In most HCI it adds no appreciable load. As long as you believe that things like storage and networking are going to create a lot of load, yes, this is going to seem like a point of risk - although even back in the era when that was true, things like RAID cards fixed it.
But since it doesn't add load, and actually adds less load than splitting it out, this logic is backwards.
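If you want to sanity-check that for yourself, the raw CPU cost of mirroring or parity is easy to measure. A rough sketch (assumes Python with numpy installed; it only times in-memory copy and XOR - the arithmetic RAID actually does - not a full storage stack):

```python
# Rough sanity check of the CPU cost of RAID-style mirroring and parity.
# Micro-benchmark only; it says nothing about disks or networks.
import time
import numpy as np

CHUNK_MB = 64
a = np.random.randint(0, 256, CHUNK_MB * 1024 * 1024, dtype=np.uint8)
b = np.random.randint(0, 256, CHUNK_MB * 1024 * 1024, dtype=np.uint8)

# RAID 1: a mirrored write is just the same buffer written twice (a copy at worst).
start = time.perf_counter()
mirror = a.copy()
t_mirror = time.perf_counter() - start

# RAID 5/6-style parity: an XOR across stripe members.
start = time.perf_counter()
parity = np.bitwise_xor(a, b)
t_parity = time.perf_counter() - start

print(f"mirror copy: ~{CHUNK_MB / 1024 / t_mirror:.1f} GB/s")
print(f"xor parity : ~{CHUNK_MB / 1024 / t_parity:.1f} GB/s")
```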
-
@DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
That's a joke?
You should have seen my British friend tell it
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
SDS is not easy and it demands CPU, RAM, and network resources. SDN is just as bad. Lump it all onto the same host and you've got nowhere left to run VMs adequately - that's my point.
SDS isn't part of HC. This might be the root of your confusion. This is why some HC products, like the one this thread is about, don't do this and just do RAID. Overhead is ridiculously low.
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
This is the myth. In most HCI it adds no appreciable load. As long as you believe that things like storage and networking are going to create a lot of load, yes, this is going to seem like a point of risk - although even back in the era when that was true, things like RAID cards fixed it.
But since it doesn't add load, and actually adds less load than splitting it out, this logic is backwards.
I already answered that above. Just because you say it doesn't add any load, doesn't mean it doesn't.
SDS isn't part of HC. This might be the root of your confusion. This is why some HC products, like the one this thread is about, don't do this and just do RAID. Overhead is ridiculously low.
How exactly do they deal with the HA side of things? With RAID, if a host goes down, all the VMs using that host go down, RAID or not.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@DustinB3403 said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
That's a joke?
You should have seen my British friend tell it
I don't really get it, but okay.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
even if it's going to be some open-source SDN like Calico and not a suitcase of money sent to Cisco, you should dedicate correctly spec'd hardware to it. The same goes for the storage stack - you want to run on commodity hardware using open-source SDS software, be my guest, but dedicate those hosts to SDS and spec them out to fit the task. And the same goes for the workload-bearing machines,
This just doesn't hold up in the real world. Are there cases where you'd want this? Absolutely. But in general? No. Most companies and workloads are not trying to do things where this makes sense at all. Giant storage pools, on-host SDS, etc. - these are the things that, while they have their place, are almost exclusively just vendors fleecing customers who don't realize that this stuff isn't helping them.
All SDS like this is doing is making new SAN pools, taking us right back to the complexity, costs, and risks that we had before.
If your design creates all this overhead, whether you address it in one box or many, chances are that design itself is the flaw. Not always, but generally. But whether you have RAID or RAIN, the overhead to do this stuff just isn't there when implemented well. Now sure, if we only look at totally garbage solutions, we can make any design seem like a problem. But you have to separate good design from good products. Bad products exist even in well-designed solutions.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
I already answered that above. Just because you say it doesn't add any load, doesn't mean it doesn't.
Right, but the fact that we measure it and find that there isn't any means that there isn't. Just because you claim a load that no one has - which is all you are doing - doesn't make it real. You have created problems that no one faces and are acting like we are all impacted by them.
You are literally claiming, contrary to all evidence, common sense, and industry knowledge, that software RAID is not just a huge load, but one so large that we now need not only hardware RAID cards to do it, but entire hardware RAID servers!
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
How exactly do they deal with the HA side of things? With RAID, if a host goes down, all the VMs using that host go down, RAID or not.
Um, RAID makes a copy. So both machines have identical copies; RAID 1 is just mirroring - they even mirror the RAM cache. There's no magic. I think you are not aware of what HC products are like in the real world and are making loads of assumptions. My guess is that you've seen Nutanix, the one god-awful total failure of an HC product (one that didn't even exist until I had been consulting on HC for many years), and are assuming that what it does badly or wrong is part of HC when it is just a Nutanix thing.
Many HC products use network RAID, not RAIN, and have none of the issues you are picturing. And many RAIN implementations, like SCRIBE, don't have them either. No one is denying that Nutanix is bad or has these problems; we are explaining to you that your limited interpretation of HC, to mean something different than it means to anyone else, is making it seem like these problems are endemic when, in fact, they aren't even likely. You are basically defining HC to mean Nutanix, when to everyone else, Nutanix isn't even a player, just a complete joke that exists only for marketers to screw the unprepared.
You should really research the field and its products before making these wild claims. It's trivial to show that your assumptions can't be true, because we can demonstrate otherwise. No one is denying that bad products are bad, but you are claiming that good products can't be good because you once saw a bad one. That's like thinking all hamburgers are bad because you once ate at a McDonald's - but that's hardly the only hamburger, let alone the reference example.
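To illustrate the mirroring point, the write path in a two-node network mirror is conceptually this simple (a hypothetical sketch, not any vendor's actual code - products like DRBD or StarWind implement this in kernel or driver space):

```python
# Conceptual sketch of a synchronous "network RAID 1" write path between two
# nodes. The storage and network calls are placeholders, not a real product's API.
from concurrent.futures import ThreadPoolExecutor

def write_local(offset: int, data: bytes) -> bool:
    ...  # placeholder: persist the block to the local disk
    return True

def write_peer(offset: int, data: bytes) -> bool:
    ...  # placeholder: ship the same block to the partner node over the replication link
    return True

def mirrored_write(offset: int, data: bytes) -> bool:
    # Issue both writes in parallel and acknowledge the guest only once BOTH
    # copies are durable, so a surviving node always holds current data.
    with ThreadPoolExecutor(max_workers=2) as pool:
        local = pool.submit(write_local, offset, data)
        remote = pool.submit(write_peer, offset, data)
        return local.result() and remote.result()
```

The per-write cost is essentially one extra round trip on the replication link, which is why the link's latency, not host CPU, is the thing to watch.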
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
but entire hardware RAID servers!
Not only hardware RAID servers, but separate dedicated network stacks and compute pools as well.
-
Good reference HC systems would be....
Scale HC3, StarWind, VMware with vSAN, SimpliVity (for hardware offload), and XO. This gives a range of packaged HC solutions that show how HC can't be what you imagine it to be - all are well made, and none have the issues you are stuck on.
Then you should look at the more traditional HC examples that we've had for even longer, like KVM and Xen with DRBD, or bhyve with HAST, where you build your own HC and also don't have these issues.
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
This just doesn't hold up in the real world.
HCI doesn't hold up in the real world. See, I can keep doing this too
Most companies and workloads are not trying to do things where this makes sense at all.
Maybe in the SMB space there is less planning and more "let's just deliver something and let someone else support it", but it doesn't work in the enterprise.
All SDS like this is doing is making new SAN pools, taking us right back to the complexity, costs, and risks that we had before.
Exactly. SDS or SANs - you can pick whichever suits you best. But if you are going to be running a replicated distributed storage service on hardware that is already quite busy running VMs, you'll end up in trouble as soon as you go anywhere near capacity.
If your design creates all this overhead, whether you address it in one box or many, chances are that design itself is the flaw. Not always, but generally. But whether you have RAID or RAIN, the overhead to do this stuff just isn't there when implemented well.
How exactly can there be no overhead when you are synchronizing large chunks of data over the network? Especially with high replication factors? Add encryption to that, add all the extra logic for local tiering, and it's no wonder the minimum sizing for SDS is so high.
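A quick back-of-envelope shows the shape of it (every input here is a made-up assumption, just to make the point concrete):

```python
# Back-of-envelope replication traffic for a hypothetical HCI node.
# All inputs are assumptions for illustration only.
guest_write_rate_gbs = 1.0   # aggregate guest writes on this host, GB/s
replication_factor   = 3     # each block stored on 3 nodes
encryption_overhead  = 0.05  # assume ~5% extra cost for in-flight encryption

# Every guest write has to be shipped to (RF - 1) other nodes.
replication_gbs = guest_write_rate_gbs * (replication_factor - 1)
wire_gbs        = replication_gbs * (1 + encryption_overhead)

print(f"guest writes        : {guest_write_rate_gbs:.1f} GB/s")
print(f"replication traffic : {replication_gbs:.1f} GB/s leaving the host")
print(f"with encryption     : {wire_gbs:.2f} GB/s on the wire")
```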
Now sure, if we only look at totally garbage solutions, we can make any design seem like a problem. But you have to separate good design from good products. Bad products exist even in well-designed solutions.
Right, but the fact that we measure it and find that there isn't any means that there isn't. Just because you claim a load that no one has - which is all you are doing - doesn't make it real. You have created problems that no one faces and are acting like we are all impacted by them.
More sophistry. Can you be more specific instead, please?
You are literally claiming, contrary to all evidence, common sense, and industry knowledge, that software RAID is not just a huge load, but one so large that we now need not only hardware RAID cards to do it, but entire hardware RAID servers!
I'm not talking about simple software RAID. I'm talking about keeping a distributed storage system in sync, constantly rebalancing it on top of that to satisfy the local tiering requirements, and, while doing all that juggling, also ensuring the system remains resilient to node failure. This is a lot of work - unless, like @Dashrender says, there is magic at play. I don't believe in magic.
So, to skip the rest of your quotes: in general, what you are saying is that a system which is essentially something like simple KVM with DRBD is the perfect solution. I am saying sure, for two nodes. How about 200?
Do you think AWS/GCP/Azure are running HCI solutions for example?
-
If you look at any one product from any field, you might easily assume that the problems that product has are the result of the design or approach, rather than of the product itself.
Example: If you use Cisco as the example of networking, you might assume that all networking equipment is expensive, slow, and riddled with security problems.
If you only look at Ubiquiti, you might think that all networking has limited support, limited selection, and can't do SDN.
If you only look at Meraki, you might think that all networking uses a cloud controller.
And so forth. Or with databases: tons of people do this today and believe that all databases must have the limitations of Oracle and SQL Server, because those two share a lot of licensing and tech similarities. Without looking further, they will often associate those problems or caveats with databases as a category, rather than realizing that they are just similarities shared between those two.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
HCI doesn't hold up in the real world. See, I can keep doing this too
Except it does - tons of places are using it. Show me ANY example where it didn't shine... any. While somewhere one must exist, dollars to donuts you can't find one.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Maybe in the SMB space there is less planning and more "let's just deliver something and let someone else support it", but it doesn't work in the enterprise.
Actually, that's exactly what loads and loads of enterprises do. But regardless of that, this has nothing to do with HC whatsoever and is neither here nor there.
-
@scottalanmiller said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
HCI doesn't hold up in the real world. See, I can keep doing this too
Except it does - tons of places are using it. Show me ANY example where it didn't shine... any. While somewhere one must exist, dollars to donuts you can't find one.
I've worked with several OpenStack and K8s clusters where the storage was local to the hypervisors, served from Ceph/Gluster/AFS/Sheepdog. It was a horrible experience each time.
-
@dyasny said in StarWind HCA is one of the 10 coolest HCI systems of 2019 (so far):
Exactly. SDS or SANs - you can pick whatever you want and suits you best. But if you are going to be running a replicated distributed storage service on hardware that is already quite busy running VMs, you'll end up in trouble as soon as you go anywhere near capacity.
Maybe you are running into these problems due to bad products, planning, or design, but you are foisting problems you've had onto everyone else, when the rest of us aren't experiencing them.
You are "projecting". I'm not saying you or people you know having implemented HC and had it not work well. But you are confusing a bad implementation or planning with the architecture being bad. The two are different things.