StarWind vs Storage Spaces Direct
-
So I sent a PM to @KOOLER to see if he had any information, but I figured I'd start a topic here as well regarding StarWind vs Storage Spaces Direct, to compare the options on performance, reliability, etc. between the two.
Did MS just make SSD for the purpose of trying to close out StarWind (and other competition)?
-
My take on it is that after 20 years of Windows software RAID being totally insane to implement in production, we need to wait at least one or two server release cycles before Storage Spaces Direct has collected enough reliability data to even be a remote consideration. Microsoft's track record here speaks for itself. The entire hardware RAID industry exists almost solely to tackle this one software issue with Windows. Storage Spaces was just an attempt to rename it in the hope of getting out-of-touch Windows admins to think that there was some hot, new feature worth putting their data on, and a lot of them got burned.
While Direct might be a good idea, it's going to be a long time before I would even consider trusting it. For 22 years we've not been able to rely on that subsystem. I'm sure not going to assume that they've suddenly figured it out.
-
Even waiting one or two more Server release cycles... how does that help us when almost no one is using Storage Spaces Direct?
I can't use SSD in this case - it's too ingrained as Solid State Drive.
-
@Dashrender said in StarWind vs Storage Spaces Direct:
I can't use SSD in this case - it's too ingrained as Solid State Drive.
I was thinking the same thing when I typed it. Like, damn you Microsoft!
-
@Dashrender said in StarWind vs Storage Spaces Direct:
Even waiting one or two more Server release cycles... how does that help us when almost no one is using Storage Spaces Direct?
Because no matter how foolish or reckless a technology choice is, tons and tons of people will use it. Discerning people staying away doesn't change the fact that it will get a lot of use.
-
@DustinB3403 said in StarWind vs Storage Spaces Direct:
@Dashrender said in StarWind vs Storage Spaces Direct:
I can't use SSD in this case - it's too ingrained as Solid State Drive.
I was thinking the same thing when I typed it. Like, damn you Microsoft!
S2D is the official abbreviation, not SSD
-
@FATeknollogee S2D just doesn't make any sense to me. I hadn't heard that, but it just doesn't compute.
S2D.... Soft 2 Die?
-
@scottalanmiller said in StarWind vs Storage Spaces Direct:
My take on it is that after 20 years of Windows software RAID being totally insane to implement in production, we need to wait at least one or two server release cycles before Storage Spaces Direct has collected enough reliability data to even be a remote consideration. Microsoft's track record here speaks for itself. The entire hardware RAID industry exists almost solely to tackle this one software issue with Windows. Storage Spaces was just an attempt to rename it in the hope of getting out-of-touch Windows admins to think that there was some hot, new feature worth putting their data on, and a lot of them got burned.
There's some nasty @#$@ in there. Mainly, write order fidelity isn't working yet with ReFS...
-
@scottalanmiller Microsoft has spent $100 million hiring architects to push their solutions. They will pay thousands to MSPs on the back end per 3-node Hyper-V cluster they deploy (even if it doesn't work).
Think back to the early years of SQL Server. SQL Server 2000 WAS AWFUL. Microsoft funded startups who would build their applications on Microsoft SQL Server.
-
My dog is in another fight (clustered storage for vSphere, not Hyper-V), but in this case, honestly, I'd trust a Synology over Storage Spaces Direct. At least I have a slight clue about the black magic going on underneath it.
-
@John-Nicholson said in StarWind vs Storage Spaces Direct:
@scottalanmiller Microsoft has spent $100 million hiring architects to push their solutions. They will pay thousands to MSPs on the back end per 3-node Hyper-V cluster they deploy (even if it doesn't work).
Yeah, I've already had to deal with their incompetent salespeople pushing these systems recklessly. They've lost so much credibility here.
-
Guys, I appreciate your time and trust. I'm currently in London at SpiceWorld, so I'm a bit head over heels. Give me a day or two and I'll write up a detailed story here. LOTS of things to mention.
-
@John-Nicholson said in StarWind vs Storage Spaces Direct:
@scottalanmiller said in StarWind vs Storage Spaces Direct:
My take on it is that after 20 years of Windows software RAID being totally insane to implement in production, we need to wait at least one or two server release cycles before Storage Spaces Direct has collected enough reliability data to even be a remote consideration. Microsoft's track record here speaks for itself. The entire hardware RAID industry exists almost solely to tackle this one software issue with Windows. Storage Spaces was just an attempt to rename it in the hope of getting out-of-touch Windows admins to think that there was some hot, new feature worth putting their data on, and a lot of them got burned.
There's some nasty @#$@ in there. Mainly, write order fidelity isn't working yet with ReFS...
ReFS with integrity streams enabled is a 100% log-structured file system, pretty much like StarWind LSFS or NetApp WAFL or Nimble CASL (except StarWind and Nimble and NetApp are much more effective because of 4MB+ pages touching all spindles in a parity RAID, while MSFT is still below 64KB most of the time). With integrity streams disabled it's just NTFS without a scrub process (and with no dedupe). We did a cool review here, take a look:
https://slog.starwindsoftware.com/refs-virtualization-workloads-test-part-1/
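To make "log-structured" a bit more concrete, here's a toy sketch in Python. This is purely illustrative, not ReFS, LSFS, WAFL, or CASL internals: it just contrasts an update-in-place store (where a torn write silently corrupts the only copy) with an append-only, checksummed log (where the previous version survives and corruption is detectable on read). All class and method names are made up for the illustration.

```python
import zlib

class LogStructuredStore:
    """Toy log-structured block store: updates append, never overwrite.
    Every record carries a checksum, so corruption is detectable on read."""

    def __init__(self):
        self.log = []    # append-only list of (block_id, data, checksum)
        self.index = {}  # block_id -> position of the latest version in the log

    def write(self, block_id, data: bytes):
        checksum = zlib.crc32(data)
        self.log.append((block_id, data, checksum))  # old version stays intact
        self.index[block_id] = len(self.log) - 1     # the pointer flip is the "commit"

    def read(self, block_id) -> bytes:
        _, data, checksum = self.log[self.index[block_id]]
        if zlib.crc32(data) != checksum:
            raise IOError(f"checksum mismatch on block {block_id}")
        return data

class InPlaceStore:
    """Toy update-in-place store: a crash mid-overwrite can leave a torn
    block, and without checksums the damage goes unnoticed until too late."""

    def __init__(self):
        self.blocks = {}

    def write(self, block_id, data: bytes):
        self.blocks[block_id] = data  # overwrites the only copy in place

    def read(self, block_id) -> bytes:
        return self.blocks[block_id]  # no way to tell whether the data is torn

lss = LogStructuredStore()
lss.write("A", b"version 1")
lss.write("A", b"version 2")  # "version 1" still sits in the log
assert lss.read("A") == b"version 2"
```

The write-order-fidelity complaint above maps onto the same picture: in a real log-structured design, the "pointer flip" has to reach stable media strictly after the data it points to, and that ordering is exactly the hard part.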
Good luck!
-
[I'm really sorry it took so long!!]
StarWind Virtual SAN vs. Microsoft Storage Spaces Direct vs. VMware Virtual SAN
OK, here we go! First of all, I think I owe a set of disclaimers:
A) I not only work for StarWind but also still own a noticeable part of it, and while I'm trying to be as honest as possible and as unbiased as anybody could be... please still take everything I say with a good grain of sea salt (hey, healthy skepticism is always welcome).
B) I have personally known, for many years, the people who developed and prominently evangelize the referenced competing products. I remain on very good terms with them and I think I can call many of them "friends" (if a "friend" can live 10,000 miles away from your home and be somebody you see maybe a few times a year at best), which means I won't criticize what they and their companies do too actively in public, even if I really have a technical reason to do so.
C) I'm under strict NDAs with the named companies, and with some other ones as well, which means I have to bite my tongue and not leak quite a few really interesting things I know.
So A), B), and C) summed up probably make me not the best information source on the subject, but... let's start and see what we can get out of it!
I'll begin with the "maturity" issue, where I'll try to play "devil's advocate" for both Microsoft and VMware. It's quite common to hear statements (usually from Microsoft and VMware "competitors" who managed to download and compile some ZFS fork, craft some reasonably good-looking HTML5 GUI, and now call themselves an uber-exciting hyperconverged or storage startup LOL) about Microsoft not understanding storage, about VMware never having been a storage company itself so there are no traces of storage in the company's DNA, about both companies having V1.0 versions of their products, and so on. I'd say neither Microsoft nor VMware are small companies, and they take the Software-Defined Storage challenge seriously for sure: the teams are extraordinarily talented, partners are well engaged, huge money is bet on success, so everything others spent years on could be done by Microsoft and VMware in a very short term (2-3 years, I think). These guys are maybe indeed a bit late to the SDS party (well, big guys were never good at true innovation; the disruption strategy belongs to lean startups, and that's a law stronger than the law of gravity IMHO), but they catch up very fast, and while their products may have some "holes" in the feature line (who doesn't have them?), everything they put into an RTM-labeled version works well for sure! Finally, if some particular niche isn't served well by them, maybe it's because VMware and Microsoft don't really see that niche as a valuable source of income worth spending time on serving such a customer group? In a nutshell: everything I compare below is assumed to be of a similar build quality, no FUD for sure!
Now comes one very important assumption. While the topic covers StarWind Virtual SAN, Microsoft Storage Spaces Direct (why didn't Microsoft call it "Virtual SAN" as well? It would have saved us all A LOT of time and killed so much confusion!), and VMware Virtual SAN, I'll expand the software-only offerings to so-called "ready nodes", which are branded servers with a pre-installed hypervisor (Microsoft Hyper-V or VMware vSphere) and the matching Software-Defined Storage solution from the listed (SDS in this context) vendors. Software-Defined Storage eventually evolved into hyperconvergence, and hyperconvergence is now mostly thought of as "ready nodes" rather than software alone, and here's why: SDS and hyperconvergence reduced implementation costs (CapEx) and maintenance costs (OpEx), first by not buying any "big name" SAN or NAS (that was Software-Defined Storage), and later by not buying even a DIY (Do-It-Yourself) SAN or NAS at all (that's hyperconvergence). "Ready nodes" take hyperconvergence and the associated CapEx and OpEx savings to yet another level: they save even more money upfront, when the hyperconverged vendor shares some of its major hardware discount with the end user to make the hyperconverged infrastructure more affordable, and later, when the hyperconverged vendor covers all support for the cluster itself, eliminating the need for the "middle man" or MSP a smaller SMB shop had to hire to support its vSphere or Hyper-V installation before. Software-Defined Storage → Hyperconvergence → HC "ready nodes": this is what the whole virtualization evolution looks like for a typical SMB. StarWind has "ready nodes" called HCA (Hyper-Converged Appliance), VMware has them as well (VSAN "ready nodes", or VxRack from parent company Dell), and Microsoft delivers similar solutions through its network of partners.
I've decided to separate the customers by size and by the most typical associated scenarios, instead of focusing on features and particular products' limitations, because I believe it's hard to decide whether, say, a lack of deduplication is a deal breaker for somebody or not. My separation is still not perfect, there's no line drawn in the sand (and if there is one, it's definitely blurred), but most starting points are there.
- Very small SMBs and/or ROBOs. We're talking about 2-3 hypervisor nodes (usually 2 CPU sockets per host, as that's the most popular and cost-effective compute platform now), comparatively few VMs (20-30 or so), so reasonably low "VMs-per-host" density, a strictly hyperconverged setup, and either Hyper-V or vSphere used but never both at the same time. Dramatic growth isn't expected in the near future. The shop is very short on human resources to manage and support the whole thing.
VMware VSAN does exist in a two-node version called "ROBO Edition", but it requires a third witness data-less host somewhere (private or public cloud), and it's also licensed "per-VM-pack" rather than "per-CPU-socket". Microsoft doesn't support a two-node Storage Spaces Direct setup currently, but even if it did, Microsoft's data distribution policy doesn't allow losing more than a single hard disk even in a three-node, two-way replicated S2D cluster. Moreover, Storage Spaces Direct requires a Datacenter license, making the resulting solution extremely expensive and total overkill for smaller deployments (see the licensing sketch after this scenario). StarWind doesn't need any witness entities for a pure two-node setup (StarWind can utilize a heartbeat network), doesn't need network switches (StarWind doesn't use broadcast and multicast messages like VMware Virtual SAN currently does), and doesn't require any specially licensed Windows host (even free Hyper-V Server is absolutely OK for us, and Windows Server Standard is PERFECT). The extra overhead of the VM-based solution VMware requires can be safely ignored (StarWind runs as part of the hypervisor on Hyper-V but requires a "controller" VM for vSphere) because IOPS requirements are low within this scenario. To put a final point on it, both StarWind software alone and the hyperconverged appliances come with 24/7 support, so a shortage or complete lack of on-site human resources is mitigated.
All the points mentioned above make StarWind and StarWind-based hyperconverged appliances a very natural choice here: we'll either provide our own Virtual SAN software alone to "fuel" virtual shared storage on the commodity servers the customer already has, or we'll ship complete hyperconverged appliances with StarWind Virtual SAN used as the "data mover" layer. Our "ready nodes" are still more affordable than DIY (Do-It-Yourself) kits (StarWind has a hardware discount it splits with the customer; the customer would literally have to spend millions of dollars to get a comparable discount rate), and our "ready nodes" still come with 24/7 support, while DIY is basically a "self-supported" solution.
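To put rough numbers on the licensing point above, here's a back-of-the-envelope sketch for a two-node shop. The prices are assumptions for illustration only, roughly the Windows Server 2016 launch MSRP per 16-core pack (about $6,155 for Datacenter and $882 for Standard); substitute your real quotes before drawing any conclusions.

```python
# Back-of-the-envelope Windows licensing delta for a small 2-node cluster.
# Prices are illustrative assumptions (~Windows Server 2016 MSRP per
# 16-core pack); substitute real quotes.
DATACENTER_PER_16_CORES = 6155  # required for Storage Spaces Direct
STANDARD_PER_16_CORES = 882     # sufficient for a StarWind two-node setup

def windows_license_cost(nodes: int, cores_per_node: int, pack_price: float) -> float:
    """All physical cores must be licensed, with a 16-core minimum per host."""
    packs_per_node = max(1, -(-cores_per_node // 16))  # ceiling division
    return nodes * packs_per_node * pack_price

s2d_route = windows_license_cost(2, 16, DATACENTER_PER_16_CORES)     # $12,310
starwind_route = windows_license_cost(2, 16, STANDARD_PER_16_CORES)  # $1,764
print(f"OS licensing delta for 2 nodes: ${s2d_route - starwind_route:,.0f}")
```

To be fair, Datacenter also brings unlimited Windows Server VMs per host, which starts to matter as VM density grows; that's exactly the point of the next scenario.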
- Bigger SMBs (4 or more hosts in the basic hypervisor cluster) up to entry-level Enterprises (10 hypervisor hosts or so), everything hyperconverged, comparatively high "VMs-per-host" density (10+ VMs). Still either the Microsoft or VMware hypervisor is employed (not both at the same time). Growth is expected, is moderate, and is more or less linear for compute and storage at the same time. IT management and administration staff are present on-site.
For these particular customers, Microsoft Storage Spaces Direct and VMware Virtual SAN are the best fit! Windows Server Datacenter licensing makes sense because of the number of VMs alone, so Storage Spaces Direct is there automagically, and the VMware Virtual SAN cost overhead is split between the many VMs running on the same host, so it's reasonable. For these guys StarWind isn't offering any paid software for primary storage (within this cluster, I mean), but we'll be happy to sell StarWind-branded "ready nodes" (just to drive server hardware costs down a little bit) where either Microsoft S2D or VMware VSAN will be used as the "data mover". We'll still use our own Virtual SAN to plug some little holes in the Microsoft and VMware products, to add even more performance, increase storage efficiency, and add some more flexibility, as StarWind isn't forced, for example, to support one storage protocol while not supporting another. For Microsoft we'll add a RAM-based write-back cache (Microsoft's own CSV RAM cache is read-only and limited in size), 4KB in-line deduplication (Microsoft Storage Spaces Direct requires ReFS, and ReFS has no dedupe; see the toy dedupe sketch after this scenario), a log-structured file system, and a set of protocols Microsoft isn't offering out of the box (HA iSCSI including RDMA iSER and vVols extensions, failover NFS, etc.). VMware likewise has no RAM-based write-back cache (flash only), no dedupe for spinning disk (meaning VMware's dedupe is for primary storage only and the backup scenario isn't served), and no block & file protocols (iSCSI, NFS, and SMB3) a customer is able to deploy immediately out of the box (VMware VSAN is a "private party", so only VMs have access to the VSAN-managed distributed storage pool). Last but not least, we'll still wrap everything the customer gets in our own 24/7 support, making us, rather than the customer, own the whole support and maintenance thing.
To make a long story short: StarWind has the same unchanged offering, "a hyperconverged appliance still cheaper than a do-it-yourself kit but all covered by our premium 24/7 support that your DIY doesn't have". Except here, for the "data mover" we'll use Microsoft's and VMware's own SDS solutions, keeping our own software as a complimentary free SKU to "enhance" them and help differentiate us from other vendors shipping the same Dell or HP servers and the same S2D- or VSAN-based HCI SKUs.
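The "4KB in-line deduplication" mentioned above boils down to content addressing: hash every incoming 4KB block on the write path and store each unique payload only once. Here's a minimal sketch of that idea; to be clear, this is not StarWind's actual implementation, and it ignores persistence, reference counting for deletes, and hash-collision paranoia.

```python
import hashlib

BLOCK = 4096  # 4KB dedupe granularity

class InlineDedupStore:
    """Toy in-line dedupe: blocks are hashed on the write path ("in-line"),
    and identical 4KB payloads are stored only once."""

    def __init__(self):
        self.chunks = {}  # sha256 digest -> unique 4KB payload
        self.volume = []  # logical volume: ordered list of digests

    def write(self, data: bytes):
        for i in range(0, len(data), BLOCK):
            block = data[i:i + BLOCK].ljust(BLOCK, b"\0")  # pad the tail block
            digest = hashlib.sha256(block).digest()
            self.chunks.setdefault(digest, block)          # store payload once
            self.volume.append(digest)

    def read(self) -> bytes:
        return b"".join(self.chunks[d] for d in self.volume)

    def ratio(self) -> float:
        """Logical blocks referenced vs. physical blocks actually stored."""
        return len(self.volume) / max(1, len(self.chunks))

store = InlineDedupStore()
store.write(b"A" * BLOCK * 3 + b"B" * BLOCK)  # three identical blocks + one unique
print(f"dedupe ratio: {store.ratio():.1f}:1")  # 4 logical / 2 physical = 2.0:1
```

The "in-line" part is the key cost: hashing happens on every write before it's acknowledged, which is one reason it pairs naturally with an aggressive write-back cache.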
- Very big Enterprises (20+ hypervisor hosts), compute and render farms, cloud and hosting providers. Either hyperconverged or "compute and storage segregated" scenarios. VM density varies from host to host, and some hosts may use Windows Server BYOL (Bring-Your-Own-License) for some or all of their VMs. Microsoft and VMware hypervisors can be used at the same time (a so-called "multi-tenant" environment). Growth is unpredictable, and compute can be increased separately from storage and vice versa. Staffing is generally not an issue, but the bigger guys always keep doing some sort of restructuring to drive OpEx down, so... nobody knows tomorrow's situation for sure!
Both Microsoft Storage Spaces Direct and VMware Virtual SAN are a really bad choice here. The first reason is strictly financial: for a hyperconverged environment, it's simply too expensive to pay $5,000+ in licensing fees for every single host when there are this many of them. 20+ hosts bring an associated $100,000+ price tag, and that's the MSRP of an exceptionally well-performing all-flash SAN covered by a super-strict SLA (Service Level Agreement) and delivering a performance and feature set Microsoft and VMware can only dream about (a toy cost model follows this scenario). Tiny remark: Microsoft licensing has the benefit of unlimited licensed Windows Server VMs included, but if the customer already has the VMs licensed (Windows Server licenses purchased already, or BYOL performed), this argument is gone and Storage Spaces Direct and VMware Virtual SAN play on equal terms. The second reason is hybrid financial/architectural: if the compute and storage tiers need to be sized separately from each other, the whole hyperconverged concept fades, and the more classic "compute and storage segregated" design should be deployed instead. Using expensive Windows Server Datacenter licenses to build a Scale-Out File Server storage-only tier is pretty much pointless, simply because the server hardware dedicated to serving storage alone plus the Windows Server Datacenter licenses will outweigh the price of an all-flash SAN, while still not catching up with the all-flash SAN's IOPS, features, and included SLA. VMware Virtual SAN simply doesn't support a non-hyperconverged architecture, as it has to run on every single hypervisor host whose running VMs consume VSAN-managed virtual shared storage, meaning "compute only" data-less VSAN-licensed nodes are supported while "storage only" VSAN-unlicensed nodes aren't supported at all. The third reason is again architectural: it's about multi-tenant environments where both vSphere and Hyper-V are deployed in various proportions. VMware Virtual SAN doesn't provide any way to export managed storage, so anybody outside of a VSAN cluster (including a Hyper-V cluster, of course) is out of the game immediately; Microsoft can expose only SMB3 reasonably well, while VMware doesn't "understand" that protocol, asking instead for the more commonly adopted iSCSI and NFS, which Microsoft isn't really good with. This means Microsoft and VMware simply "talk different languages", and instead of having a single pool of storage shared and consumed by either a Microsoft or VMware running cluster, the customer needs to maintain at least two separate storage pools, one for Microsoft and one for VMware. This brings the just-recently-buried idea of a unified central all-flash SAN back again, because even if CapEx were OK for two separate "VMware only" and "Microsoft only" solutions, OpEx would go through the roof for sure, and resource utilization would suck badly: "islands of storage" are always bad compared to a "single unified pool", which is good.
Here StarWind can offer our Virtual SAN software to run either on a non-symmetric hyperconverged cluster (all nodes provide compute power but not all of them provide storage at the same time; say you have a 40-node VMware vSphere cluster where only 8 nodes actually provide shared storage to the others, a sort of hyperconverged and non-hyperconverged mix), providing it with virtual shared storage, or on a "compute and storage segregated" cluster, "powering" the storage-only tier. Unlike Microsoft and VMware, we don't expect our software to be licensed on every single node of a cluster (or of its storage-only "sibling" part, called Microsoft SOFS, if the "compute and storage segregated" model is utilized); we license consumed capacity, and as for how many nodes of a hyperconverged cluster actually provide exposed storage, how many instances of the StarWind service are running, and where... we don't actually care about that! The customer now has an excellent ability to pay for exactly the storage resources consumed, and he's the one who decides which servers do compute, which servers do storage, and which servers do both at the same time. Flexibility! Alternatively, we can issue the customer hyperconverged or storage-only "ready nodes" in any combination, because we not only ship HCI but also have "storage only" SA (Storage Appliance) "building blocks". We can provide a hyperconverged N-node cluster for just a fraction of the typical VMware or Microsoft N-node cluster licensing costs, and we can provide a fully packed all-flash SAN equivalent to feed shared storage to a hypervisor cluster as well. Everything is still way cheaper than what a prospect would buy and assemble himself, and everything is still covered by our 24/7 support, because once we start running the storage part of the cluster, we immediately start to own the whole big thing. As you can see, StarWind is a very natural choice here, both in terms of hardware and, if the customer is lucky (unlucky?) enough to have purchased servers already, as the "data mover" virtual shared storage layer as well.
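The economics of this scenario are easy to sanity-check with a toy model. The per-host figure below is this post's own "$5,000+ per host"; the per-TB rate is a purely hypothetical placeholder. The shape is the point: per-node licensing grows with host count even for data-less compute nodes, while capacity licensing grows only with the storage actually exposed.

```python
# Toy comparison of the two licensing shapes discussed above.
# Both rates are placeholders, not real quotes.
PER_NODE_FEE = 5_000  # the "$5,000+ per host" figure from above
PER_TB_FEE = 400      # hypothetical capacity rate, for illustration only

def per_node_licensing(total_hosts: int) -> int:
    """S2D/VSAN style: every hypervisor host needs a license,
    whether or not it contributes storage."""
    return total_hosts * PER_NODE_FEE

def capacity_licensing(consumed_tb: int) -> int:
    """Capacity style: pay for exposed storage; compute-only nodes cost nothing."""
    return consumed_tb * PER_TB_FEE

# The 40-host vSphere cluster from above, where only 8 nodes serve
# (say) 50TB of shared storage to everybody else:
hosts, storage_tb = 40, 50
print(f"per-node model: ${per_node_licensing(hosts):,}")       # $200,000
print(f"capacity model: ${capacity_licensing(storage_tb):,}")  # $20,000
```

Flip the inputs (few fat storage nodes, little consumed capacity, or the reverse) and the gap moves, but the per-node curve never stops climbing with cluster size.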
- Strictly "compute and storage segregated". This scenario is mostly described in detail as part of section 3 above. I just wanted to highlight it separately, because I talked about it in an "Enterprise" context, while it's absolutely possible that anybody who needs to grow compute and storage separately isn't a good fit for hyperconvergence at all and will end up with "compute and storage segregated" instead of this "hype" trend. To make a long story short: Microsoft and VMware are either unreasonably expensive here, don't support this implementation scenario at all, or don't talk protocols the other peer can understand (and that brings the need for a "middle man" who can easily kill performance, raise unnecessary support questions, etc.). StarWind has a very flexible licensing policy, supports "compute and storage segregated" in full, and exposes all possible protocols. Add the "hyperconverged or storage alone, less expensive than DIY" and "24/7 support included instead of self-supported" messages and you get a "full house".
- Databases, either non-virtualized or virtualized. If server virtualization is maybe 80% of the market by unit count, leaving 20% to databases, the money split actually flips everything the other way around: 80% of the money belongs to DBs, leaving 20% to so-called generic server virtualization. Technically, every single configuration with a very low VM density and a very high per-VM performance requirement at the same time falls into this category. High performance is what makes this case (comparatively few nodes and few VMs) actually very different from the very small SMBs and ROBOs we discussed in section 1, while initially these scenarios sound somewhat similar.
VMware and Microsoft aren't a good fit here. The reasons are either technical (VMware VSAN can't be used under a non-virtualized SQL Server cluster, as such a cluster doesn't use vSphere; Microsoft can't provide virtual shared storage to Oracle RAC, as RAC doesn't talk SMB3; VMware Virtual SAN and Storage Spaces Direct scale well with many small consumers, but they don't really shine when one or a few consumers need all the IOPS from a single big unified namespace; etc.) or financial (licensing Windows Server Datacenter on every host of the cluster just for a few VMs, or for a storage-only tier, is a waste of money; we touched on this reason already).
StarWind is a perfect choice here, both in terms of software (the so-called "data mover" creating a virtual shared storage pool) installed on top of a set of servers the customer already has, and in terms of complete "ready nodes" for HCI or storage-only infrastructure. StarWind supports non-virtualized Windows Server environments, properly supports all possible storage protocols, and can provide high-performance shared storage (with a decent amount of RAM, even matching in-memory TPC numbers) to a single non-virtualized consumer or a few virtualized ones (VMs), thanks to StarWind's aggressive RAM-based write-back cache, storage optionally pinned to RAM completely, in-line 4KB dedupe, log-structuring, and the data locality concepts used. In the case of SQL Server (virtualized or not), instead of deploying the very expensive SQL Server Enterprise edition to build AlwaysOn Availability Groups and utilize "in-memory" DBs, the customer can use the much cheaper SQL Server Standard edition with AlwaysOn Failover Cluster Instances put on top of the "in-memory" storage StarWind provides (see the cost sketch after this scenario). As a result, the customer gets a much better $/TPC ratio with very similar or even better uptime metrics. The same goes for Oracle RAC and Oracle's need for the expensive Enterprise edition plus special "in-memory" licenses: StarWind replaces all of these requirements with the old licensing scheme re-used plus the in-memory storage we provide. The same goes for SAP R/4 vs SAP HANA. In the case of hardware purchased from StarWind, we'll also bring in our discount to make the new hardware more affordable to the customer, and we'll keep everything wrapped in our 24/7 support whichever route the customer chooses: software-only or HCA/storage appliance.
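As a rough illustration of the $/TPC argument: the per-core prices below are roughly SQL Server 2016 list prices (treat them as assumptions and check current Microsoft pricing), and the achieved transaction rates are pure placeholders, since they depend entirely on the workload and the storage underneath.

```python
# Rough $/transaction comparison of the two SQL Server HA designs above.
# Per-core prices ~ SQL Server 2016 MSRP (an assumption); the achieved
# TPS figures are placeholders -- measure your own workload.
ENTERPRISE_PER_CORE = 7_128  # needed for Availability Groups / in-memory at scale
STANDARD_PER_CORE = 1_859    # sufficient for Failover Cluster Instances

def dollars_per_tps(cores: int, price_per_core: int, achieved_tps: float) -> float:
    return cores * price_per_core / achieved_tps

# 16-core instance; suppose RAM-backed shared storage lets the Standard+FCI
# build reach, say, 85% of the Enterprise build's transaction rate.
ent = dollars_per_tps(16, ENTERPRISE_PER_CORE, achieved_tps=100_000)
std = dollars_per_tps(16, STANDARD_PER_CORE, achieved_tps=85_000)
print(f"Enterprise + AAG: ${ent:.2f} per TPS")  # ~$1.14
print(f"Standard + FCI:   ${std:.2f} per TPS")  # ~$0.35
```

Even if the cheaper stack only gets within shouting distance of the Enterprise build's throughput, the dollars-per-transaction ratio swings heavily its way; the whole bet is that fast shared storage closes most of the performance gap.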
Conclusion: StarWind Virtual SAN is a complementary rather than competitive solution to Microsoft Storage Spaces Direct and VMware Virtual SAN. In the case where we see that our shared customer doesn't need Microsoft S2D or VMware VSAN, we'll use our own software and that's it; if we see that the customer is going to benefit from a combined solution, we'll provide him with a stack where Microsoft S2D (or VMware VSAN) will co-exist with StarWind Virtual SAN on the same hardware. Technically, what we do here at StarWind is "plug the holes" in Microsoft's and VMware's product and positioning strategies in terms of the features they miss, and we take the whole of hyperconvergence another level up by making good-quality hardware even more affordable and 24/7 support a routine, "checkbox" feature. Yes, we'll split our hardware discount with you to make our "ready nodes" even cheaper than anything you'd build yourself on the DIY concept, and, yes, we'll "babysit" your clusters for you so you don't need to do anything yourself!
-
@KOOLER That's basically everything I've learned about the current situation with hyperconvergence and VSANs during the last couple of months. I really appreciate your introductory disclaimers. I'm just remembering one of your posts from a couple of months ago over at SW where you said something like "Actually, I'm the sales prevention guy here at StarWind."
Anyway, thanks for your excellent writeup.