Starwind AMA Ask Me Anything April 26 10am - 12pm EST
-
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
-
@LaMerk said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...
What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?
The ideal scenario for LSFS is slow spindle drives in RAID 5/50 and RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, and a faster synchronization process and better overall performance in non-write-intensive environments (at most 40% writes, i.e. roughly a 60% read / 40% write mix).
Do you see any real benefits on RAID 10 arrays? #OBR10
-
@Stuka said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
We're also bringing NVMe storage to a new level. With added iSER support, we improved hybrid environments with NVMe tiers. For all-NVMe environments, we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind that NVMe storage will need a 100 GbE interconnect.
Sounds great. I'm not really up on iSER. I know it is like an iSCSI upgrade; when would I / can I use it? What do I need to take advantage of that?
-
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe when we would choose one or the other if we are on Hyper-V?
S2D is designed for large-scale environments that are more focused on storage efficiency and less focused on performance. A 4-node setup is the minimum recommended production scenario, and 67%+ storage efficiency is achieved only with more than 6 nodes, AFAIR. If your primary goal is scale, choose S2D. If your primary goal is resiliency and performance, choose StarWind.
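To make the efficiency math concrete, here is a rough back-of-envelope sketch (my own illustrative model of a generic dual-parity layout with 2 parity columns out of N, not S2D's actual placement logic):

```python
# Illustrative only: usable fraction of an N-node layout with N-2 data
# columns and 2 parity columns. Real S2D resiliency types (three-way
# mirror, mirror-accelerated parity, LRC) place data differently.
for nodes in range(4, 9):
    efficiency = (nodes - 2) / nodes  # usable fraction of raw capacity
    print(f"{nodes} nodes: {efficiency:.1%} usable")
# 4 nodes: 50.0%, 6 nodes: 66.7%, 7 nodes: 71.4% -> 67%+ needs > 6 nodes
```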
-
@jt1001001 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
Yeah, you're absolutely right. This is exactly the one to start with.
-
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?
StarWind can utilize both L1 cache on RAM and L2 cache on SSDs.
In regards to a specific configuration, as an example: you can have a huge RAID 6 array for your coldest data, then a moderate RAID 10 10k SAS array for your day-to-day workloads, a small RAID 5 of SSDs for I/O-hungry databases, and then top it off with RAM caching. That being said, we do not provide automated tiering between these arrays; you would assign everything to each tier specifically. You could easily use Storage Spaces 2016 with StarWind for that functionality. Just make sure not to use SS 2012, since the storage tiering functionality on it was suboptimal and led us to the decision of not doing automated tiering in the first place.
Okay, so basically there are two cache layers, L1 RAM and L2 SSD array, and then you would have to "manually tier" anything beneath that?
Any options for sending cold storage out to cloud like S3 or B2, which is popular here?
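As a concrete picture of the two cache layers described above, here is a minimal sketch of an L1 (RAM) / L2 (SSD) LRU cache in front of a backing array; the class and its eviction policy are illustrative only, not StarWind internals:

```python
from collections import OrderedDict

class TwoLevelCache:
    """Minimal L1 (RAM) / L2 (SSD) read cache sketch with LRU eviction."""
    def __init__(self, l1_blocks, l2_blocks):
        self.l1 = OrderedDict()  # RAM layer: smallest, fastest
        self.l2 = OrderedDict()  # SSD layer: larger, slower
        self.l1_blocks, self.l2_blocks = l1_blocks, l2_blocks

    def read(self, block, backing_read):
        if block in self.l1:                  # L1 hit: serve from RAM
            self.l1.move_to_end(block)
            return self.l1[block]
        if block in self.l2:                  # L2 hit: promote back to RAM
            data = self.l2.pop(block)
        else:                                 # miss: fall through to the array
            data = backing_read(block)
        self._put_l1(block, data)
        return data

    def _put_l1(self, block, data):
        self.l1[block] = data
        if len(self.l1) > self.l1_blocks:     # demote coldest RAM block to SSD
            old_block, old_data = self.l1.popitem(last=False)
            self.l2[old_block] = old_data
            if len(self.l2) > self.l2_blocks: # SSD full: drop coldest block
                self.l2.popitem(last=False)

cache = TwoLevelCache(l1_blocks=2, l2_blocks=4)
cache.read(7, backing_read=lambda b: f"block-{b}")  # cold read hits the array
```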
-
@jt1001001 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
Are you going to do a full write-up on the experience on ML?
-
I will try!
-
@ABykovskyi said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
What about Storage Spaces Direct (S2D)? Can you talk about how Starwind is competing with that and what the differentiating factors are? Maybe when we would choose one or the other if we are on Hyper-V?
S2D is designed for large-scale environments that are more focused on storage efficiency and less focused on performance. A 4-node setup is the minimum recommended production scenario, and 67%+ storage efficiency is achieved only with more than 6 nodes, AFAIR. If your primary goal is scale, choose S2D. If your primary goal is resiliency and performance, choose StarWind.
That makes sense, thanks.
-
@LaMerk said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@jt1001001 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I have 2 identical physical hosts that my company is allowing me to set up as a proof of concept for Starwind. As we are a Hyper-V shop, I am looking for a good startup guide for this; it appears this is where I would begin: https://www.starwindsoftware.com/starwind-virtual-san-hyper-converged-2-nodes-scenario-2-nodes-with-hyper-v-cluster ??
Yeah, you're absolutely right. This is exactly the one to start with.
You can PM me, and we will help you build a PoC for your company.
-
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Stuka said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
We're also bringing NVMe storage to a new level. With added iSER support, we improved hybrid environments with NVMe tiers. For all-NVMe environments, we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind that NVMe storage will need a 100 GbE interconnect.
Sounds great. I'm not really up on iSER. I know it is like an iSCSI upgrade; when would I / can I use it? What do I need to take advantage of that?
So iSER is iSCSI Extensions for RDMA. In a nutshell, you get much better latency and use less CPU while getting more IOPS out of your storage. It's really important for all-flash, and almost crucial for flash/NVMe hybrid storage setups.
-
So what would be the use case, on Hyper-V, of the appliance vs native on the Hypervisor?
-
@Stuka said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Stuka said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@StrongBad said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does @StarWind_Software work with NVMe?
We're also bringing NVMe storage to a new level. With added iSER support, we improved hybrid environments with NVMe tiers. For all-NVMe environments, we're also adding NVMf (NVMe over Fabrics) support within a few weeks. Keep in mind that NVMe storage will need a 100 GbE interconnect.
Sounds great. I'm not really up on iSER. I know it is like an iSCSI upgrade; when would I / can I use it? What do I need to take advantage of that?
So iSER is iSCSI Extensions for RDMA. In a nutshell, you get much better latency and use less CPU while getting more IOPS out of your storage. It's really important for all-flash, and almost crucial for flash/NVMe hybrid storage setups.
I think we have a video of someone from @StarWind_Software talking about that from MancoCon, even...
-
@jt1001001 said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
So what would be the use case, on Hyper-V, of the appliance vs native on the Hypervisor?
The Virtual Storage Appliance would be a good choice to test StarWind's capabilities. The native one would be preferable for building a PoC on Hyper-V.
-
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@Reid-Cooper said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
How does the caching work on Starwind storage? I've read that I can use RAM cache, and obviously there are the disks in RAID. Can I have an SSD tier between the two? Can I have multiple tiers like a huge RAID 6 of SATA drives, a smaller RAID 10 of SAS 10Ks, a smaller SSD RAID 5 array and then the RAM on top?
StarWind can utilize both L1 cache on RAM and L2 cache on SSDs.
In regards to a specific configuration, as an example: you can have a huge RAID 6 array for your coldest data, then a moderate RAID 10 10k SAS array for your day-to-day workloads, a small RAID 5 of SSDs for I/O-hungry databases, and then top it off with RAM caching. That being said, we do not provide automated tiering between these arrays; you would assign everything to each tier specifically. You could easily use Storage Spaces 2016 with StarWind for that functionality. Just make sure not to use SS 2012, since the storage tiering functionality on it was suboptimal and led us to the decision of not doing automated tiering in the first place.
Okay, so basically there are two cache layers, L1 RAM and L2 SSD array, and then you would have to "manually tier" anything beneath that?
Any options for sending cold storage out to cloud like S3 or B2, which is popular here?
Yes, exactly. Or, if you prefer automated tiering, 2016 Storage Spaces plays nicely with StarWind.
We have several options to offload data to the cloud:
-Our own asynchronous replication, which allows you to do block-level replication to the cloud.
-We can provide a cloud gateway solution as part of our appliance infrastructure, which is plugged into the SATA bus and presents the cloud as a local cold storage tier.
-We have an offering with our partners from Veeam where you can offload data to the cloud using virtual emulations of physical tapes. This kills two birds with one stone: you store backups (for example) in the cloud and get ransomware protection, because cryptolockers don't target physical tapes (for obvious reasons).
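At a very high level, the first option (asynchronous block-level replication) boils down to changed-block tracking plus a periodic shipping pass. A minimal sketch, where upload_block is a hypothetical stand-in for any object-storage client call (e.g. an S3 PUT), not a StarWind API:

```python
dirty_blocks = set()  # changed-block tracking since the last sync cycle

def write_block(disk, block_no, data):
    disk[block_no] = data        # local write completes immediately
    dirty_blocks.add(block_no)   # mark the block for the next async pass

def replicate(disk, upload_block):
    for block_no in sorted(dirty_blocks):  # ship only what changed
        upload_block(block_no, disk[block_no])
    dirty_blocks.clear()

disk = {}
write_block(disk, 7, b"payload")  # fast local write, no cloud round-trip
replicate(disk, lambda n, d: print(f"uploaded block {n} ({len(d)} bytes)"))
```
-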
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
-We can provide a cloud gateway solution as part of our appliance infrastructure, which is plugged into the SATA bus and presents the cloud as a local cold storage tier.
That would be Aclouda. I got to hold one the other day.
-
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@TheDeepStorage said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
-We can provide a cloud gateway solution as part of our appliance infrastructure, which is plugged into the SATA bus and presents the cloud as a local cold storage tier.
That would be Aclouda. I got to hold one the other day.
The one and only. I really think it's a great addition to our appliance offering.
-
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@LaMerk said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
@scottalanmiller said in Starwind AMA Ask Me Anything April 26 10am - 12pm EST:
I'll jump in with one that I know but think it is worth talking about and that people are unlikely to know to ask about...
What are the benefits of the LSFS (Log Structured File System) and when would we want to choose it?
The ideal scenario for LSFS is slow spindle drives in RAID 5/50 and RAID 6/60. The benefits are: eliminating the I/O blender effect, snapshots, and a faster synchronization process and better overall performance in non-write-intensive environments (at most 40% writes, i.e. roughly a 60% read / 40% write mix).
Do you see any real benefits on RAID 10 arrays? #OBR10
Even though LSFS technology was initially designed for parity-type RAIDs, you could still benefit from utilizing LSFS on RAID 10 arrays: you get the snapshot options and can squeeze out decent performance in environments where reads dominate writes.
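For anyone unfamiliar with log-structured designs, here is a minimal sketch of the core idea (illustrative only, not LSFS internals): every overwrite becomes a sequential append and an index tracks the latest copy, which is what sidesteps the small-random-write penalty on parity RAID and makes snapshots cheap, since old versions remain in the log.

```python
class LogStructuredStore:
    """Toy log-structured block store: overwrites become sequential appends."""
    def __init__(self):
        self.log = []    # append-only log (sequential writes on disk)
        self.index = {}  # logical block number -> offset of latest copy

    def write(self, block_no, data):
        self.index[block_no] = len(self.log)  # supersede the old version
        self.log.append(data)                 # always a sequential append

    def read(self, block_no):
        return self.log[self.index[block_no]]

store = LogStructuredStore()
store.write(42, b"v1")
store.write(42, b"v2")            # a random overwrite becomes an append
assert store.read(42) == b"v2"    # old b"v1" stays in the log (snapshots)
```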
-
@LaMerk would there be a negative to using it in other circumstances? Or just a lack of benefits? I guess the real question is: what factors would make you avoid LSFS, given its benefits?