This will really lower competition in the SAN space. Only a few viable vendors left. Nimble was one of the few major players.
I disagree. What about Tintri, Kaminario, or Tegile? There are still plenty of vendors out there in Nimble's competitive tier. I mean they don't compare to HDS or IBM but again, Nimble is a completely different use case.
Honestly I think it would have made more sense for HPE to acquire Tintri for their portfolio at a reasonable price. Nimble is awesome for its own use case, but the acquisition seemed to be mostly about appeasing shareholders.
Unless they are shifting gears and trying to become more competitive as an all-around vendor in the small and medium markets (beyond adding Nimble, I'm also thinking of their Aruba purchase a couple of years ago).
Here's the latest: We had been doing some testing with Infinio while waiting for the SSD from Dell, using diskspd inside a VM that had VMDKs on multiple datastores that were LUNs on the SAN. Their read caching works very well if you need something for that purpose.
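For anyone curious, below is a rough sketch of the kind of diskspd run we were doing - the file paths, sizes, and I/O mix are made-up placeholders rather than our exact test parameters - with one test file per VMDK so each datastore/LUN gets exercised from inside the guest:

```python
# Rough sketch only - hypothetical paths and parameters, not the actual test settings.
# One target file per VMDK, where each VMDK lives on a different datastore/LUN,
# so the benchmark hits every LUN on the SAN from inside the VM.
import subprocess

targets = [r"D:\iotest.dat", r"E:\iotest.dat", r"F:\iotest.dat"]

for target in targets:
    subprocess.run([
        "diskspd.exe",
        "-c10G",   # create a 10 GiB test file
        "-b8K",    # 8 KiB I/O size
        "-d60",    # run for 60 seconds
        "-t4",     # 4 threads per target
        "-o32",    # 32 outstanding I/Os per thread
        "-r",      # random I/O
        "-w30",    # 30% writes / 70% reads
        "-Sh",     # disable software caching and hardware write caching
        "-L",      # collect latency statistics
        target,
    ], check=True)
```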
Dell sent us the 1.6 TB SED SSD mentioned above (not officially a supported configuration), but according to our benchmarking tools it actually made the SAN slower overall, and it would only apply the cache to one of the controllers for whatever reason. Dell understood and was willing to help us pursue additional options to make it right.
During the process of Dell trying to get us one of those drives that they thought would work in our SAN, I had mentioned that we should look at VMware VSAN. But they were quoting it using exact ready node configurations (dual CPU sockets per node), which would have put us over our vSphere licensed limit for this location (4 sockets), in addition to having to purchase VSAN Standard licenses. I suggested single-socket nodes and 4 hosts. There are SED options that will work with VSAN, but it really limits you in terms of choices.
As far as the end solution goes, it looks like we'll get bumped to Enterprise Plus in our vSphere licensing to take advantage of VM Encryption, as well as getting VSAN Standard for each host for a hybrid config. That way we can use larger spinning disks in the hosts and let the software handle the encryption. We will need an external KMS, which will also be provided as part of the solution.
The only thing to answer now is whether VxRail does the trick or we go with some kind of modified ready node / build-your-own host for VSAN. The SAN we have now, 2 R630 hosts, and two of the 10 GbE PowerConnect switches will go back to Dell as an exchange.
Starwind was a consideration, but it did not seem as easy to manage and maintain as VSAN for a 4-node configuration to get the storage capacity needed.
*Under the Hood: RAID arrays effectively use all of their devices in lock step. Whether you have two drives or eighty in your array, all of them go and look for one block of data together and they all wait for the slowest drive in the array to return its block before continuing on. When all drives are identical, they all read and write at the same time and we basically get full performance from every device.
A minor change to be more accurate
*Under the Hood: RAID arrays effectively use all of their devices in lock step. Whether you have two drives or eighty in your array, all of them go and look for one block of data together and they all wait for the slowest drive in the array to return its block before continuing on. When all of the drives are the same make and model, the differences between them are much smaller and we get closer to full performance from every device.
Of course, in more modern systems, the use of advanced LVMs instead of older partitions makes this a little more flexible, so there is more control over the process. But all of the core problems remain.
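To illustrate the lock-step point with a toy model (my own simplification, with invented latency numbers, not measurements from any real array):

```python
# Toy model of the lock-step behavior: a stripe is only done when every member
# drive has returned its block, so each stripe takes as long as the slowest drive.
# The latency numbers below are invented purely for illustration.

def stripe_time_ms(drive_latencies_ms):
    # The whole array waits on the slowest member before continuing.
    return max(drive_latencies_ms)

identical_drives = [5.0, 5.1, 4.9, 5.0]   # same make/model: tiny spread
mixed_drives     = [5.0, 5.1, 4.9, 12.0]  # one slower model drags the whole array down

print(stripe_time_ms(identical_drives))   # ~5.1 ms per stripe
print(stripe_time_ms(mixed_drives))       # 12.0 ms per stripe, set by the slowest drive
```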
Some vendors try to market this mechanism as "RAID virtualization", which isn't a completely crazy name due to the layers of abstraction, but it makes it sound valuable when, in reality, it is not. RAID virtualization when used for the purpose of enabling hot or live RAID array growth is generally a good idea. Used as a kludge to enable bad ideas, it remains bad.
I installed Hyper-V Core, and I'm having a tough time configuring it... The server is at a remote location, I connect to the remote network via VPN, and I am trying to use tools like Server Manager, Hyper-V Manager, and even 5nine. Server Manager itself works fine, but when I launch tools (such as Computer Management) from within Server Manager, I get random access denied messages, even after adding the server as a trusted host.
Why are you doing things over a VPN? Stop doing that, that's likely your problem.
Even better, this sounds like an MSP office he is working from, so they probably have all these VPN connections to various clients open.
That's super scary, MSPs using VPNs is how malware is going to suddenly take over the world. Cross contamination all over the place.
What you are thinking of is my recommendation for supported drives that are part of the system itself if you are going for a warranty-supported system like one from Dell or HPE. Bringing your own drives would push you to vendors like SuperMicro, where you can mix and match for the best performance, cost, and features.
I want to ask why we can't/shouldn't use consumer class drives in a Dell or HPE server, but I think the answer might be - because if you're paying for that level of support, why are you not going all in?
Is that right?
i.e. if you want to run your own performance/cost factors, you're better off starting with a SuperMicro, is that what you're saying?
One of the confusing pieces here is that Linux actually does things more clearly, but the Windows world is so confusing that if you carry that confusion into the Linux world, it makes things harder. Windows rarely uses or discloses the names of its product components. So "Windows Software RAID" is used to describe part of the Windows OS. But what if you have software RAID on Windows that is not Windows Software RAID? Windows admins typically have no good terminology to discuss this, even though it is common. They just... don't know what's going on and don't document it. But in Linux, we have the terms on hand all of the time (MD, ZFS, whatever). So the Linux side isn't as bad as it seems, but if you are used to a weird blend of generic names being used as if they are specifics from the Windows world, and assume that the Linux world is just as crazy, then it seems crazy.
That list makes hardware RAID sound safer than ZFS, which is probably not quite true. But what is the case is that the average implementation of hardware RAID is quite a bit safer than the average implementation of ZFS software RAID. Hardware RAID "handles everything for you," protecting you from most bad decisions. ZFS leaves all the nitty gritty details up to you, which makes it super, duper easy to mess something up and leave yourself vulnerable. This is exacerbated by the Cult of ZFS problem and the loads of misinformation swirling around its use. So the average person using ZFS is not even remotely prepared for what is needed to use it safely.
Some problems that we see people have when using ZFS without fully understanding storage:
Believing that ZFS doesn't use RAID (this is extremely common.)
Believing that RAIDZ is magic, rather than a brand name, and that normal RAID concerns do not apply. So we often see people implement RAID 5 in reckless, insane situations using "it's RAIDZ" as an excuse, as if RAIDZ weren't literally just a brand name for RAID 5 (see the sketch after this list).
Treating features common to all RAID systems as "unique" and believing that ZFS has feature after feature of protection that makes protecting against storage failure unnecessary.
Not understanding hot swap and blind swap differences and creating systems that they do not know how to address should a drive fail.
Believing that ZFS, being magic, is not at risk from power loss, and failing to protect caches from power issues, something that they are not normally used to dealing with because hardware RAID does this for them.
Not understanding the CPU and memory needs of ZFS, especially with features like dedupe and RAIDZ3.
Ignoring common RAID knowledge and thinking that using ZFS means not using mirroring technologies.
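To put some numbers on the RAIDZ point above, here is a quick back-of-the-envelope sketch (my own illustration, with made-up drive counts and sizes) showing that RAIDZ follows exactly the same single-parity math as RAID 5:

```python
# Single-parity math: RAID 5 and RAIDZ(1) both spend one drive's worth of capacity
# on parity and both survive exactly one drive failure. Drive sizes are examples only.

def single_parity_usable_tb(drive_count, drive_size_tb):
    # (n - 1) drives' worth of data, 1 drive's worth of parity.
    return (drive_count - 1) * drive_size_tb

for n in (4, 8, 12):
    usable = single_parity_usable_tb(n, 8)
    print(f"{n} x 8 TB drives -> {usable} TB usable, survives exactly 1 drive failure")
```

Same parity overhead, same single-failure tolerance, same rebuild exposure - the brand name doesn't change the math.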
The most common RAIN approach that I see is taking all disks in the pool, noting their nodal presence and using mirroring to distribute the data so that data mirrors never go to the same disk and/or the same node. So a little like a networked RAID 1E but with more flexibility and the option to add nodal separation and performance testing so that data moves to where it is used.
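A rough sketch of that placement rule (my own simplification with hypothetical node and disk names, not any particular vendor's algorithm):

```python
# Mirror placement sketch: pick two copies for each chunk of data so the second copy
# never lands on the same disk or the same node as the first. Node/disk names are
# hypothetical; real systems also weigh capacity, load, and data locality.
import random

pool = {
    "node1": ["n1-disk1", "n1-disk2"],
    "node2": ["n2-disk1", "n2-disk2"],
    "node3": ["n3-disk1", "n3-disk2"],
}

def place_mirror(chunk_id):
    # Primary copy on any disk, secondary copy on a different node entirely,
    # so losing one disk or one whole node never takes out both copies.
    primary_node = random.choice(list(pool))
    primary_disk = random.choice(pool[primary_node])
    secondary_node = random.choice([n for n in pool if n != primary_node])
    secondary_disk = random.choice(pool[secondary_node])
    return chunk_id, (primary_node, primary_disk), (secondary_node, secondary_disk)

for chunk in range(3):
    print(place_mirror(chunk))
```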
Are you aware of any open source RAIN systems?
Gluster and Swift
I think Ceph and Lustre may be two others.
Lustre is RAIN, but is closed. Gluster was the open replacement for Lustre.
Just a quick search showed that Lustre was GPL 2.0, not sure if that is new or not.
Oh wow, must be new. It was crazy expensive in 2006 when we were really investigating it. That's awesome.
Ah looks like it went open source in 2010.
Oh cool, so I remember things well then. I'm just out of date. Gluster probably forced their hand, why would anyone consider Lustre when it was closed source? The answer was probably... they wouldn't and didn't.
Yep, I'd assume that was the case. Especially when it is such a specific, and at the time, niche market.
And when Gluster went directly after them, even in name.