Scale Computing Keeps Storage Simple and Efficient
-
Hyperconvergence is the combination of storage, compute, and virtualization. In a traditional virtualization architecture, combining these three components from different vendors is complex and unwieldy without the right roster of experts and administrators. Hyperconverging them into a single solution can eliminate that complexity, if done correctly.
At Scale Computing we looked at the traditional architecture to identify the complexity we wanted to eliminate. The storage architecture that used SAN or NAS storage for virtualization turned out to be very complex: to present storage from the SAN or NAS to a virtual machine, we counted seven layers of object files, file systems, and protocols that I/O had to traverse on its way from the VM to the hardware. Why was this the case?
Because the storage system and the hypervisor came from different vendors, and were not designed specifically to work with each other, they needed these layers of protocol translation to integrate. Our solution for HC3 at Scale Computing was to own both the hypervisor (HyperCore OS) and the storage system (SCRIBE), so we could eliminate the extra layers and make storage work with VMs just like direct attached storage works with a traditional server. I call it a Block Access, Direct Attached Storage System because I like the acronym.
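To make the contrast concrete, here is a purely illustrative Python sketch that models each translation layer as a hop an I/O request has to pass through. The layer names are generic examples of a SAN/NAS virtualization stack, not an exact inventory of the seven layers we counted or of any particular vendor's products.

```python
# Purely illustrative: each list entry is one translation layer an I/O
# request crosses on its way from the VM to a physical disk.
TRADITIONAL_STACK = [
    "guest filesystem inside the VM",
    "virtual disk object file (e.g. VMDK/VHD)",
    "hypervisor datastore filesystem",
    "storage protocol (NFS or iSCSI)",
    "network transport to the array",
    "SAN/NAS volume manager and filesystem",
    "RAID and physical disk layout",
]

# The hyperconverged ideal: the VM's blocks map straight onto the
# pooled physical disks, like direct attached storage on a server.
DIRECT_STACK = [
    "guest filesystem inside the VM",
    "block device mapped onto the pooled physical disks",
]

def trace_io(stack, request="write block 42"):
    """Print every layer a single I/O request traverses."""
    for depth, layer in enumerate(stack, start=1):
        print(f"{depth}. {request} -> {layer}")

trace_io(TRADITIONAL_STACK)  # seven hops
trace_io(DIRECT_STACK)       # two hops
```

Every hop in the first list is a place where latency is added and CPU is spent on translation rather than on the workload.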
Why didn't other “hyperconverged” vendors do the same? Primarily because they are not really hyperconverged and they don't own the hypervisor. Just as in traditional virtualization architectures, storage and hypervisor from different vendors cannot be integrated efficiently for VMs. These are storage systems designed to support one or more third-party hypervisors, and they generally rely on virtual storage appliances (VSAs) built on more or less the same storage architecture as the traditional stack I described earlier.
VSAs not only add to the inefficiency, they also consume CPU and RAM that could otherwise run VM workloads. To mask these inefficiencies, these solutions lean on flash storage for caching, and in some cases they have added extra processing cards to their hardware nodes to offload the work. Unable to provide efficient storage on commodity hardware, they just can't compete with the low price AND storage efficiency of HC3.
The efficiency of the HC3 design, for both performance and low price, is only part of the story. We also designed the storage to combine all of the disks in a cluster into a single pool that is wide striped across the cluster for redundancy and high availability. This pooling allows complete flexibility of storage usage across all nodes. The pool can contain both SSD and HDD tiers, and both tiers are wide striped, highly available, and accessible across the entire virtualization cluster, even from nodes that have no physical SSD drives.
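As a rough illustration of the pooling idea, here is a minimal sketch, assuming a simplified placement model in which every block is written to two disks of the requested tier on two different nodes. The Disk class, the POOL layout, and place_block are invented for this example; SCRIBE's actual placement logic is internal to HyperCore.

```python
from dataclasses import dataclass

@dataclass
class Disk:
    node: str
    tier: str  # "ssd" or "hdd"

# One pool spanning every disk on every node. Note that node3 has no
# SSDs, yet its VMs can still use the cluster-wide SSD tier.
POOL = [
    Disk("node1", "ssd"), Disk("node1", "hdd"),
    Disk("node2", "ssd"), Disk("node2", "hdd"),
    Disk("node3", "hdd"), Disk("node3", "hdd"),
]

def place_block(block_id: int, tier: str) -> list[Disk]:
    """Wide stripe a block: put the primary copy and a redundant copy
    on disks of the requested tier on two different nodes."""
    candidates = [d for d in POOL if d.tier == tier]
    primary = candidates[block_id % len(candidates)]
    mirrors = [d for d in candidates if d.node != primary.node]
    return [primary, mirrors[block_id % len(mirrors)]]

# Each block lands on two nodes, so losing one node loses no data.
print(place_block(42, "ssd"))
print(place_block(43, "hdd"))
```

Because every node participates in every stripe, capacity and I/O spread across the whole cluster instead of sitting stranded on one box.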
To keep tiering both simple and efficient, we designed our own automated mechanism that steers the blocks of data with the highest I/O to the SSD tier. By default, the storage optimizes use of the SSD tier for the best overall efficiency, with nothing to manage. We wanted to eliminate the idea that someone would need a degree or certification in storage just to use virtualization.
We did recognize that users might occasionally need some control over storage performance, so we implemented a simple tuning mechanism that gives each disk in a cluster a relative level of SSD utilization priority. You can tune a disk up or down, on the fly, when you know it needs more or less I/O and SSD than the other disks. You don't need to know how much SSD it needs, only whether it needs more or less than the others; the automation takes care of the rest. There are twelve priority levels in all, from 0 (no SSD at all) up to 11 (all of the disk's data on SSD, capacity permitting).
[Screenshot from the original post: http://blog.scalecomputing.com/wp-content/uploads/2016/04/Screenshot-2016-04-19-13.07.06-300x154.png]
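For a sense of how those two pieces, automatic heat-based tiering and the 0-11 priority dial, could interact, here is a hedged sketch. The scoring formula, the names, and the tiny capacity number are all invented for illustration; HC3's real heat map is internal to HyperCore and SCRIBE. The sketch only honors the contract described above: priority 0 keeps data off SSD entirely, 11 pins it there, and everything in between lets I/O activity decide.

```python
SSD_CAPACITY = 3  # blocks that fit on the SSD tier; tiny on purpose

def ssd_resident(blocks):
    """blocks: list of (name, io_count, priority) with priority 0-11.
    Rank by I/O heat weighted by the owning disk's priority and keep
    the hottest entries on SSD, honoring the 0 and 11 extremes."""
    scored = []
    for name, io_count, priority in blocks:
        if priority == 0:
            continue                     # tuned to never use SSD
        if priority == 11:
            score = float("inf")         # pinned to SSD, space permitting
        else:
            score = io_count * priority  # heat scaled by relative priority
        scored.append((score, name))
    scored.sort(reverse=True)
    return [name for _, name in scored[:SSD_CAPACITY]]

workload = [
    ("db-journal", 9000, 8),   # hot and tuned up
    ("web-logs",   7000, 4),   # hot, default priority
    ("archive",    8000, 0),   # hot, but tuned to stay on HDD
    ("scratch",     100, 11),  # cold, but pinned to SSD
    ("backups",     500, 4),   # cool, default priority
]
print(ssd_resident(workload))  # ['scratch', 'db-journal', 'web-logs']
```

The administrator never specifies how much SSD anything gets; the relative priorities just reweight the automatic heat ranking.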
The result of all of these design considerations is an HC3 that is simple, efficient, easy to use, and low cost. We're different and we want to be. It's as simple as that.
Original Post: http://blog.scalecomputing.com/scale-computing-keeps-storage-simple-and-efficient/
-
It goes to 11, so it must be better than the ones that only go to 10.
.
.
.
.
.
.
Sorry, can't help myself sometimes.