Local Storage vs SAN ...
-
@scottalanmiller said in Local Storage vs SAN ...:
I'm going to make a video just for this thread BUT, watch this video first while I'm making it...
LOL - nice!
-
Just recorded a forty minute video on this, lol. Uploading now.
-
It's taking a long time to upload
-
@scottalanmiller said in Local Storage vs SAN ...:
It's taking a long time to upload
You know, there are no issues with plugins on my nodebb systems. You should really look closer at what your errors are.
-
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
It's taking a long time to upload
You know, there are no issues with plugins on my nodebb systems. You should really look closer at what your errors are.
I'm not uploading it HERE. I'm uploading it to YouTube.
-
@BraswellJay said in Local Storage vs SAN ...:
We are planning a server upgrade and I find myself faced with the question of whether a SAN is necessary.
No, a SAN will not be needed.
What SAN provides is shared storage. Today the preferred solution for shared storage is a vSAN. vSAN is basically local storage from several hosts networked together and replicated. It provides shared storage for the hosts. DRBD, Gluster and Ceph are simply technologies used to build a vSAN.
But maybe you don't need that either. Most don't.
The real question is: what are the business requirements and budget for the applications you run?
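For a concrete sense of what "local storage from several hosts networked together and replicated" looks like in practice, here is a minimal sketch of a two-node replicated Gluster volume. The hostnames, brick paths and volume name are placeholders, not anything from this thread, and a real deployment would want a third node or an arbiter to avoid split-brain:

```shell
# Run once from node1; glusterd must already be running on both
# nodes. All names below are hypothetical.
gluster peer probe node2

# Create a volume that keeps a full replica on each node's local disk.
gluster volume create vmstore replica 2 \
  node1:/bricks/vmstore node2:/bricks/vmstore
gluster volume start vmstore

# Each host then mounts the shared, replicated volume like any other
# filesystem and points the hypervisor's VM storage at it.
mount -t glusterfs localhost:/vmstore /var/lib/libvirt/images
```

Note there is no SAN protocol anywhere in that sketch; the hosts consume the replicated storage directly as a filesystem.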
-
-
@Pete-S said in Local Storage vs SAN ...:
But maybe you don't need that either. Most don't.
The real question is: what are the business requirements and budget for the applications you run?
This is key.
-
@Pete-S said in Local Storage vs SAN ...:
vSAN is basically local storage from several hosts networked together and replicated. It provides shared storage for the hosts. DRBD, Gluster and Ceph are simply technologies used to build a vSAN.
Technically none of those are vSAN. vSAN is a specific means of providing RLS using traditional SAN stack tech. It came about later than RLS. These three all predate vSAN concepts. Starwind does vSAN, for example.
With a vSAN approach you either have something directly on the hypervisor or more commonly a virtualized SAN appliance on a VM. This approach is only common on VMware because it is so lacking in basic features that it is necessary there, just like it is the only platform that requires hardware RAID - everyone else has software RAID built in.
The upside to vSAN is that it "looks" just like SAN in every sense and vendors trying to push you to SAN are fooled because all they see are the iSCSI or ATAoE or whatever adapters in place.
Traditional RLS like CEPH, DRBD, etc. don't have the SAN protocol layer making them simpler, faster, and more robust. There is little value in putting them in a VM so they tend to be deployed in the hypervisor directly.
Some, like Gluster, require a local driver and show up as being Gluster at the driver level. Others, like DRBD, mount as a local filesystem and are undetectable at that level of abstraction and appear as if you are using a regular local disk. Any system trying to detect local disk would believe that that is what it had.
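As an illustration of that "appears as a regular local disk" point, a minimal DRBD resource definition looks roughly like the fragment below. Every name, device and address in it is a made-up placeholder, not taken from any real setup:

```
# /etc/drbd.d/vmstore.res - hypothetical two-node resource
resource vmstore {
    device    /dev/drbd0;   # the replicated block device the host sees
    disk      /dev/sdb1;    # local backing disk on each node
    meta-disk internal;
    on node1 {
        address 10.0.0.1:7789;
    }
    on node2 {
        address 10.0.0.2:7789;
    }
}
```

Once brought up, /dev/drbd0 is formatted and mounted like any local disk; replication to the peer happens below the filesystem, which is why nothing above that layer of abstraction can tell the difference.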
So while the new VMware world vSAN approach has gotten a lot of attention as a way to "replace SAN" using RLS, it's mostly marketing buzz. RLS techniques are old and have been around long before virtualized SAN was imagined as having value. DRBD wasn't the first in 2007, but it was an early player in the enterprise space. But RLS goes back to the 1970s long before SAN or VMware.
Also worth noting, most SAN is actually vSAN. vSAN doesn't imply that it runs on the same box or is RLS. It's a different layer of concept.
SAN refers to block storage over a networking encapsulation protocol and has a set of protocols known as the SAN protocols that are normally used (FC, iSCSI, etc.)
vSAN is any SAN run virtualized (which is how production workloads are generally run, so lots of SAN is done this way.) Most shops building their own SANs will build vSAN without even thinking about it. It's just SAN in a VM. Being in a VM means it COULD be local, could be distant, it's not specified.
Neither SAN nor vSAN implies any redundancy, only that the storage is block and over a remote networking protocol, and vSAN then only implies that the workload has been virtualized, making it all but a useless term (we don't call servers vServers in other contexts.)
RLS refers to the block storage being local AND replicated between nodes. Good SAN deployments have to use RLS to make themselves reliable. This is what a proper 3PAR or Clariion deployment will do - they have multiple nodes and RLS replicates so that a full node can fail. Under the hood, RLS is the only mechanism for redundancy if it truly exists.
For SAN to be truly reliable, it needs RLS. vSAN being SAN, same thing. Lots of vSAN is chosen because it is the smarter way to do SAN, but it still needs RLS to have that value. Many vSAN products include RLS setup out of the box, making people confuse the two, but the concepts are different. Lots of vSAN deployments aren't redundant and lack RLS, some aren't even local.
Starwind, as an example, offers vSAN that is non-redundant and remote (just a traditional SAN but with good technology.) And they offer vSAN that is clustered and redundant, but still remote and just a SAN cluster. Or you can move the VMs onto the hosts that they are providing storage for and make it RLS. All three are as designed.
It's complicated because vendors like VMware use concept names, like vSAN, as product names and market them as meaning something unrelated to their terms.
So all that to say vSAN is definitely an option here, if it is an RLS architected vSAN. Or another way.... vSAN is a tool, RLS is a solution. When solutioning, vSAN is one component that could be used to achieve RLS.
-
@Pete-S said in Local Storage vs SAN ...:
DRBD, Gluster and Ceph are simply technologies used to build a vSAN.
They can be, but 99% of the time no SAN layer will be used. I've never seen Gluster or CEPH used to make a vSAN and DRBD mostly only in a lab. They are so much faster and more robust without the SAN layer that it's not popular to do that. So much of their value comes from removing the need and complexity of the networking layer since the storage itself is already replicated to each node. If you add the vSAN layer, you have to deal with a loss of redundancy (in the connection layer) and build that back in.
-
@scottalanmiller said in Local Storage vs SAN ...:
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
It's taking a long time to upload
You know, there are no issues with plugins on my nodebb systems. You should really look closer at what your errors are.
I'm not uploading it HERE. I'm uploading it to YouTube.
I meant to click reply to your prior post, the one with the link and no video preview. But my point stands.
-
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
@JaredBusch said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
It's taking a long time to upload
You know, there are no issues with plugins on my nodebb systems. You should really look closer at what your errors are.
I'm not uploading it HERE. I'm uploading it to YouTube.
I meant to click reply to your prior post, the one with the link and no video preview. But my point stands.
That's not an issue of a broken plugin. I can't find any plugin that does that. They removed the YouTube plugins from the repos.
-
@BraswellJay said in Local Storage vs SAN ...:
We are planning a server upgrade and I find myself faced with the question of whether a SAN is necessary. I know there have been many posts both here and on other forums about SANs being oversold in situations where they are not needed. My gut instinct is that my situation is one that really doesn't require a SAN, yet I still find myself unsure that I understand the various questions that I should be considering when making this decision.
I bought a copy of Linux Administration Best Practices by @scottalanmiller and am reviewing the chapters on system storage, in particular the parts on SANs, local storage and replicated local storage.
Our needs are not sophisticated. We will have only a handful of VMs. A file server, sql server, freepbx, inventory management system server, security system server and an internal application server for a few internal tools. For most of these we can afford some downtime in the event of a host failure. The exception is really the SQL server. While it would not be catastrophic for some downtime it would be far superior from a continuity perspective if it could fail over to a secondary host if necessary.
With that in mind, I had planned for two hosts so we could survive a failure of one of them. My primary confusion though is how would I accomplish replicated local storage. Is this functionality that the hypervisor must provide? The best practices book mentions several technologies (DRBD, Gluster, CEPH) that can be used for RLS but I would think that these would have to run in the hypervisor itself and not as separate VMs on the host. Is that correct?
In general, for relatively small environments such as mine, is it feasible to even attempt local storage replication? Our MSP has quoted an EMC SAN device to the tune of $25k so that VMs could be migrated between hosts with storage being on the SAN. What would an implementation without the SAN look like if I wanted to maintain the replication and the ability for the VMs to be migrated between hosts?
A Hyper-Converged Infrastructure setup would be the best way to go IMO.
Two nodes with decent AMD EPYC 16 Core 155 Watt+ CPU and 8x 64GB ECC if Rome/Milan based or 12x 64GB ECC if Genoa based.
We only do Microsoft's Storage Spaces Direct (S2D) and Azure Stack HCI with most of our HCI platforms being S2D.
The first place to start is here: www.liveoptics.com
Get a baseline for each VM. Daily highs and lows, weekly, and monthly. Get an idea of what the demands are on the current infrastructure.
With solid evidence on-hand, go to planning the HCI setup with enough IOPS to live today and into a 5 year future. That means knowing some company history to get an idea of growth.
-
@scottalanmiller said in Local Storage vs SAN ...:
vSAN is any SAN run virtualized
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
That is not the same as a virtualized storage area network.
-
@scottalanmiller said in Local Storage vs SAN ...:
@Pete-S said in Local Storage vs SAN ...:
DRBD, Gluster and Ceph are simply technologies used to build a vSAN.
They can be, but 99% of the time no SAN layer will be used. I've never seen Gluster or CEPH used to make a vSAN and DRBD mostly only in a lab. They are so much faster and more robust without the SAN layer that it's not popular to do that. So much of their value comes from removing the need and complexity of the networking layer since the storage itself is already replicated to each node. If you add the vSAN layer, you have to deal with a loss of redundancy (in the connection layer) and build that back in.
I don't think that there is such a thing as a SAN layer by definition.
A SAN is just a storage area network. It doesn't imply that it has to have SAS, iSCSI or Fibre Channel or any other protocol that is traditionally used by physical SAN units.
I'd say a SAN is an architecture more than a specific technology.
-
@Pete-S said in Local Storage vs SAN ...:
@scottalanmiller said in Local Storage vs SAN ...:
vSAN is any SAN run virtualized
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
That is not the same as a virtualized storage area network.
There's some contention around the "vSAN"/"VSAN" designation.
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
HCI means local storage on each node, a dedicated network fabric for node to node storage I/O, and resilience/redundancy for the disks based on how many nodes and what kind of performance is needed.
Fault Domains are at the disk and node level while some products allow for a form of Stretch Cluster which could be rack to rack, DC to DC, or intra-DC within a certain amount of latency (S2D/AzSHCI is 5ms or less).
-
@PhlipElder said in Local Storage vs SAN ...:
StarWind and VMware adopted the vSAN designation for their Hyper-Converged Infrastructure solution sets IIRC. Both did.
Both do vSAN. So it makes sense as they run SAN appliances on VMs.
But neither use it to designate hyperconvergence, which is important, because it doesn't.
Both of them offer HCI options, and both offer it using their vSAN products.
Both of them also offer "traditional" SAN that is virtualized using those vSAN products as well.
-
@PhlipElder said in Local Storage vs SAN ...:
HCI means local storage on each node, a dedicated network fabric for node to node storage I/O, and resilience/redundancy for the disks based on how many nodes and what kind of performance is needed.
Well, it doesn't quite mean all of that. It just means putting everything onto the individual node. It doesn't actually imply the network fabric, resiliency, redundancy, or anything like that. All of those concepts were layered onto the term much later by marketing teams. Hyperconvergence itself is much simpler, like all of these terms.
-
@Pete-S said in Local Storage vs SAN ...:
A SAN is just a storage area network. It doesn't imply that it has to have SAS, iSCSI or fiber channel or any other protocol that is traditionally used by physical SAN units.
I'd say a SAN is an architecture more than a specific technology.
It is, for sure. But there is a specific type of technology, though not one specific technology, required to make that architecture.
SAS doesn't qualify to be a SAN, for example. If you connect via SAS, that makes it local storage. If you use iSCSI, that makes it SAN attached.
SAS doesn't create a network, iSCSI does. Hence the difference. To be a storage area NETWORK, you need a network protocol. So the architecture designates the type of technology.
SANs came about to address the limitations of direct attach (SAS, SCSI, ATA, etc.) We already had shared storage before we had SAN. SAN let that shared block storage go onto a network. So you need the network protocol to make it a SAN.
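To make the "network protocol" distinction concrete, here is a rough sketch of exporting block storage over iSCSI with the Linux LIO target. The backing file, IQN and addresses are all hypothetical placeholders:

```shell
# On the "SAN" side: create a backstore and publish it as an iSCSI LUN.
# (targetcli from the LIO project; all names here are hypothetical.)
targetcli /backstores/fileio create disk0 /srv/disk0.img 100G
targetcli /iscsi create iqn.2024-01.com.example:target1
targetcli /iscsi/iqn.2024-01.com.example:target1/tpg1/luns \
  create /backstores/fileio/disk0

# On the initiator host: discover and log in over the network.
# The LUN then appears as a local-looking block device (e.g. /dev/sdX).
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m node -T iqn.2024-01.com.example:target1 -p 10.0.0.10 --login
```

A SAS cable, by contrast, is point-to-point attachment with no network session to discover or log in to, which is the difference being drawn here.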
-
@Pete-S said in Local Storage vs SAN ...:
I think that is incorrect. The definition is virtual storage area network. A software defined storage area network if you will.
So yes, in the same way that SAN technically refers to the network and not the devices or protocols, but is rarely used that way.
But in that sense, vSAN has existed as long as we've had software-controlled switches, because that's the "v" piece if we use it that way, and then all those Starwind and VMware products can't be vSANs. They are only a vSAN in the sense that the misuse of SAN means the appliance, not the network, and they are that appliance virtualized. In both cases, and all others not mentioned here, it is the virtualization of the appliance, not the network, that gets called vSAN by the vendors, engineers and end users.
In lots of cases, the network is virtualized too, just by the nature of how it is used. But it's virtualized whether vSAN is used or not. That's just SDN.