Gluster and RAID question
-
@biggen said in Gluster and RAID question:
I have no use case for it. But i figure just experimenting with it for a bit can't hurt.
It's cool tech, for sure.
-
@biggen said in Gluster and RAID question:
@scottalanmiller said in Gluster and RAID question:
@biggen said in Gluster and RAID question:
@scottalanmiller No problem. So I'm guessing if one really wanted to use the "distributed" type, then RAID would really be required if you wanted redundancy. I think I'm wrapping my head around this now.
I think you are thinking about this all wrong.
First, you never use RAIN and RAID together. So anything that's making you think of using RAID with Gluster means you are thinking about it fundamentally wrong. It's not that it's physically impossible, but that it makes no sense.
Second, you never choose distributed if you want redundancy. So never would there be a case where you'd have the distributed type AND want redundancy. You'd choose the redundancy option instead.
So this takes me all the way back to my OP:
Are Distributed Gluster deployments typically in Production?
I guess if one didn't care about redundancy, that would be the only use case for that specific architecture, because the only way to provide it would be with RAID, and you say that running RAID under RAIN isn't the way to ever run RAIN to begin with. So using the "distributed" type of Gluster with RAID to provide redundancy, like I was thinking, would be a poor choice.
You can do distributed and replicated for a volume. It's not just one or the other.
-
@stacksofplates said in Gluster and RAID question:
You can do distributed and replicated for a volume. It's not just one or the other.
They call the options Distributed, Replicated, and Distributed Replicated.
It's a bit like having RAID 0, RAID 1, and RAID 10.
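To make the analogy concrete, here is a rough sketch of the three layouts on the gluster CLI (node names and brick paths below are made up for illustration):
# Distributed only (the RAID 0 analogue): files spread across bricks, no redundancy.
gluster volume create dist-vol node1:/data/brick1 node2:/data/brick1
# Replicated (the RAID 1 analogue): every file lives on both bricks.
gluster volume create repl-vol replica 2 node1:/data/brick1 node2:/data/brick1
# Distributed Replicated (the RAID 10 analogue): bricks are grouped into replica pairs in the order listed, and files are distributed across the pairs.
gluster volume create distrepl-vol replica 2 node1:/data/brick1 node2:/data/brick1 node3:/data/brick1 node4:/data/brick1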
-
Played around with it a bit today. Sharing it out via Samba seems a little complicated since you also need to layer CTDB on top. Is that the standard way to share it out to Windows clients?
-
@biggen said in Gluster and RAID question:
Sharing it out via Samba seems a little complicated since you also need to layer CTDB on top.
Why do you need that?
-
@biggen said in Gluster and RAID question:
Is that the standard way to share it out to Windows clients?
No, that would not be common. The common way is to have Samba in a VM that uses Gluster as a backing share.
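As a sketch of that pattern (host names, volume name, and mount point here are placeholders), the file server VM mounts the Gluster volume with the FUSE client, for example via an /etc/fstab entry like:
# backup-volfile-servers is only used to fetch the volume layout if the first node is down at mount time
glusternode1:/gv0  /srv/share  glusterfs  defaults,_netdev,backup-volfile-servers=glusternode2  0 0
Samba inside that VM then just exports /srv/share like any other local directory.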
-
@scottalanmiller Ok, it seems most of the tutorials show it being done with CTDB. I've found a couple that just create a standard Samba share and export it. I'll play with that route.
So would Samba be installed on each node and the volume shared out from each? To which Samba node do the clients connect?
-
@scottalanmiller it sounds like explaining the whole stack might be in order, and where Gluster/etc fall in that stack.
-
Creating a two-node Gluster volume was real easy. It's the sharing that I'm having an issue with.
Do you install Samba on both nodes and create identical smb.conf files in order to share out the volume? Which node are the Samba clients supposed to connect to? Does it matter?
-
@biggen said in Gluster and RAID question:
Creating a two-node Gluster volume was real easy. It's the sharing that I'm having an issue with.
Do you install Samba on both nodes and create identical smb.conf files in order to share out the volume? Which node are the Samba clients supposed to connect to? Does it matter?
If I am understanding WTF you are trying to do, no. You create the Gluster volume and then go into your hypervisor and attach that volume as the datastore, just like you would do for a RAID array.
-
@JaredBusch Once the volume is up and running, how the heck does one share it out? That's what I'm trying to do. I have a successful two-node system running:
joe@glusternode1:/mnt$ sudo gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: ab19d123-eb34-4186-8a03-316a3fc790e3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusternode1:/data/xvdb1/brick
Brick2: glusternode2:/data/xvdb1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
That volume must now be mounted "somewhere" to access it. How do I mount it so Windows clients can access it? Do I simply mount the share in one of the nodes under
/mnt/big_ole_gluster_space
and then share out that mount point via Samba from that same Gluster node?
-
@biggen said in Gluster and RAID question:
@JaredBusch Once the volume is up and running, how the heck does one share it out? That's what I'm trying to do. I have a successful two-node system running:
joe@glusternode1:/mnt$ sudo gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: ab19d123-eb34-4186-8a03-316a3fc790e3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusternode1:/data/xvdb1/brick
Brick2: glusternode2:/data/xvdb1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
That volume must now be mounted "somewhere" to access it. How do I mount it so Windows clients can access it? Do I simply mount the share in one of the nodes under
/mnt/big_ole_gluster_space
and then share out that mount point via Samba from that same Gluster node?
The preferred way is to use the GlusterFS FUSE client. Last I knew, it's the only one that automatically handles failover and HA.
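As a minimal sketch using the volume above (package names vary by distro: glusterfs-client on Debian/Ubuntu, glusterfs-fuse on RHEL/CentOS), a FUSE mount on a client or VM looks something like:
sudo apt install glusterfs-client
sudo mkdir -p /mnt/gv0
# the backup server is only a fallback for fetching the volume layout at mount time
sudo mount -t glusterfs -o backup-volfile-servers=glusternode2 glusternode1:/gv0 /mnt/gv0
Once mounted, the FUSE client talks to all of the bricks directly, so on a replicated volume losing one node does not drop the mount.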
-
@biggen said in Gluster and RAID question:
@JaredBusch Once the volume is up and running, how the heck does one share it out? That's what I'm trying to do. I have a successful two-node system running:
joe@glusternode1:/mnt$ sudo gluster volume info
Volume Name: gv0
Type: Replicate
Volume ID: ab19d123-eb34-4186-8a03-316a3fc790e3
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glusternode1:/data/xvdb1/brick
Brick2: glusternode2:/data/xvdb1/brick
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
That volume must now be mounted "somewhere" to access it. How do I mount it so Windows clients can access it? Do I simply mount the share in one of the nodes under
/mnt
and then share out that mount point via Samba?
If you want to experiment with it properly, you'll want to follow https://docs.gluster.org/en/latest/Administrator Guide/Accessing Gluster from Windows/
Creating the storage is just the first piece. If you want to share the storage and have it be fault tolerant, then there are a whole lot of other hoops to jump through. Which is also why everyone is saying to just mount it on one of the Gluster boxes and create a normal Samba share.
-
This was the piece of the puzzle I was missing. It explains at the bottom how to configure a simple Samba share.
When one types "samba gluster" into Google, this unwieldy page is the very first hit. And since it's from the official Gluster docs, it makes it seem that that is the RIGHT way to do it. That was my confusion when I asked earlier about CTDB.
If one doesn't want to mess with CTDB, then sharing out a simple Samba share on one of the Gluster nodes is real easy, as I just found out. There is no fault tolerance as far as Samba goes, however, since you are only dealing with a single Samba server.
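For anyone following along, the simple single-node version (share name and user below are made up) boils down to mounting the volume locally and exporting the mount point:
sudo mkdir -p /mnt/big_ole_gluster_space
sudo mount -t glusterfs localhost:/gv0 /mnt/big_ole_gluster_space
Then add an ordinary share block to /etc/samba/smb.conf on that node:
[gluster]
    path = /mnt/big_ole_gluster_space
    browseable = yes
    read only = no
    valid users = joe
And finally create the Samba user and restart the service (smbd on Debian/Ubuntu, smb on RHEL):
sudo smbpasswd -a joe
sudo systemctl restart smbd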
-
@biggen said in Gluster and RAID question:
And since it's from the official Gluster docs, it makes it seem that that is the RIGHT way to do it.
It's a bit of a conceptual break. Gluster is the wrong place to be looking. That's a filesystem. In no other circumstance, ever, do you look at filesystem documentation (NTFS, XFS, EXT4, etc.) to ask about SMB networking.
So looking at Gluster in this way will be confusing because it doesn't really make any sense. Gluster is a filesystem. Samba is an SMB server. It just reads Gluster the same as any other filesystem if you want.
How do you share out from XFS, ZFS, NTFS, etc.? You do it the same way with Gluster. However you answer the first part is how you will normally answer the second part.
-
@biggen said in Gluster and RAID question:
There is no fault tolerance as far as Samba goes, however, since you are only dealing with a single Samba server.
That's because you are "being weird" and acting like Gluster is replacing your hypervisors and virtualization. Since when do we build file servers without virtualizing them? Virtualize Samba and you solve it that way at the platform level. Or make Samba fail over the way that Samba normally does.
Basically you are acting like Gluster is a special case, but it is not. Ignore that Gluster is the mechanism that you are using and everything gets really simple. Get fixated on Gluster, and you'll be looking for Gluster-specific answers to all the normal problems.
It's a bit like looking for a guide on how to drive a Ford. But you'll never find one. You'll just find guides to driving cars. The brand of car just doesn't matter, it's all the same. If you are convinced that you need a guide that is specific to steering a Ford, you'll be forever lost and confused thinking that it can't be done when, in reality, it's so simple that no guide exists outside of basic steering guides.
-
@scottalanmiller that's why I suggested that you make a post about the entire stack:
Hardware
hypervisor
storage (or vice versa with hypervisor)
VMs
storage inside VM
share from inside VM
-
I appreciate the explanation guys. Not being in the IT field (directly) for some time means I'm playing catch up with a lot of the stuff.
Let's say, as a hypothetical, one wanted to build out a 500TB Gluster cluster to be used as a backup target for VMs. It looks like you need at least 3 nodes to build out the Gluster cluster. Then, of course, you need an additional node for the hypervisor - so 4 nodes minimum.
On the three Gluster nodes, would you be installing a Linux OS directly to them (bare metal)? I know from reading here physical servers have fallen out of style. Is this a use case where a physical server still serves a purpose?
Once the Gluster volume is up and running, you could then connect the hypervisor to the cluster, assuming the hypervisor had Gluster client support, and then you have the massive cluster attached to the hypervisor as an SR to be used appropriately.
I'm just wondering if something like this would work.
-
@biggen said in Gluster and RAID question:
I appreciate the explanation guys. Not being in the IT field (directly) for some time means I'm playing catch up with a lot of the stuff.
Let's say, as a hypothetical, one wanted to build out a 500TB Gluster cluster to be used as a backup target for VMs. It looks like you need at least 3 nodes to build out the Gluster cluster. Then, of course, you need an additional node for the hypervisor - so 4 nodes minimum.
On the three Gluster nodes, would you be installing a Linux OS directly to them (bare metal)? I know from reading here physical servers have fallen out of style. Is this a use case where a physical server still serves a purpose?
Once the Gluster volume is up and running, you could then connect the hypervisor to the cluster, assuming the hypervisor had Gluster client support, and then you have the massive cluster attached to the hypervisor as an SR to be used appropriately.
I'm just wondering if something like this would work.
Would it work? Of course. It wouldn't be very efficient, though.
Gluster would make more sense as the storage for VMs. No matter what size, you really don't need 3 boxes of drives just for backups till your environment is absolutely gigantic!
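As a hedged sketch of the hypervisor side, assuming a KVM/libvirt host (XCP-ng and others have their own equivalents), the simplest route is to mount the volume with the FUSE client and point a directory-backed storage pool at it:
sudo mkdir -p /var/lib/libvirt/gluster
sudo mount -t glusterfs glusternode1:/gv0 /var/lib/libvirt/gluster
virsh pool-define-as gv0pool dir --target /var/lib/libvirt/gluster
virsh pool-start gv0pool
virsh pool-autostart gv0pool
libvirt also has a native Gluster pool type that skips the FUSE mount, but the directory approach is the easier one to experiment with.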
-
@travisdh1 Great, thanks for that info. When you say storage for VMs, are you speaking of a SAN? So your VMs are running off of the Gluster volume?
Yeah, I thought 3 nodes of storage + the hypervisor node sounded like a ton of equipment. I know you can buy single boxes that have 2-4 nodes inside of them to reduce the footprint.