AetherStore looks amazing, what about...
-
Along these lines, though, where you are looking at slow FastEthernet connections: we've talked to ÆtherStore about making this manageable for WAN links so that, combined with Pertino, OpenVPN, or IPsec VPNs, we could have storage on our desktops all over the country. So it is something that NTG is very interested in too, since we have one pool of machines at HQ, one in the lab, and then a pool of machines located in homes all over the place.
We'd love to throw a big WD Green drive, say 4TB, into every desktop that we ship out to staff as a second drive, put ÆtherStore on it, and combine it all over our Pertino network for a low-performance, high-capacity backup target. That would be awesome.
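Rough math on what that would buy, assuming a hypothetical 50-desktop fleet and the 4x replication that ÆtherStore does by default (more on that below):

```python
# Back-of-the-envelope sizing for a desktop-backed backup pool.
# Hypothetical figures: 50 desktops, one 4 TB drive each,
# and the 4x replication discussed later in this thread.

desktops = 50
drive_tb = 4.0
replication = 4

raw_tb = desktops * drive_tb
usable_tb = raw_tb / replication

print(f"raw pool:    {raw_tb:.0f} TB")     # 200 TB
print(f"usable pool: {usable_tb:.0f} TB")  # 50 TB after 4x replication
```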
-
-
@Breffni-Potter said:
- Only the Administrator has access to the mounted AetherStore drive.
How is this defined? What if there is a pot of data which is not production critical but still worth attempting to back up, e.g. a photo/video library at a non-profit that the staff team could have read-only access to, or a 7+ year archive that users can have read-only access into, historical finance data for example?

So ÆtherStore itself is a block device that has a "local drive appearance." Think of it as an invisible SAN under the hood that shows up as a local drive on one machine, which the admin controls. That "drive" is, for all intents and purposes, a local drive and acts just like one.
So while the admin alone has access to that drive, the admin can choose to share it via SMB and make it a network share. Now, suddenly, your ÆtherStore drive has turned into the backing store for a file server that you can share out however you like. Want to make it a public ISO and software repository for the whole company? Make it "Read" for everyone. Want to let anyone store stuff there? Open up write permissions. It's not NTFS yet (I'm pushing, trust me), so for now you are limited to LUN provisioning (making separate block devices for each security need) and the granularity of SMB permissions. But combining those two gives you quite a bit of power to cover a lot of really useful use cases.
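For example (the share name and drive letter here are made up), turning the mounted drive into a read-only company repository is a one-liner with the built-in Windows `net share` command:

```python
# Hypothetical sketch: publish an AetherStore-mounted drive (here E:)
# as a read-only SMB share using the built-in Windows `net share`
# command. Run as the administrator who owns the mounted drive.
import subprocess

subprocess.run(
    ["net", "share", "SoftwareRepo=E:\\", "/GRANT:Everyone,READ"],
    check=True,  # raise CalledProcessError if the share fails
)
```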
-
@scottalanmiller Yes, because then you have an insanely cheap, geographically irrelevant backup scenario. A lot of companies would jump at the chance for that. Building burnt down? Not a problem, there are 10 others.
The lack of NTFS is a shame, but hopefully that will be addressed.
The scheduling, yes, I agree, needs to be a hidden-away "expert mode" feature, but for non-critical data I can see it being useful.
-
@Breffni-Potter said:
The lack of NTFS is a shame, but hopefully that will be addressed.
I sat with engineering to talk about this specifically. They are very aware of the need and the priority of this. I spent a lot of time talking about why this mattered, how it would be used, etc.
-
@Breffni-Potter said:
The scheduling, yes, I agree, needs to be a hidden-away "expert mode" feature, but for non-critical data I can see it being useful.
In the same vein, I'm pushing for (and will certainly be getting) a similar feature for controlling the replication level. Right now it is four-times replication. But what if it is just a cache and I don't want replication at all, or I'm on RAID 1 sets and just want 2x replication? Or what if the devices are really fragile (hopefully only from a network-visibility perspective) and I want 8x replication? I want control of that under an "experts" area, and that, I'm told, I will definitely be getting.
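To make the capacity side of that tradeoff concrete (the pool size here is just an assumed figure):

```python
# Hypothetical tradeoff table for a configurable replication level:
# usable capacity is raw/r, and r copies tolerate the loss of r - 1.

raw_tb = 100.0  # assumed raw pool size

for r in (1, 2, 4, 8):
    print(f"{r}x replication: {raw_tb / r:6.1f} TB usable, "
          f"tolerates losing {r - 1} of {r} copies")
```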
-
I've seen and heard about AetherStore for a while. It is very interesting what they are doing with it. Oddly enough, it seems eerily similar to a program from years ago called Medley 97 (which shared storage, as well as CPU and RAM, between machines on a local network).
I'm glad to see things like this making their way back into modern times.
-
@dafyre said:
I've seen and heard about AetherStore for a while. It is very interesting what they are doing with it. Oddly enough, it seems eerily similar to a program from years ago called Medley 97 (which shared storage, as well as CPU and RAM, between machines on a local network).
I'm glad to see things like this making their way back into modern times.
It is a lot more like Gluster.
-
I wonder how this would do (or even if it could be set up) as shared storage or CSVs for a Windows failover cluster.
-
@dafyre said:
I wonder how this would do (or even if it could be set up) as shared storage or CSVs for a Windows failover cluster.
No, it is not that kind of storage. How would you present it, since it can't be shared as a SAN (iSCSI, FC, etc.)?
Even if you could, it is not architected for that yet. Eventually this is a real possibility, but not today.
-
The number of ways this could break catastrophically actually blows my mind!
You'd need a large dependable desktop fleet for this to make much sense. $0.02.
-
@MattSpeller said:
The number of ways this could break catastrophically actually blows my mind!
You'd need a large dependable desktop fleet for this to make much sense. $0.02.
It's quadruple-mirrored network RAID 1. It's pretty reliable with minimal effort, and that's if you use stock drives. Do RAID 1 on the desktops and you move to RAID 1{1} at 4x2 mirroring (eight total copies).
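Back-of-the-envelope on why stacking mirrors matters, assuming independent failures and a made-up per-copy loss rate:

```python
# Rough durability estimate: with n independent copies, data is lost
# only if every copy fails in the same window. p is a made-up annual
# per-copy loss probability (dead drive, stolen machine, etc.); real
# failures are not perfectly independent, so treat this as an upper
# bound on how good it can get.

p = 0.05  # assumed 5% annual chance one copy disappears

for n, label in [(4, "4x network mirroring"),
                 (8, "4x network mirroring + RAID 1 per node")]:
    print(f"{label}: annual loss probability ~ {p ** n:.1e}")
```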
-
@scottalanmiller said:
It's quadruple-mirrored network RAID 1. It's pretty reliable with minimal effort, and that's if you use stock drives. Do RAID 1 on the desktops and you move to RAID 1{1} at 4x2 mirroring (eight total copies).
Good lord.
Wouldn't this exponentially increase your network traffic as well? Re-Sync'ing all those mirrors all the time? Yuck!
I'm a bit conservative on this one; I'll wait and see how it plays out.
-
@MattSpeller said:
Wouldn't this exponentially increase your network traffic as well? Re-Sync'ing all those mirrors all the time? Yuck!
This is why I'm after the scheduling, so it can only hog the network after hours.
-
@MattSpeller said:
Wouldn't this exponentially increase your network traffic as well? Re-Sync'ing all those mirrors all the time? Yuck!
Why would they resync? What are you picturing happening? It's block-level replication, so they stay in sync. On a normal switched GigE network this would create completely unnoticed traffic for normal amounts of storage. Remember, "network traffic" is a weird concept here, as this would only create traffic peer to peer amongst four nodes. So what network impact are you imagining?
-
@Breffni-Potter said:
This is why I'm after the scheduling, so it can only hog the network after hours.
I think the amount of bandwidth needed for storage is generally overestimated. GigE is enough for some pretty hefty SAN connections, and here we are talking about just change replication for non-primary storage. Unless you are doing something weird, traffic will be pretty small.
And, of course, replication happens when the storage happens. Run a backup at night and the sync is going to happen at night too, while the writes are going on.
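A quick sanity check with assumed figures (the backup size and the three-extra-copies estimate are mine):

```python
# Sanity check on replication traffic, all figures assumed.
# A nightly backup writes backup_gb to the store; with 4x replication
# roughly three extra copies cross the wire, spread peer to peer,
# so a single GigE link is the worst case.

backup_gb = 100        # assumed nightly backup size
extra_copies = 3       # 4x replication ~= 3 copies beyond the writer
gige_mb_per_s = 110    # realistic GigE throughput, MB/s

total_gb = backup_gb * extra_copies
hours = total_gb * 1024 / gige_mb_per_s / 3600
print(f"{total_gb} GB replicated, ~{hours:.1f} h worst case on one GigE link")
```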
-
@Breffni-Potter said:
This is why I'm after the scheduling, so it can only hog the network after hours.
I think what you want is just an RSYNC group.
-
@scottalanmiller said:
Why would they resync? What are you picturing happening? It's block level replication. So they stay in sync. On a normal GigE switch network this would create completely unnoticed traffic for normal amounts of storage. Remember "network traffic" is a weird concept as this would only create traffic peer to peer amongst four nodes. So what network impact are you imagining?
Re-sync was a bad term; they'd need to sync up any blocks that changed, absolutely. Ditto network: you're right, it'd be peer to peer for most of it.
Maybe I've just not had my coffee, or something, but this whole concept gives me the creeps.
-
@MattSpeller said:
You'd need a large dependable desktop fleet for this to make much sense. $0.02.
Keep in mind that nothing makes you use this on a desktop rather than a server.
-
@MattSpeller said:
Maybe I've just not had my coffee, or something, but this whole concept gives me the creeps.
No different than most modern storage. This is exactly how Gluster, Ceph, or Exablox work.