How would you build a File server with 170TB of Usable Storage?
-
Also worth noting, the OP of the "RAID 10 Must use..." thread is specifically about backup storage.
So the production file system is already set up on something.
For backup purposes only, I'd really consider tapes for this.
-
Which also raises the question: is the 170TB one full backup, or some combination?
-
If I just needed the storage, SuperMicro makes 45-drive and 90-drive 4U cases. Building one of the 45-drive models with 44 x 14TB drives and a PCIe Optane card for cache came in just under $30k. If we're talking just storage, and not performance or backup, it really is that easy.
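Back-of-the-envelope math on that 44 x 14TB build, just to show the scale. This is a sketch only: the RAID layouts, the lack of hot spares, and the TB-to-TiB conversion are my assumptions, not anything out of SuperMicro's configurator.

```python
# Back-of-the-envelope capacity math for a 44 x 14TB build.
# Assumptions: decimal TB as marketed, no filesystem overhead counted,
# no hot spares, and the layouts below are illustrative only.

DRIVES = 44
SIZE_TB = 14

raw_tb = DRIVES * SIZE_TB                           # 616 TB raw

# RAID 10: half the raw capacity goes to mirror copies.
raid10_usable = raw_tb / 2                          # 308 TB

# RAID 6 in four 11-drive groups: two parity drives lost per group.
groups, per_group = 4, 11
raid6_usable = groups * (per_group - 2) * SIZE_TB   # 504 TB

# Marketed TB -> TiB the OS will actually report (10^12 / 2^40).
tb_to_tib = 10**12 / 2**40

for name, tb in [("raw", raw_tb), ("RAID 10", raid10_usable), ("RAID 6 x4", raid6_usable)]:
    print(f"{name:>10}: {tb:5.0f} TB  (~{tb * tb_to_tib:5.0f} TiB)")
```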
-
@DustinB3403 said in How would you build a File server with 170TB of Usable Storage?:
I'd use multiple hosts and something like Starwinds vSAN so that I didn't have a single massive box to worry about.
Sure I'd have more storage than I "need" due to how vSAN has to work on any given host, but I'd also reduce the risk of everything just taking a dive and being down.
The alternative is to use multiple SANs and present iSCSI targets to your hypervisor.
This is not how you build 170TB of usable storage. A vSAN means buying way more than 170TB of raw capacity so that the storage stays online when one of those nodes goes down.
-
@DustinB3403 said in How would you build a File server with 170TB of Usable Storage?:
I'd use multiple hosts and something like Starwinds vSAN so that I didn't have a single massive box to worry about.
With StarWind's vSAN, you'd need 2 x storage nodes with 170TB of storage each. From what I gather, StarWind's vSAN is essentially network RAID-1.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
My build would use some flavor of Linux, based on the Storinator Q30 Enhanced and configured for RAID 10.
This I would absolutely avoid. The Storinator has no place in a production SMB environment. It's designed exclusively for use as a RAIN node in an N+X cluster; it's disposable / ephemeral only. Any attempt to use it another way, which includes any configuration with RAID, goes against its intended use and will lead to support issues. It is not hot-swappable, it is not designed to be reliable, and it is not meant to be serviced, all of which are top considerations for a storage node. Other than capacity, it's actually the exact opposite of what you would seek in a storage device.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
From what I gather, StarWind's vSAN is essentially network RAID-1.
Not essentially. Is.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
With StarWind's vSAN, you'd need 2 x storage nodes with 170TB of storage each.
With any RAID configuration where you want chassis failover, you need at least this.
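To put rough numbers on it, here is a sketch of what a two-node mirrored setup ends up costing in raw capacity, assuming network RAID-1 between the nodes and RAID 10 inside each node; the 14TB drive size is just for illustration.

```python
# Raw capacity needed for 170 TB usable on a two-node mirrored setup.
# Assumptions: network RAID-1 between two nodes, RAID 10 inside each node,
# 14 TB drives chosen purely for illustration.

usable_tb = 170
local_raid10_factor = 2      # RAID 10: raw = 2x usable per node
node_mirror_factor = 2       # two full copies of the data, one per node

raw_per_node = usable_tb * local_raid10_factor    # 340 TB raw in each node
raw_total = raw_per_node * node_mirror_factor     # 680 TB raw across the cluster

drive_tb = 14
drives_per_node = -(-raw_per_node // drive_tb)    # ceiling division -> 25 drives

print(f"{raw_per_node} TB raw per node, {raw_total} TB raw total, "
      f"~{drives_per_node} x {drive_tb}TB drives per node")
```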
-
@DustinB3403 said in How would you build a File server with 170TB of Usable Storage?:
Which also raises the question: is the 170TB one full backup, or some combination?
Not backup, the OP was file storage.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
Dual Power Supplies, No Setup / Configuration, and 24 x 14TB WD Purple Drives ~= $18,000.
You would not use Purple drives, either. Those are surveillance drives built for streaming writes, not for file server workloads; you'd want a different class of drive for file server use.
-
@scottalanmiller said in How would you build a File server with 170TB of Usable Storage?:
@DustinB3403 said in How would you build a File server with 170TB of Usable Storage?:
Which also raises the question: is the 170TB one full backup, or some combination?
Not backup, the OP was file storage.
But @DustinB3403 was trying to drag this back to the original post instead of replying appropriately to the call-out in this topic.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
As a response to the "RAID 10 Must use..." thread, it got me thinking.
[This topic isn't to rehash things stated in the other thread, so much as to hear how others would put something at this scale into production].
Business Goal: Build a production level storage system with 170TB of usable storage for SMB / NFS file shares for regular office workers up to power users and video editors.
- No NAS Products (Synology, QNAP), please.
- No Budget Constraints.
- RAIN Storage / multiple nodes can also be considered (CEPH, GLUSTER, etc).
My build would use some flavor of Linux, based on the Storinator Q30 Enhanced and configured for RAID 10.
Dual Power Supplies, No Setup / Configuration, and 24 x 14TB WD Purple Drives ~= $18,000.
(as we all know, the most significant cost will be the drives).
What would yours look like?
How much of the 170TB is video files, and how much of it is hot? How big are the video files?
What are the backup requirements?
Do you need triple the storage to account for on-prem backups, plus a tape archive solution for off-site?
How quickly does that 170TB grow? You said 170TB of data; that tells me zero free space, so you already need more. How do you want to scale?
No budget? That makes no sense at all. Impossible.
It depends on many things.
Could it all live in the cloud with an onprem cache for hot or sticky data?
It's hard to give a real answer when the full scope and picture is not there.
-
Misread. So how much of the 170TB is used immediately and what's the projected growth rate?
-
@scottalanmiller said in How would you build a File server with 170TB of Usable Storage?:
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
My build would use some flavor of Linux, based on the Storinator Q30 Enhanced and configured for RAID 10.
This I would absolutely avoid. The Storinator has no place in a production SMB environment. It's designed exclusively for use as a RAIN node in an N+X cluster; it's disposable / ephemeral only. Any attempt to use it another way, which includes any configuration with RAID, goes against its intended use and will lead to support issues. It is not hot-swappable, it is not designed to be reliable, and it is not meant to be serviced, all of which are top considerations for a storage node. Other than capacity, it's actually the exact opposite of what you would seek in a storage device.
@scottalanmiller -- How would you build it? That is specifically my question or discussion point... How would you go about building a storage platform for 170TB usable data?
-
@Obsolesce said in How would you build a File server with 170TB of Usable Storage?:
Misread. So how much of the 170TB is used immediately and what's the projected growth rate?
40% currently used, 6%/year growth rate.
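For what it's worth, a quick projection at those numbers, assuming the 6% is compound annual growth of the data already stored (which may not be exactly what was meant):

```python
import math

# How long 170 TB lasts starting at 40% used with 6%/year growth.
# Assumption: 6% is compound annual growth of the data already stored.

capacity_tb = 170
used_tb = 0.40 * capacity_tb        # 68 TB used today
growth = 0.06

years_to_full = math.log(capacity_tb / used_tb) / math.log(1 + growth)
print(f"Starting at {used_tb:.0f} TB used, ~{years_to_full:.1f} years until the 170 TB is full")
```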
-
@Obsolesce said in How would you build a File server with 170TB of Usable Storage?:
Misread. So how much of the 170TB is used immediately and what's the projected growth rate?
Not relevant. The concept is that we need 170TB usable. Usable implies available now.
You are asking a different question to resolve a different problem.
-
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
@scottalanmiller said in How would you build a File server with 170TB of Usable Storage?:
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
My build would use some flavor of Linux, based on the Storinator Q30 Enhanced and configured for RAID 10.
This I would absolutely avoid. The Storinator has no place in a production SMB environment. It's designed exclusively for use as a RAIN node in an N+X cluster; it's disposable / ephemeral only. Any attempt to use it another way, which includes any configuration with RAID, goes against its intended use and will lead to support issues. It is not hot-swappable, it is not designed to be reliable, and it is not meant to be serviced, all of which are top considerations for a storage node. Other than capacity, it's actually the exact opposite of what you would seek in a storage device.
@scottalanmiller -- How would you build it? That is specifically my question or discussion point... How would you go about building a storage platform for 170TB usable data?
Assuming a single chassis: pick a Linux distribution and set up software RAID, LVM, and XFS. My other two preferences would be ZFS on BSD, or hardware RAID.
Now if we start talking performance, backup or failover, it quickly becomes more difficult.
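As a very rough sketch of that single-chassis layout, something like the following. All the device names, the 24-drive count, and the VG/LV names are made up for illustration; the commands themselves are the standard mdadm / LVM / xfsprogs tools.

```python
# A rough sketch of the single-chassis approach: mdadm RAID 10 -> LVM -> XFS.
# Device names, drive count, and VG/LV names below are hypothetical; don't run
# anything like this without checking the drive list twice.
import subprocess
import string

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 24 hypothetical data drives: /dev/sdb .. /dev/sdy
drives = [f"/dev/sd{c}" for c in string.ascii_lowercase[1:25]]

# Software RAID 10 across all 24 drives
run(["mdadm", "--create", "/dev/md0", "--level=10",
     f"--raid-devices={len(drives)}"] + drives)

# LVM on top of the md device, then XFS on one big logical volume
run(["pvcreate", "/dev/md0"])
run(["vgcreate", "vg_files", "/dev/md0"])
run(["lvcreate", "-n", "lv_files", "-l", "100%FREE", "vg_files"])
run(["mkfs.xfs", "/dev/vg_files/lv_files"])
```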
-
@travisdh1 said in How would you build a File server with 170TB of Usable Storage?:
Now if we start talking performance, backup or failover, it quickly becomes more difficult.
I'm specifically avoiding those topics for the sake of this post. I'm mainly curious as to what the hardware would look like... but more detailed than "buy a server and put hard drives in it" lol.
-
@travisdh1 said in How would you build a File server with 170TB of Usable Storage?:
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
@scottalanmiller said in How would you build a File server with 170TB of Usable Storage?:
@dafyre said in How would you build a File server with 170TB of Usable Storage?:
My build would use some flavor of Linux, based on the Storinator Q30 Enhanced and configured for RAID 10.
This I would absolutely avoid. The Storinator has no place in a production SMB environment. It's designed exclusively for use as a RAIN node in an N+X cluster; it's disposable / ephemeral only. Any attempt to use it another way, which includes any configuration with RAID, goes against its intended use and will lead to support issues. It is not hot-swappable, it is not designed to be reliable, and it is not meant to be serviced, all of which are top considerations for a storage node. Other than capacity, it's actually the exact opposite of what you would seek in a storage device.
@scottalanmiller -- How would you build it? That is specifically my question or discussion point... How would you go about building a storage platform for 170TB usable data?
Assuming a single chassis: pick a Linux distribution and set up software RAID, LVM, and XFS. My other two preferences would be ZFS on BSD, or hardware RAID.
Now if we start talking performance, backup or failover, it quickly becomes more difficult.
I'm not familiar with ZFS. I will stay far FAR FAAR FAAAAAAARRRRRRRR away from BTRFS. I've seen too many problems with it at my day job, lol.
I'd likely go with Linux + XFS for the file system.
-
LVM Thin Pool and XFS for a file server?