ZFS Based Storage for Medium VMWare Workload
-
@scottalanmiller said:
@donaldlandru said:
Which I can add for as cheap as $5k with RED drives or $10k with Seagate SAS drives.
WD makes RE and Red drives. Don't call them RED, it is hard to tell if you are meaning to say RE or Red. The Red Pro and SE drives fall between the Red and the RE drives in the lineup. Red and RE drives are not related. RE comes in SAS, Red is SATA only.
It's all in a name. When I say REDs I am referring to WD Red 1TB NAS Hard Drive 2.5" WD10JFCX. When I say seagate I am referring to Seagate Savvio 10K.5 900 GB 10000 RPM SAS 6-Gb/S ST9900805SS
Edit: I don't always use WD NAS (RED) drives, but when I do I use the WDIDLE tool to fix that problem
-
@scottalanmiller Holy cow... can I borrow $5k ??
For $10k he could build 2 x 16TB usable storage units and use StarWind to make them happy.
(https://beta.wellston.biz/xByte SAM-SD R520.pdf)
-
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
Which I can add for as cheap as $5k with RED drives or $10k with Seagate SAS drives.
WD makes RE and Red drives. Don't call them RED, it is hard to tell if you are meaning to say RE or Red. The Red Pro and SE drives fall between the Red and the RE drives in the lineup. Red and RE drives are not related. RE comes in SAS, Red is SATA only.
It's all in a name. When I say REDs I am referring to WD Red 1TB NAS Hard Drive 2.5" WD10JFCX. When I say seagate I am referring to Seagate Savvio 10K.5 900 GB 10000 RPM SAS 6-Gb/S ST9900805SS
Edit: I don't always use WD NAS (RED) drives, but when I do I use the WDIDLE tool to fix that problem
Boy those have gotten cheap!
http://www.newegg.com/Product/Product.aspx?Item=N82E16822236600
But they will be terribly slow. Those are 5400 RPM SATA drives.
-
@dafyre said:
@scottalanmiller Holy cow... can I borrow $5k ??
For $10k he could build 2 x 16TB usable storage units and use StarWind to make them happy.
(https://beta.wellston.biz/xByte SAM-SD R520.pdf)
StarWind or DRBD. Both are free.
-
So which way should he look for his dev environment?
A new single host with tons of local disk and possibly ditch all three of the current dev boxes? or
A new single host with tons of local disk, and max the disk out on the newest (planning to keep) dev box, and manually split the load as possible? or
build a SAM-SD and connect the three current dev boxes to it?
-
Any reason that all of these solutions couldn't be done with XByte purchased systems?
-
@Dashrender said:
Any reason that all of these solutions couldn't be done with XByte purchased systems?
Only that he is an HP shop and they are Dell.
-
@scottalanmiller said:
@Dashrender said:
Any reason that all of these solutions couldn't be done with XByte purchased systems?
Only that he is an HP shop and they are Dell.
Is there an HP equivalent?
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Any reason that all of these solutions couldn't be done with XByte purchased systems?
Only that he is an HP shop and they are Dell.
Is there an HP equivalent?
Nearly everything in their lineups has an equivalent that is close on the other side.
-
@scottalanmiller said:
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
Any reason that all of these solutions couldn't be done with XByte purchased systems?
Only that he is an HP shop and they are Dell.
Is there an HP equivalent?
Nearly everything in their lineups has an equivalent that is close on the other side.
Well I was mainly meaning in the secondary market/refurbished area. I knew that HP and Dell have mostly equivalent server lineups.
-
Oh, I see. ServerMonkey would be a place to start.
-
@scottalanmiller said:
@donaldlandru said:
@scottalanmiller said:
@donaldlandru said:
Which I can add for as cheap as $5k with RED drives or $10k with Seagate SAS drives.
WD makes RE and Red drives. Don't call them RED, it is hard to tell if you are meaning to say RE or Red. The Red Pro and SE drives fall between the Red and the RE drives in the lineup. Red and RE drives are not related. RE comes in SAS, Red is SATA only.
It's all in a name. When I say REDs I am referring to WD Red 1TB NAS Hard Drive 2.5" WD10JFCX. When I say seagate I am referring to Seagate Savvio 10K.5 900 GB 10000 RPM SAS 6-Gb/S ST9900805SS
Edit: I don't always use WD NAS (RED) drives, but when I do I use the WDIDLE tool to fix that problem
Boy those have gotten cheap!
http://www.newegg.com/Product/Product.aspx?Item=N82E16822236600
But they will be terribly slow. Those are 5400 RPM SATA drives.
This is why I made my comment earlier about not using the "RED" drives: they don't offer the Red Pro in a 2.5" form factor. The Savvios, however, are twice the speed at 4x the price.
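That "twice the speed at 4x the price" trade-off can be put in cost-per-IOPS terms. The per-drive prices and IOPS figures below are rough illustrative assumptions, not vendor quotes:

```python
# Illustrative cost-per-IOPS comparison: WD Red 2.5" vs Seagate Savvio 10K.
# Prices and per-drive IOPS figures are rough assumptions, not vendor quotes.
drives = {
    "WD Red 1TB 5400rpm (WD10JFCX)": {"price": 70.0, "iops": 75, "gb": 1000},
    "Savvio 10K.5 900GB (ST9900805SS)": {"price": 280.0, "iops": 150, "gb": 900},
}

for name, d in drives.items():
    dollars_per_iops = d["price"] / d["iops"]
    dollars_per_tb = d["price"] / d["gb"] * 1000
    print(f"{name}: ${dollars_per_iops:.2f}/IOPS, ${dollars_per_tb:.0f}/TB")
```

With those assumed numbers, doubling the random IOPS at quadruple the price works out to roughly twice the cost per IOPS for the Savvios; you are paying for latency and duty cycle, not throughput per dollar.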
-
@scottalanmiller said:
@donaldlandru said:
I agree we do lack true HA in the production side as there is a single weak link (one storage array), the solution here depends on our move to Office 365 as that would take most of the operations load off of the network and change the requirements completely.
Good deal. We use O365, it is mostly great.
If I can sell them on Office 365 this time around (third time's a charm)... but that is for a different thread.
-
If you need an Office 365 partner, you know where to find one
cough.... cough.... @ntg
-
There is a software defined (ZFS) solution- Nexenta- that is specifically organized for Enterprise use and that includes comprehensive hardware and software support. Using Super Micro Reference Architecture, a single head node (HA optional), 200GB L2Arc & Zil dedicated cache, RAID Z2, a 3 yr NBD on-site HW warranty, 3 yr. x 7x24 SW support would run under your $15K budget.
Nexenta includes:
• Certified for Virtual / VDI/Cluster/Big Data/Cloud/Archive & Data Protection Environments
• Standard functionality includes Hybrid storage pools (HDD, SSD, Flash), Auto-tiering, in-line Compression/De-duplication, Replication, unlimited Snapshots, only 2 plug-in options if required: High Availability and FC support
• Uses Scalable Read & Write Cache to accelerate Read/Write performance, leveraging low cost spinning disk but also allowing using 10K/15K/SSD pools to achieve demanding IOPs and throughput
• Unmatched Data Integrity- continuous integrity checking, built-in self-healing, 256-bit checksums with copy-on-write, seamlessly addresses intermittently faulty devices, single/dual/triple parity RAID or RAID 10
• Perpetual licensing w/ incremental capacity expansion. No replacement of core equipment, minimizing TCO and cost of growth at a fraction of dedicated hardware solutions.
International Computer Concepts (www.ICC-USA.com) has been building high-performance/high-density compute & storage for commercial, government, research, and education customers and is a premier integrator of NexentaStor. For more info, I can be reached at:
Brian Hershenhouse, [email protected] or 877 422-8729 x109.
-
I'm pretty sure he actually mentioned Nexenta in the original post. That was the SAM-SD option that he was considering.
-
Hi Scott,
Donald mentioned SM and referenced generic ZFS (could be Oracle, OpenIndiana, FreeBSD, etc.) solutions which have uncoordinated HW, SW and support. Nexenta is packaged to compete with EMC, NetApp, etc. as primary storage in the Commercial market.
If you would like to get an overview, please feel free to ping me.
Best. -
You are correct, sir, I must have imagined it.
-
Following up on this one as I see it getting some traffic today. How did this project go, @donaldlandru?
-
@donaldlandru said in ZFS Based Storage for Medium VMWare Workload:
Ok, so a little background: the storage situation at my organization is our weakest link in our network. Currently we have a single HP MSA P2000 with 12 spindles (7200 rpm) serving two separate ESXi clusters.
This is not a lot of IOPS; the laptop I'm typing this on easily has 100x more IOPS than this disk configuration.
It is not uncommon for us to max out the disk i/o on 12 spindles sharing the load of almost 150 virtual machines and everyone is on board that something needs to be changed.
Yep. Go all flash.
Here is what the business cares about in a solution: a reliable solution that provides the necessary resources for the development environments to operate effectively (read: we do not do performance testing in-house as, by its very nature, it is very much a your-mileage-may-vary thing depending on your deployment situation).
If you're a VMware shop doing QA testing, there are some workflows you can use with Linked Clones and Instant Clones (no IO overhead, ~400 ms to clone a VM with zero memory or disk, as it runs a journal log for both) to reduce disk load, speed up processes, and in general make everyone's life easier.
In addition to the business requirements, I have added my own requirements that my boss agrees with and blesses.
- Operations and Development must be on separate storage devices
This isn't necessary at 150 VMs. Just get something all-flash, and if it is that big of a deal that people are running IO burners, get something that has QoS as an option to smack them down.
- Storage systems must be built of business class hardware (no RED drives -- although I would allow this in a future Veeam backup storage target)
Think strongly before using Reds with Veeam. Reverse incrementals and rollups use random IO, and your backup windows will hate you.
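The reverse-incremental penalty being warned about here can be modeled simply: for each changed block, the backup target does one read (the old block out of the full backup file) plus two writes (the old block into the rollback file, the new block into the full), i.e. roughly 3x the IO of a plain forward incremental, and mostly random rather than sequential. A sketch with made-up numbers:

```python
# Rough model of backup-target IO load: forward vs reverse incremental.
# The block count is a made-up example number, not a measurement.
changed_blocks = 200_000           # blocks changed since the last backup run

forward_ios = changed_blocks * 1   # one mostly-sequential write per block, new file
reverse_ios = changed_blocks * 3   # read old block + write rollback + write new block

print(f"forward incremental: {forward_ios:,} mostly sequential IOs")
print(f"reverse incremental: {reverse_ios:,} mostly random IOs")
print(f"penalty: {reverse_ios // forward_ios}x")
```

At 5400 rpm Red speeds (on the order of 75 random IOPS per spindle), that 3x random-IO penalty is exactly where a backup window blows up.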
Requirements for development storage
- 9+ TiB of usable storage
- Support a minimum of 1100 random iops (what our current system is peaking at)
- disks must be in some kind of array (zfs, raid, mdadm, etc)
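As a sanity check on the 1100-IOPS and 9-TiB requirements against the options discussed below, here is some rule-of-thumb math. The ~75 random IOPS per 7.2k spindle and the RAID write penalties (RAID 6 ≈ 6, mirrors ≈ 2) are common planning assumptions, not measurements of these boxes:

```python
# Rule-of-thumb array sizing: effective random IOPS and usable capacity.
# Per-spindle IOPS and write penalties are planning assumptions, not specs.
def array_iops(spindles, per_disk_iops, write_penalty, write_fraction=0.3):
    """Effective random IOPS for a given read/write mix."""
    raw = spindles * per_disk_iops
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

# Option 1: StoreVirtual, 12 x 7.2k spindles in RAID 6 (write penalty ~6)
opt1 = array_iops(12, 75, 6)
# Option 2: ZFS, 24 x 7.2k spindles in 12 mirrored vdevs (write penalty ~2)
opt2 = array_iops(24, 75, 2)
# Usable capacity of 12 mirrors of 900 GB drives, converted to TiB
usable_tib = 12 * 900e9 / 2**40

print(f"RAID 6 option : ~{opt1:.0f} effective random IOPS")
print(f"mirror option : ~{opt2:.0f} effective random IOPS")
print(f"mirror usable : ~{usable_tib:.1f} TiB")
```

By this math the 12-spindle RAID 6 box falls well short of 1100 random IOPS on a mixed workload, the 24-spindle mirror layout clears it, and 12 x 900 GB mirrors just clear the 9 TiB usable target.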
Proposed solutions:
#1 a.k.a the safe option
HP StoreVirtual 4530 with 12 TB (7.2k) spindles in RAID 6 -- this is our vendor recommendation. This is an HP Renew quote with 3 years 5x9 support, next-day on-site, for ~$15,000
Wait, is this a single-node StoreVirtual? Also the IOPS on this will be awful (7.2k drives are slow).
Less performance than solution #2 out of the box
More expensive to upgrade later (additional shelves and drives at HP prices)
All used hardware
It's worse than that, as you have to buy not just HP parts but licensing.
#2 ZFS Solution ~$10,000
24 spindles, 900 GB (7.2k SAS), in 12 mirrored vdevs
To my knowledge no one makes 900 GB 7.2k SAS drives.
Based on Supermicro SC216E16 chassis
X9SRH-7F Motherboard
Intel E5-1620v2 CPU
64 GB of RAM
No L2ARC or ZIL planned
Then why the hell would you use ZFS?
Dual 10gig NICs
Pros
Better performance out of the box (twice the spindle count)
Non-vendor-specific parts mean upgrades require less investment
Cons
Alright, tear me apart: tell me I am wrong or provide any other useful feedback. The biggest concerns I have exist in both platforms (drives fail, controllers fail, data goes bad, etc.) and have to be mitigated either way. That is what we have backups for. In my opinion, the HP gets me the following things:
- The "ability" to purchase a support contract
- Next-day on-site of a tech or parts if needed
Be careful with NBD parts, as a failure on Thursday afternoon really means Monday afternoon is within SLA, since the clock starts when the fault was isolated.
With the $4000 saved from not buying the HP support contract I can buy a duplicate Supermicro system, and a couple extra hard drives, and have the same level of protection.
Note: this is my first time posting an actual give me feedback topic, I tried to include all information I felt was relevant. If more is needed I can provide.
You're a VMware shop; curious if you looked at using VSAN? You could go all flash and get inline dedupe and compression, which makes all flash cheaper than 10K drives at that point.
-
@Dashrender If you put enterprise-grade gear in a REAL datacenter, it's scary how long it will run without failure. Now, if this stuff is going in some Nolo Telco Dataslum, or his closet, yeah, stuff dies all the time.