Connecting a NAS or SAN to a VMware host
-
Something to think about....
Traditional hard drives have an interface cap of ~6Gb/s (SATA/SAS), but in reality no spinning drive can deliver that from the media. They might push over 100MB/s, which is not far off from 1Gb/s, but they rarely top 150 IOPS.
New SSDs are still capped at ~6Gb/s on the interface and, while they will generally push a bit more than 100MB/s, they can't push all that much more. However, they routinely top 25,000 IOPS.
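To put those IOPS and throughput figures in perspective, here is a rough back-of-the-envelope sketch (the 4KB random I/O size is an assumption, typical for VM and database workloads) showing how little of that interface speed a random workload actually generates on spinning disk versus SSD:

```python
# Rough comparison of HDD vs. SSD random I/O, using the ballpark figures
# from the post above. The 4KB block size is an assumption.

BLOCK_SIZE_KB = 4  # typical small random I/O size for VMs/databases (assumed)

drives = {
    "7.2k HDD": 150,      # ~150 random IOPS
    "SATA SSD": 25_000,   # ~25,000 random IOPS
}

for name, iops in drives.items():
    mb_per_sec = iops * BLOCK_SIZE_KB / 1024
    print(f"{name}: {iops:,} IOPS x {BLOCK_SIZE_KB}KB = ~{mb_per_sec:.1f} MB/s of random I/O")

# Approximate output:
#   7.2k HDD: 150 IOPS x 4KB = ~0.6 MB/s of random I/O
#   SATA SSD: 25,000 IOPS x 4KB = ~97.7 MB/s of random I/O
```

In other words, a spinning disk only approaches its sequential throughput on large sequential workloads; random workloads hit the IOPS wall long before the interface matters.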
-
@art_of_shred said:
@scottalanmiller said:
@ajstringham said:
@art_of_shred said:
OK, got it... but stop calling it an IOP. It's an Input/Output, not an Input/Output Per. (sorry :P)
Lol Yup, it's either IO or IOPS. One is the actual thing, the other is a measurement.
Sort of. IOPS is actually short for "Input / Output Operations Per Second." So which letters stand for which words?
Never saw it with "operations" in there. And if so, where is the other "O"?
No idea, but it was always stated that way. I looked it up after I said it to make sure it wasn't one of those things that I made up in my head and just assumed was true, and it really does have "operations" in there.
-
@scottalanmiller said:
New SSDs are still capped at ~6Gb/s on the interface and, while they will generally push a bit more than 100MB/s, they can't push all that much more. However, they routinely top 25,000 IOPS.
But they aren't good for DB storage, are they? You need fast speeds for SQL, but you have a lot of transactional writes, and SSDs have limited writes.... hmm.
-
@thecreativeone91 said:
@scottalanmiller said:
New SSDs are still capped at ~6Gb/s on the interface and, while they will generally push a bit more than 100MB/s, they can't push all that much more. However, they routinely top 25,000 IOPS.
But they aren't good for DB storage, are they? You need fast speeds for SQL, but you have a lot of transactional writes, and SSDs have limited writes.... hmm.
Actually, SSDs are ideal for databases. That limited-write thing is a silly concern from a different era. Spinning rust has a more limited lifespan than SSDs do; they just have different ways to measure and predict failure. Good SSDs have so many writes available that the limit, being known and predictable, is a positive, not a negative. SSDs + databases is the sweet spot. Nothing is better for a database. There is a reason that for the last five years nearly every high-end enterprise database has been deployed to nothing except SSD.
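To illustrate why the write limit rarely matters in practice, here is a rough endurance estimate; the capacity, rated endurance, and daily write volume are assumptions picked for the example, not specs of any particular drive:

```python
# Illustrative SSD endurance estimate. All figures are assumptions
# chosen for the example, not specs of a specific product.

capacity_tb = 0.8          # 800GB enterprise SSD (assumed)
rated_dwpd = 3             # rated Drive Writes Per Day over the warranty (assumed)
warranty_years = 5
daily_db_writes_tb = 0.2   # 200GB of transactional writes per day (assumed workload)

rated_writes_tb = capacity_tb * rated_dwpd * 365 * warranty_years
years_to_wear_out = rated_writes_tb / (daily_db_writes_tb * 365)

print(f"Rated write endurance: ~{rated_writes_tb:.0f} TB")
print(f"At {daily_db_writes_tb * 1000:.0f}GB of writes per day, that lasts ~{years_to_wear_out:.0f} years")
# -> Rated write endurance: ~4380 TB
# -> At 200GB of writes per day, that lasts ~60 years
```

The exact numbers vary by drive, but the point stands: a drive with a published, predictable write budget is easier to plan around than one that fails without warning.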
-
@scottalanmiller said:
@art_of_shred said:
@scottalanmiller said:
@ajstringham said:
@art_of_shred said:
OK, got it... but stop calling it an IOP. It's an Input/Output, not an Input/Output Per. (sorry :P)
Lol Yup, it's either IO or IOPS. One is the actual thing, the other is a measurement.
Sort of. IOPS is actually short for "Input / Output Operations Per Second." So which letters stand for which words?
Never saw it with "operations" in there. And if so, where is the other "O"?
No idea, but it was always stated that way. I looked it up after I said it to make sure it wasn't one of those things that I made up in my head and just assumed was true, and it really does have "operations" in there.
I think you're making that up.
-
@scottalanmiller said:
@art_of_shred said:
@scottalanmiller said:
@ajstringham said:
@art_of_shred said:
OK, got it... but stop calling it an IOP. It's an Input/Output, not an Input/Output Per. (sorry :P)
Lol Yup, it's either IO or IOPS. One is the actual thing, the other is a measurement.
Sort of. IOPS is actually short for "Input / Output Operations Per Second." So which letters stand for which words?
Never saw it with "operations" in there. And if so, where is the other "O"?
No idea, but it was always stated that way. I looked it up after I said it to make sure it wasn't one of those things that I made up in my head and just assumed was true, and it really does have "operations" in there.
Probably because pronouncing IOOPS is weird, whereas IOPS (aye-ops) is easy to say.
-
You're also going to need to know how read- or write-heavy your environment is in terms of hitting the database, no?
-
@NetworkNerd yes, you need to know your read / write mix to know what IOPS you need and where you need them.
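As a rough illustration of why the read/write mix matters (the workload figures are assumptions for the example; the write penalties are the usual textbook values for each RAID level):

```python
# Back-of-the-envelope backend IOPS estimate from a read/write mix.
# The workload numbers are assumed; the write penalties are the standard
# values for each RAID level.

workload_iops = 2000      # front-end IOPS the application needs (assumed)
read_pct = 0.7            # 70% reads / 30% writes (assumed mix)

write_penalty = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

reads = workload_iops * read_pct
writes = workload_iops * (1 - read_pct)

for level, penalty in write_penalty.items():
    backend = reads + writes * penalty
    print(f"{level}: ~{backend:.0f} backend IOPS needed")

# RAID 10: ~2600, RAID 5: ~3800, RAID 6: ~5000
```

The same front-end workload can need nearly twice the backend IOPS just by shifting the mix toward writes or choosing a parity RAID level, which is why the mix has to be known before sizing the array.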
-
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
-
@NetworkNerd said:
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
Yes, but you usually use 10GbE or fibre for it, not your normal 1GbE switch.
You also need switches designed for iSCSI/SANs for the best performance. Granted, it's been about 2.5 years since I've done a major SAN rollout, so it could have changed some since then.
-
@NetworkNerd said:
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
If you are doing NFS (NAS), then yes. It increases the size of the pipe and so improves throughput.
Keep in mind that you cannot team / bond an iSCSI connection. You have to use technologies like MPIO to improve throughput.
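Here is a minimal sketch of why bonding helps NFS traffic from many clients but does nothing for a single iSCSI session; the link counts and the round-robin policy are simplifying assumptions. A LAG hashes each flow onto one member link, so one session is stuck at one link's speed, while MPIO issues the I/Os of a single session across all paths:

```python
# Simplified model of LAG flow hashing vs. MPIO path selection.
# Link counts and policies are assumptions for illustration only.

LINKS = 4          # four 1GbE links (assumed)
LINK_GBPS = 1.0

def lag_throughput(num_flows: int) -> float:
    """A LAG pins each flow (e.g. one iSCSI session or one NFS client)
    to a single member link via a hash, so throughput scales only with
    the number of distinct flows."""
    return min(num_flows, LINKS) * LINK_GBPS

def mpio_throughput(num_paths: int) -> float:
    """MPIO (e.g. round-robin) spreads the I/Os of one session across
    all available paths, so a single session can use every path."""
    return min(num_paths, LINKS) * LINK_GBPS

print("1 iSCSI session over a 4x1GbE LAG :", lag_throughput(1), "Gb/s")   # 1.0
print("1 iSCSI session with 4 MPIO paths :", mpio_throughput(4), "Gb/s")  # 4.0
print("8 NFS clients over the same LAG   :", lag_throughput(8), "Gb/s")   # 4.0
```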
-
@thecreativeone91 said:
Yes, but you usually use 10GbE or fibre for it, not your normal 1GbE switch.
You also need switches designed for iSCSI/SANs for the best performance.
Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.
You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.
You can also skip the switches altogether.
-
@scottalanmiller said:
@thecreativeone91 said:
Yes, but you usually use 10GbE or fibre for it, not your normal 1GbE switch.
You also need switches designed for iSCSI/SANs for the best performance.
Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.
You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.
You can also skip the switches altogether.
Yep - we use a passive Netgear gig switch between our backup target (Drobo B800i SAN) and our ESXi hosts. It works great.
-
@scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.
-
@thecreativeone91 said:
@scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.
Right, but that is not the norm. That is specifically a throughput-heavy environment, far from center. IOPS typically outweigh throughput by a huge margin.
-
LAGs will only improve throughput so far, and really 10GbE is the future at the moment; you'll find you have lower latency and more throughput overall with 10GbE, even at saturation, than with 1GbE LAGs. Also, switches that do iSCSI offload are almost never actually configured to offload, so it's not saving you much. Just about any decent switch will work.
I wholly agree with the SSDs for databases, and really anything that has a high amount of 'touches' (otherwise known as IOPS!). Databases constantly make little tiny touches, which results in a higher number of requests going to the storage. This is where higher IOPS make the difference. For systems like your average desktop, moving a large file takes relatively few IOPS but more throughput.
This is why the term 'tiered storage' has become popular: you create tiers of storage which, depending on your needs, can be super-fast storage (SSD), fast storage (10/15k drives), normal storage (7.2k drives), and nearline (~5k drives). Then you deploy your applications depending on how you want them to live and operate.
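As a toy illustration of the tiering idea (the tier IOPS figures, workload requirements, and placement rule below are all assumptions, not recommendations):

```python
# Toy example of matching workloads to storage tiers by random-IOPS need.
# Tier capabilities and workload requirements are assumed for illustration.

tiers = [                 # (name, approx. random IOPS per device)
    ("SSD",        25000),
    ("15k SAS",      180),
    ("7.2k NL-SAS",   80),
]

workloads = {
    "OLTP database":    5000,   # lots of tiny random touches (assumed)
    "File server":       300,
    "Backups / archive":  50,
}

for name, needed_iops in workloads.items():
    # Pick the slowest (cheapest) tier that still covers the requirement
    # with a small device count; deliberately naive placement logic.
    for tier, tier_iops in reversed(tiers):
        devices = -(-needed_iops // tier_iops)   # ceiling division
        if devices <= 8:                         # assumed max array width
            print(f"{name}: {tier} x{devices}")
            break

# OLTP database: SSD x1
# File server: 7.2k NL-SAS x4
# Backups / archive: 7.2k NL-SAS x1
```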
-
@Jaguar said:
I wholly agree with the SSDs for databases, and really anything that has a high amount of 'touches' (otherwise known as IOPS!). Databases constantly make little tiny touches, which results in a higher number of requests going to the storage. This is where higher IOPS make the difference. For systems like your average desktop, moving a large file takes relatively few IOPS but more throughput.
This large number of small touches is why huge RAM, both in the system and in the controller cache, makes such a big difference to databases. With a good RAID cache you can offload a ton of write hits and speed up the system while also preserving the SSDs. And a large system memory will do even more, often keeping transactions from hitting the storage subsystem completely.
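A minimal sketch of why cache hit rate dominates what the application actually sees (the latency figures and hit rates below are assumptions for illustration, not measurements):

```python
# Effective average latency and per-stream IOPS as a function of cache hit rate.
# Latency figures are rough assumptions, not measurements.

CACHE_LATENCY_MS = 0.05   # RAM / controller cache hit (assumed)
SSD_LATENCY_MS   = 0.2    # backend SSD access (assumed)

for hit_rate in (0.0, 0.5, 0.9, 0.99):
    effective = hit_rate * CACHE_LATENCY_MS + (1 - hit_rate) * SSD_LATENCY_MS
    print(f"hit rate {hit_rate:>4.0%}: ~{effective:.3f} ms average, "
          f"~{1000 / effective:,.0f} IOPS per outstanding I/O")

# 0% hits  -> ~0.200 ms (~5,000 IOPS per stream)
# 99% hits -> ~0.052 ms (~19,000+ IOPS per stream)
```

Every transaction the cache absorbs is one that never touches the SSDs at all, which is where the wear-preservation point comes from.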