Connecting a NAS or SAN to a VMware host
-
You're also going to need to know how read / write heavy your environment is in terms of hitting the database, no?
-
@NetworkNerd yes, you need to know your read / write mix to know what IOPS you need and where you need them.
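For anyone working the numbers, here is a rough back-of-the-envelope sketch (in Python, with made-up example figures, not anything from this thread) of how the read / write mix turns into backend IOPS once you factor in the RAID write penalty:
```python
# Rough backend IOPS estimate from a measured front-end workload and its read / write mix.
# The workload numbers and RAID write penalties below are illustrative examples only.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}  # physical writes per logical write

def backend_iops(total_iops, read_pct, raid_level):
    """Return the IOPS the physical disks actually have to deliver."""
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Example: 2,000 front-end IOPS, 70% reads / 30% writes, on RAID 6.
print(backend_iops(2000, 0.70, 6))   # 1400 + 600 * 6 = 5000 backend IOPS
```
The same example workload on RAID 10 would only need 1400 + 600 * 2 = 2600 backend IOPS, which is exactly why the read / write mix matters so much.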
-
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
-
@NetworkNerd said:
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch.
You also need switches designed for iSCSI / SANs for the best performance. Granted, it's been about 2.5 years since I've done a major SAN rollout, so it could have changed some since then.
-
@NetworkNerd said:
Would NIC teaming / link aggregation help at all from a VMware standpoint (i.e. multiple uplinks from SAN / NAS to switch to host)?
If you are doing NFS (NAS), then yes. It increases the size of the pipe and so improves throughput.
Keep in mind that you cannot team / bond an iSCSI connection. You have to use technologies like MPIO to improve throughput.
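To picture the difference (just a toy sketch, not how ESXi's round robin policy is actually implemented): MPIO keeps the paths separate and the initiator alternates I/Os across them, whereas a LAG hashes a single flow onto a single link. The vmnic names and targets below are made up:
```python
from itertools import cycle

# Toy MPIO round-robin: each outstanding I/O goes down the next path, so a single
# iSCSI session can use every path -- unlike a LAG, where one flow lands on one link.

paths = cycle(["vmnic2 -> SAN controller A", "vmnic3 -> SAN controller B"])  # hypothetical

def issue_io(block):
    return f"I/O for block {block} sent via {next(paths)}"

for block in range(4):
    print(issue_io(block))
```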
-
@thecreativeone91 said:
Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch. You also need switches designed for iSCSI / SANs for the best performance.
Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.
You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.
You can also skip the switches altogether.
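To put some example numbers behind "how little throughput you normally need" (the IOPS and block size here are illustrative, not measurements from anyone's environment):
```python
# Back-of-the-envelope: throughput = IOPS x block size.

GIGE_MB_S = 125  # ~1 Gbit/s expressed in MB/s, ignoring protocol overhead

def throughput_mb_s(iops, block_kb):
    return iops * block_kb / 1024

busy_vm = throughput_mb_s(iops=1500, block_kb=8)  # a fairly busy small-block workload
print(f"{busy_vm:.1f} MB/s used of ~{GIGE_MB_S} MB/s available on a single GigE link")
```
Roughly 1,500 IOPS at 8K blocks is only about 12 MB/s, around a tenth of one GigE link, which is why plain GigE works fine for the bulk of workloads.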
-
@scottalanmiller said:
@thecreativeone91 said:
Yes, but you usually use 10GbE or fiber for it, not your normal 1GbE switch. You also need switches designed for iSCSI / SANs for the best performance.
Most people are still on GigE connections and it works fine for the bulk of users. It is amazing how little throughput you normally need.
You don't need switches designed for iSCSI, just ones that are fast enough. Edison labs uses unmanaged, low-end Netgear switches because they are so fast. Not designed for SAN use, just fast, and that is all that matters.
You can also skip the switches altogether.
Yep - we use a passive Netgear gig switch between our backup target (Drobo B800i SAN) and our ESXi hosts. It works great.
-
@scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.
-
@thecreativeone91 said:
@scottalanmiller This was a 42TB video SAN (with multiple nodes for redundancy), so 1GbE would never give enough throughput to work off of.
Right, but that is not the norm. That is specifically a throughput-heavy environment, far from the typical case. IOPS typically outweighs throughput by a huge margin.
-
LAGs will only improve throughput so far, and really 10GbE is the way forward at the moment; you'll find you have lower latency and more throughput overall with 10GbE than with 1GbE LAGs. Also, switches that do iSCSI offload are almost never really used properly to offload, so it's not saving you much. Just about any decent switch will work.
I wholly agree with the SSDs for databases, and really, anything that has a high number of 'touches' (otherwise known as IOPS!). Databases constantly have little tiny touches to make changes, which results in a higher number of requests going to the storage. This is where higher IOPS make the difference. For systems like your average desktop, moving a large file takes relatively few IOPS, but more throughput.
This is why the term 'tiered storage' has become popular: you create tiers of storage which, depending on your needs, can be super fast storage (SSD), fast storage (10K/15K drives), normal storage (7.2K drives), and nearline (~5K drives). Then you deploy your applications depending on how you want them to live and operate.
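As a purely hypothetical sketch of the idea (the per-drive IOPS figures are rough rules of thumb, not vendor specs, and real tiering is handled by the array or hypervisor, not a script):
```python
# Hypothetical tier picker based on rough per-drive IOPS rules of thumb.

TIERS = [
    ("SSD",                5000),  # super fast
    ("10K/15K drives",      175),  # fast
    ("7.2K drives",          80),  # normal
    ("~5K nearline drives",  50),  # nearline
]

def pick_tier(required_iops_per_drive):
    """Return the cheapest (slowest) tier that still meets the per-drive IOPS need."""
    for name, iops in reversed(TIERS):
        if iops >= required_iops_per_drive:
            return name
    return TIERS[0][0]  # nothing else keeps up, so it goes on SSD

print(pick_tier(60))    # 7.2K drives
print(pick_tier(400))   # SSD
```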
-
@Jaguar said:
I wholly agree with the SSDs for databases, and really, anything that has a high number of 'touches' (otherwise known as IOPS!). Databases constantly have little tiny touches to make changes, which results in a higher number of requests going to the storage. This is where higher IOPS make the difference. For systems like your average desktop, moving a large file takes relatively few IOPS, but more throughput.
This large number of small touches is why huge RAM, both in the system and in the controller cache, makes such a big difference to databases. With a good RAID cache you can offload a ton of write hits and speed up the system while also preserving the SSDs. And a large system memory will do even more, often keeping transactions from hitting the storage subsystem entirely.
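A quick way to see why that caching matters so much: effective latency is just a weighted average of cache hits and disk hits. The latency figures below are ballpark assumptions for illustration, not benchmarks:
```python
# Effective latency with a RAM / controller cache in front of spinning disks.

CACHE_HIT_US = 50     # served from RAM or controller cache (microseconds)
DISK_MISS_US = 5000   # served from spinning disk (~5 ms)

def effective_latency_us(hit_rate):
    return hit_rate * CACHE_HIT_US + (1 - hit_rate) * DISK_MISS_US

for rate in (0.50, 0.90, 0.99):
    print(f"{rate:.0%} cache hit rate -> {effective_latency_us(rate):.0f} us average")
```
In that toy model, going from a 90% to a 99% hit rate cuts average latency by better than 5x, which is the same effect a big buffer pool or a write-back RAID cache has on a database.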