I'd forgotten about this topic until it came up in a conversation. This thread was a great resource that never got linked anywhere useful. Now to figure out how to make it more referenceable.
Always complaining to me about the slowness of their Macs even though they're spec'd accordingly. As soon as you kill Dropbox, they run fine. I had to implement a lot of QoS on their network to throttle Dropbox traffic.
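For anyone wanting to do the same, here's a minimal sketch of what that throttling can look like. Everything in it is an assumption on my part, not what the poster actually ran: a Linux gateway, eth0 as the LAN-facing interface, and Dropbox LAN sync (its well-known port 17500) as the main offender. Throttling the client-to-cloud HTTPS sync would need filters on Dropbox's published address ranges instead of a port match.

```python
#!/usr/bin/env python3
"""Sketch only: shape Dropbox LAN sync traffic into a capped HTB class with tc."""
import subprocess

IFACE = "eth0"            # LAN-facing interface (assumption)
LAN_SYNC_PORT = 17500     # Dropbox LAN sync port
SYNC_CEIL = "5mbit"       # hypothetical ceiling for sync traffic


def tc(*args: str) -> None:
    """Run a tc command and fail loudly if it errors."""
    subprocess.run(["tc", *args], check=True)


# Root HTB qdisc; anything unclassified lands in class 1:10.
tc("qdisc", "add", "dev", IFACE, "root", "handle", "1:", "htb", "default", "10")
# Default class: effectively unthrottled.
tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:10",
   "htb", "rate", "1000mbit")
# Capped class for sync traffic.
tc("class", "add", "dev", IFACE, "parent", "1:", "classid", "1:20",
   "htb", "rate", "1mbit", "ceil", SYNC_CEIL)
# Steer traffic destined for the LAN sync port into the capped class.
tc("filter", "add", "dev", IFACE, "parent", "1:", "protocol", "ip", "prio", "1",
   "u32", "match", "ip", "dport", str(LAN_SYNC_PORT), "0xffff", "flowid", "1:20")
```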
Did you make any changes to the LAN Sync feature in Dropbox? I've always seen big improvements when blocking cloud sync services over WLAN within the office via endpoint software, e.g. Symantec.
@LAH3385 it might be good to start a thread and try to determine what your needs are before going down the path of technology. By the time you asked this question, you were already in pretty deep, assuming certain products, product categories and platform HA. We should start with a business needs analysis, use that to set goals, and then use the goals to select technology approaches.
Depends. I've never seen an enterprise shop with enterprise system admins that were not more expensive than the storage guys. Close, but systems is the top payer. I've known many $200K and higher systems people. No storage people at $200K. But lots over $150K.
They cost more but can do more. I'm not saying what my pay is, but my bonus is more than my whole salary at my previous employer.
Yup, system admins generally can do "nearly anything," whereas storage admins generally can do only one thing. And if you change storage products, you generally have to replace your storage team too.
Let's talk about the actual storage you choose for these machines.
It can be a lot of things: local drives, OEM drives, non-OEM drives, FusionIO cards, Winchester drives, SSDs, hybrid arrays, DAS-attached chassis... Because it is an approach and not a product, it is very flexible.
The best line: there is little more terrifying than watching a $300K storage device launch a Windows PXE installer.
It just blows my mind that people would trust Cisco with storage (or servers). But a $300K storage device? Ouch. And one that installs anything it sees on the network by default? What is driving people to even talk to a networking company to be their SAN vendor?
On Hyper-V, the use of SMB3 is pretty nascent and very few vendors provide a good platform for it. So there is a tendency to stay with iSCSI for Hyper-V because of this; not because SMB3 isn't the better option at a protocol level, but because it is so new that it remains mostly impractical.
I have three servers, though two would be fine. I don't have what I understand as a "reliable failover system" or anything close to it, but if one server is down I am in a position to provide a degraded or limited service using the other server(s) which is generally enough to keep users happy and the business ticking over. It's in no way automated or anything like high availability, but it gives me options and in a crisis I like to have as many options as I can. The cost isn't anywhere near double, since you're not doubling up on disks or memory or CPU by spreading the load across two boxes.
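To put rough numbers on that last point, here's a back-of-envelope sketch. Every price in it is hypothetical, picked only to show the shape of the argument, not anyone's actual quote:

```python
# Back-of-envelope only: prices are made up. Compare one host sized for the
# full load against two hosts that each carry half and can limp along alone.

RAM_PER_GB = 10    # hypothetical $/GB of RAM
DISK_EACH = 300    # hypothetical $ per drive
CHASSIS = 2000     # hypothetical base cost (chassis, CPUs, etc.)


def host_cost(ram_gb: int, disks: int) -> int:
    return CHASSIS + ram_gb * RAM_PER_GB + disks * DISK_EACH


single = host_cost(ram_gb=256, disks=8)     # one box carries everything
pair = 2 * host_cost(ram_gb=128, disks=4)   # the same load split over two boxes

print(f"single host: ${single}")                        # 6960
print(f"two hosts:   ${pair} ({pair / single:.2f}x)")   # 8960, ~1.29x rather than 2x
# The premium is roughly one extra chassis, not a full second server.
```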
Something to consider with a VRTX is that it is eight to sixteen Intel Xeon CPUs in a loaded chassis. That is a massive amount of compute power (with very little storage throughput), so you have the CPU power to easily handle ~400 typical VMs, but the storage capacity and throughput of no more than an R510. Even an R720xd or R730xd has more drive capacity than the VRTX. So the ratio of IOPS and capacity to CPU is wildly different from that of normal servers.
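A quick sketch of how that ratio skews. Every number below is a placeholder I picked for illustration, not an actual VRTX, R510 or R730xd spec; the point is simply that a loaded blade chassis puts far more cores behind roughly the same number of drive bays as a single 2U server:

```python
# Illustration only: placeholder core counts, bay counts, and IOPS figures.

def vm_capacity_and_iops(cores: int, drive_bays: int,
                         vms_per_core: float = 4.0,
                         iops_per_drive: int = 150) -> tuple[float, float]:
    """Return (rough VM count, IOPS available per VM)."""
    vms = cores * vms_per_core
    iops_per_vm = (drive_bays * iops_per_drive) / vms
    return vms, iops_per_vm


# Loaded blade chassis: many cores sharing a single set of drive bays.
print(vm_capacity_and_iops(cores=128, drive_bays=25))   # ~512 VMs, ~7 IOPS each
# Standalone 2U server: far fewer cores behind a similar number of bays.
print(vm_capacity_and_iops(cores=32, drive_bays=24))    # ~128 VMs, ~28 IOPS each
```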