I've received two quotes for new server hardware - one from our local reseller and one directly from Dell. As far as I can tell, the two quotes are identical spec-wise but the local reseller is almost $12k more expensive. Here are the two quotes:
Quote from Dell:
2x Dell PowerEdge R430 servers $6,665.60
HP Quote from local reseller:
2x HP ProLiant DL360 servers $7,266.00
2x Xeon E5-2630 v3 CPUs
64 GB RAM (unknown configuration)
1x HP MSA 2040 SAN $20,932.00
14x HP MSA 1.2 TB 10K SAS 2.5in drives
(includes $5,850 in labor, so the hardware itself is only $15,082)
1x Cisco Catalyst 2960-X gigabit switch $2,320.00
Is there any reason why I should choose the HP solution over the Dell solution? I will be running vSphere 6 on these servers. I'm not familiar with managing either server line so either way I'll be learning new management tools. When it comes to support I think I trust my local reseller more than Dell but $12k extra is hard to stomach just for that.
Unless the OP is restricted to 1U hosts, I would go with a quote from Xbyte for Dell R730xd servers with the same specs as in the quotes above.
Multiply by 2, add StarWind's vSAN and a couple of 10Gb NICs, and he's done. Especially if it's only 2 hosts. Same(ish) price, way more reliability, better performance all around. I'd post that reco on SW but would likely get banned lol.
The one thing not mentioned is if there are other hosts connecting to the SAN.
A lot of people don't want third-party tools from an unknown source. The fact that the DRBD feature is built right into the kernel, just waiting to be exposed through an interface, is a big deal; it makes users feel much better than getting the same functionality from a separate company.
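The in-kernel module still needs a userland definition before it does anything. A minimal resource file looks roughly like this (hostnames, IPs, and device paths here are placeholders for illustration, not anything from this thread):

```
resource r0 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    meta-disk internal;

    on node-a {
        address 10.0.0.1:7789;
    }
    on node-b {
        address 10.0.0.2:7789;
    }
}
```

With that in place, `drbdadm up r0` brings the resource online and `drbdadm primary r0` promotes a node; the actual block replication is handled by the kernel module, which is exactly the point being made above.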
Basically it is a Ponzi scheme. They need to constantly bring in new people to keep the lights on while the old ones abandon ship. As long as new people keep signing up, they can keep supplying power to the servers. They can keep lowering performance for the old customers and hope they leave as new ones join. So the system mostly works. It doesn't work for customers, but it works for the vendor.
Blades seem to make you give up a lot of flexibility. With an old-fashioned server I can run it diskless today and add disks tomorrow if the way I want to use it changes. But if I have a blade, I'm stuck.
Good point. I don't particularly doubt that it's there, I just don't want to count my chickens before they hatch, you know? HP used to sell ProLiants with the built-in virtualization disabled via BIOS. Not that IBM would pull that, but without knowing for sure that the feature is there, it's something to look into.
SuperMicro is blurring the lines between Tier 1 and Tier 2 as it increases the engineering that goes into its products and begins to offer more and more enterprise-class support for them.
Scale is a big factor. It's the only reason Google's crappy-desktop-motherboard, single-PSU, single-HDD model works. They have a lot of nodes. The nodes don't matter: the redundancy isn't in the nodes, it's in the network of nodes.
Same logic for Facebook's datacenters or Backblaze's storage pods. Disposable nodes in large quantity.
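The math behind "the redundancy is in the network of nodes" can be sketched quickly. A hedged example, with made-up illustrative uptime numbers (not Google's or Backblaze's actual figures):

```python
from math import comb

def fleet_availability(node_uptime: float, nodes: int, needed: int = 1) -> float:
    """Probability that at least `needed` of `nodes` independent nodes are up.

    Binomial sum over all outcomes with `needed` or more surviving nodes.
    """
    return sum(
        comb(nodes, k) * node_uptime**k * (1 - node_uptime)**(nodes - k)
        for k in range(needed, nodes + 1)
    )

# One "enterprise" node at 99.9% uptime vs ten cheap nodes at 95% each,
# where the service survives as long as any single node is still alive:
print(fleet_availability(0.999, 1))   # 0.999
print(fleet_availability(0.95, 10))   # 1 - 0.05**10, effectively 1.0
```

Ten unreliable disposable nodes beat one carefully engineered one by orders of magnitude, provided the software layer can actually tolerate individual node loss.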
"Hey boss's boss, just wanted to run this by you. My boss needs me to document trivial IT tasks that the intern (that we don't have) should be able to look up without a problem, and to spend time and company money maintaining those docs so that he, not I, can repeatedly do entry-level tasks without needing outside assistance. I'm happy to do this, but it puts us at risk of the documentation being wrong, since I'm just making copies of the industry or vendor documentation, and it's impossible to keep every internal doc continuously in sync with the latest vendor changes, guidelines, and best practices. Just wanted to make sure you agree that producing this kind of documentation, all for one person, is a good use of government financial resources?"