Cannot decide between 1U servers for growing company
-
Shoot me for getting torn to shreds on here. Pissing contest.
It's much easier to verbalize this than to type out exact specifics.
What I meant by "I just see the Windows iSCSI initiator as working much better; more manageable and not limited" was this:
Am I currently using the Windows iSCSI initiator? No.
Do I wish I was using it? Yes. Why? Because I feel it would be easier to manage and connect an iSCSI LUN as localized storage and data storage. The larger 2TB volume holds all the Windows network shares and user profile data... that's the problem.
-
@coliver said:
Dell, HP, and Supermicro are really the only ones I would look at. Lenovo has had some questionable practices over the past year that make me shy away from them (they are 100% Lenovo now; there is nothing IBM about them). Don't do Cisco; you are going to pay a lot of money for things that you can get for less elsewhere.
Lenovo wasn't good before that, just not outright evil. They were just low-end, poorly supported, overpriced stuff before. They were "bad value." Now they are outright enemies of their customers. Even when IBM made those servers they weren't that good; IBM didn't even use them internally back when they made them themselves. Never buy servers from a vendor that needs to run their competitors' gear to keep the lights on.
-
NOTED!
So, back to looking at SunFire servers or Cisco's lineup.
I just think the SunFires are downright sexy in appearance, with high build quality. The ones I have are the highest-quality servers I've seen compared to HP or Dell. I bought them used for my personal testing in a Proxmox HA cluster setup.
-
@ntoxicator said:
Essentially what I was looking to do was KVM / VM with complete HA.
I'm uncertain about keeping data local to individual servers. Maybe because I have no experience with localized storage in an HA environment? It's all been shared, centralized storage.
If you use shared storage you have to do tons of work to get HA, as the external storage adds all kinds of risk that pulls you in the opposite direction from HA. At a small scale, local storage is generally the only possible way to have HA.
The best HA compute environment in the world is only as good as the storage it runs on. If the storage isn't HA, the stack isn't HA. And getting HA storage is tough.
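A rough back-of-the-envelope sketch of why that chain matters (the availability figures here are invented purely for illustration): serial dependencies multiply, so a non-HA storage layer caps the whole stack.

```python
# Hypothetical availability figures -- invented for illustration only.
compute_ha = 0.9999   # assumed availability of an HA compute cluster
single_san = 0.999    # assumed availability of one non-HA storage device

# The stack needs BOTH tiers up, so the availabilities multiply.
stack = compute_ha * single_san
print(f"stack availability: {stack:.4%}")  # ~99.89%, below the weakest tier
```

However good the compute tier is, the product can never be better than the storage underneath it.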
-
@ntoxicator said:
NOTED!
So, back to looking at SunFire servers or Cisco's lineup.
I just think the SunFires are downright sexy in appearance, with high build quality. The ones I have are the highest-quality servers I've seen compared to HP or Dell. I bought them used for my personal testing in a Proxmox HA cluster setup.
If the price and features come in right, I'm all for Oracle hardware.
If you can move to all SPARC and Solaris, even better!!
-
Define "At a small scale, local storage is generally the only possibly way to have HA."
In my eyes and logic. Seems centralized storage is the way to go.
I'm unsure how you can have HA cluster or setup when the storage needs are localized at the individual server level. Unless of-course, all the data is shared between all servers and replicated.
For instance.
NODE1 - has NFS shares on it
NODE2 & NODE2 - pull data off NODE1's NFS share.NODE1 suddenly goes down? Then what, as that data is localized to that server.
So just shoot me down as a noob on here. Completely changing what I know?
-
@ntoxicator said:
Essentially what I was looking to do was KVM / VM with complete HA.
Several options there: build your own, Proxmox (not a fan, for multiple reasons that I could go into but might not need to... it works but isn't ideal as a product or as a company), or Scale. Scale is the only one that handles HA for you. If you build your own or do Proxmox you are pretty much limited to doing DRBD on your own, which is a bit of work and requires some expertise. Or you have to get HA storage and HA SAN networking, which means looking at vendors like EMC and 3PAR as starting points, and tons of money.
Scale does all of this with HA in the compute, the storage, and everything. There are other vendors like SimpliVity and Nutanix, but neither has the technical stack of Scale and neither focuses on the market that you are in like Scale does.
-
@ntoxicator said:
Shoot me for getting torn to shreds on here. Pissing contest.
IT gets taken a little too seriously around here, but it's only because we're all really passionate about it. That's part of the community's... charm.
-
@scottalanmiller said:
@ntoxicator said:
Essentially what I was looking to do was KVM / VM with complete HA.
Several options there: build your own, Proxmox (not a fan, for multiple reasons that I could go into but might not need to... it works but isn't ideal as a product or as a company), or Scale. Scale is the only one that handles HA for you. If you build your own or do Proxmox you are pretty much limited to doing DRBD on your own, which is a bit of work and requires some expertise. Or you have to get HA storage and HA SAN networking, which means looking at vendors like EMC and 3PAR as starting points, and tons of money.
Scale does all of this with HA in the compute, the storage, and everything. There are other vendors like SimpliVity and Nutanix, but neither has the technical stack of Scale and neither focuses on the market that you are in like Scale does.
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple of people online do it.
-
@ntoxicator said:
Define "At a small scale, local storage is generally the only possibly way to have HA."
At three or fewer physical hosts, there is no reasonable option except for local storage - it is literally impossible for non-subsidized external storage to compete at all. Once you get to four or more physical hosts there start to be possible scenarios where specific situations like giant nodes, special storage needs might make very niche scenarios make sense but only in the most extreme circumstances.
Typically the number you assume is twelve. Until you have at least twelve physical virtualization nodes (means likely around 600+ VMs) you don't even think of looking at external storage. Even at that scale external storage is unlikely, but well worth considering.
-
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple of people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
-
@ntoxicator said:
Shoot me for getting torn to shreds on here. Pissing contest.
Sorry, I don't mean to tear anyone to shreds; just giving advice and helping to fix some bad information... I've been told I can be a bit abrasive at times.
-
@ntoxicator said:
I'll look into HP servers before Dell. Their price/performance concerns me now, as I feel their quality has deteriorated over the years. Supermicro I know is always a good choice, as I've used them for years. It's just the time to configure and build the white-label machines, and then also the warranty/support. It comes at a cost.
I'm still catching up on older posts...
HP, Dell, and Supermicro are all good. I've used them all and have been very happy with all of them.
-
@ntoxicator said:
No true experience with VMware ESXi.
Of all of the hypervisors, it is the one to avoid anyway, most of the time. Not that it is bad; it just fails to be meaningfully "as good" as any competitor.
http://mangolassi.it/topic/5082/is-the-time-for-vmware-in-the-smb-over
-
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple of people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
-
@ntoxicator said:
Proxmox reason: it has straight KVM and OpenVZ support, and it also has enterprise features. Why pay more for Citrix XenServer when it's complete BULLSHIT in my eyes? Sorry... I don't see the benefits of Citrix XenServer (KVM based).
XenServer is based on Xen, not KVM. It's very mature (second only to ESXi), it is the only hypervisor that offers full PV, and it is the choice of the most enterprise of environments (the Amazon, Rackspace, and IBM clouds). Xen is super fast, super stable, and feature rich. And it is 100% free, even the Citrix packaging of it. I know people here in the community getting a 20% performance increase moving from ESXi to Xen, for example. I've been using Xen for over a decade; it's pretty awesome.
If you are building a platform on your own (nothing packaged), Xen / XenServer is where I would start. ESXi I would just ignore; it rarely makes any sense at all. Hyper-V can be good, but mostly for MS shops wanting to stick with a single vendor or for people looking for specific features. All other things being equal, Xen is my go-to choice due to performance, stability, enterprise support, and maturity (and the PV feature rocks).
KVM is much harder to deal with on your own, but it is great technology, especially good at Windows workloads (Xen is better at Linux ones), and it is easier for vendors to build automation around, which is why you often find it inside other products (see the sketch after this post).
These days, having used literally everything out there, at @ntg, where we've been virtualizing on x86 for a long time (more than a decade), we use a mix of Scale and XenServer: Hyper-V only for testing, and ESXi only when customers request it.
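On that last point, here is a minimal sketch of the kind of plumbing vendors automate around KVM, using the libvirt Python bindings. Assumptions: libvirt-python is installed and a local KVM/QEMU host is running; this is an illustration, not anything from an actual vendor's stack.

```python
# List every libvirt domain on the local KVM/QEMU host with its state.
import libvirt

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
try:
    for dom in conn.listAllDomains():
        state = "running" if dom.isActive() else "stopped"
        print(f"{dom.name()}: {state}")
finally:
    conn.close()
```

Start, stop, migrate, and snapshot all go through the same API, which is exactly what makes KVM so easy to package inside a product.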
-
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple of people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
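To make "network RAID 1" concrete, here is a toy Python sketch of the guarantee DRBD's synchronous mode provides (the classes and names are invented for the example; this is not DRBD's API): a write only succeeds once both nodes have stored it, so either node alone always holds a complete, current copy.

```python
# Toy model of synchronous mirroring ("network RAID 1") semantics.
# Not DRBD code -- just the guarantee it gives: a write is only
# acknowledged after BOTH replicas have stored it.

class Replica:
    """Stands in for one node's local block device."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}

    def write(self, block_no, data):
        self.blocks[block_no] = data
        return True  # acknowledge the write

class MirroredDevice:
    def __init__(self, local, remote):
        self.local = local
        self.remote = remote

    def write(self, block_no, data):
        # Succeed only when both copies confirm -- this is why the
        # surviving node is always safe to fail over to.
        if not (self.local.write(block_no, data) and
                self.remote.write(block_no, data)):
            raise IOError("replica did not acknowledge; mirror is degraded")

dev = MirroredDevice(Replica("node1"), Replica("node2"))
dev.write(0, b"vm disk block")
assert dev.local.blocks == dev.remote.blocks  # both nodes hold identical data
```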
-
@scottalanmiller said:
@johnhooks said:
@scottalanmiller said:
@johnhooks said:
I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple of people online do it.
Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.
I just read some stuff saying that DRBD is much more reliable, so I'll probably give that up, haha. I do want to try oVirt though, just to see how it works.
Well yeah, nothing is going to touch DRBD mirroring. It's full-on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
Have you used Ganeti at all?
-
@ntoxicator said:
Right now the data storage is piped through Citrix XenServer by means of an iSCSI LUN and mapped as a drive associated with the VM. This was not smart on my behalf years ago. I would have been better off just directly attaching a LUN to the Windows server using the iSCSI initiator. Everything was a blur two years ago when I was scrambling to put the build together at the time.
Some thoughts on this bit, knowing that it is ancillary to the main topic (and about to be split off to its own thread...)
- The best option would be to share out directly from the NAS and never get a SAN involved.
- Next best would be NFS or iSCSI to XenServer and then mapped to the VM. This is the "right way" to do it with a VM.
- Direct to the Windows VM is a "no-no," both in the virtual space (it should always go through the hypervisor, not the guest) and in the Windows world (Windows iSCSI is not the best).
NFS is always preferred over iSCSI here, from the hypervisor side (XenServer, ESXi, and KVM are all NFS natives), from the NAS side (Synology, ReadyNAS, etc. are all NFS native, while iSCSI is a secondary function), and from a design-complexity standpoint.
-
@johnhooks said:
Have you used Ganeti at all?
No