    Cannot decide between 1U servers for growing company

    IT Discussion
    • scottalanmiller @stacksofplates

      @johnhooks said:

      @scottalanmiller said:

      @johnhooks said:

      I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.

      Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.

      I just read some stuff saying that DRBD is much more reliable. So I'll probably give that up haha. I do want to try ovirt though just to see how it works.

      Well yeah, nothing is going to touch DRBD mirroring. It's full on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.
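      For reference, a two-node DRBD mirror is driven by a small resource file; a minimal sketch (hostnames, backing disks, and addresses below are placeholders, DRBD 8.x-style syntax) looks something like:

      ```
      resource r0 {
          protocol C;              # synchronous replication - true network RAID 1
          on kvm1 {                # placeholder hostname
              device    /dev/drbd0;
              disk      /dev/sdb1; # placeholder backing disk
              address   10.0.0.1:7789;
              meta-disk internal;
          }
          on kvm2 {
              device    /dev/drbd0;
              disk      /dev/sdb1;
              address   10.0.0.2:7789;
              meta-disk internal;
          }
      }
      ```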

      • stacksofplates @scottalanmiller

        @scottalanmiller said:

        @johnhooks said:

        @scottalanmiller said:

        @johnhooks said:

        I want to try GlusterFS with two KVM hosts and see how it works. I've seen a couple people online do it.

        Not for the faint of heart. I've worked with some huge shops that did this, and it can be done, but rarely were they happy about it in the end.

        I just read some stuff saying that DRBD is much more reliable. So I'll probably give that up haha. I do want to try ovirt though just to see how it works.

        Well yeah, nothing is going to touch DRBD mirroring. It's full on network RAID 1. The difference is that Gluster can scale to massive numbers of nodes. Anything other than DRBD for two hosts would be purely for purposes of experimentation.

        Have you used Ganeti at all?

        • scottalanmiller @ntoxicator

          @ntoxicator said:

          Right now the data storage is piped through Citrix XenServer by means of an iSCSI LUN and mapped as a drive associated with the VM. This was not smart on my behalf years ago. I would have been better off directly attaching a LUN to the Windows server using the iSCSI initiator. Everything was a blur 2 years ago when I was scrambling to put the build together.

          Some thoughts on this bit, knowing that it is ancillary to the main topic (and about to be split to its own...)

          • Best Option would be to share out directly from the NAS and never get SAN involved.
          • Next best would be NFS or iSCSI to XenServer and then mapped to the VM. This is the "right way" to do it with a VM.
          • Direct to the Windows VM is a "no no" both in the virtual space (it should always go through the hypervisor not the guest) and in the Windows world (Windows iSCSI is not the best.)

          NFS is always preferred over iSCSI here both from the hypervisor side (XenServer, ESXi and KVM are all NFS natives) and from the NAS side (Synology, ReadyNAS, etc. are all NFS native while iSCSI is a secondary function) and from a design complexity standpoint.
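          As a sketch of what the NFS path looks like in practice, creating an NFS storage repository on XenServer is a single `xe` command (the server address and export path below are placeholders):

          ```
          xe sr-create type=nfs shared=true content-type=user \
              name-label="Synology NFS" \
              device-config:server=10.0.0.10 \
              device-config:serverpath=/volume1/xenstore
          ```

          Any VM disk created on that SR is just a file on the NAS, which is what makes it easy to back up and move later.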

          • scottalanmiller @stacksofplates

            @johnhooks said:

            Have you used Ganeti at all?

            No

            • Aconboy

              @ntoxicator - As a non-sales guy from Scale (office of the CTO), I would be happy to set up a WebEx and go over what we do with HC3 and see if it is a fit for you or not. We can dive to whatever level of technical depth you would like.

              • ntoxicator

                Thank you everyone for all the information.

                Still confused as to why local storage is being recommended over centralized storage on a NAS?

                I suppose I just gave up on Citrix XenServer at the 6.1 (free) release. There were still bugs (Windows drivers). Also, for any Linux VMs I install, there is no memory view and it does not calculate total node memory usage correctly.

                With NFS storage -- I'm unaware of a way to attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are visible to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer as a disk tied to the VM.

                Might need to split this to a separate thread on network storage and layout....

                • coliver @ntoxicator

                  @ntoxicator said:

                  With NFS storage -- I'm unaware of a way to attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are visible to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer as a disk tied to the VM.

                  Why would you want to mount NFS storage locally in Windows? Set it up as usable storage in XenServer (or whatever hypervisor you pick) and store the virtual hard disk on it. This will look like a local disk to Windows but have the pick-up-and-move-wherever-you-want advantage of just being a file (because it is).

                  • ntoxicator

                    NOTE:

                    Just spoke with folks at Oracle sales, had a conference call to discuss X5-2 servers and specs. Awaiting pricing.

                    Also noticed a lot of IBM System x servers on eBay... newer ones at that. Not a good sign. Also relates back to how IBM didn't trust their own servers.

                    • scottalanmiller @ntoxicator

                      @ntoxicator said:

                      Still confused as to why local storage is being recommended over centralized storage on a NAS?

                      Because of HA. If you need multiple servers for your VMs to fail over, you need multiple servers for your storage. Storage is more critical and more fragile than the host servers, so it is where you need to focus HA efforts even more. Your VMs are only as safe as your NAS is, and any NAS under $30K isn't as reliable as a cheap standard server. And there are more points of failure, not just riskier ones.

                      Check out these articles.

                      http://www.smbitjournal.com/2013/06/the-inverted-pyramid-of-doom/

                      https://www.storagecraft.com/blog/dependency-chain/
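                      The inverted-pyramid point can be put in rough numbers. A back-of-the-envelope sketch in Python (the 99.9% availability figures are assumptions for illustration, not measurements):

                      ```python
                      # Illustrative availability figures - assumptions, not measurements.
                      host_availability = 0.999
                      nas_availability = 0.999

                      # Two hosts with replicated local storage (e.g. DRBD): the platform
                      # is down only if BOTH hosts are down at the same time.
                      replicated_local = 1 - (1 - host_availability) ** 2

                      # Two hosts behind a single shared NAS (the inverted pyramid): the
                      # NAS is a single point of failure, so its availability caps the
                      # whole stack no matter how redundant the hosts are.
                      inverted_pyramid = replicated_local * nas_availability

                      print(f"replicated local storage: {replicated_local:.6f}")
                      print(f"single shared NAS:        {inverted_pyramid:.6f}")
                      ```

                      The shared-NAS design ends up *less* available than a single plain server, which is the whole point of the articles above.
                      
                      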

                      • marcinozga @ntoxicator

                        @ntoxicator said:

                        Thank you everyone for all the information.

                        Still confused as to why local storage is being recommended over centralized storage on a NAS?

                        Because it's faster, cheaper, and more reliable. And with DRBD or Starwind, all local storage is in sync, so if one server node goes down, your storage and remaining servers are still up. If your centralized NAS or SAN goes down, all server nodes are down.

                        • scottalanmiller @ntoxicator

                          @ntoxicator said:

                          With NFS storage -- I'm unaware of a way to attach a disk as LOCAL to a Windows server. Keep in mind, our primary domain controller has ALL the network shares that are visible to the company. This data rides within an iSCSI LUN which is attached to Citrix XenServer as a disk tied to the VM.

                          All storage in your VM world should be attached to your host, not to guests. iSCSI lets you do things that you should not be doing here. An additional benefit of NFS in this case is that it would prevent you from doing those bad things.

                          But if you can present NFS to the VMs, you could present SMB directly to the network and bypass the extra layer, gaining speed, simplicity, and reliability that way too. So while you would want to use NFS when talking to the VMs / VM host, in this case you would want to bypass that extra step completely.

                          It's the fact that it is on a LUN now that is limiting you. If it were on a NAS instead of a SAN, you'd have more options.

                          • Aconboy @ntoxicator

                            @ntoxicator said:

                            Also noticed a lot of IBM System x servers on eBay... newer ones at that. Not a good sign.

                            I am not super surprised, as I was looking at specs on the x3250 M5 yesterday and was floored by how outdated they are compared to Dell, HP, SM, etc.

                            • ntoxicator @coliver

                              @coliver

                              I'm aware of this - and that is the point I was getting across.

                              With the iSCSI initiator I COULD attach the LUN as a local disk, connect directly, and take advantage of near-full network speed with less overhead.

                              In my opinion, there would be more overhead with:

                              iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                              Furthermore, the issue stands.

                              With the primary data being on the Citrix XenServer as a local disk (iSCSI LUN storage), if I were to migrate to an NFS store mounted to XenServer, I would attach it as a NEW disk to that virtual machine, mount it within Windows, and format it. Then I'll be stuck with 'xcopy'-ing the data and permissions over to this new storage drive.

                              This is an issue now, as the Citrix XenServer has storage ties to our original Synology 4-bay NAS.

                              I've been wanting to move ALL our LUNs and data to our newer, larger Synology NAS, and then use the original 4-bay as a replication/backup target.
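                              If the copy really does have to happen inside Windows, robocopy is usually a better fit than xcopy because it can carry the NTFS ACLs along with the data. A sketch (paths are placeholders; /COPYALL needs an elevated prompt):

                              ```
                              robocopy D:\OldData E:\NewData /E /COPYALL /R:1 /W:1
                              ```

                              /E copies subdirectories including empty ones, and /COPYALL copies data, attributes, timestamps, security, owner, and auditing info.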

                              • coliver @ntoxicator

                                @ntoxicator said:

                                Still confused as to why local storage is being recommended over centralized storage on a NAS?

                                Because a standard NAS isn't any more reliable than a standard server... mostly because they are standard servers with special software thrown on top. Why would you worry about a server node dying but not your storage node?

                                • scottalanmiller @ntoxicator

                                  @ntoxicator said:

                                  Also noticed a lot of IBM System x servers on eBay... newer ones at that. Not a good sign. Also relates back to how IBM didn't trust their own servers.

                                  Now that IBM doesn't make or support IBM servers even for customers... the one reason that people had for selecting them is gone.

                                  • scottalanmiller @ntoxicator

                                    @ntoxicator said:

                                    In my opinion, there would be more overhead

                                    Oh absolutely, there is more overhead. But that overhead is trivial, and it gets handled in a more reliable way (Linux iSCSI is more reliable than Windows iSCSI, storage is better handled at the host than at the guest, and networking has less overhead at the host than at the guest), so this is generally considered not to be a factor at all. More important are fragility and manageability.

                                    What if you need to pause a VM... how will the VM know to tell the SAN to freeze in this way?

                                    • scottalanmiller @ntoxicator

                                      @ntoxicator said:

                                      I've been wanting to move ALL our LUN's and data to our newer larger Synology NAS. And then use the original 4-bay as a replication/ back-up

                                      Synology is Supermicro gear. It's just a normal server. If you are okay with having a normal lower end enterprise server on which everything rests, why have the other servers at all? Why not go down to a single server for everything? What's the purpose of the additional servers?

                                      • coliver @ntoxicator

                                        @ntoxicator said:

                                        In my opinion, there would be more overhead

                                        iSCSI LUN attached to Xen hypervisor > VM > attached as local disk. Unless pass-through?

                                        Slightly more overhead... probably an immeasurable amount. At the same time, you are going against best practices and defeating many of the advantages of virtualization in one fell swoop by not attaching the storage to your hypervisor and presenting a virtual disk to the VM.

                                        • scottalanmiller @ntoxicator

                                          @ntoxicator said:

                                          With the primary data being on the Citrix XenServer as a local disk (iSCSI LUN storage), if I were to migrate to an NFS store mounted to XenServer, I would attach it as a NEW disk to that virtual machine, mount it within Windows, and format it. Then I'll be stuck with 'xcopy'-ing the data and permissions over to this new storage drive.

                                          Yes, sadly using a SAN instead of a NAS introduces all kinds of complications, because all data has to be processed through another machine to be useful - including doing transfers of the data.

                                          However, as long as you don't start attaching storage directly to the guests, you can use storage vmotion (Storage XenMotion in XenServer terms) to do this move on a block level without needing to deal with xcopy or anything of the sort. XenServer can do this for you - one of the big, critical reasons you don't attach storage to the guests is that you would lose the protections and features the hypervisor provides.
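                                          A sketch of what that looks like from the XenServer CLI (names and UUIDs are placeholders; live `vdi-pool-migrate` needs XenServer 6.1 or later):

                                          ```
                                          # find the virtual disk and the destination storage repository
                                          xe vdi-list name-label="DC1 data disk" params=uuid
                                          xe sr-list name-label="New Synology NFS" params=uuid

                                          # live-migrate the virtual disk to the new SR, block by block
                                          xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<sr-uuid>
                                          ```

                                          Windows never sees the move; the disk stays attached to the VM the whole time, permissions and all.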

                                          • ntoxicator

                                            @scottalanmiller

                                            Thank you for the insight - great points from you and everyone.

                                            For centralized storage:

                                            Right now it's essentially a single Synology NAS (serving out NFS and iSCSI LUNs).

                                            I have two (2) Synology NASs, but one is directly associated with the Citrix XenServer and its storage needs. The 2nd, larger Synology NAS is tied to both Citrix XenServer (NFS) and also Proxmox storage.

                                            The goal was to migrate ALL data off the old NAS to the new, larger NAS, but due to limitations and the storage size growing so rapidly, it became too difficult.

                                            The company bitches to me about any downtime, as users will randomly want to work remotely or from home. So telling the CEO that I need to migrate 2TB of data over the network to the new storage pool and that it will take 10 hours - it's like pulling teeth.

                                            The ultimate goal in the new setup I was planning (meaning WAS):

                                            2 - Synology 12-bay NAS units - data replicated between them

                                            2 - 3 node servers for housing the virtual machines
