    Backup device for local or colo storage

    IT Discussion
    backup disaster recovery
    • scottalanmiller @DustinB3403

      @DustinB3403 said:

      @scottalanmiller Even if we spent for the 10G NICs in the servers?

      2 Hypervisors 1 Backup device.

      Oh, so two servers and one backup device and one uplink to the rest of the network? Four ports might do it then. But.... no redundancy at all?
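
      To put rough numbers on the port math (a sketch only; it restates the topology above and assumes two links per device for the redundant case, with the uplink kept single):

      # Rough switch-port budget for the topology above: 2 hypervisors,
      # 1 backup device, 1 uplink to the rest of the network.
      hosts = 2      # hypervisors
      backup = 1     # backup device
      uplink = 1     # connection to the rest of the network

      no_redundancy = hosts + backup + uplink     # one link each -> 4 ports
      dual_links = 2 * (hosts + backup) + uplink  # two links per device -> 7 ports

      print(no_redundancy)  # 4
      print(dual_links)     # 7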

      • DustinB3403

        Cost consciousness.

        Is there that much added value in doubling what we have for those "if" events?

        • coliver

          Never mind, I was thinking of the S3300 series; they have two uplink ports. There are two 10Gb copper or fiber ports that you can use interchangeably... but only two at a time. They do have the XS712T, but I have a feeling that may be a bit too expensive.

          • scottalanmiller @DustinB3403

            @DustinB3403 said:

            Cost consciousness.

            Is there that much added value in doubling what we have for those "if" events?

            NICs and switches tend to die and it doubles throughput.
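
            As a back-of-the-envelope illustration of the failure argument (the failure rate below is an assumed placeholder, and the two links are assumed to fail independently):

            # With one link, any NIC/port/cable failure is an outage; with
            # two independent links, both have to fail at the same time.
            p_fail = 0.02          # assumed yearly failure chance for one link

            single = p_fail        # one link -> outage risk is the link's own
            dual = p_fail ** 2     # dual links -> both must be down at once

            print(f"{single:.2%}")  # 2.00%
            print(f"{dual:.4%}")    # 0.0400%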

            • DustinB3403

              Well, even in that case I would still look at a bigger switch with 8-12 ports, possibly with some level of management on it.

              • coliver

                Can you do port bonding? I thought I read someone suggesting that but didn't see your response. That would be a really good stopgap solution for now.

                • DustinB3403 @coliver

                  @coliver Possibly.

                  The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

                  Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.

                  • coliver @DustinB3403

                    @DustinB3403 said:

                    @coliver Possibly.

                    The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

                    Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.

                    The cost of an additional four-port 1GbE card is minimal. You could easily add one to all of your systems for a fraction of the cost of the 10GbE switch and adapters.

                    • DustinB3403

                      I'm forking to a new thread. Will post a link shortly.

                      • DustinB3403

                        New topic discussing just the goals of this project.
                        http://mangolassi.it/topic/6453/backup-and-recovery-goals

                        • DustinB3403 @scottalanmiller

                          @scottalanmiller said:

                          Wouldn't you carry off daily?

                          Sorry, just saw this. It's a nuisance to have to swap a tape or drive daily to do it. Our current plan is to carry off weekly.

                          • Dashrender @DustinB3403

                            @DustinB3403 said:

                            Cost consciousness.

                            Is there that much added value in doubling what we have for those "if" events?

                            Remember this post when you ask for a full second server to run your VM environment.

                            • Dashrender @DustinB3403

                              @DustinB3403 said:

                              @coliver Possibly.

                              The biggest bottleneck with the existing backup solution is the server performing the work, which is just constantly getting hit.

                              Port bonding on the new setup would reduce some cost, at the price of reducing what we can run VM-wise since those ports would be tied up.

                              What do you mean? You typically bond all the NICs in a VM host together, and all the VMs on the host share the pipe.

                              Next question: do you really use 800Mb/s (realistic throughput from a 1Gb port) on each server at the same time?
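
                              To make that concrete, a quick sketch using that ~800Mb/s figure; the 500GB backup size is an assumed example, not a number from this thread:

                              # Transfer-time estimates at ~80% wire efficiency ("800 Mb out
                              # of 1 Gb"). The 500 GB size is an assumption for illustration.
                              backup_gb = 500
                              efficiency = 0.8

                              def hours(link_gbps, links=1):
                                  usable_gbps = link_gbps * links * efficiency
                                  return (backup_gb * 8) / usable_gbps / 3600

                              print(f"1 x 1GbE:  {hours(1):.2f} h")     # ~1.39 h
                              print(f"4 x 1GbE:  {hours(1, 4):.2f} h")  # ~0.35 h, and only if flows spread across links
                              print(f"1 x 10GbE: {hours(10):.2f} h")    # ~0.14 h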

                              • DustinB3403

                                I've never bonded all of the NICs as we haven't had the need for it.

                                In most cases we've simply allocated a specific NIC to a specific number of VMs.

                                • Dashrender

                                  Unless you need to leave bandwidth overhead for something, why split it?

                                  It's just like how you always use OBR10 unless you have a specific reason not to.

                                  • DustinB3403

                                    Why bond when I'm still only capable of pushing 1Gb/s at best?

                                    • scottalanmiller @DustinB3403

                                      @DustinB3403 said:

                                      I've never bonded all of the NICs as we haven't had the need for it.

                                      Aren't we seeing bottlenecks, though? Bonding is a standard best practice.

                                      • scottalanmiller @DustinB3403

                                        @DustinB3403 said:

                                        Why bond when I'm still only capable of pushing 1Gb/s at best?

                                        What is limiting you to 1Gb/s if not the GigE link?
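
                                        One plausible answer, if a bond were in place: common bonding modes (Linux
                                        balance-xor, 802.3ad/LACP) pin each flow to a single slave via a transmit
                                        hash, so a single backup stream never runs faster than one link no matter
                                        how many NICs are bonded. A simplified sketch of the layer2-style hash,
                                        assuming a two-NIC bond:

                                        # Simplified layer2-style transmit hash: the same src/dst MAC
                                        # pair always maps to the same slave, so one flow rides one link.
                                        def xmit_slave(src_mac: str, dst_mac: str, slaves: int) -> int:
                                            src = int(src_mac.replace(":", ""), 16)
                                            dst = int(dst_mac.replace(":", ""), 16)
                                            return (src ^ dst) % slaves

                                        # One host pair -> always the same slave, so a single backup
                                        # stream caps at 1Gb/s...
                                        print(xmit_slave("00:16:3e:aa:bb:01", "00:16:3e:cc:dd:02", 2))  # 1
                                        # ...while a different destination may hash to the other slave.
                                        print(xmit_slave("00:16:3e:aa:bb:01", "00:16:3e:cc:dd:03", 2))  # 0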

                                        • scottalanmiller

                                          And you bond for failover, not just speed.
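
                                          For the failover half, active-backup (Linux bond mode 1) is the simplest
                                          illustration: one NIC carries all traffic and a standby takes over when
                                          the link drops, with no switch-side configuration required. A toy model
                                          of that behavior (interface names are illustrative):

                                          # Toy active-backup bond: the first link-up slave carries all
                                          # traffic; a link failure silently promotes the next one.
                                          class ActiveBackupBond:
                                              def __init__(self, slaves):
                                                  self.up = list(slaves)  # link-up slaves, in priority order

                                              @property
                                              def active(self):
                                                  return self.up[0] if self.up else None  # None = outage

                                              def link_down(self, slave):
                                                  self.up.remove(slave)   # failover happens here

                                          bond = ActiveBackupBond(["eth0", "eth1"])
                                          print(bond.active)       # eth0
                                          bond.link_down("eth0")
                                          print(bond.active)       # eth1 -- traffic keeps flowing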

                                          • scottalanmiller @Dashrender

                                            @Dashrender said:

                                            What do you mean? You typically bond all the NICs in a VM host together, and all the VMs on the host share the pipe.

                                            Up to four NICs.
