
    ServerBear Performance Comparison of Rackspace, Digital Ocean, Linode and Vultr

    IT Discussion
    serverbear server benchmarking rackspace iaas vps digital ocean vultr centos centos 7 linux linux server kvm xen
    • scottalanmiller @dafyre

      @dafyre said:

      @scottalanmiller said:

      Whoops, sorry. We just killed the RS node because it is expensive 🙂

      You can test that RS ping against mangolassi.it instead. Same location, same node type. Sorry.

      With the RS nodes being so expensive... why would you not stand them up on DO or Vultr?

      Edit: I mean for production and not tests like this.

      Well DO and Vultr were not well known or well tested at the time that most of the RS nodes were created. And RS still offers a lot of features that those do not, like load balancers. But these days, the advantages to RS are fewer and fewer.

      • scottalanmiller @wirestyle22

        @wirestyle22 said:

        @scottalanmiller said:

        Whoops, sorry. We just killed the RS node because it is expensive 🙂

        You can test that RS ping against mangolassi.it instead. Same location, same node type. Sorry.

        Running now

        Thanks

        • scottalanmiller

          Right now, we are favouring a migration to Vultr. But the Linode test is running and is a major contender. Information on that to follow....

          • wirestyle22

            updated above

            • brianlittlejohn

              Ping statistics for 108.61.151.173:
              Packets: Sent = 204, Received = 203, Lost = 1 (0% loss),
              Approximate round trip times in milli-seconds:
              Minimum = 58ms, Maximum = 62ms, Average = 58ms

              Ping statistics for 104.236.119.59:
              Packets: Sent = 231, Received = 229, Lost = 2 (0% loss),
              Approximate round trip times in milli-seconds:
              Minimum = 56ms, Maximum = 66ms, Average = 56ms

              Ping statistics for 162.242.243.171:
              Packets: Sent = 95, Received = 94, Lost = 1 (1% loss),
              Approximate round trip times in milli-seconds:
              Minimum = 51ms, Maximum = 56ms, Average = 51ms

              About the same from west Texas.
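              The Windows `ping` summaries above are easy to sanity-check. As a minimal sketch, here is how those summary lines are derived from raw round-trip samples (the RTT values in this example are made up for illustration, not the actual probes behind the numbers quoted):

```python
# Sketch: derive ping-style summary statistics from raw round-trip times.
# The sample RTTs below are hypothetical, not the measured data quoted above.

def ping_summary(rtts_ms, sent):
    """Summarize RTT samples the way Windows ping reports them."""
    received = len(rtts_ms)
    lost = sent - received
    # ping rounds loss to a whole percent, which is why 1 lost of 204
    # above still prints as "0% loss"
    loss_pct = round(100 * lost / sent)
    return {
        "sent": sent,
        "received": received,
        "lost": lost,
        "loss_pct": loss_pct,
        "min": min(rtts_ms),
        "max": max(rtts_ms),
        "avg": round(sum(rtts_ms) / received),
    }

samples = [51, 52, 51, 56, 53, 51, 52, 51, 54]  # hypothetical RTTs in ms
stats = ping_summary(samples, sent=10)          # one probe assumed dropped
print(f"Packets: Sent = {stats['sent']}, Received = {stats['received']}, "
      f"Lost = {stats['lost']} ({stats['loss_pct']}% loss)")
print(f"Minimum = {stats['min']}ms, Maximum = {stats['max']}ms, "
      f"Average = {stats['avg']}ms")
```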

              • scottalanmiller

                OMG WE HAVE A WINNER!!!!

                Linode just took everyone out back and took their lunch money!! They have load balancers too!! (a la Rackspace and Amazon.) Look at that IO capacity!!! And that UNIX Bench! Their single thread was by far the fastest too!

                • wirestyle22 @scottalanmiller

                  @scottalanmiller said:

                  OMG WE HAVE A WINNER!!!!

                  Linode just took everyone out back and took their lunch money!! They have load balancers too!! (a la Rackspace and Amazon.) Look at that IO capacity!!! And that UNIX Bench! Their single thread was by far the fastest too!

                  Wow. That's fantastic.

                  • scottalanmiller @wirestyle22

                    @wirestyle22 said:

                    Wow. That's fantastic.

                    I'm so excited. No question that they are by FAR the hardest to use, but who cares. That performance is crazy!!

                    • wirestyle22 @scottalanmiller

                      @scottalanmiller said:

                      @wirestyle22 said:

                      Wow. That's fantastic.

                      I'm so excited. No question that they are by FAR the hardest to use, but who cares. That performance is crazy!!

                      Rewarded complexity is fine by me 😄

                      • scottalanmiller

                        Throughout the range, Linode comes in as cheap as or cheaper than everyone else, too. It pretty much tracks Vultr until it outscales them, then it matches or beats DO.

                        • scottalanmiller

                          Of additional consideration: Vultr and RS cap out pretty small, while DO and Linode offer massive single nodes. That matters because we are running epic databases, and the growth rate on the database is quite healthy.

                          • Alex Sage

                            http://www.theregister.co.uk/2016/01/04/linode_back_at_last_after_ten_days_of_hell/

                            • travisdh1 @Alex Sage

                              @aaronstuder said:

                              http://www.theregister.co.uk/2016/01/04/linode_back_at_last_after_ten_days_of_hell/

                              That's just painful, but you have to expect that to happen now and then. Guess I'll start an overnight ping test and see how bad it still is.

                              Edit: Never mind, don't have the IP address for the Linode.

                              • scottalanmiller

                                Here is the blog response to that...

                                https://blog.linode.com/2016/01/29/christmas-ddos-retrospective/

                                • scottalanmiller @travisdh1

                                  @travisdh1 said:

                                  Edit: Never mind, don't have the IP address for the Linode.

                                  It's already offline but I will get you a new one privately in a few minutes.

                                  • Alex Sage

                                    Will Mangolassi be moving to Linode?

                                    • DustinB3403 @Alex Sage

                                      @aaronstuder said:

                                      Will Mongolassi be moving to Linode?

                                      Nope, Mongolassi doesn't exist!

                                      • scottalanmiller @Alex Sage

                                        @aaronstuder said:

                                        Will Mangolassi be moving to Linode?

                                        Yes, going to make an attempt at it.

                                        • wrx7m @scottalanmiller

                                          @scottalanmiller said:

                                          Here is the blog response to that...

                                          https://blog.linode.com/2016/01/29/christmas-ddos-retrospective/

                                          This was really interesting.

                                          • Dashrender @wrx7m

                                            @wrx7m said:

                                            @scottalanmiller said:

                                            Here is the blog response to that...

                                            https://blog.linode.com/2016/01/29/christmas-ddos-retrospective/

                                            This was really interesting.

                                            Wow - this sounds nearly the same as the GRC DDoS attack, only on a HUGE scale.
