Tonight's Platform Update


  • Service Provider

    Some major under-the-hood changes to MangoLassi today that we are very excited about. After two years on Rackspace we were hitting capacity issues and had to find a way to continue growing the community: site responsiveness has been holding things back, and regular users were beginning to feel some lag when the site got busy.

    To address these issues we have done a few things today:

    • We moved from Ubuntu 15.10 to CentOS 7.2
    • We moved from Node 0.10.25 to 5.9.0 (a sizeable jump)
    • We moved from MongoDB 2.6 to MongoDB 3.2
    • We increased per-thread CPU performance by about 15%
    • We increased thread processing (CPU count) by 300%
    • We increased memory by 400%
    • We more than tripled storage write IOPS!!
    • We migrated from Rackspace to Linode
    • We updated to NodeBB 1.0.1
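    Since several components jumped at once, a quick sanity check after the cutover is worth scripting. A minimal sketch (the binary names are the standard ones for Node and MongoDB; the version targets come from the list above, and nothing here is the actual migration tooling):

```shell
#!/bin/sh
# Post-migration sanity check: confirm the expected binaries are on the PATH
# and print what version actually ships. Purely illustrative.
for cmd in node mongod; do
  if command -v "$cmd" >/dev/null 2>&1; then
    printf '%s found: %s\n' "$cmd" "$("$cmd" --version | head -n 1)"
  else
    printf '%s NOT FOUND\n' "$cmd"
  fi
done
```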

    Rather an incredible amount of movement. We are very excited, as this provides massively more headroom to tackle the ever-growing performance demands of the community. As we grow it gets harder to keep the site fast and responsive, and we feel that this move will keep the site how we want it for the next year or two, while also providing an easy, fast upgrade path for short-term performance gains when needed.



  • That is a lot of changes!


  • Service Provider

    It felt a bit epic. More than we have changed in two whole years combined, all at once.



  • Why all of these software changes at the same time instead of at their time of release, other than the Linux conversion?

    Why the switch to CentOS?


  • Service Provider

    @Dashrender don't make me break out the shusher




  • Interesting!


  • Banned

    So there's only one box behind the whole site? Why aren't you load balancing and using a farm for the DB if the site is starting to need more scale?


  • Service Provider

    @Dashrender said:

    Why the switch to CentOS?

    Support from MongoDB. Ubuntu does not support MongoDB well, and MongoDB does not support Ubuntu much at all. We got burned pretty badly by the Ubuntu decision when it came to database updates: we were completely unable to move forward to MongoDB 3.x (they are already late into the 3.2 series; Ubuntu left us years out of date) because MongoDB does not see Ubuntu as even a slightly serious platform. Ubuntu is not a good target for database makers, and that really hurt us. Sure, we could have been self-compiling, but we are trying to run a production system here, and that is not an appropriate workflow.

    That was the biggest impetus for moving to CentOS. But really, after over two years on Ubuntu, while it was fine, it offered no advantages and only caveats that sometimes reared their ugly heads six or twelve months down the line. We want to avoid that in the future.
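    This is the kind of vendor support in question: MongoDB publishes its own yum repository for RHEL/CentOS, so 3.2 installs directly from the vendor. A sketch of the repo file, following the layout MongoDB documents for Red Hat systems (verify the URLs against their current install guide):

```ini
# /etc/yum.repos.d/mongodb-org-3.2.repo
[mongodb-org-3.2]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/3.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-3.2.asc
```

    With that in place, `yum install mongodb-org` pulls the 3.2 server and tools straight from the vendor repo.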


  • Service Provider

    @Dashrender said:

    Why all of these software changes at the same time instead of at their time of release, other than the Linux conversion?

    MongoDB has not yet released a newer version for Ubuntu; we have been running the latest MongoDB code available for Ubuntu since day one. So when we moved to CentOS, the other tools that we need shot forward, too.

    Node we moved from OS support to Node's own repos (which did not exist when we started, and that is not Ubuntu's fault). Node has changed a lot since we started, and this is the post-io.js-merge world now.


  • Service Provider

    @Jason said:

    So there's only one box behind the whole site? Why aren't you load balancing and using a farm for the DB if the site is starting to need more scale?

    We've considered that from the beginning, but from a practicality standpoint it just hasn't made sense. We were seriously considering using ObjectRocket before this move, but the more we looked at it, the less sense it made. We can't leverage traditional load balancing for the front end; we tried working with Rackspace on this and they just could not handle it, so that's a difficult path, but one that we do not need right now. The site load, over two years, has never been spiky, and investing heavily in an infrastructure design that is slower but scales more easily doesn't make sense: it would be far more costly and would introduce day-to-day latency just to handle load spikes that never happen.

    Even so, moving to a DB farm is trivial with the current design. Everything is in tiers and can be separated almost as quickly now as if it were already split up. This way, though, we have pure in-memory communication between the application layer and the database, for lower overhead and better performance.



  • @scottalanmiller said:

    @Dashrender said:

    Why the switch to CentOS?

    Support from MongoDB. Ubuntu does not support MongoDB well, and MongoDB does not support Ubuntu much at all. We got burned pretty badly by the Ubuntu decision when it came to database updates: we were completely unable to move forward to MongoDB 3.x (they are already late into the 3.2 series; Ubuntu left us years out of date) because MongoDB does not see Ubuntu as even a slightly serious platform. Ubuntu is not a good target for database makers, and that really hurt us. Sure, we could have been self-compiling, but we are trying to run a production system here, and that is not an appropriate workflow.

    That was the biggest impetus for moving to CentOS. But really, after over two years on Ubuntu, while it was fine, it offered no advantages and only caveats that sometimes reared their ugly heads six or twelve months down the line. We want to avoid that in the future.

    Why was Ubuntu chosen in the first place? Trying to learn about the decision-making process.


  • Service Provider

    @Dashrender said:

    Why was Ubuntu chosen in the first place? Trying to learn about the decision-making process.

    Ubuntu is where NodeBB was doing their development at the time (probably still is; we did not check), and because NodeBB was so untested and nascent, we wanted to be as close to their development process as possible to eliminate unknowns. They mostly run on Redis, which we did not, because ML is much larger and needs more capability than Redis will provide; hence we are on MongoDB, which was their second-choice database platform.
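    For context, the Redis-vs-MongoDB choice in NodeBB is made in its config.json. A sketch of the MongoDB variant (the URL, host, and database name are placeholders; check NodeBB's docs for the current schema):

```json
{
  "url": "http://example.community",
  "database": "mongo",
  "mongo": {
    "host": "127.0.0.1",
    "port": "27017",
    "database": "nodebb"
  }
}
```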

    At the time, MongoDB and Node support on Ubuntu was not so bad; that was pre-14.04. Ubuntu has stagnated hard since the release of 14.04, and their strong marketing push around LTS has basically divided their market, with some packages and vendors focusing solely on the LTS releases and others focusing solely on the supported current releases, leaving the Ubuntu platform fragmented and crappy in a way no one would have guessed back in the 13.10 era when we first deployed.

    CentOS and RHEL avoid this by having a single release line, not different competing ones. CentOS is always CentOS; there isn't some weird LTS vs. current ideology going on within the ecosystem.



  • @scottalanmiller said:

    @Jason said:

    So there's only one box behind the whole site? Why aren't you load balancing and using a farm for The DB if the site is starting to need more scale?

    We've considered that from the beginning but from a practicality standpoint it just hasn't made sense. We were seriously considering using ObjectRocket before this move but the more we looked at it the more it did not make sense. We can't leverage traditional load balancing for the front end, we tried working with Rackspace on this and they just could not handle it, so that's a difficult path, but one that we do not need right now. The site load, over two years, has never been a spiky one and investing heavily in an infrastructure design that is slower but scales more easily doesn't make sense as it would be far more costly and introduce latency for day to day for the ability to handle load spikes that never happen.

    Even so, moving to a DB farm is trivial with the current design. Everything is in tiers and can be separated almost as quickly now than if they were already split up. But this ways means we have pure memory communications between the application layer and the database for lower overhead and better performance.

    I'm a bit lost on Jason's question, at least with regard to the DB farm part. Unless you're going to put the DB on dedicated hardware/cluster, the boxes would all be virtualized, right? So assuming you are providing the correct resources to each VM, would it make any difference if it's on a single host vs. a cluster?

    At what point do you split the pieces out for performance? You can build some pretty beefcake servers giving the VMs some outrageous resources. It's one thing to split for uptime/redundancy, etc., long before you split for performance; or is there something at larger scale I simply don't understand?


  • Service Provider

    @Dashrender said:

    I'm a bit lost on Jason's question, at least with regard to the DB farm part. Unless you're going to put the DB on dedicated hardware/cluster, the boxes would all be virtualized, right? So assuming you are providing the correct resources to each VM, would it make any difference if it's on a single host vs. a cluster?

    Standard design is to have the database layer on its own VMs with nothing on them except the database code; so, generally, at least three MongoDB shard servers.

    Then you would have a layer of application servers, Node + NodeBB in this case, that run nothing but that.

    Then a layer of Nginx reverse proxies in front of them that do nothing but that.

    Then a load balancer pair sitting in front of all of it that spreads the load out.

    Given this design, you can scale any layer as needed to handle load.
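    The reverse-proxy tier described above can be sketched in Nginx terms. The addresses and server name are placeholders; 4567 is NodeBB's default listen port, and the Upgrade/Connection headers matter because NodeBB uses WebSockets for live updates:

```nginx
upstream nodebb_app {
    # application tier: Node + NodeBB instances (placeholder addresses)
    server 10.0.0.11:4567;
    server 10.0.0.12:4567;
}

server {
    listen 80;
    server_name example.community;

    location / {
        proxy_pass http://nodebb_app;
        proxy_http_version 1.1;
        # pass WebSocket upgrades through for NodeBB's real-time features
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```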


  • Service Provider

    @Dashrender said:

    At what point do you split the pieces out for performance?

    When you outgrow what you can do in a single box. A single box has performance advantages, quite large ones, but when you go beyond its practical limits you will generally grow best with a fully tiered approach.


  • Service Provider

    @Dashrender said:

    You can build some pretty beefcake servers giving the VMs some outrageous resources.

    Not with Rackspace or some others. We are already pushing the capacity of Rackspace's largest appropriate VM type. Linode lets us go much larger, so we have a lot of breathing room, but we are already at the limits of "common" cloud nodes.



  • @scottalanmiller said:

    @Dashrender said:

    You can build some pretty beefcake servers giving the VMs some outrageous resources.

    Not with Rackspace or some others. We are already pushing the capacity of Rackspace's largest appropriate VM type. Linode lets us go much larger, so we have a lot of breathing room, but we are already at the limits of "common" cloud nodes.

    So where do you go from there, once you outgrow the current capacity?


  • Service Provider

    @coliver said:

    @scottalanmiller said:

    @Dashrender said:

    You can build some pretty beefcake servers giving the VMs some outrageous resources.

    Not with Rackspace or some others. We are already pushing the capacity of Rackspace's largest appropriate VM type. Linode lets us go much larger, so we have a lot of breathing room, but we are already at the limits of "common" cloud nodes.

    So where do you go from there, once you outgrow the current capacity?

    That will depend on what we have available to us at the time that we hit that scale. We can go to the full split as I described above, which we are designed to do, so that would actually be quite easy. But likely, before that, we will do a full geographic split, with a full stack coming up in London and one in Singapore or Hong Kong to offload regional load. That will let the database shards do the heavy lifting while the physical location of the nodes provides improved latency for people in those regions.



  • @scottalanmiller said:

    @Dashrender said:

    You can build some pretty beefcake servers giving the VMs some outrageous resources.

    Not with Rackspace or some others. We are already pushing the capacity of Rackspace's largest appropriate VM type. Linode lets us go much larger, so we have a lot of breathing room, but we are already at the limits of "common" cloud nodes.

    Sure, but at what point does a colo make more sense than renting VM space from these vendors? Of course the problem with that is a single server, a single point of failure.


  • Service Provider

    @Dashrender said:

    Sure, but at what point does a colo make more sense than renting VM space from these vendors? Of course the problem with that is a single server, a single point of failure.

    When we start getting to the point that 4-6 physical CPUs and 96GB+ of RAM are needed per site for performance. That is a long, long way off. The platform that we use is so efficient that we are handling close to 100,000 views a day, and we were only just starting to run into memory constraints on the old system, mostly because the system has grown over the last two years to have so much content that keeping it all in memory was bogging things down.

    The leap that we have made will likely carry us for more than another two years; every aspect of the system is 300% or more faster or bigger than what we had before. We have 300% more cores now, but also 15% more speed per core, and that adds up: that's like a 345% total speed increase. That is a lot, and CPU wasn't even our bottleneck. We have heard numbers of up to a 300% increase from the database update, and 300% faster disk IO! These things all add up: the CPU waits on the disks less, the database requires less of every resource, and more things are cached in memory. If we were capping out at 100,000 views a day (before people could notice some minor lag), we are guessing that the new system can handle a million or more.
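    The 345% figure works out if "300% more cores" is read as a 3x multiplier compounded with the 15% per-core bump; a quick check of that arithmetic:

```shell
# 3x the cores, each 15% faster: 3.0 * 1.15 = 3.45, i.e. the quoted 345%.
total=$(awk 'BEGIN { printf "%.2f", 3.0 * 1.15 }')
echo "combined CPU speedup factor: ${total}"   # prints 3.45
```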

    The amount of growth that we are prepared to handle with zero to trivial effort is pretty enormous. It is unlikely that we will need to consider anything else for a very long time.

    Chances are, in two years we will want to revisit the architecture, and at that time there is a very good chance that getting a 100% boost in memory size will be obvious and simple, that per-core CPU speeds will have increased, etc. The platform naturally gets faster underneath us in many ways, so the need to move to a completely different approach is much farther off than you would think. Because of the type of site that we are, the scale that the current design can handle is pretty extreme.


  • Service Provider

    @Dashrender said:

    Of course the problem with that is a single server, single point of failure.

    With good database backups, we could mitigate that pretty easily. MongoDB does an export, compress, transport, decompress, restore cycle with blinding speed. We just did it last night, and it was insane how fast the entire site could be moved around.
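    That export/compress/transport/decompress/restore cycle can be sketched as a short pipeline. The database steps here are stand-ins: in practice the dump directory would come from `mongodump --out` and be reloaded with `mongorestore`, and the transport host is a placeholder:

```shell
# Stand-in for `mongodump --out /tmp/mldemo/dump` (export)
mkdir -p /tmp/mldemo/dump
echo 'bson payload' > /tmp/mldemo/dump/sample.bson

tar -czf /tmp/mldemo/dump.tar.gz -C /tmp/mldemo dump      # compress
# transport would be: scp /tmp/mldemo/dump.tar.gz newhost:/tmp/
mkdir -p /tmp/mldemo/restore
tar -xzf /tmp/mldemo/dump.tar.gz -C /tmp/mldemo/restore   # decompress
# stand-in for `mongorestore /tmp/mldemo/restore/dump` (restore)
ls /tmp/mldemo/restore/dump                               # prints sample.bson
```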



  • @scottalanmiller said:

    Chances are, in two years we will want to revisit the architecture, and at that time there is a very good chance that getting a 100% boost in memory size will be obvious and simple, that per-core CPU speeds will have increased, etc. The platform naturally gets faster underneath us in many ways, so the need to move to a completely different approach is much farther off than you would think. Because of the type of site that we are, the scale that the current design can handle is pretty extreme.

    Actually, this is exactly what I would expect. As the platform under the VMs gets better while old hardware is replaced, I would expect less need to migrate to something new. Sure, you might need to assign more RAM, something the underlying hardware might now have more of because it was upgraded, but the VM won't get it automatically, because that's not how that works. But when the CPU and disk are improved, the VM just gets those gains, because everything on the system gets them (not taking tiered storage into account).


  • Service Provider

    @Dashrender said:

    @scottalanmiller said:

    Chances are, in two years we will want to revisit the architecture, and at that time there is a very good chance that getting a 100% boost in memory size will be obvious and simple, that per-core CPU speeds will have increased, etc. The platform naturally gets faster underneath us in many ways, so the need to move to a completely different approach is much farther off than you would think. Because of the type of site that we are, the scale that the current design can handle is pretty extreme.

    Actually, this is exactly what I would expect. As the platform under the VMs gets better while old hardware is replaced, I would expect less need to migrate to something new. Sure, you might need to assign more RAM, something the underlying hardware might now have more of because it was upgraded, but the VM won't get it automatically, because that's not how that works. But when the CPU and disk are improved, the VM just gets those gains, because everything on the system gets them (not taking tiered storage into account).

    Yup, and realistically we grow at a rather steady pace; it's not like we get a twentyfold increase in a month. So the underlying hardware tends to grow with us pretty steadily.


  • Service Provider

    NodeBB 1.0.2 is out. Might as well upgrade 🙂 It has been a day of upheaval already.


  • Service Provider

    That was ridiculously fast, lol.


  • Service Provider

    We did a full update in under a minute!!



  • Huzzah! What did it fix/change?


  • Service Provider

    @nadnerB said:

    Huzzah! What did it fix/change?

    No idea 🙂



  • @scottalanmiller said:

    @nadnerB said:

    Huzzah! What did it fix/change?

    No idea 🙂

    There's this thing called a changelog... Generally one should read it before applying said updates...



  • Service Provider

    A month in and Linode is still rocking it!

