Coming Out of the Closet, SMB Enters the Hosted World


  • Service Provider

    All businesses want their infrastructure to be reliable and cost effective; it is the nature of business. Companies spend tens or hundreds of thousands of dollars on high availability hardware and software. However, we know that money alone does not buy reliability. In the words of John Nicholson: "High availability is something that you do, not something that you buy." And this could not be more true.

    Audiophiles have long known that half of the sound quality of a stereo system comes from the amplifier, source, speakers, cabling and other aspects of the stereo itself, and that the other half comes from the physical room that you put it into and proper setup of the system within that room. Fully half of the quality of the system comes from it being set up and used properly, not from the system itself. The same is true of computing systems.

    Many factors, including stable air temperatures, proper air flow, physical security, proper cable management, quality racks and power distribution units, high quality and high capacity uninterruptible power supplies, quality generators, redundancy for all aspects of power, cooling and Internet access, around the clock staff, air filtration, humidity control, vibration dampening, sensor monitoring and more, make the key difference between quality environments and terrible ones. In the best environments, even a moderate desktop will often run without interruption for a decade if left undisturbed! A great environment into which to place servers can be far more of a factor for reliability than the build of the server itself.

    SMBs often believe that servers and other datacenter equipment will fail every few years, or even more often. But companies using high quality datacenters see very different numbers, with two or three times as long between expected failures. Even without addressing high availability in the hardware and software, a good datacenter can effectively move a traditional enterprise server with standard internal redundancies, such as RAID, hot swap components and dual power supplies, into numbers that mimic the target numbers of entry level high availability. The environment is just as important as, and probably more important than, the server hardware itself.
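
    To put rough numbers on that reasoning (these are not from the original article), availability can be sketched as MTBF / (MTBF + MTTR). A minimal sketch, assuming entirely hypothetical failure and repair figures:

    ```python
    # Availability = MTBF / (MTBF + MTTR).
    # All figures below are hypothetical illustrations, not measured data.

    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Fraction of time a system is expected to be up."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    MTTR = 24.0  # assume roughly a day to repair or replace a failed server

    closet_mtbf = 3 * 8760.0      # ~3 years between failures in a poor closet
    datacenter_mtbf = 9 * 8760.0  # ~3x longer in a quality environment

    for label, mtbf in (("closet", closet_mtbf), ("datacenter", datacenter_mtbf)):
        print(f"{label}: {availability(mtbf, MTTR):.4%} available")
    # closet: ~99.9088%, datacenter: ~99.9696%
    ```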

    This is too often overlooked; it is tempting to believe that you can go to the store and simply buy a convenient box that will wave away all of the complexities of environmental management and be a panacea for IT reliability needs. This, quite simply, cannot be the case. High quality server hardware and highly reliable software can, to some small degree, combat poor environmental factors but, at best, only work to offset them. This is generally costly and ineffective.

    Of course, businesses can attempt to create enterprise class hosting environments on their own premises, but this is extremely costly and requires not just a large, often staggering, up front investment, which might be ten times or more the cost of the systems it is designed to protect, but also ongoing maintenance and staffing, indefinitely. Large costs initially, larger costs ongoing.

    It goes, we hope, without saying that many factors play into a decision around whether computational and storage systems will be kept on premises or off premises, and that there can be no singular solution. But off premises systems, given the increasingly common availability and affordability of high quality, high speed WAN links, the move from LAN-based to LAN-less system designs and the desire to continuously improve uptime and security, should now be the default assumption for system deployments for the vast majority of organizations.

    The more that organizations seek high availability, the more they must consider how their inability to provide an adequately protected and stable environment limits their capacity to deliver such systems to their businesses. This has driven companies to consider either hosted cloud computing or colocation to fit their needs. The two are very different, while sharing the capability of offloading the environmental needs from the end organization.

    Because of the great cost involved in making on premises systems reliable enough to justify high availability spending, it is very often dramatically less costly to use enterprise class colocation for the same equipment, while moving a large up front cost (capex) into more predictable opex payments that leverage both the time value of money and the uncertainty factors that are so critical to IT and business in general.
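
    A back-of-the-envelope sketch of that capex-versus-opex trade; every price, fee and the discount rate below is an invented placeholder, not a real quote:

    ```python
    # Hypothetical comparison: a large up-front on-premises build-out (capex)
    # versus a stream of monthly colocation payments (opex), discounted for
    # the time value of money. Every figure here is invented for illustration.

    def present_value(monthly_payment: float, annual_rate: float, months: int) -> float:
        """Present value of a series of equal end-of-month payments."""
        r = annual_rate / 12
        return sum(monthly_payment / (1 + r) ** m for m in range(1, months + 1))

    onprem_capex = 60_000.0  # hypothetical build-out: UPS, cooling, generator share
    colo_monthly = 800.0     # hypothetical rack + power + bandwidth fee
    horizon = 60             # compare over five years
    discount = 0.06          # assumed annual cost of capital

    print(f"On-prem up front: ${onprem_capex:,.0f}")
    print(f"Colo 5-year PV:   ${present_value(colo_monthly, discount, horizon):,.0f}")
    # ~$41,000 at these assumptions, and the up-front figure ignores ongoing staffing.
    ```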

    It is time for the SMB market to join its enterprise brethren in leaving on premises systems behind and moving to the world of large scale, high efficiency and highly reliable systems hosting.



  • *fixed


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    All businesses what their infrastructure to be reliable and cost effective

    maybe they want their infrastructure




  • SMBs often believe that servers and other datacenter equipment will fail every few years, or even more often. But companies using high quality datacenters see very different numbers, with two or three times as long between expected failures. Even without addressing high availability in the hardware and software, a good datacenter can effectively move a traditional enterprise server with standard internal redundancies, such as RAID, hot swap components and dual power supplies, into numbers that mimic the target numbers of entry level high availability. The environment is just as important as, and probably more important than, the server hardware itself.

    It's funny - you're right that they think this, which is so weird if you just sit down and think about it. Normal SMB equipment lasts 5+ years in their totally messed up closet; how could a datacenter ever be anything but better than what they have in that uncontrolled closet? At worst it would be the same.

    So a few questions that this brings to mind - does it matter? Say we can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.

    I think the answer to my own question is becoming yes, it matters, because the power in a server is so much greater today that we are reaching a point where we won't be needing more computational power in 5 years.


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    So a few questions that this brings to mind - does it matter? Say we can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.

    I don't think that this is often true today, or has been for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from now, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.

    Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight year old G5 gear today would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    I think the answer to my own question is becoming yes, it matters, because the power in a server is so much greater today that we are reaching a point where we won't be needing more computational power in 5 years.

    Exactly. Every generation of computers remains useful just a little longer than the one before it. So while maybe we say that today a nine year old server is about the limit that you'd want to consider, in five years we'll be saying the same of a ten year old server, and in another five years we will be saying it of an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.



  • @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    So a few questions that this brings to mind - does it matter? Say we can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.

    I don't think that this is often true today, or has been for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from now, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.

    Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight year old G5 gear today would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.

    I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.



  • @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    I think the answer to my own question is becoming yes, it matters, because the power in a server is so much greater today that we are reaching a point where we won't be needing more computational power in 5 years.

    Exactly. Every generation of computers remains useful just a little longer than the one before it. So while maybe we say that today a nine year old server is about the limit that you'd want to consider, in five years we'll be saying the same of a ten year old server, and in another five years we will be saying it of an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.

    Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? Or was the cost just too great to upgrade in less than 12-15 years?


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    So a few questions that this brings to mind - does it matter? Say we can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.

    I don't think that this is often true today, or has been for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from now, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.

    Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight year old G5 gear today would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.

    I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.

    Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    I think the answer to my own question is becoming yes, it matters, because the power in a server is so much greater today that we are reaching a point where we won't be needing more computational power in 5 years.

    Exactly. Every generation of computers remains useful just a little longer than the one before it. So while maybe we say that today a nine year old server is about the limit that you'd want to consider, in five years we'll be saying the same of a ten year old server, and in another five years we will be saying it of an eleven year old server. And we are talking low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.

    Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? Or was the cost just too great to upgrade in less than 12-15 years?

    Often, no. Mainframes were so much faster than commodity machines that they would remain useful for a very long time. Reliability and IO were their main value propositions, and replacing them would be very expensive and often not provide a compelling advancement over what was already there.



  • @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    So a few questions that this brings to mind - does it matter? Say we can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.

    I don't think that this is often true today, or has been for the past many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from now, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.

    Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight year old G5 gear today would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It would be pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.

    I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.

    Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.

    Yeah - I'm thinking about doing just that. I think I have enough 300 GB drives to fill one box, but I wonder if I should even bother. If I could get away with several consumer 480 GB SSDs, the thing would probably sing. One of the DL380 G5's has 32 GB RAM, so it can handle a few workloads in a lab.



  • 32GB can handle a lot of workloads. Even if you only have 300GB SAS drives, that's not bad. For a lab that's great.



  • @Dashrender True they'd only be good for a couple instances of Windows, but you can cram tons of Linux things on 32GB RAM and 300GB-600GB of storage.



  • @scottalanmiller sure does like to play with words.



  • @travisdh1 said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender True they'd only be good for a couple instances of Windows, but you can cram tons of Linux things on 32GB RAM and 300GB-600GB of storage.

    OH - I could stick a pile of Windows on here too if I wanted things only for testing.

    One of the first things I'm going to do is run an IOPS test on it and see how it compares with the generic numbers for these drives: eight 300 GB 6 Gb/s SAS drives.
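
    For a ballpark to compare the test results against, here is the usual rule-of-thumb estimate; the ~140 IOPS per 10k RPM SAS spindle and the RAID 10 write penalty of 2 are generic figures, not specs for these exact drives:

    ```python
    # Rule-of-thumb IOPS estimate for an eight-spindle RAID 10 SAS array.
    # ~140 IOPS per 10k RPM drive is a generic ballpark, not a measured value.

    PER_DRIVE_IOPS = 140
    DRIVES = 8
    RAID10_WRITE_PENALTY = 2  # each write lands on two mirrored spindles

    read_iops = DRIVES * PER_DRIVE_IOPS  # all spindles can serve reads
    write_iops = DRIVES * PER_DRIVE_IOPS // RAID10_WRITE_PENALTY

    print(f"Estimated random read IOPS:  ~{read_iops}")   # ~1120
    print(f"Estimated random write IOPS: ~{write_iops}")  # ~560
    ```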



  • The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and the box would be WAY faster.



  • @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and the box would be WAY faster.

    Why not RAID 6? What VMs would you run on it?



  • @StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and the box would be WAY faster.

    Why not RAID 6? What VMs would you run on it?

    No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.
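
    Back-of-the-napkin, the rule-of-thumb arithmetic behind that intuition, using the standard write penalties (2 for RAID 10, 6 for RAID 6) and a hypothetical ~140 IOPS per 10k RPM spindle:

    ```python
    # Why RAID 6 hurts random write throughput relative to RAID 10.
    # Standard rule-of-thumb write penalties: RAID 10 = 2, RAID 6 = 6.

    PER_DRIVE_IOPS = 140  # hypothetical 10k RPM SAS figure
    DRIVES = 8

    def write_iops(drives: int, penalty: int) -> int:
        """Aggregate random write IOPS after the RAID write penalty."""
        return drives * PER_DRIVE_IOPS // penalty

    print(f"RAID 10 write IOPS: ~{write_iops(DRIVES, 2)}")  # ~560
    print(f"RAID 6  write IOPS: ~{write_iops(DRIVES, 6)}")  # ~186
    # The capacity side of the trade-off with 300 GB drives:
    print(f"RAID 10 usable: {DRIVES // 2 * 300} GB")   # 1200 GB
    print(f"RAID 6  usable: {(DRIVES - 2) * 300} GB")  # 1800 GB
    ```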


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    @StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and the box would be WAY faster.

    Why not RAID 6? What VMs would you run on it?

    No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.


    What do you plan to do where a lab will have a lot of writes?



  • I don't know.

    I was wrong on RAM... it only has 12 GB.



  • I find the warranty gets very expensive on HP servers after a few years - to the extent that it doesn't cost much more to buy new servers. Every time I renew an annual Care Pack on an old server I think "is this really cost effective?".

    @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    SMBs often believe that servers and other datacenter equipment will fail every few years, or even more often.

    I don't. Maybe 20 years of looking after servers that have simply never failed has made me over-confident, I don't know. But if I'd had hassle, I would be looking at moving to a hosted environment simply to remove that hassle and increase reliability. But because my on-premises life has been so hassle free (so far!), I'm kinda: why mess with it and introduce new hassles into my life from off-premises? Not least because in the UK, a hosted environment means at some point you will have to rely on Openreach, and I'd go to great lengths to avoid that nightmare.


  • Service Provider

    @Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:

    I find the warranty gets very expensive on HP servers after a few years - to the extent that it doesn't cost much more to buy new servers. Every time I renew an annual Care Pack on an old server I think "is this really cost effective?".

    That's very true. If you buy them up front I think that they tend to be much cheaper. If you buy enterprise bulk support, even cheaper still.


  • Service Provider

    @Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:

    Not lease because in the UK, a hosted environment means at some point you will have to rely on Openreach, and I'd go to great lengths to avoid that nightmare.

    What would make you have to deal with them in a colocation facility?



  • When your internet connection goes down or becomes unreliable. Unlikely, but it happens.


  • Service Provider

    @Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:

    When your internet connection goes down or becomes unreliable. Unlikely, but it happens.

    In normal (most?) colocation, you don't deal with any ISPs. There are certainly cases where you can bring your own or negotiate with them directly, but I've used colocation in multiple countries continuously for nearly two decades and have never once run into a situation where I was exposed to ISPs at all. It's normally the colocation facility that has to deal with that.



  • @Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:

    I find the warranty gets very expensive on HP servers after a few years - to the extent that it doesn't cost much more to buy new servers. Every time I renew an annual Care Pack on an old server I think "is this really cost effective?".

    @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    SMBs often believe that servers and other datacenter equipment will fail every few years, or even more often.

    I don't. Maybe 20 years of looking after servers that have simply never failed has made me over-confident, I don't know. But if I'd had hassle, I would be looking at moving to a hosted environment simply to remove that hassle and increase reliability. But because my on-premises life has been so hassle free (so far!), I'm kinda: why mess with it and introduce new hassles into my life from off-premises? Not least because in the UK, a hosted environment means at some point you will have to rely on Openreach, and I'd go to great lengths to avoid that nightmare.

    I'm more or less in this same boat. I've been supporting SMB servers since the early 2000s and in general they are rock solid.

    The expense of having a server in a colocation doesn't seem to pay for itself on the small side of SMB. I'd really have to crunch a whole lot of numbers - I'm guessing it would be close to a wash.

    Toss in the need for local onsite storage for network shares (sorry, that web based stuff that doesn't integrate with apps is pretty horrible compared to: open Word, open the S: drive, find the directory, pick the file, done, all at Gb speeds).

    Many tiny SMBs have a single server literally in a closet that's warm if not hot, and the server still manages to survive 5+ years on average, even without constant temps, regulated power, filtered air, etc.

    Then toss in the idea of the warranties as @Carnival-Boy mentioned. When I purchased my HPs, a 5 year option was available, but nothing longer - remember we're talking about SMBs here, so their buying power is low. After that 5 years, HP wants you to replace it, and they show you that by ratcheting the warranty cost up through the roof. The last quote I got for my 5 year old HP server was $500/yr.


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    The expense of having a server in a colocation doesn't seem to pay for itself on the small side of SMB. I'd really have to crunch a whole lot of numbers - I'm guessing it would be close to a wash.

    I never understand this. Even for my servers at home it is cheaper. How small of an SMB can be below the home level? If one server pays for itself, how do you get smaller?


  • Service Provider

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    Then toss in the idea of the warranties as @Carnival-Boy mentioned. When I purchased my HPs, a 5 year option was available, but nothing longer - remember we're talking about SMBs here, so their buying power is low. After that 5 years, HP wants you to replace it, and they show you that by ratcheting the warranty cost up through the roof. The last quote I got for my 5 year old HP server was $500/yr.

    SuperMicro is more and more taking over this space. Or there are alternative warranty options, like those xByte can probably offer you.



  • @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    The expense of having a server in a colocation doesn't seem to pay for itself on the small side of SMB. I'd really have to crunch a whole lot of numbers - I'm guessing it would be close to a wash.

    I never understand this. Even for my servers at home it is cheaper. How small of an SMB can be below the home level? If one server pays for itself, how do you get smaller?

    I recall you saying this before - but at home I simply can't understand how this can be true. At home you have no dedicated AC for it, and probably no dedicated UPS for it (though, knowing you - you did). You're only paying for one internet connection, not two (yes, I understand that most colocation includes the internet in the price - but that just makes the price that much higher; granted, they buy in bulk so you get it at a discounted rate, but it's still more, because you still have to have internet at home). And your connection from home to the colo is definitely not as fast as it would be if the server were at home.

    Now maybe you'll say some of these things don't matter - like the speed thing. The whole idea is to live like corporate does - and most SMBs don't have 1 Gb to the internet, so their servers don't either... they are probably limited to 100 Mb... but I digress.



  • @scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:

    @Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:

    Then toss in the idea of the warranties as @Carnival-Boy mentioned. When I purchased my HPs, a 5 year option was available, but nothing longer - remember we're talking about SMBs here, so their buying power is low. After that 5 years, HP wants you to replace it, and they show you that by ratcheting the warranty cost up through the roof. The last quote I got for my 5 year old HP server was $500/yr.

    SuperMicro is more and more taking over this space. Or there are alternative warranty options, like those xByte can probably offer you.

    I really need to give both of them a good shake.

