Coming Out of the Closet, SMB Enters the Hosted World
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes, it matters: the power in a server is so much greater today that we are reaching a point where we won't need more computational power in five years.
Exactly. Every generation of computers remains useful just a little longer than the one before it. So while today we might say that a nine year old server is about the limit you'd want to consider, in five years we'll be saying the same of a ten year old server, and five years after that we will be saying it of an eleven year old server. And we are talking about low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few questions that this brings to mind - does it matter? We can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today, or has been for many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize-everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight years old today, with G5 gear, would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It is pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes, it matters: the power in a server is so much greater today that we are reaching a point where we won't need more computational power in five years.
Exactly. Every generation of computers remains useful just a little longer than the one before it. So while today we might say that a nine year old server is about the limit you'd want to consider, in five years we'll be saying the same of a ten year old server, and five years after that we will be saying it of an eleven year old server. And we are talking about low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.
Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? Or was the cost just too great to upgrade in less than 12-15 years?
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few questions that this brings to mind - does it matter? We can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today, or has been for many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize-everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight years old today, with G5 gear, would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It is pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
I think the answer to my own question now is becoming yes, it matters: the power in a server is so much greater today that we are reaching a point where we won't need more computational power in five years.
Exactly. Every generation of computers remains useful just a little longer than the one before it. So while today we might say that a nine year old server is about the limit you'd want to consider, in five years we'll be saying the same of a ten year old server, and five years after that we will be saying it of an eleven year old server. And we are talking about low cost commodity servers here. Enterprise servers have traditionally had much longer lifespans as it is, with mainframes easily looking at fifteen years or more, even a decade ago.
Sure, I'll agree that those mainframes are designed to last longer - but didn't advances in computer science often warrant upgrading them? Or was the cost just too great to upgrade in less than 12-15 years?
Often, no. Mainframes were so much faster than commodity machines that they would remain useful for a very long time. Reliability and I/O were their main value propositions, and replacing them would be very expensive while often not providing a compelling advancement over what was already there.
-
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
So a few questions that this brings to mind - does it matter? We can get 8-10 years out of the hardware now instead of 5. In the past, we often upgraded servers more because of performance or warranty than because of a failed server.
I don't think that this is often true today, or has been for many years. Ten year old servers today might be long in the tooth, but only barely. Ten years from today, the servers of today should be perfectly fine. Ten years ago was 2007, solidly into the 64-bit, multi-core, virtualize-everything world. Most SMBs today could run comfortably on ten year old gear. They would be looking for a replacement soon, but it has already been a full ten years.
Ten years today is pushing it a bit. That would be HP ProLiant G4 era gear. But eight years old today, with G5 gear, would be perfectly acceptable, and an average SMB would be perfectly content with that as long as it was running reliably. It is pretty clear that two more years, taking it to ten years, would make it a rather old server, but literally two more years from a G5 would be totally reasonable.
I have two DL380 G5's I'm about to retire that reach 10 years this fall. They were installed by me in Sept 2007.
Oh, well there you go. Those are pretty decent machines still. I can see why you would retire them, but it is not like they would be useless at this point. Plenty of companies would be interested in using those today and they are great for lab boxes.
yeah - I'm thinking about doing just that. I think I have enough 300 GB drives to fill one box, but I wonder if I should even bother. If I could get away with several consumer 480 GB SSDs the thing would probably sing. One of the DL380 G5's has 32 GB RAM, so it can handle a few workloads in a lab.
-
32 GB can handle a lot of workloads. Even if you only have the 300 GB SAS drives, that's not bad. For a lab, that's great.
-
@Dashrender True, they'd only be good for a couple of instances of Windows, but you can cram tons of Linux things into 32 GB RAM and 300-600 GB of storage.
-
@scottalanmiller sure does like to play with words.
-
@travisdh1 said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender True, they'd only be good for a couple of instances of Windows, but you can cram tons of Linux things into 32 GB RAM and 300-600 GB of storage.
OH - I could stick a pile of Windows on here too if I only wanted things for testing.
One of the first things I'm going to do is run an IOPS test on it and see how it compares with the generic numbers for these drives: eight 300 GB 6 Gb/s SAS drives.
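As a sanity check for that IOPS test, a rough back-of-envelope estimate of what an eight-drive RAID 10 array of spinning SAS drives should deliver. The per-drive figures here are rule-of-thumb assumptions, not measured numbers:

```python
# Ballpark IOPS for an 8-drive RAID 10 array of spinning SAS drives.
# Per-drive figures are common rules of thumb, not measurements:
# ~140 IOPS for a 10k RPM drive, ~175 IOPS for a 15k RPM drive.
DRIVES = 8
PER_DRIVE_IOPS = 175        # assuming 15k RPM drives
RAID10_WRITE_PENALTY = 2    # every write lands on both halves of a mirror

read_iops = DRIVES * PER_DRIVE_IOPS
write_iops = DRIVES * PER_DRIVE_IOPS // RAID10_WRITE_PENALTY

print(f"Approx. random read IOPS:  {read_iops}")
print(f"Approx. random write IOPS: {write_iops}")
```

If the measured numbers come in far below roughly 1,400 random reads or 700 random writes per second, something (controller cache settings, a failing drive) is probably off.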
-
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and WAY faster.
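The capacity arithmetic behind that 1.2 TB figure, sketched quickly with the drive sizes from the thread:

```python
# Usable capacity of RAID 10: half the raw capacity, since every
# drive is mirrored.
drives = 8
size_gb = 300

raw_gb = drives * size_gb   # 2400 GB raw
usable_gb = raw_gb // 2     # mirrored pairs leave 1200 GB usable

print(f"RAID 10 usable: {usable_gb} GB ({usable_gb / 1000:.1f} TB)")
```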
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
-
@StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.
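The throughput worry about RAID 6 comes down to its write penalty: each small random write costs roughly six disk operations (read old data, read both parity blocks, write all three back) versus two for RAID 10. A hedged sketch of the difference, again using an assumed rule-of-thumb per-drive figure:

```python
# Comparing random-write IOPS under different RAID write penalties.
# The per-drive figure is an assumed ballpark for a 15k SAS drive.
PER_DRIVE_IOPS = 175
DRIVES = 8

def write_iops(drives, per_drive, penalty):
    """Approximate random-write IOPS given a RAID write penalty."""
    return drives * per_drive // penalty

raid10 = write_iops(DRIVES, PER_DRIVE_IOPS, penalty=2)  # mirror: 2 ops/write
raid6 = write_iops(DRIVES, PER_DRIVE_IOPS, penalty=6)   # double parity: 6 ops/write

print(f"RAID 10 random-write IOPS: ~{raid10}")
print(f"RAID 6 random-write IOPS:  ~{raid6}")
```

Roughly a threefold gap on random writes, which is why RAID 6 is usually reserved for capacity- or read-heavy roles like backup targets.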
-
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
@StrongBad said in Coming Out of the Closet, SMB Enters the Hosted World:
@Dashrender said in Coming Out of the Closet, SMB Enters the Hosted World:
The bigger issue is simply the amount of storage. Eight 300 GB drives in RAID 10 is only 1.2 TB of storage. Hardly seems worth it for the power they will consume. If I swapped to three 500 GB drives, the power usage would probably be half what it is now, and WAY faster.
Why not RAID 6? What VMs would you run on it?
No idea if this system offers that or not. Plus RAID 6 - talk about killing throughput. This isn't just going to be a backup server; I don't want to die of old age waiting, even if it is just a lab box.
What do you plan to do where a lab will have a lot of writes?
-
I don't know.
I was wrong on RAM... only have 12 GB.
-
I find the warranty gets very expensive on HP servers after a few years - to the extent that it doesn't cost much more to buy new servers. Every time I renew an annual Care Pack on an old server I think "is this really cost effective?".
@scottalanmiller said in Coming Out of the Closet, SMB Enters the Hosted World:
SMBs often believe that servers and other datacenter equipment will fail every few years, or more often.
I don't. Maybe 20 years of looking after servers that have simply never failed has made me over-confident, I don't know. But if I'd had hassle, I would be looking at moving to a hosted environment simply to remove that hassle and increase reliability. But because my on-premise life has been so hassle-free (so far!), I'm kinda: why mess with it and introduce new hassles into my life from off-premise? Not least because in the UK, a hosted environment means at some point you will have to rely on Openreach, and I'd go to great lengths to avoid that nightmare.
-
@Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:
I find the warranty gets very expensive on HP servers after a few years - to the extent that it doesn't cost much more to buy new servers. Every time I renew an annual Care Pack on an old server I think "is this really cost effective?".
That's very true. If you buy them up front I think that they tend to be much cheaper. If you buy enterprise bulk support, even cheaper still.
-
@Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:
Not lease because in the UK, a hosted environment means at some point you will have to rely on Openreach, and I'd go to great lengths to avoid that nightmare.
What would make you have to deal with them in a colocation facility?
-
When your internet connection goes down or becomes unreliable. Unlikely, but it happens.
-
@Carnival-Boy said in Coming Out of the Closet, SMB Enters the Hosted World:
When your internet connection goes down or becomes unreliable. Unlikely, but it happens.
In normal (most?) colocation, you don't deal with any ISPs. There are certainly cases where you can bring your own or negotiate with them directly, but I've used colocation in multiple countries continuously for nearly two decades and never once have run into a situation where I was exposed to ISPs at all. It's normally the colocation facility that has to do that.