Budget Backups: Which Is Better?
-
@scottalanmiller said:
@Dashrender said:
You said that WAN speed increases were outpacing SSD growth and lowering of cost.
Exactly. SSDs have not become useful for backups yet; it is only predicted that at some point they will (and they likely will.) But WANs are already widely useful for backups, with speeds getting much faster all the time. So while one hopes to someday be useful for niche use cases, the other is already broadly useful today and becoming more useful every day, with the long-term prediction of being the last remaining backup medium at some point. So WANs have both the lead and the predicted winning outcome; SSDs hope to have a spike of utility somewhere in the middle of the WAN backup timeline.
Yeah, but most businesses don't even have 100Mbps WAN connections. Also, if they have satellite offices in an area that isn't a major metroplex, they might have a 3-10Mbps upload limit. I don't see WAN as outpacing SSDs for backup for anyone except the enterprise.
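As a rough illustration (the backup sizes and link speeds here are just assumptions for the math, not anyone's real numbers):

```python
# Rough upload-time estimate for pushing a backup over a WAN link.
# Sizes and speeds are illustrative assumptions, not real figures.

def upload_hours(size_gb: float, mbps: float) -> float:
    """Hours to move size_gb over an mbps uplink, ignoring protocol overhead."""
    bits = size_gb * 8 * 1000**3        # decimal GB to bits
    return bits / (mbps * 1000**2) / 3600

for size_gb in (50, 500):
    for mbps in (3, 10, 100):
        print(f"{size_gb:>4} GB at {mbps:>3} Mbps: "
              f"{upload_hours(size_gb, mbps):7.1f} hours")
```

At 3 Mbps even a modest 50GB backup takes roughly a day and a half to push, which is why slow uplinks usually mean incremental or deduplicated WAN backups rather than full copies.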
-
@scottalanmiller said:
Compare it to the failure rates of cars.
I love your analogies. I know about as much about cars as I do about hard drives, but I'll have a go. Generally, most of the damage to a car is done starting it up, when the engine is cold and the car experiences a dramatic change in temperature. So, all things being equal, a car that's done 50k miles of long journeys will be more reliable than a car that's done 50k miles of lots of short, stop-start journeys.
To me this is like hard drives. A hard drive that's run for a constant 1000 hours will be more reliable than one that is run for 10 hours, then turned off, then run for another 10 hours, then turned off, and so on. This is one reason to keep servers on 24/7.
But a car that's done 50k miles of long journeys will NOT be more reliable than one that has done just 1k and then spent the rest of the time sitting in the garage. I believe a hard drive sitting in a cupboard has a lower failure rate than one sitting in a server being constantly used.
But I don't know.
But if you don't know what the failure rate is, you can't just make a figure up!
-
@Dashrender said:
@scottalanmiller said:
And a 1TB SSD isn't enough for most, and $200/TB is still too expensive for most SMBs. That's not a good price for backups. You'll find that it remains a very niche price/capacity ratio.
It is? I suppose if the SMB is able to get 1 Gb WAN connections, sure, $200 might not be worth it, depending on the cost of cloud-based storage - but there have been and always will be cheap companies who don't want to pay a recurring fee and would rather pay for the $200/TB drives for local backups.
Cloud backup is not a requirement; that is an artificial cost constraint on WAN backups. Where are your SSDs planning to live? At Iron Mountain? If so, then your $200 price is nothing, as the primary cost is the shipping and storage. If not, are you planning on a second office site or a home? If so, then a single fixed two-disk NAS is a one-time purchase, and the WAN is mostly funded through the operations budget, not the backup budget. The cost of a WAN backup could be as little as about $600 for many, many years of backups.
Storing the equivalent on SSDs would likely require $1200-$4800 in SSDs, plus the continuous shipping and manual management of them.
WANs can easily blow $200/TB out of the water for most use cases.
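As a rough sketch of that math, using only the ballpark figures above (thread estimates, not real pricing):

```python
# Ballpark cost comparison using the figures from the post above;
# these are rough thread estimates, not real pricing.

wan_one_time = 600               # fixed two-disk NAS at a second site, one-time
ssd_low, ssd_high = 1200, 4800   # rotation set of 1TB SSDs at ~$200/TB

years = 5
print(f"WAN backup:   ${wan_one_time} one-time "
      f"(~${wan_one_time / years:.0f}/year over {years} years)")
print(f"SSD rotation: ${ssd_low}-${ssd_high} up front, "
      f"plus recurring shipping and handling")
```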
-
@Dashrender said:
....but there have been and always will be cheap companies who don't want to pay a recurring fee and would rather pay for the $200/TB drives for local backups.
Sure, but while SSDs are not recurring, they feel like it, as you need many of them (a minimum of three probably, more likely 10-40). And that's if your backup fits into 1TB. But WAN links are already there in 99.9% of companies (making that up, I suspect much higher rates today), so going to faster ones will happen naturally as speeds just increase.
I never suggested SSD backups didn't have a place. Only that their value was diminishing before they even existed and would continue to diminish over time. They are unlikely to ever be a significant backup media choice.
-
@ajstringham said:
Yeah, but most businesses don't even have 100Mbps WAN connections. Also, if they have satellite offices in an area that isn't a major metroplex, they might have a 3-10Mbps upload limit. I don't see WAN as outpacing SSDs for backup for anyone except the enterprise.
Sure. But that's not the point. Zero businesses have affordable SSDs for backup today. Lots of businesses have affordable WAN for backup today. One is already way ahead of the other. To make SSDs appear really useful, you have to compare the future state of SSDs with the current state of WAN, which makes no sense.
-
@Carnival-Boy said:
I love your analogies. I know about as much about cars as I do about hard drives, but I'll have a go. Generally, most of the damage to a car is done starting it up, when the engine is cold and the car experiences a dramatic change in temperature.
My analogy was about car accidents, which only happen when driving or, I suppose, when the garage collapses on them.
-
@scottalanmiller said:
@ajstringham said:
Yeah, but most businesses don't even have 100Mbps WAN connections. Also, if they have satellite offices in an area that isn't a major metroplex, they might have a 3-10Mbps upload limit. I don't see WAN as outpacing SSDs for backup for anyone except the enterprise.
Sure. But that's not the point. Zero businesses have affordable SSDs for backup today. Lots of businesses have affordable WAN for backup today. One is already way ahead of the other. To make SSDs appear really useful, you have to compare the future state of SSDs with the current state of WAN, which makes no sense.
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
-
@Carnival-Boy said:
But if you don't know what the failure rate is, you can't just make a figure up!
It's known to be significantly higher than 3%, like I've been saying. 30% is one order of magnitude higher and is a very useful reference point for expected failure rates on average. Real failure rates will vary over a massive range, but if you want a reference number, 30% is probably the best that you can get.
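To put the 3% vs 30% gap in concrete terms, here is a hypothetical sketch: the chance of at least one failure per year across a rotation set, assuming independent failures at a given annual failure rate (the set sizes are just examples):

```python
# Chance of at least one drive failing within a year across a rotation set,
# assuming independent failures at a given annual failure rate (AFR).
# Set sizes are illustrative.

def p_any_failure(afr: float, drives: int) -> float:
    return 1 - (1 - afr) ** drives

for afr in (0.03, 0.30):
    for drives in (3, 10):
        print(f"AFR {afr:.0%}, {drives:>2} drives: "
              f"{p_any_failure(afr, drives):.0%} chance of a failure per year")
```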
-
@ajstringham said:
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
Are you sure? That doesn't match anything that I have seen in recent years.
-
@scottalanmiller said:
@ajstringham said:
I agree that WAN is a much more viable option at this point. SMBs will go disk over SSD every time at the current cost of drives. Still, WAN isn't the preferred option of the three. Disk is.
Are you sure? That doesn't match anything that I have seen in recent years.
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in size and budget. Disk was still preferred.
-
In the last few years, I've seen most businesses (that I interact with in person and via forums) going mostly to WAN backups when possible. Disks are still common, but nothing like they were five years ago. For larger backups that need to be transported, it's still tape.
-
@ajstringham said:
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in size and budget. Disk was still preferred.
We are talking about portable disk here, not a disk array. Are you sure that you are not referring to arrays (fixed disk)?
-
@scottalanmiller said:
@ajstringham said:
When I was a Unitrends installer, that's what I saw. I did several dozen installs, and for both backup and archiving, it was almost always disk. The businesses varied, both in size and budget. Disk was still preferred.
We are talking about portable disk here, not a disk array. Are you sure that you are not referring to arrays (fixed disk)?
Obviously the appliance is disk-based, but for archiving and even rotating sets of archives, disk is still what I saw almost exclusively.
-
@scottalanmiller said:
It's known to be significantly higher than 3%, like I've been saying. 30% is one order of magnitude higher and is a very useful reference point for expected failure rates on average. Real failure rates will vary over a massive range, but if you want a reference number, 30% is probably the best that you can get.
I don't believe that a completely made-up @scottalanmiller figure is probably the best that I can get. 4% is also significantly higher than 3%, so maybe I'll go with that. I also don't accept that there is no such thing as wear and tear when hard drives are used - any physical device will suffer from this. I'm not a hard drive expert, but that's a basic law of physics.
-
@ajstringham said:
Obviously the appliance is disk-based, but for archiving and even rotating sets of archives, disk is still what I saw almost exclusively.
That didn't answer anything. Did you see fixed disk (arrays) or mobile disk (USB / IEEE 1394 / eSATA) for archiving?
-
@Carnival-Boy said:
I don't believe that a completely made-up @scottalanmiller figure is probably the best that I can get. 4% is also significantly higher than 3%, so maybe I'll go with that. I also don't accept that there is no such thing as wear and tear when hard drives are used - any physical device will suffer from this. I'm not a hard drive expert, but that's a basic law of physics.
I never said or suggested that there was no wear and tear. I said it was completely insignificant - which is a statistical fact, incredibly well established by every drive study. How could drives be expected to run 20 years or more yet experience noticeable wear and tear in just hours or days of usage? Those two things cannot go together.
-
4% is not a reasonable failure number. 3% is a best case for the best drives. External USB arrays don't get those drives. 3% is not achievable by those drives even under ideal (fixed, datacenter) conditions.
-
Failure rates vary a lot, too. Google found 3%. Backblaze found that even datacenters see 4.2% - and that's with consumer drives, in Backblaze's case.
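For context on where numbers like those come from, annualized failure rate in fleet studies is just failures per drive-year of service; a minimal sketch with invented fleet numbers:

```python
# Annualized failure rate (AFR) as computed in fleet studies:
# failures divided by accumulated drive-years of service.
# The fleet figures below are invented for illustration.

failures = 420
drive_years = 10_000            # e.g. 10,000 drives each running one full year
afr = failures / drive_years
print(f"AFR: {afr:.1%}")        # -> 4.2%, the Backblaze-scale figure above
```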
-
@scottalanmiller said:
I never said or suggested that there was no wear and tear. I said it was completely insignificant - which is a statistical fact, incredibly well established by every drive study. How could drives be expected to run 20 years or more yet experience noticeable wear and tear in just hours or days of usage? Those two things cannot go together.
@ajstringham suggested there was no wear and tear. I don't understand your question. What do you mean they'll experience noticeable wear and tear in just hours?
Can you give me a link to a hard drive study saying wear and tear is a completely insignificant cause of failure? I'm only going on Wikipedia, which talks about wear and tear and may be wrong, but it makes sense to me.
-
@Carnival-Boy said:
@ajstringham suggested there was no wear and tear. I don't understand your question. What do you mean they'll experience noticeable wear and tear in just hours?
Ah, AJ might have overstated it. There is effectively no wear and tear, not that there is none - it's so trivial that any consideration of it is a complete waste. An expectation of 20+ years before wearing out is over 7,300 days. Running as a backup drive, how many days does it actually run? Maybe 30? 30 days out of 7,300 is completely non-noticeable.
My point is that you can't expect any measurable wear and tear in the time a drive spends being used as a backup system.
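The arithmetic behind that, as a quick sketch using the post's own rough numbers:

```python
# Fraction of expected service life consumed by backup duty,
# using the rough numbers above (20+ years of life, ~30 days of runtime).

life_days = 20 * 365     # ~7,300 days of expected service life
backup_days = 30         # rough total runtime as a rotated backup drive
print(f"{backup_days / life_days:.2%} of expected life")   # ~0.41%
```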