New server q's
-
I assume that because "1998 was the year of IT" and hardware RAID's complete takeover came in 1999, that is why the idea stuck around when otherwise, it seems, no one would remember it.
-
@siringo said in New server q's:
That was when we were being taught about x86 server systems such as NT and NetWare, not OSs such as VMS, Unix, OS400 etc.
Yes, non-enterprise systems had a very brief window ONLY WHEN deployed on IA32 (x86) architecture (aka non-enterprise) hardware where hardware RAID made sense for performance reasons.
Even then, it's important to note that hardware RAID was very, very rarely faster. Pentium Pro and Pentium 2 procs were faster BUT were resource constrained. So even though they could do RAID processes faster, the RAID card gave us additional processing power and additional RAM. That was important because it was an era when we were often limited by the total amount of CPU and RAM that we could buy for those kinds of devices. It wasn't that hardware RAID was faster than software RAID, it was that hardware RAID represented a means of adding more CPU and RAM to the total when we just weren't able to get enough.
The Pentium III had so much more cache, a faster RAM controller, and the ability to address so much RAM that we weren't constrained by the hardware anymore, but by the cost of growing it as big as we wanted (and you could afford main CPU and RAM before you could afford equal improvements from a hardware RAID card).
-
@scottalanmiller said in New server q's:
@siringo said in New server q's:
That was when we were being taught about x86 server systems such as NT and NetWare, not OSs such as VMS, Unix, OS400 etc.
Yes, non-enterprise systems had a very brief window ONLY WHEN deployed on IA32 (x86) architecture (aka non-enterprise) hardware where hardware RAID made sense for performance reasons.
Even then, it's important to note that hardware RAID was very, very rarely faster. Pentium Pro and Pentium 2 procs were faster BUT were resource constrained. So even though they could do RAID processes faster, the RAID card gave us additional processing power and additional RAM. That was important because it was an era when we were often limited by the total amount of CPU and RAM that we could buy for those kinds of devices. It wasn't that hardware RAID was faster than software RAID, it was that hardware RAID represented a means of adding more CPU and RAM to the total when we just weren't able to get enough.
The Pentium III had so much more cache, a faster RAM controller, and the ability to address so much RAM that we weren't constrained by the hardware anymore, but by the cost of growing it as big as we wanted (and you could afford main CPU and RAM before you could afford equal improvements from a hardware RAID card).
I like to think of these as the era when CPUs had frequencies in the MHz range.
When we got to the GHz range, the overhead of bit calculations for RAID started to become minuscule.
Now with NVMe SSD drives connected directly to the CPU on the PCIe bus, the RAID adapter has become obsolete.
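As a toy illustration of why those bit calculations are so cheap for a modern CPU: RAID-5 style parity is just XOR across the data blocks. This sketch shows it per byte (the example values are arbitrary); real arrays do the same operation over whole stripes.

```shell
# RAID-5 style parity is XOR across the data blocks; shown here per byte.
d1=$(( 0xA5 ))   # byte from "disk 1"
d2=$(( 0x3C ))   # byte from "disk 2"

# Parity written to the parity disk:
parity=$(( d1 ^ d2 ))
printf 'parity:     0x%02X\n' "$parity"

# If "disk 1" dies, XOR of the survivors rebuilds its data:
rebuilt=$(( parity ^ d2 ))
printf 'rebuilt d1: 0x%02X\n' "$rebuilt"
```

A GHz-class CPU does billions of XORs per second, which is why offloading this to a card stopped making sense.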
-
@Pete-S said in New server q's:
I like to think of these as the era when CPUs had frequencies in the MHz range.
Haha, that's so true.
-
@Pete-S said in New server q's:
When we got to the GHz range, the overhead of bit calculations for RAID started to become minuscule.
I was always told it was the extended math features and the extra cache that pushed the P3 over the edge compared to the P2. They were actually really close in clock cycles at first release. The P2 and P3 nearly blended into each other.
I bet in 99% of cases, the 180MHz and 200MHz PPros were faster than RAID cards, too. It's just that people needed the offloading because they were doing so much with the CPU already. Those PPros were actually amazing performers, but so few people had them.
I was lucky, NTG deployed all Pentium Pro desktops in the NT era. They were screaming fast.
-
@Pete-S said in New server q's:
Now with NVMe SSD drives connected directly to the CPU on the PCIe bus, the RAID adapter has become obsolete.
Yes, for the most part, although they are starting to do RAID controllers on board with NVMe in some cases. So there is some return to it, as well. But in general, yeah, SSD era even pre-NVMe started to push RAID cards past the point of no return. You just can't make a RAID card fast enough without making it cost so much, and the system has SO much extra overhead, there's just no benefit.
-
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said that they couldn't see the speed getting much higher because so much heat was generated.
So that fits in with Pete-S's MHz comment.
-
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
-
Thanks for the info, it's greatly appreciated.
Just so I'm clear, are we saying that software RAID is the RAID function provided by the operating system?
What about the controller configuration programs that come in most servers, where you get told to press CTRL-R to configure, etc.? They'd be the configuration programs for built-in hardware RAID controllers, I guess.
-
But with software RAID in Windows, or maybe call it Windows RAID, to the best of my knowledge I can't configure a hot swap / hot standby disk (or can I?).
-
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
-
@siringo said in New server q's:
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
Blind swap is hot.
But hot can be done without being blind.
Hot means the system is powered on. But with all software RAID and good Hardware RAID, you can manually mark a disk to be removed, swap it and then start the rebuild.
Blind means an idiot walks in, rips it out, sticks the new one in and then it rebuilds itself.
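For a concrete picture of the non-blind version, here is roughly what the manual mark/swap/rebuild sequence looks like with Linux software RAID (mdadm). This is a sketch only: the array name `/dev/md0` and member `/dev/sdb1` are hypothetical, and it needs root on a system with a real md array.

```shell
# Sketch only: device names (/dev/md0, /dev/sdb1) are hypothetical.
# 1. Manually mark the disk as failed, then remove it from the array:
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
# 2. Physically swap the disk (system stays powered on = hot swap).
# 3. Add the replacement; the rebuild starts from here:
mdadm --manage /dev/md0 --add /dev/sdb1
# Watch rebuild progress:
cat /proc/mdstat
```

The point is the operator steps in the middle: hot, but not blind.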
-
@siringo said in New server q's:
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said that they couldn't see the speed getting much higher because so much heat was generated.
So that fits in with Pete-S's MHz comment.
Must have been about 1994. I bought my P75 in 1994, the 90 was the faster option at the time.
-
@siringo said in New server q's:
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
Blind Swap is the singular benefit of hardware RAID. All other aspects of hardware RAID are negatives (slower, riskier, more costly, more complex). Blind Swap is the one and only thing that gives it a reason to exist.
-
@siringo said in New server q's:
They'd be the configuration programs for built-in hardware RAID controllers, I guess.
No reason to believe so. It's about 50/50 hardware or FakeRAID.
-
@siringo said in New server q's:
But with software RAID in Windows, or maybe call it Windows RAID, to the best of my knowledge I can't configure a hot swap / hot standby disk (or can I?).
Under no conditions, EVER, would you deploy Windows software RAID. It's flaky and unstable. But this should never matter, because nothing from Microsoft should ever touch the hardware. You should always have an enterprise hypervisor touching the hardware, and the good ones have excellent software RAID. Only VMware lacks it, and if you've used VMware recently, I'd struggle to consider it production ready, it's gotten so bad.
Microsoft has never gotten a handle on storage. It's an area they've always deferred to others. Now that they are phasing out Hyper-V, there's no situation where it would ever matter as anything RAID or Logical Volumes is handled by something lower in the stack (almost always Linux.) Microsoft used to depend on external hardware, now they depend on Linux. Basically Windows is designed today around the assumption that it will depend on Linux for speed and stability and Windows will just ride on top providing an API layer for compatibility reasons.
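To the earlier hot-standby question: Linux software RAID does support hot spares. A minimal sketch, with hypothetical device names, of creating a RAID-1 array with a spare that rebuilds automatically on member failure:

```shell
# Sketch only: device names are hypothetical; requires root and real disks.
# Create a RAID-1 array with one hot spare; if a member fails,
# md rebuilds onto the spare with no operator action:
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --spare-devices=1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# A spare can also be added later; --add on a healthy,
# non-degraded array registers the disk as a spare:
mdadm --manage /dev/md0 --add /dev/sde1
```

So the hot-standby limitation is a Windows RAID limitation, not a software RAID one.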
-
I understand why you'd deploy Hyper-V, because there's probably no benefit to doing a good job and a lot of risk in not doing what everyone else does. The sad state of politics over results. Education in the US is the same: they couldn't care less if things are done well, they only care if it makes someone else look bad or funnels money to wherever they are laundering it. So in your case, you aren't dealing with anything resembling IT best practices or standards, or really anything you could consider production. Again, not that Hyper-V is bad, it's just.... done. And done by years; the last release was three years ago and no more are coming. That's not ancient, but it's really, really old to be deploying something whose future came to a full stop years ago.
Hyper-V in your environment is technical debt. But likely they will run it long, long after it is safe because, really, who cares, and likely you will not be around to deal with any issues it causes. But it is technical debt that never should have existed (it was never a GREAT choice, only an acceptable one) and should have been discontinued immediately as the "new" deployment choice as soon as the product was discontinued as a production release. So now it's nothing but debt: problems for their own sake without any benefit. Literally, zero.
But you probably need to do it. So you have to work within those confines of not deploying production level systems. Hyper-V has no production level software RAID, so since that is the choice, obviously you rule out software RAID because you are stuck with a system that lacks it. That software RAID is the better technology and costs a lot less is completely irrelevant, because your issues have nothing to do with RAID types but with the availability of implementations given your pre-chosen deployment systems.
Likewise, you used to have no option of hardware RAID on big RISC and EPIC systems, because hardware RAID wasn't just not considered good there, it was never offered. Giant systems have never had hardware RAID options, not ever; it was always limited to small x86 and AMD64 systems. Even ARM based systems have never had RAID hardware offered. So in the past, if you chose those big iron systems (and still today with mainframes), you ruled out hardware RAID because it didn't exist. And with choosing Hyper-V, you rule out software RAID because, while it exists, it doesn't exist in a production viable form.
-
All of that is to say...
Knowing that software RAID is excellent, that hardware RAID has existed for the last two decades for questionable reasons, and that you were given bad info is good to know. But it ultimately doesn't change what you are going to deploy.
You have to deploy hardware RAID on Hyper-V because those choices were made for you ahead of time not based on what is good, but on something else. It is what it is.
Your statement that software RAID was frowned upon was wrong (as far as actual storage engineers go); the idea that it was bad was always a myth. Now you know the truth. But the truth isn't relevant here, because it's not part of your decision matrix, if you even have one.
Either you deploy what everyone else does and you are stuck with their decisions; you can't rethink individual decisions without reconsidering the whole, as nothing in a system can be changed in a vacuum. Or you start over and follow best practices and good decision guidelines, and you'll come up with systems with absolutely no resemblance to what they had before. I doubt you want to do that, so all your choices are already made for you, as each depends on the last like dominoes.
-
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different to what everyone else was deploying, even if it was cheaper, better, faster, more resilient etc, I'd be lambasted by others simply because it was different and more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
-
@siringo said in New server q's:
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different to what everyone else was deploying, even if it was cheaper, better, faster, more resilient etc, I'd be lambasted by others simply because it was different and more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
So you should buy the same old server model from 2016, to stay consistent with what they currently know?