New server q's
-
@siringo said in New server q's:
I don't have to use Hyper-V but if I don't, this school will stick out like a sore thumb as Hyper-V is used throughout this (Govt) Department.
How are they keeping that updated? Microsoft no longer releases a production version of Hyper-V, so does this mean the entire department is running unmaintained software at its very core AND hasn't even discussed the future, just hoping for the best as if hackers and malware aren't a thing? Or is Hyper-V being deployed like a desktop system - which is the only form of Hyper-V still made, for end users, not for production servers?
-
@siringo said in New server q's:
When I did all my training, we were always told that hardware RAID was much faster than software RAID.
You were taught by people who didn't have the slightest clue what they were talking about. That's some seriously bad info.
Here is why...
-
The speed is based on the processor, not hardware vs software. If the hardware RAID card had a faster processor (and memory combo) than the main CPU and RAM, then it would be faster - but if we could make RAID cards that fast, we'd use the same parts to make our servers faster. It's total nonsense to think that this would be the case. Whoever said it used no common sense.
-
Hardware RAID is just software RAID running on the card's own processor, so at some low level the two conceptually cannot be separated. That's not what they mean, but what they mean makes no sense either.
-
Enterprise software RAID has always, no exceptions, been the fastest RAID. Hardware RAID has never had an implementation, ever, that was "the fastest". It isn't that the people teaching you were out of date; they were simply repeating a common myth.
-
There was a VERY brief time in the 1990s when Windows systems, and only Windows systems (because no other server OS was tied to slow 32-bit Intel processors), could not scale up enough to have the CPU resources to dedicate to RAID. So enterprise-class processors went into the RAID cards rather than into Windows itself - Windows used consumer processors, so while servers in general could go faster than a RAID card, Windows servers could not for a time - and vendors really did make RAID cards that were faster than those non-enterprise Windows servers. This was only true during the era of the Intel 386, 486, Pentium, Pentium Pro and Pentium 2 processors, which is a VERY short time frame. In February 1999 the Pentium 3 was released with so much performance that no RAID card has been able to compete with the main CPU since, even on Windows machines.
So while the info was flat out wrong in every possible way, it's often based on a misunderstanding of an extremely brief and marginal performance advantage for RAID cards in a super specific, non-enterprise scenario, in an era when Windows servers were marginal in the business world at best and only just starting to prove themselves. Only the Windows variations running on 32-bit Intel had the issue; there were non-IA32 options in that era for which this did not apply, so even on Windows it was never really true that hardware RAID was faster - that only happened if you intentionally bought a slower server CPU and "fixed" the mistake by adding hardware RAID. Software RAID has been faster in every conceivable scenario since early 1999 (the Windows NT 4 era), and that was broadly taught by Microsoft themselves - it was even required knowledge for their exams. It's often used as the example of fake information making it into the IT culture and being repeated by mentors to interns, generation after generation.
Given that RAID was invented in 1987, that hardware RAID devices didn't come out until a while after that, that RAID 0, 1 and 10 probably never had a moment when hardware RAID was faster, and that every performance reason to consider hardware RAID was over by early 1999, it's amazing that anyone has continued to use or even remember hardware RAID at all (it's expensive and weird, honestly), or that the industry ever existed, or that anyone remembers the brief blip in time when hardware RAID was marginally faster in a very tiny, limited scenario.
-
-
I assume that because "1998 was the year of IT" and hardware RAID wasn't completely overtaken until 1999, that's why the idea stuck around when otherwise it seems like no one would remember it.
-
@siringo said in New server q's:
That was when we were being taught about x86 server systems such as NT and NetWare, not OSs such as VMS, Unix, OS400 etc.
Yes, non-enterprise systems had a very brief window ONLY WHEN deployed on IA32 (x86) architecture (aka non-enterprise) hardware where hardware RAID made sense for performance reasons.
Even then, it's important to note that hardware RAID was very, very rarely faster. Pentium Pro and Pentium 2 processors were faster BUT were resource constrained. So even though they could do the RAID processing faster, the RAID card gave us additional processing power and additional RAM. That mattered because it was an era when we were often limited by the total amount of CPU and RAM we could buy for those kinds of machines. It wasn't that hardware RAID was faster than software RAID; it was that hardware RAID was a way of adding more CPU and RAM to the total when we just couldn't get enough.
The Pentium III had so much more cache, a faster RAM controller, and the ability to address so much more RAM that we weren't constrained by the hardware anymore, only by the cost of growing it as big as we wanted (and you could afford more main CPU and RAM before you could afford an equal improvement from a hardware RAID card).
-
@scottalanmiller said in New server q's:
@siringo said in New server q's:
That was when we were being taught about x86 server systems such as NT and NetWare, not OSs such as VMS, Unix, OS400 etc.
Yes, non-enterprise systems had a very brief window ONLY WHEN deployed on IA32 (x86) architecture (aka non-enterprise) hardware where hardware RAID made sense for performance reasons.
Even then, it's important to note that hardware RAID was very, very rarely faster. Pentium Pro and Pentium 2 processors were faster BUT were resource constrained. So even though they could do the RAID processing faster, the RAID card gave us additional processing power and additional RAM. That mattered because it was an era when we were often limited by the total amount of CPU and RAM we could buy for those kinds of machines. It wasn't that hardware RAID was faster than software RAID; it was that hardware RAID was a way of adding more CPU and RAM to the total when we just couldn't get enough.
The Pentium III had so much more cache, a faster RAM controller, and the ability to address so much more RAM that we weren't constrained by the hardware anymore, only by the cost of growing it as big as we wanted (and you could afford more main CPU and RAM before you could afford an equal improvement from a hardware RAID card).
I like to think of these as the era when CPUs had frequencies in the MHz range.
When we got to the GHz range the overhead of bit calculations for RAID started to become minuscule.
Now with NVMe SSD drives connected directly to the CPU on the PCIe bus, the RAID adapter has become obsolete.
-
@Pete-S said in New server q's:
I like to think of these as the era when CPUs had frequencies in the MHz range.
Haha, that's so true.
-
@Pete-S said in New server q's:
When we got to the GHz range the overhead of bit calculations for RAID started to become minuscule.
I was always told it was the extended math features and the extra cache that pushed the P3 over the edge compared to the P2. They were actually really close in clock cycles at first release. The P2 and P3 nearly blended into each other.
I bet in 99% of cases the 180MHz and 200MHz PPros were faster than RAID cards, too. It's just that people needed the offloading because they were already doing so much with the CPU. Those PPros were actually amazing performers, but so few people had them.
I was lucky, NTG deployed all Pentium Pro desktops in the NT era. They were screaming fast.
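To make the "bit calculations" point concrete: RAID 5 parity is nothing more than XOR across the data blocks in a stripe, which any GHz-class CPU chews through at close to memory bandwidth (real implementations use vectorized XOR for exactly this). Here's a minimal Python sketch, with made-up block sizes and disk counts purely for illustration, of the entire computation a dedicated card was once sold to offload:

```python
# Minimal sketch: RAID 5 style parity is just XOR across the data blocks in a
# stripe. The block size and "disk" count here are arbitrary examples.
import os

def parity(blocks: list[bytes]) -> bytes:
    """XOR all the blocks together to produce (or reconstruct) a block."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe: four hypothetical 64 KiB data blocks.
stripe = [os.urandom(64 * 1024) for _ in range(4)]
p = parity(stripe)  # the parity block a RAID 5 write has to calculate

# Lose any one data block and XOR-ing the survivors with the parity
# block gives it straight back - that's the whole rebuild math.
lost = stripe[2]
recovered = parity([stripe[0], stripe[1], stripe[3], p])
assert recovered == lost
```

That's it - no magic that needs a dedicated processor once the main CPU stopped being the bottleneck.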
-
@Pete-S said in New server q's:
Now with NVMe SSD drives connected directly to the CPU on the PCIe bus, the RAID adapter has become obsolete.
Yes, for the most part, although they are starting to do RAID controllers on board with NVMe in some cases, so there is some return to it as well. But in general, yeah, the SSD era, even pre-NVMe, started to push RAID cards past the point of no return. You just can't make a RAID card fast enough without making it cost too much, and the main system has SO much spare headroom that there's just no benefit.
-
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said that they couldn't see the speed getting much higher because so much heat was generated.
So that fits in with Pete-S's MHz comment.
-
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
-
Thanks for the info, it's greatly appreciated.
Just so I'm clear, are we saying that software RAID is the RAID function provided by the operating system?
What about the controller configuration programs that come with most servers, where you get told to press CTRL-R to configure, etc.? They'd be the configuration programs for the built-in hardware RAID controllers, I guess.
-
But with software RAID in Windows (or maybe call it Windows RAID), to the best of my knowledge, I can't configure a hot swap / hot standby disk (or can I?).
-
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
-
@siringo said in New server q's:
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
Blind swap is hot.
But hot can be done without being blind.
Hot means the system is powered on. But with all software RAID and good Hardware RAID, you can manually mark a disk to be removed, swap it and then start the rebuild.
Blind means an idiot walks in, rips it out, sticks the new one in and then it rebuilds itself.
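For the software RAID half of that, the manual "mark it, pull it, rebuild" flow on Linux md looks roughly like this - a sketch only, with a hypothetical array and hypothetical device names, not a runbook:

```python
# Sketch of the manual (hot, but not blind) swap sequence on Linux md software
# RAID. Array and device names are hypothetical; this must run as root.
import subprocess

ARRAY = "/dev/md0"         # hypothetical md array
FAILING = "/dev/sdb1"      # member being pulled
REPLACEMENT = "/dev/sdc1"  # new disk/partition going in

def mdadm(*args: str) -> None:
    """Print and run an mdadm command, stopping if it fails."""
    cmd = ["mdadm", *args]
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Manually mark the member as failed so md stops using it.
mdadm("--manage", ARRAY, "--fail", FAILING)
# 2. Remove it from the array; now it is safe to physically swap the drive.
mdadm("--manage", ARRAY, "--remove", FAILING)
#    ... someone swaps the physical disk here ...
# 3. Add the replacement; md begins rebuilding onto it.
mdadm("--manage", ARRAY, "--add", REPLACEMENT)
# Keep an eye on the rebuild.
subprocess.run(["cat", "/proc/mdstat"])
```

The point being: hot is easy everywhere, it's the "idiot rips it out" blind part that software RAID doesn't hand you for free.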
-
@siringo said in New server q's:
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said that they couldn't see the speed getting much higher because so much heat was generated.
So that fits in with Pete-S's MHz comment.
Must have been about 1994. I bought my P75 in 1994, the 90 was the faster option at the time.
-
@siringo said in New server q's:
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap Jared? Do you mean hot swap? Remove while powered up?
Blind Swap is the singular benefit of hardware RAID. All other aspects of hardware RAID are negatives (slower, riskier, more costly, more complex). Blind Swap is the one and only thing that gives it a reason to exist.
-
@siringo said in New server q's:
They'd be the configuration programs for the built-in hardware RAID controllers, I guess.
No reason to believe so. It's about 50/50 hardware or FakeRAID.
-
@siringo said in New server q's:
But with software RAID in Windows (or maybe call it Windows RAID), to the best of my knowledge, I can't configure a hot swap / hot standby disk (or can I?).
Under no conditions, EVER, would you deploy Windows software RAID. It's flaky and unstable. But this should never matter, because nothing from Microsoft should ever touch the hardware. You should always have an enterprise hypervisor touching the hardware, and the good ones have excellent software RAID. Only VMware lacks it, and if you've used VMware recently, I'd struggle to consider it production ready, it's gotten so bad.
Microsoft has never gotten a handle on storage. It's an area where they've always deferred to others. Now that they are phasing out Hyper-V, there's no situation where it would ever matter, as anything like RAID or logical volumes is handled by something lower in the stack (almost always Linux). Microsoft used to depend on external hardware; now they depend on Linux. Basically, Windows today is designed around the assumption that it will depend on Linux for speed and stability, and Windows just rides on top providing an API layer for compatibility reasons.
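To circle back to the hot standby question: OS level software RAID does hot spares just fine, it's just that the place you do it is that lower layer, not Windows. For illustration, a hedged sketch of Linux md RAID 1 created with a hot spare that takes over on its own when a member dies (device names are hypothetical examples):

```python
# Sketch: create a Linux md RAID 1 array with a hot spare. When a member
# fails, md promotes the spare and rebuilds automatically. Device names are
# hypothetical; run as root. mdadm may ask for confirmation interactively.
import subprocess

def run(*cmd: str) -> None:
    """Print and run a command, stopping if it fails."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Two active mirror members plus one hot spare standing by.
run("mdadm", "--create", "/dev/md0",
    "--level=1", "--raid-devices=2", "--spare-devices=1",
    "/dev/sdb1", "/dev/sdc1", "/dev/sdd1")

# Adding another disk to a healthy array later just parks it as a spare.
run("mdadm", "--manage", "/dev/md0", "--add", "/dev/sde1")

# Check members, spares, and rebuild state.
run("mdadm", "--detail", "/dev/md0")
```

The hypervisors with good software RAID are doing essentially this (or something like ZFS) one layer down, which is where it belongs.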
-
I understand why you'd deploy Hyper-V, because there's probably no benefit to doing a good job and a lot of risk in not doing what everyone else does. The sad state of politics over results. Education in the US is the same: they couldn't care less whether things are done well, they only care if it makes someone else look bad or funnels money to wherever they are laundering it. So in your case, you aren't dealing with anything resembling IT best practices or standards, or really anything you could consider production. Again, not that Hyper-V is bad, it's just.... done. And done for years now - the last release was three years ago and no more are coming. That's not ancient, but it's really, really old to be deploying something whose future came to a full stop years ago.
Hyper-V in your environment is technical debt. But likely they will run it long, long after it stops being safe because, really, who cares, and likely you will not be around to deal with any issues it causes. But it is technical debt that never should have existed (it was never a GREAT choice, only an acceptable one) and it should have been dropped as the "new" deployment choice the moment the product was discontinued as a production release. So now it's nothing but debt: problems for their own sake without any benefit. Literally, zero.
But you probably need to do it. So you have to work within those confines of not deploying production level systems. Hyper-V has no production level software RAID, so since that is the choice, you obviously rule out software RAID because you are stuck with a system that lacks it. That software RAID is the better technology and costs a lot less is completely irrelevant, because your issue has nothing to do with RAID types and everything to do with the availability of implementations given your pre-chosen deployment systems.
Likewise, you used to have no option of hardware RAID on big RISC and EPIC systems, because hardware RAID wasn't just not considered good there, it was never even offered. Giant systems have never had hardware RAID options, not ever; it was always limited to small x86 and AMD64 systems. Even ARM based systems have never had RAID hardware offered. So in the past, if you chose those big iron systems (and still today with mainframes), you ruled out hardware RAID because it didn't exist. With Hyper-V, you rule out software RAID because, while it exists, it doesn't exist in a production viable form.
-
All of that is to say...
Knowing that software RAID is excellent, that hardware RAID has existed for the last two decades for questionable reasons, and that you were given bad info is all good to know. But it ultimately doesn't change what you are going to deploy.
You have to deploy hardware RAID on Hyper-V because those choices were made for you ahead of time not based on what is good, but on something else. It is what it is.
Your statement that software RAID was frowned upon was wrong (as far as actual storage engineers go); the idea that it was bad was always a myth. Now you know the truth. But the truth isn't relevant here, because it's not part of your decision matrix, if you even have one.
Either you deploy what everyone else does and you are stuck with their decisions - you can't rethink individual decisions without reconsidering the whole, because nothing in a system can be changed in a vacuum. Or you start over, follow best practices and good decision guidelines, and end up with systems that bear absolutely no resemblance to what they had before. I doubt you want to do that, so all your choices are already made for you, each depending on the last like dominoes.