New server q's
-
@Pete-S said in New server q's:
When we got to the GHz range, the overhead of bit calculations for RAID started to become minuscule.
I was always told it was the extended math features and the extra cache that pushed the P3 over the edge compared to the P2. They were actually really close in clock cycles at first release; the P2 and P3 nearly blended into each other.
I bet in 99% of cases the 180MHz and 200MHz PPros were faster than RAID cards, too. It's just that people needed the offloading because they were already doing so much with the CPU. Those PPros were actually amazing performers, but so few people had them.
I was lucky, NTG deployed all Pentium Pro desktops in the NT era. They were screaming fast.
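To make those "bit calculations" concrete, here's a minimal Python sketch of RAID 5-style parity (the block contents are made up for illustration). Parity is just the XOR of the data blocks, and a lost block is recovered by XORing the survivors:

```python
# Minimal sketch of RAID 5-style parity: the parity block is the XOR of
# the data blocks, and any single lost block is recovered by XORing the
# survivors. Block contents are made-up illustration data.

def xor_blocks(*blocks: bytes) -> bytes:
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

data1 = b"hello world, dis"          # 16-byte block on disk 1
data2 = b"k two holds this"          # 16-byte block on disk 2
parity = xor_blocks(data1, data2)    # parity block on disk 3

# Simulate losing disk 1: rebuild its block from disk 2 plus parity.
rebuilt = xor_blocks(data2, parity)
assert rebuilt == data1
print("rebuilt block:", rebuilt)
```

On any GHz-class CPU that XOR loop runs at effectively memory speed, which is the whole point: the parity math itself stopped being worth offloading.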
-
@Pete-S said in New server q's:
Now with NVMe SSD drives connected directly to the CPU on the PCIe bus, the RAID adapter has become obsolete.
Yes, for the most part, although in some cases they are starting to do on-board RAID controllers for NVMe, so there is some return to it as well. But in general, yeah, the SSD era, even pre-NVMe, started to push RAID cards past the point of no return. You just can't make a RAID card fast enough without making it cost too much, and the system has SO much extra headroom that there's just no benefit.
-
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said they couldn't see clock speeds getting much higher because of how much heat was generated.
So that fits in with Pete-S's MHz comment.
-
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it in many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
-
Thanks for the info, it's greatly appreciated.
Just so I'm clear, are we saying that software RAID is the RAID function provided by the operating system?
What about the controller configuration programs that come with most servers, where you get told to press CTRL-R to configure, etc.? They'd be the configuration programs for built-in hardware RAID controllers, I guess.
-
But with software RAID in Windows, or maybe call it Windows RAID, to the best of my knowledge I can't configure a hot swap / hot spare disk (or can I?).
-
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it in many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap, Jared? Do you mean hot swap? Remove while powered up?
-
@siringo said in New server q's:
What do you mean by blind swap, Jared? Do you mean hot swap? Remove while powered up?
Blind swap is hot.
But hot can be done without being blind.
Hot means the system is powered on. With all software RAID and good hardware RAID, you can manually mark a disk for removal, swap it, and then start the rebuild.
Blind means an idiot walks in, rips the disk out, sticks the new one in, and it rebuilds itself.
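As a concrete example of that manual, hot-but-not-blind flow, here's a rough sketch against Linux md software RAID using mdadm. The array and device names (/dev/md0, /dev/sdb1, /dev/sdc1) are hypothetical:

```python
# Rough sketch of a hot (but not blind) disk swap on Linux md software
# RAID. Array and device names are hypothetical; must be run as root.
import subprocess

ARRAY = "/dev/md0"      # hypothetical md array
OLD_DISK = "/dev/sdb1"  # failing member about to be pulled
NEW_DISK = "/dev/sdc1"  # replacement device after the physical swap

def mdadm(*args: str) -> None:
    """Run an mdadm command, raising if it fails."""
    subprocess.run(["mdadm", *args], check=True)

# 1. Manually mark the disk as failed, then remove it from the array.
mdadm("--manage", ARRAY, "--fail", OLD_DISK)
mdadm("--manage", ARRAY, "--remove", OLD_DISK)

# 2. A human physically swaps the drive; the system stays powered on.

# 3. Add the replacement and the array begins rebuilding onto it.
mdadm("--manage", ARRAY, "--add", NEW_DISK)

# Check rebuild progress.
print(open("/proc/mdstat").read())
```

The "blind" part that hardware RAID adds is making steps 1 and 3 happen automatically when the drive is yanked and the replacement is inserted.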
-
@siringo said in New server q's:
That's all very interesting.
When I did my training, I got to attend a breakfast meeting with two high-up tech execs from Compaq.
They showed us the new 90MHz Pentium. At the meeting they said they couldn't see clock speeds getting much higher because of how much heat was generated.
So that fits in with Pete-S's MHz comment.
Must have been about 1994. I bought my P75 in 1994; the 90 was the faster option at the time.
-
@siringo said in New server q's:
@JaredBusch said in New server q's:
Hardware RAID cards will not go anywhere until we have systems designed to blind swap.
I have no issues with software RAID. I use it in many places. But for most SMB, it is always hardware RAID cards because I phone it in and need that blind swap capability.
What do you mean by blind swap, Jared? Do you mean hot swap? Remove while powered up?
Blind Swap is the singular benefit of hardware RAID. All other aspects of hardware RAID are negatives (slower, riskier, more costly, more complex). Blind Swap is the one and only thing that gives it a reason to exist.
-
@siringo said in New server q's:
They'd be the configuration programs for built-in hardware RAID controllers, I guess.
No reason to believe so. It's about 50/50 between real hardware RAID and FakeRAID.
-
@siringo said in New server q's:
But with software RAID in Windows, or maybe call it Windows RAID, to the best of my knowledge I can't configure a hot swap / hot spare disk (or can I?).
Under no conditions, EVER, would you deploy Windows software RAID. It's flaky and unstable. But this should never matter, because nothing from Microsoft should ever touch the hardware. You should always have an enterprise hypervisor touching the hardware, and the good ones have excellent software RAID. Only VMware lacks it, and if you've used VMware recently, I'd struggle to consider it production ready, it's gotten so bad.
Microsoft has never gotten a handle on storage. It's an area they've always deferred to others. Now that they are phasing out Hyper-V, there's no situation where it would ever matter, as anything RAID or logical volume related is handled by something lower in the stack (almost always Linux). Microsoft used to depend on external hardware; now they depend on Linux. Basically, Windows today is designed around the assumption that it will depend on Linux for speed and stability, and Windows will just ride on top providing an API layer for compatibility reasons.
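To the earlier hot-spare question: Linux md, one of those "lower in the stack" software RAID implementations, handles that natively. As a hedged sketch (device names hypothetical), adding a disk to a healthy array parks it as a spare, and md rebuilds onto it automatically the moment a member fails:

```python
# Sketch: giving a Linux md array a hot spare. Adding a disk to a
# healthy array parks it as a spare; md rebuilds onto it automatically
# when a member fails. Device names are hypothetical; run as root.
import subprocess

# /dev/sdd1 becomes a spare because /dev/md0 has no failed members.
subprocess.run(["mdadm", "--manage", "/dev/md0", "--add", "/dev/sdd1"],
               check=True)

# The new disk now shows up with the role "spare" in the detail output.
subprocess.run(["mdadm", "--detail", "/dev/md0"], check=True)
```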
-
I understand why you'd deploy Hyper-V: there's probably no benefit to doing a good job and a lot of risk in not doing what everyone else does. The sad state of politics over results. Education in the US is the same; they couldn't care less whether things are done well, only whether it makes someone else look bad or funnels money to wherever they are laundering it. So in your case, you aren't dealing with anything resembling IT best practices or standards, or really anything you could consider production. Again, not that Hyper-V is bad, it's just... done. And it's been done for years: the last release was three years ago, and no more are coming. That's not ancient, but it's really, really old to be deploying something whose future came to a full stop years ago.
Hyper-V in your environment is technical debt. But likely they will run it long, long after it stops being safe because, really, who cares, and likely you will not be around to deal with any issues it causes. But it is technical debt that never should have existed (it was never a GREAT choice, only an acceptable one), and it should have been dropped as the "new" deployment choice as soon as the product was discontinued as a production release. So now it's nothing but debt: problems for their own sake without any benefit. Literally zero.
But you probably need to do it. So you have to work within those confines and accept not deploying production-level systems. Hyper-V has no production-level software RAID, so since Hyper-V is the choice, you obviously rule out software RAID; you are stuck with a system that lacks it. That software RAID is the better technology and costs a lot less is completely irrelevant, because your issue has nothing to do with RAID types and everything to do with the availability of implementations given your pre-chosen deployment system.
Likewise, you used to have no option of hardware RAID on big RISC and EPIC systems: it wasn't just that hardware RAID was considered no good there, it was never even offered. Giant systems have never had hardware RAID options, not ever; hardware RAID was always limited to small x86 and AMD64 systems. Even ARM-based systems have never had RAID hardware offered. So in the past, if you chose those big iron systems (and still today with mainframes), you ruled out hardware RAID because it didn't exist. With choosing Hyper-V, you rule out software RAID because, while it exists, it doesn't exist in a production-viable form.
-
All of that is to say...
Knowing that software RAID is excellent, that hardware RAID has existed for the last two decades for questionable reasons, and that you were given bad info is all good to know. But it ultimately doesn't change what you are going to deploy.
You have to deploy hardware RAID on Hyper-V because those choices were made for you ahead of time, based not on what is good but on something else. It is what it is.
Your statement that software RAID was frowned upon was wrong (as far as actual storage engineers go); that it was bad was always a myth. Now you know the truth. But the truth isn't relevant here, because it's not part of your decision matrix, if you even have one.
Either you deploy what everyone else does and you are stuck with their decisions (you can't rethink individual decisions without reconsidering the whole; nothing in a system can be changed in a vacuum), or you start over, follow best practices and good decision guidelines, and come up with systems bearing absolutely no resemblance to what they had before. I doubt you want to do that, so all your choices are already made for you, each depending on the last like dominoes.
-
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different from what everyone else was deploying, even if it was cheaper, better, faster, more resilient, etc., I'd be lambasted by others simply because it was different and, more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
-
@siringo said in New server q's:
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different from what everyone else was deploying, even if it was cheaper, better, faster, more resilient, etc., I'd be lambasted by others simply because it was different and, more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
So you should buy the same old server model from 2016 to stay consistent with what they currently know?
-
@siringo said in New server q's:
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different from what everyone else was deploying, even if it was cheaper, better, faster, more resilient, etc., I'd be lambasted by others simply because it was different and, more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
It's a sad situation that covering IT up and not taking it seriously isn't just allowed, but mandated. But this is the world that we live in. IT tends to be a point where we are constantly exposing varying degrees of ineptitude, corruption, money laundering, fraud, etc. What's so awful is that it turns out to be absolutely everywhere, and literally no one cares.
-
@Pete-S said in New server q's:
@siringo said in New server q's:
Those last 2 posts are spot on Scott.
If I were to deploy a solution that was different from what everyone else was deploying, even if it was cheaper, better, faster, more resilient, etc., I'd be lambasted by others simply because it was different and, more likely, not understood.
That can lead to unhappy management, which can then lead to all sorts of grief for me.
This is obviously, not what I want.
Thanks for all the info & advice, it is greatly appreciated.
So you should buy the same old server model from 2016 to stay consistent with what they currently know?
Unfortunately yes.
It comes under the job-preservation heading. I live in an area where IT work is extremely hard to secure because there isn't much of it, so rocking the boat is not a good move.