We Don't Have the Budget to Save Money
-
@Carnival-Boy said:
This is where Spiceworks rocks, I get tons of advice from an NTG consultant and I don't pay a penny for it, but not everyone knows about that! Generally, getting advice from forums can be extremely hit and miss, and you need to be fairly experienced to tell the expert advice from the bullshit - not everyone is.
And MangoLassi (cough, cough). But yeah, there are tons of awesome consulting firms and consultants who give advice freely in the public space. There are many options to getting advice out there when it is needed.
-
This mentality is way too common. Anything that people don't understand gets labeled "too expensive." And then shops that don't hire good IT talent wonder why everything they do costs so much. There is a reason that good IT people cost real money: they save real money (or drive innovation). Not every company needs expensive IT in house all of the time, but everyone needs it now and then.
-
@scottalanmiller Speaking of which, here's another example:
-
-
@Bill-Kindle said:
"Can't afford a SAN, you HAVE TO HAVE A SAN to do virtualization." and then did everything they could to torpedo the idea to upper management. Frustrating.
Wow. I've never actually used my SANs for virtualization. I just don't like having all the eggs in one basket, as they say.
-
@ajstringham said:
@Bill-Kindle said:
@scottalanmiller Speaking of which, here's another example:
Wow...
It's a rough one. I had not seen it until it was linked here.
-
You see this behavior a lot more in non-profits. I assume it has to do with the lack of management training there. People with MBAs or extensive experience are expensive. When a non-profit cuts corners, that often means cutting corners in the places that would save them money, like management that understands money. The result is losing money all over the place with no one realizing it or caring.
-
@scottalanmiller said:
@Carnival-Boy said:
How much did they spend? I've bought HP's super-budget Proliant servers for less than $200 in the past. So six of those would come in at $1200. If you know nothing about virtualisation and would have to learn, then six budget servers could, in theory, work out cheaper. And how unreliable is HP's software RAID, anyway?
In this case it was not HP, it was Dell. Same difference, just saying. But Dell does not have the equivalent of the $200 HP MicroServer. Their entry level is basically a $600 - $800 desktop unit. There is no way that six of them was anywhere nearly as cheap as a single slightly better unit with hardware RAID.
And the software RAID not working is what brought the topic up. The awkward third-party software RAID that both HP and Dell offer is not reliable.
I've tossed it about a time or two: software RAID or hardware? Even with my lack of knowledge and hands-on experience with RAID, I'd go hardware. It just seems safer and more reliable.
-
@g.jacobse said:
I've tossed it about a time or two: software RAID or hardware? Even with my lack of knowledge and hands-on experience with RAID, I'd go hardware. It just seems safer and more reliable.
I think what a lot of people miss is that it is more reliable when humans are involved. Good software RAID (like that built into Linux or Solaris) is rock solid. But it requires more knowledge and planning than hardware RAID does, at least the good stuff. A SmartArray or a PERC is going to "just work." You don't even need a system admin. Your server tech in the server room can swap drives when he sees an amber light come on, no need to grab the system admin for such a low level hardware task.
Put software RAID in there and that same drive swap could be a disaster and the system admin has to coordinate the swap. That blind swap capability and automated rebuilds allow for a separation of duties and an abstraction from the OS that is really critical in the SMB and is handy anywhere.
Software RAID can be great. But hardware RAID has some real advantages.
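To make the coordination point concrete, here is a minimal Python sketch of what a RAID-5 rebuild actually computes after a drive swap: the OS regenerates the replaced member by XORing the surviving members together. The three-data-drive layout and stripe size are purely illustrative.

```python
# Simulate the rebuild a software RAID-5 array performs after a drive swap.
# Illustrative only: three "data drives" plus one parity drive, each holding
# a single stripe of bytes.

import secrets

STRIPE = 4096  # bytes per stripe (arbitrary for the demo)

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# Build the array: parity is the XOR of the data stripes.
data = [secrets.token_bytes(STRIPE) for _ in range(3)]
parity = xor_blocks(data)

# "Drive 1" fails and is replaced; the rebuild XORs the survivors
# (the remaining data stripes plus parity) to regenerate its contents.
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)

assert rebuilt == data[1]  # the replaced drive's stripe is recovered
```

A hardware controller does the same math, just below the OS, which is why the drive tech can swap blindly without anyone touching the operating system.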
-
@scottalanmiller said:
@g.jacobse said:
I've tossed it about a time or two: software RAID or hardware? Even with my lack of knowledge and hands-on experience with RAID, I'd go hardware. It just seems safer and more reliable.
I think what a lot of people miss is that it is more reliable when humans are involved. Good software RAID (like that built into Linux or Solaris) is rock solid. But it requires more knowledge and planning than hardware RAID does, at least the good stuff. A SmartArray or a PERC is going to "just work." You don't even need a system admin. Your server tech in the server room can swap drives when he sees an amber light come on, no need to grab the system admin for such a low level hardware task.
Put software RAID in there and that same drive swap could be a disaster and the system admin has to coordinate the swap. That blind swap capability and automated rebuilds allow for a separation of duties and an abstraction from the OS that is really critical in the SMB and is handy anywhere.
Software RAID can be great. But hardware RAID has some real advantages.
Wouldn't hardware RAID be faster?
-
@ajstringham said:
Wouldn't hardware RAID be faster?
Not since around 2001 or so. It is actually slower. There is more horsepower in the main CPUs than there is in the embedded processors even on the best RAID cards. And the best cards only have a 1GB cache whereas software RAID has as much cache as you want to throw at it, 2GB is easy, 32GB isn't unheard of.
The idea that hardware RAID is faster comes from the Pentium 32bit era (1990s) when hardware RAID was new. Back then the main CPU(s) were so slow and overloaded that offloading the parity calculations was a big deal. It was in the Pentium Pro era (inclusive of the PPro, P2, P3 and Core processors) that the CPU(s) got so fast and were so rarely overloaded that software RAID passed hardware RAID in performance. So that could have been in the 1990s but was more reasonably in the early 2000s. Big systems like Power and Sparc systems never went to hardware RAID at all (and were always the fastest systems) because they always had stable operating systems with super fast CPUs and more of them.
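For a sense of how light the offloaded work is: RAID parity is plain XOR, which a modern CPU chews through far faster than disks deliver data. A rough Python sketch (buffer sizes are arbitrary, chosen for illustration):

```python
# The parity work a RAID card offloads is plain XOR over the data stripes.
# Compute parity over four 8 MiB stripes and verify it cancels back out.

import secrets
import time

MB = 1024 * 1024
stripes = [secrets.token_bytes(8 * MB) for _ in range(4)]

start = time.perf_counter()
# XOR via Python's arbitrary-precision ints: one wide pass per stripe.
acc = 0
for s in stripes:
    acc ^= int.from_bytes(s, "little")
parity = acc.to_bytes(8 * MB, "little")
elapsed = time.perf_counter() - start

# Sanity check: XORing every stripe into the parity cancels to zero.
check = acc
for s in stripes:
    check ^= int.from_bytes(s, "little")
assert check == 0
print(f"computed parity over {len(stripes) * 8} MiB in {elapsed:.3f}s")
```

Even unoptimized interpreted code gets through this quickly; the kernel's tuned SSE/AVX XOR routines are far faster still, so the embedded processor on a RAID card stopped being a win long ago.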
-
@scottalanmiller said:
@ajstringham said:
Wouldn't hardware RAID be faster?
Not since around 2001 or so. It is actually slower. There is more horsepower in the main CPUs than there is in the embedded processors even on the best RAID cards. And the best cards only have a 1GB cache whereas software RAID has as much cache as you want to throw at it, 2GB is easy, 32GB isn't unheard of.
The idea that hardware RAID is faster comes from the Pentium 32bit era (1990s) when hardware RAID was new. Back then the main CPU(s) were so slow and overloaded that offloading the parity calculations was a big deal. It was in the Pentium Pro era (inclusive of the PPro, P2, P3 and Core processors) that the CPU(s) got so fast and were so rarely overloaded that software RAID passed hardware RAID in performance. So that could have been in the 1990s but was more reasonably in the early 2000s. Big systems like Power and Sparc systems never went to hardware RAID at all (and were always the fastest systems) because they always had stable operating systems with super fast CPUs and more of them.
Interesting. Good to know.
-
Hardware RAID is really about improving the human interaction today more than anything. Way easier for people to understand, easier to move between devices, easier to deal with when problems arise and it can rebuild even if the OS is offline.
-
@scottalanmiller said:
Hardware RAID is really about improving the human interaction today more than anything. Way easier for people to understand, easier to move between devices, easier to deal with when problems arise and it can rebuild even if the OS is offline.
Got it.
-
The problem with software RAID is that the RAID then becomes tied to the OS, so you need to back up the OS to get it back. Of course you can rebuild arrays with tools, but it's not something Joe Brown is going to do. This can also become a problem with data software RAID arrays and new OS installs, etc. It can be done, especially in Linux, but it takes some use of the command line, and people would just rather have it work.
-
@thecreativeone91 said:
The problem with software RAID is that the RAID then becomes tied to the OS, so you need to back up the OS to get it back. Of course you can rebuild arrays with tools, but it's not something Joe Brown is going to do. This can also become a problem with data software RAID arrays and new OS installs, etc. It can be done, especially in Linux, but it takes some use of the command line, and people would just rather have it work.
Exactly. There is nothing technically keeping you from your data but junior admins (or people who don't know the OS inside and out) might be really wary of trying to recover data from software RAID.
On the flip side, you can get at your data without the specific hardware. But with hardware RAID you need a compatible replacement card to get your data back.
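As an illustration of that point, software RAID layouts are documented, so ordinary code can reassemble the data from raw member images with no controller at all. This sketch uses a hypothetical two-disk RAID-0 with a 64 KiB chunk size:

```python
# Software RAID layouts are documented, so plain code can pull data back
# out of raw member images. Hypothetical two-disk RAID-0, 64 KiB chunks.

CHUNK = 64 * 1024

def stripe(data, members=2, chunk=CHUNK):
    """Split data round-robin across member 'disks', RAID-0 style."""
    disks = [bytearray() for _ in range(members)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % members] += data[i:i + chunk]
    return [bytes(d) for d in disks]

def reassemble(disks, chunk=CHUNK):
    """Recover the original data by interleaving chunks from each member."""
    out = bytearray()
    offsets = [0] * len(disks)
    turn = 0
    while any(offsets[i] < len(disks[i]) for i in range(len(disks))):
        d = turn % len(disks)
        out += disks[d][offsets[d]:offsets[d] + chunk]
        offsets[d] += chunk
        turn += 1
    return bytes(out)

original = bytes(range(256)) * 1024  # 256 KiB of sample data
disks = stripe(original)
assert reassemble(disks) == original  # recovered without any controller
```

With a hardware array, the equivalent recovery means finding a compatible card (and sometimes matching firmware) before you can even see the member layout.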
-
@scottalanmiller How would you do software RAID with something like ESXi?
-
Or is that even possible?
-
Would you just create virtual drives from each drive on each datastore and create the software RAID at the VM level?
-
@ajstringham said:
@scottalanmiller How would you do software RAID with something like ESXi?
ESXi does not support software RAID and has none built in. Hardware RAID is the only enterprise way to do that with ESXi. ESXi does support some third-party software RAID from the likes of HP and Dell, but I would never go that route.
ESXi is the only enterprise hypervisor without built-in software RAID. Hyper-V uses Windows software RAID, though, which is not good.