The Textbook Things Gone Wrong in IT Thread
-
I don't know why we have never made a post like this, but one is needed. Over time, a standard list has emerged of the things that people do wrong in IT. "Wrong" is always subjective, but there are approaches that should never exist, should almost never be done, or are routinely done for the wrong reasons. These have become common, recurring, and predictable antipatterns, the textbook examples of "what not to do" in IT. In many cases, when one of these has been done we can tell that others are likely hiding in the wings, and when disaster has struck we can often backtrack to these common SMB mistakes.
Let's go...
- Using RAID 5 because it is "standard." It was never standard; it was always a cost-cutting measure. That was acceptable in 1998 when the Microsoft guide was published, but even then the guide made sure that people knew when to use it and when not to, and it has been industry-deprecated since 2009.
- Using a SAN for no reason. Often based on some assumption or myth that a SAN provides some functionality that it does not.
- Using high availability or fault tolerance when it is not warranted - doing no financial research and just assuming that they are the special case and that downtime is "impossible." Typically, the more strongly this is believed, the lower the actual value of uptime: companies for which downtime really is costly know how much it costs and how much to spend protecting against it.
- Buying HA products but not doing HA, leading to disasters like the inverted pyramid of doom.
- Doing something risky and/or costly and then believing that "getting lucky" was the same as "making a good decision."
- Confusing SAN and NAS, using the terms interchangeably, and never realizing that everyone is trying to explain that these are two very different things.
- Mistaking their SAN for a file-sharing device, hooking it to two or more guests without a clustered filesystem, and being surprised when the SAN itself destroys all of their data (as intended).
- Powering down a server before replacing a failed hard drive rather than using the hot swap features that they paid for.
- Confusing salespeople with consultants and attempting to get free advice from salespeople who are just trying to sell them things that they don't need (often a SAN).
- Buying big-name products based on nothing but the amount of marketing that exists around them. Or forgetting that they never hear about affordable products because the affordable ones don't have the margins to market to them so strongly (if at all).
- Avoiding virtualization because they think it is only for companies bigger or smaller than them, or because they confuse it with other things like high availability or consolidation.
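The RAID 5 deprecation point above comes down to simple arithmetic: rebuilding a degraded array means reading every surviving disk end to end, and at consumer-drive URE rates the odds of hitting an unrecoverable read error during that rebuild get ugly fast. A minimal sketch of the math (the array sizes are hypothetical; the 1-in-1e14-bits URE rate is a commonly quoted spec for consumer-class drives):

```python
# Back-of-envelope RAID 5 rebuild-risk arithmetic. Figures are
# hypothetical examples, not measurements from any specific array.

def rebuild_ure_probability(disks: int, disk_bytes: float,
                            ure_rate: float = 1e-14) -> float:
    """Probability of hitting at least one unrecoverable read error
    (URE) while rebuilding a degraded RAID 5 array.

    During a rebuild every surviving disk must be read end to end,
    so the number of bits read is (disks - 1) * disk_bytes * 8.
    Each bit is assumed to fail independently with probability
    ure_rate (1 error per 1e14 bits is a common consumer-drive spec).
    """
    bits_read = (disks - 1) * disk_bytes * 8
    return 1 - (1 - ure_rate) ** bits_read

# Example: four 2 TB consumer drives in RAID 5, one drive failed.
p = rebuild_ure_probability(disks=4, disk_bytes=2e12)
print(f"Chance of a URE during rebuild: {p:.0%}")  # roughly 38%
```

Even a modest array of consumer drives lands well over a one-in-three chance of a read error during rebuild, which is why the "standard" choice stopped being defensible as drives got large.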
-
Does this stem from the What Are You Doing Right Now comments?
-
That gave me the idea. But it is based on "every hour of my day".
-
TL;DR - Do your research before implementing stuff.
-
I can understand how people can get lost though. There are so many bad tutorials and how-tos out there.
For example, the gazillion tutorials that just tell you to turn off SELinux.
-
@MattSpeller said:
TL;DR - Do your research before implementing stuff.
Or ask. And don't use terms you aren't sure about, that leads people down the rabbit hole all of the time.
-
My boss is being sold on a SAN for our network of 4TB of data, expected to grow to 6TB within 4 years. Not that a SAN isn't needed, but it seems like a really big chunk of any money we have for our virtualization project.
When we could buy two 32TB NAS devices (or build them) for $1500 and have them replicate between each other...
-
One to add to the list:
- Changes on a Friday afternoon
-
@DustinB3403 said:
When we could buy two 32TB NAS devices (or build them) for $1500 and have them replicate between each other...
SAN and NAS cost the same. You could build a high availability SAN cluster for the exact same money using the exact same hardware.
-
@DustinB3403 said:
My boss is being sold on a SAN for our network of 4TB of data, expected to grow to 6TB within 4 years.
For the size of a single cheap disk?
-
@DustinB3403 said:
My boss is being sold on a SAN for our network of 4TB of data, expected to grow to 6TB within 4 years. Not that a SAN isn't needed ...
There is only one reason ever to have a SAN in a case like this: large-scale storage consolidation for a large number of physical hosts. Hard to imagine 4-6TB being able to save money even if it were being shared by twenty physical servers.
This sounds like it is breaking several of the textbook rules. Can't be sure, but hard to imagine a case where it is not.
-
@DustinB3403 said:
My boss is being sold on a SAN for our network of 4TB of data, expected to grow to 6TB within 4 years. Not that a SAN isn't needed, but it seems like a really big chunk of any money we have for our virtualization project.
When we could buy two 32TB NAS devices (or build them) for $1500 and have them replicate between each other...
May want to look at an R730xd... you can put a crazy amount of disks in that. But really even for 4-6TB you don't need anything like a NAS or a SAN.
-
We have a few separate network shares hosted on different servers at the moment.
-
@DustinB3403 said:
We have a few separate network shares hosted on different servers at the moment.
Well, four physical servers is the absolute minimum to possibly get value from shared storage. The rule of thumb is that ten is the beginning of the reasonable part of the bell curve, and a dozen or more is where it starts to become likely - but only when they are heavily consolidating and sharing.
-
@coliver said:
May want to look at an R730xd... you can put a crazy amount of disks in that. But really even for 4-6TB you don't need anything like a NAS or a SAN.
4-6TB you can put in a laptop!
-
Maxing at 6 TB, do you need more processing power and RAM than can be stuck in a single VM host? Or do you have a situation where you can't VM for some reason?
Sounds like a single server, possibly with direct-attached external storage (if needed for the number of spindles for performance - assuming you can't afford SSD storage), would do the trick - again, unless you have a workload that requires huge amounts of compute power.
-
@Dashrender said:
Sounds like a single server, possibly with direct-attached external storage (if needed for the number of spindles for performance - assuming you can't afford SSD storage), would do the trick - again, unless you have a workload that requires huge amounts of compute power.
SSD would be far less than the cost of a DAS chassis and spindles. You could be at a million IOPS for cheaper!
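The IOPS point is easy to sanity-check with back-of-envelope arithmetic. A rough sketch, using hypothetical ballpark per-drive figures (on the order of 150 IOPS for a 10k RPM spindle, tens of thousands for even a modest SSD) rather than any vendor's specs:

```python
# Rough spindles-vs-SSD comparison for hitting an IOPS target.
# Per-drive figures below are hypothetical ballpark numbers.

SPINDLE_IOPS = 150      # typical-ish 10k RPM SAS drive
SSD_IOPS = 50_000       # a modest SATA SSD

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """How many drives it takes to reach a target IOPS figure
    (ceiling division, since you can't buy a fraction of a drive)."""
    return -(-target_iops // iops_per_drive)

target = 100_000
print(f"{drives_needed(target, SPINDLE_IOPS)} spindles vs "
      f"{drives_needed(target, SSD_IOPS)} SSDs for {target} IOPS")
```

Hundreds of spindles (plus the DAS chassis to hold them) versus a couple of SSDs makes the cost argument pretty stark, whatever the exact drive prices.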
-
SAM, again this is the same MSP making this recommendation as in past conversations...
We have a few locations, some overseas, but they all come back to the main office via our VPN for network shares, etc.
-
@DustinB3403 said:
We have a few locations, some overseas, but they all come back to the main office via our VPN for network shares, etc.
It's not about locations, it is the physical number of servers attached to the storage. You would need roughly ten or more virtualization hosts for a SAN to even come up in conversation. That's it. A million users, large storage, many locations, etc. have no bearing on making a SAN more or less useful. A SAN has one purpose, and if there isn't a large number of host servers directly sharing the storage AND saving money by doing so, the SAN is doing the opposite of its purpose.
-
That's my point: we have three servers acting as file servers and maybe 100 employees. The entire idea is just baffling.