Burned by Eschewing Best Practices
-
@JaredBusch said:
I am working from a 6 year old desktop right now. Should I replace it?
Not necessarily... Just keep a good backup and spare drive on hand, lol.
-
@JaredBusch said:
I am working from a 6 year old desktop right now. Should I replace it?
Do you make money from that 6 year old desktop? Does that desktop hold critical data or backup data?
If either, yes replace it.
Otherwise shut it. The fact that the bulk of the topics here are linked to SW has no bearing at all. If a topic like this was posted here, I'd create a link for it and say the same exact thing.
-
@DustinB3403 said:
@JaredBusch said:
I am working from a 6 year old desktop right now. Should I replace it?
Do you make money from that 6 year old desktop? Does that desktop hold critical data or backup data?
If either, yes replace it.
Otherwise shut it. The fact that the bulk of the topics here are linked to SW has no bearing at all. If a topic like this was posted here, I'd create a link for it and say the same exact thing.
And that would also be bashing simply because it is in this thread.
And yes my desktop is critical to my ability to earn a paycheck. But I have no intention of replacing it anytime soon. I expect more than that out of my hardware.
-
So posting a link to SW (or any other forum) would be bashing, just because it was posted.
Wow.
You have a very skewed view of the world.
-
Be nice boys....
-
People learn from reading and doing. If there aren't examples of where something poorly set up or designed has failed, when there are safer and better ways of doing things, how would you expect people to learn?
This is a learning topic - a topic that hopefully, with SEO, will help others see that maybe their plans need to be rethought, as they introduce a lot of risk.
In the SW topic, the OP doesn't understand how RAID works, as he asks if replacing the failed drives will rebuild the array. So what would you say to that?
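For context, whether a drive swap can rebuild anything at all comes down to the RAID level's fault tolerance. A minimal sketch of that rule (the levels and drive counts here are generic placeholders, not the OP's actual array):

```python
# Hypothetical illustration: an array only rebuilds onto replacement
# drives if it never lost more members than its RAID level tolerates.
FAULT_TOLERANCE = {
    "RAID0": 0,  # striping only: any single failure destroys the array
    "RAID1": 1,  # for a two-disk mirror
    "RAID5": 1,
    "RAID6": 2,
}

def can_rebuild(level: str, failed_drives: int) -> bool:
    """True if swapping in fresh drives can trigger a rebuild."""
    return failed_drives <= FAULT_TOLERANCE[level]

print(can_rebuild("RAID5", 1))  # True  - rebuild starts onto the new disk
print(can_rebuild("RAID5", 2))  # False - array is lost; restore from backup
```

If more drives failed than the level tolerates, no amount of swapping brings the array back; the data only comes back from a backup.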
-
So this topic is on SW, and he's looking for an open source alternative to Exchange (and specifically doesn't like Office 365). Nothing wrong with that.
What I do see as insane is the insistence that cloud providers are bad, that if something happens it's difficult to prove who's at fault, or that it's difficult to migrate to O365.
Now @JaredBusch will probably rip on me for mentioning the topic, but the entire argument that the OP has had (brought up by others) and responded to by many pretty much sums up to misinformation and misunderstanding.
The OP is still in the mindset of "I must run it locally to be better protected while saving money," which needs to be addressed, but I have no better way to do it than by saying "You're insane, OP."
Granted, it's not a best practice to use a cloud service in every instance, but in terms of uptime, cloud providers literally have to be up 99.9% of the time or their clients will leave.
So by default, doesn't that make it a best practice?
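For a sense of scale, here's what those uptime percentages actually allow in downtime. This is plain arithmetic, not any provider's published SLA figures:

```python
# Back-of-the-envelope: maximum downtime a given uptime percentage
# allows per year. 365.25 days per year is an assumption.
HOURS_PER_YEAR = 365.25 * 24

for uptime in (0.999, 0.9995, 0.9999):
    downtime = HOURS_PER_YEAR * (1 - uptime)
    print(f"{uptime:.2%} uptime -> {downtime:.1f} h "
          f"({downtime * 60:.0f} min) of downtime per year")
```

That 99.9% works out to under nine hours a year.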
-
Reading that guy's post - just damn!
Although on the reliability standpoint - I've had an in-house Exchange server for 5+ years now. I've not had one Exchange outage (finds wood to knock on). Sure, I've had ISP outages and power outages, but never an Exchange outage.
With that in mind, MS has had countless O365 outages in the past year alone. I realize they are typically regional and often short-lived, but still, they are outages.
It's things like that that make people sit back and say: why would I move to the cloud? MS is clearly not keeping their platform as stable as mine has been.
I understand the stats of it all, but local personal experience is really hard to overcome. Hell, just look at the other recent posts around here where the guy has a two-node HA setup with a NAS. His (and countless others') experience shows that his solution worked, was viable.
Frankly, as I type this, I wonder whether, from a dollars-and-cents perspective, it isn't really an acceptable thing to consider. Of course you have to consider it fully (backups, recovery time, hardware replacement time, etc.).
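As a sketch of what "considering it fully" might look like in dollars and cents - every number below is made up purely for illustration:

```python
# All inputs are hypothetical; the point is only that risk has a
# dollar value you can weigh against hardware spend.
def expected_annual_loss(outage_probability: float,
                         recovery_hours: float,
                         cost_per_hour: float) -> float:
    """Chance of an outage in a given year times what that outage costs."""
    return outage_probability * recovery_hours * cost_per_hour

single_box = expected_annual_loss(0.05, 24, 500)   # day-long restore from backup
ha_pair    = expected_annual_loss(0.05, 0.5, 500)  # failover in minutes

extra_ha_spend_per_year = 8000 / 5  # hypothetical extra hardware over 5 years
print(f"Single server, expected loss/yr: ${single_box:,.0f}")
print(f"HA pair, expected loss/yr:       ${ha_pair:,.0f}")
print(f"HA extra hardware cost/yr:       ${extra_ha_spend_per_year:,.0f}")
# With these made-up numbers the HA premium exceeds the risk it removes,
# but different inputs flip the answer.
```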
-
@DustinB3403 said:
So by default, doesn't that make it a best practice?
Not exactly. It makes it the de facto starting point, majority practice or the common reference implementation; not exactly a best practice if you want to be technical.
His approach and logic are lacking in best practices - he sounds (I did not read the post) like he is not following the best practice of using real information and logic to drive his decisions, but is instead making business decisions based on emotion. That is definitely not a best practice.
But going to hosted for anything isn't a best practice, exactly. Even if it is 99.9% of the use cases. A best practice should be something that is truly best, not really something that is "best for most people."
-
@Dashrender said:
Although on the reliability standpoint - I've had an in-house Exchange server for 5+ years now. I've not had one Exchange outage (finds wood to knock on). Sure, I've had ISP outages and power outages, but never an Exchange outage.
Define "Exchange outage." If I can't get email in or out, isn't Exchange down? If Exchange isn't defined as down in that case, how do you define it as down when hosted elsewhere?
-
@Dashrender said:
With that in mind, MS has had countless O365 outages in the past year alone. I realize they are typically regional and often short-lived, but still, they are outages.
Have they? If you don't consider locally down or ISP outages to be down, has anything that has affected O365 users been a true outage?
This goes back to everyone saying that our outage with Exchange and Azure wasn't an outage because it didn't affect everyone, just a few people.
What is and isn't an outage is rarely easy to define.
-
@Dashrender said:
I understand the stats of it all, but local personal experience is really hard to overcome.
Read: Emotions trump logic in the SMB.
-
@Dashrender said:
Hell, just look at the other recent posts around here where the guy has a two-node HA setup with a NAS. His (and countless others') experience shows that his solution worked, was viable.
This is where we differ, and I think that the terminology is incredibly important. You state that his non-HA setup was viable and that experience showed it worked.
But looking at the same setup, I say that it didn't work: it had the company at risk and it wasted money for no reason - all things that I would define as experience with it having failed. As a CIO, I would look at the same setup and see a failure. It did not meet IT's goals of protecting the business technically or financially. Sure, they got lucky and it didn't impact them, but that doesn't suggest that IT, whose job is to provide the right solutions, was successful.
I think that properly defining success is super critical here. And it isn't purely about risk; that's one aspect that cannot be ignored. But the solution actively lost money. Losing money is rarely a signal for success.
-
@scottalanmiller said:
@Dashrender said:
I understand the stats of it all, but local personal experience is really hard to overcome.
Read: Emotions trump logic in the SMB.
LOL - yup. Though I wonder if the sheer size of large businesses (and the relatively small number of them) makes them in general more likely to see the statistical norm, which lets their emotions be quelled by the logic?
-
@Dashrender said:
@scottalanmiller said:
@Dashrender said:
I understand the stats of it all, but local personal experience is really hard to overcome.
Read: Emotions trump logic in the SMB.
LOL - yup. Though I wonder if the sheer size of large businesses (and the relatively small number of them) makes them in general more likely to see the statistical norm, which lets their emotions be quelled by the logic?
I rarely see it outside of the SMB. In the enterprise space there isn't the direct attachment to the network by investors, owners, management, IT and it naturally removes a huge portion of the emotional element.
-
@scottalanmiller said:
@Dashrender said:
Although on the reliability standpoint - I've had an in-house Exchange server for 5+ years now. I've not had one Exchange outage (finds wood to knock on). Sure, I've had ISP outages and power outages, but never an Exchange outage.
Define "Exchange outage." If I can't get email in or out, isn't Exchange down? If Exchange isn't defined as down in that case, how do you define it as down when hosted elsewhere?
The short version: It doesn't matter where it lives. If your users (onsite and off) can't get to it... then isn't it an outage?
The slightly longer version:
It depends on what kind of Exchange you are talking about. Consider an in-house Exchange setup: if the internet connection goes out, sure, you can't get to Exchange from off-site, but that doesn't mean that Exchange itself is down. Internal email will keep functioning for local users. That is an internet outage. On the flip side of that coin, if the Exchange server's internet connection is up and mail is not flowing for some reason or another, then THAT is an Exchange outage, regardless of what the cause is.
Considering off-site, you can use the same criteria, with the end result being that "Exchange is down for everyone" because you have no local Exchange servers. Unless you work in the DC where the Exchange servers reside, you don't know if it is an "Exchange outage" or an "internet outage." Thinking about a hosted service, I would call it an Exchange outage.
The short, short, slightly hypocritical version: Down is down. Call it what you want, but if it doesn't work for everybody, then it's an Exchange outage. If it works for local folks, then it most likely is an internet outage.
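Put another way, the criteria above boil down to a tiny decision function. A sketch of that logic, where the two inputs stand in for hypothetical monitoring checks rather than anything Exchange reports itself:

```python
# dafyre's criteria as a decision function. Both inputs are assumed
# to come from your own monitoring, inside and outside the LAN.
def classify_outage(internal_mail_flowing: bool, internet_up: bool) -> str:
    if not internal_mail_flowing:
        # Mail is broken even for local users, whatever the root cause.
        return "Exchange outage"
    if not internet_up:
        # Local mail still works; only off-site access is affected.
        return "internet outage"
    return "no outage"

print(classify_outage(internal_mail_flowing=True, internet_up=False))   # internet outage
print(classify_outage(internal_mail_flowing=False, internet_up=True))   # Exchange outage
```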
-
@scottalanmiller said:
@Dashrender said:
Hell, just look at the other recent posts around here where the guy has a two-node HA setup with a NAS. His (and countless others') experience shows that his solution worked, was viable.
This is where we differ, and I think that the terminology is incredibly important. You state that his non-HA setup was viable and that experience showed it worked.
But looking at the same setup, I say that it didn't work: it had the company at risk and it wasted money for no reason - all things that I would define as experience with it having failed. As a CIO, I would look at the same setup and see a failure. It did not meet IT's goals of protecting the business technically or financially. Sure, they got lucky and it didn't impact them, but that doesn't suggest that IT, whose job is to provide the right solutions, was successful.
I think that properly defining success is super critical here. And it isn't purely about risk; that's one aspect that cannot be ignored. But the solution actively lost money. Losing money is rarely a signal for success.
How did it lose money? I'm thinking that it didn't, because doing real HA would have required larger servers (able to hold more disks) with a ton more disk for the required storage.
I'm thinking that he actually saved more than a few pennies by using fewer disks and probably cheaper servers. Also, since he didn't have to worry about replication, he didn't need 4+ 1GbE teaming or 10GbE NICs, etc.
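A back-of-the-napkin version of that comparison, with placeholder prices (none of these are real quotes for the setup in question):

```python
# Placeholder prices, purely illustrative, to show where the savings
# described above would come from.
ha_build = {
    "2x larger servers (more drive bays)": 2 * 6000,
    "extra disks for replicated storage": 16 * 300,
    "10GbE NICs and switch ports": 1500,
}
shared_nas_build = {
    "2x smaller servers": 2 * 3500,
    "NAS plus disks": 3000,
    "1GbE networking": 300,
}

ha_total = sum(ha_build.values())
nas_total = sum(shared_nas_build.values())
print(f"Replicated HA build: ${ha_total:,}")
print(f"Shared-NAS build:    ${nas_total:,}")
print(f"Difference:          ${ha_total - nas_total:,}")
```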
-
@dafyre said:
The short version: It doesn't matter where it lives. If your users (onsite and off) can't get to it... then isn't it an outage?
Is it? So your user leaves the house and their laptop battery dies, is your office having an outage?
Your home user's ISP goes down or they lose power and they can't reach email, is it an outage?
I don't think that it's as simple as "can't reach it."
-
@Dashrender said:
How did it lose money?
Because money was spent without benefit. Physical money that could have been saved was spent. That's lost money.
-
@scottalanmiller said:
@dafyre said:
The short version: It doesn't matter where it lives. If your users (onsite and off) can't get to it... then isn't it an outage?
Is it? So your user leaves the house and their laptop battery dies, is your office having an outage?
We are talking about system down time, not user whoopsies.
Your home user's ISP goes down or they lose power and they can't reach email, is it an outage?
If the user calls IT Guy and IT Guy says "It works for me," then again... it's classified as a user issue, not a "system outage."
I don't think that it's as simple as "can't reach it."
Right you are. Read the rest of my post, lol.