Pros/Cons: Dual Best-Effort ISP vs. Fiber/MPLS
-
And another thing about cloud backups... if you move from snowflake management to DevOps, restores can be done in minutes. Only small amounts of data might need to be brought in from the cloud. DevOps models can make backups 1% of the size that they traditionally are for many shops.
As an example, MangoLassi is 14GB to restore as a system image, but less than 1GB to restore the data. Only the data needs to be restored to get the community back up and running. 1GB doesn't take long to restore even over 10Mb/s, and is nothing over the 10Gb/s that we have.
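For a rough sense of the math, here is a back-of-the-envelope sketch in Python. The 14GB and 1GB figures come from the post above; the link speeds are just sample values, and protocol overhead and compression are ignored:

```python
# Back-of-the-envelope restore-time estimate. Assumes a sustained link at the
# quoted speed and ignores protocol overhead, compression, and disk speed.

def restore_minutes(size_gb: float, link_mbps: float) -> float:
    """Minutes to transfer size_gb gigabytes over a link_mbps link."""
    size_megabits = size_gb * 1000 * 8  # GB -> megabits (decimal units)
    return size_megabits / link_mbps / 60

for size_gb, label in [(14, "full image"), (1, "data only")]:
    for link_mbps in (10, 100, 1000):
        print(f"{label}: {size_gb}GB over {link_mbps}Mb/s "
              f"~ {restore_minutes(size_gb, link_mbps):.1f} minutes")
```

On those assumptions the 1GB data-only restore is roughly 13 minutes at 10Mb/s and just over a minute at 100Mb/s, while the 14GB full image takes about three hours on the slow pipe.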
-
@MattSpeller said:
@scottalanmiller said:
Think about how fast a restore could be for critical systems over 100Mb/s to 1Gb/s. In many cases, companies with good WANs have faster WAN links than @mattspeller has for LAN speed!!
I'm right there with you. Seeing that other thread today where 1Gb was $399/month... man, I'd do that in a heartbeat here! I'd let users do whatever the heck they wanted online.. lol
-
@thecreativeone91 said:
@scottalanmiller said:
@Dashrender said:
But just as bad, in the case of failure, how are you supposed to get back online? It would take days or more to download all of the data back in most cases, and that's assuming you left the connection to do nothing but that.
We will call this "problems that seem obvious when you are at a company with a 10Mb/s WAN." Lots of companies, certainly not all, have huge pipes and can restore systems really quickly. Even lots of homes are now starting to get 1Gb/s. Think about how fast a restore could be for critical systems over 100Mb/s to 1Gb/s. In many cases, companies with good WANs have faster WAN links than @mattspeller has for LAN speed!!
The last company I interviewed at had AppAssure backups replicated to a second location (they have 16) plus to the cloud, as well as the SANs replicated between two locations and backed up to Azure. Cloud backups, when planned properly, seem to be a good alternative (much better) to keeping tape or hard drives off site in a vault.
Sure, if you have a 100Mb+ internet connection.
Granted, I'm behind the times because I was worried about outages, but I'm working to solve that now, so soon I could see myself having 5 to 10 times the bandwidth I have now.
-
Can you get 100/100 to both offices in your city, or only one of them?
-
@scottalanmiller said:
And another thing about cloud backups... if you move from snowflake management to DevOps, restores can be done in minutes. Only small amounts of data might need to be brought in from the cloud. DevOps models can make backups 1% of the size that they traditionally are for many shops.
As an example, MangoLassi is 14GB to restore as a system image, but less than 1GB to restore the data. Only the data needs to be restored to get the community back up and running. 1GB doesn't take long to restore even over 10Mb/s, and is nothing over the 10Gb/s that we have.
You have 10 Gb to the internet?
-
SOOOOOooooo.. no other Pros or Cons ???
-
@Dashrender said:
@thecreativeone91 said:
@scottalanmiller said:
@Dashrender said:
But just as bad, in the case of failure, how are you supposed to get back online? It would take days or more to download all of the data back in most cases, and that's assuming you left the connection to do nothing but that.
We will call this "problems that seem obvious when you are at a company with a 10Mb/s WAN." Lots of companies, certainly not all, have huge pipes and can restore systems really quickly. Even lots of homes are now starting to get 1Gb/s. Think about how fast a restore could be for critical systems over 100Mb/s to 1Gb/s. In many cases, companies with good WANs have faster WAN links than @mattspeller has for LAN speed!!
The last company I interviewed at had AppAssure backups replicated to a second location (they have 16) plus to the cloud, as well as the SANs replicated between two locations and backed up to Azure. Cloud backups, when planned properly, seem to be a good alternative (much better) to keeping tape or hard drives off site in a vault.
Sure, if you have a 100Mb+ internet connection.
Granted, I'm behind the times because I was worried about outages, but I'm working to solve that now, so soon I could see myself having 5 to 10 times the bandwidth I have now.
They don't have a 100Mb internet connection. It's a metro connection between locations. Internet is like 40Mb/s at each location. The SANs are 42TB, but they upload the data transactionally as it happens, so it doesn't make a hit on the connections. Backups are done hourly.
-
@Dashrender said:
@thecreativeone91 said:
@scottalanmiller said:
@Dashrender said:
But just as bad, in the case of failure, how are you supposed to get back online? It would take days or more to download all of the data back in most cases, and that's assuming you left the connection to do nothing but that.
We will call this "problems that seem obvious when you are at a company with a 10Mb/s WAN." Lots of companies, certainly not all, have huge pipes and can restore systems really quickly. Even lots of homes are now starting to get 1Gb/s. Think about how fast a restore could be for critical systems over 100Mb/s to 1Gb/s. In many cases, companies with good WANs have faster WAN links than @mattspeller has for LAN speed!!
The last company I interviewed at had AppAssure backups replicated to a second location (they have 16) plus to the cloud, as well as the SANs replicated between two locations and backed up to Azure. Cloud backups, when planned properly, seem to be a good alternative (much better) to keeping tape or hard drives off site in a vault.
Sure, if you have a 100Mb+ internet connection.
Granted, I'm behind the times because I was worried about outages, but I'm working to solve that now, so soon I could see myself having 5 to 10 times the bandwidth I have now.
You can bring a ton back down with a lot less than 100Mb/s. At 100Mb/s you are matching some companies' LAN speeds. Remember that restores are often compressed and only need to be the data. So 30Mb/s will often let you restore a ton. And keep in mind that only live data, not archives, needs to be back before you are up and running.
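To put a rough number on that, here is a hedged sketch: the 30Mb/s figure comes from the post above, while the 100GB of live data and the 2:1 compression ratio are made-up assumptions, not anything from this thread:

```python
# Rough example: restoring only compressed live data over a modest pipe.
# Assumptions (not from the thread): 100GB of live data, 2:1 compression.
live_data_gb = 100
compression_ratio = 2.0
link_mbps = 30

on_the_wire_gb = live_data_gb / compression_ratio
hours = on_the_wire_gb * 1000 * 8 / link_mbps / 3600
print(f"{live_data_gb}GB of live data is ~{on_the_wire_gb:.0f}GB on the wire; "
      f"over {link_mbps}Mb/s that is roughly {hours:.1f} hours")
```

On those assumptions, even a modest pipe gets the working set back in an afternoon, as long as the archives can wait.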
-
@MattSpeller said:
@Dashrender said:
SOOOOOooooo.. no other Pros or Cons ???
Cat pics download way faster on 100mbit?
You know I laughed... but I didn't put in the pro column that staff would be able to use streaming media more if management were OK with it. If I skip 50/10 and go to 100/15, I could probably even create a VLAN for patient internet access (and of course throttle that sucker).
-
The more you mention the situation you are in, the more it smells like you need to get out of the closet and into the datacenter!
Get cheap crap pipes and move yer shit into a colo cage somewhere that comes with a 100Mbps or even a 1Gbps Cogent unmetered pipe out to the interwebs. Have both sites VPN into it as best you can. I would take a Peplink, break out a VPN connection to the colo, then route all the HTTP/HTTPS traffic over the cheapest pipe I can find.
Pricing on pipes and such would probably equal out between onsite and offsite for a colo cage. That's when you move into the fun of counting power costs, cooling costs, and even equipment costs if you move to a leased managed hosting model versus owning equipment. You will get good savings there in the long run.
As long as you can let go of the control of the physical machine, you can make some serious inroads into better network management. Hell, have you thought about cloud services? Don't even need a location, just be in the clooooooooooooooooud!
-
PSX is right. Hosted is the obvious answer. Once you move to fast pipes and redundancy, being hosted will almost certainly be a slam dunk.
-
HUH? How did you come to this conclusion? Sure, eventually I'll probably push email offsite to O365; then I'm left with only file and print onsite - no apps.
Today the only things I have onsite are a copy of my old EHR for reference purposes, email, and file and print.
Going Colo (other than possibly saving me on power and cooling) wouldn't save me anyplace else.... I'd still need the exact same highly available or dual ISP setup as my original post.
Even if I go hosted today (never going to happen, the boss is anti-remote - doubly admitted to me just yesterday), I'd still want/need very reliable, fast links to the internet for my EHR, which is my daily driver of an app and is already in the 'cloud'.
-
@Dashrender said:
Going Colo (other than possibly saving me on power and cooling) wouldn't save me anyplace else.... I'd still need the exact same highly available or dual ISP setup as my original post.
Do you ever need to physically be there off hours? Colo gives you 24x7 physical support. It also increases reliability by a dramatic amount.
-
@scottalanmiller said:
@Dashrender said:
Going Colo (other than possibly saving me on power and cooling) wouldn't save me anyplace else.... I'd still need the exact same highly available or dual ISP setup as my original post.
Do you ever need to physically be there off hours? Colo gives you 24x7 physical support. It also increases reliability by a dramatic amount.
You're right, reliability would be potentially better (at least the servers wouldn't suffer power loss when our building does), but unless the cost is exactly what we pay today, or less, they'd never go for it. I'd have to show that the power use in my room and the heating/cooling cost would drop by at least the amount of the colo rent to even get it considered (rough numbers sketched below)... because the reliability we have today is adequate.
Internet access is the single most important thing to us. If all of our internal servers just died, yet we could still access the internet and our cloud-based EHR, we would be able to continue to function.
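For what that justification might look like, here is a hedged sketch of the power-and-cooling-versus-rent comparison; every figure below is a hypothetical placeholder, not a number from this thread:

```python
# Hedged sketch: does dropping onsite power/cooling cover the colo rent?
# All figures are hypothetical placeholders.
onsite_power_kwh_per_month = 900   # estimated server room draw
cooling_overhead_factor = 1.4      # rough guess at extra energy spent on cooling
electricity_cost_per_kwh = 0.12    # USD
colo_rent_per_month = 250.0        # hypothetical quarter-cabinet price

onsite_monthly_cost = (onsite_power_kwh_per_month
                       * cooling_overhead_factor
                       * electricity_cost_per_kwh)
print(f"Onsite power + cooling: ${onsite_monthly_cost:,.2f}/month")
print(f"Colo rent:              ${colo_rent_per_month:,.2f}/month")
print("Colo pays for itself on these numbers"
      if colo_rent_per_month <= onsite_monthly_cost
      else "Colo costs more on these numbers")
```

Swap in real meter readings and an actual colo quote and the "exactly what we pay today, or less" test becomes a one-line comparison.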
-
@Dashrender said:
HUH? How did you come to this conclusion? Sure eventually I'll probably push email offsite and to O365, then I'm left only with file and print onsite - no apps.
What's email then? That's a serious driver of traffic, and a main reason you need bandwidth locally now. Why not move it to a box in the sky with massive redundant links that you could only dream about having locally?
Going Colo (other than possibly saving me on power and cooling) wouldn't save me anyplace else.... I'd still need the exact same highly available or dual ISP setup as my original post.
Nopes, this would reduce your need for bandwidth and could even get you down to a single loop. With your stuff in colo, a pipe going offline isn't that big of a deal. Your services will still run; they will "never" go down because of problems at the main site. Site loses internet? Oh well, just smurf it out with cell data or fail over to an el-cheapo pipe until the other pipe is back online.
Even if I go hosted today (never going to happen, the boss is anti-remote - doubly admitted to me just yesterday), I'd still want/need very reliable, fast links to the internet for my EHR, which is my daily driver of an app and is already in the 'cloud'.
The only 100% reliable link I can guarantee is my LAN. How often does an internal LAN link go down?
Your EHR is your driver, but you are choking it with cheap, bad pipes on hot, sweaty hardware stuck in a closet just so your users can access the internet. If it's already cloud based, setting up a terminal server or VDI farm in the colo cage would make sure anyone can work anywhere. BYOD further reduces costs by eliminating the need to buy equipment for those guys to do their work, or to buy bleeding-edge or newer equipment. A dumb terminal is cheap as shit; buy a bunch of Pi's and go to town! See, I've saved you even more money.
And your boss is a dumbass.
-
@Dashrender said:
You're right, reliability would be potentially better (at least the servers wouldn't suffer power loss when our building does), but unless the cost is exactly what we pay today, or less, they'd never go for it.
Why worry about downtime if that's not a factor? Seems like the consensus is that cost, not uptime, is the only important factor. This suggests that the powers that be see the operational situation as having low value.
Colocation is cheaper in cases that we have measured. Better uptime, cost savings, less work for you. Pretty big win.
-
@Dashrender said:
.. because the reliability we have today is adequate.
"good enough" is an odd way to measure uptime. It should be a cost to risk scale. Not a "good enough" or "not good enough" scale. How does the CFO determine what is good enough without it being tied to money?
-
@PSX_Defector said:
The only 100% reliable link I can guarantee is my LAN. How often does an internal LAN link go down?
Had a customer lose their LAN last month.