Cloudatcost currently offline
-
@thecreativeone91 I'm a terrible representative of my country; most are quite kind and very friendly. Social pressure from long cold winters produces a tight-knit community. Then the spring thaw hits and we* all go nuts for a couple of weeks.
*by we I mean people who live where it's cold. I do not.
Examples of weirdness / Canadians making fun of Canada (contains a huge number of f-bombs, chainsaws, old trucks, beer and an odd lack of maple syrup)
YouTube video
-
Still looks like a pretty decent number of outages.
-
Any news? It seems it has been a while since an update on the Twitter feed. Is Rogers saying anything?
-
@Reid-Cooper said:
Any news? It seems it has been a while since an update on the Twitter feed. Is Rogers saying anything?
Everything from C@C that I have is working. VPS is up and the management panel is back up now too.
-
Nothing here in Texas. Looks like the routes to here are still down. We saw that a lot with Verizon: they would lose Texas but not New York to Toronto, all the time.
-
Sites are still not accessible from Florida.
-
Nothing here.
-
Nor here; here is our local traceroute from Houston.
-
@scottalanmiller said:
Nor here; here is our local traceroute from Houston.
Mine ends at Comcast in Richmond, VA.
-
Still up from Florida? The last hop in my last traceroute has since disappeared. No update on Twitter for three hours.
-
@scottalanmiller said:
Still up from Florida? The last hop in my last traceroute has since disappeared. No update on Twitter for three hours.
Wonder if they went home for the night? Guess there's not much they can do about it.
-
True, it sounds like they are helpless and held hostage by Rogers at the moment. Still, it would be wise to keep Twitter up to date and not let it go stale for over 30 minutes, just as good practice. Even if it is "Rogers sucks and won't tell us anything more", at least we know that they are there poking them.
-
@scottalanmiller said:
Nor here; here is our local traceroute from Houston.
First off, get off that double NAT box.
Second, it appears to be a larger issue. Sounds like there are some peering issues between some of the backbones and Cogent.
C:\Users\v436525>tracert jump.ntg.co
Tracing route to jump.ntg.co [168.235.144.189]
over a maximum of 30 hops:

  1    <1 ms    <1 ms    <1 ms  agrer003-ip002001.noa.vmotion.tmrk.eu [172.16.2.1]
  2    29 ms    29 ms    22 ms  cpe-76-186-176-1.tx.res.rr.com [76.186.176.1]
  3    15 ms    11 ms    12 ms  tge7-2.allntx3901h.texas.rr.com [24.164.210.241]
  4    13 ms    15 ms    15 ms  tge0-8-0-7.plantxmp01r.texas.rr.com [24.175.37.212]
  5    13 ms    14 ms    15 ms  agg27.crtntxjt01r.texas.rr.com [24.175.36.177]
  6     *        *        *     Request timed out.
  7     *        *        *     Request timed out.
  8     *        *        *     Request timed out.
  9     *        *        *     Request timed out.
 10  ^C
As a matter of course, I have pipes from both Time Warner Cable and AT&T, and I'm seeing the path go bad on both.
This is what happens when you get cheap bandwidth. This is also why I asked about it a week ago. If you are peered with Cogent, you have to have another pipe in, be it Level3 or InterNAP.
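To make the "second pipe" point concrete, here is a rough, hypothetical sketch (in Python) of checking whether a target is reachable over each upstream independently, by sourcing the probe from that uplink's local address. The local addresses, and the guess of TCP port 22 for the jump host, are placeholders rather than anyone's actual configuration.

#!/usr/bin/env python3
# Hypothetical sketch: probe a target over two separate upstream pipes by
# binding each TCP connection to the local address assigned on that uplink.
# The addresses and port below are placeholders.
import socket
import time

TARGET = ("jump.ntg.co", 22)      # host from the traceroute above; port 22 is a guess
UPLINKS = {
    "TWC":  "192.0.2.10",         # placeholder local address on the Time Warner pipe
    "AT&T": "198.51.100.10",      # placeholder local address on the AT&T pipe
}

def reachable_via(source_ip):
    """Attempt a TCP connection to TARGET sourced from the given local address."""
    try:
        with socket.create_connection(TARGET, timeout=5, source_address=(source_ip, 0)):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, ip in UPLINKS.items():
        status = "up" if reachable_via(ip) else "DOWN"
        print(f"{time.ctime()}  via {name} ({ip}): {status}")

If both paths fail at the same time, as in this thread, the problem is more likely on the far end than in your own transit.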
-
@PSX_Defector said:
First off, get off that double NAT box.
Temporary situation while working at my brother-in-law's house. My entire home network was put in here and sits behind his existing network.
-
Only two more weeks before I'm on my own network in Andalusia.
-
Did anyone else notice this guy on Twitter running a hosting biz off of C@C? https://twitter.com/tensioncore/with_replies Seems like a bad move.
-
@thecreativeone91 said:
Did anyone else notice this guy on Twitter running a hosting biz off of C@C? https://twitter.com/tensioncore/with_replies Seems like a bad move.
Using CloudatCost isn't the issue. Any provider can go down and lose a datacenter. Even Rackspace and Amazon have had that happen. To be an enterprise class hoster you need redundancy. Using C@C for one leg of your redundancy is perfectly fine. But they need someone else, like Rackspace, AWS, Azure, Digital Ocean or whatever for the other leg. They are basically running on a single box, no backup and just "hoping" that nothing goes wrong. They didn't bother to implement failover, HA or any DR strategy, it would appear. That's the actual problem.
The difference is that larger providers like Amazon offer lots of disconnected datacenters and can provide full redundancy from a single company. But if you choose not to have that redundancy there, you'd be in exactly the same boat - and many people have been.
This is purely a bad host sticking its head in the sand and just hoping that nothing goes wrong, rather than designing and implementing a reliable solution.
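To put the redundancy argument in concrete terms, here is a minimal, hypothetical sketch of the kind of failover being described: health-check the primary VPS and, after a few consecutive failures, repoint traffic at a standby hosted with a different provider. The hostnames, addresses and the switch_dns_to() hook are made-up placeholders, not a description of any real setup.

#!/usr/bin/env python3
# Hypothetical sketch of provider-level failover: if the primary VPS stops
# answering HTTP health checks, flip DNS (or a load balancer) to a standby
# running at a second provider. All names and addresses are placeholders.
import time
import urllib.request
import urllib.error

PRIMARY_HEALTH_URL = "http://primary.example.com/health"   # e.g. the C@C VPS
STANDBY_IP = "203.0.113.10"                                # e.g. a VM at another provider
CHECK_INTERVAL = 60                                        # seconds between checks
FAILURES_BEFORE_FAILOVER = 3

def healthy(url):
    """Return True if the endpoint answers HTTP 200 within 5 seconds."""
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

def switch_dns_to(ip):
    """Placeholder: call your DNS provider's API to repoint the A record."""
    print(f"FAILOVER: repointing DNS to {ip}")

if __name__ == "__main__":
    failures = 0
    while True:
        if healthy(PRIMARY_HEALTH_URL):
            failures = 0
        else:
            failures += 1
            if failures == FAILURES_BEFORE_FAILOVER:
                switch_dns_to(STANDBY_IP)
        time.sleep(CHECK_INTERVAL)

Even something this crude beats running on a single box and hoping; the real work is keeping the standby's data in sync and keeping DNS TTLs low enough that the cutover actually takes effect quickly.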