Considering Colocation - What to watch for



  • I'm just now considering colocation as a viable option for moving all of our server equipment offsite. Before you ask: because of the space requirements (50+ TB plus equivalent backup space), cloud isn't really an option at this point. But colocation prices have begun to drop sharply, especially in the Toronto area, which is near where I live and is a well-established colocation hub in Canada. 3z.ca's prices, for example, have almost halved over the last few months, which makes this much more feasible now.

    As this is my first foray into this, I'm hoping I can get some insight into what to watch out for. Specifically, where are extra costs likely to come from that may not be explicitly mentioned on providers' websites, and what questions should I be asking (beyond the usual suspects of security, network connectivity, and power redundancy)?


  • Service Provider

    3z is awesome. We used them for a really long time in Toronto. Zero issues.


  • Service Provider

    We never had any surprise costs with 3z. You pay the rate on the site.

    How much are you looking to house there?



  • @scottalanmiller Looking at 10U of rack space.


  • Service Provider

    @NashBrydges said in Considering Colocation - What to watch for:

    @scottalanmiller Looking at 10U of rack space.

    Cool. Which DC? We used Front St and Mississauga.



  • @scottalanmiller Mississauga and Etobicoke are closer to me, so they're the obvious preferred choices. They also happen to be the least expensive of the three locations, so that's a bonus. The Etobicoke location is priced at $325 CDN for 10U, which, frankly, is the cheapest I've seen from any colo.



  • One of the caveats is power consumption. We agreed to a cap of 28 amps. If we stay under 28 amps, we pay one price; if we go over 28 amps, we have to pay a higher tiered price.
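    As a rough sanity check, you can add up device wattages and convert to amps before committing to a tier like that. Everything below (the device wattages and the 120 V circuit voltage) is a made-up illustration, not figures from this thread:

```python
# Rough tier check against an amperage cap like the 28 A one above.
# All wattages and the 120 V circuit voltage are hypothetical examples.
CIRCUIT_VOLTS = 120
AMP_CAP = 28

# Peak draw per device in watts (made-up numbers for illustration)
devices = {
    "server-1": 750,
    "server-2": 750,
    "storage": 900,
    "switch": 150,
}

total_watts = sum(devices.values())       # 2550 W in this example
total_amps = total_watts / CIRCUIT_VOLTS  # W / V = A

print(f"Total draw: {total_watts} W = {total_amps:.2f} A")
print("Under cap" if total_amps <= AMP_CAP else "Over cap -> higher tier")
```

    In practice you'd want draw measured at the PDU, since nameplate figures tend to overstate real consumption.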



  • Verify that the dimensions of the rack you are getting will hold your equipment! We are running into an issue with our colocation facility where the rails on two of our older Dells extend past the rear mounting rail and smack right into the PDUs. If you need extended depth, be sure to ask for it and GET IT IN WRITING IN THE CONTRACT before you sign.
    That having been said, we are using Centrilogic for our colocation facility in Buffalo and they have been very accommodating, even with our depth issue. They have 3 data centers in the GTA, including Mississauga. I can get you contact info if you want it.
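    A quick way to do that check before move-in day is to compare each device's overall depth (rail kit included) against the usable depth the colo quotes you. All the depths below are hypothetical example numbers; get the real figures from the facility and your vendors' spec sheets:

```python
# Sanity check: will each chassis (rails included) fit between the front
# mounting rail and the PDUs? All depths are hypothetical examples.
USABLE_DEPTH_IN = 36.0  # front mounting rail to PDU face (assumed)

# Overall depth of each device including its rail kit (made-up figures)
equipment = {
    "older Dell (long rails)": 38.5,
    "newer Dell": 31.0,
    "switch": 17.0,
}

results = {name: depth <= USABLE_DEPTH_IN for name, depth in equipment.items()}
for name, fits in results.items():
    status = "OK" if fits else "NEEDS EXTENDED-DEPTH RACK"
    print(f"{name}: {equipment[name]} in -> {status}")
```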



  • @jt1001001 said in Considering Colocation - What to watch for:

    Centrilogic

    Ah, this is great, thanks. I would never have thought about checking rack depth. If you have a contact I can talk to about getting quotes for the GTA area, then yes, please PM me that info. Much appreciated!



  • Having just completed our move, a couple of pointers:

    1. Diagram your ideal layout. Even if it's a "stick figure" drawing, have some sort of guide.
    2. LENGTH OF POWER CABLES: Our power cables were WAY too long, requiring a lot of routing and rerouting, which in turn left the back of the rack somewhat blocked and limited airflow. We are going to purchase short (2 and 3 ft) cables and redo all the power.
    3. LENGTH OF NETWORK CABLES: 6 ft was too short in some cases, and 10 ft was too long. We had to use more cable management, which ends up wasting 2U of space. If you can, use some string and a tape measure on your existing racks to get an idea of how much length you'll need.
    4. LABEL, LABEL, LABEL: In an ideal world, you put your stuff in the rack and never come back unless something breaks or it's time to move to a new facility. Of course you'll remember exactly what you did when you installed everything, so when you come back 2 years later you'll know exactly why that one server is on a separate switch port from the others. Take the time now to plan, and label everything you can as you install. This took a big chunk of our move window, but it will be worth it when I can look and see which server that blue network cable goes to.
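    If you can't get physical access for the string-and-tape-measure method, a back-of-the-napkin estimate from rack positions gets you close. The side-run and slack allowances below are assumed example values, not measurements from this thread:

```python
# Estimate patch-cable length from rack U positions, standing in for the
# string-and-tape-measure method. Side-run and slack values are assumed.
U_HEIGHT_IN = 1.75   # one rack unit in inches
SIDE_RUN_IN = 6      # horizontal run into the cable channel (assumed)
SLACK_IN = 6         # service loop so the cable isn't bar-tight

def patch_length_in(device_u, switch_u):
    """Vertical rack distance plus both side runs and slack, in inches."""
    vertical = abs(device_u - switch_u) * U_HEIGHT_IN
    return vertical + 2 * SIDE_RUN_IN + SLACK_IN

for dev_u in (2, 10, 20):
    inches = patch_length_in(dev_u, switch_u=42)
    print(f"U{dev_u} -> switch at U42: ~{inches:.0f} in ({inches / 12:.1f} ft)")
```

    Round up to the nearest stock length; it's cheaper to hide a foot of slack than to replace a cable that comes up an inch short.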


  • Take photos of all of your gear inside and out. Put it in a file or on a wiki page. You'll want to be looking at a photo of it when talking to a tech someday. You'll thank me.


  • Service Provider

    @jt1001001 said in Considering Colocation - What to watch for:

    Having just completed our move, a couple of pointers:

    1. Diagram your ideal layout. Even if it's a "stick figure" drawing, have some sort of guide.
    2. LENGTH OF POWER CABLES: Our power cables were WAY too long, requiring a lot of routing and rerouting, which in turn left the back of the rack somewhat blocked and limited airflow. We are going to purchase short (2 and 3 ft) cables and redo all the power.
    3. LENGTH OF NETWORK CABLES: 6 ft was too short in some cases, and 10 ft was too long. We had to use more cable management, which ends up wasting 2U of space. If you can, use some string and a tape measure on your existing racks to get an idea of how much length you'll need.

    Doesn't the DC handle all this for you? We've never provided cables of any sort or had access to look at the gear. Never mattered to us what length cables they chose to use or what the racking order was.



  • @scottalanmiller said in Considering Colocation - What to watch for:

    Doesn't the DC handle all this for you? We've never provided cables of any sort or had access to look at the gear. Never mattered to us what length cables they chose to use or what the racking order was.

    Not all the time. Just yesterday, I went into my colo and racked & stacked my own servers, connected them up to power and network. They just provide us with physical security, a full rack, 2 independent sources of power w/ backups, and a network connection. We still have full control over everything else.



  • @scottalanmiller said in Considering Colocation - What to watch for:

    Doesn't the DC handle all this for you? We've never provided cables of any sort or had access to look at the gear. Never mattered to us what length cables they chose to use or what the racking order was.

    It might depend on the setup inside the DC and/or how much space you're using. It doesn't make much sense for someone with 4U of space to go through all the security steps to get inside, whereas a half or full rack can easily be secured separately, making physical access less of a security risk to other clients.



  • @NerdyDad said in Considering Colocation - What to watch for:

    Not all the time. Just yesterday, I went into my colo and racked & stacked my own servers, connected them up to power and network. They just provide us with physical security, a full rack, 2 independent sources of power w/ backups, and a network connection. We still have full control over everything else.

    The several colos that we are looking at do exactly this. We rack our own equipment; the only things they provide are power and the physical rack.



  • @travisdh1 said in Considering Colocation - What to watch for:

    It might depend on the setup inside the DC and/or how much space you're using. It doesn't make much sense for someone with 4U of space to go through all the security steps to get inside, whereas a half or full rack can easily be secured separately, making physical access less of a security risk to other clients.

    A lot goes into it, especially with compliance (HIPAA, PCI, etc.), but typically we issue keycards to those with a half rack or more (if they want them), though they do need to inform us 24 hours in advance if they plan on coming in.



  • Look out for the cross-connects... they can be a pain. Make sure someone manages getting everything cross-connected, and keep good communication with the data center. The point where the carrier meets the data center can be awkward at times.



  • @NerdyDad That's how our colo is. The rent includes security (keycard access to the DC, combination access to the rack), rack, power feed, and Internet. Oh, and basic PDUs. We wanted more advanced PDUs (remote management and monitoring), which we could either lease from them or buy ourselves.

