BackUp device for local or colo storage
-
Even at relatively idle times this server is slow. I don't know how old it even is. 6-8 years, maybe.
-
Realistically, you need a core backup infrastructure of 10Gb/s in a bonded pair, which would drop your network bottleneck from 21.2 hours to 1.05 hours. Of course other bottlenecks will be exposed, but this is key: your fundamental network infrastructure cannot handle your backup needs. That means you cannot restore in an emergency either. Nothing you do will speed it up; waiting a full day, minimum, would be your only option. And you would likely need to do a lot of different things at once, making it very hard to keep the line fully saturated for a full day during the restore.
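The back-of-the-envelope math above can be sketched like this (a minimal sketch assuming decimal terabytes and an effective sustained rate of roughly 80% of wire speed; the thread's 21.2-hour figure implies a slightly higher sustained rate than the 800Mb/s used here):

```python
def transfer_hours(data_tb, rate_gbps):
    """Hours to move data_tb terabytes (decimal, 10^12 bytes)
    over a link sustaining rate_gbps gigabits per second."""
    bits = data_tb * 1e12 * 8            # payload in bits
    return bits / (rate_gbps * 1e9) / 3600

# ~800 Mb/s sustained on GigE -> roughly a day for 8 TB
print(round(transfer_hours(8, 0.8), 1))   # ~22.2 hours

# 20x the effective rate (a bonded pair of 10GigE at the same
# ~80% efficiency, i.e. 16 Gb/s) -> roughly an hour
print(round(transfer_hours(8, 16), 1))    # ~1.1 hours
```

The takeaway is the scaling, not the exact decimals: transfer time falls linearly with effective link rate, so a 20x faster pipe turns a day-long backup window into about an hour.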
-
@scottalanmiller said:
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
Well, then he's actually doing pretty well, if he says it takes around 24 hours to back up the whole system (all 6 TB currently). He might have a bottleneck somewhere, but not a horrible one.
-
@DustinB3403 said:
Even at relatively idle times this server is slow. I don't know how old it even is. 6-8 years, maybe.
Given that 2003 R2 came out in 2005, it is presumably 10+ years old.
-
@scottalanmiller said:
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
This math alone proves that using NAUBackup to create full backups won't really be much better than the current solution. Definitely sounds like it's time for a network upgrade.
-
Or just a dedicated 10Gig switch for the management port on Xen and the onsite backup solutions.
-
Of course I'd have to put 10GbE NICs into the host servers.
-
@DustinB3403 said:
Of course I'd have to put 10GbE NICs into the host servers.
Not necessarily, you only need your aggregate to be faster. I'm assuming that bonded NICs have not been set up? Get that fixed. If every server was 2Gb/s and the backup host was 10Gb/s, you'd take a rather amazing leap forward just there. Probably enough to find other bottlenecks.
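Since the hosts mentioned earlier are running XenServer, bonding might look something like this via the xe CLI (a sketch only; the UUIDs are placeholders you would pull from your own host, and the default bond mode and exact syntax should be verified against your XenServer version's documentation):

```shell
# Create a network for the bond to attach to
xe network-create name-label=bond0

# List physical interfaces and note the UUIDs of the two NICs to bond
xe pif-list

# Create the bond from the two physical NICs
# (<network-uuid>, <pif-uuid-1>, <pif-uuid-2> are placeholders)
xe bond-create network-uuid=<network-uuid> \
    pif-uuids=<pif-uuid-1>,<pif-uuid-2>
```

Note that a bond helps aggregate throughput across multiple streams; a single TCP stream still tops out at one member link's speed, which matters for a single large backup job.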
-
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
-
You might find a single server or two with 10GigE needs, but likely not the majority. Spend opportunistically.
-
Might as well loop in StorageCraft themselves too: @Steven
-
@scottalanmiller said:
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
What's the current cost of a 10 GigE card? Assuming he doesn't already have open GigE ports, he'll need to buy something regardless.
-
I'm surprised, an unmanaged 8-port 10 GigE switch is $760:
http://www.newegg.com/Product/Product.aspx?Item=N82E16833122529
A two port card from Dell is $650. Third party might be considerably less.
-
Yup, I've been pushing Netgear 10GigE for a long time now. I think that Dell has some decent 10GigE fiber switches for around $2K as well.
-
Damn, the 12 port is double the price of the 8 port, $1,450... ouch!
-
@Dashrender said:
Damn, the 12 port is double the price of the 8 port, $1,450... ouch!
I bet if you check, the backplane gets a lot faster.
-
That might be the route we go: a 10GigE switch with bonded NICs, or dedicated 10GigE NICs on each host.
-
How many VM hosts do you have?
-
One currently, which is standalone.
The equipment we're looking into would be a dual-host setup. "Primary-primary," so to speak.
-
A long time ago I remember seeing some documentation (a sales pamphlet) for industrial 10G switches. Small little units, 5 ports.
I can't remember their name for the life of me, though, as this was years ago.