Backup device for local or colo storage
-
@scottalanmiller said:
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of about 800Mb/s, and that would be HARD to hit and sustain. 8TB at 800Mb/s is roughly 22 hours to copy. That's with zero bottlenecks anywhere, just wide-open streaming without ever dropping the speed.
Well, then he's actually doing pretty well, if he says it takes around 24 hours to back up the whole system (all current 6 TB). He might have a bottleneck somewhere, but not a horrible one.
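For reference, a minimal back-of-envelope sketch of that arithmetic (assuming decimal terabytes and ignoring all protocol and disk overhead):

```python
# Ideal streaming transfer time (1 TB = 10^12 bytes, zero overhead assumed).
def transfer_hours(size_tb, rate_gbps):
    bits = size_tb * 1e12 * 8  # payload in bits
    return bits / (rate_gbps * 1e9) / 3600

print(transfer_hours(8, 0.8))  # ~22.2 h for 8 TB at a realistic 800 Mb/s
print(transfer_hours(6, 0.8))  # ~16.7 h for the current 6 TB
print(transfer_hours(8, 1.0))  # ~17.8 h at a theoretical full 1 Gb/s
```

So a ~24 hour window for 6 TB against an ideal ~16.7 hours really isn't far off the wire-speed limit.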
-
@DustinB3403 said:
Even at relatively idle times this server's slow. I don't even know how old it is. 6-8 years maybe
Given that 2003 R2 came out in 2005, it is presumably 10+ years old.
-
@scottalanmiller said:
1Gb/s has a realistic maximum transfer rate of about 800Mb/s, and that would be HARD to hit and sustain. 8TB at 800Mb/s is roughly 22 hours to copy. That's with zero bottlenecks anywhere, just wide-open streaming without ever dropping the speed.
This math alone proves that using NAUBackup to create full backups won't really be much better than the current solution. Definitely sounds like it's time for a network upgrade.
-
Or just a dedicated 10Gig switch for the management port on Xen and the onsite backup solutions.
-
Of course I'd have to put 10GbE NICs into the host servers.
-
@DustinB3403 said:
Of course I'd have to put 10GbE NICs into the host servers.
Not necessarily, you only need your aggregate to be faster. I'm assuming that bonded NICs have not been set up? Get that fixed. If every server was 2Gb/s and the backup host was 10Gb/s you'd take rather an amazing leap forward just there. Probably enough to find other bottlenecks.
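A rough sketch of that aggregate argument (hypothetical numbers: each source host bonded to 2Gb/s, or up to quad GigE, feeding a 10Gb/s backup host; overhead ignored):

```python
# Aggregate-throughput sketch: concurrent pulls are capped by the smaller of
# the summed source links and the backup host's own link. Speeds in Gb/s.
def backup_hours(total_tb, source_links_gbps, backup_link_gbps):
    aggregate = min(sum(source_links_gbps), backup_link_gbps)
    return (total_tb * 8e12) / (aggregate * 1e9) / 3600

print(backup_hours(6, [1], 10))        # ~13.3 h on a single GigE link
print(backup_hours(6, [2], 10))        # ~6.7 h with a dual-GigE bond
print(backup_hours(6, [4], 10))        # ~3.3 h with a quad-GigE bond
print(backup_hours(6, [2, 2, 2], 10))  # ~2.2 h from three bonded hosts at once
```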
-
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
-
You might find a single server or two with 10GigE needs, but likely not the majority. Spend opportunistically.
-
Might as well loop in StorageCraft themselves too: @Steven
-
@scottalanmiller said:
If you identify a single server that needs more speed, you can go up to triple or quadruple GigE if need be before making a leap to 10GigE connections.
What's the current cost for a 10GigE card? Assuming he doesn't already have open GigE ports, he'll need to buy something regardless.
-
I'm surprised; an unmanaged 8-port 10GigE switch is $760:
http://www.newegg.com/Product/Product.aspx?Item=N82E16833122529
A two-port card from Dell is $650. Third party might be considerably less.
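Worth noting the per-port math on those two prices (a quick sketch; figures are the ones quoted above):

```python
# Cost per 10GigE port, using the prices quoted in the thread.
print(760 / 8)  # ~$95/port for the unmanaged 8-port switch
print(650 / 2)  # $325/port for the two-port Dell NIC
```

At those numbers the NICs, not the switch, dominate the cost of lighting up each host.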
-
Yup, I've been pushing Netgear 10GigE for a long time now. I think that Dell has some decent 10GigE fiber switches for around $2K as well.
-
Damn, the 12-port is nearly double the price of the 8-port, $1450... ouch!
-
@Dashrender said:
Damn, the 12-port is nearly double the price of the 8-port, $1450... ouch!
I bet if you check, the backplane gets a lot faster.
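To put a number on "faster backplane" (a sketch, assuming both units are non-blocking, full-duplex switches):

```python
# Non-blocking switching fabric: every port sending and receiving at line
# rate simultaneously (full duplex counts each direction).
def fabric_gbps(ports, port_speed_gbps):
    return ports * port_speed_gbps * 2

print(fabric_gbps(8, 10))   # 160 Gb/s fabric for the 8-port unit
print(fabric_gbps(12, 10))  # 240 Gb/s fabric for the 12-port unit
```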
-
That might be the route we go: a 10GigE switch with bonded NICs or dedicated 10GigE NICs on each host.
-
How many VM hosts do you have?
-
One currently, and it's standalone.
The equipment we're looking into would be a dual-host setup, "primary-primary" so to speak.
-
A long time ago I remember seeing some documentation (sales pamphlet) for industrial 10G switches. Small little units, 5 ports.
I can't remember their name for the life of me though, as this was years ago.
-
@DustinB3403 said:
A long time ago I remember seeing some documentation (sales pamphlet) for industrial 10G switches. Small little units, 5 ports.
I can't remember their name for the life of me though, as this was years ago.
Netgear has some fairly inexpensive managed switches that have 10G uplink ports, I think in the $800-900 range. You could look into those if you don't need a ton of 10GbE ports.
-
Yeah, I'm thinking we won't need many.
Just device to device basically.
From XenServer host to onsite backup device.