Backup device for local or colo storage
-
If your OSes really don't matter, and they're only serving the purpose of a file server, then that might work. Set up your main OS install, create all of your shares, then create images of the OS and boot drives. Then back up the data drives as desired.
But if you have application settings living on the OS drive that need to be backed up regularly, why not an incremental-forever solution, once your StorageCraft performance problem is solved? Then you can replicate your StorageCraft backups to another site, only replicating the daily changes.
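Just to put rough numbers on the "only replicating the daily changes" part. The 2% daily change rate below is a made-up placeholder (measure your own), but it shows why an incremental-forever chain replicates comfortably over a 1Gb link even when a full copy does not:

```python
# Hypothetical numbers: 6 TB protected, 2% of it changing per day, a 1 Gb/s link
# running at ~80% of nominal. The change rate and efficiency are guesses.
DATA_TB = 6
DAILY_CHANGE = 0.02
LINK_GBPS = 1
EFFICIENCY = 0.8

full_hours = DATA_TB * 8000 / (LINK_GBPS * EFFICIENCY) / 3600   # 8000 Gb per TB (decimal)
incr_hours = full_hours * DAILY_CHANGE

print(f"full copy : {full_hours:.1f} h")   # ~16.7 h
print(f"daily incr: {incr_hours:.1f} h")   # ~0.3 h, about 20 minutes
```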
-
Let's back up and drop all of it. And start from the beginning....
Why is StorageCraft taking so long to take a backup?
-
@scottalanmiller said:
Let's back up and drop all of it. And start from the beginning....
Why is StorageCraft taking so long to take a backup?
It takes forever to back up when creating a full. That's ~6TB of live systems.
Also, ask the MSP who set it up... it was before my time and I barely touch it.
-
@DustinB3403 said:
It takes forever to back up when creating a full. That's ~6TB of live systems.
Also, ask the MSP who set it up... it was before my time and I barely touch it.
I think everything has to be focused here. This is the problem; it has to be fixed before you can address other issues.
-
Can you work to solve that problem?
What can you tell us about that system? Is it on a 1 Gb network? Same switch as the VM hosts? How many network connections does it have?
Are the disks on the VM hosts overloaded on IOPS? Do they have any to spare for backups?
Are you doing backups with NAUBackup now? If so, how long do they take to finish? What's doing the processing for NAUBackup, the VM host (XenServer?) or the data repository that's holding the full backups?
-
You need to identify the bottlenecks on the backup system. Something is taking a really long time. Is it the target, is it the backup server, is it the source systems...
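If you want something quick and dirty to watch while a job runs, a sampler along these lines (Python with psutil, purely illustrative; perfmon or typeperf will give you the same counters natively on Windows) tends to make the saturated resource jump out:

```python
# Samples CPU, memory, disk throughput, and network throughput every few seconds.
# Run it on the backup server (or a source) during a backup window and watch
# which number pins first. Requires: pip install psutil
import time
import psutil

INTERVAL = 5  # seconds between samples

psutil.cpu_percent()  # prime the counter; the next call reports usage since now
prev_disk = psutil.disk_io_counters()
prev_net = psutil.net_io_counters()
prev_t = time.monotonic()

while True:
    time.sleep(INTERVAL)
    now = time.monotonic()
    elapsed = now - prev_t
    cpu = psutil.cpu_percent()              # % CPU since the previous call
    mem = psutil.virtual_memory().percent   # % RAM in use
    disk = psutil.disk_io_counters()
    net = psutil.net_io_counters()
    disk_mbs = (disk.read_bytes + disk.write_bytes
                - prev_disk.read_bytes - prev_disk.write_bytes) / 1e6 / elapsed
    net_mbs = (net.bytes_sent + net.bytes_recv
               - prev_net.bytes_sent - prev_net.bytes_recv) / 1e6 / elapsed
    print(f"cpu {cpu:5.1f}%  mem {mem:5.1f}%  "
          f"disk {disk_mbs:7.1f} MB/s  net {net_mbs:7.1f} MB/s")
    prev_disk, prev_net, prev_t = disk, net, now
```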
-
An iSCSI Buffalo drive attached to a Server 2008 file server with a single 1Gbe NIC, which backs up to a 2-drive and a 4-drive Synology device.
Each Synology backs up different items.
We also have an ancient "archive server" with 6 drives, running Server 2003, which actually runs the StorageCraft software. Single 1Gbe NIC, 8GB RAM, with a quad-core AMD Opteron 1385 CPU.
-
No wonder you have issues!
So the iSCSI traffic for the Buffalo goes over the same NIC as the traffic being sent to the 2 Synology devices?
And it's all driven by the StorageCraft software that's running on the Server 2003 box?
What does the Buffalo device do that's different than the 2 Synology devices?
Is the Buffalo the primary storage, boot storage, etc, for the Server 2008?
-
The iSCSI target is housing our network shares.
The Buffalo is being decommissioned, but it was a backup device.
-
@DustinB3403 said:
We also have an ancient "archive server" with 6 drives, running Server 2003, which actually runs the StorageCraft software. Single 1Gbe NIC, 8GB RAM, with a quad-core AMD Opteron 1385 CPU.
So everything flows through this machine? All 8TB of backups goes through this choke point? Have you checked CPU to see if it is maxed out? Memory to see if it is exhausted? IOPS to see if you are beyond the limits of the drives?
-
The Server 2003 box is horribly slow. CPU usage is constantly peaking; memory usage doesn't seem to be hit very hard.
But this device is also slated to be tossed. I was considering just using it for drive space, as another backup-of-our-backups sort of device.
Maybe not?
-
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
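For anyone who wants to check the arithmetic, a quick back-of-envelope (decimal units, nothing modeled beyond the ~800 Mb/s sustained rate) lands in the same ballpark:

```python
# 8 TB over a single 1 Gb/s link, assuming ~800 Mb/s sustained and nothing else
# in the way. Comes out around 22 hours, the same ballpark as the figure above.
SIZE_TB = 8
SUSTAINED_GBPS = 0.8

hours = SIZE_TB * 8e12 / (SUSTAINED_GBPS * 1e9) / 3600
print(f"{hours:.1f} h")   # ~22.2 h
```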
-
Even in relatively idle times this server is slow. I don't know how old it even is; 6-8 years maybe.
-
Realistically, you need a core backup infrastructure of 10Gb/s in a bonded pair, which would drop your network bottleneck from 21.2 hours to 1.05 hours. Of course other bottlenecks will be exposed, but this is key: your fundamental network infrastructure cannot handle your backup needs. This means you cannot restore in an emergency either. Nothing you do will speed it up; waiting a full day minimum would be your only option. And likely you would need to do a lot of different things at once and would be unable to keep the line fully saturated for a full day while doing the restore.
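Same back-of-envelope for the bonded pair of 10 Gb/s links, assuming the same ~80% of nominal can be sustained:

```python
# 8 TB over a bonded pair of 10 Gb/s links at ~80% of nominal line rate.
SIZE_TB = 8
LINK_GBPS = 2 * 10
EFFICIENCY = 0.8

hours = SIZE_TB * 8e12 / (LINK_GBPS * EFFICIENCY * 1e9) / 3600
print(f"{hours:.1f} h")   # ~1.1 h, versus roughly a full day over 1 Gb/s
```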
-
@scottalanmiller said:
@DustinB3403 said:
1Gbe
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
Well, then he's actually doing pretty well, if he says it takes around 24 hours to back up the whole system (all current 6 TB). He might have a bottleneck somewhere, but not a horrible one.
-
@DustinB3403 said:
Even in relatively idle times this server is slow. I don't know how old it even is; 6-8 years maybe.
Given that 2003 R2 came out in 2005, it is presumably 10+ years old.
-
@scottalanmiller said:
1Gb/s has a realistic maximum transfer rate of 800Mb/s and that would be HARD to hit and sustain. 8TB on 1Gb/s is 21.2 hours to copy. That's with zero bottlenecks anywhere, just wide open streaming without ever dropping the speed.
This math alone proves that using NAUBackup to create full backups won't really be much better than the current solution. Definitely sounds like it's time for a network upgrade.
-
Or just a dedicated 10Gig switch for the management port on Xen and the onsite backup solutions.
-
Of course I'd have to put 10Gbe NICs into the host servers.
-
@DustinB3403 said:
Of course I'd have to put 10Gbe NICs into the host servers.
Not necessarily; you only need your aggregate to be faster. I'm assuming that bonded NICs have not been set up? Get that fixed. If every server were at 2Gb/s and the backup host at 10Gb/s, you'd take a rather amazing leap forward just there. Probably enough to find the other bottlenecks.
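Rough illustration of the aggregate point. The five hosts and the per-host bonding below are made-up placeholders, not your actual environment:

```python
# Hypothetical: five VM hosts, each with a bonded pair of 1 Gb/s NICs, all
# streaming in parallel into a backup host with a 10 Gb/s uplink.
HOSTS = 5
HOST_GBPS = 2          # bonded pair of 1 Gb/s per host
TARGET_GBPS = 10       # backup host uplink
EFFICIENCY = 0.8
TOTAL_TB = 8

aggregate_gbps = min(HOSTS * HOST_GBPS, TARGET_GBPS) * EFFICIENCY
hours = TOTAL_TB * 8e12 / (aggregate_gbps * 1e9) / 3600
print(f"aggregate {aggregate_gbps:.0f} Gb/s -> {hours:.1f} h for the full {TOTAL_TB} TB")
```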