XenServer 6.5 - Clean Up Storage Repository
-
@momurda said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh I have had a few disk migrations fail over the last 2 years. Most of the time you just end up with a broken VDI on the destination and the source is fine.
But I have had to restart a guest after a disk migration failure before.
I'm OK with that. I will have to think about this...
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
Last night I set up a new SR for Zimbra and live migrated the OS disk over to the new SR without any issues. Took ~5 minutes for the 20 GB VHD.
I'd like to do the same for the 1 TB VHD, but it makes me nervous...
If the process was to bomb mid-progress, what would happen? Is it easy to recover from?
You would have a backup, but why go through this process? Why not let the system run a coalesce? Is this a production system or your lab?
Production system. The problem is that there isn't enough space on the SR to perform a coalesce. I'm trying to avoid bringing Zimbra down if at all possible. Might not be possible, but I can at least try.
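A quick way to confirm how little headroom the SR actually has from the xe CLI (just a sketch; <sr-uuid> is a placeholder) would be something like:
# Compare what is allocated/used against the SR's physical size
xe sr-list uuid=<sr-uuid> params=name-label,physical-size,physical-utilisation,virtual-allocation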
-
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@momurda said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh I have had a few disk migrations fail over the last 2 years. Most of the time you just end up with a broken VDI on the destination and the source is fine.
But I have had to restart a guest after a disk migration failure before.
I'm OK with that. I will have to think about this...
This can mean unplanned downtime, which for a mail system can be costly. Granted, the downtime is usually nominal, but it is worth considering letting the system clean up this issue at the nearest planned downtime.
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@momurda said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh I have had a few disk migrations fail over the last 2 years. Most of the time you just end up with a broken VDI on the destination and the source is fine.
But I have had to restart a guest after a disk migration failure before.
I'm OK with that. I will have to think about this...
This can mean unplanned downtime, which for a mail system can be costly. Granted, the downtime is usually nominal, but it is worth considering letting the system clean up this issue at the nearest planned downtime.
Yes, it's a gamble for sure.
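For reference, the single-VDI live storage migration being weighed here would look roughly like this from the xe CLI (placeholder UUIDs; a sketch of Storage XenMotion, not necessarily the exact command used):
# Move one VDI to another SR while the VM keeps running
xe vdi-pool-migrate uuid=<vdi-uuid> sr-uuid=<destination-sr-uuid>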
-
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
Last night I set up a new SR for Zimbra and live migrated the OS disk over to the new SR without any issues. Took ~5 minutes for the 20 GB VHD.
I'd like to do the same for the 1 TB VHD, but it makes me nervous...
If the process was to bomb mid-progress, what would happen? Is it easy to recover from?
You would have a backup, but why go through this process? Why not let the system run a coalesce? Is this a production system or your lab?
Production system. The problem is that there isn't enough space on the SR to perform a coalesce. I'm trying to avoid bringing Zimbra down if at all possible. Might not be possible, but I can at least try.
The coalesce process is likely already running, attempting to clean up an old snapshot. Performing a manual SR scan should clean up this issue.
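A sketch of that manual scan from the xe CLI (SR UUID is a placeholder):
# Rescanning the SR also kicks off the garbage collector, which performs the coalesce
xe sr-scan uuid=<sr-uuid>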
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
Last night I set up a new SR for Zimbra and live migrated the OS disk over to the new SR without any issues. Took ~5 minutes for the 20 GB VHD.
I'd like to do the same for the 1 TB VHD, but it makes me nervous...
If the process was to bomb mid-progress, what would happen? Is it easy to recover from?
You would have a backup, but why go through this process? Why not let the system run a coalesce? Is this a production system or your lab?
Production system. The problem is that there isn't enough space on the SR to perform a coalesce. I'm trying to avoid bringing Zimbra down if at all possible. Might not be possible, but I can at least try.
The coalesce process is likely already running, attempting to clean up an old snapshot. Performing a manual SR scan should clean up this issue.
Already done this several times. No dice.
-
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
Last night I set up a new SR for Zimbra and live migrated the OS disk over to the new SR without any issues. Took ~5 minutes for the 20 GB VHD.
I'd like to do the same for the 1 TB VHD, but it makes me nervous...
If the process was to bomb mid-progress, what would happen? Is it easy to recover from?
You would have a backup, but why go through this process? Why not let the system run a coalesce? Is this a production system or your lab?
Production system. The problem is that there isn't enough space on the SR to perform a coalesce. I'm trying to avoid bringing Zimbra down if at all possible. Might not be possible, but I can at least try.
The coalesce process is likely already running, attempting to clean up an old snapshot. Performing a manual SR scan should clean up this issue.
Already done this several times. No dice.
I should probably just re-read the thread, but what type of backup are you running with this guest? You aren't using XO, so I'm curious what is causing this issue.
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
Last night I set up a new SR for Zimbra and live migrated the OS disk over to the new SR without any issues. Took ~5 minutes for the 20 GB VHD.
I'd like to do the same for the 1 TB VHD, but it makes me nervous...
If the process was to bomb mid-progress, what would happen? Is it easy to recover from?
You would have a backup, but why go through this process? Why not let the system run a coalesce? Is this a production system or your lab?
Production system. The problem is that there isn't enough space on the SR to perform a coalesce. I'm trying to avoid bringing Zimbra down if at all possible. Might not be possible, but I can at least try.
The coalesce process is likely already running, attempting to clean up an old snapshot. Performing a manual SR scan should clean up this issue.
Already done this several times. No dice.
I should probably just re-read the thread, but what type of backup are you running with this guest? You aren't using XO, so I'm curious what is causing this issue.
I am using Alike's "enhanced backup" model, which uses snapshot-based backups. I also take snapshots of the VM (all VMs, really) whenever I do any sort of maintenance, so I can't really point the blame at Alike. I don't know how long the orphaned snapshots have been around. The interesting thing is that, except for Saturday night's run (when I got the alert from the pool that the coalesce failed due to not enough disk space), backups have been successful. Backups aren't performed on Sundays.
I don't have the orphaned-snapshot issue with any other VM that I'm aware of. SR usage everywhere else looks to be as expected.
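For anyone hitting the same thing, a rough way to look for leftover snapshot VDIs from the xe CLI (placeholder UUIDs; just a sketch):
# List every VDI on the SR and whether it is a snapshot
xe vdi-list sr-uuid=<sr-uuid> params=uuid,name-label,is-a-snapshot,managed
# List the snapshots XenServer still associates with the VM
xe snapshot-list snapshot-of=<vm-uuid> params=uuid,name-label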
-
What output do you get from the XS CLI when you run this?
xapi-explore-sr
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
What output do you get from the XS CLI when you run this?
xapi-explore-sr
Hm. I don't seem to have that command. I have the following:
[root@vmhost10 ~]# xapi
xapi                     xapi-autostart-vms       xapi-db-process          xapi-wait-init-complete
[root@vmhost10 ~]#
-
Curious...
-
Doh...
This is a feature of XO directly... ha, yeah, never mind.
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
Doh...
This is a feature of XO directly... ha, yeah, never mind.
Well, I guess that means I should set it up.
-
Here is the GitHub repo for that specific feature.
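From what I can tell it is distributed as an npm package, so on a box with Node.js it should be roughly something like this (check the repo's README for the exact steps):
npm install --global xapi-explore-sr
# Prompts for the XenServer host, user and password, then prints the VDI chain for the SR
xapi-explore-sr <sr-uuid>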
-
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
Doh...
This is a feature of XO directly... ha, yeah, never mind.
Well, I guess that means I should set it up.
You really should. It takes moments to set up and start using it.
-
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
@anthonyh said in XenServer 6.5 - Clean Up Storage Repository:
@dustinb3403 said in XenServer 6.5 - Clean Up Storage Repository:
Doh...
This is a feature of XO directly... ha, yeah, never mind.
Well, I guess that means I should set it up.
You really should. It takes moments to set up and start using it.
Looking at it now.
-
Debian or Ubuntu
Installer
sudo bash
<password>
sudo curl https://raw.githubusercontent.com/Jarli01/xenorchestra_installer/master/xo_install.sh | bash
<password>
Updater
sudo bash
<password>
sudo curl https://raw.githubusercontent.com/Jarli01/xenorchestra_updater/master/xo-update.sh | bash -s -- -f
<password>
-
Here is the output from xapi-explore-sr
Zimbra_Vol1 (3 VDIs)
└─┬ base copy - c52a7680-b3fa-4ffd-8e73-a472067eb710 - 85.97 Gi
  └─┬ base copy - 00c565b0-ab40-4e6d-886e-41c51f62992a - 1024.79 Gi
    └── mail.domain.org 1 - 586e7cc3-3fbc-4aa1-89bc-6974454aee7d - 1026.01 Gi
-
I've realized that there is other Zimbra maintenance I need to schedule (most importantly, upgrading from 8.6.0 to current). I'm going to do the shutdown, rescan the SR, and hope it coalesces when I do this work. I seem to be in OK shape for the moment. Alike is able to back it up and the backups are good (I did a successful test restore).
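In xe CLI terms, that plan would be something along these lines (placeholder UUIDs; the coalesce itself is driven by XenServer's garbage collector, these commands just trigger it and watch it):
# Shut the VM down cleanly, then rescan the SR to kick off the coalesce
xe vm-shutdown uuid=<zimbra-vm-uuid>
xe sr-scan uuid=<sr-uuid>
# Watch the storage manager log for coalesce activity
tail -f /var/log/SMlog | grep -i coalesce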