XenServer 6.5 Storage - Reset Multipath Count
-
Hey All,
I have a pool of XenServer 6.5 hosts that share a FC SAN (an inherited configuration that will change when we upgrade, and for those of you who've read past posts of mine, YES, the hosts in question are the blades...please hold the lecture, I know this is all bad). Somehow some of the LUNs on said SAN (3PAR 7200) were exported multiple times, which inflated the number of paths that appeared to be available. I went ahead and cleaned these up; however, now XenCenter is complaining about failed paths. Rightfully so, as the number of paths Xen is expecting has decreased.
Does anyone know how to have Xen reset the path count for a given SR? There was an out-of-the-box script that was supposed to do this (/opt/xensource/sm/mpathcount.py), but apparently it does not work under XS 6.5 due to changes in how it handles multipathing (I believe it now uses dm-multipath exclusively). I posted on the official Citrix forum, but did not get a whole lot of help there. I'm hoping someone here may have suggestions on things to try.
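In case it helps anyone dig, here's roughly how I've been checking what XenCenter thinks the counts are. This is just a rough sketch run from dom0; I'm assuming the counts live in the PBD other-config as mpath-&lt;SCSIid&gt; entries (which is what mpathcount.py seems to be meant to maintain), so treat that key name and format as an assumption and verify on your own hosts:

```python
#!/usr/bin/env python
# Rough sketch, run from dom0 on a pool member. Assumption: XenCenter gets
# its per-SR path counts from "mpath-<SCSIid>: [current, expected]" entries
# in each PBD's other-config (the thing mpathcount.py is supposed to keep
# up to date). Verify the key name/format on your own hosts.
import subprocess

SR_UUID = "REPLACE-WITH-SR-UUID"  # placeholder, not a real UUID

def xe(*args):
    """Run an xe CLI command and return its trimmed stdout."""
    proc = subprocess.Popen(["xe"] + list(args), stdout=subprocess.PIPE)
    return proc.communicate()[0].decode().strip()

# A shared SR has one PBD per host; --minimal returns a comma-separated list.
pbd_uuids = [u for u in xe("pbd-list", "sr-uuid=" + SR_UUID, "--minimal").split(",") if u]

for pbd in pbd_uuids:
    # other-config comes back as "key: value; key: value; ..."
    other_config = xe("pbd-param-get", "uuid=" + pbd, "param-name=other-config")
    print("PBD " + pbd)
    for entry in other_config.split(";"):
        if entry.strip().startswith("mpath-"):
            print("  " + entry.strip())  # e.g. "mpath-<SCSIid>: [2, 4]" (assumed format)
```

Comparing that against what "multipath -ll" reports on each host is how I spotted the mismatch in the first place.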
Thanks!!
-
@anthonyh said:
Hey All,
I have a pool of XenServer 6.5 hosts that share a FC SAN (an inherited configuration that will change when we upgrade, and for those of you who've read past posts of mine, YES, the hosts in question are the blades...please hold the lecture, I know this is all bad).
While I don't know how to fix your issue, I have to give you a golf clap for that much.
-
@travisdh1 said:
@anthonyh said:
Hey All,
I have a pool of XenServer 6.5 hosts that share a FC SAN (an inherited configuration that will change when we upgrade, and for those of you who've read past posts of mine, YES, the hosts in question are the blades...please hold the lecture, I know this is all bad).
While I don't know how to fix your issue, I have to give you a golf clap for that much.
Ha, thanks.
I suspect detaching and re-attaching the SRs one by one will fix the issue, but that's not something I've ever done in test, much less production...so I don't know what the implications are of doing that. If an SR that a VM's virtual disks reside on is detached, what happens to said VM?
-
@anthonyh said:
@travisdh1 said:
@anthonyh said:
Hey All,
I have a pool of XenServer 6.5 hosts that share a FC SAN (an inherited configuration that will change when we upgrade, and for those of you who've read past posts of mine, YES, the hosts in question are the blades...please hold the lecture, I know this is all bad).
While I don't know how to fix your issue, I have to give you a golf clap for that much.
Ha, thanks.
I suspect detaching and re-attaching the SRs one by one will fix the issue, but that's not something I've ever done in test, much less production...so I don't know what the implications are of doing that. If an SR that a VM's virtual disks reside on is detached, what happens to said VM?
I'd suspect that you'd have to move any VM attached to the SR you are working with to a different SR, or shut down the VM.
-
@travisdh1 said:
@anthonyh said:
@travisdh1 said:
@anthonyh said:
Hey All,
I have a pool of XenServer 6.5 hosts that share a FC SAN (an inherited configuration that will change when we upgrade, and for those of you who've read past posts of mine, YES, the hosts in question are the blades...please hold the lecture, I know this is all bad).
While I don't know how to fix your issue, I have to give you a golf clap for that much.
Ha, thanks.
I suspect detaching and re-attaching the SRs one by one will fix the issue, but that's not something I've ever done in test, much less production...so I don't know what the implications are of doing that. If an SR that a VM's virtual disks reside on is detached, what happens to said VM?
I'd suspect that you'd have to move any VM attached to the SR you are working with to a different SR, or shut down the VM.
Hmm. While it'd be a PITA, I suppose I could shuffle the SRs one by one: create a new SR of equal size, migrate the disks, delete the old SR, rinse and repeat. It would at least be a solution, and possibly one with minimal downtime... Something like the sketch below is what I have in mind.
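This is an untested sketch, assuming live storage migration via "xe vdi-pool-migrate" behaves itself on 6.5; the SR UUIDs are placeholders:

```python
#!/usr/bin/env python
# Untested sketch of the "shuffle" idea: move every VDI off the old SR onto
# the newly created SR, then detach/destroy the old SR by hand afterwards.
# Assumes "xe vdi-pool-migrate" (live storage migration) is usable on 6.5;
# snapshots/base copies may refuse to move, so expect some stragglers.
import subprocess

OLD_SR = "REPLACE-WITH-OLD-SR-UUID"  # placeholder
NEW_SR = "REPLACE-WITH-NEW-SR-UUID"  # placeholder

def xe(*args):
    """Run an xe CLI command and return its trimmed stdout."""
    proc = subprocess.Popen(["xe"] + list(args), stdout=subprocess.PIPE)
    return proc.communicate()[0].decode().strip()

vdis = [v for v in xe("vdi-list", "sr-uuid=" + OLD_SR, "--minimal").split(",") if v]

for vdi in vdis:
    name = xe("vdi-param-get", "uuid=" + vdi, "param-name=name-label")
    print("Migrating " + name + " (" + vdi + ") to " + NEW_SR)
    # Live-migrates the virtual disk to the new SR; the VM can stay running.
    xe("vdi-pool-migrate", "uuid=" + vdi, "sr-uuid=" + NEW_SR)
```

I'd still verify every VDI actually landed on the new SR before forgetting or destroying the old one.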
-
I found a slightly easier solution to this. For each pool member:
- Enter Maintenance Mode
- Disable multipathing
- Enable multipathing
- Exit Maintenance Mode
BOOM. Path counts are reset.
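For anyone who wants to script it, here's roughly the xe CLI equivalent as I understand it. I actually did the toggling through XenCenter, so treat the other-config key names (multipathing / multipathhandle) as assumptions lifted from the old Citrix multipathing docs and sanity-check before running it against production:

```python
#!/usr/bin/env python
# Rough xe CLI outline of what I believe XenCenter does for those four steps.
# The other-config keys below are assumptions from the Citrix multipathing
# docs, not something I ran verbatim -- verify before using in production.
import subprocess

HOST = "REPLACE-WITH-HOST-UUID"  # placeholder; one pool member at a time

def xe(*args):
    """Run an xe CLI command and return its trimmed stdout."""
    proc = subprocess.Popen(["xe"] + list(args), stdout=subprocess.PIPE)
    return proc.communicate()[0].decode().strip()

# Enter Maintenance Mode: disable the host and evacuate its running VMs.
xe("host-disable", "uuid=" + HOST)
xe("host-evacuate", "uuid=" + HOST)

# Disable, then re-enable, multipathing on the host (assumed key names).
xe("host-param-set", "uuid=" + HOST, "other-config:multipathing=false")
xe("host-param-set", "uuid=" + HOST, "other-config:multipathing=true")
xe("host-param-set", "uuid=" + HOST, "other-config:multipathhandle=dmp")

# Exit Maintenance Mode.
xe("host-enable", "uuid=" + HOST)
```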
-
And that didn't take your VMs down?