An ESXi Rebuild and Veeam Backup Job Oddities
-
I completely rebuilt an ESXi host this weekend because the jump drive running ESXi had a bootbank issue. The host was running 5.1U1 (all VMs on local storage) and is now running 5.5U1 and has been patched for Heartbleed.
I back up the VMs on this host and one other with Veeam. After the rebuild, I had to re-import all of the VMs on the host into inventory, then reconnect the host to Veeam and enter its new root credentials so Veeam could back up its VMs successfully. After Veeam re-scanned the host for VMs, my backup jobs went haywire. The selection lists that contained VMs from the newly rebuilt host were all wrong (different VMs on different jobs than there should have been), and backups of any VM on the rebuilt ESXi host would fail. To fix the problem, I had to remove every VM from that host from my Veeam backup jobs and re-add them. Once I did that, it was smooth sailing.
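For what it's worth, the re-import step was nothing Veeam-specific; it was just re-registering the orphaned .vmx files on local storage back into the host's inventory. A rough sketch of that step using the pyVmomi SDK (host address, credentials, and .vmx paths below are placeholders, not my actual environment) looks something like this:
```python
# Rough sketch with pyVmomi: re-register orphaned .vmx files on local storage
# back into a rebuilt host's inventory. Host, credentials, and paths below are
# placeholders for illustration only.
import ssl

from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()  # lab host with a self-signed cert
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# A standalone ESXi host presents a single datacenter with one VM folder.
datacenter = content.rootFolder.childEntity[0]
vm_folder = datacenter.vmFolder
resource_pool = datacenter.hostFolder.childEntity[0].resourcePool

# Paths to the .vmx files still sitting on the local datastore.
vmx_paths = [
    "[datastore1] vm01/vm01.vmx",
    "[datastore1] vm02/vm02.vmx",
]

for vmx in vmx_paths:
    task = vm_folder.RegisterVM_Task(path=vmx, asTemplate=False,
                                     pool=resource_pool)
    print(f"Registering {vmx} -> task state: {task.info.state}")

Disconnect(si)
```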
My guess is this may have to do with the way Veeam sees VMs inventoried on an ESXi host and the fact that they had to be re-imported into inventory on the host due to the rebuild. Has anyone else experienced this?
-
@NetworkNerd said:
I completely rebuilt an ESXi host this weekend because the jump drive running ESXi had a bootbank issue. The host was running 5.1U1 (all VMs on local storage) and is now running 5.5U1 and has been patched for Heartbleed.
I back up the VMs on this host and one other with Veeam. After the rebuild, I had to re-import all of the VMs on the host into inventory, then reconnect the host to Veeam and enter its new root credentials so Veeam could back up its VMs successfully. After Veeam re-scanned the host for VMs, my backup jobs went haywire. The selection lists that contained VMs from the newly rebuilt host were all wrong (different VMs on different jobs than there should have been), and backups of any VM on the rebuilt ESXi host would fail. To fix the problem, I had to remove every VM from that host from my Veeam backup jobs and re-add them. Once I did that, it was smooth sailing.
My guess is this may have to do with the way Veeam sees VMs inventoried on an ESXi host and the fact that they had to be re-imported into inventory on the host due to the rebuild. Has anyone else experienced this?
Veeam uses something other than the name to track VMs. I've seen something similar when I moved a VM out of a vSphere datacenter to a mobile host: when I moved the VM back in, Veeam didn't see it until I updated the replica and backup jobs.
-
Veeam tracks all VMs by their unique moRef IDs. The upgrade process seems to have resulted in moRef ID changes, so the jobs containing the previously existing VMs failed. Once the VMs were re-added to the jobs, everything started working as expected.
However, I should mention that an in-place upgrade typically shouldn't lead to moRef ID changes.
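If you want to verify this on your own host, here is a minimal sketch using the pyVmomi SDK (nothing Veeam-specific, and the connection details are placeholders) that prints each VM's name alongside its moRef ID, which makes it easy to see that re-registered VMs pick up new IDs:
```python
# Minimal sketch with pyVmomi: print each VM's name next to its moRef ID
# (e.g. "vm-12"). Connection details are placeholders.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="changeme", sslContext=ctx)
content = si.RetrieveContent()

# Build a container view over every VirtualMachine on the host.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True
)
for vm in view.view:
    print(f"{vm.name}: {vm._moId}")

view.Destroy()
Disconnect(si)
```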
Thanks.
-
@Vladimir-Eremin said:
Veeam tracks all VMs by their unique moRef IDs. The upgrade process seems to have resulted in moRef ID changes, so the jobs containing the previously existing VMs failed. Once the VMs were re-added to the jobs, everything started working as expected.
However, I should mention that an in-place upgrade typically shouldn't lead to moRef ID changes.
Thanks.
Since this was a complete ESXi rebuild on a newer version of ESXi, that makes complete sense. Thanks for sharing!
-
You're welcome. Should any other questions arise, feel free to contact me either here or on our Community Forum. Thanks.
-
Good replies on here. This should help others out.