@dustinb3403 I appreciate the recommendation of alternate imaging software, and I personally would love to use it, but my mission is to get the VHDs stored in the Windows image transferred directly to a blank HDD. No matter the method used, the Windows image must be the source of the restoration.
This is the issue here. He is stating he must use Windows Recovery to restore the operating system.

I'm not sure why, or what the actual requirement is, other than this:
@dustinb3403 Ah yes, I neglected to mention that we are able to clone the drive via Clonezilla, but a lot of the time the users have to continue working while we send out a new drive. So as of now I only have a Windows image to work with, with the constraint of not being able to use a repair disk.
Where he says the users have to be able to use their systems.
The answer to this is an agent-based backup solution (Veeam Endpoint, UrBackup, etc.). These can all write to remote storage, and the backups can be restored onto new drives that then get sent out to the user, who physically swaps the hard drive in their system to install this "restored image".
There are potential issues with this. Restoring a computer image that's three months old could easily land you in a situation where the restored machine has an out-of-date computer account password with the domain, etc.
Too bad OVS isn't in the repos for RHEL/CentOS. You can set up these private networks and connect them through a VXLAN with OVS. That way you can have something like a separate dev network on the same hosts, and it can communicate between hosts.
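For anyone curious, the OVS side of that is only a couple of commands per host. A minimal sketch, assuming OVS were actually installed on both hypervisors; the bridge name `br-dev`, the peer address `192.0.2.2`, the test subnet, and the VNI `100` are all made-up examples:

```shell
# On host A (host B runs the same commands with remote_ip pointed back at A).
ovs-vsctl add-br br-dev
ovs-vsctl add-port br-dev vxlan0 -- \
    set interface vxlan0 type=vxlan \
    options:remote_ip=192.0.2.2 options:key=100

# Attach VM/container interfaces to br-dev as usual, or give the bridge
# an address just to verify the tunnel carries traffic between hosts:
ip addr add 10.10.0.1/24 dev br-dev
ip link set br-dev up
```

Each distinct `key` (VNI) gives you an isolated overlay, so a separate dev network is just another bridge and tunnel with a different key.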
Not available in the EPEL repo?
That is apparently the case, unless my google-fu isn't up to snuff.
Nope. It is available in Fedora, though. If you want to install it you have to build the RPMs manually. While they're not hard to build, it would be a pain to keep up with updates.
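The manual build is roughly this. A sketch based on the upstream install docs; the release version `2.5.0` is just an example, and the exact prerequisite package list may differ on your box:

```shell
# Build prerequisites (see the OVS install docs for the full list)
sudo yum groupinstall -y "Development Tools"
sudo yum install -y rpm-build openssl-devel

# Fetch and unpack a release tarball (version is an example)
curl -O https://www.openvswitch.org/releases/openvswitch-2.5.0.tar.gz
tar xf openvswitch-2.5.0.tar.gz
cd openvswitch-2.5.0

# Build the RPMs from the bundled spec files, then install what you need
./configure
make rpm-fedora
sudo yum localinstall rpm/rpmbuild/RPMS/x86_64/openvswitch-*.rpm
```

And that's the pain point: every upstream release means repeating this by hand instead of a `yum update`.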
OVS is used by oVirt, so maybe the CentOS oVirt repo has it (or the oVirt stable repo).
I'm assuming it's just a matter of building the RPM, since it's not in the normal repo.
And after a lot of episodes, I've fixed the thing.

Please let me state this again: I HAVE. Not HPE support, not the reseller's tech support.

I = an almost-idiot ex-embedded-software developer illogically cast into the sysadmin role at a nonsense company!
There was a mismatch between the OS version, BIOS version, controller firmware version, and controller driver version. Selecting the right combination has (apparently) fixed the issue!
How storage controllers affect reboot signals is still a mystery... but actually it all came down to the smart paging file I mentioned before. Moving it from the default location caused all the issues.
I discovered the "bug" by recreating a new VM with a copy-pasted HDD but with hypervisor defaults... two twin VMs, one running, one not... the only difference: the location of the smart paging file. XD