Anyone using @halizard in 2021? I am implementing it on 2 XCP-NG 8.2 hosts and everything is going well, but the tests are not over yet. However, I was unable to create the SR using XCP-NG Center: I get the error that the SR could not be connected because the gfs2 driver was not recognized. Yet it is possible to see that the tooling is available with the command yum search gfs2 (this command finds glusterfs packages and also gives the correct package name). The command yum install <full name I don't remember> --enablerepo=base,updates downloads from the base repository without permanently enabling that repository in the default XCP-NG settings (the XCP-NG documentation guides you through this). Finally, the command mkfs.gfs2 -V shows that the tool is installed, but the SR still doesn't work through XCP-NG Center. Based on various forums, I suppose it is a problem with XCP-NG Center itself, because creating and connecting the SR via the graphical screen directly on the server (or through xsconsole) worked perfectly. When created on the primary, the same SR appeared immediately on the secondary.
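For reference, the package checks described above can be reproduced from the host console roughly like this (a sketch only: gfs2-utils is my assumed package name, since I don't remember the exact one; use whatever yum search reports):

```shell
# Look for the gfs2 tooling; this also turns up glusterfs packages
yum search gfs2

# Install it, enabling the stock repos only for this one transaction
# (XCP-NG ships them disabled by default)
yum install gfs2-utils --enablerepo=base,updates

# Confirm the userspace tool is installed
mkfs.gfs2 -V
```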
Now the battle continues. With all the configuration done on an open internet network, I need to change the IP of the management card to an IP on my internal network and perform some reboots to see how the system behaves. This part is causing some problems, but they will be overcome. I'll keep you informed.
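One way to do the management re-IP from the host console is sketched below (the addresses and the <pif-uuid> placeholder are mine; the "Network and Management Interface" menu in xsconsole performs the same steps interactively):

```shell
# Find the UUID of the PIF currently carrying the management interface
xe pif-list params=uuid,device,management

# Give it a static address on the internal network (placeholder values)
xe pif-reconfigure-ip uuid=<pif-uuid> mode=static \
    IP=192.168.10.10 netmask=255.255.255.0 gateway=192.168.10.1 DNS=192.168.10.1

# Re-point the management interface at the reconfigured PIF
xe host-management-reconfigure pif-uuid=<pif-uuid>
```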
Thank you all.
Best posts made by fabiorpn
-
Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment
Latest posts made by fabiorpn
-
RE: Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment
@dbeato Hi.
My VMs are on an intranet via a bond of network cards 0 and 1. However, the management interface (network card 4) also appears as an available connection on the VMs. When I need to update something on the VMs without worrying about the proxy, I just switch their network from interface 4 to the external internet, so there is no downtime for the VMs. However, I still can't do the same with XCP-NG itself and keep the VMs connected: for the hosts to recognize the new network, they need to reboot, even though I have tried several preparatory steps, for example stopping all HA and replication services. But I haven't been too concerned with this case, because we already have alternatives in case we need to update XCP-NG. I was unable to make the proxy work directly on Dom0, even though it is an ordinary Linux (CentOS-based). Would anyone know how to do this permanently? I don't know if I answered your question.
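For reference, what I intend to try next to make the proxy permanent on the CentOS-based Dom0 (a sketch; the proxy address is a placeholder, and I can't vouch for every XCP-NG daemon honoring it): set yum's proxy in /etc/yum.conf and export the usual variables for login shells.

```shell
# Placeholder proxy address; replace with your own
PROXY=http://proxy.example.com:3128

# yum honors a proxy= line in /etc/yum.conf
grep -q '^proxy=' /etc/yum.conf || echo "proxy=$PROXY" >> /etc/yum.conf

# Export http(s)_proxy for login shells on Dom0
cat > /etc/profile.d/proxy.sh <<EOF
export http_proxy=$PROXY
export https_proxy=$PROXY
EOF
```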
-
RE: Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment
@dbeato Ah, thank you. I am indeed seeing these errors, but I have read a lot about them, and we do not have the resources to invest in paid solutions. Our infrastructure is small: one Windows Server 2019 (AD, DHCP, DNS...), an old Red Hat, an Ubuntu for an intranet page, and a Windows Server 2016 for a few applications. All are accessed 24 hours a day, but by FEW users. So we are betting on this solution. We are hopeful.
-
RE: Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment
@dbeato The idea is to use only 2 nodes sharing local storage; we don't have the resources to invest in a SAN, and that is exactly the functionality HA-LIZARD proposes to provide. The tests continue: we are currently simulating failures and documenting what to do when they happen.
When changing the IP from an open network to a closed network on both nodes simultaneously, the master works fine (after a restart), but the slave loses all network connections. After the master comes back up, it is necessary to perform an emergency reset of the network settings on the slave and reboot it. When it comes back, its bond must be recreated (through XCP-NG Center), and then synchronization resumes. Even so, the VMs never stopped working on the master. We will still improve this procedure by doing the change in maintenance mode, one node at a time.
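The one-node-at-a-time procedure we intend to test can be sketched with the xe CLI like this (the <host-uuid> placeholder is mine, and stopping HA-lizard first assumes the service is named ha-lizard):

```shell
# Stop HA-lizard on the node before touching the network (assumed service name)
service ha-lizard stop

# Maintenance mode: stop accepting VMs, then migrate any running VMs away
xe host-disable uuid=<host-uuid>
xe host-evacuate uuid=<host-uuid>

# ...change the management IP / recreate the bond here, then reboot the node...

# Bring the node back into service once it is reachable again
xe host-enable uuid=<host-uuid>
```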
-
RE: Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment
@dbeato Hi!
Did I post it in the wrong place? If so, I apologize; it's my first time here.
-
Ha-lizard on the XCP-NG 8.2 in 2021. Progress of my deployment