Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.
-
Hi,
I am running the latest Fedora 39 Server Edition with one Qemu/KVM VM. The VM is connected to the network via "Direct Attachment". Debian 11 with Nextcloud is running in the VM.
Traffic for ports 80 and 443 is forwarded by the router directly to the local IP of the VM. For the WAN IP I use a DynDNS service.
Since the VM is connected to the host's network device via "direct attachment", the host and the VM are isolated from each other. Everything works great. The local IP range I can use is 10.0.0.1 to 10.0.0.137. 10.0.0.138 is the router.
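For reference, the relevant part of the VM definition looks roughly like this (just a sketch; the host NIC name eno1 is an example, yours may differ):
```xml
<!-- libvirt guest NIC in "Direct Attachment" (macvtap) mode -->
<interface type='direct'>
  <source dev='eno1' mode='bridge'/>  <!-- host NIC the guest hangs off -->
  <model type='virtio'/>
</interface>
```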
So much for the basic configuration. I would now like to add one or two more VMs to the host via Qemu/KVM. These should also be reachable from outside.
I installed HAProxy on the Fedora host and configured it accordingly. "Direct Attachment" between host and VM does not work with HAProxy on the host, so I tried "Virtual Network" and "Bridge to LAN". With both, a new local network with the IP range 192.168.122.x is created.
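(For reference, the 192.168.122.x range comes from libvirt's default NAT network; "virsh net-dumpxml default" shows roughly this, with uuid/mac lines omitted:)
```xml
<network>
  <name>default</name>
  <forward mode='nat'/>  <!-- guests are NATed behind the host -->
  <bridge name='virbr0' stp='on' delay='0'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>
```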
HAProxy can reach the two VMs. With my DynDNS provider I have created corresponding domains for the respective VMs, which are updated via ddclient. The problem is that the VMs cannot be reached from outside; I can ping them locally.
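The idea of my HAProxy config is roughly this (a simplified sketch; the backend IPs are placeholders, and my real config also handles 443):
```
frontend http_in
    bind *:80
    mode http
    # route by Host header to the right VM (backend IPs are examples)
    acl is_prod hdr(host) -i mycloud.home-webserver.no
    acl is_test hdr(host) -i mycloud-testing.home-webserver.no
    use_backend nextcloud_prod if is_prod
    use_backend nextcloud_test if is_test

backend nextcloud_prod
    mode http
    server vm_prod 192.168.122.10:80 check

backend nextcloud_test
    mode http
    server vm_test 192.168.122.11:80 check
```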
I think the problem lies with the bridge between the host and the VMs.
If I install HAProxy on a separate PC and connect the VMs to the host via "Direct Attachment", the connection from outside works. But I don't want to use an extra PC just for HAProxy. Surely this must also work on my Fedora host server?
Google and all the AIs couldn't help me.
I hope human intelligence can help here. Any help is greatly appreciated.
-
@Woti said in Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.:
With my DynDNS provider I have created corresponding domains for the respective VMs,
Why would you need more than one DynDNS domain for your VMs, if they are both on the same external IP?
(I know this is not the question, but I wonder) -
@Woti How many LAN cards/ports do you have on your physical host?
If you want to share the same physical LAN for multiple VMs, you need to use "Bridged mode", not "Direct attachment".
Have you even tried with Bridged mode?
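If not, the usual way on Fedora is to create a real bridge on the host and attach the VMs to it, roughly like this (interface names are just examples):
```sh
# create a bridge br0 and enslave the physical NIC (names are examples)
nmcli con add type bridge ifname br0 con-name br0
nmcli con add type ethernet ifname eno1 master br0
nmcli con up br0
# then in virt-manager pick "Bridge device" br0, or in the domain XML:
#   <interface type='bridge'><source bridge='br0'/></interface>
```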
-
Why is HAProxy on the host? I would always put it on its own VM and keep the host isolated as the host. If you put HAProxy on it, it's now functioning as one of the worker VMs as well as the host. While this works fine in theory, it's extra complicated and violates fundamental design principles for virtualization. The host is expected to be limited strictly to host functions, not to also be handling a random workload on the side.
-
Hello again
@Mario-Jakovina:
Because one domain is intended for everyday use and the other for testing purposes. The domain for everyday use has been in use for a few years. This domain is used for Debian with Nextcloud. Since Nextcloud is developing very quickly, I would like to have an extra VM for testing purposes.
Since I also need ports 80 and 443 from the outside for testing, I have to use a reverse proxy that routes the requests from the same external IP inside to the correct local IPs of the corresponding VMs.
That's why there are 2 different DynDNS domains.
Or perhaps I am expressing myself incorrectly: I mean ONE DynDNS domain with a subdomain. For example, the main domain is <mycloud.home-webserver.no> and the subdomain is <mycloud-testing.home-webserver.no>.
Yes, I tried bridge mode. It hadn't worked at first, but I found the error, and the bridge between the host and the VMs works now. I can ping the VMs from the host and vice versa, by IP address and by domain name.
However, I can only reach the VMs via their local IP address, not via the domains when using a browser.
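For completeness, the ddclient side just updates both names against the same account, roughly like this (protocol, server, and credentials are placeholders for my provider):
```
# /etc/ddclient.conf (sketch; provider details are placeholders)
protocol=dyndns2
use=web
server=members.example-dyndns.net
login=myaccount
password='secret'
mycloud.home-webserver.no,mycloud-testing.home-webserver.no
```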
@scottalanmiller
Because I thought it was easier. I didn't want another VM just because of HAProxy. That's even more maintenance work.
I understand what you mean about the host needing to remain isolated. That's how it has been so far.
Now with the new configuration it is no longer isolated, that's true. I agree with you, it is a security risk.
On the other hand, I don't want to set up yet another physical device just for HAProxy either. What's the best approach here?
-
@Woti You made this job harder by not using a VM for the proxy: the host cannot talk to the guests, and you are trying to run the proxy on that very host.
This is why you never run anything on the host.
-
oookkaayyyy I'll try with a VM for the proxy
-
@Woti said in Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.:
Yes, I tried bridge mode. It hadn't worked at first, but I found the error, and the bridge between the host and the VMs works now. I can ping the VMs from the host and vice versa, by IP address and by domain name.
However, I can only reach the VMs via their local IP address, not via the domains when using a browser.
I think you should first solve the issue of why you can't reach your VMs from outside your LAN (I mean before you set up HAProxy).
I would first test whether you can reach them via the external IP address (have you tried this?).
If that is OK, then test access via the DynDNS domain.
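Something like this from outside your LAN (phone hotspot etc.); the IP below is only a placeholder for your current WAN address:
```sh
# 1) raw external IP, with the Host header set since HAProxy routes by name
curl -v -H 'Host: mycloud.home-webserver.no' http://203.0.113.45/
# 2) then the DynDNS name itself
curl -v http://mycloud.home-webserver.no/
# 3) for HTTPS you can pin the name to the IP while testing
curl -vk --resolve mycloud.home-webserver.no:443:203.0.113.45 https://mycloud.home-webserver.no/
```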