Network setup for production KVM
-
@stacksofplates No loss of access.
But that was me creating a bridge from my Fedora desktop; the bridge itself is on a remote server.
-
@jaredbusch said in Network setup for production KVM:
So for those who have KVM in production, how do you setup the network?
In Hyper-V I always team the NICs in switch-independent mode and then make the vSwitch on the team. The host will have access to the guest VM networks.
For my home lab (Fedora 26), my desktop (F25), and my laptop (F26) I just use macvtap in bridged mode, but I have no host-to-guest communication. This is not an issue for my lab or desktop, but I do not want this in production.
So if I have 2-4 NICs in a server, assuming Fedora 26 or RHEL 7:
- How should I team them?
- Should I create a bridge?
- What source mode should I use?
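One way to get the teamed-NICs-plus-bridge layout asked about above is with nmcli on Fedora/RHEL. This is only a sketch: the interface names (eno1, eno2), connection names, and bond mode are assumptions to adjust for your hardware and switch.

```shell
# Hypothetical NIC names eno1/eno2; use "mode=802.3ad" instead if your
# switch is configured for LACP.
nmcli connection add type bond ifname bond0 con-name bond0 \
    bond.options "mode=active-backup,miimon=100"
nmcli connection add type ethernet ifname eno1 con-name bond0-port1 master bond0
nmcli connection add type ethernet ifname eno2 con-name bond0-port2 master bond0

# Put a bridge on top of the bond; the host and the guests share it.
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection modify bond0 connection.master br0 connection.slave-type bridge

# Bring it up; the host's IP lives on br0 from here on.
nmcli connection up br0
nmcli connection up bond0
```

Guests then attach to br0 instead of a macvtap device, which gives the host-to-guest access described below.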
I have stuff in prod running with macvtap, and then there is a separate network for access to the host if the VM needs it. But I also have stuff in prod with a bridge and full access. It just depends on what you want.
IIRC NetworkManager sometimes doesn't play nicely with libvirt and other bits (or it may have just been with bridging in general). You can build a bridge or bond directly from Virt-Manager as well, but I believe it uses the network service rather than NetworkManager.
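If you go the classic network-service route instead of NetworkManager, the bridge is just a pair of ifcfg files. A sketch, with hypothetical NIC name eno1 (on RHEL 7 you would also set `NM_CONTROLLED=no` if NetworkManager is still running):

```shell
# Bridge device; the host's IP lives here.
cat > /etc/sysconfig/network-scripts/ifcfg-br0 <<'EOF'
DEVICE=br0
TYPE=Bridge
BOOTPROTO=dhcp
ONBOOT=yes
EOF

# Enslave the physical NIC to the bridge.
cat > /etc/sysconfig/network-scripts/ifcfg-eno1 <<'EOF'
DEVICE=eno1
TYPE=Ethernet
BRIDGE=br0
ONBOOT=yes
EOF

systemctl restart network
```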
-
Doesn't using a bridge have a negative impact on the network performance of the VM?
-
@tim_g said in Network setup for production KVM:
Doesn't using a bridge have a negative impact on the network performance of the VM?
Not usually.
-
@jaredbusch why don't you add host interface to macvtap bridge and route all traffic through it? I'm doing that with my LXD containers and host.
Here's how to do it:
http://noyaudolive.net/2012/05/09/lxc-and-macvlan-host-to-guest-connection/
-
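The trick from the link boils down to giving the host its own macvlan interface on the same physical NIC, so host traffic goes through the macvlan bridge instead of the NIC directly. A sketch with a hypothetical NIC eno1 and RFC 5737 example addresses; run this from the console, not over SSH on that NIC, since it moves the host's IP:

```shell
# Create a macvlan interface on the physical NIC in bridge mode, so the
# host can exchange traffic with macvtap/macvlan guests on the same NIC.
ip link add macvlan0 link eno1 type macvlan mode bridge
ip link set macvlan0 up

# Move the host's IP from eno1 to macvlan0 and route through it.
ip addr flush dev eno1
ip addr add 192.0.2.10/24 dev macvlan0
ip route add default via 192.0.2.1 dev macvlan0
```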
@marcinozga said in Network setup for production KVM:
@jaredbusch why don't you add host interface to macvtap bridge and route all traffic through it? I'm doing that with my LXD containers and host.
Here's how to do it:
http://noyaudolive.net/2012/05/09/lxc-and-macvlan-host-to-guest-connection/
All you are doing there is making a bridge on the host.
Also, LXC is containerization, not virtualization.
-
@jaredbusch said in Network setup for production KVM:
@marcinozga said in Network setup for production KVM:
@jaredbusch why don't you add host interface to macvtap bridge and route all traffic through it? I'm doing that with my LXD containers and host.
Here's how to do it:
http://noyaudolive.net/2012/05/09/lxc-and-macvlan-host-to-guest-connection/
All you are doing there is making a bridge on the host.
Also, LXC is containerization, not virtualization.
It's both. Containerization is Type-C Virtualization. It's always been considered a form of virtualization, even though it is a totally different technological approach.
-
Containers are the trendy new term for OS Level Virtualization. https://en.wikipedia.org/wiki/Operating-system-level_virtualization
-
@jaredbusch no, you're creating a macvlan interface on the physical host adapter. By routing traffic through it, you allow the host to communicate with the guests.
Containers or VM guests make no difference here.
-
Perhaps this explains it better: https://superuser.com/a/368023
-
@marcinozga said in Network setup for production KVM:
Containers or VM guests make no difference here.
Rarely does.