Networking and 1U Colocation
-
You can also do extra hardening with something like SCAP.
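For example, with OpenSCAP plus the scap-security-guide content you can scan the host against a baseline and get a remediation report. The package, datastream path, and profile name below are what I'd expect on a CentOS 7 box; they vary per distro, so treat them as placeholders:
# install the scanner and the SSG content, then run a baseline scan
yum install -y openscap-scanner scap-security-guide
oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_standard --results /tmp/scan-results.xml --report /tmp/scan-report.html /usr/share/xml/scap/ssg/content/ssg-centos7-ds.xml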
-
The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible that I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.
It seems like there has to be a way for my host to be able to access the Internet through one of the guests.
-
@eddiejennings said in Networking and 1U Colocation:
The story has evolved a bit, as Colocation America gave me a /29 network rather than a /30, so it's possible that I could just assign a public IP to the other physical NIC on my server -- though that doesn't seem like good practice.
It seems like there has to be a way for my host to be able to access the Internet through one of the guests.
The only way to do that is a full bridge. Either a normal bridge or an OVS bridge.
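Roughly what that looks like with NetworkManager for a normal bridge, or ovs-vsctl for OVS (the interface names br0/ovsbr0/eno1 are placeholders for whatever the box actually has):
# normal Linux bridge: create it, enslave the physical NIC, bring it up
# (the host's IP config moves to br0, so expect a brief connectivity drop)
nmcli connection add type bridge ifname br0 con-name br0
nmcli connection add type bridge-slave ifname eno1 master br0
nmcli connection up br0
# OVS equivalent
ovs-vsctl add-br ovsbr0
ovs-vsctl add-port ovsbr0 eno1
The guests then point their interface definitions at the bridge (source bridge='br0' in the domain XML) instead of a NAT network.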
-
I just tested it on my one hypervisor. If I set hosts.allow to my ZT address on my laptop and hosts.deny to all, I can still ssh to the KVM host over ZT.
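For anyone following along, that's just TCP wrappers, so the files look something like this (the ZeroTier address is a placeholder, and it only applies if sshd is built with libwrap support, which it is on CentOS 7 but not on distros that have dropped wrappers):
# /etc/hosts.allow
sshd: 10.147.17.25
# /etc/hosts.deny
sshd: ALL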
-
@stacksofplates said in Networking and 1U Colocation:
I just tested it on my one hypervisor. If I set hosts.allow to my ZT address on my laptop and hosts.deny to all, I can still ssh to the KVM host over ZT.
So applying that to my scenario, one of your KVM host's NICs would have a public IP address, correct?
There was one point you made that I missed: eventually, there will be others connecting to the VMs. I'm planning on running a NextCloud VM, a PBX, and Zimbra.
-
This looks like it worked. I added this line to the appropriate network using virsh net-edit:
<route address='0.0.0.0' prefix='0' gateway='192.168.100.1'/>
(Yes, the final subnet decision was to use 192.168.100.0/24.) That created a default route, which shows up with ip route show. If I can get DNS resolution, then I'm all set.
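For context, that <route> element sits inside the <network> definition that virsh net-edit opens. A trimmed sketch of what the XML ends up looking like (the network name and the host's own address on the bridge are placeholders, not necessarily what's on my box):
<network>
  <name>routed-lan</name>
  <bridge name='virbr1'/>
  <ip address='192.168.100.2' netmask='255.255.255.0'/>
  <route address='0.0.0.0' prefix='0' gateway='192.168.100.1'/>
</network>
Here 192.168.100.1 would be the gateway guest's internal address, so the host's default route points at that VM.
-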
And for DNS, this worked.
nmcli connection mod virbr1 ipv4.dns "8.8.8.8"
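To double-check the setting or re-apply the profile later (assuming NetworkManager is what's managing virbr1 here):
nmcli -f ipv4.dns connection show virbr1
nmcli connection up virbr1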
-
@eddiejennings said in Networking and 1U Colocation:
@stacksofplates said in Networking and 1U Colocation:
If you're not having other people connect to it and it's just for testing, I'd just let the connection go to the host (SSH and Cockpit) and then join all of your VMs to ZeroTier.
Would you expose your hypervisor to the Internet with no firewall in between?
I forget what hypervisor you're doing and don't feel like scrolling up, so I'm assuming KVM.
But I see no reason to treat the hypervisor much differently than a VPS, which is basically directly exposed to the public too.
For your hypervisor, you can do what I do for my VPS and ONLY allow SSH, use only key-based access, and no root login via ssh. Also make sure you've got logwatch and fail2ban going.
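In sshd_config terms that boils down to something like this (AllowUsers is a placeholder account name; confirm a working key login before restarting sshd so you don't lock yourself out):
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
ChallengeResponseAuthentication no
PubkeyAuthentication yes
AllowUsers youradminuser
Then systemctl restart sshd and only open SSH in the firewall.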
-
Another good idea is to use something to keep your hypervisor in a specified state, such as SaltStack. That's what I use on my VPS, so I always know a bunch of specific things are ALWAYS in check.
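As a rough illustration of what such a Salt state could look like (the state name and source path are made up for the example), this keeps sshd_config pinned and restarts the service if it drifts:
# /srv/salt/ssh_hardening.sls
sshd_config:
  file.managed:
    - name: /etc/ssh/sshd_config
    - source: salt://ssh/files/sshd_config
    - user: root
    - group: root
    - mode: 600

sshd:
  service.running:
    - enable: True
    - watch:
      - file: sshd_config
Apply it with salt-call --local state.apply ssh_hardening on the hypervisor itself, or target it from a master.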
-
@tim_g said in Networking and 1U Colocation:
fail2ban
Fail2ban does nothing with key-based access. It's denied before fail2ban even sees it.