@JaredBusch said in Firewalls, the good, the bad, and the ugly.:
Specific customization can only be done by creating a special text file and putting it in a specific location.
There's your shot to start with Ansible
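If it's just a text file that has to land in a known spot, even an ad-hoc Ansible command covers it. A rough sketch, with the inventory group and destination path as placeholders:

```
# Sketch only: "firewalls" group and the destination path are placeholders.
ansible firewalls -b -m copy \
  -a "src=files/custom-rules.conf dest=/etc/example/custom-rules.conf owner=root mode=0644"
```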
@DustinB3403 said in XOA Pricing Model - What might it look like from a US perspective:
@Danp said in XOA Pricing Model - What might it look like from a US perspective:
There's also HA-Lizard.
That's true, totally forgot about that one. (and I even deployed it)
Didn't it blow up on you? That's also limited to two nodes. And fairly susceptible to split brain.
Alert emails aren't that pretty, but who really needs them to be?
@DustinB3403 said in Burned by Eschewing Best Practices:
@stacksofplates said in Burned by Eschewing Best Practices:
@DustinB3403 said in Burned by Eschewing Best Practices:
@Dashrender said in Burned by Eschewing Best Practices:
@DustinB3403 said in Burned by Eschewing Best Practices:
Potential IPOD in the works for a mere 43TB.
Mere?
Someone's perspective seems odd.
Really, it is a mere 43TB. SANs shouldn't even be considered until you're into the hundreds of TB, which is where you start to reach the limits of a single server.
The size of data isn't really a limiting factor. It's the number of hosts that need to share the storage.
Sure, but I can fit that much storage into a single server and be within the tolerances set by the OP.
So while he is downsizing to two servers, one might be all that he actually requires, assuming SA is good enough.
The point was that the statement you made was a definitive statement without any relation to the OP. If you had said "in this case", fine. But it was just a blanket statement.
@DustinB3403 said in Monitoring Systems:
Grafana is pretty cool looking.
Here's a simple dashboard I built for a few VMs at home to show load.
This is a really detailed one that is tabbed. I couldn't possibly show all of it.
@momurda said in openvas test results:
Actually doing some work while ranting in another thread.
All the Linux servers I have been scanning with OpenVAS show basically the same vulnerabilities.
I think I know how to mitigate the SSH weak encryption/MAC algorithm findings. Where can I find a list of good ciphers? The ssh_config and sshd_config mostly show these older ones listed as weak.
The TCP timestamp one can possibly allow someone to see my server uptime? Why is that bad?
Sorry for the basic questions.
Here's some hardened SSH stuff
https://mangolassi.it/topic/10391/fairly-hardened-jump-box
Also if you run SCAP on a machine it will give you a report with mitigation information.
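For the cipher question specifically, the short version is to whitelist modern algorithms in /etc/ssh/sshd_config. A minimal sketch (trim the lists to what your clients actually support):

```
# Needs a reasonably recent OpenSSH; verify client support before deploying.
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
```

Restart sshd afterward and keep your existing session open in case you lock yourself out.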
@scottalanmiller said in open source hypervisors: do we really have them? do we really need them?:
@msff-amman-Itofficer said in open source hypervisors: do we really have them? do we really need them?:
If you want to bypass all this just get ESXi licensed, and you're set.
Doing all that is easier than getting the license, I've tried.
If you want the power of KVM without the complexity, Scale HC3 is the way to go.
I don't think KVM has any complexity. I always thought XenServer was too complex to manage. Cross referencing UUIDs to image names is annoying. Not being able to store images in whatever directory you want is annoying. Not being able to store ISOs on your host is annoying (not using XO).
KVM is stupid simple. Click the hypervisor role on CentOS install. Done. You can store images in 1000 different directories if you want. Virsh and the virt tools (virt-sysprep, virt-customize, virt-builder, etc.) give you so much power. Networking is done with dnsmasq, so it's easy to set reservations and do DNS within the host.
Single host deployments are stupid easy. Multi-host deployments add some complexity, but orchestration makes everything easy.
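For a feel of how little is involved on a single host, something like this builds and boots a guest (names, sizes, and paths are just examples):

```
# Build a CentOS disk image with virt-builder, then import it as a VM.
virt-builder centos-7.6 \
  --output /var/lib/libvirt/images/web01.qcow2 --format qcow2 \
  --root-password password:changeme --hostname web01

virt-install --name web01 --memory 2048 --vcpus 2 \
  --disk /var/lib/libvirt/images/web01.qcow2 \
  --import --os-variant centos7.0 --network network=default --noautoconsole
```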
@FATeknollogee said in open source hypervisors: do we really have them? do we really need them?:
@stacksofplates said in open source hypervisors: do we really have them? do we really need them?:
Multi-host deployments add some complexity, but orchestration makes everything easy.
Details, please?
You just manage the host like anything else. I ship the template to each host. I clone the template with the correct MAC and it gets whatever reservation it's supposed to get. Then Ansible does all of the work. 99% of my systems don't get backed up because it's all code based. The 1% that do have backing stores and agent-based backups that are orchestrated and part of the code base for that VM.
You essentially treat your hosts like data center regions on a cloud provider. VMs replicate within themselves. The hosts are just a place for them to run. There is nothing special about any of the hosts.
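The clone-with-a-known-MAC workflow looks roughly like this (MAC, IP, and names are placeholders):

```
# Clone the template with a fixed MAC address.
virt-clone --original template --name app01 \
  --file /var/lib/libvirt/images/app01.qcow2 --mac 52:54:00:aa:bb:01

# Pin that MAC to an address in libvirt's dnsmasq on the default network.
virsh net-update default add ip-dhcp-host \
  "<host mac='52:54:00:aa:bb:01' ip='192.168.122.101'/>" --live --config

# Boot it and hand off to Ansible.
virsh start app01 && ansible-playbook -l app01 site.yml
```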
@FATeknollogee said in open source hypervisors: do we really have them? do we really need them?:
@stacksofplates said in open source hypervisors: do we really have them? do we really need them?:
VMs replicate within themselves.
On a single host or across multiple hosts?
Across multiple. This has to be set up, obviously. I usually use floating IPs, and if there is stateful data that needs to be replicated I'll use Gluster. But if it's just stateless data, I'll just use floating IPs.
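The Gluster side is just a replicated volume between the VMs. A sketch, with hostnames and brick paths as examples:

```
# Two-way replicated volume; run from one of the VMs.
gluster peer probe vm2.example.com
gluster volume create appdata replica 2 \
  vm1.example.com:/bricks/appdata vm2.example.com:/bricks/appdata
gluster volume start appdata
```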
I might have to buy one of these for my house. Currently I'm using a wire rack to hold three 2U servers and switches. A couple of these would be nice.
Both of my R710s and my DL380 have 8 cores. The R710s were under $200 and the DL380 was like $230.
@wirestyle22 said in Error getting authority: Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1):
After power outage tonight my VM booted into maintenance mode and displayed:
Error getting authority: Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1)
```
ls -l /dev/disk/by-uuid
nano /etc/fstab
```
As you can see, I commented out two just to see if it would boot without error, and it did. My question is how do I know which UUID to use for what? I'm also not entirely sure of the syntax for this.
You don't need the UUID for logical volumes. It's really only for partitions.
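An fstab can mix both styles, for example (the UUID is a placeholder, pulled from /dev/disk/by-uuid):

```
# LVM volumes mount fine by their device-mapper path; UUIDs matter most
# for plain partitions, whose /dev names can change between boots.
/dev/mapper/centos-root              /      xfs   defaults   0 0
UUID=<uuid-from-/dev/disk/by-uuid>   /boot  xfs   defaults   0 0
```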
@scottalanmiller said in Error getting authority: Error initializing authority: Could not connect: No such file or directory (g-io-error-quark, 1):
This is hardware RAID, but lsblk is showing individual drives coming from the controller. So it looks like the controller has potentially failed?
They're VHDs
I'm also assuming this VM is running on a XenServer since the disks are xvda and xvdb.
@Dashrender said in Really Panda AV?:
You're kidding right?
Windows does this exact same thing today. Updates install silently in the background and HOPE you'll reboot on your own inside of 3 days. If you don't, I think it auto reboots.
So if windows was in the same situation, you'd be just as stuck.
Assuming 'nix has an autoupdate feature, how are reboots handled when that needs to happen?
It's also possible that 'nix doesn't need to do processing on the way down during a reboot because of architecture differences.
We don't "have" to reboot; we can live patch the kernel. Obviously you still want to reboot eventually, but you aren't forced to. And even before live patching existed you weren't forced to, because that's ridiculous.
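Assuming kpatch (one of the live patching tools on the RHEL/CentOS side) is installed and you have a patch module, it looks roughly like:

```
# Sketch: the patch module path is a placeholder.
kpatch load /var/lib/kpatch/patch-module.ko   # apply without rebooting
kpatch list                                   # confirm the patch is active
```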
oVirt should be run on CentOS. It's the upstream for RHEV.
Mist.io also does KVM management.
I don't use web interfaces for KVM. Either the CLI or virt-manager through SSH.
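For example (host name is a placeholder):

```
# Manage a remote KVM host over SSH; no web UI involved.
virt-manager --connect qemu+ssh://root@kvmhost/system
virsh --connect qemu+ssh://root@kvmhost/system list --all
```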
Ah looks like HPE bought it and then discontinued development? Idk
@dbeato said in Really Panda AV?:
@stacksofplates That sucks.
Ya luckily I don't use it that much. All of my stuff is Linux.
I use it on Fedora. You can have it output what Git branch you are on. It's a pretty awesome tool.