    Woti
    • Topics 4
    • Posts 72

    Posts

    • RE: Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.

      oookkaayyyy I'll try with a VM for the proxy 🙂

      posted in IT Discussion
      Woti
    • RE: Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.

      Hello again 🙂

      @Mario-Jakovina:
      Because one domain is intended for everyday use and the other for testing purposes. The everyday domain has been in use for a few years; it points to a Debian VM with Nextcloud. Since Nextcloud is developing very quickly, I would like to have an extra VM for testing purposes.
      Since I also need ports 80 and 443 from the outside for testing, I have to use a reverse proxy that routes requests arriving on the same external IP to the correct local IPs of the corresponding VMs.
      That's why there are 2 different DynDNS domains.
      Or perhaps I am expressing myself incorrectly: I mean ONE DynDNS domain with a subdomain. For example, the main domain is <mycloud.home-webserver.no> and the subdomain is <mycloud-testing.home-webserver.no>.

      Yes, I tried bridge mode. It didn't work at first, but I found the error and the bridge between the host and the VMs works now. I can ping the VMs from the host and vice versa, both by IP address and by domain name.

      However, I can only reach the VMs via their local IP address, not via the domains when using a browser.

      @scottalanmiller
      Because I thought it was easier. I didn't want another VM just for HAProxy; that's even more maintenance work.
      I understand your point that the host should remain isolated. That's how it has been so far.
      With the new configuration it is no longer isolated, that's true, and I agree with you that it is a security risk.
      On the other hand, I don't want to set up yet another physical device for HAProxy either.

      What's the best match here?
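
      For the routing question itself, the usual HAProxy pattern is one frontend that switches on the requested hostname. A minimal sketch, using the two domains from this thread and assuming hypothetical VM addresses 10.0.0.10 and 10.0.0.11 (TLS is passed through via SNI rather than terminated on the proxy):

      frontend https_in
          bind *:443
          mode tcp
          tcp-request inspect-delay 5s
          tcp-request content accept if { req_ssl_hello_type 1 }
          use_backend bk_cloud   if { req_ssl_sni -i mycloud.home-webserver.no }
          use_backend bk_testing if { req_ssl_sni -i mycloud-testing.home-webserver.no }

      backend bk_cloud
          mode tcp
          server nextcloud 10.0.0.10:443

      backend bk_testing
          mode tcp
          server testing 10.0.0.11:443

      With TCP-mode passthrough each VM keeps its own certificates; terminating TLS on the proxy instead would require mode http and the certificates on the HAProxy box.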

      posted in IT Discussion
      Woti
    • Fedora 39 Server as host with HAproxy and Qemu/KVM virtual machines. Trouble with communication.

      Hei,

      I am running the latest Fedora 39 Server Edition with one Qemu/KVM VM. The VM is connected to the network via "Direct Attachment". Debian 11 with Nextcloud is running in the VM.
      The data traffic is sent from the router directly to the local IP of the VM for ports 80 and 443. For the WAN IP I use a DynDNS service.
      Since the VM is connected to the host's network device via "direct attachment", the host and the VM are isolated from each other. Everything works great. The local IP range I can use is 10.0.0.1 to 10.0.0.137. 10.0.0.138 is the router.
      So much for the basic configuration.

      I would now like to add one or two more VMs to the host via Qemu/KVM. These should also be reachable from outside.
      I installed HAProxy on the Fedora host and configured it accordingly. "Direct Attachment" between host and VM does not work with HAProxy, so I tried "Virtual Network" and "Bridge to LAN". For both, a new local network with the IP range 192.168.122.x is created.
      HAProxy finds the two VMs. With my DynDNS provider I have created corresponding domains for the respective VMs, which are updated via ddclient. The problem is that the VMs cannot be reached from outside; I can ping them locally.
      I think the problem lies with the bridge between the host and the VMs.
      If I install HAProxy on a separate PC and connect the VMs to the host via "Direct Attachment", the connection from outside works. But I don't want to use an extra PC just for HAProxy. Surely this must also work on my Fedora host server?
      Google and all the AIs couldn't help me.
      I hope human intelligence can help here. 🙂

      Any help is greatly appreciated.
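
      For reference, the setup that usually puts VMs on the same 10.0.0.x LAN as the host is a Linux bridge that enslaves the host's NIC, with the VMs attached to that bridge. A sketch with nmcli (the NIC name enp1s0 is an assumption; substitute your own):

      nmcli con add type bridge ifname br0 con-name br0
      nmcli con add type ethernet ifname enp1s0 master br0
      nmcli con up br0

      The VM's libvirt interface definition then points at the bridge instead of the NAT'd 192.168.122.x network:

      <interface type='bridge'>
        <source bridge='br0'/>
      </interface>

      With this, the VMs get their addresses from the router on 10.0.0.x, and HAProxy on the host can reach them directly.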

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      I see 🙂 I haven't tried your solution yet, but I did read about that kind of solution on the Red Hat Access sites.
      The thing with default.target is that, if podman containers run as a user, they have no access to multi-user.target through systemd, if I understood correctly 😄 That's why you have to use default.target instead.

      I'll try your solution in a VM soon.

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      Finally I found the solution here on GitHub: https://github.com/containers/libpod/issues/5494

      I was using podman v1.8.0 when I generated the easyepg.service file with podman generate. There was a bug in this version that did not generate default.target; it is fixed in later versions. Now it is working 🙂

      [Install]
      WantedBy=multi-user.target default.target
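
      After editing the [Install] section of a user unit like this, the change only takes effect once the unit is reloaded and re-enabled; a sketch, assuming the unit lives under ~/.config/systemd/user/:

      systemctl --user daemon-reload
      systemctl --user reenable container-easyepg.service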
      
      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      @stacksofplates said in Fedora 31 Server, podman and SELinux:

      @Woti said in Fedora 31 Server, podman and SELinux:

      Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:

      systemctl --user status container-easyepg.service
      Failed to connect to bus: No such file or directory
      

      I haven't changed anything since the last time and the container file exists...
      I can start it in Cockpit but not in the console. Strange...

      I figured it out: I need to issue the above command as the user, not as root.
      Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
      Maybe that's why the container doesn't start during boot?

      Which podman owner are you using @stacksofplates : user or root?

      I'm using user but not that way. I put the service in /etc/systemd/system and set a user in the unit file. So I still start it with sudo systemctl restart plex but systemd uses the user defined in the unit file to run the service.

      Okay. I have mine in /home/user/.config... one or another hidden directory created by the podman generate command.
      Stupid question maybe: but what is the unit file?
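
      For what it's worth, the "unit file" is just the .service definition that systemd reads. A minimal sketch of the approach @stacksofplates describes (a system unit in /etc/systemd/system that runs the container as an unprivileged account via User=; the user and container names here are illustrative):

      # /etc/systemd/system/plex.service (illustrative)
      [Unit]
      Description=Plex container via podman

      [Service]
      User=mediauser
      Restart=on-failure
      ExecStart=/usr/bin/podman start -a plex
      ExecStop=/usr/bin/podman stop -t 10 plex

      [Install]
      WantedBy=multi-user.target

      Started with sudo systemctl start plex, systemd then runs podman as the user named in the unit.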

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      Finally I got it to work 🙂
      I needed to use httpd_log_t to get access through SELinux to the logfile for httpd, php-fpm and fail2ban.
      I tried it and my test IPs were banned 🙂
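
      In command form, the fix described above would look like this (log path as used earlier in this thread):

      semanage fcontext -a -t httpd_log_t '/var/log/nextcloud(/.*)?'
      restorecon -Rv '/var/log/nextcloud/'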

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      Hei, I wanted to try your solution. First, I wanted to run my container setup, but I get this error:

      systemctl --user status container-easyepg.service
      Failed to connect to bus: No such file or directory
      

      I haven't changed anything since the last time and the container file exists...
      I can start it in Cockpit but not in the console. Strange...

      I figured it out: I need to issue the above command as the user, not as root.
      Is it wrong to issue this command as the user? I set up podman to run easyepg as a user, not as root.
      Maybe that's why the container doesn't start during boot?

      Which podman owner are you using @stacksofplates : user or root?
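
      As a side note on the "Failed to connect to bus" error above: systemctl --user talks to the per-user systemd instance over the session D-Bus, so it only works from a real login session of that user, not from a root shell. A hedged sketch of a workaround from root (user name and UID taken from the status output elsewhere in this thread):

      sudo -u twolf XDG_RUNTIME_DIR=/run/user/1000 systemctl --user status container-easyepg.service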

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      I used these commands to give apache and php-fpm read and write access to the logfile:

      semanage fcontext -a -t httpd_sys_rw_content_t '/var/log/nextcloud(/.*)?'
      restorecon -Rv '/var/log/nextcloud/'
      

      But how do I give fail2ban access through SELinux?
      Using fail2ban_log_t as described here https://linux.die.net/man/8/fail2ban_selinux is not working.
      Of course I can remove the above SELinux file context and issue:

      semanage fcontext -a -t fail2ban_log_t '/var/log/nextcloud(/.*)?'
      restorecon -Rv '/var/log/nextcloud/'
      

      This way I get read/write access to the Nextcloud logfile for fail2ban, but no longer for apache/php-fpm.
      It is confusing.

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      Now I get an SELinux error: SELinux prevents f2b/f.nextcloud from accessing the nextcloud directory with search access.
      My nextcloud.log file is in /var/log/nextcloud/nextcloud.log

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      nextcloud.conf is the same as yours, as is the path.
      But I have no nextcloud.local; the same content as yours is in jail.local.
      I'll try your solution.

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      I have now activated sshd and it works perfectly. But Nextcloud doesn't.
      What other jails do you use? What is recommended?

      sudo fail2ban-client status sshd
      Status for the jail: sshd
      |- Filter
      |  |- Currently failed: 1
      |  |- Total failed:     13
      |  `- Journal matches:  _SYSTEMD_UNIT=sshd.service + _COMM=sshd
      `- Actions
         |- Currently banned: 1
         |- Total banned:     1
         `- Banned IP list:   77.16.71.32
      

      It looks like Nextcloud is missing the journal matches?

      sudo fail2ban-client status nextcloud
      Status for the jail: nextcloud
      |- Filter
      |  |- Currently failed: 0
      |  |- Total failed:     0
      |  `- Journal matches:
      `- Actions
         |- Currently banned: 0
         |- Total banned:     0
         `- Banned IP list:
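
      The empty "Journal matches" line suggests the nextcloud filter defines no journalmatch, so with the systemd backend fail2ban has nothing to follow. A hedged sketch of a file-based jail instead (logpath as used elsewhere in this thread; the filter name is assumed to match the existing filter.d/nextcloud.conf):

      [nextcloud]
      enabled  = true
      port     = 80,443
      filter   = nextcloud
      backend  = auto
      logpath  = /var/log/nextcloud/nextcloud.log
      maxretry = 3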
      
      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      Do you guys have some recommended setup guides based on the latest Fedora/CentOS, or a way to check whether fail2ban works properly?

      posted in IT Discussion
      Woti
    • RE: Fail2Ban not working with Fedora-Server Edition

      My problem is that nothing is banned. Maybe no one is attacking my server? 😄
      @black3dynamite Yes I have the same setup from Riegers.

      It doesn't matter if I try with Nextcloud, ssh and so on. No banning.

      The backend uses systemd. That should be right for Fedora/CentOS, shouldn't it?

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      Sounds good 🙂 I'll try your solution and report back.

      posted in IT Discussion
      Woti
    • Fail2Ban not working with Fedora-Server Edition

      Re: [How to] Fail2ban on CentOS 7
      Is there any solution to get fail2ban working successfully on Fedora Server Edition?
      I mean, I followed tutorials, especially for Nextcloud, but my fail2ban is not blocking at all.
      The last one I tried, https://riegers.in/nextcloud-installation-guide-apache2/, is not working either.

      Do you guys have some solutions?
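
      A quick way to check whether a filter matches at all, independent of banning, is fail2ban-regex, which replays a log file against a filter definition (paths assume the Nextcloud log and filter locations discussed elsewhere in this thread):

      fail2ban-regex /var/log/nextcloud/nextcloud.log /etc/fail2ban/filter.d/nextcloud.conf

      It prints how many lines matched each failregex, which separates "filter doesn't match" from "jail isn't reading the log".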

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      Heiho 🙂
      I hadn't seen your message until now; a month has passed 😄
      Does your script start Podman automatically at boot?

      Are you using Plex? I am using Kodi 😛

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      As of now the server reboots once or twice a month due to updates, so it's no big problem to start the service manually. Maybe one day we'll figure out why it isn't starting automatically.

      Anyway, thanks for your effort to get rid of the SELinux problem. 🙂

      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      Hello again 🙂
      I have now created a systemd service for podman easyepg by following this tutorial:
      https://www.redhat.com/sysadmin/podman-shareable-systemd-services
      and it looks like it works.
      Is there any way I can test whether the updating of EPG channel information works as expected by triggering it manually? The cron job executes at 2 a.m.

      After a reboot the service is loaded but inactive, and I have to activate it manually. How can I figure out what's going wrong during boot?

      podman generate systemd --name easyepg.cron 
      
      # container-easyepg.cron.service
      # autogenerated by Podman 1.8.0
      # Mon Mar 16 22:40:13 CET 2020
      
      [Unit]
      Description=Podman container-easyepg.cron.service
      Documentation=man:podman-generate-systemd(1)
      
      [Service]
      Restart=on-failure
      ExecStart=/usr/bin/podman start easyepg.cron
      ExecStop=/usr/bin/podman stop -t 10 easyepg.cron
      PIDFile=/run/user/1000/containers/overlay-containers/a5482f12e8b718d6d080eb0a10283b456e58f57c2f1bd22c64e49f9e91073da8/userdata/conmon.pid
      KillMode=none
      Type=forking
      
      [Install]
      WantedBy=multi-user.target
      
      systemctl --user status container-easyepg.service
      
      โ— container-easyepg.service - Podman container-easyepg.cron.service
         Loaded: loaded (/home/twolf/.config/systemd/user/container-easyepg.service; disabled; vendor preset: enabled)
         Active: active (running) since Tue 2020-03-17 21:30:35 CET; 1s ago
           Docs: man:podman-generate-systemd(1)
        Process: 1405 ExecStart=/usr/bin/podman start easyepg.cron (code=exited, status=0/SUCCESS)
       Main PID: 1429 (conmon)
          Tasks: 4 (limit: 2333)
         Memory: 23.0M
            CPU: 1.092s
         CGroup: /user.slice/user-1000.slice/[email protected]/container-easyepg.service
                 โ”œโ”€1420 /usr/bin/fuse-overlayfs -o lowerdir=/home/twolf/.local/share/containers/storage/overlay/l/2YMPIRCLJIU>           โ”œโ”€1423 /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 -c -e 3 -r 4 --netns-type=path /run/user/100>           โ””โ”€1429 /usr/bin/conmon --api-version 1 -s -c a5482f12e8b718d6d080eb0a10283b456e58f57c2f1bd22c64e49f9e91073da>
      Mรคr 17 21:30:33 localhost.localdomain systemd[981]: Starting Podman container-easyepg.cron.service...
      Mรคr 17 21:30:35 localhost.localdomain podman[1405]: 2020-03-17 21:30:35.237845063 +0100 CET m=+1.249145219 container in>Mรคr 17 21:30:35 localhost.localdomain podman[1405]: 2020-03-17 21:30:35.287066083 +0100 CET m=+1.298366135 container st>Mรคr 17 21:30:35 localhost.localdomain podman[1405]: easyepg.cron
      Mรคr 17 21:30:35 localhost.localdomain systemd[981]: Started Podman container-easyepg.cron.service.
      
      podman ps
      
      CONTAINER ID  IMAGE                                     COMMAND  CREATED     STATUS             PORTS  NAMES
      a5482f12e8b7  docker.io/qoopido/easyepg.minimal:latest           6 days ago  Up 12 minutes ago         easyepg.cron
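
      On the service being loaded but inactive after reboot: a per-user systemd instance normally only starts at login, so user units don't run at boot unless lingering is enabled for that account. A sketch, using the user and unit names from the output above:

      loginctl enable-linger twolf
      systemctl --user enable container-easyepg.service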
      
      
      posted in IT Discussion
      Woti
    • RE: Fedora 31 Server, podman and SELinux

      @stacksofplates your semanage commands are working fine 🙂

      posted in IT Discussion
      Woti