    • Profile
    • Following 0
    • Followers 13
    • Topics 145
    • Posts 7,946
    • Groups 0

    Posts

    • RE: Caddy vs. Nginx

      @marcinozga said in Caddy vs. Nginx:

      Caddy is really nice, and usually my choice for reverse proxy, except docker deployments. Here Traefik shines, you just can't beat auto discovery and configuration.

      The file provider for Traefik makes even non container deployments easy.

      posted in IT Discussion
      stacksofplates
    • RE: Virtual appliances?

      @JaredBusch said in Virtual appliances?:

      @stacksofplates said in Virtual appliances?:

      This day and age I'd just prefer a container. They're so much easier to deploy and manage.

      Only when done right, which is still not often, IMO.

      That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.

      posted in IT Discussion
      stacksofplates
    • RE: Virtual appliances?

      @Pete-S said in Virtual appliances?:

      @stacksofplates said in Virtual appliances?:

      @scottalanmiller said in Virtual appliances?:

      @stacksofplates said in Virtual appliances?:

      @JaredBusch said in Virtual appliances?:

      @stacksofplates said in Virtual appliances?:

      This day and age I'd just prefer a container. They're so much easier to deploy and manage.

      Only when done right, which is still not often, IMO.

      That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.

      True. I think the problem is that Docker feels like it's never set up correctly for third-party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.

      What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.

      There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.

      I'm not sure what you mean about no production way to deploy. Automated pipelines are a more production-ready way than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resources, etc. It's very production ready.

      I think there is a big difference in the production environment of say a SaaS company compared to the rest of the companies that are not in the software business.

      CI/CD pipelines seem highly unlikely in a company that doesn't develop software or provide software services. Why would they have that?

      If you have enough workloads you need automation tools to deploy patches and administer your environment, but that is a different thing and something all environments of size need.

      SaaS companies aren't the only ones with internal development. Pretty much any fortune 1000 and up has that.

      But yes, pipelines are mostly for internal development. But you can also just deploy containers the same way. If you aren't using a CD tool to deploy the updated containers automatically, you would have a merge/pull request with the new container tag. The same idea applies, just without the CI part.

      It's not about having enough workloads to automate deployment. It takes almost no effort to automate container deployments. You run a helm install command against your cluster to set Flux up and then have it read a couple of YAML files. It's less work to do that than to update software the old way.

      posted in IT Discussion
      stacksofplates
    • RE: Virtual appliances?

      Here's an example. To set up Flux you run these couple commands:

      helm repo add fluxcd https://charts.fluxcd.io

      kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml

      kubectl create namespace flux

      helm upgrade -i flux fluxcd/flux \
         --set [email protected]:user/some-repo \
         --namespace flux
      

      That sets up Flux. Flux is now watching the repo you pointed it at in the last command.

      If you don't use a predefined key, you just grab the SSH key Flux created and add it to your repo.

      Then to deploy something like NextCloud, you need these two files. The first creates a namespace for nextcloud. Not a requirement, but makes sense. The second is a HelmRelease file that the Flux Helm Operator uses to read the Helm chart for NextCloud.

      apiVersion: v1
      kind: Namespace
      metadata:
        name: nextcloud
      
      apiVersion: helm.fluxcd.io/v1
      kind: HelmRelease
      metadata:
        name: nextcloud
        namespace: nextcloud
        annotations:
          fluxcd.io/automated: "true"
          filter.fluxcd.io/chart-image: "glob:*"
      spec:
        releaseName: nextcloud
        chart:
          repository: https://nextcloud.github.io/helm/
          name: nextcloud
        values:
          replicaCount: 2
          # any other values here to override in the chart
      

      That's it. You now have a fully automated system that will automatically deploy new updates to your NextCloud pods. You can disable the auto updates by removing the annotations and then manually update the container versions by adding the version in the HelmRelease. Once it's approved, Flux will update the containers.

      You also have a Deployment that created a ReplicaSet of your pod because you set replicaCount to 2. So any traffic entering your cluster will be split between both replicas (or more if you define more). By default, k8s does a rolling update, so pods aren't all killed at once. The first pod will be terminated and a new one spun up with the updates. When it's live, the second will be terminated and recreated with the updates. So your service stays live during updates.
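      As a toy illustration of that rolling-update behavior (a sketch only: the real logic lives in the Kubernetes Deployment controller, and rolling_update here is a made-up function for illustration, not a k8s API):

```python
# Toy model of a Kubernetes-style rolling update (illustration only).
# With max_unavailable=1, replicas are replaced one at a time, so at
# least one replica is serving traffic at every intermediate step.

def rolling_update(replicas, new_version, max_unavailable=1):
    """Replace each replica's version batch by batch, recording the
    state after every step so availability can be checked."""
    states = [list(replicas)]
    for i in range(0, len(replicas), max_unavailable):
        batch = range(i, min(i + max_unavailable, len(replicas)))
        for j in batch:
            replicas[j] = None            # old pod terminated
        states.append(list(replicas))
        for j in batch:
            replicas[j] = new_version     # replacement pod is live
        states.append(list(replicas))
    return states

states = rolling_update(["v1", "v1"], "v2")
# At every step at least one replica was live (not None)
assert all(any(r is not None for r in s) for s in states)
print(states[-1])  # ['v2', 'v2']
```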

      It's that easy. It shouldn't take you more than 10 minutes to set Flux up. And then the rest is the specific things you need the apps to do. Like with NextCloud: the type of database, whether you want ingress or not, those kinds of options.

      Containers and container orchestrators help literally every business, from small shops to giant enterprises developing hundreds to thousands of internal microservices.

      I don't even have some things installed on my system anymore. I'll just run a container to use a specific tool and kill the container when I'm done. You can even have full dev environments packaged up in a container and have VSCode deploy itself in the container, so you have a consistent development environment across different users. And that happens literally with the push of a button in VSCode.

      posted in IT Discussion
      stacksofplates
    • RE: Work from Home - Computer setups

      I'm full remote, and I have a Macbook Pro for work and an XPS 13 for personal. I use both for work interchangeably. I have a 34" curved ultrawide and a vertical 27" beside it. Everything is on a desk that's standing or sitting (by crank). And I have a desk I made that I can attach to my treadmill to walk and work at the same time.

      posted in IT Discussion
      stacksofplates
    • RE: The future of the CentOS Project is CentOS Stream

      @DustinB3403 said in The future of the CentOS Project is CentOS Stream:

      @VoIP_n00b said in The future of the CentOS Project is CentOS Stream:

      Interesting Development:

      https://arstechnica.com/gadgets/2021/01/centos-is-gone-but-rhel-is-now-free-for-up-to-16-production-servers/

      See that should've been an initial statement from RHEL.

      "We're ending the CentOS line, but are offering 16 production servers for free as a part of this change"

      The way this was handled was still horrible and has likely killed off the RHEL userbase's trust in anything from RHEL/IBM.

      16 servers? What good is that though? Just use Oracle and you have no limit. No matter how you slice it, IBM has ruined Red Hat as most people predicted.

      posted in IT Discussion
      stacksofplates
    • RE: The future of the CentOS Project is CentOS Stream

      @JaredBusch said in The future of the CentOS Project is CentOS Stream:

      @stacksofplates said in The future of the CentOS Project is CentOS Stream:

      @DustinB3403 said in The future of the CentOS Project is CentOS Stream:

      @VoIP_n00b said in The future of the CentOS Project is CentOS Stream:

      Interesting Development:

      https://arstechnica.com/gadgets/2021/01/centos-is-gone-but-rhel-is-now-free-for-up-to-16-production-servers/

      See that should've been an initial statement from RHEL.

      "We're ending the CentOS line, but are offering 16 production servers for free as a part of this change"

      The way this was handled was still horrible and has likely killed off the RHEL userbase's trust in anything from RHEL/IBM.

      16 servers? What good is that though? Just use Oracle and you have no limit. No matter how you slice it, IBM has ruined Red Hat as most people predicted.

      For most SMBs that use CentOS in house, it is likely more than enough.

      I have a client with 6 internal Linux systems: proxy server, Nextcloud, Salt master (testing still, need to get back to that), file server, jump box, and email relay. If you add their phone system hosted on Vultr, then they have 7.

      I'm assuming they aren't on a supported cloud environment. You still have to follow their licensing limitations vs just using Oracle. This whole thing is only going to make Oracle money.

      (screenshots attached)

      posted in IT Discussion
      stacksofplates
    • RE: Gophemeral

      @scottalanmiller said in Gophemeral:

      If I go to this site and your mascot isn't an adorable gopher, I'm going to be seriously disappointed.

      It's on the GitLab page lol.

      posted in IT Discussion
      stacksofplates
    • RE: Obtaining hardware from terminated remote employee

      @Pete-S said in Obtaining hardware from terminated remote employee:

      @StorageNinja said in Obtaining hardware from terminated remote employee:

      @JaredBusch said in Obtaining hardware from terminated remote employee:

      Hardware is not worth the fucking time to get back.

      If the company thinks wasting man hours on that is a good idea the company is insane

      While I largely agree, our R&D laptops are ~2-3K a pop (fully maxed-spec MBP or XPS with onsite repair agreements).

      I did hear we have started on the Macs using DEP, so the device will auto-enroll in MDM even if the device is wiped.
      https://support.apple.com/en-us/HT204142

      Makes no sense developing on a laptop IMHO - unless you're talking about another kind of R&D in another field.

      On our team we remote into development servers, and all development and testing is run there. Which means the computer you're actually sitting in front of just needs to be able to run a browser, RDP, SSH, etc. So any machine suitable for general office work would get the job done. So no 2-3K laptops needed for development, even if that is not the primary reason. I kind of assumed everyone worked that way but haven't actually given it much thought until now.

      I haven't really seen anyone do this other than CAD work. Everywhere I've been it's local development, possibly using Eclipse Che or Coder or something for a remote IDE but still local.

      VSCode and JetBrains tools allow you to include your development environment in a container. So when you open the project it will open inside of a container with all of the dependencies included. That's the best workflow I've seen so far.

      posted in IT Discussion
      stacksofplates
    • RE: Obtaining hardware from terminated remote employee

      @scottalanmiller said in Obtaining hardware from terminated remote employee:

      @stacksofplates said in Obtaining hardware from terminated remote employee:

      @Pete-S said in Obtaining hardware from terminated remote employee:

      @StorageNinja said in Obtaining hardware from terminated remote employee:

      @JaredBusch said in Obtaining hardware from terminated remote employee:

      Hardware is not worth the fucking time to get back.

      If the company thinks wasting man hours on that is a good idea the company is insane

      While I largely agree, our R&D laptops are ~2-3K a pop (fully maxed-spec MBP or XPS with onsite repair agreements).

      I did hear we have started on the Macs using DEP, so the device will auto-enroll in MDM even if the device is wiped.
      https://support.apple.com/en-us/HT204142

      Makes no sense developing on a laptop IMHO - unless you're talking about another kind of R&D in another field.

      On our team we remote into development servers, and all development and testing is run there. Which means the computer you're actually sitting in front of just needs to be able to run a browser, RDP, SSH, etc. So any machine suitable for general office work would get the job done. So no 2-3K laptops needed for development, even if that is not the primary reason. I kind of assumed everyone worked that way but haven't actually given it much thought until now.

      I haven't really seen anyone do this other than CAD work. Everywhere I've been it's local development, possibly using Eclipse Che or Coder or something for a remote IDE but still local.

      VSCode and JetBrains tools allow you to include your development environment in a container. So when you open the project it will open inside of a container with all of the dependencies included. That's the best workflow I've seen so far.

      I've seen, but never tried myself, a remote option in VSCode. I just saw it in an article the other day. Interested to try it out.

      It works really well, at least for the Go projects I work on. Everyone having the same extensions and environments is really nice.

      posted in IT Discussion
      stacksofplates
    • RE: I need this script to email the log it generates

      @Pete-S said in I need this script to email the log it generates:

      @stacksofplates said in I need this script to email the log it generates:

      you could use mailgun. I just wrote this tiny app that will send the contents of a file.

      I lazily take the first argument as the file with the contents you want to send. Just have env vars for the recipient address, your api key and domain (or hard code them). You could take flags and whatever, but this was free and I'm lazy.

      There is one problem with the API approach, I think.
      I believe that if you use postfix to deliver the message over SMTP, it will put the message in postfix's queue. So if for some reason the mail can't be delivered at that particular time, for instance because the firewall is being rebooted or there is a problem with the mail service, it will try again later. Using the API you will just get a timeout error and that's it.

      I'm not a mailgun user but I assume they have an SMTP relay as well?

      In any case, since we like Zoho, I intend to try the new Zoho TransMail service. They only send transactional mail, not marketing, and I think it was $2.50 or something like that per 10K emails. They have both API and SMTP.

      That's no different than postfix. It would be pretty trivial to add a retry block for a timeout condition. Postfix has it built in; here you'd need five lines of code or so. And this will work on literally any system the binary can be compiled for:

      • aix
      • android
      • darwin
      • dragonfly
      • freebsd
      • hurd
      • illumos
      • js
      • linux
      • nacl
      • netbsd
      • openbsd
      • plan9
      • solaris
      • windows
      • zos
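      That retry block could look something like this sketch (Python here for brevity, though the app above is Go; send_message is a hypothetical stand-in for the actual API send call, not a real Mailgun function):

```python
import time

def send_with_retry(send_message, retries=5, delay=1.0, backoff=2.0):
    """Call send_message(); on failure, wait and retry with exponential
    backoff -- similar in spirit to an MTA retrying from its queue."""
    last_err = None
    for attempt in range(retries):
        try:
            return send_message()
        except OSError as err:   # e.g. timeout or connection refused
            last_err = err
            time.sleep(delay)
            delay *= backoff
    raise last_err
```

      In a real setup you'd probably also persist the message somewhere if all retries are exhausted, which is what an MTA queue gives you for free.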
      posted in IT Discussion
      stacksofplates
    • RE: I need this script to email the log it generates

      @Pete-S said in I need this script to email the log it generates:

      @stacksofplates said in I need this script to email the log it generates:

      @Pete-S said in I need this script to email the log it generates:

      @stacksofplates said in I need this script to email the log it generates:

      @JaredBusch said in I need this script to email the log it generates:

      @stacksofplates said in I need this script to email the log it generates:

      Another plus for an API is that SMTP is commonly blocked by providers and only enabled with some kind of request.

      Also if you want to integrate any other notification (text, telegram notification, slack, etc) it would be trivial to add with this approach.

      Not saying API is the best way, but it definitely has advantages.

      I have not had time to look at what you posted yet but I plan on it thank you

      Yeah no problem. It's really simple so it could be optimized quite a bit with maybe like 30 mins of work.

      Wouldn't just using curl do the job equally well of sending the message to mailgun?

      Yep. If you really like handling errors and responses in shell scripting, it's fine. Unless you use it on Windows, where curl is not really cURL (it's an alias for PowerShell's Invoke-WebRequest).

      I'm not doing raw API calls with the example above. That's using their Go package which simplifies a lot of the API information.

      OK, I understand. Thanks.

      What's maintenance like with APIs like this? I mean, when you compile something and have to support it over time, it's just like being a package maintainer. If they change the API or their Go package, or perhaps the OS or Go version, you'd have to recompile and redistribute it to every machine. Or is this something that would only happen very seldom, like every two years or so?

      I haven't seen them break the API much. If they were to do a big API change like that, you'd likely have a good bit of notice. Go packages are versioned in the modules, so you can pin them to limit changes.

      Looks like Mailgun is currently on /v3 of their API but some endpoints still use /v1 or /v2. The package would handle that for you since it's maintained by them.

      One more advantage specifically with Go, is that you can containerize this easily. This could be done in a scratch container which has 0 dependencies. Then you could just run this with Docker/podman/whatever and easily keep the container up to date across all of your systems. I'm sure Jared wouldn't be doing that here, just a side note.

      posted in IT Discussion
      stacksofplates
    • RE: Pi-hole dumps on Fedora

      Just run it in a container and none of this matters.

      posted in IT Discussion
      stacksofplates
    • RE: Need help with Autohotkey Windows

      So you're just switching between desktops? Win+Ctrl+Left/Right is the shortcut for that.

      posted in IT Discussion
      stacksofplates
    • RE: Pi-hole dumps on Fedora

      @JaredBusch said in Pi-hole dumps on Fedora:

      @stacksofplates said in Pi-hole dumps on Fedora:

      @JaredBusch said in Pi-hole dumps on Fedora:

      @stacksofplates said in Pi-hole dumps on Fedora:

      Just run it in a container and none of this matters.

      Pi-Hole's docker version was not a well done container 3 years ago when I implemented this solution. It was by no means a good idea at the time.

      Well I just meant going forward.

      That gets into different issues since Fedora went with podman. I haven't done much tinkering with it yet to see how compatible things are.

      It's a drop-in replacement. It doesn't use the Docker socket, so you can't use things like docker-compose, but I don't necessarily see that as a bad thing. Any normal tasks should be the same. There are people who've said they aliased podman to docker and never noticed a difference.

      posted in IT Discussion
      stacksofplates
    • RE: Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty

      @wirestyle22 said in Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty:

      I am breaking this down very slowly for myself. Not a bash master by any measure, but I do want to continue learning it. Arrays seem somewhat annoying in Bash. I will likely learn Python to deal with more complex stuff I may need to do with them.

      I was going to suggest that. This would likely be easier and more straightforward in Python. If you have to stick to bash, don't declare your arrays in a loop like that; just declare them with the variables at the top.

      posted in IT Discussion
      stacksofplates
    • RE: Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty

      @wirestyle22 said in Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty:

      @stacksofplates said in Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty:

      @wirestyle22 said in Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty:

      I am breaking this down very slowly for myself. Not a bash master by any measure, but I do want to continue learning it. Arrays seem somewhat annoying in Bash. I will likely learn Python to deal with more complex stuff I may need to do with them.

      I was going to suggest that. This would likely be easier and more straightforward in Python. If you have to stick to bash, don't declare your arrays in a loop like that; just declare them with the variables at the top.

      Alright, I did that. Is there something better than ShellCheck for bash syntax checking that you know of? Or do you just use bash -n script?

      I don't know of anything better.

      posted in IT Discussion
      stacksofplates
    • RE: Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty

      Here's a quick setup in Python if you want to try it instead.

      import os
      import gnupg
      
      
      # Where the encrypted files live and where the originals are archived
      encrypted_dir = "/tmp/encrypted_files"
      archive = "/tmp/archive"
      # Read the passphrase from the environment
      password = os.getenv("DECRYPT_PASSWORD")
      gpg = gnupg.GPG(gnupghome='/home/user/.gnupg')
      responses = {}
      
      
      def decrypt_file(file: str, password: str):
          """Decrypt a single file, returning the gnupg status object."""
          out_name = f'{encrypted_dir}/{file}.decrypted'
          with open(f'{encrypted_dir}/{file}', "rb") as stream:
              return gpg.decrypt_file(stream, passphrase=password, output=out_name)
      
      
      # Decrypt every .gpg file and record its status
      for file in os.listdir(encrypted_dir):
          if file.endswith(".gpg"):
              responses[file] = decrypt_file(file, password)
      
      
      # Report results, archiving the originals that decrypted cleanly
      for file, status in responses.items():
          if status.ok:
              os.rename(f'{encrypted_dir}/{file}', f'{archive}/{file}')
              print(f'File {file} decrypted and moved')
          else:
              print(f'File {file} had error, {status.stderr}')
      
      posted in IT Discussion
      stacksofplates
    • RE: Return Values in Bash Script and generate e-mail which shows successes, errors and if the directory is empty

      Rather than mess with multiple arrays, you can just have a single dictionary that holds the file and status. A single function decrypts the file; then you save the file name and the status of the decryption in that dictionary. Then loop through the dictionary. Here I just print the data, but you could email it or send it to Slack or whatever.

      This was a quick pass so probably can be cleaned up a bit.

      posted in IT Discussion
      stacksofplates
    • RE: Looking at Atom and VS Code

      @Pete-S said in Looking at Atom and VS Code:

      JetBrains IDEs look good but are not open source.

      PyCharm and IntelliJ IDEA are

      posted in IT Discussion
      stacksofplates
    • 1 / 1