Virtual appliances?
-
@travisdh1 said in Virtual appliances?:
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
I agree, if you are going to do this, I need an ISO.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
True, anything can be screwed up.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
-
@Pete-S said in Virtual appliances?:
@travisdh1 said in Virtual appliances?:
I prefer using an iso. Almost never is a virtual image easy to use with my preferred virtualization host platform (KVM or Xen), and if using an appliance I really don't want to d*** around with a base OS first.
I like an ISO myself too, as I've had some bad luck with ready-to-run images in the past. There always seems to be some kind of issue with different network drivers or installed guest additions.
I don't know the current status of OVA files though. Is that the current standard for distributing virtual appliances that are supposed to run on every common virtualization platform? Or is that just in theory?
Yup, exactly. I consistently still need OS control. Whether VM or container, that never works.
-
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are a more production-ready way than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resource limits, etc. It's very production ready.
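For example, applying a seccomp profile is just a few lines of pod spec. A rough sketch (the name and image are made up):
kubectl apply -f - <<'EOF'
# Sketch: run a pod under the runtime's default seccomp profile
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # hypothetical name
spec:
  securityContext:
    seccompProfile:
      type: RuntimeDefault   # blocks syscalls outside the runtime's default set
  containers:
  - name: app
    image: nginx:stable      # stand-in image
EOF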
-
@travisdh1 said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
I think it's more of the application now. If it was something designed for Windows 2003 and you put it in a container it would be terrible, but it would also be terrible installed normally. K8s is so easy to set up now that it's trivial to get things going. Even if you just use podman and systemd I think it's a step above installing the application in a VM.
I need to try out K8s again. The first time I tried using it was early days and a pain. From what you're saying it's a lot better/easier now.
For remote systems, k3s is probably easiest.
curl -sfL https://get.k3s.io | sh -
Run that and you have k8s.
For local work, kind is probably easiest. It runs containers as k8s nodes that then run Docker so you can deploy to them. It works really well.
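If you want to try kind, it's something like this (a sketch, assuming kind and kubectl are already installed; the cluster name is arbitrary):
kind create cluster --name lab           # each "node" runs as a container on your machine
kubectl cluster-info --context kind-lab  # kind prefixes contexts with "kind-"
kubectl get nodes                        # the node should show as Ready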
-
@stacksofplates said in Virtual appliances?:
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are a more production-ready way than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resource limits, etc. It's very production ready.
I think there is a big difference between the production environment of, say, a SaaS company and that of companies that are not in the software business.
CI/CD pipelines seem highly unlikely in a company that doesn't develop software or provide software services. Why would they have that?
If you have enough workloads you need automation tools to deploy patches and administer your environment, but that is a different thing and something all environments of size need.
-
@Pete-S said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
That argument could be made for pretty much anything though. I think even on a single host it's easier to manage.
True. I think the problem is that Docker feels like it's never set up correctly for third party application deployments. As a tech it's amazing; in the real world, it seems to result in devs bypassing all operational oversight and apps that have good code but no production way to deploy.
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.
There isn't any need for operational oversight of devs because it's all done through things like merge/pull requests. Then tools like Flux/Argo/whatever deploy it for you.
I'm not sure what you mean about no production way to deploy. Automated pipelines are a more production-ready way than just installing packages on systems. You have easier rollback, easier ways to apply seccomp profiles, resource limits, etc. It's very production ready.
I think there is a big difference between the production environment of, say, a SaaS company and that of companies that are not in the software business.
CI/CD pipelines seem highly unlikely in a company that doesn't develop software or provide software services. Why would they have that?
If you have enough workloads you need automation tools to deploy patches and administer your environment, but that is a different thing and something all environments of size need.
SaaS companies aren't the only ones with internal development. Pretty much any Fortune 1000 and up has that.
But yes, pipelines are mostly for internal development. But you also can just deploy containers the same way. If you aren't using a CD tool to deploy the updated containers automatically, you would have a merge/pull request with the new container tag. The same idea applies, just not with the CI part.
It's not about having enough workloads to automate deployment. It takes almost no effort to automate container deployments. You run a helm install command against your cluster to set Flux up and then have it read a couple of YAML files. It's less work to do that than to update software the old way.
-
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
Oh I completely understand. Docker is super abused though.
What do you mean by abused?
-
Here's an example. To set up Flux you run these couple commands:
helm repo add fluxcd https://charts.fluxcd.io
kubectl apply -f https://raw.githubusercontent.com/fluxcd/helm-operator/master/deploy/crds.yaml
kubectl create namespace flux
helm upgrade -i flux fluxcd/flux \
  --set git.url=git@github.com:user/some-repo \
  --namespace flux
That sets up Flux. Flux is now watching the repo you pointed it at in the last command.
If you don't use a predefined key, you just grab the SSH key Flux created and add it to your repo.
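Grabbing that key is one command (assuming fluxctl is installed and you used the flux namespace like above):
fluxctl identity --k8s-fwd-ns flux   # prints the public key to add as a deploy key on your repo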
Then to deploy something like NextCloud, you need these two files. The first creates a namespace for nextcloud. Not a requirement, but it makes sense. The second is a HelmRelease file that the Flux Helm Operator uses to read the Helm chart for NextCloud.
apiVersion: v1
kind: Namespace
metadata:
  name: nextcloud
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nextcloud
  namespace: nextcloud
  annotations:
    fluxcd.io/automated: "true"
    filter.fluxcd.io/chart-image: "glob:*"
spec:
  releaseName: nextcloud
  chart:
    repository: https://nextcloud.github.io/helm/
    name: nextcloud
  values:
    replicaCount: 2
    # any other values here to override in the chart
That's it. You now have a fully automated system that will automatically deploy new updates to your NextCloud pods. You can disable the auto updates by removing the annotations and then manually update the container versions by adding the version in the HelmRelease. Once it's approved, Flux will update the containers.
You also have a deployment that created a replicaset of your pod because you defined 2 for your replicaCount. So any traffic entering your cluster will be split between both replicas (or more if you define more). By default, k8s does a rolling update, so pods aren't all killed at once. The first pod will be terminated and a new one spun up with the updates. When it's live, the second will be terminated and recreated with the updates. So your service stays live during updates.
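You can watch or undo a rollout from the CLI too (the deployment name here is just a guess at what the chart creates):
kubectl rollout status deployment/nextcloud -n nextcloud   # follow the rolling update live
kubectl rollout undo deployment/nextcloud -n nextcloud     # roll back if the update goes bad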
It's that easy. It shouldn't take you more than 10 minutes to set Flux up. And then the rest is the specific things you need the apps to do. Like with NextCloud: the type of database, whether you want ingress or not, those kinds of options.
Containers and container orchestrators help literally every business, from small shops to giant enterprises developing hundreds to thousands of internal microservices.
I don't even have some things installed on my system anymore. I'll just run a container to use a specific tool and kill the container when I'm done. You can even have full dev environments packaged up in a container and have VSCode deploy itself in the container so you have a consistent development environment across different users. And that happens literally with the push of a button in VSCode.
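For example, something like this gives you a throwaway environment (the image and tool are just examples):
podman run --rm -it python:3.9 python   # the tool runs in the container; --rm deletes it on exit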
-
@stacksofplates What the what?
- Install Fedora
sudo dnf install -y kubernetes
- systemctl enable --now podman1
That's all it takes.
-
@stacksofplates said in Virtual appliances?:
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.
Meaning an application that's coming from an outside vendor rather than an internal team. When I see Docker being used by developers who are accountable (because they are internal) to the operations team, it seems to get used well. But too often what I see is developers just using Docker to presumably skip the due diligence in making things deployable, and the result is a mess that "works for them" but no one else can figure out how to get it to work, as no one knows what the dependencies were that made it work.
Docker makes it easy to feel like you don't have to do anything and can just throw the product "over the wall" and not have to deal with it. And when devs aren't accountable to anyone, there's nothing to stop them from doing that.
-
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
@JaredBusch said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
This day and age I'd just prefer a container. They're so much easier to deploy and manage.
Only when done right, which is still not often, IMO.
Btw not trying to argue with you. People def do it wrong. I'm just saying I've seen them do it wrong with VMs too.
Oh I completely understand. Docker is super abused though.
What do you mean by abused?
I think he means it's used as I'm describing... as an excuse for devs to not test or know how things work and package something that works in their test environment but isn't documented, tested, etc.
-
@travisdh1 said in Virtual appliances?:
@stacksofplates What the what?
- Install Fedora
sudo dnf install -y kubernetes
- systemctl enable --now podman1
That's all it takes.
Yeah I see you haven't actually done that.
- Podman is not Kubernetes. Also when you install Kubernetes you don't get a podman1 service (or any type of podman service).
- When you install Kubernetes that way you don't get a Kubernetes service. You seemingly have to start kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver, and the kubelet separately (see the sketch after this list).
- It installs docker, which is deprecated in k8s now. They have switched to using containerd which is pretty much the standard runtime now.
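For reference, "separately" means something like this (a sketch; the actual unit names shipped by Fedora may vary):
# Hypothetical: enable each control-plane and node piece by hand
for svc in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    sudo systemctl enable --now "$svc"
done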
So I'll stick with my original recommendation.
-
@scottalanmiller said in Virtual appliances?:
@stacksofplates said in Virtual appliances?:
What do you mean about third party applications? That's pretty much what most people use it for unless you're an enterprise writing microservices.
Meaning an application that's coming from an outside vendor rather than an internal team. When I see Docker being used by developers who are accountable (because they are internal) to the operations team, it seems to get used well. But too often what I see is developers just using Docker to presumably skip the due diligence in making things deployable, and the result is a mess that "works for them" but no one else can figure out how to get it to work, as no one knows what the dependencies were that made it work.
Docker makes it easy to feel like you don't have to do anything and can just throw the product "over the wall" and not have to deal with it. And when devs aren't accountable to anyone, there's nothing to stop them from doing that.
I'd have to see an example of what you mean about "works for them but no one else can figure it out". Everything is defined in the Dockerfile. It's not hidden from anyone. So you can clearly see the dependencies.
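For example, a typical Dockerfile spells it all out. A made-up minimal one:
# Hypothetical Dockerfile: every dependency is declared right here
FROM python:3.9-slim
RUN pip install flask==1.1.2
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]
Anyone can read those four lines and see exactly what the app needs to run.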
Coming from a large enterprise that still wrote some legacy apps that weren't containerized, I saw the throwing-over-the-wall happen way more often on the non-containerized side. I'd have to see an example of how that would work in the container space to understand what you mean here.
-
Side note: just found out a former coworker wrote a book on containers. https://www.amazon.com/dp/183921340X/ref=cm_sw_r_cp_apa_fabc_4Wn1FbVQZAEEV
-
@stacksofplates said in Virtual appliances?:
@travisdh1 said in Virtual appliances?:
@stacksofplates What the what?
- Install Fedora
sudo dnf install -y kubernetes
- systemctl enable --now podman1
That's all it takes.
Yeah I see you haven't actually done that.
- Podman is not Kubernetes. Also when you install Kubernetes you don't get a podman1 service (or any type of podman service).
- When you install Kubernetes that way you don't get a Kubernetes service. You seemingly have to start kube-proxy, kube-scheduler, kube-controller-manager, kube-apiserver, and the kubelet separately.
- It installs docker, which is deprecated in k8s now. They have switched to using containerd which is pretty much the standard runtime now.
So I'll stick with my original recommendation.
Yep, this is why I need to mess with this stuff in my home lab. I can't even talk about it intelligently yet!