Is Docker a joke or do I just not see the point?
-
@stacksofplates said in Is Docker a joke or do I just not see the point?:
@momurda said in Is Docker a joke or do I just not see the point?:
Can you give an example relevant for a small business?
If I worked at Google or Amazon and was responsible for their infrastructure being available to the whole planet all the time, I'd see the point. Why would any company with less than a few hundred million in revs, or an equally large user count, be interested?

Scale isn’t the only thing that matters. The whole purpose is to abstract away the underlying OS from your applications. You want to update an application but your OS doesn’t include the correct libs in its repos? Doesn’t matter: include the libs in the container. Want to update your application in the middle of the day without affecting users? Go ahead. Want to test a new deployment alongside the old one? Just deploy with the new image.
OK, so scale isn't the critical component. Performance (cost) is a critical component. The argument is that you're making better use of your server. This I can understand.
What I'm failing to see is the fit: at least in my experience, almost every workload I have is different. Different kernel, different OS, different focus.
And a traditional scaling system for either performance or uptime requirements fits well enough.
Docker very much feels like FreeNAS does. It's the Jurassic Park Effect in my eyes. DevOps can be given access to a hypervisor and have a system running in a matter of moments; "because they don't want to bother IT" doesn't seem like a valid reason to need another solution that IT ends up supporting anyway.
-
@dustinb3403 said in Is Docker a joke or do I just not see the point?:
@stacksofplates said in Is Docker a joke or do I just not see the point?:
@momurda said in Is Docker a joke or do I just not see the point?:
Can you give an example relevant for a small business?
If I worked at Google or Amazon and was responsible for their infrastructure being available to the whole planet all the time, I'd see the point. Why would any company with less than a few hundred million in revs, or an equally large user count, be interested?

Scale isn’t the only thing that matters. The whole purpose is to abstract away the underlying OS from your applications. You want to update an application but your OS doesn’t include the correct libs in its repos? Doesn’t matter: include the libs in the container. Want to update your application in the middle of the day without affecting users? Go ahead. Want to test a new deployment alongside the old one? Just deploy with the new image.
OK, so scale isn't the critical component. Performance (cost) is a critical component. The argument is that you're making better use of your server. This I can understand.
What I'm failing to see is the fit: at least in my experience, almost every workload I have is different. Different kernel, different OS, different focus.
And a traditional scaling system for either performance or uptime requirements fits well enough.
Docker very much feels like FreeNAS does. It's the Jurassic Park Effect in my eyes. DevOps can be given access to a hypervisor and have a system running in a matter of moments; "because they don't want to bother IT" doesn't seem like a valid reason to need another solution that IT ends up supporting anyway.
How many different OSs do you have? Windows and Linux and something else?
It’s obviously geared towards Linux.
DevOps can be given access to a hypervisor and have a system running in a matter of moments
This isn’t always true and sometimes isn’t allowed for compliance reasons.
And again, you don’t have to be developing anything to use Docker. You can leverage the images that companies put out to run their software (like UNMS). At that point updates are completely separate from the OS.
I’ve been playing with Fedora Atomic Workstation. The base OS is built from rpm-ostree and all of the packages are some type of container (docker, flatpack, etc). The OS doesn’t have anything actually installed. You can pull in the new kernel image and not affect any applications at all. You can even rebase to another OS and still have your applications without change.
It’s all about abstraction, just like with any virtualization.
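The vendor-image workflow described above can be sketched like this (a minimal illustration; nginx and its tags stand in for a vendor-published image like UNMS, and the port mapping is an assumption):

```shell
# Run a vendor-published image; none of its dependencies touch the host's repos.
docker pull nginx:1.25
docker run -d --name web -p 8080:80 nginx:1.25

# Updating the application later is just pulling a newer image and
# replacing the container - the host OS packages are never involved.
docker pull nginx:1.26
docker stop web && docker rm web
docker run -d --name web -p 8080:80 nginx:1.26
```

The point is that the update path for the application is entirely separate from the update path for the OS underneath it.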
-
@stacksofplates you've changed the purpose several times already, from performance, to scalability, to now abstraction.
You're not making a great argument to change my mind.
-
@stacksofplates None of the applications you install are managed by the OS? How is this different from, and better than, installing applications to %appdata% in Windows?
You must update Docker images separately?
The Docker images may or may not get updated with the package versions available to the OS?
What do you think about things like Bitnami? -
@dustinb3403 said in Is Docker a joke or do I just not see the point?:
@stacksofplates you've changed the purpose several times already, from performance, to scalability, to now abstraction.
You're not making a great argument to change my mind.
Ha, OK. First I stated it's for immutable and distributed systems, which means abstraction. Then I stated it's not for abstraction. Then I stated again it's for abstraction.
I never said it's for scale or performance; I have literally only mentioned points of abstraction.
Edit: because I turned off autocorrect and auto-capitalization on my iPhone, and I'm struggling with it.
-
Maybe containers are a solution for this, maybe not...
A set of in-house designed applications that require a bunch of specific prerequisites, such as specific .NET versions and other specifics on a Windows OS. This application needs to work on distributed systems around the world, on various configurations of Windows. Having the app in a deployable container that contains all of the specific requirements would do the trick.
Is that a legitimate use of containers? Would that work in that Windows environment (on Windows 10 systems)?
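A minimal sketch of what such a Windows container image could look like, assuming a .NET Framework app. The base-image tag, paths, and executable name are illustrative, not from the thread:

```dockerfile
# Hypothetical Windows container for an in-house .NET Framework application.
# Requires Windows containers (e.g. Docker Desktop on Windows 10 in Windows
# container mode); the 4.8 runtime tag pins the .NET Framework dependency.
FROM mcr.microsoft.com/dotnet/framework/runtime:4.8
WORKDIR /app
COPY ./bin/Release/ .
ENTRYPOINT ["MyInHouseApp.exe"]
```

Because the .NET runtime version is baked into the image, the various Windows configurations in the field stop mattering to the app.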
-
@momurda said in Is Docker a joke or do I just not see the point?:
@stacksofplates None of the applications you install are managed by the OS? How is this different from, and better than, installing applications to %appdata% in Windows?
You must update Docker images separately?
The Docker images may or may not get updated with the package versions available to the OS?
What do you think about things like Bitnami?

I'm assuming you mean on Atomic Workstation? You can "install" certain packages, but they aren't installed in the traditional sense; it's an overlay filesystem on top of the base operating system. Docker images are deployed separately, that's correct.
As far as I know, Bitnami just uses debs and rpms to deploy with.
-
@tim_g said in Is Docker a joke or do I just not see the point?:
Maybe containers are a solution for this, maybe not...
A set of in-house designed applications that require a bunch of specific prerequisites, such as specific .NET versions and other specifics on a Windows OS. This application needs to work on distributed systems around the world, on various configurations of Windows. Having the app in a deployable container that contains all of the specific requirements would do the trick.
Is that a legitimate use of containers? Would that work in that Windows environment (on Windows 10 systems)?
Ya, that’s a good use case. You can deploy to any version of an OS (that you can get Docker on, obviously) because all of the dependencies are in the container. It’s abstracted away from the base OS.
Another cool use case is immutable deployments that are constantly rebuilt. You can do it with full cloud images, but it’s faster with containers (this also works with LXC/LXD). There are companies that are destroying and rebuilding their whole infrastructure repeatedly (some every hour), and that way you never have systems with long uptimes.
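The rebuild-instead-of-patch pattern above can be sketched in a couple of commands (a simplified illustration; the image name, tag, and service name are made up, and the Swarm service is assumed to already exist):

```shell
# Immutable deployment: build a fresh image and replace the running
# containers, rather than updating software inside them.
docker build -t myapp:2024-01-15 .

# In Swarm, this performs a rolling replacement of the service's containers.
docker service update --image myapp:2024-01-15 myapp
```

Every deployment produces brand-new containers, so nothing accumulates long uptime or configuration drift.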
-
@momurda said in Is Docker a joke or do I just not see the point?:
@stacksofplates None of the applications you install are managed by OS? How is this different and better than installing applications to %appdata% in Windows?
You must update docker images separately?
The docker images may or may not get updated with package versions available to OS?
What do you think about things like Bitnami?

From what I understand, you still have to update the base OS (whatever that is), and then you could have a billion tiny little (go stab yourself in the eye) containers that are all running their own version of whatever.
And all of these are generally considered disposable, as @stacksofplates is describing. But they might not be; they might be constant, never-go-down cases, and would then require updates just like any other system.
They are all "as secure" as your base OS, but they could all still be running incredibly out-of-date software.
Does that sum up what Docker does? Makes it so DevOps can screw with sysadmins?
-
-
From what I understand, you still have to update the base OS
Yes.
and then you could have a billion tiny little (go stab yourself in the eye) containers that are all running their own version of whatever.
Again, use an orchestration tool. People don't run bare Docker with "a billion containers".
But they might not be; they might be constant, never-go-down cases, and would then require updates just like any other system.
This is weird and not the use case at all. You don't store persistent data in a container. If you care about the data, it's on a backing store that a new container attaches to. You'd have to do something really really weird to update something inside of a container like you're describing.
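The "backing store a new container attaches to" idea can be sketched with a named volume (illustrative names; postgres is just a stand-in for any stateful service):

```shell
# Keep the data on a named volume so the container itself stays disposable.
docker volume create appdata
docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:16

# Replacing the container later re-attaches the exact same backing store;
# the data never lived inside the container.
docker rm -f db
docker run -d --name db -v appdata:/var/lib/postgresql/data postgres:16
```

This is why "updating software inside a container" is the wrong mental model: you throw the container away and attach a new one to the same data.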
They are all "as secure" as your base OS
Again, the point is abstraction. So you would run the old software in containers on a new OS. But most "incredibly out of date software" can't be used in a distributed system like this. But it would be as isolated as you want. You can run unprivileged Docker with unprivileged users and contained with SELinux type enforcement.
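The isolation knobs mentioned above can be sketched with standard `docker run` options (the image name is illustrative; exact SELinux policy setup is distro-specific and omitted here):

```shell
# Run a container as an unprivileged user with a locked-down surface:
docker run -d \
  --user 1000:1000 \          # run as a non-root UID/GID inside the container
  --read-only \               # make the container's root filesystem read-only
  --cap-drop ALL \            # drop all Linux capabilities
  --security-opt no-new-privileges \  # block privilege escalation (setuid etc.)
  myapp:latest
```

On SELinux-enabled hosts, containers additionally get confined by type enforcement on top of these flags.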
Does that sum up what Docker does? Makes it so DevOps can screw with sysadmins?
I want to say something, but I won't.
-
@stacksofplates said in Is Docker a joke or do I just not see the point?:
But they might not be; they might be constant, never-go-down cases, and would then require updates just like any other system.
This is weird and not the use case at all. You don't store persistent data in a container. If you care about the data, it's on a backing store that a new container attaches to. You'd have to do something really really weird to update something inside of a container like you're describing.
This is the big one here... think of containers as "throw-a-way". If you have data on there you can't throw away, containers aren't for you. Your data should be elsewhere like he says above.
-
I'm a bit late and haven't caught up. I think a huge piece of this is "Docker is great for specific scenarios" and, in turn, "awful for other scenarios." Like most good tech, it has a place. It's a strong player. But it is anything but one-size-fits-all, as one might think from how hot it is in the media.
-
@scottalanmiller Mostly this is just @DustinB3403 not having a clue what he is saying.
-
@DustinB3403: Docker is a specific type of container. It is stateless (it can't store data) and, like any container, it is homogeneous: unless you run it in a VM, an entire host must run the same OS.
In an SMB we usually have "thick" VMs with all the layers in one place (data, business logic, maybe a front end), and we have either Linux or Windows on the host.
From a typical SMB IT perspective, Docker is of minimal use.
In bigger environments, where your services are split into components and you have a lot of instances running on top of a backing store, it makes more sense. -
@matteo-nunziati said in Is Docker a joke or do I just not see the point?:
@DustinB3403: Docker is a specific type of container. It is stateless (it can't store data) and, like any container, it is homogeneous: unless you run it in a VM, an entire host must run the same OS.
In an SMB we usually have "thick" VMs with all the layers in one place (data, business logic, maybe a front end), and we have either Linux or Windows on the host.
From a typical SMB IT perspective, Docker is of minimal use.
In bigger environments, where your services are split into components and you have a lot of instances running on top of a backing store, it makes more sense.

So a thin VM would be a VM that mounts data from a network location?
-
Honestly, I don't see much of a point to them, but my only exposure to them so far is UNMS. I also don't fully understand what they do, how they work, or how to use them.
I just ran the command for UNMS and it worked.
-
@emad-r said in Is Docker a joke or do I just not see the point?:
@matteo-nunziati said in Is Docker a joke or do I just not see the point?:
@DustinB3403: Docker is a specific type of container. It is stateless (it can't store data) and, like any container, it is homogeneous: unless you run it in a VM, an entire host must run the same OS.
In an SMB we usually have "thick" VMs with all the layers in one place (data, business logic, maybe a front end), and we have either Linux or Windows on the host.
From a typical SMB IT perspective, Docker is of minimal use.
In bigger environments, where your services are split into components and you have a lot of instances running on top of a backing store, it makes more sense.

So a thin VM would be a VM that mounts data from a network location?
He's referring to the "thinness" of the workload within the VM, rather than to the state of the VM itself.
-
@hobbit666 said in Is Docker a joke or do I just not see the point?:
Honestly, I don't see much of a point to them, but my only exposure to them so far is UNMS. I also don't fully understand what they do, how they work, or how to use them.
I just ran the command for UNMS and it worked.
That's the thing. There's never a point to anything until you have a need for it and, frankly, understand it. Many of you simply have no need for it. That's fine, but it's wrong to say a technology is useless, period, rather than useless to you or your company because you have no need for it or don't understand it.
-
I can explain why Docker is an attractive solution for us. It may not be the same for others.
We create app-specific images (web applications) and store them in a registry. I am currently using Swarm, so it operates in a cluster that scales up and down as needed. I've got a load balancer that discovers the services automatically, with routing rules defined in each service.
This means I can scale up an app in 10 seconds without making any changes other than running the scale command. The same ability lets me do app updates without downtime: as long as an app is scaled to at least 2, I can have it update the image by migrating connections away from one replica, replacing it, and then doing the same to the other.
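The scale-up and zero-downtime update flow described here maps onto standard Swarm commands, roughly like this (the service name and registry URL are illustrative):

```shell
# Scale the app out: Swarm schedules additional replicas across the cluster.
docker service scale webapp=4

# Zero-downtime update: with 2+ replicas, Swarm replaces them one at a time,
# so the load balancer always has a healthy replica to route to.
docker service update --image registry.example.com/webapp:v2 webapp
```

The load balancer discovering services automatically is what makes the replacement invisible to users.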
For DR, I can duplicate our registry to a cloud service and run the stack in almost any cloud, because they all offer some type of container service.
Now, this really only works well for our application layer. The data is all stored in databases that use Data Guard or other replication methods for DR/backup/etc.