    Multiple Containers

    IT Discussion
    • Dashrender

      Scott - it might be worth mentioning the primary purpose of containers.

      • AlyRagab @scottalanmiller

        @scottalanmiller said in Multiple Containers:

        @AlyRagab said in Multiple Containers:

        Since Docker is designed not to be like traditional virtualization (VT), there are two ways:

        1. One process per container.
        2. One container with many processes, using supervisord to do the job that systemd would do inside the container.

        The link below explains how to link containers with each other, assuming we have a DB server in one container and a web server in another container and we need to link the two:

        https://rominirani.com/docker-tutorial-series-part-8-linking-containers-69a4e5bf50fb#.hn73efm1p

        Yes, basically you are just turning the containers into individual processes. But your OS already does that. That doesn't appear to do you any useful service - it's just complication. What is the purpose of the container? Everything you are mentioning we already have without containers.

        But we can take advantage of the lightweight size of Docker images compared to traditional virtualization, so we will save resources.
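
        For reference, a minimal sketch of the container-linking setup described in the quoted post, using a user-defined network rather than the older --link flag the tutorial covers (the image, container, and network names here are illustrative):

            # Create a user-defined bridge network so containers can reach each other by name
            docker network create app-net

            # DB server in its own container (one process per container)
            docker run -d --name db --network app-net \
                -e MYSQL_ROOT_PASSWORD=example mariadb

            # Web server in a second container; it can reach the database at the hostname "db"
            docker run -d --name web --network app-net -p 80:80 nginx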

        • scottalanmiller @AlyRagab

          @AlyRagab said in Multiple Containers:

          But we can take advantage of the lightweight size of Docker images compared to traditional virtualization, so we will save resources.

          No, you are missing the point. The way that you are describing it, you don't have any purpose for VMs at all. The containers are totally unnecessary.

          • scottalanmiller

            In many ways, VMs, Containers and Applications are all similar things. A VM is just a series of things converted to run as an application on top of a hypervisor. A Container is a series of applications converted to run as a simple application on an OS. An application is already an application running on an OS.

            Once you want to make your containers so lean that they represent a single process, you are just adding overhead to applications for no purpose. There is nothing for the container to bundle; it's too small. This is when you just deploy applications.

            The only purpose for containers would be if you needed to isolate bundles of things as individual units, not one Apache process and one MySQL process or whatever. Just run those processes and you are all set; the OS already keeps them apart from each other using the same underlying mechanism as containers.
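
            To illustrate that last point: the kernel primitives containers are built on (namespaces, cgroups) are available to any process, with no container runtime involved. A minimal sketch using util-linux's unshare, assuming a Linux host with root access:

                # Start a shell in its own PID and mount namespaces - the same isolation
                # mechanism Docker builds on - without any container runtime
                sudo unshare --pid --fork --mount-proc bash

                # Inside that shell, ps only sees the processes in the new namespace
                ps aux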

            • scottalanmiller

              This is the same question I've been asking since the original post. You just assume that containers would be used, but never explain why. Now we've learned that you believe that you are using containers to replace VMs. But had the original post mentioned using one VM for each little tiny process we'd ask the same question - why do you have VMs?

              One VM, no containers, is all that you need. NGinx, Apache, MariaDB, Memcached, Varnish - all on a single VM, each running as an individual process. Any additional VMs or containers would just add overhead that will not benefit you. Complexity without value.
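
              A rough sketch of that standard single-VM layout (package and service names assume a Fedora/CentOS-style system and are only illustrative):

                  # Install the whole stack on one OS...
                  sudo dnf install -y nginx mariadb-server php-fpm memcached varnish

                  # ...and run each piece as an ordinary service; the OS keeps the processes apart
                  sudo systemctl enable --now nginx mariadb php-fpm memcached varnish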

              You would introduce VMs or containers here only if different parts of this needed to be split up, like if you were going to run the NGinx on one hardware platform and the MariaDB on another.

              • scottalanmiller

                And keep in mind that OSTicket is just code run by NGinx, not a separate thing being run. So you can't put OSTicket in one container and NGinx in another. Those two would be in the same processes regardless. OSTicket doesn't show up in the process list; it is just part of NGinx.

                So you'd only likely have two containers, NGinx and MariaDB. Millions of web servers run those two on the same OS all the time; it's standard. You are trying to reinvent the wheel here. There is a standard pattern for this that works really well; unless you have some unbelievably unique need that hasn't been mentioned, you can safely assume that you are not special and should just deploy like normal. This isn't even a unique application, it's a standard, well-known PHP application, so the possibility that your deployment need is unique enough to warrant what you are doing is extremely remote, even within the already dramatically unlikely set of scenarios we were in.

                • stacksofplates @scottalanmiller

                  @scottalanmiller said in Multiple Containers:

                  In many ways, VMs, Containers and Applications are all similar things. A VM is just a series of things converted to run as an application on top of a hypervisor. A Container is a series of applications converted to run as a simple application on an OS. An application is already an application running on an OS.

                  Once you want to make your containers so lean that they represent a single process, you are just adding overhead to applications for no purpose. There is nothing for the container to bundle; it's too small. This is when you just deploy applications.

                  The only purpose for containers would be if you needed to isolate bundles of things as individual units, not one Apache process and one MySQL process or whatever. Just run those processes and you are all set; the OS already keeps them apart from each other using the same underlying mechanism as containers.

                  To be fair, this is how a lot of the people spreading the information say containers should work. One service per container. Then, using something like CoreOS, etcd, flannel, and fleet/Kubernetes, the system can move the container to any system in your cluster.

                  (YouTube video embed)
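
                  For context, a minimal sketch of that one-service-per-container model with a scheduler moving things around, using kubectl (assuming an existing Kubernetes cluster; the names are illustrative):

                      # Run the service as a deployment; the scheduler places the pod on any node
                      kubectl create deployment web --image=nginx

                      # Scale out; Kubernetes spreads and reschedules pods across the cluster as nodes come and go
                      kubectl scale deployment web --replicas=3

                      # See which nodes the pods landed on
                      kubectl get pods -o wide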

                  • scottalanmiller @stacksofplates

                    @stacksofplates said in Multiple Containers:

                    To be fair, this is how a lot of the people spreading the information say containers should work. One service per container. Then, using something like CoreOS, etcd, and flannel, the system can move the container to any system in your cluster.

                    Sure, IF this is about autoscaling and physical load balancing... but I mentioned that 🙂 But moving containers around like that for databases doesn't really work. So this is only about the PHP application. And to make that work, each one needs a VM to be moved to. So that brings all of the overhead that he thought that he was avoiding back into the picture. So in the only scenario where it would make sense, it defeats the purpose for which it exists.

                    • Dashrender

                      This is just like the SAN conversations of yesteryear (and sadly still today). Sure, the tech is great - but 9 times outta 10, you just don't need it. It doesn't fit your situation.

                      • stacksofplates @scottalanmiller

                        @scottalanmiller said in Multiple Containers:

                        Sure, IF this is about autoscaling and physical load balancing... but I mentioned that 🙂 But moving containers around like that for databases doesn't really work. So this is only about the PHP application. And to make that work, each one needs a VM to be moved to. So that brings all of the overhead that he thought that he was avoiding back into the picture. So in the only scenario where it would make sense, it defeats the purpose for which it exists.

                        For the DB you'd most likely be using a backing store, so you should be able to migrate. But anyway, I'm not arguing with you. Just saying there is a legitimate reason people arrive at this conclusion. Even Docker is kind of confusing in its own definitions.

                        https://valdhaus.co/writings/docker-misconceptions/

                        There are great theoretical arguments for having a process per container, but in practice, it's a bit of a nightmare to actually manage. Perhaps at extremely large scales that approach makes more sense, but for most systems, you'll want role-based containers (app, db, redis, etc).

                        The app, db, and redis are all separate processes, so I don't know what they're saying here.
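
                        As an aside, a minimal sketch of the "backing store" idea for the DB container - a named Docker volume, so the data survives the container being recreated or moved (the names are illustrative):

                            # Keep the database files on a named volume rather than inside the container
                            docker volume create dbdata

                            docker run -d --name db \
                                -v dbdata:/var/lib/mysql \
                                -e MYSQL_ROOT_PASSWORD=example mariadb

                            # The container can be removed and recreated and the data in the volume persists
                            docker rm -f db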

                        • stacksofplates

                          Unless they are saying all of those are included in the container, which to me seems like a weird way to write that.

                          • scottalanmiller @stacksofplates

                            @stacksofplates said in Multiple Containers:

                            For the DB you'd most likely be using a backing store so you should be able to migrate. But anyway, I'm not arguing with you. Just saying there is a legitimate reason people arrive at this conclusion.

                            .... marketing.

                            Sadly it mostly just comes down to concept marketing. Everyone is talking about containers in IT, so now they seem like they will be the solution to everything. They have their place, but like SANs or ZFS: when we've had something for a decade, as we have with containers, and nobody cares until there is hype around it... it just can't be that important. Otherwise people would have been all over it long ago.

                            • scottalanmiller @stacksofplates

                              @stacksofplates said in Multiple Containers:

                              Unless they are saying all of those are included in the container, which to me seems like a weird way to write that.

                              I believe that this is the interpretation. At which point they are just using containers like lightweight VMs. Which is totally sensible if you have no need for VMs and can remove your VM infrastructure and replace it with a container one. Then a single container for the entire osTicket system makes total sense - but the container is just a VM. Which is what we've called them for a really long time... Type-C VMs.
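
                              For what it's worth, a minimal sketch of that "container as a lightweight VM" pattern using LXD system containers (assuming LXD is installed; the image alias and container name are illustrative):

                                  # Launch a full OS userland as a container - effectively a lightweight VM
                                  lxc launch ubuntu:22.04 osticket-host

                                  # Get a shell inside it and install the whole stack (web server, PHP, DB)
                                  # exactly as you would on an ordinary VM
                                  lxc exec osticket-host -- bash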

                              • stacksofplates

                                Ya there is no way Docker does as good of a job as SELinux at containing processes.

                                • scottalanmiller

                                  But the impression I have here is that he's just "wrapping" each process in a container, adding crazy amounts of overhead (more human than computer) and redundant encapsulation, and making interfacing between containers unnecessarily complex, so that loads of headroom are needed for what used to be light and fast. For example, you get stuck using the full network stack instead of a loopback or even a socket.

                                  • scottalanmiller @stacksofplates

                                    @stacksofplates said in Multiple Containers:

                                    Ya there is no way Docker does as good of a job as SELinux at containing processes.

                                    Exactly. More work, less benefit.
