Server4You Review
-
@johnhooks said:
@scottalanmiller said:
Why does it being a container make HTTP assumed?
It's not, but all of those things are done from the host, not the public-facing IP address.
Why would those things be from the host? Why run Docker if you bypass it and run services elsewhere?
-
Or if it's an SSH container, you give it -p 8022:22 and access it that way.
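For example, a rough sketch of that (the image name and host port here are just placeholders, not anything from this thread):

# publish the container's SSH port 22 on host port 8022
docker run -d --name ssh-box -p 8022:22 some-ssh-image

# then connect through the host's single public IP
ssh -p 8022 user@your-host-public-ip

The container only ever sees port 22; the host answers on 8022 and forwards the traffic in.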
-
@scottalanmiller said:
@johnhooks said:
@scottalanmiller said:
Why does it being a container make HTTP assumed?
It's not, but all of those things are done from the host, not the public-facing IP address.
Why would those things be from the host? Why run Docker if you bypass it and run services elsewhere?
Like the example above. To make changes in a MySQL database, you would SSH into the host, create a throwaway container to give you the MySQL prompt, then delete the container.
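Roughly, assuming the database container is named some-mysql and runs the official mysql image, that workflow looks something like this:

# one-off client container linked to the running database; --rm deletes it on exit
docker run -it --rm --link some-mysql:mysql mysql \
    sh -c 'exec mysql -h"$MYSQL_PORT_3306_TCP_ADDR" -uroot -p'

You get a MySQL prompt, make your changes, exit, and the throwaway container is gone.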
-
@johnhooks said:
Like the example above. To make changes in a MySQL database, you would SSH into the host, create a throwaway container to give you the MySQL prompt, then delete the container.
That doesn't make sense if MySQL is the service itself. You are exclusively thinking about modifying the service but not about consuming it.
-
Let's take a real world example... you have Redis running in a Docker container. It needs to talk to Redis instances around the world, and its Sentinel services need to do the same. How do you expose them?
-
Most of the data that changes would be stored on a volume on the host also. Just for a quick example, the /etc/nginx/conf.d folder would be stored in, say, /var/lib/nginx or whatever folder you create. That way you just add a conf file there and the container reads it. This keeps you from needing to access containers via SSH and keeps them light.
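Something like this, sticking with the folder names from the post (the image and port are assumptions):

# host folder mounted over the container's conf.d, so configs live on the host
docker run -d --name web -p 80:80 \
    -v /var/lib/nginx:/etc/nginx/conf.d:ro nginx

# after dropping a new .conf file into /var/lib/nginx on the host,
# signal nginx (PID 1 in the container) to reload its config
docker kill -s HUP web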
-
@johnhooks said:
Most of the data that changes would be stored on a volume on the host also. Just for a quick example, the /etc/nginx/conf.d folder would be stored in, say, /var/lib/nginx or whatever folder you create. That way you just add a conf file there and the container reads it. This keeps you from needing to access containers via SSH and keeps them light.
Why are you talking about SSH? You don't normally consume any services over SSH other than SSH itself. But in that case, say you are making SSH proxies, how are you going to do this without exposing SSH?
-
Then you just attach the ports. That was the whole point of the original post. You don't have multiple IP addresses, so either you use something like nginx to reverse proxy or you use hardcoded ports.
-
@scottalanmiller You could expose those services via simple port forwarding.
E.g. MyServerHost.My.Domain:5539 could point to my Redis instance running inside of a Docker container on port 5678 (port numbers pulled from a magic hat).
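As a sketch of that (keeping the made-up host port, but using Redis's default 6379 on the container side):

# publish the container's Redis port on an arbitrary host port
docker run -d --name redis-main -p 5539:6379 redis

# remote instances and Sentinels would then be pointed at the host, e.g.
redis-cli -h MyServerHost.My.Domain -p 5539 ping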
-
@johnhooks said:
Then you just attach the ports. That was the whole point of the original post. You don't have multiple IP addresses, so either you use something like nginx to reverse proxy or you use hardcoded ports.
Okay, so you are adding PAT in front of it and doing something really messy. Yes, that will work, and something like HAProxy is probably best. But doing odd ports is sloppy, having to have PAT in front of your hosted Docker instances is messy, and how do you handle graceful scaling? You can, but it becomes much more complicated.
-
@johnhooks said:
Then you just attach the ports.
You can do that without NGinx too, but in the case of HTTP we can see how messy that is.
-
Right. I'm only planning to use NGinX for the web interfaces in each of my docklets (is that even the right term? lol),
so my NGinX will reverse proxy for those.
In the event that I need to communicate between docklets, I will simply attach the services to various ports and connect that way. E.g. MySQL could be on port 3306 for my main instance, 5306 for a WordPress docklet, and 9958 for a PGSQL docklet... or whatever else I decide to set up.
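For example (image names, passwords, and the port picks are all just illustrative):

docker run -d --name mysql-main -e MYSQL_ROOT_PASSWORD=secret -p 3306:3306 mysql
docker run -d --name mysql-wp   -e MYSQL_ROOT_PASSWORD=secret -p 5306:3306 mysql
docker run -d --name pg-docklet -e POSTGRES_PASSWORD=secret   -p 9958:5432 postgres

Each container listens on its own default port internally; only the host-side numbers differ.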
-
@dafyre said:
Right. I'm only planning to use NGinX for the web interfaces in each of my docklets (is that even the right term? lol),
so my NGinX will reverse proxy for those.
In the event that I need to communicate between docklets, I will simply attach the services to various ports and connect that way. E.g. MySQL could be on port 3306 for my main instance, 5306 for a WordPress docklet, and 9958 for a PGSQL docklet... or whatever else I decide to set up.
Ya, it's cool. You don't even need to manage any of that. Just type
docker run --name some-app --link some-mysql:mysql -d some-app-image
and it links them together.
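If you're curious what the link actually injects, a quick throwaway check (this assumes a running container named some-mysql; busybox is just a convenient tiny image):

# dump the link's environment variables and hosts entry, then exit
docker run --rm --link some-mysql:mysql busybox \
    sh -c 'env | grep MYSQL_ ; cat /etc/hosts'

The linked container shows up as environment variables (MYSQL_PORT_3306_TCP_ADDR and friends) and as a "mysql" entry in /etc/hosts, so the app can simply connect to host "mysql" on 3306.
-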
I didn't mean to make this into a giant discussion. I agree that PAT is kind of clumsy, but it's how they have it documented. So if you need a specific service from a container to the outside world you do it with ports. Inter-container communication is done by linking the containers together. Controlling a container is done either by connecting to the container from the host and giving commands directly, by creating a throwaway container which links to the original container, or by using a shared volume on the host. This is all done behind one IP address.
With CoreOS you can link multiple hosts together with etcd and then have certain containers on certain hosts, but that's a whole different conversation.
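To give a flavor of the etcd side (a very rough sketch using the etcd v2 etcdctl that shipped with CoreOS; the key name and address are invented):

# on the host running the container, publish where the service lives
etcdctl set /services/redis/main '{"host": "10.0.0.5", "port": 6379}'

# any other host in the cluster can then look it up
etcdctl get /services/redis/main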
-
@johnhooks said:
I didn't mean to make this into a giant discussion. I agree that PAT is kind of clumsy, but it's how they have it documented. So if you need a specific service from a container to the outside world you do it with ports.
All communication is with ports. It's ports sharing a single IP that I've not seen done anywhere. Not that you can't, but it is very clumsy having to manage ports in that way for all systems.
-
@scottalanmiller said:
@johnhooks said:
I didn't mean to make this into a giant discussion. I agree that PAT is kind of clumsy, but it's how they have it documented. So if you need a specific service from a container to the outside world you do it with ports.
All communication is with ports. It's ports sharing a single IP that I've not seen done anywhere. Not that you can't, but it is very clumsy having to manage ports in that way for all systems.
I could be 100% wrong, but I think that's why it's been more of a dev tool and not exploded in the production area. However, with CoreOS and etcd that might be different.
-
@johnhooks said:
I could be 100% wrong, but I think that's why it's been more of a dev tool and not exploded in the production area. However, with CoreOS and etcd that might be different.
Docker is very much a production tool. I just left Change.org, where it is being used in production. Lots of devs use it, of course, but Docker is not being produced for development; it is for production.
-
@scottalanmiller said:
@johnhooks said:
I could be 100% wrong, but I think that's why it's been more of a dev tool and not exploded in the production area. However, with CoreOS and etcd that might be different.
Docker is very much a production tool. I just left Change.org, where it is being used in production. Lots of devs use it, of course, but Docker is not being produced for development; it is for production.
So how do they handle the port issue?
-
Containerization was developed by Sun (not Oracle) and has been the only way to deploy Solaris for a decade now. Linux has had production containers for almost as long.
-
@johnhooks said:
So how do they handle the port issue?
They don't run Docker on a single-IP-address VM. It's no different from how you host any VM: you get one IP per machine.