Hey Kevin, thanks for the info :-)
I am not quite sure if I am right, but maybe you should use "expose" instead of "ports".
From the docs: "Expose ports without publishing them to the host machine - they'll only be accessible to linked services. Only the internal port can be specified."
Nevertheless, I didn't know that Docker somehow bypasses the firewall when a port is exposed to the host.
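For reference, a minimal Compose sketch of the difference (the service name and image tag are just placeholders):

```yaml
services:
  search:
    image: elasticsearch:7.17.0
    # "expose" keeps the port reachable only from other containers
    # on the same Docker network; nothing is published on the host:
    expose:
      - "9200"
    # whereas "ports" would publish it on the host as well:
    # ports:
    #   - "9200:9200"
```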
Absolutely. Using expose, or no port binding at all, is the safer approach. Unfortunately, this does not work if you want to expose ports via a proxy, which was the case for me. :(
When you use 9200:9200 you are in fact using 0.0.0.0:9200:9200, and this is a design flaw in Docker, because 0.0.0.0 will expose you to the host and to the world.
Regarding expose, I think it is only there for backward compatibility with the deprecated --links option, in order to allow inter-container communication inside the Docker network, not for communication with the machine hosting the Docker engine.
Here the use case is to really expose to the host what is running inside the container, so it really needs to use ports, but always with the 127.0.0.1 prefix.
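A minimal Compose sketch of that loopback-prefixed binding (the service name and image tag are placeholders):

```yaml
services:
  search:
    image: elasticsearch:7.17.0
    ports:
      # Bind only on the host's loopback interface instead of 0.0.0.0,
      # so the port is reachable from the host itself (e.g. by a local
      # reverse proxy) but not from the outside world:
      - "127.0.0.1:9200:9200"
```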
Docker is NOT bypassing the firewall. It creates rules inside the kernel to redirect traffic that reaches the host, from the host's specific port to the app inside the container. These rules are evaluated before your filter rules, because the NAT redirect is done before the kernel starts checking the filter table rules. So if the container responds to the packet saying "it is for me", the kernel says "handle it" and moves on to the next packet. Otherwise it goes on checking the other rules until either one matches or the default action applies - which on most Linux OSs is ALLOW.
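If you do want your own filtering to apply to published ports despite that redirect, Docker provides the DOCKER-USER chain, which it evaluates before its own forwarding rules. A hedged sketch of such a firewall rule - the interface name eth0 and the allowed 10.0.0.0/8 range are assumptions you would replace with your own values:

```shell
# Insert a rule into DOCKER-USER (evaluated before Docker's own rules),
# dropping traffic arriving on eth0 unless it comes from 10.0.0.0/8.
# Requires root and an iptables-based Docker setup; adjust to your network.
iptables -I DOCKER-USER -i eth0 ! -s 10.0.0.0/8 -j DROP
```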
Ah okay, I get it. Thanks for your explanations. I just thought the proxy would be in the same Docker network.