Allow user to choose the IP address for the container #19001
Conversation
Nice! Is an IP address reserved for a container even if it's stopped, or can another container claim it during that period?
@thaJeztah To cover the scenario you detailed, the user will define an IP address for the container. This will assure no other dynamically-addressed container grabs that address while the container is stopped.
@aboch thanks, just wondering what people can expect from this feature; clear!
@thaJeztah
A possible design question is whether we want to have all network options directly as options on docker run, or group them as a
@thaJeztah Regarding the driver options (IPAM driver or network driver), we allow the user to specify them only during network create. In other words, as of now we only allow per-network driver-specific options; we do not allow per-container driver-specific options (so none during docker run).
@aboch From a design standpoint, I think we should allow --ip or --ip6 only if the user has created the network with a proper --subnet. If the user has not specified a subnet while creating the network, then there is no guarantee that the network will have the same subnet after a daemon restart, and that would affect any container with a specified --ip.

Also, we have to check whether we should support this feature for any network whose subnet can potentially change. For example, the default bridge network (docker0) is the only network for which the user can change --bip across a restart, and any container with a specified --ip would be impacted. WDYT?
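Under that constraint, the intended usage would look roughly like the following (network name, subnet, image, and address are illustrative, not taken from the PR):

```shell
# Create a user-defined network with an explicit subnet, so the
# address range is stable across daemon restarts.
docker network create --subnet 172.25.0.0/16 mynet

# Request a fixed address inside that subnet for the container.
docker run -d --net mynet --ip 172.25.0.10 --name web nginx
```

If --subnet is omitted at network creation, the PR rejects a user-specified --ip, per the subnet check referenced later in this thread.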
@mavenugo Will make the change.
@aboch @mavenugo FYI, we were briefly discussing this feature, and, although there are lots of
We're struggling a bit with whether there are valid use cases; I came up with:
The last one could probably be resolved with a DHCP IPAM driver. Wondering what your thoughts are on this; are there valid use cases, or is this a feature "because we can"? Just adding this, happy to hear your thoughts (and best wishes for the new year!)
Not sure I followed; please correct me if I got it wrong. Following is my reply:
True, if a user deploys a DHCP-based IPAM plugin, then I guess they can indirectly achieve this by setting the MAC address for the container and having a static, never-expiring MAC-to-IP binding in the DHCP DB.
We simply can't forecast all use cases:
My biggest concern with not allowing users to directly define IP stability for their containers of choice is that it limits the composability of the networking feature blocks.
@aboch thanks for adding your thoughts, that's useful.
Oh, right, I thought I'd seen a comment recently that we now had more stable IP addresses for containers; I probably misread that.
👍 would be super useful for swarm
As @aboch commented earlier, the capability to specify IP addresses is really critical. In addition, because the IPAM plugin is provided with virtually no context about the container (besides the requested network), it's not possible to use it to achieve more complex IP allocation schemes. In those cases, an entity outside of docker will manage the IPs and potentially influence placement based on networking requirements.
Thanks for adding your use-case @jc-m 👍
I played around with this a bit. Seemed to work fine, but one thing I noticed is that
    if container.HostConfig.NetworkMode.IsContainer() {
        return runconfig.ErrConflictSharedNetwork
    }

    if !container.HostConfig.NetworkMode.IsUserDefined() && networktypes.HasUserDefinedIPAddress(endpointConfig) {
Adding to @tonistiigi's comment, I think it is incorrect to check HostConfig.NetworkMode here; it must be the network that is being connected to, using containertypes.NetworkMode(idOrName).IsUserDefined.

If we are going down this path, then I think it is proper to also check whether the network has a configured subnet. The user should not depend on an auto-generated subnet when choosing a preferred IP; it can cause the same container-restart issues.
Thanks @tonistiigi @mavenugo, yes, that check is wrong; it needs to check the network which we are connecting to.
@mavenugo I am already checking for the subnet being configured: https://github.com/docker/docker/pull/19001/files#diff-0f20873a38571444bac38770160648a5R715
@tonistiigi Thanks for finding the issue.
LGTM
oh my gosh i have wanted this feature. actually i think i remember @vishh implementing this too, maybe he can take a look as well :)
I don't get something. If I create a network with ..., then how does @jfrazelle's public-IPs case work? If it is still NATing and proxying, a public or private IP won't be visible. How does it work with public (or at least natively fabric-visible) IPs?
you pass the subnet and gateway for your public cidr when you create a new network
@deitch
@jfrazelle bridge to what? That is what I don't get. The bridge just passes L2 traffic along to its various ports; Linux host NAT acts like the L3 router. Or are you saying that the bridge with

@aboch exactly what I don't get. If it is a public, fabric-addressable IP, then

In one client of mine, we use fixed IPs and do pipework magic (@jpetazzo rocks) to wire up a macvlan link on the interface, so each container has its own IP directly on the fabric. Then I don't need

I am missing something here. Let me try it this way. Let's say the underlying fabric is 10.0.0.0/16 and the host is at 10.0.100.10/16. The default docker bridge means container A gets 172.16.25.50/24, and all packets route via the host, which uses ip_forward to forward the packets and NATs them out. Inbound packets to, say, port 80 come to 10.0.100.10:80, where an iptables NAT rule translates to 172.16.25.50:80, which the host then uses ip_forward to hand to the docker0 bridge, and the container receives them. What happens with the new option? I create a network

What am I missing? NOTE: I used
I'm curious as well, is this basically 1:1 NAT?
you should get a public cidr range and try it out, i have no idea how to
@jfrazelle yeah, I guess we could, but I'd much rather just know how it works. :-)
OK, I did install it. Spun up a Digital Ocean Ubuntu instance and installed docker 1.10.0-rc3. I don't see any difference at all. It has another bridge named

I get how it is good to control the range... but how does it help me use IPs visible on the fabric or the Internet? @aboch when you said you could use a public IP, what does that mean?
@deitch As long as the public IP subnet you chose when creating the docker network is advertised outside of the host, you will be able to route to any container on that network.
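"Advertised outside of the host" can be as simple as a static route on the upstream router (or on fabric peers) pointing the container subnet at the docker host. A sketch, reusing the illustrative fabric addresses from earlier in the thread (the container subnet here is made up):

```shell
# On the upstream router / a fabric peer of the 10.0.0.0/16 fabric:
# route the container subnet to the docker host at 10.0.100.10.
ip route add 10.0.200.0/24 via 10.0.100.10
```

Longest-prefix matching makes the /24 win over the connected /16, so traffic for the containers is handed to the host, which forwards it onto the bridge.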
@deitch Google Compute Engine will allow you to give a host a public range, from what I hear. I have a pretty decent block at my home server rack, but I don't have any spare physical hosts to experiment with this on to tell you what happens. I looked over the PR, and it looks like it will basically be 1:1 NAT'd, so yes, you will still have a 'network' like before, but that network will be transparent for hosts inside and outside the physical host. It looks like it uses iptables on the host to set up that 1:1 NAT. Can someone tell me if I'm right or wrong?
@aboch how does that work? When you say "advertised outside of the host", do you mean that there is a route that points it to that host, so essentially, in my example,
@withinboredom, what do you mean by "1:1 nat"?
A quick google search of "one to one nat" can explain it better than I can ... but basically:
As you can see, you still need a network interface to talk to the router. You also need something to do the NATing ... but the internal IP matches the external IP, so it seems as though everything is external even though it's internal. This makes DNS super easy, with the only caveat that you'd better be running a firewall somewhere in the chain, or you may find yourself having a world of fun.
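One-to-one NAT of the kind described above amounts to a DNAT/SNAT rule pair in iptables. A hypothetical sketch (both addresses are made up: 203.0.113.10 as the public address, 172.25.0.10 as the container's internal address):

```shell
# Inbound: anything addressed to the public IP is rewritten
# to the container's internal address.
iptables -t nat -A PREROUTING  -d 203.0.113.10 \
  -j DNAT --to-destination 172.25.0.10

# Outbound: traffic leaving the container gets the public
# source address, so the mapping looks symmetric from outside.
iptables -t nat -A POSTROUTING -s 172.25.0.10 \
  -j SNAT --to-source 203.0.113.10
```

Because every port on the public IP maps to the same internal host, no per-port rules are needed; hence the firewall caveat above.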
yeah so ovh when you buy their extra ips hooks up the router but all the
@withinboredom I get that, I meant how does it play in here. What do you gain from the NAT, then? All you really are doing is routing. That is largely how Calico works. But if you are doing NAT, why even use public IPs? You could just as easily use private ones and put the public ones on the host. Or is that what they mean by this?

@jfrazelle "when you buy extra IPs, they hook them up on the router", but you still need to map the incoming (and maybe the outgoing) traffic to an internal address, n'est-ce pas?
One disclaimer: In other words, the discussion about the different container networking strategies does not belong here. Feel free to open an issue in docker/libnetwork or docker/docker. @deitch
No literally all I did was what was in the blog post, no configuration on
@aboch thanks
@aboch thanks. So the networking strategies remain the same. The apparent usage of directly accessible public IPs was what had thrown me off.

@jfrazelle thanks. Got it now. Enjoying the posts by the way (your excitement and energy are a little impressive).

@aboch so if you want to use public IPs (i.e. IPs that are valid on the underlying fabric, rather than on the bridge or other overlay), then that is a different networking strategy. The way it is done now is mostly pipework with macvlan or similar, but as a different strategy, it would be a different networking plugin. Thanks for clarifying. Much appreciated.
great!
need more instructions please
I am a bit late to the party, but here it goes. I find the inability to specify the IP on a pre-defined network quite limiting. In my specific case, I am pre-configuring a bridge that docker will get to use. Said bridge is exposed to the LAN, and my goal is to have all containers at the same level as any other network node (some call this using docker as VMs). The fixed IP will then be used for QoS and other IP-centered activities. The problem is that a set of design decisions prevents users from achieving this:
Am I left with any option? Perhaps specifying the container IP should be allowed, provided that the docker daemon is launched with a subnet specification (as a matter of fact it always is when a bridge is specified, since the subnet is inferred from the bridge netmask). Any chance to account for this?
There might not be a way to satisfy your requirements, but I am posting something I did not see mentioned in your comment: you can create a user-defined network using an existing Linux bridge (as you mentioned, via driver options). At network creation, if you also properly specify the subnet and network gateway, the IP of the bridge won't be changed.
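Assuming a pre-existing bridge named br0 (the bridge name, subnet, gateway, and network name here are illustrative), that looks roughly like:

```shell
# Reuse an existing Linux bridge via the bridge driver option,
# pinning the subnet and gateway so the bridge's IP is left alone.
docker network create -d bridge \
  -o com.docker.network.bridge.name=br0 \
  --subnet 192.168.1.0/24 \
  --gateway 192.168.1.1 \
  lan
```

With the subnet fixed this way, containers on the network can also be given stable --ip addresses on the LAN.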
@aboch that is true, but as soon as I stop the docker daemon or delete the user-defined network, my bridge is deleted. Also, I suspect (haven't tried yet) this will impact the routing table, disregarding the existing rules involving the pre-defined bridge.

Perhaps the ideal solution would be to allow a -b switch on network creation, as suggested by someone else in #20349. This would behave similarly to launching the daemon with -b (don't touch the configuration, don't delete the bridge on shutdown). I have since found out that specifying -b=none on daemon launch does not create the bridge. However, that fact alone doesn't solve the problem, as I am not able to create an exact copy of it using network create. It boils down to:
@gedl Regarding
Be aware that moby/libnetwork#1301 was merged, and the change will for sure be part of docker 1.13.
Fixes #6743
Fixes #18297 (from libnetwork vendoring)
Fixes #18910 (from libnetwork vendoring)
This PR will allow the user to choose the IP address(es) for the container during docker run and docker network connect. The configuration is persisted across container restarts. Example:
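The original example was not preserved in this capture; a hypothetical invocation of the flags this PR introduces (network name, image, and all addresses are illustrative) would look like:

```shell
# The target network must be created with an explicit subnet.
docker network create --subnet 172.25.0.0/16 mynet

# Choose the container's address at run time...
docker run -d --net mynet --ip 172.25.5.3 --name c1 busybox top

# ...or when connecting an existing container to a network.
docker network connect --ip 172.25.5.4 mynet c2
```

Because the configuration is persisted, the container comes back with the same address after a restart.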