This document proposes adding an option to kube-proxy for specifying the NodePort IP range.
A NodePort-type Service gives developers the freedom to set up their own load balancers and to expose one or more nodes' IPs directly. The Service is reachable on the nodes' IPs. Currently, the NodePort addresses are the IPs of all available interfaces.
With iptables magic, all the IPs whose dst-type is LOCAL are taken as NodePort addresses, which might look like:

```
Chain KUBE-SERVICES (2 references)
target          prot opt source     destination
KUBE-NODEPORTS  all  --  0.0.0.0/0  0.0.0.0/0    /* kubernetes service nodeports; NOTE: this must be the last rule in this chain */ ADDRTYPE match dst-type LOCAL
```
By default, kube-proxy accepts everything on NodePorts without any filter. This can be a problem for nodes that have both public and private NICs, where people want to provide a service only on the private network and avoid exposing any internal service on the public IPs.
This proposal builds on earlier requests such as "[proxy] Listening on a specific IP for nodePort", but proposes that we find a way to tell kube-proxy what the NodePort IP blocks are, instead of a single IP.
There should be an admin option to kube-proxy, --nodeport-addresses, for specifying which IPs serve NodePorts. The option is a list of IP blocks used to select the interfaces on which NodePorts work. If someone would like to expose a service on localhost for local access and on some other interfaces for a particular purpose, an array of IP blocks makes that possible. People can populate it with their private subnets, the same on every node.
--nodeport-addresses defaults to 0.0.0.0/0, which selects all available interfaces and is consistent with the current NodePort behaviour.
If people set the --nodeport-addresses option to 127.0.0.0/8, kube-proxy will only select the loopback interface for NodePorts. If people set the --nodeport-addresses option to default-route, kube-proxy will select the interfaces that have the default route. It's the same heuristic we use for --advertise-address in kube-apiserver and others.
If people provide a non-zero IP block for --nodeport-addresses, kube-proxy will filter it down to just the IPs that apply to the node.
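Hypothetically, that filtering step could look like the following Go sketch. The function and variable names are illustrative, not kube-proxy's actual code:

```go
package main

import (
	"fmt"
	"net"
)

// filterNodeIPs returns the subset of a node's IPs that fall inside any of
// the configured --nodeport-addresses blocks. This is a hypothetical helper,
// not the real kube-proxy implementation.
func filterNodeIPs(cidrs []string, nodeIPs []string) ([]string, error) {
	var nets []*net.IPNet
	for _, c := range cidrs {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			return nil, fmt.Errorf("invalid block %q: %v", c, err)
		}
		nets = append(nets, n)
	}
	var selected []string
	for _, s := range nodeIPs {
		ip := net.ParseIP(s)
		if ip == nil {
			continue
		}
		for _, n := range nets {
			if n.Contains(ip) {
				selected = append(selected, s)
				break
			}
		}
	}
	return selected, nil
}

func main() {
	ips, _ := filterNodeIPs(
		[]string{"127.0.0.0/8", "192.168.0.0/16"},
		[]string{"127.0.0.1", "192.168.3.4", "10.0.0.7"},
	)
	fmt.Println(ips) // 10.0.0.7 falls outside both blocks and is dropped
}
```

With these blocks, only the loopback and 192.168.x.x addresses would be selected for NodePort rules.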
So, the following values for --nodeport-addresses are all valid:

```
0.0.0.0/0
127.0.0.0/8
default-route
127.0.0.1/32,default-route
127.0.0.0/8,192.168.0.0/16
```
An empty string for --nodeport-addresses is considered invalid.
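The validation rules above can be sketched in Go. The parser name is hypothetical; the accepted values and the rejection of the empty string follow the proposal:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseNodePortAddresses validates a --nodeport-addresses value: a
// comma-separated list of CIDR blocks, where the keyword "default-route"
// is also accepted, and the empty string is rejected. Illustrative sketch,
// not kube-proxy's actual flag parser.
func parseNodePortAddresses(value string) ([]string, error) {
	if value == "" {
		return nil, fmt.Errorf("--nodeport-addresses must not be empty")
	}
	var blocks []string
	for _, part := range strings.Split(value, ",") {
		part = strings.TrimSpace(part)
		if part == "default-route" {
			blocks = append(blocks, part)
			continue
		}
		if _, _, err := net.ParseCIDR(part); err != nil {
			return nil, fmt.Errorf("invalid CIDR %q: %v", part, err)
		}
		blocks = append(blocks, part)
	}
	return blocks, nil
}

func main() {
	blocks, err := parseNodePortAddresses("127.0.0.1/32,default-route")
	fmt.Println(blocks, err)
}
```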
NOTE: There is already an option --bind-address, but it has nothing to do with NodePorts, and we need IP blocks instead of a single IP.
kube-proxy will periodically refresh the proxy rules based on the list of IP blocks specified by --nodeport-addresses, to handle cases such as DHCP. For example, if the IP address of eth0 changes from 192.168.3.4 to another address within 192.168.0.0/16 and the user specifies --nodeport-addresses=192.168.0.0/16, kube-proxy will make sure the proxy rules matching -d 192.168.0.0/16 exist.
However, if the IP address of eth0 changes from 192.168.3.4 to an address outside 192.168.0.0/16 and the user only specifies --nodeport-addresses=192.168.0.0/16, kube-proxy will NOT create proxy rules for the new address, even if eth0 has the default route. In the DHCP use case, the network administrator usually reserves a RANGE of IP addresses for the DHCP server, so an IP address change will always fall within that range. That is to say, the IP address of an interface will not change from 192.168.3.4 to an address outside the specified range in our example.
The implementation is simple.

**iptables proxier**: iptables supports specifying a CIDR in the destination parameter (-d). For the special default-route case, we should use the -i option in the iptables command, e.g. -i eth0.
**ipvs proxier**: Same as iptables for the iptables rules. Create IPVS virtual services one by one according to the provided node IPs, which is almost the same as the current behaviour (fetching all IPs from the host).
**userspace proxier**: Create multiple goroutines; each goroutine listens on a specific node IP to serve NodePorts.
**winuserspace proxier**: Node IPs need to be specified here; the current behaviour is to leave the VIP empty so the node IP is selected automatically.
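For the iptables portion of the implementation, the rule generation could be sketched as emitting one jump rule per configured block: a -d match for CIDR blocks and an -i interface match for the default-route case. The helper name is hypothetical, and the chain names mirror the proposal's example output; this is not kube-proxy's actual rule writer:

```go
package main

import "fmt"

// nodePortRuleArgs builds the iptables argument list for each configured
// --nodeport-addresses block. Hypothetical sketch: CIDR blocks become
// destination matches, and the default-route keyword becomes an interface
// match on the interface holding the default route.
func nodePortRuleArgs(blocks []string, defaultRouteIface string) [][]string {
	var rules [][]string
	for _, b := range blocks {
		args := []string{"-A", "KUBE-SERVICES"}
		if b == "default-route" {
			// Match traffic arriving on the default-route interface.
			args = append(args, "-i", defaultRouteIface)
		} else {
			// Match traffic destined to addresses within the block.
			args = append(args, "-d", b)
		}
		args = append(args, "-j", "KUBE-NODEPORTS")
		rules = append(rules, args)
	}
	return rules
}

func main() {
	for _, r := range nodePortRuleArgs([]string{"192.168.0.0/16", "default-route"}, "eth0") {
		fmt.Println(r)
	}
}
```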