Description
What happened:
We run kube-proxy in ipvs mode on RHEL 8.1. Upgrading from kubernetes 1.15.3 to 1.18.1 introduces an additional 1 second of latency when making an HTTP call from host -> service IP -> pod, when the pod is running on a different host than the host where the call is made. No additional latency is observed when making the same HTTP call from host -> pod, from pod -> service IP -> pod, or when the pod is running on the same host where the call is being made.
This 1 second of additional latency is consistently seen on every new connection and was traced to the first SYN packet being dropped and retransmitted. Furthermore, this appears to be related to the --random-fully flag introduced by this PR (#78547); removing the flag gets rid of the additional latency.
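As a quick way to check whether a given node has the flag in place, the kube-proxy SNAT rule can be inspected directly (a minimal sketch; it assumes the standard KUBE-POSTROUTING chain that kube-proxy programs):
# Print the kube-proxy SNAT rules from the nat table; on 1.18.1 the
# MASQUERADE rule carries --random-fully, on 1.15.3 it does not.
sudo iptables-save -t nat | grep -- '-A KUBE-POSTROUTING'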
What you expected to happen:
No additional latency when calling host -> service IP -> pod
How to reproduce it (as minimally and precisely as possible):
On a multi-host kubernetes cluster, run nginx and expose it as a service:
[cloud-user@kube-worker-1 ~]$ kubectl run --image=nginx nginx-app --port=80
pod/nginx-app created
[cloud-user@kube-worker-1 ~]$ kubectl expose pod nginx-app --port=80 --name=nginx-http
service/nginx-http exposed
[cloud-user@kube-worker-1 ~]$ kubectl get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default nginx-app 1/1 Running 0 23s 198.18.102.10 kube-worker-2 <none> <none>
kube-system kube-state-metrics-6c574db878-ss5vt 1/1 Running 0 20d 198.18.77.2 kube-worker-3 <none> <none>
[cloud-user@kube-worker-1 ~]$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 198.19.0.1 <none> 443/TCP 21m
default nginx-http ClusterIP 198.19.61.33 <none> 80/TCP 25s
kube-system kube-state-metrics ClusterIP None <none> 8080/TCP,8081/TCP 20d
[cloud-user@kube-worker-1 ~]$ kubectl get endpoints --all-namespaces
NAMESPACE NAME ENDPOINTS AGE
default kubernetes 10.34.2.112:6443,10.34.6.23:6443,10.34.8.81:6443 21m
default nginx-http 198.18.102.10:80 33s
kube-system kube-controller-manager <none> 20d
kube-system kube-scheduler <none> 20d
kube-system kube-state-metrics 198.18.77.2:8081,198.18.77.2:8080 20d
Trying to curl `host -> service IP -> pod`
[cloud-user@kube-worker-1 ~]$ cat > time-format.txt <<EOF
> time_namelookup: %{time_namelookup}s\n
> time_connect: %{time_connect}s\n
> time_appconnect: %{time_appconnect}s\n
> time_pretransfer: %{time_pretransfer}s\n
> time_redirect: %{time_redirect}s\n
> time_starttransfer: %{time_starttransfer}s\n
> ----------\n
> time_total: %{time_total}s\n
> EOF
[cloud-user@kube-worker-1 ~]$ curl -I -w @time-format.txt 198.19.61.33
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Wed, 06 May 2020 22:32:45 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
time_namelookup: 0.000024s
time_connect: 1.010933s
time_appconnect: 0.000000s
time_pretransfer: 1.010992s
time_redirect: 0.000000s
time_starttransfer: 1.011581s
----------
time_total: 1.011689s
[cloud-user@kube-worker-1 ~]$
Notice that the connect took 1.010933s.
Running tcpdump shows the retransmitted SYN packet after 1 second:
$ sudo tcpdump -i flannel.1 -nn
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on flannel.1, link-type EN10MB (Ethernet), capture size 262144 bytes
19:31:37.438631 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [S], seq 265989161, win 43690, options [mss 65495,sackOK,TS val 2631200673 ecr 0,nop,wscale 7], length 0
19:31:38.493883 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [S], seq 265989161, win 43690, options [mss 65495,sackOK,TS val 2631201728 ecr 0,nop,wscale 7], length 0
19:31:38.494723 IP 198.18.102.10.80 > 198.18.103.0.13758: Flags [S.], seq 222967005, ack 265989162, win 27960, options [mss 1410,sackOK,TS val 2955153471 ecr 2631201728,nop,wscale 7], length 0
19:31:38.494798 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [.], ack 1, win 342, options [nop,nop,TS val 2631201729 ecr 2955153471], length 0
19:31:38.494953 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [P.], seq 1:78, ack 1, win 342, options [nop,nop,TS val 2631201729 ecr 2955153471], length 77: HTTP: HEAD / HTTP/1.1
19:31:38.495124 IP 198.18.102.10.80 > 198.18.103.0.13758: Flags [.], ack 78, win 219, options [nop,nop,TS val 2955153471 ecr 2631201729], length 0
19:31:38.495216 IP 198.18.102.10.80 > 198.18.103.0.13758: Flags [P.], seq 1:240, ack 78, win 219, options [nop,nop,TS val 2955153472 ecr 2631201729], length 239: HTTP: HTTP/1.1 200 OK
19:31:38.495230 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [.], ack 240, win 350, options [nop,nop,TS val 2631201730 ecr 2955153472], length 0
19:31:38.495392 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [F.], seq 78, ack 240, win 350, options [nop,nop,TS val 2631201730 ecr 2955153472], length 0
19:31:38.495579 IP 198.18.102.10.80 > 198.18.103.0.13758: Flags [F.], seq 240, ack 79, win 219, options [nop,nop,TS val 2955153472 ecr 2631201730], length 0
19:31:38.495599 IP 198.18.103.0.13758 > 198.18.102.10.80: Flags [.], ack 241, win 350, options [nop,nop,TS val 2631201730 ecr 2955153472], length 0
^C
11 packets captured
11 packets received by filter
0 packets dropped by kernel
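To make the retransmission easier to spot, the capture can be limited to bare SYNs (a sketch, assuming the same flannel.1 interface and port 80 backend as above):
# Show only SYN packets (SYN set, ACK clear) headed for the nginx backend;
# a second SYN with the same sequence number about 1s later is the retransmit.
sudo tcpdump -i flannel.1 -nn 'tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0 and dst port 80'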
Similarly tcpretrans shows the retransmission.
$ sudo ./tcpretrans
Tracing retransmits ... Hit Ctrl-C to end
TIME PID IP LADDR:LPORT T> RADDR:RPORT STATE
19:34:12 0 4 198.19.61.33:43826 R> 198.19.61.33:80 SYN_SENT
However, as best as I can tell, neither tcpdrop nor dropwatch shows the kernel dropping the SYN packet, and conntrack does not show any insert_failed entries:
$ sudo conntrack -S
cpu=0 found=325 invalid=19385 ignore=11274976 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=310036
cpu=1 found=203 invalid=1364578 ignore=10818506 insert=0 insert_failed=0 drop=0 early_drop=0 error=0 search_restart=513826
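To double-check, the counters can be snapshotted around a single slow connect (a minimal sketch; the file paths are arbitrary):
# Diff the conntrack stats across one reproduction of the slow connect;
# insert_failed and drop stay at 0 even though the first SYN is retransmitted.
sudo conntrack -S > /tmp/ct-before.txt
curl -sI 198.19.61.33 > /dev/null
sudo conntrack -S > /tmp/ct-after.txt
diff /tmp/ct-before.txt /tmp/ct-after.txt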
The iptables rules
[cloud-user@kube-worker-1 ~]$ sudo iptables-save
# Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-FORWARD - [0:0]
-A FORWARD -m comment --comment "kubernetes forwarding rules" -j KUBE-FORWARD
-A FORWARD -s 198.18.0.0/16 -j ACCEPT
-A FORWARD -d 198.18.0.0/16 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding rules" -m mark --mark 0x4000/0x4000 -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
COMMIT
# Completed on Wed May 6 20:05:43 2020
# Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
*nat
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:KUBE-MARK-DROP - [0:0]
:KUBE-MARK-MASQ - [0:0]
:KUBE-KUBELET-CANARY - [0:0]
:KUBE-SERVICES - [0:0]
:KUBE-POSTROUTING - [0:0]
:KUBE-FIREWALL - [0:0]
:KUBE-NODE-PORT - [0:0]
:KUBE-LOAD-BALANCER - [0:0]
-A PREROUTING -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A POSTROUTING -m comment --comment "kubernetes postrouting rules" -j KUBE-POSTROUTING
-A POSTROUTING ! -d 198.18.0.0/15 -j MASQUERADE
-A OUTPUT -m comment --comment "kubernetes service portals" -j KUBE-SERVICES
-A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
-A KUBE-SERVICES -m comment --comment "Kubernetes service cluster ip + port for masquerade purpose" -m set --match-set KUBE-CLUSTER-IP src,dst -j KUBE-MARK-MASQ
-A KUBE-SERVICES -m addrtype --dst-type LOCAL -j KUBE-NODE-PORT
-A KUBE-SERVICES -m set --match-set KUBE-CLUSTER-IP dst,dst -j ACCEPT
-A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
-A KUBE-FIREWALL -j KUBE-MARK-DROP
-A KUBE-LOAD-BALANCER -j KUBE-MARK-MASQ
COMMIT
# Completed on Wed May 6 20:05:43 2020
# Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
*mangle
:PREROUTING ACCEPT [0:0]
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
:POSTROUTING ACCEPT [0:0]
:KUBE-KUBELET-CANARY - [0:0]
COMMIT
ipvs rules
[cloud-user@kube-worker-1 ~]$ sudo ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 198.19.0.1:443 rr
-> 10.34.2.112:6443 Masq 1 0 0
-> 10.34.6.23:6443 Masq 1 0 0
-> 10.34.8.81:6443 Masq 1 0 0
TCP 198.19.61.33:80 rr
-> 198.18.102.10:80 Masq 1 0 0
ipset rules
Name: KUBE-LOAD-BALANCER-SOURCE-CIDR
Type: hash:ip,port,net
Revision: 7
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 384
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-LOCAL-TCP
Type: bitmap:port
Revision: 3
Header: range 0-65535
Size in memory: 8300
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-SCTP-HASH
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-LOCAL-SCTP-HASH
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-LOAD-BALANCER-FW
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-LOAD-BALANCER-SOURCE-IP
Type: hash:ip,port,ip
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 128
References: 0
Number of entries: 0
Members:
Name: KUBE-LOAD-BALANCER
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-LOOP-BACK
Type: hash:ip,port,ip
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 128
References: 0
Number of entries: 0
Members:
Name: KUBE-EXTERNAL-IP-LOCAL
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-EXTERNAL-IP
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-LOCAL-UDP
Type: bitmap:port
Revision: 3
Header: range 0-65535
Size in memory: 8300
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-TCP
Type: bitmap:port
Revision: 3
Header: range 0-65535
Size in memory: 8300
References: 0
Number of entries: 0
Members:
Name: KUBE-NODE-PORT-UDP
Type: bitmap:port
Revision: 3
Header: range 0-65535
Size in memory: 8300
References: 0
Number of entries: 0
Members:
Name: KUBE-CLUSTER-IP
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 248
References: 2
Number of entries: 2
Members:
198.19.0.1,tcp:443
198.19.61.33,tcp:80
Name: KUBE-LOAD-BALANCER-LOCAL
Type: hash:ip,port
Revision: 5
Header: family inet hashsize 1024 maxelem 65536
Size in memory: 120
References: 0
Number of entries: 0
Members:
The interfaces/addresses
[cloud-user@kube-worker-1 ~]$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether fa:16:3e:ff:df:46 brd ff:ff:ff:ff:ff:ff
inet 10.34.8.132/18 brd 10.34.63.255 scope global dynamic eth0
valid_lft 48497sec preferred_lft 48497sec
inet6 fe80::f816:3eff:feff:df46/64 scope link
valid_lft forever preferred_lft forever
6: cni0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default qlen 1000
link/ether 9a:0d:e0:ee:97:85 brd ff:ff:ff:ff:ff:ff
inet 198.18.103.1/24 brd 198.18.103.255 scope global cni0
valid_lft forever preferred_lft forever
inet6 fe80::980d:e0ff:feee:9785/64 scope link
valid_lft forever preferred_lft forever
14: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
link/ether ca:63:57:12:28:7e brd ff:ff:ff:ff:ff:ff
inet 198.19.0.1/32 brd 198.19.0.1 scope global kube-ipvs0
valid_lft forever preferred_lft forever
inet 198.19.61.33/32 brd 198.19.61.33 scope global kube-ipvs0
valid_lft forever preferred_lft forever
15: flannel.1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default
link/ether 7e:77:8a:72:91:29 brd ff:ff:ff:ff:ff:ff
inet 198.18.103.0/32 scope global flannel.1
valid_lft forever preferred_lft forever
inet6 fe80::7c77:8aff:fe72:9129/64 scope link
valid_lft forever preferred_lft forever
The routing table
[cloud-user@kube-worker-1 ~]$ ip route list
default via 10.34.0.1 dev eth0
10.34.0.0/18 dev eth0 proto kernel scope link src 10.34.8.132
198.18.77.0/24 via 198.18.77.0 dev flannel.1 onlink
198.18.87.0/24 via 198.18.87.0 dev flannel.1 onlink
198.18.89.0/24 via 198.18.89.0 dev flannel.1 onlink
198.18.102.0/24 via 198.18.102.0 dev flannel.1 onlink
198.18.103.0/24 dev cni0 proto kernel scope link src 198.18.103.1 linkdown
[cloud-user@kube-worker-1 ~]$ ip route list table local
broadcast 10.34.0.0 dev eth0 proto kernel scope link src 10.34.8.132
local 10.34.8.132 dev eth0 proto kernel scope host src 10.34.8.132
broadcast 10.34.63.255 dev eth0 proto kernel scope link src 10.34.8.132
broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1
broadcast 198.18.103.0 dev cni0 proto kernel scope link src 198.18.103.1 linkdown
local 198.18.103.0 dev flannel.1 proto kernel scope host src 198.18.103.0
local 198.18.103.1 dev cni0 proto kernel scope host src 198.18.103.1
broadcast 198.18.103.255 dev cni0 proto kernel scope link src 198.18.103.1 linkdown
local 198.19.0.1 dev kube-ipvs0 proto kernel scope host src 198.19.0.1
local 198.19.61.33 dev kube-ipvs0 proto kernel scope host src 198.19.61.33
kube-proxy config
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/kubernetes/conf/kube-proxy-kube-config.yml"
healthzBindAddress: 0.0.0.0:10256
iptables:
  masqueradeAll: false
ipvs:
  minSyncPeriod: 0s
  scheduler: ""
  syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 0.0.0.0:10249
mode: ipvs
With an identical cluster running kubernetes 1.15.3, we were NOT able to reproduce the 1 second latency. We saw the following differences in the created iptables rules between the two versions (`diff k8s-1.15.3-iptables.txt k8s-1.18.1-iptables.txt`):
1c1
< # Generated by xtables-save v1.8.2 on Wed May 6 21:23:40 2020
---
> # Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
6a7
> :KUBE-KUBELET-CANARY - [0:0]
8d8
< -A INPUT -j KUBE-FIREWALL
12,13d11
< -A OUTPUT -j KUBE-FIREWALL
< -A KUBE-FIREWALL -m comment --comment "kubernetes firewall for dropping marked packets" -m mark --mark 0x8000/0x8000 -j DROP
14a13,14
> -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod source rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
> -A KUBE-FORWARD -m comment --comment "kubernetes forwarding conntrack pod destination rule" -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
16,17c16,17
< # Completed on Wed May 6 21:23:40 2020
< # Generated by xtables-save v1.8.2 on Wed May 6 21:23:40 2020
---
> # Completed on Wed May 6 20:05:43 2020
> # Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
25c25
< :KUBE-POSTROUTING - [0:0]
---
> :KUBE-KUBELET-CANARY - [0:0]
26a27
> :KUBE-POSTROUTING - [0:0]
34d34
< -A KUBE-MARK-DROP -j MARK --set-xmark 0x8000/0x8000
36d35
< -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
39a39
> -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
43c43,52
< # Completed on Wed May 6 21:23:40 2020
---
> # Completed on Wed May 6 20:05:43 2020
> # Generated by xtables-save v1.8.2 on Wed May 6 20:05:43 2020
> *mangle
> :PREROUTING ACCEPT [0:0]
> :INPUT ACCEPT [0:0]
> :FORWARD ACCEPT [0:0]
> :OUTPUT ACCEPT [0:0]
> :POSTROUTING ACCEPT [0:0]
> :KUBE-KUBELET-CANARY - [0:0]
> COMMIT
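For completeness, the dumps can be produced and compared along these lines (a sketch; each iptables-save is run on a worker node of the corresponding cluster, using the file names shown in the diff header):
# On a 1.15.3 node:
sudo iptables-save > k8s-1.15.3-iptables.txt
# On a 1.18.1 node:
sudo iptables-save > k8s-1.18.1-iptables.txt
# Then compare the two rule sets:
diff k8s-1.15.3-iptables.txt k8s-1.18.1-iptables.txt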
We tried systematically changing the iptables rules for our 1.18.1 cluster to match those in 1.15.3 and noticed that the 1 second latency/retransmission disappeared when we removed the --random-fully flag:
[cloud-user@kube-worker-1 ~]$ sudo systemctl stop kube-proxy
[cloud-user@kube-worker-1 ~]$ sudo iptables -t nat -D KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
[cloud-user@kube-worker-1 ~]$ sudo iptables -t nat -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
[cloud-user@kube-worker-1 ~]$ curl -I -w @time-format.txt 198.19.61.33
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Thu, 07 May 2020 01:39:31 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
time_namelookup: 0.000044s
time_connect: 0.001296s
time_appconnect: 0.000000s
time_pretransfer: 0.001338s
time_redirect: 0.000000s
time_starttransfer: 0.002616s
----------
time_total: 0.002700s
Starting kube-proxy again restored the --random-fully flag and reintroduced this bug. The --random-fully change was made in this PR: #78547
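The flag can also be put back by hand, without restarting kube-proxy, to confirm the retransmission returns (a sketch that simply mirrors the removal above):
# Swap the plain MASQUERADE rule back for the --random-fully variant.
sudo iptables -t nat -D KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
sudo iptables -t nat -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE --random-fully
# Re-run the timed request; time_connect goes back to ~1s.
curl -I -w @time-format.txt 198.19.61.33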
Anything else we need to know?:
This behavior is not observed in HTTP calls that go from host -> pod or from pod -> service IP -> pod. See the examples below.
Trying to curl `host -> pod`
[cloud-user@kube-worker-1 ~]$ curl -I -w @time-format.txt 198.18.102.10
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Wed, 06 May 2020 22:37:00 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
time_namelookup: 0.000024s
time_connect: 0.000877s
time_appconnect: 0.000000s
time_pretransfer: 0.000904s
time_redirect: 0.000000s
time_starttransfer: 0.001457s
----------
time_total: 0.001506s
Notice that the connect took 0.000877s.
Trying `pod -> service IP -> pod`
[cloud-user@kube-worker-1 ~]$ kubectl run --image=curlimages/curl curl --attach -it --overrides='{ "apiVersion": "v1", "spec": { "nodeName": "kube-worker-1" } }' -- /bin/sh
If you don't see a command prompt, try pressing enter.
/ $ cd
~ $ cat > time-format.txt <<EOF
> time_namelookup: %{time_namelookup}s\n
> time_connect: %{time_connect}s\n
> time_appconnect: %{time_appconnect}s\n
> time_pretransfer: %{time_pretransfer}s\n
> time_redirect: %{time_redirect}s\n
> time_starttransfer: %{time_starttransfer}s\n
> ----------\n
> time_total: %{time_total}s\n
> EOF
~ $ curl -I -w @time-format.txt 198.19.61.33
HTTP/1.1 200 OK
Server: nginx/1.17.10
Date: Wed, 06 May 2020 23:16:20 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 14 Apr 2020 14:19:26 GMT
Connection: keep-alive
ETag: "5e95c66e-264"
Accept-Ranges: bytes
time_namelookup: 0.000051s
time_connect: 0.001467s
time_appconnect: 0.000000s
time_pretransfer: 0.001523s
time_redirect: 0.000000s
time_starttransfer: 0.003195s
----------
time_total: 0.003426s
~ $
Notice that the connect took 0.001467s.
We were able to reproduce the exact same issue with kube-proxy running in iptables mode. Again, removing the --random-fully flag got rid of the extra 1 second of latency.
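For the iptables-mode test we only changed the proxy mode and restarted kube-proxy (a sketch; the path to the KubeProxyConfiguration file is hypothetical and depends on how kube-proxy is launched):
# Hypothetical config path: switch the proxy mode from ipvs to iptables,
# then restart kube-proxy so it reprograms the rules.
sudo sed -i 's/^mode: ipvs$/mode: iptables/' /kubernetes/conf/kube-proxy-config.yml
sudo systemctl restart kube-proxy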
Environment:
- Kubernetes version (use `kubectl version`):
[cloud-user@kube-worker-1 ~]$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:38:50Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:30:47Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: OpenStack Rocky
- OS (e.g. `cat /etc/os-release`):
[cloud-user@kube-worker-1 ~]$ cat /etc/os-release
NAME="Red Hat Enterprise Linux"
VERSION="8.1 (Ootpa)"
ID="rhel"
ID_LIKE="fedora"
VERSION_ID="8.1"
PLATFORM_ID="platform:el8"
PRETTY_NAME="Red Hat Enterprise Linux 8.1 (Ootpa)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:redhat:enterprise_linux:8.1:GA"
HOME_URL="https://www.redhat.com/"
BUG_REPORT_URL="https://bugzilla.redhat.com/"
REDHAT_BUGZILLA_PRODUCT="Red Hat Enterprise Linux 8"
REDHAT_BUGZILLA_PRODUCT_VERSION=8.1
REDHAT_SUPPORT_PRODUCT="Red Hat Enterprise Linux"
REDHAT_SUPPORT_PRODUCT_VERSION="8.1"
- Kernel (e.g. `uname -a`):
[cloud-user@kube-worker-1 ~]$ uname -a
Linux kube-worker-1 4.18.0-147.el8.x86_64 #1 SMP Thu Sep 26 15:52:44 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
- Install tools: None
- Network plugin and version (if this is a network-related bug):
[cloud-user@kube-worker-1 ~]$ iptables --version
iptables v1.8.2 (nf_tables)
[cloud-user@kube-worker-1 ~]$ /usr/local/bin/flanneld --version
v0.11.0
[cloud-user@kube-worker-1 ~]$ /kubernetes/cni/bin/flannel --version
CNI flannel plugin v0.8.2
[cloud-user@kube-worker-1 ~]$ cat /kubernetes/cni/conf/10-flanneld.conf
{
  "name": "cni0",
  "cniVersion": "0.3.1",
  "type": "flannel",
  "delegate": {
    "hairpinMode": true,
    "isDefaultGateway": true
  }
}
- Others: