Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
Description
Test Name
K8sDatapathServicesTest Checks E/W loadbalancing (ClusterIP, NodePort from inside cluster, etc) Tests NodePort inside cluster (kube-proxy) with IPSec and externalTrafficPolicy=Local
Failure Output
FAIL: Request from k8s1 to service http://[fd04::11]:30654 failed
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Request from k8s1 to service http://[fd04::11]:30654 failed
Expected command: kubectl exec -n kube-system log-gatherer-6vhm4 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30654 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi'
To succeed, but it failed:
Exitcode: 42
Err: exit status 42
Stdout:
failed: :30641/1=28:30641/2=28:30641/3=28:30641/4=28:30641/5=28:30641/6=28:30641/7=28:30641/8=28:30641/9=28:30641/10=28
Stderr:
command terminated with exit code 42
/home/jenkins/workspace/Cilium-PR-K8s-1.25-kernel-4.19/src/github.com/cilium/cilium/test/k8s/service_helpers.go:863
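The probe above makes 10 curl requests to the NodePort and exits 42 if any round fails; every round in this run returned curl exit code 28, i.e. the request timed out. A minimal sketch for re-running the same probe by hand, assuming the log-gatherer pod name and URL from this run:

# Reproduce the failing probe manually (pod name and URL taken from this run).
# curl exit code 28 = operation timed out, matching the ":id/i=28" entries above.
kubectl exec -n kube-system log-gatherer-6vhm4 -- /bin/bash -c '
  for i in $(seq 1 10); do
    curl --path-as-is -s --fail --connect-timeout 5 --max-time 20 \
      "http://[fd04::11]:30654" -H "User-Agent: cilium-test-manual/$i" \
      || echo "round $i failed with exit code $?"
  done'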
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 2
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 4
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
UpdateIdentities: Skipping Delete of a non-existing identity
Attempt to remove non-existing IP from ipcache layer
Cilium pods: [cilium-g4frx cilium-hx5ff]
Netpols loaded:
CiliumNetworkPolicies loaded: default::hairpin-validation-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
testds-2wzkv false false
testds-jpkk2 false false
app1-754f779d8c-9nvjk false false
app3-57f78c8bdf-czmn7 false false
testclient-cbwlp false false
testclient-crz5w false false
test-k8s2-56f67cd755-gnjvh false false
coredns-567b6dd84-ztwnf false false
app1-754f779d8c-ctrl7 false false
app2-86858f4bd6-p5bkj false false
echo-9674cb9d4-6grtn false false
echo-9674cb9d4-sz2xd false false
Cilium agent 'cilium-g4frx': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
Cilium agent 'cilium-hx5ff': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 55 Failed 0
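A minimal sketch, using the agent pod names from this run, for regenerating the per-agent summary above (status plus warning counts) when triaging a similar failure; cilium status --brief and kubectl logs are standard commands, and the grep pattern simply mirrors the counters printed above:

# Hypothetical triage loop over the two agents from this run.
for pod in cilium-g4frx cilium-hx5ff; do
  echo "=== $pod ==="
  kubectl -n kube-system exec "$pod" -c cilium-agent -- cilium status --brief
  kubectl -n kube-system logs "$pod" -c cilium-agent | grep -c 'level=warning' || true
done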
Standard Error
09:47:11 STEP: Deploying ipsec_secret.yaml in namespace kube-system
09:47:11 STEP: Installing Cilium
09:47:14 STEP: Waiting for Cilium to become ready
09:47:40 STEP: Validating if Kubernetes DNS is deployed
09:47:40 STEP: Checking if deployment is ready
09:47:40 STEP: Checking if kube-dns service is plumbed correctly
09:47:40 STEP: Checking if DNS can resolve
09:47:40 STEP: Checking if pods have identity
09:47:42 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:43 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:45 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:47 STEP: Checking service kube-system/kube-dns plumbing in cilium pod cilium-g4frx: ClusterIP 10.96.0.10 not found in service list of cilium pod cilium-g4frx
09:47:50 STEP: Kubernetes DNS is up and operational
09:47:50 STEP: Validating Cilium Installation
09:47:50 STEP: Performing Cilium controllers preflight check
09:47:50 STEP: Performing Cilium health check
09:47:50 STEP: Performing Cilium status preflight check
09:47:50 STEP: Checking whether host EP regenerated
09:47:57 STEP: Performing Cilium service preflight check
09:47:57 STEP: Performing K8s service preflight check
09:47:58 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-hx5ff': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Defaulted container "cilium-agent" out of: cilium-agent, config (init), mount-cgroup (init), apply-sysctl-overwrites (init), mount-bpf-fs (init), clean-cilium-state (init), install-cni-binaries (init)
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
09:47:58 STEP: Performing Cilium status preflight check
09:47:58 STEP: Performing Cilium health check
09:47:58 STEP: Checking whether host EP regenerated
09:47:58 STEP: Performing Cilium controllers preflight check
09:48:05 STEP: Performing Cilium service preflight check
09:48:05 STEP: Performing K8s service preflight check
09:48:06 STEP: Performing Cilium controllers preflight check
09:48:06 STEP: Performing Cilium health check
09:48:06 STEP: Performing Cilium status preflight check
09:48:06 STEP: Checking whether host EP regenerated
09:48:13 STEP: Performing Cilium service preflight check
09:48:13 STEP: Performing K8s service preflight check
09:48:15 STEP: Performing Cilium controllers preflight check
09:48:15 STEP: Performing Cilium health check
09:48:15 STEP: Performing Cilium status preflight check
09:48:15 STEP: Checking whether host EP regenerated
09:48:21 STEP: Performing Cilium service preflight check
09:48:21 STEP: Performing K8s service preflight check
09:48:23 STEP: Performing Cilium controllers preflight check
09:48:23 STEP: Performing Cilium health check
09:48:23 STEP: Checking whether host EP regenerated
09:48:23 STEP: Performing Cilium status preflight check
09:48:30 STEP: Performing Cilium service preflight check
09:48:30 STEP: Performing K8s service preflight check
09:48:36 STEP: Waiting for cilium-operator to be ready
09:48:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
09:48:36 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
Skipping externalTrafficPolicy=Local test from external node
09:48:36 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.12:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.12:31502/hello"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://192.168.56.12:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://192.168.56.12:31502/hello"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://192.168.56.11:30775"
09:48:37 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://192.168.56.11:31502/hello"
09:48:41 STEP: Making 1 curl requests from k8s2 to "http://192.168.56.11:30775"
09:48:46 STEP: Making 1 curl requests from k8s2 to "tftp://192.168.56.11:31502/hello"
Skipping externalTrafficPolicy=Local test from external node
09:48:51 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::12]:30654"
09:48:51 STEP: Making 10 curl requests from pod (host netns) k8s1 to "tftp://[fd04::12]:32229/hello"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s2 to "http://[fd04::12]:30654"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s2 to "tftp://[fd04::12]:32229/hello"
09:48:52 STEP: Making 10 curl requests from pod (host netns) k8s1 to "http://[fd04::11]:30654"
FAIL: Request from k8s1 to service http://[fd04::11]:30654 failed
Expected command: kubectl exec -n kube-system log-gatherer-6vhm4 -- /bin/bash -c 'fails=""; id=$RANDOM; for i in $(seq 1 10); do if curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://[fd04::11]:30654 -H "User-Agent: cilium-test-$id/$i"; then echo "Test round $id/$i exit code: $?"; else fails=$fails:$id/$i=$?; fi; done; if [ -n "$fails" ]; then echo "failed: $fails"; fi; cnt="${fails//[^:]}"; if [ ${#cnt} -gt 0 ]; then exit 42; fi'
To succeed, but it failed:
Exitcode: 42
Err: exit status 42
Stdout:
failed: :30641/1=28:30641/2=28:30641/3=28:30641/4=28:30641/5=28:30641/6=28:30641/7=28:30641/8=28:30641/9=28:30641/10=28
Stderr:
command terminated with exit code 42
=== Test Finished at 2023-03-28T09:49:42Z====
09:49:42 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathServicesTest
===================== TEST FAILED =====================
09:49:42 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathServicesTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-98b4b9789-pbt8z 0/1 Running 0 37m 10.0.0.230 k8s1 <none> <none>
cilium-monitoring prometheus-6f66c554f4-nntzh 1/1 Running 0 37m 10.0.0.115 k8s1 <none> <none>
default app1-754f779d8c-9nvjk 2/2 Running 0 2m51s 10.0.0.30 k8s1 <none> <none>
default app1-754f779d8c-ctrl7 2/2 Running 0 2m51s 10.0.0.236 k8s1 <none> <none>
default app2-86858f4bd6-p5bkj 1/1 Running 0 2m51s 10.0.0.91 k8s1 <none> <none>
default app3-57f78c8bdf-czmn7 1/1 Running 0 2m51s 10.0.0.75 k8s1 <none> <none>
default echo-9674cb9d4-6grtn 2/2 Running 0 2m51s 10.0.0.195 k8s1 <none> <none>
default echo-9674cb9d4-sz2xd 2/2 Running 0 2m51s 10.0.1.166 k8s2 <none> <none>
default test-k8s2-56f67cd755-gnjvh 2/2 Running 0 2m51s 10.0.1.194 k8s2 <none> <none>
default testclient-cbwlp 1/1 Running 0 2m51s 10.0.1.132 k8s2 <none> <none>
default testclient-crz5w 1/1 Running 0 2m51s 10.0.0.29 k8s1 <none> <none>
default testds-2wzkv 2/2 Running 0 2m51s 10.0.0.66 k8s1 <none> <none>
default testds-jpkk2 2/2 Running 0 2m51s 10.0.1.96 k8s2 <none> <none>
kube-system cilium-g4frx 1/1 Running 0 2m33s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-hx5ff 1/1 Running 0 2m33s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-74d5cb9875-cnxcc 1/1 Running 0 2m33s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-74d5cb9875-dwzvw 1/1 Running 0 2m33s 192.168.56.12 k8s2 <none> <none>
kube-system coredns-567b6dd84-ztwnf 1/1 Running 0 3m24s 10.0.1.43 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 42m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 42m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 42m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-frs67 1/1 Running 0 38m 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-g2f5v 1/1 Running 0 42m 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 42m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-5sq86 1/1 Running 0 37m 192.168.56.12 k8s2 <none> <none>
kube-system log-gatherer-6vhm4 1/1 Running 0 37m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-dv5zh 1/1 Running 0 38m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-j8fzv 1/1 Running 0 38m 192.168.56.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-g4frx cilium-hx5ff]
cmd: kubectl exec -n kube-system cilium-g4frx -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443 (active)
2 10.109.86.64:9090 ClusterIP 1 => 10.0.0.115:9090 (active)
3 10.111.170.63:3000 ClusterIP
5 10.96.0.10:53 ClusterIP 1 => 10.0.1.43:53 (active)
6 10.96.0.10:9153 ClusterIP 1 => 10.0.1.43:9153 (active)
7 10.110.20.72:80 ClusterIP 1 => 10.0.0.30:80 (active)
2 => 10.0.0.236:80 (active)
8 10.110.20.72:69 ClusterIP 1 => 10.0.0.30:69 (active)
2 => 10.0.0.236:69 (active)
9 10.108.60.232:80 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
10 10.108.60.232:69 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
11 10.96.59.42:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
12 10.96.59.42:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
13 10.108.107.80:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
14 10.108.107.80:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
15 10.106.35.241:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
16 10.106.35.241:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
17 10.98.95.181:10080 ClusterIP 1 => 10.0.1.194:80 (active)
18 10.98.95.181:10069 ClusterIP 1 => 10.0.1.194:69 (active)
19 10.105.211.238:10080 ClusterIP 1 => 10.0.1.194:80 (active)
20 10.105.211.238:10069 ClusterIP 1 => 10.0.1.194:69 (active)
21 10.109.113.138:80 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
22 10.100.79.16:80 ClusterIP 1 => 10.0.1.194:80 (active)
23 10.96.254.206:20080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
24 10.96.254.206:20069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
25 10.96.18.6:80 ClusterIP 1 => 10.0.1.166:80 (active)
2 => 10.0.0.195:80 (active)
26 10.96.18.6:69 ClusterIP 1 => 10.0.1.166:69 (active)
2 => 10.0.0.195:69 (active)
27 [fd03::4b67]:80 ClusterIP 1 => [fd02::8a]:80 (active)
2 => [fd02::d2]:80 (active)
28 [fd03::4b67]:69 ClusterIP 1 => [fd02::8a]:69 (active)
2 => [fd02::d2]:69 (active)
29 [fd03::f5a0]:80 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
30 [fd03::f5a0]:69 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
31 [fd03::62f4]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
32 [fd03::62f4]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
33 [fd03::64b7]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
34 [fd03::64b7]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
35 [fd03::f78]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
36 [fd03::f78]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
37 [fd03::3f36]:10069 ClusterIP 1 => [fd02::143]:69 (active)
38 [fd03::3f36]:10080 ClusterIP 1 => [fd02::143]:80 (active)
39 [fd03::e80c]:10080 ClusterIP 1 => [fd02::143]:80 (active)
40 [fd03::e80c]:10069 ClusterIP 1 => [fd02::143]:69 (active)
41 [fd03::21f6]:20069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
42 [fd03::21f6]:20080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
43 [fd03::2f3f]:80 ClusterIP 1 => [fd02::1a8]:80 (active)
2 => [fd02::84]:80 (active)
44 [fd03::2f3f]:69 ClusterIP 1 => [fd02::1a8]:69 (active)
2 => [fd02::84]:69 (active)
45 10.107.235.32:80 ClusterIP 1 => 10.0.1.166:80 (active)
2 => 10.0.0.195:80 (active)
46 10.107.235.32:69 ClusterIP 1 => 10.0.1.166:69 (active)
2 => 10.0.0.195:69 (active)
47 [fd03::31e2]:80 ClusterIP 1 => [fd02::1a8]:80 (active)
2 => [fd02::84]:80 (active)
48 [fd03::31e2]:69 ClusterIP 1 => [fd02::1a8]:69 (active)
2 => [fd02::84]:69 (active)
49 10.109.217.167:443 ClusterIP 1 => 192.168.56.12:4244 (active)
Stderr:
cmd: kubectl exec -n kube-system cilium-g4frx -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
14 Disabled Disabled 8250 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::1b2 10.0.1.43 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
388 Enabled Enabled 34351 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::1a8 10.0.1.166 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:name=echo
485 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
1187 Disabled Disabled 43676 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::143 10.0.1.194 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=test-k8s2
1235 Disabled Disabled 3036 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::13c 10.0.1.132 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
1817 Disabled Disabled 8414 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::131 10.0.1.96 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
3988 Disabled Disabled 4 reserved:health fd02::19d 10.0.1.114 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.111.170.63:3000 ClusterIP
3 10.96.0.10:53 ClusterIP 1 => 10.0.1.43:53 (active)
4 10.96.0.10:9153 ClusterIP 1 => 10.0.1.43:9153 (active)
5 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443 (active)
6 10.109.86.64:9090 ClusterIP 1 => 10.0.0.115:9090 (active)
7 10.110.20.72:80 ClusterIP 1 => 10.0.0.30:80 (active)
2 => 10.0.0.236:80 (active)
8 10.110.20.72:69 ClusterIP 1 => 10.0.0.30:69 (active)
2 => 10.0.0.236:69 (active)
9 10.108.60.232:80 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
10 10.108.60.232:69 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
11 10.96.59.42:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
12 10.96.59.42:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
13 10.108.107.80:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
14 10.108.107.80:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
15 10.106.35.241:10080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
16 10.106.35.241:10069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
17 10.98.95.181:10069 ClusterIP 1 => 10.0.1.194:69 (active)
18 10.98.95.181:10080 ClusterIP 1 => 10.0.1.194:80 (active)
19 10.105.211.238:10080 ClusterIP 1 => 10.0.1.194:80 (active)
20 10.105.211.238:10069 ClusterIP 1 => 10.0.1.194:69 (active)
21 10.109.113.138:80 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
22 10.100.79.16:80 ClusterIP 1 => 10.0.1.194:80 (active)
23 10.96.254.206:20080 ClusterIP 1 => 10.0.1.96:80 (active)
2 => 10.0.0.66:80 (active)
24 10.96.254.206:20069 ClusterIP 1 => 10.0.1.96:69 (active)
2 => 10.0.0.66:69 (active)
25 10.96.18.6:69 ClusterIP 1 => 10.0.1.166:69 (active)
2 => 10.0.0.195:69 (active)
26 10.96.18.6:80 ClusterIP 1 => 10.0.1.166:80 (active)
2 => 10.0.0.195:80 (active)
27 [fd03::4b67]:80 ClusterIP 1 => [fd02::8a]:80 (active)
2 => [fd02::d2]:80 (active)
28 [fd03::4b67]:69 ClusterIP 1 => [fd02::8a]:69 (active)
2 => [fd02::d2]:69 (active)
29 [fd03::f5a0]:80 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
30 [fd03::f5a0]:69 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
31 [fd03::62f4]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
32 [fd03::62f4]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
33 [fd03::64b7]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
34 [fd03::64b7]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
35 [fd03::f78]:10080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
36 [fd03::f78]:10069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
37 [fd03::3f36]:10080 ClusterIP 1 => [fd02::143]:80 (active)
38 [fd03::3f36]:10069 ClusterIP 1 => [fd02::143]:69 (active)
39 [fd03::e80c]:10080 ClusterIP 1 => [fd02::143]:80 (active)
40 [fd03::e80c]:10069 ClusterIP 1 => [fd02::143]:69 (active)
41 [fd03::21f6]:20080 ClusterIP 1 => [fd02::131]:80 (active)
2 => [fd02::1b]:80 (active)
42 [fd03::21f6]:20069 ClusterIP 1 => [fd02::131]:69 (active)
2 => [fd02::1b]:69 (active)
43 [fd03::2f3f]:80 ClusterIP 1 => [fd02::1a8]:80 (active)
2 => [fd02::84]:80 (active)
44 [fd03::2f3f]:69 ClusterIP 1 => [fd02::1a8]:69 (active)
2 => [fd02::84]:69 (active)
45 10.107.235.32:80 ClusterIP 1 => 10.0.1.166:80 (active)
2 => 10.0.0.195:80 (active)
46 10.107.235.32:69 ClusterIP 1 => 10.0.1.166:69 (active)
2 => 10.0.0.195:69 (active)
47 [fd03::31e2]:80 ClusterIP 1 => [fd02::1a8]:80 (active)
2 => [fd02::84]:80 (active)
48 [fd03::31e2]:69 ClusterIP 1 => [fd02::1a8]:69 (active)
2 => [fd02::84]:69 (active)
49 10.109.217.167:443 ClusterIP 1 => 192.168.56.11:4244 (active)
Stderr:
cmd: kubectl exec -n kube-system cilium-hx5ff -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
254 Disabled Disabled 34813 k8s:id=app3 fd02::b5 10.0.0.75 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
522 Enabled Enabled 34351 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::84 10.0.0.195 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:name=echo
725 Disabled Disabled 4 reserved:health fd02::5d 10.0.0.206 ready
849 Disabled Disabled 8414 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::1b 10.0.0.66 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDS
1235 Disabled Disabled 35324 k8s:appSecond=true fd02::53 10.0.0.91 ready
k8s:id=app2
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1286 Disabled Disabled 24222 k8s:id=app1 fd02::8a 10.0.0.30 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
1622 Disabled Disabled 3036 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default fd02::b9 10.0.0.29 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testDSClient
2216 Disabled Disabled 24222 k8s:id=app1 fd02::d2 10.0.0.236 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=testapp
3961 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/control-plane
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
Stderr:
===================== Exiting AfterFailed =====================
09:49:54 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|ad3118f5_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1477/artifact/ad3118f5_K8sDatapathServicesTest_Checks_E-W_loadbalancing_(ClusterIP,_NodePort_from_inside_cluster,_etc)_Tests_NodePort_inside_cluster_(kube-proxy)_with_IPSec_and_externalTrafficPolicy=Local.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1477/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1477_BDD-Test-PR.zip
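A minimal sketch for fetching the artifacts listed above for local inspection (assumes curl and unzip are available):

# Download the test-results bundle and list its contents.
curl -fLO 'https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19//1477/artifact/test_results_Cilium-PR-K8s-1.25-kernel-4.19_1477_BDD-Test-PR.zip'
unzip -l test_results_Cilium-PR-K8s-1.25-kernel-4.19_1477_BDD-Test-PR.zip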
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.25-kernel-4.19/1477/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.