Labels: area/agent (Cilium agent related), ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
Description
Test Name
K8sDatapathConfig Transparent encryption DirectRouting Check connectivity with transparent encryption and direct routing with bpf_host
Failure Output
FAIL: Connectivity test between nodes failed
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Connectivity test between nodes failed
Expected
<bool>: false
to be true
/home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:398
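For context, the failure message above is the standard Gomega mismatch report: the suite (which runs under Ginkgo, per the ginkgo-ext frame in the stacktrace) asserts the boolean result of a cross-node connectivity probe with BeTrue(). A minimal, hypothetical sketch of that assertion shape is below; checkNodeConnectivity is a stand-in, not the real helper in test/k8s/datapath_configuration.go.

```go
// Sketch only: reproduces the "Expected <bool>: false / to be true" output
// shape, not the actual Cilium test code.
package example

import (
	"testing"

	"github.com/onsi/gomega"
)

// checkNodeConnectivity is a hypothetical stand-in for the cross-node
// connectivity probe between the testDSClient and testDS pods.
func checkNodeConnectivity() bool {
	return false // the flake: the probe reports no connectivity
}

func TestConnectivity(t *testing.T) {
	g := gomega.NewWithT(t)
	// When the probe returns false, Gomega prints the description first,
	// then "Expected\n    <bool>: false\nto be true", matching the report above.
	g.Expect(checkNodeConnectivity()).To(gomega.BeTrue(),
		"Connectivity test between nodes failed")
}
```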
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 10
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 4 errors/warnings:
Unable to restore endpoint, ignoring
Auto-disabling \
UpdateIdentities: Skipping Delete of a non-existing identity
Unable to install direct node route {Ifindex: 0 Dst: fd02::100/120 Src: <nil> Gw: <nil> Flags: [] Table: 0 Realm: 0}
Cilium pods: [cilium-97shd cilium-b5452]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
test-k8s2-67b464d775-xts5f false false
testclient-72twv false false
testclient-95fjd false false
testds-569h7 false false
testds-kzj2b false false
coredns-758664cbbf-rt5wg false false
Cilium agent 'cilium-97shd': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 29 Failed 0
Cilium agent 'cilium-b5452': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 38 Failed 0
Standard Error
02:14:44 STEP: Running BeforeAll block for EntireTestsuite K8sDatapathConfig Transparent encryption DirectRouting
02:14:44 STEP: Deploying ipsec_secret.yaml in namespace kube-system
02:14:44 STEP: Installing Cilium
02:14:46 STEP: Waiting for Cilium to become ready
02:16:19 STEP: Validating if Kubernetes DNS is deployed
02:16:19 STEP: Checking if deployment is ready
02:16:19 STEP: Checking if kube-dns service is plumbed correctly
02:16:19 STEP: Checking if DNS can resolve
02:16:19 STEP: Checking if pods have identity
02:16:22 STEP: Kubernetes DNS is up and operational
02:16:22 STEP: Validating Cilium Installation
02:16:22 STEP: Performing Cilium controllers preflight check
02:16:22 STEP: Performing Cilium status preflight check
02:16:22 STEP: Performing Cilium health check
02:16:22 STEP: Checking whether host EP regenerated
02:16:30 STEP: Performing Cilium service preflight check
02:16:30 STEP: Performing K8s service preflight check
02:16:30 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-97shd': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
02:16:30 STEP: Performing Cilium controllers preflight check
02:16:30 STEP: Performing Cilium status preflight check
02:16:30 STEP: Performing Cilium health check
02:16:30 STEP: Checking whether host EP regenerated
02:16:37 STEP: Performing Cilium service preflight check
02:16:37 STEP: Performing K8s service preflight check
02:16:37 STEP: Performing Cilium controllers preflight check
02:16:37 STEP: Performing Cilium status preflight check
02:16:37 STEP: Performing Cilium health check
02:16:37 STEP: Checking whether host EP regenerated
02:16:44 STEP: Performing Cilium service preflight check
02:16:44 STEP: Performing K8s service preflight check
02:16:47 STEP: Waiting for cilium-operator to be ready
02:16:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
02:16:47 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
02:16:47 STEP: Making sure all endpoints are in ready state
02:16:50 STEP: Creating namespace 202306070216k8sdatapathconfigtransparentencryptiondirectrouting
02:16:50 STEP: Deploying demo_ds.yaml in namespace 202306070216k8sdatapathconfigtransparentencryptiondirectrouting
02:16:50 STEP: Applying policy /home/jenkins/workspace/Cilium-PR-K8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
02:16:58 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
02:16:58 STEP: WaitforNPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="")
02:17:02 STEP: WaitforNPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="") => <nil>
02:17:02 STEP: Checking pod connectivity between nodes
02:17:02 STEP: WaitforPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="-l zgroup=testDSClient")
02:17:02 STEP: WaitforPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="-l zgroup=testDSClient") => <nil>
02:17:02 STEP: WaitforPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="-l zgroup=testDS")
02:17:02 STEP: WaitforPods(namespace="202306070216k8sdatapathconfigtransparentencryptiondirectrouting", filter="-l zgroup=testDS") => <nil>
FAIL: Connectivity test between nodes failed
Expected
<bool>: false
to be true
=== Test Finished at 2023-06-07T02:17:19Z====
02:17:19 STEP: Running JustAfterEach block for EntireTestsuite K8sDatapathConfig
===================== TEST FAILED =====================
02:17:19 STEP: Running AfterFailed block for EntireTestsuite K8sDatapathConfig
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202306070216k8sdatapathconfigtransparentencryptiondirectrouting test-k8s2-67b464d775-xts5f 2/2 Running 0 35s 10.0.1.48 k8s2 <none> <none>
202306070216k8sdatapathconfigtransparentencryptiondirectrouting testclient-72twv 1/1 Running 0 35s 10.0.0.143 k8s1 <none> <none>
202306070216k8sdatapathconfigtransparentencryptiondirectrouting testclient-95fjd 1/1 Running 0 35s 10.0.1.164 k8s2 <none> <none>
202306070216k8sdatapathconfigtransparentencryptiondirectrouting testds-569h7 2/2 Running 0 35s 10.0.1.210 k8s2 <none> <none>
202306070216k8sdatapathconfigtransparentencryptiondirectrouting testds-kzj2b 2/2 Running 0 35s 10.0.0.19 k8s1 <none> <none>
cilium-monitoring grafana-585bb89877-8zm6h 0/1 Running 0 29m 10.0.0.251 k8s2 <none> <none>
cilium-monitoring prometheus-8885c5888-gktk6 1/1 Running 0 29m 10.0.0.130 k8s2 <none> <none>
kube-system cilium-97shd 1/1 Running 0 2m39s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-b5452 1/1 Running 0 2m39s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-678b64f957-5qn6b 1/1 Running 0 2m39s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-operator-678b64f957-mq6dz 1/1 Running 0 2m39s 192.168.56.11 k8s1 <none> <none>
kube-system coredns-758664cbbf-rt5wg 1/1 Running 0 19m 10.0.1.21 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 31m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-7wm7w 1/1 Running 0 30m 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-x7jdp 1/1 Running 0 33m 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 32m 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-7f2d6 1/1 Running 0 29m 192.168.56.12 k8s2 <none> <none>
kube-system log-gatherer-g87sh 1/1 Running 0 29m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-nxmb5 1/1 Running 0 30m 192.168.56.11 k8s1 <none> <none>
kube-system registry-adder-x4nh9 1/1 Running 0 30m 192.168.56.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-97shd cilium-b5452]
cmd: kubectl exec -n kube-system cilium-97shd -c cilium-agent -- cilium status
Exitcode: 0
Stdout:
KVStore: Ok Disabled
Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Disabled
Host firewall: Disabled
CNI Chaining: none
Cilium: Ok 1.14.0-dev (v1.14.0-dev-647f901d)
NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 4/254 allocated from 10.0.0.0/24, IPv6: 4/254 allocated from fd02::/120
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled]
Controller Status: 29/29 healthy
Proxy Status: OK, ip 10.0.0.63, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 524/65535 (0.80%), Flows/s: 3.66 Metrics: Disabled
Encryption: IPsec
Cluster health: 2/2 reachable (2023-06-07T02:16:40Z)
Stderr:
cmd: kubectl exec -n kube-system cilium-97shd -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
813 Disabled Disabled 37338 k8s:io.cilium.k8s.policy.cluster=default fd02::5d 10.0.0.143 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202306070216k8sdatapathconfigtransparentencryptiondirectrouting
k8s:zgroup=testDSClient
904 Disabled Disabled 38402 k8s:io.cilium.k8s.policy.cluster=default fd02::29 10.0.0.19 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202306070216k8sdatapathconfigtransparentencryptiondirectrouting
k8s:zgroup=testDS
1099 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
3456 Disabled Disabled 4 reserved:health fd02::da 10.0.0.59 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-b5452 -c cilium-agent -- cilium status
Exitcode: 0
Stdout:
KVStore: Ok Disabled
Kubernetes: Ok 1.16 (v1.16.15) [linux/amd64]
Kubernetes APIs: ["EndpointSliceOrEndpoint", "cilium/v2::CiliumClusterwideNetworkPolicy", "cilium/v2::CiliumEndpoint", "cilium/v2::CiliumNetworkPolicy", "cilium/v2::CiliumNode", "cilium/v2alpha1::CiliumCIDRGroup", "core/v1::Namespace", "core/v1::Pods", "core/v1::Service", "networking.k8s.io/v1::NetworkPolicy"]
KubeProxyReplacement: Disabled
Host firewall: Disabled
CNI Chaining: none
Cilium: Ok 1.14.0-dev (v1.14.0-dev-647f901d)
NodeMonitor: Listening for events on 3 CPUs with 64x4096 of shared memory
Cilium health daemon: Ok
IPAM: IPv4: 6/254 allocated from 10.0.1.0/24, IPv6: 6/254 allocated from fd02::100/120
IPv6 BIG TCP: Disabled
BandwidthManager: Disabled
Host Routing: Legacy
Masquerading: IPTables [IPv4: Enabled, IPv6: Enabled]
Controller Status: 38/38 healthy
Proxy Status: OK, ip 10.0.1.61, 0 redirects active on ports 10000-20000, Envoy: embedded
Global Identity Range: min 256, max 65535
Hubble: Ok Current/Max Flows: 440/65535 (0.67%), Flows/s: 3.40 Metrics: Disabled
Encryption: IPsec
Cluster health: 2/2 reachable (2023-06-07T02:16:46Z)
Stderr:
cmd: kubectl exec -n kube-system cilium-b5452 -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
60 Disabled Disabled 4 reserved:health fd02::1cf 10.0.1.91 ready
492 Disabled Disabled 61387 k8s:io.cilium.k8s.policy.cluster=default fd02::152 10.0.1.48 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202306070216k8sdatapathconfigtransparentencryptiondirectrouting
k8s:zgroup=test-k8s2
808 Disabled Disabled 38402 k8s:io.cilium.k8s.policy.cluster=default fd02::1ce 10.0.1.210 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202306070216k8sdatapathconfigtransparentencryptiondirectrouting
k8s:zgroup=testDS
1396 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
3526 Disabled Disabled 37338 k8s:io.cilium.k8s.policy.cluster=default fd02::149 10.0.1.164 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202306070216k8sdatapathconfigtransparentencryptiondirectrouting
k8s:zgroup=testDSClient
4007 Disabled Disabled 11785 k8s:io.cilium.k8s.policy.cluster=default fd02::1e3 10.0.1.21 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
Stderr:
===================== Exiting AfterFailed =====================
02:17:53 STEP: Running AfterEach for block EntireTestsuite K8sDatapathConfig
02:17:53 STEP: Deleting deployment demo_ds.yaml
02:17:53 STEP: Deleting deployment ipsec_secret.yaml
02:17:53 STEP: Deleting namespace 202306070216k8sdatapathconfigtransparentencryptiondirectrouting
02:18:06 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|9ce53784_K8sDatapathConfig_Transparent_encryption_DirectRouting_Check_connectivity_with_transparent_encryption_and_direct_routing_with_bpf_host.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19//482/artifact/9ce53784_K8sDatapathConfig_Transparent_encryption_DirectRouting_Check_connectivity_with_transparent_encryption_and_direct_routing_with_bpf_host.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19//482/artifact/cilium-sysdump.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19//482/artifact/test_results_Cilium-PR-K8s-1.16-kernel-4.19_482_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-kernel-4.19/482/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.