Labels: area/CI (Continuous Integration testing issue or flake), ci/flake (This is a known failure that occurs in the tree. Please investigate me!), stale (The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.)
Description
Test Name
K8sPolicyTest Basic Test Validate to-entities policies Validate toEntities All
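For context, this test installs an allow toEntities egress policy, then a deny variant, and checks pod-to-pod, DNS, ICMP, and world connectivity. Below is a minimal, hypothetical sketch of the kind of policy involved; the namespace and policy name are taken from this run, but the selector and exact manifest are assumptions (the real manifest lives in the Cilium test tree):

```shell
# Hypothetical reconstruction of the allow policy this test applies.
# endpointSelector is a placeholder; the real test manifest may differ.
kubectl apply -n 202110012124k8spolicytestbasictestchecksallkindofkubernetespoli -f - <<'EOF'
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: to-entities-all
spec:
  endpointSelector: {}        # assumption: select every pod in the test namespace
  egress:
  - toEntities:
    - all
EOF
```

The "deny toEntities All" step presumably applies the same shape under egressDeny instead of egress.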
Failure Output
FAIL: Found 2 k8s-app=cilium logs matching list of errors that must be investigated:
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:427
Found 2 k8s-app=cilium logs matching list of errors that must be investigated:
JoinEP:
level=error
/home/jenkins/workspace/Cilium-PR-K8s-1.16-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:425
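When reproducing this against a live test cluster, the lines the framework flags as must-investigate can be pulled directly from the agent pods. A minimal sketch, using the pod names from this run (they will differ in other runs):

```shell
# Dump only the flagged patterns ("JoinEP: " and "level=error") from each agent
for pod in cilium-5lkrn cilium-7x6sk; do
  echo "=== $pod ==="
  kubectl -n kube-system logs "$pod" -c cilium-agent | grep -E 'JoinEP:|level=error'
done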
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️ Found "JoinEP: " in logs 14 times
⚠️ Found "level=error" in logs 29 times
Number of "context deadline exceeded" in logs: 5
⚠️ Number of "level=error" in logs: 29
⚠️ Number of "level=warning" in logs: 43
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
Error while rewriting endpoint BPF program
endpoint regeneration failed
JoinEP: Failed to load program
generating BPF for endpoint failed, keeping stale directory.
Regeneration of endpoint failed
Cilium pods: [cilium-5lkrn cilium-7x6sk]
Netpols loaded:
CiliumNetworkPolicies loaded: 202110012124k8spolicytestbasictestchecksallkindofkubernetespoli::to-entities-all
Endpoint Policy Enforcement:
Pod Ingress Egress
prometheus-d87f8f984-ql6xx
coredns-5fc58db489-j45cc
app1-5798c5fb6b-8rpds
app1-5798c5fb6b-qgb76
app2-5cc5d58844-mqtdx
app3-6c7856c5b5-4s2x4
grafana-7fd557d749-jl76c
Cilium agent 'cilium-5lkrn': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 25 Failed 0
Cilium agent 'cilium-7x6sk': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 45 Failed 0
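The one-line agent summaries above hide per-controller and per-endpoint detail. When triaging a JoinEP/regeneration failure on a live cluster, the usual next step is the verbose status and endpoint view; a sketch, again using this run's pod name as an example:

```shell
# Verbose agent status, including controller state and datapath configuration
kubectl -n kube-system exec cilium-7x6sk -c cilium-agent -- cilium status --verbose

# Per-endpoint view; endpoints stuck in a non-ready state point at the failing BPF load
kubectl -n kube-system exec cilium-7x6sk -c cilium-agent -- cilium endpoint list
```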
Standard Error
21:30:52 STEP: Running BeforeEach block for EntireTestsuite K8sPolicyTest Basic Test
21:30:54 STEP: WaitforPods(namespace="202110012124k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
21:30:54 STEP: WaitforPods(namespace="202110012124k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
21:30:54 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest Basic Test Validate to-entities policies
21:30:54 STEP: Installing toEntities All
21:30:57 STEP: Verifying policy correctness
21:30:57 STEP: HTTP connectivity from pod to pod
21:30:57 STEP: HTTP connectivity from pod to pod
21:30:57 STEP: ICMP connectivity to 8.8.8.8
21:30:57 STEP: ICMP connectivity to 8.8.8.8
21:30:57 STEP: HTTP connectivity to 1.1.1.1
21:30:57 STEP: DNS lookup of kubernetes.default.svc.cluster.local
21:30:57 STEP: DNS lookup of kubernetes.default.svc.cluster.local
21:30:57 STEP: HTTP connectivity to 1.1.1.1
21:31:01 STEP: Installing deny toEntities All
21:33:53 STEP: Verifying policy correctness
21:33:53 STEP: HTTP connectivity from pod to pod
21:33:53 STEP: DNS lookup of kubernetes.default.svc.cluster.local
21:33:53 STEP: HTTP connectivity to 1.1.1.1
21:33:53 STEP: HTTP connectivity to 1.1.1.1
21:33:53 STEP: ICMP connectivity to 8.8.8.8
21:33:53 STEP: DNS lookup of kubernetes.default.svc.cluster.local
21:33:53 STEP: HTTP connectivity from pod to pod
21:33:53 STEP: ICMP connectivity to 8.8.8.8
=== Test Finished at 2021-10-01T21:34:55Z====
21:34:55 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTest
FAIL: Found 2 k8s-app=cilium logs matching list of errors that must be investigated:
JoinEP:
level=error
===================== TEST FAILED =====================
21:34:55 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202110012124k8spolicytestbasictestchecksallkindofkubernetespoli app1-5798c5fb6b-8rpds 2/2 Running 0 10m 10.0.0.216 k8s1 <none> <none>
202110012124k8spolicytestbasictestchecksallkindofkubernetespoli app1-5798c5fb6b-qgb76 2/2 Running 0 10m 10.0.0.10 k8s1 <none> <none>
202110012124k8spolicytestbasictestchecksallkindofkubernetespoli app2-5cc5d58844-mqtdx 1/1 Running 0 10m 10.0.0.173 k8s1 <none> <none>
202110012124k8spolicytestbasictestchecksallkindofkubernetespoli app3-6c7856c5b5-4s2x4 1/1 Running 0 10m 10.0.0.175 k8s1 <none> <none>
cilium-monitoring grafana-7fd557d749-jl76c 1/1 Running 0 11m 10.0.0.183 k8s1 <none> <none>
cilium-monitoring prometheus-d87f8f984-ql6xx 1/1 Running 0 11m 10.0.0.46 k8s1 <none> <none>
kube-system cilium-5lkrn 1/1 Running 0 11m 192.168.36.12 k8s2 <none> <none>
kube-system cilium-7x6sk 1/1 Running 0 11m 192.168.36.11 k8s1 <none> <none>
kube-system cilium-operator-77558f6976-5z97f 1/1 Running 0 11m 192.168.36.12 k8s2 <none> <none>
kube-system cilium-operator-77558f6976-rzxsv 1/1 Running 0 11m 192.168.36.13 k8s3 <none> <none>
kube-system coredns-5fc58db489-j45cc 1/1 Running 0 10m 10.0.1.70 k8s2 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 14m 192.168.36.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 14m 192.168.36.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 1/1 Running 0 14m 192.168.36.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 1/1 Running 0 14m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-2njpj 1/1 Running 0 11m 192.168.36.12 k8s2 <none> <none>
kube-system log-gatherer-h4ss8 1/1 Running 0 11m 192.168.36.11 k8s1 <none> <none>
kube-system log-gatherer-mctqb 1/1 Running 0 11m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-khbt5 1/1 Running 0 12m 192.168.36.13 k8s3 <none> <none>
kube-system registry-adder-s6589 1/1 Running 0 12m 192.168.36.11 k8s1 <none> <none>
kube-system registry-adder-xmr5g 1/1 Running 0 12m 192.168.36.12 k8s2 <none> <none>
Stderr:
Fetching command output from pods [cilium-5lkrn cilium-7x6sk]
cmd: kubectl exec -n kube-system cilium-5lkrn -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.0.1.70:53
2 10.96.0.10:9153 ClusterIP 1 => 10.0.1.70:9153
3 10.98.21.66:3000 ClusterIP 1 => 10.0.0.183:3000
4 10.102.155.210:9090 ClusterIP 1 => 10.0.0.46:9090
5 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
6 10.96.221.39:69 ClusterIP 1 => 10.0.0.216:69
2 => 10.0.0.10:69
7 10.96.221.39:80 ClusterIP 1 => 10.0.0.216:80
2 => 10.0.0.10:80
Stderr:
cmd: kubectl exec -n kube-system cilium-5lkrn -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
552 Disabled Enabled 40336 k8s:io.cilium.k8s.policy.cluster=default fd02::15f 10.0.1.70 ready
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
659 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
1223 Disabled Enabled 4 reserved:health fd02::1fc 10.0.1.128 ready
Stderr:
cmd: kubectl exec -n kube-system cilium-7x6sk -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.0.1.70:53
2 10.96.0.10:9153 ClusterIP 1 => 10.0.1.70:9153
3 10.98.21.66:3000 ClusterIP 1 => 10.0.0.183:3000
4 10.102.155.210:9090 ClusterIP 1 => 10.0.0.46:9090
5 10.96.0.1:443 ClusterIP 1 => 192.168.36.11:6443
6 10.96.221.39:80 ClusterIP 1 => 10.0.0.216:80
2 => 10.0.0.10:80
7 10.96.221.39:69 ClusterIP 1 => 10.0.0.216:69
2 => 10.0.0.10:69
Stderr:
cmd: kubectl exec -n kube-system cilium-7x6sk -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
121 Disabled Enabled 59843 k8s:id=app3 fd02::58 10.0.0.175 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202110012124k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
286 Disabled Enabled 4 reserved:health fd02::e0 10.0.0.93 ready
322 Disabled Enabled 18673 k8s:app=grafana fd02::3 10.0.0.183 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
2518 Disabled Enabled 28149 k8s:appSecond=true fd02::d2 10.0.0.173 ready
k8s:id=app2
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=202110012124k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
2756 Disabled Enabled 38339 k8s:id=app1 fd02::e3 10.0.0.10 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202110012124k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
2994 Disabled Enabled 38339 k8s:id=app1 fd02::62 10.0.0.216 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202110012124k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
3706 Disabled Enabled 20568 k8s:app=prometheus fd02::9d 10.0.0.46 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
3737 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/master
reserved:host
Stderr:
===================== Exiting AfterFailed =====================
21:35:14 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest Basic Test
21:35:15 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest
21:35:15 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|fb5423ed_K8sPolicyTest_Basic_Test_Validate_to-entities_policies_Validate_toEntities_All.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1554/artifact/fb5423ed_K8sPolicyTest_Basic_Test_Validate_to-entities_policies_Validate_toEntities_All.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1554/artifact/test_results_Cilium-PR-K8s-1.16-net-next_1554_BDD-Test-PR.zip
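Since the CI cluster is gone by the time this is triaged, the same search can be run against the attached artifacts. A sketch (the zip's internal layout is not assumed, hence the recursive grep; local paths are illustrative):

```shell
# Fetch the per-test artifact (first ZIP link above), unpack, and grep the flagged patterns
curl -sLO "https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next//1554/artifact/fb5423ed_K8sPolicyTest_Basic_Test_Validate_to-entities_policies_Validate_toEntities_All.zip"
unzip -q fb5423ed_*.zip -d flake-artifacts
grep -rE 'JoinEP:|level=error' flake-artifacts | less
```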
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.16-net-next/1554/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.