CI: K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer #25263

@maintainer-s-little-helper

Description

Test Name

K8sPolicyTestExtended Validate toEntities KubeAPIServer Allows connection to KubeAPIServer

Failure Output

FAIL: Timed out after 240.001s.

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:453
Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0014e1790>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-1.26-kernel-net-next/src/github.com/cilium/cilium/test/k8s/assertion_helpers.go:115
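
The assertion comes from the harness waiting for the redeployed Cilium DaemonSet pods to report Ready. A rough manual equivalent of that wait, assuming the agent pods carry the usual k8s-app=cilium label (the label is an assumption, not taken from this log):

```sh
# Sketch of the readiness wait that timed out; 240s matches the
# failure output above. Assumes the standard k8s-app=cilium pod label.
kubectl -n kube-system wait --for=condition=Ready pod \
  -l k8s-app=cilium --timeout=240s
```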

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
⚠️  Found "2023-05-04T13:16:59.228452167Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.228454886Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.267186305Z level=error msg=\"Unable to contact k8s api-server\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" ipAddr=\"https://10.0.2.15:6443\" subsys=k8s-client" in logs 1 times
⚠️  Found "2023-05-04T13:16:59.267194754Z level=error msg=\"Start hook failed\" error=\"Get \\\"https://10.0.2.15:6443/api/v1/namespaces/kube-system\\\": dial tcp 10.0.2.15:6443: connect: connection refused\" function=\"client.(*compositeClientset).onStart\" subsys=hive" in logs 1 times
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 4
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 2 errors/warnings:
Unable to contact k8s api-server
Start hook failed
Cilium pods: [cilium-5tr2d cilium-xkj9z]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                          Ingress   Egress
grafana-67ff49cd99-mcnrr     false     false
prometheus-8c7df94b4-wj67q   false     false
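
All four operator errors point at the kube-apiserver endpoint https://10.0.2.15:6443 refusing connections while the operator start hook ran. A quick reachability probe, reusing the address from the error message and run from the affected node (curl being available on the node is an assumption):

```sh
# Probe the apiserver endpoint named in the operator errors.
# -k skips TLS verification; --max-time bounds the attempt.
curl -k --max-time 5 https://10.0.2.15:6443/healthz
```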


Standard Error

13:13:35 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended
13:13:35 STEP: Ensuring the namespace kube-system exists
13:13:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:13:35 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:13:36 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
13:13:36 STEP: Redeploying Cilium with tunnel disabled and KPR enabled
13:13:36 STEP: Installing Cilium
13:13:37 STEP: Waiting for Cilium to become ready
FAIL: Timed out after 240.001s.
Timeout while waiting for Cilium to become ready
Expected
    <*errors.errorString | 0xc0014e1790>: {
        s: "only 0 of 2 desired pods are ready",
    }
to be nil
13:17:37 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTestExtended
FAIL: Found 4 io.cilium/app=operator logs matching list of errors that must be investigated:
2023-05-04T13:16:59.228452167Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-05-04T13:16:59.228454886Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" function="client.(*compositeClientset).onStart" subsys=hive
2023-05-04T13:16:59.267186305Z level=error msg="Unable to contact k8s api-server" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" ipAddr="https://10.0.2.15:6443" subsys=k8s-client
2023-05-04T13:16:59.267194754Z level=error msg="Start hook failed" error="Get \"https://10.0.2.15:6443/api/v1/namespaces/kube-system\": dial tcp 10.0.2.15:6443: connect: connection refused" function="client.(*compositeClientset).onStart" subsys=hive
===================== TEST FAILED =====================
13:17:37 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTestExtended
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                               READY   STATUS              RESTARTS      AGE    IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-67ff49cd99-mcnrr           1/1     Running             0             11m    10.0.0.180      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-8c7df94b4-wj67q         1/1     Running             0             11m    10.0.0.224      k8s1   <none>           <none>
	 kube-system         cilium-5tr2d                       0/1     Init:0/6            3 (48s ago)   4m4s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-6db5d9b4cf-brg9m   0/1     Running             3 (42s ago)   4m4s   192.168.56.13   k8s3   <none>           <none>
	 kube-system         cilium-operator-6db5d9b4cf-v89zj   0/1     Running             3 (42s ago)   4m4s   192.168.56.12   k8s2   <none>           <none>
	 kube-system         cilium-xkj9z                       0/1     Init:0/6            3 (48s ago)   4m4s   192.168.56.11   k8s1   <none>           <none>
	 kube-system         coredns-6d97d5ddb-jzkj7            0/1     ContainerCreating   0             4m6s   <none>          k8s2   <none>           <none>
	 kube-system         etcd-k8s1                          1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1       1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         kube-scheduler-k8s1                1/1     Running             0             18m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-6mk6g                 1/1     Running             0             11m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         log-gatherer-hkxxd                 1/1     Running             0             11m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         log-gatherer-q4jcg                 1/1     Running             0             11m    192.168.56.11   k8s1   <none>           <none>
	 kube-system         registry-adder-fhkc5               1/1     Running             0             12m    192.168.56.12   k8s2   <none>           <none>
	 kube-system         registry-adder-kfj8k               1/1     Running             0             12m    192.168.56.13   k8s3   <none>           <none>
	 kube-system         registry-adder-vt586               1/1     Running             0             12m    192.168.56.11   k8s1   <none>           <none>
	 
Stderr:
 	 

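Both cilium agent pods are stuck in Init:0/6 and both operator replicas are restarting, so the agent container never comes up. Init container progress can be inspected directly; a sketch, reusing a pod name from the table above:

```sh
# List each init container of the stuck agent pod with its state.
kubectl -n kube-system get pod cilium-5tr2d -o \
  jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'

# Recent kubelet/scheduler events often show why Init:0/6 is not progressing.
kubectl -n kube-system describe pod cilium-5tr2d
```
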
Fetching command output from pods [cilium-5tr2d cilium-xkj9z]
cmd: kubectl exec -n kube-system cilium-5tr2d -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-5tr2d -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-xkj9z -c cilium-agent -- cilium service list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

cmd: kubectl exec -n kube-system cilium-xkj9z -c cilium-agent -- cilium endpoint list
Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 error: unable to upgrade connection: container not found ("cilium-agent")
	 

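All four exec attempts fail with "container not found" for the same reason: cilium-agent never started because the pods never left Init:0/6. Init container logs would be more useful here; a sketch (the container names must be read from the pod spec first, since they vary between Cilium versions):

```sh
# Discover the init container names, then pull logs from one of them.
kubectl -n kube-system get pod cilium-5tr2d \
  -o jsonpath='{.spec.initContainers[*].name}'
kubectl -n kube-system logs cilium-5tr2d -c <init-container-name>
```
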
===================== Exiting AfterFailed =====================
13:17:42 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended Validate toEntities KubeAPIServer
13:17:42 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTestExtended
13:17:42 STEP: Running AfterEach for block EntireTestsuite

[[ATTACHMENT|c01fc254_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip]]


ZIP Links:

https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/0e79ade9_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Denies_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/866ffcb1_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Still_allows_connection_to_KubeAPIServer_with_a_duplicate_policy.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/c01fc254_K8sPolicyTestExtended_Validate_toEntities_KubeAPIServer_Allows_connection_to_KubeAPIServer.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next//2067/artifact/test_results_Cilium-PR-K8s-1.26-kernel-net-next_2067_BDD-Test-PR.zip

Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.26-kernel-net-next/2067/

If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.

Metadata
    Labels

    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
