Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
pinned: These issues are not marked stale by our issue bot.
Description
Test Name
K8sKafkaPolicyTest Kafka Policy Tests KafkaPolicies
Failure Output
FAIL: Kafka Pods are not ready after timeout
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:465
Kafka Pods are not ready after timeout
Expected
<*errors.errorString | 0xc002abb260>: {
s: "timed out waiting for pods with filter -l zgroup=kafkaTestApp to be ready: 4m0s timeout expired",
}
to be nil
/home/jenkins/workspace/Cilium-PR-K8s-GKE@2/src/github.com/cilium/cilium/test/k8sT/KafkaPolicies.go:118
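The failure is the test's 4-minute wait for all pods labelled zgroup=kafkaTestApp in the default namespace to become Ready. A rough manual equivalent of that wait, useful when reproducing against a live cluster, is sketched below; this is an illustrative kubectl sketch, not the ginkgo helper the test actually calls.

```bash
# Manual approximation of the WaitforPods check that times out here
# (assumes kubectl access to the affected cluster).
kubectl -n default wait pod -l zgroup=kafkaTestApp \
  --for=condition=Ready --timeout=4m

# See which pod in the group never became Ready and why.
kubectl -n default get pods -l zgroup=kafkaTestApp -o wide
kubectl -n default describe pods -l zgroup=kafkaTestApp
```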
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: [cilium-6ztvj cilium-n9fkf]
Netpols loaded:
CiliumNetworkPolicies loaded:
Endpoint Policy Enforcement:
Pod Ingress Egress
empire-outpost-9999-5d45bbc888-jtppz
konnectivity-agent-78879f5c49-2md5w
konnectivity-agent-78879f5c49-gjh7b
konnectivity-agent-autoscaler-6cb774c9cc-dr8z9
kube-dns-7f4d6f474d-6gz2m
l7-default-backend-56cb9644f6-kthft
event-exporter-gke-67986489c8-fnpmd
empire-backup-78bb758bc4-7dwd8
empire-hq-69b8866d77-w8jvd
empire-outpost-8888-544cdcd9b8-zrftn
kafka-broker-67f887645b-6kjwl
kube-dns-7f4d6f474d-xgb4r
kube-dns-autoscaler-844c9d9448-5rbw2
metrics-server-v0.3.6-595f77948b-4nm2n
Cilium agent 'cilium-6ztvj': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 42 Failed 0
Cilium agent 'cilium-n9fkf': Status: Ok Health: Ok Nodes "" ContinerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 63 Failed 0
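The agent lines above are one-line summaries. When triaging, the full status of the agents from this run can be pulled directly; a sketch, using the pod names reported above (substitute the current pod names on a fresh run):

```bash
# Full agent status for the two Cilium pods reported in this failure.
kubectl -n kube-system exec cilium-6ztvj -c cilium-agent -- cilium status --verbose
kubectl -n kube-system exec cilium-n9fkf -c cilium-agent -- cilium status --verbose
```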
Standard Error
12:44:40 STEP: Running BeforeAll block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
12:44:40 STEP: Ensuring the namespace kube-system exists
12:44:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
12:44:41 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
12:44:42 STEP: Installing Cilium
12:44:49 STEP: Waiting for Cilium to become ready
12:45:30 STEP: Validating if Kubernetes DNS is deployed
12:45:30 STEP: Checking if deployment is ready
12:45:30 STEP: Checking if kube-dns service is plumbed correctly
12:45:30 STEP: Checking if pods have identity
12:45:30 STEP: Checking if DNS can resolve
12:45:32 STEP: Kubernetes DNS is not ready: %!s(<nil>)
12:45:32 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
12:45:34 STEP: Waiting for Kubernetes DNS to become operational
12:45:34 STEP: Checking if deployment is ready
12:45:34 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:35 STEP: Checking if deployment is ready
12:45:35 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:36 STEP: Checking if deployment is ready
12:45:36 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:37 STEP: Checking if deployment is ready
12:45:37 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:38 STEP: Checking if deployment is ready
12:45:38 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:39 STEP: Checking if deployment is ready
12:45:39 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:40 STEP: Checking if deployment is ready
12:45:40 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:41 STEP: Checking if deployment is ready
12:45:41 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:42 STEP: Checking if deployment is ready
12:45:42 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:43 STEP: Checking if deployment is ready
12:45:43 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:44 STEP: Checking if deployment is ready
12:45:44 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:45 STEP: Checking if deployment is ready
12:45:45 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:46 STEP: Checking if deployment is ready
12:45:46 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:47 STEP: Checking if deployment is ready
12:45:47 STEP: Kubernetes DNS is not ready yet: unable to retrieve deployment kube-system/coredns: Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error from server (NotFound): deployments.apps "coredns" not found
12:45:48 STEP: Checking if deployment is ready
12:45:48 STEP: Checking if kube-dns service is plumbed correctly
12:45:48 STEP: Checking if DNS can resolve
12:45:48 STEP: Checking if pods have identity
12:45:50 STEP: Validating Cilium Installation
12:45:50 STEP: Performing Cilium status preflight check
12:45:50 STEP: Performing Cilium controllers preflight check
12:45:50 STEP: Performing Cilium health check
12:45:55 STEP: Performing Cilium service preflight check
12:45:55 STEP: Performing K8s service preflight check
12:45:55 STEP: Waiting for cilium-operator to be ready
12:45:55 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
12:45:55 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
12:45:55 STEP: WaitforPods(namespace="kube-system", filter="")
12:46:16 STEP: WaitforPods(namespace="kube-system", filter="") => <nil>
12:46:17 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp")
12:50:17 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp") => timed out waiting for pods with filter -l zgroup=kafkaTestApp to be ready: 4m0s timeout expired
12:50:18 STEP: cmd: kubectl describe pods -n default -l zgroup=kafkaTestApp
Exitcode: 0
Stdout:
Name: empire-backup-78bb758bc4-7dwd8
Namespace: default
Priority: 0
Node: gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx/10.128.15.214
Start Time: Tue, 22 Feb 2022 12:46:17 +0000
Labels: app=empire-backup
pod-template-hash=78bb758bc4
zgroup=kafkaTestApp
Annotations: <none>
Status: Running
IP: 10.236.2.20
IPs:
IP: 10.236.2.20
Controlled By: ReplicaSet/empire-backup-78bb758bc4
Containers:
empire-backup:
Container ID: containerd://60fcac53e059a3e30178a3c58a70dd07832d3c0fff85ca8b36293f27fbe44a30
Image: docker.io/cilium/kafkaclient:1.0
Image ID: docker.io/cilium/kafkaclient@sha256:407fcf0e67d49785d128eaec456a9a82a226b75c451063628ceebeb0c6e38fda
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 22 Feb 2022 12:46:41 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vj9pq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-vj9pq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vj9pq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m default-scheduler Successfully assigned default/empire-backup-78bb758bc4-7dwd8 to gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx
Normal Pulling 3m58s kubelet Pulling image "docker.io/cilium/kafkaclient:1.0"
Normal Pulled 3m36s kubelet Successfully pulled image "docker.io/cilium/kafkaclient:1.0" in 21.953237812s
Normal Created 3m36s kubelet Created container empire-backup
Normal Started 3m36s kubelet Started container empire-backup
Name: empire-hq-69b8866d77-w8jvd
Namespace: default
Priority: 0
Node: gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx/10.128.15.214
Start Time: Tue, 22 Feb 2022 12:46:16 +0000
Labels: app=empire-hq
pod-template-hash=69b8866d77
zgroup=kafkaTestApp
Annotations: <none>
Status: Running
IP: 10.236.2.91
IPs:
IP: 10.236.2.91
Controlled By: ReplicaSet/empire-hq-69b8866d77
Containers:
empire-hq:
Container ID: containerd://fce71ab65d008f57e3de3061c1d4db22c635b3f9e2cc28e2da0e419a94328152
Image: docker.io/cilium/kafkaclient:1.0
Image ID: docker.io/cilium/kafkaclient@sha256:407fcf0e67d49785d128eaec456a9a82a226b75c451063628ceebeb0c6e38fda
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 22 Feb 2022 12:46:41 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vj9pq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-vj9pq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vj9pq
Optional: false
QoS Class: BestEffort
Node-Selectors: cilium.io/ci-node=k8s1
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m1s default-scheduler Successfully assigned default/empire-hq-69b8866d77-w8jvd to gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx
Normal Pulling 3m59s kubelet Pulling image "docker.io/cilium/kafkaclient:1.0"
Normal Pulled 3m39s kubelet Successfully pulled image "docker.io/cilium/kafkaclient:1.0" in 19.980588643s
Normal Created 3m36s kubelet Created container empire-hq
Normal Started 3m36s kubelet Started container empire-hq
Name: empire-outpost-8888-544cdcd9b8-zrftn
Namespace: default
Priority: 0
Node: gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g/10.128.15.216
Start Time: Tue, 22 Feb 2022 12:46:17 +0000
Labels: app=empire-outpost
outpostid=8888
pod-template-hash=544cdcd9b8
zgroup=kafkaTestApp
Annotations: <none>
Status: Running
IP: 10.236.1.109
IPs:
IP: 10.236.1.109
Controlled By: ReplicaSet/empire-outpost-8888-544cdcd9b8
Containers:
empire-outpost-8888:
Container ID: containerd://c7df41202df907fc9ee95750750545b266a2f83882df8d060834e448821a653b
Image: docker.io/cilium/kafkaclient:1.0
Image ID: docker.io/cilium/kafkaclient@sha256:407fcf0e67d49785d128eaec456a9a82a226b75c451063628ceebeb0c6e38fda
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 22 Feb 2022 12:46:27 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vj9pq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-vj9pq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vj9pq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m1s default-scheduler Successfully assigned default/empire-outpost-8888-544cdcd9b8-zrftn to gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g
Normal Pulling 4m kubelet Pulling image "docker.io/cilium/kafkaclient:1.0"
Normal Pulled 3m53s kubelet Successfully pulled image "docker.io/cilium/kafkaclient:1.0" in 6.536365708s
Normal Created 3m51s kubelet Created container empire-outpost-8888
Normal Started 3m51s kubelet Started container empire-outpost-8888
Name: empire-outpost-9999-5d45bbc888-jtppz
Namespace: default
Priority: 0
Node: gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g/10.128.15.216
Start Time: Tue, 22 Feb 2022 12:46:17 +0000
Labels: app=empire-outpost
outpostid=9999
pod-template-hash=5d45bbc888
zgroup=kafkaTestApp
Annotations: <none>
Status: Running
IP: 10.236.1.172
IPs:
IP: 10.236.1.172
Controlled By: ReplicaSet/empire-outpost-9999-5d45bbc888
Containers:
empire-outpost-9999:
Container ID: containerd://e1f3707913595dc931124480c8d3dade83de09abb4babd21cdea8f2fc8dcdc63
Image: docker.io/cilium/kafkaclient:1.0
Image ID: docker.io/cilium/kafkaclient@sha256:407fcf0e67d49785d128eaec456a9a82a226b75c451063628ceebeb0c6e38fda
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 22 Feb 2022 12:46:28 +0000
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vj9pq (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-vj9pq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vj9pq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m1s default-scheduler Successfully assigned default/empire-outpost-9999-5d45bbc888-jtppz to gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g
Normal Pulling 3m59s kubelet Pulling image "docker.io/cilium/kafkaclient:1.0"
Normal Pulled 3m51s kubelet Successfully pulled image "docker.io/cilium/kafkaclient:1.0" in 8.587879989s
Normal Created 3m51s kubelet Created container empire-outpost-9999
Normal Started 3m50s kubelet Started container empire-outpost-9999
Name: kafka-broker-67f887645b-6kjwl
Namespace: default
Priority: 0
Node: gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx/10.128.15.214
Start Time: Tue, 22 Feb 2022 12:46:16 +0000
Labels: app=kafka
pod-template-hash=67f887645b
zgroup=kafkaTestApp
Annotations: <none>
Status: Running
IP: 10.236.2.1
IPs:
IP: 10.236.2.1
Controlled By: ReplicaSet/kafka-broker-67f887645b
Containers:
kafka:
Container ID: containerd://3681ff68cee45563772e426e8090e9c86596ea9b2c3395029da63ebd9a03f480
Image: docker.io/cilium/kafkaproxy:1.0
Image ID: docker.io/cilium/kafkaproxy@sha256:3d607d747fc5bece34b9479506018da795677d004524ca2851886a331b255ccc
Port: 9092/TCP
Host Port: 0/TCP
State: Running
Started: Tue, 22 Feb 2022 12:48:44 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 22 Feb 2022 12:46:33 +0000
Finished: Tue, 22 Feb 2022 12:48:44 +0000
Ready: False
Restart Count: 1
Liveness: tcp-socket :9092 delay=30s timeout=1s period=10s #success=1 #failure=10
Readiness: tcp-socket :9092 delay=30s timeout=1s period=5s #success=1 #failure=3
Environment:
ADVERTISED_HOST: kafka-service
ADVERTISED_PORT: 9092
CONSUMER_THREADS: 1
ZK_CONNECT: zk.connect=localhost:2181/root/path
GROUP_ID: groupid=test
TOPICS: empire-announce,deathstar-plans,test-topic
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-vj9pq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-vj9pq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-vj9pq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 4m2s default-scheduler Successfully assigned default/kafka-broker-67f887645b-6kjwl to gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx
Normal Pulling 4m kubelet Pulling image "docker.io/cilium/kafkaproxy:1.0"
Normal Pulled 3m49s kubelet Successfully pulled image "docker.io/cilium/kafkaproxy:1.0" in 10.500096263s
Normal Created 3m45s kubelet Created container kafka
Normal Started 3m45s kubelet Started container kafka
Warning Unhealthy 2m7s (x14 over 3m12s) kubelet Readiness probe failed: dial tcp 10.236.2.1:9092: connect: connection refused
Warning Unhealthy 2m6s (x7 over 3m6s) kubelet Liveness probe failed: dial tcp 10.236.2.1:9092: connect: connection refused
Stderr:
FAIL: Kafka Pods are not ready after timeout
Expected
<*errors.errorString | 0xc002abb260>: {
s: "timed out waiting for pods with filter -l zgroup=kafkaTestApp to be ready: 4m0s timeout expired",
}
to be nil
12:50:18 STEP: Running JustAfterEach block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
===================== TEST FAILED =====================
12:50:19 STEP: Running AfterFailed block for EntireTestsuite K8sKafkaPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
cilium-monitoring grafana-d69c97b9b-wx6c7 0/1 Running 0 78m 10.236.2.6 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
cilium-monitoring prometheus-655fb888d7-krblj 1/1 Running 0 78m 10.236.2.7 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
default empire-backup-78bb758bc4-7dwd8 1/1 Running 0 4m7s 10.236.2.20 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
default empire-hq-69b8866d77-w8jvd 1/1 Running 0 4m8s 10.236.2.91 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
default empire-outpost-8888-544cdcd9b8-zrftn 1/1 Running 0 4m8s 10.236.1.109 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
default empire-outpost-9999-5d45bbc888-jtppz 1/1 Running 0 4m7s 10.236.1.172 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
default kafka-broker-67f887645b-6kjwl 0/1 Running 1 4m8s 10.236.2.1 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system cilium-6ztvj 1/1 Running 0 5m36s 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system cilium-n9fkf 1/1 Running 0 5m36s 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system cilium-node-init-n77zn 1/1 Running 0 5m35s 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system cilium-node-init-s99l7 1/1 Running 0 5m35s 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system cilium-operator-54f86d6954-dwjw4 1/1 Running 0 5m35s 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system cilium-operator-54f86d6954-vh42z 1/1 Running 0 5m35s 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system event-exporter-gke-67986489c8-fnpmd 2/2 Running 0 41m 10.236.1.108 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system fluentbit-gke-4kwl8 2/2 Running 0 80m 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system fluentbit-gke-759sm 2/2 Running 0 80m 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system gke-metrics-agent-4n8fh 1/1 Running 0 80m 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system gke-metrics-agent-n6g8x 1/1 Running 0 80m 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system konnectivity-agent-78879f5c49-2md5w 1/1 Running 0 41m 10.236.1.23 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system konnectivity-agent-78879f5c49-gjh7b 1/1 Running 0 41m 10.236.2.232 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system konnectivity-agent-autoscaler-6cb774c9cc-dr8z9 1/1 Running 0 41m 10.236.1.231 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system kube-dns-7f4d6f474d-6gz2m 4/4 Running 0 4m50s 10.236.2.131 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system kube-dns-7f4d6f474d-xgb4r 4/4 Running 0 4m50s 10.236.1.207 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system kube-dns-autoscaler-844c9d9448-5rbw2 1/1 Running 0 41m 10.236.1.77 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system kube-proxy-gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx 1/1 Running 0 80m 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system kube-proxy-gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g 1/1 Running 0 80m 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system l7-default-backend-56cb9644f6-kthft 1/1 Running 0 41m 10.236.1.224 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system log-gatherer-br6kn 1/1 Running 0 78m 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system log-gatherer-xrh9p 1/1 Running 0 78m 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
kube-system metrics-server-v0.3.6-595f77948b-4nm2n 2/2 Running 0 41m 10.236.1.152 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system pdcsi-node-ctqx2 2/2 Running 0 80m 10.128.15.216 gke-cilium-ci-2-cilium-ci-2-0558b21e-zw2g <none> <none>
kube-system pdcsi-node-dgmzj 2/2 Running 0 80m 10.128.15.214 gke-cilium-ci-2-cilium-ci-2-0558b21e-4rgx <none> <none>
Stderr:
Fetching command output from pods [cilium-6ztvj cilium-n9fkf]
cmd: kubectl exec -n kube-system cilium-6ztvj -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.239.255.70:80 ClusterIP 1 => 10.236.1.224:8080
2 10.128.15.214:31347 NodePort 1 => 10.236.1.224:8080
3 0.0.0.0:31347 NodePort 1 => 10.236.1.224:8080
4 10.239.242.43:9090 ClusterIP 1 => 10.236.2.7:9090
5 10.239.248.168:443 ClusterIP 1 => 10.236.1.152:443
6 10.239.242.14:3000 ClusterIP
7 10.239.240.1:443 ClusterIP 1 => 35.247.97.173:443
8 10.239.240.10:53 ClusterIP 1 => 10.236.1.207:53
2 => 10.236.2.131:53
Stderr:
cmd: kubectl exec -n kube-system cilium-6ztvj -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
103 Disabled Disabled 4 reserved:health 10.236.2.50 ready
361 Disabled Disabled 47142 k8s:io.cilium.k8s.policy.cluster=default 10.236.2.131 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1553 Disabled Disabled 23265 k8s:io.cilium.k8s.policy.cluster=default 10.236.2.232 ready
k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=konnectivity-agent
1982 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:cloud.google.com/gke-boot-disk=pd-standard
k8s:cloud.google.com/gke-container-runtime=containerd
k8s:cloud.google.com/gke-nodepool=cilium-ci-2
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.gke.io/zone=us-west1-c
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-c
reserved:host
2186 Disabled Disabled 14284 k8s:app=empire-backup 10.236.2.20 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=kafkaTestApp
2619 Disabled Disabled 7947 k8s:app=empire-hq 10.236.2.91 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=kafkaTestApp
3525 Disabled Disabled 61663 k8s:app=kafka 10.236.2.1 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:zgroup=kafkaTestApp
Stderr:
cmd: kubectl exec -n kube-system cilium-n9fkf -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.239.248.168:443 ClusterIP 1 => 10.236.1.152:443
2 10.239.242.14:3000 ClusterIP
3 10.239.240.1:443 ClusterIP 1 => 35.247.97.173:443
4 10.239.240.10:53 ClusterIP 1 => 10.236.1.207:53
2 => 10.236.2.131:53
5 10.239.255.70:80 ClusterIP 1 => 10.236.1.224:8080
6 10.128.15.216:31347 NodePort 1 => 10.236.1.224:8080
7 0.0.0.0:31347 NodePort 1 => 10.236.1.224:8080
8 10.239.242.43:9090 ClusterIP 1 => 10.236.2.7:9090
Stderr:
cmd: kubectl exec -n kube-system cilium-n9fkf -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
61 Disabled Disabled 37819 k8s:app=empire-outpost 10.236.1.172 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:outpostid=9999
k8s:zgroup=kafkaTestApp
68 Disabled Disabled 23265 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.23 ready
k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=konnectivity-agent
87 Disabled Disabled 4493 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.152 ready
k8s:io.cilium.k8s.policy.serviceaccount=metrics-server
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=metrics-server
k8s:version=v0.3.6
178 Disabled Disabled 21802 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.77 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns-autoscaler
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns-autoscaler
400 Disabled Disabled 4 reserved:health 10.236.1.38 ready
525 Disabled Disabled 30597 k8s:app=empire-outpost 10.236.1.109 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=default
k8s:outpostid=8888
k8s:zgroup=kafkaTestApp
676 Disabled Disabled 14546 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.231 ready
k8s:io.cilium.k8s.policy.serviceaccount=konnectivity-agent-cpha
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=konnectivity-agent-autoscaler
1515 Disabled Disabled 47142 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.207 ready
k8s:io.cilium.k8s.policy.serviceaccount=kube-dns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
1920 Disabled Disabled 11497 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.224 ready
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=glbc
k8s:name=glbc
2261 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
k8s:cloud.google.com/gke-boot-disk=pd-standard
k8s:cloud.google.com/gke-container-runtime=containerd
k8s:cloud.google.com/gke-nodepool=cilium-ci-2
k8s:cloud.google.com/gke-os-distribution=cos
k8s:cloud.google.com/machine-family=n1
k8s:node.kubernetes.io/instance-type=n1-standard-4
k8s:topology.gke.io/zone=us-west1-c
k8s:topology.kubernetes.io/region=us-west1
k8s:topology.kubernetes.io/zone=us-west1-c
reserved:host
2536 Disabled Disabled 37108 k8s:io.cilium.k8s.policy.cluster=default 10.236.1.108 ready
k8s:io.cilium.k8s.policy.serviceaccount=event-exporter-sa
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=event-exporter
k8s:version=v0.3.4
Stderr:
===================== Exiting AfterFailed =====================
12:51:03 STEP: Running AfterEach for block EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
12:51:05 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|a35648c8_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip]]
12:51:07 STEP: Running AfterAll block for EntireTestsuite K8sKafkaPolicyTest
12:51:07 STEP: Removing Cilium installation using generated helm manifest
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7696/artifact/src/github.com/cilium/cilium/a35648c8_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7696/artifact/src/github.com/cilium/cilium/test_results_Cilium-PR-K8s-GKE_7696_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE/7696/
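To dig into why kafka-broker-67f887645b-6kjwl restarted and then kept failing its readiness probe on :9092, the attached artifacts can be fetched and searched locally. A sketch; the internal layout of the archive is assumed, so adjust paths as needed:

```bash
# Download the gathered logs for this run and grep for the kafka broker.
curl -sLO 'https://jenkins.cilium.io/job/Cilium-PR-K8s-GKE//7696/artifact/src/github.com/cilium/cilium/a35648c8_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip'
unzip -q a35648c8_K8sKafkaPolicyTest_Kafka_Policy_Tests_KafkaPolicies.zip -d kafka-flake-logs
grep -ri "kafka" kafka-flake-logs | less
```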
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.