CI: E2E Test (1.27, f20-kafka) - K8sKafkaPolicyTest Kafka Policy Tests: KafkaPolicies #26009

Description

@giorio94

CI failure

Hit on #25966: https://github.com/cilium/cilium/actions/runs/5200507358/jobs/9379429675

• Failure [212.934 seconds]
K8sKafkaPolicyTest
/home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
  Kafka Policy Tests
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:461
    KafkaPolicies [It]
    /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515

    Failed to produce from empire-hq on topic empire-announce
    Expected
        <*errors.errorString | 0xc000e0c320>: 
        ExecKafkaPodCmd: command 'kubectl exec -n default empire-hq-9b7455868-x45t2 -- sh -c "echo 'Happy 40th Birthday to General Tagge' | ./kafka-produce.sh --topic empire-announce"' failed Exitcode: 1 
        Err: exit status 1
        Stdout:
         	 
        Stderr:
         	 [2023-06-07 13:31:55,584] WARN Removing server kafka-service:9092 from bootstrap.servers as DNS resolution failed for kafka-service (org.apache.kafka.clients.ClientUtils)
        	 org.apache.kafka.common.KafkaException: Failed to construct kafka producer
        	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:416)
        	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:288)
        	 	at kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)
        	 	at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
        	 	at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
        	 Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
        	 	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)
        	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:373)
        	 	... 4 more
        	 command terminated with exit code 1
        	 
        
        {
            s: "ExecKafkaPodCmd: command 'kubectl exec -n default empire-hq-9b7455868-x45t2 -- sh -c \"echo 'Happy 40th Birthday to General Tagge' | ./kafka-produce.sh --topic empire-announce\"' failed Exitcode: 1 \nErr: exit status 1\nStdout:\n \t \nStderr:\n \t [2023-06-07 13:31:55,584] WARN Removing server kafka-service:9092 from bootstrap.servers as DNS resolution failed for kafka-service (org.apache.kafka.clients.ClientUtils)\n\t org.apache.kafka.common.KafkaException: Failed to construct kafka producer\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:416)\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:288)\n\t \tat kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)\n\t \tat kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)\n\t \tat kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)\n\t Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers\n\t \tat org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:373)\n\t \t... 4 more\n\t command terminated with exit code 1\n\t \n",
        }
    to be nil

    /home/runner/work/cilium/cilium/test/k8s/kafka_policies.go:154
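
The produce command never reaches the broker: the producer pod cannot resolve kafka-service, so the Kafka client drops its only bootstrap server and aborts with "No resolvable bootstrap urls given in bootstrap.servers". A minimal way to probe in-cluster DNS from the affected pod (the pod name is from this run, and getent availability in the image is an assumption; nslookup works as an alternative where present):

    # Resolve the Kafka service name from inside the producer pod
    # (empire-hq-9b7455868-x45t2 is this run's pod; substitute the current one).
    kubectl exec -n default empire-hq-9b7455868-x45t2 -- getent hosts kafka-service

    # Cross-check that the Service and its Endpoints exist and are populated.
    kubectl -n default get svc,endpoints kafka-service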

Logs:

K8sKafkaPolicyTest Kafka Policy Tests 
  KafkaPolicies
  /home/runner/work/cilium/cilium/test/ginkgo-ext/scopes.go:515
13:28:46 STEP: Running BeforeAll block for EntireTestsuite
13:28:46 STEP: Starting tests: command line parameters: {Reprovision:false HoldEnvironment:false PassCLIEnvironment:false SSHConfig: ShowCommands:false TestScope: SkipLogGathering:false CiliumImage:quay.io/cilium/cilium-ci CiliumTag:da50c81348d32d0c84c092d454b16126d6a5a965 CiliumOperatorImage:quay.io/cilium/operator CiliumOperatorTag:da50c81348d32d0c84c092d454b16126d6a5a965 CiliumOperatorSuffix:-ci HubbleRelayImage:quay.io/cilium/hubble-relay-ci HubbleRelayTag:da50c81348d32d0c84c092d454b16126d6a5a965 ProvisionK8s:false Timeout:24h0m0s Kubeconfig:/root/.kube/config KubectlPath:/tmp/kubectl RegistryCredentials: Multinode:true RunQuarantined:false Help:false} environment variables: [SHELL=/bin/bash KERNEL=net-next K8S_NODES=3 *** LOGNAME=root KUBEPROXY=0 HOME=/root LANG=C.UTF-8 NO_CILIUM_ON_NODES=kind-worker2 NETNEXT=1 SSH_CONNECTION=10.0.2.2 49470 10.0.2.15 22 INTEGRATION_TESTS=true CILIUM_NO_IPV6_OUTSIDE=true USER=root CNI_INTEGRATION=kind SHLVL=1 K8S_VERSION=1.27 SSH_CLIENT=10.0.2.2 49470 22 PATH=/root/go/bin:/usr/local/go/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin MAIL=/var/mail/root OLDPWD=/root _=./test.test CILIUM_IMAGE=quay.io/cilium/cilium-ci CILIUM_TAG=da50c81348d32d0c84c092d454b16126d6a5a965 CILIUM_OPERATOR_IMAGE=quay.io/cilium/operator CILIUM_OPERATOR_TAG=da50c81348d32d0c84c092d454b16126d6a5a965 CILIUM_OPERATOR_SUFFIX=-ci HUBBLE_RELAY_IMAGE=quay.io/cilium/hubble-relay-ci HUBBLE_RELAY_TAG=da50c81348d32d0c84c092d454b16126d6a5a965 SKIP_K8S_PROVISION=true]
13:28:46 STEP: Ensuring the namespace kube-system exists
13:28:47 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:28:58 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:28:59 STEP: Preparing cluster
13:28:59 STEP: Deleting namespace local-path-storage
13:29:07 STEP: Labelling nodes
13:29:08 STEP: Cleaning up Cilium components
13:29:10 STEP: Running BeforeAll block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
13:29:10 STEP: Ensuring the namespace kube-system exists
13:29:10 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:29:10 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:29:11 STEP: Installing Cilium
13:29:16 STEP: Waiting for Cilium to become ready
13:29:54 STEP: Validating if Kubernetes DNS is deployed
13:29:54 STEP: Checking if deployment is ready
13:29:55 STEP: Kubernetes DNS is not ready: only 0 of 2 replicas are available
13:29:55 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:29:57 STEP: Waiting for Kubernetes DNS to become operational
13:29:57 STEP: Checking if deployment is ready
13:29:58 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:29:58 STEP: Checking if deployment is ready
13:29:59 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:29:59 STEP: Checking if deployment is ready
13:30:00 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:00 STEP: Checking if deployment is ready
13:30:01 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:01 STEP: Checking if deployment is ready
13:30:02 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:02 STEP: Checking if deployment is ready
13:30:03 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:03 STEP: Checking if deployment is ready
13:30:04 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:04 STEP: Checking if deployment is ready
13:30:05 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:05 STEP: Checking if deployment is ready
13:30:06 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:06 STEP: Checking if deployment is ready
13:30:07 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:07 STEP: Checking if deployment is ready
13:30:08 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:08 STEP: Checking if deployment is ready
13:30:09 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:09 STEP: Checking if deployment is ready
13:30:10 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:10 STEP: Checking if deployment is ready
13:30:11 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:11 STEP: Checking if deployment is ready
13:30:12 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:12 STEP: Checking if deployment is ready
13:30:13 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:13 STEP: Checking if deployment is ready
13:30:14 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:14 STEP: Checking if deployment is ready
13:30:15 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:15 STEP: Checking if deployment is ready
13:30:16 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:16 STEP: Checking if deployment is ready
13:30:17 STEP: Kubernetes DNS is not ready yet: only 0 of 2 replicas are available
13:30:17 STEP: Checking if deployment is ready
13:30:18 STEP: Checking if kube-dns service is plumbed correctly
13:30:18 STEP: Checking if DNS can resolve
13:30:18 STEP: Checking if pods have identity
13:30:25 STEP: Validating Cilium Installation
13:30:25 STEP: Checking whether host EP regenerated
13:30:25 STEP: Performing Cilium health check
13:30:25 STEP: Performing Cilium status preflight check
13:30:25 STEP: Performing Cilium controllers preflight check
13:30:37 STEP: Performing Cilium service preflight check
13:30:41 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-l7kl6 [{0xc0005d4400 0xc00051c2a8} {0xc0005d45c0 0xc00051c2c8} {0xc0005d4740 0xc00051c2d0} {0xc0005d4980 0xc00051c318} {0xc0005d4c40 0xc00051c328} {0xc0005d4d40 0xc00051c348}] map[10.96.0.10:53:[0.0.0.0:0 (3) (0) [ClusterIP, non-routable] 10.0.1.195:53 (3) (1) 10.0.1.129:53 (3) (2)] 10.96.0.10:9153:[10.0.1.195:9153 (4) (1) 0.0.0.0:0 (4) (0) [ClusterIP, non-routable] 10.0.1.129:9153 (4) (2)] 10.96.0.1:443:[172.18.0.3:6443 (1) (1) 0.0.0.0:0 (1) (0) [ClusterIP, non-routable]] 10.96.134.196:9090:[0.0.0.0:0 (6) (0) [ClusterIP, non-routable] 10.0.0.49:9090 (6) (1)] 10.96.176.86:3000:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable]] 10.96.181.149:443:[0.0.0.0:0 (2) (0) [ClusterIP, InternalLocal, non-routable] 172.18.0.2:4244 (2) (1)]]}: Could not match cilium service backend address 10.0.0.49:9090 with k8s endpoint
13:30:41 STEP: Performing Cilium controllers preflight check
13:30:41 STEP: Performing Cilium status preflight check
13:30:41 STEP: Performing Cilium health check
13:30:41 STEP: Checking whether host EP regenerated
13:30:51 STEP: Performing Cilium service preflight check
13:30:51 STEP: Performing K8s service preflight check
13:30:56 STEP: Waiting for cilium-operator to be ready
13:30:56 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:30:57 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:30:57 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp")
13:31:44 STEP: WaitforPods(namespace="default", filter="-l zgroup=kafkaTestApp") => <nil>
13:31:50 STEP: Wait for Kafka broker to be up
13:31:51 STEP: Creating new kafka topic empire-announce
13:31:53 STEP: Creating new kafka topic deathstar-plans
13:31:54 STEP: Waiting for DNS to resolve within pods for kafka-service
13:31:54 STEP: Testing basic Kafka Produce and Consume
FAIL: Failed to produce from empire-hq on topic empire-announce
Expected
    <*errors.errorString | 0xc000e0c320>: 
    ExecKafkaPodCmd: command 'kubectl exec -n default empire-hq-9b7455868-x45t2 -- sh -c "echo 'Happy 40th Birthday to General Tagge' | ./kafka-produce.sh --topic empire-announce"' failed Exitcode: 1 
    Err: exit status 1
    Stdout:
     	 
    Stderr:
     	 [2023-06-07 13:31:55,584] WARN Removing server kafka-service:9092 from bootstrap.servers as DNS resolution failed for kafka-service (org.apache.kafka.clients.ClientUtils)
    	 org.apache.kafka.common.KafkaException: Failed to construct kafka producer
    	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:416)
    	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:288)
    	 	at kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)
    	 	at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
    	 	at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
    	 Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers
    	 	at org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)
    	 	at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:373)
    	 	... 4 more
    	 command terminated with exit code 1
    	 
    
    {
        s: "ExecKafkaPodCmd: command 'kubectl exec -n default empire-hq-9b7455868-x45t2 -- sh -c \"echo 'Happy 40th Birthday to General Tagge' | ./kafka-produce.sh --topic empire-announce\"' failed Exitcode: 1 \nErr: exit status 1\nStdout:\n \t \nStderr:\n \t [2023-06-07 13:31:55,584] WARN Removing server kafka-service:9092 from bootstrap.servers as DNS resolution failed for kafka-service (org.apache.kafka.clients.ClientUtils)\n\t org.apache.kafka.common.KafkaException: Failed to construct kafka producer\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:416)\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:288)\n\t \tat kafka.producer.NewShinyProducer.<init>(BaseProducer.scala:40)\n\t \tat kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)\n\t \tat kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)\n\t Caused by: org.apache.kafka.common.config.ConfigException: No resolvable bootstrap urls given in bootstrap.servers\n\t \tat org.apache.kafka.clients.ClientUtils.parseAndValidateAddresses(ClientUtils.java:64)\n\t \tat org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:373)\n\t \t... 4 more\n\t command terminated with exit code 1\n\t \n",
    }
to be nil
=== Test Finished at 2023-06-07T13:31:55Z====
13:31:55 STEP: Running JustAfterEach block for EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
===================== TEST FAILED =====================
13:31:56 STEP: Running AfterFailed block for EntireTestsuite K8sKafkaPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                                         READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-758c69b6df-prf7v                     1/1     Running   0          3m2s    10.0.0.157   kind-control-plane   <none>           <none>
	 cilium-monitoring   prometheus-5bc5cbbf9d-78rm5                  1/1     Running   0          3m2s    10.0.0.49    kind-control-plane   <none>           <none>
	 default             empire-backup-687b67667f-jkv9z               1/1     Running   0          75s     10.0.1.47    kind-worker          <none>           <none>
	 default             empire-hq-9b7455868-x45t2                    1/1     Running   0          75s     10.0.0.210   kind-control-plane   <none>           <none>
	 default             empire-outpost-8888-dfcdf7bcf-xdssb          1/1     Running   0          75s     10.0.1.135   kind-worker          <none>           <none>
	 default             empire-outpost-9999-56984894dc-v7tsw         1/1     Running   0          75s     10.0.1.26    kind-worker          <none>           <none>
	 default             kafka-broker-76c74b5587-xdp8w                1/1     Running   0          75s     10.0.1.226   kind-worker          <none>           <none>
	 kube-system         cilium-cn5gt                                 1/1     Running   0          2m56s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         cilium-l7kl6                                 1/1     Running   0          2m56s   172.18.0.2   kind-worker          <none>           <none>
	 kube-system         cilium-node-init-5hxvw                       1/1     Running   0          2m56s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         cilium-node-init-bh54w                       1/1     Running   0          2m56s   172.18.0.4   kind-worker2         <none>           <none>
	 kube-system         cilium-node-init-lhjr7                       1/1     Running   0          2m56s   172.18.0.2   kind-worker          <none>           <none>
	 kube-system         cilium-operator-56ccfdd776-6csjq             1/1     Running   0          2m56s   172.18.0.4   kind-worker2         <none>           <none>
	 kube-system         cilium-operator-56ccfdd776-d4kmq             1/1     Running   0          2m56s   172.18.0.2   kind-worker          <none>           <none>
	 kube-system         coredns-5d78c9869d-mmb8h                     1/1     Running   0          2m15s   10.0.1.195   kind-worker          <none>           <none>
	 kube-system         coredns-5d78c9869d-p6dw4                     1/1     Running   0          2m15s   10.0.1.129   kind-worker          <none>           <none>
	 kube-system         etcd-kind-control-plane                      1/1     Running   0          3m52s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         kube-apiserver-kind-control-plane            1/1     Running   0          3m52s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         kube-controller-manager-kind-control-plane   1/1     Running   0          3m52s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         kube-scheduler-kind-control-plane            1/1     Running   0          3m52s   172.18.0.3   kind-control-plane   <none>           <none>
	 kube-system         log-gatherer-6w77x                           1/1     Running   0          3m25s   172.18.0.4   kind-worker2         <none>           <none>
	 kube-system         log-gatherer-kvfhd                           1/1     Running   0          3m25s   172.18.0.2   kind-worker          <none>           <none>
	 kube-system         log-gatherer-tsmqj                           1/1     Running   0          3m25s   172.18.0.3   kind-control-plane   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-cn5gt cilium-l7kl6]
cmd: kubectl exec -n kube-system cilium-cn5gt -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                         
	 1    10.96.176.86:3000    ClusterIP      1 => 10.0.0.157:3000 (active)   
	 2    10.96.134.196:9090   ClusterIP      1 => 10.0.0.49:9090 (active)    
	 3    10.96.0.1:443        ClusterIP      1 => 172.18.0.3:6443 (active)   
	 4    10.96.181.149:443    ClusterIP      1 => 172.18.0.3:4244 (active)   
	 5    10.96.0.10:53        ClusterIP      1 => 10.0.1.195:53 (active)     
	                                          2 => 10.0.1.129:53 (active)     
	 6    10.96.0.10:9153      ClusterIP      1 => 10.0.1.195:9153 (active)   
	                                          2 => 10.0.1.129:9153 (active)   
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-cn5gt -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 668        Disabled           Disabled          2048       k8s:app=grafana                                                                    fd02::a4   10.0.0.157   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 780        Disabled           Disabled          26849      k8s:app=empire-hq                                                                  fd02::1f   10.0.0.210   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                             
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:zgroup=kafkaTestApp                                                                                            
	 1580       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                          
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                        
	                                                            reserved:host                                                                                                      
	 2458       Disabled           Disabled          4          reserved:health                                                                    fd02::d2   10.0.0.204   ready   
	 3310       Disabled           Disabled          35139      k8s:app=prometheus                                                                 fd02::b    10.0.0.49    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l7kl6 -c cilium-agent -- cilium service list
Exitcode: 0 
Stdout:
 	 ID   Frontend             Service Type   Backend                         
	 1    10.96.0.1:443        ClusterIP      1 => 172.18.0.3:6443 (active)   
	 2    10.96.181.149:443    ClusterIP      1 => 172.18.0.2:4244 (active)   
	 3    10.96.0.10:53        ClusterIP      1 => 10.0.1.195:53 (active)     
	                                          2 => 10.0.1.129:53 (active)     
	 4    10.96.0.10:9153      ClusterIP      1 => 10.0.1.195:9153 (active)   
	                                          2 => 10.0.1.129:9153 (active)   
	 5    10.96.176.86:3000    ClusterIP      1 => 10.0.0.157:3000 (active)   
	 6    10.96.134.196:9090   ClusterIP      1 => 10.0.0.49:9090 (active)    
	 
Stderr:
 	 

cmd: kubectl exec -n kube-system cilium-l7kl6 -c cilium-agent -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                  IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                        
	 31         Disabled           Disabled          19893      k8s:app=kafka                                                                fd02::1be   10.0.1.226   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                        
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:zgroup=kafkaTestApp                                                                                       
	 81         Disabled           Disabled          19382      k8s:app=empire-outpost                                                       fd02::187   10.0.1.135   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                        
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:outpostid=8888                                                                                            
	                                                            k8s:zgroup=kafkaTestApp                                                                                       
	 151        Disabled           Disabled          56449      k8s:app=empire-backup                                                        fd02::106   10.0.1.47    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                        
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:zgroup=kafkaTestApp                                                                                       
	 187        Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                            ready   
	                                                            reserved:host                                                                                                 
	 373        Disabled           Disabled          40263      k8s:app=empire-outpost                                                       fd02::10f   10.0.1.26    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default                                        
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                               
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                       
	                                                            k8s:outpostid=9999                                                                                            
	                                                            k8s:zgroup=kafkaTestApp                                                                                       
	 1827       Disabled           Disabled          9359       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::124   10.0.1.129   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                   
	                                                            k8s:k8s-app=kube-dns                                                                                          
	 3332       Disabled           Disabled          9359       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system   fd02::17b   10.0.1.195   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                      
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                               
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                   
	                                                            k8s:k8s-app=kube-dns                                                                                          
	 3604       Disabled           Disabled          4          reserved:health                                                              fd02::1c7   10.0.1.134   ready   
	 
Stderr:
 	 

===================== Exiting AfterFailed =====================
13:32:18 STEP: Running AfterEach for block EntireTestsuite K8sKafkaPolicyTest Kafka Policy Tests
13:32:18 STEP: Running AfterEach for block EntireTestsuite
<Checks>
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 12
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 5 errors/warnings:
CONFIG_LWTUNNEL_BPF optional kernel parameter is not in kernel (needed for: Lightweight Tunnel hook for IP-in-IP encapsulation)
Unable to get node resource
Waiting for k8s node information
UpdateIdentities: Skipping Delete of a non-existing identity
Unable to ensure that BPF JIT compilation is enabled. This can be ignored when Cilium is running inside non-host network namespace (e.g. with kind or minikube)
Cilium pods: [cilium-cn5gt cilium-l7kl6]
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod                                    Ingress   Egress
grafana-758c69b6df-prf7v               false     false
empire-outpost-8888-dfcdf7bcf-xdssb    false     false
coredns-5d78c9869d-p6dw4               false     false
empire-outpost-9999-56984894dc-v7tsw   false     false
kafka-broker-76c74b5587-xdp8w          false     false
coredns-5d78c9869d-mmb8h               false     false
prometheus-5bc5cbbf9d-78rm5            false     false
empire-backup-687b67667f-jkv9z         false     false
empire-hq-9b7455868-x45t2              false     false
Cilium agent 'cilium-cn5gt': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 32 Failed 0
Cilium agent 'cilium-l7kl6': Status: Ok  Health: Ok Nodes "" ContainerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 44 Failed 0

</Checks>

13:32:18 STEP: Running AfterAll block for EntireTestsuite K8sKafkaPolicyTest
13:32:18 STEP: Removing Cilium installation using generated helm manifest

Sysdump: test_results-E2E Test (1.27, f20-kafka).tar.gz
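
Timeline note: the test logged "Waiting for DNS to resolve within pods for kafka-service" at 13:31:54 and the produce attempt failed at 13:31:55, roughly two minutes after kube-dns was restarted (13:29:55) and declared ready (13:30:18). No Netpols or CiliumNetworkPolicies were loaded and every endpoint shows policy enforcement disabled, which suggests a transient DNS readiness race rather than a policy drop. A hedged sketch of a probe that could be run from the producer pod to check whether resolution recovers on its own (pod name from this run; getent assumed available in the image):

    # Hypothetical probe: retry the lookup from the producer pod to see
    # whether kafka-service resolution recovers, i.e. the failure is transient.
    for i in $(seq 1 30); do
      kubectl exec -n default empire-hq-9b7455868-x45t2 -- \
        getent hosts kafka-service && break
      sleep 2
    done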

Metadata

Labels

area/CI (Continuous Integration testing issue or flake)
ci/flake (This is a known failure that occurs in the tree. Please investigate me!)
