Closed as not planned
Labels
- area/CI: Continuous Integration testing issue or flake
- ci/flake: This is a known failure that occurs in the tree. Please investigate me!
- stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
Description
Test Name
Suite-k8s-1.16.K8sDatapathConfig MonitorAggregation Checks that monitor aggregation flags send notifications
Failure Output
FAIL: Timed out after 240.000s.
Stack Trace
/home/jenkins/workspace/cilium-master-k8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:515
Timed out after 240.000s.
monitor aggregation did not result in correct number of TCP notifications
Expected
<bool>: false
to be true
/home/jenkins/workspace/cilium-master-k8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/k8s/datapath_configuration.go:139
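For reference, this failure text is the shape a Gomega boolean assertion produces when it times out: the test polls a helper that scans the monitor output and expects it to eventually return true. Below is a minimal sketch of that pattern, assuming a hypothetical helper name; it is not the actual code at datapath_configuration.go:139.

```go
package monitoraggregation_test

import (
	"testing"
	"time"

	"github.com/onsi/gomega"
)

// tcpNotificationsSeen stands in for the test helper that scans the cilium
// monitor log for the expected TCP notifications on the ephemeral port.
// The name is hypothetical and used only for this sketch.
func tcpNotificationsSeen() bool {
	return false // the flake: the expected notifications never all show up
}

func TestMonitorAggregation(t *testing.T) {
	g := gomega.NewWithT(t)
	// Polling a bool-returning function and asserting it eventually becomes
	// true is what yields "Expected <bool>: false to be true" once the
	// timeout (240s in the CI run, shortened here) expires.
	g.Eventually(tcpNotificationsSeen, 10*time.Second, 1*time.Second).
		Should(gomega.BeTrue(), "monitor aggregation did not result in correct number of TCP notifications")
}
```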
Standard Output
06:15:45 STEP: Installing Cilium
06:15:47 STEP: Waiting for Cilium to become ready
06:16:32 STEP: Validating if Kubernetes DNS is deployed
06:16:32 STEP: Checking if deployment is ready
06:16:32 STEP: Checking if kube-dns service is plumbed correctly
06:16:32 STEP: Checking if DNS can resolve
06:16:32 STEP: Checking if pods have identity
06:16:36 STEP: Kubernetes DNS is up and operational
06:16:36 STEP: Validating Cilium Installation
06:16:36 STEP: Performing Cilium status preflight check
06:16:36 STEP: Performing Cilium controllers preflight check
06:16:36 STEP: Checking whether host EP regenerated
06:16:36 STEP: Performing Cilium health check
06:16:43 STEP: Performing Cilium service preflight check
06:16:43 STEP: Performing K8s service preflight check
06:16:43 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-bspz9': Exitcode: 1
Err: exit status 1
Stdout:
Stderr:
Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
command terminated with exit code 1
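For context, cilium-health serves its API over a unix socket, so the preflight error above just means the socket had not been created yet while the agent was still starting. The sketch below illustrates that access pattern only; it is not Cilium's actual generated client, and the placeholder URL host is an assumption.

```go
package main

import (
	"context"
	"fmt"
	"net"
	"net/http"
)

func main() {
	// Socket created by cilium-health once it is up.
	const sock = "/var/run/cilium/health.sock"

	client := &http.Client{
		Transport: &http.Transport{
			// Every request is dialed to the socket path, ignoring the URL
			// host. If the socket does not exist yet, this dial is what fails
			// with "connect: no such file or directory" and aborts the
			// status/probe request.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}

	// The real client puts the socket path into the URL host, which is why the
	// log shows it percent-escaped as http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/...
	// A placeholder host is used here since the dialer ignores it anyway.
	req, err := http.NewRequestWithContext(context.Background(), http.MethodPut,
		"http://cilium-health/v1beta/status/probe", nil)
	if err != nil {
		panic(err)
	}
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed:", err) // e.g. dial unix ...: no such file or directory
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe returned:", resp.Status)
}
```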
06:16:43 STEP: Performing Cilium controllers preflight check
06:16:43 STEP: Performing Cilium health check
06:16:43 STEP: Performing Cilium status preflight check
06:16:43 STEP: Checking whether host EP regenerated
06:16:50 STEP: Performing Cilium service preflight check
06:16:50 STEP: Performing K8s service preflight check
06:16:50 STEP: Performing Cilium controllers preflight check
06:16:50 STEP: Performing Cilium status preflight check
06:16:50 STEP: Performing Cilium health check
06:16:50 STEP: Checking whether host EP regenerated
06:16:57 STEP: Performing Cilium service preflight check
06:16:57 STEP: Performing K8s service preflight check
06:16:57 STEP: Performing Cilium controllers preflight check
06:16:57 STEP: Checking whether host EP regenerated
06:16:57 STEP: Performing Cilium status preflight check
06:16:57 STEP: Performing Cilium health check
06:17:04 STEP: Performing Cilium service preflight check
06:17:04 STEP: Performing K8s service preflight check
06:17:11 STEP: Waiting for cilium-operator to be ready
06:17:11 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
06:17:11 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
06:17:11 STEP: Making sure all endpoints are in ready state
06:17:14 STEP: Launching cilium monitor on "cilium-bspz9"
06:17:14 STEP: Creating namespace 202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito
06:17:14 STEP: Deploying demo_ds.yaml in namespace 202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito
06:17:14 STEP: Applying policy /home/jenkins/workspace/cilium-master-k8s-1.16-kernel-4.19/src/github.com/cilium/cilium/test/k8s/manifests/l3-policy-demo.yaml
06:17:22 STEP: Waiting for 4m0s for 5 pods of deployment demo_ds.yaml to become ready
06:17:22 STEP: WaitforNPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="")
06:17:26 STEP: WaitforNPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="") => <nil>
06:17:26 STEP: Checking pod connectivity between nodes
06:17:26 STEP: WaitforPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDSClient")
06:17:26 STEP: WaitforPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDSClient") => <nil>
06:17:26 STEP: WaitforPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDS")
06:17:26 STEP: WaitforPods(namespace="202304120617k8sdatapathconfigmonitoraggregationchecksthatmonito", filter="-l zgroup=testDS") => <nil>
06:17:37 STEP: Checking the set of TCP notifications received matches expectations
06:17:37 STEP: Looking for TCP notifications using the ephemeral port "46090"
Could not locate final FIN notification in monitor log: egressTCPMatches [[84 67 80 9 123 67 111 110 116 101 110 116 115 61 91 46 46 51 50 46 46 93 32 80 97 121 108 111 97 100 61 91 93 32 83 114
...
56 49 49 53 54 53 32 68 97 116 97 79 102 102 115 101 116 61 56 32 70 73 78 61 116 114 117 101]]
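The arrays above are Go byte slices printed with %v, so each number is one ASCII character of a monitor notification line. A quick way to read them back (only the first bytes of the dump are shown here):

```go
package main

import "fmt"

func main() {
	// First bytes of the egressTCPMatches dump: 84 67 80 9 123 67 111 110 ...
	prefix := []byte{84, 67, 80, 9, 123, 67, 111, 110, 116, 101, 110, 116, 115, 61}
	// Converting back to a string recovers the readable monitor line prefix.
	fmt.Printf("%q\n", prefix) // prints "TCP\t{Contents="
}
```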
FAIL: Timed out after 240.000s.
monitor aggregation did not result in correct number of TCP notifications
Expected
<bool>: false
to be true
=== Test Finished at 2023-04-12T06:21:37Z====
Standard Error
Resources
- Jenkins URL: https://jenkins.cilium.io/job/cilium-master-k8s-1.16-kernel-4.19/4451/testReport/junit/Suite-k8s-1/16/K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_flags_send_notifications/
- ZIP file(s): 24b57202_K8sDatapathConfig_MonitorAggregation_Checks_that_monitor_aggregation_flags_send_notifications.zip
Anything else?
No response