Closed as not planned
Labels
ci/flake: This is a known failure that occurs in the tree. Please investigate me!
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
Description
Test Name
K8sPolicyTest Basic Test checks all kind of Kubernetes policies
Failure Output
FAIL: "app2-58757b7dd5-cnr7k" cannot curl clusterIP "10.102.69.175"
Stacktrace
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:527
"app2-58757b7dd5-cnr7k" cannot curl clusterIP "10.102.69.175"
Expected command: kubectl exec -n 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app2-58757b7dd5-cnr7k -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.102.69.175/public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001164'
Stderr:
command terminated with exit code 28
/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test/k8s/net_policies.go:274
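For context, curl exit code 28 is CURLE_OPERATION_TIMEDOUT: the request to the ClusterIP never completed within the configured timeouts (--connect-timeout 5, --max-time 20). A minimal sketch of rerunning the same check by hand, using this run's namespace, pod, and ClusterIP (all of which will differ on other runs):

# Assumes the test namespace and pod from this run still exist; exit code 28 again would
# indicate the connection to the ClusterIP is timing out rather than being refused.
kubectl exec -n 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app2-58757b7dd5-cnr7k -- \
  curl --path-as-is -s --fail --connect-timeout 5 --max-time 20 http://10.102.69.175/public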
Standard Output
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
⚠️ Number of "level=warning" in logs: 6
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Waiting for k8s node information
Unable to get node resource
Key allocation attempt failed
Cilium pods: [cilium-h6rxc cilium-mp8mm]
Netpols loaded:
CiliumNetworkPolicies loaded: 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli::l3-l4-policy
Endpoint Policy Enforcement:
Pod Ingress Egress
grafana-5747bcc8f9-mp7cw false false
prometheus-655fb888d7-2qxvk false false
coredns-69b675786c-95m72 false false
app1-7469cfcb66-722kb false false
app1-7469cfcb66-htt9b false false
app2-58757b7dd5-cnr7k false false
app3-5d69599cdd-qg8ms false false
Cilium agent 'cilium-h6rxc': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 27 Failed 0
Cilium agent 'cilium-mp8mm': Status: Ok Health: Ok Nodes "" ContainerRuntime: Kubernetes: Ok KVstore: Ok Controllers: Total 39 Failed 0
Standard Error
13:39:11 STEP: Running BeforeAll block for EntireTestsuite
13:39:11 STEP: Starting tests: command line parameters: {Reprovision:false HoldEnvironment:false PassCLIEnvironment:true SSHConfig: ShowCommands:false TestScope: SkipLogGathering:false CiliumImage:quay.io/cilium/cilium-ci CiliumTag:39bbeff4891dc0ad8d1f38d35c68aa7d880da160 CiliumOperatorImage:quay.io/cilium/operator CiliumOperatorTag:39bbeff4891dc0ad8d1f38d35c68aa7d880da160 CiliumOperatorSuffix:-ci HubbleRelayImage:quay.io/cilium/hubble-relay-ci HubbleRelayTag:39bbeff4891dc0ad8d1f38d35c68aa7d880da160 ProvisionK8s:true Timeout:2h50m0s Kubeconfig:/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test/vagrant-kubeconfig KubectlPath:/tmp/kubectl RegistryCredentials: Multinode:true RunQuarantined:false Help:false} environment variables: [JENKINS_HOME=/var/jenkins_home ghprbSourceBranch=yutaro/v3-backend-map-downgrade ghprbTriggerAuthorEmail=yhayakawa3720@gmail.com VM_MEMORY=8192 MAIL=/var/mail/root SSH_CLIENT=54.148.123.155 59198 22 ghprbPullAuthorEmail=yhayakawa3720@gmail.com USER=root PROJ_PATH=src/github.com/cilium/cilium RUN_CHANGES_DISPLAY_URL=https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/346/display/redirect?page=changes ghprbPullDescription=GitHub pull request #22416 of commit 39bbeff4891dc0ad8d1f38d35c68aa7d880da160, no merge conflicts. NETNEXT=0 ghprbActualCommit=39bbeff4891dc0ad8d1f38d35c68aa7d880da160 SHLVL=1 CILIUM_TAG=39bbeff4891dc0ad8d1f38d35c68aa7d880da160 NODE_LABELS=baremetal ginkgo nightly node-humane-racer vagrant HUDSON_URL=https://jenkins.cilium.io/ GIT_COMMIT=e95bbc004b0d0f738e6dfa3707eef73b779f20b0 OLDPWD=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9 GINKGO_TIMEOUT=170m HOME=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9 ghprbTriggerAuthorLoginMention=@YutaroHayakawa BUILD_URL=https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/346/ ghprbPullAuthorLoginMention=@YutaroHayakawa HUDSON_COOKIE=119f0c5f-d04f-46fa-ac0c-71ccb66e1ac4 JENKINS_SERVER_COOKIE=durable-6fa3f6b97cd656c9be51ee5f43466676 ghprbGhRepository=cilium/cilium DOCKER_TAG=39bbeff4891dc0ad8d1f38d35c68aa7d880da160 JobKernelVersion=49 DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/0/bus KERNEL=49 CONTAINER_RUNTIME=docker WORKSPACE=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9 ghprbPullLongDescription=This PR implements the downgrading path for v3 backend maps introduced in #21797 \r\n\r\n```release-note\r\nbpf: Implement downgrading path from v3 to v2 backend map\r\n```\r\n K8S_NODES=2 TESTDIR=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test LOGNAME=root NODE_NAME=node-humane-racer ghprbCredentialsId=ciliumbot _=/usr/bin/java HUBBLE_RELAY_IMAGE=quay.io/cilium/hubble-relay-ci STAGE_NAME=BDD-Test-PR GIT_BRANCH=origin/pr/22416/merge EXECUTOR_NUMBER=0 ghprbTriggerAuthorLogin=YutaroHayakawa TERM=xterm XDG_SESSION_ID=5 HOST_FIREWALL=0 CILIUM_OPERATOR_TAG=39bbeff4891dc0ad8d1f38d35c68aa7d880da160 BUILD_DISPLAY_NAME=bpf: Implement downgrading path from v3 to v2 backend map https://github.com/cilium/cilium/pull/22416 #346 ghprbPullAuthorLogin=YutaroHayakawa HUDSON_HOME=/var/jenkins_home ghprbTriggerAuthor=Yutaro Hayakawa JOB_BASE_NAME=Cilium-PR-K8s-1.22-kernel-4.9 PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/go/bin:/root/go/bin sha1=origin/pr/22416/merge KUBECONFIG=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test/vagrant-kubeconfig FOCUS=K8s BUILD_ID=346 XDG_RUNTIME_DIR=/run/user/0 
BUILD_TAG=jenkins-Cilium-PR-K8s-1.22-kernel-4.9-346 RUN_QUARANTINED=false CILIUM_IMAGE=quay.io/cilium/cilium-ci JENKINS_URL=https://jenkins.cilium.io/ LANG=C.UTF-8 ghprbCommentBody=/test-backport-1.12 JOB_URL=https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/ ghprbPullTitle=bpf: Implement downgrading path from v3 to v2 backend map GIT_URL=https://github.com/cilium/cilium ghprbPullLink=https://github.com/cilium/cilium/pull/22416 BUILD_NUMBER=346 JENKINS_NODE_COOKIE=979ed7c8-9468-4a96-9acd-5db1f1322046 SHELL=/bin/bash GOPATH=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9 RUN_DISPLAY_URL=https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/346/display/redirect IMAGE_REGISTRY=quay.io/cilium ghprbAuthorRepoGitUrl=https://github.com/YutaroHayakawa/cilium.git FAILFAST=false HUDSON_SERVER_COOKIE=693c250bfb7e85bf ghprbTargetBranch=v1.12 JOB_DISPLAY_URL=https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/display/redirect K8S_VERSION=1.22 JOB_NAME=Cilium-PR-K8s-1.22-kernel-4.9 PWD=/home/jenkins/workspace/Cilium-PR-K8s-1.22-kernel-4.9/src/github.com/cilium/cilium/test SSH_CONNECTION=54.148.123.155 59198 145.40.77.29 22 ghprbPullId=22416 CILIUM_OPERATOR_IMAGE=quay.io/cilium/operator HUBBLE_RELAY_TAG=39bbeff4891dc0ad8d1f38d35c68aa7d880da160 JobK8sVersion=1.22 VM_CPUS=3 CILIUM_OPERATOR_SUFFIX=-ci]
13:39:11 STEP: Ensuring the namespace kube-system exists
13:39:11 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:39:13 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:39:14 STEP: Preparing cluster
13:39:14 STEP: Labelling nodes
13:39:14 STEP: Cleaning up Cilium components
13:39:14 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest
13:39:14 STEP: Ensuring the namespace kube-system exists
13:39:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
13:39:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
13:39:14 STEP: Installing Cilium
13:39:15 STEP: Waiting for Cilium to become ready
13:39:33 STEP: Restarting unmanaged pods coredns-69b675786c-n6kbh in namespace kube-system
13:39:37 STEP: Validating if Kubernetes DNS is deployed
13:39:37 STEP: Checking if deployment is ready
13:39:37 STEP: Kubernetes DNS is not ready: only 0 of 1 replicas are available
13:39:37 STEP: Restarting Kubernetes DNS (-l k8s-app=kube-dns)
13:39:37 STEP: Waiting for Kubernetes DNS to become operational
13:39:37 STEP: Checking if deployment is ready
13:39:37 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:39:38 STEP: Checking if deployment is ready
13:39:38 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:39:39 STEP: Checking if deployment is ready
13:39:39 STEP: Kubernetes DNS is not ready yet: only 0 of 1 replicas are available
13:39:40 STEP: Checking if deployment is ready
13:39:40 STEP: Checking if kube-dns service is plumbed correctly
13:39:40 STEP: Checking if pods have identity
13:39:40 STEP: Checking if DNS can resolve
13:39:42 STEP: Validating Cilium Installation
13:39:42 STEP: Performing Cilium controllers preflight check
13:39:42 STEP: Performing Cilium health check
13:39:42 STEP: Checking whether host EP regenerated
13:39:42 STEP: Performing Cilium status preflight check
13:39:43 STEP: Performing Cilium service preflight check
13:39:44 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-h6rxc [{0xc000c2acc0 0xc000123968} {0xc000c2ae00 0xc000123980} {0xc000c2af00 0xc000123988} {0xc000c2b000 0xc0001239a0} {0xc000c2b100 0xc0001239a8} {0xc000c2b200 0xc0001239b0}] map[10.105.117.146:443:[192.168.56.12:4244 (6) (2) 192.168.56.11:4244 (6) (1) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable]] 10.110.163.214:9090:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable] 10.0.1.66:9090 (5) (1)] 10.96.0.10:53:[0.0.0.0:0 (2) (0) [ClusterIP, non-routable] 10.0.1.24:53 (2) (2) 10.0.0.207:53 (2) (1)] 10.96.0.10:9153:[10.0.1.24:9153 (3) (2) 10.0.0.207:9153 (3) (1) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable]] 10.96.0.1:443:[192.168.56.11:6443 (1) (1) 0.0.0.0:0 (1) (0) [ClusterIP, non-routable]] 10.98.182.180:3000:[0.0.0.0:0 (4) (0) [ClusterIP, non-routable]]]}: Could not match cilium service backend address 10.0.1.24:53 with k8s endpoint
13:39:44 STEP: Performing Cilium controllers preflight check
13:39:44 STEP: Checking whether host EP regenerated
13:39:44 STEP: Performing Cilium status preflight check
13:39:44 STEP: Performing Cilium health check
13:39:45 STEP: Performing Cilium service preflight check
13:39:46 STEP: Cilium is not ready yet: cilium services are not set up correctly: Error validating Cilium service on pod {cilium-h6rxc [{0xc000216140 0xc000644058} {0xc000216280 0xc000644060} {0xc000216380 0xc000644070} {0xc000216480 0xc000644080} {0xc0002165c0 0xc000644098} {0xc000216740 0xc0006440a0}] map[10.105.117.146:443:[192.168.56.12:4244 (6) (2) 192.168.56.11:4244 (6) (1) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable]] 10.110.163.214:9090:[0.0.0.0:0 (5) (0) [ClusterIP, non-routable] 10.0.1.66:9090 (5) (1)] 10.96.0.10:53:[0.0.0.0:0 (2) (0) [ClusterIP, non-routable] 10.0.1.24:53 (2) (2) 10.0.0.207:53 (2) (1)] 10.96.0.10:9153:[10.0.1.24:9153 (3) (2) 10.0.0.207:9153 (3) (1) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable]] 10.96.0.1:443:[192.168.56.11:6443 (1) (1) 0.0.0.0:0 (1) (0) [ClusterIP, non-routable]] 10.98.182.180:3000:[0.0.0.0:0 (4) (0) [ClusterIP, non-routable]]]}: Could not match cilium service backend address 10.0.1.24:53 with k8s endpoint
13:39:46 STEP: Performing Cilium controllers preflight check
13:39:46 STEP: Performing Cilium status preflight check
13:39:46 STEP: Performing Cilium health check
13:39:46 STEP: Checking whether host EP regenerated
13:39:47 STEP: Performing Cilium service preflight check
13:39:47 STEP: Performing K8s service preflight check
13:39:49 STEP: Waiting for cilium-operator to be ready
13:39:49 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
13:39:49 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
13:39:49 STEP: Running BeforeAll block for EntireTestsuite K8sPolicyTest Basic Test
13:39:49 STEP: Deleting namespace 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
13:39:50 STEP: Creating namespace 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
13:39:50 STEP: WaitforPods(namespace="202212051339k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
13:40:09 STEP: WaitforPods(namespace="202212051339k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
13:40:09 STEP: Running BeforeEach block for EntireTestsuite K8sPolicyTest Basic Test
13:40:11 STEP: WaitforPods(namespace="202212051339k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp")
13:40:11 STEP: WaitforPods(namespace="202212051339k8spolicytestbasictestchecksallkindofkubernetespoli", filter="-l zgroup=testapp") => <nil>
13:40:11 STEP: Testing L3/L4 rules
FAIL: "app2-58757b7dd5-cnr7k" cannot curl clusterIP "10.102.69.175"
Expected command: kubectl exec -n 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app2-58757b7dd5-cnr7k -- curl --path-as-is -s -D /dev/stderr --fail --connect-timeout 5 --max-time 20 http://10.102.69.175/public -w "time-> DNS: '%{time_namelookup}(%{remote_ip})', Connect: '%{time_connect}',Transfer '%{time_starttransfer}', total '%{time_total}'"
To succeed, but it failed:
Exitcode: 28
Err: exit status 28
Stdout:
time-> DNS: '0.000016()', Connect: '0.000000',Transfer '0.000000', total '5.001164'
Stderr:
command terminated with exit code 28
=== Test Finished at 2022-12-05T13:40:22Z====
13:40:22 STEP: Running JustAfterEach block for EntireTestsuite K8sPolicyTest
===================== TEST FAILED =====================
13:40:22 STEP: Running AfterFailed block for EntireTestsuite K8sPolicyTest
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0
Stdout:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app1-7469cfcb66-722kb 2/2 Running 0 33s 10.0.0.7 k8s1 <none> <none>
202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app1-7469cfcb66-htt9b 2/2 Running 0 33s 10.0.0.108 k8s1 <none> <none>
202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app2-58757b7dd5-cnr7k 1/1 Running 0 33s 10.0.0.244 k8s1 <none> <none>
202212051339k8spolicytestbasictestchecksallkindofkubernetespoli app3-5d69599cdd-qg8ms 1/1 Running 0 33s 10.0.0.185 k8s1 <none> <none>
cilium-monitoring grafana-5747bcc8f9-mp7cw 1/1 Running 0 69s 10.0.1.139 k8s2 <none> <none>
cilium-monitoring prometheus-655fb888d7-2qxvk 1/1 Running 0 69s 10.0.1.66 k8s2 <none> <none>
kube-system cilium-h6rxc 1/1 Running 0 68s 192.168.56.12 k8s2 <none> <none>
kube-system cilium-mp8mm 1/1 Running 0 68s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-7c9869bc65-8fgz6 1/1 Running 0 68s 192.168.56.11 k8s1 <none> <none>
kube-system cilium-operator-7c9869bc65-z72xd 1/1 Running 0 68s 192.168.56.12 k8s2 <none> <none>
kube-system coredns-69b675786c-95m72 1/1 Running 0 46s 10.0.0.207 k8s1 <none> <none>
kube-system etcd-k8s1 1/1 Running 0 5m1s 192.168.56.11 k8s1 <none> <none>
kube-system kube-apiserver-k8s1 1/1 Running 0 5m 192.168.56.11 k8s1 <none> <none>
kube-system kube-controller-manager-k8s1 0/1 CrashLoopBackOff 2 (20s ago) 4m59s 192.168.56.11 k8s1 <none> <none>
kube-system kube-proxy-fsdhc 1/1 Running 0 113s 192.168.56.12 k8s2 <none> <none>
kube-system kube-proxy-zj7n6 1/1 Running 0 2m34s 192.168.56.11 k8s1 <none> <none>
kube-system kube-scheduler-k8s1 0/1 CrashLoopBackOff 1 (20s ago) 5m1s 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-ttnpf 1/1 Running 0 72s 192.168.56.11 k8s1 <none> <none>
kube-system log-gatherer-zntgw 1/1 Running 0 72s 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-t9kqx 1/1 Running 0 106s 192.168.56.12 k8s2 <none> <none>
kube-system registry-adder-tpzkx 1/1 Running 0 106s 192.168.56.11 k8s1 <none> <none>
Stderr:
Fetching command output from pods [cilium-h6rxc cilium-mp8mm]
cmd: kubectl exec -n kube-system cilium-h6rxc -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443 (active)
2 10.96.0.10:53 ClusterIP 1 => 10.0.0.207:53 (active)
3 10.96.0.10:9153 ClusterIP 1 => 10.0.0.207:9153 (active)
4 10.98.182.180:3000 ClusterIP 1 => 10.0.1.139:3000 (active)
5 10.110.163.214:9090 ClusterIP 1 => 10.0.1.66:9090 (active)
6 10.105.117.146:443 ClusterIP 1 => 192.168.56.11:4244 (active)
2 => 192.168.56.12:4244 (active)
7 10.102.69.175:80 ClusterIP
8 10.102.69.175:69 ClusterIP
Stderr:
cmd: kubectl exec -n kube-system cilium-h6rxc -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
78 Disabled Disabled 4 reserved:health fd02::117 10.0.1.149 ready
513 Disabled Disabled 11244 k8s:app=prometheus fd02::187 10.0.1.66 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s
k8s:io.kubernetes.pod.namespace=cilium-monitoring
670 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s2 ready
reserved:host
1636 Disabled Disabled 49913 k8s:app=grafana fd02::19d 10.0.1.139 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=cilium-monitoring
Stderr:
cmd: kubectl exec -n kube-system cilium-mp8mm -c cilium-agent -- cilium service list
Exitcode: 0
Stdout:
ID Frontend Service Type Backend
1 10.96.0.10:53 ClusterIP 1 => 10.0.0.207:53 (active)
2 10.96.0.10:9153 ClusterIP 1 => 10.0.0.207:9153 (active)
3 10.98.182.180:3000 ClusterIP 1 => 10.0.1.139:3000 (active)
4 10.110.163.214:9090 ClusterIP 1 => 10.0.1.66:9090 (active)
5 10.105.117.146:443 ClusterIP 1 => 192.168.56.11:4244 (active)
2 => 192.168.56.12:4244 (active)
6 10.96.0.1:443 ClusterIP 1 => 192.168.56.11:6443 (active)
7 10.102.69.175:80 ClusterIP
8 10.102.69.175:69 ClusterIP
Stderr:
cmd: kubectl exec -n kube-system cilium-mp8mm -c cilium-agent -- cilium endpoint list
Exitcode: 0
Stdout:
ENDPOINT POLICY (ingress) POLICY (egress) IDENTITY LABELS (source:key[=value]) IPv6 IPv4 STATUS
ENFORCEMENT ENFORCEMENT
85 Disabled Disabled 31010 k8s:id=app3 fd02::96 10.0.0.185 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=default
k8s:io.kubernetes.pod.namespace=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
493 Enabled Disabled 14657 k8s:id=app1 fd02::b7 10.0.0.108 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
617 Enabled Disabled 14657 k8s:id=app1 fd02::25 10.0.0.7 ready
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app1-account
k8s:io.kubernetes.pod.namespace=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
670 Disabled Disabled 21978 k8s:appSecond=true fd02::40 10.0.0.244 ready
k8s:id=app2
k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=app2-account
k8s:io.kubernetes.pod.namespace=202212051339k8spolicytestbasictestchecksallkindofkubernetespoli
k8s:zgroup=testapp
814 Disabled Disabled 4 reserved:health fd02::56 10.0.0.144 ready
948 Disabled Disabled 1 k8s:cilium.io/ci-node=k8s1 ready
k8s:node-role.kubernetes.io/control-plane
k8s:node-role.kubernetes.io/master
k8s:node.kubernetes.io/exclude-from-external-load-balancers
reserved:host
1151 Disabled Disabled 29250 k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system fd02::3b 10.0.0.207 ready
k8s:io.cilium.k8s.policy.cluster=default
k8s:io.cilium.k8s.policy.serviceaccount=coredns
k8s:io.kubernetes.pod.namespace=kube-system
k8s:k8s-app=kube-dns
Stderr:
===================== Exiting AfterFailed =====================
13:40:30 STEP: Running AfterEach for block EntireTestsuite K8sPolicyTest Basic Test
13:40:30 STEP: Running AfterEach for block EntireTestsuite
[[ATTACHMENT|c3d7c0a7_K8sPolicyTest_Basic_Test_checks_all_kind_of_Kubernetes_policies.zip]]
ZIP Links:
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//346/artifact/c3d7c0a7_K8sPolicyTest_Basic_Test_checks_all_kind_of_Kubernetes_policies.zip
https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9//346/artifact/test_results_Cilium-PR-K8s-1.22-kernel-4.9_346_BDD-Test-PR.zip
Jenkins URL: https://jenkins.cilium.io/job/Cilium-PR-K8s-1.22-kernel-4.9/346/
If this is a duplicate of an existing flake, comment 'Duplicate of #<issue-number>' and close this issue.
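Triage note: in the cilium service list output above, the failing ClusterIP 10.102.69.175 (service IDs 7 and 8) shows no backends on either agent, which would be consistent with the curl connect timeout; the pod listing also shows kube-controller-manager-k8s1 and kube-scheduler-k8s1 in CrashLoopBackOff on k8s1, which may be related. A hedged sketch of commands that could help confirm whether Kubernetes ever published endpoints for the test service (namespace and agent pod names are from this run and will differ elsewhere):

# Check whether the test services have endpoints according to Kubernetes
kubectl get svc,endpoints -n 202212051339k8spolicytestbasictestchecksallkindofkubernetespoli -o wide
# Compare with what the agent on the node running the app pods has programmed
kubectl exec -n kube-system cilium-mp8mm -c cilium-agent -- cilium service list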