
CI: K8sConformance Portmap Chaining: connectivity-check pods are not ready after timeout #15791


Description

@pchaigno

Happened in https://jenkins.cilium.io/job/Cilium-PR-K8s-1.21-kernel-4.9/199/.
dc252eaf_K8sConformance_Portmap_Chaining_Check_connectivity-check_compliance_with_portmap_chaining.zip
fb9678c7_K8sConformance_Portmap_Chaining_Check_one_node_connectivity-check_compliance_with_portmap_chaining.zip

This is also happening frequently in other PRs: https://datastudio.google.com/s/kkQLXLfpQqM.
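
For context, the failure is the test giving up after waiting 4 minutes for every pod in the default namespace (empty filter) to report Ready. Below is a minimal sketch of that kind of readiness wait, written directly against client-go rather than the actual test helper (WaitforPods), and assuming a kubeconfig at the default location; it is only meant to illustrate the loop that produces the "4m0s timeout expired" error above.

// Minimal sketch, NOT the Cilium test helper: poll until all pods in
// "default" are Ready, or give up after 4 minutes like the failing test.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	// Assumption for this sketch: kubeconfig at the default ~/.kube/config path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 5s and give up after 4 minutes, mirroring the timeout in the log.
	err = wait.PollImmediate(5*time.Second, 4*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, err
		}
		for _, p := range pods.Items {
			if !podReady(p) {
				return false, nil // at least one pod not Ready yet, keep waiting
			}
		}
		return true, nil
	})
	if err != nil {
		fmt.Printf("connectivity-check pods are not ready after timeout: %v\n", err)
	}
}

In the run below, a loop like this can never succeed: echo-b-host is in CrashLoopBackOff and the pod-to-b-* clients keep failing their readiness probes, so the 4m0s deadline expires.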

Stacktrace

/home/jenkins/workspace/Cilium-PR-K8s-1.21-kernel-4.9/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:518
connectivity-check pods are not ready after timeout
Expected
    <*errors.errorString | 0xc00109f020>: {
        s: "timed out waiting for pods with filter  to be ready: 4m0s timeout expired",
    }
to be nil
/usr/local/go/src/reflect/value.go:476

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Number of "context deadline exceeded" in logs: 4
Number of "level=error" in logs: 0
⚠️  Number of "level=warning" in logs: 67
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
Top 3 errors/warnings:
Failed to remove route
Unable to update ipcache map entry on pod add
Not processing API request. Wait duration exceeds maximum
Cilium pods: [cilium-6m974 cilium-qf27p]
Netpols loaded: 
CiliumNetworkPolicies loaded: default::echo-c default::pod-to-a-allowed-cnp default::pod-to-a-denied-cnp default::pod-to-a-intra-node-proxy-egress-policy default::pod-to-a-multi-node-proxy-egress-policy default::pod-to-c-intra-node-proxy-to-proxy-policy default::pod-to-c-multi-node-proxy-to-proxy-policy default::pod-to-external-fqdn-allow-google-cnp 
Endpoint Policy Enforcement:
Pod                                                          Ingress   Egress
grafana-5747bcc8f9-m6fsh                                               
echo-a-c4cdff77c-bjg2x                                                 
pod-to-a-denied-cnp-7bfb7d69b8-j8fh2                                   
pod-to-b-multi-node-clusterip-7984ccf8c6-x4gsp                         
pod-to-b-multi-node-nodeport-85f9fb6b7d-pfmrw                          
pod-to-external-1111-c98db84d4-jgss9                                   
prometheus-655fb888d7-mtpn5                                            
echo-c-55ffd4dc66-r8gct                                                
pod-to-a-679f686cb-6slkk                                               
pod-to-a-multi-node-proxy-egress-policy-5f7c9fd644-p6pl2               
pod-to-b-multi-node-hostport-68d86fc5f6-z6xhg                          
coredns-755cd654d4-ngvx5                                               
pod-to-external-fqdn-allow-google-cnp-5f55d8886b-x54xd                 
echo-b-598c78b9fc-5hr5k                                                
pod-to-b-intra-node-hostport-55d6d5988b-4q454                          
pod-to-b-intra-node-nodeport-847957bcb4-wgr9s                          
pod-to-b-multi-node-headless-7fb5f5c84b-85t9h                          
pod-to-c-intra-node-proxy-ingress-policy-54f9fb4cbc-tczkk              
pod-to-c-intra-node-proxy-to-proxy-policy-746b66cd44-bhfxl             
pod-to-a-allowed-cnp-54755cc9c6-zxlkg                                  
pod-to-a-intra-node-proxy-egress-policy-7b584c67f-lxjwb                
pod-to-c-multi-node-proxy-ingress-policy-66b67564fb-q7ggf              
pod-to-c-multi-node-proxy-to-proxy-policy-8578c699d7-bnw52             
Cilium agent 'cilium-6m974': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 74 Failed 0
Cilium agent 'cilium-qf27p': Status: Ok  Health: Ok Nodes "" ContinerRuntime:  Kubernetes: Ok KVstore: Ok Controllers: Total 75 Failed 0

Standard Error

18:32:14 STEP: Running BeforeAll block for EntireTestsuite K8sConformance Portmap Chaining
18:32:14 STEP: Ensuring the namespace kube-system exists
18:32:14 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs")
18:32:15 STEP: WaitforPods(namespace="kube-system", filter="-l k8s-app=cilium-test-logs") => <nil>
18:32:15 STEP: Installing Cilium
18:32:15 STEP: Waiting for Cilium to become ready
18:32:34 STEP: Validating if Kubernetes DNS is deployed
18:32:34 STEP: Checking if deployment is ready
18:32:34 STEP: Checking if kube-dns service is plumbed correctly
18:32:34 STEP: Checking if pods have identity
18:32:34 STEP: Checking if DNS can resolve
18:32:38 STEP: Kubernetes DNS is up and operational
18:32:38 STEP: Validating Cilium Installation
18:32:38 STEP: Performing Cilium health check
18:32:38 STEP: Performing Cilium controllers preflight check
18:32:38 STEP: Performing Cilium status preflight check
18:32:41 STEP: Performing Cilium service preflight check
18:32:41 STEP: Performing K8s service preflight check
18:32:41 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-6m974': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

18:32:43 STEP: Performing Cilium controllers preflight check
18:32:43 STEP: Performing Cilium health check
18:32:43 STEP: Performing Cilium status preflight check
18:32:46 STEP: Performing Cilium service preflight check
18:32:46 STEP: Performing K8s service preflight check
18:32:48 STEP: Performing Cilium status preflight check
18:32:48 STEP: Performing Cilium health check
18:32:48 STEP: Performing Cilium controllers preflight check
18:32:51 STEP: Performing Cilium service preflight check
18:32:51 STEP: Performing K8s service preflight check
18:32:53 STEP: Performing Cilium controllers preflight check
18:32:53 STEP: Performing Cilium status preflight check
18:32:53 STEP: Performing Cilium health check
18:32:55 STEP: Performing Cilium service preflight check
18:32:55 STEP: Performing K8s service preflight check
18:32:58 STEP: Performing Cilium controllers preflight check
18:32:58 STEP: Performing Cilium health check
18:32:58 STEP: Performing Cilium status preflight check
18:33:00 STEP: Performing Cilium service preflight check
18:33:00 STEP: Performing K8s service preflight check
18:33:03 STEP: Performing Cilium status preflight check
18:33:03 STEP: Performing Cilium health check
18:33:03 STEP: Performing Cilium controllers preflight check
18:33:06 STEP: Performing Cilium service preflight check
18:33:06 STEP: Performing K8s service preflight check
18:33:08 STEP: Performing Cilium status preflight check
18:33:08 STEP: Performing Cilium controllers preflight check
18:33:08 STEP: Performing Cilium health check
18:33:11 STEP: Performing Cilium service preflight check
18:33:11 STEP: Performing K8s service preflight check
18:33:11 STEP: Cilium is not ready yet: connectivity health is failing: Cluster connectivity is unhealthy on 'cilium-6m974': Exitcode: 1 
Err: exit status 1
Stdout:
 	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init)
	 Error: Cannot get status/probe: Put "http://%2Fvar%2Frun%2Fcilium%2Fhealth.sock/v1beta/status/probe": dial unix /var/run/cilium/health.sock: connect: no such file or directory
	 
	 command terminated with exit code 1
	 

18:33:13 STEP: Performing Cilium controllers preflight check
18:33:13 STEP: Performing Cilium status preflight check
18:33:13 STEP: Performing Cilium health check
18:33:16 STEP: Performing Cilium service preflight check
18:33:16 STEP: Performing K8s service preflight check
18:33:18 STEP: Performing Cilium controllers preflight check
18:33:18 STEP: Performing Cilium health check
18:33:18 STEP: Performing Cilium status preflight check
18:33:22 STEP: Performing Cilium service preflight check
18:33:22 STEP: Performing K8s service preflight check
18:33:23 STEP: Performing Cilium controllers preflight check
18:33:23 STEP: Performing Cilium status preflight check
18:33:23 STEP: Performing Cilium health check
18:33:25 STEP: Performing Cilium service preflight check
18:33:25 STEP: Performing K8s service preflight check
18:33:28 STEP: Performing Cilium controllers preflight check
18:33:28 STEP: Performing Cilium health check
18:33:28 STEP: Performing Cilium status preflight check
18:33:30 STEP: Performing Cilium service preflight check
18:33:30 STEP: Performing K8s service preflight check
18:33:33 STEP: Performing Cilium controllers preflight check
18:33:33 STEP: Performing Cilium status preflight check
18:33:33 STEP: Performing Cilium health check
18:33:36 STEP: Performing Cilium service preflight check
18:33:36 STEP: Performing K8s service preflight check
18:33:38 STEP: Performing Cilium controllers preflight check
18:33:38 STEP: Performing Cilium status preflight check
18:33:38 STEP: Performing Cilium health check
18:33:40 STEP: Performing Cilium service preflight check
18:33:40 STEP: Performing K8s service preflight check
18:33:41 STEP: Waiting for cilium-operator to be ready
18:33:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator")
18:33:41 STEP: WaitforPods(namespace="kube-system", filter="-l name=cilium-operator") => <nil>
18:33:41 STEP: Making sure all endpoints are in ready state
18:33:43 STEP: WaitforPods(namespace="default", filter="")
18:37:43 STEP: WaitforPods(namespace="default", filter="") => timed out waiting for pods with filter  to be ready: 4m0s timeout expired
18:37:44 STEP: cmd: kubectl describe pods -n default 
Exitcode: 0 
Stdout:
 	 Name:         echo-a-c4cdff77c-bjg2x
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=echo-a
	               pod-template-hash=c4cdff77c
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.116
	 IPs:
	   IP:           10.0.1.116
	   IP:           fd02::1d2
	 Controlled By:  ReplicaSet/echo-a-c4cdff77c
	 Containers:
	   echo-a-container:
	     Container ID:   docker://e59f67d69f0342c8f18c539e292ef6bd618d307eec6d202db514bde4639f6022
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      0/TCP
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:50 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k9ps7 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-k9ps7:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/echo-a-c4cdff77c-bjg2x to k8s2
	   Normal  Pulled     3m53s  kubelet            Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m53s  kubelet            Created container echo-a-container
	   Normal  Started    3m53s  kubelet            Started container echo-a-container
	 
	 
	 Name:         echo-b-598c78b9fc-5hr5k
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=echo-b
	               pod-template-hash=598c78b9fc
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.51
	 IPs:
	   IP:           10.0.1.51
	   IP:           fd02::155
	 Controlled By:  ReplicaSet/echo-b-598c78b9fc
	 Containers:
	   echo-b-container:
	     Container ID:   docker://c6b478e9e7795cd387a0155dd76a447e9827219dddf6f3526a91a0b25ae30870
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      40000/TCP
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:47 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rvrr6 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-rvrr6:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  4m1s   default-scheduler  Successfully assigned default/echo-b-598c78b9fc-5hr5k to k8s2
	   Normal   Pulled     3m57s  kubelet            Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal   Created    3m57s  kubelet            Created container echo-b-container
	   Normal   Started    3m56s  kubelet            Started container echo-b-container
	   Warning  Unhealthy  3m56s  kubelet            Readiness probe failed: curl: (7) Failed to connect to localhost port 8080: Connection refused
	 
	 
	 Name:         echo-b-host-5556f9488-7m6vb
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=echo-b-host
	               pod-template-hash=5556f9488
	 Annotations:  <none>
	 Status:       Running
	 IP:           192.168.36.12
	 IPs:
	   IP:           192.168.36.12
	 Controlled By:  ReplicaSet/echo-b-host-5556f9488
	 Containers:
	   echo-b-host-container:
	     Container ID:   docker://741dbd30b4ffb87fde686e36612707fd411f2b09ab441f53bb67a99b765dd0e7
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           <none>
	     Host Port:      <none>
	     State:          Waiting
	       Reason:       CrashLoopBackOff
	     Last State:     Terminated
	       Reason:       Error
	       Exit Code:    1
	       Started:      Wed, 14 Apr 2021 18:36:46 +0000
	       Finished:     Wed, 14 Apr 2021 18:36:46 +0000
	     Ready:          False
	     Restart Count:  5
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41000] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41000] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  41000
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5g4l8 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-5g4l8:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                     From               Message
	   ----     ------     ----                    ----               -------
	   Normal   Scheduled  4m1s                    default-scheduler  Successfully assigned default/echo-b-host-5556f9488-7m6vb to k8s2
	   Warning  Unhealthy  3m38s                   kubelet            Readiness probe failed: OCI runtime exec failed: exec failed: container_linux.go:349: starting container process caused "read init-p: connection reset by peer": unknown
	   Normal   Pulled     3m16s (x4 over 3m59s)   kubelet            Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal   Created    3m16s (x4 over 3m59s)   kubelet            Created container echo-b-host-container
	   Normal   Started    3m15s (x4 over 3m59s)   kubelet            Started container echo-b-host-container
	   Warning  Unhealthy  3m15s (x2 over 3m56s)   kubelet            Readiness probe failed: curl: (7) Failed to connect to localhost port 41000: Connection refused
	   Warning  BackOff    2m58s (x10 over 3m55s)  kubelet            Back-off restarting failed container
	 
	 
	 Name:         echo-c-55ffd4dc66-r8gct
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=echo-c
	               pod-template-hash=55ffd4dc66
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.46
	 IPs:
	   IP:           10.0.1.46
	   IP:           fd02::198
	 Controlled By:  ReplicaSet/echo-c-55ffd4dc66
	 Containers:
	   echo-c-container:
	     Container ID:   docker://85d1e6e525d24afaad09c99fe186ec135dad265304c011726fe94e1408f5301c
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           8080/TCP
	     Host Port:      40001/TCP
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:52 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:8080] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  8080
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2ghf5 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-2ghf5:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/echo-c-55ffd4dc66-r8gct to k8s2
	   Normal  Pulled     3m52s  kubelet            Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m51s  kubelet            Created container echo-c-container
	   Normal  Started    3m51s  kubelet            Started container echo-c-container
	 
	 
	 Name:         echo-c-host-78565c4bcd-gchvt
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=echo-c-host
	               pod-template-hash=78565c4bcd
	 Annotations:  <none>
	 Status:       Running
	 IP:           192.168.36.12
	 IPs:
	   IP:           192.168.36.12
	 Controlled By:  ReplicaSet/echo-c-host-78565c4bcd
	 Containers:
	   echo-c-host-container:
	     Container ID:   docker://2c629cd3dbcca3f50cc8f08baaa5b91a8bfc2c9c037a295eb3a3d4f86565e147
	     Image:          docker.io/cilium/json-mock:1.2
	     Image ID:       docker-pullable://cilium/json-mock@sha256:941e03da57551dd4a71f351b35650c152a1192ac1df717e43ee58b5aa2b8e241
	     Port:           <none>
	     Host Port:      <none>
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:45 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41001] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null localhost:41001] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:
	       PORT:  41001
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7pxm2 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-7pxm2:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m2s   default-scheduler  Successfully assigned default/echo-c-host-78565c4bcd-gchvt to k8s2
	   Normal  Pulled     3m59s  kubelet            Container image "docker.io/cilium/json-mock:1.2" already present on machine
	   Normal  Created    3m59s  kubelet            Created container echo-c-host-container
	   Normal  Started    3m59s  kubelet            Started container echo-c-host-container
	 
	 
	 Name:         host-to-b-multi-node-clusterip-7f8c9699d6-jnbzm
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:45 +0000
	 Labels:       name=host-to-b-multi-node-clusterip
	               pod-template-hash=7f8c9699d6
	 Annotations:  <none>
	 Status:       Running
	 IP:           192.168.36.11
	 IPs:
	   IP:           192.168.36.11
	 Controlled By:  ReplicaSet/host-to-b-multi-node-clusterip-7f8c9699d6
	 Containers:
	   host-to-b-multi-node-clusterip-container:
	     Container ID:  docker://628717622b9c2aeaadb806b0a17a2da604f091224cdc7469d71ac3e6cc4f9c3f
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:48 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6c2xr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-6c2xr:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/host-to-b-multi-node-clusterip-7f8c9699d6-jnbzm to k8s1
	   Normal   Pulled     3m57s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m57s  kubelet            Created container host-to-b-multi-node-clusterip-container
	   Normal   Started    3m56s  kubelet            Started container host-to-b-multi-node-clusterip-container
	   Warning  Unhealthy  3m54s  kubelet            Readiness probe failed: curl: (7) Failed to connect to echo-b port 8080: Connection refused
	 
	 
	 Name:         host-to-b-multi-node-headless-7df4d6fdb-2mzrq
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:45 +0000
	 Labels:       name=host-to-b-multi-node-headless
	               pod-template-hash=7df4d6fdb
	 Annotations:  <none>
	 Status:       Running
	 IP:           192.168.36.11
	 IPs:
	   IP:           192.168.36.11
	 Controlled By:  ReplicaSet/host-to-b-multi-node-headless-7df4d6fdb
	 Containers:
	   host-to-b-multi-node-headless-container:
	     Container ID:  docker://aa5548decade46aa17095af22dc31cd5d4403bd39a0073bba49322da817c544f
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:48 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-headless:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-headless:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-p2glv (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-p2glv:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  3m59s  default-scheduler  Successfully assigned default/host-to-b-multi-node-headless-7df4d6fdb-2mzrq to k8s1
	   Normal   Pulled     3m57s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m57s  kubelet            Created container host-to-b-multi-node-headless-container
	   Normal   Started    3m56s  kubelet            Started container host-to-b-multi-node-headless-container
	   Warning  Unhealthy  3m55s  kubelet            Readiness probe failed: curl: (6) Could not resolve host: echo-b-headless
	 
	 
	 Name:         pod-to-a-679f686cb-6slkk
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:42 +0000
	 Labels:       name=pod-to-a
	               pod-template-hash=679f686cb
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.4
	 IPs:
	   IP:           10.0.1.4
	   IP:           fd02::19e
	 Controlled By:  ReplicaSet/pod-to-a-679f686cb
	 Containers:
	   pod-to-a-container:
	     Container ID:  docker://ee35fcca7c1e79da84aeb0bf2a91d4645decdae0dcefe1705f354e1543341ea7
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:51 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-272mp (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-272mp:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m2s   default-scheduler  Successfully assigned default/pod-to-a-679f686cb-6slkk to k8s2
	   Normal  Pulled     3m53s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m53s  kubelet            Created container pod-to-a-container
	   Normal  Started    3m53s  kubelet            Started container pod-to-a-container
	 
	 
	 Name:         pod-to-a-allowed-cnp-54755cc9c6-zxlkg
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:43 +0000
	 Labels:       name=pod-to-a-allowed-cnp
	               pod-template-hash=54755cc9c6
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.89
	 IPs:
	   IP:           10.0.1.89
	   IP:           fd02::125
	 Controlled By:  ReplicaSet/pod-to-a-allowed-cnp-54755cc9c6
	 Containers:
	   pod-to-a-allowed-cnp-container:
	     Container ID:  docker://2ce954b9b01b083058f6c483ff9f64e31633c2ad4433458bc3fc817bd4a5d30d
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:57 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-hs8tr (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-hs8tr:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/pod-to-a-allowed-cnp-54755cc9c6-zxlkg to k8s2
	   Normal  Pulled     3m48s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m48s  kubelet            Created container pod-to-a-allowed-cnp-container
	   Normal  Started    3m47s  kubelet            Started container pod-to-a-allowed-cnp-container
	 
	 
	 Name:         pod-to-a-denied-cnp-7bfb7d69b8-j8fh2
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:43 +0000
	 Labels:       name=pod-to-a-denied-cnp
	               pod-template-hash=7bfb7d69b8
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.207
	 IPs:
	   IP:           10.0.1.207
	   IP:           fd02::1e7
	 Controlled By:  ReplicaSet/pod-to-a-denied-cnp-7bfb7d69b8
	 Containers:
	   pod-to-a-denied-cnp-container:
	     Container ID:  docker://3d36bb9b27f6e30b04a69f2fa2fc0ca8b576286c9d3a9cf6941c864f9dd59f87
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:59 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ff228 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-ff228:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/pod-to-a-denied-cnp-7bfb7d69b8-j8fh2 to k8s2
	   Normal  Pulled     3m46s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m46s  kubelet            Created container pod-to-a-denied-cnp-container
	   Normal  Started    3m45s  kubelet            Started container pod-to-a-denied-cnp-container
	 
	 
	 Name:         pod-to-a-intra-node-proxy-egress-policy-7b584c67f-lxjwb
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:43 +0000
	 Labels:       name=pod-to-a-intra-node-proxy-egress-policy
	               pod-template-hash=7b584c67f
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.86
	 IPs:
	   IP:           10.0.1.86
	   IP:           fd02::14b
	 Controlled By:  ReplicaSet/pod-to-a-intra-node-proxy-egress-policy-7b584c67f
	 Containers:
	   pod-to-a-intra-node-proxy-egress-policy-allow-container:
	     Container ID:  docker://7ab3b2e22267ffae8c2cae949b02ae81a58b29b65aff6769fb25eac9cbf3bfab
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:54 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5ns7 (ro)
	   pod-to-a-intra-node-proxy-egress-policy-reject-container:
	     Container ID:  docker://b880da8f3055d9d01c3f8f15c346fa95734f10947a2ed5418a6a856e9b466019
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:55 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k5ns7 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-k5ns7:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age    From               Message
	   ----     ------     ----   ----               -------
	   Normal   Scheduled  4m1s   default-scheduler  Successfully assigned default/pod-to-a-intra-node-proxy-egress-policy-7b584c67f-lxjwb to k8s2
	   Normal   Pulled     3m50s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m50s  kubelet            Created container pod-to-a-intra-node-proxy-egress-policy-allow-container
	   Normal   Started    3m50s  kubelet            Started container pod-to-a-intra-node-proxy-egress-policy-allow-container
	   Normal   Pulled     3m50s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m50s  kubelet            Created container pod-to-a-intra-node-proxy-egress-policy-reject-container
	   Normal   Started    3m49s  kubelet            Started container pod-to-a-intra-node-proxy-egress-policy-reject-container
	   Warning  Unhealthy  90s    kubelet            Liveness probe failed:
	 
	 
	 Name:         pod-to-a-multi-node-proxy-egress-policy-5f7c9fd644-p6pl2
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:44 +0000
	 Labels:       name=pod-to-a-multi-node-proxy-egress-policy
	               pod-template-hash=5f7c9fd644
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.0.13
	 IPs:
	   IP:           10.0.0.13
	   IP:           fd02::5a
	 Controlled By:  ReplicaSet/pod-to-a-multi-node-proxy-egress-policy-5f7c9fd644
	 Containers:
	   pod-to-a-multi-node-proxy-egress-policy-allow-container:
	     Container ID:  docker://ac004584e0cdc852be3d5b1b900cb4ff62adbd3c1b88892850c47a3cda6c7c80
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:50 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-a:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x24f2 (ro)
	   pod-to-a-multi-node-proxy-egress-policy-reject-container:
	     Container ID:  docker://ab1691d692a0c6a70f9eafce243bf3ac6f4121a804c463fd06005520f5577c47
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:51 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-a:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-x24f2 (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-x24f2:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From               Message
	   ----     ------     ----                  ----               -------
	   Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/pod-to-a-multi-node-proxy-egress-policy-5f7c9fd644-p6pl2 to k8s1
	   Normal   Pulled     3m55s                 kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m54s                 kubelet            Created container pod-to-a-multi-node-proxy-egress-policy-allow-container
	   Normal   Started    3m54s                 kubelet            Started container pod-to-a-multi-node-proxy-egress-policy-allow-container
	   Normal   Pulled     3m54s                 kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m53s                 kubelet            Created container pod-to-a-multi-node-proxy-egress-policy-reject-container
	   Normal   Started    3m53s                 kubelet            Started container pod-to-a-multi-node-proxy-egress-policy-reject-container
	   Warning  Unhealthy  2m9s (x2 over 3m51s)  kubelet            Readiness probe failed:
	   Warning  Unhealthy  2m9s                  kubelet            Liveness probe failed:
	 
	 
	 Name:         pod-to-b-intra-node-hostport-55d6d5988b-4q454
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:46 +0000
	 Labels:       name=pod-to-b-intra-node-hostport
	               pod-template-hash=55d6d5988b
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.75
	 IPs:
	   IP:           10.0.1.75
	   IP:           fd02::11b
	 Controlled By:  ReplicaSet/pod-to-b-intra-node-hostport-55d6d5988b
	 Containers:
	   pod-to-b-intra-node-hostport-container:
	     Container ID:  docker://59cc6cb960f41e780ede871a15ece8ebd62d8d5233f82a7e98a6d25c68fe0db4
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:36:56 +0000
	     Last State:     Terminated
	       Reason:       Error
	       Exit Code:    137
	       Started:      Wed, 14 Apr 2021 18:35:56 +0000
	       Finished:     Wed, 14 Apr 2021 18:36:56 +0000
	     Ready:          False
	     Restart Count:  3
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:40000/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:40000/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8fczm (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-8fczm:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                     From               Message
	   ----     ------     ----                    ----               -------
	   Normal   Scheduled  3m58s                   default-scheduler  Successfully assigned default/pod-to-b-intra-node-hostport-55d6d5988b-4q454 to k8s2
	   Normal   Killing    3m18s                   kubelet            Container pod-to-b-intra-node-hostport-container failed liveness probe, will be restarted
	   Normal   Pulled     2m48s (x2 over 3m41s)   kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    2m47s (x2 over 3m41s)   kubelet            Created container pod-to-b-intra-node-hostport-container
	   Normal   Started    2m47s (x2 over 3m41s)   kubelet            Started container pod-to-b-intra-node-hostport-container
	   Warning  Unhealthy  2m47s                   kubelet            Readiness probe errored: rpc error: code = Unknown desc = container not running (9a769d0c07c43bcdaca3eec6dba9508a39b29540bd375fec03161203e2ddf450)
	   Warning  Unhealthy  2m18s (x11 over 3m40s)  kubelet            Readiness probe failed: curl: (6) Could not resolve host: echo-b-host-headless
	   Warning  Unhealthy  2m18s (x6 over 3m38s)   kubelet            Liveness probe failed: curl: (6) Could not resolve host: echo-b-host-headless
	 
	 
	 Name:         pod-to-b-intra-node-nodeport-847957bcb4-wgr9s
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:46 +0000
	 Labels:       name=pod-to-b-intra-node-nodeport
	               pod-template-hash=847957bcb4
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.253
	 IPs:
	   IP:           10.0.1.253
	   IP:           fd02::1b4
	 Controlled By:  ReplicaSet/pod-to-b-intra-node-nodeport-847957bcb4
	 Containers:
	   pod-to-b-intra-node-nodeport-container:
	     Container ID:  docker://f6e469314b0a54779e4344ee3a15eb4b37280789f6f09f8cf99e24f33b71145c
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:37:07 +0000
	     Last State:     Terminated
	       Reason:       Error
	       Exit Code:    137
	       Started:      Wed, 14 Apr 2021 18:36:07 +0000
	       Finished:     Wed, 14 Apr 2021 18:37:07 +0000
	     Ready:          False
	     Restart Count:  3
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31313/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-b-host-headless:31313/public] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-5rlwd (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             False 
	   ContainersReady   False 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-5rlwd:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason                  Age                    From               Message
	   ----     ------                  ----                   ----               -------
	   Normal   Scheduled               3m58s                  default-scheduler  Successfully assigned default/pod-to-b-intra-node-nodeport-847957bcb4-wgr9s to k8s2
	   Warning  FailedCreatePodSandBox  3m41s                  kubelet            Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "de7e08d269942522a85f96749eaffeb410ad9fea86acffc7f43ae20478ba1884" network for pod "pod-to-b-intra-node-nodeport-847957bcb4-wgr9s": networkPlugin cni failed to set up pod "pod-to-b-intra-node-nodeport-847957bcb4-wgr9s_default" network: Unable to create endpoint: response status code does not match any response sta
...[truncated 18894 chars]...
ra-node-proxy-ingress-policy-allow-container
	   Normal  Started    3m42s  kubelet            Started container pod-to-c-intra-node-proxy-ingress-policy-allow-container
	   Normal  Pulled     3m42s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m42s  kubelet            Created container pod-to-c-intra-node-proxy-ingress-policy-reject-container
	   Normal  Started    3m42s  kubelet            Started container pod-to-c-intra-node-proxy-ingress-policy-reject-container
	 
	 
	 Name:         pod-to-c-intra-node-proxy-to-proxy-policy-746b66cd44-bhfxl
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:44 +0000
	 Labels:       name=pod-to-c-intra-node-proxy-to-proxy-policy
	               pod-template-hash=746b66cd44
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.103
	 IPs:
	   IP:           10.0.1.103
	   IP:           fd02::165
	 Controlled By:  ReplicaSet/pod-to-c-intra-node-proxy-to-proxy-policy-746b66cd44
	 Containers:
	   pod-to-c-intra-node-proxy-to-proxy-policy-allow-container:
	     Container ID:  docker://3a4f860f136bc25792dd1d658598ab047d637c74244c689939ed7669ced63ede
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:34:02 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cf7jv (ro)
	   pod-to-c-intra-node-proxy-to-proxy-policy-reject-container:
	     Container ID:  docker://99f276fcf11e8492cfdae67ded63b428ca21157330da99afad7e50b6c28322a2
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:34:02 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-cf7jv (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-cf7jv:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m     default-scheduler  Successfully assigned default/pod-to-c-intra-node-proxy-to-proxy-policy-746b66cd44-bhfxl to k8s2
	   Normal  Pulled     3m43s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m42s  kubelet            Created container pod-to-c-intra-node-proxy-to-proxy-policy-allow-container
	   Normal  Started    3m42s  kubelet            Started container pod-to-c-intra-node-proxy-to-proxy-policy-allow-container
	   Normal  Pulled     3m42s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m42s  kubelet            Created container pod-to-c-intra-node-proxy-to-proxy-policy-reject-container
	   Normal  Started    3m42s  kubelet            Started container pod-to-c-intra-node-proxy-to-proxy-policy-reject-container
	 
	 
	 Name:         pod-to-c-multi-node-proxy-ingress-policy-66b67564fb-q7ggf
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:44 +0000
	 Labels:       name=pod-to-c-multi-node-proxy-ingress-policy
	               pod-template-hash=66b67564fb
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.0.252
	 IPs:
	   IP:           10.0.0.252
	   IP:           fd02::a4
	 Controlled By:  ReplicaSet/pod-to-c-multi-node-proxy-ingress-policy-66b67564fb
	 Containers:
	   pod-to-c-multi-node-proxy-ingress-policy-allow-container:
	     Container ID:  docker://10bdd5fcced8936ff0150836a3b30f8b7871479c612d1a4b751c7a3bedf59079
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:56 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9pwc (ro)
	   pod-to-c-multi-node-proxy-ingress-policy-reject-container:
	     Container ID:  docker://418e7c1bc55c0f30c4059866c37d03a7bd1a31410550852f3b919e39e66f0863
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:57 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l9pwc (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-l9pwc:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type     Reason     Age                   From               Message
	   ----     ------     ----                  ----               -------
	   Normal   Scheduled  4m                    default-scheduler  Successfully assigned default/pod-to-c-multi-node-proxy-ingress-policy-66b67564fb-q7ggf to k8s1
	   Normal   Pulled     3m49s                 kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m48s                 kubelet            Created container pod-to-c-multi-node-proxy-ingress-policy-allow-container
	   Normal   Started    3m48s                 kubelet            Started container pod-to-c-multi-node-proxy-ingress-policy-allow-container
	   Normal   Pulled     3m48s                 kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal   Created    3m48s                 kubelet            Created container pod-to-c-multi-node-proxy-ingress-policy-reject-container
	   Normal   Started    3m47s                 kubelet            Started container pod-to-c-multi-node-proxy-ingress-policy-reject-container
	   Warning  Unhealthy  2m9s (x2 over 3m19s)  kubelet            Liveness probe failed:
	   Warning  Unhealthy  2m9s                  kubelet            Readiness probe failed:
	 
	 
	 Name:         pod-to-c-multi-node-proxy-to-proxy-policy-8578c699d7-bnw52
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:45 +0000
	 Labels:       name=pod-to-c-multi-node-proxy-to-proxy-policy
	               pod-template-hash=8578c699d7
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.0.4
	 IPs:
	   IP:           10.0.0.4
	   IP:           fd02::1b
	 Controlled By:  ReplicaSet/pod-to-c-multi-node-proxy-to-proxy-policy-8578c699d7
	 Containers:
	   pod-to-c-multi-node-proxy-to-proxy-policy-allow-container:
	     Container ID:  docker://ffb8feafb86ee511c9d25a3c9f37ea90527d48eec43b578f4df8a3418a882d9a
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:55 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null echo-c:8080/public] delay=0s timeout=1s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9hjk (ro)
	   pod-to-c-multi-node-proxy-to-proxy-policy-reject-container:
	     Container ID:  docker://1bdc272d26726a9690977a07018d38745172ae2a5aa870bd2d9833ba52487da3
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:55 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [ash -c ! curl -s --fail --connect-timeout 5 -o /dev/null echo-c:8080/private] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9hjk (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-m9hjk:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  3m59s  default-scheduler  Successfully assigned default/pod-to-c-multi-node-proxy-to-proxy-policy-8578c699d7-bnw52 to k8s1
	   Normal  Pulled     3m50s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m50s  kubelet            Created container pod-to-c-multi-node-proxy-to-proxy-policy-allow-container
	   Normal  Started    3m49s  kubelet            Started container pod-to-c-multi-node-proxy-to-proxy-policy-allow-container
	   Normal  Pulled     3m49s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m49s  kubelet            Created container pod-to-c-multi-node-proxy-to-proxy-policy-reject-container
	   Normal  Started    3m49s  kubelet            Started container pod-to-c-multi-node-proxy-to-proxy-policy-reject-container
	 
	 
	 Name:         pod-to-external-1111-c98db84d4-jgss9
	 Namespace:    default
	 Priority:     0
	 Node:         k8s2/192.168.36.12
	 Start Time:   Wed, 14 Apr 2021 18:33:43 +0000
	 Labels:       name=pod-to-external-1111
	               pod-template-hash=c98db84d4
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.1.213
	 IPs:
	   IP:           10.0.1.213
	   IP:           fd02::194
	 Controlled By:  ReplicaSet/pod-to-external-1111-c98db84d4
	 Containers:
	   pod-to-external-1111-container:
	     Container ID:  docker://1c7e5cd803958a31b1939e4872070cb985477d26520e5b21a1941f767e0c6451
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:56 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null 1.1.1.1] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null 1.1.1.1] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6tkhn (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-6tkhn:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/pod-to-external-1111-c98db84d4-jgss9 to k8s2
	   Normal  Pulled     3m48s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m48s  kubelet            Created container pod-to-external-1111-container
	   Normal  Started    3m48s  kubelet            Started container pod-to-external-1111-container
	 
	 
	 Name:         pod-to-external-fqdn-allow-google-cnp-5f55d8886b-x54xd
	 Namespace:    default
	 Priority:     0
	 Node:         k8s1/192.168.36.11
	 Start Time:   Wed, 14 Apr 2021 18:33:43 +0000
	 Labels:       name=pod-to-external-fqdn-allow-google-cnp
	               pod-template-hash=5f55d8886b
	 Annotations:  <none>
	 Status:       Running
	 IP:           10.0.0.82
	 IPs:
	   IP:           10.0.0.82
	   IP:           fd02::11
	 Controlled By:  ReplicaSet/pod-to-external-fqdn-allow-google-cnp-5f55d8886b
	 Containers:
	   pod-to-external-fqdn-allow-google-cnp-container:
	     Container ID:  docker://5da5e0c6229093ab26f74f9bb66279a6926b171b3530b777284ed1446e38e056
	     Image:         docker.io/byrnedo/alpine-curl:0.1.8
	     Image ID:      docker-pullable://byrnedo/alpine-curl@sha256:548379d0a4a0c08b9e55d9d87a592b7d35d9ab3037f4936f5ccd09d0b625a342
	     Port:          <none>
	     Host Port:     <none>
	     Command:
	       /bin/ash
	       -c
	       sleep 1000000000
	     State:          Running
	       Started:      Wed, 14 Apr 2021 18:33:49 +0000
	     Ready:          True
	     Restart Count:  0
	     Liveness:       exec [curl -sS --fail --connect-timeout 5 -o /dev/null www.google.com] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Readiness:      exec [curl -sS --fail --connect-timeout 5 -o /dev/null www.google.com] delay=0s timeout=7s period=10s #success=1 #failure=3
	     Environment:    <none>
	     Mounts:
	       /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-qb5zn (ro)
	 Conditions:
	   Type              Status
	   Initialized       True 
	   Ready             True 
	   ContainersReady   True 
	   PodScheduled      True 
	 Volumes:
	   kube-api-access-qb5zn:
	     Type:                    Projected (a volume that contains injected data from multiple sources)
	     TokenExpirationSeconds:  3607
	     ConfigMapName:           kube-root-ca.crt
	     ConfigMapOptional:       <nil>
	     DownwardAPI:             true
	 QoS Class:                   BestEffort
	 Node-Selectors:              <none>
	 Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                              node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	 Events:
	   Type    Reason     Age    From               Message
	   ----    ------     ----   ----               -------
	   Normal  Scheduled  4m1s   default-scheduler  Successfully assigned default/pod-to-external-fqdn-allow-google-cnp-5f55d8886b-x54xd to k8s1
	   Normal  Pulled     3m56s  kubelet            Container image "docker.io/byrnedo/alpine-curl:0.1.8" already present on machine
	   Normal  Created    3m56s  kubelet            Created container pod-to-external-fqdn-allow-google-cnp-container
	   Normal  Started    3m55s  kubelet            Started container pod-to-external-fqdn-allow-google-cnp-container
	 
Stderr:
 	 

FAIL: connectivity-check pods are not ready after timeout
Expected
    <*errors.errorString | 0xc00109f020>: {
        s: "timed out waiting for pods with filter  to be ready: 4m0s timeout expired",
    }
to be nil
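The failure above is only the generic readiness timeout from the test helper; the interesting state is in the AfterFailed dump below. To reproduce the same wait by hand against the test cluster, something like the following should be close enough (a sketch only; the Ginkgo helper applies its own pod filter and polling, so this is an approximation, not the exact check):

    # Wait up to 4 minutes for every pod in the default namespace (the
    # connectivity-check pods) to report Ready, mirroring the
    # "4m0s timeout expired" error above.
    kubectl wait --for=condition=Ready pods --all -n default --timeout=4m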
=== Test Finished at 2021-04-14T18:37:44Z====
18:37:44 STEP: Running JustAfterEach block for EntireTestsuite K8sConformance Portmap Chaining
===================== TEST FAILED =====================
18:37:45 STEP: Running AfterFailed block for EntireTestsuite K8sConformance Portmap Chaining
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE           NAME                                                         READY   STATUS             RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 cilium-monitoring   grafana-5747bcc8f9-m6fsh                                     1/1     Running            0          38m     10.0.0.238      k8s1   <none>           <none>
	 cilium-monitoring   prometheus-655fb888d7-mtpn5                                  1/1     Running            0          38m     10.0.0.70       k8s1   <none>           <none>
	 default             echo-a-c4cdff77c-bjg2x                                       1/1     Running            0          4m7s    10.0.1.116      k8s2   <none>           <none>
	 default             echo-b-598c78b9fc-5hr5k                                      1/1     Running            0          4m7s    10.0.1.51       k8s2   <none>           <none>
	 default             echo-b-host-5556f9488-7m6vb                                  0/1     CrashLoopBackOff   5          4m7s    192.168.36.12   k8s2   <none>           <none>
	 default             echo-c-55ffd4dc66-r8gct                                      1/1     Running            0          4m7s    10.0.1.46       k8s2   <none>           <none>
	 default             echo-c-host-78565c4bcd-gchvt                                 1/1     Running            0          4m7s    192.168.36.12   k8s2   <none>           <none>
	 default             host-to-b-multi-node-clusterip-7f8c9699d6-jnbzm              1/1     Running            0          4m4s    192.168.36.11   k8s1   <none>           <none>
	 default             host-to-b-multi-node-headless-7df4d6fdb-2mzrq                1/1     Running            0          4m4s    192.168.36.11   k8s1   <none>           <none>
	 default             pod-to-a-679f686cb-6slkk                                     1/1     Running            0          4m7s    10.0.1.4        k8s2   <none>           <none>
	 default             pod-to-a-allowed-cnp-54755cc9c6-zxlkg                        1/1     Running            0          4m6s    10.0.1.89       k8s2   <none>           <none>
	 default             pod-to-a-denied-cnp-7bfb7d69b8-j8fh2                         1/1     Running            0          4m6s    10.0.1.207      k8s2   <none>           <none>
	 default             pod-to-a-intra-node-proxy-egress-policy-7b584c67f-lxjwb      2/2     Running            0          4m6s    10.0.1.86       k8s2   <none>           <none>
	 default             pod-to-a-multi-node-proxy-egress-policy-5f7c9fd644-p6pl2     2/2     Running            0          4m5s    10.0.0.13       k8s1   <none>           <none>
	 default             pod-to-b-intra-node-hostport-55d6d5988b-4q454                0/1     Running            3          4m3s    10.0.1.75       k8s2   <none>           <none>
	 default             pod-to-b-intra-node-nodeport-847957bcb4-wgr9s                0/1     Running            3          4m3s    10.0.1.253      k8s2   <none>           <none>
	 default             pod-to-b-multi-node-clusterip-7984ccf8c6-x4gsp               1/1     Running            0          4m4s    10.0.0.3        k8s1   <none>           <none>
	 default             pod-to-b-multi-node-headless-7fb5f5c84b-85t9h                1/1     Running            0          4m4s    10.0.0.19       k8s1   <none>           <none>
	 default             pod-to-b-multi-node-hostport-68d86fc5f6-z6xhg                0/1     Running            3          4m3s    10.0.0.241      k8s1   <none>           <none>
	 default             pod-to-b-multi-node-nodeport-85f9fb6b7d-pfmrw                0/1     Running            3          4m3s    10.0.0.86       k8s1   <none>           <none>
	 default             pod-to-c-intra-node-proxy-ingress-policy-54f9fb4cbc-tczkk    2/2     Running            0          4m5s    10.0.1.126      k8s2   <none>           <none>
	 default             pod-to-c-intra-node-proxy-to-proxy-policy-746b66cd44-bhfxl   2/2     Running            0          4m5s    10.0.1.103      k8s2   <none>           <none>
	 default             pod-to-c-multi-node-proxy-ingress-policy-66b67564fb-q7ggf    2/2     Running            0          4m5s    10.0.0.252      k8s1   <none>           <none>
	 default             pod-to-c-multi-node-proxy-to-proxy-policy-8578c699d7-bnw52   2/2     Running            0          4m4s    10.0.0.4        k8s1   <none>           <none>
	 default             pod-to-external-1111-c98db84d4-jgss9                         1/1     Running            0          4m6s    10.0.1.213      k8s2   <none>           <none>
	 default             pod-to-external-fqdn-allow-google-cnp-5f55d8886b-x54xd       1/1     Running            0          4m6s    10.0.0.82       k8s1   <none>           <none>
	 kube-system         cilium-6m974                                                 1/1     Running            0          5m34s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-operator-7f7c646455-h2wcx                             1/1     Running            0          5m34s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         cilium-operator-7f7c646455-hrf9q                             1/1     Running            0          5m34s   192.168.36.11   k8s1   <none>           <none>
	 kube-system         cilium-qf27p                                                 1/1     Running            0          5m34s   192.168.36.12   k8s2   <none>           <none>
	 kube-system         coredns-755cd654d4-ngvx5                                     1/1     Running            0          12m     10.0.0.63       k8s1   <none>           <none>
	 kube-system         etcd-k8s1                                                    1/1     Running            0          42m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-apiserver-k8s1                                          1/1     Running            0          42m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-controller-manager-k8s1                                 1/1     Running            0          42m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-2l4hw                                             1/1     Running            0          42m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         kube-proxy-hq6gk                                             1/1     Running            0          40m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         kube-scheduler-k8s1                                          1/1     Running            0          42m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-kxq2f                                           1/1     Running            0          39m     192.168.36.11   k8s1   <none>           <none>
	 kube-system         log-gatherer-njlmx                                           1/1     Running            0          39m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-lpd97                                         1/1     Running            0          40m     192.168.36.12   k8s2   <none>           <none>
	 kube-system         registry-adder-vh88w                                         1/1     Running            0          40m     192.168.36.11   k8s1   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods [cilium-6m974 cilium-qf27p]
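Before digging into the per-agent dumps below, a quick way to pull just the not-ready pods out of the table above (a sketch, assuming jq is available next to kubectl; completed pods would also match this filter, but none are present in this run):

    # Print namespace/name of every pod whose Ready condition is not True.
    kubectl get pods -A -o json \
      | jq -r '.items[]
          | select([.status.conditions[]? | select(.type=="Ready" and .status!="True")] | length > 0)
          | "\(.metadata.namespace)/\(.metadata.name)"'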
cmd: kubectl exec -n kube-system cilium-6m974 -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                                        IPv6       IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                             
	 209        Disabled           Enabled           15351      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::1b   10.0.0.4     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-c-multi-node-proxy-to-proxy-policy                                                                 
	 557        Disabled           Disabled          41055      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=kube-system         fd02::fd   10.0.0.63    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=coredns                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=kube-system                                                                        
	                                                            k8s:k8s-app=kube-dns                                                                                               
	 843        Disabled           Disabled          4          reserved:health                                                                    fd02::9a   10.0.0.125   ready   
	 893        Disabled           Disabled          38783      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::e1   10.0.0.19    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-b-multi-node-headless                                                                              
	 1223       Disabled           Disabled          37950      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::24   10.0.0.241   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-b-multi-node-hostport                                                                              
	 1398       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s1                                                                                 ready   
	                                                            k8s:node-role.kubernetes.io/control-plane                                                                          
	                                                            k8s:node-role.kubernetes.io/master                                                                                 
	                                                            k8s:node.kubernetes.io/exclude-from-external-load-balancers                                                        
	                                                            reserved:host                                                                                                      
	 1525       Disabled           Disabled          60938      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::a4   10.0.0.252   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-c-multi-node-proxy-ingress-policy                                                                  
	 1813       Disabled           Enabled           5646       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::11   10.0.0.82    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-external-fqdn-allow-google-cnp                                                                     
	 2941       Disabled           Disabled          63582      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::71   10.0.0.86    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-b-multi-node-nodeport                                                                              
	 3540       Disabled           Disabled          63892      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::12   10.0.0.3     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-b-multi-node-clusterip                                                                             
	 3764       Disabled           Enabled           30691      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default             fd02::5a   10.0.0.13    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                            
	                                                            k8s:name=pod-to-a-multi-node-proxy-egress-policy                                                                   
	 3981       Disabled           Disabled          16319      k8s:app=prometheus                                                                 fd02::2    10.0.0.70    ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=prometheus-k8s                                                             
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 4094       Disabled           Disabled          34051      k8s:app=grafana                                                                    fd02::ee   10.0.0.238   ready   
	                                                            k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=cilium-monitoring                                   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                           
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                                    
	                                                            k8s:io.kubernetes.pod.namespace=cilium-monitoring                                                                  
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init)
	 

cmd: kubectl exec -n kube-system cilium-qf27p -- cilium endpoint list
Exitcode: 0 
Stdout:
 	 ENDPOINT   POLICY (ingress)   POLICY (egress)   IDENTITY   LABELS (source:key[=value])                                              IPv6        IPv4         STATUS   
	            ENFORCEMENT        ENFORCEMENT                                                                                                                    
	 24         Disabled           Disabled          28058      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::194   10.0.1.213   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-external-1111                                                                             
	 65         Disabled           Disabled          16755      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::19e   10.0.1.4     ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-a                                                                                         
	 280        Disabled           Disabled          4          reserved:health                                                          fd02::138   10.0.1.159   ready   
	 335        Disabled           Disabled          42960      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1b4   10.0.1.253   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-b-intra-node-nodeport                                                                     
	 442        Disabled           Enabled           39813      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::165   10.0.1.103   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-c-intra-node-proxy-to-proxy-policy                                                        
	 445        Disabled           Disabled          53224      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1ee   10.0.1.126   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-c-intra-node-proxy-ingress-policy                                                         
	 448        Enabled            Disabled          18786      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::198   10.0.1.46    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=echo-c                                                                                           
	 470        Disabled           Enabled           7938       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1e7   10.0.1.207   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-a-denied-cnp                                                                              
	 550        Disabled           Disabled          21061      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::1d2   10.0.1.116   ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=echo-a                                                                                           
	 843        Disabled           Enabled           43866      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::14b   10.0.1.86    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-a-intra-node-proxy-egress-policy                                                          
	 1434       Disabled           Disabled          28423      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::11b   10.0.1.75    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-b-intra-node-hostport                                                                     
	 1565       Disabled           Disabled          17838      k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::155   10.0.1.51    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=echo-b                                                                                           
	 2581       Disabled           Enabled           4229       k8s:io.cilium.k8s.namespace.labels.kubernetes.io/metadata.name=default   fd02::125   10.0.1.89    ready   
	                                                            k8s:io.cilium.k8s.policy.cluster=default                                                                  
	                                                            k8s:io.cilium.k8s.policy.serviceaccount=default                                                           
	                                                            k8s:io.kubernetes.pod.namespace=default                                                                   
	                                                            k8s:name=pod-to-a-allowed-cnp                                                                             
	 3168       Disabled           Disabled          1          k8s:cilium.io/ci-node=k8s2                                                                        ready   
	                                                            reserved:host                                                                                             
	 
Stderr:
 	 Defaulted container "cilium-agent" out of: cilium-agent, clean-cilium-state (init)
	 

===================== Exiting AfterFailed =====================
18:38:27 STEP: Running AfterEach for block EntireTestsuite K8sConformance Portmap Chaining
18:39:44 STEP: Running AfterEach for block EntireTestsuite
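The endpoint lists above show every endpoint as ready, so they don't explain why the hostport/nodeport probes keep failing. If this reproduces, capturing drops from the agent on the node running the failing client pods is probably the next useful artifact (a sketch; the pod name is taken from this run, and the drop output depends on the datapath configuration):

    # Stream packet drops seen by the Cilium agent on k8s2 while the
    # pod-to-b-intra-node-* liveness/readiness probes are failing.
    kubectl exec -n kube-system cilium-qf27p -c cilium-agent -- \
      cilium monitor --type drop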

    Labels

    area/CI: Continuous Integration testing issue or flake
    ci/flake: This is a known failure that occurs in the tree. Please investigate me!
    stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
