
CI: K8sUpdates Tests upgrade and downgrade from a Cilium stable image to master: no matches for kind "CiliumNetworkPolicy" in version "cilium.io/v2" #10447


Description

@joestringer

Unable to recognize "/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy.yaml": no matches for kind "CiliumNetworkPolicy" in version "cilium.io/v2"

It looks like there are cases in the upgrade test where we proceed before confirming that the Kubernetes cluster is ready and Cilium is up, so the CiliumNetworkPolicy CRD may not yet be registered when the test applies the policy.
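One way to harden the test against this race would be to block on CRD registration before applying the policy. Below is a minimal, self-contained Go sketch of that idea; the helper name `waitForCNPCRD` and the timeout/poll interval are assumptions for illustration, not existing test-framework code, while the `kubectl get crd` command and the `ciliumnetworkpolicies.cilium.io` CRD name are real.

```go
// Sketch only: poll until the CiliumNetworkPolicy CRD reports the
// Established condition before applying any cilium.io/v2 manifests.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForCNPCRD (hypothetical helper) retries until the CiliumNetworkPolicy
// CRD is registered and Established, or the timeout expires.
func waitForCNPCRD(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get",
			"crd", "ciliumnetworkpolicies.cilium.io",
			"-o", `jsonpath={.status.conditions[?(@.type=="Established")].status}`,
		).CombinedOutput()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(5 * time.Second)
	}
	return fmt.Errorf("CiliumNetworkPolicy CRD not established after %s", timeout)
}

func main() {
	if err := waitForCNPCRD(2 * time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("CRD ready; safe to apply l7-policy.yaml")
}
```

In the actual test this check would need to live in the existing Ginkgo helpers between the "Cilium is installed and running" step and "Creating some endpoints and L7 policy", rather than as a standalone program.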

https://jenkins.cilium.io/job/Cilium-PR-Ginkgo-Tests-K8s/2920/testReport/junit/Suite-k8s-1/13/K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master/

fc67162b_K8sUpdates_Tests_upgrade_and_downgrade_from_a_Cilium_stable_image_to_master.zip

Stacktrace

/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/ginkgo-ext/scopes.go:430
cannot import l7 policy: /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy.yaml
Expected
    <*helpers.cmdError | 0xc00046d310>: Cannot perform 'apply' on resorce '/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy.yaml' (exit status 1) output: cmd: kubectl apply -f /home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy.yaml -n default
Exitcode: 1 
Stdout:
 	 
Stderr:
 	 error: unable to recognize "/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/manifests/l7-policy.yaml": no matches for kind "CiliumNetworkPolicy" in version "cilium.io/v2"
	 

to be nil
/home/jenkins/workspace/Cilium-PR-Ginkgo-Tests-K8s/1.13-gopath/src/github.com/cilium/cilium/test/k8sT/Updates.go:334

Standard Output

Number of "context deadline exceeded" in logs: 0
Number of "level=error" in logs: 0
Number of "level=warning" in logs: 0
Number of "Cilium API handler panicked" in logs: 0
Number of "Goroutine took lock for more than" in logs: 0
No errors/warnings found in logs
Cilium pods: []
Netpols loaded: 
CiliumNetworkPolicies loaded: 
Endpoint Policy Enforcement:
Pod   Ingress   Egress

Standard Error

STEP: Deleting Cilium, CoreDNS, and etcd-operator...
STEP: Waiting for pods to be terminated..
STEP: Cleaning Cilium state
STEP: Deploying Cilium 1.6-dev
STEP: Installing kube-dns
STEP: Cilium "1.6-dev" is installed and running
STEP: Performing Cilium preflight check
STEP: Performing Cilium status preflight check
STEP: Performing Cilium controllers preflight check
STEP: Performing Cilium health check
STEP: Performing Cilium service preflight check
STEP: Performing K8s service preflight check
STEP: Waiting for cilium-operator to be ready
STEP: Creating some endpoints and L7 policy
STEP: Waiting for kube-dns to be ready
STEP: Running kube-dns preflight check
STEP: Performing K8s service preflight check
=== Test Finished at 2020-03-04T01:46:59Z====
===================== TEST FAILED =====================
cmd: kubectl get pods -o wide --all-namespaces
Exitcode: 0 
Stdout:
 	 NAMESPACE     NAME                           READY   STATUS             RESTARTS   AGE     IP              NODE   NOMINATED NODE   READINESS GATES
	 kube-system   etcd-k8s1                      1/1     Running            0          13m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-apiserver-k8s1            1/1     Running            0          13m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-controller-manager-k8s1   0/1     CrashLoopBackOff   4          13m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-proxy-mmlr2               1/1     Running            0          13m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   kube-proxy-zsp6t               1/1     Running            0          2m23s   192.168.36.12   k8s2   <none>           <none>
	 kube-system   kube-scheduler-k8s1            0/1     CrashLoopBackOff   4          13m     192.168.36.11   k8s1   <none>           <none>
	 kube-system   log-gatherer-lqbdn             1/1     Running            0          28s     192.168.36.11   k8s1   <none>           <none>
	 kube-system   log-gatherer-mskwd             1/1     Running            0          28s     192.168.36.12   k8s2   <none>           <none>
	 kube-system   registry-adder-288p7           1/1     Running            0          2m16s   192.168.36.11   k8s1   <none>           <none>
	 kube-system   registry-adder-b2dg5           1/1     Running            0          2m16s   192.168.36.12   k8s2   <none>           <none>
	 
Stderr:
 	 

Fetching command output from pods []
===================== Exiting AfterFailed =====================

Labels

area/CI: Continuous Integration testing issue or flake
kind/bug/CI: This is a bug in the testing code.
stale: The stale bot thinks this issue is old. Add "pinned" label to prevent this from becoming stale.
