Description
Context
I have a working control plane deployed with Kamaji. The control plane endpoint is exposed through an Ingress resource. This is the YAML manifest:
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant01-cluster01
  namespace: tenant01
spec:
  addons:
    coreDNS: {}
  controlPlane:
    deployment:
      replicas: 1
      serviceAccountName: default
    ingress:
      additionalMetadata:
        annotations:
          nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      hostname: 192.168.78.4.nip.io
      ingressClassName: nginx
    service:
      serviceType: ClusterIP
  dataStore: tenant01-cluster01
  kubernetes:
    kubelet:
      cgroupfs: systemd
      preferredAddressTypes:
        - ExternalIP
        - Hostname
        - InternalIP
    version: v1.31.0
  networkProfile:
    clusterDomain: cluster.local
    podCidr: "10.244.0.0/16"
    port: 6443
    serviceCidr: "10.96.0.0/16"
If nothing is modified, all is working as expected. However, I have found a major issue while updating the resource to provide a new ingress hostname. When I update the value from 192.168.78.4.nip.io to 192.168.78.5.nip.io, the reconciliation process is executed and the Ingress resource is updated with the new value:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
  creationTimestamp: "2025-03-20T13:15:37Z"
  generation: 6
  labels:
    kamaji.clastix.io/component: ingress
    kamaji.clastix.io/name: tenant01-cluster01
    kamaji.clastix.io/project: kamaji
  name: tenant01-cluster01
  namespace: tenant01
  ownerReferences:
  - apiVersion: kamaji.clastix.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: TenantControlPlane
    name: tenant01-cluster01
    uid: f45398d8-3dce-4786-b877-db2d22ca8e6f
  resourceVersion: "4168"
  uid: 47e41a6c-821f-418d-a197-0f6de7da247a
spec:
  ingressClassName: nginx
  rules:
  - host: 192.168.78.5.nip.io
    http:
      paths:
      - backend:
          service:
            name: tenant01-cluster01
            port:
              number: 6443
        path: /
        pathType: Prefix
status:
  loadBalancer: {}
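For completeness, the hostname change and the resulting Ingress can also be checked from the command line; the patch below is just one possible way to apply the update (tcp is the TenantControlPlane short name), and the jsonpath expression simply reads back the host field:
# Change the ingress hostname on the TenantControlPlane (editing and re-applying the manifest works as well)
kubectl -n tenant01 patch tcp tenant01-cluster01 --type merge \
  -p '{"spec":{"controlPlane":{"ingress":{"hostname":"192.168.78.5.nip.io"}}}}'
# Read back the host configured on the generated Ingress
kubectl -n tenant01 get ingress tenant01-cluster01 -o jsonpath='{.spec.rules[0].host}'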
However, the kubeconfig file is still pointing to the old hostname:
admin.conf: |
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURCVENDQWUyZ0F3SUJBZ0lJWDRvLzlncFh2c0l3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TlRBek1qQXhNekE1TXpG
YUZ3MHpOVEF6TVRneE16RTBNekZhTUJVeApFekFSQmdOVkJBTVRDbXQxWW1WeWJtVjBaWE13Z2dFaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQkR3QXdnZ0VLCkFvSUJBUUQyczFPZ3g5dG9WRDhmV1NSTEtkQVR1STRIMjJHZEw4RHQ4QkZXWGdwbU9COXg1b2pwRkdhUldnM2gKYl
lQT3I2MEhzelBwdEZpY3VTKy93U3dWdHBtQ1E0WllGeEszSVFEb2R6TDkvMzA5ektJc0lVMGJ6Qy84SDdoNwpBWUxaTmprNWh5cnowQVlMU1F5WXkxS2REMTlubTFvYko1eWVzWXh5WnRaZVRCMG83SUhaSG1zYW96MVJrZThRCjlHUWRVVGpQWjdVTGM0S1liK0Y1bmtiaXZaR1Yx
R2RDSE54WDl5dHZrYzlaNDQ4SXNtdzhnWjJPYnFBcWlraDgKczZRUlFYMkRacUM0MzRoZzdYNThYSk93UTZOVFE3a1FpNmZUZVl2UlhUck9MaytMZ0Q3QitTZmg2N2FlMGZmYgpiU2V1enJGTy92UU1rMVk1bXU1bk9vNTJVc1N2QWdNQkFBR2pXVEJYTUE0R0ExVWREd0VCL3dRRU
F3SUNwREFQCkJnTlZIUk1CQWY4RUJUQURBUUgvTUIwR0ExVWREZ1FXQkJTZWtVaDRHYVNRWitINUx2RWF2MGNvUXdrdW9UQVYKQmdOVkhSRUVEakFNZ2dwcmRXSmxjbTVsZEdWek1BMEdDU3FHU0liM0RRRUJDd1VBQTRJQkFRQnVGSDM4Q2tvRAorZjBZZDNKNC9ZQ2MyZHhxd2pQ
d1gva2tQbFhZMWRYQ2xIYzByMXJYUlc0NDBNTXpldzJ0aEU0Yk9XelRxQUlaCkp6K3hiRWM5SDBmMzhMdFMvVndNbEZydmQvSDIvWEJrcUFBS3FQTVZ2M2VYTCtsVDU3QWQwWDdLRDFOQnlHNysKS0FTd2o0Z0tkaWdDREJLeDhraW1JYlI4UXdpdkxHK3JhMmtuM3NTbEdPZGVBTj
MxYzR3dnR2R0ZmOFhaWVgvMQpXRkpRL25zaDV1U0ovUnRmZno4SE50NGFGL25NS2N5K1o5cEJ2RTljb3lLL0ZnUU5jMm9BL3B2SnZzWUdwdU9GClU3ZnMyc2lUZmpPQUJwREFSWTdZQlZHZE4ybStsN2RTQUl4NUFQdFFrVldNYmFnbnlCRERFOGZzQ3BqUGZWa1YKT0xWM1krNTRq
czhnCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
server: https://192.168.78.4.nip.io:6443
name: tenant01-cluster01
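The server field can also be checked directly from the generated secret; the command below assumes the admin kubeconfig is stored in a secret named <tcp-name>-admin-kubeconfig (the exact name is reported in the TenantControlPlane status):
# Dump the server URL from the generated admin kubeconfig (secret name is an assumption, see above)
kubectl -n tenant01 get secret tenant01-cluster01-admin-kubeconfig \
  -o jsonpath='{.data.admin\.conf}' | base64 -d | grep 'server:'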
What's more, the kube-apiserver certificate is not updated to include the new domain in its SAN:
apiserver.crt: |
-----BEGIN CERTIFICATE-----
MIIEEzCCAvugAwIBAgIIMtdAAwdVp+8wDQYJKoZIhvcNAQELBQAwFTETMBEGA1UE
AxMKa3ViZXJuZXRlczAeFw0yNTAzMjAxMzA5MzFaFw0yNjAzMjAxMzE0MzFaMBkx
FzAVBgNVBAMTDmt1YmUtYXBpc2VydmVyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAqfXaIUwBnK9yusustYE0L8KMv6OrIaNjSohrqHZkPT1UGvjNDjkA
jTBIKlBA3WW4aNXUVloEnDkgypE0qzePjyLC1rJn3oiPzctYU748EaUBHQdMIfw/
ATFeV/zeu5uUj4vihm3cr25T73ULtfoBzyf/NDnkXomUDOpF+n3BYfGk8VlJ+/RZ
5lKqQitsihUCmIwqzEx/LGL3KMhfjBDidzPqu8WOy9/S23YE71T5q7td2EqmIzUL
oZZ7RBNvsZdKbfrQkdSTxSMCoZgXC61CLEk1TPqpQOUndfykZ09/XA7jY4mhgGMe
+zK4FDefIH6U4PXhOql4XsTGWUkoQCahYwIDAQABo4IBYTCCAV0wDgYDVR0PAQH/
BAQDAgWgMBMGA1UdJQQMMAoGCCsGAQUFBwMBMAwGA1UdEwEB/wQCMAAwHwYDVR0j
BBgwFoAUnpFIeBmkkGfh+S7xGr9HKEMJLqEwggEFBgNVHREEgf0wgfqCEzE5Mi4x
NjguNzguNC5uaXAuaW+CCmt1YmVybmV0ZXOCEmt1YmVybmV0ZXMuZGVmYXVsdIIW
a3ViZXJuZXRlcy5kZWZhdWx0LnN2Y4Ika3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5j
bHVzdGVyLmxvY2Fsgglsb2NhbGhvc3SCBG5vZGWCEnRlbmFudDAxLWNsdXN0ZXIw
MYIfdGVuYW50MDEtY2x1c3RlcjAxLnRlbmFudDAxLnN2Y4ItdGVuYW50MDEtY2x1
c3RlcjAxLnRlbmFudDAxLnN2Yy5jbHVzdGVyLmxvY2FshwQKYAABhwQKb1FqhwR/
AAABMA0GCSqGSIb3DQEBCwUAA4IBAQDlp2lN10lrQctBya/YuoYjs1LD1dnzDWLz
QZSJnNdDfqadsrZIXIHmglVAr218CoIMf7DnCOcdQkt5tb+4QQk1yZaMAVUpDJx4
Qejevedff3TV/jiEZ9cEFfEYG4Pgpl2JjuPpF1fTxUVu2ZLXltNn76sKyLL7BuUz
Aa+QdH1MIF9/OfAJMEQfW74Ea1+DqxQDpyOiYR1hGMvFJ3kf3NBY6pOvrD6jNOqv
fSCVvvEv/vOf6BvAoFPTFrK7rqmUOfeDcnm0KumKXLqwwyClY0ChxfUFWpLO7NQM
TIBQuGQWXKP74viTEloe9JaM6nyYh1TDeIzxM1VvmJy1ntS4Kntw
-----END CERTIFICATE-----
If decoded, you can see that the SAN has not been updated:
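For example, the SAN list can be inspected with openssl; the secret name below is an assumption based on the usual <tcp-name>-api-server-certificate convention, while the apiserver.crt key matches the snippet above:
# Decode the kube-apiserver certificate and print its Subject Alternative Names
kubectl -n tenant01 get secret tenant01-cluster01-api-server-certificate \
  -o jsonpath='{.data.apiserver\.crt}' | base64 -d \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'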
Kamaji Version
The bug can be reproduced with the edge-24.12.1 version and with the latest version.
Test environment to reproduce the issue
Datastore definition:
apiVersion: kamaji.clastix.io/v1alpha1
kind: DataStore
metadata:
  name: tenant01-cluster01
  namespace: tenant01
spec:
  driver: etcd
  endpoints:
    - kamaji-etcd-0.kamaji-etcd.tenant01.svc.cluster.local:2379
    - kamaji-etcd-1.kamaji-etcd.tenant01.svc.cluster.local:2379
    - kamaji-etcd-2.kamaji-etcd.tenant01.svc.cluster.local:2379
  tlsConfig:
    certificateAuthority:
      certificate:
        secretReference:
          keyPath: ca.crt
          name: kamaji-etcd-certs
          namespace: tenant01
      privateKey:
        secretReference:
          keyPath: ca.key
          name: kamaji-etcd-certs
          namespace: tenant01
    clientCertificate:
      certificate:
        secretReference:
          keyPath: tls.crt
          name: kamaji-etcd-root-client-certs
          namespace: tenant01
      privateKey:
        secretReference:
          keyPath: tls.key
          name: kamaji-etcd-root-client-certs
          namespace: tenant01
TCP definition:
apiVersion: kamaji.clastix.io/v1alpha1
kind: TenantControlPlane
metadata:
  name: tenant01-cluster01
  namespace: tenant01
spec:
  addons:
    coreDNS: {}
  controlPlane:
    deployment:
      replicas: 1
      serviceAccountName: default
    ingress:
      additionalMetadata:
        annotations:
          nginx.ingress.kubernetes.io/ssl-passthrough: "true"
      hostname: 192.168.78.4.nip.io
      ingressClassName: nginx
    service:
      serviceType: ClusterIP
  dataStore: tenant01-cluster01
  kubernetes:
    kubelet:
      cgroupfs: systemd
      preferredAddressTypes:
        - ExternalIP
        - Hostname
        - InternalIP
    version: v1.31.0
  networkProfile:
    clusterDomain: cluster.local
    podCidr: "10.244.0.0/16"
    port: 6443
    serviceCidr: "10.96.0.0/16"
Chart values:
replicaCount: 1
image:
  pullPolicy: IfNotPresent
  tag: edge-24.12.1
resources:
  limits:
    cpu: 500m
    memory: 1Gi
  requests:
    cpu: 200m
    memory: 200Mi
defaultDatastoreName: ""
kamaji-etcd:
  deploy: false
telemetry:
  disabled: true
Executed commands (Local Minikube instance):
minikube start
helm install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--create-namespace \
--version v1.17.1 \
--set crds.enabled=true
kubectl create ns tenant01
helm install kamaji-etcd clastix/kamaji-etcd -n tenant01
git clone https://github.com/clastix/kamaji -b edge-24.12.1
helm dependency build kamaji/charts/kamaji
helm install kamaji kamaji/charts/kamaji \
--namespace kamaji-system \
--create-namespace \
-f kamaji-values.yml
kubectl apply -f 02-datastore.yml
kubectl apply -f 03-controlplane.yml
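Once the manifests are applied, a quick sanity check (purely illustrative, using the short name tcp for TenantControlPlane) confirms the control plane and its Ingress are up before reproducing the hostname change:
# Verify the tenant control plane and the generated Ingress
kubectl -n tenant01 get tcp tenant01-cluster01
kubectl -n tenant01 get ingress tenant01-cluster01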
Kamaji Controller Logs
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseBootstrapToken start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-agent"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-agent"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-sa"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-sa"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-clusterrolebinding"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-clusterrolebinding"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubelet start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.kube_proxy start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.kube_proxy reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.coredns start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm start processing
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
2025-03-20T13:40:28Z INFO kubeadmconfig has been configured {"controller": "tenantcontrolplane", "controllerGroup": "kamaji.clastix.io", "controllerKind": "TenantControlPlane", "TenantControlPlane": {"name":"tenant01-cluster01","namespace":"tenant01"}, "namespace": "te
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-agent"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-agent"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-sa"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-sa"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent start processing {"resource": "konnectivity-clusterrolebinding"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent resource processed {"resource": "konnectivity-clusterrolebinding"}
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.konnectivity_agent reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.kube_proxy start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.kube_proxy reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseBootstrapToken reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseBootstrapToken start processing
2025-03-20T13:40:28Z INFO ingress has been configured {"controller": "tenantcontrolplane", "controllerGroup": "kamaji.clastix.io", "controllerKind": "TenantControlPlane", "TenantControlPlane": {"name":"tenant01-cluster01","namespace":"tenant01"}, "namespace": "tenant01
2025-03-20T13:40:28Z INFO tenant01-cluster01 has been reconciled {"controller": "tenantcontrolplane", "controllerGroup": "kamaji.clastix.io", "controllerKind": "TenantControlPlane", "TenantControlPlane": {"name":"tenant01-cluster01","namespace":"tenant01"}, "namespace"
2025-03-20T13:40:28Z INFO tenant01-cluster01 has been reconciled {"controller": "tenantcontrolplane", "controllerGroup": "kamaji.clastix.io", "controllerKind": "TenantControlPlane", "TenantControlPlane": {"name":"tenant01-cluster01","namespace":"tenant01"}, "namespace"
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseBootstrapToken reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubelet reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubelet start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm start processing
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.coredns reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.coredns start processing
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubelet reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm start processing
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.coredns reconciliation completed
2025-03-20T13:40:28Z INFO soot_tenant01_tenant01-cluster01.PhaseUploadConfigKubeadm reconciliation completed
Current behaviour
The Kamaji reconciliation process is not updating all required resources when the ingress hostname is modified. The Ingress resource is updated, but the certificates for the kube-apiserver are not regenerated, and the kubeconfig server parameter is not updated either.
Expected behaviour
Both admin-kubeconfig and api-server-certificate must be updated/recreated when the ingress hostname is modified. According to #640, #641 and #678, this should already be resolved, but it is not the case.
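As a possible manual workaround in the meantime (an untested assumption on my side: Kamaji recreates missing secrets on the next reconciliation, as it does at creation time), deleting the affected secrets should force them to be regenerated with the new hostname:
# Hypothetical workaround: force regeneration of the stale secrets
# (assumes the controller recreates them and that they follow the <tcp-name>- naming convention)
kubectl -n tenant01 delete secret \
  tenant01-cluster01-admin-kubeconfig \
  tenant01-cluster01-api-server-certificate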
If you need more insight to reproduce the issue, I can help with that.
Thanks!