
Add the possibility for the controller to be in a shared namespace #4558

@panzouh

Description


Is your feature request related to a problem? Please describe.
I would like to run nginx with a sidecar whose purpose is to pull a configuration from an API and push it to nginx via a shared volume. My problem is that when the configuration is updated, I would like to send a SIGHUP signal to the nginx master process so that it gracefully reloads the configuration.
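To illustrate why a shared process namespace is needed: with it enabled, the sidecar can see the nginx master process in /proc and signal it directly. The sketch below is purely hypothetical (the API URL, the shared volume path, and all helper names are assumptions, not part of the chart); it shows a minimal reload loop a sidecar might run.

```python
# Hypothetical sidecar reload loop. CONFIG_URL and SHARED_CONF are
# illustrative assumptions; nothing here is prescribed by the chart.
import hashlib
import os
import signal
import time
import urllib.request
from typing import Optional

CONFIG_URL = "http://config-api.example/nginx.conf"  # hypothetical config API
SHARED_CONF = "/etc/nginx/shared/nginx.conf"         # shared emptyDir volume


def config_changed(new_bytes: bytes, path: str) -> bool:
    """Return True if new_bytes differs from the file currently at path."""
    try:
        with open(path, "rb") as f:
            current = f.read()
    except FileNotFoundError:
        return True  # no config yet counts as changed
    return hashlib.sha256(new_bytes).digest() != hashlib.sha256(current).digest()


def find_nginx_master_pid() -> Optional[int]:
    """Scan /proc for the nginx master process.

    This only works when the pod's process namespace is shared, since
    otherwise the sidecar cannot see the nginx container's processes.
    nginx rewrites its process title, so /proc/<pid>/cmdline starts
    with "nginx: master process".
    """
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open(f"/proc/{entry}/cmdline", "rb") as f:
                cmdline = f.read()
        except OSError:
            continue  # process exited between listdir and open
        if cmdline.startswith(b"nginx: master process"):
            return int(entry)
    return None


def reload_loop(interval: float = 30.0) -> None:
    """Poll the API; on change, write the config and SIGHUP nginx."""
    while True:
        new_conf = urllib.request.urlopen(CONFIG_URL).read()
        if config_changed(new_conf, SHARED_CONF):
            with open(SHARED_CONF, "wb") as f:
                f.write(new_conf)
            pid = find_nginx_master_pid()
            if pid is not None:
                os.kill(pid, signal.SIGHUP)  # graceful reload
        time.sleep(interval)
```

SIGHUP is nginx's standard graceful-reload signal: the master re-reads the configuration and replaces worker processes without dropping in-flight connections, which is exactly why kubectl exec or a pod restart is unnecessary once the PID is visible.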

Describe the solution you'd like
The solution would be simple: add a key like this to values.yaml:

controller:
  ## The name of the Ingress Controller daemonset or deployment.
  name: controller

  ## The kind of the Ingress Controller installation - deployment or daemonset.
  kind: deployment

  ## Shared process namespace between containers in the Ingress Controller pod.
  sharedProcessNamespace: false

And this in ./templates/deployment.yaml and ./templates/daemonset.yaml:

apiVersion: apps/v1
kind: Deployment # Same on daemonset
metadata:
  name: {{ include "nginx-ingress.controller.fullname" . }}
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "nginx-ingress.labels" . | nindent 4 }}
{{- if .Values.controller.annotations }}
  annotations: {{ toYaml .Values.controller.annotations | nindent 4 }}
{{- end }}
spec:
  {{- if not .Values.controller.autoscaling.enabled }}
  replicas: {{ .Values.controller.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "nginx-ingress.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        {{- include "nginx-ingress.selectorLabels" . | nindent 8 }}
{{- if .Values.nginxServiceMesh.enable }}
        nsm.nginx.com/enable-ingress: "true"
        nsm.nginx.com/enable-egress: "{{ .Values.nginxServiceMesh.enableEgress }}"
        nsm.nginx.com/deployment: {{ include "nginx-ingress.controller.fullname" . }}
{{- end }}
{{- if .Values.controller.pod.extraLabels }}
{{ toYaml .Values.controller.pod.extraLabels | indent 8 }}
{{- end }}
{{- if or .Values.prometheus.create .Values.controller.pod.annotations }}
      annotations:
{{- if .Values.prometheus.create }}
        prometheus.io/scrape: "true"
        prometheus.io/port: "{{ .Values.prometheus.port }}"
        prometheus.io/scheme: "{{ .Values.prometheus.scheme }}"
{{- end }}
{{- if .Values.controller.pod.annotations }}
{{ toYaml .Values.controller.pod.annotations | indent 8 }}
{{- end }}
{{- end }}
    spec:
      {{- if .Values.controller.sharedProcessNamespace }}
      shareProcessNamespace: true
      {{- end }}
      # [...]
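For context, a pod rendered with the proposed flag enabled would end up with a spec along these lines (the sidecar container name and the emptyDir volume are illustrative assumptions, not part of the chart):

```yaml
spec:
  shareProcessNamespace: true
  containers:
    - name: nginx-ingress
      volumeMounts:
        - name: shared-conf          # illustrative shared volume
          mountPath: /etc/nginx/shared
    - name: config-sidecar           # hypothetical sidecar image
      volumeMounts:
        - name: shared-conf
          mountPath: /etc/nginx/shared
  volumes:
    - name: shared-conf
      emptyDir: {}
```

With shareProcessNamespace set, Kubernetes puts all containers in the pod into one PID namespace, so the sidecar can enumerate and signal the nginx processes.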

Describe alternatives you've considered
I considered for a moment adding a CronJob to my cluster that fetches the configuration and runs kubectl exec into nginx to reload it, but to me that is not the proper way to do it, because it does not take into account whether the configuration has changed or not.

Edit: I also considered supporting multiple ConfigMaps, since the flag nginxConfigMaps is named with a plural but supports only one configuration.

Metadata

Labels

backlog candidate: Pull requests/issues that are candidates to be backlog items
proposal: An issue that proposes a feature request
