Access managed clusters via HTTP_PROXY #6043

@alexellis

Description

Summary

There doesn't appear to be any way for ArgoCD to access its managed clusters via an HTTP proxy (HTTP_PROXY).

Motivation

This is required when one wants to use a tunnel like inlets to remotely access a Kubernetes API server in another cluster.

Imagine you have four staging environments and two production environments. Production is public, and its Kubernetes API server certificate carries a TLS SAN for the master node's IP, so ArgoCD can address it directly. Each of the staging environments, however, may run in a private cluster.

Using an SSH tunnel or inlets, we can make the Kubernetes API server of each staging environment appear as a ClusterIP Service within the main ArgoCD management cluster.
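As a sketch, with all names assumed for illustration: from inside clustera, a reverse tunnel like ssh -N -R 6443:kubernetes.default.svc:443 tunnel@mgmt-tunnel.example.com lands the staging API server on a tunnel pod in the management cluster, and a ClusterIP Service in front of that pod gives it a stable in-cluster name:

# Assumed: the sshd (or inlets-server) tunnel pod carries the label app: clustera-tunnel,
# and sshd has GatewayPorts enabled so the -R listener binds beyond loopback
apiVersion: v1
kind: Service
metadata:
  name: clustera
  namespace: default
spec:
  selector:
    app: clustera-tunnel
  ports:
  - port: 443        # what clients inside the management cluster connect to
    targetPort: 6443 # where the tunnel listens on the pod

ArgoCD (or kubectl) in the management cluster can then reach the staging API server at https://clustera.default.svc.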

The challenge is that when Argo accesses the API server over the tunnel, the name it connects to, i.e. clustera.default.svc, will not match the SANs in the API server's TLS certificate, e.g. kubernetes.default.svc.
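The mismatch is easy to confirm by inspecting the certificate the tunnelled endpoint serves (service name as in the sketch above):

# Print the SANs presented by the remote API server over the tunnel
openssl s_client -connect clustera.default.svc:443 </dev/null 2>/dev/null \
  | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'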

Proposal

Turning on TLS Insecure is a workaround, but not one to be encouraged.

Suggestion 1:

Could an HTTP proxy flag be added to the argocd cluster add command, for example:

argocd cluster add clustera --http-proxy https://clustera-proxy:3128
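Under the hood, that could map onto ArgoCD's declarative cluster secret, with a new field in the config JSON. A sketch, where proxyUrl is the hypothetical field this proposal asks for:

apiVersion: v1
kind: Secret
metadata:
  name: clustera
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
stringData:
  name: clustera
  server: https://clustera.default.svc
  config: |
    {
      "proxyUrl": "http://clustera-proxy:3128",
      "tlsClientConfig": {
        "caData": "..."
      }
    }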

Suggestion 2:

Is there a way for ArgoCD to read an HTTP_PROXY from the kubeconfig? (Last time I checked, specifying an HTTPS proxy per cluster was planned for kubectl, but wasn't implemented yet.)
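For illustration, per-cluster proxy support in a kubeconfig might look like the sketch below; a proxy-url field has since appeared in client-go's kubeconfig handling, but treat the details here as assumptions:

apiVersion: v1
kind: Config
clusters:
- name: clustera
  cluster:
    server: https://clustera.default.svc
    certificate-authority-data: ...
    proxy-url: http://clustera-proxy:3128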

I created a small HTTPS CONNECT proxy that allowed me to prove out the idea, but I can't find a way to configure ArgoCD with a separate HTTPS proxy for each cluster it tries to connect to.

Here is a conceptual diagram of how this would work. Multiple workload clusters can be added with this method, each having its own tiny HTTPS proxy, exposed in the main cluster via a tunnel.

(Diagram: argo-in-cluster-proxy)

In my testing with kubectl and the HTTP_PROXY env-var, I was able to get this to work as follows:

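Roughly, with placeholder names (clustera-proxy being the in-cluster CONNECT proxy Service):

# Point Go's proxy handling at the per-cluster CONNECT proxy; Go uses
# HTTPS_PROXY for https:// endpoints, so set both to be safe
export HTTP_PROXY=http://clustera-proxy.default.svc:3128
export HTTPS_PROXY=$HTTP_PROXY
kubectl --kubeconfig ./clustera-kubeconfig get nodes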

The missing piece is a way to configure ArgoCD to use the appropriately named HTTPS proxy for each remote cluster. For the time being, the workaround for managed Kubernetes is TLS Insecure.

Where we have access to kubeadm or k3s, we can add an additional TLS SAN, and the solution works by tunnelling the API server directly. @jsiebens has an example of that here: Argo CD for your private Raspberry Pi k3s cluster
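For instance, k3s takes the extra SAN at start-up; kubeadm's equivalent lives under apiServer.certSANs in its ClusterConfiguration (the hostname below is a placeholder):

# Add the tunnel's in-cluster name as an extra SAN on the API server certificate
k3s server --tls-san clustera.default.svc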

I'm open to hearing other solutions, or about how other users deploy via ArgoCD to many different private clusters on managed K8s within private VPCs, where the API server is not available on a public URL.

Labels

enhancement (New feature or request)
