feat: manage clusters via proxy [WIP] #9496
Conversation
Codecov Report

Attention:

Additional details and impacted files:

@@            Coverage Diff             @@
##           master    #9496      +/-   ##
==========================================
- Coverage   45.75%   45.71%   -0.04%
==========================================
  Files         236      236
  Lines       28527    28550      +23
==========================================
  Hits        13053    13053
- Misses      13669    13691      +22
- Partials     1805     1806       +1

☔ View full report in Codecov by Sentry.
This branch does not contain the latest commit. We have implemented the feature in our test env.
Same issue: #7887
I think this change is reasonable. Added a couple more reviewers.
Any update? We need this feature; all of our k8s clusters are behind a proxy.
Any update?
Any chance to merge this in v2.5, @crenshaw-dev?
Apologies for missing your pings, @jinnjwu! @ls0f, are you still able to push this PR? Looks like it needs a … I'm definitely going to need some external help on the review. I'm going to add this PR to the topics for the next contributor experience meeting so I can get help.
Yep, I can push this PR @crenshaw-dev.
@ls0f can you run codegen and push? I'm also curious if you have reproduction/test steps handy? I don't tinker with proxies often, so if you have particular tools or steps you've used, that would help me test. :-)
https://github.com/zhang-xuebin/argo-helm/tree/host-on-gke is an example of letting Argo manage private GKE clusters. It doesn't matter if the cluster is public or private. It works for Argo 2.4.1+.
Looks like we are talking about different scenarios.
@ls0f any chance for the PR?
Signed-off-by: ls0f <lovedboy.tk@qq.com>
@crenshaw-dev The code is updated.
@ls0f one question: does it support multiple proxies? Each of our k8s clusters has a different proxy.
Every cluster managed by Argo can specify its own proxy.
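The per-cluster proxy setting discussed here would live in Argo CD's declarative cluster secret. A minimal sketch, assuming a `proxyUrl` key inside the cluster `config` JSON (the field name in this WIP branch may differ, and the server address, proxy address, and token are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mycluster-secret
  namespace: argocd
  labels:
    argocd.argoproj.io/secret-type: cluster
type: Opaque
stringData:
  name: mycluster
  server: https://1.2.3.4
  # Each cluster secret carries its own proxy setting, so clusters behind
  # different proxies can coexist (the "multiple proxies" question above).
  config: |
    {
      "bearerToken": "<token>",
      "proxyUrl": "http://proxy-a.internal:3128",
      "tlsClientConfig": {
        "insecure": false
      }
    }
```

Since the proxy is part of each cluster's own secret rather than a global setting, one Argo CD instance could reach different clusters through different proxies.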
@crenshaw-dev would you help with it?
Hope it will be ready in 2.5. All of our k8s clusters are behind a proxy.
Hello @ls0f, now it is :)
The link seems to be inaccessible.
Hello, I fixed it, here is the link: 582d59f
@ls0f hi, any updates on this feature?
I have been using this change for about half a year without any issues.
Is there a timeline to get this merged? We need to rebuild argocd/run this as a local patch right now to be able to manage some of our deployments.
@ls0f are you still willing to push this in?
Sorry for the long delay in responding, but I haven't had time to work on this lately. Is there anyone willing to develop this feature? @dje4om @manuelstein
Is there much remaining to do on this? Seems like a pretty critical feature?
Any news? Would be super cool!
Any movement on this?
@crenshaw-dev what's the best way to get this merged? We've been using a patched Argo CD with this for 6 months and it's working great.
The workaround of @qixiaobo did not work for us. We were also unable to patch and compile Argo CD ourselves, so we got creative and came up with a different solution: a deployment of a kubectl container which generates a kubeconfig for EKS and runs this command:
Then the Argo CD cluster secret points to the service of the pod over http (https wouldn't work). We use Cilium, so traffic is still encrypted. We don't know about the stability of the solution yet; we just built a cronjob to redeploy it every hour...

This is also "insecure", since it has no authentication, so any pod could run kubectl commands on the target cluster. Our cluster is not easily reachable, so it should be fine, but still...

We would greatly appreciate it if the proper solution this pull request provides could finally be merged, because we don't want to work with janky workarounds. This was pretty frustrating, since we have no good alternative in an on-premise environment that needs to communicate with an external cluster.
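The actual command was lost from this page. A hedged sketch of what the described setup might look like, based only on the description above (generate an EKS kubeconfig, then expose the API server over plain HTTP for the cluster secret to point at); the cluster name, region, and port are placeholders, not the commenter's values:

```shell
# Inside the kubectl container: generate a kubeconfig for the target
# EKS cluster (cluster name and region are placeholders).
aws eks update-kubeconfig --name my-target-cluster --region eu-central-1

# Serve the Kubernetes API over unauthenticated HTTP inside the pod;
# the Argo CD cluster secret's `server` then points at this pod's
# Service. This is the "no authentication" risk the comment mentions.
kubectl proxy --address=0.0.0.0 --port=8001 --accept-hosts='.*'
```

This relies on network policy (or, as above, the cluster simply not being reachable) for security, which is why the commenter calls it a janky workaround compared to native proxy support.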
Hey everyone,
We need this too, I guess we'll have to compile our own...
#push. A hint on the test requirements would be great, to be able to contribute the tests needed for a merge @crenshaw-dev
Merged #20374 |
May fix
#8314
#6043
WIP
It should be discussed first, then more tests added.
Checklist: