Fix routing for TPROXY mode #28363
Conversation
🤔 🐛 You appear to be fixing a bug in Go code, yet your PR doesn't include updates to any test files. Did you forget to add a test? Courtesy of your friendly test nag.
😊 Welcome @gmemcc! This is either your first contribution to the Istio istio repo, or it's been a while since we've seen you. You can learn more about the Istio working groups, code of conduct, and contributing guidelines. Thanks for contributing! Courtesy of your friendly welcome wagon.
Hi @gmemcc. Thanks for your PR. I'm waiting for an istio member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
cc @istio/wg-networking-maintainers and @istio/technical-oversight-committee. We need to either add proper tests and docs for this feature or consider removal. Otherwise we end up in a permanent experimental state, which leads to a lack of user trust.
/ok-to-test
```diff
@@ -104,7 +105,9 @@ var (
 	OriginalSrc = &listener.ListenerFilter{
 		Name: OriginalSrcFilterName,
 		ConfigType: &listener.ListenerFilter_TypedConfig{
-			TypedConfig: util.MessageToAny(&originalsrc.OriginalSrc{}),
+			TypedConfig: util.MessageToAny(&originalsrc.OriginalSrc{
+				Mark: uint32(os.Getegid()),
```
I don't understand why this is used as the mark?!
Should this be set as 1337?
The pod template provided by the configmap istio-sidecar-injector declares `runAsGroup: 1337` for TPROXY mode:

```yaml
privileged: {{ .Values.global.proxy.privileged }}
readOnlyRootFilesystem: {{ not .Values.global.proxy.enableCoreDump }}
runAsGroup: 1337
fsGroup: 1337
{{ if or (eq (annotation .ObjectMeta `sidecar.istio.io/interceptionMode` .ProxyConfig.InterceptionMode) `TPROXY`) (eq (annotation .ObjectMeta `sidecar.istio.io/capNetBindService` .Values.global.proxy.capNetBindService) `true`) -}}
runAsNonRoot: false
runAsUser: 0
{{- else -}}
runAsNonRoot: true
runAsUser: 1337
{{- end }}
```

I think getting the mark dynamically is more portable.
I did not realize that istiod is run as 1337. @howardjohn, I remember you said there is a case where we run istiod locally, not in k8s.
We should not assume the istiod uid/gid correlates. We set it to 1337 only out of convenience.
My misreading... should we introduce an environment variable or just hardcode 1337?
@gmemcc Can you also take outside traffic through the ingress gateway into consideration? We have a use case to get the client source IP in that scenario.
I have tested it. To make it work, we need to make several other changes too.
```go
	"-j", constants.MARK, "--set-mark", iptConfigurator.cfg.InboundTProxyMark)
	"-p", constants.TCP, "-m", "connmark", "--mark", iptConfigurator.cfg.InboundTProxyMark, "-j", "CONNMARK", "--restore-mark")
// prevent infinite redirect
iptConfigurator.iptables.InsertRuleV4(constants.ISTIOINBOUND, constants.MANGLE, 1,
```
This rule may be unnecessary; it works without it.
Yes, I also found that the last rule is not necessary in the default configuration. But without that rule, after applying a Sidecar object:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: default
spec:
  egress:
  - hosts:
    - "default/*"
```

the infinite redirection appeared again.
If the ingress gateway acts as a transparent proxy, it will send packets whose src IP is the real client IP to the upstream workload. How to fix routing for the reply packets may be environment dependent; it usually cannot be done via iptables rules.
@gmemcc: The following tests failed; say `/retest` to rerun all failed tests.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
@hzxuzhonghu in my cluster, leaving
Mine is 4.4.0-142-generic |
Maybe via configuring the ingress gateway to send the PROXY protocol; I will investigate it.
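For reference, the PROXY protocol idea could look roughly like the following EnvoyFilter, which adds Envoy's `proxy_protocol` listener filter on the gateway so the client source IP is carried in-band instead of via transparent proxying. This is a sketch of the common pattern, not something tested as part of this PR; the name, namespace, and selector are assumptions.

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: proxy-protocol          # hypothetical name
  namespace: istio-system
spec:
  workloadSelector:
    labels:
      istio: ingressgateway
  configPatches:
  - applyTo: LISTENER
    patch:
      operation: MERGE
      value:
        listener_filters:
        - name: envoy.filters.listener.proxy_protocol
```

The upstream side would then need to understand PROXY protocol headers for this to preserve the client IP end to end.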
See #28457.
This PR fixes #23369.

In old versions, Envoy sent upstream requests to 127.0.0.1, but now it sends them to the original destination IP (the pod IP), which causes TPROXY to not work properly.
This PR affects the following product areas:
[ ] Configuration Infrastructure
[ ] Docs
[ ] Installation
[X] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure