cilium-cli: add IPv6 connectivity test for LocalRedirectPolicy #37192
Conversation
(Not an issue with this PR, but these tests should probably be owned by @cilium/sig-lb; I've filed #37193 to add respective
Hi @saiaunghlyanhtet, thank you for the PR! I tested it in my local kind cluster, but it looks like it's failing. Would you be able to check it?
$ go build ./cilium-cli/cmd/cilium/
$ ./cilium connectivity test --include-unsafe-tests --test "local-redirect-policy"
[=] [cilium-test-1] Test [local-redirect-policy] [96/109]
ℹ️ 📜 Applying CiliumLocalRedirectPolicy 'lrp-address-matcher' to namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Applying CiliumNetworkPolicy 'client-egress-to-cidr-lrp-deny' to namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Applying CiliumLocalRedirectPolicy 'lrp-address-matcher-skip-redirect-from-backend' to namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Applying CiliumLocalRedirectPolicy 'lrp-address-matcher-ipv6' to namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Applying CiliumLocalRedirectPolicy 'lrp-address-matcher-skip-redirect-from-backend-ipv6' to namespace 'cilium-test-1' on cluster kind-kind..
[-] Scenario [local-redirect-policy/lrp]
🟥 Failed to ensure local redirect BPF entries: %w timeout while waiting for condition, last error: frontend [[fd00::169:254:169:254]:80/TCP] backend [10.244.3.202] mapping not found in BPF LB map [cilium-6p9kv] map[0.0.0.0:30301/TCP:[0.0.0.0:0 (11) (0) [NodePort, non-routable, dsr] 10.244.3.62:8080/TCP (11) (1)] 0.0.0.0:31193/TCP:[10.244.2.217:8080/TCP (16) (1) 0.0.0.0:0 (16) (0) [NodePort, non-routable, dsr] ] 0.0.0.0:4000/TCP:[0.0.0.0:0 (20) (0) [HostPort, non-routable] 10.244.3.62:8080/TCP (20) (1)] 10.96.0.10:53/TCP:[10.244.2.239:53/TCP (5) (1) 0.0.0.0:0 (5) (0) [ClusterIP, non-routable, dsr] 10.244.2.91:53/TCP (5) (2)] 10.96.0.10:53/UDP:[10.244.2.239:53/UDP (4) (1) 10.244.2.91:53/UDP (4) (2) 0.0.0.0:0 (4) (0) [ClusterIP, non-routable, dsr] ] 10.96.0.10:9153/TCP:[10.244.2.239:9153/TCP (3) (1) 10.244.2.91:9153/TCP (3) (2) 0.0.0.0:0 (3) (0) [ClusterIP, non-routable, dsr] ] 10.96.0.1:443/TCP:[0.0.0.0:0 (1) (0) [ClusterIP, non-routable, dsr] 172.18.0.4:6443/TCP (1) (1)] 10.96.25.173:443/TCP:[0.0.0.0:0 (2) (0) [ClusterIP, InternalLocal, non-routable, dsr] 172.18.0.5:4244/TCP (2) (1)] 10.96.35.254:8080/TCP:[10.244.2.217:8080/TCP (12) (1) 0.0.0.0:0 (12) (0) [ClusterIP, non-routable, dsr] ] 10.96.55.196:8080/TCP:[10.244.3.62:8080/TCP (6) (1) 0.0.0.0:0 (6) (0) [ClusterIP, non-routable, dsr] ] 169.254.169.254:80/TCP:[10.244.3.202:8080/TCP (22) (1) 0.0.0.0:0 (22) (0) [LocalRedirect] ] 169.254.169.255:80/TCP:[0.0.0.0:0 (23) (0) [LocalRedirect] 10.244.3.202:8080/TCP (23) (1)] 172.18.0.5:30301/TCP:[0.0.0.0:0 (10) (0) [NodePort, dsr] 10.244.3.62:8080/TCP (10) (1)] 172.18.0.5:31193/TCP:[10.244.2.217:8080/TCP (15) (1) 0.0.0.0:0 (15) (0) [NodePort, dsr] ] 172.18.0.5:4000/TCP:[10.244.3.62:8080/TCP (19) (1) 0.0.0.0:0 (19) (0) [HostPort] ] [::]:30301/TCP:[[::]:0 (8) (0) [NodePort, non-routable, dsr] [fd00:10:244:3::a58b]:8080/TCP (8) (1)] [::]:31193/TCP:[[fd00:10:244:2::2568]:8080/TCP (17) (1) [::]:0 (17) (0) [NodePort, non-routable, dsr] ] 
[::]:4000/TCP:[[fd00:10:244:3::a58b]:8080/TCP (21) (1) [::]:0 (21) (0) [HostPort, non-routable] ] [fc00:c111::5]:30301/TCP:[[fd00:10:244:3::a58b]:8080/TCP (9) (1) [::]:0 (9) (0) [NodePort, dsr] ] [fc00:c111::5]:31193/TCP:[[::]:0 (14) (0) [NodePort, dsr] [fd00:10:244:2::2568]:8080/TCP (14) (1)] [fc00:c111::5]:4000/TCP:[[fd00:10:244:3::a58b]:8080/TCP (18) (1) [::]:0 (18) (0) [HostPort] ] [fd00:10:96::4036]:8080/TCP:[[fd00:10:244:3::a58b]:8080/TCP (7) (1) [::]:0 (7) (0) [ClusterIP, non-routable, dsr] ] [fd00:10:96::97c4]:8080/TCP:[[::]:0 (13) (0) [ClusterIP, non-routable, dsr] [fd00:10:244:2::2568]:8080/TCP (13) (1)] [fd00::169:254:169:254]:80/TCP:[[::]:0 (24) (0) [LocalRedirect] [fd00:10:244:3::275e]:8080/TCP (24) (1)] [fd00::169:254:169:255]:80/TCP:[[::]:0 (25) (0) [LocalRedirect] [fd00:10:244:3::275e]:8080/TCP (25) (1)]]
ℹ️ 📜 Deleting CiliumLocalRedirectPolicy 'lrp-address-matcher' in namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Deleting CiliumNetworkPolicy 'client-egress-to-cidr-lrp-deny' in namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Deleting CiliumLocalRedirectPolicy 'lrp-address-matcher-skip-redirect-from-backend' in namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Deleting CiliumLocalRedirectPolicy 'lrp-address-matcher-ipv6' in namespace 'cilium-test-1' on cluster kind-kind..
ℹ️ 📜 Deleting CiliumLocalRedirectPolicy 'lrp-address-matcher-skip-redirect-from-backend-ipv6' in namespace 'cilium-test-1' on cluster kind-kind..
📋 Test Report [cilium-test-1]
❌ 1/1 tests failed (0/0 actions), 108 tests skipped, 0 scenarios skipped:
Please fix the test failure mentioned above
The first time I tested it, it worked, but the second time it failed. I am looking into it.
Force-pushed from 1c6bcfb to 63f87e5
@ysksuzuki Sorry for the delay. The previous test failures are fixed, but there is something annoying: when I test IPv6 with skipRedirectFromBackend set to true, sometimes the test passes and sometimes it does not. I also found issue #36740. What do you think about this?
It seems that
Opened #37575. We need to fix it first, and then add the test for IPv6.
Heh, I think this will still fail in CI because we're starting the e2e-upgrade workflow on v1.17. Let me set up a test branch quickly...
Force-pushed from ae7c994 to 81d89fc
Fixed by skipping the IPv6 scenario with skipRedirectFromBackend=false when per-packet LB is enabled.
/test
Force-pushed from e30e555 to 9ed70ff
/ci-e2e-upgrade
@julianwiedmann @ysksuzuki
Thank you @saiaunghlyanhtet! Looks like it panics when running on a cluster with IPv6 disabled. There's an issue where the agent crashes when deleting a CLRP while IPv6 is disabled. I can take care of this one.
Signed-off-by: saiaunghlyanhtet <saiaunghlyanhtet2003@gmail.com>
Force-pushed from 9ed70ff to 957727e
/test
Used ForEachIPFamily to handle the tests for each IP family. I think the current CI failures are not related to this PR.
LGTM. Thanks!
GitHub wants a review from @cilium/ci-structure; it seems it did not request one automatically, so I requested it manually.
Opened a PR to add the version check.
Modified LRP connectivity test for IPv6.
Fixes: #36960