Closed
Labels
blocked/aws, kind/feature (New feature or request), priority/important-longterm (Important over the long term, but may not be currently staffed and/or may require multiple releases)
Description
Before October 2022, EKS clusters could not be created against the AZ cnn1-az4 (mapped to cn-north-1d in most accounts), so eksctl added a fix to avoid this AZ by default (#3916). Now that EKS supports creating the control plane in all AZs of the cn-north-1 region, it's time to revert this temporary fix in eksctl.
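For context, the fix in #3916 essentially filters that zone out of the AZ candidates eksctl considers by default. A minimal sketch of that kind of filter is below; the package, function, and variable names are illustrative, not eksctl's actual identifiers:

```go
package az

// zoneIDsToAvoid is an illustrative stand-in for the denylist added in #3916:
// cnn1-az4 (usually mapped to cn-north-1d) could not host an EKS control plane.
var zoneIDsToAvoid = map[string][]string{
	"cn-north-1": {"cnn1-az4"},
}

// filterZones drops any zone whose zone ID is on the per-region denylist.
// Reverting #3916 would amount to deleting this special case so that every
// zone returned by EC2 is eligible for default selection again.
func filterZones(region string, zoneIDsByName map[string]string) []string {
	avoided := map[string]bool{}
	for _, id := range zoneIDsToAvoid[region] {
		avoided[id] = true
	}
	var zones []string
	for name, id := range zoneIDsByName {
		if avoided[id] {
			continue
		}
		zones = append(zones, name)
	}
	return zones
}
```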
What feature/behavior/change do you want?
Revert the fix for #3916
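Concretely, the revert would remove the cn-north-1 special case from the sketch above. A quick check of the expected behaviour afterwards could look like the following, building on that same hypothetical package (again, not eksctl's real test helpers):

```go
package az

import "testing"

// After reverting #3916, cn-north-1d (cnn1-az4) should no longer be filtered
// out of the default AZ candidates for cn-north-1.
func TestCnNorth1dIsSelectable(t *testing.T) {
	zones := filterZones("cn-north-1", map[string]string{
		"cn-north-1a": "cnn1-az1",
		"cn-north-1b": "cnn1-az2",
		"cn-north-1d": "cnn1-az4",
	})
	found := false
	for _, z := range zones {
		if z == "cn-north-1d" {
			found = true
		}
	}
	if !found {
		t.Fatalf("expected cn-north-1d to be an eligible zone, got %v", zones)
	}
}
```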
Why do you want this feature?
EKS now supports creating the control plane in all AZs of the cn-north-1 region:
[ec2-user@ip-172-31-30-56 ~]$ eksctl create cluster --name bjs80-test --zones cn-north-1a,cn-north-1b,cn-north-1d --region cn-north-1
2022-10-16 11:26:59 [ℹ] eksctl version 0.115.0
2022-10-16 11:26:59 [ℹ] using region cn-north-1
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1a - public:192.168.0.0/19 private:192.168.96.0/19
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1b - public:192.168.32.0/19 private:192.168.128.0/19
2022-10-16 11:26:59 [ℹ] subnets for cn-north-1d - public:192.168.64.0/19 private:192.168.160.0/19
2022-10-16 11:26:59 [ℹ] nodegroup "ng-ca601265" will use "" [AmazonLinux2/1.23]
2022-10-16 11:26:59 [ℹ] using Kubernetes version 1.23
2022-10-16 11:26:59 [ℹ] creating EKS cluster "bjs80-test" in "cn-north-1" region with managed nodes
2022-10-16 11:26:59 [ℹ] will create 2 separate CloudFormation stacks for cluster itself and the initial managed nodegroup
2022-10-16 11:26:59 [ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=cn-north-1 --cluster=bjs80-test'
2022-10-16 11:26:59 [ℹ] Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "bjs80-test" in "cn-north-1"
2022-10-16 11:26:59 [ℹ] CloudWatch logging will not be enabled for cluster "bjs80-test" in "cn-north-1"
2022-10-16 11:26:59 [ℹ] you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=cn-north-1 --cluster=bjs80-test'
2022-10-16 11:26:59 [ℹ]
2 sequential tasks: { create cluster control plane "bjs80-test",
2 sequential sub-tasks: {
wait for control plane to become ready,
create managed nodegroup "ng-ca601265",
}
}
2022-10-16 11:26:59 [ℹ] building cluster stack "eksctl-bjs80-test-cluster"
2022-10-16 11:26:59 [ℹ] deploying stack "eksctl-bjs80-test-cluster"
2022-10-16 11:27:29 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:27:59 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:28:59 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:30:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:31:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:32:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:33:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:34:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:35:00 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:36:01 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:37:01 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-cluster"
2022-10-16 11:39:03 [ℹ] building managed nodegroup stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:03 [ℹ] deploying stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:03 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:39:33 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:40:10 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:41:12 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:42:51 [ℹ] waiting for CloudFormation stack "eksctl-bjs80-test-nodegroup-ng-ca601265"
2022-10-16 11:42:51 [ℹ] waiting for the control plane to become ready
2022-10-16 11:42:51 [✔] saved kubeconfig as "/home/ec2-user/.kube/config"
2022-10-16 11:42:51 [ℹ] no tasks
2022-10-16 11:42:51 [✔] all EKS cluster resources for "bjs80-test" have been created
2022-10-16 11:42:52 [ℹ] nodegroup "ng-ca601265" has 2 node(s)
2022-10-16 11:42:52 [ℹ] node "ip-192-168-13-128.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] node "ip-192-168-80-198.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] waiting for at least 2 node(s) to become ready in "ng-ca601265"
2022-10-16 11:42:52 [ℹ] nodegroup "ng-ca601265" has 2 node(s)
2022-10-16 11:42:52 [ℹ] node "ip-192-168-13-128.cn-north-1.compute.internal" is ready
2022-10-16 11:42:52 [ℹ] node "ip-192-168-80-198.cn-north-1.compute.internal" is ready
2022-10-16 11:42:53 [ℹ] kubectl command should work with "/home/ec2-user/.kube/config", try 'kubectl get nodes'
2022-10-16 11:42:53 [✔] EKS cluster "bjs80-test" in "cn-north-1" region is ready