Terraform Version, Provider Version and Kubernetes Version
Effectively a duplicate of #1712, but on v2.12.1.
Terraform version: 1.1.1 (also tried 1.2.8)
Kubernetes provider version: 2.12.1
Kubernetes version: 1.21
Affected Resource(s)
- kubernetes_manifest
Terraform Configuration Files
```hcl
resource "kubernetes_manifest" "worker_machine_set" {
  for_each = var.az_subnet_map

  manifest = yamldecode(templatefile("${path.module}/manifests/machineset/worker-machine-set.yaml", {
    az            = "${var.region}${each.key}",
    region        = var.region,
    environment   = var.environment,
    subnet_id     = each.value,
    ami_id        = var.worker_ami_id,
    instance_type = var.worker_instance_type
  }))

  field_manager {
    force_conflicts = true
  }

  lifecycle {
    prevent_destroy = true
  }
}
```
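For context, a minimal sketch of the variable declarations this module assumes (the names come from the configuration above; the types are my assumption):

```hcl
# Hypothetical variable declarations matching the configuration above.
variable "az_subnet_map" {
  # AZ suffix ("a", "b", ...) -> subnet ID; the keys drive for_each
  type = map(string)
}

variable "region" {
  type = string
}

variable "environment" {
  type = string
}

variable "worker_ami_id" {
  type = string
}

variable "worker_instance_type" {
  type = string
}
```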
worker-machine-set.yaml
```yaml
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: ${environment}-worker-${az}
  namespace: openshift-machine-api
  labels:
    machine.openshift.io/cluster-api-cluster: ${environment}
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: ${environment}
      machine.openshift.io/cluster-api-machineset: ${environment}-worker-${az}
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-cluster: ${environment}
        machine.openshift.io/cluster-api-machine-role: worker
        machine.openshift.io/cluster-api-machine-type: worker
        machine.openshift.io/cluster-api-machineset: ${environment}-worker-${az}
    spec:
      providerSpec:
        value:
          userDataSecret:
            name: worker-user-data
          placement:
            availabilityZone: ${az}
            region: ${region}
          credentialsSecret:
            name: aws-cloud-credentials
          instanceType: ${instance_type}
          blockDevices:
            - ebs:
                encrypted: true
                iops: 2000
                kmsKey:
                  arn: ""
                volumeSize: 500
                volumeType: io1
          securityGroups:
            - filters:
                - name: "tag:Name"
                  values:
                    - ${environment}-worker-sg*
          kind: AWSMachineProviderConfig
          loadBalancers:
            - name: ${environment}-ingress
              type: network
          tags:
            - name: kubernetes.io/cluster/${environment}
              value: owned
            - name: deployment
              value: worker
          deviceIndex: 0
          ami:
            id: ${ami_id}
          subnet:
            id: ${subnet_id}
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          iamInstanceProfile:
            id: kubic-${environment}-worker
```
Steps to Reproduce
```
terraform import 'module.kubic_crds.kubernetes_manifest.worker_machine_set["a"]' "apiVersion=machine.openshift.io/v1beta1,kind=MachineSet,namespace=openshift-machine-api,name=bs-ops-worker-us-east-1a"
terragrunt plan -target "module.kubic_crds"
```
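For reference, a sketch of how the imported object's name relates to the template inputs (my assumption: `environment = "bs-ops"`, `region = "us-east-1"`, and the `for_each` key is `"a"`, matching the name in the import ID above):

```shell
# Hypothetical input values inferred from the import ID; not taken from real state.
environment="bs-ops"
region="us-east-1"
az_key="a"

# Mirrors az = "${var.region}${each.key}" in the Terraform configuration
az="${region}${az_key}"

# Mirrors metadata.name: ${environment}-worker-${az} in the manifest template
name="${environment}-worker-${az}"
echo "$name"
```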
Expected Behavior
The plan should update the resource. This is the behavior I see with v2.8.0 of the provider.
Actual Behavior
The plan attempts to recreate the resource, which in this case is not allowed because lifecycle.prevent_destroy is true.
Important Factoids
Terragrunt is used to wrap the calls to Terraform, but it is still Terraform that is ultimately being invoked; Terragrunt simply provides a couple of pre/post hooks for running scripts.
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment