/kind bug
1. What kops version are you running? The command kops version will display this information.

Client version: 1.32.0 (git-v1.32.0)
2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.

Client Version: v1.32.2
Kustomize Version: v5.5.0
Server Version: v1.32.4
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops replace --name "${CLUSTER_NAME}" -f cluster-original.yaml --force
kops update cluster --yes
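For completeness, the deletion can be previewed before it is applied: running kops update cluster without --yes performs a dry run and prints the planned changes, and the IAM role removal for dns-controller shows up in that plan.

```shell
# Same sequence as above, but with a dry-run preview step first.
kops replace --name "${CLUSTER_NAME}" -f cluster-original.yaml --force
# Without --yes, this only prints the planned changes (including the
# IAM role deletion) and modifies nothing.
kops update cluster --name "${CLUSTER_NAME}"
```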
5. What happened after the commands executed?
kops deleted the IAM role for dns-controller (dns-controller.kube-system.sa.k8s.cluster.lan).
6. What did you expect to happen?
kops updates the dns-controller deployment so that it watches Ingress resources.
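For context, the relevant spec change being applied (visible in the full manifest in section 7) is enabling Ingress watching for external DNS:

```yaml
spec:
  externalDns:
    watchIngress: true
```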
7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.
```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
  name: <CLUSTER_NAME> # e.g. k8s.example.internal
spec:
  api:
    dns: {}
  authentication:
    aws:
      backendMode: CRD
      clusterID: <CLUSTER_ID> # usually same as metadata.name
      identityMappings:
      - arn: arn:aws:iam::<ACCOUNT_ID>:role/<ADMIN_SSO_ROLE_PATH>
        username: admin:{{SessionName}}
        groups:
        - system:masters
      - arn: arn:aws:iam::<ACCOUNT_ID>:role/<ADMIN_SSO_ROLE>
        username: admin:{{SessionName}}
        groups:
        - system:masters
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://<KOPS_STATE_BUCKET>/<CLUSTER_NAME>
  dnsZone: <ROUTE53_PRIVATE_ZONE_ID>
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-<AZ>
      name: b
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: main
  - cpuRequest: 100m
    etcdMembers:
    - encryptedVolume: true
      instanceGroup: control-plane-<AZ>
      name: b
    manager:
      backupRetentionDays: 90
    memoryRequest: 100Mi
    name: events
  serviceAccountIssuerDiscovery:
    discoveryStore: s3://<OIDC_STORE_BUCKET>
    enableAWSOIDCProvider: true
  iam:
    allowContainerRegistry: true
    useServiceAccountExternalPermissions: true
    legacy: false
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.32.4
  networkID: <VPC_ID>
  networkCIDR: <VPC_CIDR> # e.g. 10.10.0.0/16
  nonMasqueradeCIDR: 100.64.0.0/10
  networking:
    calico: {}
  sshKeyName: <SSH_KEY_NAME>
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - name: hub-utility-<AZ>
    id: <UTILITY_SUBNET_ID>
    type: Utility
    zone: <AZ>
    egress: External
  - name: hub-private-<AZ>
    id: <PRIVATE_SUBNET_ID>
    type: Private
    zone: <AZ>
    egress: External
  topology:
    dns:
      type: Private
    masters: private
    nodes: private
  externalDns:
    watchIngress: true
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: <CLUSTER_NAME>
  name: control-plane-<AZ>
spec:
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - hub-private-<AZ>
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  labels:
    kops.k8s.io/cluster: <CLUSTER_NAME>
  name: nodes-<AZ>
spec:
  machineType: t3a.medium
  maxSize: 1
  minSize: 1
  role: Node
  subnets:
  - hub-private-<AZ>
```
8. Please run the commands with the most verbose logging by adding the -v 10 flag. Paste the logs into this report, or into a gist and provide the gist link here.

9. Anything else we need to know?