# Multi-AZ API Server LoadBalancer for CAPO

## Summary
Add first-class Multi-AZ support for the Kubernetes control plane LoadBalancer in Cluster API Provider OpenStack (CAPO). The feature reconciles one Octavia LoadBalancer per Availability Zone (AZ), places each VIP in the intended subnet for that AZ via an explicit AZ→Subnet mapping, and by default registers control plane nodes only with the LB in the same AZ. Operators expose the control plane endpoint via external DNS multi-value A records that point at the per-AZ LB IPs. This proposal is additive and backward compatible.

## Motivation
- Achieve true multi-AZ resilience for the control plane by avoiding a single VIP dependency.
- Align control plane networking with existing multi-AZ compute placement goals.
- Provide clear, portable primitives across Octavia providers, with native AZ hints and an explicit, unambiguous mapping between AZs and VIP subnets.

## Goals
- Create and manage one API server LoadBalancer per configured AZ.
- Support explicit AZ→Subnet mapping only (no positional mapping).
- Default to same-AZ LB membership for control plane nodes; allow opt-in cross-AZ registration.
- Keep the API additive, with strong validation, clear events, and documentation.
- Preserve user-provided DNS endpoints; DNS record management remains out of scope.

## Non-Goals
- Managing or provisioning DNS records.
- Provider-specific topologies such as ACTIVE_STANDBY across fault domains.
- Service type LoadBalancer for worker Services.

## User Stories
1) As a platform engineer, I want per-AZ LBs so that a full AZ outage leaves the cluster reachable via DNS multi-A records that resolve to the remaining AZs.
2) As an operator, I want a safe migration path from single-LB clusters to per-AZ LBs without downtime.
3) As a security-conscious user, I want to restrict VIP access with allowed CIDRs when supported by my Octavia provider.

## Design Overview

### High-level behavior
- When enabled and configured with an explicit mapping, CAPO reconciles one LoadBalancer per Availability Zone (AZ).
- VIP placement is controlled only by an explicit mapping list that binds each AZ to a specific subnet on the LB network.
- Each per-AZ LB is named with an AZ suffix.
- Control plane nodes are registered as LB members only in their own AZ by default; opt-in cross-AZ membership is supported.
- Operators expose an external DNS name for the control plane endpoint with one A/AAAA record per AZ LB IP.

### Architecture diagram
```mermaid
flowchart LR
  Clients --> DNS[External DNS zone]
  DNS -->|A record per AZ| LBa[LB az1]
  DNS -->|A record per AZ| LBb[LB az2]
  DNS -->|A record per AZ| LBn[LB azN]
  subgraph OpenStack
    LBa --> LaL[Listeners] --> Pa[Pools] --> CP1[Control plane nodes in az1]
    LBb --> LbL[Listeners] --> Pb[Pools] --> CP2[Control plane nodes in az2]
    LBn --> LnL[Listeners] --> Pn[Pools] --> CPn[Control plane nodes in azN]
  end
```

## Integration with External Global Server Load Balancing (GSLB)

External GSLB systems (e.g., Route 53 health-checked records, Akamai GTM, Cloudflare Load Balancing, NS1, F5 DNS/GTM) pair naturally with this Multi-AZ LB design:

- Clear targets: Each AZ has its own LB with a stable IP (floating IP or provider VIP) and a deterministic name. These per-AZ endpoints are ideal GSLB health-check targets.
- Health-aware failover: GSLB continuously probes each per-AZ LB (TCP 6443 or an alternative port configured via additionalPorts) and automatically removes unhealthy AZ endpoints from DNS responses.
- Improved blast-radius isolation: An AZ outage affects only the corresponding AZ LB. GSLB maintains service by answering with the remaining healthy AZ LB IPs.
- Policy flexibility: GSLB policies (failover, weighted round-robin, latency/geo) can prefer:
  - Same-region/same-AZ endpoints for lowest latency
  - Spillover to other AZs only on failure
  - Weighted distribution across AZs for capacity utilization

### Recommended GSLB patterns
- Record model: Use a single control plane FQDN (the cluster's spec.controlPlaneEndpoint.Host) and publish multiple A/AAAA records, one per AZ LB IP.
- Health checks:
  - Protocol: TCP on the API port (default 6443). Even for providers that support L7 checks, TCP is generally sufficient for the Kubernetes API.
  - Source IPs: Ensure GSLB checker IPs are permitted if using allowedCIDRs on listeners.
- TTL guidance:
  - Use a low TTL (e.g., 30–60s) to accelerate failover while balancing resolver load.
  - Be aware that some clients cache beyond the TTL; plan operationally for a brief grace period during failover.
- IP sourcing:
  - Floating IPs typically simplify routing and are stable across LB re-creation.
  - If using fixed VIPs (no floating IP), ensure they are routable from your GSLB health-check network and from any external resolvers that must reach them.
- Automation hooks:
  - Deterministic LB naming (per-AZ suffix) and tags facilitate discovery by GSLB automation that registers and updates record sets.
  - A controller or out-of-band job can list the per-AZ LBs and synchronize GSLB records and health checks, as sketched below.
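
The following Go sketch illustrates that automation hook. The LoadBalancerStatus shape loosely mirrors the per-AZ entries proposed for status.apiServerLoadBalancers; the helper and the health-check callback are hypothetical, not part of CAPO or any GSLB SDK.

```go
package main

import "fmt"

// LoadBalancerStatus mirrors the per-AZ entries proposed for
// status.apiServerLoadBalancers; the shape is simplified for this sketch.
type LoadBalancerStatus struct {
	AvailabilityZone string
	IP               string // floating IP or provider VIP
}

// desiredARecords computes the record values a GSLB sync job would publish
// for the control plane FQDN: one A record per healthy per-AZ LB.
func desiredARecords(lbs []LoadBalancerStatus, healthy func(ip string) bool) []string {
	var records []string
	for _, lb := range lbs {
		if lb.IP == "" || !healthy(lb.IP) {
			continue // skip AZs without a routable IP or with failing checks
		}
		records = append(records, lb.IP)
	}
	return records
}

func main() {
	lbs := []LoadBalancerStatus{
		{AvailabilityZone: "az1", IP: "192.0.2.10"},
		{AvailabilityZone: "az2", IP: "192.0.2.20"},
	}
	// A real job would upsert these as multi-value A/AAAA answers for the
	// control plane FQDN with a low TTL (e.g., 30-60s).
	fmt.Println(desiredARecords(lbs, func(string) bool { return true }))
}
```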

### Failure scenarios and behavior
- Single AZ failure: The corresponding per-AZ LB becomes unhealthy; GSLB health checks fail; DNS answers exclude that AZ until recovery. Existing connections may break depending on client TCP retry behavior; new connections will target healthy AZs.
- Partial AZ degradation (e.g., only some pool members unhealthy, or monitor thresholds partially breached): Octavia monitor status influences LB health; ensure GSLB health thresholds align with Octavia monitor sensitivity to avoid premature removal or flapping.
- Network partitions from health-check vantage points:
  - If GSLB checkers reside outside the cloud, confirm egress paths to the per-AZ IPs and ensure allowedCIDRs permit probes from those checkers.
  - Consider diverse checker regions to avoid false positives due to upstream routing issues.

### Operational considerations
- Access control: When using allowedCIDRs, include:
  - Management cluster egress IPs (so CAPO can reconcile listeners/pools/monitors)
  - Bastion/router IPs as needed for administration
  - GSLB health-check source IP ranges
- Observability:
  - Track per-AZ LB health and GSLB health-check status together to diagnose discrepancies (an LB marked healthy while GSLB marks it unhealthy often indicates ACL or routing issues).
- Multi-region future: This proposal focuses on multi-AZ within a region. If multi-region support is introduced later, the same per-AZ model composes naturally: per-AZ LBs in each region, with GSLB distributing across regions using latency- or geo-based policies and regional failover priorities.

This integration gives operators health-aware, low-latency, failure-tolerant access to the Kubernetes API without CAPO managing DNS, while leveraging the explicit per-AZ LB separation for precise GSLB control.

## API Changes (additive)

All changes are confined to the OpenStackCluster API and are backward compatible. Proposed changes live in:
- [api/v1beta1/openstackcluster_types.go](api/v1beta1/openstackcluster_types.go)
- [api/v1beta1/types.go](api/v1beta1/types.go)

### Spec additions on APIServerLoadBalancer
- availabilityZoneSubnets []AZSubnetMapping (required to enable multi-AZ)
  - Explicit mapping; each entry includes:
    - availabilityZone string
    - subnet SubnetParam
  - The LB network MUST be specified via spec.apiServerLoadBalancer.network when using this mapping, and each mapped subnet MUST belong to that network.
- allowCrossAZLoadBalancerMembers *bool
  - Defaults to false.
  - When true, register control plane nodes with all per-AZ LBs; otherwise same-AZ only.
- additionalPorts []int
  - Optional extra listener ports besides the Kubernetes API port.
- allowedCIDRs []string
  - Optional VIP ACL list, applied when the Octavia provider supports it.

Notes:
- The existing single-value availabilityZone field (if present) is treated as a legacy single-AZ shorthand; multi-AZ requires availabilityZoneSubnets.
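
A minimal sketch of the proposed spec types, assuming Go definitions in api/v1beta1; field names, JSON tags, and markers are illustrative pending API review, and SubnetParam is shown as an opaque stand-in for CAPO's existing type.

```go
package v1beta1

// SubnetParam stands in for CAPO's existing subnet selector (by ID or
// filter); reduced here so the sketch is self-contained.
type SubnetParam struct{ ID string }

// AZSubnetMapping binds one availability zone to the subnet (on the LB
// network) that hosts that AZ's VIP.
type AZSubnetMapping struct {
	AvailabilityZone string      `json:"availabilityZone"`
	Subnet           SubnetParam `json:"subnet"` // must belong to spec.apiServerLoadBalancer.network
}

// Proposed additions to the existing APIServerLoadBalancer struct.
type APIServerLoadBalancer struct {
	// ... existing fields (enabled, network, etc.) ...

	// availabilityZoneSubnets enables multi-AZ when non-empty.
	// +optional
	AvailabilityZoneSubnets []AZSubnetMapping `json:"availabilityZoneSubnets,omitempty"`

	// allowCrossAZLoadBalancerMembers registers control plane nodes with
	// every per-AZ LB instead of only the same-AZ LB. Defaults to false.
	// +optional
	AllowCrossAZLoadBalancerMembers *bool `json:"allowCrossAZLoadBalancerMembers,omitempty"`

	// additionalPorts adds listeners beyond the Kubernetes API port.
	// +optional
	AdditionalPorts []int `json:"additionalPorts,omitempty"`

	// allowedCIDRs restricts VIP access where the provider supports ACLs.
	// +optional
	AllowedCIDRs []string `json:"allowedCIDRs,omitempty"`
}
```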

### Status additions
- apiServerLoadBalancers []LoadBalancer
  - A list-map keyed by availabilityZone (kubebuilder listMapKey=availabilityZone).
  - Each entry includes: name, id, ip, internalIP, tags, availabilityZone, loadBalancerNetwork, and allowedCIDRs.
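
A matching status sketch, with LoadBalancer reduced to the fields listed above (loadBalancerNetwork omitted for brevity); again illustrative, not the final API.

```go
package v1beta1

// LoadBalancer stands in for CAPO's existing status type, extended with
// the per-AZ fields listed above.
type LoadBalancer struct {
	Name             string   `json:"name,omitempty"`
	ID               string   `json:"id,omitempty"`
	IP               string   `json:"ip,omitempty"`
	InternalIP       string   `json:"internalIP,omitempty"`
	AvailabilityZone string   `json:"availabilityZone,omitempty"`
	Tags             []string `json:"tags,omitempty"`
	AllowedCIDRs     []string `json:"allowedCIDRs,omitempty"`
}

type OpenStackClusterStatus struct {
	// ... existing fields ...

	// apiServerLoadBalancers holds one entry per configured AZ, keyed by
	// availabilityZone so server-side apply can merge entries.
	// +listType=map
	// +listMapKey=availabilityZone
	// +optional
	APIServerLoadBalancers []LoadBalancer `json:"apiServerLoadBalancers,omitempty"`
}
```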

### Validation (CRD and controller)
- No duplicate availabilityZone values in availabilityZoneSubnets.
- Each availabilityZoneSubnets.subnet MUST resolve to a subnet that belongs to the specified LB network.
- No duplicate subnets across mappings.
- At least one mapping is required to enable multi-AZ; otherwise behavior is legacy single-LB.
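
Where CRD-level enforcement is feasible, the duplicate checks can be expressed as CEL rules on the field; the kubebuilder markers below are a sketch, and the rule text and messages are assumptions.

```go
// Illustrative CEL rules for availabilityZoneSubnets; exact placement and
// message text are assumptions, not final API markers.
// +kubebuilder:validation:XValidation:rule="self.all(m, self.exists_one(n, n.availabilityZone == m.availabilityZone))",message="availabilityZone values must be unique"
// +kubebuilder:validation:XValidation:rule="self.all(m, self.exists_one(n, n.subnet == m.subnet))",message="subnets must be unique across mappings"
AvailabilityZoneSubnets []AZSubnetMapping `json:"availabilityZoneSubnets,omitempty"`
```

Network membership cannot be verified in CEL because it requires resolving subnets against Neutron, so that rule remains an in-controller check.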

CRD updates in:
- [config/crd/bases/](config/crd/bases/)
- [config/crd/patches/](config/crd/patches/)

## Controller Design

Changes span these components:
- [controllers/openstackcluster_controller.go](controllers/openstackcluster_controller.go)
- [pkg/cloud/services/loadbalancer/](pkg/cloud/services/loadbalancer/)
- [pkg/cloud/services/networking/](pkg/cloud/services/networking/)

### VIP network and subnet resolution
- When spec.apiServerLoadBalancer.network is specified together with availabilityZoneSubnets:
  - Resolve each SubnetParam in order and validate that each belongs to the given LB network.
  - Derive the AZ list directly from the mapping entries.
  - Persist the LB network and the resolved subnets into the loadBalancerNetwork of the corresponding status.apiServerLoadBalancers entries.
- Legacy single-AZ behavior (no mapping provided):
  - If an LB network is specified but no mapping is provided, treat the cluster as single-LB and select a subnet per the legacy rules (unchanged).
  - If no LB network is specified, default to the cluster network's subnets (unchanged single-LB behavior).

Initialize or update status.apiServerLoadBalancers entries to carry the LB network reference.
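
A sketch of the resolution step under the assumptions above; getSubnet stands in for the Neutron lookup the networking service already performs, and the types are simplified stand-ins.

```go
package loadbalancer

import "fmt"

// Simplified stand-ins for CAPO's SubnetParam, the mapping entry, and the
// resolved Neutron subnet.
type SubnetParam struct{ ID string }
type Subnet struct{ ID, NetworkID string }
type AZSubnetMapping struct {
	AvailabilityZone string
	Subnet           SubnetParam
}

// resolveAZSubnets resolves each mapped subnet in order and verifies it
// belongs to the configured LB network, returning subnets keyed by AZ.
func resolveAZSubnets(networkID string, mappings []AZSubnetMapping,
	getSubnet func(SubnetParam) (*Subnet, error)) (map[string]*Subnet, error) {
	byAZ := make(map[string]*Subnet, len(mappings))
	for _, m := range mappings {
		subnet, err := getSubnet(m.Subnet)
		if err != nil {
			return nil, err
		}
		if subnet.NetworkID != networkID {
			return nil, fmt.Errorf("subnet %s (AZ %s) is not on LB network %s",
				subnet.ID, m.AvailabilityZone, networkID)
		}
		byAZ[m.AvailabilityZone] = subnet
	}
	return byAZ, nil
}
```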

### Per-AZ LoadBalancer reconciliation
For each AZ in availabilityZoneSubnets:
- Determine the VIP subnet from the mapping and create or adopt a LoadBalancer named (see the naming sketch below):
  - k8s-clusterapi-cluster-${NAMESPACE}-${CLUSTER_NAME}-${AZ}-kubeapi
- Set the Octavia availability zone hint when the provider supports it.
- Create or adopt listeners, pools, and monitors for the API port and any additionalPorts.
- If floating IPs are not disabled, allocate and associate a floating IP with the LB VIP port when needed.
- Update or insert the AZ entry in status.apiServerLoadBalancers, including name, id, internalIP, optional ip, tags, allowedCIDRs, and loadBalancerNetwork.
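
Because the naming template is deterministic, creation and adoption can share one helper; a trivial sketch:

```go
package loadbalancer

import "fmt"

// lbName renders the per-AZ LoadBalancer name from the template above.
func lbName(namespace, clusterName, az string) string {
	return fmt.Sprintf("k8s-clusterapi-cluster-%s-%s-%s-kubeapi", namespace, clusterName, az)
}
```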

### Legacy adoption and migration
- Discover legacy single-LB resources named:
  - k8s-clusterapi-cluster-${NAMESPACE}-${CLUSTER_NAME}-kubeapi
- When multi-AZ is enabled (availabilityZoneSubnets provided), rename legacy resources to the AZ-specific name for the first configured AZ, or adopt correctly named resources if they already exist.
- Emit clear events and warnings; ensure the operation is idempotent.

### Member registration behavior
- Determine the machine's failure domain (AZ) from the owning control plane machine.
- Default behavior: register the node only with the LoadBalancer whose availabilityZone matches the node's AZ; if a legacy LB without an AZ exists, include it as a fallback.
- When allowCrossAZLoadBalancerMembers is true, register the node with all per-AZ LBs.
- Reconcile membership across the API port and any additionalPorts.
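
A sketch of that selection rule; the helper name is hypothetical and the status shape is the simplified one used in the earlier GSLB sketch.

```go
package loadbalancer

// LoadBalancerStatus is the simplified per-AZ status shape from the
// earlier GSLB sketch.
type LoadBalancerStatus struct {
	AvailabilityZone string
	IP               string
}

// lbsForMachine returns the per-AZ LBs a control plane machine should join.
func lbsForMachine(machineAZ string, lbs []LoadBalancerStatus, crossAZ bool) []LoadBalancerStatus {
	if crossAZ {
		return lbs // opt-in: register with every per-AZ LB
	}
	var selected []LoadBalancerStatus
	for _, lb := range lbs {
		// Same-AZ match by default; a legacy LB without an AZ is a fallback.
		if lb.AvailabilityZone == machineAZ || lb.AvailabilityZone == "" {
			selected = append(selected, lb)
		}
	}
	return selected
}
```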

### Control plane endpoint
- Preserve a user-provided DNS name in spec.controlPlaneEndpoint when it is set and valid.
- Otherwise choose, in order:
  - An LB floating IP if present; otherwise an LB VIP.
  - If no LB host is available and floating IPs are allowed, allocate or adopt a floating IP for the cluster endpoint when applicable.
  - If floating IPs are disabled and a fixed IP is provided, use it.
- Operators are expected to configure DNS with one A/AAAA record per AZ LB IP for client-side failover. CAPO does not manage DNS.
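
That precedence can be summarized in a small helper; this is a sketch of the described order, not the actual CAPO code, and all inputs are illustrative.

```go
package loadbalancer

import "errors"

// endpointHost applies the endpoint precedence: user DNS, then an LB
// floating IP, then an LB VIP, then a fixed IP when floating IPs are
// disabled; otherwise a floating IP must be allocated or adopted.
func endpointHost(userDNS, lbFloatingIP, lbVIP, fixedIP string, floatingDisabled bool) (string, error) {
	switch {
	case userDNS != "":
		return userDNS, nil // preserve a valid user-provided endpoint
	case lbFloatingIP != "":
		return lbFloatingIP, nil
	case lbVIP != "":
		return lbVIP, nil
	case floatingDisabled && fixedIP != "":
		return fixedIP, nil
	case !floatingDisabled:
		return "", errors.New("allocate or adopt a floating IP (not shown in this sketch)")
	default:
		return "", errors.New("no control plane endpoint host available")
	}
}
```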

### Events and metrics
- Emit events for create/update/delete of LBs, listeners, pools, monitors, and floating IPs.
- Emit warnings when provider features are unavailable or when validations fail.
- Optional metrics (non-breaking) for per-AZ LB counts and reconciliation latency.

## Example configurations

### Explicit AZ→Subnet mapping (required for multi-AZ)
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: OpenStackCluster
metadata:
  name: my-cluster
  namespace: default
spec:
  apiServerLoadBalancer:
    enabled: true
    network:
      id: 6c90b532-7ba0-418a-a276-5ae55060b5b0
    availabilityZoneSubnets:
      - availabilityZone: az1
        subnet:
          id: cad5a91a-36de-4388-823b-b0cc82cadfdc
      - availabilityZone: az2
        subnet:
          id: e2407c18-c4e7-4d3d-befa-8eec5d8756f2
    allowCrossAZLoadBalancerMembers: false
```

### Allow cross-AZ member registration
```yaml
spec:
  apiServerLoadBalancer:
    enabled: true
    network:
      id: 6c90b532-7ba0-418a-a276-5ae55060b5b0
    availabilityZoneSubnets:
      - availabilityZone: az1
        subnet:
          id: cad5a91a-36de-4388-823b-b0cc82cadfdc
      - availabilityZone: az2
        subnet:
          id: e2407c18-c4e7-4d3d-befa-8eec5d8756f2
    allowCrossAZLoadBalancerMembers: true
```
### Restrict access using allowed CIDRs
```yaml
spec:
  apiServerLoadBalancer:
    enabled: true
    network:
      id: 6c90b532-7ba0-418a-a276-5ae55060b5b0
    availabilityZoneSubnets:
      - availabilityZone: az1
        subnet:
          id: cad5a91a-36de-4388-823b-b0cc82cadfdc
      - availabilityZone: az2
        subnet:
          id: e2407c18-c4e7-4d3d-befa-8eec5d8756f2
    allowedCIDRs:
      - 192.0.2.0/24
      - 203.0.113.10/32
```
## Backward compatibility and migration

- Default behavior remains single-LB when no multi-AZ mapping is provided.
- Enabling multi-AZ:
  - Operators add availabilityZoneSubnets (and optionally additionalPorts, allowedCIDRs, allowCrossAZLoadBalancerMembers) and must specify the LB network.
  - The controller renames or adopts legacy resources into the AZ-specific naming scheme.
  - status.apiServerLoadBalancers is populated alongside the legacy status until further cleanup.
- Disabling multi-AZ:
  - Remove the mapping; the controller reverts to single-LB behavior.
  - Per-AZ LBs are not automatically deleted; operators may clean up unused resources.
## Testing strategy

### Unit tests
- Validation: duplicate AZs, duplicate subnets in the mapping, wrong network-subnet associations.
- LB reconciliation: AZ hint propagation, per-port resource creation and updates.
- Migration/adoption: renaming legacy resources and adopting correctly named resources.
- Member registration: defaults and the cross-AZ opt-in.
- Allowed CIDRs: canonicalization and provider capability handling.
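
As an illustration of the validation tests, a minimal duplicate-AZ case; validateAZSubnetMappings is a hypothetical helper name, and the types are the stand-ins from the earlier resolution sketch.

```go
package loadbalancer

import "testing"

func TestDuplicateAZsRejected(t *testing.T) {
	mappings := []AZSubnetMapping{
		{AvailabilityZone: "az1", Subnet: SubnetParam{ID: "subnet-a"}},
		{AvailabilityZone: "az1", Subnet: SubnetParam{ID: "subnet-b"}},
	}
	if err := validateAZSubnetMappings(mappings); err == nil {
		t.Fatal("expected duplicate availabilityZone values to be rejected")
	}
}
```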

### E2E tests
- A multi-AZ suite verifies that per-AZ LBs exist with the expected names and ports.
- status.apiServerLoadBalancers contains per-AZ entries, including the LB network and IPs.
- Control plane nodes register with the same-AZ LB (or with all LBs when cross-AZ is enabled).
- DNS records remain out of scope for e2e.

Test code locations:
- [pkg/cloud/services/loadbalancer/](pkg/cloud/services/loadbalancer/)
- [controllers/](controllers/)
- [test/e2e/](test/e2e/)
## Risks and mitigations
- Mapping/network mismatches: reject with clear validation messages; enforce via CRD CEL where feasible, plus in-controller checks.
- Providers ignoring AZ hints: the VIP subnet mapping still ensures deterministic placement; document the expected variance.
- Increased resource usage: multiple LBs per cluster increase quota consumption; highlight this in docs and operations guidance.
- DNS misconfiguration: documented as an operator responsibility.

## Rollout plan
1) API and CRD changes:
   - Add the new fields and the list-map keyed status to the OpenStackCluster types in [api/v1beta1/](api/v1beta1/).
   - Update CRDs in [config/crd/bases/](config/crd/bases/) and patches in [config/crd/patches/](config/crd/patches/).
2) Controller implementation:
   - VIP network/subnet resolution and explicit AZ mapping in [controllers/openstackcluster_controller.go](controllers/openstackcluster_controller.go).
   - Per-AZ LB reconciliation, rename/adoption, member selection, and optional floating IPs in [pkg/cloud/services/loadbalancer/](pkg/cloud/services/loadbalancer/).
3) Documentation:
   - Update the configuration guide and examples in [docs/book/src/clusteropenstack/configuration.md](docs/book/src/clusteropenstack/configuration.md).
4) Testing:
   - Unit tests across the controller and services; e2e suite updates in [test/e2e/](test/e2e/).
5) Optional metrics:
   - Add observability for per-AZ LB counts and reconciliation timings (non-breaking).

## Open questions
- Should a future explicit field declare the endpoint strategy (single VIP vs. external DNS multi-A)? The current design preserves user-provided DNS and documents the multi-A approach.