
Commit 9872429

Publish Kubernetes v1.34 Sneak Peek Blog (#51711)
* Publish v1.34 sneak peek blog
* add Markdown wrapping
* Update content/en/blog/_posts/2025-07-28-kubernetes-v1.34-sneak-peek.md
* Update content/en/blog/_posts/2025-07-28-kubernetes-v1.34-sneak-peek.md
* Update content/en/blog/_posts/2025-07-28-kubernetes-v1.34-sneak-peek.md
  Co-authored-by: Mélony QIN <[email protected]>
* Update content/en/blog/_posts/2025-07-28-kubernetes-v1.34-sneak-peek.md
  Co-authored-by: Mélony QIN <[email protected]>
* Update content/en/blog/_posts/2025-07-28-kubernetes-v1.34-sneak-peek.md

Co-authored-by: Mélony QIN <[email protected]>
1 parent b852da6 commit 9872429

File tree

1 file changed: 57 additions, 24 deletions
@@ -1,6 +1,5 @@
 ---
 layout: blog
-draft: true
 title: 'Kubernetes v1.34 Sneak Peek'
 date: 2025-07-28
 slug: kubernetes-v1-34-sneak-peek
@@ -12,37 +11,48 @@ author: >
   Dipesh Rawat
 ---

-Kubernetes v1.34 is coming at the end of August 2025. This release will not include any removal or deprecation, but it is packed with an impressive number of enhancements. Here are some of the features we are most excited about in this cycle!
+Kubernetes v1.34 is coming at the end of August 2025.
+This release will not include any removal or deprecation, but it is packed with an impressive number of enhancements.
+Here are some of the features we are most excited about in this cycle!

 Please note that this information reflects the current state of v1.34 development and may change before release.

 ## Featured enhancements of Kubernetes v1.34

-The following list highlights some of the notable enhancements likely to be included in the v1.34 release, but is not an exhaustive list of all planned changes. This is not a commitment and the release content is subject to change.
+The following list highlights some of the notable enhancements likely to be included in the v1.34 release,
+but is not an exhaustive list of all planned changes.
+This is not a commitment and the release content is subject to change.

 ### The core of DRA targets stable

-[Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) (DRA) provides a flexible way to categorize, request, and use devices like GPUs or custom hardware in your Kubernetes cluster.
+[Dynamic Resource Allocation](/docs/concepts/scheduling-eviction/dynamic-resource-allocation/) (DRA) provides a flexible way to categorize,
+request, and use devices like GPUs or custom hardware in your Kubernetes cluster.

 Since the v1.30 release, DRA has been based around claiming devices using _structured parameters_ that are opaque to the core of Kubernetes.
 The relevant enhancement proposal, [KEP-4381](https://kep.k8s.io/4381), took inspiration from dynamic provisioning for storage volumes.
-DRA with structured parameters relies on a set of supporting API kinds: ResourceClaim, DeviceClass, ResourceClaimTemplate, and ResourceSlice API types under `resource.k8s.io`, while extending the `.spec` for Pods with a new `resourceClaims` field.
+DRA with structured parameters relies on a set of supporting API kinds: ResourceClaim, DeviceClass, ResourceClaimTemplate,
+and ResourceSlice API types under `resource.k8s.io`, while extending the `.spec` for Pods with a new `resourceClaims` field.
 The core of DRA is targeting graduation to stable in Kubernetes v1.34.

-
-With DRA, device drivers and cluster admins define device classes that are available for use. Workloads can claim devices from a device class within device requests. Kubernetes allocates matching devices to specific claims and places the corresponding Pods on nodes that can access the allocated devices. This framework provides flexible device filtering using CEL, centralized device categorization, and simplified Pod requests, among other benefits.
+With DRA, device drivers and cluster admins define device classes that are available for use.
+Workloads can claim devices from a device class within device requests.
+Kubernetes allocates matching devices to specific claims and places the corresponding Pods on nodes that can access the allocated devices.
+This framework provides flexible device filtering using CEL, centralized device categorization, and simplified Pod requests, among other benefits.

 Once this feature has graduated, the `resource.k8s.io/v1` APIs will be available by default.
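
To make the claim-and-allocate flow above concrete, here is a minimal, illustrative sketch (not part of this commit): a ResourceClaim requesting one device from a hypothetical `example.com-gpu` DeviceClass, and a Pod referencing that claim via `resourceClaims`. Field names follow the beta `resource.k8s.io` schema and may differ slightly once the `v1` API graduates.

```yaml
# Illustrative sketch only — the DeviceClass name and image are hypothetical.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: gpu-claim
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: example.com-gpu
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  resourceClaims:
  - name: gpu                     # local name referenced by the container below
    resourceClaimName: gpu-claim  # binds to the ResourceClaim above
  containers:
  - name: app
    image: registry.example/app:latest
    resources:
      claims:
      - name: gpu                 # consume the allocated device
```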

3744
### ServiceAccount tokens for image pull authentication
3845

39-
The [ServiceAccount](/docs/concepts/security/service-accounts/) token integration for `kubelet` credential providers is likely to reach beta and be enabled by default in Kubernetes v1.34. You'll then be able to have the `kubelet` use these tokens when pulling container images from registries that require authentication.
46+
The [ServiceAccount](/docs/concepts/security/service-accounts/) token integration for `kubelet` credential providers is likely to reach beta and be enabled by default in Kubernetes v1.34.
47+
This allows the `kubelet` to use these tokens when pulling container images from registries that require authentication.
4048

4149
That support already exists as alpha, and is tracked as part of [KEP-4412](https://kep.k8s.io/4412).
4250

43-
The existing alpha integration allows the `kubelet` to use short-lived, automatically rotated ServiceAccount tokens (that follow OIDC-compliant semantics) to authenticate to a container image registry. Each token is scoped to one associated Pod; the overall mechanism replaces the need for long-lived image pull Secrets.
51+
The existing alpha integration allows the `kubelet` to use short-lived, automatically rotated ServiceAccount tokens (that follow OIDC-compliant semantics) to authenticate to a container image registry.
52+
Each token is scoped to one associated Pod; the overall mechanism replaces the need for long-lived image pull Secrets.
4453

45-
Adopting this new approach reduces security risks, supports workload-level identity, and helps cut operational overhead. It brings image pull authentication closer to modern, identity-aware good practice.
54+
Adopting this new approach reduces security risks, supports workload-level identity, and helps cut operational overhead.
55+
It brings image pull authentication closer to modern, identity-aware good practice.
4656
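
As an illustrative aside (not part of this commit), opting a credential provider into ServiceAccount tokens happens in the kubelet's `CredentialProviderConfig`. The provider name and registry below are hypothetical, and the `tokenAttributes` field names follow the alpha design in KEP-4412, so they may change as the feature moves to beta.

```yaml
# Illustrative sketch only — provider binary and registry are hypothetical.
apiVersion: kubelet.config.k8s.io/v1
kind: CredentialProviderConfig
providers:
- name: example-registry-provider
  matchImages:
  - "registry.example.com/*"
  defaultCacheDuration: "0s"
  apiVersion: credentialprovider.kubelet.k8s.io/v1
  tokenAttributes:
    serviceAccountTokenAudience: "registry.example.com"  # audience in the minted token
    requireServiceAccount: true                          # refuse pulls without a Pod ServiceAccount
```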

 ### Pod replacement policy for Deployments

@@ -57,25 +67,37 @@ If your cluster has the feature enabled, you'll be able to select one of two pol
 `TerminationComplete`
 : Waits until old pods fully terminate before creating new ones, resulting in slower rollouts but ensuring controlled resource consumption.

-This feature makes Deployment behavior more predictable by letting you choose when new pods should be created during updates or scaling. It's beneficial when working in clusters with tight resource constraints or with workloads with long termination periods.
+This feature makes Deployment behavior more predictable by letting you choose when new pods should be created during updates or scaling.
+It's beneficial when working in clusters with tight resource constraints or with workloads with long termination periods.

-Its expected to be available as an alpha feature and can be enabled using the `DeploymentPodReplacementPolicy` and `DeploymentReplicaSetTerminatingReplicas` feature gates in the API server and kube-controller-manager.
+It's expected to be available as an alpha feature and can be enabled using the `DeploymentPodReplacementPolicy` and `DeploymentReplicaSetTerminatingReplicas` feature gates in the API server and kube-controller-manager.
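
For illustration (not part of this commit), a Deployment opting into the slower-but-controlled policy might look like the sketch below. The field name follows the KEP and requires the alpha feature gates, so details may change before release.

```yaml
# Illustrative sketch only — requires the DeploymentPodReplacementPolicy gate.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: slow-terminating-app
spec:
  replicas: 3
  podReplacementPolicy: TerminationComplete  # wait for old pods to fully terminate
  selector:
    matchLabels:
      app: slow-terminating-app
  template:
    metadata:
      labels:
        app: slow-terminating-app
    spec:
      terminationGracePeriodSeconds: 120     # long shutdowns make this policy useful
      containers:
      - name: app
        image: registry.example/app:latest
```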

 ### Production-ready tracing for `kubelet` and API Server

-To address the longstanding challenge of debugging node-level issues by correlating disconnected logs, [KEP-2831](https://kep.k8s.io/2831) provides deep, contextual insights into the `kubelet`.
+To address the longstanding challenge of debugging node-level issues by correlating disconnected logs,
+[KEP-2831](https://kep.k8s.io/2831) provides deep, contextual insights into the `kubelet`.

-This feature instruments the `kubelet`'s critical operations, particularly its gRPC calls to the Container Runtime Interface (CRI), using the vendor-agnostic OpenTelemetry standard. It allows operators to visualize the entire lifecycle of events (for example: a Pod startup) to pinpoint sources of latency and errors. Its most powerful aspect is the propagation of trace context; the `kubelet` passes a trace ID with its requests to the container runtime, enabling runtimes to link their own spans.
+This feature instruments critical `kubelet` operations, particularly its gRPC calls to the Container Runtime Interface (CRI), using the vendor-agnostic OpenTelemetry standard.
+It allows operators to visualize the entire lifecycle of events (for example: a Pod startup) to pinpoint sources of latency and errors.
+Its most powerful aspect is the propagation of trace context; the `kubelet` passes a trace ID with its requests to the container runtime, enabling runtimes to link their own spans.

-This effort is complemented by a parallel enhancement, [KEP-647](https://kep.k8s.io/647), which brings the same tracing capabilities to the Kubernetes API server. Together, these enhancements provide a more unified, end-to-end view of events, simplifying the process of pinpointing latency and errors from the control plane down to the node. These features have matured through the official Kubernetes release process. [KEP-2831](https://kep.k8s.io/2831) was introduced as an alpha feature in v1.25, while [KEP-647](https://kep.k8s.io/647) debuted as alpha in v1.22. Both enhancements were promoted to beta together in the v1.27 release. Looking forward, Kubelet Tracing ([KEP-2831](https://kep.k8s.io/2831)) and API Server Tracing ([KEP-647](https://kep.k8s.io/647)) are now targeting graduation to stable in the upcoming v1.34 release.
+This effort is complemented by a parallel enhancement, [KEP-647](https://kep.k8s.io/647), which brings the same tracing capabilities to the Kubernetes API server.
+Together, these enhancements provide a more unified, end-to-end view of events, simplifying the process of pinpointing latency and errors from the control plane down to the node.
+These features have matured through the official Kubernetes release process.
+[KEP-2831](https://kep.k8s.io/2831) was introduced as an alpha feature in v1.25, while [KEP-647](https://kep.k8s.io/647) debuted as alpha in v1.22.
+Both enhancements were promoted to beta together in the v1.27 release.
+Looking forward, Kubelet Tracing ([KEP-2831](https://kep.k8s.io/2831)) and API Server Tracing ([KEP-647](https://kep.k8s.io/647)) are now targeting graduation to stable in the upcoming v1.34 release.
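
As an aside (not part of this commit), kubelet tracing is configured in the `KubeletConfiguration`; the endpoint and sampling rate below are example values, pointing at a collector you would need to run yourself.

```yaml
# Illustrative sketch only — values are examples, not recommendations.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
tracing:
  endpoint: localhost:4317       # OTLP gRPC endpoint, e.g. an OpenTelemetry Collector
  samplingRatePerMillion: 10000  # sample roughly 1% of spans
```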

 ### `PreferSameZone` and `PreferSameNode` traffic distribution for Services

 The `spec.trafficDistribution` field within a Kubernetes [Service](/docs/concepts/services-networking/service/) allows users to express preferences for how traffic should be routed to Service endpoints.

-[KEP-3015](https://kep.k8s.io/3015) deprecates `PreferClose` and introduces two additional values: `PreferSameZone` and `PreferSameNode`. `PreferSameZone` is equivalent to the current `PreferClose`. `PreferSameNode` prioritizes sending traffic to endpoints on the same node as the client.
+[KEP-3015](https://kep.k8s.io/3015) deprecates `PreferClose` and introduces two additional values: `PreferSameZone` and `PreferSameNode`.
+`PreferSameZone` is equivalent to the current `PreferClose`.
+`PreferSameNode` prioritizes sending traffic to endpoints on the same node as the client.

-This feature was introduced in v1.33 behind the `PreferSameTrafficDistribution` feature gate. It is targeting graduation to beta in v1.34 with its feature gate enabled by default.
+This feature was introduced in v1.33 behind the `PreferSameTrafficDistribution` feature gate.
+It is targeting graduation to beta in v1.34 with its feature gate enabled by default.
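
For illustration (not part of this commit), keeping traffic on the client's own node is a one-line preference on the Service; the Service name and selector below are hypothetical.

```yaml
# Illustrative sketch only — needs the PreferSameTrafficDistribution gate in v1.33.
apiVersion: v1
kind: Service
metadata:
  name: node-local-dns-cache
spec:
  selector:
    app: dns-cache
  ports:
  - port: 53
    protocol: UDP
  trafficDistribution: PreferSameNode  # prefer endpoints on the client's node
```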

 ### Support for KYAML: a Kubernetes dialect of YAML

@@ -85,9 +107,12 @@ and/or Helm charts.
 You can write KYAML and pass it as an input to **any** version of `kubectl`,
 because all KYAML files are also valid as YAML.
 With kubectl v1.34, we expect you'll also be able to request KYAML output from `kubectl` (as in `kubectl get -o kyaml …`).
-Of course, you can still request JSON or YAML output if you prefer that.
+If you prefer, you can still request the output in JSON or YAML format.

-KYAML addresses specific challenges with both YAML and JSON. YAML's significant whitespace requires careful attention to indentation and nesting, while its optional string-quoting can lead to unexpected type coercion (for example: ["The Norway Bug"](https://hitchdev.com/strictyaml/why/implicit-typing-removed/)). Meanwhile, JSON lacks comment support and has strict requirements for trailing commas and quoted keys.
+KYAML addresses specific challenges with both YAML and JSON.
+YAML's significant whitespace requires careful attention to indentation and nesting,
+while its optional string-quoting can lead to unexpected type coercion (for example: ["The Norway Bug"](https://hitchdev.com/strictyaml/why/implicit-typing-removed/)).
+Meanwhile, JSON lacks comment support and has strict requirements for trailing commas and quoted keys.

 [KEP-5295](https://kep.k8s.io/5295) introduces KYAML, which tries to address the most significant problems by:

@@ -104,11 +129,16 @@ This might sound a lot like JSON, because it is! But unlike JSON, KYAML supports
 We're hoping to see KYAML introduced as a new output format for `kubectl` v1.34.
 As with all these features, none of these changes are 100% confirmed; watch this space!

-As a format, KYAML is and will remain a **strict subset of YAML**, ensuring that any compliant YAML parser can parse KYAML documents. Kubernetes does not insist you provide input that is specifically formatted as KYAML, and we have no plan to change that.
+As a format, KYAML is and will remain a **strict subset of YAML**, ensuring that any compliant YAML parser can parse KYAML documents.
+Kubernetes does not require you to provide input specifically formatted as KYAML, and we have no plans to change that.
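
As an illustrative aside (not part of this commit), a ConfigMap in the style KEP-5295 describes might look like the sketch below: flow-style mappings, always-quoted string values, and trailing commas, all of which remain valid YAML. The exact output format is still subject to change.

```yaml
# Illustrative sketch only — KYAML style, parseable by any YAML parser.
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "example",        # unlike JSON, comments are allowed
  },
  data: {
    "country": "NO",        # quoting avoids surprises like the Norway bug
  },
}
```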

 ### Fine-grained autoscaling control with HPA configurable tolerance

-[KEP-4951](https://kep.k8s.io/4951) introduces a new feature that allows users to configure autoscaling tolerance on a per-HPA basis, overriding the default cluster-wide 10% tolerance setting that often proves too coarse-grained for diverse workloads. The enhancement adds an optional `tolerance` field to the HPA's `spec.behavior.scaleUp` and `spec.behavior.scaleDown` sections, enabling different tolerance values for scale-up and scale-down operations, which is particularly valuable since scale-up responsiveness is typically more critical than scale-down speed for handling traffic surges.
+[KEP-4951](https://kep.k8s.io/4951) introduces a new feature that allows users to configure autoscaling tolerance on a per-HPA basis,
+overriding the default cluster-wide 10% tolerance setting that often proves too coarse-grained for diverse workloads.
+The enhancement adds an optional `tolerance` field to the HPA's `spec.behavior.scaleUp` and `spec.behavior.scaleDown` sections,
+enabling different tolerance values for scale-up and scale-down operations,
+which is particularly valuable since scale-up responsiveness is typically more critical than scale-down speed for handling traffic surges.

 Released as alpha in Kubernetes v1.33 behind the `HPAConfigurableTolerance` feature gate, this feature is expected to graduate to beta in v1.34.
 This improvement helps to address scaling challenges with large deployments, where for scaling in,
@@ -117,17 +147,20 @@ Using the new, more flexible approach would enable workload-specific optimizatio
 responsive and conservative scaling behaviors.
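
For illustration (not part of this commit), the per-direction `tolerance` field described above might be used like this sketch; the workload name and values are hypothetical, and the field requires the `HPAConfigurableTolerance` gate.

```yaml
# Illustrative sketch only — tolerances are fractions of current metric value.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app
  minReplicas: 2
  maxReplicas: 20
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  behavior:
    scaleUp:
      tolerance: 0.05   # react quickly to small traffic surges
    scaleDown:
      tolerance: 0.2    # scale in conservatively
```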

 ## Want to know more?
-New features and deprecations are also announced in the Kubernetes release notes. We will formally announce what's new in [Kubernetes v1.34](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md) as part of the CHANGELOG for that release.
+New features and deprecations are also announced in the Kubernetes release notes.
+We will formally announce what's new in [Kubernetes v1.34](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.34.md) as part of the CHANGELOG for that release.

 The Kubernetes v1.34 release is planned for **Wednesday 27th August 2025**. Stay tuned for updates!

 ## Get involved
-The simplest way to get involved with Kubernetes is by joining one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests. Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below. Thank you for your continued feedback and support.
+The simplest way to get involved with Kubernetes is to join one of the many [Special Interest Groups](https://github.com/kubernetes/community/blob/master/sig-list.md) (SIGs) that align with your interests.
+Have something you'd like to broadcast to the Kubernetes community? Share your voice at our weekly [community meeting](https://github.com/kubernetes/community/tree/master/communication), and through the channels below.
+Thank you for your continued feedback and support.

 * Follow us on Bluesky [@kubernetes.io](https://bsky.app/profile/kubernetes.io) for the latest updates
 * Join the community discussion on [Discuss](https://discuss.kubernetes.io/)
 * Join the community on [Slack](http://slack.k8s.io/)
 * Post questions (or answer questions) on [Server Fault](https://serverfault.com/questions/tagged/kubernetes) or [Stack Overflow](http://stackoverflow.com/questions/tagged/kubernetes)
 * Share your Kubernetes [story](https://docs.google.com/a/linuxfoundation.org/forms/d/e/1FAIpQLScuI7Ye3VQHQTwBASrgkjQDSS5TP0g3AXfFhwSM9YpHgxRKFA/viewform)
-* Read more about whats happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
+* Read more about what's happening with Kubernetes on the [blog](https://kubernetes.io/blog/)
 * Learn more about the [Kubernetes Release Team](https://github.com/kubernetes/sig-release/tree/master/release-team)
