Kubernetes v1.31 and More
-
Kubernetes Blog ☛ Kubernetes v1.31: PersistentVolume Last Phase Transition Time Moves to GA
For a v1.31 cluster, you can now assume that every PersistentVolume object has a .status.lastPhaseTransitionTime field that holds a timestamp of when the volume last transitioned its phase. This change is not immediate; the new field will be populated whenever a PersistentVolume is updated and first transitions between phases (Pending, Bound, or Released) after upgrading to Kubernetes v1.31.
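As a minimal illustration (the values are invented; the control plane sets the timestamp), the status stanza of a bound PV now looks like this:

```yaml
# Status stanza of a PersistentVolume (illustrative values)
status:
  phase: Bound
  lastPhaseTransitionTime: "2024-08-13T07:01:17Z"  # set when the PV last changed phase
```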
-
Kubernetes Blog ☛ Kubernetes v1.31: Elli
Editors: Matteo Bianchi, Yigit Demirbas, Abigail McCarthy, Edith Puclla, Rashan Smith
Announcing the release of Kubernetes v1.31: Elli!
Similar to previous releases, the release of Kubernetes v1.31 introduces new stable, beta, and alpha features. The consistent delivery of high-quality releases underscores the strength of our development cycle and the vibrant support from our community. This release consists of 45 enhancements. Of those enhancements, 11 have graduated to Stable, 22 are entering Beta, and 12 have graduated to Alpha.
-
Platform9 Arrives at Simplified Kubernetes With Fairwinds
Platform9 is collaborating with Fairwinds, centering the partnership on its managed Kubernetes-as-a-Service technology; the two companies hope they can create a Kubernetes cost control mechanism for public cloud deployments that is worthy of mission-critical applications.
-
Rafay Systems Adds Kubernetes Cost Optimization Capabilities to Management Platform
Rafay Systems today added a suite of cost optimization tools that continuously analyzes Kubernetes costs and enables IT teams to use its management platform to automatically rein them in.
An update
More on this release:
-
Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache
Kubernetes has long used a watch cache to optimize read operations. The watch cache stores a snapshot of the cluster state and receives updates through etcd watches. However, until now, it couldn't serve consistent reads directly, as there was no guarantee the cache was sufficiently up-to-date.
One more:
-
Kubernetes 1.31: Moving cgroup v1 Support into Maintenance Mode
As Kubernetes continues to evolve and adapt to the changing landscape of container orchestration, the community has decided to move cgroup v1 support into maintenance mode in v1.31. This shift aligns with the broader industry's move towards cgroup v2, which offers improved functionality, including better scalability and a more consistent interface. Before we dive into the consequences for Kubernetes, let's take a step back to understand what cgroups are and their significance in Linux.
5 more:
-
Kubernetes 1.31: Read Only Volumes Based On OCI Artifacts (alpha)
The Kubernetes community is moving towards fulfilling more Artificial Intelligence (AI) and Machine Learning (ML) use cases in the future. While the project has been designed to fulfill microservice architectures in the past, it’s now time to listen to the end users and introduce features which have a stronger focus on AI/ML.
One of these requirements is to support Open Container Initiative (OCI) compatible images and artifacts (referred to as OCI objects) directly as a native volume source. This allows users to focus on OCI standards as well as enables them to store and distribute any content using OCI registries. A feature like this gives the Kubernetes project a chance to grow into use cases which go beyond running particular images.
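A minimal sketch of the alpha API, assuming the ImageVolume feature gate is enabled on the cluster; the artifact reference is a placeholder:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: image-volume-demo
spec:
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: artifact
      mountPath: /volume                      # contents are mounted read-only
  volumes:
  - name: artifact
    image:                                    # new alpha volume source
      reference: quay.io/example/artifact:v1  # placeholder OCI reference
      pullPolicy: IfNotPresent
```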
-
Kubernetes 1.31: Prevent PersistentVolume Leaks When Deleting out of Order
PersistentVolumes (PVs for short) are associated with a reclaim policy. The reclaim policy determines the actions the storage backend needs to take on deletion of the PVC bound to a PV. When the reclaim policy is Delete, the expectation is that the storage backend releases the storage resource allocated for the PV. In essence, the reclaim policy needs to be honored on PV deletion. With the recent Kubernetes v1.31 release, a beta feature lets you configure your cluster to behave that way and honor the configured reclaim policy.
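As a sketch of the field involved: with the beta feature gate (HonorPVReclaimPolicy) enabled, a Delete reclaim policy like the one below is honored even when the PV is removed before its bound PVC; the driver and volume details are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv                       # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete  # backend should release the storage on deletion
  csi:
    driver: example.csi.driver.io        # placeholder CSI driver
    volumeHandle: vol-0123               # placeholder backend volume ID
```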
-
Kubernetes 1.31: MatchLabelKeys in PodAffinity graduates to beta
Kubernetes 1.29 introduced new fields MatchLabelKeys and MismatchLabelKeys in PodAffinity and PodAntiAffinity. In Kubernetes 1.31, this feature moves to beta and the corresponding feature gate (MatchLabelKeysInPodAffinity) gets enabled by default.
MatchLabelKeys - Enhanced scheduling for versatile rolling updates
During a workload's (e.g., Deployment) rolling update, a cluster may have Pods from multiple versions at the same time. However, the scheduler cannot distinguish between old and new versions based on the LabelSelector specified in PodAffinity or PodAntiAffinity. As a result, it will co-locate or disperse Pods regardless of their versions.
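MatchLabelKeys addresses this. A hedged sketch of the typical usage inside a Deployment's Pod template, keying on the automatically injected pod-template-hash label so affinity is evaluated only against Pods from the same revision (the app label is a placeholder):

```yaml
# Fragment of a Pod template (e.g., a Deployment's spec.template.spec)
affinity:
  podAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - topologyKey: kubernetes.io/hostname
      labelSelector:
        matchLabels:
          app: database        # placeholder app label
      matchLabelKeys:          # beta in v1.31
      - pod-template-hash      # added by Deployments; scopes affinity to one revision
```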
-
Kubernetes 1.31: VolumeAttributesClass for Volume Modification Beta
Volumes in Kubernetes have been described by two attributes: their storage class, and their capacity. The storage class is an immutable property of the volume, while the capacity can be changed dynamically with volume resize.
This complicates vertical scaling of workloads with volumes. While cloud providers and storage vendors often offer volumes which allow specifying IO quality of service (performance) parameters like IOPS or throughput and tuning them as workloads operate, Kubernetes has had no API for changing them.
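VolumeAttributesClass fills this gap. A minimal sketch, assuming a CSI driver that understands the given parameters (the names are placeholders for whatever the driver accepts):

```yaml
apiVersion: storage.k8s.io/v1beta1  # beta API in v1.31
kind: VolumeAttributesClass
metadata:
  name: fast-io                     # placeholder name
driverName: example.csi.driver.io   # placeholder CSI driver
parameters:                         # driver-specific QoS knobs
  iops: "4000"
  throughput: "250MiB/s"
```

An existing PVC can then reference it through spec.volumeAttributesClassName, prompting the driver to modify the volume in place.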
-
Kubernetes v1.31: Accelerating Cluster Performance with Consistent Reads from Cache
Kubernetes is renowned for its robust orchestration of containerized applications, but as clusters grow, the demands on the control plane can become a bottleneck. A key challenge has been ensuring strongly consistent reads from the etcd datastore, requiring resource-intensive quorum reads.
Today, the Kubernetes community is excited to announce a major improvement: consistent reads from cache, graduating to Beta in Kubernetes v1.31.
Why consistent reads matter
Consistent reads are essential for ensuring that Kubernetes components have an accurate view of the latest cluster state. Guaranteeing consistent reads is crucial for maintaining the accuracy and reliability of Kubernetes operations, enabling components to make informed decisions based on up-to-date information.
In large-scale clusters, fetching and processing this data can be a performance bottleneck, especially for requests that involve filtering results. While Kubernetes can filter data by namespace directly within etcd, any other filtering by labels or field selectors requires the entire dataset to be fetched from etcd and then filtered in-memory by the Kubernetes API server. This is particularly impactful for components like the kubelet, which only needs to list pods scheduled to its node - but previously required the API server and etcd to process all pods in the cluster.
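The improvement is controlled by the ConsistentListFromCache feature gate, which as a beta feature should be enabled by default in v1.31. As a sketch, it can also be toggled explicitly on the API server; flag placement depends on how your control plane is deployed, and the feature relies on a sufficiently recent etcd with watch progress notification support:

```yaml
# Fragment of a kube-apiserver static Pod manifest (illustrative)
spec:
  containers:
  - name: kube-apiserver
    command:
    - kube-apiserver
    - --feature-gates=ConsistentListFromCache=true  # beta, on by default in v1.31
```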
Third-party site funded by Kubernetes vendors:
-
Kubernetes 1.31 Streamlines Range of Functions and Tasks
Kubernetes 1.31 streamlines a range of functions and tasks, part of an ongoing effort to make Kubernetes more accessible and simpler to manage.
Some late coverage:
-
Kubernetes 1.31: Pod Failure Policy for Jobs Goes GA
This post describes Pod failure policy, which graduates to stable in Kubernetes 1.31, and how to use it in your Jobs.
About Pod failure policy
When you run workloads on Kubernetes, Pods might fail for a variety of reasons. Ideally, workloads like Jobs should be able to ignore transient, retriable failures and continue running to completion.
To allow for these transient failures, Kubernetes Jobs include the backoffLimit field, which lets you specify a number of Pod failures that you're willing to tolerate during Job execution. However, if you set a large value for the backoffLimit field and rely solely on this field, you might notice unnecessary increases in operating costs as Pods restart excessively until the backoffLimit is met. This becomes particularly problematic when running large-scale Jobs with thousands of long-running Pods across thousands of nodes.
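A condensed sketch adapted from the Kubernetes documentation: fail the Job immediately when the workload signals a non-retriable bug (exit code 42), but ignore Pod disruptions such as node drains so they don't consume the backoff budget (names and the workload command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: job-pod-failure-policy   # placeholder name
spec:
  completions: 8
  parallelism: 2
  backoffLimit: 6
  template:
    spec:
      restartPolicy: Never       # required when using podFailurePolicy
      containers:
      - name: main
        image: docker.io/library/bash:5
        command: ["bash", "-c", "do-work"]   # placeholder workload
  podFailurePolicy:
    rules:
    - action: FailJob            # exit code 42 signals a non-retriable bug
      onExitCodes:
        containerName: main
        operator: In
        values: [42]
    - action: Ignore             # disruptions don't count against backoffLimit
      onPodConditions:
      - type: DisruptionTarget
```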
More features in view:
-
Kubernetes 1.31: Streaming Transitions from SPDY to WebSockets
In Kubernetes 1.31, by default kubectl now uses the WebSocket protocol instead of SPDY for streaming.
This post describes what these changes mean for you and why these streaming APIs matter.
The latest one:
-
Kubernetes 1.31: Autoconfiguration For Node Cgroup Driver (beta)
Historically, configuring the correct cgroup driver has been a pain point for users running new Kubernetes clusters. On GNU/Linux systems, there are two different cgroup drivers: cgroupfs and systemd. In the past, both the kubelet and the CRI implementation (like CRI-O or containerd) needed to be configured to use the same cgroup driver, or else the kubelet would exit with an error. This was a source of headaches for many cluster admins. However, there is light at the end of the tunnel!
Automated cgroup driver detection
In v1.28.0, the SIG Node community introduced the feature gate KubeletCgroupDriverFromCRI, which instructs the kubelet to ask the CRI implementation which cgroup driver to use. A few minor releases of Kubernetes happened whilst we waited for support to land in the major two CRI implementations (containerd and CRI-O), but as of v1.31.0, this feature is now beta!
In addition to setting the feature gate, a cluster admin needs to ensure their CRI implementation is new enough:
- containerd: Support was added in v2.0.0
- CRI-O: Support was added in v1.28.0
Then, they should ensure their CRI implementation is configured to use the cgroup driver they would like.
Eventually, support for the kubelet's cgroupDriver configuration field will be dropped, and the kubelet will fail to start if the CRI implementation isn't new enough to have support for this feature.
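For reference, a sketch of the kubelet-side settings involved, assuming a systemd-based host; once the CRI-reported driver is authoritative, the cgroupDriver field is what becomes redundant:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd              # to be superseded by the CRI-reported value
featureGates:
  KubeletCgroupDriverFromCRI: true # beta in v1.31, so typically on by default
```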
3 more stories about this release:
-
Kubernetes 1.31: Custom Profiling in Kubectl Debug Graduates to Beta
Currently, copying the pod is the sole mechanism that supports debugging such a pod with kubectl debug. Furthermore, what if a user needs to modify REQUIRED_ENV_VAR to something different for advanced troubleshooting? There is no mechanism to achieve this.
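This is what custom profiles address. A sketch reusing the post's REQUIRED_ENV_VAR example: the profile is a partial container spec (JSON or YAML) handed to kubectl debug via its --custom flag, and the override value here is a placeholder:

```yaml
# custom-profile.yaml - partial container spec merged into the debug container
env:
- name: REQUIRED_ENV_VAR   # variable from the post's example
  value: value2            # placeholder override for troubleshooting
```

Invoked with something like: kubectl debug -it mypod --image=busybox --custom=custom-profile.yaml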
-
Kubernetes 1.31: Fine-grained SupplementalGroups control
-
Kubernetes v1.31: New Kubernetes CPUManager Static Policy: Distribute CPUs Across Cores
One more:
-
Kubernetes v1.31: kubeadm v1beta4
This version improves on the v1beta3 format by fixing some minor issues and adding a few new fields.
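As a hedged sketch of the new format: one of the v1beta4 changes is that extraArgs become a list of name/value pairs (allowing repeated flags) rather than a map (the admission plugin value below is a placeholder):

```yaml
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
apiServer:
  extraArgs:                        # now a list, so flags may repeat
  - name: enable-admission-plugins
    value: NodeRestriction          # placeholder value
```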