Two new security fixes last week: CVE-2022-3162 allowed users of custom resources in the same API group to eavesdrop on each other's objects; CVE-2022-3294 allowed users to proxy via the API server to addresses they shouldn't have had access to. Both issues are fixed in the most recent patch releases, and upgrading is recommended.
Next Deadline: Test Freeze / Docs Ready for Review / Release Blog Ready, November 15th
We are in Code Freeze now, counting down the deadlines to the final release on December 6th. This week is the trifecta: have your feature docs ready for review, get the release blog ready for review, and freeze all test changes until the final release (except bugfixes).
Patch releases for 1.25.4, 1.24.8, 1.23.14, and 1.22.16 came out last week, including fixes for both of the CVEs. This will be the last patch release for 1.22, so upgrade your cluster now.
Note that the next set of patch releases have been rescheduled to December 7th, with cherry-picks due on the 2nd. SIG-Release has proposed that we stop making release candidates of patch releases given the effort and image cost, so comment there if you have opinions on this change.
The discovery APIs let clients query what APIs exist on the API server. This is used for everything from `kubectl` shell autocomplete to version negotiation. When Kubernetes was young and there were only a few dozen API types, querying each separately worked well enough, but the core APIs have grown and custom APIs (operators) have become widespread. Some single operators, like Crossplane, can register more API types than all of Kubernetes core. Suffice to say, this created scaling and performance challenges. We've used client-side caching to help mitigate things, but with this improved API (and a related server-side change) we have a more complete solution. Now kube-apiserver can cache discovery metadata itself and clients can request it all in one go. For the initial testing, this is being added to clients in a relatively drop-in fashion to mirror the existing caching client, but in the future the plan is to remove the need for client-side caching entirely when aggregated discovery is in use. This should improve both client and server performance and simplify the code, an impressive win for the API team.
If you work on a non-Go client library, definitely check out the new data you've got available; it will likely speed up your tools as well.
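To make the "all in one go" shape concrete, here is a minimal sketch of walking an aggregated discovery response. The sample JSON below is illustrative only: it mimics the nested group → version → resource layout of an `APIGroupDiscoveryList`, but the exact field names and contents are assumptions, not the guaranteed wire schema.

```python
import json

# Illustrative sample shaped like an aggregated discovery response
# (one document covering every group/version/resource); the schema
# here is a sketch, not the authoritative apidiscovery format.
sample = json.loads("""
{
  "kind": "APIGroupDiscoveryList",
  "items": [
    {
      "metadata": {"name": "apps"},
      "versions": [
        {
          "version": "v1",
          "resources": [
            {"resource": "deployments", "scope": "Namespaced",
             "verbs": ["get", "list", "watch", "create", "update", "delete"]}
          ]
        }
      ]
    }
  ]
}
""")

def flatten(discovery):
    """Yield (group, version, resource) tuples from one aggregated response."""
    for group in discovery["items"]:
        group_name = group["metadata"]["name"]
        for ver in group["versions"]:
            for res in ver["resources"]:
                yield (group_name, ver["version"], res["resource"])

print(list(flatten(sample)))  # every API in the cluster, from a single request
```

The point of the new API is exactly this: one request and one loop replace the old fan-out of a separate discovery call per group-version.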
`kubectl config view` has long supported a redaction mechanism for a few hardwired config fields. This PR extends things to use our existing `datapolicy` struct tag to handle things more holistically. This especially helps with user plugin credentials, such as OIDC access tokens. As before, the `--raw` flag will disable redaction. If you are using `kubectl config view` in scripts or automation, you should check whether this change will impact you (or whether you are currently leaking sensitive information).
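The behavior to check for in your scripts looks roughly like this sketch. The field list and function are hypothetical stand-ins for illustration; they are not kubectl's actual `datapolicy`-tagged fields.

```python
import copy

# Hypothetical list of sensitive user fields, for illustration only;
# the real set comes from datapolicy struct tags in the kubectl source.
SENSITIVE_USER_FIELDS = {"token", "client-key-data", "password"}

def redact(kubeconfig, raw=False):
    """Return a copy of a kubeconfig dict with sensitive user fields masked,
    unless raw=True (mirroring the --raw flag's effect)."""
    if raw:
        return kubeconfig
    cfg = copy.deepcopy(kubeconfig)
    for user in cfg.get("users", []):
        creds = user.get("user", {})
        for field in SENSITIVE_USER_FIELDS & creds.keys():
            creds[field] = "REDACTED"
    return cfg

cfg = {"users": [{"name": "dev", "user": {"token": "s3cret"}}]}
print(redact(cfg)["users"][0]["user"]["token"])            # masked
print(redact(cfg, raw=True)["users"][0]["user"]["token"])  # original value
```

If your automation parses the redacted output and expects real credentials, that is the breakage to look for; switch to `--raw` (and make sure the output is handled safely) if you genuinely need the secrets.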
The leases API was originally added to streamline leader elections, but it has found many uses beyond that. A new one added recently, and now promoted to Beta, is that each kube-apiserver creates a lease and renews it as long as it is running. This allows real-time enumeration of which API servers are running. The likely first use for this new data will be automating object storage version updates. Currently storage-version-migrator requires external coordination to ensure it runs to completion between each step when doing a marathon upgrade. Plenty of other things can likely benefit in similar ways though: debugging tools, cluster configuration helpers, etc. A simple and fun extension of an existing API!
`unhealthyPodEvictionPolicy` allows non-ready pods to be evicted even if doing so exceeds the PDB
Test Cleanup: add disruption and jobs test, StorageVersion unit tests
`kubectl events` to beta (no more `kubectl alpha events`)
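The new PDB field mentioned in the list above would be set on the PodDisruptionBudget spec. A hedged example (the field name and `AlwaysAllow` value come from the enhancement; the names and numbers around them are made up for illustration):

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-pdb
spec:
  minAvailable: 2
  # Allow eviction of pods that are Running but not Ready, even when
  # the budget would otherwise block the eviction.
  unhealthyPodEvictionPolicy: AlwaysAllow
  selector:
    matchLabels:
      app: example
```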