Week Ending June 19, 2022
The June 16th community meeting covered: introducing the new Release Lead, Cici Huang; the image registry move; the retirement of Service Catalog; a review by Paris Pittman of the Kubernetes Annual Report. Folks also discussed some of the sessions at the Contributor Summit, particularly the ones related to project sustainability and the Release Team. Notes, Video
Next Deadline: Enhancements Due, June 23rd
This is the final deadline for getting your Enhancement KEP ready for 1.25 and added to the Enhancements spreadsheet. There are currently 61 enhancements listed for 1.25, but 38 of those are incomplete and may be dropped from the release. If one of those is yours, make sure to finish it by Thursday.
After a short regression-based delay, patch releases 1.24.2, 1.23.8, 1.22.11, and 1.21.14 are out. This round includes a golang update, plus fixes for EndpointSlices and pod status issues.
This is the final update for 1.21. If you are using 1.21 or earlier, please make upgrade plans ASAP.
Building off a design from the old “design proposals” system, this KEP proposes a new API for CRI plugins to communicate lifecycle events and other status information to the Kubelet. Right now the API is purely call-and-response: the Kubelet has to scrape information on all containers once per second, diff that against the current state, and build an event stream. While functional, this imposes a baseline level of CPU consumption. This KEP would add a gRPC streaming call so the CRI plugin can push updates to the Kubelet as soon as it knows something has happened. Streaming is not expected to cover all eventualities; unexpected state changes can still creep in thanks to the annoying reality of physical hardware and whatnot, so the Kubelet will continue to poll as well, but at a reduced frequency that uses fewer CPU cycles. Small savings add up quickly in large clusters, and also improve things for folks using Kubernetes in resource-constrained environments like edge or embedded systems.
Storage management currently has a small “hole”: when changing or swapping the default StorageClass, there is a short gap where new PVCs expecting to use the default class end up stranded and permanently jammed until a human kicks them. This arises from the interaction between the static provisioning and dynamic provisioning rules in the storage controller. This KEP proposes a slight behavior tweak: when a StorageClass is marked as default (or created with the default annotation), the SC controller will check for any unbound PVC which is still pending and assign the class to it. This allows the normal dynamic provisioning process to kick in, just as if that class had been the default when the PVC was created. This is technically a compat-relevant change, as funky users of the static provisioning system may not expect the new behavior. If that sounds like you, please talk to SIG-Storage when you get a chance.
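A rough sketch of the proposed controller pass, assuming heavily simplified types (this is not the real client-go API): any pending PVC that asked for the default class (nil StorageClassName) gets the new default assigned retroactively, which unjams it.

```go
// Hypothetical sketch of retroactive default StorageClass assignment.
package main

import "fmt"

type PVC struct {
	Name             string
	StorageClassName *string // nil means "use the cluster default"
	Phase            string  // "Pending" or "Bound"
}

// assignDefaultClass mimics the proposed SC controller pass: any unbound PVC
// still waiting on a default class gets the new default set on it, so dynamic
// provisioning can proceed as if the default had existed at creation time.
func assignDefaultClass(pvcs []*PVC, defaultClass string) []string {
	var updated []string
	for _, pvc := range pvcs {
		if pvc.Phase == "Pending" && pvc.StorageClassName == nil {
			sc := defaultClass
			pvc.StorageClassName = &sc
			updated = append(updated, pvc.Name)
		}
	}
	return updated
}

func main() {
	gold := "gold"
	pvcs := []*PVC{
		{Name: "waiting", Phase: "Pending"},                           // stranded: no class, no default at creation
		{Name: "explicit", Phase: "Pending", StorageClassName: &gold}, // requested a class explicitly; untouched
		{Name: "done", Phase: "Bound"},                                // already provisioned; untouched
	}
	fmt.Println(assignDefaultClass(pvcs, "standard")) // prints "[waiting]"
}
```

Note that PVCs which named a class explicitly are left alone; only the "expecting the default" case changes, which is exactly where static-provisioning users could be surprised.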
With so many vendors and users in our ecosystem, it can be hard to get a top-level view of security issues across Kubernetes projects. To help ease the load, SIG-Security and the SRC team will create a JSON feed of all such issues that will be updated automatically. The URL(s) for this feed haven’t been finalized, so stay tuned for future updates on how to consume things.
Flaky Test Cleanup: fix leaking goroutines in multiple tests, remove duplicate Scheduler config, clean up Node defaults, close server with defer