Voting in the Steering Committee Election is now open. We have a record 13 well-known contributors running for the four available seats. Please cast your ballot and help choose who will lead us in 2022-2023. Polls close on November 4.
SIG-Scheduling now has separate Tech Leads and Chairs, starting with new TL Aldo Culquicondor.
Next Deadline: November 1, exception requests due
October patch releases were pushed back because of KubeCon, which means you still have time to get a cherry-pick in until October 22, for a planned release on October 27.
In the early days of multi-CPU computers, Linux chose to default to symmetric multiprocessing (SMP), which treats every CPU as identical. While most hardware at least somewhat supports this, these days different CPUs will generally have faster access to one region of memory/cache than another. This is non-uniform memory access, or NUMA. The "sockets" and "cores" abstractions in both Linux and Kubernetes were built on the older SMP assumptions, which can lead to frustration when trying to take advantage of NUMA hardware (which, again, is most hardware). This week we have two big updates to NUMA handling. The first is #102015, which allows CPUManager to allocate based on NUMA nodes rather than cores, so the Kubelet will make sure your container's processes all run together on one NUMA node. This is joined by #105631, which adds support for a distribute-cpus-across-numa policy option in CPUManager: if a container's requested CPU count won't all fit in one NUMA node, the Kubelet will divide the CPUs evenly across all the required NUMA nodes rather than packing them in as tightly as possible and leaving the last node mostly unused. Together these changes let you get even more performance out of your existing hardware by exploiting its inherent strengths.
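If you want to experiment once this lands in a release, here is a minimal sketch of what turning the behaviour on in the kubelet config might look like. It assumes the option keeps the name used in #105631 and is passed through the kubelet's `cpuManagerPolicyOptions` map alongside the `static` CPU manager policy; depending on your release it may also need a CPU manager policy-options feature gate enabled.

```yaml
# Sketch only: enable the new CPU distribution behaviour in the kubelet.
# The static policy is a prerequisite; without it CPUManager never assigns
# exclusive CPUs to pin in the first place. The option name below is taken
# from #105631.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cpuManagerPolicy: static
cpuManagerPolicyOptions:
  distribute-cpus-across-numa: "true"
```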
- … -1 suffix
- `kubectl diff --invalid-argument` return status code …
- Structured Logging: JSON logging has been refactored and has new config options, plus kube-proxy and the ipvs proxier have been ported; a minimal example of switching JSON output on follows below
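For context, this sketch only shows how JSON log output is selected, not the new options from the refactor. It assumes your release exposes the `logging` section in KubeletConfiguration; the `--logging-format=json` command-line flag is the equivalent.

```yaml
# Minimal sketch: ask the kubelet to emit structured JSON logs.
# This only flips the output format; the refactor's new knobs are not shown.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
logging:
  format: json
```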
SIG-APIMachinery is trying to improve API Priority and Fairness for Watch requests. If you want to join the discussion, see #105683.