The deprecated registry `k8s.gcr.io` now redirects to `registry.k8s.io`. This may cause issues with some builds and deployments, so check yours for image pull errors, and please update your infrastructure and code to use the new registry wherever you can.
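One quick way to audit a running cluster is to list every container image in use and look for the old registry; a minimal sketch using only standard kubectl output:

```shell
# List every container image running in the cluster and flag any still
# pulled from the deprecated registry.
kubectl get pods --all-namespaces \
  -o jsonpath="{range .items[*]}{range .spec.containers[*]}{.image}{'\n'}{end}{end}" \
  | sort -u | grep k8s.gcr.io
```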
The `resource.k8s.io/v1alpha1` API types for dynamic resource allocation (ResourceClaim, ResourceClaimTemplate, ResourceClass, PodScheduling) have been removed and replaced with `resource.k8s.io/v1alpha2`. As this is still an alpha feature, no upgrade path is provided, and any existing objects must be removed before upgrading or you risk database sadness. Some incompatible changes were made as part of this bump, so if you've been trying out this alpha feature, make sure to review the changes and take appropriate action in your development and testing environments.
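If you did create any of these objects, clearing them out before the upgrade is straightforward; a sketch, assuming the alpha feature gate is still enabled so the types are served:

```shell
# Delete all leftover v1alpha1 DRA objects before upgrading.
# ResourceClaim, ResourceClaimTemplate, and PodScheduling are namespaced;
# ResourceClass is cluster-scoped.
kubectl delete resourceclaims,resourceclaimtemplates,podschedulings --all --all-namespaces
kubectl delete resourceclasses --all
```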
Next Deadline: Docs Due, March 21st; Test Freeze, March 22nd
We are in Code Freeze, so please fix any reported test failures or bugs urgently. Final documentation for your enhancements is due for editing on Tuesday, and on Wednesday all changes to tests will be frozen. The release team is also drafting the release notes and the release blog.
Patch releases 1.26.3, 1.25.8, and 1.24.12 are available. Version 1.23 is now EOL; 1.23.17 was the last patch release for that version.
Signing the images for this month’s patch releases did not go well, so you can’t yet enforce signing on all images via cosign. Next month, hopefully.
While the venerable `kubectl logs` command has long provided quick access to log output from containers running in Kubernetes, this hasn't extended to node-level logs. We've slowly been making more node debugging tools available in-band, both to provide a unified experience when debugging node troubles and to better support minimalist OSes, and with this PR we now get the underlying node logs remotely. As with container logging, this is part of the Kubelet API. On Linux it queries journald; on Windows, the Event Log. There isn't yet a dedicated kubectl command for it, but you can still try things out using a command like `kubectl get --raw "/api/v1/nodes/$NODE_NAME/proxy/logs/?query=kubelet"` (after enabling the feature gate, of course). Give it a try in your development clusters and report back to SIG Node if you can!
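The endpoint accepts a few filtering parameters beyond `query`; the ones below come from the KEP, but treat the exact names as subject to change while the feature (gated by `NodeLogQuery` on the kubelet) is alpha:

```shell
# Pick a node and pull its kubelet logs through the API server proxy.
NODE_NAME=$(kubectl get nodes -o jsonpath='{.items[0].metadata.name}')
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/logs/?query=kubelet"

# Narrow the output: last 25 lines, or only entries matching a pattern.
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/logs/?query=kubelet&tailLines=25"
kubectl get --raw "/api/v1/nodes/${NODE_NAME}/proxy/logs/?query=kubelet&pattern=error"
```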
There have been a lot of different solutions to distributing CA trust information in Kubernetes, from the inline embedding in admission webhook configurations to the magic ConfigMap used for in-cluster API access (and lots more from third-party projects). This new ClusterTrustBundle API type seeks to unify them. For now it's mostly just a data holder; the only behaviors so far are basic validations, but the goal is to grow it over time. If you work on any third-party tools that need (or already have) a mechanism to package up trust roots and use them with API objects, consider adding support for this new API when you can. Planned future extensions include reference support in webhooks and the ability to mount the PEM files into containers using the `projected` volume type.
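For a sense of the object's shape, here's a sketch of a minimal bundle in the initial `certificates.k8s.io/v1alpha1` form; the name and certificate are placeholders, and the fields may well change while this is alpha:

```shell
kubectl apply -f - <<EOF
apiVersion: certificates.k8s.io/v1alpha1
kind: ClusterTrustBundle
metadata:
  name: example-trust-bundle   # hypothetical name
spec:
  # One or more PEM-encoded CA certificates; substitute a real certificate,
  # since validation rejects bundles that don't parse.
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    MIIB...   # placeholder
    -----END CERTIFICATE-----
EOF
```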
The existing IP allocator for Services has been with us for a long time, and while battle-hardened it has some long-term limitations that have frustrated many an admin. The biggest of these has been the cap on the subnet size it can allocate from, a /12 for IPv4. A million IPs can seem like a lot until you've got heavy automation creating 10 Services on every commit and suddenly the bitmap allocator is having a very sad day. So we've created a new allocator that lets etcd and the kube-apiserver handle more of the heavy lifting, removing most of the current limitations. It works by using an `IPAddress` API object as a mutual-exclusion lock, together with a new allocator in the Service controller. There's still a lot of room for improvement in the actual allocation algorithms compared to the old system, but this PR adds the basics so folks can kick the tires in 1.27 and see how it behaves.
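If you want to watch the new allocator at work, each allocated ClusterIP is represented as its own API object; a sketch, assuming the alpha `MultiCIDRServiceAllocator` feature gate is enabled on the apiserver:

```shell
# Create a Service, then list the IPAddress objects the new allocator manages.
kubectl create service clusterip demo --tcp=80:80   # "demo" is a hypothetical name
kubectl get ipaddresses.networking.k8s.io
```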
- `kubectl explain` defaults to OpenAPIv3
- `net.ipv4.ip_local_reserved_ports` sysctl setting
- Contextual Logging: defaultbinder, kube-controller-manager, controller utils, daemonset, volumes
- Testing Overhaul: resize policy defaults, fix kubemark deps, standalone test, more standalone, StatefulSet defaulting, e2e pluggability, snapshot resize
- `--subresource` is beta, plus testing
- `PollUntilContextCancel`
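Two of those are easy to try from the command line; flag behavior as of 1.27, and `my-deploy` is just a placeholder:

```shell
# kubectl explain now renders from OpenAPI v3 data by default.
kubectl explain pods.spec.containers

# The --subresource flag (now beta) fetches a subresource directly.
kubectl get deployment my-deploy --subresource=status
```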