LWN Articles About Linux Kernel
-
LWN ☛ Some 6.16 development statistics
The 6.16 development cycle was another busy one, with 14,639 non-merge changesets pulled into the mainline — just 18 commits short of the total for 6.15. The 6.16 release happened on July 27, as expected. Also as expected, LWN has put together its traditional look at where the code for this release came from.
Work on 6.16 came from 2,057 developers, a reasonably high number relative to previous releases. Of those, though, 310 contributed their first patch to the kernel this time around, the highest new-contributor rate since the release of 6.12 (335 new developers) in late 2024.
-
LWN ☛ A proxy-execution baby step
Priority inversion comes about when a low-priority task holds a resource that is also needed by a high-priority task, preventing the latter from running. This problem is made much worse if the low-priority task is unable to gain access to the CPU and, as a result, cannot complete its work and free the resources it holds. Proxy execution is a potential solution to this problem, but it is a complex solution that has been under development for several years; LWN first looked at it in 2020. The 6.17 kernel is likely to contain an important step forward for this long-running project.
The classic solution for priority inversion is priority inheritance; if a high-priority task finds itself blocked on a lock, it lends its priority to the lock holder, allowing the holder to progress and release the lock. Linux implements priority inheritance for the realtime scheduling classes, but that approach is not really applicable to the normal scheduling classes (where priorities are far more dynamic) or to the deadline class (which has no priorities at all). A different tack is therefore called for.
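The realtime-class side of this is visible from user space: a lock can opt into priority inheritance through the POSIX mutex-protocol attribute, which glibc implements on top of the kernel's PI futexes. A minimal sketch, with error checking omitted (a realtime waiter blocking on this mutex would temporarily boost the holder's priority):

    #define _GNU_SOURCE
    #include <pthread.h>

    int main(void)
    {
        pthread_mutexattr_t attr;
        pthread_mutex_t lock;

        pthread_mutexattr_init(&attr);
        /* Request priority inheritance: while a higher-priority thread
         * blocks on this lock, the current holder runs at the blocked
         * thread's priority until it releases the lock. */
        pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
        pthread_mutex_init(&lock, &attr);

        pthread_mutex_lock(&lock);
        /* ... critical section: a blocked realtime waiter boosts us ... */
        pthread_mutex_unlock(&lock);

        pthread_mutex_destroy(&lock);
        pthread_mutexattr_destroy(&attr);
        return 0;
    }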
That tack is proxy execution. While priority inheritance donates a task's priority to another, proxy execution also donates the waiting task's available CPU time. In short, if a high-priority ("donor") task finds itself waiting on a lock, the lock holder (the "proxy") is allowed to run in its place, using the donor's time on the CPU to get its work done. It is a relatively simple idea, but the implementation is anything but.
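To make the selection/execution split concrete, here is a toy user-space model of the idea; the three tasks, their priorities, and the single lock chain are invented for illustration, and the real kernel work happens in the scheduler's pick_next_task() path:

    #include <stdio.h>

    struct task {
        const char *name;
        int prio;        /* higher number = higher priority */
        int blocked_on;  /* index of the lock-holding task, or -1 if runnable */
    };

    int main(void)
    {
        struct task tasks[] = {
            { "low (lock holder)",  1, -1 },
            { "medium",             5, -1 },
            { "high (wants lock)",  9,  0 },  /* blocked on task 0's lock */
        };
        int n = 3, donor = 0;

        /* Selection: pick the highest-priority task, blocked or not. */
        for (int i = 1; i < n; i++)
            if (tasks[i].prio > tasks[donor].prio)
                donor = i;

        /* Execution: follow the blocked_on chain to the task that can
         * actually make progress, and run it on the donor's time. */
        int proxy = donor;
        while (tasks[proxy].blocked_on != -1)
            proxy = tasks[proxy].blocked_on;

        printf("selected: %s (prio %d)\n", tasks[donor].name, tasks[donor].prio);
        printf("running:  %s, on the donor's CPU time\n", tasks[proxy].name);
        return 0;
    }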
-
LWN ☛ Extending run-time verification for the kernel
There are a lot of things people expect the Linux kernel to do correctly. Some of these are checked by testing or static analysis; a few are ensured by run-time verification: checking a live property of a running Linux system. For example, the scheduler has a handful of different correctness properties that can be checked in this way. Nam Cao posted a patch series that aims to extend the kinds of properties that the kernel's run-time verification system can check, by adding support for linear temporal logic (LTL). The patch set has seen eleven revisions since the first version in March 2025, and recently made it into the linux-next tree, from which it seems likely to reach the mainline kernel soon.
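For a sense of what such properties look like, LTL combines ordinary boolean connectives with temporal operators such as "always" and "eventually" over a trace of events. As a hypothetical example in the spirit of the series (the concrete syntax in the patches may differ), a rule stating that a task running with a realtime policy never raises a page fault could read:

    RULE = always (RT imply not PAGEFAULT)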
Run-time analysis is present everywhere in the kernel; lockdep, for example, is a kind of run-time verification. But instrumenting the whole kernel for each kind of verification that people may want to perform is infeasible. The run-time verification subsystem allows for tracking more complex properties by hooking into the kernel's existing tracing infrastructure. For example, run-time verification can be used to ensure that a system schedules tasks correctly; there are options to ensure that task switches only occur during a call to __schedule(), that the scheduler is called in a context where it is safe to do so, and various other properties of the scheduler interface that depend on the global state of the system. Each property that is checked in this way is represented by a per-CPU or per-task state machine called a monitor. Tracing events drive the transitions in these machines. If a monitor ever reaches an error state, the kernel can be configured to log an error message or panic.
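As a sketch of the mechanism, the toy monitor below encodes the "task switches only occur during __schedule()" property as a small automaton driven by trace-style events; the event names and the sample trace are invented for illustration:

    #include <stdio.h>

    enum event { SCHEDULE_ENTRY, SCHEDULE_EXIT, TASK_SWITCH };
    enum state { OUTSIDE, INSIDE, ERROR };

    /* One transition of the monitor's state machine. */
    static enum state step(enum state s, enum event e)
    {
        switch (s) {
        case OUTSIDE:
            if (e == SCHEDULE_ENTRY) return INSIDE;
            if (e == TASK_SWITCH)    return ERROR;  /* switch outside __schedule() */
            return OUTSIDE;
        case INSIDE:
            if (e == SCHEDULE_EXIT)  return OUTSIDE;
            return INSIDE;                          /* switching here is legal */
        default:
            return ERROR;
        }
    }

    int main(void)
    {
        /* A sample event stream; the final event violates the property. */
        enum event trace[] = { SCHEDULE_ENTRY, TASK_SWITCH,
                               SCHEDULE_EXIT, TASK_SWITCH };
        enum state s = OUTSIDE;

        for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); i++) {
            s = step(s, trace[i]);
            if (s == ERROR) {
                printf("monitor: property violated at event %u\n", i);
                return 1;  /* the kernel would log or panic instead */
            }
        }
        printf("monitor: trace OK\n");
        return 0;
    }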
-
LWN ☛ Rethinking the Linux cloud stack for confidential VMs [Ed: Fake security, more restrictions, not confidentiality but remote controls of GAFAM]
There is an inherent limit to the privacy of the public cloud. While Linux can isolate virtual machines (VMs) from each other, nothing in the system's memory is ultimately out of reach for the host cloud provider. To accommodate the most privacy-conscious clients, confidential computing protects the memory of guests, even from hypervisors. But the Linux cloud stack needs to be rethought in order to host confidential VMs, juggling two goals that are often at odds: performance and security.
Isolation is one of the most effective ways to secure the system by containing the impact of buggy or compromised software components. That's good news for the cloud, which is built around virtualization — a design that fundamentally isolates resources within virtual machines. This is achieved through a combination of hardware-assisted virtualization, system-level orchestration (like KVM, the hypervisor integrated into the kernel), and higher-level user-space encapsulation.
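For reference, the kernel's side of that stack is exposed to user space as an ioctl() interface on /dev/kvm; a minimal sketch of creating an empty, isolated virtual machine (error handling trimmed to the essentials):

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) { perror("open /dev/kvm"); return 1; }

        /* Sanity-check the stable KVM API (12 on all modern kernels). */
        printf("KVM API version: %d\n", ioctl(kvm, KVM_GET_API_VERSION, 0));

        /* Create a new, empty VM: no memory and no vCPUs yet. */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vm < 0) { perror("KVM_CREATE_VM"); close(kvm); return 1; }

        close(vm);
        close(kvm);
        return 0;
    }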