Linux Kernel Coverage in LWN
-
LWN ☛ Linux's missing CRL infrastructure
In July 2024, Let's Encrypt, the nonprofit TLS certificate authority (CA), announced that it would be ending support for the online certificate status protocol (OCSP), which is used to determine whether a server's certificate has been revoked. Revocation checking prevents a compromised key from being used to impersonate a web server. The organization cited privacy concerns, and recommended that people rely on certificate revocation lists (CRLs) instead. On August 6, Let's Encrypt followed through and disabled its OCSP service. This poses a problem for Linux systems that must now rely on CRLs because, unlike on other operating systems, there is no standardized way for Linux programs to share a CRL cache.
CRLs are, as the name might suggest, another solution to the problem of certificate revocation. If a web server loses control of the private key behind its certificate, the administrator is supposed to report this fact to the web site's certificate authority, which will publish a revocation for it. Clients periodically download lists of these revocations from the certificate authorities that they know about; if an attacker tries to use a revoked certificate to impersonate a web site, the client can notice that the certificate is present in the list and refuse to connect. If the certificate revocation system were ever to stop working, an attacker with stolen credentials could perform a man-in-the-middle attack to impersonate a web site or eavesdrop on the user's communications with it.
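To make that check concrete, here is a minimal sketch, using OpenSSL's X.509 API, of how a client can look a certificate up in a CRL it has already downloaded. The file names are hypothetical, and a real client would also verify the CRL's own signature and freshness before trusting it:

    /* Sketch: look a certificate up in a downloaded CRL with OpenSSL.
     * File names are hypothetical; a real client must also verify the
     * CRL's own signature and check its nextUpdate time. */
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/x509.h>

    int main(void)
    {
        FILE *f;
        X509 *cert;
        X509_CRL *crl;
        X509_REVOKED *rev = NULL;

        f = fopen("server-cert.pem", "r");          /* hypothetical path */
        if (!f || !(cert = PEM_read_X509(f, NULL, NULL, NULL)))
            return 1;
        fclose(f);

        f = fopen("ca.crl.pem", "r");               /* hypothetical path */
        if (!f || !(crl = PEM_read_X509_CRL(f, NULL, NULL, NULL)))
            return 1;
        fclose(f);

        /* Search the revocation list for this certificate's serial number. */
        if (X509_CRL_get0_by_cert(crl, &rev, cert))
            printf("certificate has been revoked\n");
        else
            printf("certificate not found in this CRL\n");

        X509_CRL_free(crl);
        X509_free(cert);
        return 0;
    }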
This system worked well enough in the early days of the web, but as the internet grew, the number of certificate revocations grew along with it. In 1999, RFC 2560 standardized the online certificate status protocol (OCSP); it was later updated by RFC 6960. Clients using OCSP send a request directly to the certificate authority to validate the certificate of each site they wish to contact. This had the benefit of freeing every client from having to store the CRL of every certificate authority it trusted, but it had a number of drawbacks.
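For comparison, the sketch below (again using OpenSSL, with hypothetical file names and error checking omitted) builds the DER-encoded OCSP request that a client would send to the CA's responder for a single certificate:

    /* Sketch: build the DER-encoded OCSP request for one certificate.
     * A client would POST the result to the responder URL found in the
     * certificate's authorityInfoAccess extension.  File names are
     * hypothetical and error checking is omitted. */
    #include <stdio.h>
    #include <openssl/pem.h>
    #include <openssl/ocsp.h>

    int main(void)
    {
        FILE *f;
        X509 *cert, *issuer;
        OCSP_REQUEST *req;
        unsigned char *der = NULL;
        int len;

        f = fopen("server-cert.pem", "r");          /* hypothetical path */
        cert = PEM_read_X509(f, NULL, NULL, NULL);
        fclose(f);
        f = fopen("issuer-cert.pem", "r");          /* hypothetical path */
        issuer = PEM_read_X509(f, NULL, NULL, NULL);
        fclose(f);

        /* Identify the certificate by hashes of the issuer's name and
         * key, plus the certificate's serial number. */
        req = OCSP_REQUEST_new();
        OCSP_request_add0_id(req, OCSP_cert_to_id(NULL, cert, issuer));

        len = i2d_OCSP_REQUEST(req, &der);          /* DER-encode */
        printf("encoded OCSP request: %d bytes\n", len);
        return 0;
    }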
-
LWN ☛ Bringing restartable sequences out of the niche
The restartable sequences feature, which was added to the 4.18 kernel in 2018, exists to enable better performance in certain types of threaded applications. Restartable sequences do have their users, but those users tend to be relatively specialized code; this is not a tool that most application developers reach for. Over time, though, the use of restartable sequences has grown, and it looks set to grow further as the feature is tied to new capabilities provided by the kernel. As restartable sequences become less of a niche feature, though, some problems have turned up; fixing one of them may involve an ABI change visible in user space.
A restartable sequences overview
As the number of CPUs in a system grows, so does the desire to write and run highly parallel programs. In user space, as well as in the kernel, concurrent code using locks eventually runs into scalability problems, leading to an interest in the use of lockless algorithms instead. The kernel has a distinct advantage when it comes to lockless concurrent access, though, in that code running in kernel space can suppress interrupts, preemption, and migration to another CPU during critical sections; user space has no such guarantees. So any user-space lockless algorithm must work correctly in an environment where code execution can be interrupted at any time.
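By way of illustration, here is a minimal sketch of the first step in using the feature: registering a struct rseq area with the kernel and reading the current CPU number from it. Note that glibc 2.35 and later performs this registration itself at thread startup, so a direct registration like this one can fail with EBUSY on such systems:

    /* Minimal sketch: register a restartable-sequences area with the
     * kernel and read the current CPU number from it.  Needs a 4.18+
     * kernel; on glibc 2.35+ the registration below may fail with
     * EBUSY because the C library has already registered one. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>
    #include <linux/rseq.h>

    /* Any signature can be chosen at registration; the kernel compares
     * it against the word preceding each abort handler. */
    #define RSEQ_SIG 0x53053053

    static __thread struct rseq rseq_area;

    int main(void)
    {
        if (syscall(__NR_rseq, &rseq_area, sizeof(rseq_area), 0, RSEQ_SIG)) {
            fprintf(stderr, "rseq registration failed: %s\n",
                    strerror(errno));
            return 1;
        }
        /* The kernel keeps cpu_id current; reading it here is far
         * cheaper than a getcpu() system call. */
        printf("running on CPU %u\n", rseq_area.cpu_id);
        return 0;
    }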
-
LWN ☛ Shadow-stack control in clone3()
Shadow stacks are a control-flow-integrity feature designed to defend against exploits that manipulate a thread's call stack. The kernel first gained support for hardware-implemented shadow stacks, for the x86 architecture, in the 6.6 release; 64-bit Arm support followed in 6.13. This feature does not give user space much control over the allocation of shadow stacks for new threads, though; a patch series from Mark Brown may, after many attempts, finally be about to change that situation.
As its name suggests, a shadow stack is a sort of copy of a thread's ordinary call stack, but its contents are limited to return addresses. On a system with shadow-stack support, each function call will push the return address onto both the normal and shadow stacks. On return from a function, the return addresses are popped from both stacks and compared; if they do not match, some sort of corruption has occurred and the thread in question is killed.
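The clone3() interface for this is still under discussion, but the underlying allocation primitive is already in the kernel; the sketch below uses the map_shadow_stack() system call (merged in 6.6) to allocate a shadow stack by hand, which is what code wanting control over shadow-stack placement must currently do. The fallback syscall number and flag value are taken from the x86-64 UAPI headers:

    /* Sketch: allocate a shadow stack with map_shadow_stack() (Linux
     * 6.6+).  Fails on kernels or CPUs without shadow-stack support. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <errno.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    #ifndef __NR_map_shadow_stack
    #define __NR_map_shadow_stack 453       /* x86-64 value */
    #endif
    #ifndef SHADOW_STACK_SET_TOKEN
    #define SHADOW_STACK_SET_TOKEN 0x1      /* place a restore token at the top */
    #endif

    int main(void)
    {
        unsigned long size = 8 * 4096;
        /* The kernel chooses the address and marks the memory as
         * shadow-stack memory, written only by call/return and a few
         * special instructions. */
        void *ss = (void *)syscall(__NR_map_shadow_stack, 0UL, size,
                                   SHADOW_STACK_SET_TOKEN);

        if (ss == (void *)-1) {
            fprintf(stderr, "map_shadow_stack: %s\n", strerror(errno));
            return 1;
        }
        printf("shadow stack allocated at %p\n", ss);
        return 0;
    }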