New Kernel Articles in LWN
- 1½ Topics: realtime throttling and user-space adaptive spinning. The Linux CPU scheduler will let realtime tasks hog the CPU to the exclusion of everything else, except when it doesn't. At the 2023 Open Source Summit North America, Joel Fernandes covered the problems with the kernel's realtime-throttling mechanism and a couple of potential solutions; a minimal sketch of the throttling knobs follows the list below. As a bonus, since the room was unscheduled for the following slot, attendees were treated to a spontaneous session on adaptive spinning in user space run by André Almeida.
- The 2023 LSFMM+BPF Summit: the first set of reports from the annual gathering of storage, filesystem, memory-management, and BPF developers. Sessions written up so far include:
- A storage-standards update: an overview of what is coming in the storage area, with a focus on CXL 3.0.
- Peer-to-peer DMA: transferring data directly between NVMe devices.
- The state of the page in 2023: an update on the project to use folios everywhere and, eventually, reduce struct page to a single pointer (a hypothetical sketch of that end state follows the list).
- Reconsidering the direct-map fragmentation problem: for years, kernel developers have worried about the performance impacts of breaking up the kernel's direct map. Perhaps that worry was misplaced.
- Memory-management changes for CXL: a discussion of suggested changes to enable better management of CXL-attached memory.
- The future of memory tiering: there is plenty of interest in tiered-memory systems, but many implementation questions remain; a small page-migration sketch follows the list.
- Live migration of virtual machines over CXL: using CXL shared-memory pools to move VMs with no gap in service.
- Memory overcommit in containerized environments: helping virtual-machine managers get memory to the right places.
- User-space control of memory management: another proposed mechanism to allow user space to make memory-management decisions; the madvise() sketch after the list shows the kind of control already available today.
- A 2023 DAMON update: an overview of recent enhancements to the DAMON memory-management subsystem and a look forward to what is coming next.
- High-granularity mappings for huge pages: a technique for using huge pages while suffering less internal fragmentation.
- Computational storage: a new NVMe feature for offloading computation to the storage devices that hold the data.
- FUSE passthrough for file I/O: optimizing Filesystem in Userspace (FUSE) for "passthrough" filesystems.
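For the realtime-throttling item above, here is a minimal sketch, not code from the talk: it prints the two sysctl knobs that implement the throttling (by default 950000µs of realtime execution allowed per 1000000µs period, leaving at least 5% of the CPU for everything else) and switches the calling process to SCHED_FIFO, at which point a busy loop would be subject to that throttling. The priority value of 50 is an arbitrary choice.

```c
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

/* Read one integer sysctl value; returns -1 on any error. */
static long read_knob(const char *path)
{
	long val = -1;
	FILE *f = fopen(path, "r");

	if (f) {
		if (fscanf(f, "%ld", &val) != 1)
			val = -1;
		fclose(f);
	}
	return val;
}

int main(void)
{
	struct sched_param sp = { .sched_priority = 50 };

	printf("sched_rt_runtime_us: %ld\n",
	       read_knob("/proc/sys/kernel/sched_rt_runtime_us"));
	printf("sched_rt_period_us:  %ld\n",
	       read_knob("/proc/sys/kernel/sched_rt_period_us"));

	/* Needs CAP_SYS_NICE (or root); fails harmlessly otherwise. */
	if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
		perror("sched_setscheduler");
		return 1;
	}

	puts("now SCHED_FIFO: a busy loop here would hit RT throttling");
	return 0;
}
```

Writing -1 to sched_rt_runtime_us disables the throttling entirely, which is the blunt workaround that the solutions discussed in the talk try to improve on.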
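The "state of the page" item refers to the long-term goal of reducing struct page to a single pointer. Purely as a hypothetical illustration of that end state, and not the kernel's actual definitions (the field name here is invented), the idea is roughly:

```c
/* Hypothetical sketch only -- not real kernel code. */
struct folio;			/* the descriptor that really tracks the memory */

struct page {
	unsigned long memdesc;	/* single word: a tagged pointer to a folio,
				   slab, or other descriptor for this page */
};
```

Today's struct page is much larger and full of overloaded fields; the article covers the intermediate steps toward something like the above.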
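On the memory-tiering item: tiering is about the kernel deciding when to migrate pages between faster and slower memory, but user space can perform the same kind of migration by hand with move_pages(2), which makes the mechanism concrete. A hedged sketch follows; the target node number (1) is an assumption about the machine (for example, a CPU-less node backed by CXL memory), and it needs libnuma.

```c
#define _GNU_SOURCE
#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE; link with -lnuma */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	long page_size = sysconf(_SC_PAGESIZE);
	void *buf;

	if (posix_memalign(&buf, page_size, page_size) != 0)
		return 1;
	memset(buf, 0, page_size);	/* fault the page in */

	void *pages[1] = { buf };
	int nodes[1] = { 1 };		/* hypothetical slower tier */
	int status[1];

	/* pid 0 means "the calling process". */
	if (move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE) < 0) {
		perror("move_pages");
		return 1;
	}

	/* On success, status[0] holds the node the page now lives on,
	 * or a negative errno if this page could not be moved. */
	printf("page status after migration request: %d\n", status[0]);

	free(buf);
	return 0;
}
```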
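The user-space memory-management item describes a newly proposed mechanism, which is not shown here; as a point of reference, this hedged sketch uses the hints that already exist (MADV_COLD and MADV_PAGEOUT, added in Linux 5.4) to show the kind of decision user space can already communicate: which memory the kernel should reclaim first.

```c
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 16UL << 20;	/* 16 MiB scratch area */
	char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
			 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (buf == MAP_FAILED) {
		perror("mmap");
		return 1;
	}
	memset(buf, 1, len);		/* populate the range */

	/* Deactivate the pages: reclaim these before hotter memory. */
	if (madvise(buf, len, MADV_COLD) != 0)
		perror("madvise(MADV_COLD)");

	/* Or go further and ask for immediate reclaim of the range. */
	if (madvise(buf, len, MADV_PAGEOUT) != 0)
		perror("madvise(MADV_PAGEOUT)");

	munmap(buf, len);
	return 0;
}
```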