news
Fedora and Red Hat Leftovers
-
Fedora Project ☛ Fedora Community Blog: Outreachy Internship Update: Building the Fedora Release Schedule Planner API
As part of my Outreachy internship with the Fedora Project, I’m building an API to modernize how Fedora plans its release cycles.
With the help of my mentor Tomáš Hrčka, the goal is to replace the XML-heavy system currently hosted on pagure.io with something flexible, easy to use, and well-structured.
-
Red Hat Official ☛ Red Hat Enterprise Linux Common Criteria and FIPS certificates
As a global organization with employees and customers around the world, Red Hat recognizes that there are a multitude of compliance mandates that different regions and industries must adhere to. This post provides some important updates on recent certifications and validations that various supported releases of Red Hat Enterprise Linux (RHEL) have obtained.
-
Red Hat ☛ V2 fast event notifications: A major advance with O-RAN compliance
In a telecommunications network environment, it's crucial to have access to low-latency synchronization events, such as those used in the Precision Time Protocol (PTP). Precise timing is essential for applications such as financial transactions, telecommunications, and real-time data processing, where even small timing discrepancies can lead to significant errors. To facilitate this, Red Hat has re-architected the REST API to provide direct and efficient access to event notifications while fully adhering to the latest O-RAN standards.
-
Red Hat ☛ A roadmap of the OpenShift boot images update
Red Hat is updating boot images across all our platforms. We will perform the updates on a platform-by-platform basis. We aim to initially release it as an opt-in feature. Once all the kinks have been worked out for a particular platform, we will turn on boot image updates by default for that platform. A cluster administrator could still opt out of boot image updates. For user-managed clusters and those that do not use machine resources like MachineSets, we will provide documentation to perform boot image updates.
-
Red Hat ☛ Optimizing generative AI models with quantization
Generative AI (gen AI) models are top of mind for many organizations and individuals looking to increase productivity, gain new insights, and build new kinds of interactive services. Many organizations have delegated running these models to third-party cloud services, seeking to get a head start by building on the capabilities those services offer. Some organizations face regulatory pressure that makes this difficult, though, and many others are finding that the costs of hosted generative AI models are not always in line with the value they get from them.
-
Red Hat ☛ Introduction to supervised fine-tuning dataset formats
Large language models (LLMs) are conventionally trained in two phases. In the pre-training phase, the model sees gigantic corpora of unlabeled text from the internet and is trained to predict the next token. In the post-training phase, the model sees data that resembles the behavior of a chatbot and learns to predict subsequent tokens that align with that behavior. Supervised fine-tuning (SFT) is the most common post-training technique used to improve the alignment of large language models after pre-training.
SFT uses datasets of formatted prompt/response pairs or simulated conversations to familiarize the model with chat/assistant-style interactions. You can format these datasets in many ways for various purposes, which we will explore in this article.
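As a rough illustration of the two layouts the summary mentions, here is a minimal sketch of an instruction-style record and a conversational ("messages") record, serialized as JSON Lines. The field names follow widely used community conventions, not any specific format from the linked article:

```python
import json

# 1. Instruction format: one prompt/response pair per record.
instruction_record = {
    "prompt": "Summarize photosynthesis in one sentence.",
    "response": "Plants convert light, water, and CO2 into sugars and oxygen.",
}

# 2. Conversational format: a simulated multi-turn chat, each turn
#    tagged with a role the model learns to imitate.
conversation_record = {
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is SFT?"},
        {"role": "assistant",
         "content": "Supervised fine-tuning on labeled prompt/response data."},
    ]
}

# SFT datasets are commonly stored as JSON Lines: one record per line.
jsonl = "\n".join(json.dumps(r) for r in (instruction_record, conversation_record))
print(len(jsonl.splitlines()))  # 2 records
```

The conversational layout generalizes the instruction layout: a single prompt/response pair is just a two-turn conversation without a system message.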