news
Red Hat and IBM Attacking Accessibility (Wayland Doesn't Work for Blind People), More Red Hat Picks
-
TheEvilSkeleton ☛ Hari Rana: It’s True, “We” Don’t Care About Accessibility on Linux [Ed: No, GNOME and IBM are attacking blind people; all they care about is vendor lock-in. It has gotten so bad that Red Hat staff attacks journalists for merely daring to point out Wayland's limitations.]
Introduction
What do virtue-signalers and privileged people without disabilities who share content about accessibility on GNU/Linux being trash have in common? They don’t actually care about the group they’re defending; they just exploit these victims’ unfortunate situation to fuel hate against groups and projects actually trying to make the world a better place.
-
Bryan Lunduke ☛ Phoronix Suggests Open Source Projects Should be Controlled by Big Tech
Phoronix spent the last few years praising the X11Libre developer.
-
Bryan Lunduke ☛ Fedora Silences Support for Xorg Fork, But Other Distros Voice Support
Red Hat does not want you to know that X Windows still exists, but Devuan & OpenMandriva support X11Libre.
-
Red Hat Official ☛ SiriusXM modernizes virtualization without missing a beat
The outcome? Faster developer velocity, leaner cost structure, and near-zero downtime, driven by a platform approach that brings virtualization and containers under one control plane.
-
Red Hat Official ☛ MLPerf Inference v5.0 results with Supermicro’s GH200 Grace Hopper Superchip-based Server and Red Hat OpenShift
Meta released the Llama 2 70B model on July 18, 2023. This model is open source and part of the very popular Llama family of models, which range from 7 billion to 70 billion parameters. In this round of the MLPerf Inference Datacenter benchmark, 17 organizations submitted Llama 2 70B results, making it the most popular model in the round. The Supermicro MLPerf v5.0 dual-GPU GH200 server submission ran OpenShift 4.15 and NVIDIA TRT-LLM for the server stack. TRT-LLM uses post-training quantization to quantize Llama 2 70B to FP8 precision (8-bit floating point). FP8 dramatically reduces the memory footprint and bandwidth requirements, allowing larger batch sizes and longer sequences. FP8 quantization also allows faster computation, but is less precise. This quantized model was used in the Supermicro MLPerf v5.0 submission and takes advantage of the FP8 hardware in GH200 systems.
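[Ed: For readers unfamiliar with FP8 post-training quantization, here is a minimal sketch of the idea, not the TRT-LLM implementation: it quantizes a single weight matrix per-tensor to the E4M3 format and assumes PyTorch 2.1 or later, which provides the torch.float8_e4m3fn dtype; real toolchains add calibration and per-channel scaling on top of this.]

    # Minimal illustration of FP8 (E4M3) post-training quantization of one
    # weight matrix. Assumes PyTorch >= 2.1 (torch.float8_e4m3fn); this is a
    # simplified per-tensor scheme, not what TRT-LLM does internally.
    import torch

    FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3

    def quantize_fp8(w: torch.Tensor):
        """Scale a float32 tensor into the E4M3 range and cast it to FP8."""
        scale = w.abs().max() / FP8_E4M3_MAX          # per-tensor scale factor
        w_fp8 = (w / scale).to(torch.float8_e4m3fn)   # 1 byte per element
        return w_fp8, scale

    def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor):
        """Recover an approximate float32 tensor to measure the precision loss."""
        return w_fp8.to(torch.float32) * scale

    if __name__ == "__main__":
        w = torch.randn(4096, 4096)               # stand-in for one weight matrix
        w_fp8, scale = quantize_fp8(w)
        w_back = dequantize_fp8(w_fp8, scale)
        print("fp32 bytes:", w.element_size() * w.nelement())           # 4 bytes/element
        print("fp8 bytes: ", w_fp8.element_size() * w_fp8.nelement())   # 1 byte/element
        print("mean abs error:", (w - w_back).abs().mean().item())

Shrinking each stored weight to one byte (a quarter of FP32, half of FP16) is what frees the memory and bandwidth for the larger batch sizes and longer sequences mentioned above, at the cost of the rounding error the last line prints.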
-
Red Hat ☛ OpenShift Lightspeed: Assessing Hey Hi (AI) for OpenShift operations [Ed: Lots of hype about Hey Hi (AI), not enough actual substance]
Large language models (LLMs) have made remarkable strides in recent years, and their integration as Hey Hi (AI) assistants in technical environments is quickly becoming part of everyday workflows. These tools are now being used to handle a growing range of complex tasks, so it’s only natural to wonder how far they can really go. Red Hat OpenShift Lightspeed is no exception. This Hey Hi (AI) assistant built into OpenShift simplifies tasks, accelerates workflows, and helps users become more productive when administering OpenShift clusters.
-
Red Hat ☛ Assessing Hey Hi (AI) for OpenShift operations: Advanced configurations
Welcome to the second part of this blog series diving into Red Hat OpenShift Lightspeed and its performance in real-world OpenShift certification scenarios like the Red Hat Certified OpenShift Administrator exam. If you haven't read the first post, you can find Part 1 here.
Here, we're starting fresh with a new hands-on exercise.
-
Red Hat ☛ OpenShift Data Foundation and HashiCorp Vault securing data
Managing secrets securely is non-negotiable for many enterprises today, across cloud and on-premises environments. Organizations are poised to take advantage of deeper integration opportunities between HashiCorp Vault and Red Hat OpenShift to strengthen their security posture.