Red Hat / IBM Leftovers
-
Red Hat ☛ How to configure granular access in OpenShift Dev Spaces
Even though the current trend is to split an infrastructure into a number of "fit-for-purpose" clusters instead of having one gigantic monolithic Red Hat OpenShift cluster, administrators still want to provide granular access and restrict certain functionality for particular users.
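As a rough sketch of what such granular access looks like in practice, OpenShift access is typically scoped with standard RBAC via the `oc` CLI (the user and project names below are hypothetical, not from the article):

```shell
# Grant a user read-only access to a single project (names are hypothetical)
oc adm policy add-role-to-user view alice -n devspaces-team

# Grant another user edit rights, but only within one namespace,
# rather than binding a cluster-wide role
oc adm policy add-role-to-user edit bob -n devspaces-team
```

Binding namespace-scoped roles like `view` and `edit` instead of cluster roles is the usual way to keep per-user permissions narrow.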
-
Red Hat ☛ How to deploy Meteor.js applications to OpenShift
-
Red Hat Official ☛ Red Hat Trusted Artifact Signer with Enterprise Contract: Trustable container images
Before starting, we must deploy Trusted Artifact Signer on our Red Hat OpenShift cluster by following Chapter 1 of the Deployment Guide. Be sure to also source the ./tas-env-variables.sh script to set up the shell variables (URLs) for the Sigstore service endpoints (Fulcio, Rekor, etc.).
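A minimal sketch of that setup step, assuming the script exports endpoint URLs as environment variables (the variable names shown are illustrative; check the script itself for the actual names):

```shell
# Load the Sigstore endpoint URLs into the current shell.
# Note: "source" (not plain execution) is required so the
# exported variables persist in this shell session.
source ./tas-env-variables.sh

# Verify the endpoints are set (variable names are assumptions)
echo "Fulcio: ${FULCIO_URL:-unset}"
echo "Rekor:  ${REKOR_URL:-unset}"
```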
-
Red Hat Official ☛ Red Hat Performance and Scale Engineering
We work closely with developers early in the development process to validate that their software design will perform and scale well. We also collaborate with hardware and software partners to ensure that our software performs and scales well with their technology. Finally, we engage with customers on innovative deployments, applying our expertise to help them get the best performance and scale for their workloads.
-
Red Hat Official ☛ MLRun Community Edition on Red Hat OpenShift
MLRun serves as an open MLOps framework, facilitating the development and management of continuous machine learning (ML) applications throughout their lifecycle. It integrates with your development and CI/CD environment, automating the delivery of production data, ML pipelines and online applications, which in turn helps to reduce engineering effort, time to production and computation resources. This orchestration allows for greater flexibility and efficiency when scaling and managing intelligent applications in production. MLRun also fosters collaboration and accelerates continuous improvement by breaking down silos between data, ML, software and DevOps/MLOps teams.
-
Red Hat Official ☛ Accessing Azure blob storage with Red Hat OpenShift sandboxed containers peer-pods [Ed: Red Hat is marketing its wares for Microsoft]
Currently, there is no support for Container Storage Interface (CSI) persistent volumes for the peer-pods solution in OpenShift. However, some alternatives are available depending on your environment and use case. For example, if you have configured the peer-pods solution on Azure and have a workload that needs to process data stored in Azure Blob Storage, Microsoft's object storage solution for the cloud, then you can use Azure BlobFuse.
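As a hedged sketch of the BlobFuse approach (the mount path and config file name are hypothetical, and the workload-specific setup from the article is omitted), BlobFuse2 exposes a blob container as a local filesystem:

```shell
# Mount an Azure Blob Storage container as a local filesystem so the
# workload can read it with ordinary file I/O (paths are hypothetical)
blobfuse2 mount /mnt/blob --config-file=./blobfuse2-config.yaml
```

The config file carries the storage account, container name and credentials, so the pod only ever sees a plain directory under the mount point.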
-
YouTube ☛ AI Adoption Essentials: Laying the Groundwork for Success in Enterprises [Ed: AI-spamming or buzzwords instead of substance from Red Hat]
-
Red Hat Official ☛ The revolution of retail technology - how to deliver the best integrated in-store experience to date
How AI and edge-optimized open source technologies can help retailers deliver a superior and seamless consumer experience.
-
The New Stack ☛ Red Hat Developer Hub: An Enterprise-Ready IDP [Ed: Red Hat sponsored sites commissioned to write Red Hat puff pieces (for Red Hat to later link to)]
Based on Backstage, Red Hat Developer Hub, an Internal Developer Platform, provides a suite of tools and features to streamline and enhance the development process.
-
Dave Airlie ☛ Dave Airlie: radv: vulkan av1 video decode status
The Khronos Group announced VK_KHR_video_decode_av1 [1], this extension adds AV1 decoding to the Vulkan specification. There is a radv branch [2] and merge request [3]. I did some AV1 work on this in the past, but I need to take some time to see if it has made any progress since. I'll post an ANV update once I figure that out.
This extension is one of the ones I've been wanting for a long time, since having a royalty-free codec is something I can actually care about and ship, as opposed to the painful ones. I started working on a MESA extension for this a year or so ago with Lynne from the ffmpeg project, and we made great progress with it. We submitted that to Khronos and it has gone through the committee process, being refined and validated amongst the hardware vendors.