Latest From Red Hat's Site
-
Red Hat Official ☛ Managing Automatic Certificate Management Environment (ACME) in Identity Management (IdM) [Ed: Automating the outsourcing]
The Let’s Encrypt public Certificate Authority (CA) is by far the most widely used ACME server. It is a free, publicly trusted CA and supports the majority of client implementations (certbot is the one it recommends). Other CAs also implement ACME, including the Dogtag CA provided by Red Hat Identity Management (IdM). ACME support has been a Technology Preview in IdM since RHEL 8.4, and the upstream FreeIPA project has several articles on the topic. Because the current support level is Technology Preview, we recommend against relying on this feature in production environments. The objective of this article is to introduce ACME management with IdM and Red Hat Enterprise Linux (RHEL) clients using mod_md for Apache httpd (the only ACME client implementation fully supported by Red Hat). I also cover new aspects of this feature coming to mod_md in RHEL 9.5 and, in the meantime, to the IdM CA.
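A minimal sketch of how a client could verify that the IdM CA's ACME service is reachable, by fetching its RFC 8555 directory object. The hostname, domain, and CA bundle path are assumptions for illustration (IdM deployments commonly expose the directory at `https://ipa-ca.<domain>/acme/directory` and ship the IPA CA certificate at `/etc/ipa/ca.crt`); adjust them to your environment.

```python
# Sketch: probe an IdM (Dogtag) ACME directory endpoint to confirm the
# service is enabled and responding. Hostname and CA bundle path are
# assumptions, not values from the article.
import requests

IDM_DOMAIN = "example.test"  # hypothetical IdM domain
ACME_DIRECTORY = f"https://ipa-ca.{IDM_DOMAIN}/acme/directory"

# IdM-enrolled clients typically trust the IPA CA via /etc/ipa/ca.crt;
# passing it explicitly lets TLS verification succeed against the internal CA.
resp = requests.get(ACME_DIRECTORY, verify="/etc/ipa/ca.crt", timeout=10)
resp.raise_for_status()

directory = resp.json()
# An RFC 8555 directory object advertises the endpoints an ACME client needs.
for key in ("newNonce", "newAccount", "newOrder", "revokeCert"):
    print(f"{key}: {directory.get(key, '<not advertised>')}")
```

An ACME client such as certbot or mod_md would then be pointed at this same directory URL instead of the Let’s Encrypt production endpoint.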
-
Red Hat ☛ Use Stable Diffusion to create images on Red Hat OpenShift AI on a ROSA cluster with GPU enabled
Stable Diffusion is an AI model that generates images from text descriptions. It uses a diffusion process to iteratively de-noise random Gaussian noise into coherent images. This is a simple tutorial for creating images with the Stable Diffusion model on Red Hat OpenShift AI (RHOAI, formerly Red Hat OpenShift Data Science), our OpenShift platform for managing the lifecycle of AI/ML projects, running on a Red Hat OpenShift Service on AWS (ROSA) cluster, our managed OpenShift service on AWS, with an NVIDIA GPU enabled.
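A minimal sketch of the core inference step, assuming the Hugging Face diffusers and torch packages are installed in the workbench and an NVIDIA GPU is available; the model ID is one public Stable Diffusion checkpoint chosen for illustration, not necessarily the one used in the Red Hat tutorial.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a public Stable Diffusion checkpoint (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,  # half precision keeps GPU memory usage modest
)
pipe = pipe.to("cuda")  # run the pipeline on the NVIDIA GPU

prompt = "a watercolor painting of a lighthouse at sunrise"
# The pipeline iteratively de-noises Gaussian noise into an image.
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("lighthouse.png")
```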
-
Red Hat ☛ Use kube-burner to measure Red Hat OpenShift VM and storage deployment at scale
Scale testing is critical for understanding how a cluster will hold up under production load. Generally, you may want to scale test to a certain maximum density as the end goal, but it is often also useful to scale up from smaller batch sizes to observe how performance changes as the overall cluster becomes more loaded. Those of us who work in the area of performance analysis know there are many ways to measure a workload, and standardizing on a tool helps provide more comparable results across different configurations and environments.
This article walks users through using the Red Hat performance and scale team’s workload tool, kube-burner (which has been accepted as a CNCF sandbox project), to test deployments at scale. While you can learn more about all the ways kube-burner can be used for scalability testing, including how the tool has been extended for egress coverage, this guide focuses on customizing a kube-burner workload for virtual machine (VM) deployment at scale on Red Hat OpenShift, with an additional focus on storage attachments and cloning.
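For a sense of what such a workload stamps out, here is a hand-rolled sketch (not kube-burner itself) that uses the Kubernetes Python client to create a small batch of KubeVirt VirtualMachine objects whose root disks are DataVolumes cloned from a source PVC; kube-burner templates and measures this same kind of object creation at much larger scale. The namespace, storage class, and source PVC names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run in-cluster
api = client.CustomObjectsApi()

NAMESPACE = "vm-scale-test"            # hypothetical test namespace
SOURCE_PVC = "rhel9-base-image"        # golden image PVC to clone from
STORAGE_CLASS = "example-block-storage"  # placeholder storage class
BATCH_SIZE = 5                         # small batch for illustration

def vm_manifest(name: str) -> dict:
    """Build a VirtualMachine whose root disk is a DataVolume cloned from a PVC."""
    dv_name = f"{name}-rootdisk"
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name, "namespace": NAMESPACE},
        "spec": {
            "running": True,
            # Clone the source PVC into a fresh DataVolume per VM.
            "dataVolumeTemplates": [{
                "metadata": {"name": dv_name},
                "spec": {
                    "source": {"pvc": {"namespace": NAMESPACE, "name": SOURCE_PVC}},
                    "storage": {
                        "storageClassName": STORAGE_CLASS,
                        "resources": {"requests": {"storage": "10Gi"}},
                    },
                },
            }],
            "template": {"spec": {
                "domain": {
                    "devices": {"disks": [{"name": "rootdisk", "disk": {"bus": "virtio"}}]},
                    "resources": {"requests": {"memory": "2Gi"}},
                },
                "volumes": [{"name": "rootdisk", "dataVolume": {"name": dv_name}}],
            }},
        },
    }

for i in range(BATCH_SIZE):
    api.create_namespaced_custom_object(
        group="kubevirt.io", version="v1", namespace=NAMESPACE,
        plural="virtualmachines", body=vm_manifest(f"scale-vm-{i:03d}"),
    )
    print(f"created scale-vm-{i:03d}")
```

In a real run, kube-burner would generate these manifests from templates, create them in configurable batches, and collect latency and resource metrics while it does so.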