news
New in Red Hat's Site
-
Red Hat Official ☛ Announcing OLM v1: Next-Generation Operator Lifecycle Management
The next-generation Operator Lifecycle Manager has been specifically redesigned to improve how you manage operators on OpenShift. Developed directly from user feedback, OLM v1 delivers enhancements across the board, simplifying operator management, enhancing security, and boosting reliability.
-
Red Hat ☛ Set up JBoss EAP 7 clustering in OpenShift using DNS_PING
Red Hat JBoss Enterprise Application Platform (JBoss EAP) 7 is an open source platform for highly transactional, web-scale Java applications deployed as war/jar/ear archives, and it abides by specifications such as Jakarta EE 8. It also provides messaging, distributed caching, and clustering features, as explained in the Introduction to JBoss EAP Guide.
In terms of clustering, when JBoss EAP 7 is deployed in Red Hat OpenShift Container Platform (RHOCP), the JBoss EAP 7 image has clustering capabilities enabled by default. To provide those capabilities, JBoss EAP 7 relies on JGroups discovery protocols such as DNS_PING or KUBE_PING, also known as ping protocols, which provide the ping capabilities for clustering within the pods.
In this two-part article series, I will provide more information about both of those mechanisms and how they differ. Both articles are based on solution articles such as EAP 7 image clustering in OCP 4, which provides extensive details on clustering. In this first part, I will describe the process when using DNS_PING as the discovery protocol.
It is important to highlight that although clustering capabilities are enabled by default, they are only triggered/started on JBoss EAP 7 after the deployment of clustered applications. Therefore, clustering logs (such as the following one) appear only after the clustering capabilities start:
$ oc logs eap-example-0 | grep ISPN000094
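For readers who want a feel for what DNS_PING needs before the full walkthrough, a minimal sketch of the two typical pieces on OpenShift follows: a headless ping service whose DNS records expose the pod IPs, and JGroups-related environment variables on the EAP deployment. The names eap-app and eap-app-ping, the port, the selector label, and the environment variable names below are placeholders and assumptions rather than values from this article, and they can vary between EAP 7 image releases, so check the image documentation for your version.

# Hypothetical sketch only: resource names, labels, and port are placeholders,
# and the variable names may differ between JBoss EAP 7 image releases.
oc create -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: eap-app-ping
spec:
  clusterIP: None                 # headless: DNS returns the pod IPs directly
  publishNotReadyAddresses: true  # let JGroups see peers before readiness
  selector:
    app: eap-app
  ports:
    - name: ping
      port: 8888
EOF

# Point the EAP deployment at the headless service so DNS_PING can discover peers.
oc set env deployment/eap-app \
  JGROUPS_PING_PROTOCOL=dns.DNS_PING \
  OPENSHIFT_DNS_PING_SERVICE_NAME=eap-app-ping \
  OPENSHIFT_DNS_PING_SERVICE_PORT=8888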
-
Red Hat ☛ Manage operators as ClusterExtensions with OLM v1
In our recent announcement, we introduced Operator Lifecycle Manager (OLM) v1 and its exciting new features designed to simplify operator management on Red Hat OpenShift. This article walks through key user scenarios for OLM v1. We'll use the new ClusterExtension API and provide easy-to-follow, copy-paste examples to show how you can apply these improvements in your day-to-day operations.
If you're new to OLM v1, read our announcement post first for a high-level overview of its benefits, simplified APIs, and new features: Announcing OLM v1: Next-Generation Operator Lifecycle Management.
Manage operators as ClusterExtensions with OLM v1
In OLM v1, operators are managed declaratively using ClusterExtension API objects (a minimal manifest sketch follows the list below). Let's walk through common lifecycle operations in six steps:
1. Explore operator packages to install from a catalog.
2. Install an operator package with a ClusterExtension.
3. Upgrade a ClusterExtension.
4. Optionally roll back a ClusterExtension to an older version.
5. Grant user access to the provided APIs of an installed operator package.
6. Uninstall an operator package with a ClusterExtension.
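As referenced above, here is a minimal, hypothetical ClusterExtension manifest of the kind the walkthrough builds on. The package name, namespace, service account, and version range are placeholders, and field names can differ between OLM v1 releases, so treat this as a sketch rather than a copy-paste install.

# Hypothetical sketch: names, namespace, and version are placeholders; consult
# the OLM v1 documentation for the exact schema of your release.
oc apply -f - <<'EOF'
apiVersion: olm.operatorframework.io/v1
kind: ClusterExtension
metadata:
  name: pipelines-operator
spec:
  namespace: pipelines                # namespace the operator is installed into
  serviceAccount:
    name: pipelines-installer         # service account used to install the bundle
  source:
    sourceType: Catalog
    catalog:
      packageName: openshift-pipelines-operator-rh
      version: "1.14.x"               # optional version range pin
EOF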
1. Explore operator packages to install from a catalog
OLM v1 shifts from a CustomResourceDefinition (CRD)-based catalog management approach to a new RESTful API, improving performance and reducing Kubernetes API server load. While the initial catalog API provides all content for a given catalog image through a single endpoint, we are actively developing support for more specific queries, such as listing all available channels in a particular operator package or listing all available versions in a certain channel.
Currently, you can query the catalog image off-cluster to explore and find operator packages.
Supported packages
OLM v1's initial general availability (GA) release supports operator packages that meet the following requirements:
- Uses the registry+v1 bundle format introduced in the existing OLM.
- Supports installation via the AllNamespaces install mode.
- Does not use webhooks.
- Does not declare dependencies using file-based catalog properties (olm.gvk.required, olm.package.required, olm.constraint).
In this initial release, OLM v1 verifies these constraints before installation, reporting any violations in the ClusterExtension condition. While OLM v1 initially supports a select set of operators, we're actively expanding compatibility.
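Because violations are reported in the ClusterExtension's status conditions, one quick way to inspect them is a jsonpath query like the following; the extension name pipelines-operator is a hypothetical placeholder, not one from this article.

# Print each condition's type, status, and message for a ClusterExtension.
# "pipelines-operator" is a placeholder name.
oc get clusterextension pipelines-operator \
  -o jsonpath='{range .status.conditions[*]}{.type}={.status}: {.message}{"\n"}{end}'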
Procedure
Follow these steps to query the catalog image off-cluster for operator packages to install:
- Query the catalog image to get a list of compatible operator packages by running the opm render command:

opm render registry.redhat.io/redhat/redhat-operator-index:v4.18 |
  jq -r --arg pkg "" ' select(.schema == "olm.bundle" and (.package == $pkg or $pkg == "")) | {package:.package, name:.name, image:.image, supportsAllNamespaces: (.properties[] | select(.type == "olm.csv.metadata").value.installModes[] | select(.type == "AllNamespaces").supported == true)} ' |
  tee allNamespaces.json |
  jq -r '.image' |
  xargs -I '{}' -n1 -P8 bash -c ' opm render {} > $(mktemp -d -p . -t olmv1-compat-bundle-XXXXXXX)/bundle.json ' &&
bash -c 'cat olmv1-compat-bundle*/*.json' |
  jq '{package, name, image, requiresWebhooks: (.properties[] | select(.type == "olm.bundle.object").value.data | @base64d | fromjson | select(.kind == "ClusterServiceVersion").spec.webhookdefinitions != null)}' > webhooks.json &&
jq -s ' group_by(.name)[] | reduce .[] as $item ({}; . *= $item) | . *= {compatible: ((.requiresWebhooks | not) and .supportsAllNamespaces)} | {name, package, compatible} ' allNamespaces.json webhooks.json |
  jq -r '. | select(.compatible == true) | .package' |
  sort -u
Example output:
3scale-operator
-
Red Hat ☛ Automate Skupper networks seamlessly with Ansible
Skupper version 2.0 has landed and it's bringing a shiny new Ansible collection with it, now available on Ansible Galaxy.
This isn't just another update; it's a toolkit that empowers you to define and manage Skupper networks with ease, no matter where they run: Kubernetes, Podman, Docker, or bare-metal Linux.
Declarative power at your fingertips
Skupper 2.0 redefines how virtual application networks (VANs) come to life. At its core is a sleek, declarative approach powered by a fresh set of Kubernetes Custom Resource Definitions (CRDs). Think of it as a blueprint for your network: you describe what you want, and Skupper makes it happen.
These CRDs aren't just for Kubernetes users. They work just as seamlessly outside the Kubernetes ecosystem, delivering a unified, platform-agnostic way to declare and deploy your Skupper network.
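To make the blueprint idea concrete, here is a minimal, hypothetical example of the declarative style: a Site plus a Listener that exposes a remote service on the local side of the VAN. The API version, kinds, and field names are assumptions based on the Skupper 2.0 CRD reference and may differ in your release; all names and ports are placeholders.

# Hypothetical sketch: check API version, kinds, and fields against the
# Skupper 2.0 CRD reference; all names and ports are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: skupper.io/v2alpha1
kind: Site
metadata:
  name: east
  namespace: east
---
apiVersion: skupper.io/v2alpha1
kind: Listener
metadata:
  name: backend
  namespace: east
spec:
  routingKey: backend        # matches a Connector exposed on the remote site
  host: backend              # local service name created for the listener
  port: 8080
EOF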
-
Red Hat ☛ What’s new in Red Hat build of Apache Camel 4.10
Red Hat build of Apache Camel 4.10 is an industry-proven, highly adaptable, lightweight toolkit designed for enterprise integration, offering key advantages in flexibility and performance. This release introduces enhanced integration capabilities with new and updated components, improved developer tooling through Camel JBang and Kaoto, unified observability via the new Camel Observability Services, and expanded support for cloud platforms and messaging systems.
Apache Camel enhancements