news
Latest Red Hat Communications
-
Red Hat Official ☛ The Red Hat Ansible Certified Collection for Terraform has been updated to support HashiCorp Terraform
Learn more about Red Hat Ansible Automation Platform.
-
Red Hat Official ☛ Open source and AI are transforming healthcare at Boston Children’s Hospital
How is AI transforming early diagnosis in fetal and pediatric care?
-
Red Hat Official ☛ Accelerate virtual machine migrations with the migration toolkit for virtualization 2.9
Developed in a joint effort with Hitachi Vantara, MTV’s new storage offloading feature speeds up the migration process and is being extended to a growing number of Red Hat certified storage partners. Instead of relying on traditional IP-based data transfers that can clog networks, this new function offloads the heavy lifting of the disk copy process from the network to the underlying storage systems. The storage offloading feature becomes increasingly relevant for larger, multi-terabyte disks, as more data can be transferred by the underlying storage system for faster migrations. Rather than MTV reading the data, sending it over the network and writing it to a new location, the storage array working jointly with MTV essentially transforms the source virtual disk (for example, a VMware VMDK) into an OpenShift Persistent Volume format within the array. Once the volume is available to OpenShift, MTV converts the disk to OpenShift Virtualization format, and then the VM can be started. The original source VMDK also remains available in the event that you need to return to the source.
-
Red Hat ☛ Exploring Llama Stack with Python: Tool calling and agents
Over the last few months, we have explored how to leverage large language models (LLMs) with Llama Stack and Node.js. While TypeScript/JavaScript is often the second language supported by frameworks used to leverage LLMs, Python is generally the first. We thought it would be interesting to go through some of the same exploration by porting over our scripts to Python.
This is the first of a 4-part series in which we'll explore using Llama Stack with the Python API. We will start by looking at how tool calling and agents work when using Python with Llama Stack, following the same patterns and approaches that we used to examine other frameworks.
Setting up Llama Stack
Our first step was to get a running Llama Stack instance that we could experiment with. Llama Stack is a bit different from other frameworks in a few ways.
First, instead of providing a single implementation with a set of defined APIs, it aims to standardize a set of APIs and drive a number of distributions. In other words, the goal is to have many implementations of the same APIs, with each implementation shipped by a different organization as a distribution. As is common with this approach, a reference distribution is provided, but there are already several alternative distributions available, listed in the Llama Stack documentation.
[...] If you have trouble finding an example or documentation that shows what you want to do, the docs endpoint is a great resource.
Our first Python Llama Stack application
Our next step was to create an example that we could run with Python. A Python client is available, so that is what we used: llama-stack-client-python. As stated in the documentation, it is an automatically generated client based on an OpenAPI definition of the reference API. There is also Markdown documentation for the implementation itself. As mentioned in the previous section, the docs endpoint was also a great resource for understanding the client functionality.
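As a rough illustration, here is a minimal sketch of connecting to a running distribution with llama-stack-client-python; the base URL (port 8321 is the common local default) is an assumption, and attribute names may vary between client versions.

    from llama_stack_client import LlamaStackClient

    # Point the client at a running Llama Stack distribution; the URL and
    # port are assumptions for a default local setup.
    client = LlamaStackClient(base_url="http://localhost:8321")

    # Listing the registered models is a quick way to confirm the client
    # can reach the server.
    for model in client.models.list():
        print(model.identifier)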
To start, we wanted to implement the same question flow that we used in the past to explore tool calling. This consists of providing the LLM with two tools (a sketch of the setup follows below):
favorite_color_tool: Returns the favorite color for a person in the specified city and country.
favorite_hockey_tool: Returns the favorite hockey team for a person in the specified city and country.
Then, we ran through this sequence of questions to see how well they were answered: [...]
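To make the flow concrete, the following is a minimal sketch of how those two tools might be wired into an agent with the Python client. The model identifier, tool bodies, and hard-coded answers are illustrative assumptions, and some client versions require wrapping the functions with a client_tool decorator rather than passing them directly.

    from llama_stack_client import LlamaStackClient
    from llama_stack_client.lib.agents.agent import Agent
    from llama_stack_client.lib.agents.event_logger import EventLogger

    def favorite_color_tool(city: str, country: str) -> str:
        """Returns the favorite color for a person in the specified city and country.

        :param city: the city the person lives in
        :param country: the country the person lives in
        """
        # Hard-coded answer purely for illustration.
        return f"The favorite color in {city}, {country} is red."

    def favorite_hockey_tool(city: str, country: str) -> str:
        """Returns the favorite hockey team for a person in the specified city and country.

        :param city: the city the person lives in
        :param country: the country the person lives in
        """
        # Hard-coded answer purely for illustration.
        return f"The favorite hockey team in {city}, {country} is the Canadiens."

    client = LlamaStackClient(base_url="http://localhost:8321")

    # The model identifier is an assumption; use one registered with your
    # distribution (see client.models.list()).
    agent = Agent(
        client,
        model="meta-llama/Llama-3.1-8B-Instruct",
        instructions="You are a helpful assistant. Use the tools when needed.",
        tools=[favorite_color_tool, favorite_hockey_tool],
    )

    session_id = agent.create_session("tool-calling-demo")
    response = agent.create_turn(
        messages=[{"role": "user",
                   "content": "What is the favorite color of someone living in St. John's, Canada?"}],
        session_id=session_id,
    )
    for log in EventLogger().log(response):
        log.print()

Feeding each question from the sequence into create_turn then shows which tool, if any, the model chooses to call for it.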
-
Red Hat ☛ Enhance data security in OpenShift Data Foundation
In our previous article on Red Hat OpenShift Data Foundation (ODF), we demonstrated how to configure cluster-wide encryption at rest using HashiCorp Vault as the Key Management System (KMS).
In this installment, we will take that foundation further. We’ll explore how to enable encryption on a per-PersistentVolume (PV) basis, allowing teams to isolate their encrypted storage volumes at the namespace level. This is especially valuable in multi-tenant environments where different projects or teams require distinct encryption boundaries.
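As a rough preview of the approach, the sketch below uses the kubernetes Python client to create a StorageClass carrying the ceph-csi encryption parameters that per-PV encryption builds on. The provisioner, pool, and KMS ID values are assumptions for a typical ODF install, and "vault-tenant-kms" is a placeholder for the Vault connection configured for the tenant; the article walks through the actual steps.

    from kubernetes import client, config

    # Load credentials from the local kubeconfig (an assumption; inside a
    # cluster you would use config.load_incluster_config() instead).
    config.load_kube_config()

    # A StorageClass that asks the ceph-csi RBD driver to encrypt each
    # PersistentVolume it provisions. Check the parameter values against
    # your own cluster; they are typical defaults, not universal ones.
    encrypted_sc = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="ocs-storagecluster-ceph-rbd-encrypted"),
        provisioner="openshift-storage.rbd.csi.ceph.com",
        parameters={
            "encrypted": "true",
            "encryptionKMSID": "vault-tenant-kms",
            "clusterID": "openshift-storage",
            "pool": "ocs-storagecluster-cephblockpool",
        },
        reclaim_policy="Delete",
        allow_volume_expansion=True,
    )

    client.StorageV1Api().create_storage_class(body=encrypted_sc)

A PersistentVolumeClaim that requests this class then gets its own encrypted volume, which is the building block for the namespace-level isolation described above.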