Latest From Red Hat Official Site
-
Red Hat Official ☛ Metrics that matter: How to prove the business value of DevEx
I’ve noticed a recurring pattern where companies invest millions in a state-of-the-art container platform or cloud infrastructure, only to find their delivery speed hasn't budged.
-
Red Hat Official ☛ Strengthening the sovereign enterprise with new training from Red Hat [Ed: "Digital Sovereignty" cannot be attained with Pentagon (Red Hat/IBM)]
To help prepare IT organizations for the complexities of digital sovereignty, Red Hat has launched a new training lesson, available as part of both paid and trial Red Hat Learning Subscriptions. Achieving Digital Sovereignty in the Cloud equips IT leaders with the foundational knowledge and a strategic framework to navigate digital sovereignty with greater confidence.
-
Red Hat ☛ Prompt engineering: Big vs. small prompts for Hey Hi (AI) agents
In this post, we explore two prompting approaches and the advantages and disadvantages of each, based on our experience developing the it-self-service-agent Hey Hi (AI) quickstart.
-
Red Hat ☛ Synthetic data for RAG evaluation: Why your RAG system needs better testing
Retrieval-augmented generation (RAG) has become the default architecture for enterprise large language model (LLM) applications. By grounding models in external knowledge bases, RAG systems can provide accurate, up-to-date responses without the cost and complexity of fine-tuning. Yet in practice, most RAG systems reach production with weak evaluation strategies.
Teams tune embeddings, retrievers, chunking strategies, and prompts, but still rely on manual spot checks, small hand-labeled datasets, or generic LLM-as-a-judge metrics to assess quality. The result: systems that appear to work but fail silently under real user traffic. So the real question becomes: how do you know your RAG system actually works, and why does it fail when it doesn't?
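To make the synthetic-evaluation idea concrete, here is a minimal sketch of the general technique (not code from the Red Hat article): generate a question for each document chunk, then check whether retrieval brings back the chunk that produced it. The `generate_question` function below is a hypothetical stand-in for what would normally be an LLM call, and the retriever is a toy keyword-overlap ranker, so the whole thing stays self-contained.

```python
def generate_question(chunk: str) -> str:
    # Hypothetical stand-in for an LLM-generated question; in a real
    # pipeline you would prompt a model with the chunk and ask for a
    # question that the chunk answers.
    return "What does the following describe: " + chunk.split(".")[0] + "?"

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    # Toy retriever: rank chunks by word overlap with the query.
    q_words = set(query.lower().split())
    scored = sorted(chunks,
                    key=lambda c: len(q_words & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

def hit_rate(chunks: list[str], k: int = 1) -> float:
    # For each chunk, generate a synthetic question and count it as a
    # "hit" if retrieval returns the source chunk among the top k.
    hits = sum(1 for c in chunks
               if c in retrieve(generate_question(c), chunks, k))
    return hits / len(chunks)

corpus = [
    "RAG grounds model answers in retrieved documents.",
    "Fine-tuning updates model weights on domain data.",
    "Chunking splits documents into retrievable passages.",
]
print(f"hit@1 = {hit_rate(corpus):.2f}")
```

A real harness would swap in the production retriever and an LLM question generator, and track answer-quality metrics alongside retrieval hit rate, but the loop structure stays the same.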