AI Proving Ground

What's Inside the AI Proving Ground

Inside the AI Proving Ground, IT professionals can explore the world of validated designs, reference architectures and DIY environments that fit their use cases. This includes full-stack validations from network to compute to storage, along with Kubernetes (K8s) platforms with MLOps integrations. IT professionals will also receive expert guidance from our AI and infrastructure experts, as well as insights from leading AI companies, to help accelerate decision-making and implementation of AI solutions.

Hardware

High-performance compute:

Access the latest CPUs, GPUs, DPUs and SmartNICs from industry leaders like NVIDIA, AMD and Intel.

Storage:

Solutions from Dell, NetApp, Pure Storage, VAST Data, IBM, DataDirect Networks (DDN), Weka, and HPE GreenLake.

Networking:

High-speed networking solutions like InfiniBand fabrics and 400GbE Ethernet from NVIDIA, Cisco and Arista.

Explore our offerings

WWT Agentic Network Assistant

Explore the WWT Agentic Network Assistant, a browser-based AI tool that converts natural language into Cisco CLI commands, executes them across multiple devices, and delivers structured analysis. Using a local LLM, it streamlines troubleshooting, summarizes device health, and compares configs, demonstrating the future of intuitive, AI driven network operations.

Crafting Your First AI Agent

Embark on a journey to create AI agents with "Crafting Your First AI Agent." This hands-on lab introduces LangChain, LangGraph, and CrewAI frameworks, empowering you to build, run, and understand AI agents. Discover the intricacies of AI workflows and multi-agent systems, transforming curiosity into practical expertise.

NVIDIA Run:ai Researcher Sandbox

This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.

F5 AI Gateway (GPU Accelerated)

This lab will provide access to an Openshift cluster running the F5 AI Gateway solution. We will walk through how the F5 AI Gateway routes requests to different models by either allowing them to pass through or, more importantly, securing them via prompt injection checking. We have also added a couple of other tests that will allow for language detection of that input that the F5 AI Gateway can also detect.

NVIDIA Blueprint: PDF Ingestion

NVIDIA Blueprints: PDF Ingestion, also known as NVIDIA-Ingest, or NV-Ingest, this blueprint is a scalable, performance-oriented document content and metadata extraction microservice. Including support for parsing PDFs, Word and PowerPoint documents, it uses specialized NVIDIA NIM microservices to find, contextualize, and extract text, tables, charts and images for use in downstream generative applications.

HPE Private Cloud AI - Guided Walkthrough

This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.

AI Prompt Injection Lab

Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.

Generative AI Fundamentals

This lab will walk the lab user through the basics of Generative AI