Overview
Explore
Services
Select a tab
16 results found
Pure Storage Enterprise AI in-a-Box with Intel Gaudi 3 and Iterate.ai
Iterate.ai's Generate platform pairs with Intel Xeon CPUs, Gaudi 3 accelerators, Pure Storage FlashBlade//S, and Milvus vector DB. Deployed via Kubernetes/Slurm, it scales quickly, needs minimal tuning, and runs Llama 3, Mistral, and Inflection to accelerate AI training, inference, and search for healthcare, life-science, and finance workloads.
Advanced Configuration Lab
•1 launch
Retrieval Augmented Generation (RAG) Walk Through Lab
This lab will go into the basics of Retrieval Augmented Generation (RAG) through hands on access to a dedicated environment.
Foundations Lab
•918 launches
AI Prompt Injection Lab
Explore the hidden dangers of prompt injection in Large Language Models (LLMs). This lab reveals how attackers manipulate LLMs to disclose private information and behave in ways that they were not intended to. Discover the intricacies of direct and indirect prompt injection and learn to implement effective guardrails.
Foundations Lab
•682 launches
NVIDIA Run:ai Researcher Sandbox
This hands-on lab provides a comprehensive introduction to NVIDIA Run:ai, a powerful platform for managing AI workloads on Kubernetes. Designed for AI practitioners, data scientists, and researchers, this lab will guide you through the core concepts and practical applications of Run:ai's workload management system.
Sandbox Lab
•132 launches
NVIDIA Blueprint: Enterprise RAG
NVIDIA's AI Blueprint for RAG is a foundational guide for developers to build powerful data extraction and retrieval pipelines. It leverages NVIDIA NeMo Retriever models to create scalable and customizable RAG (Retrieval-Augmented Generation) applications.
This blueprint allows you to connect large language models (LLMs) to a wide range of enterprise data, including text, tables, charts, and infographics within millions of PDFs. The result is context-aware responses that can unlock valuable insights.
By using this blueprint, you can achieve 15x faster multimodal PDF data extraction and reduce incorrect answers by 50%. This boost in performance and accuracy helps enterprises drive productivity and get actionable insights from their data.
Sandbox Lab
•140 launches
Cisco AI Defense Capture the Flag (CTF)
Experience Cisco AI Defense in an interactive Capture the Flag (CTF), designed to showcase how Cisco is securing the future of GenAI.
Advanced Configuration Lab
•257 launches
HPE Private Cloud AI - Guided Walkthrough
This lab is intended to give you an overview of HPE Private Cloud AI, HPE's turnkey solution for on-prem AI workloads. HPE has paired with NVIDIA's AI Enterprise software giving customers a scalable on-prem solution that can handle multiple different AI workloads from Inference, RAG, model fine-tuning, and model training.
Foundations Lab
•54 launches
Introduction into OpenShift AI with Intel and Dell Infrastructure
Red Hat OpenShift AI, formerly known as Red Hat OpenShift Data Science, is a platform designed to streamline the process of building and deploying machine learning (ML) models. It caters to both data scientists and developers by providing a collaborative environment for the entire lifecycle of AI/ML projects, from experimentation to production.
In this lab, you will explore the features of OpenShift AI by building and deploying a fraud detection model. This environment is built ontop of Dell R660's and Intel Xeon's 5th generation processors.
Foundations Lab
•311 launches
Drone Landing Identification an Intel AI Reference Kit Lab
This lab will walk you through one of Intel's AI Reference Kits to develop an optimized semantic segmentation solution based on the Visual Geometry Group (VGG)-UNET architecture, aimed at assisting drones in safely landing by identifying and segmenting paved areas. The proposed system utilizes Intel® oneDNN optimized TensorFlow to accelerate the training and inference performance of drones equipped with Intel hardware. Additionally, Intel® Neural Compressor is applied to compress the trained segmentation model to further enhance inference speed. Explore the Developer Catalog for information on various use cases.
Advanced Configuration Lab
•31 launches
AI Gateway - LiteLLM Walkthrough Lab
This lab provides hands-on experience with LiteLLM, an open-source AI gateway that centralizes and manages access to Large Language Models (LLMs). Throughout the five modules, you'll learn how to set up and use LiteLLM to control, monitor, and optimize your AI model interactions.
Foundations Lab
•35 launches
Cisco RoCE Fabrics
This lab will demonstrate how the Cisco Nexus Dashboard Fabric Controller can easily set up an AI/ML fabric with a simple point-and-click GUI. It will only do this easily without knowing the protocols or best practices. The controller will do the work.
Advanced Configuration Lab
•224 launches
Retrieval Augmented Generation (RAG) - Programatic Lab
In this lab, we'll be focusing on the programatic steps of Retrieval Augmented Generation (RAG). First, we'll discuss data chunking, how we break down our documents. Then, we'll explore how these chunks become embeddings, numerical representations. Finally, we'll see how a vector database helps us efficiently retrieve this information.
Foundations Lab
•23 launches