Securing AI agents through Red Teaming
AI Security
Join this workshop to understand what red teaming means in the context of AI and how it helps protect AI agents from critical security vulnerabilities such as prompt injection, data leakage, and business alignment failures like hallucinations or inappropriate refusals.
Participants will learn how enterprises can automate security evaluations, detect vulnerabilities before they turn into incidents, and implement continuous protection for AI agents. The session will also introduce a range of testing techniques, including context-specific prompts, adaptive probing, and multi-turn interaction analysis.
Time & Place
March 25, 2026
15:00 - 16:30
Salon Bellevue
Classroom Seating
Max. Capacity: 64 Seats
Meet Your Intructors
What To Expect
Who Is This For?
Enterprise AI & Compliance Leaders
LLM & AI Engineers
Data Scientists
Regulatory & Governance Specialists
AI Security & Risk Management Teams
Pre-Requisites
No specific technical skills required
Bring Your Own Laptop
What You'll Learn & Do?
Understand LLM vulnerabilities and how continuous AI red teaming prevents them
Learn how to protect AI agents from security vulnerabilities and business failures
Apply red teaming techniques from cybersecurity to ensure the security and reliability of your LLM application
Agenda & Activities
Agenda for this session:
Getting Settled 5 minutes
Information Session Part 1 15-20 minutes
Individual / Group Exercise 10-15 minutes
Break 5 minutes
Information Session Part 2 15-20 minutes
Individual / Group Exercise 10-15 minutes
Q&A/Discussion 20 minutes
Reflection/Discussion 20 minute
Prerequisits:
No specific technical skills required
Bring Your Own Laptop
.png)