Abstract
Through this workshop, participants will learn what continuous red teaming means and how it can protect AI agents from security vulnerabilities like prompt injection, data leakages and business alignment failures like hallucinations, inappropriate denials, etc. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents. Through hands-on exercises aligned with Giskard Hub, attendees will explore a variety of testing techniques including context-specific prompts, adaptive probing, and multi-turn interactions.
Topics To Be Covered
Understand LLM vulnerabilities and how continuous AI red teaming prevents them
Learn how to protect AI agents from security vulnerabilities and business failures
Apply red teaming techniques from cybersecurity to ensure the security and reliability of your LLM application
Perfect For
Enterprise AI & Compliance Leaders
LLM & AI Engineers
Data Scientists
Regulatory & Governance Specialists
AI Security & Risk Management Teams
Meet Your Instructor
Alex Combessie
CEO & Co-Founder, Giskard AI
Alex is the co-founder and CEO at Giskard, a startup dedicated to secure AI agents through exhaustive testing. Previous experience includes big data & analytics consulting at Capgemini, and lead data scientist at Dataiku where he worked the development of the NLP and time-series stack.
At Giskard, he's leading a team developing an AI testing platform that advances Responsible AI, enhancing business performance while respecting the rights of citizens.
ADDITIONAL INFORMATION
Time & Place
Wed, March 25
08:30 - 10:00
The Ritz-Carlton Berlin
Salon Tiergarten
Classroom Seating
Max. Capacity: 32 Seats
Secure your seat – registration required.
Notes
Agenda for this session:
Getting Settled - 5 mins
Information Session - 40 mins
Break - 5 min
Individual / Group Exercise - 20 min
Q&A/Discussion - 15 min
Reflection - 5 min
Prerequisits:
No specific technical skills required
Bring Your Own Laptop


.png)