Securing AI agents through continuous Red Teaming
AI Security
Through this workshop, participants will learn what continuous red teaming means and how it can protect AI agents from security vulnerabilities like prompt injection, data leakages and business alignment failures like hallucinations, inappropriate denials, etc. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents. Through hands-on exercises aligned with Giskard Hub, attendees will explore a variety of testing techniques including context-specific prompts, adaptive probing, and multi-turn interactions.
Time & Place
Thu, Nov 26
09:15 - 09:45
The Ritz-Carlton Berlin
Matterhorn I
Limited to 45 participants.
Meet Your Intructors

Alex Combessie
CEO & Co-Founder, Giskard AI
Alex is the co-founder and CEO at Giskard, a startup dedicated to secure AI agents through exhaustive testing. Previous experience includes big data & analytics consulting at Capgemini, and lead data scientist at Dataiku where he worked the development of the NLP and time-series stack.
At Giskard, he's leading a team developing an AI testing platform that advances Responsible AI, enhancing business performance while respecting the rights of citizens.
What To Expect
Who Is This For?
Enterprise AI & Compliance Leaders
LLM & AI Engineers
Data Scientists
Regulatory & Governance Specialists
AI Security & Risk Management Teams
Pre-Requisites
No Prerequisits
What You'll Learn & Do?
Understand LLM vulnerabilities and how continuous AI red teaming prevents them
Learn how to protect AI agents from security vulnerabilities and business failures
Apply red teaming techniques from cybersecurity to ensure the security and reliability of your LLM application
Agenda & Activities
Agenda for this session
20 min presentation + Audience Q&A
.png)