Abstract
Learn how continuous Red Teaming can protect your LLM agents from emerging threats like hallucinations and data leakage. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents.
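To give a concrete flavour of what "automating security evaluation" can look like in practice, below is a minimal sketch using the open-source giskard Python library: the agent is wrapped as a model and scanned for vulnerabilities such as hallucination, prompt injection, and information disclosure. This is an illustration, not the workshop's official material; the answer_question function is a hypothetical stand-in for your own agent, exact arguments may differ between library versions, and some scan detectors require an LLM API key to be configured.

import giskard
import pandas as pd

def answer_question(df: pd.DataFrame) -> list:
    # Hypothetical stand-in: call your LLM agent for each question in the batch.
    return ["(agent answer to: %s)" % q for q in df["question"]]

agent = giskard.Model(
    model=answer_question,
    model_type="text_generation",
    name="Customer support agent",
    description="Answers end-user questions about the product.",
    feature_names=["question"],
)

# Run the automated scan, which probes for issues such as hallucination,
# prompt injection and information disclosure, then export the report.
report = giskard.scan(agent)
report.to_html("security_scan_report.html")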
Topics To Be Covered
How continuous Red Teaming strengthens AI agent security
Strategies to detect and prevent AI hallucinations
Automating security evaluation for LLM agents
Mitigating data leakage and emerging AI threats
Best practices for ensuring continuous AI protection
Who Is This For?
Enterprise AI & Compliance Leaders
Machine Learning & AI Engineers
Cybersecurity Professionals
Regulatory & Governance Specialists
AI Security & Risk Management Teams
Meet Your Instructor
CEO & Co-Founder, Giskard AI
Alex is the co-founder and CEO at Giskard, a startup dedicated to securing AI agents through exhaustive testing. His previous experience includes big data & analytics consulting at Capgemini and a role as lead data scientist at Dataiku, where he worked on the development of the NLP and time-series stack.
At Giskard, he's leading a team developing an AI testing platform that advances Responsible AI, enhancing business performance while respecting the rights of citizens.
Additional Information
Time & Place
October 30, 2025
11:00 - 12:30
Hôtel Mövenpick Amsterdam City Centre
Winterthur Room
Limited to 18 participants.
Secure your seat – registration required.
Notes
Agenda for this session:
Getting Settled – 5 minutes
Information Session Part 1 – 15-20 minutes
Individual / Group Exercise – 10-15 minutes
Break – 5 minutes
Information Session Part 2 – 15-20 minutes
Individual / Group Exercise – 10-15 minutes
Q&A/Discussion – 20 minutes
Reflection/Discussion – 20 minutes
Prerequisites:
No specific technical skills required
Bring Your Own Laptop