
Securing AI Agents Through Continuous Red Teaming: Prevent Hallucinations and Vulnerabilities in LLM Agents

Learn how continuous Red Teaming can protect your LLM agents from emerging threats like hallucinations and data leakage. We'll show how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents.

Time & Place

April 1, 2025

11:00 - 11:45

The Ritz-Carlton Berlin

Salon Bellevue

Classroom Seating

Max. Capacity: 64 Seats

Meet Your Instructor

Alex Combessie

CEO & Co-Founder, Giskard AI

Alex is the co-founder and CEO of Giskard, a startup dedicated to securing AI agents through exhaustive testing. His previous experience includes big data & analytics consulting at Capgemini and a role as lead data scientist at Dataiku, where he worked on the development of the NLP and time-series stack.

At Giskard, he's leading a team developing an AI testing platform that advances Responsible AI, enhancing business performance while respecting the rights of citizens.

What To Expect

Who Is This For?

  • Enterprise AI & Compliance Leaders

  • Machine Learning & AI Engineers

  • Cybersecurity Professionals

  • Regulatory & Governance Specialists

  • AI Security & Risk Management Teams

Prerequisites

  • No prerequisites

What You'll Learn & Do

  • How continuous Red Teaming strengthens AI agent security

  • Strategies to detect and prevent AI hallucinations

  • Automating security evaluation for LLM agents (see the sketch after this list)

  • Mitigating data leakage and emerging AI threats

  • Best practices for ensuring continuous AI protection
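
To give a flavour of the kind of automation covered in the session, here is a minimal sketch of a scripted vulnerability scan using Giskard's open-source Python library. The wrapped answer_agent function, the dataset column, and the report filename are illustrative assumptions, not the exact material that will be presented.

```python
# Minimal sketch: automated vulnerability scan of an LLM agent with the
# open-source `giskard` Python library. `answer_agent` is a hypothetical
# stand-in for your own agent; replace it with your real pipeline.
import pandas as pd
import giskard


def answer_agent(question: str) -> str:
    # Placeholder agent: call your LLM / RAG pipeline here.
    return f"Answer to: {question}"


def predict(df: pd.DataFrame) -> list[str]:
    # Giskard feeds inputs as a DataFrame; return one answer per row.
    return [answer_agent(q) for q in df["question"]]


agent_model = giskard.Model(
    model=predict,
    model_type="text_generation",
    name="Customer support agent",
    description="Answers customer questions about our product.",
    feature_names=["question"],
)

# Run the automated scan (probing for hallucinations, prompt injection,
# data leakage, ...) and export an HTML report. An evaluator LLM must be
# configured beforehand, e.g. via your OpenAI API key in the environment.
scan_report = giskard.scan(agent_model)
scan_report.to_html("agent_scan_report.html")
```

In a continuous Red Teaming setup, a scan like this would typically run in CI on every change to the agent, so regressions surface before they reach production.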

Agenda & Activities

Agenda for this session

  • 5 min introduction

  • 25 min presentation

  • 15 min Q&A

Registration

To register for our workshops, you must purchase a Platinum Pass, which entitles you to select up to 4 workshops. If you are interested in attending only one workshop, you may purchase the Gold Plus Pass instead.
