Securing AI Agents Through Continuous Red Teaming: Prevent Hallucinations and Vulnerabilities in LLM Agents

Abstract

Learn how continuous Red Teaming can protect your LLM agents from emerging threats such as hallucinations and data leakage. We'll show how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents.

Topics To Be Covered

  • How continuous Red Teaming strengthens AI agent security

  • Strategies to detect and prevent AI hallucinations

  • Automating security evaluation for LLM agents

  • Mitigating data leakage and emerging AI threats

  • Best practices for ensuring continuous AI protection

Who Is This For?

  • Enterprise AI & Compliance Leaders

  • Machine Learning & AI Engineers

  • Cybersecurity Professionals

  • Regulatory & Governance Specialists

  • AI Security & Risk Management Teams

Meet Your Speaker

CEO & Co-Founder, Giskard AI

Alex is the co-founder and CEO of Giskard, a startup dedicated to securing AI agents through exhaustive testing. His previous experience includes big data & analytics consulting at Capgemini and a role as lead data scientist at Dataiku, where he worked on the development of the NLP and time-series stack.

At Giskard, he leads a team developing an AI testing platform that advances Responsible AI, enhancing business performance while respecting the rights of citizens.

ADDITIONAL INFORMATION

Time & Place

April 1, 2025

11:00 - 11:45

The Ritz-Carlton Berlin

Salon Bellevue

Classroom Seating

Max. Capacity: 64 Seats

Limited seating is still available upon registration. Register your seat here.

Notes

Agenda for this session

  • 5 min introduction

  • 25 min presentation

  • 15 min Q&A

REGISTRATION

To register for this session, you must hold a Platinum or Diamond Pass. Gold Pass holders can register based on availability only.

Seats are guaranteed upon registration while limited availability lasts.
