top of page

Securing AI agents through continuous Red Teaming

Abstract

Through this workshop, participants will learn what continuous red teaming means and how it can protect AI agents from security vulnerabilities like prompt injection, data leakages and business alignment failures like hallucinations, inappropriate denials, etc. We'll present how enterprises can automate security evaluation, detect vulnerabilities before they become incidents, and ensure continuous protection of AI agents. Through hands-on exercises aligned with Giskard Hub, attendees will explore a variety of testing techniques including context-specific prompts, adaptive probing, and multi-turn interactions.

Topics To Be Covered

  • Understand LLM vulnerabilities and how continuous AI red teaming prevents them

  • Learn how to protect AI agents from security vulnerabilities and business failures

  • Apply red teaming techniques from cybersecurity to ensure the security and reliability of your LLM application

Perfect For

  • Enterprise AI & Compliance Leaders

  • LLM & AI Engineers

  • Data Scientists

  • Regulatory & Governance Specialists

  • AI Security & Risk Management Teams

Meet Your Instructor

Jean-Marie John-Mathews

Jean-Marie John-Mathews

Giskard

Co-CEO and Co-Founder, Giskard

Jean-Marie John-Mathews is the Co-CEO and co-founder of Giskard AI, a company focused on testing, evaluating, and securing AI systems. An AI safety researcher with a PhD from Université Paris-Saclay, his work explores explainable AI, fairness in machine learning, and the societal impacts of automated decision-making. Before founding Giskard in 2021, he worked at the intersection of data science, ethics, and public policy, including leading research initiatives at the Good In Tech academic chair and teaching quantitative methods, AI, and public policy at institutions such as Sciences Po and Université Paris-Dauphine. Jean-Marie is widely recognized for his research on AI risks—including misinformation, bias, and adversarial attacks—and for advancing practical tools and benchmarks that help organizations build safer and more trustworthy AI systems.

ADDITIONAL INFORMATION

Time & Place

Wed, March 25

09:15 - 09:45

The Ritz-Carlton Berlin

Salon Tiergarten

Classroom Seating

Max. Capacity: 32 Seats

Secure your seat – registration required.

Notes

Agenda for this session:

  • Getting Settled - 5 mins

  • Information Session - 40 mins

  • Break - 5 min

  • Individual / Group Exercise - 20 min

  • Q&A/Discussion - 15 min

  • Reflection - 5 min

Prerequisits:

  • No specific technical skills required

  • Bring Your Own Laptop

REGISTRATION

In order to register to this session you must hold a Pro or Pro Max Pass.

Limited Seating Still Guaranteed

WhatsApp button (66 x 66 px).png
bottom of page