Not sure where to get started when it comes to cloud native security and artificial intelligence? Look no further than CloudNativeSecurityCon North America 2024 in Seattle, which opens up with the co-located Secure AI Summit on June 25, and officially kicks off on June 26 and 27 with even more expert guidance on artificial intelligence. Here’s a quick look at what to expect from these three days.

Secure AI Summit

At this immersive co-located event, plan to spend the day exploring three thought-provoking tracks: AI-driven security solutions, integration and challenges, and ML and anomaly detection. The day will include time to visit the Solutions Showcase as well as opening and closing remarks, lightning talks, and the following presentations:

Leveraging and preparing for AI in cloud security

During CloudNativeSecurityCon North America 2024, the “Leveraging and preparing for AI in cloud security” track will offer several inspiring presentations.

Brave new world: welcoming AI identity challenges 

Gabriel L. Manor, engineering director, will examine the differences between a programmable AI identity and a real human, and then will demonstrate a fully functional auth system built for the AI identity era using only open-source software.

Shadow vulnerabilities in AI/ML data stacks – what you don’t know CAN hurt you 

Nitzan Mousseri, threat and data researcher, Oligo Security, plans to sketch out common security anti-patterns prevalent in AI engineering and then explore what might be better options for security hygiene. Mousseri plans to demo real code examples within their runtime context and show how to leverage eBPF and open source tooling.

Threat modeling for AI apps with attacks as code 

Priyanka Tembey, co-founder and CTO, and Glenn McDonald, software engineer, both of Operant, want to explore what threat modeling looks like for today’s AI applications. Attendees will learn how to automate threat modeling for AI applications using SecOps-Chaos, an open source framework that encodes TTPs as security-focused experiments.

Implement AI RMF with policy as code automation 

Anca Sailer, distinguished engineer, IBM / Red Hat, Robert Ficcaglia, CTO, SunStone Secure, and Vikas Agarwal, senior research scientist, IBM – all experienced AI and compliance practitioners – will discuss AI risk assessment, compliance assurance, and red teaming for AI models and AI pipelines deployed on Kubernetes cloud native platforms. The group plans to map the Linux Foundation Principles for Trusted AI to the NIST AI RMF to define a reusable framework for designing controls that implement these principles and requirements.

Register for CloudNativeSecurityCon North America 2024 Today

Get all the details!

Read more about the schedule and add-on events