Security, Privacy, and Trustworthy AI

Welcome to this year’s session on Security, Privacy, and Trustworthy AI, where we confront the rapidly growing challenges of protecting intelligent systems in an era defined by ubiquitous computing and large-scale AI services. As autonomous systems and foundation models become deeply integrated into our digital and physical environments, the attack surface expands in unprecedented ways. Today’s adversaries can target not only software and networks, but also the data, models, and algorithms that power modern AI. This session brings together cutting-edge research spanning cybersecurity, privacy-preserving machine learning, and trustworthy AI systems. We will explore a wide range of topics, including adversarial robustness, privacy-enhancing technologies, secure and explainable ML pipelines, LLM guardrails, detection and mitigation of emerging cyberattacks, AI governance and safety, and frameworks for ensuring the reliable and ethical operation of AI-driven systems and services. Join us at the intersection of cybersecurity and AI as we bring together diverse perspectives on how to build AI systems that remain reliable, interpretable, and secure, even in the presence of adaptive adversaries and dynamic, real-world environments.

Keynote Speaker – Prof. Zahra Ghodsi, Purdue University

Time: 9:00 – 10:00 AM

“Standing on Solid Ground: Foundational Systems to Enhance Security and Trust in Machine Learning”

Abstract: Advances in machine learning have led to the rapid deployment of such frameworks across various sectors, including critical and sensitive applications. Prior work has demonstrated different attack vectors on machine learning systems, leading to the development of new mitigation techniques to improve security. Nevertheless, we argue that current machine learning frameworks are built on frail security foundations. In this talk, we examine fundamental security requirements such as randomness guarantees and authenticity verification and discuss how existing frameworks fall short, neglecting these requirements or sacrificing them for performance. Next, we present our recent work on building foundational systems that address issues in existing frameworks and provide security and efficiency simultaneously.

Biography: Zahra Ghodsi is an Assistant Professor at Purdue University in the Electrical and Computer Engineering Department, where she leads the Trustworthy Computing and Learning Systems (TCLS) lab. Her research broadly focuses on addressing security and privacy issues in emerging computing systems. Most recently, she has been working on trustworthy machine learning, cryptographically secure privacy-preserving computation, and hardware acceleration of cryptographic protocols. She is particularly interested in solutions that span algorithm and protocol design and system optimizations to create end-to-end frameworks with real-world impact.

Invited Speaker – Chhavi Yadav, Carnegie Mellon University

Time: 10:00 – 10:45 AM

“Accountable AI With ZKPs: Certifying Fairness and Explanations Under Model Confidentiality”

Abstract: Responsible deployment of AI models in high-stakes societal applications requires that these models be trustworthy—exhibiting properties such as fairness, privacy and interpretability. However, legal and IP constraints often necessitate that models remain confidential, which leads to the breakdown of many trustworthy AI tools in practice. This tension gives rise to a central challenge: how can we prove and verify key properties of ML models without revealing the models themselves? In this talk, I will present my recent work that addresses this challenge using zero-knowledge proofs (ZKPs). Specifically, I will describe: (1) FairProof, a system for publicly certifying individual fairness in neural networks while preserving model confidentiality, and (2) ExpProof, which operationalizes explanations even in adversarial settings. Together, these systems advance the goal of building verifiable and accountable AI.

Biography: Chhavi Yadav is an AI researcher broadly interested in the foundations of Trustworthy AI and AI Privacy, Security, and Safety. Specifically, she aims to make AI systems accountable and incentive-aware by exposing vulnerabilities and understanding the behavior of existing Trustworthy AI tools (unlearning, attribution, XAI), developing trustless verification systems using cryptographic tools such as Zero-Knowledge Proofs, studying the auditing of closed models both theoretically and practically, and proposing evaluation frameworks and metrics. She is currently a postdoctoral researcher in the Machine Learning Department at CMU. She obtained a PhD in Computer Science from UC San Diego, advised by Prof. Kamalika Chaudhuri. She also organizes events at The Trustworthy ML Initiative.

Student Presentations

Time: 10:50 AM – 12:00 PM

Jason Andre Vega: “Breaking Safety Alignment in Frontier LLMs is Easier Than You Think!”

Tzu-Hsiang Huang: “Constant-Rate Certified Deletion”

McKenzy Heavlin: “Towards Trustworthy AI Applications in Electric Distribution Grids”

Yiming Su: “SREGym: A Live Training Ground for AI SRE Agents with High-Fidelity Failure Drills”

Xinbo Wu: “A Game-Theoretic Analysis of Attacking LLMs by Hiding Intents”

CSL Student Conference 2026
Email: omarb3@illinois.edu