About

Welcome to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025!

The success of machine learning depends on access to large amounts of training data, which often contains sensitive information. Exposing such data raises legal, competitive, and privacy concerns. Neural networks are known to be vulnerable to privacy attacks, a concern that has become more visible with large language models (LLMs), where attacks can be carried out directly through prompting. Differential privacy, the gold standard for privacy-preserving learning, has seen its privacy–utility trade-offs improve thanks to new trust models and algorithms. However, many open questions remain on how to bridge the gap between attacks and defenses, from developing auditing methods and more effective attacks to the growing interest in machine unlearning.

Which models best reflect real-world scenarios? How can methods scale to deep learning and foundation models? How are unlearning, auditing, and privacy-preserving machine learning connected, and how can these lines of work be brought together?

This workshop will bring together researchers from academia and industry working on differential privacy, machine unlearning, privacy auditing, privacy attacks, and related topics.

Call for Papers

We invite submissions to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025.
We welcome both novel contributions and work in progress, and we encourage diverse viewpoints.

Important Dates (AoE)

  • Paper submission: October 17, 2025
  • Accept/Reject notification: October 31, 2025
  • Workshop: December 7, 2025

All accepted papers can be presented as posters. Poster boards accommodate A0 portrait or A1 landscape; please use A0 portrait if possible.

Topics of Interest

  • Efficient methods for privacy-preserving machine learning
  • Trust models for privacy, including federated learning and data minimization
  • Privacy at inference time, privacy for agent interactions, and privacy for large language models (fine-tuning, test-time training)
  • Privacy-preserving data generation
  • Differential privacy theory
  • Threat models and privacy attacks
  • Auditing methods and interpretation of privacy guarantees
  • Machine unlearning, certifiable machine unlearning, and new unlearning algorithms
  • Relationships between privacy and other aspects of Trustworthy Machine Learning

Submission Guidelines

This workshop is non-archival and will not have official proceedings. Work submitted to the workshop may also be submitted to other venues. We welcome ongoing and unpublished work, including papers that are under review at the time of submission. We do not accept submissions that have already been accepted for publication at venues with archival proceedings. The titles of accepted papers will be published on the website.

We are looking for reviewers to help ensure a fair and constructive review process.
Each reviewer will be asked to review at most 3 papers.


Invited Speakers

Tamalika Mukherjee
Max Planck Institute for Security and Privacy

Antti Honkela
University of Helsinki

Rasmus Pagh
University of Copenhagen

Catuscia Palamidessi
Director of Research at INRIA

Sahra Ghalebikesabi
Research Scientist at Google DeepMind

Program

Coming soon!


Organizers

Amartya Sanyal
University of Copenhagen

Rachel Cummings
Columbia University

Reviewers

We thank all the reviewers for their work.

  • Bogdan Kulynych
  • Carolin Heinzler
  • Christian Janos Lebeda
  • Christoph H. Lampert
  • Clément Lalanne
  • Clément Pierquin
  • Edwige Cyffers
  • Erchi Wang
  • Jan Schuchardt
  • Joel Daniel Andersson
  • Kostadin Cvejoski
  • Luca Corbucci
  • Lukas Retschmeier
  • Marlon Tobaben
  • Mathieu Dagréou
  • Mina Basirat
  • Nikita Kalinin
  • Quentin Hillebrand
  • Renaud Gaucher
  • Romaric Gaudel
  • Şeyma Selcan Mağara
  • Simone Bombari
  • Vasilis Siomos

PPML@EurIPS is sponsored by BILAI.

Contact

Questions? Email us at ppml.eurips@gmail.com