About

Welcome to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025!

The success of machine learning depends on access to large amounts of training data, which often contains sensitive information. This raises legal, competitive, and privacy concerns when that data is exposed. Neural networks are known to be vulnerable to privacy attacks, a concern that has become more visible with large language models (LLMs), where attacks can be carried out directly through prompting. Differential privacy, the gold standard for privacy-preserving learning, has seen its privacy–utility trade-offs improve thanks to new trust models and algorithms. Many open questions remain, however, on how to bridge the gap between attacks and defenses, from developing auditing methods and more effective attacks to the growing interest in machine unlearning.

Which models best reflect real-world scenarios? How can methods scale to deep learning and foundation models? How are unlearning, auditing, and privacy-preserving machine learning connected, and how can these lines of work be brought together?

This workshop will bring together researchers from academia and industry working on differential privacy, machine unlearning, privacy auditing, privacy attacks, and related topics.

Call for Papers

We invite submissions to the Privacy-Preserving Machine Learning Workshop at EurIPS 2025.
We welcome both novel contributions and work in progress, and we encourage diverse viewpoints.

Important Dates (AoE)

  • Paper submission: October 10, 2025
  • Accept/Reject notification: October 31, 2025
  • Workshop: December 6–7, 2025

Topics of Interest

  • Efficient methods for privacy-preserving machine learning
  • Trust models for privacy, including federated learning and data minimization
  • Privacy at inference time, privacy for agent interactions, and privacy for large language models (fine-tuning, test-time training)
  • Privacy-preserving data generation
  • Differential privacy theory
  • Threat models and privacy attacks
  • Auditing methods and interpretation of privacy guarantees
  • Machine unlearning, certifiable machine unlearning, and new unlearning algorithms
  • Relationships between privacy and other aspects of trustworthy machine learning

Submission Guidelines

  • Format: up to 5 pages, excluding references
  • Style: NeurIPS 2025 template
  • Anonymization: required (double-blind review)
  • Submission site: via OpenReview — link will be announced here soon

This workshop is non-archival and will not have official proceedings, so work submitted to the workshop may also be submitted to other venues. We welcome ongoing and unpublished work, including papers under review at the time of submission. We do not accept submissions that have already been accepted for publication at venues with archival proceedings. The titles of accepted papers will be published on the website.

We are looking for reviewers to help ensure a fair and constructive review process.
Each reviewer will be asked to review at most 3 papers.
If you are interested in serving as a reviewer, please fill out the following form.


Invited Speakers

Tamalika Mukherjee
Max Planck Institute for Security and Privacy

Antti Honkela
University of Helsinki

and more to be announced

Program

Coming soon!


Organizers

Amartya Sanyal
University of Copenhagen

Rachel Cummings
Columbia University

Contact

Questions? Email us at ppml.eurips@gmail.com