Publications

2025

  1. NeurIPS 2025
    Strong Membership Inference Attacks on Massive Datasets and (Moderately) Large Language Models
    Jamie Hayes, Ilia Shumailov, Christopher A. Choquette-Choo, Matthew Jagielski, George Kaissis, Katherine Lee, and 10 more authors
    arXiv preprint 2505.18773, 2025
  2. NeurIPS Workshop
    RippleBench: Capturing Ripple Effects by Leveraging Existing Knowledge Repositories
    Roy Rinberg, Usha Bhalla, Igor Shilov, and Rohit Gandikota
    Mechanistic Interpretability Workshop at NeurIPS, 2025
  3. USENIX 2025
    Free Record-Level Privacy Risk Evaluation Through Artifact-Based Methods
    Joseph Pollock*, Igor Shilov*, Euodia Dodd, and Yves-Alexandre de Montjoye
    In 34th USENIX Security Symposium (USENIX Security 25), 2025
  4. ICML 2025
    Certification for Differentially Private Prediction in Gradient-Based Training
    Matthew Wicker, Philip Sosnin, Igor Shilov, Adrianna Janik, Mark N. Müller, Yves-Alexandre de Montjoye, and 2 more authors
    In Forty-second International Conference on Machine Learning, 2025
  5. ICML Workshop
    Counterfactual Influence as a Distributional Quantity
    Matthieu Meeus, Igor Shilov, Georgios Kaissis, and Yves-Alexandre de Montjoye
    The Impact of Memorization on Trustworthy Foundation Models: ICML 2025 Workshop, 2025
  6. SaTML 2025
    SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It)
    Matthieu Meeus, Igor Shilov, Shubham Jain, Manuel Faysse, Marek Rei, and Yves-Alexandre de Montjoye
    In 3rd IEEE Conference on Secure and Trustworthy Machine Learning, 2025

2024

  1. arXiv
    Sub-optimal Learning in Meta-Classifier Attacks: A Study of Membership Inference on Differentially Private Location Aggregates
    Yuhan Liu, Florent Guepin, Igor Shilov, and Yves-Alexandre de Montjoye
    arXiv preprint 2412.20456, 2024
  2. arXiv
    Watermarking Training Data of Music Generation Models
    Pascal Epple, Igor Shilov, Bozhidar Stevanoski, and Yves-Alexandre de Montjoye
    arXiv preprint 2412.08549, 2024
  3. ICML 2024
    Copyright Traps for Large Language Models
    Matthieu Meeus*, Igor Shilov*, Manuel Faysse, and Yves-Alexandre de Montjoye
    In Forty-first International Conference on Machine Learning, 2024

    Press coverage in MIT Technology Review and Nature News.

  4. arXiv
    The Mosaic Memory of Large Language Models
    Igor Shilov*, Matthieu Meeus*, and Yves-Alexandre de Montjoye
    arXiv preprint 2405.15523, 2024

2022

  1. arXiv
    Defending against Reconstruction Attacks with Rényi Differential Privacy
    Pierre Stock, Igor Shilov, Ilya Mironov, and Alexandre Sablayrolles
    arXiv preprint 2202.07623, 2022

2021

  1. NeurIPS 2021
    Antipodes of Label Differential Privacy: PATE and ALIBI
    Mani Malek Esmaeili, Ilya Mironov, Karthik Prasad, Igor Shilov, and Florian Tramèr
    In Advances in Neural Information Processing Systems, 2021
  2. NeurIPS Workshop
    Opacus: User-Friendly Differential Privacy Library in PyTorch
    Ashkan Yousefpour, Igor Shilov, Alexandre Sablayrolles, Davide Testuggine, Karthik Prasad, Mani Malek, and 6 more authors
    In NeurIPS Workshop on Privacy in Machine Learning, 2021