Matthew Jagielski

About Matthew Jagielski

Matthew Jagielski, a distinguished researcher at Northeastern University, specializes in adversarial machine learning, differential privacy, and security. He has an h-index of 21 overall, and an h-index of 21 for work since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

Noise masking attacks and defenses for pretrained speech models

Stealing Part of a Production Language Model

Auditing Private Prediction

Are aligned neural networks adversarially aligned?

Privacy auditing with one (1) training run

Students parrot their teachers: Membership inference on model distillation

Advancing differential privacy: Where we are now and future directions for real-world deployment

Poisoning Web-Scale Training Datasets is Practical

Matthew Jagielski Information

University: Northeastern University
Position: PhD Student
Citations (all): 4548
Citations (since 2020): 4512
Cited By: 587
h-index (all): 21
h-index (since 2020): 21
i10-index (all): 28
i10-index (since 2020): 28


Matthew Jagielski Skills & Research Interests

adversarial machine learning

differential privacy

security

Top articles of Matthew Jagielski

Noise masking attacks and defenses for pretrained speech models

2024/4/14

Authors: Matthew Jagielski (H-Index: 10), Om Thakkar (H-Index: 1), Lun Wang (H-Index: 7)

Stealing Part of a Production Language Model

arXiv preprint arXiv:2403.06634

2024/3/11

Auditing Private Prediction

arXiv preprint arXiv:2402.09403

2024/2/14

Are aligned neural networks adversarially aligned?

Advances in Neural Information Processing Systems

2024/2/13

Privacy auditing with one (1) training run

Advances in Neural Information Processing Systems

2024/2/13

Authors: Milad Nasr (H-Index: 8), Matthew Jagielski (H-Index: 10)

Students parrot their teachers: Membership inference on model distillation

Advances in Neural Information Processing Systems

2024/2/13

Poisoning Web-Scale Training Datasets is Practical

arXiv preprint arXiv:2302.10149

2023/2/20

Authors: Matthew Jagielski (H-Index: 10), Florian Tramèr (H-Index: 23)

Extracting training data from diffusion models

arXiv preprint arXiv:2301.13188

2023/1/30

Preventing generation of verbatim memorization in language models gives a false sense of privacy

Proceedings of the 16th International Natural Language Generation Conference

2023

How to Combine Membership-Inference Attacks on Multiple Updated Machine Learning Models

Proceedings on Privacy Enhancing Technologies

2023

Tight auditing of differentially private machine learning

2023

Counterfactual memorization in neural language models

Advances in Neural Information Processing Systems

2023/12/15

Scalable Extraction of Training Data from (Production) Language Models

arXiv preprint arXiv:2311.17035

2023/11/28

Privacy Side Channels in Machine Learning Systems

arXiv preprint arXiv:2309.05610

2023/9/11

Backdoor Attacks for In-Context Learning with Language Models

arXiv preprint arXiv:2307.14692

2023/7/27

Authors: Matthew Jagielski (H-Index: 10), Florian Tramèr (H-Index: 23)

A Note On Interpreting Canary Exposure

arXiv preprint arXiv:2306.00133

2023/5/31

Authors: Matthew Jagielski (H-Index: 10)

SNAP: Efficient extraction of private properties with poisoning

2023/5/21

Privacy-Preserving Recommender Systems with Synthetic Query Generation using Differentially Private Large Language Models

arXiv preprint arXiv:2305.05973

2023/5/10
