Zhengyu Zhao

Radboud Universiteit

H-index: 14

Europe-Netherlands

About Zhengyu Zhao

Zhengyu Zhao is a distinguished researcher at Radboud Universiteit, with an exceptional h-index of 14 (also 14 since 2020). He specializes in Adversarial Machine Learning, AI Security, and Data Privacy.

His recent articles reflect a diverse array of research interests and contributions to the field:

Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving

Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization

Composite Backdoor Attacks against Large Language Models

Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights

Prompt Backdoors in Visual Prompt Learning

Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection

Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models

Generative watermarking against unauthorized subject-driven image synthesis

Zhengyu Zhao Information

University: Radboud Universiteit
Position: ___
Citations (all): 616
Citations (since 2020): 600
Cited by: 181
h-index (all): 14
h-index (since 2020): 14
i10-index (all): 17
i10-index (since 2020): 17

Links: Email, University Profile Page, Google Scholar

Zhengyu Zhao Skills & Research Interests

Adversarial Machine Learning

AI Security

Data Privacy

Top articles of Zhengyu Zhao

Physical 3D Adversarial Attacks against Monocular Depth Estimation in Autonomous Driving

2024/3/26

Quantization Aware Attack: Enhancing Transferable Adversarial Attacks by Model Quantization

IEEE Transactions on Information Forensics and Security

2024/1/31

Composite Backdoor Attacks against Large Language Models

2024

Revisiting Transferable Adversarial Image Examples: Attack Categorization, Evaluation Guidelines, and New Insights

arXiv preprint arXiv:2310.11850

2023/10/18

Prompt Backdoors in Visual Prompt Learning

arXiv preprint arXiv:2310.07632

2023/10/11

Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection

arXiv preprint arXiv:2309.01104

2023/9/3

Robustness Over Time: Understanding Adversarial Examples' Effectiveness on Longitudinal Versions of Large Language Models

arXiv preprint arXiv:2308.07847

2023/8/15

Generative watermarking against unauthorized subject-driven image synthesis

arXiv preprint arXiv:2306.07754

2023/6/13

Adversarial Image Color Transformations in Explicit Color Filter Space

IEEE Transactions on Information Forensics and Security (TIFS)

2023/5/10

Image Shortcut Squeezing: Countering Perturbative Availability Poisons with Compression

International Conference on Machine Learning (ICML)

2023/1/31

Level Up with RealAEs: Leveraging Domain Constraints in Feature Space to Strengthen Robustness of Android Malware Detection

arXiv preprint arXiv:2205.15128

2022/5/30

Is Adversarial Training Really a Silver Bullet for Mitigating Data Poisoning?

2023

Membership Inference Attacks by Exploiting Loss Trajectory

2022/8/31

Zhengyu Zhao (H-Index: 7)

Yang Zhang (H-Index: 3)

Generative Poisoning Using Random Discriminators

2022

Pivoting Image-based Profiles Toward Privacy: Inhibiting Malicious Profiling with Adversarial Additions

2021/6/21

On Success and Simplicity: A Second Look at Transferable Targeted Attacks

Advances in Neural Information Processing Systems (NeurIPS)

2020/12/21

Screen Gleaning: A Screen Reading TEMPEST Attack on Mobile Devices Exploiting an Electromagnetic Side Channel

arXiv preprint arXiv:2011.09877

2020/11/19

Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance

2020

See the list of professors at Radboud Universiteit.

Co-Authors
