Jinyuan Jia

Duke University

H-index: 23

North America, United States

About Jinyuan Jia

Jinyuan Jia is a distinguished researcher at Duke University specializing in Trustworthy AI, with an h-index of 23 overall and 23 since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models

Pre-trained encoders in self-supervised learning improve secure and privacy-preserving supervised learning

GraphGuard: Provably Robust Graph Classification against Adversarial Attacks

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning

TextGuard: Provable Defense against Backdoor Attacks on Text Classification

Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code

FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models

Jinyuan Jia Information

University: Duke University
Position: Ph.D. student
Citations (all): 3167
Citations (since 2020): 3120
Cited by: 509
h-index (all): 23
h-index (since 2020): 23
i10-index (all): 32
i10-index (since 2020): 32
University Profile Page: Duke University
Google Scholar: View Google Scholar Profile
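For context on the metrics above: the h-index is the largest h such that h of an author's papers each have at least h citations, and the i10-index is the number of papers with at least 10 citations. A minimal Python sketch of how they are computed, using hypothetical citation counts rather than Jia's actual per-paper data:

```python
# Sketch of h-index and i10-index computation.
# The citation counts below are made up for illustration;
# they are not Jinyuan Jia's actual per-paper numbers.

def h_index(citations):
    """Largest h such that at least h papers each have >= h citations."""
    h = 0
    for rank, count in enumerate(sorted(citations, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(1 for count in citations if count >= 10)

papers = [50, 30, 25, 12, 10, 9, 3]  # hypothetical citation counts
print(h_index(papers))    # 6: the top six papers each have >= 6 citations
print(i10_index(papers))  # 5: five papers have >= 10 citations
```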

Jinyuan Jia Skills & Research Interests

Trustworthy AI

Top articles of Jinyuan Jia

MMCert: Provable Defense against Adversarial Attacks to Multi-modal Models
Authors: Yanting Wang, Hongye Fu, Wei Zou, Jinyuan Jia
Publication date: 2024

PoisonedRAG: Knowledge Poisoning Attacks to Retrieval-Augmented Generation of Large Language Models
Venue: arXiv preprint arXiv:2402.07867
Authors: Wei Zou, Runpeng Geng, Binghui Wang, Jinyuan Jia
Publication date: 2024/2/12

Pre-trained encoders in self-supervised learning improve secure and privacy-preserving supervised learning
Authors: Hongbin Liu, Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
Publication date: 2024

GraphGuard: Provably Robust Graph Classification against Adversarial Attacks
Authors: Han Yang, Binghui Wang, Jinyuan Jia
Publication date: 2024/2

CorruptEncoder: Data Poisoning based Backdoor Attacks to Contrastive Learning
Authors: Jinghuai Zhang, Hongbin Liu, Jinyuan Jia, Neil Zhenqiang Gong
Publication date: 2024

TextGuard: Provable Defense against Backdoor Attacks on Text Classification
Venue: arXiv preprint arXiv:2311.11225
Authors: Hengzhi Pei, Jinyuan Jia, Wenbo Guo, Bo Li, Dawn Song
Publication date: 2023/11/19

Provably Robust Multi-bit Watermarking for AI-generated Text via Error Correction Code
Venue: arXiv preprint arXiv:2401.16820
Authors: Wenjie Qu, Dong Yin, Zixin He, Wei Zou, Tianyang Tao, et al.
Publication date: 2024/1/30

FCert: Certifiably Robust Few-Shot Classification in the Era of Foundation Models
Authors: Yanting Wang, Wei Zou, Jinyuan Jia
Publication date: 2024/4/12

Brave: Byzantine-Resilient and Privacy-Preserving Peer-to-Peer Federated Learning
Venue: arXiv preprint arXiv:2401.05562
Authors: Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Radha Poovendran
Publication date: 2024/1/10

SafeDecoding: Defending against Jailbreak Attacks via Safety-Aware Decoding
Venue: arXiv preprint arXiv:2402.08983
Authors: Zhangchen Xu, Fengqing Jiang, Luyao Niu, Jinyuan Jia, Bill Yuchen Lin, et al.
Publication date: 2024/2/14

PORE: Provably Robust Recommender Systems against Data Poisoning Attacks
Authors: Jinyuan Jia, Yupei Liu, Yuepeng Hu, Neil Zhenqiang Gong
Publication date: 2023/3/26

IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI
Venue: 2023 Conference on Neural Information Processing Systems (NeurIPS '23)
Authors: Bochuan Cao, Changjiang Li, Ting Wang, Jinyuan Jia, Bo Li, et al.
Publication date: 2023

Graph Contrastive Backdoor Attacks
Venue: Proceedings of the 40th International Conference on Machine Learning (ICML)
Authors: Hangfan Zhang, Jinghui Chen, Lu Lin, Jinyuan Jia, Dinghao Wu
Publication date: 2023/7/23

Prompt Injection Attacks and Defenses in LLM-Integrated Applications
Venue: arXiv preprint arXiv:2310.12815
Authors: Yupei Liu, Yuqi Jia, Runpeng Geng, Jinyuan Jia, Neil Zhenqiang Gong
Publication date: 2023/10/19

FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information
Authors: Xiaoyu Cao, Jinyuan Jia, Zaixi Zhang, Neil Zhenqiang Gong
Publication date: 2023

On the Safety of Open-Sourced Large Language Models: Does Alignment Really Prevent Them From Being Misused?
Venue: arXiv preprint arXiv:2310.01581
Authors: Hangfan Zhang, Zhimeng Guo, Huaisheng Zhu, Bochuan Cao, Lu Lin, et al.
Publication date: 2023/10/2

FedGame: A Game-Theoretic Defense against Backdoor Attacks in Federated Learning
Venue: Advances in Neural Information Processing Systems
Authors: Jinyuan Jia, Zhuowen Yuan, Dinuka Sahabandu, Luyao Niu, Arezoo Rajabi, et al.
Publication date: 2024/2/13

REaaS: Enabling Adversarially Robust Downstream Classifiers via Robust Encoder as a Service
Authors: Wenjie Qu, Jinyuan Jia, Neil Zhenqiang Gong
Publication date: 2023/1/7

Screen Perturbation: Adversarial Attack and Defense on Under-Screen Camera
Authors: Hanting Ye, Guohao Lan, Jinyuan Jia, Qing Wang
Publication date: 2023/10

Identifying and Mitigating Vulnerabilities in LLM-Integrated Applications
Venue: arXiv preprint arXiv:2311.16153
Authors: Fengqing Jiang, Zhangchen Xu, Luyao Niu, Boxin Wang, Jinyuan Jia, et al.
Publication date: 2023/11/7
