Jindong Gu

About Jindong Gu

Jindong Gu, a distinguished researcher at Ludwig-Maximilians-Universität München with an h-index of 14 (14 since 2020), specializes in Explainable AI, Robust AI, and Efficient AI.

His recent articles reflect a diverse array of research interests and contributions to the field:

Latent Guard: a Safety Framework for Text-to-image Generation

Responsible Generative AI: What to Generate and What Not

Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks?

Model-agnostic Origin Attribution of Generated Images with Few-shot Examples

Discretization-induced Dirichlet posterior for robust uncertainty quantification on regression

Does Few-Shot Learning Suffer from Backdoor Attacks?

FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning

Minimalism is King! High-Frequency Energy-based Screening for Data-Efficient Backdoor Attacks

Jindong Gu Information

University: Ludwig-Maximilians-Universität München

Position: ___

Citations (all): 567

Citations (since 2020): 567

Cited by: 68

h-index (all): 14

h-index (since 2020): 14

i10-index (all): 18

i10-index (since 2020): 18

Links: Email | University Profile Page | Google Scholar

Jindong Gu Skills & Research Interests

Explainable AI

Robust AI

Efficient AI

Top articles of Jindong Gu

Latent Guard: a Safety Framework for Text-to-image Generation

arXiv preprint arXiv:2404.08031

2024/4/11

Responsible Generative AI: What to Generate and What Not

arXiv preprint arXiv:2404.05783

2024/4/8

Red Teaming GPT-4V: Are GPT-4V Safe Against Uni/Multi-Modal Jailbreak Attacks?

arXiv preprint arXiv:2404.03411

2024/4/4

Model-agnostic Origin Attribution of Generated Images with Few-shot Examples

arXiv preprint arXiv:2404.02697

2024/4/3

Discretization-induced Dirichlet posterior for robust uncertainty quantification on regression

Proceedings of the AAAI Conference on Artificial Intelligence

2024/3/24

Does Few-Shot Learning Suffer from Backdoor Attacks?

Proceedings of the AAAI Conference on Artificial Intelligence

2024/3/24

FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning

Proceedings of the AAAI Conference on Artificial Intelligence

2024/3/24

Minimalism is King! High-Frequency Energy-based Screening for Data-Efficient Backdoor Attacks

IEEE Transactions on Information Forensics and Security

2024/3/22

As Firm As Their Foundations: Can open-sourced foundation models be used to create adversarial examples for downstream tasks?

arXiv preprint arXiv:2403.12693

2024/3/19

Hide in Thicket: Generating Imperceptible and Rational Adversarial Perturbations on 3D Point Clouds

IEEE Conference on Computer Vision and Pattern Recognition (CVPR)

2024/3/8

Stop Reasoning! When Multimodal LLMs with Chain-of-Thought Reasoning Meets Adversarial Images

arXiv preprint arXiv:2402.14899

2024/2/22

Benchmarking robustness of adaptation methods on pre-trained vision-language models

Advances in Neural Information Processing Systems

2024/2/13

Inducing High Energy-Latency of Large Vision-Language Models with Verbose Images

arXiv preprint arXiv:2401.11170

2024/1/20

A Survey on Transferability of Adversarial Examples across Deep Neural Networks

Transactions on Machine Learning Research (TMLR)

2023/10/26

An Image Is Worth 1000 Lies: Transferability of Adversarial Images across Prompts on Vision-Language Models

2023/10/13

Exploring Non-additive Randomness on ViT against Query-Based Black-Box Attacks

arXiv preprint arXiv:2309.06438

2023/9/12

Boosting Fair Classifier Generalization through Adaptive Priority Reweighing

arXiv preprint arXiv:2309.08375

2023/9

Revisiting and Exploring Efficient Fast Adversarial Training via LAW: Lipschitz Regularization and Auto Weight Averaging

arXiv preprint arXiv:2308.11443

2023/8/22

A Systematic Survey of Prompt Engineering on Vision-Language Foundation Models

arXiv preprint arXiv:2307.12980

2023/7/24

Reliable Evaluation of Adversarial Transferability

arXiv preprint arXiv:2306.08565

2023/6/14

See the list of professors at Jindong Gu's university (Ludwig-Maximilians-Universität München)
