Tianyu Ding, Ph.D.

Johns Hopkins University

H-index: 10

North America, United States

About Tianyu Ding, Ph.D.

Tianyu Ding, Ph.D., is a distinguished researcher at Johns Hopkins University specializing in Efficient ML, Efficient AI, Generative AI, and Computer Vision. He has an exceptional h-index of 10, with a recent h-index of 10 (since 2020).

His recent articles reflect a diverse array of research interests and contributions to the field:

InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules

AdaContour: Adaptive Contour Descriptor with Hierarchical Representation

S3Editor: A Sparse Semantic-Disentangled Self-Training Framework for Face Video Editing

ONNXPruner: ONNX-Based General Model Pruning Adapter

Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation

DREAM: Diffusion Rectification and Estimation-Adaptive Models

OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators

The efficiency spectrum of large language models: An algorithmic survey

Tianyu Ding, Ph.D. Information

University: Johns Hopkins University

Position: PhD Candidate

Citations (all): 495

Citations (since 2020): 495

Cited By: 26

h-index (all): 10

h-index (since 2020): 10

i10-index (all): 10

i10-index (since 2020): 10


Tianyu Ding, Ph.D. Skills & Research Interests

Efficient ML

Efficient AI

Generative AI

Computer Vision

Top articles of Tianyu Ding, Ph.D.

InsertNeRF: Instilling Generalizability into NeRF with HyperNet Modules

International Conference on Learning Representations (ICLR)

2024/1

AdaContour: Adaptive Contour Descriptor with Hierarchical Representation

arXiv preprint arXiv:2404.08292

2024/4/12

S3Editor: A Sparse Semantic-Disentangled Self-Training Framework for Face Video Editing

arXiv preprint arXiv:2404.08111

2024/4/11

ONNXPruner: ONNX-Based General Model Pruning Adapter

arXiv preprint arXiv:2404.08016

2024/4/10

Exploiting Inter-sample and Inter-feature Relations in Dataset Distillation

arXiv preprint arXiv:2404.00563

2024/3/31

DREAM: Diffusion Rectification and Estimation-Adaptive Models

2024/3

OTOv3: Automatic Architecture-Agnostic Neural Network Training and Compression from Structured Pruning to Erasing Operators

arXiv preprint arXiv:2312.09411

2023/12/15

The efficiency spectrum of large language models: An algorithmic survey

2023/12/1

CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering

arXiv preprint arXiv:2311.15510

2023/11/27

Efficient Subgame Refinement for Extensive-form Games

Advances in Neural Information Processing Systems (NeurIPS)

2023/11/2

LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery

arXiv preprint arXiv:2310.18356

2023/10/24

Unified Space-Time Interpolation of Video Information

2023/8/17

Where and How: Mitigating Confusion in Neural Radiance Fields from Sparse Inputs

Proceedings of the 31st ACM International Conference on Multimedia (ACMMM)

2023/8/5

Towards automatic neural architecture search within general super-networks

arXiv preprint arXiv:2305.18030

2023/5/25

OTOv2: Automatic, Generic, User-Friendly

2023

Video Frame Interpolation Via Feature Pyramid Flows

2022/12/15

Sparsity-guided Network Design for Frame Interpolation

arXiv preprint arXiv:2209.04551

2022/9/9

On the Optimization Landscape of Neural Collapse under MSE Loss: Global Optimality with Unconstrained Features

International Conference on Machine Learning (ICML)

2022/3/2

RSTT: Real-Time Spatial Temporal Transformer for Space-Time Video Super-Resolution

2022

Only train once: A one-shot neural network training and pruning framework

Advances in Neural Information Processing Systems

2021/12/6
