Linyan Mei

About Linyan Mei

Linyan Mei is a distinguished researcher at Katholieke Universiteit Leuven, with an h-index of 8 overall and 8 since 2020. He specializes in deep learning accelerators, design space exploration, and precision-scalable computing.

His recent articles, listed in the Top articles section below, reflect a diverse array of research interests and contributions to the field.

Linyan Mei Information

University: Katholieke Universiteit Leuven
Position: ___
Citations (all): 307
Citations (since 2020): 305
Cited by: 22
h-index (all): 8
h-index (since 2020): 8
i10-index (all): 8
i10-index (since 2020): 8

Linyan Mei Skills & Research Interests

Deep learning accelerator

Design space exploration

Precision-scalable computing

Top articles of Linyan Mei

ACCO: Automated Causal CNN Scheduling Optimizer for Real-Time Edge Accelerators. 2023/11/6.

Design Space Exploration of Deep Learning Accelerators. 2023/8/24.

SALSA: Simulated Annealing Based Loop-Ordering Scheduler for DNN Accelerators. 2023/6/11.

Stream: A Modeling Framework for Fine-grained Layer Fusion on Multi-core DNN Accelerators. 2023/4/23.

DeFiNES: Enabling Fast Exploration of the Depth-First Scheduling Space for DNN Accelerators through Analytical Modeling. 2023/2/25.

TinyVers: A Tiny Versatile System-on-Chip with State-Retentive eMRAM for ML Inference at the Extreme Edge. IEEE Journal of Solid-State Circuits, 2023/1/20.

Towards Heterogeneous Multi-core Accelerators Exploiting Fine-grained Scheduling of Layer-Fused Deep Neural Networks. arXiv preprint arXiv:2212.10612, 2022/12/20.

Bandwidth-Aware Flexible-Scheduling Machine Learning Accelerator. 2022/12/1.

ML Processors Are Going Multi-core: A Performance Dream or a Scheduling Nightmare? IEEE Solid-State Circuits Magazine, 2022/11/15.

TinyVers: A 0.8-17 TOPS/W, 1.7 μW-20 mW, Tiny Versatile System-on-Chip with State-Retentive eMRAM for Machine Learning Inference at the Extreme Edge. 2022/6/12.

A Uniform Latency Model for DNN Accelerators with Diverse Architectures and Dataflows. 2022/3/14.

Taxonomy and Benchmarking of Precision-Scalable MAC Arrays under Enhanced DNN Dataflow Representation. IEEE Transactions on Circuits and Systems I: Regular Papers, 2022/1/14.

Hardware-Efficient Residual Neural Network Execution in Line-Buffer Depth-First Processing. IEEE Journal on Emerging and Selected Topics in Circuits and Systems, 2021/10/14.

Processor Architecture Optimization for Spatially Dynamic Neural Networks. 2021/10/4.

Analyzing the Energy-Latency-Area-Accuracy Trade-off Across Contemporary Neural Networks. 2021/6/6.

LOMA: Fast Auto-Scheduling on DNN Accelerators through Loop-Order-Based Memory Allocation. 2021/6/6.

ZigZag: Enlarging Joint Architecture-Mapping Design Space Exploration for DNN Accelerators. IEEE Transactions on Computers, 2021/2/22.

Opportunities and Limitations of Emerging Analog In-Memory Compute DNN Architectures. 2020/12/12.
