Yufei Ma
Peking University
H-index: 14
Top articles of Yufei Ma
DCIM-GCN: Digital Computing-in-Memory Accelerator for Graph Convolutional Network
IEEE Transactions on Circuits and Systems I: Regular Papers
2024/4/15
Sparsity-Aware In-Memory Neuromorphic Computing Unit With Configurable Topology of Hybrid Spiking and Artificial Neural Network
IEEE Transactions on Circuits and Systems I: Regular Papers
2024/3/26
30.2 A 22nm 0.26 nW/Synapse Spike-Driven Spiking Neural Network Processing Unit Using Time-Step-First Dataflow and Sparsity-Adaptive In-Memory Computing
2024/2/18
Research progress on low-power artificial intelligence of things (AIoT) chip design
2023/10
DCIM-3DRec: A 3D Reconstruction Accelerator with Digital Computing-in-Memory and Octree-Based Scheduler
2023/8/7
A Model-Specific End-to-End Design Methodology for Resource-Constrained TinyML Hardware
2023/7/9
An 82-nW 0.53-pJ/SOP Clock-Free Spiking Neural Network With 40-µs Latency for AIoT Wake-Up Functions Using a Multilevel-Event-Driven Bionic Architecture …
IEEE Transactions on Circuits and Systems I: Regular Papers
2023/6/23
A 22nm 0.43 pJ/SOP Sparsity-Aware In-Memory Neuromorphic Computing System with Hybrid Spiking and Artificial Neural Network and Configurable Topology
2023/4/23
7.8 A 22nm Delta-Sigma Computing-in-Memory (ΔΣ CIM) SRAM Macro with Near-Zero-Mean Outputs and LSB-First ADCs Achieving 21.38 TOPS/W for 8b-MAC Edge AI Processing
2023/2/19
RIMAC: An array-level ADC/DAC-free ReRAM-based in-memory DNN processor with analog cache and computation
2023/1/16
DCIM-GCN: Digital Computing-in-Memory to Efficiently Accelerate Graph Convolutional Networks
2022/10/30
Hybrid stochastic-binary computing for low-latency and high-precision inference of CNNs
IEEE Transactions on Circuits and Systems I: Regular Papers
2022/5/18
A flexible and efficient FPGA accelerator for various large-scale and lightweight CNNs
IEEE Transactions on Circuits and Systems I: Regular Papers
2021/12/7
SWIFT: Small-World-based Structural Pruning to Accelerate DNN Inference on FPGA
2021/2/17
Small-world-based structural pruning for efficient FPGA inference of deep neural networks
2020/11/3
Optimizing stochastic computing for low latency inference of convolutional neural networks
2020/11/2
Efficient inference of large-scale and lightweight convolutional neural networks on FPGA
2020/9/8
In-Memory Computing: The Next-Generation AI Computing Paradigm
2020/9/7
Efficient hardware post processing of anchor-based object detection on FPGA
2020/7/6
An efficient FPGA accelerator optimized for high throughput sparse CNN inference
2020/12/8