Hongwu Peng

University of Connecticut

H-index: 15

United States (North America)

About Hongwu Peng

Hongwu Peng is a Ph.D. student and researcher at the University of Connecticut specializing in Computer Science and Engineering, with an h-index of 15 both overall and since 2020.

His recent articles reflect a diverse array of research interests and contributions to the field:

MaxK-GNN: Extremely Fast GPU Kernel Design for Accelerating Graph Neural Networks Training

Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads

Advanced Language Model-Driven Verilog Development: Enhancing Power, Performance, and Area Optimization in Code Synthesis

Evaluating Emerging AI/ML Accelerators: IPU, RDU, and NVIDIA/AMD GPUs

Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

AQ2PNN: Enabling Two-party Privacy-Preserving Deep Neural Network Inference with Adaptive Quantization

Hongwu Peng Information

University: University of Connecticut
Position: Ph.D. Student
Citations (all): 756
Citations (since 2020): 756
Cited By: 73
h-index (all): 15
h-index (since 2020): 15
i10-index (all): 27
i10-index (since 2020): 27
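For reference, the h-index and i10-index reported above are derived from per-paper citation counts. A minimal sketch of how these metrics are computed (the citation counts below are illustrative, not Hongwu Peng's actual data):

```python
def h_index(citations):
    """h-index: the largest h such that h papers each have >= h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    """i10-index: the number of papers with at least 10 citations."""
    return sum(1 for cites in citations if cites >= 10)

# Illustrative citation counts (hypothetical data)
counts = [120, 45, 33, 20, 15, 9, 7, 3, 1]
print(h_index(counts))    # → 7
print(i10_index(counts))  # → 5
```

With 27 papers each cited at least 10 times and 15 papers each cited at least 15 times, a profile like the one above yields i10-index 27 and h-index 15.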

Email

University Profile Page

Google Scholar

Hongwu Peng Skills & Research Interests

Computer Science and Engineering

Top articles of Hongwu Peng

MaxK-GNN: Extremely Fast GPU Kernel Design for Accelerating Graph Neural Networks Training

2024/4/27

Learning from Teaching Regularization: Generalizable Correlations Should be Easy to Imitate

arXiv preprint arXiv:2402.02769

2024/2/5

Zero-Space Cost Fault Tolerance for Transformer-based Language Models on ReRAM

arXiv preprint arXiv:2401.11664

2024/1/22

Medusa: Simple LLM Inference Acceleration Framework with Multiple Decoding Heads

arXiv preprint arXiv:2401.10774

2024/1/19

Advanced Language Model-Driven Verilog Development: Enhancing Power, Performance, and Area Optimization in Code Synthesis

arXiv preprint arXiv:2312.01022

2023/12/2

Co-author: Caiwen Ding (H-Index: 14)

Evaluating Emerging AI/ML Accelerators: IPU, RDU, and NVIDIA/AMD GPUs

arXiv preprint arXiv:2311.04417

2023/11/8

Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks

2023/10/28

AQ2PNN: Enabling Two-party Privacy-Preserving Deep Neural Network Inference with Adaptive Quantization

2023/10/28

LinGCN: Structural Linearized Graph Convolutional Network for Homomorphically Encrypted Inference

2023/9/25

Dynamic Sparse Training via Balancing the Exploration-Exploitation Trade-off

2023/7/9

PASNet: Polynomial Architecture Search Framework for Two-Party Computation-Based Secure Neural Network Deployment

2023/7/9

RRNet: Towards ReLU-Reduced Neural Network for Two-party Computation Based Private Inference

arXiv preprint arXiv:2302.02292

2023/2/5

Design and Validation of an MVDC Isolated Active Voltage Injection Based HCB

IEEE Transactions on Industry Applications

2023/1/31

AutoReP: Automatic ReLU Replacement for Fast Private Network Inference

2023

CoDG-ReRAM: An Algorithm-Hardware Co-Design to Accelerate Semi-Structured GNNs on ReRAM

2022/10/23

Towards Sparsification of Graph Neural Networks

2022/9/11

Si-IGBT and SiC-MOSFET hybrid switch-based 1.7 kV half-bridge power module

Power Electronic Devices and Components

2022/10/1

A Length Adaptive Algorithm-Hardware Co-Design of Transformer on FPGA through Sparse Attention and Dynamic Pipelining

2022/7/10

An Automatic and Efficient BERT Pruning for Edge AI Systems

2022/4/6

Performance Comparison and Modelling of Instantaneous Current Sharing Amongst GaN HEMT Switch Configurations for Current Source Inverters

2022/3/20


Co-Authors

Caiwen Ding