Abdelrahman Hosny

Brown University

H-index: 6

United States

About Abdelrahman Hosny

Abdelrahman Hosny is a researcher at Brown University with an h-index of 6, both overall and since 2020. His work spans Machine Learning, Constrained Optimization, Learning to Optimize, and EDA.

His recent articles reflect a diverse array of research interests and contributions to the field:

torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem

Automatic MILP solver configuration by learning problem similarities

Configuring Mixed-Integer Linear Programming Solvers with Deep Metric Learning

Sparse bitmap compression for memory-efficient training on the edge

Characterizing and optimizing EDA flows for the cloud

DRiLLS: Deep reinforcement learning for logic synthesis

Tutorial: Open-Source EDA and Machine Learning for IC Design: A Live Update

An immune-centric exploration of BRCA1 and BRCA2 germline mutation related breast and ovarian cancers

Abdelrahman Hosny Information

University

Brown University

Position

___

Citations(all)

549

Citations(since 2020)

497

Cited By

200

hIndex(all)

6

hIndex(since 2020)

6

i10Index(all)

6

i10Index(since 2020)

6

University Profile Page

Brown University

Abdelrahman Hosny Skills & Research Interests

Machine Learning

Constrained Optimization

Learning to Optimize

EDA

Top articles of Abdelrahman Hosny

torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem

Authors

Abdelrahman Hosny,Sherief Reda

Journal

arXiv preprint arXiv:2402.03640

Published Date

2024/2/6

The remarkable achievements of machine learning techniques in analyzing discrete structures have drawn significant attention towards their integration into combinatorial optimization algorithms. Typically, these methodologies improve existing solvers by injecting learned models within the solving loop to enhance the efficiency of the search process. In this work, we derive a single differentiable function capable of approximating solutions for the Maximum Satisfiability Problem (MaxSAT). Then, we present a novel neural network architecture to model our differentiable function, and progressively solve MaxSAT using backpropagation. This approach eliminates the need for labeled data or a neural network training phase, as the training process functions as the solving algorithm. Additionally, we leverage the computational power of GPUs to accelerate these computations. Experimental results on challenging MaxSAT instances show that our proposed methodology outperforms two existing MaxSAT solvers, and is on par with another in terms of solution cost, without necessitating any training or access to an underlying SAT solver. Given that numerous NP-hard problems can be reduced to MaxSAT, our novel technique paves the way for a new generation of solvers poised to benefit from neural network GPU acceleration.
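The core idea described above, relaxing Boolean variables to probabilities and minimizing a differentiable estimate of the number of falsified clauses, can be sketched in plain Python. This is not the paper's neural-network architecture: sigmoid-parameterized variables and a central finite-difference gradient stand in for the learned model and GPU backpropagation, and the clause encoding follows the common DIMACS convention (positive/negative integers for literals).

```python
import math
import random

def relaxed_unsat(weights, clauses):
    """Differentiable count of falsified clauses for a relaxed assignment."""
    # Each variable is relaxed to a probability x_i = sigmoid(w_i) in (0, 1).
    x = [1.0 / (1.0 + math.exp(-w)) for w in weights]
    loss = 0.0
    for clause in clauses:
        # P(clause falsified) ~ product of P(each literal is false),
        # treating variables as independent.
        p_false = 1.0
        for lit in clause:
            v = x[abs(lit) - 1]
            p_false *= (1.0 - v) if lit > 0 else v
        loss += p_false
    return loss

def solve_maxsat(clauses, n_vars, steps=300, lr=2.0, eps=1e-4):
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(n_vars)]
    for _ in range(steps):
        # Central finite differences stand in for backpropagation here.
        grad = []
        for i in range(n_vars):
            hi, lo = w[:], w[:]
            hi[i] += eps
            lo[i] -= eps
            grad.append(
                (relaxed_unsat(hi, clauses) - relaxed_unsat(lo, clauses)) / (2 * eps)
            )
        w = [wi - lr * gi for wi, gi in zip(w, grad)]
    # Round the relaxed solution to a Boolean assignment.
    assignment = [wi > 0 for wi in w]
    satisfied = sum(
        any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return assignment, satisfied

# DIMACS-style clauses: (x1 v x2) ^ (~x1 v x3) ^ (~x2 v ~x3) ^ (x1)
clauses = [[1, 2], [-1, 3], [-2, -3], [1]]
assignment, satisfied = solve_maxsat(clauses, n_vars=3)
```

As in the paper, the "training" loop is itself the solving algorithm: no labeled data is needed, and the gradient pushes the relaxed assignment toward one that falsifies as few clauses as possible before rounding.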

Automatic MILP solver configuration by learning problem similarities

Authors

Abdelrahman Hosny,Sherief Reda

Journal

Annals of Operations Research

Published Date

2023/7/14

A large number of real-world optimization problems can be formulated as Mixed Integer Linear Programs (MILP). MILP solvers expose numerous configuration parameters to control their internal algorithms. Solutions, and their associated costs or runtimes, are significantly affected by the choice of the configuration parameters, even when problem instances have the same number of decision variables and constraints. On one hand, using the default solver configuration leads to suboptimal solutions. On the other hand, searching and evaluating a large number of configurations for every problem instance is time-consuming and, in some cases, infeasible. In this study, we aim to predict configuration parameters for unseen problem instances that yield lower-cost solutions without the time overhead of searching-and-evaluating configurations at the solving time. Toward that goal, we first investigate the cost correlation of …

Configuring Mixed-Integer Linear Programming Solvers with Deep Metric Learning

Authors

Abdelrahman Hosny,Sherief Reda

Published Date

2022/9/29

Mixed Integer Linear Programming (MILP) solvers expose a large number of configuration parameters for their internal algorithms. Solutions, and their associated costs or runtimes, are significantly affected by the choice of the configuration parameters, even when problem instances are coming from the same distribution. On one hand, using the default solver configuration leads to poor suboptimal solutions. On the other hand, searching and evaluating an exponential number of configurations for every problem instance is time-consuming and in some cases infeasible. In this work, we propose MILPTune -- a machine learning-based approach to predict an instance-aware parameters configuration for MILP solvers. It enables avoiding the expensive search of configuration parameters for each new problem instance, while tuning the solver's behavior for the given instance. Our method trains a metric learning model based on a graph neural network to project problem instances to a space where instances with similar costs are closer to each other. At inference time, and given a new problem instance, we first embed the instance to the learned metric space, and then predict a parameters configuration using nearest neighbor data. Empirical results on real-world problem benchmarks show that our method predicts configuration parameters that improve solutions' costs by 10-67% compared to the baselines and previous approaches.
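The inference step described above, embedding a new instance and then looking up the configuration of its nearest neighbor in the learned metric space, can be sketched as follows. The embeddings and configurations here are hard-coded stand-ins rather than outputs of the paper's graph neural network, and parameter names like `presolve` are hypothetical.

```python
import math

# Hypothetical training set: each solved instance's learned embedding paired
# with the parameter configuration that gave its lowest-cost solution.
train = [
    ((0.10, 0.90), {"presolve": 2, "heuristics": 0.8}),
    ((0.80, 0.20), {"presolve": 0, "heuristics": 0.1}),
    ((0.20, 0.70), {"presolve": 2, "heuristics": 0.6}),
]

def predict_config(embedding, train):
    """Return the configuration of the nearest training instance in the metric space."""
    _, config = min(train, key=lambda pair: math.dist(pair[0], embedding))
    return config

# A new instance whose embedding lands close to the first training instance
config = predict_config((0.12, 0.85), train)
```

The design point this illustrates is that all the expensive work (training the embedding, searching configurations for training instances) happens offline; at solving time, prediction reduces to one embedding pass and a nearest-neighbor lookup.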

Sparse bitmap compression for memory-efficient training on the edge

Authors

Abdelrahman Hosny,Marina Neseem,Sherief Reda

Published Date

2021/12/14

Training on the Edge enables neural networks to learn continuously from new data after deployment on memory-constrained edge devices. Previous work is mostly concerned with reducing the number of model parameters, which is only beneficial for inference. However, the memory footprint from activations is the main bottleneck for training on the edge. Existing incremental training methods fine-tune the last few layers, sacrificing accuracy gains from re-training the whole model. In this work, we investigate the memory footprint of training deep learning models, and use our observations to propose BitTrain. In BitTrain, we exploit activation sparsity and propose a novel bitmap compression technique that reduces the memory footprint during training. We save the activations in our proposed bitmap compression format during the forward pass of the training, and restore them during the backward pass for the optimizer …
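The bitmap idea can be illustrated with a minimal sketch: mark each nonzero activation with one bit and store only the nonzero values, then rebuild the dense vector for the backward pass. This is a toy encoding over Python lists, not BitTrain's implementation; in practice zeros then cost one bit each instead of a full 32-bit float.

```python
def compress(activations):
    """Encode a sparse activation vector as (bitmap, nonzero values)."""
    bitmap = 0
    values = []
    for i, a in enumerate(activations):
        if a != 0.0:
            bitmap |= 1 << i   # one bit per element: 1 marks a nonzero slot
            values.append(a)
    return bitmap, values

def decompress(bitmap, values, length):
    """Restore the dense vector, e.g. for the backward pass."""
    out = []
    it = iter(values)
    for i in range(length):
        out.append(next(it) if bitmap >> i & 1 else 0.0)
    return out

acts = [0.0, 1.5, 0.0, 0.0, 2.25, 0.0]
bitmap, values = compress(acts)
restored = decompress(bitmap, values, len(acts))
```

The compression is lossless, so gradients computed from the restored activations are identical to those from the uncompressed ones; the savings grow with activation sparsity.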

Characterizing and optimizing EDA flows for the cloud

Authors

Abdelrahman Hosny,Sherief Reda

Journal

IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems

Published Date

2021/10/15

Design space exploration in logic synthesis and parameter tuning in physical design require a massive amount of compute resources in order to meet tapeout schedules. To address this need, cloud computing provides semiconductor and electronics companies with instant access to scalable compute resources. However, deploying electronic design automation (EDA) jobs on the cloud requires EDA teams to deeply understand the characteristics of their jobs in cloud environments. Unfortunately, there has been little to no public information on these characteristics. Thus, in this article, we first formulate the problem of moving EDA jobs to the cloud. To address the problem, we characterize the performance of four EDA main applications, namely: 1) synthesis; 2) placement; 3) routing; and 4) static timing analysis. We show that different EDA jobs require different compute configurations in order to achieve the best …

DRiLLS: Deep reinforcement learning for logic synthesis

Authors

Abdelrahman Hosny,Soheil Hashemi,Mohamed Shalan,Sherief Reda

Published Date

2020/1/13

Logic synthesis requires extensive tuning of the synthesis optimization flow, where the quality of results (QoR) depends on the sequence of optimizations used. Efficient design space exploration is challenging due to the exponential number of possible optimization permutations. Therefore, automating the optimization process is necessary. In this work, we propose a novel reinforcement learning-based methodology that navigates the optimization space without human intervention. We demonstrate the training of an Advantage Actor Critic (A2C) agent that seeks to minimize area subject to a timing constraint. Using the proposed methodology, designs can be optimized autonomously with no humans in the loop. Evaluation on the comprehensive EPFL benchmark suite shows that the agent outperforms existing exploration methodologies and improves QoR by an average of 13%.
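The agent-environment loop behind this setup can be sketched with toy stand-ins. Here a fixed effect table replaces the synthesis tool, and hard-coded policies replace the trained A2C agent; the action names and area-reduction factors are hypothetical, chosen only to show how a reward signal (relative area saved) scores an optimization sequence.

```python
import random

# Hypothetical stand-ins: in DRiLLS the actions are logic-synthesis
# transformations and the state/reward come from the synthesis tool
# under a timing constraint.
EFFECT = {"rewrite": 0.96, "resub": 0.97, "refactor": 0.98, "balance": 0.99}
ACTIONS = list(EFFECT)

def run_episode(policy, steps=10, start_area=1000.0, seed=0):
    """Roll out one optimization sequence; reward is the relative area saved."""
    rng = random.Random(seed)
    area = start_area
    for _ in range(steps):
        action = policy(area, rng)   # the agent observes the state, picks a move
        area *= EFFECT[action]       # the "tool" applies the transformation
    return (start_area - area) / start_area

random_policy = lambda area, rng: rng.choice(ACTIONS)
greedy_policy = lambda area, rng: "rewrite"  # always the strongest toy move

reward_random = run_episode(random_policy)
reward_greedy = run_episode(greedy_policy)
```

In the real methodology the policy is a neural network whose parameters are updated from these episode rewards, so that good optimization sequences become more likely over time.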

Tutorial: Open-Source EDA and Machine Learning for IC Design: A Live Update

Authors

Abdelrahman Hosny,Andrew B Kahng

Published Date

2020/1/4

Open-source EDA is rapidly enabling new waves of innovation on many fronts. For academic researchers, it speeds up the lifecycle of scientific progress and makes research results relevant to modern industry practice. For EDA professionals and the industry ecosystem, open-source EDA is a complement and booster to commercial EDA. For the IC design community, recent releases under permissive licenses make it possible for design engineers as well as hobbyists to take ideas to manufacturing-ready layout at essentially zero cost. This full-day tutorial will review the very latest progress in open-source EDA, focusing on the digital RTL-to-GDS flow. In particular, the tutorial will make a deep dive into The OpenROAD Project https://theopenroadproject.org/, which brings new open-source tools and machine learning into a “no human in the loop” RTL-to-GDS flow. OpenROAD’s key components will be reviewed …

An immune-centric exploration of BRCA1 and BRCA2 germline mutation related breast and ovarian cancers

Authors

Ewa Przybytkowski,Thomas Davis,Abdelrahman Hosny,Julia Eismann,Ursula A Matulonis,Gerburg M Wulf,Sheida Nabavi

Journal

BMC cancer

Published Date

2020/12

Background: BRCA1/2 germline mutation related cancers are candidates for new immune therapeutic interventions. This study was a hypothesis-generating exploration of genomic data collected at diagnosis for 19 patients. The prominent tumor mutation burden (TMB) in hereditary breast and ovarian cancers in this cohort was not correlated with high global immune activity in their microenvironments. More information is needed about the relationship between genomic instability, phenotypes and immune microenvironments of these hereditary tumors in order to find appropriate markers of immune activity and the most effective anticancer immune strategies. Methods: Mining and statistical analyses of the original DNA and RNA sequencing data and The Cancer Genome Atlas data were performed. To interpret the data, we have used published literature and …

Abdelrahman Hosny FAQs

What is Abdelrahman Hosny's h-index at Brown University?

Abdelrahman Hosny's h-index is 6 overall and 6 since 2020.

What are Abdelrahman Hosny's top articles?

Abdelrahman Hosny's top articles at Brown University are:

torchmSAT: A GPU-Accelerated Approximation To The Maximum Satisfiability Problem

Automatic MILP solver configuration by learning problem similarities

Configuring Mixed-Integer Linear Programming Solvers with Deep Metric Learning

Sparse bitmap compression for memory-efficient training on the edge

Characterizing and optimizing EDA flows for the cloud

DRiLLS: Deep reinforcement learning for logic synthesis

Tutorial: Open-Source EDA and Machine Learning for IC Design: A Live Update

An immune-centric exploration of BRCA1 and BRCA2 germline mutation related breast and ovarian cancers

What are Abdelrahman Hosny's research interests?

The research interests of Abdelrahman Hosny are: Machine Learning, Constrained Optimization, Learning to Optimize, and EDA.

What is Abdelrahman Hosny's total number of citations?

Abdelrahman Hosny has 549 citations in total.

Who are Abdelrahman Hosny's co-authors?

Abdelrahman Hosny's co-authors include Andrew B. Kahng, Sherief Reda, Sheida Nabavi, Paola Vera-Licona, Michelle Dow, and Tutu Ajayi.

Co-Authors

Andrew B. Kahng (H-index: 90), University of California, San Diego

Sherief Reda (H-index: 42), Brown University

Sheida Nabavi (H-index: 21), University of Connecticut

Paola Vera-Licona (H-index: 13), University of Connecticut

Michelle Dow (H-index: 12), University of California, San Diego

Tutu Ajayi (H-index: 9), University of Michigan
