Abdelhak Lemkhenter

Universität Bern

H-index: 2

Europe-Switzerland

About Abdelhak Lemkhenter

Abdelhak Lemkhenter is a researcher at Universität Bern with an h-index of 2 (overall and since 2020). His research focuses on deep learning, machine learning, unsupervised learning, and self-supervised learning.

Abdelhak Lemkhenter Information

University

Universität Bern

Position

___

Citations(all)

12

Citations(since 2020)

12

Cited By

0

hIndex(all)

2

hIndex(since 2020)

2

i10Index(all)

0

i10Index(since 2020)

0

Email

University Profile Page

Universität Bern

Abdelhak Lemkhenter Skills & Research Interests

deep learning

machine learning

unsupervised learning

self-supervised learning

Top articles of Abdelhak Lemkhenter

SemiGPC: Distribution-Aware Label Refinement for Imbalanced Semi-Supervised Learning Using Gaussian Processes

In this paper we introduce SemiGPC, a distribution-aware label refinement strategy based on Gaussian Processes, where the model's predictions are derived from the labels' posterior distribution. Unlike other buffer-based semi-supervised methods such as CoMatch and SimMatch, SemiGPC includes a normalization term that addresses imbalances in the global data distribution while maintaining local sensitivity. This explicit control makes SemiGPC more robust to confirmation bias, especially under class imbalance. We show that SemiGPC improves performance when paired with different semi-supervised methods such as FixMatch, ReMixMatch, SimMatch and FreeMatch, and with different pre-training strategies including MSN and DINO. SemiGPC also achieves state-of-the-art results under different degrees of class imbalance on the standard CIFAR10-LT/CIFAR100-LT benchmarks, especially in the low-data regime. On the more challenging SemiAves, SemiCUB, SemiFungi and Semi-iNat benchmarks, SemiGPC yields an average accuracy increase of about 2% over a competitive baseline.
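As a rough sketch of the label-refinement idea (a simplification, not the paper's exact Gaussian Process formulation; all function and variable names here are illustrative), one can smooth labels over a labeled feature buffer with a kernel and divide out the global class frequencies so that majority classes do not dominate the posterior:

```python
import numpy as np

def rbf_kernel(A, B, gamma=10.0):
    """RBF similarity between two sets of feature vectors."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def refine_labels(queries, buffer_feats, buffer_onehot, gamma=10.0, eps=1e-8):
    """Kernel-smoothed label posterior over a labeled feature buffer,
    with a per-class normalization that counteracts global class imbalance
    (a simplified stand-in for the distribution-aware term described above)."""
    K = rbf_kernel(queries, buffer_feats, gamma)   # (num_queries, buffer_size)
    scores = K @ buffer_onehot                     # aggregate per-class evidence
    class_mass = buffer_onehot.sum(0) + eps        # global class frequencies
    scores = scores / class_mass                   # divide out imbalance
    return scores / (scores.sum(1, keepdims=True) + eps)
```

With this normalization, a query near a small minority cluster still receives a confident minority-class posterior, which is the behavior the abstract attributes to the distribution-aware term.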

Authors

Abdelhak Lemkhenter,Manchen Wang,Luca Zancato,Gurumurthy Swaminathan,Paolo Favaro,Davide Modolo

Journal

arXiv preprint arXiv:2311.01646

Published Date

2023/11/3

Zero-shot Image Restoration via Diffusion Inversion

Recently, various methods have been proposed to solve Image Restoration (IR) tasks using pre-trained diffusion models, leading to state-of-the-art performance. A common characteristic of these approaches is that they alter the diffusion sampling process to enforce consistency with the corrupted input image. However, this choice has recently been shown to be sub-optimal and may cause the generated image to deviate from the data manifold. We propose to address this limitation with a novel IR method that not only leverages the power of diffusion but also guarantees that the sample generation path always lies on the data manifold. One choice that satisfies this requirement is to leave the reverse sampling untouched, i.e., not to alter any of the intermediate latents once an initial noise is sampled. This is ultimately equivalent to casting the IR task as an optimization problem in the space of the diffusion input noise. To mitigate the substantial computational cost of inverting a fully unrolled diffusion model, we leverage the inherent capability of these models to skip ahead in the forward diffusion process using arbitrarily large time steps. We experimentally validate our method, SHRED, on several image restoration tasks, where it achieves state-of-the-art results on multiple zero-shot IR benchmarks, especially in terms of image quality as quantified by FID.
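The noise-space formulation can be illustrated with a toy example. Below, linear maps stand in for the frozen, unrolled diffusion sampler G and the known degradation operator D; this is not the actual SHRED implementation, only a sketch of casting restoration as optimization over the input noise z so that D(G(z)) matches the observation:

```python
import numpy as np

def invert_noise(y, G, D, z_dim, steps=500, seed=0):
    """Toy noise-space inversion: find an initial noise z such that the
    degraded generator output D @ (G @ z) matches the observation y.
    G and D are matrices here (stand-ins for the frozen diffusion sampler
    and the degradation operator), so the restored image G @ z always
    lies on the generator's range, i.e. the 'data manifold' of the toy."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim)
    A = D @ G                                         # composed linear operator
    lr = 1.0 / (np.linalg.svd(A, compute_uv=False)[0] ** 2 + 1e-8)
    for _ in range(steps):
        grad = A.T @ (A @ z - y)                      # grad of 0.5*||A z - y||^2
        z -= lr * grad                                # gradient descent on z only
    return G @ z                                      # restored sample
```

In the real method, differentiating through the full sampler is what makes this costly, hence the skip-ahead over large forward time steps mentioned above.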

Authors

Hamadi Chihaoui,Abdelhak Lemkhenter,Paolo Favaro

Published Date

2023/10/13

Towards sleep scoring generalization through self-supervised meta-learning

In this work we introduce a novel meta-learning method for sleep scoring based on self-supervised learning. Our approach aims to build models for sleep scoring that generalize across different patients and recording facilities without requiring a further adaptation step to the target data. Towards this goal, we build our method on top of the Model-Agnostic Meta-Learning (MAML) framework by incorporating a self-supervised learning (SSL) stage, and call it S2MAML. We show that S2MAML can significantly outperform MAML. The gain in performance comes from the SSL stage, which we base on a general-purpose pseudo-task that limits overfitting to the subject-specific patterns present in the training dataset. We show that S2MAML outperforms standard supervised learning and MAML on the SC, ST, ISRUC, UCD and CAP datasets. Clinical relevance: Our work tackles the generalization problem of …
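For intuition, here is a minimal first-order MAML loop on toy linear-regression tasks. This is a heavily simplified stand-in: S2MAML trains sleep-scoring networks and adds a self-supervised pseudo-task in the inner loop, neither of which is reproduced here, and all names are ours:

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean-squared error for a linear model f(x) = X @ w."""
    return 2 * X.T @ (X @ w - y) / len(y)

def fomaml(tasks, dim, inner_lr=0.05, outer_lr=0.05, epochs=200):
    """First-order MAML: adapt per task on its support set, then update the
    meta-parameters with the query-set gradient taken at the adapted weights.
    Each task is a (X_support, y_support, X_query, y_query) tuple.
    (S2MAML would additionally run an SSL pseudo-task, e.g. phase-swap,
    during the inner adaptation; omitted to keep the sketch minimal.)"""
    w = np.zeros(dim)
    for _ in range(epochs):
        meta_grad = np.zeros(dim)
        for Xs, ys, Xq, yq in tasks:
            w_task = w - inner_lr * mse_grad(w, Xs, ys)   # inner adaptation
            meta_grad += mse_grad(w_task, Xq, yq)         # first-order outer grad
        w -= outer_lr * meta_grad / len(tasks)
    return w
```

The meta-learned initialization is useful exactly when one inner-loop step on a new task's support set already yields a low query error.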

Authors

Abdelhak Lemkhenter,Paolo Favaro

Published Date

2022/7/11

Boosting generalization in bio-signal classification by learning the phase-amplitude coupling

Various hand-crafted feature representations of bio-signals rely primarily on the amplitude or power of the signal in specific frequency bands. The phase component is often discarded as it is more sample specific, and thus more sensitive to noise, than the amplitude. However, in general, the phase component also carries information relevant to the underlying biological processes. In fact, in this paper we show the benefits of learning the coupling of both phase and amplitude components of a bio-signal. We do so by introducing a novel self-supervised learning task, which we call phase-swap, that detects if bio-signals have been obtained by merging the amplitude and phase from different sources. We show in our evaluation that neural networks trained on this task generalize better across subjects and recording sessions than their fully supervised counterpart.
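The phase-swap pretext task lends itself to a compact sketch. The following is a minimal illustration of the idea (function names are ours, not from the paper): merge the amplitude spectrum of one signal with the phase spectrum of another via the FFT, and label a sample positive when its amplitude and phase come from different source signals.

```python
import numpy as np

def phase_swap(x_a, x_b):
    """Merge the amplitude spectrum of x_a with the phase spectrum of x_b."""
    fft_a = np.fft.rfft(x_a)
    fft_b = np.fft.rfft(x_b)
    merged = np.abs(fft_a) * np.exp(1j * np.angle(fft_b))
    return np.fft.irfft(merged, n=len(x_a))

def make_phase_swap_batch(signals, rng):
    """Build (signal, label) pairs for the pretext task: label 1 if the
    amplitude and phase come from different source signals, 0 otherwise."""
    xs, ys = [], []
    for x in signals:
        if rng.random() < 0.5:
            other = signals[rng.integers(len(signals))]
            xs.append(phase_swap(x, other))
            ys.append(1)
        else:
            xs.append(x)
            ys.append(0)
    return np.stack(xs), np.array(ys)
```

A network trained to predict these labels must model the coupling between phase and amplitude, which is the representation the abstract argues transfers across subjects and sessions.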

Authors

Abdelhak Lemkhenter,Paolo Favaro

Published Date

2021

Generative Adversarial Learning via Kernel Density Discrimination

We introduce Kernel Density Discrimination GAN (KDD GAN), a novel method for generative adversarial learning. KDD GAN formulates training as a likelihood-ratio optimization problem in which the data distributions are written explicitly via (local) Kernel Density Estimates (KDE). This is inspired by recent progress in contrastive learning and its relation to KDE. We define the KDEs directly in feature space and forgo the requirement of invertibility of the kernel feature mappings. In our approach, features are no longer optimized for linear separability, as in the original GAN formulation, but for the more general discrimination of distributions in feature space. We analyze the gradient of our loss with respect to the feature representation and show that it is better behaved than that of the original hinge loss. We perform experiments with the proposed KDE-based loss, used either as a training loss or as a regularization term, on both CIFAR10 and scaled versions of ImageNet. We use BigGAN/SA-GAN as a backbone and baseline, since our focus is not on designing the network architecture. We show a 10% to 40% improvement in the quality of generated samples, measured by FID, compared to the baseline. Code will be made available.
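The core quantity, a likelihood ratio between explicit KDEs in feature space, can be sketched as follows. This is a simplified illustration with Gaussian kernels, not the paper's full training loss, and all names are ours:

```python
import numpy as np

def log_kde(x, samples, bandwidth=0.5):
    """Log of a Gaussian kernel density estimate at points x, fit on `samples`.
    The normalization constant is omitted: it cancels in the ratio below."""
    d2 = ((x[:, None, :] - samples[None, :, :]) ** 2).sum(-1)
    log_k = -d2 / (2 * bandwidth ** 2)
    m = log_k.max(1, keepdims=True)                    # log-mean-exp, stably
    return m[:, 0] + np.log(np.exp(log_k - m).mean(1))

def kde_log_ratio(feats, real_feats, fake_feats, bandwidth=0.5):
    """Log likelihood ratio log p_real(f) - log p_fake(f) in feature space,
    a simplified version of the quantity KDD GAN optimizes."""
    return log_kde(feats, real_feats, bandwidth) - log_kde(feats, fake_feats, bandwidth)
```

Because both densities are written explicitly from sample sets, the feature map is pushed to separate the two distributions, not merely to make them linearly separable as in the standard discriminator.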

Authors

Abdelhak Lemkhenter,Adam Bielski,Alp Eren Sari,Paolo Favaro

Journal

arXiv preprint arXiv:2107.06197

Published Date

2021/7/13

Abdelhak Lemkhenter FAQs

What is Abdelhak Lemkhenter's h-index at Universität Bern?

The h-index of Abdelhak Lemkhenter has been 2 since 2020 and 2 in total.

What are Abdelhak Lemkhenter's top articles?

The top articles of Abdelhak Lemkhenter at Universität Bern are:

SemiGPC: Distribution-Aware Label Refinement for Imbalanced Semi-Supervised Learning Using Gaussian Processes

Zero-shot Image Restoration via Diffusion Inversion

Towards sleep scoring generalization through self-supervised meta-learning

Boosting generalization in bio-signal classification by learning the phase-amplitude coupling

Generative Adversarial Learning via Kernel Density Discrimination

What are Abdelhak Lemkhenter's research interests?

The research interests of Abdelhak Lemkhenter are: deep learning, machine learning, unsupervised learning, and self-supervised learning.

What is Abdelhak Lemkhenter's total number of citations?

Abdelhak Lemkhenter has 12 citations in total.

Who are Abdelhak Lemkhenter's co-authors?

The co-authors of Abdelhak Lemkhenter include Paolo Favaro and Adam Bielski.

Co-Authors

Paolo Favaro (Universität Bern, H-index: 50)

Adam Bielski (Universität Bern, H-index: 5)