Samira Abnar

About Samira Abnar

Samira Abnar is a distinguished researcher at Universiteit van Amsterdam, specializing in Machine Learning and Cognitive Science, with an h-index of 14 overall and 13 since 2020.

Her recent articles reflect a diverse array of research interests and contributions to the field:

Adaptivity and modularity for efficient generalization over task complexity

Inductive Biases for Learning Natural Language

Gaudi: A neural architect for immersive 3D scene generation

Diffusion probabilistic fields

Scaling laws vs model architectures: How does inductive bias influence scaling?

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

Exploring the limits of large scale pre-training

Gradual domain adaptation in the wild: When intermediate distributions are absent

Samira Abnar Information

University: Universiteit van Amsterdam
Position: ___
Citations (all): 1584
Citations (since 2020): 1517
Cited by: 191
h-index (all): 14
h-index (since 2020): 13
i10-index (all): 16
i10-index (since 2020): 14


Samira Abnar Skills & Research Interests

Machine Learning

Cognitive Science

Top articles of Samira Abnar

Adaptivity and modularity for efficient generalization over task complexity

arXiv preprint arXiv:2310.08866

2023/10/13

Samira Abnar, Chen Huang

Inductive Biases for Learning Natural Language

2023/4/24

Samira Abnar

Gaudi: A neural architect for immersive 3D scene generation

Advances in Neural Information Processing Systems

2022/12/6

Pengsheng Guo, Samira Abnar

Diffusion probabilistic fields

2022/9/29

Samira Abnar

Scaling laws vs model architectures: How does inductive bias influence scaling?

arXiv preprint arXiv:2207.10551

2022/7/21

Mostafa Dehghani, Samira Abnar

Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers

2022

Mostafa Dehghani, Samira Abnar

Exploring the limits of large scale pre-training

arXiv preprint arXiv:2110.02095

2021/10/5

Samira Abnar, Mostafa Dehghani

Gradual domain adaptation in the wild: When intermediate distributions are absent

arXiv preprint arXiv:2106.06080

2021/6/10

Samira Abnar, Mostafa Dehghani

Long range arena: A benchmark for efficient transformers

ICLR 2021

2020/11/8

Transferring inductive biases through knowledge distillation

arXiv preprint arXiv:2006.00555

2020/5/31

Quantifying attention flow in transformers

arXiv preprint arXiv:2005.00928

2020/5/2

Samira Abnar, Willem Zuidema

A comparison of architectures and pretraining methods for contextualized multilingual word embeddings

Proceedings of the AAAI conference on artificial intelligence

2020/4/3

Niels Van Der Heijden, Samira Abnar

See the list of professors at Samira Abnar's university (Universiteit van Amsterdam).
