R. Thomas McCoy

Johns Hopkins University

H-index: 18

United States

About R. Thomas McCoy

R. Thomas McCoy is a researcher at Johns Hopkins University with an h-index of 18, both overall and when counting only citations since 2020. He specializes in computational linguistics, linguistics, and cognitive science.

His recent articles reflect a range of research interests and contributions to these fields:

MODELING: A Novel Dataset for Testing Linguistic Reasoning in Language Models

Distilling Symbolic Priors for Concept Learning into Neural Networks

Embers of autoregression: Understanding large language models through the problem they are trained to solve

How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN

Modeling rapid language learning by distilling Bayesian priors into artificial neural networks

Deep de Finetti: Recovering Topic Distributions from Large Language Models

How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech

Bayes in the age of intelligent machines

R. Thomas McCoy Information

University: Johns Hopkins University
Position: PhD Student in Cognitive Science
Citations (all): 2978
Citations (since 2020): 2943
Cited by: 871
h-index (all): 18
h-index (since 2020): 18
i10-index (all): 22
i10-index (since 2020): 22
University profile page: Johns Hopkins University
Google Scholar: View Google Scholar Profile
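The h-index and i10-index above are standard Google Scholar metrics: the h-index is the largest h such that h papers each have at least h citations, and the i10-index counts papers with at least 10 citations. As a quick reference, here is a minimal Python sketch of how both are computed from a list of per-paper citation counts (the example numbers are hypothetical, not taken from this profile):

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(cites, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

def i10_index(citations):
    # Number of papers with at least 10 citations.
    return sum(1 for c in citations if c >= 10)

# Hypothetical citation counts, for illustration only:
papers = [120, 45, 30, 18, 12, 9, 4]
print(h_index(papers))    # 5 (five papers with >= 5 citations each)
print(i10_index(papers))  # 5 (five papers with >= 10 citations)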

R. Thomas McCoy Skills & Research Interests

Computational Linguistics

Linguistics

Cognitive Science

Top articles of R. Thomas McCoy

Title: MODELING: A Novel Dataset for Testing Linguistic Reasoning in Language Models
Author(s): Nathan Chi, Teodor Malchev, Riley Kong, Ryan Chi, Lucas Huang, et al.
Publication date: 2024/3

Title: Distilling Symbolic Priors for Concept Learning into Neural Networks
Journal: arXiv preprint arXiv:2402.07035
Author(s): Ioana Marinescu, R Thomas McCoy, Thomas L Griffiths
Publication date: 2024/2/10

Title: Embers of autoregression: Understanding large language models through the problem they are trained to solve
Journal: arXiv preprint arXiv:2309.13638
Author(s): R Thomas McCoy, Shunyu Yao, Dan Friedman, Matthew Hardy, Thomas L Griffiths
Publication date: 2023/9/24

Title: How much do language models copy from their training data? Evaluating linguistic novelty in text generation using RAVEN
Journal: Transactions of the Association for Computational Linguistics
Author(s): R Thomas McCoy, Paul Smolensky, Tal Linzen, Jianfeng Gao, Asli Celikyilmaz
Publication date: 2023/6/29

Title: Modeling rapid language learning by distilling Bayesian priors into artificial neural networks
Journal: arXiv preprint arXiv:2305.14701
Author(s): R Thomas McCoy, Thomas L Griffiths
Publication date: 2023/5/24

Title: Deep de Finetti: Recovering Topic Distributions from Large Language Models
Journal: arXiv preprint arXiv:2312.14226
Author(s): Liyi Zhang, R Thomas McCoy, Theodore R Sumers, Jian-Qiao Zhu, Thomas L Griffiths
Publication date: 2023/12/21

Title: How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech
Journal: arXiv preprint arXiv:2301.11462
Author(s): Aditya Yedetore, Tal Linzen, Robert Frank, R Thomas McCoy
Publication date: 2023/1/26

Title: Bayes in the age of intelligent machines
Journal: arXiv preprint arXiv:2311.10206
Author(s): Thomas L Griffiths, Jian-Qiao Zhu, Erin Grant, R Thomas McCoy
Publication date: 2023/11/16

Title: On the hazards of relating representations and inductive biases
Journal: Behavioral and Brain Sciences
Author(s): Thomas L Griffiths, Sreejan Kumar, R Thomas McCoy
Publication date: 2023/9/28

Title: Structural biases for improving transformers on translation into morphologically rich languages
Journal: arXiv preprint arXiv:2208.06061
Author(s): Paul Soulos, Sudha Rao, Caitlin Smith, Eric Rosen, Asli Celikyilmaz, et al.
Publication date: 2022/8/11

Title: Implicit Compositional Structure in the Vector Representations of Artificial Neural Networks
Author(s): Richard Thomas McCoy
Publication date: 2022/7/11

Title: Neurocompositional computing in human and machine intelligence: A tutorial
Author(s): Paul Smolensky, R Thomas McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao
Publication date: 2022

Title: Neurocompositional computing: from the central paradox of cognition to a new generation of AI systems
Journal: AI Magazine
Author(s): Paul Smolensky, Richard McCoy, Roland Fernandez, Matthew Goldrick, Jianfeng Gao
Publication date: 2022/9/7

Title: Infinite use of finite means? Evaluating the generalization of center embedding learned from an artificial grammar
Author(s): R Thomas McCoy, Jennifer Culbertson, Paul Smolensky, Géraldine Legendre
Publication date: 2021/5/11

Title: Discovering the compositional structure of vector representations with role learning networks
Author(s): Paul Soulos, Tom McCoy, Tal Linzen, Paul Smolensky
Publication date: 2020

Title: Representations of Syntax [MASK] Useful: Effects of Constituency and Dependency Structure in Recursive LSTMs
Journal: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Author(s): Michael A Lepori, Tal Linzen, R Thomas McCoy
Publication date: 2020/4/30

Title: Syntactic data augmentation increases robustness to inference heuristics
Journal: arXiv preprint arXiv:2004.11999
Author(s): Junghyun Min, R Thomas McCoy, Dipanjan Das, Emily Pitler, Tal Linzen
Publication date: 2020/4/24

Title: Tensor product decomposition networks: Uncovering representations of structure learned by neural networks
Journal: Proceedings of the Society for Computation in Linguistics
Author(s): Richard T McCoy, Tal Linzen, Ewan Dunbar, Paul Smolensky
Publication date: 2020

Title: Does syntax need to grow on trees? Sources of hierarchical inductive bias in sequence-to-sequence networks
Journal: Transactions of the Association for Computational Linguistics
Author(s): R Thomas McCoy, Robert Frank, Tal Linzen
Publication date: 2020/1/1

Title: Picking BERT's Brain: Probing for Linguistic Dependencies in Contextualized Embeddings Using Representational Similarity Analysis
Author(s): Michael A Lepori, R Thomas McCoy
Publication date: 2020/11


Co-Authors

Thomas L. Griffiths, Princeton University (H-index: 110)
Paul Smolensky, Johns Hopkins University (H-index: 60)
Samuel R. Bowman, New York University (H-index: 52)
Matt Goldrick, Northwestern University (H-index: 35)
Ellie Pavlick, Brown University (H-index: 35)
