Robert Frank
Yale University
H-index: 23
North America-United States
Top articles by Robert Frank
Title | Journal | Author(s) | Publication Date |
---|---|---|---|
LIEDER: Linguistically-Informed Evaluation for Discourse Entity Recognition | arXiv preprint arXiv:2403.06301 | Xiaomeng Zhu, Robert Frank | 2024/3/10 |
Brain and grammar: revealing electrophysiological basic structures with competing statistical models | bioRxiv | Andrea Cometa, Chiara Battaglini, Fiorenzo Artoni, Matteo Greco, Robert Frank | 2024 |
On the Spectra of Syntactic Structures | Proceedings of the Society for Computation in Linguistics | Isabella Senturia, Robert Frank | 2023 |
Attention and locality: On clause-boundedness and its exceptions in multiple sluicing | Linguistic Inquiry | Matthew Barros, Robert Frank | 2023/9/26 |
False perspectives on human language: Why statistics needs linguistics | Frontiers in Language Sciences | Matteo Greco, Andrea Cometa, Fiorenzo Artoni, Robert Frank, Andrea Moro | 2023/4/20 |
How poor is the stimulus? Evaluating hierarchical generalization in neural networks trained on child-directed speech | arXiv preprint arXiv:2301.11462 | Aditya Yedetore, Tal Linzen, Robert Frank, R. Thomas McCoy | 2023/1/26 |
Subject-verb agreement with Seq2Seq transformers: Bigger is better, but still not best | Proceedings of the Society for Computation in Linguistics | Michael A. Wilson, Zhenghao Zhou, Robert Frank | 2023 |
Inductive Bias Is in the Eye of the Beholder | | Michael Wilson, Robert Frank | 2023/12 |
What affects Priming Strength? Simulating Structural Priming Effect with PIPS | Proceedings of the Society for Computation in Linguistics | Zhenghao Zhou, Robert Frank | 2023/1 |
How abstract is linguistic generalization in large language models? Experiments with argument structure | Transactions of the Association for Computational Linguistics | Michael Wilson, Jackson Petty, Robert Frank | 2023/11/13 |
Do language models learn position-role mappings? | arXiv preprint arXiv:2202.03611 | Jackson Petty, Michael Wilson, Robert Frank | 2022/2/8 |
Formal language recognition by hard attention transformers: Perspectives from circuit complexity | Transactions of the Association for Computational Linguistics | Yiding Hao, Dana Angluin, Robert Frank | 2022/7/27 |
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models | arXiv preprint arXiv:2206.04615 | Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, et al. | 2022/6/9 |
Arguments for top-down derivations in syntax | Proceedings of the Linguistic Society of America | Robert Frank, Hadas Kotek | 2022/5/5 |
Coloring the blank slate: Pre-training imparts a hierarchical inductive bias to sequence-to-sequence models | arXiv preprint arXiv:2203.09397 | Aaron Mueller, Robert Frank, Tal Linzen, Luheng Wang, Sebastian Schuster | 2022/3/17 |
Comparing methods of tree-construction across mildly context-sensitive formalisms | Society for Computation in Linguistics | Tim Hunter, Robert Frank | 2021/1/1 |
Variation in mild context-sensitivity: Derivational state and structural monotonicity | Evolutionary Linguistic Theory | Robert Frank, Tim Hunter | 2021/11/5 |
Documentation of shared decision-making in the emergency department | Annals of Emergency Medicine | David Chartash, Mona Sharifi, Beth Emerson, Robert Frank, Elizabeth M. Schoenfeld | 2021/11/1 |
Transformers generalize linearly | arXiv preprint arXiv:2109.12036 | Jackson Petty, Robert Frank | 2021/9/24 |
Structure here, bias there: Hierarchical generalization by jointly learning syntactic transformations | Proceedings of the Society for Computation in Linguistics | Karl Mulligan, Robert Frank | 2021/1 |