Emma Strubell
Person information
- affiliation: Carnegie Mellon University, Pittsburgh, USA
- affiliation: University of Massachusetts Amherst, USA
2020 – today
- 2024
- [c35] Li Lucy, Suchin Gururangan, Luca Soldaini, Emma Strubell, David Bamman, Lauren Klein, Jesse Dodge: AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters. ACL (1) 2024: 7393-7420
- [c34] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Raghavi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Evan Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, Kyle Lo: Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. ACL (1) 2024: 15725-15788
- [c33] Dirk Groeneveld, Iz Beltagy, Evan Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi: OLMo: Accelerating the Science of Language Models. ACL (1) 2024: 15789-15809
- [c32] Clara Na, Ian Magnusson, Ananya Harsh Jha, Tom Sherborne, Emma Strubell, Jesse Dodge, Pradeep Dasigi: Scalable Data Ablation Approximations for Language Models through Modular Training and Merging. EMNLP 2024: 21125-21141
- [c31] Sasha Luccioni, Yacine Jernite, Emma Strubell: Power Hungry Processing: Watts Driving the Cost of AI Deployment? FAccT 2024: 85-99
- [i46] Li Lucy, Suchin Gururangan, Luca Soldaini, Emma Strubell, David Bamman, Lauren Klein, Jesse Dodge: AboutMe: Using Self-Descriptions in Webpages to Document the Effects of English Pretraining Data Filters. CoRR abs/2401.06408 (2024)
- [i45] Luca Soldaini, Rodney Kinney, Akshita Bhagia, Dustin Schwenk, David Atkinson, Russell Authur, Ben Bogin, Khyathi Raghavi Chandu, Jennifer Dumas, Yanai Elazar, Valentin Hofmann, Ananya Harsh Jha, Sachin Kumar, Li Lucy, Xinxi Lyu, Nathan Lambert, Ian Magnusson, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Abhilasha Ravichander, Kyle Richardson, Zejiang Shen, Emma Strubell, Nishant Subramani, Oyvind Tafjord, Pete Walsh, Luke Zettlemoyer, Noah A. Smith, Hannaneh Hajishirzi, Iz Beltagy, Dirk Groeneveld, Jesse Dodge, Kyle Lo: Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research. CoRR abs/2402.00159 (2024)
- [i44] Dirk Groeneveld, Iz Beltagy, Pete Walsh, Akshita Bhagia, Rodney Kinney, Oyvind Tafjord, Ananya Harsh Jha, Hamish Ivison, Ian Magnusson, Yizhong Wang, Shane Arora, David Atkinson, Russell Authur, Khyathi Raghavi Chandu, Arman Cohan, Jennifer Dumas, Yanai Elazar, Yuling Gu, Jack Hessel, Tushar Khot, William Merrill, Jacob Morrison, Niklas Muennighoff, Aakanksha Naik, Crystal Nam, Matthew E. Peters, Valentina Pyatkin, Abhilasha Ravichander, Dustin Schwenk, Saurabh Shah, Will Smith, Emma Strubell, Nishant Subramani, Mitchell Wortsman, Pradeep Dasigi, Nathan Lambert, Kyle Richardson, Luke Zettlemoyer, Jesse Dodge, Kyle Lo, Luca Soldaini, Noah A. Smith, Hannaneh Hajishirzi: OLMo: Accelerating the Science of Language Models. CoRR abs/2402.00838 (2024)
- [i43] Muhammad Khalifa, David Wadden, Emma Strubell, Honglak Lee, Lu Wang, Iz Beltagy, Hao Peng: Source-Aware Training Enables Knowledge Attribution in Language Models. CoRR abs/2404.01019 (2024)
- [i42] Benjamin C. Lee, David Brooks, Arthur van Benthem, Udit Gupta, Gage Hills, Vincent Liu, Benjamin Pierce, Christopher Stewart, Emma Strubell, Gu-Yeon Wei, Adam Wierman, Yuan Yao, Minlan Yu: Carbon Connect: An Ecosystem for Sustainable Computing. CoRR abs/2405.13858 (2024)
- [i41] Sang Keun Choe, Hwijeen Ahn, Juhan Bae, Kewen Zhao, Minsoo Kang, Youngseog Chung, Adithya Pratapa, Willie Neiswanger, Emma Strubell, Teruko Mitamura, Jeff G. Schneider, Eduard H. Hovy, Roger B. Grosse, Eric P. Xing: What is Your Data Worth to GPT? LLM-Scale Data Valuation with Influence Functions. CoRR abs/2405.13954 (2024)
- 2023
- [j3] Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell: An Empirical Investigation of the Role of Pre-training in Lifelong Learning. J. Mach. Learn. Res. 24: 214:1-214:50 (2023)
- [j2] Marcos V. Treviso, Ji-Ung Lee, Tianchu Ji, Betty van Aken, Qingqing Cao, Manuel R. Ciosici, Michael Hassid, Kenneth Heafield, Sara Hooker, Colin Raffel, Pedro Henrique Martins, André F. T. Martins, Jessica Zosa Forde, Peter A. Milder, Edwin Simpson, Noam Slonim, Jesse Dodge, Emma Strubell, Niranjan Balasubramanian, Leon Derczynski, Iryna Gurevych, Roy Schwartz: Efficient Methods for Natural Language Processing: A Survey. Trans. Assoc. Comput. Linguistics 11: 826-860 (2023)
- [c30] Nupoor Gandhi, Anjalie Field, Emma Strubell: Annotating Mentions Alone Enables Efficient Domain Adaptation for Coreference Resolution. ACL (1) 2023: 10543-10558
- [c29] Dheeru Dua, Emma Strubell, Sameer Singh, Pat Verga: To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering. ACL (1) 2023: 14429-14446
- [c28] Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell: The Framework Tax: Disparities Between Inference Efficiency in NLP Research and Deployment. EMNLP 2023: 1588-1600
- [c27] Gustavo Gonçalves, Emma Strubell: Understanding the Effect of Model Compression on Social Bias in Large Language Models. EMNLP 2023: 2663-2675
- [c26] Sanket Vaibhav Mehta, Jai Gupta, Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Jinfeng Rao, Marc Najork, Emma Strubell, Donald Metzler: DSI++: Updating Transformer Memory with New Documents. EMNLP 2023: 8198-8213
- [c25] Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni: Energy and Carbon Considerations of Fine-Tuning BERT. EMNLP (Findings) 2023: 9058-9069
- [c24] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: Data-efficient Active Learning for Structured Prediction with Partial Annotation and Self-Training. EMNLP (Findings) 2023: 12991-13008
- [c23] Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, Emma Strubell: To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing. EMNLP 2023: 13310-13325
- [c22] Sang Keun Choe, Sanket Vaibhav Mehta, Hwijeen Ahn, Willie Neiswanger, Pengtao Xie, Emma Strubell, Eric P. Xing: Making Scalable Meta Learning Practical. NeurIPS 2023
- [c21] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: On the Interactions of Structural Constraints and Data Resources for Structured Prediction. SustaiNLP 2023: 147-157
- [i40] Jared Fernandez, Jacob Kahn, Clara Na, Yonatan Bisk, Emma Strubell: The Framework Tax: Disparities Between Inference Efficiency in Research and Deployment. CoRR abs/2302.06117 (2023)
- [i39] Rajshekhar Das, Jonathan Francis, Sanket Vaibhav Mehta, Jean Oh, Emma Strubell, José M. F. Moura: Regularizing Self-training for Unsupervised Domain Adaptation via Structural Constraints. CoRR abs/2305.00131 (2023)
- [i38] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: Data-efficient Active Learning for Structured Prediction with Partial Annotation and Self-Training. CoRR abs/2305.12634 (2023)
- [i37] Ananya Harsh Jha, Dirk Groeneveld, Emma Strubell, Iz Beltagy: Large Language Model Distillation Doesn't Need a Teacher. CoRR abs/2305.14864 (2023)
- [i36] Ji-Ung Lee, Haritz Puerto, Betty van Aken, Yuki Arase, Jessica Zosa Forde, Leon Derczynski, Andreas Rücklé, Iryna Gurevych, Roy Schwartz, Emma Strubell, Jesse Dodge: Surveying (Dis)Parities and Concerns of Compute Hungry NLP Research. CoRR abs/2306.16900 (2023)
- [i35] Harnoor Dhingra, Preetiha Jayashanker, Sayali Moghe, Emma Strubell: Queer People are People First: Deconstructing Sexual Identity Stereotypes in Large Language Models. CoRR abs/2307.00101 (2023)
- [i34] Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi: Efficiency Pentathlon: A Standardized Arena for Efficiency Evaluation. CoRR abs/2307.09701 (2023)
- [i33] Sang Keun Choe, Sanket Vaibhav Mehta, Hwijeen Ahn, Willie Neiswanger, Pengtao Xie, Emma Strubell, Eric P. Xing: Making Scalable Meta Learning Practical. CoRR abs/2310.05674 (2023)
- [i32] Sireesh Gururaja, Amanda Bertsch, Clara Na, David Gray Widder, Emma Strubell: To Build Our Future, We Must Know Our Past: Contextualizing Paradigm Shifts in Natural Language Processing. CoRR abs/2310.07715 (2023)
- [i31] Xiaorong Wang, Clara Na, Emma Strubell, Sorelle Friedler, Sasha Luccioni: Energy and Carbon Considerations of Fine-Tuning BERT. CoRR abs/2311.10267 (2023)
- [i30] Alexandra Sasha Luccioni, Yacine Jernite, Emma Strubell: Power Hungry Processing: Watts Driving the Cost of AI Deployment? CoRR abs/2311.16863 (2023)
- [i29] Gustavo Gonçalves, Emma Strubell: Understanding the Effect of Model Compression on Social Bias in Large Language Models. CoRR abs/2312.05662 (2023)
- 2022
- [c20] Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur Parikh, Emma Strubell: Improving Compositional Generalization with Self-Training for Data-to-Text Generation. ACL (1) 2022: 4205-4219
- [c19] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: Transfer Learning from Semantic Role Labeling to Event Argument Extraction with Template-based Slot Querying. EMNLP 2022: 2627-2647
- [c18] Clara Na, Sanket Vaibhav Mehta, Emma Strubell: Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models. EMNLP (Findings) 2022: 4909-4936
- [c17] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: A Survey of Active Learning for Natural Language Processing. EMNLP 2022: 6166-6190
- [c16] Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher: Bridging Fairness and Environmental Sustainability in Natural Language Processing. EMNLP 2022: 7817-7836
- [c15] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. FAccT 2022: 1877-1894
- [i28] Clara Na, Sanket Vaibhav Mehta, Emma Strubell: Train Flat, Then Compress: Sharpness-Aware Minimization Learns More Compressible Models. CoRR abs/2205.12694 (2022)
- [i27] Jesse Dodge, Taylor Prewitt, Remi Tachet des Combes, Erika Odmark, Roy Schwartz, Emma Strubell, Alexandra Sasha Luccioni, Noah A. Smith, Nicole DeCario, Will Buchanan: Measuring the Carbon Intensity of AI in Cloud Instances. CoRR abs/2206.05229 (2022)
- [i26] Zheng Wang, Juncheng B. Li, Shuhui Qu, Florian Metze, Emma Strubell: SQuAT: Sharpness- and Quantization-Aware Training for BERT. CoRR abs/2210.07171 (2022)
- [i25] Nupoor Gandhi, Anjalie Field, Emma Strubell: Mention Annotations Alone Enable Efficient Domain Adaptation for Coreference Resolution. CoRR abs/2210.07602 (2022)
- [i24] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: A Survey of Active Learning for Natural Language Processing. CoRR abs/2210.10109 (2022)
- [i23] Marius Hessenthaler, Emma Strubell, Dirk Hovy, Anne Lauscher: Bridging Fairness and Environmental Sustainability in Natural Language Processing. CoRR abs/2211.04256 (2022)
- [i22] Zheng Wang, Juncheng B. Li, Shuhui Qu, Florian Metze, Emma Strubell: Error-aware Quantization through Noise Tempering. CoRR abs/2212.05603 (2022)
- [i21] Sanket Vaibhav Mehta, Jai Prakash Gupta, Yi Tay, Mostafa Dehghani, Vinh Q. Tran, Jinfeng Rao, Marc Najork, Emma Strubell, Donald Metzler: DSI++: Updating Transformer Memory with New Documents. CoRR abs/2212.09744 (2022)
- [i20] Dheeru Dua, Emma Strubell, Sameer Singh, Pat Verga: To Adapt or to Annotate: Challenges and Interventions for Domain Adaptation in Open-Domain Question Answering. CoRR abs/2212.10381 (2022)
- [i19] Jesse Dodge, Iryna Gurevych, Roy Schwartz, Emma Strubell, Betty van Aken: Efficient and Equitable Natural Language Processing in the Age of Deep Learning (Dagstuhl Seminar 22232). Dagstuhl Reports 12(6): 14-27 (2022)
- 2021
- [c14] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: Comparing Span Extraction Methods for Semantic Role Labeling. SPNLP@ACL-IJCNLP 2021: 67-77
- [c13] Amee Trivedi, Kate Silverstein, Emma Strubell, Prashant J. Shenoy, Mohit Iyyer: WiFiMod: Transformer-based Indoor Human Mobility Modeling using Passive Sensing. COMPASS 2021: 126-137
- [c12] Zhisong Zhang, Emma Strubell, Eduard H. Hovy: On the Benefit of Syntactic Supervision for Cross-lingual Transfer in Semantic Role Labeling. EMNLP (1) 2021: 6229-6246
- [i18] Amee Trivedi, Kate Silverstein, Emma Strubell, Prashant J. Shenoy: WiFiMod: Transformer-based Indoor Human Mobility Modeling using Passive Sensing. CoRR abs/2104.09835 (2021)
- [i17] Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur Parikh, Hongtao Zhong, Emma Strubell: Improving Compositional Generalization with Self-Training for Data-to-Text Generation. CoRR abs/2110.08467 (2021)
- [i16] Sanket Vaibhav Mehta, Darshan Patil, Sarath Chandar, Emma Strubell: An Empirical Investigation of the Role of Pre-training in Lifelong Learning. CoRR abs/2112.09153 (2021)
- 2020
- [j1] Edward Kim, Zach Jensen, Alexander van Grootel, Kevin Huang, Matthew Staib, Sheshera Mysore, Haw-Shiuan Chang, Emma Strubell, Andrew McCallum, Stefanie Jegelka, Elsa Olivetti: Inorganic Materials Synthesis Planning with Literature-Trained Neural Networks. J. Chem. Inf. Model. 60(3): 1194-1201 (2020)
- [c11] Emma Strubell, Ananya Ganesh, Andrew McCallum: Energy and Policy Considerations for Modern Deep Learning Research. AAAI 2020: 13693-13696
- [e1] Spandana Gella, Johannes Welbl, Marek Rei, Fabio Petroni, Patrick S. H. Lewis, Emma Strubell, Min Joon Seo, Hannaneh Hajishirzi: Proceedings of the 5th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2020, Online, July 9, 2020. Association for Computational Linguistics 2020, ISBN 978-1-952148-15-6
2010 – 2019
- 2019
- [c10] Emma Strubell, Ananya Ganesh, Andrew McCallum: Energy and Policy Considerations for Deep Learning in NLP. ACL (1) 2019: 3645-3650
- [c9] Sheshera Mysore, Zach Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, Elsa Olivetti: The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures. LAW@ACL 2019: 56-64
- [i15] Edward Kim, Zach Jensen, Alexander van Grootel, Kevin Huang, Matthew Staib, Sheshera Mysore, Haw-Shiuan Chang, Emma Strubell, Andrew McCallum, Stefanie Jegelka, Elsa Olivetti: Inorganic Materials Synthesis Planning with Literature-Trained Neural Networks. CoRR abs/1901.00032 (2019)
- [i14] Sheshera Mysore, Zach Jensen, Edward Kim, Kevin Huang, Haw-Shiuan Chang, Emma Strubell, Jeffrey Flanigan, Andrew McCallum, Elsa Olivetti: The Materials Science Procedural Text Corpus: Annotating Materials Synthesis Procedures with Shallow Semantic Structures. CoRR abs/1905.06939 (2019)
- [i13] Emma Strubell, Ananya Ganesh, Andrew McCallum: Energy and Policy Considerations for Deep Learning in NLP. CoRR abs/1906.02243 (2019)
- 2018
- [c8] Vittorio Perera, Tagyoung Chung, Thomas Kollar, Emma Strubell: Multi-Task Learning For Parsing The Alexa Meaning Representation Language. AAAI 2018: 5390-5397
- [c7] Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum: Linguistically-Informed Self-Attention for Semantic Role Labeling. EMNLP 2018: 5027-5038
- [c6] Patrick Verga, Emma Strubell, Andrew McCallum: Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction. NAACL-HLT 2018: 872-884
- [i12] Patrick Verga, Emma Strubell, Andrew McCallum: Simultaneously Self-Attending to All Mentions for Full-Abstract Biological Relation Extraction. CoRR abs/1802.10569 (2018)
- [i11] Emma Strubell, Patrick Verga, Daniel Andor, David Weiss, Andrew McCallum: Linguistically-Informed Self-Attention for Semantic Role Labeling. CoRR abs/1804.08199 (2018)
- [i10] Emma Strubell, Andrew McCallum: Syntax Helps ELMo Understand Semantics: Is Syntax Still Relevant in a Deep Neural Architecture for SRL? CoRR abs/1811.04773 (2018)
- 2017
- [c5] Patrick Verga, Emma Strubell, Ofer Shai, Andrew McCallum: Attending to All Mention Pairs for Full Abstract Biological Relation Extraction. AKBC@NIPS 2017
- [c4] Emma Strubell, Andrew McCallum: Dependency Parsing with Dilated Iterated Graph CNNs. SPNLP@EMNLP 2017: 1-6
- [c3] Emma Strubell, Patrick Verga, David Belanger, Andrew McCallum: Fast and Accurate Entity Recognition with Iterated Dilated Convolutions. EMNLP 2017: 2670-2680
- [i9] Emma Strubell, Patrick Verga, David Belanger, Andrew McCallum: Fast and Accurate Sequence Labeling with Iterated Dilated Convolutions. CoRR abs/1702.02098 (2017)
- [i8] Emma Strubell, Andrew McCallum: Dependency Parsing with Dilated Iterated Graph CNNs. CoRR abs/1705.00403 (2017)
- [i7] Patrick Verga, Emma Strubell, Ofer Shai, Andrew McCallum: Attending to All Mention Pairs for Full Abstract Biological Relation Extraction. CoRR abs/1710.08312 (2017)
- [i6] Sheshera Mysore, Edward Kim, Emma Strubell, Ao Liu, Haw-Shiuan Chang, Srikrishna Kompella, Kevin Huang, Andrew McCallum, Elsa Olivetti: Automatically Extracting Action Graphs from Materials Science Synthesis Procedures. CoRR abs/1711.06872 (2017)
- 2016
- [c2] Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum: Multilingual Relation Extraction using Compositional Universal Schema. HLT-NAACL 2016: 886-896
- [i5] Haw-Shiuan Chang, Abdurrahman Munir, Ao Liu, Johnny Tian-Zheng Wei, Aaron Traylor, Ajay Nagesh, Nicholas Monath, Patrick Verga, Emma Strubell, Andrew McCallum: Extracting Multilingual Relations under Limited Resources: TAC 2016 Cold-Start KB construction and Slot-Filling using Compositional Universal Schema. TAC 2016
- 2015
- [c1] Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum: Learning Dynamic Feature Selection for Fast Sequential Prediction. ACL (1) 2015: 146-155
- [i4] Benjamin Roth, Nicholas Monath, David Belanger, Emma Strubell, Patrick Verga, Andrew McCallum: Building Knowledge Bases with Universal Schema: Cold Start and Slot-Filling Approaches. TAC 2015
- [i3] Emma Strubell, Luke Vilnis, Kate Silverstein, Andrew McCallum: Learning Dynamic Feature Selection for Fast Sequential Prediction. CoRR abs/1505.06169 (2015)
- [i2] Patrick Verga, David Belanger, Emma Strubell, Benjamin Roth, Andrew McCallum: Multilingual Relation Extraction using Compositional Universal Schema. CoRR abs/1511.06396 (2015)
- 2014
- [i1] Emma Strubell, Luke Vilnis, Andrew McCallum: Training for Fast Sequential Prediction Using Dynamic Feature Selection. CoRR abs/1410.8498 (2014)
last updated on 2024-11-14 21:01 CET by the dblp team
all metadata released as open data under CC0 1.0 license