Inducing and Using Alignments for Transition-based AMR Parsing. (arXiv:2205.01464v1 [cs.CL])
May 4, 2022, 1:11 a.m. | Andrew Drozdov, Jiawei Zhou, Radu Florian, Andrew McCallum, Tahira Naseem, Yoon Kim, Ramon Fernandez Astudillo
cs.CL updates on arXiv.org arxiv.org
Transition-based parsers for Abstract Meaning Representation (AMR) rely on
node-to-word alignments. These alignments are learned separately from parser
training and require a complex pipeline of rule-based components,
pre-processing, and post-processing to satisfy domain-specific constraints.
Parsers also train on a single point estimate produced by the alignment
pipeline, neglecting the uncertainty due to the inherent ambiguity of
alignment. In this work we
explore two avenues for overcoming these limitations. First, we propose a
neural aligner for AMR that learns node-to-word alignments without relying on …
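To make the point-estimate limitation concrete, here is a minimal illustrative sketch (not the paper's actual method) contrasting a parser loss computed from only the single most likely alignment with an expected loss that averages over a posterior distribution of candidate alignments, so uncertainty stays in the training signal. The alignment candidates, posterior probabilities, and per-alignment losses below are made-up toy values.

```python
# Toy posterior over three candidate node-to-word alignments for one AMR node.
# (Hypothetical values for illustration only.)
alignment_posterior = {
    ("want-01", "wants"): 0.7,   # most likely alignment
    ("want-01", "to"):    0.2,
    ("want-01", "go"):    0.1,
}

# Hypothetical per-alignment parser losses (e.g., negative log-likelihoods).
loss = {
    ("want-01", "wants"): 0.3,
    ("want-01", "to"):    1.5,
    ("want-01", "go"):    2.0,
}

# Point estimate: keep only the argmax alignment and discard the rest.
best = max(alignment_posterior, key=alignment_posterior.get)
point_estimate_loss = loss[best]

# Expected loss: weight each candidate alignment by its posterior probability.
expected_loss = sum(p * loss[a] for a, p in alignment_posterior.items())

print(point_estimate_loss)  # 0.3
print(expected_loss)        # 0.7*0.3 + 0.2*1.5 + 0.1*2.0 = 0.71
```

The expected loss still penalizes the parser for plausible alternative alignments, whereas the point estimate silently commits to one hypothesis.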