R3: A Reading Comprehension Benchmark Requiring Reasoning Processes. (arXiv:2004.01251v1 [cs.CL])

Existing question answering systems can only predict answers without explicit
reasoning processes, which hinders their explainability and leads us to
overestimate their ability to understand and reason over natural language. In
this work, we propose a novel reading comprehension task in which a model is
required to provide both final answers and reasoning processes. To this end, we
introduce a formalism for reasoning over unstructured text, namely Text
Reasoning Meaning Representation (TRMR). TRMR consists of three phrases and is
expressive enough to characterize the reasoning process needed to answer
reading comprehension questions. We develop an annotation platform to
facilitate TRMR's annotation, and release the R3 dataset, a Reading
comprehension benchmark Requiring Reasoning processes. R3 contains over 60K
question-answer pairs together with their TRMRs. Our dataset is available at:
this http URL.
