
An empirical comparison of LM-based question and answer generation methods

Ushio, Asahi, Alva-Manchego, Fernando and Camacho-Collados, Jose 2023. An empirical comparison of LM-based question and answer generation methods. Presented at: The 61st Annual Meeting of the Association for Computational Linguistics, 9-14 July 2023. Findings of the Association for Computational Linguistics: ACL 2023. Toronto, Canada: Association for Computational Linguistics, pp. 14262-14272. 10.18653/v1/2023.findings-acl.899

Full text not available from this repository.

Abstract

Question and answer generation (QAG) consists of generating a set of question-answer pairs given a context (e.g. a paragraph). This task has a variety of applications, such as data augmentation for question answering (QA) models, information retrieval and education. In this paper, we establish baselines with three different QAG methodologies that leverage sequence-to-sequence language model (LM) fine-tuning. Experiments show that an end-to-end QAG model, which is computationally light at both training and inference times, is generally robust and outperforms other more convoluted approaches. However, there are differences depending on the underlying generative LM. Finally, our analysis shows that QA models fine-tuned solely on generated question-answer pairs can be competitive when compared to supervised QA models trained on human-labeled data.
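As a rough illustration of the end-to-end setup described in the abstract, the sketch below generates question-answer pairs from a paragraph with a QAG-fine-tuned sequence-to-sequence LM via Hugging Face transformers. The checkpoint name (lmqg/t5-base-squad-qag), the "generate question and answer:" prompt prefix and the serialised output format are assumptions for illustration, not necessarily the paper's exact configuration.

# Minimal sketch of end-to-end QAG with a fine-tuned seq2seq LM.
# Checkpoint name, prompt prefix and output convention are assumed for illustration.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_name = "lmqg/t5-base-squad-qag"  # assumed: any QAG-fine-tuned seq2seq checkpoint would do
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

context = (
    "William Turner was an English painter who specialised in watercolour "
    "landscapes. He is often known as William Turner of Oxford."
)

# The end-to-end model maps the raw paragraph directly to a serialised list of
# question-answer pairs in a single generation pass, with no separate answer
# extraction or question generation stages.
inputs = tokenizer("generate question and answer: " + context, return_tensors="pt")
outputs = model.generate(**inputs, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

The generated pairs can then be used, for instance, as synthetic training data for a downstream QA model, which is the data-augmentation use case evaluated in the paper.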

Item Type: Conference or Workshop Item (Paper)
Status: Published
Schools: Computer Science & Informatics
Publisher: Association for Computational Linguistics
Last Modified: 05 Sep 2023 14:15
URI: https://orca.cardiff.ac.uk/id/eprint/161904
