Few-Shot Table-to-Text Generation with Prototype Memory
Findings of the Association for Computational Linguistics: EMNLP 2021
Neural table-to-text generation models have achieved remarkable progress on an array of tasks. However, due to the data-hungry nature of neural models, their performance strongly relies on large-scale training examples, limiting their applicability in real-world applications. To address this, we propose a new framework, Prototype-to-Generate (P2G), for table-to-text generation under the few-shot scenario. The proposed framework utilizes retrieved prototypes, which are jointly selected by an IR system and a novel prototype selector, to help the model bridge the structural gap between tables and texts.

Keep the Primary, Rewrite the Secondary: A Two-Stage Approach for Paraphrase Generation
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

Dialogue Response Selection with Hierarchical Curriculum Learning
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
We study the learning of a matching model for dialogue response selection. Motivated by the recent finding that models trained with random negative samples are not ideal in real-world scenarios, we propose a hierarchical curriculum learning framework that trains the matching model in an "easy-to-difficult" scheme. Our learning framework consists of two complementary curricula: (1) a corpus-level curriculum (CC) and (2) an instance-level curriculum (IC). In CC, the model gradually increases its ability to find the matching clues between the dialogue context and a response candidate. As for IC, it progressively strengthens the model's ability to identify the mismatching information between the dialogue context and a response candidate. Empirical studies on three benchmark datasets with three state-of-the-art matching models demonstrate that the proposed learning framework significantly improves model performance across various evaluation metrics. In addition, we conduct extensive analysis experiments to reveal the effect of each proposed component.

Non-autoregressive generation (NAG) has recently attracted great attention due to its fast inference speed. However, the generation quality of existing NAG models still lags behind their autoregressive counterparts. In this work, we show that BERT can be employed as the backbone of a NAG model for greatly improved performance. Additionally, we devise two mechanisms to alleviate two common problems of vanilla NAG models: the inflexibility of a prefixed output length and the conditional independence of individual token predictions. To further strengthen the speed advantage of the proposed model, we propose a new decoding strategy, ratio-first, for applications where the output length can be approximately estimated beforehand. For a comprehensive evaluation, we test the proposed model on three text generation tasks, including text summarization, sentence compression, and machine translation. Experimental results show that our model significantly outperforms existing non-autoregressive baselines and achieves competitive performance with many strong autoregressive models.
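
To make the prototype-retrieval idea in the P2G abstract above more concrete, here is a minimal sketch of a two-stage retrieve-then-rerank step. The function names, scoring interfaces, and table linearization scheme are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable, List, Tuple


def linearize_table(table: List[Tuple[str, str]]) -> str:
    """Flatten (attribute, value) pairs into a plain-text retrieval query."""
    return " ; ".join(f"{attr} : {value}" for attr, value in table)


def retrieve_prototypes(
    table: List[Tuple[str, str]],
    corpus: List[str],
    ir_score: Callable[[str, str], float],        # stage 1: e.g. BM25-style lexical match
    selector_score: Callable[[str, str], float],  # stage 2: learned prototype selector
    k: int = 10,
    top_n: int = 3,
) -> List[str]:
    """Two-stage selection: an IR system proposes k candidates, then the
    prototype selector re-ranks them; the top_n sentences serve as prototypes
    fed to the generator together with the table."""
    query = linearize_table(table)
    candidates = sorted(corpus, key=lambda s: ir_score(query, s), reverse=True)[:k]
    return sorted(candidates, key=lambda s: selector_score(query, s), reverse=True)[:top_n]
```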
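
Similarly, the sketch below shows one way an "easy-to-difficult" schedule in the spirit of the hierarchical curriculum (CC plus IC) could be wired up. The pacing function and difficulty scorer are assumed placeholders, not the paper's exact formulation.

```python
import random
from typing import Callable, List, Tuple


def pacing(step: int, total_steps: int) -> float:
    """Fraction of the difficulty-sorted data that is 'unlocked' at this step."""
    return min(1.0, 0.2 + 0.8 * step / max(1, total_steps))


def curriculum_batch(
    pairs: List[Tuple[str, str]],              # (context, gold response) pairs
    response_pool: List[str],                  # candidate negative responses
    difficulty: Callable[[str, str], float],   # higher = harder to discriminate
    step: int,
    total_steps: int,
    batch_size: int = 16,
) -> List[Tuple[str, str, str]]:
    p = pacing(step, total_steps)

    # Corpus-level curriculum (CC): draw training pairs only from the easiest
    # p-fraction of the corpus, so matching clues are easier to find early on.
    ordered = sorted(pairs, key=lambda cr: difficulty(*cr))
    visible = ordered[: max(batch_size, int(len(ordered) * p))]
    batch = random.sample(visible, min(batch_size, len(visible)))

    # Instance-level curriculum (IC): pick negatives whose difficulty grows
    # with training progress, from near-random to near-miss candidates.
    triples = []
    for context, gold in batch:
        candidates = random.sample(response_pool, min(32, len(response_pool)))
        candidates.sort(key=lambda r: difficulty(context, r))
        negative = candidates[int(p * (len(candidates) - 1))]
        triples.append((context, gold, negative))
    return triples
```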
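
Finally, a small sketch of the ratio-first decoding idea from the NAG abstract: when the output length can be roughly estimated as a ratio of the source length, only that many target positions are decoded in the single parallel pass. The model interface (a `target_length` argument) and tensor shapes are assumptions for illustration.

```python
import math

import torch


@torch.no_grad()
def ratio_first_decode(model, src_ids: torch.Tensor, alpha: float, eos_id: int,
                       max_len: int = 128) -> torch.Tensor:
    """src_ids: (1, src_len) token ids; returns (1, out_len) token ids."""
    # Decode only the first ceil(alpha * src_len) positions instead of always
    # filling a fixed maximum length, shrinking the per-example decoding cost.
    src_len = src_ids.size(1)
    decode_len = min(max_len, math.ceil(alpha * src_len))

    # One parallel (non-autoregressive) forward pass over the truncated span.
    logits = model(src_ids, target_length=decode_len)   # (1, decode_len, vocab)
    out = logits.argmax(dim=-1)                          # (1, decode_len)

    # Truncate at the first end-of-sequence token, if one was produced.
    eos_positions = (out[0] == eos_id).nonzero(as_tuple=True)[0]
    if eos_positions.numel() > 0:
        out = out[:, : int(eos_positions[0].item()) + 1]
    return out
```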