Answer-focused and Position-aware Neural Question Generation. EMNLP, 2018.
Improving Neural Question Generation Using Answer Separation. AAAI, 2019.
Answer-driven Deep Question Generation based on Reinforcement Learning. COLING, 2020.
Automatic Question Generation using Relative Pronouns and Adverbs. ACL, 2018.
Learning to Generate Questions by Learning What not to Generate. WWW, 2019.
Question Generation for Question Answering. EMNLP, 2017.
Answer-focused and Position-aware Neural Question Generation. EMNLP, 2018.
Question-type Driven Question Generation. EMNLP, 2019.
Harvesting Paragraph-level Question-Answer Pairs from Wikipedia. ACL, 2018.
Leveraging Context Information for Natural Question Generation. ACL, 2018.
Paragraph-level Neural Question Generation with Maxout Pointer and Gated Self-attention Networks. EMNLP, 2018.
Capturing Greater Context for Question Generation. AAAI, 2020.
Identifying Where to Focus in Reading Comprehension for Neural Question Generation. EMNLP, 2017.
Neural Models for Key Phrase Extraction and Question Generation. ACL Workshop, 2018.
A Multi-Agent Communication Framework for Question-Worthy Phrase Extraction and Question Generation. AAAI, 2019.
Improving Question Generation With to the Point Context. EMNLP, 2019.
Teaching Machines to Ask Questions. IJCAI, 2018.
Natural Question Generation with Reinforcement Learning Based Graph-to-Sequence Model. NeurIPS Workshop, 2019.
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering. EMNLP, 2019.
Exploring Question-Specific Rewards for Generating Deep Questions. COLING, 2020.
Answer-driven Deep Question Generation based on Reinforcement Learning. COLING, 2020.
Multi-Task Learning with Language Modeling for Question Generation. EMNLP, 2019.
How to Ask Good Questions? Try to Leverage Paraphrases. ACL, 2020.
Improving Question Generation with Sentence-level Semantic Matching and Answer Position Inferring. AAAI, 2020.
Variational Attention for Sequence-to-Sequence Models. ICML, 2018.
Generating Diverse and Consistent QA Pairs from Contexts with Information-Maximizing Hierarchical Conditional VAEs. ACL, 2020.
On the Importance of Diversity in Question Generation for QA. ACL, 2020.
Unified Language Model Pre-training for Natural Language Understanding and Generation. NeurIPS, 2019.
UniLMv2: Pseudo-Masked Language Models for Unified Language Model Pre-Training. arXiv, 2020.
ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation. IJCAI, 2020. (SOTA)
Addressing Semantic Drift in Question Generation for Semi-Supervised Question Answering. EMNLP, 2019.
Synthetic QA Corpora Generation with Roundtrip Consistency. ACL, 2019.
Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering. ACL, 2020.
Training Question Answering Models From Synthetic Data. EMNLP, 2020.
Embedding-based Zero-shot Retrieval through Query Generation. arXiv, 2020.
Towards Robust Neural Retrieval Models with Synthetic Pre-Training. arXiv, 2021.
End-to-End Synthetic Data Generation for Domain Adaptation of Question Answering Systems. EMNLP, 2020.
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation. ACL, 2021.
Back-Training Excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval. arXiv, 2021.
Open-domain Question Answering with Pre-constructed Question Spaces. NAACL, 2021.
Accelerating Real-time Question Answering via Question Generation. AAAI, 2021.
PAQ: 65 Million Probably-Asked Questions and What You Can Do With Them. arXiv, 2021.
Improving Factual Consistency of Abstractive Summarization via Question Answering. ACL, 2021.
Zero-shot Fact Verification by Claim Generation. ACL, 2021.
Towards a Better Metric for Evaluating Question Generation Systems. EMNLP, 2018.
On the Importance of Diversity in Question Generation for QA. ACL, 2020.
Evaluating for Diversity in Question Generation over Text. arXiv, 2020.