ACL 2022 Papers by Category: Prompt, Sentence Representation, Retrieval & Ranking, and Summarization

Foreword

Hello everyone, I'm 刘聪NLP.

The ACL 2022 paper list has been out for a while now. I went through it and picked out papers in directions I'm currently working on or am interested in, covering: Prompt (35 papers), Sentence Representation (21 papers), Retrieval & Ranking (13 papers), Summarization (35 papers), and Others (11 papers I personally found quite interesting).

Only the paper titles are listed below; for the details of each paper, you can look it up yourself via the link below.

Let's get learning, and fill your spare time with papers.

Paper link: https://aclanthology.org/events/acl-2022/

Prompt

[1]Prompt-Based Rule Discovery and Boosting for Interactive Weakly-Supervised Learning

[2]An Information-theoretic Approach to Prompt Engineering Without Ground Truth Labels

[3]Auto-Debias: Debiasing Masked Language Models with Automated Biased Prompts

[4]Enhancing Cross-lingual Natural Language Inference by Prompt-learning from Cross-lingual Templates

[5]Knowledgeable Prompt-tuning: Incorporating Knowledge into Prompt Verbalizer for Text Classification

[6]Are Prompt-based Models Clueless?

[7]Adversarial Soft Prompt Tuning for Cross-Domain Sentiment Analysis

[8]Good Examples Make A Faster Learner: Simple Demonstration-based Learning for Low-resource NER

[9]A Good Prompt Is Worth Millions of Parameters: Low-resource Prompt-based Learning for Vision-Language Models*

[10]Generated Knowledge Prompting for Commonsense Reasoning

[11]Prompt-free and Efficient Few-shot Learning with Language Models

[12]PromDA: Prompt-based Data Augmentation for Low-Resource NLU Tasks

[13]Visual-Language Navigation Pretraining via Prompt-based Environmental Self-exploration*

[14]SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer

[15]Dynamic Prefix-Tuning for Generative Template-based Event Extraction

[16]Noisy Channel Language Model Prompting for Few-Shot Text Classification

[17]Unified Structure Generation for Universal Information Extraction

[18]Can Prompt Probe Pretrained Language Models? Understanding the Invisible Risks from a Causal View

[19]MSP: Multi-Stage Prompting for Making Pre-trained Language Models Better Translators

[20]Prompt for Extraction? PAIE: Prompting Argument Interaction for Event Argument Extraction

[21]Fine-Grained Controllable Text Generation Using Non-Residual Prompting

[22]Prototypical Verbalizer for Prompt-based Few-shot Tuning

[23]Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity

[24]PPT: Pre-trained Prompt Tuning for Few-shot Learning

[25]P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks

[26]The Power of Prompt Tuning for Low-Resource Semantic Parsing

[27]RelationPrompt: Leveraging Prompts to Generate Synthetic Data for Zero-Shot Relation Triplet Extraction

[28]Dual Context-Guided Continuous Prompt Tuning for Few-Shot Learning

[29]Multi-Stage Prompting for Knowledgeable Dialogue Generation

[30]ASCM: An Answer Space Clustered Prompting Method without Answer Engineering

[31]Prompt-Driven Neural Machine Translation

[32]Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models

[33]Controllable Natural Language Generation with Contrastive Prefixes

[34]Modular and Parameter-Efficient Multimodal Fusion with Prompting

[35]Prompt Tuning for Discriminative Pre-trained Language Models

Sentence Representation

[1]Language-agnostic BERT Sentence Embedding

[2]Learning Disentangled Textual Representations via Statistical Measures of Similarity

[3]Contextual Representation Learning beyond Masked Language Modeling

[4]Sentence-level Privacy for Document Embeddings

[5]Multilingual Molecular Representation Learning via Contrastive Pre-training

[6]A Contrastive Framework for Learning Sentence Representations from Pairwise and Triple-wise Perspective in Angular Space

[7]Toward Interpretable Semantic Textual Similarity via Optimal Transport-based Contrastive Sentence Learning

[8]Just Rank: Rethinking Evaluation with Word and Sentence Similarities

[9]Debiased Contrastive Learning of Unsupervised Sentence Representations

[10]UCTopic: Unsupervised Contrastive Learning for Phrase Representations and Topic Mining

[11]SCD: Self-Contrastive Decorrelation of Sentence Embeddings

[12]Problems with Cosine as a Measure of Embedding Similarity for High Frequency Words

[13]Augmenting Document Representations for Dense Retrieval with Interpolation and Perturbation

[14]A Sentence is Worth 128 Pseudo Tokens: A Semantic-Aware Contrastive Learning Framework for Sentence Embeddings

[15]Compressing Sentence Representation for Semantic Retrieval via Homomorphic Projective Distillation

[16]Virtual Augmentation Supported Contrastive Learning of Sentence Representations

[17]Learning Bias-reduced Word Embeddings Using Dictionary Definitions

[18]An Isotropy Analysis in the Multilingual BERT Embedding Space

[20]Combining Static and Contextualised Multilingual Embeddings

[21]Exploring the Impact of Negative Samples of Contrastive Learning: A Case Study of Sentence Embedding

Retrieval & Ranking

[1]Compact Token Representations with Contextual Quantization for Efficient Document Re-ranking

[2]Sentence-aware Contrastive Learning for Open-Domain Passage Retrieval

[3]Unsupervised Corpus Aware Language Model Pre-training for Dense Passage Retrieval

[4]Cross-Lingual Phrase Retrieval

[5]Multi-View Document Representation Learning for Open-Domain Dense Retrieval

[6]SDR: Efficient Neural Re-ranking using Succinct Document Representation

[7]MDERank: A Masked Document Embedding Rank Approach for Unsupervised Keyphrase Extraction

[8]TABi: Type-Aware Bi-Encoders for Open-Domain Entity Retrieval

[9]OneAligner: Zero-shot Cross-lingual Transfer with One Rich-Resource Language Pair for Low-Resource Sentence Retrieval

[10]LaPraDoR: Unsupervised Pretrained Dense Retriever for Zero-Shot Text Retrieval

[11]ED2LM: Encoder-Decoder to Language Model for Faster Document Re-ranking Inference

[12]A Neural Pairwise Ranking Model for Readability Assessment

[13]Zero-Shot Dense Retrieval with Momentum Adversarial Domain Invariant Representations

Summarization

[1]Attention Temperature Matters in Abstractive Summarization Distillation

[2]Modeling Hierarchical Syntax Structure with Triplet Position for Source Code Summarization

[3]Neural Label Search for Zero-Shot Multi-Lingual Extractive Summarization

[4]HIBRIDS: Attention with Hierarchical Biases for Structure-aware Long Document Summarization

[5]Unsupervised Extractive Opinion Summarization Using Sparse Coding

[6]Faithful or Extractive? On Mitigating the Faithfulness-Abstractiveness Trade-off in Abstractive Summarization

[7]Differentiable Multi-Agent Actor-Critic for Multi-Step Radiology Report Summarization

[8]SummN: A Multi-Stage Summarization Framework for Long Input Dialogues and Documents

[9]DYLE: Dynamic Latent Extraction for Abstractive Long-Input Summarization

[10]Predicting Intervention Approval in Clinical Trials through Multi-Document Summarization

[11]A Variational Hierarchical Model for Neural Cross-Lingual Summarization

[12]Other Roles Matter! Enhancing Role-Oriented Dialogue Summarization via Role Interactions

[13]BRIO: Bringing Order to Abstractive Summarization

[14]Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization

[15]EntSUM: A Data Set for Entity-Centric Extractive Summarization

[16]Towards Abstractive Grounded Summarization of Podcast Transcripts

[17]SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization

[18]Graph Enhanced Contrastive Learning for Radiology Findings Summarization

[19]A Multi-Document Coverage Reward for RELAXed Multi-Document Summarization

[20]PRIMERA: Pyramid-based Masked Sentence Pre-training for Multi-document Summarization

[21]ASPECTNEWS: Aspect-Oriented Summarization of News Documents

[22]MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes

[23]Length Control in Abstractive Summarization by Pretraining Information Selection

[24]Learning Non-Autoregressive Models from Search for Unsupervised Sentence Summarization

[25]SummScreen: A Dataset for Abstractive Screenplay Summarization

[26]RNSum: A Large-Scale Dataset for Automatic Release Note Generation via Commit Logs Summarization

[27]NEWTS: A Corpus for News Topic-Focused Summarization

[28]End-to-End Segmentation-based News Summarization

[29]Read Top News First: A Document Reordering Approach for Multi-Document News Summarization

[30]HiStruct+: Improving Extractive Text Summarization with Hierarchical Structure Information

[31]Revisiting Automatic Evaluation of Extractive Summarization Task: Can We Do Better than ROUGE?

[32]Training Dynamics for Text Summarization Models

[33]Dialogue Summaries as Dialogue States (DS2), Template-Guided Summarization for Few-shot Dialogue State Tracking

[34]Focus on the Action: Learning to Highlight and Summarize Jointly for Email To-Do Items Summarization

[35]Should We Trust This Summary? Bayesian Abstractive Summarization to The Rescue

Others

[1]RoCBert: Robust Chinese Bert with Multimodal Contrastive Pretraining

[2]Efficient Unsupervised Sentence Compression by Fine-tuning Transformers with Reinforcement Learning

[3]Explanation Graph Generation via Pre-trained Language Models: An Empirical Study with Contrastive Learning

[4]Token Dropping for Efficient BERT Pretraining

[5]Improving Compositional Generalization with Self-Training for Data-to-Text Generation

[6]∞-former: Infinite Memory Transformer

[7]CQG: A Simple and Effective Controlled Generation Framework for Multi-hop Question Generation

[8]SkipBERT: Efficient Inference with Shallow Layer Skipping

[9]NoisyTune: A Little Noise Can Help You Finetune Pretrained Language Models Better

[10]"Is Whole Word Masking Always Better for Chinese BERT?": Probing on Chinese Grammatical Error Correction

[11]Dict-BERT: Enhancing Language Model Pre-training with Dictionary

Putting this together wasn't easy, so please give it a like and a follow. If you have any questions, feel free to reach me on WeChat 「logCong」, my official account 「NLP工作站」, or Zhihu 「刘聪NLP」 for a private chat. Let's be friends, learn together, and make progress together.

Our motto is "Never stop living, never stop learning."

Original: https://blog.csdn.net/lc_love_ty/article/details/125248378
Author: 刘聪NLP
Title: ACL 2022 Papers by Category: Prompt, Sentence Representation, Retrieval & Ranking, and Summarization

This original article is protected by copyright. Please cite the source when reposting: https://www.johngo689.com/530219/

Reposted articles are protected by the original author's copyright. Please credit the original author when reposting!
