Our laboratory's paper has been accepted to TACL.

    ■Bibliographic Information
    Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa. “Continual Pre-training on Character-Level Noisy Texts Makes Decoder-based Language Models Robust Few-shot Learners”. Transactions of the Association for Computational Linguistics (TACL).

    ■Abstract
    Recent decoder-based pre-trained language models (PLMs) generally use subword tokenizers. However, adding character-level perturbations drastically changes how the tokenizers delimit text, making PLMs vulnerable to such noise. This study proposes a continual pre-training method that converts decoder-based PLMs with subword tokenizers into perturbation-robust few-shot in-context learners. Our method continually trains decoder-based PLMs to predict the next tokens conditioned on artificially created character-level noisy texts. Since decoder-based language models are auto-regressive, we exclude the noised words from the prediction targets. In addition, to maintain the same word prediction performance on noisy text as on clean text, our method employs word distribution matching between the original PLM and the model being trained. We conducted experiments on various subword-based PLMs, including GPT2, Pythia, Mistral, Gemma2, and Llama3, ranging from 1B to 8B parameters. The results show that our method consistently improves downstream task performance on texts containing artificial noise as well as real typos and spelling errors in few-shot in-context learning settings.
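
    As a rough illustration of the training objective described above, the following is a minimal Python sketch assuming PyTorch and Hugging Face Transformers. The function names, the simple adjacent-character-swap noise, the single-position distribution matching, and parameters such as noise_rate and kl_weight are illustrative assumptions, not the paper's actual implementation.

    import random
    import torch
    import torch.nn.functional as F
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def inject_char_noise(words, noise_rate=0.15):
        """Perturb some words at the character level (here: swap two adjacent characters).
        Returns the noisy words and a per-word flag marking whether it was noised."""
        noisy, flags = [], []
        for w in words:
            if len(w) > 1 and random.random() < noise_rate:
                i = random.randrange(len(w) - 1)
                w = w[:i] + w[i + 1] + w[i] + w[i + 2:]
                flags.append(True)
            else:
                flags.append(False)
            noisy.append(w)
        return noisy, flags

    def continual_pretrain_step(model, frozen_ref, tokenizer, text, kl_weight=1.0):
        words = text.split()
        noisy_words, flags = inject_char_noise(words)

        # Tokenize word by word so we know which subword tokens come from noised words.
        noisy_ids, clean_ids, noised_mask = [], [], []
        for w, nw, noised in zip(words, noisy_words, flags):
            n_tok = tokenizer(" " + nw, add_special_tokens=False).input_ids
            c_tok = tokenizer(" " + w, add_special_tokens=False).input_ids
            noisy_ids += n_tok
            clean_ids += c_tok
            noised_mask += [noised] * len(n_tok)

        input_ids = torch.tensor([noisy_ids])
        labels = input_ids.clone()
        # Exclude noised words from the next-token prediction targets
        # (the auto-regressive model should not learn to reproduce corrupted spellings).
        labels[0, torch.tensor(noised_mask)] = -100

        out = model(input_ids, labels=labels)
        lm_loss = out.loss

        # Word distribution matching: keep the trained model's next-word distribution on
        # noisy input close to the frozen original PLM's distribution on clean input.
        with torch.no_grad():
            ref_logits = frozen_ref(torch.tensor([clean_ids])).logits
        # For brevity, only the final-position distributions are compared here,
        # since noisy and clean sequences generally differ in length.
        kl = F.kl_div(
            F.log_softmax(out.logits[:, -1], dim=-1),
            F.softmax(ref_logits[:, -1], dim=-1),
            reduction="batchmean",
        )
        return lm_loss + kl_weight * kl

    # Usage (illustrative):
    # tok = AutoTokenizer.from_pretrained("gpt2")
    # model = AutoModelForCausalLM.from_pretrained("gpt2")
    # ref = AutoModelForCausalLM.from_pretrained("gpt2").eval()
    # loss = continual_pretrain_step(model, ref, tok, "few-shot in-context learning")
    # loss.backward()

    In this sketch, excluding the noised tokens from the loss prevents the auto-regressive model from learning to reproduce corrupted spellings, while the distribution-matching term anchors its predictions under noisy input to the original PLM's behavior on clean text.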