• Our lab's paper has been accepted to NAACL 2024.

■Bibliographic information
    Takeshi Kojima, Itsuki Okimura, Yusuke Iwasawa, Hitomi Yanaka, Yutaka Matsuo. “On the Multilingual Ability of Decoder-based Pre-trained Language Models: Finding and Controlling Language-Specific Neurons”. 2024 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2024)

■Abstract
Current decoder-based pre-trained language models (PLMs) demonstrate strong multilingual capabilities, but it is unclear how multilingualism is handled inside the models. We analyze the neuron-level internal behavior of multilingual decoder-based PLMs, specifically examining the existence of neurons that fire uniquely for each language. We study six languages (English, German, French, Spanish, Chinese, and Japanese) and show that language-specific neurons are largely distinct, with only slight overlap (< 5%) between languages, and are mainly distributed in the models' first and last few layers. This trend is consistent across languages and models. We also intervene on fewer than 1% of the total neurons in each model during inference and show that tampering with just a few language-specific neurons drastically changes the probability of the target language occurring in generated text.
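The abstract's two-step procedure (find language-specific neurons, then force their activations at inference time) can be sketched roughly as follows. This is an illustrative toy in NumPy, not the paper's actual method: the scoring criterion here (a neuron is "language-specific" if its firing rate on the target language exceeds its rate on all other languages by a margin) and the fixed-value intervention are simplifying assumptions.

```python
import numpy as np

def find_language_specific_neurons(acts, languages, target, margin=0.5):
    """Identify neurons that fire mostly for `target`-language inputs.

    acts: (n_samples, n_neurons) array of neuron activations.
    languages: per-sample language labels.
    Illustrative criterion (an assumption, not the paper's exact one):
    a neuron is target-specific if its firing rate (activation > 0)
    on target-language samples exceeds its rate on other-language
    samples by `margin`.
    """
    languages = np.asarray(languages)
    fired = acts > 0
    in_rate = fired[languages == target].mean(axis=0)
    out_rate = fired[languages != target].mean(axis=0)
    return np.where(in_rate - out_rate > margin)[0]

def intervene(acts, neuron_ids, value=1.0):
    """Force the selected neurons to a fixed activation,
    mimicking the inference-time tampering described above."""
    out = acts.copy()
    out[:, neuron_ids] = value
    return out

# Toy demo: neuron 0 fires only on (synthetic) Japanese samples.
rng = np.random.default_rng(0)
acts = rng.normal(size=(100, 8))
languages = ["ja"] * 50 + ["en"] * 50
acts[:50, 0] = 1.0
acts[50:, 0] = -1.0
ja_neurons = find_language_specific_neurons(acts, languages, "ja")
forced = intervene(acts, ja_neurons)
```

In an actual PLM the activations would come from hidden states collected during forward passes, and the intervention would be applied on the fly (e.g. via forward hooks) rather than to a stored matrix.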