  • Our paper has been accepted for publication at ACL 2025.

    ■Bibliographic Information
    Andrew Gambardella, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo. “Inconsistent Tokenizations Cause Language Models to be Perplexed by Japanese Grammar”. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics, 2025.
    ■Overview
    Typical methods for evaluating language models measure their ability to answer questions accurately. These metrics are adequate for determining how well language models can understand and reason about text in a general sense, but they fail to capture more nuanced capabilities, such as the ability to recognize and obey rare grammar points, particularly in languages other than English.

    We measure the perplexity of language models when confronted with the “first person psych predicate restriction” grammar point in Japanese. Weblab is the only open-source model tested in the 7-10B parameter range that consistently assigns higher perplexity to ungrammatical psych predicate sentences than to grammatical ones. We give evidence that Weblab’s uniformly bad tokenization is a possible root cause of its good performance, and show that Llama 3’s perplexity on grammatical psych predicate sentences can be reduced by more than an order of magnitude (a 28x difference) by restricting test sentences to those with uniformly well-behaved tokenizations. In further experiments on machine translation tasks, we show that language models will use alternative grammar patterns in order to produce grammatical sentences when tokenization issues prevent the most natural sentence from being output.
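
    As a rough illustration of the perplexity comparison described above, the sketch below scores a grammatical/ungrammatical psych predicate pair with an open causal language model via Hugging Face Transformers. The model name and the example sentence pair are illustrative assumptions, not the paper’s exact evaluation setup.

    ```python
    # Minimal sketch of comparing perplexity on a grammatical vs. ungrammatical
    # Japanese psych predicate sentence. Model choice and sentences are
    # illustrative only; the paper's actual test set and models may differ.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "meta-llama/Meta-Llama-3-8B"  # any 7-10B causal LM would do
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
    model.eval()

    def perplexity(sentence: str) -> float:
        """Per-token perplexity: exp of the mean negative log-likelihood."""
        enc = tokenizer(sentence, return_tensors="pt")
        with torch.no_grad():
            # With labels supplied, the model shifts them internally and
            # returns the mean token-level cross-entropy as .loss
            out = model(**enc, labels=enc["input_ids"])
        return torch.exp(out.loss).item()

    # Hypothetical sentence pair (illustrative): plain-form psych predicates
    # like 嬉しい ("happy") normally require a first-person subject.
    grammatical = "私は嬉しい。"    # "I am happy." (first-person subject)
    ungrammatical = "彼は嬉しい。"  # "He is happy." (violates the restriction)

    print(perplexity(grammatical), perplexity(ungrammatical))
    # A model sensitive to this grammar point should assign the
    # ungrammatical sentence the higher perplexity.
    ```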