Research

International Conferences

    • Answer When Needed, Forget When Not: Language Models Pretend to Forget via In-Context Knowledge Unlearning

      Shota Takashiro, Takeshi Kojima, Andrew Gambardella, Qi Cao, Yusuke Iwasawa, Yutaka Matsuo

      The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)

    • GraphCheck: Breaking Long-Term Text Barriers with Extracted Knowledge Graph-Powered Fact-Checking

      Yingjian Chen, Haoran Liu, Yinhong Liu, Rui Yang, Han Yuan, Yanran Fu, Pengyuan Zhou, Qingyu Chen, James Caverlee, Irene Li

      The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)

    • Inconsistent Tokenizations Cause Language Models to be Perplexed by Japanese Grammar

      Andrew Gambardella, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo

      The 63rd Annual Meeting of the Association for Computational Linguistics (ACL 2025)

    • In-Context Meta Learning Induces Multi-Phase Circuit Emergence

      Gouki Minegishi, Hiroki Furuta, Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo

      International Conference on Machine Learning (ICML)

    • Plan-and-Act: Improving Planning of Agents for Long-Horizon Tasks

      Lutfi Eren Erdogan, Nicholas Lee, Sehoon Kim, Suhong Moon, Hiroki Furuta, Gopala Anumanchipalli, Kurt Keutzer, Amir Gholami

      International Conference on Machine Learning (ICML)

    • Language Models can Categorize System Inputs for Performance Analysis

      Dominic Sobhani, Ruiqi Zhong, Edison Marrese-Taylor, Keisuke Sakaguchi, Yutaka Matsuo

Annual Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics (NAACL)

    • Near-Optimal Policy Identification in Robust Constrained Markov Decision Processes via Epigraph Form

      Toshinori Kitamura, Tadashi Kozuno, Wataru Kumagai, Kenta Hoshino, Yohei Hosoe, Kazumi Kasaura, Masashi Hamaya, Paavo Parmas, Yutaka Matsuo

      International Conference on Learning Representations (ICLR 2025)

    • Rethinking Evaluation of Sparse Autoencoders through the Representation of Polysemous Words

      Gouki Minegishi, Hiroki Furuta, Yusuke Iwasawa, Yutaka Matsuo

      International Conference on Learning Representations (ICLR 2025)

    • Lost in the Distance: Large Language Models Struggle to Capture Long-Distance Relational Knowledge

      Meiyun Wang, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo

      The 2025 Annual Conference of the Nations of the Americas Chapter of the ACL (NAACL 2025)

    • Slender-Mamba: Fully Quantized Mamba From Head to Toe

Zhenxuan Yu, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa

Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025)