
Research
Publications
-
Mechanism of Task-oriented Information Removal in In-context Learning
Hakaze Cho, Haolin Yang, Gouki Minegishi, Naoya Inoue
International Conference on Learning Representations 2026 (ICLR 2026)
-
RL Squeezes, SFT Expands: A Comparative Study of Reasoning LLMs
Kohsei Matsutani, Shota Takashiro, Gouki Minegishi, Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo
International Conference on Learning Representations 2026 (ICLR 2026)
-
C-Voting: Confidence-Based Test-Time Voting without Explicit Energy Functions
Kenji Kubo, Shunsuke Kamiya, Masanori Koyama, Kohei Hayashi, Yusuke Iwasawa, Yutaka Matsuo
International Conference on Learning Representations 2026 (ICLR 2026)
-
Does “Do Differentiable Simulators Give Better Policy Gradients?” Give Better Policy Gradients?
Ku Onoda, Paavo Parmas, Manato Yaguchi, Yutaka Matsuo
International Conference on Learning Representations 2026 (ICLR 2026)
-
Quantization-Aware Diffusion Models For Maximum Likelihood Training
Shohei Taniguchi, Masahiro Suzuki, Yutaka Matsuo
International Conference on Learning Representations 2026 (ICLR 2026)
-
Self-Harmony: Learning to Harmonize Self-Supervision and Self-Play in Test-Time Reinforcement Learning
Ru Wang, Wei Huang, Qi Cao, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
International Conference on Learning Representations 2026 (ICLR 2026)
-
MMA: Benchmarking Multi-Modal Large Language Models in Ambiguity Context
Ru Wang*, Selena Song*, Yuquan Wang, Liang Ding, Mingming Gong, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
The Third Conference on Parsimony and Learning (CPAL 2026)
-
Beyond In-Distribution Success: Scaling Curves of CoT Granularity for Language Model Generalization
Ru Wang, Wei Huang, Selena Song, Haoyu Zhang, Qian Niu, Yusuke Iwasawa, Yutaka Matsuo, Jiaxian Guo
The Third Conference on Parsimony and Learning (CPAL 2026)
-
Semantic Token Clustering for Efficient Uncertainty Quantification in Large Language Models
Qi Cao, Andrew Gambardella, Takeshi Kojima, Yutaka Matsuo, Yusuke Iwasawa
The 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2026)
-
∞-MoE: Generalizing Mixture of Experts to Infinite Experts
Shota Takashiro, Takeshi Kojima, Shohei Taniguchi, Yusuke Iwasawa, Yutaka Matsuo
The 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2026)