Our paper was accepted for publication in the Transactions of the Japanese Society for Artificial Intelligence

◼Information Title: Transformerと自己教師あり学習を用いたシーン解釈手法の提案 (A Scene Interpretation Method Using Transformers and Self-Supervised Learning) Authors: Yuya Kobayashi, Masahiro Suzuki, Yutaka Matsuo Issue: Vol. 37, No. 2 (J-STAGE) ◼Overview The ability to understand the surrounding environment in terms of its components, namely objects, is one of the most important cognitive abilities for intelligent agents. Human beings are able to decompose sensory input, i.e. visual stimulation, into components based on their meaning or the relationships between…

Our paper was accepted for Web Intelligence (Spotlight)

Our paper was accepted for Web Intelligence (Spotlight). Information: Hiromi Nakagawa, Yusuke Iwasawa, Yutaka Matsuo. “Graph-based Knowledge Tracing: Modeling Student Proficiency Using Graph Neural Network.” Web Intelligence, Vol. XX, No. X, IOS Press, 2021. Overview: Recent advancements in computer-assisted learning systems have driven an increase in research on knowledge tracing, wherein student performance is predicted…

Our paper was accepted for NeurIPS2021 (Spotlight)

Our paper was accepted for presentation at NeurIPS2021 (Spotlight). ◼︎Information Yusuke Iwasawa, Yutaka Matsuo. “Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview This paper presents a new algorithm for domain generalization (DG), the test-time template adjuster (T3A), aiming to develop a model that performs well under conditions…
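The abstract truncates before the mechanism, but the template-adjustment idea can be sketched roughly as follows. This is a minimal NumPy sketch under our own assumptions, not the authors' implementation: the rows of the trained linear classifier act as per-class "templates", each test feature is appended to the support set of its pseudo-label, and templates are recomputed as normalized centroids.

```python
# Hedged sketch of test-time template adjustment in the spirit of T3A
# (assumed interface; not the authors' code).
import numpy as np

def normalize(v):
    # L2-normalize along the last axis
    return v / (np.linalg.norm(v, axis=-1, keepdims=True) + 1e-8)

class TemplateAdjuster:
    def __init__(self, classifier_weights):
        # one template per class, initialized from the trained classifier rows
        self.templates = normalize(np.asarray(classifier_weights, dtype=float))
        self.supports = [[w] for w in self.templates]  # per-class support sets

    def predict(self, features):
        feats = normalize(np.atleast_2d(np.asarray(features, dtype=float)))
        preds = (feats @ self.templates.T).argmax(axis=1)
        # online update: append each feature to its pseudo-label's support set
        # and recompute that class's template as the normalized centroid
        for f, k in zip(feats, preds):
            self.supports[k].append(f)
            self.templates[k] = normalize(np.mean(self.supports[k], axis=0))
        return preds
```

No gradient step is needed, which is why this kind of adjustment can run at test time on any backbone.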

Our paper was accepted for NeurIPS2021

Our paper was accepted for presentation at NeurIPS2021. ◼︎Information Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, and Shixiang Shane Gu. “Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview Recently, many algorithms have been devised for reinforcement learning (RL) with function approximation. While…

Our paper was accepted for EMNLP2021.

◼︎Information Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo. “AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation for 8 African Languages”, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Junjie Hu (Carnegie Mellon University), Graham Neubig (Carnegie Mellon University), Yutaka Matsuo 【Title】AfroMT: Pretraining Strategies and Reproducible Benchmarks for…

Our paper was accepted for EMNLP2021 Findings.

◼︎Information Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo. “Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers”, Findings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo 【Title】Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers ◼︎Overview Transformers have shown improved performance when compared to…
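Weight sharing across layers trades unique parameters for reuse. The toy sketch below is our own illustration, not the paper's scheme: layer internals are collapsed to a single weight matrix, and the choice of which layers share (here the middle ones, with distinct first and last layers) is an assumption. It only demonstrates how sharing shrinks the unique parameter count while the layer stack keeps its depth.

```python
# Toy illustration of cross-layer weight sharing in a deep stack
# (each "layer" reduced to one d x d matrix; assumes n_layers >= 2).
import numpy as np

def build_layers(n_layers, d, share_middle=True):
    rng = np.random.default_rng(0)
    if not share_middle:
        return [rng.standard_normal((d, d)) for _ in range(n_layers)]
    shared = rng.standard_normal((d, d))  # one weight set reused by middle layers
    first = rng.standard_normal((d, d))
    last = rng.standard_normal((d, d))
    return [first] + [shared] * (n_layers - 2) + [last]

def num_unique_params(layers):
    # deduplicate by object identity: shared layers count once
    return sum(w.size for w in {id(w): w for w in layers}.values())
```

For a 6-layer stack with d=4, sharing the middle leaves 3 unique matrices (48 parameters) instead of 6 (96), while depth is unchanged.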

Our paper was accepted for publication in Machine Learning (Springer).

◼︎Information Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo. “Information-theoretic regularization for learning global features by sequential VAE”, Mach Learn (2021). https://doi.org/10.1007/s10994-021-06032-4 ◼︎Overview Sequential variational autoencoders (VAEs) with a global latent variable z have been studied for disentangling the global features of data, which is useful for several downstream tasks. To further assist the sequential VAEs in…

Our paper was accepted for UAI2021.

◼︎Information Akiyoshi Sannai, Masaaki Imaizumi, Makoto Kawano. “Improved Generalization Bounds of Group Invariant / Equivariant Deep Networks via Quotient Feature Spaces”, 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021). ◼︎Overview Numerous invariant (or equivariant) neural networks have succeeded in handling invariant data such as point clouds and graphs. However, a generalization theory for…
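As a concrete instance of the invariant architectures such a theory targets, here is a minimal permutation-invariant network in the DeepSets style. This is our own illustrative sketch, not code from the paper: each set element is encoded independently, and sum pooling makes the output independent of element order.

```python
# Minimal permutation-invariant network (DeepSets-style sum pooling).
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.standard_normal((2, 8))  # per-element encoder (2-D points -> 8 features)
W2 = rng.standard_normal(8)       # readout applied after pooling

def invariant_net(points):
    h = np.tanh(np.asarray(points, dtype=float) @ W1)  # encode each point independently
    pooled = h.sum(axis=0)                             # sum pooling: order-independent
    return float(pooled @ W2)
```

Because summation commutes, permuting the input set provably leaves the output unchanged, which is exactly the structural constraint the generalization bounds exploit.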

Our paper was accepted for ICML2021.

【Information】 Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, and Shixiang Shane Gu. “Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning”, International Conference on Machine Learning 2021 (ICML2021). July 2021. 【Overview】 Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments. However, analyzing…
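Policy information capacity is the mutual information I(Θ; R) between randomized policy parameters and episode returns, decomposable as H(R) − E_Θ[H(R | Θ)]. The sketch below is a rough Monte-Carlo estimate on a hypothetical one-parameter task; binning returns by exact value is our assumption, and the paper's estimator and environments differ.

```python
# Hedged sketch: estimate I(Theta; R) by sampling random policy parameters,
# collecting returns, and computing entropies from discrete counts (in nats).
import numpy as np
from collections import Counter

def entropy(counts):
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p + 1e-12)).sum())

def policy_information_capacity(sample_return, n_params=200, n_rollouts=20, seed=0):
    rng = np.random.default_rng(seed)
    all_returns, cond_entropies = [], []
    for _ in range(n_params):
        theta = rng.standard_normal()  # random policy parameter
        rs = [sample_return(theta, rng) for _ in range(n_rollouts)]
        all_returns += rs
        cond_entropies.append(entropy(Counter(rs)))  # H(R | theta)
    # I(Theta; R) = H(R) - E_theta[H(R | theta)]
    return entropy(Counter(all_returns)) - float(np.mean(cond_entropies))
```

On a hypothetical deterministic sign bandit, `lambda theta, rng: int(theta > 0)`, the estimate comes out close to ln 2 nats: the marginal return distribution is near-uniform over two outcomes while each fixed parameter yields a single return.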

Our paper was accepted for ACL-IJCNLP 2021 (Findings).

【NEWS】Our paper was accepted to ACL-IJCNLP 2021 (Findings) 【Title】LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer 【Authors】Machel Reid and Victor Zhong (University of Washington) 【Overview】Many types of text style transfer can be achieved with only small, precise edits (e.g. sentiment transfer from “I had a terrible time…” to “I had a great time…”). We propose…
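The "small, precise edits" view can be made concrete with a word-level diff. The sketch below uses Python's difflib rather than the authors' Levenshtein editor, so it only illustrates the kind of edit programs involved, not the paper's method.

```python
# Word-level edit operations between a source and target sentence
# (illustrative sketch using difflib; not the paper's Levenshtein editor).
import difflib

def word_edits(src, tgt):
    src_w, tgt_w = src.split(), tgt.split()
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=src_w, b=tgt_w).get_opcodes():
        if tag != "equal":  # keep only insert / delete / replace operations
            ops.append((tag, src_w[i1:i2], tgt_w[j1:j2]))
    return ops
```

On the abstract's example, `word_edits("I had a terrible time", "I had a great time")` reduces the whole sentiment transfer to a single one-word replacement, which is the observation motivating edit-based style transfer.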