Our paper was accepted for NeurIPS2021 (Spotlight)

Our paper was accepted for presentation at NeurIPS2021 (Spotlight). ◼︎Information Yusuke Iwasawa, Yutaka Matsuo. “Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview This paper presents a new algorithm for domain generalization (DG), the test-time template adjuster (T3A), aiming to develop a model that performs well under conditions…
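
The excerpt breaks off before the method itself. As orientation only, here is a minimal, hedged sketch of what a test-time classifier-adjustment scheme in this spirit can look like: keep a per-class support set of pseudo-labeled test embeddings and classify against their centroids. The names `featurizer`, `classifier_weight`, and `keep_per_batch`, and the entropy-based filtering, are illustrative assumptions, not the paper's exact procedure.

```python
import torch
import torch.nn.functional as F

class TestTimeTemplateAdjuster:
    """Minimal sketch of test-time classifier adjustment (hedged; see text).

    Assumed interface: `featurizer(x)` returns (B, d) embeddings, and
    `classifier_weight` is the trained linear head's (C, d) weight matrix,
    whose rows seed one support set per class.
    """

    def __init__(self, featurizer, classifier_weight, keep_per_batch=16):
        self.featurizer = featurizer
        self.supports = [[w.detach().clone()] for w in classifier_weight]
        self.keep_per_batch = keep_per_batch  # confident examples to store

    @torch.no_grad()
    def predict(self, x):
        z = F.normalize(self.featurizer(x), dim=-1)                  # (B, d)
        templates = torch.stack([F.normalize(torch.stack(s).mean(0), dim=0)
                                 for s in self.supports])            # (C, d)
        logits = z @ templates.T                                     # (B, C)
        # Grow the support sets with pseudo-labeled, low-entropy features.
        probs = logits.softmax(-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
        yhat = logits.argmax(-1)
        for i in entropy.argsort()[: self.keep_per_batch]:
            self.supports[int(yhat[i])].append(z[i])
        return logits
```

A scheme like this adjusts only the classification templates, so the backbone stays frozen and no test-time backpropagation is needed.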

Our paper was accepted for NeurIPS2021

Our paper was accepted for presentation at NeurIPS2021. ◼︎Information Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, and Shixiang Shane Gu. “Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview Recently, many algorithms have been devised for reinforcement learning (RL) with function approximation. While…

Our paper was accepted for EMNLP2021.

◼︎Information Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo. “AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of Eight African Languages”, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Junjie Hu (Carnegie Mellon University), Graham Neubig (Carnegie Mellon University), Yutaka Matsuo 【Title】AfroMT: Pretraining Strategies and Reproducible Benchmarks for…

Our paper was accepted for EMNLP2021 Findings.

◼︎Information Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo. “Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers”, Findings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo 【Title】Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers ◼︎Overview Transformers have shown improved performance when compared to…
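
The excerpt is cut off, but the title names the mechanism: sharing weights across Transformer layers to save parameters. Below is a minimal, hedged sketch of one common sharing pattern (distinct outer layers, one module reused for the middle of the stack); the layer counts and sizes are illustrative, and this is not claimed to be Subformer's exact sharing scheme.

```python
import torch.nn as nn

class SharedMiddleEncoder(nn.Module):
    """Sketch of sandwich-style weight sharing across Transformer layers:
    distinct first/last layers, one shared module reused in between."""

    def __init__(self, d_model=512, nhead=8, n_layers=6):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.first, self.shared, self.last = make(), make(), make()
        self.n_middle = n_layers - 2  # shared layer applied this many times

    def forward(self, x):
        x = self.first(x)
        for _ in range(self.n_middle):
            x = self.shared(x)  # same parameters reused -> fewer parameters
        return self.last(x)
```

With this pattern a 6-layer stack stores only 3 layers' worth of parameters while keeping 6 layers of computation, which is the general parameter-efficiency trade-off the title points at.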

Our paper was accepted for Machine Learning (Springer).

◼︎Information Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo. “Information-theoretic regularization for learning global features by sequential VAE”, Mach Learn (2021). https://doi.org/10.1007/s10994-021-06032-4 ◼︎Overview Sequential variational autoencoders (VAEs) with a global latent variable z have been studied for disentangling the global features of data, which is useful for several downstream tasks. To further assist the sequential VAEs in…
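
The announcement truncates before the regularizer is defined. As a hedged sketch of the objective's overall shape only: a sequential-VAE ELBO with a global latent z, plus a weighted mutual-information term encouraging z to capture global features. How the MI term is estimated, and its exact placement and sign, are assumptions here, not the paper's recipe.

```python
def sequential_vae_objective(recon_ll, kl_global, kl_local, mi_bound, alpha=1.0):
    """Shape of a sequential-VAE loss with a global latent z (hedged sketch).

    recon_ll : log-likelihood of the sequence given the latents
    kl_global: KL term for the global latent z
    kl_local : summed KL terms for the per-timestep latents
    mi_bound : an estimate/bound of I(z; x_{1:T}); treating the
               information-theoretic regularizer as this quantity is an
               assumption made for illustration
    """
    elbo = recon_ll - kl_global - kl_local
    return -(elbo + alpha * mi_bound)  # a loss to minimize
```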

Our paper was accepted for UAI2021.

◼︎Information Akiyoshi Sannai, Masaaki Imaizumi, Makoto Kawano. “Improved Generalization Bounds of Group Invariant / Equivariant Deep Networks via Quotient Feature Spaces”, 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021). ◼︎Overview Numerous invariant (or equivariant) neural networks have succeeded in handling invariant data such as point clouds and graphs. However, a generalization theory for…

Our paper was accepted for ICML2021.

【Information】 Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, and Shixiang Shane Gu. “Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning”, International Conference on Machine Learning 2021 (ICML2021). July 2021. 【Overview】 Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments. However, analyzing…
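
The title indicates an information-theoretic task-complexity measure; a hedged reading is the mutual information between randomly sampled policy parameters and the returns they obtain. The sketch below estimates such a quantity with plug-in histogram entropies. The sampling scheme, bin count, and estimator are illustrative assumptions, and `sample_params` / `episode_return` are hypothetical callables.

```python
import numpy as np

def policy_information_capacity(sample_params, episode_return,
                                n_policies=256, n_episodes=8, n_bins=10):
    """Plug-in estimate of I(theta; R): mutual information between sampled
    policy parameters theta and episodic returns R (hedged sketch)."""
    returns = np.empty((n_policies, n_episodes))
    for i in range(n_policies):
        theta = sample_params()              # e.g., a fresh random init
        returns[i] = [episode_return(theta) for _ in range(n_episodes)]

    # Shared bins so marginal and conditional entropies are comparable.
    bins = np.histogram_bin_edges(returns.ravel(), bins=n_bins)

    def entropy(x):
        p = np.histogram(x, bins=bins)[0].astype(float)
        p = p[p > 0] / p.sum()
        return float(-(p * np.log(p)).sum())

    # I(theta; R) = H(R) - E_theta[ H(R | theta) ]
    return entropy(returns.ravel()) - np.mean([entropy(r) for r in returns])
```

Intuitively, a task where policy quality varies a lot across random parameters (high marginal return entropy, low per-policy entropy) scores high under such a measure.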

Our paper was accepted for ACL-IJCNLP 2021 (Findings).

【NEWS】Our paper was accepted to ACL-IJCNLP 2021 (Findings) 【Title】LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer 【Authors】Machel Reid and Victor Zhong (University of Washington) 【Overview】Many types of text style transfer can be achieved with only small, precise edits (e.g. sentiment transfer from “I had a terrible time…” to “I had a great time…”). We propose…
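
The excerpt's own example already shows the key observation: many style transfers amount to a few token-level edits. As an illustration of that view only (difflib stands in for a Levenshtein alignment; the paper's actual editor is learned, not rule-based), the edit operations between two sentences can be recovered like this:

```python
import difflib

def token_edits(src_tokens, tgt_tokens):
    """Recover the small replace/insert/delete operations that turn one
    token sequence into another (illustration only)."""
    matcher = difflib.SequenceMatcher(a=src_tokens, b=tgt_tokens)
    return [(tag, src_tokens[i1:i2], tgt_tokens[j1:j2])
            for tag, i1, i2, j1, j2 in matcher.get_opcodes() if tag != "equal"]

# The announcement's own example reduces to a single replacement:
print(token_edits("I had a terrible time".split(),
                  "I had a great time".split()))
# [('replace', ['terrible'], ['great'])]
```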

Our paper was accepted for L4DC.

【Information】 Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo. Estimating Disentangled Belief about Hidden State and Hidden Task for Meta-Reinforcement Learning. Learning for Dynamics and Control (L4DC) Conference. June 2021. 【Overview】 There is considerable interest in designing meta-reinforcement learning (meta-RL) algorithms, which enable autonomous agents to adapt to new tasks from a small amount of experience. In meta-RL, the…
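
The excerpt stops before the method, but the title states the idea: maintain separate, disentangled beliefs over the hidden task and the hidden state, updated from the interaction history. Below is a minimal sketch of one way to structure that; the GRU encoders, Gaussian belief heads, and all sizes are illustrative assumptions, not the paper's design.

```python
import torch.nn as nn

class DisentangledBelief(nn.Module):
    """Sketch: two separate recurrent belief encoders, one for the hidden
    task and one for the hidden state (hedged; see text)."""

    def __init__(self, obs_dim, act_dim, task_dim=8, state_dim=8, hidden=64):
        super().__init__()
        in_dim = obs_dim + act_dim + 1  # (obs, action, reward) transitions
        self.task_rnn = nn.GRU(in_dim, hidden, batch_first=True)
        self.state_rnn = nn.GRU(in_dim, hidden, batch_first=True)
        # Each head outputs mean and log-variance of a Gaussian belief.
        self.task_head = nn.Linear(hidden, 2 * task_dim)
        self.state_head = nn.Linear(hidden, 2 * state_dim)

    def forward(self, transitions):           # (B, T, obs+act+1)
        h_task, _ = self.task_rnn(transitions)
        h_state, _ = self.state_rnn(transitions)
        task_mu, task_logvar = self.task_head(h_task[:, -1]).chunk(2, -1)
        state_mu, state_logvar = self.state_head(h_state[:, -1]).chunk(2, -1)
        return (task_mu, task_logvar), (state_mu, state_logvar)
```

Keeping the two beliefs in separate modules is one simple way to encourage the task representation to stay constant within an episode while the state belief tracks fast-changing dynamics.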

Our paper was accepted for ICLR2021.

Our paper was accepted for presentation at ICLR2021. 【Information】Tatsuya Matsushima, Hiroki Furuta, Yutaka Matsuo, Ofir Nachum, and Shixiang Shane Gu. “Deployment-Efficient Reinforcement Learning via Model-Based Offline Optimization”, International Conference on Learning Representations 2021 (ICLR2021). May 2021. 【Overview】Most reinforcement learning (RL) algorithms assume online access to the environment, in which one may readily interleave updates to…
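
The excerpt cuts off at the motivation, but the title points at the recipe: limit the number of deployments, and between deployments improve the policy offline against a learned dynamics model rather than fresh environment interaction. A hedged sketch of such a loop follows; `train_dynamics` and `optimize_policy` are hypothetical stand-ins, and the classic gym-style 4-tuple step API is assumed.

```python
def collect_episodes(env, policy, n_steps):
    """Roll out the current policy for a fixed interaction budget."""
    data, obs = [], env.reset()
    for _ in range(n_steps):
        action = policy(obs)
        next_obs, reward, done, _ = env.step(action)
        data.append((obs, action, reward, next_obs, done))
        obs = env.reset() if done else next_obs
    return data

def deployment_efficient_loop(env, policy, train_dynamics, optimize_policy,
                              n_deployments=5, steps_per_deployment=10_000):
    """Hedged sketch: few, costly deployments; all policy improvement in
    between happens offline against a learned dynamics model."""
    dataset = []
    for _ in range(n_deployments):
        dataset += collect_episodes(env, policy, steps_per_deployment)
        model = train_dynamics(dataset)                   # e.g., an ensemble
        policy = optimize_policy(policy, model, dataset)  # offline updates
    return policy
```

The design lever is the outer loop count: deployment efficiency is measured in how few times a new policy must be pushed to the real environment, not in total gradient steps.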