Our paper was accepted for NeurIPS2021 (Spotlight)

Our paper was accepted for presentation at NeurIPS2021 (Spotlight). ◼︎Information Yusuke Iwasawa, Yutaka Matsuo. “Test-Time Classifier Adjustment Module for Model-Agnostic Domain Generalization”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview This paper presents a new algorithm for domain generalization (DG), the test-time template adjuster (T3A), aiming to develop a model that performs well under conditions…
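The teaser above only names T3A; as a rough illustration of what prototype-based test-time classifier adjustment can look like in general, the sketch below keeps per-class feature “templates”, adds confident (low-entropy) pseudo-labeled test features to them online, and classifies by cosine similarity to the per-class means. All names and details here are our simplifications for illustration, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def entropy(p):
    # Shannon entropy of a softmax distribution, used as a confidence score
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def adjust_templates(features, logits, templates, filter_k=20):
    """One test-time step of prototype ("template") adjustment (illustrative).

    features:  [B, D] outputs of a frozen feature extractor
    logits:    [B, C] outputs of the original linear classifier
    templates: dict class index -> list of (entropy, feature) pairs, initialised
               with at least one entry per class (e.g. the classifier's weight rows)
    Returns adjusted logits: cosine similarity to the per-class mean templates.
    """
    probs = logits.softmax(dim=-1)
    conf = entropy(probs)
    pseudo = probs.argmax(dim=-1)

    # Store each test feature under its pseudo-label and keep only the
    # filter_k most confident (lowest-entropy) entries per class.
    for f, y, h in zip(features, pseudo, conf):
        templates[int(y)].append((float(h), F.normalize(f.detach(), dim=0)))
        templates[int(y)] = sorted(templates[int(y)], key=lambda t: t[0])[:filter_k]

    # New class prototypes = mean of stored features; classify by similarity.
    protos = torch.stack([torch.stack([f for _, f in templates[c]]).mean(0)
                          for c in sorted(templates)])
    return F.normalize(features, dim=-1) @ F.normalize(protos, dim=-1).t()
```

In this sketch no gradient step is taken: only the stored templates change between test batches, which is what makes this style of adjustment cheap at test time.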

Our paper was accepted for NeurIPS2021

Our paper was accepted for presentation at NeurIPS2021. ◼︎Information Hiroki Furuta, Tadashi Kozuno, Tatsuya Matsushima, Yutaka Matsuo, and Shixiang Shane Gu. “Co-Adaptation of Algorithmic and Implementational Innovations in Inference-based Deep Reinforcement Learning”, Advances in Neural Information Processing Systems 2021 (NeurIPS2021). ◼︎Overview Recently, many algorithms have been devised for reinforcement learning (RL) with function approximation. While…

Recruitment now open for the 2021 “Applied Artificial Intelligence Project” (A1A2 term) for University of Tokyo senior-division undergraduates and graduate students


In the 2021 winter semester (A1A2 term), the Matsuo Laboratory will offer the “Applied Artificial Intelligence Project” (人工知能応用プロジェクト) for University of Tokyo senior-division undergraduates and graduate students.

This course is offered as one of the project-based practical courses under the Faculty of Engineering's “Creative Monozukuri Project” (創造的ものづくりプロジェクト) and the Graduate School of Engineering's “Creative Engineering Project” (創造性工学プロジェクト). Credit is awarded for participation in team development activities that build robots by applying artificial intelligence technologies.

 

Project description

・Plan and carry out a project that combines state-of-the-art deep learning techniques with robot control and robotic systems to build robots that act flexibly in the real world.

・For details, please see the URL below.

https://trail.t.u-tokyo.ac.jp/ja/courses/creativeeng2021a1a2/

 

Our paper was accepted for EMNLP2021.

◼︎Information Machel Reid, Junjie Hu, Graham Neubig, Yutaka Matsuo. “AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation for 8 African Languages”, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Junjie Hu (Carnegie Mellon University), Graham Neubig (Carnegie Mellon University), Yutaka Matsuo 【Title】AfroMT: Pretraining Strategies and Reproducible Benchmarks for…

Our paper was accepted for EMNLP2021 Findings.

◼︎Information Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo. “Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers”, Findings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP). 【Authors】Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo 【Title】Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers ◼︎Overview Transformers have shown improved performance when compared to…
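As background on what “weight sharing for parameter efficiency” can mean in practice, here is a minimal PyTorch sketch of one common form of cross-layer sharing in a Transformer encoder: the first and last layers keep their own weights while all middle layers reuse a single shared layer. This only illustrates the general idea with stock PyTorch modules; it is not the Subformer architecture itself.

```python
import torch.nn as nn

class SandwichSharedEncoder(nn.Module):
    """Transformer encoder whose middle layers share one set of weights.

    Parameter count grows with 3 layers instead of num_layers, at the cost
    of applying the same middle layer repeatedly. Illustrative sketch only.
    """
    def __init__(self, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        make = lambda: nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.first, self.shared, self.last = make(), make(), make()
        self.num_middle = num_layers - 2

    def forward(self, x):                   # x: [batch, seq_len, d_model]
        x = self.first(x)
        for _ in range(self.num_middle):
            x = self.shared(x)              # same weights at every middle layer
        return self.last(x)
```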

Our paper was accepted for publication in Machine Learning (Springer).

◼︎Information Kei Akuzawa, Yusuke Iwasawa, Yutaka Matsuo. “Information-theoretic regularization for learning global features by sequential VAE”, Mach Learn (2021). https://doi.org/10.1007/s10994-021-06032-4 ◼︎Overview Sequential variational autoencoders (VAEs) with a global latent variable z have been studied for disentangling the global features of data, which is useful for several downstream tasks. To further assist the sequential VAEs in…
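For readers unfamiliar with the setup, the sketch below shows the basic shape of a sequential VAE with a single global latent variable z: an RNN encoder summarises the whole sequence into z, and the decoder conditions every time step on z. The layer choices, and the omission of per-step latent variables and of the paper's information-theoretic regularizer, are our simplifications; this is not the proposed model.

```python
import torch
import torch.nn as nn

class SeqVAEGlobalLatent(nn.Module):
    """Minimal sequential VAE with one global latent z (illustrative sketch)."""
    def __init__(self, x_dim=32, h_dim=64, z_dim=16):
        super().__init__()
        self.enc_rnn = nn.GRU(x_dim, h_dim, batch_first=True)
        self.to_mu = nn.Linear(h_dim, z_dim)
        self.to_logvar = nn.Linear(h_dim, z_dim)
        self.dec_rnn = nn.GRU(x_dim + z_dim, h_dim, batch_first=True)
        self.to_x = nn.Linear(h_dim, x_dim)

    def forward(self, x):                          # x: [B, T, x_dim]
        _, h = self.enc_rnn(x)                     # final hidden state summarises the sequence
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterisation
        z_rep = z.unsqueeze(1).expand(-1, x.size(1), -1)        # broadcast z to every step
        out, _ = self.dec_rnn(torch.cat([x, z_rep], dim=-1))    # teacher-forced decoding
        recon = self.to_x(out)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return recon, kl
```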

Our paper was accepted for UAI2021.

◼︎Information Akiyoshi Sannai, Masaaki Imaizumi, Makoto Kawano. “Improved Generalization Bounds of Group Invariant / Equivariant Deep Networks via Quotient Feature Spaces”, 37th Conference on Uncertainty in Artificial Intelligence (UAI 2021). ◼︎Overview Numerous invariant (or equivariant) neural networks have succeeded in handling invariant data such as point clouds and graphs. However, a generalization theory for…
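As a concrete example of the kind of model the overview refers to, here is a toy permutation-invariant network in the DeepSets style for point-cloud-like inputs: a shared per-point MLP followed by symmetric (sum) pooling, so the output is unchanged under any reordering of the points. It illustrates invariant architectures in general and is not a construction from the paper.

```python
import torch.nn as nn

class PermutationInvariantNet(nn.Module):
    """Toy permutation-invariant classifier for sets of points (illustrative)."""
    def __init__(self, in_dim=3, hidden=64, out_dim=10):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        self.rho = nn.Sequential(nn.ReLU(), nn.Linear(hidden, out_dim))

    def forward(self, points):                     # points: [B, N, in_dim]
        # Sum pooling over the point dimension makes the output order-invariant
        return self.rho(self.phi(points).sum(dim=1))
```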

Our paper was accepted for ICML2021.

【Information】 Hiroki Furuta, Tatsuya Matsushima, Tadashi Kozuno, Yutaka Matsuo, Sergey Levine, Ofir Nachum, and Shixiang Shane Gu. “Policy Information Capacity: Information-Theoretic Measure for Task Complexity in Deep Reinforcement Learning”, International Conference on Machine Learning 2021 (ICML2021). July 2021. 【Overview】 Progress in deep reinforcement learning (RL) research is largely enabled by benchmark task environments. However, analyzing…
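The snippet above cuts off before the measure is defined, but to make “information-theoretic measure for task complexity” concrete, here is a hedged sketch of one way such a quantity can be estimated: treat the policy parameters as a random variable, roll out episodes, and estimate the mutual information between parameters and episodic returns, I(Θ; R) = H(R) − E_θ[H(R | θ)], with a simple histogram estimator. The interface (sample_params, rollout_return) and the estimator are assumptions made for this illustration, not the paper's exact definition or code.

```python
import numpy as np

def discrete_entropy(samples, bin_edges):
    # Histogram-based entropy estimate (in nats) of a 1-D sample
    counts, _ = np.histogram(samples, bins=bin_edges)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

def mutual_information_params_returns(sample_params, rollout_return,
                                      num_params=64, rollouts_per_param=16, bins=10):
    """Estimate I(Theta; R) = H(R) - E_theta[H(R | theta)] by Monte Carlo.

    sample_params()       -> one randomly sampled policy parameter vector
    rollout_return(theta) -> one episodic return under the policy with parameters theta
    """
    returns = np.array([[rollout_return(theta) for _ in range(rollouts_per_param)]
                        for theta in (sample_params() for _ in range(num_params))])

    # Shared bin edges keep the marginal and conditional entropies comparable
    edges = np.histogram_bin_edges(returns.ravel(), bins=bins)
    h_marginal = discrete_entropy(returns.ravel(), edges)
    h_conditional = np.mean([discrete_entropy(r, edges) for r in returns])
    return h_marginal - h_conditional
```

A larger value means the sampled parameters strongly influence the observed returns, which is one intuitive reading of a task being informative for policy search.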

Our paper was accepted for ACL-IJCNLP 2021 (Findings).

【NEWS】Our paper was accepted to ACL-IJCNLP 2021 (Findings) 【Title】LEWIS: Levenshtein Editing for Unsupervised Text Style Transfer 【Authors】Machel Reid and Victor Zhong (University of Washington) 【Overview】Many types of text style transfer can be achieved with only small, precise edits (e.g. sentiment transfer from “I had a terrible time…” to “I had a great time…”). We propose…
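To make the “small, precise edits” idea concrete, here is a tiny, model-free sketch that extracts the word-level edit operations turning a source sentence into a target one, using difflib's opcodes as a rough stand-in for Levenshtein operations. It only illustrates the edit-based view of style transfer and is unrelated to the LEWIS model itself.

```python
import difflib

def word_level_edits(src, tgt):
    """Return the (operation, source span, target span) edits between two sentences."""
    s, t = src.split(), tgt.split()
    ops = []
    for tag, i1, i2, j1, j2 in difflib.SequenceMatcher(a=s, b=t).get_opcodes():
        if tag != "equal":                        # keep only the actual edits
            ops.append((tag, " ".join(s[i1:i2]), " ".join(t[j1:j2])))
    return ops

print(word_level_edits("I had a terrible time at the restaurant",
                       "I had a great time at the restaurant"))
# -> [('replace', 'terrible', 'great')]
```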