Our paper has been accepted to the Transactions of the Japanese Society for Artificial Intelligence.
Xin Zhang, Tatsuya Matsushima, Yutaka Matsuo, Yusuke Iwasawa: M3IL: Multi-Modal Meta-Imitation Learning, Transactions of the Japanese Society for Artificial Intelligence, Vol. 38, No. 2, J-STAGE (2022)
Imitation Learning (IL) is expected to help realize intelligent robots, since it allows users to teach robots various tasks easily. In particular, Few-Shot Imitation Learning (FSIL) aims to infer and quickly adapt to unseen tasks from a small amount of data. Although FSIL requires only a few demonstrations per task, the high cost of collecting demonstrations in IL remains a critical problem: every time we want to teach the robot a new task, we must execute the task ourselves. Inspired by the fact that humans specify tasks using language instructions without executing them, we propose a multi-modal FSIL setting in this work. The model leverages image and language information in the training phase, and uses either both image and language or only language information in the testing phase. We also propose Multi-Modal Meta-Imitation Learning (M3IL), which can infer a task from image or language information alone. M3IL outperforms the baseline in both the standard and the proposed settings. Our results show the effectiveness of M3IL and the importance of language instructions in the FSIL setting.
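The core idea of inferring a task from whichever modalities are available can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear "encoders", the averaging fusion, and all dimensions are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper)
IMG_DIM, LANG_DIM, TASK_DIM = 16, 8, 4

# Random linear maps standing in for learned modality encoders
W_img = rng.standard_normal((IMG_DIM, TASK_DIM))
W_lang = rng.standard_normal((LANG_DIM, TASK_DIM))

def task_embedding(image=None, language=None):
    """Fuse whichever modalities are present into one task embedding.

    Averaging the per-modality embeddings lets the same model infer a
    task from image+language at training time, and from image-only or
    language-only input at test time.
    """
    parts = []
    if image is not None:
        parts.append(image @ W_img)
    if language is not None:
        parts.append(language @ W_lang)
    if not parts:
        raise ValueError("need at least one modality")
    return np.mean(parts, axis=0)

img = rng.standard_normal(IMG_DIM)
lang = rng.standard_normal(LANG_DIM)

z_both = task_embedding(image=img, language=lang)
z_lang = task_embedding(language=lang)
print(z_both.shape, z_lang.shape)  # both embeddings live in the same task space
```

In a meta-imitation setting, such an embedding would condition the policy; the point here is only that the fused representation has the same shape regardless of which modalities were observed.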