  • Our laboratory's paper has been accepted to the Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌).

    ◼︎Bibliographic information
    Xin Zhang, Shixiang Shane Gu, Yutaka Matsuo, Yusuke Iwasawa: Domain Prompt Learning for Efficiently Adapting CLIP to Unseen Domains. Transactions of the Japanese Society for Artificial Intelligence (人工知能学会論文誌), Vol. 38, No. 6. J-STAGE
    ◼︎Abstract
    Domain generalization (DG) is a difficult transfer learning problem that aims to learn a model that generalizes to unseen domains. Recent foundation models (FMs) are robust to many distribution shifts and should therefore substantially improve the performance of DG. In this work, we study generic ways to adopt contrastive language-image pre-training (CLIP), a visual-language foundation model, for DG problems in image classification. While empirical risk minimization (ERM) with bigger backbones and training datasets greatly improves accuracy on standard DG benchmarks, fine-tuning FMs is not practical in many real-world situations. We propose Domain Prompt Learning (DPL), a novel approach to domain inference in the form of conditional prompt generation. DPL achieved a significant accuracy improvement by training only a lightweight prompt generator (a three-layer MLP), whose parameter count is comparable to that of the classification projector used in the previous DG literature. Combining DPL with CLIP yields surprisingly strong performance, raising the accuracy of zero-shot CLIP from 73.7% to 79.3% on several standard datasets, namely PACS, VLCS, OfficeHome, and TerraIncognita. We hope the simplicity and success of our approach lead to broader adoption and analysis of foundation models in the domain generalization field.
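    ◼︎Illustrative sketch
    To make the conditional prompt-generation idea concrete, below is a minimal PyTorch sketch of a DPL-style pipeline. It is not the authors' released implementation: the module names (PromptGenerator, FrozenTextEncoderStub, dpl_logits), the 512-dimensional features, the 16 context vectors, and the stand-in text encoder are illustrative assumptions; in the actual method the frozen CLIP image and text encoders play these roles.

```python
# Minimal DPL-style sketch in plain PyTorch. All names and dimensions here are
# hypothetical placeholders; only the overall structure (a small trainable MLP
# generating domain-conditioned prompt context for a frozen text encoder) follows
# the idea described in the abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PromptGenerator(nn.Module):
    """Three-layer MLP mapping a domain-level image feature to prompt context vectors."""

    def __init__(self, feat_dim=512, hidden_dim=512, n_ctx=16, ctx_dim=512):
        super().__init__()
        self.n_ctx, self.ctx_dim = n_ctx, ctx_dim
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, n_ctx * ctx_dim),
        )

    def forward(self, domain_feature):              # (feat_dim,)
        ctx = self.mlp(domain_feature)              # (n_ctx * ctx_dim,)
        return ctx.view(self.n_ctx, self.ctx_dim)   # (n_ctx, ctx_dim)


class FrozenTextEncoderStub(nn.Module):
    """Stand-in for CLIP's frozen text encoder: maps per-class prompt embeddings
    (generated context + class-name tokens) to one text feature per class."""

    def __init__(self, ctx_dim=512, out_dim=512):
        super().__init__()
        self.proj = nn.Linear(ctx_dim, out_dim)
        for p in self.parameters():
            p.requires_grad = False                 # stays frozen, like CLIP

    def forward(self, prompt_embeddings):           # (n_cls, seq_len, ctx_dim)
        return self.proj(prompt_embeddings.mean(dim=1))  # (n_cls, out_dim)


def dpl_logits(image_features, class_token_embeddings, prompt_gen, text_encoder, scale=100.0):
    """Domain-conditioned, zero-shot-style classification logits.

    image_features:         (batch, feat_dim)        frozen image features
    class_token_embeddings: (n_cls, n_tok, ctx_dim)  frozen class-name token embeddings
    """
    # Infer the domain from the batch itself by averaging its image features.
    domain_feature = image_features.mean(dim=0)                    # (feat_dim,)
    ctx = prompt_gen(domain_feature)                               # (n_ctx, ctx_dim)

    # Prepend the generated context to every class's token embeddings.
    n_cls = class_token_embeddings.size(0)
    ctx = ctx.unsqueeze(0).expand(n_cls, -1, -1)                   # (n_cls, n_ctx, ctx_dim)
    prompts = torch.cat([ctx, class_token_embeddings], dim=1)      # (n_cls, n_ctx + n_tok, ctx_dim)

    # Encode prompts with the frozen text encoder and score by cosine similarity.
    text_features = F.normalize(text_encoder(prompts), dim=-1)     # (n_cls, out_dim)
    image_features = F.normalize(image_features, dim=-1)           # (batch, feat_dim)
    return scale * image_features @ text_features.t()              # (batch, n_cls)


if __name__ == "__main__":
    torch.manual_seed(0)
    prompt_gen = PromptGenerator()
    text_encoder = FrozenTextEncoderStub()
    image_features = torch.randn(8, 512)     # stand-in for frozen CLIP image features
    class_tokens = torch.randn(7, 4, 512)    # stand-in for 7 classes (e.g. PACS)
    logits = dpl_logits(image_features, class_tokens, prompt_gen, text_encoder)
    print(logits.shape)                      # torch.Size([8, 7])
```

    The key point of the design is that only the three-layer MLP receives gradients; everything standing in for CLIP remains frozen, which is why the trainable parameter count stays on the scale of a classification projector.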