Research

Publications: Computer Vision

    • Proposal of a Knowledge Distillation Method from Vision Transformer to Convolutional Neural Networks

      前羽 利治, 河野 慎, 松尾 豊

      IPSJ SIG Technical Report on Ubiquitous Computing Systems (UBI), FY2024, Outstanding Paper Award

    • Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model

      Xin Zhang*, Jiaxian Guo*, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa

      2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024)

    • DreamSparse: Escaping from Plato’s Cave with 2D Frozen Diffusion Model given Sparse Views

      Paul Yoo, Jiaxian Guo, Xin Zhang, Yutaka Matsuo, Shixiang Shane Gu

      XRNerf workshop of The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)

    • HAWK-Net: Hierarchical Attention Weighted Top-K Network for Megapixel Image Classification

      Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

      IPSJ, (2023).

    • Proposal of a DeepFake Video Detection Method Using Facial Angle Information

      蔭山智, 鈴木雅大, 落合桂一, 松尾豊

      IEICE Transactions on Information and Systems (Japanese Edition D), (2023).

    • Paste, Inpaint and Harmonize via Denoising: Subject-Driven Image Editing with Pre-Trained Diffusion Model

      Xin Zhang, Jiaxian Guo, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa

      AI4CC workshop of The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023)

    • Robustifying Vision Transformer Without Retraining From Scratch Using Attention-Based Test-Time Adaptation

      Takeshi Kojima, Yusuke Iwasawa, Yutaka Matsuo

      New Generation Computing, (2022).

    • Fixing the train-test objective discrepancy: Iterative Image Inpainting for Unsupervised Anomaly Detection

      Hitoshi Nakanishi, Masahiro Suzuki, Yutaka Matsuo

      J-STAGE, Vol. 30, August 2022.

    • Robustifying Vision Transformer without Retraining from Scratch by Test-Time Class-Conditional Feature Alignment

      Takeshi Kojima, Yutaka Matsuo, and Yusuke Iwasawa.

      The 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence (IJCAI-ECAI 2022)