• Our lab's paper has been accepted to ICASSP 2024.

    ■Bibliographic Information
    Xin Zhang*, Jiaxian Guo*, Paul Yoo, Yutaka Matsuo, Yusuke Iwasawa. “Paste and Harmonize via Denoising: Subject-Driven Image Editing with Frozen Pre-trained Diffusion Model”. ICASSP 2024

    ■Abstract
    Text-to-image generative models have attracted rising attention for flexible image editing via user-specified descriptions. However, text descriptions alone are not enough to elaborate the details of subjects, often compromising subject identity or requiring additional per-subject fine-tuning. We introduce a new framework called Paste, Inpaint and Harmonize via Denoising (PhD), which leverages an exemplar image in addition to text descriptions to specify user intentions. In the pasting step, an off-the-shelf segmentation model identifies a user-specified subject within an exemplar image, which is then inserted into a background image to serve as an initialization that captures both scene context and subject identity. To guarantee the visual coherence of the generated or edited image, we introduce an inpainting and harmonizing module that guides the pre-trained diffusion model to blend the inserted subject into the scene seamlessly. Because we keep the pre-trained diffusion model frozen, we preserve its strong image synthesis and text-driven editing abilities, achieving high-quality results and flexible editing with diverse texts. In our experiments, we apply PhD to subject-driven image editing tasks and explore text-driven scene generation given a reference subject. Both quantitative and qualitative comparisons with baseline methods demonstrate that our approach achieves state-of-the-art performance in both tasks.
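    The pasting step described in the abstract — inserting a segmented subject into a background image as an initialization — can be illustrated as a simple mask-based composite. This is a minimal sketch, not the paper's implementation: the function name `paste_subject` and the toy arrays are hypothetical, and in practice the binary mask would come from an off-the-shelf segmentation model as the authors describe.

    ```python
    import numpy as np

    def paste_subject(background, subject, mask, top, left):
        """Composite a segmented subject onto a background image.

        background: (H, W, 3) float array, the target scene
        subject:    (h, w, 3) float array, the exemplar crop
        mask:       (h, w) binary array (1 = subject pixel, 0 = ignore),
                    e.g. produced by a segmentation model
        top, left:  paste position inside the background
        """
        out = background.copy()
        h, w = mask.shape
        region = out[top:top + h, left:left + w]
        # Keep subject pixels where the mask is on; keep background elsewhere.
        region[:] = np.where(mask[..., None] == 1, subject, region)
        return out

    # Toy example: 4x4 black background, 2x2 white subject, L-shaped mask.
    bg = np.zeros((4, 4, 3))
    subj = np.ones((2, 2, 3))
    m = np.array([[1, 1], [1, 0]])
    composite = paste_subject(bg, subj, m, top=1, left=1)
    ```

    The composite alone leaves a visible seam at the mask boundary, which is exactly what the paper's inpainting-and-harmonizing module, driven by the frozen diffusion model, is introduced to resolve.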