  • Our paper has been accepted for publication in the Transactions of the Japanese Society for Artificial Intelligence.

    ◼︎ Bibliographic Information
    Y. Kobayashi, M. Suzuki, and Y. Matsuo: Scene Interpretation Using Background Information with Deep Generative Model, Transactions of the Japanese Society for Artificial Intelligence, Vol. 38, No. 3, 2023.
    ◼︎ Overview
    The ability to understand one's surrounding environment compositionally, by decomposing it into individual components, is an important cognitive ability. Human beings decompose arbitrary entities into parts based on their semantics or functionality, and recognize those parts as "objects". Recently, research on "scene interpretation" has been conducted using deep generative models; these studies build models that recognize the environment compositionally. However, the application of existing methods is restricted to simple images, and they cannot deal with complex scenes. This is because previous works operate in a fully unsupervised manner, with an objective that merely minimizes reconstruction error; such models therefore have no clues about what constitutes an object, unlike models that leverage supervised information or inductive bias. In this research, we propose a method to decompose scenes as intended using minimal auxiliary information to identify objects. We build a model that utilizes the background as auxiliary information to separate the representations of background and foreground, and we show that our method is able to handle datasets that are difficult for existing methods.
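    The core idea, using a known background as auxiliary information to separate foreground objects, can be illustrated with a toy sketch. This is not the paper's deep generative model (which learns the separation end-to-end); it is a hypothetical per-pixel comparison that shows what the background signal makes possible:

    ```python
    import numpy as np

    def split_foreground(scene, background, threshold=0.1):
        """Toy illustration: treat any pixel that differs from the known
        background as foreground. The paper's model learns separate latent
        representations for background and foreground instead of using a
        hand-set threshold like this."""
        diff = np.abs(scene.astype(float) - background.astype(float))
        mask = diff.max(axis=-1) > threshold      # pixel deviates from background
        foreground = np.where(mask[..., None], scene, 0.0)
        return mask, foreground

    # Hypothetical 2x2 RGB example: a single "red object" pixel on an
    # all-black background.
    bg = np.zeros((2, 2, 3))
    scene = bg.copy()
    scene[0, 0] = [1.0, 0.0, 0.0]
    mask, fg = split_foreground(scene, bg)
    ```

    Here the background image supplies exactly the "clue about objects" that fully unsupervised reconstruction objectives lack: anything unexplained by the background must belong to an object.
    
    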