Our paper has been accepted to IJCAI 2020.
[Title] Stabilizing Adversarial Invariance Induction from a Divergence Minimization Perspective
[Summary]
Adversarial invariance induction (AII) is a generic and powerful framework for enforcing invariance to nuisance attributes in neural network representations. However, its optimization is often unstable, and little is known about its practical behavior. This paper analyzes the reasons for these optimization difficulties and provides a better optimization procedure by rethinking AII from a divergence minimization perspective. Interestingly, this perspective reveals a cause of the difficulties: the original AII objective does not ensure proper divergence minimization, which is required for learning invariant representations. We then propose a simple variant of AII, called invariance induction by discriminator matching, which addresses this issue. Our method consistently achieves near-optimal invariance on toy datasets across various configurations in which the original AII is catastrophically unstable. Extensive experiments on four real-world datasets also support the superior performance of the proposed method, leading to improved user anonymization and domain generalization.
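As background, the sketch below illustrates the generic AII setup that the summary refers to: a discriminator is trained to predict the nuisance attribute from the representation, while the encoder is trained to solve the task and fool the discriminator. This is a minimal, hypothetical PyTorch sketch of that standard min-max scheme only; the network sizes, data, and trade-off weight are illustrative assumptions, and it does not implement the discriminator-matching variant proposed in the paper.

```python
# Illustrative sketch of generic adversarial invariance induction (AII).
# All shapes, modules, and the weight `lam` are hypothetical choices,
# not taken from the paper.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: inputs x, task labels y, and a nuisance attribute a to be removed.
x = torch.randn(256, 16)
y = torch.randint(0, 2, (256,))
a = torch.randint(0, 2, (256,))

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
task_head = nn.Linear(8, 2)       # predicts the task label y from z = encoder(x)
discriminator = nn.Linear(8, 2)   # predicts the nuisance attribute a from z

opt_main = torch.optim.Adam(
    list(encoder.parameters()) + list(task_head.parameters()), lr=1e-3)
opt_disc = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()
lam = 1.0  # task-vs-invariance trade-off weight (hypothetical)

for step in range(200):
    # 1) Train the discriminator to predict the nuisance attribute from z.
    z = encoder(x).detach()
    disc_loss = ce(discriminator(z), a)
    opt_disc.zero_grad()
    disc_loss.backward()
    opt_disc.step()

    # 2) Train the encoder and task head: solve the task while fooling the
    #    discriminator (the negative term encourages invariance to a).
    z = encoder(x)
    main_loss = ce(task_head(z), y) - lam * ce(discriminator(z), a)
    opt_main.zero_grad()
    main_loss.backward()
    opt_main.step()
```

The alternating updates above are the part of AII whose instability the paper analyzes; see the paper for the proposed fix.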
[Authors] Yusuke Iwasawa, Kei Akuzawa, Yutaka Matsuo
[Paper link] https://www.ijcai.org/Proceedings/2020/271