◼︎ Bibliography
Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo, "Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers", Findings of the Association for Computational Linguistics: EMNLP 2021.
Author: Machel Reid, Edison Marrese-Taylor, Yutaka Matsuo
Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers
◼︎ Overview
Transformers have shown improved performance compared to previous architectures for sequence processing such as RNNs. Despite their sizeable performance gains, as recently suggested, the model is computationally expensive to train and carries a high parameter budget. We perform an analysis of different parameter sharing and reduction methods and develop the Subformer. Our model combines sandwich-style parameter sharing, which overcomes the shortcomings of naive cross-layer parameter sharing in generative models, with self-attentive embedding factorization (SAFE). Experiments on machine translation, abstractive summarization and language modeling show that the Subformer can outperform the Transformer even when using significantly fewer parameters.
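To make the sandwich-style sharing concrete, the sketch below shows one way the idea can be realized in PyTorch: the first and last encoder layers keep their own weights, while a single shared layer is reused for all central positions. The class name, layer sizes, and the use of a standard nn.TransformerEncoderLayer are illustrative assumptions, not the authors' implementation (which additionally applies SAFE to the embeddings).

import torch
import torch.nn as nn

# Minimal sketch of sandwich-style parameter sharing (assumption: first and
# last layers keep independent weights; all central layers reuse one shared
# layer instance). Illustrative only, not the authors' implementation.
class SandwichSharedEncoder(nn.Module):
    def __init__(self, d_model=512, nhead=8, num_layers=6):
        super().__init__()
        def make_layer():
            return nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.first = make_layer()    # independent parameters
        self.shared = make_layer()   # single weight set reused for the middle
        self.last = make_layer()     # independent parameters
        self.num_central = max(num_layers - 2, 0)

    def forward(self, x):
        x = self.first(x)
        for _ in range(self.num_central):   # same module, hence same weights, applied repeatedly
            x = self.shared(x)
        return self.last(x)

model = SandwichSharedEncoder(num_layers=6)
out = model(torch.randn(2, 10, 512))   # (batch, sequence, d_model)
print(out.shape)                       # torch.Size([2, 10, 512])

Compared to a vanilla 6-layer encoder, only three layers' worth of weights are stored here, which is the kind of parameter saving the sharing scheme targets; the computation per forward pass is unchanged.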