Research


Publications


  • DiffusER: Diffusion via Edit-based Reconstruction

    Machel Reid, Vincent Josua Hellendoorn, Graham Neubig

    International Conference on Learning Representations (ICLR 2023)

  • Large Language Models are Zero-Shot Reasoners

    Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, Yusuke Iwasawa

    Neural Information Processing Systems (NeurIPS 2022)

  • Indexing Financial and Economic Reports with LSTM Models

    山本裕樹, 落合桂一, 鈴木雅大, 松尾豊

    IPSJ Transactions on Digital Practices (2022)

  • On the Impact of Data Augmentation on Downstream Performance in Natural Language Processing

    Itsuki Okimura, Machel Reid, Makoto Kawano and Yutaka Matsuo.

    Proceedings of the Third Workshop on Insights from Negative Results in NLP, Online and Dublin, Ireland. Association for Computational Linguistics, 2022. Best Paper Award

  • A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for African News Translation

    David Ifeoluwa Adelani, Jesujoba Oluwadara Alabi, Angela Fan, Julia Kreutzer, Xiaoyu Shen, Machel Reid, Dana Ruiter, Dietrich Klakow, Peter Nabende, Ernie Chang, Tajuddeen Gwadabe, Freshia Sackey, Bonaventure F. P. Dossou, Chris Chinenye Emezue, Colin Leong, Michael Beukman, Shamsuddeen Hassan Muhammad, Guyo Dub Jarso, Oreen Yousuf, Andre Niyongabo Rubungo, Gilles HACHEME, Eric Peter Wairagala, Muhammad Umair Nasir, Benjamin Ayoade Ajibade, Tunde Oluwaseyi Ajayi, Yvonne Wambui Gitau, Jade Abbott, Mohamed Ahmed, Millicent Ochieng, Anuoluwapo Aremu, Perez Ogayo, Jonathan Mukiibi, Fatoumata Ouoba Kabore, Godson Koffi KALIPE, Derguene Mbaye, Allahsera Auguste Tapo, Victoire Memdjokam Koagne, Edwin Munkoh-Buabeng, Valencia Wagner, Idris Abdulmumin, Ayodele Awokoya

    The 2022 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022). July 2022. Association for Computational Linguistics

  • PARADISE: Exploiting Parallel Data for Multilingual Sequence-to-Sequence Pretraining

    Machel Reid and Mikel Artetxe.

    7th Workshop on Representation Learning for NLP (non-archival), ACL 2022.

  • The 25th Meeting on Image Recognition and Understanding (MIRU2022), MIRU Excellence Award: Pixel vs. Object: A Study of Optimal Image Representations for Change Captioning

    土居健人, 濱口竜平, 岩澤有祐, 大西正輝, 松尾豊, 櫻田健


  • AfroMT: Pretraining Strategies and Reproducible Benchmarks for Translation of 8 African Languages

    Machel Reid, Junjie Hu, Graham Neubig and Yutaka Matsuo

    The 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP 2021). November 2021. Association for Computational Linguistics.

  • Subformer: Exploring Weight Sharing for Parameter Efficiency in Generative Transformers

    Machel Reid, Edison Marrese-Taylor and Yutaka Matsuo.

    Findings of The 2021 Conference on Empirical Methods in Natural Language Processing (Findings of EMNLP 2021). Association for Computational Linguistics.