【Matsuo-Iwasawa Lab】20 Researcher Positions are Available at The University of Tokyo

    ■ About Us

    The Matsuo-Iwasawa Lab at the University of Tokyo operates under the mission of “Reproduce human intelligence through deep learning” and engages in a wide range of research areas, including the development of deep learning and the foundational technologies beyond it, such as world models, robotics, large language models, and the societal implementation of algorithms. With over ten full-time researchers (and plans to continue increasing this number), the lab had 14 papers accepted at top conferences such as ICML, ICLR, NAACL, and ICRA in fiscal year 2023. (Our current research achievements can be found here.)

    To expand these activities further, the lab is launching the first phase of its research internship program. Under the mentorship of researchers active at the forefront of each field, interns will experience research activities within the lab that are usually not visible from the outside.



    ■ Description

    Our laboratory already has a large number of researchers, but in order to achieve more impactful results, we have decided to hire an additional 20. Our major research units are currently 1) World Models (deep generative models, deep reinforcement learning, multimodal learning), 2) Robotics, 3) Large Language Models, 4) Social Implementation, and 5) Brain-Inspired Intelligence. We are looking for candidates who align with these research themes, but applications are not limited to them. We welcome applicants with diverse backgrounds who are eager to create breakthroughs by conducting fundamental research on intelligence and by leveraging these advances in fields such as robotics and natural language processing.

    【Working Conditions】

    • Job Title / Positions Available: Project Researcher, Project Assistant Professor, Project Lecturer, Project Associate Professor
    • Type: Full-time staff
    • Number of Positions: 20
    • Compensation:
      6,000,000 to 14,400,000 JPY
    • Location: Matsuo Research Lab, Faculty of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, Engineering Building No. 2 (Remote work possible)
    • Required Qualifications:
      – Must hold a Ph.D. degree or expect to obtain one by March 2025
      – Experience in research and publication in machine learning
      – Experience with deep learning libraries such as PyTorch, TensorFlow, JAX
      (Additional requirements for Project Assistant Professor, Project Lecturer, and Project Associate Professor):
      – Research achievements including lead-authored papers at top international conferences in related fields
      – Machine Learning Fundamentals: ICLR, ICML, NeurIPS
      – Language: ACL, NAACL, EMNLP
      – Computer Vision: CVPR, ECCV, ICCV
      – Robotics: IROS, ICRA, RSS, CoRL
    • Research Environment:
      – Dedicated teams for lecture operations and secretarial support (10 members), allowing researchers to focus on research and supervision
      – Research expenses covered by the laboratory
      – Computers, books, travel/conference expenses (including international)
      – Computing resources (on-premises environment plus ABCI, Wisteria/BDEC-01, SQUID)
      – Robots (a total of 17 robots including HSR, Sawyer, XArm7)
      – Student intern salaries
    • Selection Process:
      Document screening → Interviews (online, about 2 rounds including research theme discussions) → Reference check + final interview with Prof. Matsuo (online) → Notification of acceptance or rejection

    ■ Examples of research projects in the Matsuo-Iwasawa Lab

    • World Model
      • We study world models, which use deep learning to model the real world and to infer and predict representations of it. Our research focuses on developing world models that understand and predict the interactions among multiple objects in an environment, and on scaling these models up. We also study the crucial question of how such models should handle time.
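At its core, a world model learns a transition function that predicts the next state from the current state and action. A minimal sketch of this idea, in its simplest linear, noise-free form, fits such dynamics by least squares; all names, shapes, and data here are illustrative, not the lab's actual models:

```python
import numpy as np

# Toy "world model": learn linear latent dynamics s_{t+1} = A s_t + B a_t
# from randomly generated transitions (a hypothetical setup for illustration).
rng = np.random.default_rng(0)
state_dim, action_dim, n = 4, 2, 500

A_true = rng.normal(size=(state_dim, state_dim)) * 0.3
B_true = rng.normal(size=(state_dim, action_dim)) * 0.3

S = rng.normal(size=(n, state_dim))           # current states
U = rng.normal(size=(n, action_dim))          # actions
S_next = S @ A_true.T + U @ B_true.T          # next states (noise-free)

# Fit [A | B] jointly by least squares on the stacked (state, action) inputs.
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:state_dim].T, W[state_dim:].T

pred = S @ A_hat.T + U @ B_hat.T
print(float(np.abs(pred - S_next).max()))     # near zero: dynamics recovered
```

Real world models replace the linear maps with deep generative networks and learn in a latent space, but the predict-the-next-state objective is the same.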

    • Large Language Models (LLMs)
      • Our lab has conducted significant research demonstrating the importance of prompt engineering in harnessing the inference capabilities of LLMs. We have developed our own large-scale language model, Weblab-10B. Recently, we have been working on studies related to adversarial learning and the principles of in-context learning aimed at enhancing control and understanding of large language models.
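In-context learning, the phenomenon behind prompt engineering, means the model infers a task from demonstrations placed in the prompt, with no weight updates. A minimal sketch of few-shot prompt construction (the format and examples are illustrative, not the lab's):

```python
# Minimal few-shot prompt builder: concatenate input/output demonstrations
# before the query so a language model can infer the task in context.
def build_few_shot_prompt(examples, query):
    shots = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in examples)
    return f"{shots}\nInput: {query}\nOutput:"

prompt = build_few_shot_prompt(
    [("2 + 2", "4"), ("3 + 5", "8")],  # demonstrations of the target task
    "7 + 6",
)
print(prompt.endswith("Input: 7 + 6\nOutput:"))  # → True
```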

    • Algorithm
      • Our lab aims to develop deep learning algorithms that go beyond traditional error backpropagation, more closely mirroring the human brain’s parallel processing. Specifically, we are exploring backprop-free learning methods based on energy-based models and searching for well-performing substructures within networks, following the Strong Lottery Ticket Hypothesis.
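The Strong Lottery Ticket Hypothesis says a sufficiently large randomly initialized network already contains a subnetwork that performs well without any weight training; learning then reduces to searching over binary masks. A minimal sketch under that assumption, using simple random local search on a toy regression task (the network, target, and search procedure here are illustrative only):

```python
import numpy as np

# Strong Lottery Ticket sketch: the weight matrix W is random and never
# trained; we search only over binary masks for a subnetwork that fits
# a hypothetical target function.
rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))                   # fixed random weights
x = rng.normal(size=(64, 8))
y = x @ np.diag([1, 0, 1, 0, 1, 0, 1, 0])     # illustrative target mapping

def loss(mask):
    """MSE of the masked subnetwork x @ (W * mask) against the target."""
    return float(np.mean((x @ (W * mask) - y) ** 2))

best_mask = rng.integers(0, 2, size=W.shape)  # random initial subnetwork
init_loss = best = loss(best_mask)
for _ in range(2000):                         # local search: flip one entry
    cand = best_mask.copy()
    i, j = rng.integers(0, 8, size=2)
    cand[i, j] ^= 1                           # toggle one connection on/off
    if (l := loss(cand)) < best:              # keep the flip if it helps
        best_mask, best = cand, l

print(best, "<=", init_loss)                  # loss never increases
```

Published mask-search methods (e.g. learned score-based pruning) are far more effective than this flip search, but the weights-stay-frozen principle is the same.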

    • Robotics
      • We utilize large-scale data collected from real-world robots and simulators for machine learning-based robot control, focusing on imitation learning and reinforcement learning. Our research aims to create versatile robotic systems capable of handling diverse tasks and environments, such as home settings. Recently, our lab has been building foundation models for robotics using large-scale data, collecting robot data via teleoperation, and integrating foundation models such as LLMs with robotic systems. In the summer of 2023, our lab won third place at a global competition using Toyota’s HSR service robots and first place at a national competition in Japan.
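The simplest form of imitation learning is behavior cloning: fit a policy by supervised regression on expert state-action pairs. A minimal sketch with a linear policy and ridge regression (the "expert", dimensions, and noise level are all illustrative, not the lab's robots):

```python
import numpy as np

# Behavior cloning sketch: fit a linear policy a = K s to noisy "expert"
# demonstrations by ridge-regularized least squares.
rng = np.random.default_rng(2)
state_dim, action_dim, n = 6, 2, 400

K_expert = rng.normal(size=(action_dim, state_dim))   # hypothetical expert
S = rng.normal(size=(n, state_dim))                   # demonstrated states
A = S @ K_expert.T + 0.01 * rng.normal(size=(n, action_dim))  # expert actions

# Closed-form ridge regression: K = ((S^T S + lam I)^-1 S^T A)^T.
lam = 1e-3
K_bc = np.linalg.solve(S.T @ S + lam * np.eye(state_dim), S.T @ A).T

err = float(np.abs(K_bc - K_expert).max())
print(err)  # small: the cloned policy matches the expert up to noise
```

In practice the linear map becomes a deep network trained by gradient descent, and reinforcement learning or interactive data collection corrects the compounding errors that pure cloning suffers from.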

    • Social Implementation
      • We are dedicated to addressing societal challenges through deep learning and machine learning technologies. Demands on computational resources and data sizes are rising rapidly, especially following the advent of LLMs and foundation models, yet many industries lack adequate environments. Our research therefore also focuses on methodologies that work effectively under such constraints.

    • Brain Models
      • We design hypothetical computational models of crucial brain regions, such as the inter-area connections of the neocortex, local neocortical circuits, the cerebellum, basal ganglia, hippocampus, and amygdala. This involves understanding how these parts interact and function as a whole, and modeling those interactions.

      • Required Skills: an interest in reading neuroscience papers in depth and in designing computational models grounded in anatomical structure. We seek individuals capable of discussing the modeling of complex biological systems.


    Inquiry: recruit@weblab.t.u-tokyo.ac.jp