Perception, Action, Learning

From Metric-Semantic Scene Understanding to High-level Task Execution


This workshop brings together researchers from robotics, computer vision, and machine learning to examine the challenges and opportunities emerging at the boundary between spatial perception and high-level task execution. Recent years have seen growing interest in metric-semantic understanding, which consists of building a semantically annotated (or object-oriented) model of the environment; this interest is pushing researchers beyond traditional SLAM towards more advanced forms of spatial perception. In parallel, researchers have been studying high-level task execution using modern tools from reinforcement learning as well as traditional decision making. Combining these research efforts in perception and task execution has the potential to enable applications such as visual question answering and object search and retrieval, and to provide more intuitive ways for users to interact with robots.

This workshop creates an opportunity to connect researchers working on metric-semantic perception and high-level task execution, and will bring forward the latest breakthroughs and cutting-edge research in both areas. Besides the usual mix of invited talks and poster presentations, the workshop includes two interactive activities. First, we will provide a hands-on tutorial on a state-of-the-art library for metric-semantic reconstruction, useful to both researchers and practitioners. Second, we will organize the GOSEEK challenge (details to follow), in conjunction with the release of a photo-realistic Unity-based simulator, in which participants will need to combine perception and high-level decision making to find an object in a complex indoor environment.


Luca Carlone

Assistant Professor

Massachusetts Institute of Technology

Dan Griffith

Technical Staff

MIT Lincoln Laboratory

Sanjeev Mohindra

Associate Group Leader

MIT Lincoln Laboratory


Time Activity Speaker
08:45-09:00 Registration, welcome, and competition overview -
09:00-09:30 Invited talk Leslie Kaelbling (MIT)
09:30-10:00 Poster Spotlights -
10:00-10:30 Coffee Break -
10:30-11:00 Invited talk Raia Hadsell (DeepMind)
11:00-11:30 Poster Spotlights -
11:30-12:00 Invited talk Dhruv Batra (Georgia Tech)
12:00-12:30 Invited talk Sertac Karaman (MIT)
12:30-13:30 Lunch break -
13:30-14:00 Invited talk Andrew Davison (Imperial College)
14:00-14:30 Invited talk Cesar Cadena (ETH Zurich)
14:30-15:00 Hands-on Tutorial: Metric-Semantic Mapping -
15:00-15:30 Coffee break & poster session -
15:30-16:00 Invited talk Marco Pavone (Stanford)
16:00-16:30 Invited talk Davide Scaramuzza (UZurich)
16:30-17:00 Keynote presentation: competition winner -
17:00-17:30 Panel discussion and concluding remarks -


We are organizing the GOSEEK competition, in which participants create an RL agent that combines perception and high-level decision making to search for objects placed within complex indoor environments in a photo-realistic Unity-based simulator. Simply put: like Pac-Man, but in a realistic scene and with realistic perception capabilities. Several data modalities will be provided from both the simulator ground truth and a perception pipeline (e.g., images, depth, agent location), so that participants can focus on the RL/search aspects. The contest will be hosted on the EvalAI platform, where participants will submit solutions as Docker containers, run on AWS instances for scoring. The winner of the competition will receive a monetary prize and give a keynote presentation at the workshop.
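To illustrate the kind of agent loop the competition involves, here is a minimal sketch. Everything in it, including the environment class, action names, and observation keys, is a hypothetical mock for illustration and not the actual GOSEEK or simulator API:

```python
class MockGoSeekEnv:
    """Toy stand-in for a search environment (names are assumptions).

    Each observation bundles the modalities mentioned above:
    an RGB image, a depth map, and the agent's pose.
    """
    ACTIONS = ["forward", "turn_left", "turn_right", "declare_found"]

    def __init__(self, max_episode_steps=5):
        self.max_episode_steps = max_episode_steps
        self.steps = 0

    def reset(self):
        self.steps = 0
        return self._observe()

    def _observe(self):
        # Placeholder observation; a real simulator would render these.
        return {
            "rgb": [[0] * 4 for _ in range(4)],      # tiny dummy image
            "depth": [[1.0] * 4 for _ in range(4)],  # tiny dummy depth map
            "pose": (0.0, 0.0, 0.0),                 # (x, y, heading)
        }

    def step(self, action):
        self.steps += 1
        done = action == "declare_found" or self.steps >= self.max_episode_steps
        reward = 1.0 if action == "declare_found" else 0.0
        return self._observe(), reward, done


def run_episode(env, policy, max_steps=20):
    """Roll out one episode with the given policy; return total reward."""
    obs = env.reset()
    total = 0.0
    for _ in range(max_steps):
        obs, reward, done = env.step(policy(obs))
        total += reward
        if done:
            break
    return total
```

A trivial policy that immediately declares the target found, e.g. `run_episode(MockGoSeekEnv(), lambda obs: "declare_found")`, earns reward 1.0; an actual entry would replace the policy with a trained network that consumes the image, depth, and pose observations.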

To get updates about the challenge, please send an email to “” with subject “GOSEEK: subscribe” and we will add you to our mailing list!




Submission link:

Participants are invited to submit an extended abstract or a short paper (up to 4 pages in ICRA format) focusing on novel advances in spatial perception, in reinforcement learning, and at the boundary between these research areas. Topics of interest include, but are not limited to:

  • Novel algorithms for spatial perception that combine geometry, semantics, and physics, and allow reasoning over spatial, semantic, and temporal aspects;
  • Learning techniques that can produce cognitive representations directly from complex sensory inputs;
  • Approaches that combine learning-based techniques with geometric and model-based estimation methods;
  • Novel transfer learning and meta-learning methods for reinforcement learning;
  • Novel RL approaches that leverage domain knowledge and existing (model-free and model-based) methods for perception and planning; and
  • Position papers and unconventional ideas on how to reach human-level performance in robot perception and task execution.

Contributed papers will be reviewed by the organizers and a program committee of invited reviewers. Accepted papers will be published on the workshop website and will be featured in spotlight presentations and poster sessions.


  • Submission Deadline: March 30, 2020
  • Notification of Acceptance: April 30, 2020
  • Workshop Date: May 31, 2020