Perception, Action, Learning

From Metric-Semantic Scene Understanding to High-level Task Execution


Overview

This workshop brings together researchers from robotics, computer vision, and machine learning to examine the challenges and opportunities emerging at the boundary between spatial perception and high-level task execution. Recent years have seen growing interest in metric-semantic understanding, which consists of building a semantically annotated (or object-oriented) model of the environment. This trend is pushing researchers beyond traditional SLAM towards more advanced forms of spatial perception. In parallel, researchers have been tackling high-level task execution using modern tools from reinforcement learning as well as traditional decision making. Combining these research efforts in perception and task execution has the potential to enable applications such as visual question answering and object search and retrieval, and to provide more intuitive ways for users to interact with robots.

This workshop offers an opportunity for exchange between researchers working on metric-semantic perception and those working on high-level task execution, and will bring forward the latest breakthroughs and cutting-edge research in both areas. Besides the usual mix of invited talks and poster presentations, the workshop includes two interactive activities. First, we will provide a hands-on tutorial on a state-of-the-art library for metric-semantic reconstruction, which can be useful to both researchers and practitioners. Second, we will organize the GOSEEK challenge (details below), in conjunction with the release of a photo-realistic Unity-based simulator, in which participants must combine perception and high-level decision making to find objects in a complex indoor environment.


Schedule

Note: due to the coronavirus outbreak, ICRA 2020 will be either virtualized or postponed. The PAL workshop will be postponed or virtualized accordingly, following the ICRA recommendations.
Time Activity Speaker
08:45-09:00 Registration, welcome, and competition overview -
09:00-09:30 Invited talk Leslie Kaelbling (MIT)
09:30-10:00 Poster Spotlights -
10:00-10:30 Coffee Break -
10:30-11:00 Invited talk Raia Hadsell (DeepMind)
11:00-11:30 Poster Spotlights -
11:30-12:00 Invited talk Dhruv Batra (Georgia Tech)
12:00-12:30 Invited talk Sertac Karaman (MIT)
12:30-13:30 Lunch break -
13:30-14:00 Invited talk Andrew Davison (Imperial College)
14:00-14:30 Invited talk Cesar Cadena (ETH Zurich)
14:30-15:00 Hands-on Tutorial: Metric-Semantic Mapping -
15:00-15:30 Coffee break & poster session -
15:30-16:00 Invited talk Marco Pavone (Stanford)
16:00-16:30 Invited talk Davide Scaramuzza (University of Zurich)
16:30-17:00 Keynote presentation: competition winner -
17:00-17:30 Panel discussion and concluding remarks -

GOSEEK-Challenge

The GOSEEK challenge is online and open for submissions via EvalAI! Note: the GOSEEK reinforcement learning (virtual) challenge will take place regardless of whether ICRA 2020 is postponed or virtualized.

WHAT: The GOSEEK reinforcement learning challenge consists of creating an RL agent that combines advanced perception (provided by Kimera) with high-level decision making to search for objects placed in complex indoor environments rendered by a Unity-based simulator. Simply put: like PACMAN, but in a realistic scene and with realistic perception capabilities. Several data modalities are provided by both the simulator ground truth and the perception pipeline (e.g., images, depth, agent location), so that participants can focus on the RL/search aspects. The contest is hosted on the EvalAI platform, where participants submit their solutions as docker containers for scoring.
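To make the structure of an entry concrete, here is a minimal sketch of a random-action baseline agent. The class name, observation keys, and action set below are illustrative assumptions made for exposition, not the official challenge API; the goseek-challenge repository (linked under HOW) defines the actual interface and the full submission workflow.

    # Illustrative sketch only: the agent class, the observation keys ("rgb",
    # "depth", "pose"), and the action set below are assumptions made for
    # exposition; the goseek-challenge repository defines the real API.
    import numpy as np


    class RandomGoSeekAgent:
        """A trivial baseline: pick a random discrete action at every step."""

        # Hypothetical discrete action set for an object-search agent.
        ACTIONS = ("move_forward", "turn_left", "turn_right", "collect")

        def __init__(self) -> None:
            self.rng = np.random.default_rng()

        def reset(self) -> None:
            """Called at the start of each evaluation episode; nothing to clear here."""

        def act(self, observation: dict) -> int:
            """Map one observation (images, depth, estimated pose) to an action index."""
            rgb = observation.get("rgb")      # H x W x 3 color image from the simulator
            depth = observation.get("depth")  # H x W depth map
            pose = observation.get("pose")    # agent position/orientation estimate
            # A real entry would feed these into a learned policy; here we act randomly.
            return int(self.rng.integers(len(self.ACTIONS)))

A real entry would replace the random policy with a learned one and package the agent in a docker container for submission to EvalAI.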

WHEN: Submissions for testing are open now; submissions to the leaderboard will open on April 25th. Because of the difficulties created by the coronavirus outbreak, we have decided to extend the challenge deadline to May 20th.

HOW: https://github.com/MIT-TESSE/goseek-challenge

WHY: The challenge provides a unique infrastructure to combine advanced perception (e.g., visual inertial navigation, SLAM, depth reconstruction, 3D mapping) with reinforcement learning. Competing in the challenge will deepen your expertise in these topics and boost your research. In case that’s not enough: the winner of the competition will receive a monetary prize ($1000) and will give a keynote presentation at the PAL workshop at ICRA 2020.

If you participate in GOSEEK and write a paper or a report about your entry, please cite:

  • D. Yadav, R. Jain, H. Agrawal, P. Chattopadhyay, T. Singh, A. Jain, S. B. Singh, S. Lee, D. Batra, “EvalAI: Towards Better Evaluation Systems for AI Agents”, arXiv:1902.03570, 2019.
  • A. Rosinol, M. Abate, Y. Chang, L. Carlone, “Kimera: an Open-Source Library for Real-Time Metric-Semantic Localization and Mapping”, IEEE Intl. Conf. on Robotics and Automation (ICRA), 2020.

KEEP IN TOUCH: To get updates about the challenge, please subscribe at http://mailman.mit.edu/mailman/listinfo/goseek-challenge. If you have trouble doing so, please send an email to qla@mit.edu with the subject “GOSEEK: subscribe” and we will add you to the Goseek-Challenge@mit.edu mailing list!



CfP

SUBMISSIONS:

Submission link: https://easychair.org/conferences/?conf=pal2020icraworkshop

Participants are invited to submit extended abstracts or short papers (up to 4 pages in ICRA format) focusing on novel advances in spatial perception, in reinforcement learning, and at the boundary between these research areas. Topics of interest include, but are not limited to:

  • Novel algorithms for spatial perception that combine geometry, semantics, and physics, and allow reasoning over spatial, semantic, and temporal aspects;
  • Learning techniques that can produce cognitive representations directly from complex sensory inputs;
  • Approaches that combine learning-based techniques with geometric and model-based estimation methods;
  • Novel transfer learning and meta-learning methods for reinforcement learning;
  • Novel RL approaches that leverage domain knowledge and existing (model-free and model-based) methods for perception and planning; and
  • Position papers and unconventional ideas on how to reach human-level performance in robot perception and task execution.

Contributed papers will be reviewed by the organizers and a program committee of invited reviewers. Accepted papers will be published on the workshop website and featured in spotlight presentations and poster sessions.

IMPORTANT DATES:

  • Submission Deadline: April 15, 2020
  • Notification of Acceptance: May 10, 2020
  • Final deadline for GOSEEK challenge: May 25, 2020
  • Workshop Date: May 31, 2020

Organizers

  • Luca Carlone, Assistant Professor, Massachusetts Institute of Technology
  • Dan Griffith, Technical Staff, MIT Lincoln Laboratory
  • Sanjeev Mohindra, Associate Group Leader, MIT Lincoln Laboratory
  • Zachary Ravichandran, Computer Vision Engineer, MIT Lincoln Laboratory
  • Mark Mazumder, Machine Learning Engineer, MIT Lincoln Laboratory
  • Constantine Frost, Simulation Engineer, MIT Lincoln Laboratory
  • Antoni Rosinol, PhD Student, Massachusetts Institute of Technology