Long-Term Perception for Autonomy in Dynamic Human-shared Environments: What Do Robots Need?

Monday, October 14, 2024

Abu Dhabi, UAE


News

  • Aug 30, 2024 - The submission deadline (September 1) is extended to September 21!
  • Jul 30, 2024 - Submissions Portal is now open!
  • Jun 25, 2024 - We are excited to host the first workshop on Long-Term Perception for Autonomy in Dynamic Human-shared Environments at IROS 2024!

Overview

This workshop brings together researchers from the robotics, computer vision, and machine learning communities to discuss the unique challenges and opportunities of long-term perception for autonomy in human-centric environments. Our goals are to present the latest advances and emerging techniques, to identify the core challenges posed by human-centric scenes, which are highly complex, semantically rich, distinctly dynamic, and subject to constant change, and to set the direction of research to address them in the coming years.

Recent advances are presented in a series of invited and contributed talks by distinguished leading researchers from academia and industry. The workshop also addresses the pressing questions and tensions in the field: End-to-end learned vs. geometry-based? Maps vs. mapless? High-fidelity vs. scalability? Dense vs. symbolic? Safety vs. performance? Foundation models vs. on-domain learning? In short: what do robots really need from perception solutions for effective and safe long-term autonomy around and with humans?

To encourage interaction among all participants, the workshop features a poster and demo session, spotlight talks, and an interactive discussion session that connects invited speakers, contributors, organizers, and attendees in smaller groups for open-ended, guided, in-depth discussion. All talks and accepted contributions are published on the workshop’s webpage to expand its reach and impact. A best presentation award and support for underrepresented student researchers are sponsored by Amazon.

This workshop builds on the success of our popular previous workshops on perception and mapping at IROS’23 and ICRA’22, but with a distinct focus on dynamic human-centric environments to enable long-term robot autonomy.


Call for Papers

We invite short papers presenting novel or recently published research relevant to the topics of the workshop.

  • Short papers are 2+n pages (2 pages of content plus unlimited pages for references)
  • Submissions must follow the IEEE Conference double column format
  • All accepted papers will be presented as posters at the workshop and published on the workshop website.
  • Please indicate whether your paper falls into the ‘novel’ or ‘previously published’ category. Novel research papers are encouraged and can expect more substantial review feedback. This feedback is provided as a service to authors of novel papers and does not diminish the chance of acceptance.
  • All accepted submissions will be considered for the best presentation award, where 3 finalists will be selected for 5-minute plenary presentations. While all submissions are eligible, novelty will be considered in finalist selection.
  • Submissions are single-blind and will be reviewed by members of the (extended) workshop committee.
  • Submissions can optionally be accompanied by a video.

Invited topics

We invite contributions from the areas of:

  • Scene and object representations
  • Long-term mapping and scene understanding
  • Open-set scene understanding and foundation models
  • Motion and change detection
  • Prediction and planning in dynamic and changing scenes
  • Safety in perception, planning and prediction
  • Continual and lifelong learning

Review Criteria

Novel submissions will be evaluated on:

  • Relevance to the topics of the workshop
  • Recency and novelty
  • Clarity of presentation
  • Technical quality
  • Strength of results (for early-stage work, results that show promise and an adequate planned experimental setup suffice)

Papers that have previously been published will mainly be evaluated on the first two points (relevance and recency). If your paper has been published before, please state in the submission where it appeared.

Call for Demonstrations

We invite live demonstrations during the poster session. These can either accompany a submitted paper or be standalone. For standalone demonstrations, please submit a description of up to two pages and/or a video of the demo. Demos will be evaluated on recency and relevance to the workshop.

Submissions Portal

Please submit your paper via CMT.

Submissions Timeline

July 1        Call for submissions
September 21  Submissions due (extended from September 1)
September 30  Notification of acceptance
October 14    Workshop at IROS!

Invited Speakers

Iro Armeni
Stanford University

Jen Jen Chung
The University of Queensland

Stefan Leutenegger
Technical University of Munich

Francesco Milano
ETH Zurich

Michael Milford
Queensland University of Technology

Marc Pollefeys
ETH Zurich, Microsoft


Schedule

Time         Talk
14:00-14:10  Welcome Remarks (Organizing Committee)
14:10-14:35  Plenary 1: Trusted and introspective positioning systems for people and their machines (Michael Milford, QUT)
14:35-15:00  Plenary 2: The coupling of perception and interaction for object discovery and understanding (Jen Jen Chung, UQ & Francesco Milano, ETH)
15:00-15:25  Plenary 3: State estimation and 3D scene understanding for mobile robots (Stefan Leutenegger, TUM)
15:25-15:45  Spotlight Talks: presentations by the award finalists selected from the workshop submissions
             • Moving Object Segmentation in Point Cloud Data using Hidden Markov Models (Vedant Bhandari, Jasmin James, Tyson Phillips, and Ross McAree)
             • Taxonomy-Aware Class-Incremental Semantic Segmentation for Open-World Perception (Julia Hindel, Daniele Cattaneo, and Abhinav Valada)
             • DUFOMap: Efficient Dynamic Awareness Mapping (Qingwen Zhang, Daniel Duberg, Mingkai Ji, and Patric Jensfelt)
15:45-16:30  Coffee Break & Poster + Demo Session (accepted posters and demos)
16:30-16:55  Plenary 4: Spatial AI for Robotics and MR (Marc Pollefeys, Microsoft and ETH)
16:55-17:20  Plenary 5: Spatiotemporal 3D Scene Understanding (Iro Armeni, Stanford)
17:20-17:50  Interactive Discussion: guided group discussions in mixed groups of invited speakers, organizers, junior researchers with posters, and other attendees
17:50-18:00  Closing Remarks (Organizing Committee)

Organizers

Lukas Schmid
Massachusetts Institute of Technology

Luca Carlone
Massachusetts Institute of Technology

Roland Siegwart
ETH Zurich

Rajat Talak
Massachusetts Institute of Technology

Olov Anderson
KTH Royal Institute of Technology

Helen Oleynikova
ETH Zurich

Jong Jin Park
Amazon Lab126

Jianhao Zheng
Stanford University

Johanna Wald
Google

Federico Tombari
Google, Technical University of Munich