This workshop brings together researchers from the robotics, computer vision, and machine learning communities to discuss the unique challenges and opportunities in long-term perception for autonomy in human-centric environments. We aim to present the latest advances and emerging techniques, to identify the core challenges of human-centric scenes (highly complex, semantically rich, distinctly dynamic, and subject to constant change), and to set the direction of research to address them in the coming years.

Recent advances are presented through a series of invited and contributed talks by distinguished researchers from academia and industry. The workshop tackles the pressing questions and tensions in the field: End-to-end learned vs. geometry-based? Maps vs. mapless? High fidelity vs. scalability? Dense vs. symbolic? Safety vs. performance? Foundation models vs. on-domain learning? In short: what do robots really need from perception solutions for effective and safe long-term autonomy around and with humans?

To encourage interaction among all participants, the workshop features a poster and demo session, spotlight talks, and an interactive discussion session that connects invited speakers, contributors, organizers, and attendees in smaller groups for an open-ended, guided, in-depth discussion. All talks and accepted contributions are published on the workshop’s webpage to expand its reach and impact. A best presentation award and support for underrepresented student researchers are sponsored by Amazon.

This workshop builds on the success of our popular previous workshops on perception and mapping at IROS’23 and ICRA’22, now with a distinct focus on dynamic human-centric environments to enable long-term robot autonomy.
We invite short papers presenting novel or recently published research relevant to the topics of the workshop.
We invite contributions in the following areas:
Novel submissions will be evaluated on:
Papers that have previously been published will mainly be evaluated on the first two points (relevance and recency). In this case, please mention in the submission where the paper was published.
We invite live demonstrations during the poster session. These can either accompany a submitted paper or be standalone. For standalone demonstrations, please submit a description of up to two pages and/or a video of the demo. Demos will be evaluated on recency and relevance to the workshop.
Please submit your paper via CMT.
Date | Milestone |
---|---|
July 1 | Call for submissions |
 | Submissions due |
September 30 | Notification of acceptance |
October 14 | Workshop at IROS! |
Time | Talk | Comments |
---|---|---|
14:00-14:10 | Welcome Remarks | Organizing Committee |
14:10-14:35 | Plenary 1: Trusted and introspective positioning systems for people and their machines | Michael Milford (QUT) |
14:35-15:00 | Plenary 2: The coupling of perception and interaction for object discovery and understanding | Jen Jen Chung (UQ) & Francesco Milano (ETH) |
15:00-15:25 | Plenary 3: Safe autonomous mobile robots around humans | Stefan Leutenegger (TUM) |
15:25-15:45 | Spotlight Talks | Presentations of award finalists selected from the submissions to the workshop |
 | Moving Object Segmentation in Point Cloud Data using Hidden Markov Models | Vedant Bhandari, Jasmin James, Tyson Phillips, and Ross McAree |
 | Taxonomy-Aware Class-Incremental Semantic Segmentation for Open-World Perception | Julia Hindel, Daniele Cattaneo, and Abhinav Valada |
 | DUFOMap: Efficient Dynamic Awareness Mapping | Qingwen Zhang, Daniel Duberg, Mingkai Ji, and Patric Jensfelt |
15:45-16:30 | Coffee Break & Poster + Demo Session | Accepted posters and demos |
16:30-16:55 | Plenary 4: Spatial AI for Robotics and MR | Marc Pollefeys (Microsoft and ETH) |
16:55-17:20 | Plenary 5: Spatiotemporal 3D Scene Understanding | Iro Armeni (Stanford) |
17:20-17:50 | Interactive Discussion | Guided discussions in mixed groups of invited speakers, organizers, junior researchers with posters, and other attendees |
17:50-18:00 | Closing Remarks | Organizing Committee |