Audience Q&A: https://app.sli.do/event/aPSF8kUkZXDnNSXByoQvev/live/questions
Robots now have advanced perception, navigation, grasping, and manipulation capabilities, yet it remains exceedingly difficult to bring these skills together so that a robot can autonomously tidy a room. A key limiting factor is that robots still lack the contextual scene understanding that allows humans to reason efficiently and compactly about the world and our actions within it. Metric (where) and semantic (what) representations are now common, but contextual (how) representations, which capture how objects interrelate and how a robot can interact with them to achieve a task, are still missing. How should we formulate these representations, and crucially, how can we enable robots (embodied agents) to learn and update their contextual scene understanding from live experience? Researchers in AI knowledge representation and reasoning, as well as in the more distant field of linguistics, have long grappled with similar questions. The goal of this workshop is to bring those experts together with researchers in robot scene understanding and long-horizon planning to discuss the state of the art and uncover synergies across these currently disparate disciplines.
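To make the metric/semantic/contextual distinction concrete, here is a minimal illustrative sketch of one possible shape for such a representation. It is purely hypothetical: all class, field, and affordance names are assumptions for illustration, not drawn from any of the talks below.

```python
# Hypothetical sketch of a scene representation combining three layers:
# metric (where), semantic (what), and contextual (how). Names illustrative.
from dataclasses import dataclass, field


@dataclass
class SceneObject:
    label: str                                    # semantic: what the object is
    pose: tuple                                   # metric: (x, y, z) in the world frame
    affordances: set = field(default_factory=set) # contextual: how it can be used
    relations: dict = field(default_factory=dict) # contextual: how it relates to others


# Example scene: a mug resting on a table.
table = SceneObject(label="table", pose=(1.0, 0.0, 0.4), affordances={"support"})
mug = SceneObject(
    label="mug",
    pose=(1.0, 0.1, 0.45),
    affordances={"graspable", "containable"},
    relations={"on_top_of": table},
)

# A planner could query the contextual layer to ground a tidying action:
if "graspable" in mug.affordances and "on_top_of" in mug.relations:
    print(f"Pick up the {mug.label} from the {mug.relations['on_top_of'].label}")
```

The open question posed above is precisely how structures like this should be formulated, learned, and updated from a robot's live experience rather than hand-coded as in this toy example.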
Time | Session |
---|---|
09:00-09:10 | Welcome |
09:10-09:40 | Invited talk: Shuran Song (Columbia University) Hierarchical Representations for Language-Based Reasoning [Recording] |
09:40-10:10 | Invited talk: Jiayuan Mao (MIT) Neuro-Symbolic Concepts for Robotic Manipulation [Recording] |
10:10-10:30 | Spotlight presentations |
10:30-11:00 | Coffee break and posters |
11:00-11:30 | Invited talk: Janet Wiles (The University of Queensland) Social Robots and Language Technologies: What can Robotics Learn from the Language Sciences? [Recording] |
11:30-12:00 | Panel discussion: Reasoning and representations [Recording] |
12:00-13:30 | Lunch |
13:30-14:00 | Invited talk: Rajat Talak (MIT) Spatial Perception for Robotics: Representation, Structure, and Real-Time Systems [Recording] |
14:00-14:20 | 🥇 Best paper talk: Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt FOCUS: Object-Centric World Models for Robotics Manipulation |
14:20-15:00 | Spotlight presentations |
15:00-15:30 | Coffee break and posters |
15:30-16:00 | Invited talk: Manolis Savva (Simon Fraser University) 3D Simulation for Embodied AI: Emerging Challenges & Opportunities [Recording] |
16:00-16:30 | Invited talk: Helisa Dhamo (Huawei) Scene Understanding via Semantic Scene Graphs [Recording] |
16:30-17:00 | Panel discussion: 3D scene understanding [Recording] |
17:00-17:30 | Closing remarks |
🥇: Best paper award
⏰ DEADLINE EXTENDED until June 30 ⏰
Submission link: https://cmt3.research.microsoft.com/robrepworkshop2023/Submission/Index
Participants are invited to submit extended abstracts or short papers (up to 4 pages in RSS format) on novel advances in 3D scene understanding, predicate/affordance reasoning, and high-level planning, as well as work at the boundary between these research areas.
Important dates (deadlines are AoE on the respective date):
Topics of interest include, but are not limited to:
Contributed papers will be reviewed by the organizers and a program committee of invited reviewers. Accepted papers will be published on the workshop website and will be featured in spotlight presentations and poster sessions.
LaTeX template link: https://roboticsconference.org/docs/paper-template-latex.tar.gz
Instructions for Authors of accepted papers:
You will have the opportunity to present your work in a spotlight presentation of no more than 5 minutes, followed by a short period for questions from the audience. For more in-depth discussions, we invite you to prepare a poster for the poster session.