Roozbeh Mottaghi

Highlights and News

Nov, 2024:
Giving an invited talk at the Princeton Symposium on Safe Deployment of Foundation Models in Robotics.
Aug, 2024:
Giving an invited talk at the ACL 2024 workshop on Advances in Language and Vision Research.
Jun, 2024:
Giving an invited talk on Human-centric Embodied AI at the CVPR 2024 workshop on Virtual Humans for Robotics and Autonomous Driving.
Mar, 2024:
Serving as Area Chair for ECCV 2024 and NeurIPS 2024.
Nov, 2023:
Serving as Senior Area Chair for CVPR 2024.
Oct, 2023:
Launched Habitat 3.0, a simulator for human-robot interaction. Highlighted as part of FAIR's 10th anniversary.
Oct, 2023:
Giving invited talks at two ICCV 2023 workshops: Perception, Decision Making and Reasoning through Multimodal Foundational Modeling, and 3D Vision and Modeling Challenges in eCommerce.
Sep, 2023:
Serving as Area Chair for ICLR 2024 and AAAI 2024.
Jul, 2023:
We released HomeRobot, a framework for reproducible robotics research. We are also hosting a challenge based on HomeRobot at NeurIPS 2023 on open-vocabulary mobile manipulation.
Nov, 2022:
Our work on large-scale Embodied AI received the Outstanding Paper Award at NeurIPS 2022.
Sep, 2022:
Giving invited talks at two CoRL 2022 workshops: Benchmarking in Robotic Manipulation, and Learning, Perception, and Abstraction for Long-Horizon Planning.
Sep, 2022:
Serving as Area Chair for CVPR 2023 and ICLR 2023.
Aug, 2022:
Joined FAIR at Meta as a Research Manager to lead part of its Embodied AI efforts.
Jun, 2022:
Giving an invited talk at the Embodied AI Workshop at CVPR 2022.
Apr, 2022:
Giving an invited talk at the Stanford HAI Metaverse Workshop.
Mar, 2022:
Giving an invited talk at the China Society of Image and Graphics (CSIG).
Oct, 2021:
Giving an invited talk at the ICCV 2021 workshop on Structural and Compositional Learning on 3D Data.
Jun, 2021:
Giving invited talks at the following CVPR 2021 workshops: 3D Vision and Robotics, 3D Scene Understanding for Vision, Graphics, and Robotics, and Learning to Generate 3D Shapes and Scenes.
May, 2021:
Giving a guest lecture at Stanford CS331B: Interactive Simulation for Robot Learning.
Feb, 2021:
We are organizing three challenges at the Embodied AI Workshop at CVPR 2021: Navigation towards objects, Room rearrangement, and Interactive instruction following.
Nov, 2020:
We released a report describing a new challenge for Embodied AI. This is joint work with colleagues from Georgia Tech, FAIR, Simon Fraser University, Imperial College London, Princeton, Intel Labs, UC Berkeley, Google, and UC San Diego. The report can be accessed here.
Nov, 2020:
Serving as Area Chair for CVPR 2021.
Sep, 2020:
We released AllenAct, a framework for unifying the environments, models, and training algorithms used in Embodied AI. Check out the details in this arXiv paper.
Feb, 2020:
Co-organizing the Embodied Vision, Actions & Language Workshop at ECCV 2020, which hosts a challenge on ALFRED, our embodied instruction-following framework.
Feb, 2020:
Co-organizing the Embodied AI Workshop at CVPR 2020.
Feb, 2020:
Announcing the RoboTHOR navigation challenge. Follow this link for further information.
Nov, 2019:
Serving as Area Chair for CVPR 2020.
Jun, 2019:
Recognized as a CVPR 2019 outstanding reviewer.
Feb, 2019:
Our papers on self-adaptive navigation and knowledge-based question answering have been accepted to CVPR 2019.
Jan, 2019:
Our paper on navigation using scene knowledge was accepted to ICLR 2019.
Jul, 2018:
We published a paper on the evaluation of navigation agents.
May, 2018:
Our work was featured in a PBS documentary on AI. Here is the link to the video: Can We Build a Brain?
Apr, 2018:
Our dog modeling project has been covered by TechCrunch, NBC News, MIT Technology Review, IEEE Spectrum, and The Verge.