Roozbeh Mottaghi

Highlights and News

Jun, 2021:
Giving invited talks at the following CVPR 2021 workshops: 3D Vision and Robotics; 3D Scene Understanding for Vision, Graphics, and Robotics; and Learning to Generate 3D Shapes and Scenes.
Feb, 2021:
We are organizing three challenges at the Embodied AI Workshop at CVPR 2021: Navigation towards objects, Room rearrangement, and Interactive instruction following.
Nov, 2020:
We released a report describing a new challenge for Embodied AI. This is joint work with colleagues from Georgia Tech, FAIR, Simon Fraser University, Imperial College London, Princeton, Intel Labs, UC Berkeley, Google, and UC San Diego. The report can be accessed here.
Nov, 2020:
Serving as Area Chair for CVPR 2021.
Sep, 2020:
We released AllenAct, a framework for unifying the environments, models, and training algorithms used in Embodied AI. Check out the details in this arXiv paper.
Feb, 2020:
Co-organizing the Embodied Vision, Actions & Language Workshop at ECCV 2020, which hosts a challenge on ALFRED, our embodied instruction-following framework.
Feb, 2020:
Co-organizing the Embodied AI Workshop at CVPR 2020.
Feb, 2020:
Announcing the RoboTHOR navigation challenge. Follow this link for further information.
Nov, 2019:
Serving as Area Chair for CVPR 2020.
Jun, 2019:
Recognized as a CVPR 2019 outstanding reviewer.
Feb, 2019:
Our papers on self-adaptive navigation and knowledge-based question answering have been accepted to CVPR 2019.
Jan, 2019:
Our paper on navigation using scene knowledge was accepted to ICLR 2019.
Jul, 2018:
We published a paper on the evaluation of navigation agents.
May, 2018:
Our work was covered in a documentary on AI by PBS. Here is the link to the video: Can we build a brain?
Apr, 2018:
Our dog modeling project has been covered by TechCrunch, NBC News, MIT Technology Review, IEEE Spectrum, and The Verge.