Computer Vision Group
TUM Department of Informatics
Technical University of Munich


Informatik IX
Chair of Computer Vision & Artificial Intelligence

Boltzmannstrasse 3
85748 Garching info@vision.in.tum.de

Follow us on:
CVG Group DVL Group

News

02.10.2020

We have five papers accepted to 3DV 2020!

30.09.2020

Our efficient deep network architectures form the AI engine of the project Slow Down COVID-19 at Harvard.

24.07.2020

Our practical course "Vision-based Navigation" (WS18, SS19) by Dr. Vladyslav Usenko and Nikolaus Demmel was honored as the best practical course of the academic year 2018/2019 by the Department of Informatics.

07.05.2020

We are organizing a workshop on Map-based Localization for Autonomous Driving at ECCV 2020, Glasgow, UK.

13.04.2020

Daniel Cremers received an ERC Advanced Grant (3.5 Mio Euro) for pioneering frontier research from the European Research Council. This constitutes his fifth ERC grant.



Differences

This shows you the differences between two versions of the page.

members:wangr [2020/02/25 22:25]
Rui Wang
members:wangr [2020/08/22 16:04]
Rui Wang
Line 2: Line 2:
  
 ==== News ====
-  * [02.2020] D3VO is accepted by CVPR 2020. Stay tuned.
+  * [07.2020] Our paper "DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization" is accepted by ECCV 2020 (spotlight). The code is released; please check our [[https://vision.in.tum.de/research/vslam/dh3d|project page]] for details.
+  * [05.2020] We are organizing a workshop together with a challenge on [[https://sites.google.com/view/mlad-eccv2020|Map-based Localization for Autonomous Driving]] at ECCV 2020, Glasgow, UK.
+  * [05.2020] The results of Stereo DSO on the KITTI Odometry test set have been uploaded to the [[https://vision.in.tum.de/research/vslam/stereo-dso|project page]].
+  * [02.2020] D3VO is accepted by CVPR 2020. ([[https://arxiv.org/abs/2003.01060|arxiv]])
   * [01.2020] DirectShape has been accepted by ICRA 2020. It is an optimization pipeline that estimates the 3D poses and shapes of cars directly from a stereo image pair; no 3D points are needed. ([[https://vision.in.tum.de/research/vslam/direct-shape|project page]])
   * [10.2018] The code for LDSO (Direct Sparse Odometry with Loop Closure) has been released! ([[https://vision.in.tum.de/research/vslam/ldso|project page]])
Line 15: Line 18:
  
 Find me on [[https://scholar.google.de/citations?user=buN3yw8AAAAJ&hl=en|Google Scholar]],
- [[https://www.linkedin.com/in/rui-wang-5367398a|LinkedIn]], [[https://www.strava.com/athletes/22551758|Strava]].
+ [[https://www.linkedin.com/in/rui-wang-5367398a|LinkedIn]], [[https://www.strava.com/athletes/22551758|Strava]] (highly research related).
  
 ==== Research Interests ====
  
-=== Semantic VO / SLAM ===
-
-  * ** Direct Shape ** This video shows the basic idea and some results of our paper "DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation". In this work, we estimate the 3D poses and shapes of cars based on a single stereo image pair. ** Note that the point clouds in the video are only for visualization purposes; they are not used in our method. ** For more details please refer to the [[https://vision.in.tum.de/research/vslam/direct-shape|Project Page]].
-<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/QqP6zdx5OKw" frameborder="0" allowfullscreen></iframe></center></html>
-<html><br /></html>
-
-=== Classic VO / SLAM ===
+== VO and vSLAM ==
  
   * ** Stereo DSO ** This video shows some results of our paper "Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras" accepted by ICCV 2017. ([[https://vision.in.tum.de/research/vslam/stereo-dso|Project Page]])
Line 41: Line 38:
 <html><br /></html>
  
-=== Deep Learning Boosted VO / SLAM ===
+== Large-scale Relocalization ==
  
-  * ** Deep Virtual Stereo Odometry (DVSO) ** In this project we design a novel deep network and train it in a semi-supervised way to predict depth maps from a single image, and integrate the depth map into DSO as a virtual stereo measurement. Being a monocular VO approach, DVSO achieves performance comparable to state-of-the-art stereo methods. ([[https://vision.in.tum.de/research/vslam/dvso|Project Page]])
+  * ** DH3D ** "DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization" (ECCV 2020, spotlight). Please visit the [[https://vision.in.tum.de/research/vslam/dh3d|Project Page]] for code and related materials. (The video has audio.)
 +<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/ZxZiwZugG14" frameborder="0" allowfullscreen></iframe></center></html> 
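For readers unfamiliar with descriptor-based relocalization, the sketch below illustrates the generic coarse-to-fine recipe that hierarchical descriptors such as DH3D's plug into: retrieve candidate submaps with a global descriptor, then recover a 6DoF pose from matched 3D keypoints. It is a minimal illustration in plain NumPy, not the released DH3D code, and all function names and array shapes are my own placeholders.

<code python>
import numpy as np

def retrieve_candidates(query_global, db_globals, k=3):
    """Coarse place recognition: k nearest database submaps in global-descriptor space."""
    dists = np.linalg.norm(db_globals - query_global[None, :], axis=1)
    return np.argsort(dists)[:k]

def rigid_align(P, Q):
    """Fine 6DoF step: rigid transform (R, t) aligning matched 3D keypoints P -> Q
    (rows are corresponding points) via the Kabsch/Umeyama method."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP
</code>

In a full pipeline the correspondences fed to the alignment would come from matching learned local descriptors and would be wrapped in a robust estimator such as RANSAC.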
 + 
 +== Bring Semantic Information into vSLAM == 
 + 
 +  * ** DirectShape ** This video shows the basic idea and some results of our paper "DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation". In this work, we estimate the 3D poses and shapes of cars based on a single stereo image pair. ** Note that the point clouds in the video are only for visualization purposes; they are not used in our method. ** For more details please refer to the [[https://vision.in.tum.de/research/vslam/direct-shape|Project Page]].
 +<html><center><iframe width="640" height="360" src="https://www.youtube.com/embed/QqP6zdx5OKw" frameborder="0" allowfullscreen></iframe></center></html> 
 +<html><br /></html> 
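As a small companion to the DirectShape entry above: the quantity driving this kind of photometric alignment is a stereo-consistency residual, i.e. a point on the hypothesized car surface should have the same intensity when projected into the left and right images, and pose and shape are adjusted until that holds. The toy function below evaluates this residual for a single 3D point with nearest-neighbour sampling. It is a simplified sketch under my own notation (pinhole intrinsics K, 4x4 camera poses), not the DirectShape implementation described on the project page.

<code python>
import numpy as np

def project(K, T, X):
    """Project 3D point X (reference frame) into a camera with intrinsics K and
    4x4 rigid pose T (reference-to-camera). Returns pixel coordinates (u, v)."""
    Xc = T[:3, :3] @ X + T[:3, 3]
    uvw = K @ Xc
    return uvw[0] / uvw[2], uvw[1] / uvw[2]

def stereo_consistency_residual(img_left, img_right, K, T_left, T_right, X):
    """Photometric residual of one hypothesized surface point: if the estimated
    pose and shape are correct, the point looks the same in both stereo images."""
    ul, vl = project(K, T_left, X)
    ur, vr = project(K, T_right, X)
    return float(img_left[int(round(vl)), int(round(ul))]) \
        - float(img_right[int(round(vr)), int(round(ur))])
</code>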
 + 
 +== Robustify VO with Deep Learning == 
 + 
 +  * ** DVSO: Deep Virtual Stereo Odometry ** In this project we design a novel deep network and train it in a semi-supervised way to predict depth maps from a single image, and integrate the depth map into DSO as a virtual stereo measurement. Being a monocular VO approach, DVSO achieves performance comparable to state-of-the-art stereo methods. ([[https://vision.in.tum.de/research/vslam/dvso|Project Page]])
 <html><center><iframe width="640" height="360"
 src="https://www.youtube.com/embed/sLZOeC9z_tw" frameborder="0" allowfullscreen></iframe>
Line 49: Line 57:
 <html><br /></html>
  
-=== Camera Calibration ===
+  * ** D3VO: Deep Depth, Pose and Uncertainty for Monocular Visual Odometry ** In addition to integrating the depth maps generated by a network into DSO, as was done in DVSO, in this work we further learn to predict camera poses and uncertainty maps and fuse them into DSO. For details please visit the [[https://vision.in.tum.de/research/vslam/d3vo|Project Page]].
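One concrete way to picture "fusing predicted uncertainty into DSO" is to let a per-pixel standard deviation predicted by the network down-weight the photometric residuals, in the spirit of a heteroscedastic loss. The sketch below shows only that generic idea under my own notation, not the D3VO energy; the paper and Project Page give the actual formulation.

<code python>
import numpy as np

def uncertainty_weighted_energy(residuals, sigma, huber_k=4.0):
    """Toy uncertainty-weighted photometric energy: residuals are whitened by the
    predicted per-pixel standard deviation sigma and passed through a Huber kernel;
    the log-sigma term keeps the network from explaining everything away with
    large uncertainties. residuals and sigma are arrays of the same shape."""
    s = np.maximum(sigma, 1e-6)
    r = residuals / s
    abs_r = np.abs(r)
    huber = np.where(abs_r <= huber_k, 0.5 * r**2, huber_k * (abs_r - 0.5 * huber_k))
    return float(np.sum(huber + np.log(s)))
</code>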
 + 
 +== Camera Calibration ==
  
   * ** Online Photometric Calibration ** We've conducted a project to achieve online photometric calibration, where the exposure times of consecutive frames, the camera response function, and the camera vignetting factors can be recovered in real time. Experiments show that our estimates converge to the ground truth after only a few seconds. Our approach can be used either offline for calibrating existing datasets, or online in combination with state-of-the-art direct visual odometry or SLAM pipelines. For more details please check our paper "Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM". ([[https://vision.in.tum.de/research/vslam/photometric-calibration|Project Page]])
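For intuition about what is being estimated here: online photometric calibration methods of this kind typically assume an image-formation model of the form I(x) = G(t * V(x) * B(x)), with response function G, exposure time t, vignetting V and scene radiance B. The sketch below shows how a frame would be photometrically corrected once those quantities are known; the identity response and radial vignette are placeholder values for illustration, not output of the actual calibration code.

<code python>
import numpy as np

def photometric_correction(img, t, inv_response, vignette):
    """Undo I(x) = G(t * V(x) * B(x)): map 8-bit intensities through the inverse
    response G^-1 (a 256-entry lookup table), then divide out exposure time and
    vignetting to recover (relative) scene radiance B."""
    irradiance = inv_response[img]                    # G^-1(I) = t * V(x) * B(x)
    return irradiance / (t * np.maximum(vignette, 1e-6))

# Placeholder usage: identity response, mild radial vignette, random test frame.
h, w = 480, 640
inv_response = np.linspace(0.0, 1.0, 256)             # G^-1 as a lookup table
yy, xx = np.mgrid[0:h, 0:w]
vignette = 1.0 - 0.3 * ((xx - w / 2) ** 2 + (yy - h / 2) ** 2) / (w / 2) ** 2
frame = np.random.randint(0, 256, (h, w), dtype=np.uint8)
radiance = photometric_correction(frame, t=0.02, inv_response=inv_response, vignette=vignette)
</code>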
Line 55: Line 65:
 <html><br /></html>
  
-==== Master Theses / IDP / Guided Research ==== 
-Please send me your transcripts and CV by email.  
  
 ==== Teaching ====
Line 65: Line 73:
  
 ==== Service ====
-  * Conference reviewer: CVPR, ICCV, ICRA, IROS, AAAI
+  * Conference reviewer: CVPR, ICCV, ECCV, ICRA, IROS, AAAI
-  * Journal reviewer: RA-L, T-RO
+  * Journal reviewer: RA-L, T-RO, AURO
  
 ==== Publications ====
