Computer Vision Group
TUM School of Computation, Information and Technology
Technical University of Munich


Rui Wang

Alumni
Technical University of Munich

School of Computation, Information and Technology
Informatics 9
Boltzmannstrasse 3
85748 Garching
Germany

Fax: +49-89-289-17757
Office: 
Mail: rui.wang@in.tum.de

This is my old webpage, which has not been maintained since March 2021.
Please visit my new website here.

Brief Bio

I received my Bachelor's degree (2011) in Automation from Xi'an Jiaotong University and my Master's degree (2014) in Electrical Engineering and Information Technology from the Technical University of Munich. From 2014 to 2016 I worked as a computer vision algorithm developer for advanced driver assistance systems (ADAS) at Continental. Since March 2016 I have been a PhD student in the Computer Vision Group at the Technical University of Munich, headed by Professor Daniel Cremers. In 2018 I joined Artisense, a startup co-founded by Professor Cremers, as a PhD student and senior computer vision & AI researcher. My research interests include visual SLAM and visual 3D reconstruction, as well as their combination with semantic information.

Find me on Google Scholar, LinkedIn, Strava (highly research related).

Research Interests

VO and vSLAM
  • Stereo DSO This video shows some results of our paper "Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras", accepted at ICCV 2017; a minimal sketch of the photometric residual at the core of this and the following direct methods is given after this list. (Project Page)


  • SLAM extension to Stereo DSO After the ICCV 2017 deadline, we extended our method to a full SLAM system with additional components for map maintenance, loop detection and loop closure. This further improves our performance on KITTI, as shown by the plots in the video. (Project Page)


  • LDSO: Direct Sparse Odometry with Loop Closure In this project we integrate feature points into DSO to improve the repeatability of the sampled points, enabling loop closure candidate detection using BoW and loop closure based on pose graph optimization. For more details and access to the code, please visit the Project Page.
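
Stereo DSO, its SLAM extension and LDSO all build on the same direct photometric error: a point with an estimated inverse depth is projected from its host frame into another frame, and the intensity difference is penalized after a brightness correction. The snippet below is only a rough single-pixel illustration of that residual, not the actual implementation (which uses patch-based residuals, sub-pixel interpolation, robust weighting and joint optimization over many frames); all names are placeholders.

    import numpy as np

    def photometric_residual(I_ref, I_tgt, uv, inv_depth, K, T_tgt_ref, a, b):
        # uv        : pixel (u, v) hosting the point in the reference image
        # inv_depth : estimated inverse depth of that pixel
        # K         : 3x3 camera intrinsics; T_tgt_ref: 4x4 reference-to-target pose
        # a, b      : affine brightness parameters (simplified brightness model)
        u, v = uv
        # Back-project to 3D using the inverse depth, then move into the target frame.
        p_ref = np.linalg.inv(K) @ np.array([u, v, 1.0]) / inv_depth
        p_tgt = T_tgt_ref[:3, :3] @ p_ref + T_tgt_ref[:3, 3]
        # Project back into the target image (real systems interpolate sub-pixel).
        u2, v2, _ = (K @ p_tgt) / p_tgt[2]
        # Photometric residual: intensity difference after brightness correction.
        return float(I_tgt[int(v2), int(u2)]) - (np.exp(a) * float(I_ref[v, u]) + b)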


Large-scale Relocalization
  • DH3D "DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization" (ECCV 2020, spotlight). Please visit the Project Page for code and related materials. (The video has audio.)
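
As a rough illustration of the local-matching stage of such a relocalization pipeline (DH3D additionally performs global retrieval and RANSAC-based geometric verification; see the paper for the actual method), the sketch below matches learned local descriptors between a query and a map scan and recovers a 6DoF pose in closed form. All names are placeholders.

    import numpy as np

    def align_by_descriptors(pts_q, desc_q, pts_m, desc_m):
        # pts_q, pts_m   : (N,3)/(M,3) 3D keypoints of the query and map scans
        # desc_q, desc_m : (N,D)/(M,D) learned local descriptors for those keypoints
        # Mutual nearest-neighbour matching in descriptor space.
        d = np.linalg.norm(desc_q[:, None, :] - desc_m[None, :, :], axis=-1)
        nn_q = d.argmin(axis=1)
        nn_m = d.argmin(axis=0)
        mutual = nn_m[nn_q] == np.arange(len(pts_q))
        src, dst = pts_q[mutual], pts_m[nn_q[mutual]]
        # Closed-form rigid alignment (Kabsch) of the matched keypoints.
        H = (src - src.mean(0)).T @ (dst - dst.mean(0))
        U, _, Vt = np.linalg.svd(H)
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t   # 6DoF pose of the query scan relative to the map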

Bring Semantic Information into vSLAM
  • DirectShape This video shows the basic idea and some results of our paper "DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation". In this work, we estimate the 3D poses and shapes of cars from a single stereo image pair. Note that the point clouds in the video are only for visualization purposes; they are not used in our method. For more details please refer to the Project Page.
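
As a conceptual sketch of what such a low-dimensional shape prior can look like (an illustration under assumed conventions, not necessarily the exact parameterization used in the paper), a car instance can be decoded from a small shape code that deforms a mean signed-distance volume; the 6DoF pose and this code are then optimized for consistency with the stereo images.

    import numpy as np

    def decode_shape(sdf_mean, sdf_basis, z):
        # sdf_mean  : (X,Y,Z) mean signed-distance volume of the car class
        # sdf_basis : (k,X,Y,Z) learned principal shape components
        # z         : (k,) low-dimensional shape code, optimized jointly with the pose
        # The zero level set of the decoded volume is the car surface used for
        # photometric (and silhouette) consistency between the two stereo images.
        return sdf_mean + np.tensordot(z, sdf_basis, axes=1)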


Robustify VO with Deep Learning
  • DVSO: Deep Virtual Stereo Odometry In this project we design a novel deep network, train it in a semi-supervised way to predict a depth map from a single image, and integrate the predicted depth into DSO as a virtual stereo measurement (see the sketch after this list). Being a monocular VO approach, DVSO achieves performance comparable to that of state-of-the-art stereo methods. (Project Page)


  • D3VO: Deep Depth, Pose and Uncertainty for Monocular Visual Odometry. In addition to integrating depth maps generated by a network into DSO, as was done in DVSO, in this work we further learn to predict camera poses and uncertainty maps and fuse them into DSO. For details please visit the Project Page.
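
The sketch below illustrates, in their simplest form, the two ideas used above (placeholder names, not the actual DVSO/D3VO code): converting a network-predicted disparity map into depth that can serve as a virtual stereo measurement, and down-weighting residuals by a predicted per-pixel uncertainty.

    import numpy as np

    def virtual_stereo_depth(disparity, fx, baseline):
        # Convert a network-predicted disparity map into metric depth using the
        # (virtual) stereo baseline; this depth can then constrain a direct VO backend.
        return fx * baseline / np.maximum(disparity, 1e-6)

    def uncertainty_weighted_cost(residual, sigma):
        # Heteroscedastic weighting: pixels with large predicted uncertainty sigma
        # contribute less, while the log term keeps sigma from growing unboundedly.
        return (residual / sigma) ** 2 + 2.0 * np.log(sigma)
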
Camera Calibration
  • Online Photometric Calibration We conducted a project on online photometric calibration, in which the exposure times of consecutive frames, the camera response function and the camera vignetting factors are recovered in real time. Experiments show that our estimates converge to the ground truth after only a few seconds. Our approach can be used either offline for calibrating existing datasets, or online in combination with state-of-the-art direct visual odometry or SLAM pipelines. For more details please check our paper "Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM". (Project Page)
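
Once the response function G, the vignette V and the per-frame exposure times t have been estimated, applying the calibration amounts to inverting the image formation model O(x) = G(t * V(x) * B(x)) to recover the scene irradiance B(x). A minimal sketch of that correction step (placeholder names) is:

    import numpy as np

    def photometrically_correct(frame, inv_response, vignette, exposure):
        # frame        : raw 8-bit grayscale image O(x)
        # inv_response : 256-entry lookup table for the inverse response G^{-1}
        # vignette     : per-pixel attenuation factors V(x) in (0, 1]
        # exposure     : estimated exposure time t of this frame
        irradiance = inv_response[frame.astype(np.uint8)]    # G^{-1}(O(x)) = t * V(x) * B(x)
        return irradiance / (vignette * exposure)            # recover scene irradiance B(x)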


Teaching

Service

Publications



Journal Articles
2018
Online Photometric Calibration of Auto Exposure Video for Realtime Visual Odometry and SLAM (P. Bergmann, R. Wang and D. Cremers), In IEEE Robotics and Automation Letters (RA-L), volume 3, 2018. (This paper was also selected by ICRA'18 for presentation at the conference. [arxiv][video][code][project]) [bibtex] [pdf] ICRA'18 Best Vision Paper Award - Finalist
Challenges in Monocular Visual Odometry: Photometric Calibration, Motion Bias and Rolling Shutter Effect (N. Yang, R. Wang, X. Gao and D. Cremers), In IEEE Robotics and Automation Letters (RA-L) & Int. Conference on Intelligent Robots and Systems (IROS), volume 3, 2018. ([arxiv]) [bibtex] [doi] [pdf]
Preprints
2022
4Seasons: Benchmarking Visual SLAM and Long-Term Localization for Autonomous Driving in Challenging Conditions (P. Wenzel, N. Yang, R. Wang, N. Zeller and D. Cremers), In arXiv preprint arXiv:2301.01147, 2022. [bibtex] [arXiv:2301.01147]
Conference and Workshop Papers
2023
CASSPR: Cross Attention Single Scan Place Recognition (Y. Xia, M. Gladkova, R. Wang, Q. Li, U. Stilla, J. F. Henriques and D. Cremers), In IEEE International Conference on Computer Vision (ICCV), 2023. ([code]) [bibtex] [arXiv:2211.12542]
2021
SOE-Net: A Self-Attention and Orientation Encoding Network for Point Cloud based Place Recognition (Y. Xia, Y. Xu, S. Li, R. Wang, J. Du, D. Cremers and U. Stilla), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021. ([arxiv]) [bibtex] Oral Presentation
Tight Integration of Feature-based Relocalization in Monocular Direct Visual Odometry (M. Gladkova, R. Wang, N. Zeller and D. Cremers), In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2021. ([project page]) [bibtex] [arXiv:2102.01191]
2020
4Seasons: A Cross-Season Dataset for Multi-Weather SLAM in Autonomous Driving (P. Wenzel, R. Wang, N. Yang, Q. Cheng, Q. Khan, L. von Stumberg, N. Zeller and D. Cremers), In Proceedings of the German Conference on Pattern Recognition (GCPR), 2020. ([project page][arXiv][video]) [bibtex] [pdf]
Learning Monocular 3D Vehicle Detection without 3D Bounding Box Labels (L. Koestler, N. Yang, R. Wang and D. Cremers), In Proceedings of the German Conference on Pattern Recognition (GCPR), 2020. ([project page][video]) [bibtex] [pdf]
DH3D: Deep Hierarchical 3D Descriptors for Robust Large-Scale 6DoF Relocalization (J. Du, R. Wang and D. Cremers), In European Conference on Computer Vision (ECCV), 2020. ([project page][code][supplementary][arxiv]) [bibtex] [pdf] Spotlight Presentation
D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry (N. Yang, L. von Stumberg, R. Wang and D. Cremers), In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2020. [bibtex] [arXiv:2003.01060] [pdf] Oral Presentation
DirectShape: Photometric Alignment of Shape Priors for Visual Vehicle Pose and Shape Estimation (R. Wang, N. Yang, J. Stueckler and D. Cremers), In Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2020. ([video][presentation][project page][supplementary][arxiv]) [bibtex] [pdf]
2018
Deep Virtual Stereo Odometry: Leveraging Deep Depth Prediction for Monocular Direct Sparse Odometry (N. Yang, R. Wang, J. Stueckler and D. Cremers), In European Conference on Computer Vision (ECCV), 2018. ([arxiv][supplementary][project]) [bibtex] Oral Presentation
LDSO: Direct Sparse Odometry with Loop Closure (X. Gao, R. Wang, N. Demmel and D. Cremers), In International Conference on Intelligent Robots and Systems (IROS), 2018. ([arxiv][video][code][project]) [bibtex]
2017
Stereo DSO: Large-Scale Direct Sparse Visual Odometry with Stereo Cameras (R. Wang, M. Schwörer and D. Cremers), In International Conference on Computer Vision (ICCV), 2017. ([supplementary][video][arxiv][project]) [bibtex] [pdf]
