===== Abstract =====
**Stereo DSO** is a novel method for highly accurate real-time visual odometry estimation in large-scale environments from stereo cameras. It jointly optimizes all model parameters within the active window, including the intrinsic/extrinsic camera parameters of all keyframes and the depth values of all selected pixels. In particular, it integrates constraints from static stereo into the bundle adjustment pipeline of temporal multi-view stereo. Real-time optimization is achieved by sampling pixels uniformly from image regions with sufficient intensity gradient. The fixed stereo baseline resolves scale drift. It also reduces the sensitivity to large optical flow and to rolling shutter effects, which are known shortcomings of direct image alignment methods. Quantitative evaluation demonstrates that the proposed Stereo DSO outperforms existing state-of-the-art visual odometry methods in terms of both tracking accuracy and robustness. Moreover, our method delivers a more precise metric 3D reconstruction than previous dense/semi-dense direct approaches while providing a higher reconstruction density than feature-based methods.
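
To make the structure of this joint optimization concrete, the following Python sketch evaluates a simplified version of such a photometric energy: temporal residuals between a host keyframe and the other keyframes in the active window, plus a static stereo residual within each stereo pair. It is only an illustration under a pinhole camera model; the function and parameter names (''project'', ''lambda_static'') are our own, and the actual system additionally handles affine brightness changes, robust (Huber) weighting and pattern-based residuals.

<code python>
# Minimal sketch (not the actual Stereo DSO implementation): combines temporal
# multi-view photometric residuals with static-stereo residuals in one energy.
import numpy as np

def project(K, T_target_host, p_host, inv_depth):
    """Warp a pixel from the host frame into a target frame (pinhole model)."""
    x = np.linalg.inv(K) @ np.array([p_host[0], p_host[1], 1.0]) / inv_depth
    x_t = T_target_host[:3, :3] @ x + T_target_host[:3, 3]
    u = K @ (x_t / x_t[2])
    return u[:2]

def photometric_energy(points, frames, stereo_pairs, K, lambda_static=1.0):
    """Sum of squared intensity differences over temporal and static stereo observations."""
    E = 0.0
    for pt in points:                    # pt: host image, pixel coords, inverse depth
        I_host = pt["host_image"]
        i_ref = I_host[pt["v"], pt["u"]]
        # temporal residuals: host keyframe -> other keyframes in the window
        for T, I_target in frames:       # (pose of host in target frame, target image)
            u, v = project(K, T, (pt["u"], pt["v"]), pt["inv_depth"])
            E += (I_target[int(v), int(u)] - i_ref) ** 2
        # static stereo residual: left -> right image of the same stereo frame
        T_rl, I_right = stereo_pairs[pt["frame_id"]]
        u, v = project(K, T_rl, (pt["u"], pt["v"]), pt["inv_depth"])
        E += lambda_static * (I_right[int(v), int(u)] - i_ref) ** 2
    return E
</code>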

===== Download =====
- The estimated camera trajectories for all sequences of the KITTI odometry benchmark: {{:research:vslam:stereo-dso:KITTI_odometry_training.zip|training (00-10)}}, {{:research:vslam:stereo-dso:KITTI_odometry_testing.zip|testing (11-21)}}.\\
- Paper: {{:research:vslam:stereo-dso:wang2017stereodso.pdf|Paper}}, supplementary document: {{:research:vslam:stereo-dso:wang2017stereodso-supp.pdf|Supplementary Document}}.
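
The trajectory files above can be parsed in a few lines if they follow the standard KITTI odometry convention, i.e. one pose per line given as the 12 row-major entries of a 3x4 matrix [R|t] mapping that frame's camera into the frame of the first camera. The sketch below is written under that assumption; the file name in the usage comment is only an example.

<code python>
# Sketch of a loader for KITTI-odometry-style trajectory files.
import numpy as np

def load_kitti_trajectory(path):
    """Return an (N, 4, 4) array of homogeneous camera-to-world poses."""
    data = np.loadtxt(path).reshape(-1, 3, 4)      # N x 3 x 4, row-major [R|t]
    poses = np.tile(np.eye(4), (data.shape[0], 1, 1))
    poses[:, :3, :4] = data
    return poses

# Example usage (file name is illustrative):
# poses = load_kitti_trajectory("00.txt")
# positions = poses[:, :3, 3]                      # camera centers along the trajectory
</code>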
  
===== Results =====
{{:research:vslam:dso:rs.png?350&nolink|}}
  
For qualitative results, we run our method on all sequences of the training set and compare the estimated camera trajectories to the provided ground truth. Below are the results on some example sequences. **All estimated camera trajectories can be downloaded here: {{:research:vslam:stereo-dso:camera_trajectories.zip|Camera Trajectories}}**.
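
As a concrete example of such a comparison, the sketch below computes a simple translational RMSE between an estimated and a ground-truth trajectory after anchoring both at their first pose. This is only a rough proxy and not the official KITTI metric, which averages relative translational and rotational errors over path segments of fixed lengths; ''load_kitti_trajectory'' refers to the loader sketched in the Download section.

<code python>
# Sketch of a simple trajectory comparison (translational RMSE, first pose as anchor).
import numpy as np

def translational_rmse(est, gt):
    """est, gt: (N, 4, 4) arrays of camera-to-world poses over the same frames."""
    est_rel = np.linalg.inv(est[0]) @ est          # express both trajectories
    gt_rel = np.linalg.inv(gt[0]) @ gt             # relative to their first pose
    diff = est_rel[:, :3, 3] - gt_rel[:, :3, 3]    # per-frame position error
    return float(np.sqrt(np.mean(np.sum(diff ** 2, axis=1))))
</code>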
  
{{:research:vslam:dso:00.png?350&nolink|}}
