====== LSD-SLAM: Large-Scale Direct Monocular SLAM ======
{{:research:semidense:semidenseexamples.png?500|}}

Contact: [[members:engelj|Jakob Engel]]

In recent years - in particular with the advent of commodity depth cameras - **dense methods** for camera tracking and 3D reconstruction have quickly gained popularity. Instead of abstracting the images to a set of feature observations (such as corners, blobs or line segments), they operate directly on the raw images. This allows them to exploit all information in the image and to achieve highly accurate tracking and reconstruction results in real-time.
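
To make the idea of a direct method concrete, below is a minimal Python/NumPy sketch of the photometric residual such methods minimize: reference pixels are back-projected with their current inverse-depth estimates, warped into the new frame, and compared intensity by intensity. The function name, the pinhole model and the nearest-neighbour lookup are illustrative assumptions for this page, not the actual LSD-SLAM implementation.

<code python>
import numpy as np

def photometric_residuals(I_ref, I_new, pts, inv_depth, K, R, t):
    """Photometric error of reference pixels warped into the new image.

    I_ref, I_new : (H, W) grayscale images
    pts          : (N, 2) integer pixel coordinates (u, v) in the reference image
    inv_depth    : (N,) current inverse-depth estimates for those pixels
    K            : (3, 3) camera intrinsics; (R, t) : rigid-body motion
    """
    u, v = pts[:, 0], pts[:, 1]
    # Back-project each pixel to a 3D point using its inverse-depth estimate.
    rays = np.linalg.inv(K) @ np.stack([u, v, np.ones_like(u)]).astype(float)
    points = rays / inv_depth
    # Transform into the new camera frame and project with the pinhole model.
    p = K @ (R @ points + t[:, None])
    u_new = np.clip(np.round(p[0] / p[2]).astype(int), 0, I_new.shape[1] - 1)
    v_new = np.clip(np.round(p[1] / p[2]).astype(int), 0, I_new.shape[0] - 1)
    # r_i = I_ref(x_i) - I_new(warp(x_i)); a real system would interpolate
    # sub-pixel intensities and discard out-of-bounds projections instead
    # of clipping them.
    return I_ref[v, u] - I_new[v_new, u_new]
</code>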

We aim at developing similar methods for ordinary **monocular cameras**, where no direct depth measurements are available.
While the absence of direct depth measurements makes the problem more complex, it also has important benefits:
  * The inherent **scale-ambiguity** allows using image regions at any depth (the range of RGB-D cameras is very limited). Especially outdoors, where typically a large portion of the scene is far away, this is very beneficial.
  * A monocular camera is significantly **smaller, lighter and cheaper** than a stereo or RGB-D camera, and hence better suited as a sensor for mobile or robotic applications.

<html><center></html>{{:research:semidense:semidensevskeypoints.png?direct&640|}}<html></center></html>

====== Semi-Dense Visual Odometry ======
The proposed **semi-dense** visual odometry / SLAM method **uses all image regions that carry information**, i.e., those with a non-negligible intensity gradient. In contrast to keypoint-based methods, this includes regions with a gradient in only one direction (edges). The statistical depth-map representation and the omission of intensity-uniform image regions allow it to run in **real-time on a CPU**.
In addition to highly accurate camera odometry, it provides accurate, semi-dense depth maps of the environment, which can be valuable, e.g., for robot navigation.
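
As an illustration of this selection criterion, the following NumPy sketch marks the pixels a semi-dense method would keep. The threshold value and the function name are assumptions for this page, not taken from the released system.

<code python>
import numpy as np

def semidense_mask(image, grad_threshold=5.0):
    """Select all pixels with a non-negligible intensity gradient.

    Unlike a keypoint detector, this keeps every informative region,
    including edges whose gradient points in only one direction.
    """
    # Central-difference gradients along image rows and columns.
    gy, gx = np.gradient(image.astype(float))
    grad_mag = np.hypot(gx, gy)
    # Depth is estimated only where the mask is True; skipping the
    # intensity-uniform remainder is what keeps the method real-time on a CPU.
    return grad_mag > grad_threshold
</code>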

<html><center><iframe width="640" height="440" src="//www.youtube.com/embed/LZChzEcLNzI" frameborder="0" allowfullscreen></iframe></center></html>

====== Related publications ======
<bibtex>
<keywords>semidense</keywords>
</bibtex>