  
=== Visual Odometry ===
At ICCV 2011 we published a method for estimating the camera pose from RGB-D images.
In the video below, the Kinect camera moves through a static scene and the camera poses are accurately estimated.
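
At its core, such a method performs direct (photometric) alignment: every pixel of the first image is back-projected using its depth, warped by a candidate rigid-body motion, and projected into the second image, and the motion is chosen so that the intensity differences become minimal. The sketch below computes this photometric residual for a given pose; the intrinsics, the nearest-neighbour lookup, and all names are illustrative assumptions, not the published implementation. A full implementation would minimize the squared residual iteratively, e.g. with Gauss-Newton over a twist parametrization.

<code python>
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    # One 3D point per pixel, in the camera frame of the first image.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([(u - cx) * depth / fx,
                     (v - cy) * depth / fy,
                     depth], axis=-1)

def photometric_residual(gray0, depth0, gray1, T, fx, fy, cx, cy):
    # r(x) = I1(pi(T * pi^-1(x, Z0(x)))) - I0(x); an optimizer would
    # minimize sum(r**2) over the 4x4 rigid-body transform T.
    h, w = gray0.shape
    pts = backproject(depth0, fx, fy, cx, cy).reshape(-1, 3)
    pts = pts @ T[:3, :3].T + T[:3, 3]            # rigid warp into frame 1
    front = pts[:, 2] > 1e-6                      # keep points in front
    idx = np.flatnonzero(front)
    pts = pts[front]
    u = fx * pts[:, 0] / pts[:, 2] + cx           # project into image 1
    v = fy * pts[:, 1] / pts[:, 2] + cy
    inb = (u >= 0) & (u < w - 1) & (v >= 0) & (v < h - 1)
    ui = u[inb].astype(int)                       # nearest neighbour; a real
    vi = v[inb].astype(int)                       # implementation interpolates
    return gray1[vi, ui] - gray0.reshape(-1)[idx[inb]]
</code>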
  
  
=== Dense Mapping of large RGB-D Sequences ===
In our publication at ICCV 2013 we describe a method for the volumetric fusion of large RGB-D sequences. The video below shows the mesh visualization of our office floor, a scene computed from more than 24,000 RGB-D images captured with the Asus Xtion sensor. The reconstruction ran at more than 200 Hz on a GTX 680. The finest resolution was 5 mm, and the entire scene fit into approximately 2.5 GB of GPU RAM, including color.
<html><center><iframe width="640" height="360" src="//www.youtube.com/embed/J37f9vH-CPc" frameborder="0" allowfullscreen></iframe></center></html>
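
Underlying such volumetric fusion is a truncated signed distance function (TSDF): each voxel stores a clamped, weight-averaged distance to the nearest observed surface, and every new depth image updates that running average. The sketch below fuses one depth image into a plain dense voxel grid; the grid layout, truncation band, intrinsics, and all names are illustrative assumptions (a flat grid stands in for the multi-resolution structure that lets scenes like the one above fit into GPU memory).

<code python>
import numpy as np

def integrate(tsdf, weight, depth, T_wc, voxel_size, trunc, fx, fy, cx, cy):
    # Fuse one depth image into the (n,n,n) grids tsdf/weight, in place
    # (arrays assumed C-contiguous). T_wc is the 4x4 camera-to-world
    # pose, e.g. from the visual odometry above.
    ijk = np.indices(tsdf.shape).reshape(3, -1).T
    pw = (ijk + 0.5) * voxel_size                 # voxel centres, world frame
    T_cw = np.linalg.inv(T_wc)
    pc = pw @ T_cw[:3, :3].T + T_cw[:3, 3]        # ... into the camera frame
    front = pc[:, 2] > 1e-6
    flat = np.flatnonzero(front)
    pc = pc[front]
    u = np.round(fx * pc[:, 0] / pc[:, 2] + cx).astype(int)
    v = np.round(fy * pc[:, 1] / pc[:, 2] + cy).astype(int)
    h, w = depth.shape
    inb = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    flat = flat[inb]
    d = depth[v[inb], u[inb]]                     # measured depth on this ray
    sdf = d - pc[inb, 2]                          # signed distance to surface
    keep = (d > 0) & (sdf > -trunc)               # skip voxels far behind it
    sel = flat[keep]
    vals = np.clip(sdf[keep], -trunc, trunc) / trunc
    t, wgt = tsdf.reshape(-1), weight.reshape(-1)
    # Standard running weighted average of the truncated distances.
    t[sel] = (t[sel] * wgt[sel] + vals) / (wgt[sel] + 1.0)
    wgt[sel] += 1.0
</code>

A caller would allocate weight as a zero-filled (n,n,n) array and tsdf filled with 1.0 (fully truncated, i.e. nothing observed yet), then call integrate once per frame; the zero level set of the accumulated TSDF is the surface that the meshes in the videos are extracted from.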
While the method published at ICCV 2013 required a GPU to run in real-time, in our paper published at ICRA 2014 we demonstrated that the mapping part of dense volumetric RGB-D image fusion also works on a single standard CPU core at camera speed. Furthermore, we describe a method for incrementally extracting mesh surfaces from the volumetric data at approximately 1 Hz on a separate CPU core. In comparison to ray-casting visualization methods, surface meshes have the benefit that the visualization is view-independent, which makes this method well suited for transmitting the visualization from an embedded system to a base station. The video below demonstrates our method published at ICRA 2014.
  
<html><center><iframe width="640" height="360" src="//www.youtube.com/embed/7s9JePSln-M" frameborder="0" allowfullscreen></iframe></center></html>
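
The incremental extraction can be pictured as dirty-flagging: the fusion thread marks the bricks of the volume it has touched, and the meshing thread periodically re-meshes only those bricks, keeping the cached triangles of everything else, which is what bounds the work per pass. The sketch below illustrates that bookkeeping under illustrative assumptions (the brick size, the set/dict bookkeeping, and all names are mine); the per-brick meshing borrows scikit-image's marching_cubes rather than the paper's own implementation.

<code python>
import numpy as np
from skimage.measure import marching_cubes

BRICK = 16  # re-mesh the volume in 16^3-voxel bricks

def mark_dirty(dirty, updated_flat_indices, grid_shape):
    # Called after each fusion step with the flat indices of the voxels
    # it changed; flags every brick that contains one of them.
    ijk = np.stack(np.unravel_index(updated_flat_indices, grid_shape), axis=-1)
    for b in np.unique(ijk // BRICK, axis=0):
        dirty.add(tuple(b))

def remesh_dirty(tsdf, dirty, meshes):
    # Replace the cached mesh of each dirty brick; clean bricks keep
    # their triangles from earlier passes untouched.
    for b in list(dirty):
        lo = np.array(b) * BRICK
        hi = np.minimum(lo + BRICK + 1, tsdf.shape)  # 1-voxel overlap: seamless
        block = tsdf[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]
        try:
            verts, faces, _, _ = marching_cubes(block, level=0.0)
            meshes[b] = (verts + lo, faces)          # back to grid coordinates
        except (ValueError, RuntimeError):           # no surface in this brick
            meshes.pop(b, None)
        dirty.discard(b)
</code>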
