Pose Tracking
Motivation
3-D pose tracking is the task of following the 3-D position and orientation of a known object through a video sequence. It is required in a multitude of applications, such as self-localisation and object grasping in robotics, motion capture, or human-computer interaction.
The images above show the input data required by the tracking approach
developed in our group: images from one or several calibrated cameras
(left), a 3-D model of the object to be tracked (middle), and an
initial, possibly inaccurate, pose estimate for the first frame
(right). Here, we illustrate the estimated 3-D pose by projecting the
object back into the image.
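The input setup described above can be sketched in a few lines of code. The following is a minimal illustration, not the group's actual implementation: it assumes a simple pinhole camera with intrinsics K and a rigid-body pose given by a rotation R and translation t, and projects a 3-D model (here just a toy point set) back into the image. All names and numeric values are assumptions chosen for illustration.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3-D model points into the image under pose (R, t)."""
    cam = points_3d @ R.T + t          # world -> camera coordinates
    pix = cam @ K.T                    # apply camera intrinsics
    return pix[:, :2] / pix[:, 2:3]    # perspective division

# Toy calibration: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                          # identity rotation
t = np.array([0.0, 0.0, 5.0])          # object 5 units in front of the camera

model = np.array([[0.0, 0.0, 0.0],     # toy "3-D model": two points
                  [1.0, 0.0, 0.0]])
print(project_points(model, K, R, t))  # the origin maps to the principal point
```

With the identity rotation, the model origin lands exactly on the principal point (320, 240), which is a quick sanity check for any such projection routine.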
Our Contributions
The 3-D pose tracking work in our group is performed in collaboration with Bodo Rosenhahn and Thomas Brox.
Region-based Pose Tracking
As a first step, we extended the approach in [1], which performed 3-D pose tracking with the help of segmentation, to a segmentation-free pose tracking approach. The new, region-based tracking approach [2] is not only faster than the approach in [1], it also achieves improved results. While this initial contribution only worked for rigid objects, we later extended it to kinematic chains [3].
Tracking of Multiple Objects
Since the appearance of the tracked object is not given in advance and
might change during the sequence, it must be estimated while tracking.
This estimation is disturbed by partial occlusions. Although the
initial algorithm [1] can handle partial occlusions to some extent,
pose tracking is bound to fail once the occluded part of the object
becomes too large.
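The role of the appearance statistics in region-based tracking can be illustrated with a toy sketch: a candidate silhouette (i.e. the projection of the model under a candidate pose) is scored by how well the pixels inside it fit an object appearance model and the pixels outside it fit a background model. The 1-D Gaussian intensity models and the simple additive score below are illustrative assumptions, not the actual energy minimised in the papers.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log-density of a 1-D Gaussian; stands in for the appearance model.
    return -0.5 * np.log(2.0 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

def silhouette_score(image, mask, obj_model, bg_model):
    # Higher is better: pixels inside the silhouette should fit the
    # object statistics, pixels outside the background statistics.
    inside = gaussian_logpdf(image[mask], *obj_model).sum()
    outside = gaussian_logpdf(image[~mask], *bg_model).sum()
    return inside + outside

# Toy scene: a bright 4x4 object on a dark background.
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0
correct = np.zeros((8, 8), dtype=bool)
correct[2:6, 2:6] = True                 # silhouette from the correct pose
shifted = np.roll(correct, 3, axis=1)    # silhouette from a wrong pose
obj_model, bg_model = (1.0, 0.1), (0.0, 0.1)
print(silhouette_score(img, correct, obj_model, bg_model) >
      silhouette_score(img, shifted, obj_model, bg_model))  # prints True
```

The sketch also hints at why occlusions are harmful: an occluder's pixels corrupt the object statistics estimated inside the silhouette, which degrades the score for the true pose.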
Including Scene Knowledge
For some scenes, prior knowledge about the tracked object or the scene is available and can be used to improve the tracking results. If there is a known ground plane, for example, the tracked object obviously cannot pass through it. In [5], it was demonstrated that exploiting this knowledge leads to improved results. Similarly, other constraints can be incorporated: examples include a cyclist whose feet must stay on the pedals and a snowboarder whose feet have to stay on the snowboard, as demonstrated in [6].
Dealing with Self-Occlusions
When tracking kinematic chains, occlusions
can be due to the fact that some parts of the chain occlude other
parts. With the purely silhouette-based approach explained so far, such
occlusions cannot be handled appropriately. Thus, in [7] we extended
the method to use multiple internal regions to cope with
self-occlusions.
Improved Handling of the Background Region
The approach described in [7]
divides the object into multiple internal regions. This way, different
object regions can be distinguished. In [8], we extended
this approach such that the background is also divided into several regions.
This allows a better approximation of the background region, which can
significantly improve the tracking results.
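The benefit of several background regions can be seen in a small numerical sketch: a single Gaussian fitted to a bimodal background (say, a dark floor and a bright wall) explains the pixels poorly, whereas letting each pixel pick the better of two region-specific models yields a much higher likelihood. The pixel values and the Gaussian appearance models below are illustrative assumptions only.

```python
import numpy as np

def gaussian_logpdf(x, mean, var):
    # Log-density of a 1-D Gaussian; stands in for the appearance model.
    return -0.5 * np.log(2.0 * np.pi * var) - 0.5 * (x - mean) ** 2 / var

bg_pixels = np.array([0.0, 0.05, 0.95, 1.0])     # bimodal background

# One global model fitted to all background pixels.
single = gaussian_logpdf(bg_pixels, bg_pixels.mean(), bg_pixels.var()).sum()

# Two region models, one per mode; each pixel picks the better one.
models = [(0.025, 0.01), (0.975, 0.01)]
multi = np.maximum.reduce(
    [gaussian_logpdf(bg_pixels, m, v) for m, v in models]).sum()

print(multi > single)  # prints True
```

A background model that fits its pixels better makes the inside/outside decision at the silhouette sharper, which is the mechanism behind the improved tracking results.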
Publications
Demo
For a demonstration of tracking results, click one of the following images:
MIA Group