Multiple Instance Learning Tracking

Problem + Key Idea

  • problem: tracking an object given just an initial (detection) bounding box
  • MILtrack uses an adaptive appearance model, which models not only the object to track, but also the background
  • the key idea is to use the Multiple Instance Learning (MIL) paradigm, used before in areas such as object detection + object recognition
  • MIL means that during learning, (positive + negative) examples are presented in sets (bags) of image patches, where training labels are provided for sets, rather than individual instances (image patches)
  • a positive bag is assumed to contain at least one positive instance, whereas a negative bag contains only negative instances
  • to incorporate MIL into an online tracker, an online MIL version is needed
  • this was the first time an online MIL algorithm was presented
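The MIL labeling rule above can be stated in one line of code — a minimal illustration (the helper `bag_label` is my own name, not part of MILtrack); note that during training the instance labels are hidden and only the bag label is observed:

```python
def bag_label(instance_labels):
    """MIL rule: a bag is positive iff it contains at least one positive instance."""
    return int(any(y == 1 for y in instance_labels))

bag_label([0, 0, 1])  # positive bag -> 1
bag_label([0, 0, 0])  # negative bag -> 0
```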

Rough steps of the MILtrack algorithm

  • detection / tracker location update / applying the classifier to image patches:
    • for each new frame, a set of image patches around the current tracker location is cropped out
    • for each of these patches x, the classifier probability p(y=1|x) is computed, and the patch with the highest probability becomes the new tracker location
  • appearance model update:
    • two bags are cropped out: a positive bag of patches within radius r of the current tracker location, and a negative bag sampled from an annular region at distances between r and beta from it
    • the model is then updated using these two bags
  • since the features for each weak classifier must be picked a priori, Grabner et al. proposed maintaining a pool of M candidate weak stump classifiers from which the K weak classifiers are chosen
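The detection and bag-cropping steps above can be sketched as one tracking iteration. This is a hedged simplification: `classifier_prob` stands in for the learned appearance model p(y=1|x), `crop` is a hypothetical patch-extraction helper, and the radii and sample counts are illustrative defaults, not the paper's exact values.

```python
import numpy as np

def track_step(frame, loc, classifier_prob, crop, search_radius=25,
               r=4, beta=50, n_neg=65, rng=np.random.default_rng(0)):
    cx, cy = loc
    # 1) Tracker update: score every patch within the search radius
    #    and move to the most probable one.
    candidates = [(cx + dx, cy + dy)
                  for dx in range(-search_radius, search_radius + 1)
                  for dy in range(-search_radius, search_radius + 1)
                  if dx * dx + dy * dy <= search_radius ** 2]
    probs = [classifier_prob(crop(frame, c)) for c in candidates]
    new_loc = candidates[int(np.argmax(probs))]

    # 2) Bag cropping: positive bag within radius r of the new location,
    #    negative bag sampled from the annulus r < dist <= beta.
    nx, ny = new_loc
    pos_bag = [crop(frame, (nx + dx, ny + dy))
               for dx in range(-r, r + 1) for dy in range(-r, r + 1)
               if dx * dx + dy * dy <= r * r]
    neg_bag = []
    while len(neg_bag) < n_neg:
        dx, dy = rng.integers(-beta, beta + 1, size=2)
        if r * r < dx * dx + dy * dy <= beta * beta:
            neg_bag.append(crop(frame, (nx + dx, ny + dy)))
    return new_loc, pos_bag, neg_bag
```

With a toy "classifier" that simply prefers patches near a target point, the tracker location jumps to that point in one step.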


  • for the offline / batch version of MIL, MILBoost can be used to learn a classifier for image patches
  • MILBoost trains a boosting classifier that maximizes the (log) likelihood of bags
  • the probability of a bag being positive is expressed in terms of the probabilities of its instances being positive, using the so-called (Noisy-)OR model
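The Noisy-OR model gives p(bag positive) = 1 - prod_i (1 - p_i), where p_i = p(y_i=1|x_i) are the instance probabilities produced by the boosted classifier. A minimal sketch (assuming the p_i are already computed):

```python
import numpy as np

def bag_probability(instance_probs):
    """Noisy-OR: p(bag positive) = 1 - prod_i (1 - p_i)."""
    p = np.asarray(instance_probs, dtype=float)
    return 1.0 - np.prod(1.0 - p)

# A single confident instance already makes the bag probable,
# matching the MIL assumption that one positive instance suffices.
high = bag_probability([0.05, 0.9, 0.1])
low = bag_probability([0.05, 0.05, 0.05])
```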

Online Boosting

  • Oza developed an online variant of the popular AdaBoost (offline/batch) algorithm
  • for an incoming example x, each weak classifier is updated sequentially and the weight of example x is adjusted after each update
  • since the update formulas for the example weights and classifier weights in AdaBoost depend only on the errors of the weak classifiers, a running average of each weak classifier's error can be maintained to estimate both the example weight and the classifier weight in an online manner
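The running-average idea can be sketched as follows — a simplified, assumption-laden version of Oza's bookkeeping (the class name and smoothing constant are my own): each weak classifier keeps importance-weighted counts of correct and wrong predictions, from which its error and AdaBoost voting weight are estimated online.

```python
import math

class OnlineWeakStats:
    """Running error estimate for one weak classifier (sketch)."""

    def __init__(self):
        self.correct = 1e-6   # smoothed running sums of example weights
        self.wrong = 1e-6

    def update(self, predicted_ok, weight):
        # Accumulate the importance weight of the incoming example
        # into the correct or wrong bin, depending on the prediction.
        if predicted_ok:
            self.correct += weight
        else:
            self.wrong += weight

    @property
    def error(self):
        return self.wrong / (self.correct + self.wrong)

    @property
    def alpha(self):
        # AdaBoost voting weight derived from the running error estimate
        e = min(max(self.error, 1e-6), 1 - 1e-6)
        return 0.5 * math.log((1 - e) / e)
```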

Online Boosting for MIL

  • all weak classifiers in the pool (containing M classifiers) are updated in parallel
  • then K weak classifiers h are chosen from the candidate pool sequentially by maximizing the log likelihood of the bags
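The greedy selection step can be sketched under simplifying assumptions: `pool_scores[m][i]` is a precomputed real-valued score of pool classifier m on instance i (a stand-in for the updated weak classifiers), instance probabilities are sigmoids of the accumulated strong score, and bag probabilities follow the Noisy-OR model. Each of the K rounds picks the pool classifier whose addition maximizes the bag log-likelihood.

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def bag_loglik(strong_scores, bags, bag_labels):
    """Log-likelihood of the bags under the Noisy-OR model."""
    ll = 0.0
    for idx, y in zip(bags, bag_labels):
        p_inst = sigmoid(strong_scores[idx])
        p_bag = np.clip(1.0 - np.prod(1.0 - p_inst), 1e-12, 1 - 1e-12)
        ll += y * np.log(p_bag) + (1 - y) * np.log(1 - p_bag)
    return ll

def select_weak_classifiers(pool_scores, bags, bag_labels, K):
    M, n = pool_scores.shape
    strong = np.zeros(n)          # accumulated strong-classifier score
    chosen = []
    for _ in range(K):
        # Greedy step: try adding each pool classifier, keep the one
        # that maximizes the bag log-likelihood.
        lls = [bag_loglik(strong + pool_scores[m], bags, bag_labels)
               for m in range(M)]
        best = int(np.argmax(lls))
        chosen.append(best)
        strong = strong + pool_scores[best]
    return chosen
```

On a toy problem with one clearly discriminative pool classifier and one anti-correlated one, the greedy step picks the discriminative classifier first.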

Example Tracks

Here are three of my own experiments tracking objects / subjects with MILtrack:

Car / Toy tracking (video)

Bike tracking / KITTI dataset (video)

Football player tracking (video)

public/multiple_instance_learning_tracking.txt · Last modified: 2013/09/13 09:40 (external edit)