Fast Algorithms for Large Scale Conditional 3D Prediction
Toyota Technological Institute at Chicago, USA
Institute for Numerical Simulation, University of Bonn, Germany
Atul Kanaujia and Dimitris Metaxas
Computer Science, Rutgers University
Abstract: The potential success of discriminative learning approaches to 3D reconstruction relies on the ability to efficiently train predictive algorithms using sufficiently many examples that are representative of the typical configurations encountered in the application domain. Recent research indicates that sparse conditional Bayesian Mixture of Experts (cMoE) models (e.g. BME [2-3]) are adequate modeling tools that not only provide contextual 3D predictions for problems like human pose reconstruction, but can also represent multiple interpretations that result from depth ambiguities or occlusion. However, training conditional predictors requires sophisticated double-loop algorithms that scale unfavorably with the input dimension and the training set size, which has so far limited their usage to about 10,000 examples or fewer. In this paper we present large-scale algorithms, referred to as fBME, that combine forward feature selection and bound optimization in order to train probabilistic BME models with one order of magnitude more data (100,000 examples and up), more than one order of magnitude faster. We present several large-scale experiments, including monocular evaluation on the HumanEva dataset, demonstrating how the proposed methods overcome the scaling limitations of existing ones.
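To make the model class concrete: a conditional Mixture of Experts predicts an output (e.g. a 3D pose) as a gate-weighted combination of expert predictors, where input-dependent gates let different experts capture different interpretations (such as the depth ambiguities mentioned above). The fBME package itself is written in Matlab; the following is only a hypothetical NumPy sketch of the prediction rule for a cMoE with linear experts and softmax gates, with all parameter values illustrative rather than taken from the package.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D array of gate scores.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def cmoe_predict(x, gate_W, expert_W):
    """Prediction with a conditional Mixture of Experts (illustrative).

    x:        (d,)      input feature vector
    gate_W:   (K, d)    gating parameters; g_k(x) = softmax(gate_W @ x)_k
    expert_W: (K, m, d) linear expert parameters; expert k outputs expert_W[k] @ x

    Returns the gate weights, the per-expert predictions (the multiple
    interpretations), and the gate-weighted mixture mean.
    """
    gates = softmax(gate_W @ x)                  # (K,) distribution over experts
    preds = np.einsum('kmd,d->km', expert_W, x)  # (K, m) per-expert outputs
    mean = gates @ preds                         # (m,) mixture mean
    return gates, preds, mean

# Toy example: 2 experts, 3-d input, 2-d output (all values synthetic).
rng = np.random.default_rng(0)
gate_W = rng.standard_normal((2, 3))
expert_W = rng.standard_normal((2, 2, 3))
x = rng.standard_normal(3)
gates, preds, mean = cmoe_predict(x, gate_W, expert_W)
```

In ambiguous cases one would typically keep the per-expert predictions `preds` as competing hypotheses rather than collapsing them into the single mean.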
Description: fBME is a package for fast training of Bayesian Mixture of Experts models.
Requirement: Matlab 7.6.0.
Download: [code]. This package is free for academic use. Use it at your own risk.