Date: December 10th
Location: Whistler, B.C.
Important dates:
- October 29 -- Paper submission deadline
- November 8 -- Notification of acceptance
The bottleneck in many complex prediction problems is the prohibitive cost of inference or search at test time. Examples include structured problems such as object detection and segmentation, natural language parsing and translation, as well as standard classification with kernelized or otherwise costly features, or with a very large number of classes. As we consider models of increasing complexity, these problems present a fundamental trade-off between approximation error (bias) and inference or search error due to computational constraints. This trade-off is much less understood than the traditional approximation/estimation (bias/variance) trade-off, but it is constantly encountered in machine learning applications.
The primary aim of this workshop is to formally explore this trade-off and to unify a variety of recent approaches, broadly described as coarse-to-fine methods, that explicitly learn to control it. Unlike approximate inference algorithms, coarse-to-fine methods typically perform exact inference in a coarsened or reduced output space that is then iteratively refined. They have been used with great success in specific applications in computer vision (e.g., face detection) and natural language processing (e.g., parsing, machine translation). However, coarse-to-fine methods have not been studied and formalized as a general machine learning problem, so many natural theoretical and empirical questions remain unposed: When will such methods succeed? What is the fundamental theory linking these applications? What formal guarantees can be given?
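To make the idea concrete, here is a minimal, purely illustrative sketch (not taken from any workshop submission) of coarse-to-fine search on a toy one-dimensional problem: exhaustive "exact" search over a coarse discretization of the space, followed by a second exact search restricted to the winning coarse cell. The function, grid sizes, and helper name are all hypothetical choices for illustration.

```python
# Toy coarse-to-fine search: minimize a 1-D function by exact search on a
# coarse grid, then refine only within the cell around the coarse winner.
def coarse_to_fine_argmin(f, lo, hi, coarse=10, fine=100):
    """Return an approximate minimizer of f on [lo, hi]."""
    # Stage 1: exact search over a coarse discretization of the space.
    step = (hi - lo) / coarse
    coarse_pts = [lo + i * step for i in range(coarse + 1)]
    best = min(coarse_pts, key=f)
    # Stage 2: exact search again, restricted to the winning coarse cell.
    lo2, hi2 = max(lo, best - step), min(hi, best + step)
    fstep = (hi2 - lo2) / fine
    fine_pts = [lo2 + i * fstep for i in range(fine + 1)]
    return min(fine_pts, key=f)

# Example: minimize (t - 0.37)^2 on [0, 1] with ~110 evaluations
# instead of the ~1000 a uniformly fine grid would require.
x = coarse_to_fine_argmin(lambda t: (t - 0.37) ** 2, 0.0, 1.0)
```

The sketch evaluates far fewer candidates than a uniformly fine grid while retaining exact (exhaustive) search at each stage; real coarse-to-fine systems apply the same pattern to structured output spaces, where the coarsening and refinement are themselves learned.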