Abstracts - 2007
Blaise: A Toolkit for High-Performance Probabilistic Inference
Keith Bonawitz, Vikash Mansinghka & Beau Cronin
Blaise is a toolkit for high-performance probabilistic inference, implemented in Java. It provides efficient implementations of the algorithmic and representational primitives for the computations arising in probabilistic inference, along with means of composition that support easy incremental development of high-performance algorithms. Finally, Blaise is designed for easy interactive development with sophisticated visualization tools, so you can watch your computations unfold during development and debugging without sacrificing performance in production runs.
Development on Blaise has recently focused on the implementation of stochastic search processes such as Markov chain Monte Carlo (MCMC) and sequential Monte Carlo (particle filtering). Several other features are soon to be added, such as automatic parallelization for multicore processors and computing clusters, and inference schemes based on variational methods and message passing.
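To make the stochastic-search primitives above concrete, here is a minimal sketch of the core MCMC move, a random-walk Metropolis-Hastings sampler, in plain Java. This is an illustration only, not the Blaise API; the standard-normal target and unit proposal width are arbitrary choices for the example:

```java
import java.util.Random;

/** Minimal Metropolis-Hastings sketch: random-walk sampling from a
 *  standard normal target. Illustrative only; not the Blaise API. */
public class MetropolisSketch {

    /** Unnormalized log-density of the target (standard normal). */
    static double logTarget(double x) {
        return -0.5 * x * x;
    }

    /** Draw n (correlated) samples using a Gaussian random-walk proposal. */
    public static double[] sample(int n, Random rng) {
        double[] out = new double[n];
        double x = 0.0;                                // arbitrary starting state
        for (int i = 0; i < n; i++) {
            double proposal = x + rng.nextGaussian();  // symmetric proposal
            double logAccept = logTarget(proposal) - logTarget(x);
            if (Math.log(rng.nextDouble()) < logAccept) {
                x = proposal;                          // accept the move
            }                                          // else keep the current state
            out[i] = x;
        }
        return out;
    }

    public static void main(String[] args) {
        double[] xs = sample(10_000, new Random(42));
        double mean = 0.0;
        for (double v : xs) mean += v;
        System.out.println("sample mean = " + mean / xs.length);
    }
}
```

Because the proposal is symmetric, the Hastings correction cancels and only the log-density ratio appears in the acceptance test; the same skeleton generalizes to the richer proposal and state types that a toolkit like Blaise composes.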
Blaise in Use
Blaise has been presented at NIPS 2006 [1,2] as well as in various forums around MIT. The presentations included live coding demos and generated significant interest in the community. Furthermore, several research projects either currently use Blaise or are discussing migrating to it. For example, the NIPS 2006 paper “Learning annotated hierarchies from relational data” [3] was implemented using Blaise. Cronin is currently implementing a Matlab toolbox, built on Blaise, that will allow routine analysis of many types of neurophysiological data using hierarchical, generative models. An early version of this toolbox has been successfully applied to neuronal recordings collected under several experimental paradigms in visual neuroscience, and this work is currently under review. Finally, Bonawitz and Mansinghka have been using Blaise for tasks ranging from GenVis, a computational vision project focused on directly inverting a 3D-graphics-based generative model for images using parallel-tempered reversible-jump MCMC, to exploring novel hybridizations of particle filtering and MCMC for inference in non-parametric relational models such as the Infinite Relational Model [4] and CrossCat [5].
Blaise represents novel technology in three respects. First, to the best of our knowledge, it is the only inference system that integrates advanced stochastic search primitives in a fully generic way. For example, every MCMC search built from this toolkit can benefit from advanced techniques such as parallel tempering while maintaining efficiency and requiring almost no additional code. Second, Blaise provides primitives for the development of inference algorithms over structured probabilistic domains, in contrast to existing toolkits, which either emphasize inference for continuous parameters or top out, in terms of structure, at Bayes nets. For example, these toolkits do not make it easy to build inference schemes for common structured models (such as collapsed Gibbs sampling for the LDA family, or Gibbs sampling for Hierarchical Dirichlet Process mixtures) out of standard pieces. Finally, the primitives in Blaise emphasize the algorithmic commonalities between a variety of very different inference strategies. This architecture supports the creation of novel hybrid inference algorithms, such as the use of advanced MCMC techniques for optimizing the objective functions of variational inference.
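As a hedged sketch of the parallel-tempering idea mentioned above (plain Java, not the Blaise implementation): several chains sample tempered versions exp(log p(x)/T) of the target, and adjacent chains occasionally propose to swap states, letting hot chains carry the cold chain across modes. The bimodal target and temperature ladder below are illustrative assumptions:

```java
import java.util.Random;

/** Parallel-tempering sketch: chains at temperatures T sample the
 *  tempered target exp(logTarget(x) / T), and adjacent chains swap
 *  states. Illustrative only; not the Blaise implementation. */
public class TemperingSketch {

    /** Unnormalized log-density of a bimodal target (two normals at +/-3). */
    static double logTarget(double x) {
        double a = Math.exp(-0.5 * (x - 3) * (x - 3));
        double b = Math.exp(-0.5 * (x + 3) * (x + 3));
        return Math.log(a + b);
    }

    /** One random-walk Metropolis step on the tempered density. */
    static double step(double x, double temp, Random rng) {
        double prop = x + rng.nextGaussian();
        double logAccept = (logTarget(prop) - logTarget(x)) / temp;
        return Math.log(rng.nextDouble()) < logAccept ? prop : x;
    }

    /** Run all chains and return the cold chain's trajectory. */
    public static double[] run(int n, double[] temps, Random rng) {
        double[] states = new double[temps.length];   // one state per chain
        double[] cold = new double[n];
        for (int i = 0; i < n; i++) {
            for (int c = 0; c < temps.length; c++) {
                states[c] = step(states[c], temps[c], rng);
            }
            // Propose swapping one random adjacent pair of chains.
            int c = rng.nextInt(temps.length - 1);
            double logSwap = (1.0 / temps[c] - 1.0 / temps[c + 1])
                           * (logTarget(states[c + 1]) - logTarget(states[c]));
            if (Math.log(rng.nextDouble()) < logSwap) {
                double tmp = states[c];
                states[c] = states[c + 1];
                states[c + 1] = tmp;
            }
            cold[i] = states[0];                      // chain 0 is at T = 1
        }
        return cold;
    }

    public static void main(String[] args) {
        double[] cold = run(20_000, new double[] {1.0, 2.0, 4.0}, new Random(7));
        int left = 0, right = 0;
        for (double x : cold) {
            if (x < -1) left++;
            if (x > 1) right++;
        }
        System.out.println("left-mode visits: " + left + ", right-mode visits: " + right);
    }
}
```

The swap acceptance ratio follows from detailed balance on the joint tempered target; because the per-chain step and the swap move share the same Metropolis skeleton, a generic toolkit can offer tempering as a reusable wrapper around any existing chain, which is the property claimed for Blaise above.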
[1] Keith Bonawitz and Vikash Mansinghka. Blaise: A System for Interactive Development of High Performance Inference Algorithms. Demonstration at the Neural Information Processing Systems Conference, Vancouver, BC, Canada, December 4, 2006.
[2] Keith Bonawitz and Vikash Mansinghka. Blaise: A Toolkit for High-Performance Probabilistic Inference. Presentation and demonstration at the Workshop on Machine Learning Open Source Software 2006, Neural Information Processing Systems Conference Workshops, Whistler, BC, Canada, December 9, 2006.
[3] Daniel M. Roy, Charles Kemp, Vikash Mansinghka, and Joshua B. Tenenbaum. Learning Annotated Hierarchies from Relational Data. In Advances in Neural Information Processing Systems 19. Whistler, BC, Canada, December 2006.
[4] Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. Learning Systems of Concepts with an Infinite Relational Model. In Proceedings of the Twenty-First National Conference on Artificial Intelligence (AAAI '06). Boston, MA, July 2006.
[5] Patrick Shafto, Charles Kemp, Vikash Mansinghka, and Joshua B. Tenenbaum. Learning cross-cutting systems of categories. In Proceedings of the Twenty-Eighth Annual Meeting of the Cognitive Science Society. Vancouver, BC, Canada, July 2006.