OptiML is an embedded domain-specific language (DSL) for machine learning, developed as a research project at Stanford University's Pervasive Parallelism Laboratory (PPL).
OptiML currently targets machine learning researchers and algorithm developers; it aims to provide a productive, high-performance, MATLAB-like environment for linear algebra, supplemented with machine-learning-specific abstractions. Our primary goal is to let machine learning practitioners write code in a highly declarative manner while still achieving high performance on a variety of parallel, heterogeneous devices. The same OptiML program should run well and scale on a CMP (chip multiprocessor), a GPU, a combination of CMPs and GPUs, clusters of CMPs and GPUs, and eventually even FPGAs and other specialized accelerators.
In particular, OptiML is designed to make statistical inference algorithms that fit the Statistical Query Model both easy to write and fast to execute. Such algorithms can be expressed in a summation form and parallelized using fine-grained map-reduce operations. OptiML applies aggressive optimizations that eliminate unnecessary memory allocations and fuse operations together to make these computations as fast as possible. OptiML also specializes implementations to particular hardware devices wherever it can to achieve the best performance.
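To illustrate the idea, here is a minimal sketch in plain Python (not OptiML syntax) of how a Statistical Query Model computation reduces to a map followed by a reduce. The least-squares example, the `sqm_sum` helper, and all data values are illustrative assumptions, not part of OptiML's API.

```python
# A Statistical Query Model computation is a sum over per-example terms,
# so it maps directly onto a map (evaluate the term for each example)
# followed by an associative reduce (sum the terms). Each stage can be
# parallelized independently, and a compiler like OptiML's can fuse the
# two into a single pass over the data.
from functools import reduce

def sqm_sum(data, term):
    # map: evaluate the query term for each example
    terms = map(term, data)
    # reduce: combine the per-example terms associatively
    return reduce(lambda a, b: a + b, terms, 0.0)

# Hypothetical example: the gradient of 1-D linear least squares,
# sum_i (w * x_i - y_i) * x_i, expressed as one fused map-reduce.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]
w = 2.0
grad = sqm_sum(data, lambda xy: (w * xy[0] - xy[1]) * xy[0])
```

Because the reduction is associative, the per-example terms can be computed in any order and combined in a tree, which is what makes the summation form amenable to CMPs, GPUs, and clusters alike.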
- 2014 Jun 30 OptiML binary distribution 0.3.4-alpha released
- 2014 Mar 11 OptiML binary distribution 0.3.3-alpha released
- 2013 Oct 21 OptiML binary distribution 0.3.2-alpha released
- 2012 Sep 6 OptiML spec 0.2 released
- 2012 Aug 20 New examples and FAQ added
- 2012 Feb 16 OptiML alpha 0.1 released