Monday, June 24, 2013

Fast Matrix Multiply and ML

Being an impatient person, I've always tried to make my code run faster. In my experience, the core computational bottleneck in most ML algorithms is one of the following:
  1. (Dense/sparse) matrix-vector product
  2. (Dense/sparse) matrix-dense matrix product
For example, consider a binary classification or regression task with N examples and dimension P. The computational bottleneck (for both training and testing) is the product of the data matrix X [N x P] (sparse or dense, depending on the data) with the parameter vector w [P x 1]. In a multi-class classification task with K classes, the bottleneck is instead the product of X with the parameter matrix W [P x K]. A sketch of both products follows.
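To make this concrete, here is a minimal sketch of both products in NumPy/SciPy. The shapes N, P, K and the 1% sparsity are made-up values for illustration, not taken from any particular dataset.

```python
import numpy as np
import scipy.sparse as sp

N, P, K = 10000, 1000, 10  # examples, features, classes (illustrative)

X_dense = np.random.rand(N, P)                           # dense data matrix [N x P]
X_sparse = sp.random(N, P, density=0.01, format="csr")   # sparse data matrix [N x P]

w = np.random.rand(P)     # parameter vector [P x 1]
W = np.random.rand(P, K)  # parameter matrix [P x K]

# 1. Matrix-vector product: the bottleneck in binary classification/regression.
scores = X_dense @ w        # dense matvec -> [N]
scores_sp = X_sparse @ w    # sparse matvec, touches only the nonzeros

# 2. Matrix-matrix product: the bottleneck with K classes.
multi = X_dense @ W         # dense matmul -> [N x K]
multi_sp = X_sparse @ W     # sparse x dense -> dense [N x K]
```

The work in the sparse versions scales with the number of nonzeros rather than N x P, so at 1% density the sparse matvec does roughly 100x fewer multiply-adds than the dense one.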