See the README in each package for installation and usage.
1. Generalized Conditional Gradient (GCG)
GCG is an open-source Matlab solver for gauge (norm) regularized problems, which arise commonly in sparse coding and compressive sensing. Example applications include matrix completion, dictionary learning, and structured sparse estimation.
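The core idea behind conditional gradient methods can be illustrated on a simple norm-constrained least-squares problem. This is a minimal sketch in Python (not the Matlab package itself), assuming the classic Frank–Wolfe step size and an l1-ball constraint as the gauge:

```python
import numpy as np

def frank_wolfe_l1(A, b, tau, iters=200):
    """Minimize 0.5*||Ax - b||^2 over the l1-ball of radius tau
    via the conditional gradient (Frank-Wolfe) method."""
    n = A.shape[1]
    x = np.zeros(n)
    for k in range(iters):
        grad = A.T @ (A @ x - b)        # gradient of the smooth loss
        i = np.argmax(np.abs(grad))     # linear oracle: best extreme point of the l1-ball
        s = np.zeros(n)
        s[i] = -tau * np.sign(grad[i])
        gamma = 2.0 / (k + 2)           # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x
```

Each iteration only calls a linear oracle over the constraint set, which is what makes this family of methods cheap for large sparse problems.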
2. Smoothing for Multivariate Scores (SMS)
SMS is an open-source, extensible, and scalable convex solver for machine learning problems cast as regularized risk minimization. It is particularly advantageous for optimizing multivariate performance measures. The implementation is extensible because the problem-specific loss-function modules are encapsulated behind a common interface to the main optimizer, so solvers for new problems are simple to incorporate.
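The loss-module design described above can be sketched as follows. This is an illustrative Python sketch, not the package's actual API; the `Loss` interface, `HingeLoss` module, and `train` driver are hypothetical names:

```python
import numpy as np

class Loss:
    """Common interface every problem-specific loss module implements."""
    def value_and_grad(self, w, X, y):
        raise NotImplementedError

class HingeLoss(Loss):
    """Example module: binary hinge loss (a hypothetical plug-in)."""
    def value_and_grad(self, w, X, y):
        margins = 1 - y * (X @ w)
        active = margins > 0
        value = margins[active].sum()
        grad = -(X[active].T @ y[active])
        return value, grad

def train(loss, X, y, lam=0.1, lr=0.01, iters=100):
    """Generic optimizer: only talks to the loss through the interface."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        v, g = loss.value_and_grad(w, X, y)
        w -= lr * (g + lam * w)   # gradient step on the regularized risk
    return w
```

Because the optimizer depends only on `value_and_grad`, adding a new multivariate loss means writing one new module and nothing else.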
3. Convex Subspace Learning
This Matlab package implements the convex subspace learning model proposed at NIPS'12 and AAAI'12. The optimization is based on the alternating direction method of multipliers (ADMM). Example applications include semi-supervised learning, image denoising, and multi-label learning.
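ADMM splits a composite objective into easy subproblems coupled by a consensus constraint. As a minimal illustration of the technique (on a standard lasso problem, not the subspace learning model itself), in Python:

```python
import numpy as np

def admm_lasso(A, b, lam, rho=1.0, iters=100):
    """Solve min 0.5*||Ax - b||^2 + lam*||z||_1  s.t. x = z  via ADMM."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    Atb = A.T @ b
    M = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cache the x-update solve
    for _ in range(iters):
        x = M @ (Atb + rho * (z - u))              # x-update: ridge-like solve
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # soft threshold
        u = u + x - z                              # dual update on the constraint
    return z
```

The same pattern (smooth solve, proximal step, dual update) carries over to matrix-valued variables such as the subspace factor in the model above.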
4. Bayesian Online Multilabel Classification (BOMC)
BOMC is an open-source toolkit for online multilabel classification using Bayesian models. It is implemented in F# on Microsoft Visual Studio 2008 and can be compiled and run on Linux via Mono. The graphical model extends TrueSkill™ to the multilabel setting, and the inference engine is expectation propagation.
5. Conditional Random Fields for Policy Gradient Multi-agent Reinforcement Learning
[tar.bz2 700 KB] [paper]
This package implements tree sampling for inference in conditional random fields. Using the sampled states and approximate expectations, it implements the natural actor-critic algorithm, which performs collaborative multi-agent reinforcement learning. Three simulators are provided: grid gate control, sensor network, and traffic light control.
6. Faster Rates for Training SVMs using Optimal Gradient-based Methods
[tar.bz2 800 KB] [paper]
This package implements three versions of Nesterov's first-order methods (1983, 2005, and 2007). Their rate of convergence is O(1/k^2), which is proved to be optimal for this class of optimizers. The 1983 version optimizes a smooth function with Lipschitz continuous gradient; the 2005 version extends it to the primal-dual setting; and the 2007 version can automatically estimate the unknown Lipschitz constant of the gradient. This code is built upon the BMRM package.
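The 1983-style accelerated scheme is compact enough to sketch. This is an illustrative Python version (the package itself is built on BMRM, not on this code), using the familiar momentum sequence:

```python
import numpy as np

def nesterov_1983(grad, x0, L, iters=100):
    """Nesterov's accelerated gradient method for a smooth function with
    L-Lipschitz gradient; achieves the optimal O(1/k^2) rate."""
    x = y = np.asarray(x0, dtype=float)
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L                       # gradient step at the extrapolated point
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2      # momentum sequence
        y = x_new + ((t - 1) / t_new) * (x_new - x)   # extrapolation
        x, t = x_new, t_new
    return x
```

The only difference from plain gradient descent is the extrapolation step, yet it improves the worst-case rate from O(1/k) to O(1/k^2).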
7. Hyperparameter Learning for Graph-based Semi-supervised Learning Algorithms
[tar 100 KB] [paper]
This package implements the leave-one-out method for learning the hyperparameters in graph-based semi-supervised learning. Practical efficiency is achieved via the Sherman–Morrison formula and by factoring out the common terms in feature weight updates. This code relies on the math library of Matlab. See this link for details.
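The Sherman–Morrison formula is what makes repeated leave-one-out solves affordable: a rank-one change to a matrix updates its inverse in O(n^2) instead of re-inverting in O(n^3). A minimal Python sketch of the formula (illustrative, not code from the package):

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Return (A + u v^T)^{-1} given A^{-1}, in O(n^2) time."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)
```

In the leave-one-out setting, removing one labeled point is exactly such a rank-one perturbation of the graph system, so each fold reuses the cached inverse.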
I am collecting some coding tricks for machine learning. Coming soon.
I am also polishing some code for massaging datasets, mostly written in C++ for handling large datasets.