Mohamed S El-Mahallawy
L1-Regularized Least Squares Sparse Extreme Learning Machine for Classification
Extreme Learning Machines (ELMs) are a class of supervised learning models built from three basic steps: a random projection of the input space, a nonlinear activation, and a linear output layer of weights. The basic ELM estimates the output-layer weights with the Moore-Penrose pseudoinverse, which often leads to overfitting. Recent research has suggested L2-norm regularization of the output layer to reduce this overfitting. This paper proposes the L1-norm LASSO formulation instead, since the L1 norm promotes sparsity in the output-layer weights and has been shown to produce the sparsest solutions in many applications. An extensive comparison of the basic ELM and the L1- and L2-regularized variants over a number of classification tasks shows a significant improvement in sparseness with the proposed approach, together with better performance than that reported in the literature.
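The pipeline described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses synthetic two-class data, a tanh hidden layer, and scikit-learn's off-the-shelf `Lasso` solver as a stand-in for the L1-regularized least-squares step; all sizes and the regularization strength `alpha` are arbitrary choices for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

# Synthetic two-class data (illustrative only, not from the paper)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
T = np.eye(2)[y]                      # one-hot targets

# Steps 1-2: random projection followed by a nonlinearity.
# The input weights W and biases b are drawn once and never trained.
L = 100                               # number of hidden neurons (arbitrary)
W = rng.normal(size=(d, L))
b = rng.normal(size=L)
H = np.tanh(X @ W + b)                # hidden-layer output matrix

# Step 3a, basic ELM: output weights via the Moore-Penrose pseudoinverse
beta_elm = np.linalg.pinv(H) @ T

# Step 3b, proposed variant: L1-regularized (LASSO) least squares,
# which drives many output weights exactly to zero
lasso = Lasso(alpha=0.01, max_iter=10000)
lasso.fit(H, T)
beta_l1 = lasso.coef_.T               # shape (L, 2)

sparsity = float(np.mean(np.isclose(beta_l1, 0.0)))
pred = np.argmax(H @ beta_l1 + lasso.intercept_, axis=1)
accuracy = float(np.mean(pred == y))
print(f"fraction of zero weights: {sparsity:.0%}, train accuracy: {accuracy:.0%}")
```

Comparing the fraction of exactly zero entries in `beta_l1` against `beta_elm` (which is generically dense) illustrates the sparsity argument the abstract makes for the L1 penalty.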