Regularization for sparsity in statistical analysis and machine learning

  1. Vidaurre Henche, Diego
Supervisors:
  1. Concha Bielza Lozoya
  2. Pedro Larrañaga Múgica

Defense university: Universidad Politécnica de Madrid

Date of defense: July 18, 2012

Committee:
  1. Serafín Moral Callejón, Chair
  2. Ruben Armañanzas Arnedillo, Secretary
  3. Iñaki Inza Cano, Member
  4. Juan Antonio Fernández del Pozo de Salamanca, Member
  5. Robert Castelo Valdueza, Member
  6. Antonio Salmerón Cerdán, Member
  7. Vicente Gómez Cerdà, Member

Type: Thesis

Abstract

Pragmatism is the leading motivation of regularization. We can understand regularization as a modification of the maximum-likelihood estimator so that a reasonable answer can be given in an unstable or ill-posed situation. In this dissertation, I focus on the applications of regularization for obtaining sparse or parsimonious representations, where only a subset of the inputs is used. A particular form of regularization, L1-regularization, plays a key role in achieving sparsity. Most of the contributions presented revolve around L1-regularization, although other forms of regularization that also pursue sparsity in some sense are explored. In addition to presenting a compact review of L1-regularization and its applications in statistics and machine learning, I devise methodology for regression, supervised classification, and structure induction of graphical models. Within the regression paradigm, I focus on kernel smoothing learning, proposing techniques for kernel design that are suitable for high-dimensional settings and sparse regression functions. I also present an application of regularized regression techniques for modeling the response of biological neurons. The supervised classification advances deal, on the one hand, with the application of regularization for obtaining a naive Bayes classifier and, on the other hand, with a novel algorithm for brain-computer interface design that uses group regularization in an efficient manner.
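The abstract's central point — that L1-regularization produces sparse estimates in which many coefficients are exactly zero — can be illustrated with a minimal sketch. The coordinate-descent lasso below is a standard textbook algorithm, not code from the thesis itself; the data and the penalty value `lam` are invented for illustration.

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator S(rho, lam).

    It shrinks rho toward zero and returns exactly 0 when |rho| <= lam;
    this is the mechanism by which the L1 penalty induces sparsity.
    """
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0


def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_w (1/2n)||y - Xw||^2 + lam * ||w||_1.

    X is a list of rows, y a list of targets. Pure Python for clarity,
    not efficiency.
    """
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Correlation of feature j with the partial residual
            # (the residual computed with w_j excluded).
            rho = sum(
                X[i][j] * (y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j))
                for i in range(n)
            ) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w


# Toy data: y depends only on the first feature (y = 2 * x0);
# the other two features are irrelevant noise columns.
X = [[1, 0.1, -0.2], [2, -0.1, 0.3], [3, 0.2, 0.1], [4, -0.2, -0.1]]
y = [2.0, 4.0, 6.0, 8.0]
w = lasso_cd(X, y, lam=0.1)
# The irrelevant coefficients are driven to exactly zero, while the
# relevant one is only slightly shrunk from its true value of 2.
```

A plain least-squares fit would assign small nonzero weights to the noise columns; the L1 penalty instead selects the single relevant input, which is the "sparse representation" behavior the dissertation builds on.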