Flexible constraints for regularization in learning from data

Abstract

By its very nature, inductive inference performed by machine learning methods is mainly data driven. Still, the incorporation of background knowledge, if available, can help to make inductive inference more efficient and to improve the quality of induced models. Fuzzy set–based modeling techniques provide a convenient tool for making expert knowledge accessible to computational methods. In this article, we exploit such techniques within the context of the regularization (penalization) framework of inductive learning. The basic idea is to express knowledge about an underlying data-generating process in terms of flexible constraints and to penalize those models that violate these constraints. An optimal model is one that achieves an optimal trade-off between fitting the data and satisfying the constraints. © 2004 Wiley Periodicals, Inc.
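To make the trade-off described in the abstract concrete, the following is a minimal sketch (not the authors' formulation) of a penalized objective in which a fuzzy membership function measures the degree to which a flexible constraint, such as "the model should be approximately non-decreasing", is satisfied. All names (`model`, `satisfaction`, `lam`) and the specific membership function are illustrative assumptions.

```python
# Minimal sketch: fit a quadratic model to data while penalizing violation
# of a flexible (fuzzy) constraint "approximately non-decreasing on [0, 1]".
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = x + 0.1 * rng.standard_normal(x.size)      # noisy, roughly increasing data

def model(coeffs, x):
    """Quadratic model y = c0 + c1*x + c2*x^2."""
    return coeffs[0] + coeffs[1] * x + coeffs[2] * x ** 2

def satisfaction(coeffs, grid):
    """Fuzzy degree (in [0, 1]) to which the model is 'approximately
    non-decreasing': slopes >= 0 satisfy fully, slopes <= -0.5 not at all,
    with a linear transition in between (a simple membership function)."""
    slopes = np.diff(model(coeffs, grid)) / np.diff(grid)
    degrees = np.clip((slopes + 0.5) / 0.5, 0.0, 1.0)
    return degrees.min()                        # overall degree = weakest point

def objective(coeffs, lam=1.0):
    """Trade-off: empirical squared error plus a penalty proportional
    to the degree of constraint violation."""
    fit = np.mean((model(coeffs, x) - y) ** 2)
    penalty = 1.0 - satisfaction(coeffs, np.linspace(0.0, 1.0, 50))
    return fit + lam * penalty

result = minimize(objective, x0=np.zeros(3), method="Nelder-Mead")
print("fitted coefficients:", result.x)
```

The hypothetical parameter `lam` plays the role of the regularization weight: larger values favor models that satisfy the flexible constraint, smaller values favor models that fit the data more closely.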
