A Quantitative Analysis Of The Robustness Of Neural Networks For Tabular Data
Abstract
This paper presents a quantitative approach to demonstrating the robustness of neural networks for tabular data. Such data form the backbone of the data structures found in most industrial applications. We analyse the effect of several techniques widely used in neural network practice, such as weight regularization, addition of noise to the data, and positivity constraints. The analysis relies on three state-of-the-art techniques that provide mathematical proofs of robustness, expressed in terms of the Lipschitz constant, for feed-forward networks. The experiments are carried out on two prediction tasks and one classification task. Our work brings insights into building robust neural network architectures for safety-critical systems that require certification or approval from a competent authority.
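As a point of reference for the Lipschitz-based notion of robustness mentioned above, the sketch below (not one of the three certification techniques used in this paper) computes the classical naive upper bound on the Lipschitz constant of a feed-forward network with 1-Lipschitz activations such as ReLU, namely the product of the spectral norms of its weight matrices. The function name and layer shapes are illustrative assumptions.

```python
# Minimal sketch: naive Lipschitz upper bound for a feed-forward ReLU network.
# Assumes 1-Lipschitz activations, so the composition is bounded by the
# product of the layers' operator (spectral) norms. This bound is generally loose.
import numpy as np

def naive_lipschitz_upper_bound(weight_matrices):
    """Upper-bound the Lipschitz constant of W_L ∘ ReLU ∘ ... ∘ ReLU ∘ W_1."""
    bound = 1.0
    for W in weight_matrices:
        bound *= np.linalg.norm(W, ord=2)  # largest singular value of the layer
    return bound

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 3-layer network with shapes 10 -> 32 -> 32 -> 1.
    layers = [rng.normal(size=(32, 10)),
              rng.normal(size=(32, 32)),
              rng.normal(size=(1, 32))]
    print("Naive Lipschitz upper bound:", naive_lipschitz_upper_bound(layers))
```

Tighter certified bounds, such as those produced by the techniques evaluated in the paper, refine this product bound by exploiting the structure of the activations and the data domain.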