Conference Paper (2024)

Preconditioned Gradient Descent for Sketched Mixture Learning

Abstract

Sketching consists of reducing the dimensionality of data samples by retaining a small number of their moments. In this paper, a Preconditioned Gradient Descent (PGD) algorithm is proposed to estimate the parameters of mixture models (MMs) in arbitrary dimension by minimizing the non-convex quadratic loss between the sketch and the characteristic function of an MM with varying parameters. Preconditioning is introduced to dynamically adapt the descent direction to the local landscape of the objective function, accelerating convergence with no computational overhead per iteration compared to vanilla GD. An analysis of the linear convergence rate of PGD is conducted, and numerical simulations showcase the method's effectiveness, particularly when the class weights are unbalanced or when a substantial number of data samples is available.
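To make the setup concrete, the following is a minimal, illustrative NumPy sketch of this idea, not the authors' code. It assumes an isotropic Gaussian mixture with known common variance and known (unbalanced) weights, builds the sketch as the empirical characteristic function at random frequencies, and runs gradient descent on the quadratic sketch loss with a simple diagonal preconditioner that rescales each component's gradient by 1/w_k (a heuristic choice; the paper's actual preconditioner, step size, and initialization are specified in the PDF). All names (psi, loss_and_grad, Omega), the frequency scale, the warm start, and the step size are assumptions for illustration.

```python
# Minimal sketch of preconditioned gradient descent on a sketched
# mixture-learning loss. Illustrative only; hyperparameters are untuned.
import numpy as np

rng = np.random.default_rng(0)

# Problem setup: K-component isotropic Gaussian mixture in dimension d.
d, K, n, m = 2, 3, 10_000, 50            # dimension, components, samples, frequencies
sigma = 0.3                               # known common std of each component (assumption)
w = np.array([0.7, 0.2, 0.1])             # unbalanced mixing weights, assumed known
mu_true = rng.normal(size=(K, d))         # ground-truth means to recover

# Draw samples from the mixture.
labels = rng.choice(K, size=n, p=w)
X = mu_true[labels] + sigma * rng.normal(size=(n, d))

# Sketch: empirical characteristic function at m random frequencies.
Omega = rng.normal(scale=1.0 / sigma, size=(m, d))  # heuristic frequency scale
z = np.exp(1j * X @ Omega.T).mean(axis=0)           # sketch, shape (m,)

def psi(mu):
    """Model characteristic function at the sketch frequencies, shape (m,)."""
    gauss = np.exp(-0.5 * sigma**2 * np.sum(Omega**2, axis=1))
    return (w[None, :] * np.exp(1j * Omega @ mu.T)).sum(axis=1) * gauss

def loss_and_grad(mu):
    """Quadratic loss sum_l |psi_l - z_l|^2 and its gradient w.r.t. the means."""
    gauss = np.exp(-0.5 * sigma**2 * np.sum(Omega**2, axis=1))
    r = psi(mu) - z                        # residual, (m,)
    phase = np.exp(1j * Omega @ mu.T)      # (m, K)
    grad = np.zeros_like(mu)
    for k in range(K):
        # d psi_l / d mu_k = i * w_k * gauss_l * phase_{l,k} * omega_l
        c = 1j * w[k] * gauss * phase[:, k]
        grad[k] = 2.0 * np.real(np.conj(r) @ (c[:, None] * Omega))
    return np.sum(np.abs(r) ** 2), grad

# Preconditioned GD: dividing component k's gradient by w_k lets light
# (small-weight) components take steps comparable to heavy ones.
mu = mu_true + 0.2 * rng.normal(size=(K, d))  # warm start near the truth
step = 0.05
for it in range(500):
    f, g = loss_and_grad(mu)
    mu -= step * g / (w[:, None] * m)          # diagonal preconditioner
print("final loss:", f, "mean error:", np.linalg.norm(mu - mu_true))
```

The 1/w_k rescaling mirrors the abstract's observation that preconditioning is most useful when class weights are unbalanced: without it, small-weight components receive proportionally smaller gradients and converge more slowly.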
Main file
ISIT_2024.pdf (366.9 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04425748, version 1 (30-01-2024)
hal-04425748, version 2 (30-05-2024)

Licence

Copyright

Identifiers

  • HAL Id: hal-04425748, version 2

Cite

Joseph Gabet, Maxime Ferreira Da Costa. Preconditioned Gradient Descent for Sketched Mixture Learning. IEEE International Symposium on Information Theory (ISIT), Jul 2024, Athens, Greece. ⟨hal-04425748v2⟩