KAKURENBO: adaptively hiding samples in deep neural network training

Conference paper, 2023

Abstract

This paper proposes a method for hiding the least-important samples during the training of deep neural networks to increase efficiency, i.e., to reduce the cost of training. Using information about the loss and prediction confidence during training, we adaptively find samples to exclude in a given epoch based on their contribution to the overall learning process, without significantly degrading accuracy. We explore the convergence properties when accounting for the reduction in the number of SGD updates. Empirical results on various large-scale datasets and models used directly in image classification and segmentation show that while the with-replacement importance sampling algorithm performs poorly on large datasets, our method can reduce total training time by up to 22% while impacting accuracy by only 0.4% compared to the baseline. Code is available at https://github.com/TruongThaoNguyen/kakurenbo
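For illustration only, the following is a minimal PyTorch sketch of the loss-based selection idea, assuming per-sample losses were recorded during the previous epoch. It is not the authors' implementation (which also uses prediction confidence; see the repository above), and the names build_epoch_loader and hide_fraction are hypothetical.

import torch
from torch.utils.data import DataLoader, SubsetRandomSampler, TensorDataset

def build_epoch_loader(dataset, prev_losses, hide_fraction=0.2, batch_size=128):
    # Hide the hide_fraction lowest-loss samples: keep only the indices of
    # the highest-loss (assumed most important) samples for this epoch.
    n_keep = len(dataset) - int(hide_fraction * len(dataset))
    keep_idx = torch.topk(prev_losses, n_keep).indices.tolist()
    return DataLoader(dataset, batch_size=batch_size,
                      sampler=SubsetRandomSampler(keep_idx))

# Toy usage: random features/labels and placeholder per-sample losses
# (in real training these would be recorded during the previous epoch).
xs, ys = torch.randn(1000, 10), torch.randint(0, 2, (1000,))
prev_losses = torch.rand(1000)
loader = build_epoch_loader(TensorDataset(xs, ys), prev_losses)

Because fewer samples are visited per epoch, the number of SGD updates drops accordingly, which is the convergence trade-off the abstract refers to.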

Dates and versions

hal-04245814, version 1 (17-10-2023)

License

Attribution (CC BY)

Identifiers

  • HAL Id: hal-04245814, version 1

Cite

Thao Truong Nguyen, Balazs Gerofi, Edgar Josafat Martinez-Noriega, François Trahay, Mohamed Wahib. KAKURENBO: adaptively hiding samples in deep neural network training. NeurIPS 2023 - 37th Conference on Neural Information Processing Systems, Dec 2023, New Orleans, United States. ⟨hal-04245814⟩