# Parallel/distributed intelligent hyperparameter search for generative artificial neural networks

## Abstract

Generative Adversarial Networks (GANs) are emerging methods for training deep generative models. Coevolutionary algorithms (CoEAs) have been shown to be resilient to the degenerative behaviors of GAN training, and Lipizzaner is a GAN training framework that applies a CoEA. This article presents a parallel/distributed system for the automatic configuration of relevant Lipizzaner (GAN training) parameters, implemented over a high-performance computing infrastructure. The proposed system combines distributed-memory and shared-memory GPU-based training to efficiently execute the hyperparameter search performed by irace, a well-known tool for tuning optimization algorithms. The expected result of this research is the implementation and validation of an automatic procedure to efficiently search the Lipizzaner parameter space. The proposed methodology will identify the most effective parameter values while significantly reducing the execution time demanded by brute-force approaches.
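The abstract does not detail the dispatch mechanism, but the overall scheme it describes (a tuner proposing candidate configurations that are trained in parallel and scored) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the parameter names, the `train_and_score` stub, and the use of `ProcessPoolExecutor` in place of a real HPC job scheduler and the actual irace racing procedure are all assumptions.

```python
# Minimal sketch of parallel hyperparameter evaluation for a GAN trainer.
# Parameter names and the scoring stub are illustrative, not Lipizzaner's API.
import math
import random
from concurrent.futures import ProcessPoolExecutor

# Hypothetical Lipizzaner-style parameter space (assumed names and ranges).
PARAM_SPACE = {
    "grid_size": [2, 3, 4],             # edge of the spatial CoEA grid (2x2 .. 4x4)
    "learning_rate": (1e-5, 1e-3),      # continuous, sampled log-uniformly
    "mutation_probability": (0.1, 0.9), # continuous, sampled uniformly
    "tournament_size": [2, 3, 4],       # categorical choices
}

def sample_candidate(rng: random.Random) -> dict:
    """Draw one configuration from the assumed parameter space."""
    lo, hi = PARAM_SPACE["learning_rate"]
    mlo, mhi = PARAM_SPACE["mutation_probability"]
    return {
        "grid_size": rng.choice(PARAM_SPACE["grid_size"]),
        "learning_rate": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "mutation_probability": rng.uniform(mlo, mhi),
        "tournament_size": rng.choice(PARAM_SPACE["tournament_size"]),
    }

def train_and_score(candidate: dict) -> float:
    """Stand-in for one GPU-based Lipizzaner training run; a real run would
    return a quality metric such as FID. Here: a synthetic, deterministic
    score derived from the candidate, purely for illustration (lower is better)."""
    rng = random.Random(str(sorted(candidate.items())))
    return rng.random()

def tune(budget: int = 12, workers: int = 4, seed: int = 0) -> dict:
    """Evaluate `budget` candidates in parallel and keep the best one.
    A real irace run would instead proceed in racing rounds, discarding
    statistically worse configurations before exhausting the full budget."""
    rng = random.Random(seed)
    candidates = [sample_candidate(rng) for _ in range(budget)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(train_and_score, candidates))
    best_score, best_idx = min(zip(scores, range(len(candidates))))
    return candidates[best_idx]

if __name__ == "__main__":
    print(tune())
```

In the system the abstract describes, each `train_and_score` call would correspond to a Lipizzaner run dispatched over distributed memory, with shared-memory GPU-based training on each node.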

## Speaker

Mathias Esteban, Universidad de la República, Uruguay