IJCNN Special Session on 
Deep and Generative Adversarial Learning 

Generative Adversarial Networks (GANs) have proven to be effective frameworks for data generation. Their success rests on a minimax learning concept, in which two networks compete with each other during training, a paradigm already shown to be effective in earlier work such as adversarial curiosity (1990) and predictability minimization (1991). One of the main advantages of GANs over other deep learning methods is their ability to generate new data from noise and to imitate virtually any data distribution. However, generating realistic data with GANs remains a challenge, particularly when specific features are required; for example, constraining the aggregate latent distribution does not guarantee that the generator will produce an image with a specific attribute. On the other hand, recent advances in deep representation learning (RL) can help improve the learning process in Generative Adversarial Learning (GAL). For instance, RL can help address issues such as dataset bias and network co-adaptation, and can help identify the features best suited for a given task.
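
For reference, the two-network competition described above is usually formalized as a minimax game over a shared value function, with a discriminator D trained to distinguish real samples from generated ones and a generator G trained to fool it:

\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big] + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]

Here p_data denotes the data distribution and p_z the noise prior from which the generator draws its inputs.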

 

Despite their clear advantages and their application to a wide range of domains, GANs still face several challenges. They often fail to converge and are very sensitive to parameter and hyper-parameter initialization. Simultaneous training of the generator and discriminator networks often results in overfitting, and the generator is prone to mode collapse, in which it fails to produce data with sufficient variation. Accordingly, new theoretical methods in deep RL and GAL are needed to improve the learning process and generalization performance of GANs, as well as to yield new insights into how GANs learn data distributions.

 

This special session on Deep and Generative Adversarial Learning aims to bring together researchers and practitioners to present and discuss their findings on RL and GANs. The session invites novel contributions on new theoretical methods and applications of RL and GANs. It follows the very successful first edition at IJCNN 2019 and a special issue on the same topic in the Neural Networks journal being edited by the organizers.

 

Topics of interest for this special session include but are not limited to:

 

  • Representation learning methods and theory;

  • Adversarial representation learning for domain adaptation;

  • Network interpretability in adversarial learning;

  • Adversarial feature learning; 

  • RL and GAL for data augmentation and class imbalance;

  • New GAN models and new GAN learning criteria;

  • RL and GAL in classification;

  • Adversarial reinforcement learning;

  • GANs for noise reduction;

  • Recurrent GAN models;

  • GANs for imitation learning;

  • GANs for image segmentation and image completion;

  • GANs for image super-resolution;

  • GANs for speech and audio processing;

  • GANs for object detection;

  • GANs for the Internet of Things;

  • RL and GANs for image and video synthesis;

  • RL and GANs for speech and audio synthesis;

  • RL and GANs for text-to-audio or text-to-image synthesis;

  • RL and GANs for inpainting and sketch-to-image synthesis;

  • RL and GAL in neural machine translation;

  • RL and GANs in other application domains.
