Where is a source showing a generative adversarial network autoencoder?


#1

I’ve looked, but found nothing. A lot of places say that, when autoencoder latent spaces get low-dimensional, decompressed images get blurry. The obvious fix for this would be to use a discriminative adversarial network instead of a plain classifier: show both the discriminator network and the decoder the latent space, so that the autoencoder (which would be the generator) has to create a convincing reconstruction from the latent code in order to minimise its cost function. That way there would be no blurriness or other artifacts the discriminator would notice.

So why can’t I find any source that shows this solution in action or describes it, and where is such a source if one exists? Thank you. :slight_smile:
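For concreteness, here is a minimal sketch of the setup described above, assuming tiny linear stand-ins for the encoder, decoder, and discriminator (real implementations would use conv nets and a framework; all names here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny linear "networks" -- stand-ins for real conv nets.
D_IN, D_LATENT = 16, 2  # deliberately low-dimensional latent space
W_enc = rng.normal(scale=0.1, size=(D_IN, D_LATENT))
W_dec = rng.normal(scale=0.1, size=(D_LATENT, D_IN))
w_dis = rng.normal(scale=0.1, size=D_IN)

def encode(x):
    # image -> latent code
    return x @ W_enc

def decode(z):
    # latent code -> reconstruction (the autoencoder acts as the generator)
    return z @ W_dec

def discriminate(x):
    # sigmoid score: probability that x is a real image, not a reconstruction
    return 1.0 / (1.0 + np.exp(-(x @ w_dis)))

x_real = rng.normal(size=(8, D_IN))
x_fake = decode(encode(x_real))

# Discriminator objective: tell real images from reconstructions.
d_loss = -np.mean(np.log(discriminate(x_real) + 1e-8)
                  + np.log(1.0 - discriminate(x_fake) + 1e-8))

# Autoencoder objective: pixel reconstruction error plus an adversarial
# term that penalises anything the discriminator can spot (e.g. blur),
# rather than relying on MSE alone.
recon_loss = np.mean((x_real - x_fake) ** 2)
adv_loss = -np.mean(np.log(discriminate(x_fake) + 1e-8))
g_loss = recon_loss + adv_loss
```

In a real training loop the two losses would be minimised alternately with gradient descent; the point of the sketch is just the loss structure, not the optimisation.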


#2

Are you asking about an AAE (adversarial autoencoder)? If so, this is a standard model, and we’ve used it a lot. Here’s a random tutorial on them: https://towardsdatascience.com/a-wizards-guide-to-adversarial-autoencoders-part-2-exploring-latent-space-with-adversarial-2d53a6f8a4f9
Or are you asking about something different?
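For reference, in an AAE the adversarial game is played in the latent space: a discriminator tries to tell encoder outputs apart from samples of a chosen prior. A minimal sketch, again with hypothetical linear stand-ins for the networks:

```python
import numpy as np

rng = np.random.default_rng(1)

D_IN, D_LATENT = 16, 2
W_enc = rng.normal(scale=0.1, size=(D_IN, D_LATENT))
w_dis = rng.normal(scale=0.5, size=D_LATENT)

def encode(x):
    # image -> latent code
    return x @ W_enc

def discriminate(z):
    # sigmoid score: probability that z came from the prior, not the encoder
    return 1.0 / (1.0 + np.exp(-(z @ w_dis)))

x = rng.normal(size=(8, D_IN))
z_posterior = encode(x)                    # codes produced by the encoder
z_prior = rng.normal(size=(8, D_LATENT))   # samples from the target prior N(0, I)

# Discriminator: prior samples are "real", encoder outputs are "fake".
d_loss = -np.mean(np.log(discriminate(z_prior) + 1e-8)
                  + np.log(1.0 - discriminate(z_posterior) + 1e-8))

# Encoder, acting as the generator: fool the discriminator so that the
# aggregated posterior over codes matches the prior; the reconstruction
# loss is trained alongside this, as usual for an autoencoder.
g_loss = -np.mean(np.log(discriminate(z_posterior) + 1e-8))
```

So the discriminator regularises the latent distribution itself, which is what distinguishes an AAE from the reconstruction-space discriminator described in the question.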


#3

@alexey That’s exactly what I’m asking about. Thank you. :slight_smile: I guess I didn’t use quite the right wording in my Google searches. Fortunately, that’s a rare occurrence for me. :slight_smile: