Where is a source showing a generative adversarial network autoencoder?

I’ve looked but found nothing. Many sources say that when an autoencoder’s latent space becomes low-dimensional, the decompressed images get blurry. The obvious fix would be to use an adversarial discriminator network rather than a plain classifier, and to show both the discriminator and the decoder the latent space, so that the autoencoder (acting as the generator) would have to produce a convincing reconstruction from the latent code in order to minimise its cost function. That way there would be no blurriness or other artifacts the discriminator would notice.
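
To make the idea concrete, here is a minimal sketch of what I mean (PyTorch; the 28x28 input shape, layer sizes and the 0.1 loss weighting are just illustrative assumptions, not a tested recipe): the discriminator judges reconstructions against real images, and the autoencoder is trained to both reconstruct its input and fool the discriminator.

```python
# Sketch: autoencoder trained with a reconstruction loss plus an adversarial
# loss from a discriminator that compares real images to reconstructions.
import torch
import torch.nn as nn

latent_dim = 8  # deliberately small latent space

encoder = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.ReLU(),
    nn.Linear(256, latent_dim),
)
decoder = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Sigmoid(),
    nn.Unflatten(1, (1, 28, 28)),
)
# Discriminator: real image vs. reconstruction produced from the latent code.
discriminator = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

bce = nn.BCEWithLogitsLoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

def train_step(real):  # real: (batch, 1, 28, 28) tensor with values in [0, 1]
    z = encoder(real)
    fake = decoder(z)

    # 1) Discriminator: tell real images apart from reconstructions.
    d_real = discriminator(real)
    d_fake = discriminator(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Autoencoder: reconstruct the input *and* fool the discriminator.
    d_fake = discriminator(fake)
    ae_loss = nn.functional.mse_loss(fake, real) + 0.1 * bce(d_fake, torch.ones_like(d_fake))
    opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
    return d_loss.item(), ae_loss.item()
```

The point is that the discriminator should penalise the smeared, averaged-looking reconstructions that a plain pixel-wise loss tolerates, which is exactly the blurriness problem.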

So why can’t I find any source that shows this solution in action or describes it, and where is such a source if one exists? Thank you. :slight_smile:

Are you asking about AAE (adversarial autoencoder)? If so, this is a standard model, and we’ve used it a lot. Here is a random tutorial on them: A wizard’s guide to Adversarial Autoencoders: Part 2, Exploring latent space with Adversarial Autoencoders. | by Naresh Nagabushan | Towards Data Science
Or are you asking about something different?
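
For comparison, here is a rough sketch of the AAE-specific piece (PyTorch; shapes and sizes are illustrative, and the ordinary reconstruction step is omitted): the discriminator looks at latent codes rather than images, pushing the encoder’s codes towards a chosen prior, a standard Gaussian here.

```python
# Sketch of the AAE regularisation step: a discriminator on latent codes.
import torch
import torch.nn as nn

latent_dim = 8
encoder = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU(),
                        nn.Linear(256, latent_dim))
d_latent = nn.Sequential(nn.Linear(latent_dim, 128), nn.LeakyReLU(0.2),
                         nn.Linear(128, 1))

bce = nn.BCEWithLogitsLoss()
opt_d = torch.optim.Adam(d_latent.parameters(), lr=1e-4)
opt_e = torch.optim.Adam(encoder.parameters(), lr=1e-4)

def regularisation_step(real):  # real: (batch, 1, 28, 28); reconstruction step omitted
    z_fake = encoder(real)             # codes produced from data
    z_real = torch.randn_like(z_fake)  # samples from the prior

    # Discriminator: prior samples are "real", encoder outputs are "fake".
    d_real_out = d_latent(z_real)
    d_fake_out = d_latent(z_fake.detach())
    d_loss = bce(d_real_out, torch.ones_like(d_real_out)) + \
             bce(d_fake_out, torch.zeros_like(d_fake_out))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Encoder: produce codes the discriminator accepts as prior samples.
    g_out = d_latent(z_fake)
    g_loss = bce(g_out, torch.ones_like(g_out))
    opt_e.zero_grad(); g_loss.backward(); opt_e.step()
```

So in an AAE the adversarial part regularises the latent distribution; the reconstruction itself is still trained with an ordinary pixel-wise loss.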


@alexey That’s what I’m asking about. Thank you. :slight_smile: I guess I didn’t use quite the right wording in my Google searches. Fortunately, that’s a rare occurrence for me. :slight_smile: