I’ve looked, but found nothing. Many sources say that when an autoencoder’s latent space gets low-dimensional, the decompressed images come out blurry. The obvious fix would be to pair the autoencoder with a discriminative adversarial network (rather than a classifier) and to feed the latent code to both the decoder and the discriminator, so that the autoencoder, acting as the generator, has to produce a convincing reconstruction from that code in order to minimise its cost function. That should rule out blurriness and any other artifacts the discriminator would notice.
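Concretely, the setup I have in mind would look something like this minimal PyTorch sketch (the layer sizes, learning rates, and flattened MNIST-sized input are placeholder assumptions on my part, not taken from any reference):

```python
import torch
import torch.nn as nn

LATENT_DIM = 16     # deliberately small bottleneck
IMG_DIM = 28 * 28   # flattened MNIST-sized images, values in [0, 1]

encoder = nn.Sequential(nn.Linear(IMG_DIM, 256), nn.ReLU(),
                        nn.Linear(256, LATENT_DIM))
decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                        nn.Linear(256, IMG_DIM), nn.Sigmoid())
# The discriminator sees the latent code alongside the image,
# so reconstructions must look realistic *given* that code.
disc = nn.Sequential(nn.Linear(IMG_DIM + LATENT_DIM, 256), nn.ReLU(),
                     nn.Linear(256, 1))

opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: (batch, IMG_DIM)
    z = encoder(real)
    fake = decoder(z)

    # Discriminator step: (real image, code) pairs vs. (reconstruction, code) pairs.
    d_real = disc(torch.cat([real, z.detach()], dim=1))
    d_fake = disc(torch.cat([fake.detach(), z.detach()], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Autoencoder (generator) step: make the reconstruction fool the discriminator.
    d_fake = disc(torch.cat([fake, z], dim=1))
    loss_g = bce(d_fake, torch.ones_like(d_fake))
    opt_ae.zero_grad(); loss_g.backward(); opt_ae.step()
    return loss_d.item(), loss_g.item()

# Smoke test with random tensors standing in for a real image batch.
print(train_step(torch.rand(32, IMG_DIM)))
```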
So why can’t I find any source that describes this solution or shows it in action? If such a source exists, where is it? Thank you.