DreamGAN
Author: Conor Hayes
In late 2021, I became curious about the GAN-generated images I had seen bouncing around online, and wondered if the same technique could produce narrative videos instead.
After a night or two of hacking, I managed to adapt VQGAN-CLIP (Crowson et al.) to produce narrative videos from a series of prompts.
Inspired by the dream journal I was keeping at the time, I called it DreamGAN and used it to render my own dreams and others', as well as to make video accompaniments for my songs.
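The core trick behind driving a single VQGAN-CLIP optimization with a series of prompts is a prompt schedule: each frame steers the image toward whichever prompt is currently active, cross-fading between consecutive prompts so scenes drift into one another. The sketch below shows only that scheduling logic, with hypothetical names and parameters (it is not the actual DreamGAN code, and the real system would feed these weights into the CLIP loss):

```python
from typing import List

def prompt_weights(frame: int, frames_per_prompt: int, fade: int,
                   num_prompts: int) -> List[float]:
    """Return one weight per prompt for the given frame.

    Each prompt is active for `frames_per_prompt` frames; during the
    final `fade` frames of a segment the active prompt cross-fades
    linearly into the next one, so the rendered video drifts between
    scenes rather than cutting abruptly.
    """
    seg = min(frame // frames_per_prompt, num_prompts - 1)
    pos = frame - seg * frames_per_prompt
    weights = [0.0] * num_prompts
    into_fade = pos - (frames_per_prompt - fade)
    if seg < num_prompts - 1 and into_fade > 0:
        t = into_fade / fade  # runs 0 -> 1 across the fade window
        weights[seg] = 1.0 - t
        weights[seg + 1] = t
    else:
        weights[seg] = 1.0
    return weights
```

With `frames_per_prompt=10` and `fade=4`, frame 8 would weight the current and next prompt equally (`[0.5, 0.5, ...]`), while frames outside a fade window put all weight on a single prompt.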