Synthesizing Novel Video from GANs (2019–2020)

While video has become an increasingly popular way for people to share and consume stories, users and filmmakers currently have few inexpensive ways to quickly edit out continuity errors or expand creative content for entertainment. Stanford graduate students Truong and Zhang will build on advances in generative adversarial networks (GANs) to create tools that allow users or editors to synthesize new video capturing different viewpoints, perspectives, or actions. For example, if a director has multiple takes of an actor's performance but likes the delivery in one take and the gestures in another, this tool would let the director combine aspects of both takes into one "perfect performance." Ultimately, the output of this project will be a set of tools that enables users to interactively control the camera and the pose of human actors in video.


The Team