Video to Video AI Style Transfer with Stable Diffusion and Keyframing in TouchDesigner – Tutorial
Hey! In this tutorial, we’ll go over how to do video-to-video style transfer with Stable Diffusion using a custom component in TouchDesigner. We’ll cover how to keyframe animations so you can drive your AI style generation, swapping out prompts and model parameters at exact times in your video to sync up with your audio.
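The keyframing idea above can be sketched in plain Python (TouchDesigner components are scripted in Python). This is an illustrative example, not the project file's actual code; the keyframe times, prompts, and guidance values are made up. Each frame's timestamp looks up the most recent keyframe, so prompt and parameter changes land exactly on the beats you choose:

```python
# Hypothetical keyframe table: (time in seconds, prompt, guidance scale).
# In the real project these values would come from your TouchDesigner
# keyframe animation and feed the Stable Diffusion render call.
KEYFRAMES = [
    (0.0,  "watercolor painting of a city street", 7.5),
    (12.5, "neon cyberpunk alleyway, rain", 9.0),
    (30.0, "charcoal sketch, high contrast", 7.0),
]

def params_at(t: float):
    """Return the (prompt, guidance) pair active at time t."""
    current = KEYFRAMES[0]
    for kf in KEYFRAMES:
        if kf[0] <= t:
            current = kf  # latest keyframe at or before t wins
        else:
            break
    return current[1], current[2]
```

At render time you would call `params_at(frame_time)` for each frame and pass the result into your Stable Diffusion request, which is what lets the style switches sync to the audio.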
Link to the Project File is on my Patreon: https://patreon.com/tblankensmith
Link to the first tutorial: https://www.youtube.com/watch?v=mRXTR9vcHAs
Link to the Computerender API: https://computerender.com/
Prompt Engineering with Lexica: https://lexica.art/
Opt out of AI training: https://haveibeentrained.com/
Huge thank you to Peter Whidden for his support on this and for his work on Computerender, which makes this project possible!
0:00 Overview and Examples
1:56 Recommendations for Vid-to-Vid
2:43 Using the Existing Network for Rendering
7:38 Recreating the Keyframe Animation
17:13 Overview of Negative Prompts
17:41 Wrap up