StreamDiffusion Audioreactive in TouchDesigner

Today we’re trying out StreamDiffusion in TouchDesigner. Hopefully with some good techno etc. Feel free to follow me on Twitch & co: https://www.instagram.com/mrshiftglitch/ https://www.twitch.tv/mrshiftglitch @mr.shiftglitch

Audioreactive Graffiti – TouchDesigner x StreamDiffusion Tutorial 1

In this tutorial, we’re looking at how to create graffiti based on abstract generative shapes and colors in TouchDesigner, using @dotsimulate’s latest .tox for StreamDiffusion. It’s all still very new and unexplored, so this is just one of my first approaches and I’m happy to hear any feedback 🙂 Really looking forward to many more […]

Using Stable Diffusion to create a point-cloud 3D effect in TouchDesigner. Part 1

In this tutorial, we are going to look at a method to simulate a point-cloud model in TouchDesigner based on a Stable Diffusion output. The second part of the tutorial will be available on my Patreon along with project files. https://patreon.com/Dith_idsgn Some useful links to install Stable Diffusion and models: Stable Diffusion Art – Tutorials, prompts […]

Real-time diffusion in TouchDesigner – StreamdiffusionTD Setup + Install + Settings

Important! Read the description for installation tips! This uses TouchDesigner 2023 and will not work in TouchDesigner 2022. Currently only works on Windows with an NVIDIA graphics card. StreamDiffusionTD TOX available here – https://www.patreon.com/dotsimulate StreamDiffusion GitHub – https://github.com/cumulo-autumn/StreamDiffusion BEFORE installing, make sure to have: Python 3.10 (make sure Python is added to PATH in your […]
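The prerequisite check above (Python 3.10, added to PATH) can be sketched as a small script. This is a hedged example for self-diagnosis before installing, not part of dotsimulate’s installer:

```python
import shutil
import subprocess
from typing import Optional

def python_on_path() -> Optional[str]:
    """Return the full path of the 'python' executable found on PATH, or None."""
    return shutil.which("python")

def is_supported_version(version_string: str) -> bool:
    """StreamDiffusionTD expects Python 3.10 (per the description above)."""
    parts = version_string.strip().split(".")
    return len(parts) >= 2 and parts[0] == "3" and parts[1] == "10"

if __name__ == "__main__":
    exe = python_on_path()
    if exe is None:
        print("Python not found on PATH - reinstall with 'Add Python to PATH' checked.")
    else:
        out = subprocess.run([exe, "--version"], capture_output=True, text=True)
        version = (out.stdout or out.stderr).replace("Python", "").strip()
        print(f"{exe} -> {version} (supported 3.10: {is_supported_version(version)})")
```

If the script reports that Python is missing from PATH, rerunning the Python installer with the “Add Python to PATH” box checked is the usual fix on Windows.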

Sonic Flowers – TouchDesigner x StableDiffusion Tutorial 1

In this tutorial we’re having a first look at how to integrate the image-generation AI tool #stablediffusion into TouchDesigner. We’re creating an audio-reactive texture in an independent component to be able to do frame-by-frame animation without worrying about losing frames. TD is running in real-time, but the process itself is not (yet) real-time, […]

Speech-To-Image Stable Diffusion Demo in TouchDesigner

Speech-to-image demonstration using OpenAI’s Whisper + GPT-3 Turbo. There are still some quirks to work out, and maybe I’ll address some of them in this livestream.

Speech to Image Dev Live pt2. OpenAI Whisper + ChatGPT + Stable Diffusion + TouchDesigner

Still just working on translating boring spoken words into useful prompts for generating AI images.

Realtime Diffusion / Deforum Animations in TouchDesigner! (text-to-video)

This is a quick video showing my (hack-y) method for getting realtime diffusion / Deforum animations into TouchDesigner! Intro: 00:00 Walkthrough: 01:07 Tutorial: 03:05 Expression Switcher: 05:03 File Selector and Movie Output: 08:17 WebUI Settings: 11:44 Prompts: 14:04 Final Settings: 15:05 Generate!: 16:06 Tweaks: 17:10 Outro: 17:21 You can set it up to make amazing […]

DIY Stable Diffusion API ↔ TouchDesigner

Stable Diffusion is one of several popular text-to-image deep learning models released in the last few years, and is capable of producing highly detailed images based on text prompts from the user. Say you want to see what a painting of a cellphone by Picasso would’ve looked like — Stable Diffusion can generate a fairly […]
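The request side of a DIY text-to-image bridge can be sketched as below. This assumes the AUTOMATIC1111 WebUI running locally with its `--api` flag (its `/sdapi/v1/txt2img` endpoint and field names); the article’s own backend may differ, and the helper names here are illustrative:

```python
import json
from urllib import request

def build_txt2img_payload(prompt: str, steps: int = 20,
                          width: int = 512, height: int = 512) -> dict:
    """Assemble the JSON body for a text-to-image request.

    Field names follow the AUTOMATIC1111 WebUI API; another local
    backend may expect a different schema.
    """
    return {
        "prompt": prompt,
        "negative_prompt": "",
        "steps": steps,
        "width": width,
        "height": height,
        "cfg_scale": 7.0,
    }

def send_txt2img(base_url: str, payload: dict) -> bytes:
    """POST the payload to a locally running WebUI instance."""
    req = request.Request(
        base_url.rstrip("/") + "/sdapi/v1/txt2img",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:  # requires the WebUI started with --api
        return resp.read()  # JSON containing base64-encoded images

# The Picasso cellphone example from the text (request not sent here):
payload = build_txt2img_payload("a painting of a cellphone by Pablo Picasso")
```

Inside TouchDesigner, a call like this would typically live in a Script TOP or an Execute DAT so the returned image can be decoded back into a texture.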

Video to Video AI Style Transfer with Stable Diffusion and Keyframing in TouchDesigner – Tutorial

Hey! In this tutorial, we’ll go over how to do video to video style transfer with Stable Diffusion using a custom component in TouchDesigner. We’ll cover how to keyframe animations so you can drive your AI style generation and swap out your prompts and model parameters at exact times in your video to sync up with your […]
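The core of prompt keyframing, swapping the active prompt at exact times in the video, can be sketched as a lookup over a sorted timeline. A minimal sketch; the function and timeline here are illustrative, not taken from the project file:

```python
import bisect

def active_prompt(keyframes, t: float) -> str:
    """Return the prompt whose keyframe time is the latest one <= t.

    `keyframes` is a time-sorted list of (seconds, prompt) pairs. Times
    before the first keyframe fall back to the first prompt.
    """
    times = [time for time, _ in keyframes]
    i = bisect.bisect_right(times, t) - 1
    return keyframes[max(i, 0)][1]

# Hypothetical timeline for a style-transfer pass:
timeline = [
    (0.0, "ink wash landscape"),
    (4.0, "neon cyberpunk city"),
    (9.5, "graffiti mural, spray paint"),
]
```

Driving `t` from the video’s playhead (e.g. a Timeline or LFO CHOP) makes the prompt switches land on the same frames every render, which is what keeps the style changes synced to the video.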

Audio Reactive Animations with Stable Diffusion and TouchDesigner – Tutorial 3

Hey! In this tutorial, we’ll go over how to use Stable Diffusion with a custom component to generate audio-reactive animations in TouchDesigner. Project File: https://drive.google.com/drive/folders/1mUU5ApNJXAQPCrKVuc-8sUNkizdVH7JP?usp=sharing Link to The first Tutorial: https://www.youtube.com/watch?v=mRXTR9vcHAs Link to the API: https://computerender.com/ Link to my Patreon: https://patreon.com/tblankensmith Prompt Engineering with Lexica: https://lexica.art/ Opt out of Ai Training: https://haveibeentrained.com/ Huge thank you […]
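A common pattern behind audio-reactive diffusion animations is mapping an audio level to the img2img denoising strength, so louder passages push each frame further from the last. A sketch under that assumption; the parameter names and ranges are illustrative, not the component’s actual settings:

```python
def strength_from_level(level: float, base: float = 0.35,
                        depth: float = 0.45) -> float:
    """Map a normalized audio level (0..1) to an img2img denoising strength.

    Low strength keeps each frame close to the previous image; louder
    audio lets the diffusion wander further from it.
    """
    level = min(max(level, 0.0), 1.0)  # clamp whatever the CHOP delivers
    return base + depth * level
```

In TouchDesigner the `level` input would typically come from an Audio Analysis or Analyze CHOP, smoothed with a Lag CHOP so the strength doesn’t jitter frame to frame.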

Realtime AI Artwork in Touchdesigner with Stable Diffusion Pt. 2 – Morphing Faces

Highlights from my live stream exploring the possibilities of using text prompt based AI tools in a real-time generative environment. Lots of IMG2IMG work + multi-layered diffusion images. Part two involves creating an abstract artwork that morphs between multiple versions of a human face. Big shoutout to @blankensmithing for his awesome integration of the Compute […]

Create Stable Diffusion Images and Deforum Animations in VR with Unity and TouchDesigner – Part 4

Part 4 of a tutorial series on how to combine Stable Diffusion, Deforum, TouchDesigner and Unity VR with a Meta Quest headset connected to a PC over Oculus Link to create and integrate AI-generated art in a 360° environment in realtime! In this tutorial we continue to show how to use TouchDesigner to begin […]

Realtime AI Artwork in Touchdesigner with Stable Diffusion

Highlights from my live stream exploring the possibilities of using text-prompt-based AI tools in a real-time generative environment. Lots of IMG2IMG work + multi-layered diffusion images. Big shoutout to @blankensmithing for his awesome integration of the Compute Render API into TouchDesigner, check out his tutorial on it here: https://www.youtube.com/watch?v=mRXTR9vcHAs Find me: Instagram: https://www.instagram.com/benheim_/ Website: […]

TD Diffusion API – Stable Diffusion image generator in TouchDesigner

In this video I will show you how to use #stablediffusion in #touchdesigner locally on your computer. Socials: https://www.instagram.com/olegchomp/ https://www.instagram.com/vjschool/ https://twitter.com/oleg__chomp/ Links & Commands: 00:00 – Pipeline description 00:28 – Distribution https://github.com/AUTOMATIC1111/stable-diffusion-webui https://github.com/olegchomp/TDDiffusionAPI 00:40 – Python 3.10.6 https://www.python.org/downloads/windows/ 00:50 – Git https://git-scm.com/download/win 01:00 – PIP py -m ensurepip --upgrade 01:07 – Pillow and Requests pip […]
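For the img2img side of a local pipeline like this, the current texture is base64-encoded and placed in the request body. The `init_images` field name follows the AUTOMATIC1111 WebUI API; a stdlib-only sketch (in practice Pillow would supply the PNG bytes from a TOP export), with illustrative helper names:

```python
import base64

def encode_init_image(png_bytes: bytes) -> str:
    """Base64-encode raw PNG bytes for an img2img request body."""
    return base64.b64encode(png_bytes).decode("ascii")

def build_img2img_payload(prompt: str, png_bytes: bytes,
                          denoising_strength: float = 0.5) -> dict:
    """Assemble an image-to-image request body.

    Field names follow the AUTOMATIC1111 WebUI API (/sdapi/v1/img2img);
    a different backend may expect a different schema.
    """
    return {
        "prompt": prompt,
        "init_images": [encode_init_image(png_bytes)],
        "denoising_strength": denoising_strength,
    }
```

The response comes back the same way: a base64 string that gets decoded to bytes and loaded back into a TOP for display.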

TouchDesigner Tutorial – Generate AI Images with Stable Diffusion using Image-to-Image Generation

Hey! In this tutorial, we’ll go over how to use Stable Diffusion with a custom component I created to generate images in TouchDesigner. The project supports 2 forms of input using prompt generation and image to image so you can use any TOP in TouchDesigner as a starting point. Project File: https://drive.google.com/file/d/166nPMUmtuhhm4iiMdKMY1Qnzv1GYXBRt/view?usp=share_link Link to the […]

Generate AI Images with Stable Diffusion + Audio Reactive Particle Effects – TouchDesigner Tutorial

Hey! In this tutorial, we’ll go over how to use Stable Diffusion in TouchDesigner to turn AI-generated images into a video and add audio-reactive particles for a blending effect. The project file is available on my Patreon: https://patreon.com/tblankensmith Part 1 of this tutorial is available here: https://www.youtube.com/watch?v=mRXTR9vcHAs Huge thank you to Peter Whidden for his […]