Role:
Designed and developed the generative AI system.
Team:
Divya Dhavala, Sofia Ingegno, Cameron Reis, Kathy Lin
Timeline:
3 weeks
Tools:
TouchDesigner, MediaPipe, StreamDiffusion
Expressions of Alamelu is a live performance system that translates Kuchipudi lyrics into generative visuals, painted onto the screen by the dancer's movement in real time.
Using MediaPipe, the dancer's body points are extracted from a live camera feed and mapped to brush points on the screen. The lyrics are pre-translated into visual prompts and stored in a table, which feeds StreamDiffusion to generate imagery representing the song's poetic language. As the dancer moves, the generated visuals are drawn behind her. The full system runs locally in TouchDesigner, powered by an NVIDIA GPU to sustain real-time generation.
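The landmark-to-brush mapping described above can be sketched as follows. This is a minimal illustration, not the project's actual TouchDesigner code: the function names, the (x, y) tuple representation of MediaPipe's normalized landmarks, and the smoothing step are all assumptions for clarity.

```python
def landmarks_to_brush_points(landmarks, width, height):
    """Map normalized (0-1) pose landmarks to pixel-space brush points.

    `landmarks` is a list of (x, y) tuples as MediaPipe Pose reports
    them (normalized to the frame); `width` and `height` are the
    output canvas dimensions.
    """
    return [(int(x * width), int(y * height)) for x, y in landmarks]


def smooth_points(prev, curr, alpha=0.5):
    """Blend the previous frame's points into the current frame's
    (simple exponential smoothing) so the painted trail doesn't jitter.
    """
    return [
        (int(alpha * cx + (1 - alpha) * px),
         int(alpha * cy + (1 - alpha) * py))
        for (px, py), (cx, cy) in zip(prev, curr)
    ]
```

In a live loop, each camera frame's landmarks would be converted to brush points and smoothed against the previous frame before being used to reveal the StreamDiffusion output behind the dancer.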
Published at AHFE Conference 2025.


