Materealized

Over 122.6 million people are currently displaced worldwide (UNHCR, 2024), many cut off from the places, objects, and traditions that once shaped their identity.

Materealized explores how emerging technology can help preserve cultural memory when physical artifacts are lost. The system uses voice interaction and generative AI to turn spoken stories into evolving 3D visuals — abstract portraits of memory rendered in real time.

By translating voice into image, it invites reflection, reimagines preservation, and asks: Can we use technology not just to remember, but to reconnect?

Client

CSULB MA HXDI Thesis Project

Project

Exhibition & Book

Year

2025

Tools

TouchDesigner, Stable Diffusion, Midjourney, ChatGPT, Runway ML, Unreal Engine

Early Prototype: Visualizing Memories

The first testable prototype featured Maryan, a Filipino immigrant, reflecting on her childhood visits to the Philippines. As she spoke, generative visuals appeared in real time, forming ambient anchors tied to her memory of working at her family’s sari-sari store.

This prototype explored the emotional potential of AI-assisted storytelling. Could a memory feel meaningful when reconstructed by a machine? Could it still feel like hers?

A spoken memory becomes a living space. This prototype used a custom GPT pipeline to turn Maryan’s story into layered 3D visuals, tested in an Oculus headset to imagine how memories of places that no longer exist might live on.
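A minimal sketch of that story-to-prompt step appears below, assuming the OpenAI Python SDK; the model name and system prompt are illustrative stand-ins, not the prototype’s actual configuration.

```python
# Hypothetical sketch of the story-to-prompt step. Assumes the OpenAI
# Python SDK; model name and system prompt are illustrative, not the
# prototype's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def story_to_visual_prompt(story_text: str) -> str:
    """Condense a spoken memory into one vivid image-generation prompt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Rewrite the memory below as a single vivid "
                    "image-generation prompt: setting, objects, light, mood."
                ),
            },
            {"role": "user", "content": story_text},
        ],
    )
    return response.choices[0].message.content.strip()

print(story_to_visual_prompt(
    "After school I helped at my family's sari-sari store, "
    "stacking candy jars while rain drummed on the tin roof."
))
```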


Research & Thesis Report

This project culminated in an 84-page thesis that weaves together design research, cultural theory, and technical experimentation.

The report explores:

  • The impact of domicide—the loss or destruction of home—on cultural identity

  • How memory works through the lens of neuroscience and embodiment

  • The role of emerging technologies in preserving cultural narratives

Beyond theory, the report documents the full design process: from user interviews and research synthesis to prototyping, testing, and exhibition. It was designed as both a process archive and a provocation—a way to ask how speculative tools like generative AI might help preserve identity in the absence of place.

Blending personal stories, technical breakdowns, and visual storytelling, the thesis explores how memory can be spatialized, reimagined, and shared.

Report Link


Selected pages from the report


Public Exhibition

Materealized was exhibited at the Duncan Anderson Gallery as a participatory installation. Visitors were invited to reflect on a memory using printed prompts, then speak it aloud into a central microphone. As they shared, the system visualized their story in real time—turning voice into immersive memory artifacts.

Some stories were recorded for further development, contributing to a growing archive of cultural memory.


Prototype render of the gallery setup: an Unreal Engine mockup used to previsualize spatial layout, projection scale, and audience interaction before installation.


How it worked

The exhibit followed a simple three-step flow that made storytelling intuitive and accessible:

  • Step 1: Guests selected a memory slip and wrote down a meaningful personal moment.

  • Step 2: At the center of the room, they spoke their story into a microphone.

  • Step 3: Their voice triggered a live transformation—words became visuals, projected as a generative 3D interpretation of their memory.



Exhibition Documentation

These moments were captured during the live exhibition, where visitors engaged with the installation in intimate, often emotional ways.

Images courtesy of Shrey Patel, Exhibit Documentation


Final Prototype: Real-Time Memory Rendering

The final version of Materealized operated in real time, generating visuals at approximately 6 frames per second. The system combined multiple AI models into a locally run pipeline, built and optimized within TouchDesigner using an NVIDIA GPU.

The technical stack included:

  • Whisper (OpenAI) for real-time voice transcription

  • SDXL Turbo + StreamDiffusion for accelerated image generation

  • Depth Anything to convert images into 3D point clouds

Bringing all these components together in a single, responsive flow was one of the biggest challenges. Running the models concurrently required performance trade-offs, especially to maintain real-time image generation.
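Under stated assumptions (openai-whisper, diffusers, and transformers installed; the model ids shown are plausible public checkpoints, not the exhibition’s exact TouchDesigner build), a condensed offline version of the flow might look like this:

```python
# A minimal sketch of the three-stage pipeline (speech -> image -> depth).
# Assumes openai-whisper, diffusers, and transformers; model ids are
# plausible public checkpoints, not the installation's exact build.
import torch
import whisper
from diffusers import AutoPipelineForText2Image
from transformers import pipeline as hf_pipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

# 1. Speech -> text: transcribe a short audio clip with Whisper.
asr = whisper.load_model("base")
transcript = asr.transcribe("memory_clip.wav")["text"]

# 2. Text -> image: SDXL Turbo generates in a single denoising step;
#    StreamDiffusion wraps a similar pipeline for higher frame rates.
t2i = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=dtype
).to(device)
frame = t2i(prompt=transcript, num_inference_steps=1, guidance_scale=0.0).images[0]

# 3. Image -> depth: Depth Anything estimates a depth map, the basis
#    for reprojecting pixels into a 3D point cloud.
depth_estimator = hf_pipeline(
    "depth-estimation",
    model="LiheYoung/depth-anything-small-hf",
    device=0 if device == "cuda" else -1,
)
depth_map = depth_estimator(frame)["depth"]

frame.save("memory_frame.png")
depth_map.save("memory_depth.png")
```

In the installation itself, all three stages shared a single NVIDIA GPU inside TouchDesigner, which is why concurrency and memory budgeting dominated the tuning work.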

Through testing and feedback, I learned that transcription quality had a significant impact on the storytelling experience. Misheard or delayed words sometimes led to visual outputs that felt disconnected from the speaker’s intent. A future iteration could integrate a large language model to improve semantic understanding—analyzing tone, sentiment, and context to generate more coherent and emotionally accurate visuals.
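As a rough illustration of that direction, even a lightweight sentiment pass could shade the prompt before generation. In the sketch below, the sentiment model is the transformers default and the mood-to-style mapping is purely hypothetical.

```python
# Hedged sketch of the proposed semantic step: shade the visual prompt
# by the speaker's tone before image generation. Uses the default
# transformers sentiment model; the mood-to-style mapping is hypothetical.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")

MOOD_STYLES = {
    "POSITIVE": "warm golden light, soft focus, nostalgic haze",
    "NEGATIVE": "muted palette, long shadows, melancholic stillness",
}

def enrich_prompt(transcript: str) -> str:
    """Append a mood-matched style cue to the raw transcript."""
    label = sentiment(transcript[:512])[0]["label"]  # truncate long speech
    style = MOOD_STYLES.get(label, "cinematic, atmospheric")
    return f"{transcript}, {style}"

print(enrich_prompt("We sold candies and rice at my grandmother's store."))
```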

The next step in development is to create a feature that allows guests to save high-quality renderings of their generated stories—a keepsake memory, anchored in digital form.

Divya shares a memory of her visit to a temple in South India

Reflections & Design Impact

This project posed not only technical questions, but human ones:

  • How can we make advanced tools more intuitive and accessible, especially for storytelling?

  • What does it mean to create systems that reflect a wider range of human experiences, ethically and equitably?

  • Can speculative prototypes like this act as cultural bridges, offering not just novelty but emotional connection?

Materealized represents the kind of work I want to keep doing — speculative, human-centered, and technically ambitious. It blends narrative with interaction, and asks not just what we can build but who we’re building it for.

I approach emerging technologies not as ends, but as tools for designing connection, memory, and meaning. Whether I’m prototyping, researching, or facilitating, I look for ways to ground innovation in real human context.

My goal is to keep designing systems that are not just functional — but intentional, inclusive, and worth belonging to.