
STORY DIFFUSION

AI can be used to emulate generational interpretation, shedding light on the complex interplay between human creativity and machine learning algorithms. This body of work embraces the relationship between contextual generational storytelling and its artificially generated emulation, and calls for continued exploration and refinement of AI models and techniques to better serve the needs and aspirations of artists while navigating ethical complexities with care and foresight.

Part I: The paintings

Narrative Driven Visualization

These paintings establish a visual art direction that unifies several perspectives within one family leading up to the decision to flee their home country for a new life. The works' close visual similarity reflects a common experience, yet that experience is told through two different perspectives: one father's and one daughter's. The development of these paintings also illuminates a third generation of these narratives. As the next generation of this family, the artist retells the same narratives through a uniquely interpreted perspective. These paintings, along with their artist, lack the period-accurate context and firsthand experience that previous generations can recount from personal memory. Instead, they are built on a contemporary context, with an understanding of these experiences drawn only from memoirs and narratives passed down through family conversations. While the child generation cannot completely understand the full experiences of the parent generation, each lens brings a new culmination of context from personal identity and culture, creating a new interpretation of this narrative.


Art Directable Training and Interpretation

Using Stable Diffusion's image generation program, moved to a closed, local network, author-created images and illustrations can be input to train the art direction of a proprietary generative AI model. After 12 experimental models were created, each successive model was used to develop hundreds of illustrations, which in turn trained the next generation. Each generation of AI model represents a generation of interpretation through visual narratives recognized through text prompts. In total, this project resulted in 32 AI models and over 10,600 images produced and used.

The first Stable Diffusion model for this experiment was developed using a library of author-created illustrations built around the original three paintings. Those three paintings are the primary source of art direction from which this model develops. The training process teaches Stable Diffusion to understand the use of shape, value, and color associated with the subjects it is prompted to create.
Along with these paintings, supporting original artworks were created and fed to the model to improve its understanding of visual art fundamentals, such as perspective and value.
For this project, the context to interpret is illustrated by the dataset of images and visuals on which the models are trained.

Returning to the thesis of generational development of narratives, the process by which these environments are created shares many commonalities with the interpretation of narratives observed in the creation of the original three paintings. The contextual foundation of images used for each individual AI model is unique, featuring images created by the parent model of the previous generation. These images, combined with new illustrations created by the artist, resculpt the understanding of art direction at each generational step. Later models, however, will always be influenced by the works interpreted and reiterated by previous models. In Stable Diffusion's interpretation of narrative, the story to interpret and visualize is the home. Conceptually, the idea and recognition of home is deeply connected to the narrative of the immigrant, in which homes are abandoned and also reforged. At each model, output images can be used to evaluate the current model's ability to identify and illustrate narrative qualities of home in the environments it is prompted to create.
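The generational training process described above can be sketched as a simple loop. This is an illustrative outline only: the function names (train_model, generate_images, run_generations) are hypothetical placeholders, not Stable Diffusion API calls, and actual fine-tuning would use a dedicated training tool. The sketch assumes each new model trains on the parent model's outputs plus a handful of fresh artist illustrations, as the text describes.

```python
# Hypothetical sketch of the generational training loop. All function names
# are placeholders; real fine-tuning would use Stable Diffusion tooling.

def train_model(dataset, generation):
    """Stand-in for fine-tuning a model checkpoint on `dataset`."""
    return {"generation": generation, "trained_on": len(dataset)}

def generate_images(model, count):
    """Stand-in for prompting the model; returns labeled image identifiers."""
    return [f"gen{model['generation']}_img{i}" for i in range(count)]

def run_generations(seed_illustrations, new_art_per_generation,
                    generations, images_per_generation):
    """Each model trains on the parent's outputs plus new artist work."""
    dataset = list(seed_illustrations)
    models = []
    for g in range(1, generations + 1):
        model = train_model(dataset, g)
        models.append(model)
        outputs = generate_images(model, images_per_generation)
        # Next generation's training set: parent outputs + new illustrations.
        dataset = outputs + [f"artist_gen{g}_art{i}"
                             for i in range(new_art_per_generation)]
    return models

models = run_generations(
    ["painting_father", "painting_daughter", "painting_third"],
    new_art_per_generation=5, generations=3, images_per_generation=100)
```

The key design point the loop captures is that every later generation inherits its visual context from the previous model's outputs, so earlier interpretations keep influencing all descendants.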

Animations / Interpretation in Further Narratives

The AI was ultimately trained to develop and interpret environment HDRIs built from text context, which are automatically converted to 3D mesh projections. These 3D models can be imported into storyboarding and animation programs, where characters can be created and animated inside them, offering an immersive character-animation experience and the accessibility a 3D workflow brings to animatic creation.
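The HDRI-to-mesh step can be illustrated with a minimal sketch: an equirectangular HDRI wraps naturally onto a UV sphere, whose texture coordinates sample the image by longitude and latitude. This is an assumed, simplified version of the projection (the actual pipeline's converter is not specified); the function name and resolution parameters are illustrative.

```python
import numpy as np

# Hedged sketch: project an equirectangular HDRI onto a UV sphere so the
# environment can be imported into a 3D animation package.

def equirect_sphere(rows=16, cols=32, radius=10.0):
    """Build sphere vertices plus UVs that sample an equirectangular image.

    u maps to longitude (0..1 across the HDRI width), v to latitude.
    """
    verts, uvs = [], []
    for r in range(rows + 1):
        v = r / rows                  # latitude parameter, 0 at the top pole
        theta = v * np.pi             # polar angle
        for c in range(cols + 1):
            u = c / cols              # longitude parameter
            phi = u * 2.0 * np.pi     # azimuth
            verts.append((radius * np.sin(theta) * np.cos(phi),
                          radius * np.cos(theta),
                          radius * np.sin(theta) * np.sin(phi)))
            uvs.append((u, 1.0 - v))  # flip v so image top maps to the pole
    return np.array(verts), np.array(uvs)

verts, uvs = equirect_sphere()
```

Placing the animated characters at the sphere's center makes the HDRI read as a surrounding environment, which is what enables blocking out shots directly inside the generated space.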

The exploration of family experiences as war refugees provided a deeply personal narrative that served as the foundation for artistic exploration. Inspired by family-shared experiences as war refugees, this body of work uses art as a medium to preserve and interpret those experiences. Through oil paintings and digital illustrations, it captures the complexities of intergenerational trauma and the evolution of narratives within the family's heritage. The narrative-driven visualization process begins with paintings depicting experiences from two generations, culminating in a series of artworks that reflect the multi-generational journey of immigration and adaptation.

 

Drawing parallels between familial heritage and generational AI interpretation, the generative AI experiment explores how tools like Stable Diffusion can emulate multi-generational narrative development, while contextualizing the study within the broader landscape of generative AI's impact on creative industries and highlighting the importance of understanding its potential, limitations, and implications. By training AI models with datasets derived from original paintings, the study demonstrates the potential for AI-driven tools to augment artistic practices and explore new avenues of collaboration and innovation. However, ethical considerations surrounding authorship, ownership, and bias must be carefully addressed to ensure that AI-driven artistic collaborations uphold principles of transparency, accountability, and creative authenticity.
