Evolving the Creative Workflow with AI
How a strategic blend of generative AI tools transformed an idea into an animated, art deco-meets-tech experience.
One reason I became so interested in generative AI is that it makes it easier for people, regardless of their background, to see creative ideas through to completion. I’ve seen too many brilliant concepts stall before they could fully emerge. With a willingness to learn these tools and invest the necessary time, you can take meaningful steps toward turning creative ideas into tangible results, even if you’re one person rather than a large team. Of course, it may still take multiple iterations and plenty of human effort to fully realize your vision.
Recently, I had the opportunity to share some of what I’ve learned at the Winter Symposium for Columbia University’s School of Professional Studies MS in Strategic Communication. Below, I’ve highlighted a few key takeaways from my experience, showing how these AI tools come together in real-world creative projects.
The New Creative Brief (Prompt)
For Phyusion’s holiday video last year, I used generative AI for every part of the production—images, video, music, and animation. While tools like LTX Studio can handle most of these tasks in one place, I’ve found that each platform has its own strengths, so I tackled each step with the program best suited to it.
My starting point is an AI workspace. I’ve set up a Claude Pro Project that functions like a “CMO,” complete with brand guidelines, voice specifications, and target audience details. You could do something similar with an OpenAI Custom GPT. When I began planning my holiday message, I shared the creative brief with this workspace—tweaking the timeline, tools, and overall vibe until everything felt cohesive.
I asked it to generate five different creative concepts, and for each one, it provided a visual treatment, music direction, and narrative arc. What impressed me was how it evaluated each idea through multiple lenses: authenticity, visual appeal, technical feasibility, and brand alignment.
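For readers who would rather script this step than run it through the chat interface, a similar "CMO" workspace can be approximated with a standing system prompt sent to the Anthropic API. This is a minimal sketch under stated assumptions: the brand details, helper names, and model string are illustrative placeholders, not Phyusion's actual configuration.

```python
# Sketch: approximating a "CMO" Claude Project as a reusable system prompt.
# All brand details below are placeholders, not Phyusion's real guidelines.

def build_cmo_system_prompt(brand: str, voice: str, audience: str, aesthetic: str) -> str:
    """Assemble the standing context a Claude Project would otherwise hold."""
    return (
        f"You are the CMO for {brand}. "
        f"Brand voice: {voice}. Target audience: {audience}. "
        f"Visual aesthetic: {aesthetic}. "
        "Evaluate every idea for authenticity, visual appeal, "
        "technical feasibility, and brand alignment."
    )

def build_concept_request(n_concepts: int) -> str:
    """The creative-brief ask, sent as the user message."""
    return (
        f"Generate {n_concepts} creative concepts for our holiday video. "
        "For each, provide a visual treatment, music direction, and narrative arc."
    )

system_prompt = build_cmo_system_prompt(
    brand="Phyusion",
    voice="warm, direct, optimistic",
    audience="partners and clients",
    aesthetic="art deco meets tech",
)

# With an API key configured, the call would look roughly like this
# (requires the `anthropic` package; commented out so the sketch runs offline):
# import anthropic
# client = anthropic.Anthropic()
# reply = client.messages.create(
#     model="claude-sonnet-4-20250514",  # model name is an assumption
#     max_tokens=2000,
#     system=system_prompt,
#     messages=[{"role": "user", "content": build_concept_request(5)}],
# )
# print(reply.content[0].text)
```

The point of the builder functions is simply that the brand context lives in one place, so every concept request is evaluated against the same guidelines, which is what the Claude Project (or an OpenAI Custom GPT) does for you in the UI.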
Eventually, we landed on a concept centered on animating a character that reflected my own personality while expressing a genuine thank you to our partners and clients. It also fit our broader goals, from maintaining our art deco-meets-tech aesthetic to ensuring everything worked smoothly across different channels.
From Concept to Creation
Once I knew the direction, I turned to tools like Midjourney and Ideogram to generate my starter images. My “CMO” Claude workspace drafted some initial image prompts, which I tweaked to better reflect my vision. Here is the final image of the woman who became our central character.
This prompt was about 100 words long and included details such as the desired emotional response, the time period, and the exact color palette.
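One way to keep a detailed prompt like that consistent across every image in a project is to assemble it from structured fields. A rough sketch; the field values here are invented for illustration and are not the actual prompt used in the video.

```python
# Sketch: composing a detailed image prompt from structured fields, so the
# same period, palette, and mood carry across every generated image.
# The values below are illustrative, not the prompt used in the video.

def compose_image_prompt(subject: str, period: str, palette: list[str],
                         mood: str, style: str) -> str:
    """Join the structured pieces into a single prompt string."""
    return (
        f"{subject}, {period} setting, {style} style, "
        f"color palette of {', '.join(palette)}, "
        f"evoking a feeling of {mood}"
    )

prompt = compose_image_prompt(
    subject="elegant woman at a grand piano",
    period="1920s",
    palette=["deep emerald", "brushed gold", "ivory"],
    mood="warm holiday gratitude",
    style="art deco meets tech",
)
print(prompt)
```

Keeping the emotional response, time period, and exact color palette as explicit fields makes it easy to regenerate variations without the style drifting between images.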
Bringing Scenes to Life
After finalizing the images, I used Sora and RunwayML to assemble them into animated scenes. It was rewarding to see the backgrounds come to life, adding a sense of motion and depth that made each moment more engaging.
I also used RunwayML’s Act One to animate the central character. I recorded a short video of myself reading the script, which was more challenging than I expected; getting my voice and tone to sound natural ended up being one of the hardest parts. After I uploaded it, the platform mapped my expressions and gestures onto the Midjourney-created figure, preserving the art deco-meets-tech style while reflecting my personal presence.
Bringing It All Together
I generated a holiday jazz song in Suno, shaping it to capture a warm, festive tone. Then, I fine-tuned the AI-generated video snippets in Kapwing, adjusting transitions and timing until the flow felt just right. By blending these elements, I united the art deco-meets-tech aesthetic with a personal vibe, giving the final piece its own distinctive holiday spirit.
While this video isn’t a work of art, it’s something I could never have produced on my own before generative AI. More importantly, this is the worst these tools will ever be in our lifetimes. As they evolve, we won’t need to hop between platforms to achieve the best results. Instead, it will feel natural to co-create with an AI creative partner in one seamless environment.
About Sam Stark
Samantha Stark, founder of Phyusion, brings 25 years of communications expertise, having led integrated campaigns for Fortune 100 brands and managed cross-disciplinary agency teams. Most recently, as a member of the Executive Leadership Team at Endeavor's 160over90, she oversaw hundreds of specialists across the US, EMEA, and APAC regions. Her work with IBM during her tenure at Ketchum sparked an early interest in the potential of AI in communications—years before it became an industry buzzword. Recognized as one of PR News's Top Women in the Industry, Samantha also serves on the Board of Trustees for the Institute for Public Relations, where she directs the Digital Media Research Center, focusing on the transformative impact of emerging technologies on the communications landscape.
