I stopped calling them user stories a while back. First, I switched to just “stories.” Now I’m settling on “human stories.” Because that’s what they are—stories about humans, not users.
For years, I’ve wanted to turn these stories into comic strips. Something visual. Something that shows the before and after, the problem and the solution, in a way that words alone can’t quite capture.
The First Attempt
About two years ago, I tried drawing one by hand. I took a sample story with its given-when-then scenarios and sketched it out. The idea was to illustrate the story’s past, present, and future.
It didn’t come out great. And it took way too long. I looked at the result and thought, “Yeah, I’m not going to be doing this as much as I’d like.”
The AI Workflow
Now, with the tools we have, I’ve built something different.
I have an AI workflow—a markdown file with instructions—that takes a problem statement or a conversation transcript with stakeholders and generates stories. Depending on the size of the problem, it creates either individual stories or full epics.
The output is structured: the human story itself, acceptance criteria, and scenarios in Gherkin format (given-when-then). I’ve had this working for a while now.
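To give a sense of the shape, here’s what one piece of that output looks like. The story itself is invented for illustration (Maya and the reorder flow are placeholders, not a real epic):

```gherkin
Feature: Reorder a past purchase
  As a returning customer, Maya wants to repeat last month's
  order in one step, so she doesn't rebuild her cart by hand.

  # Acceptance criteria (trimmed): reorder is available from order
  # history, and the rebuilt cart matches the original order exactly.

  Scenario: Reordering from order history
    Given Maya is signed in and viewing her order history
    When she selects "Reorder" on last month's order
    Then a new cart is created with the same items
    And she is taken straight to checkout
```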
From Stories to Storyboards
The next step was automating the visual part. I created skills in Claude Cowork that read through an epic and generate a comic storyboard.
Here’s what it produces:
- Narrative arc: What’s the problem? What’s the solution? What does “before” look like versus “after”?
- Characters: Who are the people in this story? What do they look like? What are their main traits?
- Character reference sheet: Detailed descriptions and prompts for generating consistent character images.
- Panel-by-panel breakdown: For each panel, it describes the location, the scene, and the exact prompt to use in the image generator (one entry is sketched below).
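A single panel entry, with made-up details continuing the placeholder story from earlier, looks roughly like this:

```markdown
### Panel 4: The breakthrough
- Location: open-plan office, late afternoon light
- Scene: Maya shows the new reorder flow to a skeptical teammate
- Dialogue: "One tap. That's the whole flow."
- Image prompt: flat comic style, two characters at a standing desk,
  laptop screen visible, warm lighting, consistent with Maya's
  character reference sheet
```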
I then feed that markdown file into a custom app I built in Gemini. The app extracts the comic page content, generates the individual images, and assembles everything into a self-contained HTML file.
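I didn’t hand-code that last assembly step, but the “self-contained” part boils down to inlining the generated images as base64 data URIs so the file travels as a single artifact. A minimal Python sketch of the idea, with hypothetical file names:

```python
import base64
from pathlib import Path

def embed_images(panel_paths: list[Path]) -> str:
    """Inline each panel as a base64 data URI so the HTML needs no external files."""
    imgs = []
    for p in panel_paths:
        data = base64.b64encode(p.read_bytes()).decode("ascii")
        imgs.append(f'<img src="data:image/png;base64,{data}" alt="{p.stem}">')
    return "<!DOCTYPE html><html><body>" + "\n".join(imgs) + "</body></html>"

# "panels/panel_*.png" is a hypothetical naming scheme for the generated images
html = embed_images(sorted(Path("panels").glob("panel_*.png")))
Path("comic.html").write_text(html)
```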
The result? A two-page comic strip with character reference sheets and speech bubbles. The positioning isn’t perfect—titles and bubbles are sometimes all over the place—but it’s already pretty good.
Turning Comics into PowerPoint
Once I have the HTML and images, I go back to Cowork and tell it to create a PowerPoint slide deck. The deck includes:
- A title slide based on the epic
- The “before” state
- The “after” state
- A high-level summary of the value we hope to deliver
The instructions for assembling the deck are embedded in the storyboard markdown. The AI uses them to build the deck automatically.
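The AI handles this for me, but if you wanted to script the same assembly by hand, a rough equivalent using the python-pptx library might look like this (the captions and image paths are placeholders, not my actual instructions):

```python
from pptx import Presentation
from pptx.util import Inches

prs = Presentation()
blank = prs.slide_layouts[6]  # the blank layout in the default template

# (caption, image) pairs; paths are hypothetical outputs of the comic step
pages = [
    ("Before", "panels/before.png"),
    ("After", "panels/after.png"),
]
for caption, image in pages:
    slide = prs.slides.add_slide(blank)
    box = slide.shapes.add_textbox(Inches(0.5), Inches(0.2), Inches(9), Inches(0.6))
    box.text_frame.text = caption
    slide.shapes.add_picture(image, Inches(0.5), Inches(1.0), width=Inches(9))

prs.save("epic_deck.pptx")
```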
I’ve used this process for several epics now. The storyboard also includes refinement notes, so I can tweak individual panels and regenerate images as needed.
The AI Quirks
There are quirks. AI loves giving people extra hands. In one panel, a character has three or four arms. In another, it looks like three hands are coming from the same person.
It’s a little funny. Sometimes I regenerate those images. Sometimes I leave them as-is—they get a chuckle, and as long as they don’t distract from the point, it’s fine.

What’s with AI and the extra hands?
The Video Experiment
The latest experiment: turning the comic storyboard (does “graphic novel” sound better?) into a video.
I already had the storyboard. I already had the images. It looked like something you’d use for video production. So I spent less than an hour trying it out.
I told Cowork to take the comic storyboard and the PowerPoint deck and create a video storyboard. I mentioned I was using Google Vids, which caps individual scenes at eight seconds, and that I wanted the overall video to stay under a minute.
It generated:
- Scene descriptions
- Visual details
- On-screen text
- Voiceover scripts for each scene
- Production notes and character transitions
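A single scene entry, again with invented details standing in for the real epic, looked roughly like this:

```markdown
## Scene 2 (max 8 seconds)
- Visual: Maya rebuilding her cart item by item, visibly frustrated
- On-screen text: "Every month. The same ten items."
- Voiceover: "Reordering shouldn't feel like starting over."
- Transition: hard cut to the "after" state
```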
Then I went into Google Vids and pasted the markdown. I created one video per scene—scene one, scene two, scene three—and stitched them together.
The result was a one-minute video.

Left: panel from the graphic novel. Right: frame from video.
What Worked (and What Didn’t)
The composition was solid. The transitions between scenes made sense. The voiceover conveyed the before-and-after narrative clearly.
But there were issues:
- Text generation: Words were misspelled or completely made up. “Arriving” had three R’s. Some words were unreadable.
- Audio glitches: The voiceover had a stutter in one spot: “shouldn’t feel like shouldn’t feel like.”
- Extra hands: Just like in the comics, characters had too many arms.
- Generic visuals: When the video showed app screens, it generated placeholder text and gibberish instead of real interface elements.
One nice bonus: closed captions were included automatically when I downloaded the video.
Where This Could Go
This isn’t about creating high-quality production videos. It’s about giving people a glimpse before a meeting.
Imagine sending a one-minute video ahead of a conversation: “Here’s what we’re going to discuss. Here’s the problem we’re solving.”
People can watch it quickly. They might laugh at the weird AI quirks. But they’ll walk into the meeting with context.
If I gave the AI screenshots of our actual prototype—the real app screens—would it use those instead of generating nonsense? If I provided branding assets, would it incorporate them correctly?
I don’t know yet. But those are the next experiments.
Time Invested, Time Saved
Once all the skills are in place, the process could look like this:
- Generate human stories from a conversation or problem statement.
- Create a comic storyboard with character references.
- Generate images and assemble them into a graphic novel.
- Optionally, create a one-minute video using the storyboard and prototype screenshots.
It’s time invested upfront to save time during stakeholder conversations. Get everyone on the same page faster. Streamline the discussion.
That’s the experiment. Let’s see where it takes me.