A few days after I published the post about building a story-writing agent, a colleague reached out. He had played with the agent I’d shared, and had a real problem he wanted to tackle.

The Houston office has its own expense classification rules. They differ from corporate policy in specific ways that regularly confuse people. Part of his job was helping colleagues figure out how to report expenses correctly. Repetitive, detail-heavy work, and exactly the kind where consistent guidance would save everyone time.

We sat down together for an hour. By the end, we had personas, stories, and a working prototype of an expense categorization assistant.

Starting with Stories, Not with Software

The first thing we did wasn’t to think about the agent. It was to think about the problem.

I pointed the story-writing agent at my colleague’s situation. He described it out loud. I’ve found that voice works better than typing for this, not just for me but for most people I work with. When you’re talking, you say what you mean. When you’re typing in a text box, you start editing before you’re done thinking.

The agent extracted personas and started asking clarifying questions: Do all Houston Improvers follow the same rules, or are there subgroups, such as client-billable versus internal? Should the output be just a category, or a full Workday-ready checklist?

These are exactly the questions that surface halfway through a build if you skip this step. Here, they came up before we’d written a single instruction for any agent.

He worked through each one, and I could see him thinking through implications he hadn’t fully mapped out. “Client-billable versus internal, yeah, there’s a difference there.” That distinction would affect how the agent handled certain scenarios. Better to know that now.

That’s what good questions do. They surface assumptions before you start building.

What the Stories Captured

The agent produced personas: the Houston Improver submitting expenses from a business trip, the one processing team meals, the one handling internal event costs. He pushed back on some and refined others. Not everything the agent suggested was right. But reacting to something is faster than starting from a blank page.

Then came the stories, each starting with the “why”:

In order to enter my meal expense correctly for a team event, as a Houston Improver, I want to describe the expense in plain language and have the system tell me which Workday fields to fill in and how.

Simple, focused, grounded in what the person actually needs. Not in what the system should do.

At some point, he stepped back and said something like: “So here we’re writing stories to build an agent.” He’d come in thinking about the agent. The process pulled him back to the stories first. That sequence matters more than people realize.

From Stories to Agent

Once we had stories and personas, we had something concrete to hand to the agent builder. The stories described the scenarios. The personas described who. The acceptance criteria described what “done” looked like.

We created a new agent in Copilot, using the stories as the specification. We fed in the PowerPoint deck my colleague had built, which contained the local classification rules. Now the agent had two things: a clear purpose (from the stories) and the knowledge to carry it out (from the rules).

We tested it with a receipt for snacks. Then one for an Obsidian subscription. It handled both, applying the right rules and generating the right Workday fields. When it wasn’t sure, it asked for clarification or flagged that the person should reach out to him directly.

That escalation path came from the stories, too. One of the acceptance criteria had been: when the agent can’t confidently classify an expense, it should direct the user to the right person.

The Part That Doesn’t Require Code

At one point, I mentioned to him that this approach isn’t just for building software or agents. Sometimes what a good set of stories reveals is that you don’t need to build anything. You need a clearer process, better communication, or a simpler checklist.

In this case, the stories pointed toward an agent. The classification rules were well-defined, the scenarios consistent enough, and the value of automating the guidance was real. The stories helped us see that clearly before building anything.

But if the stories had revealed something messier, like rules that contradicted each other or too many exceptions to handle cleanly, that would have been equally valuable. Better to find that before the build than after.

What I’m Taking from That Session

My colleague came in curious about agents. He left thinking in stories.

Stories force you to think about people and their situation before you think about technology. When you lead with “in order to,” you have to commit to a reason. That reason shapes everything that follows: the personas, the scenarios, the acceptance criteria, and eventually the agent instructions or application requirements.

The story-writing agent made the process faster. It asked questions, extracted personas, and suggested scenarios. But the thinking was still his. I was just helping him get it out of his head and onto the screen.

Once it was on the screen, it drove the build.

That loop from problem statement to stories to working tool is what I keep coming back to. Not because it’s faster (though it is). Because you start by understanding what someone actually needs before you start building anything.
