In this episode, Matthew and I explored how we’re learning to work with AI tools in ways that go beyond just picking the most expensive model.

The Five-Year Journal and Two Screens of AI

I keep a five-year journal, with each page covering one day across five years. You get maybe two short sentences per day. That’s it.

Last night I wrote: “I had my AI agents at full force today—two screens, each with an agent working on a separate feature.”

Then I looked back one year. I had written about working on a single feature, breaking it down into tasks by hand—one task per endpoint, per service, per component. The granularity was extreme because that’s what it took.

By “feature” I meant something completely different a year ago. If I could have read yesterday’s entry back then, it wouldn’t have made sense. I couldn’t have conceived where we are now.

From Tasks to Stories to Features

Our scrum board has changed over the last few months. It used to be: one swim lane equals one user story, and each card is a task.

Now it’s: one swim lane equals a feature, and each card is a full story.

I don’t create many tasks anymore. Well, I do, but they’re different. When I give a story to the agent with instructions on how to decompose work, it knows what to do. I add a card to review the stories—because I’m the human, and stories apply to humans. Otherwise, they’d just be requirements.

Matthew works in a hybrid environment where some people use AI tools and some don’t. When they size stories, they do so from the perspective of someone doing all the labor manually. He looks at the estimates and thinks, “I can get that done in much less time.”

But he has to size it in what he calls “an insincere way”—because he’s not planning to do the work manually.

The Question of Which Model and Which Tool

We talked about Cursor versus Windsurf, about auto mode versus manually selecting models, and about tokens and costs.

I’ve been using Cursor’s auto mode for months. I don’t want to spend time figuring out which model to choose. I know the outcome I want. Go do the thing.

If it struggles—as it did a few months ago with a dark mode implementation—I switch to a different model. Right away, it clicks. Night and day.

But that’s the exception. Most of the time, auto mode works because I’ve built up instructions. The guardrails are up. It knows how we design endpoints, test, and implement. Once it knows, auto mode works fine.

Matthew pointed out something important: we’re marching toward a world where you have to pay to play. Organizations that can afford to use frontier models exclusively will get higher-quality outputs more often. Those who have to watch token usage will need to be more selective.

It’s like healthcare: how much money you have determines the level of care you can get.

The Natural Medicine Analogy

I offered a different analogy: someone with an illness may take an expensive pill that solves the problem. Or they can go live in the country for three months, eat natural food, and get the same result for free.

I can pay $200 a month for the premium tier. Or I can learn how to use auto mode for $20, teach the tool the natural way of doing things, and get what I need done.

The teaching part matters. I’ve been building up a library of markdown files with instructions. When I need something, I drag them into the chat. “Here’s what you need to know. Go.”
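A minimal sketch of what that instruction library could look like if you scripted the “drag them into the chat” step. The directory layout, file names, and header line here are all hypothetical, not the actual setup described in the episode:

```python
from pathlib import Path

def build_context(instructions_dir: str, topics: list[str]) -> str:
    """Gather the markdown instruction files relevant to the task
    and join them into a single context block for the chat."""
    parts = []
    for topic in topics:
        path = Path(instructions_dir) / f"{topic}.md"
        if path.exists():
            # Prefix each file with its topic so the model knows
            # which guardrail it's reading.
            parts.append(f"## {topic}\n\n{path.read_text()}")
    return "Here's what you need to know. Go.\n\n" + "\n\n".join(parts)

# Example: pull in the endpoint-design and testing guides
# (these file names are made up for illustration)
prompt = build_context("instructions", ["endpoint-design", "testing"])
```

The point is less the script than the habit: the instructions live in files you curate over time, and assembling them per task is cheap.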

Yesterday I told Windsurf: “Take all these markdowns and create some workflows.” It read everything and said it would create workflows, not skills. I said okay—you know better than I do.

I got what I needed without fully understanding how skills versus workflows work. Because I’m focused on getting stuff done.

Watching the Process, Not Just the Result

Matthew described what he calls a trap disguised as a benefit: the ability to walk away while the AI works.

You can do that. But you lose all insight into what’s happening step by step.

If you watch the process, you’ll see it searching in places it doesn’t need to. You’ll see it doing things it doesn’t need to do. You can tell it: “There’s a shortcut. Go here instead.”

It’s like pairing with a person who reaches for the mouse for everything. You say: “You got it done. Perfect. Let me show you a quicker way.”

We teach people shortcuts to make them more productive. We should do the same with AI.

If you walk away, you can’t build that fluency. You don’t know how it arrived at the answer. And if there’s a shared pool of tokens and you’re wasting them, you’re not just harming yourself—you’re harming everyone who depends on that pool.

The Transcript That Became a Feature

I pulled up transcripts from meetings two months ago—conversations I couldn’t fully remember, even though I was there asking the questions. I’d been too busy to internalize the understanding.

I knew they were there. I knew where they fit.

I worked with AI: “Here’s the transcript. Focus on this. Here’s the codebase. Analyze it. Create a plan.”

Maybe 20 to 30 minutes of back-and-forth. Then I said go. It worked for an hour or two while I did something else.

When I came back, I tried it. The menu worked. The different options looked right. I brought it to the stakeholders, and they were blown away.

“How did you get all of that?”

“You guys told me.”

What We Could Have Always Been Doing

Matthew said something that hit hard: we could have always been doing this before AI.

The reason we weren’t is how we did meetings. Most of the meeting got left on the table. We took notes. We thought we left with a shared understanding. That was never true.

Just being able to record meetings and listen back would have gotten us closer. But being able to take the complete transcript—not just highlights or our notes—that’s what makes this possible.

It’s not just that AI tools are good now. It’s that they’re empowered by the context we can extract from the whole meeting.

And here’s what really helps: being the person in the room putting up the guardrails. Facilitating the conversation. Asking the questions. Steering toward: what do you perceive as a problem? What is the actual need? Talk about the scenario, the intents, and the pain points.

When I drop that transcript into AI, my instructions know exactly how I write user stories and what to look for.

The Autopsy on the Meeting

We can now do an autopsy on meetings. Given the questions asked, were we moving toward the goal or away from it?

Matthew described what he thinks full stack will mean in the future: not deeply technical, not deeply business—but sitting in the middle, going back and forth.

If you’re highly business-savvy, you can use AI tools to help with the technical aspects. If you’re highly technical, you can use them to support the business.

I don’t see myself as deep in either. I sit in the middle. And that’s where I’m leveraging these tools.

I can analyze transcripts and say: “Based on the feedback we got, based on the results we presented, what were the questions that helped us arrive there? What questions would be even better next time?”

This helps us chart the agenda for the next meeting. The questions we plan to ask. Now we’re on train tracks instead of meandering toward the goal.

WTFs Per Minute for Meetings

I thought about Uncle Bob’s metric for code quality: WTFs per minute.

What if we had something like that for meetings? The quality of questions within a meeting. How many questions allowed the conversation to move forward closer to the desired outcome?

Matthew took it further: what if we did a post-analysis that determined how much time in the meeting was actually valuable? The meeting was an hour, but maybe only 20 minutes were valuable.

How could we have made it better? You spent 10 minutes describing something. Couldn’t you have created an image?
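The metric we were circling could be sketched in a few lines. This is a hypothetical illustration, not a tool either of us has built: assume each stretch of the meeting has already been labeled (say, by an AI pass over the transcript) as moving toward the goal or not:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    minutes: float
    moved_forward: bool  # labeled after the fact, e.g. by an AI transcript pass

def meeting_quality(segments: list[Segment]) -> tuple[float, float]:
    """Return (valuable minutes, fraction of time that moved toward the goal)."""
    total = sum(s.minutes for s in segments)
    valuable = sum(s.minutes for s in segments if s.moved_forward)
    return valuable, valuable / total if total else 0.0

# A one-hour meeting where only 20 minutes moved toward the goal
valuable, ratio = meeting_quality([Segment(20.0, True), Segment(40.0, False)])
# valuable == 20.0, ratio ≈ 0.33
```

The hard part, of course, is the labeling, not the arithmetic—which is exactly where the full transcript and the AI pass come in.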

When a Picture Beats a Description

There’s nothing more satisfying than an image when one is needed.

If I’m describing something, I have to rely on your imagination. I have to rely on you being visual enough to conjure up what I’m describing in a way that’s faithful to it. That’s a gamble we shouldn’t be making.

We love analogies, but if a picture beats the description, we should always go with the picture.

Last night I was experimenting with creating comic strips to explain concepts. Two years ago, I read a book called See What You Mean about using comic books to illustrate business concepts.

I experimented back then. The images didn’t look great—I’m not a cartoonist. But stick figures would do. It took a while to think through the frames and figure out the perspective.

With AI, I’m blown away by what I’m getting. I’m creating the individual Lego blocks, ironing out the process. Getting to a point where, mid-conversation, I can say: “Give me two minutes.”

Take the transcript from that part of the conversation, run it through my system, and produce a comic strip.

They get it right away if it’s visual.

Matthew said that’s analogous to context pollution. When you’re doing it manually, you’re hyper-focusing on: should the person be drawn this way or that way? All of that goes away when you describe the idea from a higher level.

Now you can show this in two to five minutes. You move forward with shared understanding, not assumptions.

Scaffolding Toward Where You Want to Be

Someone might say: “But you still have to do all that by hand.”

I’m working my way there. One step at a time. One friction point at a time.

I know where I am. I know where I want to be. What’s my next step?

So I can glide through the process and spend time on what I want to spend time on: having conversations with people, paying attention to what they’re saying, and to their body language. Translating the things AI cannot yet do.

What the Office Experience Should Look Like

Matthew said this is what the office experience should look like.

Given how these tools operate and the ways we can offload tasks to agents, what are we doing while those agents work?

We should be talking to each other. What are you doing and how is it working? Here’s what I’m doing and how it’s working for me. We exchange ideas.

This doesn’t require us to be in front of devices. If anything, this technology should allow us to do more of the human things. To prioritize the human things.

Our jobs for years have been: screen, screen, screen. Type, type, type. But if we can offload some of the work, we can do the more meaningful work—the human-to-human work.

That’s what we should be prioritizing.

Automate Your Life—But Why?

Anytime I see a blog post or video that says “automate your life,” I think, “Why would I want to automate my life?”

I want to automate things I don’t enjoy doing that need doing. So I can focus on the things I really enjoy.

Automate your full life—and then what? What are you freeing up the time to do?

Automate the mundane tasks. Things you do over and over. Things that can be automated. But do it to serve some agenda. Something that creates more space and time for you to do a thing.

Not for you to do nothing.

The Crisis of Identity

Matthew said one of the driving forces behind software developers’ rejection of AI is that it promises to replace something they derive a lot of enjoyment from.

Solving challenging tasks manually. Utilizing all the stuff you’ve built up over time.

There’s a crisis of identity when we move away from the thing we associate with our identity. Isn’t this who I am?

I compared it to cars. Carburetors versus fuel-injected engines. Don’t give me a car with a carburetor today. But what about the mechanics who built their lives on that skill?

I enjoy writing code. I like it pretty clean and well-structured. But that’s not what floats my boat.

I don’t need to write code if I don’t have to. If I want to get down into it, I still can. But that’s not what drives me.

I care more about presenting what I’ve built to stakeholders and seeing their reaction. That’s what floats my boat.

Writing code and showing it to another coder who says, “That code is great”—yeah, thanks. But that doesn’t really do it for me.

Matthew asked, “What if you were the person who enjoys being asked questions?” The person everyone comes to when they’re stuck? What if that was your identity, and it went away because people could now ask Claude instead of Claudio?

What does that do for your concept of self?
