Episode 21 found Matthew and me both a little off our usual pace. I’ve been busier than usual, adjusting to changes in my team’s composition and how the current project is progressing. Matthew has been doing a lot of reflecting: on the future of work, on the youth, on what it means to stay grounded when the world is changing fast.
What started as a check-in quickly turned into one of our wider-ranging conversations. Music, AI, software development, F1 cars, alien handshakes. The thread running through it all was a question we kept circling back to: when we make things faster and easier with AI, what do we lose when the human fingerprint disappears from what we build?
The Novelty Is Wearing Off
Matthew mentioned that he’s been paying close attention to how young people talk about AI, and the enthusiasm isn’t what you might expect. The novelty seems to be fading. The memes, the anime filters, the quick transformations: they were fun, and now they’re less interesting.
He deliberately seeks out perspectives from people of all ages, from his five-year-old daughter to people much older than him. He wants to stay grounded, not trapped in amber. I realized I hadn’t been spending much time with the youth lately. Not in a deliberate, “what are they thinking about all this” kind of way. That’s worth changing.
Building For the People Who Will Use the Thing
We talked about a pattern that keeps showing up in how software gets built. Someone has an idea, builds it with speed and confidence, then tries to find product-market fit after the fact. And because of sunk costs, they push it onto the audience rather than listening to what the audience actually needs.
The ability to build fast makes this worse, not better. We can now produce something that looks polished in hours. But polished isn’t the same as right. Now that we can put proofs of concept in front of people faster than ever, we should be doing exactly that. Build something lightweight. Listen. Iterate.
The harder question is whether we’re actually doing that, or whether speed has just made us more confidently wrong.
A Friend Remixed My Song
I ended up telling a story that opened up much of our conversation about music and AI. A friend of mine had used Suno to remix one of my songs. He sent it to me as a surprise. When it started playing, I recognized the melody immediately. I wrote it.
What surprised me was how deeply I connected with it. Not with the production quality (which was impressive) but with the melody itself. It pulled me straight back to the circumstances, the place, the feeling of when I wrote it. The AI-generated production was almost irrelevant. The human fingerprint was in the melody, and it remained.
What made it interesting was that my friend had also left his own fingerprint. He nudged Suno toward his style, and you could hear it. I have MP3s of his music from twenty years ago. It sounded like him.
What I Want AI For in Music (and What I Don’t)
This led us to a distinction I’ve been thinking about. There are things I’d welcome AI’s help with in music. I can hear an orchestra playing one of my songs, strings and all, but I don’t know how to orchestrate for that. I don’t play violin. I can’t notate a full arrangement. If AI could take my melody and do that, while keeping the structure I’ve worked to build, I’d find real value in that.
But I don’t want AI to replace the process itself. When I pick up my guitar and start doodling, things sound sloppy at first. There’s buzzing, wrong notes, chords that almost work. And then gradually, it takes shape. Three hours pass without me noticing. That’s flow. That’s the experience I’m protecting.
Matthew put it well: the satisfaction isn’t in the final product when someone else does the work for you. It’s in watching something raw take shape in your hands.
The Generic Problem
We spent time on what happens when the human fingerprint is missing entirely. I can identify bands from thirty or fifty years ago by the first four bars of a song. The drum sound, the production choices, something distinct. Then there’s a producer from the last twenty years whose work I can also identify, but for the wrong reason: every album sounds the same. Different drummers, different bands, different songs. Same drums.
AI-generated music has this problem built in. The model was trained on everything, and the output tends toward the average. Matthew used the example of knowing that “Take On Me” by A-ha is coming before the synth even plays. You have a relationship with that song. A real history. You can’t have that relationship with something generated on demand.
The same problem shows up in software. If you prompt an AI to build an accounting package without further context, it will build you something that looks like QuickBooks. It won’t have anyone’s fingerprint on it. It won’t resonate with the specific people who need to use it. We’re building a lot of sterile things that you can’t tell apart.
Whose Loop Is It?
We talked about the phrase “human in the loop” and why it bothers me. It positions the AI as the thing doing the work, with the human as a checkpoint. That framing is backward.
The way I try to think about it: it’s the human loop. We are doing the work. We’ve invited AI into our loop as a tool. The moment the product owner pushes back and asks why the work isn’t what they asked for, no one is going to say, “That’s the AI’s loop, not mine.” It’s your loop. It was always your loop.
We connected this to guardrails. We talk constantly about putting guardrails on AI, but the more important question is whether we’re giving AI the principles behind our decisions. That’s what we do with junior developers. We don’t just restrict what they can do. We explain how and why we make the choices we make, so they can internalize it and act in our absence. The guardrails matter, but the principles are what produce judgment.
Fast Cars With No Braking Markers
We came back to an analogy we’ve used before: speed without infrastructure. We’ve been handed very powerful cars. But we don’t have the trained bodies of F1 drivers. We don’t have a racetrack with candy stripes. We don’t have the braking markers that tell you when to start slowing down. And we’re all on the same track at the same time, going in all directions.
The Tesla story came up here. A woman in Houston was in a self-driving car with her child when the car hit a guardrail. The failure wasn’t one a human driver paying attention would have made. Cameras can’t detect depth the way LIDAR can. The technology performed well 99% of the time and failed once in a way that mattered.
When we extend trust to these tools, we have to be honest about what they can and can’t perceive. And we have to be willing to keep our hands on the wheel.
Paying With Attention
There’s a pattern Matthew named that I recognized immediately. We start out paying attention to each AI prompt, each suggestion, each permission request. The model performs well. We hit accept. It performs well again. We hit accept faster. Eventually, we’re clicking through without reading.
“Cheap is expensive,” he said. And the currency we’re spending is attention. If you don’t pay attention now, you’ll pay for it later. Maybe in the time it takes to fix it yourself. Maybe in what it costs to have someone else fix it. But you’ll pay.
The Kata and the Kitchen
Matthew closed by telling a story about a coding kata he did at the office. A group of developers built the game Snake using AI, constrained to the cheapest available model. While the agents were doing the work, the humans were building community. Talking. Connecting. Learning from each other.
I’ve been hearing that from new hires, too. They love coming into the office because the conversations are changing how they think. Not because the office is efficient. Because it’s human.
We can build a team of agents and sit alone in a room. We’re not going to get the same thing. No matter how good the models get.
Better Problems
Near the end, Matthew asked the big question. Why are we here? His answer: to problem-solve. Collectively.
We don’t want to eliminate problems. We want better ones. The ones that require us to collaborate, to think hard, to engage with each other over time. Not the ones that come from building AI Tamagotchis for people who can’t connect with other humans.
Matthew’s closing thought was the one I’ll carry with me: with all the time these tools are giving back, find a human being to sit in front of.
I couldn’t agree more.
If this conversation sparked something for you, the full episode is worth watching. We covered a lot more ground than I’ve captured here, and as always, Matthew has a way of naming things I’ve been circling for weeks.