The conversation started with Matthew sharing his experience watching a fleet of AI agents work autonomously, with an orchestrator divvying up tasks that needed human intervention only in extreme cases. It was beautiful to watch, but it left him questioning something fundamental: are we using the right word when we talk about “trust” in the context of AI?


Questioning Trust in AI

Matthew pointed out something that’s been bothering him: when we talk about trusting AI agents, what do we actually mean? Trust, as he sees it, involves mutuality between two beings, a kind of contract: if I don’t show up for our meeting without explanation, that harms the trust. There’s a conversation, accountability, the possibility of apology, and change.

But when an AI agent deletes your folder, what happens? We don’t blame the agent. We blame ourselves—“you didn’t set up the right permissions,” “you didn’t establish proper guardrails.” If that’s the arrangement, can we really call it trust? Matthew landed on a question rather than a conclusion: maybe we should question what trust means to us personally and organizationally in relation to AI.

The Trust Trap

I loved where this was going. Are we using the right word? Words preload our minds with entire frameworks. If trust isn’t the right concept, what might be better? My mind went to the 13 trust behaviors from “The Speed of Trust”—what if we created a skill that runs AI interactions through those lenses? Are we listening? Creating transparency? Talking straight?
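
To make that concrete, here is a minimal sketch of what such a skill might look like, written in Python. Only the behavior names come from “The Speed of Trust”; the function, the naive keyword check, and the rest are hypothetical placeholders rather than a real implementation:

```python
# Hypothetical sketch: run an AI interaction through some of the
# 13 trust behaviors from "The Speed of Trust". The behavior names
# come from the book; the checks themselves are illustrative stubs.

TRUST_BEHAVIORS = [
    "Talk Straight",
    "Create Transparency",
    "Listen First",
    "Clarify Expectations",
    "Practice Accountability",
    "Keep Commitments",
]

def review_interaction(transcript: str) -> dict[str, bool]:
    """Report whether each trust behavior shows up in the transcript.

    A real skill would hand this judgment to a human or a model;
    here a naive keyword check just shows the shape of the idea.
    """
    lowered = transcript.lower()
    return {
        behavior: behavior.split()[0].lower() in lowered
        for behavior in TRUST_BEHAVIORS
    }

if __name__ == "__main__":
    sample = "Let me create transparency about what the agent changed."
    for behavior, present in review_interaction(sample).items():
        print(f"{behavior}: {'observed' if present else 'not observed'}")
```

A real version would swap the keyword stub for an actual review step, but the shape stays the same: run the interaction through each lens and see what comes back.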

But I also worry about falling into what Matthew called a “trust trap”—applying human principles to something that isn’t human. I don’t say “please” or “thank you” to AI because it’s ones and zeros, not a person. I have the same issue with people treating their pets better than other humans. We risk treating AI agents better than people while neglecting human relationships.

The Wrong Plane Problem

This led us to another analogy: boarding the wrong plane. You can travel a long distance on the wrong plane, and AI agents can likewise go far, fast, while doing the wrong things. In software development, we often move forward without clear goals. We say “let’s see what we’ve got and go with that” instead of stopping to ensure we’re on the right trajectory.

The key is having clarity about what we’re trying to learn or accomplish. If we’re testing how turbulence affects the body, any plane will do. If we’re trying to reach a specific destination, the right plane matters. Context is everything.

When to Stop and When to Continue

We explored different approaches to problems. When launching satellites, anything going wrong means everything stops—no “we’ll check on that later.” But in motorcycle racing, if a rider has mechanical problems, they might continue riding just to collect data, even though they’re out of contention.

The difference is expressed intent. Are we continuing because we want to gather data, or are we ignoring problems because stopping is inconvenient? With AI, we need clear checkpoints and the willingness to abort when something looks wrong.
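
As a rough sketch of how expressed intent and checkpoints could fit together in an agent loop (the names and structure here are my own, purely illustrative):

```python
# Hypothetical checkpoint gate for an agent pipeline. The intent is
# declared up front, so continuing past a failure is a deliberate
# choice to gather data, never a silent default.

from enum import Enum

class Intent(Enum):
    REACH_DESTINATION = "abort on the first failed checkpoint"
    GATHER_DATA = "keep going, but record every failure"

def run_with_checkpoints(steps, checks, intent):
    """Run each step, then its checkpoint; abort or log per intent."""
    failures = []
    for step, check in zip(steps, checks):
        step()
        if not check():
            if intent is Intent.REACH_DESTINATION:
                raise RuntimeError(f"Checkpoint failed after {step.__name__}; aborting")
            failures.append(step.__name__)  # out of contention, still collecting data
    return failures
```

The satellite launch runs with REACH_DESTINATION; the out-of-contention racer runs with GATHER_DATA. Same loop, different declared intent.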

Documenting the Decision Process

Ray Dalio’s “Principles” came to mind—he documents every decision with his thought process, then later evaluates whether it was the best decision given what he knew at the time. This is exactly what we need with AI: capturing the chain of thought so we can understand why decisions were made.

Matthew suggested we should capture all of the AI’s reasoning as it works, tying it to the final output. Even if the work is perfect, some decisions along the way might be faulty. We need this historical data to learn and improve.
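
As a sketch of the kind of record we had in mind, assuming a simple JSONL log (the field names are invented for illustration, not from any particular tool):

```python
# Hypothetical decision log: one JSON line per decision the agent makes,
# tied to the run that produced the final output, so the reasoning can
# be reviewed later even when the result looked perfect.

import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    run_id: str      # ties the decision to the final output
    decision: str    # what the agent chose to do
    reasoning: str   # the chain of thought behind the choice
    timestamp: float

def log_decision(path: str, record: DecisionRecord) -> None:
    # Append-only, so the history survives even if the run is abandoned.
    with open(path, "a") as log:
        log.write(json.dumps(asdict(record)) + "\n")

log_decision("decisions.jsonl", DecisionRecord(
    run_id="run-001",
    decision="deleted the staging folder before rebuilding",
    reasoning="folder contents were stale according to the manifest",
    timestamp=time.time(),
))
```

Each line ties a decision and its reasoning back to the run that produced the output, which is exactly what makes Dalio-style after-the-fact evaluation possible.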

The Fun and Useful Balance

The conversation turned to the sheer joy of what these tools enable. Matthew described staying late at work to see a demo of autonomous agents, while I shared a recent experience: 80 minutes of conversing with AI that produced a complete prototype in 15 minutes. It’s fun, and it stretches one hour into ten.

But I’ve learned this isn’t sustainable. Like riding roller coasters, the thrill is amazing but exhausting. We need to know when to stop, when to take a walk, when to step away. We’re not machines.

What Do Humans Do While Agents Work?

This raises a crucial question: what should humans do while AI agents are busy? I’ve found myself scheduling meetings with other improvers, discussing completely different problems while my agents handle client work. The key is using that freed-up time for things only humans can do—talking to other humans, thinking, reflecting.

Matthew predicted we’ll see less talk about tech and more about philosophy as these tools become more capable. If you’ve handed off a task to an agent, why are you still concerning yourself with it? You should be talking to humans.

Making Fun Useful

My parting thought was: have fun, but find ways to make that fun useful. I’ve been publishing a blog post every day this year—not because I’ve found more time, but because I’m using time differently. A 20-minute walk might spark an idea worth sharing. Five minutes of voice recording becomes a blog post.

Matthew agreed, noting that his recent blog post came from a fresh experience that left a strong impression. These tools enable us to build in public during the downtime—to turn our fun into something useful for others.

The Rise of Human Activities

Matthew’s prediction: we’ll see a resurgence of musical instruments in offices. As people face identity crises in an AI world, they’ll gravitate toward things that showcase human abilities. This sparked another idea: planning breaks where humans do things AI cannot, such as playing instruments, ping pong, simply being human.

The key is choosing the right path. We can either fill our freed time with finding the next task, or we can use it for human connection, reflection, and activities that machines cannot do.

I’m excited to see where this conversation leads as we continue exploring these questions together.
