Tests Aren’t for Computers. They’re for People.
Posted by claudiolassala in Uncategorized on January 21, 2026
We say tests are documentation.
But let’s be honest—most of the time, they’re not.
They’re written for machines, not humans.
The Hidden Cost of Unreadable Tests
When only developers can read tests, something subtle breaks:
Business rules become tribal knowledge.
Product owners stop validating behavior.
And the test suite slowly drifts away from intent.
A Different Way to Think About Tests
I’ve been re‑framing tests as executable conversations.
Given a situation. When something happens. Then an outcome should follow.
That structure isn’t new—but AI makes it practical at scale.
It can translate low‑level tests into something a human can actually reason about.
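Here's a rough sketch of what I mean (the domain, the applyDiscount function, and the numbers are all made up for illustration): the same rule written twice, once for the machine, and once as a conversation a product owner could actually read and challenge.

```typescript
// A minimal sketch, assuming a hypothetical pricing module and a Vitest test setup.
import { describe, it, expect } from "vitest";
import { applyDiscount } from "./pricing"; // hypothetical module

// Machine-oriented: correct, but opaque to anyone who isn't a developer.
it("applyDiscount", () => {
  expect(applyDiscount(100, "GOLD").total).toBe(90);
});

// Conversation-oriented: Given / When / Then spelled out in the test itself.
describe("Given a gold-tier customer", () => {
  it("When they buy a $100 order, Then they pay $90", () => {
    const order = applyDiscount(100, "GOLD");
    expect(order.total).toBe(90);
  });
});
```

Same assertion, same coverage. The second version is the one a non-developer can push back on.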
Why This Changes Trust
When non‑developers can read—and challenge—tests, alignment improves.
Not because everyone codes…
…but because everyone understands.
📣 If this resonates, you’ll enjoy the deeper dive.
I’ll cover this pattern (and others) in my free Improving Talk on January 28 at 12pm Central.
👉 Register here: https://www.improving.com/thoughs/webinars/define-the-need-solve-the-problem-an-ai-first-playbook-for/

Working Offline: A Developer’s Past, Present, and Future
Posted by claudiolassala in Uncategorized on January 20, 2026
The Question That Started It
What happens to a software developer if the computer doesn’t have connectivity? Turn off the Wi-Fi. Turn off the Internet. Turn off the Intranet. The computer can only access its own resources. Can the developer still work? Does the application run?
I’ve been thinking about this question a lot lately, and it’s taken me on a journey through my own career — from a lesson learned twenty years ago to a present-day practice I’ve built into every project, and now to an uncertain future where AI is changing the equation.
Twenty Years Ago: When We Couldn’t Reproduce the System
A long time ago, I worked on a project where the client was in another country, very far from where I was. This was before we had the kind of internet speeds and availability we have today. We couldn’t just transfer big files over the network — it would take forever.
Their system was massive: large codebase, lots of data, deeply integrated with their infrastructure. It ran 24/7 operations. They couldn’t afford downtime, so they’d built a very stable platform on the backend.
Our job was to rewrite the front end. The system had good separation — the front end communicated with the backend using XML (this was before JSON). But here’s the problem: to run the front end, we needed the backend. And we couldn’t reproduce their backend on our development machines.
We tried. We failed.
After multiple attempts, we had to get creative. We built what I called a “fake backend” — essentially a simulator that would respond to the front end’s XML requests with canned responses. We recorded real interactions from their system and played them back during development.
It wasn’t perfect, but it worked. We could develop completely disconnected from their infrastructure. We could work on planes, at home without internet, anywhere. The fake backend ran locally, and we kept moving.
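For the curious, here's a tiny sketch of the idea in today's terms (the endpoints, file names, and HTTP transport are assumptions for illustration; the original exchanged XML over a different setup): a local server that answers the front end's requests with recorded, canned responses.

```typescript
// A minimal sketch of a "fake backend": serve previously recorded responses locally.
import http from "node:http";
import { readFileSync } from "node:fs";

// Recorded interactions, keyed by request path (hypothetical recordings folder).
const cannedResponses: Record<string, string> = {
  "/orders/list": readFileSync("./recordings/orders-list.xml", "utf8"),
  "/customer/42": readFileSync("./recordings/customer-42.xml", "utf8"),
};

http
  .createServer((req, res) => {
    const body = cannedResponses[req.url ?? ""];
    if (!body) {
      res.writeHead(404).end("<error>no recording for this request</error>");
      return;
    }
    res.writeHead(200, { "Content-Type": "application/xml" }).end(body);
  })
  .listen(8080, () => console.log("fake backend listening on :8080"));
```

The front end points at localhost instead of the real system, and development keeps moving with no network at all.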
That experience taught me something I’ve carried forward ever since: dependency on external systems is a constraint I don’t want to live with during development.
Today: Building Independence Into Every Project
Fast forward to more recent projects. I’ve made it a practice to structure applications so I’m never stuck because dependencies are down or slow.
Take authentication, for example. On one project, we eventually integrated Azure B2C for production authentication. But I made sure not to be constrained by it during development. I didn’t want to be limited to only logging in through Azure B2C — it’s slower, it requires internet, and it adds friction to the development loop.
So we built a dual-mode system. During development, we use simple form authentication with a local user database seeded with well-known test users. Fast. No external calls. No waiting.
The same approach extended to authorization. We focused on permissions — protecting features, resources, and workflows — and stored those permissions in our own database. This meant we could test the entire authorization flow locally without ever touching Azure B2C.
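To make the idea concrete, here's a minimal sketch of the dual-mode approach (the names are made up and the real project looked different): configuration decides whether the app authenticates against a local, seeded user store or the real identity provider.

```typescript
// A minimal sketch, assuming hypothetical class and configuration names.
interface Authenticator {
  signIn(username: string, password: string): Promise<{ userId: string } | null>;
}

// Development mode: a local, seeded user table. Fast, offline, no external calls.
class LocalFormAuthenticator implements Authenticator {
  private users = new Map([["alice", "test-password"]]); // well-known test users

  async signIn(username: string, password: string) {
    return this.users.get(username) === password ? { userId: username } : null;
  }
}

// Production mode: delegate to the real identity provider (details omitted here).
class AzureB2cAuthenticator implements Authenticator {
  async signIn(username: string, password: string): Promise<{ userId: string } | null> {
    throw new Error("calls Azure B2C in production; not needed during development");
  }
}

// One switch, read from configuration, decides which mode the app runs in.
export function createAuthenticator(mode: "local" | "azure-b2c"): Authenticator {
  return mode === "local" ? new LocalFormAuthenticator() : new AzureB2cAuthenticator();
}
```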
For end-to-end tests, this was huge. We could write tests like: “Given I have permission to X, when I do Y, then I should observe Z.” We bypassed the login screen entirely and focused on the feature itself. Only a few tests actually exercised the authentication mechanism. Everything else assumed the user was already authenticated and authorized.
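Here's a sketch of what those tests look like, using hypothetical test helpers (seedUser, appClient): the user and their permissions are seeded directly, and the login screen never enters the picture.

```typescript
// A minimal sketch of a permission-focused end-to-end test; helpers are hypothetical.
import { describe, it, expect } from "vitest";
import { seedUser, appClient } from "./test-helpers"; // hypothetical test utilities

describe("Given I have permission to approve invoices", () => {
  it("When I open an invoice, Then I should see the Approve action", async () => {
    const user = await seedUser({ permissions: ["invoices.approve"] });
    const api = appClient(user); // session created directly; login bypassed

    const invoicePage = await api.get("/invoices/123");

    expect(invoicePage.actions).toContain("Approve");
  });
});
```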
We did the same thing with Azure Service Bus. During development, we ran an in-memory message bus. Faster, no external dependencies. But we could flip a switch and use the real Azure Service Bus when we needed to verify that specific integration.
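Here's a minimal sketch of that switch (the names are assumptions, and the real Azure Service Bus wiring is omitted): both buses implement the same interface, and configuration picks which one gets wired in.

```typescript
// A minimal sketch of swapping the message bus by configuration.
interface MessageBus {
  publish(topic: string, message: unknown): Promise<void>;
  subscribe(topic: string, handler: (message: unknown) => void): void;
}

// Development mode: everything stays in memory, in process, fast.
class InMemoryMessageBus implements MessageBus {
  private handlers = new Map<string, Array<(message: unknown) => void>>();

  async publish(topic: string, message: unknown) {
    for (const handler of this.handlers.get(topic) ?? []) handler(message);
  }

  subscribe(topic: string, handler: (message: unknown) => void) {
    this.handlers.set(topic, [...(this.handlers.get(topic) ?? []), handler]);
  }
}

// In the real project this would wrap the Azure Service Bus SDK; omitted in this sketch.
class AzureServiceBusAdapter implements MessageBus {
  async publish(topic: string, message: unknown): Promise<void> {
    throw new Error("real Azure Service Bus wiring omitted in this sketch");
  }
  subscribe(topic: string, handler: (message: unknown) => void): void {
    throw new Error("real Azure Service Bus wiring omitted in this sketch");
  }
}

// The "flip a switch" part: configuration decides which implementation is used.
export function createBus(useRealBus: boolean): MessageBus {
  return useRealBus ? new AzureServiceBusAdapter() : new InMemoryMessageBus();
}
```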
In production and QA environments, everything used the real services. But in development? Complete independence.
This approach has saved me more than once. During the freeze in Houston, I lost power for over a week. I kept working the entire time — running off my laptop battery, recharging with a gas generator when needed. When Azure went down due to a cybersecurity attack, I kept working. No internet? No problem.
Tomorrow: The AI Dependency
Now I’m thinking about the future, and it’s more complicated.
I use AI tools heavily in my development work. LLMs help me move faster, think through problems, generate code. But here’s the thing: I’m not running local models. I need an internet connection for that speed.
If I’m in a situation where I don’t have internet — on a plane without Wi-Fi, in a place with spotty connectivity — I’m not going to be able to move as fast with certain kinds of work.
I could run a local LLM, but it would be quite a bit slower. And it would slow down my entire machine because of the resources it consumes. It’s like the difference between driving a car and walking. I can still get there by walking, but the level of effort is much greater and it’s definitely much slower.
Or I could go back to coding things by hand. Which, at this rate, feels like the difference between walking, riding a bicycle, driving a car, or flying.
The Bigger Question
This gets more interesting when I think about where we’re heading. Imagine AI agents that automatically identify issues in production systems, troubleshoot them, patch the fix, and deploy — all automated, all fast. That’s a huge reliance on connectivity.
If the servers are down, if the electricity is out, are we ready to go back to the basics? To the manual way of doing things?
With the speed that everything is moving, it seems like we need to make sure we don’t lose track of the fundamentals. We need to be prepared to handle these dependencies. We need to ask: if these dependencies aren’t there, can we still do it? And if we can, can we still do it in a timely manner?
What I’m Noticing
I understand that most software these days includes features integrated with other systems. Networking capability is necessary. But is that true for every single feature? I don’t think so.
To me, it’s important to be able to do some work while completely disconnected. It’s a design choice. It’s about resilience. It’s about not being blocked when the world around you goes down.
I’ve built this into my practice over the years, and it’s served me well. But as we move into a future where AI is increasingly part of the development workflow, I’m watching to see how this principle holds up.
It will be interesting to see how that’s going to go.

Pace Yourself
Posted by claudiolassala in Uncategorized on January 19, 2026
We’re all nudged (no, we’re all pushed) into doing everything faster. We need to be more efficient. All the time. Go, go, go. More, more, more.
But sometimes we need to slow down to go fast.
Going fast, but out of control, can turn into a crash. Recovering from that crash often means delaying the achievement of a goal.
Slowing down to either establish or regain control often enables us to set a consistently fast pace and achieve a goal faster.
“What the heck are you talking about?”
I’m likely mixing metaphors based on experiences that resonate with me.
Riding
Riding a motorcycle fast is a thrill. But what’s the goal?
If I’m riding curvy roads, such as those in the Italian Alps, my goal is to enjoy the scenery on a motorcycle. Paraphrasing a passage from Zen and the Art of Motorcycle Maintenance: “driving a car, we’re watching the scenery, while riding a motorcycle we’re a part of the scenery”.
If I’m going so fast that 100% of my focus is on staying on the road and not falling off a cliff or running into oncoming traffic, then I’m not allowing myself to witness the amazing views around me.
When my goal is to get the thrill from riding a motorcycle as fast as I’m able to, I go to a race track.
But getting on the track and twisting the right wrist only makes me go as fast as I already go. To learn to go as fast as I can possibly go, I need to slow down first. I need to know where the track goes. I need to know its camber, bumps, cracks, runoff areas, and references.
So first I’ll go on a track walk. That’s right, walk the track. See it up close.
Then get on a bike and go on a sighting lap. Then speed up just a little.
Stop and internalize what I’ve seen and how I’ve felt. Go out again, with the intention of working on specific aspects of my riding. Speed will come.
When winter comes, and the temperatures drop, it’s impossible to go as fast. Do I stop riding? No. Go out and ride my best, given what I have. As I do so, any sloppiness becomes immediately known: jerky body movements and sloppy throttle/brake control come to the surface. The braking approach into corners doesn’t work and requires adjusting. The mandatory slower pace brings extra awareness to things I was overlooking when I could just go fast.
Reading
Should I fast read or slow read?
What’s my goal?
Why have I picked up this book?
Say the goal is to learn as much as possible about the subject within a given timeframe. I’ll probably fast-read it. Flipping through the pages quickly, noticing the main sections, sub-sections, and images. Then go back to page one, and go through all the pages, slower this time, but still at a fast pace.
When my brain detects through my eyes something it deems important (I told my brain why I picked the book and what my goal is), it tells me to slow down. So I do. I highlight the passage. I write notes. I ponder. Then I speed up again.
Now I pick up another book. This is a work of fiction by an author whose writing I appreciate. I read the words at a much slower pace; I crawl through the words, savoring them, marveling at the craft.
Coding
I can write code pretty fast. I can use code snippets to speed up the process. I can use code generators. Or I can simply type fast. And lately, AI tools do that much, much faster.
But what’s the point of coding fast?
Typing as fast as I think doesn’t help if my thoughts are racing.
I’ve learned I should slow down when I’m not even sure what it is that I’m trying to accomplish. Slow down my racing thoughts. Once the goal and the best next steps are clear, then I use all the tricks I can to speed up coding.
I’ve also learned to deliberately slow things down, even when I know there’s a faster way to do it. For example, I may choose to use the mouse instead of a keyboard shortcut. Or type a long command on the terminal instead of any other faster way.
Food
Brazilian steakhouses, such as Fogo de Chao, are an all-you-can-eat meat extravaganza. They have a little card or similar token on the table for each patron; one side is green, which means “Bring me meat”, and the other is red, which means “Stop bringing it”.
It’s common for the novice to sit at the table, turn the card green, get busy eating every cut the waiters bring, and then feel stuffed and done within 15 minutes.
Not me.
I keep flipping that card, green-red-green-red, controlling the pace, appreciating each cut. Figuring out which ones taste the best that day. After doing that for a while, someone always comes to the table and asks, “Sir, are you waiting on any cut in particular?” I tell them what I want, and take a few rounds of that. I get my money’s worth, enjoy a great meal, and head out very content.
Pace Yourself
- Do I need one fast lap?
- Or do I need as many consistent laps as possible?
- Do I need to slow down to smooth things out? To rebalance? To regain control?

Productivity by Design, Not by Default
Posted by claudiolassala in Uncategorized on January 18, 2026
In 2019, I made a change that seemed small at the time, but it’s profoundly impacted my productivity: I turned on Do Not Disturb on my phone—and never turned it off.
That’s not an exaggeration. Ever since, my phone has been on Do Not Disturb 24/7. Only a few people can punch through that firewall. Everyone else? I’ll check missed messages and phone calls and reach out later if necessary.
This isn’t about being unreachable. It’s about being intentional. The practice was suggested as part of a leadership training at Improving.
Intentional Tech: Tools That Serve You
We often think of productivity as doing more in less time. But I’ve found it’s far more powerful to reframe the question: What are we optimizing for?
For me, productivity is about protecting my focus so I can solve meaningful problems and do deep thinking. That means stripping distractions down to the bare minimum.
Take my phone, for example. When I unlock it, I don’t see a wall of apps or unread notifications. I see a phone—because that’s what it is—a tool for making calls (well, I know, I’m old…). The apps I do need, like the one I use to track my daily walk or reading habit, are right there. Everything else—email, chat, etc—is pushed to the far end of the phone, buried where I have to make a conscious decision to go looking.
Organize With Purpose
Not everything belongs in one place. I keep practical information—like records of my cars and motorcycles—in simple digital folders. But for deeper thinking and learning, I rely on Obsidian (or whichever tool may serve me in that moment).
When I read, I don’t just capture quotes. I create connections. And I’m deliberate about it. I don’t want AI to auto-generate those links for me. I want my mind doing the work—making sense of what I’m learning, drawing connections between books, ideas, and experiences. Because the value isn’t just in the notes. It’s in the relationships between them.
And those relationships evolve. When I revisit a book a year later, I might see new connections I couldn’t have imagined. That’s not machine learning. That’s human learning.
Productivity Isn’t a Feature—It’s a Practice
I’ve been talking more with teams about designing workflows and setups that support deep work. For developers, this might mean refactoring their code or embracing test-driven development. For consultants, it might be about writing better user stories or aligning more closely with stakeholder goals.
And sometimes, it’s as simple as rethinking your desk setup. I work with multiple monitors, but I’ve learned that more screens don’t always mean more productivity. It depends on how you use them. (I wrote more about that in my post “Multiple Screens May Not Make You Productive”, if you’re curious.)
The key takeaway is this: tools don’t make you productive. How you use them (and what you use them for) does, as long as that use serves a clear why.
A Gentle Nudge
If there’s one thing I hope readers take away from this, it’s that productivity isn’t something that happens automatically. It’s something we have to design for. On purpose. Every day.
So the next time you pick up your phone, open your laptop, or sit down to read—pause and ask: What am I optimizing for right now?
The answer might just change the way you work.

Preparing a Talk by Thinking Out Loud
Posted by claudiolassala in Uncategorized on January 17, 2026
I’m preparing a one-hour presentation for my colleagues at Improving about travel—specifically, about what travel teaches us about ourselves. The title that emerged from the process is “Learning Who You Are by Leaving Where You Were.” But this post isn’t about the talk itself. It’s about how I’m using AI as a thinking partner in ways that feel genuinely useful.
Starting with Voice
A couple months ago, when I signed up to give this talk, I spent 15 minutes voice journaling while driving. Just stories I normally tell people about my travels—moments that stuck with me, patterns I’ve noticed. I didn’t organize them. I just let them flow.
That transcript became the seed. I dropped it into NotebookLM and asked it to suggest titles and abstracts. The title it helped me land on—“Learning Who You Are by Leaving Where You Were”—immediately resonated. It captured something I’d been circling around but hadn’t quite articulated.
Using Every Feature to Think
When it came time to actually prepare the content, I used pretty much every feature NotebookLM offers. Not because I wanted to try all the tools, but because each one helped me think differently about the material:
- Infographics to see what themes emerged from my initial thoughts
- Audio overviews (the brief “elevator pitch” version, the deep dive, the longer form) to hear my own ideas reflected back in different ways
- Slides to identify what might be unclear or need more development
I’d listen to an audio overview while driving and take voice notes for new thoughts that came up. Then I’d feed those back in and generate new overviews. Back and forth like that.
The Dry Run
Eventually, I did a full dry run—just me in my car, imagining someone sitting next to me, walking through the entire presentation out loud. No slides, no notes. Just talking.
This was valuable in ways I didn’t expect. I could feel when I was rambling, when I was using too many words to make a simple point. I kept an eye on the time and realized I was only halfway through my stories but already 40 minutes in. Saying it out loud made the timing problems obvious in a way that outlining never does.
The whole thing took about 70 minutes. I knew I’d skipped stories I wanted to include, but I also knew where I was losing the thread.
What Comes Next
Now I’m going to take that recording, transcribe it, and feed it back into NotebookLM. But this time, I’m going to be much more specific with my prompts:
- For the brief overview: Help me tighten the core message
- For the deep dive: Expand on the stories that landed well
- For the debate: What hard questions might people ask about the points I’m making?
- For the critique: Focus on timing, pacing, and structure—how should I open, develop, and close?
I’ll also ask it to generate slide decks. I’ve done this before and been impressed—when I mentioned the Munich town hall and standing there alone on a rainy day, it created a photorealistic image that captured the mood.
These aren’t my personal photos, but they might work as placeholders while I tell the story—like a biopic where actors portray real people, then you see the actual photos during the credits. I might try that approach.
The Real Work
After I generate all these materials, I’ll export the slide deck and apply Improving’s branded templates. This will be good practice for streamlining content creation for future talks—figuring out how to move quickly from AI-generated drafts to polished, on-brand presentations.
But here’s what I keep noticing: the AI isn’t doing the work for me. It’s helping me think. Each audio overview surfaces something I hadn’t quite seen. Each infographic shows me where my ideas cluster. Each generated slide deck reveals which moments have visual weight.
The voice journaling, the dry run, the back-and-forth with the tool—it’s all just different ways of thinking out loud. The AI is a mirror that reflects my thoughts back in forms I can examine from new angles.
Why This Matters
I’ve been doing this kind of preparation for years, but usually it’s all in my head until I sit down to write slides. By that point, I’ve already committed to a structure that might not work.
This approach—voice journaling, feeding transcripts into AI tools, generating different views of the same material, doing dry runs, refining—lets me explore the shape of the talk before I lock anything in. It’s messier, but it’s also more honest. I’m not pretending I know exactly what I want to say before I’ve said it.
The AI amplifies my ability to think by giving me more ways to encounter my own ideas. That’s what makes it useful.
I’ll probably write another post after I give the talk—about how the preparation translated to the actual delivery, what worked, what didn’t. And maybe a separate post about the travel content itself. We’ll see.

Beyond Front-End and Back-End: What Developers Really Need Now
Posted by claudiolassala in Uncategorized on January 16, 2026
I remember when we used to only have programmers and system analysts. That’s how it was when I started in the mid-90s, getting into IT and computers.
The main IT guy at the office was both a programmer and an analyst — a system analyst. The analyst would talk to the stakeholders, the business people, to understand the problem that needed to be solved. Then they’d come up with the design for the solution, the different pieces that needed to be built, even the database design. And then, the programmers would write the code to implement it.
At the places where I worked, it was always the same person doing both lines of work: analyzing the system and writing the code.
The Evolution of Titles
A few years later — I don’t remember exactly when — we started talking about developers. Instead of calling people programmers, we called them developers because they were doing both: system analysis and programming.
Then came the front-end and back-end developers. Which was odd to me because, for the longest time, it was just developers, without a real distinction. You had developers who knew how to do both.
But the split made sense for practical reasons. Depending on the team and the project, some developers would focus mostly on the back end, while others would focus mostly on the front end, so that work could be done asynchronously or in parallel. Two people could work on the same feature or user story at the same time, as long as they collaborated and created a contract for how the front end and the back end would communicate. Then they could go off and work in parallel, integrating their pieces later.
What I’ve Noticed About Front-End vs. Back-End
After many years of doing this, I’ve come to a realization: the back end will always outlive the front end.
A well-built, well-designed, well-architected back end will always outlive the front end because the front end is more volatile. And there are good reasons for this.
For one, the front end is the part of the system users are more likely to interact with. It’s where people interface with the system. And as these people understand the system better, as they get more comfortable using it, as they get over the initial challenge of learning where everything is and what the system can do, then they start asking for enhancements or changes.
Depending on what they ask, it may very well be only a change to the front end, not to the back end. That is, if both were created with an architecture that allows for that.
Another consideration: the devices where front ends run change more rapidly than back-end technology. Computers get better processors, more cores, and more multithreading capability, all on the client side. Monitors get bigger, and resolutions get higher.
But also, there’s the situation where the screens get smaller — smartphones and tablets. Those also need front ends, but they’re a different kind of interface from those on a regular computer. We’re not using a mouse or an external keyboard. We’re tapping on a screen. We’re using touch.
With all these front-end changes — due to devices, computers, and screens — the back end may not have to change at all. Or when it does, a well-structured, well-architected back end needs only minimal changes. In some cases, the change is purely additive. Maybe it’s creating a new view model to map data from the database into what the new UI for the new device requires. It could be a database view or a read model in an event-sourced system. Additions are usually easier than changing existing code.
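As a sketch of what “additive” can look like (the events and fields here are made up): a new read model, projected from events the system already records, shaped for what a new mobile UI needs, with no changes to existing back-end code.

```typescript
// A minimal sketch of an additive read model, assuming hypothetical order events.
type OrderPlaced = { type: "OrderPlaced"; orderId: string; customer: string; total: number };
type OrderShipped = { type: "OrderShipped"; orderId: string; shippedAt: string };
type OrderEvent = OrderPlaced | OrderShipped;

// New read model shaped for the small screen: only what the mobile list view shows.
interface MobileOrderSummary {
  orderId: string;
  customer: string;
  status: "placed" | "shipped";
}

// Projects existing events into the new shape; nothing upstream has to change.
export function projectMobileSummaries(events: OrderEvent[]): MobileOrderSummary[] {
  const summaries = new Map<string, MobileOrderSummary>();
  for (const event of events) {
    if (event.type === "OrderPlaced") {
      summaries.set(event.orderId, {
        orderId: event.orderId,
        customer: event.customer,
        status: "placed",
      });
    } else {
      const existing = summaries.get(event.orderId);
      if (existing) existing.status = "shipped";
    }
  }
  return [...summaries.values()];
}
```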
The Limitation of Specialization
I’ve been thinking that some developers focus solely on front-end development, and others solely on back-end development, which can be somewhat limiting.
Both of them should really focus on neither. Yes, they may have preferences or be better skilled at one or the other, but at the end of the day, those are just pieces of technology.
In both cases, they should learn to work with people, to work with businesses, and to come up with the best solutions.
Those on the front-end side of things should get deeply in touch with user experience design, empathy, and all the techniques and approaches for understanding the problem being solved, so they can come up with better experiences for it. From there, get to the UI design.
For the person who is mostly on the back end, that could mean understanding the problems being solved and the events involved — the business events and the domain events — and designing to capture users’ intent and business intent. Design around that first, before thinking about persistence concerns, data replication, and the like.
And for that, learning domain-driven design is very important, more specifically the strategic patterns: context mapping, the relationships between the different types of domains, and the ubiquitous language.
For both front-end and back-end developers, I believe the best approach is to view behavior-driven development from the standpoint of people’s behavior rather than system behavior, because then they focus on the people and the problems that need solving, and they put their heads together to come up with the best solution.
What This Means in the Age of AI
Now, in the age of AI, a person who can do all of that (develop empathy, articulate thoughts, and facilitate conversations with people to arrive at a design for the behavior and the experience they can clearly describe) can then collaborate with AI to refine that understanding.
And last but not least, they can let AI do the actual work: proposing mockups, wireframes, and high-fidelity prototypes, and from there creating the actual implementation on the front end and the back end, building whatever is technically necessary to pull it off.
No longer would front-end developers spend a lot of time figuring out CSS, styles, and misalignment in the design. Let AI figure that out and do that kind of work.
Same thing on the back end, trying to address some complicated business logic. Let AI write that logic. As the human in the loop, make sure that the acceptance criteria are well-defined, the user stories are well-defined, and the examples are well-defined. Have documented ways to write good specs for the automated tests, and let AI implement the solution.
The Real Skill
The labels — programmer, analyst, developer, front-end, back-end, full-stack — they’ve all been useful at different times. But they’re just labels for pieces of technology.
What I’m noticing is that the real skill, the one that matters more now than ever, is the ability to understand people and problems. To design with empathy and intent. To articulate what needs to happen before worrying about how it happens.
The technology will change. The devices will change. The tools will change. AI will handle more of the implementation details.
But understanding what to build, and why, and for whom — that’s still on us.

Stop Explaining and Start Showing
Posted by claudiolassala in Software Development on January 15, 2026
We have all heard this before:
“I’ll know it when I see it.”
It usually showed up late—after stories were written, estimates were locked, and code was already underway.
The Visualization Gap
Words are slippery. Even well-written requirements leave room for interpretation.
And interpretation is expensive.
What teams really need—early—is a way to see an idea together before committing to it.
A Faster Way to Get to ‘Yes’ or ‘No’
Help stakeholders “get it” with a 30-second wow.
A rough sketch. A quick photo. A short voice explanation.
Minutes later, there’s something clickable.
Not polished. Not perfect.
But real enough for stakeholders to say:
“Yes, that’s it.” or “No, not quite.”
Why Speed Changes the Conversation
When feedback happens in the same meeting, something important shifts.
Alignment stops being theoretical.
It becomes shared understanding.
📣 Curious?
Join my free Improving Talk on January 28 at 12pm Central.

2025: Annual Review
Posted by claudiolassala in Personal Growth, Uncategorized on January 14, 2026
Here’s the 11th edition of my Annual Reviews.
Previous Year
Continuing from 2024…
Final Cut Pro and Logic Pro
- Experimented with writing automation for Spoken Blog and Read Better series
- Published two music videos: Divide and Conquer (One Take) and Hindsight
Logic Pro
- Finished and published the two songs mentioned above
That marked my complete migration from PC-based tools (Vegas and Mixcraft) to Mac-based tools for video and music editing.
Voice Journaling and AI Transcribing
- My journaling system and practice keep evolving, and I’m getting a lot of value from it
Time Perspective
- The exploration continues, and it is my Back to the Spiral Newsletter’s main framework
AI Productivity
- That was one of my goals for 2025, and it worked out great: blogs, talks, newsletter, books, work (in several ways)
Visual Thinking
- I’ve read a great book about it and continued exploring the topic. Understanding how others and I think is an ongoing area of exploration for me
- Fast prototyping, turning what I see into something others can see
Book Reading and Learning
- Solid consistency
- Published a lot of content about my process
Riding
- If 2024 was a bad year for riding, 2025 was worse
Coolest things I’ve learned
- In 2024, I mentioned bike maintenance. In 2025, I couldn’t bring myself to do it. No time. Stressed out. I wanted to ride, not work on bikes.
Now, 2025…
Blogging
- I’ve set a new record of posts published in a year: 92
- Both views and visitors went up, so I hope some of my content resonates with others
- Out of the Top 10 most-viewed posts of the year, 7 of them were posted in previous years; I like knowing that the older content is still relevant
- In August, I celebrated 20 years of blogging. I had a goal to celebrate, but I had no idea how. At the last minute, I decided to publish a book with the main lessons that stuck with me over this period.
- The coolest thing about keeping my blog this long: I have written and published a LOT of my opinions and approaches over the years. In the same way I have shared many of those posts with several people, I’ve been sharing them with AI tools so they know how to produce content the same way I would. I showed an example of that in “Can AI Really Pair Program My Experiment with BDD TDD and the Prime Factors Kata”.
- I was flattered to learn somebody created an “askClaudio” custom command in Claude Code, prompting it to crawl my blog and write AI instructions from it. 🙂
- I’ve also added an Other Publications tab to my blog to include links to articles or posts I wrote for other sources (at least the ones I can still find a link to)
YouTube
- I had a goal to publish more content on my YouTube channel, and so I did: I posted 54 videos
- Views, watch time, and subscribers went up. I hope that the content is helpful to anybody out there.
Books
- I intended to publish a short book. I didn’t publish the one I intended (I made good progress and will likely release it this year)
- But I released two short books that were not planned at all!
Public Speaking
I need to do a better job at keeping track of my talks. I let AI loose on my notes, and it tells me I’ve given 15+ talks. That sounds about right, but I’m positive it’s not counting several internal talks I gave at Improving.
One of the best things about public speaking was leveraging AI to streamline my content creation process and analyze my talk transcripts to extract blog posts and generate new talks. Huge multiplier.
Giving my transcripts to NotebookLM and using its “audio overviews” to create debates and critiques has been an excellent way to improve my content.
Music
I started the year strong by releasing two original songs in January: Hindsight and Divide and Conquer (One Take). Both songs were recorded at the end of 2024.
I then had the ambition to revisit an old, long song (about 13 minutes) that had never been properly recorded. I relearned all guitar parts, mapped out the click track, and recorded a guide guitar track.
I was planning to invest in a new V-Drum kit to replace my (very) old one.
As I started practicing playing the drums to it and rewriting some parts, my audio interface stopped working.
And then a series of events drained the funds I was saving for that.
So I redirected my musical efforts for the rest of the year to playing my acoustic guitar and working on my singing.
Work Environment
My home office’s setup is still the same.
I did upgrade my work environment at the Improving office, though, by adding another 34-inch widescreen monitor, bringing the total to three screens (the laptop’s screen is the 3rd). Still one screen short compared to my home office, but I’m making that work.
I’ll have a dedicated post to talk about that change.
Tech
After three years of enjoying my Heavys headphones, always keeping them in their case when not in use, taking extremely good care of them (they still look brand new!), they stopped working.
I reached out to them and didn’t like the answer I got (“buy a new one with a 30% discount”). No, thanks, I’ll look for a brand with a better durability record.
Learning
- I’ve been taking a long Google UX Design course on Coursera
- I took the awesome Professional Scrum Product Owner class we have at Improving
- Less input, more creation and shipping
- YouTube Premium
- Audible
Improving
Speaking of Improving:
- In April, I set a new record for how long I’ve stayed at a company, and in August, I celebrated my 9th anniversary with the company.
- I’ve only run one book club. That’s a low number compared to previous years, but we had two groups and great conversations.
- At the beginning of the year, I had a goal of pairing with another Improver to co-present an internal talk, possibly once a quarter. It didn’t happen each quarter, but I did get 4 co-presented talks! I enjoyed each one and will likely keep doing so this year.
- I presented a 4-part series on productivity, which I plan to revisit and offer through my YouTube channel.
- On some of my mindful breaks, I go to the pool table to hit a few shots, always trying to learn something. Sometimes I hit some amazing shots. Most of the time, nobody sees it.
- I’ve leveraged AI on a few occasions to bring to life some ideas I’ve had for years, but never set time to work on them. I’m pleased with the results.
- I have used AI tools every single day, all year long, learning how they can boost my productivity as a consultant and solution/software developer, and sharing everything I can with Improvers and through this blog, my YouTube Channel, Improving’s blog, and webinars.
Soundtrack
I listen to music every day. The list below is a subset of my soundtrack in 2025:
- Full discographies: Devin Townsend, Led Zeppelin, Motorhead.
- Various albums by: Opeth, Kiko Loureiro, Warrel Dane, Sanctuary, Body Count, Serj Tankian, Jinjer, Dream Theater, Alexia Evellyn, Lacuna Coil, System of a Down, Nevermore, Rush, The Warning, Halestorm, and Testament.
Duolingo
Still a daily thing.
Most Useful Things I Learned
How to use AI to make me and those around me better. That was my main goal for the year.
I gave an LLM a ton of information to help me prepare this annual review. It gave me a very good summary of my journey throughout the year:
- Q1: Experimentation and learning
- Q2: Integration into workflows
- Q3: Teaching and evangelizing
- Q4: Production mastery
It explained each point based on what it found in my notes. It included this interesting meta-pattern:
“Human intent + good architecture + pragmatic scope + tests → AI can build end-to-end solutions with genuine 10x speedups.”

The “Do It” Task: Your Scrum Board’s Silent Killer
Posted by claudiolassala in Career & Mentoring on January 13, 2026
I’ve coached many people over the years, and I keep seeing the same pattern. You walk past the physical or virtual Scrum board, and your eyes catch on a particular card. It has a vague title like “Do it.” It’s been sitting in the “Doing” column for what feels like forever. The developer assigned to it looks stressed, progress is invisible, and no one really knows when—or if—it will ever move to “Done.”
The “Do It” task is the work item that gets slapped onto the board when a team is focused on a technical instruction instead of a human problem. It’s a failure of curiosity about the why behind the work. And it’s killing your sprints.
The Anatomy of a “Do It” Task
A “do it” task is really just a placeholder for work that nobody fully understands yet. Often, it focuses on technical implementation over solving a real-world problem for an actual person.
Teams accept these tasks when they don’t know the right questions to ask. Or more importantly, when they don’t know what they don’t know about the work. It’s a symptom of forgetting a core Agile principle: a user story is a placeholder for a conversation, not a set of requirements.
When that conversation doesn’t happen—or when it’s insufficient—you end up with a “do it” task. A vague instruction with no shared understanding of the problem you’re trying to solve for the accountant, the engineer, or the salesperson at the other end of the screen.
Why “Do It” is a Red Flag
I can predict what happens when a team accepts one of these tasks onto their board. The consequences are as predictable as they are damaging:
Stagnation. These items linger for weeks, stuck in the “Doing” column. They become bottlenecks, blocking the flow of work and making sprint goals unattainable.
Estimation failures. “Do it” tasks are a primary cause of massive estimation failures. A task that a team vaguely estimates at 10 hours will predictably expand to 32 hours or more once the hidden complexity finally surfaces.
Unmanaged risk. The “do it” label masks ambiguity. By accepting it, the team is blindly accepting unknown technical and business risks. Any sprint commitment becomes a gamble.
A “Do It” task isn’t just a bottleneck. It represents a breakdown in the collaborative mindset that Agile is supposed to foster.

The AI Parallel: Garbage In, Garbage Out
Here’s something I’ve been noticing lately: the way we work with AI is teaching us something important about how we work with each other.
If you give an AI a vague, poorly defined prompt, it will try to fill in the gaps. It will make stuff up, generate a poor result, and leave you wondering what went wrong.
A developer is no different. When you hand them a task that just says “do it,” you force them to work without clarity. They have to make assumptions about user needs, technical constraints, and business goals. The result is almost always a poor or incorrect implementation that requires significant rework.
The principle is simple and universal: garbage in, garbage out. A poor prompt yields a poor result, whether you’re working with a large language model or a human developer.
Learning to write a clear prompt for an AI is excellent practice for defining a clear task for a human teammate. Both require you to master the art of clarity and intent.

From Fog to Clarity: A Practical Strategy
Eliminating “do it” tasks requires a shift from passive acceptance to proactive clarification. Here’s a four-step strategy I’ve seen distinguish exceptional teams from mediocre ones.
Step 1: Speak Up
This is the most crucial step, and it’s not just a good idea—it’s a developer’s duty to the project and the business.
This is the moment, right here in sprint planning, where I see teams either save their sprint or doom it to failure. When a work item is poorly defined, it’s a team responsibility to pause and have a conversation. Blindly accepting a “do it” is a recipe for failure.
Step 2: Create an Exploratory Task
When you don’t know what you don’t know, the solution isn’t to guess—it’s to investigate and figure out the “best next question.”
Create a new, separate task on the board specifically for exploration. The goal of this task is to “turn the bottle around” to see what you’re dealing with.
This exploratory task is not about completing the feature. It’s about identifying the unknowns. It’s a time-boxed effort to research, ask questions, and figure out what you don’t know. This simple act transforms a vague “do it” task into a concrete plan for discovery.
Step 3: Negotiate with the Product Owner
Once the exploratory task is complete, the development team must circle back with the Product Owner. This is a critical moment for collaboration and negotiation.
The team should present their findings and ask clarifying questions:
- “Now that we know this, are we ready to commit to the full story?”
- “Do we need to remove this story from the sprint to redefine it based on these findings?”
This conversation ensures that the team commits to work only when it’s understood and achievable, protecting the sprint and aligning everyone on the path forward.
Step 4: Reinforce the Purpose of a User Story
This entire process reinforces a fundamental Agile principle: A user story is a placeholder for a conversation, not a final specification or a place to dump requirements.
When teams treat stories as the start of a dialogue rather than a finished order, the “do it” task can’t survive. This collaborative mindset is the ultimate antidote.
From Ambiguity to Leadership
“Do it” tasks aren’t just minor annoyances. They’re symptoms of unaddressed ambiguity that lead to predictable failure, wasted effort, and frustrated teams. They represent a departure from the collaborative spirit of Agile development.
I’ve learned to treat a vague task the same way I’d treat a junior developer—or an AI. You wouldn’t expect them to succeed without clear instructions, context, and a conversation.
By refusing to accept “Do It” tasks, you’re not being difficult. You’re leading, clarifying, and building the foundation for predictable success.

The Bottleneck Isn’t Code — It’s Translation
Posted by claudiolassala in AI & Productivity on January 12, 2026
I’ve lost count of how many projects started with a meeting that felt aligned, only to drift quietly off course weeks later.
Everyone nodded. Notes were taken. Stories were written.
And yet—somehow—we still built the wrong thing.
Most delivery problems don’t begin in code. They begin in translation.
Where Things Go Sideways
Stakeholders rarely speak in implementation terms. Developers rarely think in terms of outcomes. Somewhere in the middle, intent gets flattened into tasks, and the why disappears.
When that happens, teams get efficient at delivering features that don’t actually move the needle.
A Small Shift, A Big Change
Lately, I’ve been experimenting with a different approach—one that forces me to slow down at the moment of maximum leverage.
Instead of rushing to user stories, I capture the raw conversation. The whole thing. The hesitations. The side comments. The context.
Then I let AI help me surface the need hiding behind the words.
That preserves intent.
Why This Matters Now
AI is very good at patterns. Humans are very good at meaning.
When we combine those strengths early—before design or code—we dramatically reduce rework later.
That’s the difference between building fast and building right.
📣 Want to see this in action?
I’ll be walking through real examples in my upcoming free Improving Talk on January 28 at 12pm Central.
