Posts Tagged ai

Staying Oriented When Everything Speeds Up

One of the recurring themes on Reflective Practice Radio is that speed, by itself, isn’t the problem. Losing context is.

In this episode, Matthew and I spent time unpacking what it feels like to work faster than ever—often on multiple things at once—while staying grounded, intentional, and aware of what actually matters. The conversation flowed through journaling, focus, boundaries, AI, and even pool tables, but it all kept circling back to the same question:

How do we avoid drifting when everything is moving quickly?

Capturing Context Instead of Chasing It

I shared some of the recent experiments I’ve been running with voice journaling and lightweight note capture. The goal isn’t to journal perfectly, or even completely—it’s to leave breadcrumbs.

A few words. A short reminder. Just enough to preserve context so that later, when there is time to slow down, reflection has something to grab onto.

This has changed how I move through the day. Instead of interrupting deep work to journal fully, I can quickly capture a thought and return to what I was doing—without losing it.

Dashboards, Not Distractions

We also talked about physical and digital workspaces—specifically, how more screens don’t automatically mean more distraction.

Used intentionally, dashboards can reduce cognitive load. A fixed place to see what you’re working on, where you were last, and what’s coming next makes it easier to re‑orient after context switches.

The key distinction we kept coming back to: everything in view must belong to the same context. Email, chat, and notifications don’t live there unless they directly serve the work at hand.

Boundaries as a Form of Care

From there, the conversation turned to boundaries—how they protect not just focus but people.

I shared stories from earlier in my career about being constantly interrupted and how learning to set explicit time windows for collaboration led to better outcomes for everyone involved. Boundaries weren’t about saying no to people; they were about creating space for thinking, learning, and doing meaningful work.

Matthew reflected on how easy it is to use interruptions as an escape from complex problems—and how awareness of that pattern is often the first step toward changing it.

Journaling as an Early Warning System

One of the most important threads in this episode was journaling as a way to notice burnout before it takes over.

By capturing not just what we did, but how the work felt, patterns start to emerge. Repetitive tasks with no perceived value. Resistance that keeps showing up. Days that blur together.

Those signals are always there. Journaling makes them visible—early enough to respond with intention rather than react too late.

Winning the Lesson

We closed with a metaphor that kept resurfacing: strengthening the non‑dominant hand.

Whether it’s brushing your teeth differently, taking a left‑handed pool shot, or approaching familiar work from a new angle, the practice isn’t about winning in the moment. It’s about building adaptability, perspective, and resilience.

Sometimes the real prize isn’t finishing faster or performing better—it’s learning something you can carry forward.

Reflection Over Speed

This episode felt like a fitting pause as the year winds down.

Before asking where you’re going next, it helps to know where you’ve been—and how you felt along the way. Journaling, reflection, and thoughtful use of AI aren’t about doing more. They’re about staying oriented while you do it.

If this conversation resonated, I encourage you to watch the full episode and sit with the ideas for a bit. The insights often show up after the pace slows.



My first Alfred workflow

When I moved from PCs to a Mac in 2011, one of the first things I did was find an application launcher; on Windows, I had used SlickRun and Executor. I found Alfred and have used it ever since: 18.7 times a day, on average.

Note: I lost the stats between 2011 and 2014.

I paid for the Powerpack shortly after adopting the tool so I could use the features that are only available with it.

I have used a few Workflows over the years (such as the one for 1Password and, more recently, Shimmering Obsidian). However, I had never created one, despite the excellent ways the tool offers to do so.

That changed a few weeks ago.

I needed to automate a small step in my voice journaling process: cleaning up Markdown files in one folder and copying/moving them to another.

I had no idea how to do that, and no time to figure it out. So I used an AI tool (Cursor).

I explained my desired outcome, what I had to work with (the files, Alfred, etc.), and it:

  • Talked me out of using Cursor CLI, which wasn’t needed to accomplish my immediate goal
  • Created a bash script
  • Gave me a step-by-step guide to create the workflow in Alfred (after a few attempts to create a workflow file to import)

My Process Journal workflow is simple:

  • I trigger it with a “process-journal” keyword (typing just the first few characters is enough)
  • It runs a bash script
  • It displays a list of the affected files
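The script itself is short. Here’s a minimal sketch of what a journal-processing step like mine could look like; the folder paths and the cleanup rule (trimming trailing whitespace) are placeholders, not my actual setup:

```shell
#!/usr/bin/env bash
# Sketch of a journal-processing step. Paths and the cleanup rule
# are hypothetical examples, not the real script.
set -euo pipefail

# Clean up every Markdown file in $1 and move the result to $2,
# printing each processed filename (Alfred displays this list).
process_journal() {
  local src="$1" dest="$2" f
  mkdir -p "$dest"
  for f in "$src"/*.md; do
    [ -e "$f" ] || continue                                   # folder may be empty
    sed 's/[[:space:]]*$//' "$f" > "$dest/$(basename "$f")"   # trim trailing spaces
    rm "$f"
    basename "$f"
  done
}

# Example: process_journal "$HOME/Journal/inbox" "$HOME/Journal/processed"
```

In Alfred, the keyword object feeds into a Run Script object holding something like this, and the script’s output becomes the list of affected files.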

Nothing earth-shattering, but exactly what I needed as the next step in the evolution of my system.

Since then, I’ve created another useful workflow. But that’s a topic for another post.



Changing Our Position, Not Slowing the World Down

The Blank Page Podcast is now Reflective Practice Radio. The name change happened for practical reasons, but it also surfaced something we had already been circling for weeks: this show has never been about performance or polished answers. It’s about slowing down inside fast-moving work, thinking out loud, and learning while we’re in motion.

This episode grew out of a simple observation: technology isn’t just moving fast—it’s accelerating in ways that change how we experience our work. The tools aren’t the problem. The question is how we adapt our perception and practices so we don’t lose clarity as everything else accelerates.


Speed Is a Perception Problem

We kept coming back to an analogy that has stuck with me for years: motorsports.

If you’re standing trackside at a Formula One race, the cars are a blur. Noise, vibration, speed—everything feels overwhelming. But if you watch the same race from above, nothing about the cars has changed. They’re still moving at extraordinary speed. What’s changed is your position relative to the action.

That distinction matters.

When work feels chaotic, our instinct is often to slow everything down: fewer tools, fewer initiatives, fewer experiments. Sometimes that’s the right call. But often, what we really need is to change where we’re looking from. Lift our gaze. Create distance. Introduce practices that help us recognize patterns rather than reacting to every moment.

The work isn’t slower. We’re just better oriented.


AI Compresses Work, Not Understanding

We spent a good chunk of the conversation unpacking something that often gets missed in AI discussions: speed is only visible at the surface.

Yes, AI can generate code, summarize meetings, and prototype features in minutes. But those moments of apparent magic sit atop something deeper—years of experience, context gathered in conversations, decisions made in meetings, and judgment built over time.

What AI really does well is compression.

In my own work, recording meetings, capturing transcripts, and letting AI analyze that material means I don’t have to choose between paying attention and taking notes. I can stay present in the conversation, then later extract exactly what I need: decisions, risks, open questions, and next steps.

The work looks faster. The understanding is still human.


Better Meetings, Fewer Meetings

One of the recurring themes in this episode was reframing how we think about meetings.

If a meeting exists only in the moment—if nothing is captured, shared, or revisited—it feels like a tax on time. But when conversations are treated as raw material for insight, they become assets.

Recording meetings, summarizing them, and turning them into actionable artifacts changes the dynamic entirely. Suddenly, the value of the meeting extends beyond the people in the room. It also changes behavior: people show up differently when they know the conversation will be used, not forgotten.

Better meetings naturally lead to fewer meetings. Not because we mandate them away, but because the ones we keep actually move the work forward.


Story Before Features

We also talked about something deceptively simple: how work is presented.

Too often, demos and reviews devolve into lists of completed tasks or UI changes. Buttons clicked. Screens shown. Checkboxes marked.

But the people on the receiving end—stakeholders, customers, decision-makers—don’t experience the system that way. They experience a situation, need, or problem that unfolds over time.

When we start with the story—what was happening before, what changed, and why it matters—the feature finally makes sense. Even deeply technical work can be framed in human terms when we anchor it in outcomes instead of implementation.


Different Minds, Better Outcomes

Toward the end of the episode, we leaned into how differently people think.

Some of us think in words. Others in images. Some linearly, others spatially. None of these are better or worse, but friction emerges when we assume everyone processes information the same way.

What’s interesting is how modern tools can help bridge that gap. A verbal explanation can be turned into a visual diagram. A narrative can accompany a visual walkthrough. Instead of forcing people to change how they think, we can translate between modes.

That’s where collaboration becomes additive rather than exhausting.


Reflection as a Competitive Advantage

This episode wasn’t about slowing down technology. It was about learning how to move with it—without losing ourselves in the process.

Reflective practice creates space for better questions, better decisions, and better outcomes. It helps us recognize when to accelerate and when to lift our heads and reorient.

That’s what Reflective Practice Radio is here for.

If any of this resonates, I encourage you to watch or listen to the full episode. The conversation itself is the point—and, as always, it’s one we’re still learning from.



Need, Problem, Solution — Thinking Through the Spiral

I’ve been reflecting a lot lately on the difference between needs and wants, and how those relate to problems and solutions. Earlier this year, as I prepared a lightning talk about AI use cases outside of work, I framed everything around a straightforward structure: What was the problem? And what was the solution? It worked well enough for the talk. It helped me articulate the challenge and walk through how I approached it.

But after the talk, something kept nagging at me. I don’t go around looking for problems to solve—at least not intentionally. So I stepped back and asked a deeper question:

When is a problem actually a problem?

It turns out that question takes you someplace interesting.


Listening Before Solving

Over the years, as a consultant, I’ve sat with people who describe what they perceive as problems. And sometimes, after listening, observing their workflow, and looking at the numbers they’re working with, I realize they don’t actually have a problem. What they have is a need they don’t fully understand, and in the absence of that clarity, anything standing in the way starts to feel like a problem.

People often think they have a data problem when what they really have is a misinterpretation problem. They believe they have a workflow problem when, in fact, they have a perspective problem. They think something is broken when it’s actually just misaligned with what they need.

And the same is true for me.

So I’ve been asking myself more often:

  • Is this really a problem?
  • Why does it feel like a problem?
  • Is it actually blocking a need I have?
  • Do I truly need this thing I think I need?

When the answer is no, there’s nothing to do. When the answer is yes, then I can finally ask: What’s in the way? And what can I do about it?

That’s where the problem-solution pairing becomes relevant. But only after the need is clear.


Revisiting My Own AI Use Cases

When I revisited the non-work AI use cases from my lightning talk, I approached them differently. Instead of starting with, “Here’s the problem I solved”, I asked:

“What was the need?”

That shift completely changed how I looked at those examples. It made them clearer, more grounded, and more honest. It also made me more aware of how many things I label as problems simply because I haven’t yet named the underlying need.

This led me to something that’s been shaping a lot of my thinking lately:

The Need → Problem → Solution Playbook

Put the need first, always.

Because wants are tricky. They sit between the need and the solution. In user stories, this shows up as the “I want to…” part. I’ve never loved that part, because wants can be misleading. The real value is in the “In order to…” part—that’s where the need lives.

By flipping the order, I create more space:

  • The same need can lead to different wants.
  • Different people with the same need can want different things.
  • I’m less constrained by the first idea that pops into my head.

That’s where creativity comes from. That’s where real problem-solving starts.


Closing

I started writing this as a journal entry for myself, but I realized it fits perfectly as a blog post. It captures where I am right now in this ongoing reflection around needs, problems, and solutions.

Need → Problem → Solution → Next Need.

A spiral that keeps moving upward.

And speaking of spirals…

I’m releasing an early edition (about 80% complete) of my upcoming book, The Need–Problem–Solution Playbook: How AI became part of my workflow—one real example at a time. It’s available at a 30% discount through the end of the year, and everyone who buys now will automatically receive the full version for free once it’s published.

If this way of thinking resonates with you, I hope the book helps you explore your own spirals.

Get the book!



The Drift of Memory, the Speed of Tools, and the Value of Story

Every week, when Matthew and I sit down to record The Blank Page Podcast, I never know exactly where the conversation will go. I only know one thing for sure: if we follow our curiosity, we’ll end up somewhere worth exploring. Episode 7 was no exception.

The Week AI Moved Fast Again

This week brought another wave of AI releases—Google’s Gemini 3, a new AI-powered IDE called Antigravity, and a model with the ridiculous-yet-fantastic name “Nano Banana Pro.” Matthew lit up, describing the new image‑editing capabilities, especially its ability to blend multiple source images into a cohesive composite. It’s the kind of feature that would’ve required specialized tools and hours of effort not that long ago.

Meanwhile, I spent part of the week experimenting with book‑cover concepts. I moved between ChatGPT, Gemini, SnagIt, and back again—nudging, refining, iterating. The early results weren’t great. But then, suddenly, they were. The shift wasn’t dramatic; it was subtle. A little more polish here, a little better structure there. I enjoy those small steps forward. It’s the feeling of “Ah, now we’re getting somewhere.”

Tools Are Only Interesting When They Solve a Problem

As Matthew explained why he loves new tools, I was reminded again of something I’ve been repeating for years: it’s not about the tool—it’s about the problem. (That said, I’ve seen some great ways Matthew puts the shiny toys to good use!)

New IDEs are fun to try, but if the one I already use gets the job done, that’s where I stay, at least until a real need emerges or I set aside time to experiment with them.

That’s why I don’t chase every shiny thing. Instead, I keep a catalog of problems I want to solve. If a new tool gives me leverage, I’m ready.

Sometimes the tool doesn’t even need to be the perfect one—it just needs to work well enough.

When “Good Enough” Means “Go”

A great example came this week. I wanted a quick survey for an internal session—not a full-blown form, not a polished UI, just a place to capture answers. Instead of opening yet another form builder, I gave Gemini a markdown list of questions and said, “Create an app for these.” One minute later, I had a working, shareable mini‑app.

No friction. No overhead. Just done.

Moments like that still surprise me. They shouldn’t, not after everything we’ve seen in the last two years—but they do.

Prototypes in Minutes, Not Weeks

The part of the conversation that resonated most with me was the discussion of how AI accelerated a recent User Acceptance Testing effort. All the sessions were recorded. I knew I’d be able to revisit the transcript later, reflect on the discussions, and dig deeper into the wording, reactions, and sentiment.

By the time the meeting ended, I had several ideas to explore. I dumped the transcripts and codebase context into Cursor and asked it to create a plan. I expected it would take half an hour to build the prototype.

Cursor did it in five minutes.

Not a perfect solution. Not even a final one. But a tangible proof‑of‑concept—something I could run, test, and refine.

Within a few hours, I had a working prototype to show the stakeholders. They validated it right away. And only then did I begin thinking about implementation details.

The speed of that loop—idea → plan → prototype → validation—still blows me away.

Diverge, Converge, Repeat

We talked about design thinking, and how AI is becoming a natural partner in the diverge–converge cycle:

  • Diverge into possibilities.
  • Converge into a clear direction.
  • Diverge into solutions.
  • Converge into the next step.

It’s the same idea I use with humans: get multiple perspectives, compare them, merge the best parts, and refine again. AI makes the loops faster.

And the comparison with humans matters. Matthew doesn’t ask a single model for an answer. He asks multiple models for their perspectives, then has them read each other’s plans, poke holes in them, and incorporate improvements. It’s the closest thing we have to creative collaboration in software.

The Strange, Useful Imperfection of Memory

Somewhere along the way, our conversation drifted—beautifully—into the nature of memory. Human memory. Machine memory. The drift that happens over time.

Models remember things across conversations, sometimes helping and sometimes confusing. Humans do the same. We reconstruct memories from fragments, fill gaps with stories, and treat the stitched‑together narrative as fact.

But as I’ve been revisiting my own 20 years of blog posts, I’ve been reminded how important it is to:

  • Capture snapshots of what we believed.
  • Revisit those snapshots with new knowledge.
  • Notice the drift.
  • Update the beliefs.

That’s the heart of my weekly Back to the Spiral newsletter. Past → Present → Future. What I’m doing now, what I’ve done before, and where I think I’m headed—a self‑reflection loop powered by notes, transcripts, old talks, journals, and weekly pause points.

AI accelerates our thinking. But reflection anchors it.

Language, Culture, and the Drift of Meaning

From AI memory, we drifted into human language—how words evolve, how meanings shift, how culture shapes our vocabulary.

We laughed about Portuguese speakers using English loanwords for things that already have perfect Portuguese equivalents. But underneath the humor was a deeper point: nothing stays fixed. Language drifts. Beliefs drift. Cultures drift.

Even our memories drift.

Which is why capturing our thoughts matters. Because the moment will not come back. At least not in its original shape.

What Endures

As always, we closed with a discussion of storytelling. Effects fade. Tools become obsolete. Features get replaced.

But stories carry forward.

It’s why a book like Fahrenheit 451 still hits hard today. And why, when Matthew and I record these episodes, I’m reminded again and again how meaningful it is to sit down, hit record, and talk.

Because somewhere between curiosity, reflection, and conversation, we stumble into the insights we didn’t know we were looking for.


If this episode sparked a thought, question, or tangent you want us to explore, let us know. The Blank Page is always open.



AI Moved Fast This Year. Here’s How I Stayed Grounded.

Here’s another title that would have worked just as well:

Not About the Tools: A Year of Needs, Problems, and Meaningful Solutions

I keep a five-year daily journal. Every day gets just a sentence or two, but those short entries turn into a time machine. A few days ago, it reminded me that exactly a year ago, I gave a lightning talk to Improvers about how I was using AI tools to be a more productive developer and consultant.

Back then, I had one tool in my belt: ChatGPT.

And even with that, I was blown away.

I was copy-pasting code back and forth between my IDE and ChatGPT, getting unstuck on implementation details, and researching domain questions that I didn’t yet have the language for. It was clunky—manual, linear, slow by today’s standards—and still I thought, “If this alone is possible, what does this mean for the way I work?”

Fast forward twelve months.

Now I’m using a whole ecosystem of AI tools.

Not because they’re shiny. Not because they make for cool demos. And definitely not because I want to become the person with the biggest toolbox.

Each tool I’ve added had to earn its way in. Every single one solves a particular need:

  • removing the friction that kept old projects stuck in the backlog,
  • accelerating ideas that used to sit dormant for years,
  • making tedious or logistical steps disappear,
  • turning “I wish I had time for this” into “I can do this today.”

That’s been the most surprising part of this year: projects I shelved for years because they were too time-consuming, too dull, or too logistically annoying suddenly became possible again. Week after week, I’ve been revisiting ideas I had long assumed were dead. Some of them shipped. Others are moving faster than I could have imagined. And a few I’ve pushed aside again—not because I can’t do them, but because I now have clarity that it’s still not their time.

What made the difference wasn’t the explosion of AI tools.

It was a matter of how I think.

I don’t look at tools as solutions in search of a problem. Instead, I lean into a simple playbook:

Need → Problem → Solution.

What’s the need?
What problem is getting in the way?
What solution could remove that friction?

A year ago, AI tools were evolving fast.

Today, they’re evolving even faster.

I still can’t predict what my toolset will look like twelve months from now. I can’t predict what new capabilities will show up, how they’ll reshape the way I work, or what new projects will suddenly become possible.

But I do know my anchor.

No matter how fast the tools move, I’m not hopping onto every fast-moving ship. I’m staying grounded in the same simple loop that has guided me through this year:

Need → Problem → Solution.

And that’s what has made the last twelve months so transformative.

Not the tools.

But the clarity they help bring.

As technology accelerates, the most important thing is knowing what you actually need and what’s finally possible because of it.



Looking Ahead While Moving Faster: Lessons from AI, Mentoring, and Motorcycles

Six episodes in, The Blank Page Podcast keeps doing what we set out to do: show up with half‑formed thoughts and leave with clearer language, better questions, and a few ideas worth trying this week. This conversation moved from public‑speaking jitters to journaling, from transcript workflows to AI‑powered facilitation, and landed on an unexpected (but valuable) metaphor: what racetracks can teach software teams about speed, safety, and where to look next.

🎥 Watch the full episode of The Blank Page Podcast: Episode 6 for the complete conversation.

Practicing in Public (and Setting Expectations)

We opened with Matthew’s honest confession: public speaking is not his thing—so he’s leaning into it anyway. The key that unlocked progress wasn’t bravado; it was expectation‑setting. If you introduce yourself as the all‑knowing expert, expect hard questions. If you say, “Here’s what I’m learning; come learn with me,” the room leans in. That shift turns a performance into a practice.

Two habits reinforced the point:

  • Say “I don’t know” early so you can move toward knowing.
  • Ask better next questions, not perfect ones. Optimize for learning the very next thing.

Journaling as Time Travel

We revisited why journaling matters. Writing publicly for decades (and privately even more) creates a record you can return to. Old posts make you remember how it felt not to know so that you can teach from empathy, not hindsight bias. The longer you practice, the easier it is to forget the core—journaling helps you rebuild it.

AI Workflows That Feed Reflection (Not FOMO)

Instead of chasing every shiny tool, we shared two pragmatic loops:

  • Newsletter triage → weekly AI summaries. Skim daily AI digests, route them to a label, then let an agent compile a weekly, personalized “what actually matters” brief.
  • Talk transcripts → resonance analysis → treadmill review → blog. Grab YouTube transcripts of talks, analyze for themes that match (or challenge) our work, watch at speed, slow down at the “meaty” parts, voice fresh thoughts, then draft a post. The goal isn’t to consume more; it’s to convert input into insight you can use.

“AI Won’t Replace the Facilitator”

Meeting bots can capture words, action items, and tone. What they miss still matters: body language, politics, who is unusually quiet today, who leans in when a slide appears. That’s the facilitator’s work. On Scrum teams, we should carry those observations into retro—not just what we shipped, but how we presented the work and how stakeholders reacted. Poor sprint reviews can erase the value of great sprints.

From Cost‑Cutting to Capacity

Internally, we’ve started reframing how we talk about our work: don’t sell “cost‑cutting,” sell capacity. AI‑powered consultants don’t simply deliver cheaper—they deliver more: more validated experiments, more iterations, more value pulled forward in time.

Track Lessons for Tech Teams

My racetrack practice offered a map for teams moving faster with AI:

  • Speed changes what matters. Go faster, and you must move your body sooner and change where you look sooner. In product work, that means changing the horizon—maybe tighter release cycles, more frequent alignment, or shifting from single stories to story maps.
  • Safety scales with skill. “Vibe coding” can move in a straight line fast, but can it brake, turn, and protect users? AI‑powered consultants run with guardrails—security, data protection, and recovery—because other people are now in the car.
  • Collapse the checklist. Where we once had many discrete tasks (design, stub, implement, validate), AI lets us collapse steps while keeping intent clear. Like driving: you don’t consciously run a 12‑step checklist to get into first gear anymore; you still do the essentials, just faster and safer.
  • Lap after lap, adjust. Tracks don’t change shape, but laps are never identical. Likewise, sprints repeat but conditions vary—so instrument the work, observe deviations, and course‑correct without drama.

Humans in the Loop (Still Required)

We told a story where raw meeting audio was chaotic—overlapping voices, crosstalk—yet the AI‑cleaned transcript surfaced a clean, business‑ready summary. Powerful. And still: a human reviewing diarization, assigning voices, and clarifying intent made it truly useful. The pattern holds: AI accelerates, humans ensure accuracy, ethics, and meaning.

What We’re Trying Next

  • Keep expectation‑setting at the top of every talk.
  • Double down on weekly journaling—not for posterity, but to teach from empathy.
  • Tighten release horizons so our “reference points” match the speed we can now build.
  • Frame our work as capacity creation, not cost‑cutting.

Reflection and Wrap‑Up

We ended where we often do: with presence. The tech is exciting, but the real impact is how it shapes our days—more clarity, better questions, faster feedback, and enough margin to have dinner with the people we care about. That’s the work.



The Timeless Habits of Effective Remote Teams

In April 2020, just a few weeks into the COVID-19 pandemic, George and I revisited a presentation we had originally given around 2010 at the Houston TechFest. At that time, everyone was suddenly working remotely, and we had just restarted the Virtual Brown Bag after a few years on pause. Every week, we were having conversations on topics that mattered to remote teams—and one of those was this presentation. We thought it would be fun and valuable to look back and see what lessons had stood the test of time.

Now, five years later, in 2025—fifteen years after the original talk—I’ve rewatched that video to see what still resonates and what’s evolved. The short version: almost everything we talked about still holds true.


Staying Informed

Back in 2010, we were figuring out how to stay aligned across continents and time zones. We had team members in Houston, New Orleans, Austin, Brazil, Austria, and India. Today, using a digital agile board is the norm. Tools differ, but the challenge is the same: keeping everyone in sync.

Regular touchpoints—daily scrums, async updates, or quick calls—remain essential. Remote work still requires deliberate effort to share context and ensure everyone stays connected to the project, the people, and the purpose.


Personality and Communication

Working across cultures taught us early on how tone, humor, and assumptions vary widely. That hasn’t changed. Empathy and self-awareness are still key.

In the 2020 video, we reflected on how easy it is to treat people as words on a screen. I’ve continued giving talks about the importance of conversation and collaboration in the years since. The message remains the same: communication is more than information exchange—it’s about connection. The more we humanize our digital interactions, the better our teams function.


Building Human Connection

One of my favorite memories from that 2010 project was building friendships beyond the work. We met at pubs, played guitar, and talked about life. In 2020, we recreated that spirit at Improving through virtual “Thirsty Thursdays,” online games, and casual chats. Today, in 2025, it’s still just as important.

Teams are using Teams (pun intended), Discord, and other similar tools. The format changes, but the need for human connection never does.


Pairing and Feedback

Pair programming remains one of the most effective ways to accelerate learning and unblock progress. In the 2020 discussion, we talked about the “just pair for 20 minutes” mindset—still my go-to approach.

Our philosophy around code reviews hasn’t changed either. I even posted a recent blog about an old talk I gave on this exact topic. Code reviews should be collaborative conversations, not blame sessions. Including junior developers in reviews helps identify unclear code and creates mentorship opportunities. Feedback, at its best, is still an act of empathy.


Integration and Contracts

Defining integration points before coding continues to be a cornerstone of effective teamwork. Whether it’s between front-end and back-end components or between microservices, clear contracts prevent chaos.

In 2010, we emphasized this. In 2020, we still did. In 2025, it’s second nature. Early alignment, shared understanding, and clearly defined boundaries make all the difference.


Sharing Knowledge

In the video, George mentioned using PlantUML for diagrams as code—something I fully agreed with at the time. These days, I mostly use Mermaid.js, which integrates beautifully with Obsidian, Cursor, and Windsurf. Even better, AI tools now help generate and maintain these diagrams automatically, keeping them versioned alongside our repositories.

That principle—treating diagrams and documentation as code—has only become more critical. It keeps human and AI collaborators aligned around a shared understanding of the system.
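To make the diagrams-as-code idea concrete: a Mermaid diagram is just plain text in the repository, so it can be diffed, reviewed, and regenerated alongside the code it describes. A minimal (hypothetical) example:

```mermaid
flowchart LR
    UI[Web UI] --> API[REST API]
    API --> SVC[Order Service]
    SVC --> DB[(Database)]
```

Because the source is text, a pull request that changes the system can change the diagram in the same commit—and tools like Obsidian render it inline.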


Techniques That Endure

Many of the techniques we practiced fifteen years ago still define healthy teams today:

  • Feature branching keeps work isolated and reviewable.
  • BDD and TDD remain powerful tools when applied thoughtfully.
  • Scrum events (no longer called ceremonies since the 2020 Scrum Guide update) help teams stay focused on outcomes.
  • Dependency management—whether through IoC containers or modern dependency graphs—is still critical.

The names and syntax may change, but good engineering habits endure.


Tools: Then, 2020, and Now

Our toolset has evolved with the times. In 2010, we relied on Skype, Dabbleboard, and Mercurial. By 2020, it was Slack, Teams, Miro, Docker, and GitHub. Now in 2025, some of those tools are dead, and new ones have arrived. Most importantly, ChatGPT blew us away, and AI-assisted coding environments like Cursor and Windsurf have entered the picture.

Personally, I’ve transitioned from Evernote to Obsidian for almost everything related to daily logs, meeting notes, and knowledge management. I’ve also moved from ReSharper and Visual Studio to JetBrains Rider, which has become my daily driver, paired with Cursor (and now experimenting with Windsurf). Pomodoro timers remain part of my routine, as does managing interruptions and protecting focus. Those habits have aged well.


What Hasn’t Changed

Fifteen years after our talk about working with distributed teams, the fundamentals are still the same:

  • Communicate intentionally.
  • Build human connection.
  • Collaborate with empathy.
  • Share knowledge openly.
  • Focus on people first, then process, then tools.

The world has changed dramatically since that first project, but these principles still guide how I work and lead. Watching that old video reminded me how much has evolved—and how much still rings true.


(Originally recorded in 2020 as a conversation with George revisiting our 2010 talk on distributed teamwork. Fifteen years later, the lessons still stand. Full video available on YouTube.)


AI in the Trenches: Reflections on Real-World Software Practice

AI-assisted development isn’t a futuristic concept anymore—it’s our everyday reality. In Episode 4 of The Blank Page Podcast, Matthew and I explored how AI tools like Cursor, Context 7, and JetBrains Rider are changing the way we work, communicate, and even think about our craft.

🎥 Watch the full episode:

From Experiments to Everyday Workflows

Tools like Cursor and Windsurf now feel like an extra teammate.

We discussed how features such as Git worktrees, branch context, and prompt-based investigations allow AI to handle repetitive or exploratory tasks, freeing developers to focus on deeper problem-solving. For example, a recent upgrade of thousands of .NET tests and dependencies—the kind of work that used to take weeks—was completed in just a few days with structured AI collaboration.

That shift only works, though, when we provide the proper context. Our conversation touched on how documenting intentions, reasoning, and constraints helps AI become a more useful partner instead of just a code generator.

Making the Invisible Visible

Much of software work is invisible—refactoring, upgrading, and tuning systems that users never see. We talked about how to communicate that kind of work to non-technical stakeholders in relatable terms. Sometimes it’s like changing the oil while the car is moving.

The goal isn’t to make the work look flashy; it’s to demonstrate the value of maintaining quality and stability as we move forward.

Fear, Tools, and Mindsets

We also unpacked why some developers resist AI tools. Fear of replacement is part of it, but so is misunderstanding what the tools are for. We prefer to see AI as a power tool—it amplifies skill, but it doesn’t replace craftsmanship.

The real challenge is learning to think with the tool, not against it.

Creating for Meaning, Not Metrics

Toward the end, our discussion turned to content creation. The same principles that apply to coding apply to sharing ideas: Are you optimizing for engagement or authenticity?

We mentioned creators like Derek Sivers and James Clear, who focus on creating lasting value rather than chasing clicks. It’s a reminder that every post, video, or talk is part of a longer journey. It’s about learning, growing, and connecting, instead of winning the algorithm.

Playing the Right Game

The episode closes with a reflection that applies far beyond software:

Know what game you’re playing, what prize you’re trying to win, and whether you actually enjoy playing it.

For me, that means this: I don’t play to win—I play to learn.


20 Years of Blogging, One AI-Powered Cleanup

I’ve been writing on this blog for over 20 years, publishing nearly 600 posts spanning everything from FoxPro to C#, from Evernote to Obsidian, from testing practices to personal growth.

From two decades of writing and reflection came my book 20 Lessons in 20 Years of Blogging, where I share insights gathered along this journey. Check it out!

There’s something I’ve known for a while: my blog was a mess.

The Problem

Over time, I had built up a tangled web of tags and categories. Some categories had a single post. Some posts were assigned to multiple categories. About one-third of all posts were completely uncategorized. I even had overlaps like Testing as both a category and a tag. On top of that, many posts didn’t have any tags at all.

So, finding things was hard. Understanding my main areas of focus was hard. Even sharing relevant content with others took longer than it should have. I wanted a clean tag cloud that truly reflected what I write about.

The Goal

I needed a way to:

  • Make tags and categories consistent.
  • Understand what topics I write about most.
  • Easily group and share posts by topic.
  • Streamline the process so I could keep things tidy going forward.

The Plan

I turned to AI for help. Using ChatGPT, I started by exporting my blog’s data from WordPress — tags, categories, and post metadata. I fed it a prompt explaining my goals:

“You are a blogging specialist. My blog uses tags and categories inconsistently. I need analysis, cleanup recommendations, and automation to fix this.”

ChatGPT responded with a structured plan:

  • Project goals and data extraction.
  • Cleanup rules for tags and categories.
  • Automation options.
  • Next steps for exporting and analyzing posts.

That gave me the roadmap to begin.

From Analysis to Action

I worked iteratively with ChatGPT to identify tag frequency, category overlaps, and consolidation opportunities. For example:

  • Merge Personal Growth and Personal Development.
  • Remove tags used on only one or two posts.
  • Keep categories under ten total, focused on the type of content rather than the topic.

The AI suggested a clean set of top-level categories:

  • Software Development
  • Testing and Quality
  • AI and Productivity
  • Career and Mentoring
  • Improving and Community
  • Personal Growth
  • Newsletter Archive
  • The Blank Page Podcast

That structure made sense, so I moved forward.

Automating the Cleanup

To handle the automation, I switched over to Cursor, an AI-powered coding environment. ChatGPT summarized the project so I could continue in Cursor, where it helped me write scripts for:

  • Reassigning uncategorized posts.
  • Cleaning up old categories and tags.
  • Using the WordPress REST API to apply updates.

Cursor then built scripts that parsed the XML export from WordPress, ran posts through AI categorization (using LM Studio with a local model), and automatically reassigned posts based on confidence scores. High-confidence suggestions were applied automatically; low-confidence ones went into a CSV file for manual review.
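The confidence-based routing can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual script: the threshold, field names, and sample posts are assumptions, and the XML parsing and LM Studio call are left out.

```python
import csv
import io

CONFIDENCE_THRESHOLD = 0.85  # assumed cutoff; the real scripts used their own


def route_suggestions(suggestions, review_csv):
    """Split AI category suggestions into auto-apply and manual-review piles.

    `suggestions` is a list of (post_id, category, confidence) tuples.
    High-confidence suggestions are returned for automatic application;
    low-confidence ones are written to a CSV for human review.
    """
    auto_apply = []
    writer = csv.writer(review_csv)
    writer.writerow(["post_id", "suggested_category", "confidence"])
    for post_id, category, confidence in suggestions:
        if confidence >= CONFIDENCE_THRESHOLD:
            auto_apply.append((post_id, category))
        else:
            writer.writerow([post_id, category, f"{confidence:.2f}"])
    return auto_apply


# Example usage with made-up posts:
review = io.StringIO()
applied = route_suggestions(
    [(101, "Personal Growth", 0.91), (102, "Testing and Quality", 0.55)],
    review,
)
```

The key design choice is that the script never silently applies a low-confidence guess—anything below the threshold lands in the CSV for a human decision.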

Running the AI Locally

Running the models locally with LM Studio (using GPT-OSS) was a breakthrough — no API costs, full transparency. The AI processed each post, explaining its reasoning:

“This post reflects on blogging habits and tool preferences — categorized under Personal Growth (80% confidence).”

Seeing the AI’s reasoning helped me trust its choices. Out of 210 uncategorized posts, 182 were confidently assigned (85%+ confidence). The remaining 28 I reviewed manually.

Refining Tags

Once categories were clean, I moved on to tags. The AI generated per-category tag suggestions and automatically assigned 3–7 tags per post, based on context and confidence. For instance:

“This post showcases advanced C# LINQ usage — tags: C#, Refactoring, Patterns.”

After a final dry run, 517 posts had tags automatically applied with over 70% confidence.
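Applying the results goes through the WordPress REST API, which updates a post's terms via `POST /wp-json/wp/v2/posts/{id}` with term IDs. A minimal sketch of building such a request (the site URL, post ID, and tag IDs here are hypothetical, and authentication plus the actual HTTP send are omitted):

```python
import json


def build_tag_update(site_url, post_id, tag_ids):
    """Build the WordPress REST API endpoint and JSON payload for
    updating a post's tags.

    WordPress expects tag *term IDs* (not tag names) in the `tags` field.
    """
    endpoint = f"{site_url}/wp-json/wp/v2/posts/{post_id}"
    payload = json.dumps({"tags": tag_ids})
    return endpoint, payload


# Hypothetical post and tag term IDs:
url, body = build_tag_update("https://example.com", 42, [7, 19, 23])
```

A dry run simply logs these endpoint/payload pairs instead of sending them, which is what made it safe to review 500+ updates before committing to them.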

The Result

Now my blog feels fresh again. Categories are tidy and meaningful. The tag cloud reflects what I published. Here are a few examples of new posts after the cleanup:

Now I have the tools to continue maintaining my blog with minimal effort.

Looking Back (and Forward)

Funny enough, I found an old post from 2010 when I first moved my blog from Microsoft Live Spaces to WordPress. In that post, I wrote:

“I’ll go through my old posts and backfill categories soon.”

It only took 15 years.

This whole cleanup — nearly 600 posts — took just a few hours over a few days using today’s AI tools. Work I had avoided for over a decade finally became doable (and even fun).

The Takeaway

There’s probably something in your world you’ve been putting off because it felt like too much work — maybe an old project, or something you didn’t have time or skill for before. Now, with tools like ChatGPT, Cursor, and LM Studio (to name a few in a sea of options), you can revisit those ideas, automate the boring parts, and finally get them done.

🎥 If you’d like to see how this all came together, check out the complete video walkthrough.
