A Method for AI‑Assisted Pull Request Reviews: Aligning Code with Business Value

Pull request reviews are often misdirected.

I’ve sat in (and led) plenty of PR reviews where we spent most of our energy debating formatting, naming, or whether something felt idiomatic… while quietly skipping over the most important question:

Does this change actually deliver the business value we intended?

This post is about a method I’ve been experimenting with — using AI not as a linting bot, but as a thinking partner in PR reviews. One that helps keep us anchored to user intent, architectural clarity, and long‑term quality.


The Shift: From Code‑First to Value‑First Reviews

Traditional PR reviews tend to start at the bottom:

  • Is the syntax right?
  • Does this follow our style rules?
  • Could this be written more cleanly?

Those things matter — but they’re not the starting point.

What I’ve learned is that when we start with code, we usually end with code opinions. When we begin with the story, we get better conversations, better decisions, and better systems.

So I flipped the review flow.

The rule is simple:

Every PR review starts with the user story.

No story? The review doesn’t begin yet.

That single constraint changes the entire tone of the conversation.


AI as a Lead Architect, Not a Linter

The key move here is how the AI is framed.

Instead of asking it to “review this PR,” I ask it to act as a Lead Architect and Team Lead whose primary responsibility is:

  • ensuring the implementation matches the user story
  • protecting architectural intent
  • identifying risks early (performance, security, maintainability)

This matters.

Tools behave the way we invite them to behave. When AI is positioned as a strategic reviewer, its output changes dramatically.
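For concreteness, the framing can be as simple as a system prompt along these lines (an illustrative sketch, not my exact instructions):

```text
You are the Lead Architect and Team Lead reviewing this pull request.
Your responsibilities, in priority order:
1. Verify the implementation matches the linked user story and its
   acceptance criteria.
2. Protect architectural intent (e.g., CQRS boundaries, mediator usage,
   async and error-handling conventions).
3. Flag risks early: performance, security, maintainability.
Start with the user story. Do not comment on code style until intent
and test coverage are settled.
```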

And it enforces a discipline I care deeply about:

The business requirement and the technical solution must remain tightly coupled.


The Three‑Phase Review Flow

To make this repeatable, I use an explicit three‑phase review structure, always in the same order.

1. User Story Analysis

First: the why.

The AI reviews the user story before touching the code:

  • Is the story well‑formed?
  • Are acceptance criteria clear?
  • Do Given/When/Then scenarios exist?
  • Is the scope explicit?
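
As an illustration (the wording here is invented for this post, not taken from an actual story), a well‑formed scenario might read:

```gherkin
Feature: Audit log filtering
  Scenario: Tester investigates a failed action
    Given audit records exist for the "export" action
    When the tester filters the audit log by action "export"
    Then only "export" records are shown, newest first
```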

Only once we agree on intent do we move forward.

This alone eliminates a surprising number of downstream review debates.


2. Test Coverage (Proof Before Preference)

Next: how do we know this works?

Here, test coverage is evaluated against acceptance criteria, not arbitrary percentages:

  • Are the behaviors described in the story covered?
  • Are edge cases tested?
  • Are failure modes considered?

This reframes testing as evidence, not ceremony.
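To sketch what "tests as evidence" looks like in practice, here is a minimal example where each test name restates an acceptance criterion. The function and criteria are hypothetical stand‑ins, not the real feature's code:

```python
# Hypothetical audit-filtering function; names are invented for this sketch.
def filter_audit_records(records, action=None):
    """Return audit records matching the given action, newest first."""
    matched = [r for r in records if action is None or r["action"] == action]
    return sorted(matched, key=lambda r: r["timestamp"], reverse=True)


# Each test name restates one acceptance criterion from the story,
# so a failing test points directly at the behavior that broke.
def test_filtering_by_action_returns_only_matching_records():
    records = [
        {"action": "login", "timestamp": 1},
        {"action": "export", "timestamp": 2},
    ]
    assert filter_audit_records(records, action="login") == [
        {"action": "login", "timestamp": 1}
    ]


def test_results_are_ordered_newest_first():
    records = [
        {"action": "login", "timestamp": 1},
        {"action": "login", "timestamp": 5},
    ]
    result = filter_audit_records(records, action="login")
    assert [r["timestamp"] for r in result] == [5, 1]
```

Run with pytest; coverage then means "every criterion has a test", not "a percentage crossed a threshold".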


3. Implementation & Architecture

Only now do we look at the code itself.

By this point, discussions about structure, patterns, and performance are grounded in shared context — not taste.

That’s where things get interesting.


A Real Case: An Unplanned Auditing Feature

We recently applied this process to a new auditing feature that emerged during UAT.

It wasn’t in the original backlog. It came from a very real need:

“I need to quickly answer why something didn’t work during testing.”

Because it was opportunistic, it deserved a very careful review.

Story Alignment

The AI confirmed:

  • Stories were complete and well‑structured
  • Acceptance criteria were explicit
  • Implementation matched the stories without scope creep

It also identified an intentional partial implementation (export formats) and recognized it as a conscious prioritization decision—not a bug.

That nuance matters.


Test Coverage Insights

The review surfaced useful, actionable gaps:

  • Missing backend tests around filtering
  • Untested fail‑safe logging paths

It also incorrectly claimed that no end‑to‑end tests existed, even though they did.

That wasn’t a failure.

It became feedback to refine the AI’s instructions so it knows where and how we structure E2E tests next time.

The system learned.


Architecture & Standards

This is where the depth really showed:

  • CQRS boundaries were validated
  • Mediator usage was confirmed
  • Async and error handling were consistent

Context matters when reviewing changes in a living codebase.


The Most Important Find: A Hidden Performance Risk

The biggest win came late in the review.

The AI identified a query that:

  • Loaded all audit records into memory
  • Applied filters only after the fact

That’s fine with a small data set.

It’s a scalability problem waiting to happen.
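The difference between the two patterns can be sketched with plain SQLite as a stand‑in for the real ORM and data store (table and column names are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE audit (id INTEGER PRIMARY KEY, action TEXT)")
conn.executemany(
    "INSERT INTO audit (action) VALUES (?)",
    [("login",), ("export",), ("login",)],
)

# Risky pattern: load every row, then filter in application memory.
# Memory and transfer costs grow with the whole table, not the result.
all_rows = conn.execute("SELECT id, action FROM audit").fetchall()
in_memory = [row for row in all_rows if row[1] == "login"]

# Pushed-down pattern: the database applies the filter, so only
# matching rows ever leave the data store.
pushed_down = conn.execute(
    "SELECT id, action FROM audit WHERE action = ?", ("login",)
).fetchall()

assert in_memory == pushed_down  # same result, very different scaling
```

Both return identical rows today; only one of them still behaves well once the audit table has millions of records.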

The recommendation wasn’t “reject the PR.”

It was:

Approve with suggestions.

Value acknowledged. Risk highlighted. Path forward clear.

That’s the tone I want in reviews.


Turning Reviews into Improvement Plans

The review didn’t stop at detection.

The AI generated a concrete remediation plan:

  1. Articulate the performance risk
  2. Push filtering down to the database
  3. Prefer ORM translation, but fall back to raw SQL if needed
  4. Add tests to lock the behavior in

That decision tree — try the standard path, escalate when necessary — mirrors how senior engineers actually think.
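
The "prefer ORM, escalate to raw SQL" step might look like this in code. This is a hedged sketch: the session class and helper names are invented, and a real ORM would signal an untranslatable query differently:

```python
class StubSession:
    """Minimal stand-in so the sketch runs; a real ORM session differs."""

    def query_audits(self, action):
        # Simulate an ORM that cannot push this filter to the database.
        raise NotImplementedError("ORM cannot translate this filter")

    def raw_sql(self, sql, params):
        # Pretend the database executed the hand-written query.
        return [{"action": params["action"]}]


def fetch_filtered_audits(session, action):
    # Preferred path: let the ORM translate the filter into a WHERE
    # clause, keeping the query portable and composable.
    try:
        return session.query_audits(action=action)
    except NotImplementedError:
        # Fallback: hand-written SQL, with the filter still applied
        # on the database side rather than in memory.
        return session.raw_sql(
            "SELECT * FROM audit WHERE action = :action", {"action": action}
        )


assert fetch_filtered_audits(StubSession(), "login") == [{"action": "login"}]
```

The point is the shape of the decision, not the stub: try the standard path first, and only escalate when it demonstrably cannot do the job.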

This is where AI stops being a critic and starts being a collaborator.


How to Try This on Your Team

You don’t need new tools to start.

You need clarity.

Here’s what’s worked for me:

  1. Describe your ideal review out loud
    Record a lead architect walking through how they wish PRs were reviewed.
  2. Turn that into AI instructions
    Persona, principles, patterns, standards — write them down.
  3. Run, critique, refine
    Treat the AI’s output like a junior reviewer you’re mentoring.

Each iteration compounds.


Final Reflection

This approach hasn’t replaced human judgment.

It’s made it sharper.

By anchoring reviews in business value and using AI to enforce architectural intent consistently, I’ve seen:

  • better conversations
  • earlier risk detection
  • less bikeshedding
  • more trust in the review process

AI isn’t here to approve code for us.

It’s here to help us think better together.

And that’s a future of code quality I’m happy to build toward.
