Posts Tagged testing

TDD? Get a coach!

Good old fellow speaker Jim Holmes had a great addition to my recent post on TDD tips for scrum teams:

Get a coach to embed with the team–someone who’s got practical, hands-on expertise in implementing TDD in legacy systems.

Trying to get TDD rolling on existing codebases is hard, hard, hard. Having a guide to help the team figure out how to succeed at TDD is critical.

I couldn’t agree more.

I was once asked to work with a team to help them improve their TDD practice. Except that they weren’t practicing TDD yet.

The team had taken a TDD class and started to write as many tests as they could. They said: “Our test files are very cluttered, very brittle.” Yes, they were.

As the team described what they were experiencing, it became clear that the tests were being written after the feature implementation. Further inspection of the test files confirmed that, with signs of copy-and-paste throughout and across several files.

Had the team stayed on that path, they’d quickly have abandoned tests altogether, as it was getting too hard to maintain the existing tests and write new ones.

By getting a few sprints of coaching, they learned to:

  • refactor the existing tests
  • identify flaws in the software’s current design and implementation
  • start developing the mindset of test-driven design

Start with a great training course, commit to applying what you’ve learned, and then follow it up with great coaching.



Evidence that TDD is better

🚨 Spoiler alert: the evidence you seek isn’t likely to be found here!

There are a number of studies available on the web with numbers, charts, etc. This post does NOT include any of that.

For readers who enjoy geeking out on studies and research, there are some great books out there that include some of that.

What follows are some of my current thoughts on the topic at hand.

Weeks ago I shared some thoughts on this question I’ve heard many times over the years: Is TDD something you do sometimes or all the time? It generated a great conversation with an old fox who has been stalking me for several years. 😉

It started like this:

I’ve never seen good evidence that TDD is better. Oh sure, there are opinions, but not concrete evidence.

That’s a common comment, so I wanted to explore it more.

As a guitar player, I’m often asked questions like “who do you think is better: Tony Iommi or James Hetfield? Yngwie J. Malmsteen or Eddie Van Halen?”

At one point in my life, I’d answer Iommi. At a later point, Hetfield. Then, at another point, Malmsteen. And depending on the day, Van Halen.

It’s all contextual. I’ve listened to a lot of music by those four guys since the mid-’80s. The circumstances, the people who were with me, my personal experiences, all of that influenced who I thought was better at one time or another.

But better at what?!

  • Record sales?
  • Influence over new bands and/or guitarists?
  • Number of hit songs?
  • Number of memorable riffs and/or guitar solos?
  • Career longevity?
  • Guitar playing technique? Innovation?
  • Caring for their fans?

As the old fox said, “oh sure, there are opinions…”

Over the years, I couldn’t even agree with my own opinion, let alone the opinions of others. So who is better?

The fox says “…but not concrete evidence.”

Wait; what kind of evidence are we looking for? I think we’d only know that if we could answer the “better at what?” question. In the case of guitarists, if we’re considering “better at selling records”, that’s something measurable.

But who’s better: the best-selling or the most influential guitarist? The latter? But most influential at what?

At some point, I gave up on answering those questions. I’ve let it go. Depending on a number of factors at a single moment in time, I will deliberately choose one guitarist over another.

But I digress…

So, is TDD better?

Better at what? Better than what?

  • Better at decreasing bugs?
  • Better at increasing code quality?
  • Better at bringing clarity to our thoughts before we decide how we want to implement something?

Or…

  • Better than not practicing TDD?
    (I can’t come up with any other options here…)

If we manage to add more context to the question, then we can look at the next one, “is there any evidence?”, and define what kind of evidence we’re looking for, and finally look at how we could possibly measure it.

Say the question is, “Is TDD better at delivering software faster than not practicing TDD?”

If we’re looking at it only in the context of a very short period, we may find that it is not.

If we’re looking at a longer period of time, I’d bet it is better.

But we should figure out what we’re measuring besides the time spent writing tests. For example:

  • time spent reading code until we feel confident to change it
  • time spent on a debugger troubleshooting issues

The old fox is wise; one needs to try something before reaching any conclusions:

“I’ve tried it. I can’t say I saw an improvement in the code or fewer bugs. There’s no way to accurately measure that.”

How do we see improvement in code?

There are code metrics we can use, such as cyclomatic complexity (CC), but can one see improvement in code if CC decreases? If developers working on the code can’t yet appreciate low CC, they don’t see it as an improvement.

Occasional music listeners may not appreciate improvements in remastered albums. Many times we need to be trained so we can start seeing (perceiving?) things as improvements. As someone who dabbles in playing classical guitar, I found that Julian Bream’s Masterclass videos made me hear and see nuances I was respectively deaf and blind to before.

The fox continues:

But when I wrote code I refactored a lot, not just the code I was writing, but the existing code around it. So that could have an effect on quality.

That reminded me of Michael Feathers’ Working Effectively with Legacy Code, when he talks about pinch points (here’s a good short description).

But the old fox wasn’t finished:

I think there are enough developers out there that don’t know how to do unit testing properly. If we want to turn them into TDD devs, first they need to learn how to write good unit tests. Here’s an example: At my last job, I was moved to a team that had little unit testing. They owned code that I was told “It can’t be unit tested.” I showed them it could. Once they’ve learned how to effectively unit test, only then should they be pushed to TDD. At least, that’s my opinion.

BINGO! That resonates a ton with my own experience. Years ago I even gave a talk titled “I Cannot Write Tests for That!”: UI Edition. I’ve had several similar experiences coaching developers into the practice just like the fox described. I draw great satisfaction from helping developers through that journey, and more often than not, I don’t have to “push them to TDD”; they are the ones “pulling for it”!

Final thoughts for now…

If I am to look for “evidence that TDD is better”, I think of what works for me and those around me:

  • The time spent designing potential solutions to problems is time well spent
    • and it is shorter than the back and forth of implementing features without designing them
  • It’s very satisfying to be able to refactor implementation when there are good specs to back it up
  • The level of willingness a development team shows to embrace changes
  • It feels very pleasant and productive to collaborate with others and have conversations using well-crafted specs

Now if you’ll excuse me, I have some TDDing to do.


TDD tips for scrum teams

“Any tips for getting your Scrum team on board with practicing TDD as a team?”

Yes!

  • It starts with the individual
  • Lunch and Learns
  • Try it with one small story
  • Code Review
  • Ping-Pong Pair Programming
  • Divide and Conquer
  • Book Clubs

Expanding each of those points…

It starts with the individual

A few common situations that prevent teams from adopting TDD include:

  • Can’t make time in current project
  • Legacy system that makes it very hard to practice TDD
  • Team members not willing to try it
  • Lack of support from the business

Those situations should NOT prevent an individual from doing it.

Practice TDD on your own time so you can build your skills.

If others see you do it, they may join you. If they do, great. If they don’t, you are still growing.

Lunch and Learn

If you decide to do it on your own, offer lunch and learn sessions for your team to share your experiences, struggles, and successes.

The purpose is not to wait until you become an expert; share it as you learn.

Make it a recurring meeting.

Write down ahead of time the things you’d like to share (including how and why you failed, and how you’ve overcome it – or not!).

People aren’t showing up, or they look uninterested? Consider putting the material out as blog posts. Why? There are always people out there who will relate to your struggles and successes.

Try it with one small story

Pick one small story, or a small piece of a story, and commit to doing TDD.

You may fail (many times). You may succeed. Either way, share your findings with the team:

  • Was it taking too long?
  • Why?
  • Lack of knowledge? Practice?
  • Difficulties with the legacy code?
  • What kind of difficulties?
  • Too many dependencies?

Work as a team to figure out how the hurdles could be overcome.
Your weaknesses might be someone else’s strengths. And vice-versa.

Share the experience at the sprint retrospective. Figure out the next step and commit to it.

Code Review

When doing code review, start by reviewing tests/specs.

When writing tests first, consider asking for a code review before writing the implementation, to make sure you have a good understanding of the problem that needs solving.

During that review, share any difficulties you see. The reviewer might know how to help you. And if not, you may find a clear path ahead of you just by articulating your thoughts and sharing them with someone else.

Once you’re done with the implementation, ask for another code review. This time, maybe share how you’ve addressed the difficulties. Also, maybe discuss ways the test code could be improved.

Ping-Pong Pair Programming

Consider ping-pong pair programming: one person writes a test, the other one writes the implementation. Then swap.

Set a time-box. Do not let interruptions get in the way.

Let others know what the pair is up to so they can help avoid interruptions during the time-box.

Share the lessons learned with the team.

Divide and Conquer

Work as a team.

Divide the challenges so that each team member can focus on learning one thing, and then share the findings with the team.

Here are some ideas on what to learn:

  • Test frameworks for the tech stack
  • Testing legacy code
  • Tools such as Cypress.io, Cucumber, SpecFlow, Selenium, etc.
  • How to test code that makes heavy use of libraries or frameworks such as Angular, React, Mass Transit, etc.
  • How to write better specifications in Given-When-Then

Book Clubs

Run book clubs!

Build knowledge and skill together as a team.

Choose a book that seems to fit the team’s current skills, set a cadence (maybe once a week during lunch breaks?), start reading, and discuss the findings together.

Here are a few books you may want to consider: Recommended Reading on Testing

In Summary

I have used all of these techniques. Still do.

I pick and choose whichever one works better depending on my current situation. Sometimes the one I pick doesn’t work on a given team. I drop it, and try another one.

Whether the team thinks of TDD as Test-Driven Development or Test-Driven Design, and whether they use the term test or spec, depends on the team’s maturity. Different people, different backgrounds, different ways to learn.

It all starts with one person. Do not wait for that person. Be that person.



Is it a code smell to use Mocks in unit testing?

Maybe.

But first, I’ll start by clarifying that developers asking that question usually mean “test doubles” rather than “mocks”.

Some situations that might indicate code smell in tests include:

  • using several test doubles to test one thing
  • complex test doubles setup

Both cases aren’t only an indication of a code smell in the test; they often indicate a bigger problem: code smell in the design!

If the system under test (SUT) requires too many test doubles to satisfy its dependencies, it likely violates the Interface Segregation Principle:

A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use.

Take the example below:

public class SomeClass(
    Dependency1 dependency1,
    Dependency2 dependency2,
    Dependency3 dependency3,
    Dependency4 dependency4)
{
    public void SomeMethod()
    {
        if (dependency1.CheckThis() &&
            dependency2.CheckThat() &&
            dependency3.CheckSomethingElse() &&
            dependency4.FinalCheck())
        {
            // do the thing...
        }
    }
}

 

To write a unit test for SomeMethod, we would need to mock each one of the 4 dependencies.
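As a minimal sketch of what that test could look like (assuming the dependencies are interfaces, or otherwise substitutable, and using NSubstitute with xUnit.net; the original post doesn’t show the test or name a mocking library):

using NSubstitute;
using Xunit;

public class SomeClassTests
{
    [Fact]
    public void SomeMethod_does_the_thing_when_all_checks_pass()
    {
        // One test double per dependency, just to satisfy the constructor...
        var dependency1 = Substitute.For<Dependency1>();
        var dependency2 = Substitute.For<Dependency2>();
        var dependency3 = Substitute.For<Dependency3>();
        var dependency4 = Substitute.For<Dependency4>();

        // ...and one stubbed call per dependency, just to get past the if-block.
        dependency1.CheckThis().Returns(true);
        dependency2.CheckThat().Returns(true);
        dependency3.CheckSomethingElse().Returns(true);
        dependency4.FinalCheck().Returns(true);

        var sut = new SomeClass(dependency1, dependency2, dependency3, dependency4);

        sut.SomeMethod();

        // Assert on whatever observable outcome "do the thing" produces...
    }
}

That’s a lot of ceremony before the test even gets to the behavior we care about.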

By inspecting how those dependencies are used, we could identify a new abstraction that offers what SomeClass needs:

public class SomeClass(Dependency dependency)
{
    public void SomeMethod()
    {
        if (dependency.CanIDoTheThing())
        {
            // do the thing...
        }
    }
}

 

Now there’s only one dependency to deal with in tests, and it’s easier to learn what that dependency provides.
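Under the same assumptions as the earlier sketch, the test shrinks to something like:

using NSubstitute;
using Xunit;

public class SomeClassTests
{
    [Fact]
    public void SomeMethod_does_the_thing_when_it_can()
    {
        // A single test double, expressing exactly what SomeClass needs.
        var dependency = Substitute.For<Dependency>();
        dependency.CanIDoTheThing().Returns(true);

        var sut = new SomeClass(dependency);

        sut.SomeMethod();

        // Assert on the observable outcome of "do the thing"...
    }
}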

An example of code smell

Here’s a test I ran into many years ago:

According to the name of the test, it verifies that the “PUT method returns OK and PortfolioModel when service response is successful“.

When I read through the test, these considerations came to mind:

  • Number of stubs and mocks (mockPortfolioService, portModel, response, portfolioModel)
  • Overuse of Arg.Any
  • Arg.Any hiding the meaning/intent of several parameters
  • Unclear what “Configure” is all about (what does it mean to return true from it?)
  • What’s the difference between portModel and portfolioModel? Why are both needed?
  • The file had about 40 tests that looked very similar to this one in terms of mocks and stubs; a product of copy-and-paste.

After raising the issues with the developers on the team, we identified the design issues that had led to tests being written that way. The tests were rewritten to isolate and call out the issues, and a design change was proposed.


Is it worth writing test code for application logic?

“Is it worth writing test code for application logic (as opposed to business logic)?”

  • Yes.
  • Not all of it.
  • Not always.

Test what yields business value.
Making the development effort more efficient may yield business value.

If application logic is directly related to business value, it needs automated tests.

If lack of tests for application logic delays development efforts (including manual testing), then it’s worth writing tests.

An example…

As a developer, I like being able to take an API contract designed by the team (the URI to the endpoint and the shape of the input and output) and write a quick test for it that we can use to make sure the endpoint works as expected.

This is what such tests look like:

On the left, we see the test. On the right, we see the expected payload and response.
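As a rough sketch of the shape of such a test (not the original one), assuming an ASP.NET Core application whose Program class is visible to the test project, and the WebApplicationFactory harness from Microsoft.AspNetCore.Mvc.Testing; the route, payload, and model names are made up for illustration:

using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class CreateOrderEndpointTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public CreateOrderEndpointTests(WebApplicationFactory<Program> factory) =>
        _client = factory.CreateClient();

    [Fact]
    public async Task Post_order_returns_OK_with_the_expected_response_body()
    {
        // The payload agreed upon in the API contract (illustrative shape).
        var payload = new { CustomerId = 42, Items = new[] { new { Sku = "ABC", Quantity = 1 } } };

        var response = await _client.PostAsJsonAsync("/api/orders", payload);

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        // The response body agreed upon in the contract (illustrative shape).
        var body = await response.Content.ReadFromJsonAsync<OrderCreatedResponse>();
        Assert.NotNull(body);
        Assert.Equal(42, body!.CustomerId);
    }

    // Illustrative contract type; the real one lives in the application.
    public record OrderCreatedResponse(int OrderId, int CustomerId);
}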

This integration test verifies that:

  • The route exists
  • The JSON payload can be handled
  • The response gets serialized into the expected JSON

But not only that, it also verifies any middleware that sits between the route and the controller: things like authorization, model binders, dependency registrations, etc.

We either find a test harness, or build one, to make such tests easy to write, so there’s no reason not to write them.

The example above:

  • Does not need any special tool
  • Is written in plain C# and xUnit.net

In summary, when deciding what we should write tests for, “application logic” also comes into consideration.


Is TDD something you do sometimes or all the time?

That’s another common question: Is TDD something you do sometimes or all the time?

The short answer is neither. Or, “it depends”.

But let’s explore the long answer…

When I started learning TDD, yes, I’d do it all the time.

“But have you always worked on projects where you could do TDD all the time?!”

Most certainly I have NOT!

There are times when I can’t do TDD on a project.
That doesn’t prevent me from still doing it on the side.
I learn and practice it on my own time.

TDD became something I do most of the time, but never all the time.

So when do I NOT do TDD?

There are situations when I specifically choose not to do TDD. Here are some that come to mind…

Exploring a library or framework

When trying to learn what a library or framework can do for me, and how I’m supposed to use it.

Once I identify the ways I’ll use it, I often refactor those exploration tests into Given-When-Then, documenting my understanding and assumptions, so I can later remember which parts of it I’m using.

That approach also provides a safety net when consuming updates to those packages (identifying breaking changes and such).
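For example, here’s the kind of learning test I might keep around after exploring a serializer; System.Text.Json is used here purely as an illustration:

using System.Text.Json;
using Xunit;

// Documents the serializer behavior my code relies on, so a package update
// that changes it will show up as a failing spec.
public class Given_a_model_with_a_null_property
{
    private readonly string _json =
        JsonSerializer.Serialize(new { Name = (string?)null, Age = 42 });

    [Fact]
    public void Then_the_null_property_is_still_written_by_default() =>
        Assert.Equal("{\"Name\":null,\"Age\":42}", _json);
}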

Exploring approaches

Sometimes I need to implement features that currently lack clarity, so I want to gather feedback from stakeholders as soon as possible. I may try a few different approaches and won’t do TDD.

I will, however, use BDD (Behavior Driven Development) to describe, to the best of my ability, the feature we’re building.

Solving small problems

If a problem is too small and yields very little value, I may skip TDD.

Pitfalls of TDD

Remember Design Patternitis? That’s something most of us face when we learn Design Patterns; we start trying to apply them everywhere! As mentioned earlier, TDD is not something to be done always, everywhere, every time.

Another situation I see often is “copy-and-paste inheritance”; tests are initially written carefully following TDD, but then every new test comes from copy-and-paste, without any effort going into refactoring the test code. This is a pitfall that happens in most automated tests, and tests written following TDD can also suffer from it.

But BDD…

I try to do BDD most of the time, even if it means writing the stories and scenarios on a napkin.

Many times I don’t even know how I’ll actually implement the specs/tests, but I still write those English sentences before trying to write any code. No GWT, No Code!
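To make that concrete, the napkin scenario might land in the codebase as empty, skipped specs long before any production code exists. A sketch (the scenario itself is made up):

using Xunit;

// Scenario: a returning customer gets free shipping
public class Given_a_returning_customer_with_an_order_over_fifty_dollars
{
    [Fact(Skip = "GWT written first; no code yet")]
    public void When_the_order_is_placed_then_shipping_is_free() { }

    [Fact(Skip = "GWT written first; no code yet")]
    public void When_the_order_is_placed_then_the_total_excludes_shipping() { }
}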


Differences between TDD and BDD

So what’s the difference between Test-Driven Development (TDD) and Behavior-Driven Development (BDD)? I’ve written about that before, but I think that post lacks something more illustrative.

For one thing, I forgot to mention: I’m in the camp that thinks of the last D as design, instead of development. That matters.

I’ve heard developers say things like:

  • “I don’t do TDD anymore, now I’m BDD all the way!”, or
  • “I only do TDD, because I can’t use Cucumber, SpecFlow, or any one of those BDD tools.”

Here’s how I see it…

At some point, I learned to write unit/integration tests in this manner:
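A representative sketch of that style (not the original example), in plain C# and xUnit.net with the classic Arrange-Act-Assert layout:

using System.Collections.Generic;
using Xunit;

public class StackTests
{
    [Fact]
    public void Pop_returns_the_last_item_pushed()
    {
        // Arrange
        var stack = new Stack<string>();
        stack.Push("first");
        stack.Push("last");

        // Act
        var popped = stack.Pop();

        // Assert
        Assert.Equal("last", popped);
    }
}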

Through TDD, I learned to write such tests before writing the actual code.

Then I learned about “BDD-style” tests, so I refactored tests such as the one above into the one below:
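Again as a representative sketch rather than the original example, here is the same test reworked into a Given-When-Then shape, still plain C# and xUnit.net:

using System.Collections.Generic;
using Xunit;

public class Given_a_stack_with_two_items
{
    private readonly Stack<string> _stack = new();
    private string _popped = "";

    public Given_a_stack_with_two_items()
    {
        _stack.Push("first");
        _stack.Push("last");

        When_an_item_is_popped();
    }

    private void When_an_item_is_popped() => _popped = _stack.Pop();

    [Fact]
    public void Then_it_is_the_last_item_pushed() => Assert.Equal("last", _popped);

    [Fact]
    public void Then_the_first_item_remains_on_top() => Assert.Equal("first", _stack.Peek());
}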

That was my transition from Arrange-Act-Assert (AAA) to Given-When-Then (GWT).

Then I learned that BDD and TDD go hand-in-hand:

No special tools, languages, test frameworks. In the example above, just plain C# and xUnit.net.

In summary:

  • BDD: specifying the desired outcome
  • TDD: specifying the desired approach to achieve the desired outcome


TDD and Legacy Code

 

Is Test Driven Development (TDD) practical when working with a lot of legacy code?

Yes, it is.

There’s an opportunity to practice Test Driven Development (TDD) whenever we write new code in a legacy system; for example, when writing a new component, service, or feature that is going to be called from the legacy system.

But what if the new code is to be written right in the middle of existing legacy code?

There’s still an opportunity for TDD.

Think about what the new code is supposed to do.

Maybe it’s a new if-block that checks some variables in the existing code. That if-block asks the system a question; could that question be implemented as a method, a function, or similar approach, and then called from the existing code?
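As a hypothetical sketch of that idea (the domain and names are made up), the question buried in the legacy if-block becomes a small, test-driven class that the old code then calls:

using Xunit;

// The question the legacy if-block was asking, extracted and designed test-first.
public class OrderEligibility
{
    public bool CanShip(bool isPaid, bool isInStock, bool hasValidAddress) =>
        isPaid && isInStock && hasValidAddress;
}

public class OrderEligibilityTests
{
    [Fact]
    public void Cannot_ship_an_unpaid_order() =>
        Assert.False(new OrderEligibility().CanShip(isPaid: false, isInStock: true, hasValidAddress: true));

    [Fact]
    public void Can_ship_a_paid_order_that_is_in_stock_with_a_valid_address() =>
        Assert.True(new OrderEligibility().CanShip(isPaid: true, isInStock: true, hasValidAddress: true));
}

The if-block in the legacy code then shrinks to a single call to CanShip(...), which was designed and implemented test-first.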

Or maybe the new lines will perform some sort of task, which is an opportunity to follow TDD to design and implement the new task, and then call it from the legacy code.

What about testing the existing legacy code?

There’s a possibility that the current state of the legacy code is so poor that even changing it to call any new code is too hard, so a decision is made to only change the old code, leaving no room for TDD.

If that’s the case, there’s still value in at least trying to write characterization tests for the existing code. That’s one scenario where using code coverage can yield benefits, as opposed to the bad way many people use it.
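A characterization test doesn’t assert what the code should do; it pins down what it does today. A made-up sketch:

using Xunit;

// Imagine this method already exists, deep in the legacy system.
public static class LegacyPricing
{
    public static decimal CalculateTotal(decimal subtotal, int loyaltyYears)
    {
        if (loyaltyYears > 2) subtotal -= subtotal * 0.07m;
        return subtotal;
    }
}

public class LegacyPricingCharacterizationTests
{
    // The expected values were captured by running the existing code,
    // not taken from a spec; a failure means the behavior changed.
    [Fact]
    public void No_discount_below_three_loyalty_years() =>
        Assert.Equal(100m, LegacyPricing.CalculateTotal(100m, loyaltyYears: 0));

    [Fact]
    public void Seven_percent_discount_from_three_loyalty_years_on() =>
        Assert.Equal(465m, LegacyPricing.CalculateTotal(500m, loyaltyYears: 3));
}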

The existing system may be massive, so it’s important to know what we should write tests for.

What do we get from that?

The experience acquired with writing tests for legacy code makes us stronger TDD practitioners; as we experience the pain of writing tests for code that wasn’t implemented and designed with testability in mind, we also apply those lessons when designing new solutions.

But what if I’m working on a greenfield project and there’s no legacy code here?

Isn’t there? Are you sure? Many developers think of legacy code as code written a long, long time ago, often in defunct languages.

Michael Feathers defined legacy code as “code without tests”.

I’ve learned to think of legacy code as “code nobody wants to deal with, with or without tests”.

Sometimes a decision is made NOT to write tests (topic for another post). We can still write the code in ways that people won’t feel compelled to run away from it, even if there are no tests yet.

Sometimes we write tests within a sprint, but we do a poor job at writing those, so tests become “code nobody wants to deal with”.

In case you haven’t already, make sure to read and apply techniques from Michael Feathers’ Working Effectively with Legacy Code.


Recommended Reading on Testing

I give many talks on Testing (more specifically, TDD, BDD, unit testing, etc.) and often get asked for recommended reading on the topic.

I’ll list here some of the resources I remember reading. Beware that I read some of them a long time ago and do not know how well they’ve aged.

I’ll focus solely on books. I know I’ve read useful blog posts in the past, but unfortunately I haven’t kept track of the ones most relevant to me.

Working Effectively with Legacy Code, by Michael Feathers

Amazon link

I’ve read this book twice. The first time was shortly after it came out in 2004. It had been only a couple of years since I had learned about unit tests, and I had already felt the pain of writing tests for legacy code. I remember the book helped me a lot through that period, and many of its lessons stuck with me.

I read it a second time earlier this year (2022). I believe the book has aged well, and I think every developer should read it at some point in their career.

“Oh, but I don’t work with legacy code.” If that’s you, just know that the code you wrote this morning may already be considered “legacy”.

Agile Principles, Patterns, and Practices in C#, by Robert C. Martin

Amazon link

This book came out in 2006, and that’s when I think I read it. I remember recommending it to many developers for several years afterward.

I loved the book because it covered OOP, Design Patterns, SOLID principles, and TDD, often writing tests before refactoring code that would eventually surface as a pattern or principle.

I have not revisited this book in many years, but I know that a lot of the things I’ve learned from it have stuck with me.

The Art of Unit Testing, by Roy Osherove

Amazon link

I’ve read this book around the same time as the other two mentioned above. The Amazon link is for the 2nd edition, published in 2013, which I haven’t read (I see there’s also a 3rd edition, with samples in JavaScript).

I recall learning things about fake objects (mocks, stubs, spies), and I remember recommending this book to other developers back then. If memory serves me right, it’s a short read and provides good information for those getting started with the practice.

Fifty Quick Ideas to Improve Your Tests, by Gojko Adzic and David Evans

Amazon link

This one I read earlier this year and enjoyed very much, as it validated many of the lessons I’ve learned and applied over the years, and it also gave me some new ideas to try out.

The RSpec Book: Behaviour-Driven Development with RSpec, Cucumber, and Friends, by David Chelimsky, Dan North, and others

Amazon link

I read this book when I moved from .NET to Ruby on Rails in 2011. I found it as I was digging deeper into Rails, Cucumber, RSpec, testing, BDD, and all that stuff, and it helped me at the time.

I am positive that things I learned there came with me when I returned to .NET years later, and I also apply them to JavaScript, TypeScript, Cypress.io, Jest, etc.

Is that all?

Those are the books that first come to mind when people ask. I’m sure I’ve read others in the last 20 years, but I didn’t use to keep track of them the way I have been in the last few years, and whichever other books I’ve read didn’t stick with me.

I’ve put out several blog posts on the topic of testing, documenting my questions, confusions, learnings, etc., and will continue to do so.

I also have a list of books on the topic on my “to read” list, and I’ll write up posts in case the books deserve it. 🙂

While the books mentioned here have helped me along the way, the single most important thing that shaped my practice has been actually doing it!



Virtual Lunch and Learn this week: Testing in Agile

I’m giving a Virtual Lunch and Learn talk this Friday, June 26, at 12pm Central Time. You may register here!

This has been my favorite talk for the last three years or so. I’m going through the content and updating it to reflect feedback I got during this time. I hope to see some of you there!

Testing in Agile: from Afterthought to an Integral Part

Many who try to start automating tests end up giving up the practice. Those tests seem really helpful at the beginning, but are abandoned over time. Even the practice of Test-Driven Development (TDD) faces similar issues, with many giving it up. 

How do long-time practitioners do it? Or, perhaps more importantly, why do they do it? 

Let me share my experiences in this area, starting with unit tests way back in 2004, navigating through lessons learned the hard way, and ending with my current approach to automated tests, code coverage, TDD/BDD, and how I use those techniques to bring together developers, QA, UX, Product Owners, and Business Analysts.
