
Test-First vs Test-Last

When practicing Test-Driven Development, we’re supposed to write a test first. I’ve heard developers say “it doesn’t matter if we write the test before or after the implementation, as long as we do it.”

This is how I think of it:

Writing the implementation first, and then writing tests for it, sounds like implementation-driven tests. Such tests are shaped by the implementation and end up reflecting the implementation’s dependencies and how it works, unless the tests are refactored into BDD-style specs (see the differences between TDD and BDD), which is hardly ever the case.

Writing the test first, and then the implementation, drives the implementation. Test-driven development. That’s why many people (myself included) prefer to think of TDD as test-driven design, placing the focus on the design aspect of the practice.

“Why does that matter?”

Shift in perspective.

By quickly jumping into writing code, we’re also quickly distancing ourselves from the real-world problem we’re supposed to solve.

“So, there’s no value in writing tests for existing (legacy) code?”

Yes, there is. Please do so. And once the tests are written, refactor them so that they document the feature and NOT the implementation. In other words, they document why the code exists, not how it works.

I believe reading tests should go like this:

  • WHY this feature exists
  • HOW the API supports it (input/output)
  • WHAT supports the API (classes, components, methods, functions…)

In that order.
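To make that concrete, here’s a minimal, hypothetical xUnit sketch (the domain type below is made up for illustration): the class name carries the WHY, the test name the HOW, and only the body touches the WHAT.

using Xunit;

// Hypothetical domain type, defined only so the sketch compiles.
public static class ShippingCalculator
{
    public static decimal FeeFor(decimal subtotal) =>
        subtotal >= 100m ? 0m : 9.99m;
}

// WHY: the class name states why the feature exists.
public class CustomersGetFreeShippingOnLargeOrders
{
    // HOW: the test name states how the behavior shows up at the API (input/output).
    [Fact]
    public void Orders_of_100_dollars_or_more_have_no_shipping_fee()
    {
        // WHAT: only here do the concrete classes and methods appear.
        var fee = ShippingCalculator.FeeFor(subtotal: 120m);

        Assert.Equal(0m, fee);
    }
}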

What if the tests/specs are not for a business feature, but for a technical one?
Same thing: Why ➡️ How ➡️ What.

Perspective.


Is it a code smell to use Mocks in unit testing?

Maybe.

But first, let me clarify that developers asking that question usually mean “test doubles”, not “mocks”.

Some situations that might indicate a code smell in tests include:

  • using several test doubles to test one thing
  • complex test doubles setup

Both cases aren’t only an indication of a code smell in the tests; they often point to a bigger problem: a code smell in the design!

If the system under test (SUT) requires too many test doubles to satisfy its dependencies, it likely violates the Interface Segregation Principle:

A client should never be forced to implement an interface that it doesn’t use, or clients shouldn’t be forced to depend on methods they do not use.

Take the example below:

public class SomeClass(
    Dependency1 dependency1,
    Dependency2 dependency2,
    Dependency3 dependency3,
    Dependency4 dependency4)
{
    public void SomeMethod()
    {
        if (dependency1.CheckThis() &&
            dependency2.CheckThat() &&
            dependency3.CheckSomethingElse() &&
            dependency4.FinalCheck())
        {
            // do the thing...
        }
    }
}


To write a unit test for SomeMethod, we would need to mock each of the four dependencies.
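As a rough sketch (assuming NSubstitute, and assuming Dependency1 through Dependency4 are interfaces or have virtual members), the setup alone would look something like this:

using NSubstitute;
using Xunit;

public class SomeClassTests
{
    [Fact]
    public void SomeMethod_does_the_thing_when_all_checks_pass()
    {
        // Four test doubles just to get the SUT into a testable state.
        var dependency1 = Substitute.For<Dependency1>();
        var dependency2 = Substitute.For<Dependency2>();
        var dependency3 = Substitute.For<Dependency3>();
        var dependency4 = Substitute.For<Dependency4>();

        dependency1.CheckThis().Returns(true);
        dependency2.CheckThat().Returns(true);
        dependency3.CheckSomethingElse().Returns(true);
        dependency4.FinalCheck().Returns(true);

        var sut = new SomeClass(dependency1, dependency2, dependency3, dependency4);

        sut.SomeMethod();

        // Assert on whatever observable effect "do the thing" has...
    }
}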

By inspecting how those dependencies are used, we could identify a new abstraction that offers what SomeClass needs:

public class SomeClass(Dependency dependency)
{
    public void SomeMethod()
    {
        if (dependency.CanIDoTheThing())
        {
            // do the thing...
        }
    }
}


Now there’s only one dependency to deal with in tests, and it’s easier to learn what that dependency provides.
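Under the same assumptions (NSubstitute, and Dependency being an interface), the same test shrinks to a single double:

using NSubstitute;
using Xunit;

public class SomeClassTests
{
    [Fact]
    public void SomeMethod_does_the_thing_when_the_dependency_allows_it()
    {
        // One double, and its single method tells us exactly what SomeClass needs.
        var dependency = Substitute.For<Dependency>();
        dependency.CanIDoTheThing().Returns(true);

        var sut = new SomeClass(dependency);

        sut.SomeMethod();

        // Assert on whatever observable effect "do the thing" has...
    }
}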

An example of code smell

Here’s a test I ran into many years ago. According to the name of the test, it verifies that the “PUT method returns OK and PortfolioModel when service response is successful”.

When I read through the test, these considerations came to mind:

  • Number of stubs and mocks (mockPortfolioService, portModel, response, portfolioModel)
  • Overuse of Arg.Any
  • Arg.Any hiding the meaning/intent of several parameters
  • Unclear what “Configure” is all about (what does it mean to return true from it?)
  • What’s the difference between portModel and portfolioModel? Why are both needed?
  • The file had about 40 tests that looked very similar to this one in terms of mocks and stubs; a product of copy-and-paste.

After raising the issues with the developers on the team, we identified the design issues that resulted in tests having to be written that way. The tests were rewritten to isolate and call out the issue, and a design change was proposed.


Is it worth writing test code for application logic?

“Is it worth writing test code for application logic (as opposed to business logic)?”

  • Yes.
  • Not all of it.
  • Not always.

Test what yields business value.
Making the development effort more efficient may yield business value.

If application logic is directly related to business value, it needs automated tests.

If lack of tests for application logic delays development efforts (including manual testing), then it’s worth writing tests.

An example…

As a developer, I like being able to take an API contract designed by the team (the URI to the endpoint and the shape of the input and output) and write a quick test for it that we can use to make sure the endpoint works as expected.

This is what such tests look like:

On the left, we see the test. On the right, we see the expected payload and response.
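Roughly, such a test can be a minimal sketch like the one below, assuming ASP.NET Core’s standard in-memory test host (Microsoft.AspNetCore.Mvc.Testing) and a hypothetical /api/orders endpoint with a made-up payload shape:

using System;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class CreateOrderContractTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public CreateOrderContractTests(WebApplicationFactory<Program> factory) =>
        _client = factory.CreateClient();

    [Fact]
    public async Task Post_orders_accepts_the_agreed_payload_and_returns_the_agreed_response()
    {
        // The payload mirrors the contract the team agreed on (hypothetical shape).
        var payload = new { customerId = 42, items = new[] { new { sku = "ABC-1", quantity = 2 } } };

        var response = await _client.PostAsJsonAsync("/api/orders", payload);

        Assert.Equal(HttpStatusCode.Created, response.StatusCode);

        // The response must deserialize into the agreed shape.
        var body = await response.Content.ReadFromJsonAsync<CreateOrderResponse>();
        Assert.NotNull(body);
        Assert.NotEqual(Guid.Empty, body!.OrderId);
    }

    // Hypothetical contract for the response body.
    private sealed record CreateOrderResponse(Guid OrderId);
}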

This integration test verifies that:

  • The route exists
  • The JSON payload can be handled
  • The response gets serialized into the expected JSON

Not only that, it also exercises any middleware between the route and the controller: authorization, model binding, dependency registrations, etc.

We can either find a test harness or build one to make such tests easy to write, so there’s no reason not to write them.

The example above:

  • Does not need any special tool
  • Is written in plain C# and xUnit.net

In summary, when deciding what we should write tests for, “application logic” also comes into consideration.


Recommended Reading on Testing

I give many talks on Testing (more specifically, TDD, BDD, unit testing, etc.) and often get asked for recommended reading on the topic.

I’ll list here some of the resources I remember reading. Beware that I read some of them a long time ago and don’t know how well they’ve aged.

I’ll focus solely on books. I know I’ve read useful blog posts in the past, but I haven’t kept track of the ones most relevant to me, unfortunately.

Working Effectively with Legacy Code, by Michael Feathers

Amazon link

I’ve read this book twice. The first time was shortly after it came out in 2004. It had been only a couple of years since I’d learned about unit tests, and I had already felt the pain of writing tests for legacy code. I remember the book helped me a lot through that period, and many of its lessons stuck with me.

I read it a second time earlier this year (2022); I believe the book has aged well and think every developer should read it at some point in their career.

“oh, but I don’t work with legacy code”. If that’s you, just know that the code you wrote this morning may already be considered “legacy”.

Agile Principles, Patterns, and Practices in C#, by Robert C. Martin

Amazon link

This book came out in 2006, and that’s when I think I read it. I remember recommending it to many developers for several years afterward.

I loved the book because it covered OOP, Design Patterns, SOLID principles, and TDD, often writing tests before refactoring code that would eventually surface as a pattern or principle.

I have not revisited this book in many years, but I know that a lot of the things I’ve learned from it have stuck with me.

The Art of Unit Testing, by Roy Osherove

Amazon link

I’ve read this book around the same time as the other two mentioned above. The Amazon link is for the 2nd edition, published in 2013, which I haven’t read (I see there’s also a 3rd edition, with samples in JavaScript).

I recall learning things about fake objects (mocks, stubs, spies), and I remember recommending this book to other developers back then. If memory serves me right, it’s a short read and provides good information for those getting started with the practice.

Fifty Quick Ideas to Improve Your Tests, by Gojko Adzic and David Evans

Amazon link

This one I’ve read earlier this year and enjoyed it very much, as it validated many of the lessons I’ve learned and applied over the years, and it also gave me some new ideas to try out.

The RSpec Book: Behaviour-Driven Development with RSpec, Cucumber, and Friends, by David Chelimsky, Dan North, and others

Amazon link

I read this book when I moved from .NET to Ruby on Rails in 2011. I found it as I was digging deeper into Rails, Cucumber, RSpec, testing, BDD, and all that stuff, and it helped me at the time.

I am positive that things I learned there came back with me when I returned to .NET years later, and I also apply them to JavaScript, TypeScript, Cypress.io, Jest, etc.

Is that all?

Those are the books that first come to mind when people ask. I’m sure I’ve read others in the last 20 years, but I didn’t use to keep track of them the way I have in the last few years, and whichever other books I’ve read didn’t stick with me.

I’ve put out several blog posts on the topic of testing, documenting my questions, confusions, learnings, etc., and will continue to do so.

I also have a list of books on the topic on my “to read” list, and I’ll write up posts in case the books deserve it. 🙂

While the books mentioned here have helped me along the way, the single most important thing that shaped my practice has been actually doing it!



Refactoring Test Code – Free Virtual Lunch and Learn

I’m giving my “Refactoring Test Code” talk as a Free Virtual Lunch and Learn this Friday, March 12, 12-1pm CDT, as part of the Improving Virtual Events. Register here!

Here’s the talk’s description:

Most developers hear about “Red->Green->Refactor” as part of the TDD process. Some never get to the “refactor” part. Some only refactor the “production” code, but not the test code; after all, that’s “just test code”. Tests become cluttered, hard to maintain, and eventually abandoned.

In this talk, let’s have a look at some ways to refactor “test” code (C#, JavaScript/TypeScript, unit/integration/end-to-end…), so that tests become easier to read and even create opportunities for better collaboration with non-technical people.



New Talk – “Improving Code: Refactoring Test Code”

After a short hiatus, the Improving Code user group is back. The first talk of the year happens online next week, Feb 3, 6:30pm CDT. RSVP here, will ya?

Refactoring Test Code

Every developer hears about the TDD process as “Red->Green->Refactor”. Some never get to the “refactor” part. Some only refactor the “production” code, but not the test code; after all, that’s “just test code”. Tests become cluttered, hard to maintain, and eventually abandoned.

In this talk, let’s have a look at some ways to refactor “test” code (C#, JavaScript/TypeScript, unit/integration/end-to-end…).

https://www.meetup.com/Improving-Code/events/276006936


Virtual Lunch and Learn this week: Testing in Agile

I’m giving a Virtual Lunch and Learn talk this Friday, June 26, at 12pm Central Time. You may register here!

This has been my favorite talk for the last three years or so. I’m going through the content and updating it to reflect feedback I got during this time. I hope to see some of you there!

Testing in Agile: from Afterthought to an Integral Part

Many who try to start automating tests end up giving up the practice. Those tests seem really helpful at the beginning, but are abandoned over time. Even the practice of Test-Driven Development (TDD) faces similar issues, with many giving it up. 

How do long-time practitioners do it? Or, perhaps more importantly, why do they do it? 

Let me share my experiences in this area, starting with unit tests way back in 2004, navigating through lessons learned the hard way, and ending with my current approach to automated tests, code coverage, TDD/BDD, and how I use those techniques to bring together developers, QA, UX, Product Owners, and Business Analysts.


Why are you Writing Tests?!

I don’t think I’ve ever met a developer who hasn’t had to answer this question: “Why are you writing tests?!”. Some have given up the practice because they grew tired of that; others have moved on to places where they don’t have to fight this uphill battle. Fortunately, we also have developers such as my fellow Improver Harold, who believes in and follows the practice, and can articulate many of the reasons why we test from a developer’s point of view.

I have heard many reasons why tests are NOT written and I plan on writing individual posts to tackle those at a later time. For this post, I’d like to offer you my thoughts and answer to the initial question.

As with most developers, my answer used to be along the lines of “I write tests to make sure my code works!”. That answer evolved to incorporate “…and it also allows me to refactor my code. Look at how clean my code looks now!”.

However, bugs would still show up in my fully-tested code. Other developers would also have trouble fixing them because they couldn’t understand my tests.

After several years of that, I started seeing why it was so hard to get people’s buy-in on testing. Did you notice the word I kept emphasizing in the previous paragraphs? Yup, “my”. I was making it all about me.

Many people use the following analogy to justify writing tests: “Doctors scrub their hands before working with a patient, because that’s the right thing to do!”, or something along those lines.

Do the doctors do it for themselves? Nope.

So the short answer to our initial question here (“Why are you writing tests?”) should be: I am doing that for you!

Or, a slightly longer elaboration:
I am doing it to make sure what we are building reflects the needs of the business as we understand it now.

This inversion in the motivation changes the dynamics of the relationship considerably; if our practices bring value to others, we’re way more likely to get their buy-in.

This realization didn’t come to me overnight. As I check out my posts on testing, I realize the first one dates back to 2008 and in it I say it was 2003 when I first heard of unit tests. Maybe my motivation shifted when I went from Arrange-Act-Assert to Given-When-Then. From that, the next step had to be the “No GWT? No code!” approach.

To wrap up this post, I’ll drop the quote I have on my business card:

“What you do matters, but WHY you do it matters much more.” – unknown

And also

“People don’t buy what you do, they buy why you do it.” – Simon Sinek


Are you Testing Someone Else’s Code?

We normally hear that we should only be writing tests for our code, not someone else’s (external libraries or APIs). For example, if we are writing tests for code that calls out to an API, the test should mock the dependency on the API and only verify that the calls are made as expected and that the results are handled as expected. The test should NOT verify that the API produces the appropriate results for a given input; that would be testing someone else’s code.
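For instance, a unit test along these lines (NSubstitute assumed, and all of the names below are hypothetical) stubs the API client and checks only our side of the interaction:

using NSubstitute;
using Xunit;

// Hypothetical port to the external API, owned by our code.
public interface IWeatherApiClient
{
    decimal GetTemperature(string city);
}

public class GreetingService(IWeatherApiClient weatherApi)
{
    public string GreetingFor(string city) =>
        weatherApi.GetTemperature(city) >= 25m ? "Stay cool!" : "Have a nice day!";
}

public class GreetingServiceTests
{
    [Fact]
    public void Uses_the_temperature_from_the_api_to_pick_a_greeting()
    {
        // The external API is replaced by a double; we only verify how WE use it.
        var weatherApi = Substitute.For<IWeatherApiClient>();
        weatherApi.GetTemperature("Austin").Returns(30m);

        var sut = new GreetingService(weatherApi);

        Assert.Equal("Stay cool!", sut.GreetingFor("Austin"));
        weatherApi.Received(1).GetTemperature("Austin");
    }
}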

I agree with all of that; for unit tests.

However, I’d still consider writing integration tests against that API, NOT to test someone else’s code, but to document our assumptions about the API.

Why? Because our code relies on those assumptions. That’s a dependency.

What happens if the API implementors decide to make changes that break our assumptions? Without integration tests validating those assumptions, all our unit tests would still pass, but we could end up with either defects or inaccurate results in production (which could go unnoticed for a long time).
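As a sketch (the vendor URL, endpoint, and response shape below are hypothetical), such an assumption-documenting test might look like this:

using System;
using System.Collections.Generic;
using System.Net;
using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;
using Xunit;

public class ExchangeRateApiAssumptions
{
    private static readonly HttpClient Client = new()
    {
        BaseAddress = new Uri("https://api.example.com/")
    };

    [Fact]
    public async Task Rates_endpoint_returns_a_base_currency_and_a_rates_map()
    {
        var response = await Client.GetAsync("v1/rates?base=USD");

        // Our code assumes a 200 with this shape; if the vendor changes it, this test tells us.
        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        var body = await response.Content.ReadFromJsonAsync<RatesResponse>();
        Assert.Equal("USD", body!.Base);
        Assert.True(body.Rates.ContainsKey("EUR")); // we rely on EUR always being present

    }

    // The subset of the vendor's response our code actually depends on.
    private sealed record RatesResponse(string Base, Dictionary<string, decimal> Rates);
}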

Another benefit of writing such tests is that, should a new version of the API come out, evaluating the risk of consuming the new version becomes much simpler: just run the tests against it.

Last but not least, say an API offers a large variety of features that could be used; having tests that describe how we use that API makes it much easier for developers to learn the details of how we depend on it. Such understanding, again, helps with both assessing risks when consuming different versions of the API, as well as assessing a potential replacement of the API.

Dependency management is very important!


What should we write tests for?

Another common question I get from developers who are starting to get into testing (or even from devs who have been doing it for a while): “how do you decide what to write tests for?”.

This question normally applies to brownfield cases (existing codebase). There’s already a lot of code there. Where do we even start? Yes, maybe we write tests for the new code, but what about the existing one?!

Here’s my personal technique for it. When working with an existing codebase, I’ll ask the business:

What is the single most important feature of this product?
Think of the feature that, if broken, will either cause the business to lose money or not make money.

THAT is where we start. Those are our must-have tests.

Once the most important features have been covered by tests, the next question is:

What is the feature or area of the system that, when developers are told they need to make changes to it, makes them feel like running away?

That’ll usually surface areas where the code is a mess: complex, convoluted. Hence, it needs tests, so developers can feel safe making changes to it and refactoring it. We only move on to this question once the features that came out of the first question above have been covered by tests.
