Archive for May, 2020
Here’s a set of questions I’d like to ask every Scrum adopter out there (myself included). Think of how you live your life:
- Do you create a backlog?
- Do you refine it?
- Do you prioritize it?
- Do you plan what you’ll do and how you’ll do it?
- Do you check on a daily basis how things are going?
- Do you review your results?
- Do you look at it retrospectively to see how you can do better?
Answering no to any of the questions above should prompt us to reflect: how can we recommend Agile/Scrum to others, then?
We need to figure out what’s important to our own life, figure out what is valuable to us, where we want to get to, and then set up a backlog. Then we should refine it, by adding more information to it, making it clear to ourselves why those things are of value to us. Armed with information, we can then prioritize it.
With a prioritized backlog in place, we can plan what we’ll do and how we’ll do it.
Now that we’re doing it, we should check on a daily basis how things are going.
At the end of the day, week, month, year, we should review our results.
Finally, we should look at our results in retrospective and make corrections as needed.
If that’s what we decided to do for work, it should also be what we decide to do for life.
If you need some ideas on how to do that, here you go:
* Organizing my daily, weekly, monthly, quarterly, yearly plans
* Planning and reviewing my day
* My annual reviews
* Bonus: there are several more related posts under the Lifestyle category of my blog
I don’t think I’ve ever met a developer who hasn’t had to answer this question: “Why are you writing tests?!”. Some have given up the practice because they grew tired of that; others have moved on to places where they don’t have to fight this uphill battle. Fortunately, we also have developers such as my fellow Improver Harold, who believes in and follows the practice, and can articulate many of the reasons why we test from a developer’s point of view.
I have heard many reasons why tests are NOT written and I plan on writing individual posts to tackle those at a later time. For this post, I’d like to offer you my thoughts and answer to the initial question.
As with most developers, my answer used to be along the lines of “I write tests to make sure my code works!”. That answer evolved to incorporate “…and it also allows me to refactor my code. Look at how clean my code looks now!”.
However, bugs would still show up in my fully tested code. Other developers would also have trouble fixing them because they couldn’t understand my tests.
After several years of that, I started seeing why it was so hard to get people’s buy-in on testing. Did you notice the recurring word in the previous paragraphs? Yup, “my”. I was making it all about me.
Many people use the following analogy to justify writing tests: “Doctors scrub their hands before working with a patient, because that’s the right thing to do!”, or something along those lines.
Do the doctors do it for themselves? Nope.
So the short answer to our initial question here (“Why are you writing tests?”) should be: I am doing that for you!
Or, a slightly longer elaboration:
I am doing it to make sure what we are building reflects the needs of the business as we understand them now.
This inversion in the motivation changes the dynamics of the relationship considerably; if our practices bring value to others, we’re way more likely to get their buy-in.
This realization didn’t come to me overnight. As I check out my posts on testing, I realize the first one dates back to 2008 and in it I say it was 2003 when I first heard of unit tests. Maybe my motivation shifted when I went from Arrange-Act-Assert to Given-When-Then. From that, the next step had to be the “No GWT? No code!” approach.
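For readers who haven’t seen the shift in practice, here’s a minimal sketch of a test structured with Given-When-Then comments; the cart domain, function, and numbers are made up purely for illustration:

```typescript
// Hypothetical function under test.
function addItem(cart: number[], price: number): number[] {
  return [...cart, price];
}

// Given: a cart with one item priced 10
const cart = [10];
// When: an item priced 5 is added
const updated = addItem(cart, 5);
// Then: the cart totals 15
console.assert(updated.reduce((sum, p) => sum + p, 0) === 15);
```

The same three steps map onto Arrange (Given), Act (When), and Assert (Then); the GWT phrasing just keeps the test anchored in business language rather than code mechanics.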
To wrap up this post, I’ll drop the quote I have on my business card:
“What you do matters, but WHY you do it matters much more.” – unknown
“People don’t buy what you do, they buy why you do it.” – Simon Sinek
We normally hear that we should only be writing tests for our code, not someone else’s (external libraries or APIs). For example, if we are writing tests for code that calls out to an API, the test should mock the dependency on the API and only verify that the calls are made as expected and that the results are handled as expected. The test should NOT verify that the API produces the appropriate results for the given input; that would be testing someone else’s code.
I agree with all of that; for unit tests.
However, I’d still consider writing integration tests against that API, NOT to test someone else’s code, but to document our assumptions about the API.
Why? Because our code relies on those assumptions. That’s a dependency.
What happens if the API implementors decide to make changes that break our assumptions? Without integration tests validating those assumptions, all our unit tests would still pass, but we could end up with either defects or inaccurate results in production (which could go unnoticed for a long time).
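To make that concrete, here’s a hedged sketch of what such an assumption-documenting check might boil down to; the users endpoint, field names, and helper are all hypothetical:

```typescript
// Hypothetical assumption our code depends on: every user record from the
// external API has a numeric "id" and a string "email".
type AssumedUser = { id: number; email: string };

function checkAssumptions(payload: unknown): payload is AssumedUser[] {
  return (
    Array.isArray(payload) &&
    payload.every(
      (u) =>
        typeof (u as AssumedUser).id === "number" &&
        typeof (u as AssumedUser).email === "string"
    )
  );
}

// In a real integration test, the payload would come from the live API:
//   const payload = await fetch("https://api.example.com/users").then((r) => r.json());
const payload: unknown = [{ id: 1, email: "a@example.com" }];
console.assert(checkAssumptions(payload));
```

If a new version of the API renames `email` or starts returning `id` as a string, this test fails immediately, while all the mocked unit tests keep passing.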
Another benefit of writing such tests is that, should a new version of the API come out, evaluating the risk of consuming the new version becomes much simpler: just run the tests against it.
Last but not least, say an API offers a large variety of features that could be used; having tests that describe how we use that API makes it much easier for developers to learn the details of how we depend on it. Such understanding, again, helps with both assessing risks when consuming different versions of the API, as well as assessing a potential replacement of the API.
Dependency management is very important!
If you’ve either been following my blog posts or have attended my talks, you’ve probably got the gist of what floats my boat. If you haven’t, here’s a summary, straight out of my business card:
If you ask me a question related to anything on that list (or any topic I write/speak about), I’ll bleed your ears off!
How about we take a 15-minute coffee break to chat about any of those topics (you pick one!)?
If you feel like connecting, send me a direct message on any social network (you can find me easily on the main ones) and let’s set that up!
Several people talk about how having multiple screens makes us more productive. But does it, really?
It’s not the number of screens that matters; it’s how you use them!
Let’s take my current setup as an example:
Those three active screens are the ones I use when doing most of my focused work. Let’s say this is how I use those screens:
Hey, we can see a Pomodoro Timer at the top-left of that picture, so this MUST be a very productive setup, right? I’m afraid not. Say my current focus is software development work. Let me walk you through the points I’m indicating on the picture:
1. Dead space. Unused real estate. If I’m on my focused time, I should probably not be seeing my exciting track photos, which change every 20 minutes; maybe a solid color would help keep my focus;
2. An email client. My current focus is NOT “email processing”, I shouldn’t keep the distracting email client open like that;
3. A messaging app taking up an entire monitor. Does that conversation pertain to the current task I’m focusing on? If not, then this app should not be there;
4. That is the browser window showing me the software I’m building. That’s the result of my focused work. It could benefit from a little more real estate, no? To add insult to injury, maybe I’d even have the developer tools open, all squished, docked inside that same window!
5. The IDE. The thing where I produce the result of my current task. The code I’m working on cannot be seen without scrolling horizontally!
So, do the multiple screens make me more productive if used that way? Most certainly not.
Here’s a better setup I believe makes me more productive:
Let me walk you through it:
1. My Pomodoro Timer. Time-boxed task. The time I have left helps me stay focused;
2. A place to drop in notes, screenshots, links, etc., related to the task I’m working on;
3. Any research or supporting material I’m currently using. In that browser window, I make sure to only have tabs related to the task at hand;
4. My IDE. That’s the screen I’m looking at most of the time, so it has to feel comfortable, relaxing, easy on the eyes (not a lot of information or things other than the current code I’m working on);
5. The software that I’m building, which is the result of the code in #4;
6. The Developer Tools (console, debugger, etc.);
7. The terminal (console) window, so I can quickly see if my current changes have broken my local build (also supported by what I may see on #6).
As has been documented on the internets since 2007, I am very specific about how I organize windows and multiple screens. I organize them based on the focused task at hand, and I’m always looking for A) better ways to organize it, B) processes and tools to make it easier.
I’ve just heard about the FancyZones in the Windows 10 Power Toys this morning, and I’ll be looking into adding that to my toolbox as well.
You’ve read it right. I have worked with teams that initially said things like “Yeah, we have daily stand-up every other day!”, or “Yeah, we do Sprint Planning, but we don’t do Sprint Retrospective…”.
To help those teams get their minds around Scrum and improve their adoption, a few years ago I created a talk called “Beyond the Daily Stand-up: An Intro to Scrum”. I’ll be giving this talk as a free event on June 4, 3:30-4:30pm, as part of the Virtual Agile Shift.
That’s right, the conference had to be postponed due to the current pandemic, but it’ll still happen as a virtual conference, with daily talks, Monday through Thursday, during the month of June.
Check out the schedule, figure out what sessions you’ll attend, and sign up!
Did I get your attention with that title? I hope so.
Let me clarify it: most people use code coverage for the wrong reason, making it worthless. I know I did that for a while.
Back when I first learned about writing tests, it didn’t take long until I heard about code coverage, and then the search for the magic code coverage percentage started:
“100% code coverage?”. Nope, that’s impractical.
“50%, then?”. Nope, too low.
“92.35%?”. Yeah, that’s more like it! Well… not!
Seriously, I’ve seen some crazy numbers as the required code coverage policy out there.
Writing tests for the sake of bringing up code coverage will NOT:
- make the code quality get better
- deliver better value to the business
- make refactoring easier
I have seen tests out there that have no assertions.
Those tests have hundreds of lines of code (usually involving some crazy, unreadable mock setups), and no assertions. Why? Simple: because developers had to satisfy the policy of XX% code coverage! The only thing those tests do is make sure no exceptions get thrown when those lines of code run. It’s pretty much a smoke test.
Such tests do NOT bring value. In many cases, the tests may be exercising lines of code for features that aren’t even used!
Think of new developers joining the project and having to go through all of that code trying to learn things, figuring out how things are done. Even existing developers after a while will have a hard time remembering why certain code is there.
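For illustration, here’s a hypothetical, boiled-down version of that kind of “test” (in real codebases the mock setup alone can run hundreds of lines):

```typescript
// Hypothetical production code.
function generateReport(data: number[]): string {
  return data.map((n) => n * 2).join(",");
}

// The coverage-chasing "test": it exercises every line of generateReport,
// raises the coverage number, and verifies absolutely nothing.
generateReport([1, 2, 3]); // no assertion; only proves it doesn't throw
```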
So, when is code coverage worthwhile?
When writing tests for existing code!
Once a conscious decision has been made about what we should write tests for, start by writing a test that does a clean pass through the code (meaning, without it throwing exceptions). We’re likely to uncover dependencies we didn’t even know the code had. This will be a big integration test. I wouldn’t even fret about putting assertions in that test. Why? It’s very likely I don’t even know what the expected outcome of that code is at that moment.
With the first clean pass in place, look at the code coverage number. If we have about 30%, that’s too low, so we need to look into writing more tests that go through different branches of the code. Once we get to a number we feel comfortable with (that could be 70, 80, 90%… it really depends on the risks and costs of breaking changes), then we can start capturing the current outcome of that code, by writing assertions for it, bearing in mind that the outcome may not even be accurate, but it is what the code produces without any changes.
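Here’s a small sketch of that flow, using a made-up legacy function (the discount rules are invented; the point is capturing current behavior, accurate or not):

```typescript
// Hypothetical legacy code we don't fully understand yet.
function legacyDiscount(total: number, isMember: boolean): number {
  return isMember ? total * 0.9 : total >= 100 ? total * 0.95 : total;
}

// Step 1: a clean pass through the code, no assertions yet.
legacyDiscount(120, false);

// Step 2: once coverage feels sufficient, capture whatever the code
// currently produces as the expected values (right or wrong).
console.assert(legacyDiscount(120, false) === 114); // 120 * 0.95
console.assert(legacyDiscount(120, true) === 108); // 120 * 0.9
```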
Now we can go ahead and start refactoring the code, making it more readable, without fear of breaking whatever it currently does. As we split it into smaller chunks of code, we identify opportunities to write new unit tests for those smaller pieces of logic.
Eventually, we’ll get to a point where that initially big integration test may either end up not being relevant anymore (and can be removed, replaced by the new unit tests), or, it can be refactored to something that more accurately describes the reason the code exists; the big picture.
Once the team starts using code coverage for the right reasons, the metrics can be changed over from “Code Coverage” to “Feature Coverage”. Knowing what features are covered by tests is far more valuable.
If you choose to get one thing out of this post, may it be this: read Working Effectively with Legacy Code, by Michael Feathers. It is still one of my all-time favorite books.
Another common question I get from developers who are starting to get into testing (or even from devs who have been doing it for a while): “how do you decide what to write tests for?”.
This question normally applies to brownfield cases (existing codebase). There’s already a lot of code there. Where do we even start? Yes, maybe we write tests for the new code, but what about the existing one?!
Here’s my personal technique for it. When working with an existing codebase, I’ll ask the business:
What is the single most important feature of this product?
Think of the feature that, if broken, will either cause the business to lose money or not make money.
THAT is where we start. Those are our must-have tests.
Once the most important features have been covered by tests, the next question is:
What is the feature or area of the system that when you tell developers they need to make changes to it, they feel like running away?
That’ll usually surface areas where the code is a mess, complex, convoluted. Hence, it needs tests, so developers can feel safe making changes to it, refactoring it. Now, we only move on to this one when the features that came out of the first question above have been covered by tests.
A very common question I hear from developers is “How do I write tests for private methods?”. My immediate answer is “You don’t!”. Technically, if you’re in C# Land, you can instantiate the class and then use Reflection to call the private method. But please don’t!
Say you have some class like this one:
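The original post showed the code as an image, so here’s a hypothetical reconstruction; only the class and method names come from the post, the body is my guess:

```typescript
class AwesomenessPotion {
  DoTheMagic(ingredients: string[]): string {
    // make sure all the ingredients are in
    if (ingredients.length === 0) {
      throw new Error("Missing ingredients!");
    }
    // mix the ingredients
    // heat things up
    // pour the potion into a bottle
    return `potion of ${ingredients.join("+")}`;
  }
}
```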
Of course, instead of comments, you’d have the actual code. You get the point.
You then decide to clean things up a bit and extract the “make sure all the ingredients are in” code into a separate, private ValidateIngredients method, like so:
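Again reconstructing from the description (the method names come from the post; the bodies are illustrative guesses):

```typescript
class AwesomenessPotion {
  DoTheMagic(ingredients: string[]): string {
    this.ValidateIngredients(ingredients);
    // mix the ingredients, heat things up, bottle the potion...
    return `potion of ${ingredients.join("+")}`;
  }

  // The extracted validation logic, now a private method.
  private ValidateIngredients(ingredients: string[]): void {
    if (ingredients.length === 0) {
      throw new Error("Missing ingredients!");
    }
  }
}
```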
That’s usually the moment when developers ask “how do I test that private method?”. If we have tests for the main method (DoTheMagic, in this example), then ValidateIngredients already gets test coverage.
Quite often, when developers feel strongly about having separate tests for a private method, that’s a clear indication the private method should really be a public method on a separate class. Think Single Responsibility Principle.
Following the example above, we create a Validator class and move our validation logic in there:
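A sketch of that extraction (the interface and class names come from the post; the implementation is illustrative):

```typescript
interface IValidatePotion {
  Validate(ingredients: string[]): void;
}

// The validation logic, now public and independently testable.
class AwesomenessPotionValidator implements IValidatePotion {
  Validate(ingredients: string[]): void {
    if (ingredients.length === 0) {
      throw new Error("Missing ingredients!");
    }
  }
}
```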
And then we use that validator in the previously shown class:
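And a sketch of the potion class receiving the validator; constructor injection is one way to wire it, and the interface is repeated here so the snippet stands on its own:

```typescript
interface IValidatePotion {
  Validate(ingredients: string[]): void;
}

class AwesomenessPotion {
  // The potion depends on the abstraction, not the concrete validator.
  constructor(private readonly validator: IValidatePotion) {}

  DoTheMagic(ingredients: string[]): string {
    this.validator.Validate(ingredients);
    // mix the ingredients, heat things up, bottle the potion...
    return `potion of ${ingredients.join("+")}`;
  }
}
```

In tests for AwesomenessPotion, the validator can now be a trivial stub, keeping the two sets of tests isolated from each other.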
You’ve probably noticed that we also introduced an IValidatePotion interface. Think Dependency Inversion Principle. One of the benefits here is being able to isolate tests for the AwesomenessPotion and AwesomenessPotionValidator classes.
Whatever IDE I’m currently using, I always end up creating a couple of handy snippets, such as the one I’ve shared on how to create a TODO template in VS Code. Here’s another handy one…
Many times, I want to write out the contents of something to the console. Say, the contents of “this.week” in the example above. I’d end up writing something like this:
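(The post showed this as a screenshot; here’s a hypothetical stand-in for that “this.week” object.)

```typescript
// Hypothetical stand-in for the "this.week" object from the screenshot.
const week = { number: 19, focus: "testing" };

// The line I keep retyping, with the label duplicating the expression:
console.log("week:", JSON.stringify(week, null, 2));
```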
Except that it bothers me having to write this same kind of thing over and over again. Snippets to the rescue! I created the following snippet for me:
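The snippet itself also appeared as an image; a minimal version for VS Code’s user snippets might look like this (the exact body is my guess, but `$CLIPBOARD` is a built-in VS Code snippet variable):

```json
{
  "console.log clipboard": {
    "prefix": "clogc",
    "body": ["console.log('${CLIPBOARD}:', ${CLIPBOARD});"],
    "description": "console.log the expression currently in the clipboard"
  }
}
```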
Now, all I have to do is to select the thing I want to print out (“this.week”), copy it into the clipboard, and then invoke the snippet in the code editor with “clogc” (as in “console.log clipboard”):
Like it? Share it with your team and friends!