Answering questions from my Scrum talk

I had a great time giving my “Beyond the Daily Stand-up: An Intro to Scrum” talk at the Virtual Agile Shift yesterday (check out the conference: it’s running through the end of the month!).

Great questions were asked. I answered some at the end of the talk, but ran out of time for the rest and promised I’d post the answers on my blog. Hence this post!

Some of the questions make me want to write a full blog post for each, but in order to keep my commitment to answering them today, I’ll give the short answers now and save the questions for future, longer posts.

Here we go!

What are your thoughts on Unified Engineering?

I had not heard of “Unified Engineering” before. When I first saw the question, I thought it could be one of those things I knew about but just didn’t know by that name. That turned out to be the case.

A web search didn’t yield many results, but I found this podcast from 2016 that had some references to it. Fortunately, there’s a transcript there, and I was able to skim it to get the gist of it. If I haven’t misread it, my blog post from the day before my talk was exactly about that (The QA’s Role in a Scrum Team), so those are my thoughts on it. 🙂

What should the Burndown be based on? Story Points? A count of stories? Or hours assigned to tasks?

The Burndown represents the Sprint and it tracks the work to be done within the Sprint. That work is represented by the Sprint Backlog Items (the “tasks”), which are the way the team found to implement the user stories.

It’s very common for Scrum Teams to size those tasks in terms of hours, in which case, the number of hours is used when updating the Burndown chart. I’ve also worked on teams where we’ve decided to only track the number of tasks, instead.

The team decides what works best and has the autonomy to change the approach from one Sprint to the next, based on what the team believes the best approach is.
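To make the bookkeeping concrete, here’s a minimal sketch of how a Burndown value could be computed when the team tracks hours. All task names and numbers below are made up for illustration:

```python
# Minimal sketch of Sprint Burndown bookkeeping, assuming the team
# tracks remaining hours per Sprint Backlog item. Names and numbers
# are hypothetical.

def remaining_work(tasks):
    """Total hours left across all Sprint Backlog items."""
    return sum(task["hours_left"] for task in tasks)

# End-of-day snapshots of the Sprint Backlog:
day_1 = [{"task": "build API", "hours_left": 16},
         {"task": "write specs", "hours_left": 8}]
day_2 = [{"task": "build API", "hours_left": 10},
         {"task": "write specs", "hours_left": 6}]

# Each day the team plots this total; the line "burns down" toward zero.
burndown = [remaining_work(day_1), remaining_work(day_2)]
print(burndown)  # [24, 16]
```

Tracking a count of tasks instead simply means counting the items that still have work left, rather than summing hours.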

Is there a formula to calculate the velocity of the team?

It’s very common to calculate velocity as the average of the story points (if that’s how the user stories are sized) delivered by the team over the last 3 Sprints. We average it like that to account for fluctuations from Sprint to Sprint. For example, in one Sprint the team may deliver 60 story points, and then 50 in the next one. Why the drop? It could be because a team member was off sick for two days.

Also, as the team matures, the velocity tends to go up. Whenever the team composition changes (for example, a team member leaves and a new one joins), the velocity tends to drop for a couple of Sprints. Averaging the last three Sprints helps smooth out these fluctuations.
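So the “formula” is just a rolling average. Here’s a sketch, with hypothetical numbers:

```python
# Sketch: velocity as the average story points delivered over the
# last 3 Sprints. All numbers are hypothetical.

def velocity(points_per_sprint, window=3):
    """Average of the most recent `window` Sprints' delivered points."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

delivered = [40, 60, 50, 55]   # points delivered per Sprint, oldest first
print(velocity(delivered))     # (60 + 50 + 55) / 3 = 55.0
```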

Who amongst the Scrum Team should take down notes for the feedback provided by the stakeholders during the Sprint Review (Demo)?

That would normally be either the Product Owner or the Scrum Master, but I always encourage the other members of the development team to also take notes where they see fit. They may pick up on things that neither the PO nor the SM did. It’s a group effort.

Where do Developers document what was coded?

Different people, teams, organizations do it in different ways. My personal favorite approach is a combination of things:

  • Write good specs (aka “tests”). I believe there’s a good example at the bottom of this post. I also have a whole set of posts around testing;
  • Add good comments to the Pull Request, referring back to the user story it implements. Include a link back to the user story in the tracking system used (Pivotal Tracker, Team System, Jira, etc…);
  • Add a link to the Pull Request in the user story on the tracking system.

With such an approach, we can learn about things both ways: we may come to the user story to find out what code changes (pull requests) were made to implement it, or we may look at the code changes (pull requests) and find out why they were made (by following the link back to the user story).
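To make that concrete, a Pull Request description following this approach might look something like the sketch below (the story ID, wording, and link are made up for illustration):

```
Implements user story #1234: "As a shopper, I can filter products by price."

Story: https://tracker.example.com/story/1234

Notes:
- Added the price-range filter to the product search endpoint.
- See the new specs for the boundary values we agreed on with the PO.
```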

How should the information gathered from a 1/1 conversation between Dev and Business be shared with the entire team?

It would depend on the nature and outcome of the conversation. Here are some ways it could go:

  • If a new acceptance criterion has come up, update the specs/tests;
  • If a user story has been clarified, update it in the tracking system to reflect that clarification (maybe a change in the wording?);
  • Bring it up at the daily scrum to share it with the team;
  • If a more in-depth discussion with the team is needed, book a meeting and share the information there;
  • Add comments to the user story in the tracking system;
  • Drop a note into whatever messaging system the team uses (Slack, MS Teams, email, etc.);
  • All of the above?

Pick the ones that work for the team and the business.

Are there agreed-upon roles and responsibilities for the various players? Ambiguity makes it more challenging – especially if Agile is new to the org

The Scrum Framework lists three roles: Product Owner, Scrum Master, and Developers. Within the Developers, it’s up to the team to define the roles. A development team may start with a hard separation between QA and coder, where, for example, the QA person tests the work produced by the coder.

As the team matures its collaboration skills, the coder may start helping QA, by teaching them how to write automated tests, while QA may start helping the coders by helping them understand the acceptance criteria better.

The roles and responsibilities within the team may change as per the team’s needs and how it grows in maturity over time.

If the user stories are not completed until we release to production, then the burndown will not go down until the release is done, typically at/after the end of the Sprint

This question touches on the Definition of Done (DoD). The idea is to have potentially releasable increments at the end of the Sprint. If the DoD for user stories includes something like “feature deployed to production” and that item hasn’t been checked off by the end of the Sprint, then yes, the story rolls over into the next Sprint. If the team tracks tasks by hours, then the hours associated with deploying to production roll over to the next Sprint’s Burndown.

On the other hand, “deploying to production” may be part of DoD for release. Depending on how the business does things, a release may only happen after a number of Sprints, with an aggregate of features built during those Sprints, so at that point, the release’s DoD should include the “deployed to production” check.

Wrapping up

I saw the tweet below early this morning. What a great way to start off my day!!


The QA’s Role in a Scrum Team

I remember, years ago, a person saying that “a QA’s job is to find bugs in the programmer’s code”. I’ve actually heard that from quite a number of people, as recently as last year. I remember companies rewarding QA employees based on the number of bugs they found. I believe it’s kind of hard to keep a good relationship between QA and programmers in such environments.

In Scrum, both programmers and QA are developers, because…

  • They both run the software and click around to make sure it works. They test it.
  • Programmers automate their tests. They write tests.
  • QA automate their tests. They write tests.

The nature of the tests they write is different. The programming languages they write them in are likely different. But they are both contributing to the product’s development. That’s why they’re both called developers.

Yes, each one leans towards different areas of software development, but they’re not working against each other. They’re collaborating. Any and all efforts towards improving such collaboration should be amplified. Just a few ideas:

  • If programmers are done implementing code, they can offer to help with the QA efforts (testing features implemented by other programmers, writing automated tests, writing scripts that can speed up the testing process, etc.);
  • If QA personnel are done testing what’s available, they can offer to help clarify acceptance criteria for the programmers, or help programmers write the specs for their tests (given-when-then!);
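To illustrate what that collaboration produces, here’s a hypothetical given-when-then acceptance criterion turned into an automated spec. The coupon rule, names, and numbers are invented for illustration:

```python
# Hypothetical acceptance criterion, refined by QA and programmers together:
#   Given a cart totaling $120
#   When the customer applies the coupon "SAVE10"
#   Then the total drops by 10%

def apply_coupon(total, coupon):
    """Invented business rule: SAVE10 gives a 10% discount."""
    if coupon == "SAVE10":
        return round(total * 0.9, 2)
    return total

def test_save10_coupon_discounts_ten_percent():
    total = 120.0                               # Given
    discounted = apply_coupon(total, "SAVE10")  # When
    assert discounted == 108.0                  # Then

test_save10_coupon_discounts_ten_percent()
```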

These are just a few thoughts that I’ve been sharing at my “Testing in Agile” talk, which comes from putting that approach into practice with teams I work with.

If you’d like to get more insight into this kind of approach, please check out this great video my friend Daniel posted recently.


Virtual Brown Bag: May 2020 Summary

Lots of goodies shared at the Virtual Brown Bag during the month of May. Here’s a summary (with links to the videos):

May 7: Talks on managing interruptions, Pomodoro Technique, finding opportunities and leveraging them, trust in IT, importance of tests, and a couple more miscellaneous things!

May 14: We talked about Udi Dahan’s Advanced Distributed Systems Design course, C#’s new feature: source generators, Security standards and considerations, Node and NPM, Tribes of Programmers, and some miscellaneous things, as usual

May 21: Arrange-Act-Assert, Given-When-Then, When-When-Then, Refining user stories, Righting Software (book), Architectural book, Software Architecture YouTube channel, Google.dev, Azure App Service Static Apps with Svelte + Sapper, Top-Level Programs in C#9

May 28: George’s “challenges”: https://github.com/togakangaroo/daily, https://orgmode.org/, emacs, Org Babel

Looking forward to seeing what June brings us!


Questions to the Scrum adopters

Here’s a set of questions I’d like to ask every Scrum adopter out there (myself included). Think of how you live your life:

  • Do you create a backlog?
  • Do you refine it?
  • Do you prioritize it?
  • Do you plan what you’ll do and how you’ll do it?
  • Do you check on a daily basis how things are going?
  • Do you review your results?
  • Do you look at it retrospectively to see how you can do better?

Answering no to any of the questions above should prompt us to reflect: how can we recommend Agile/Scrum to others, then?

We need to figure out what’s important to our own life, figure out what is valuable to us, where we want to get to, and then set up a backlog. Then we should refine it, by adding more information to it, making it clear to ourselves why those things are of value to us. Armed with information, we can then prioritize it.

With a prioritized backlog in place, we can plan on what we’ll do and how we’ll do it.

Now that we’re doing it, we should check on a daily basis how things are going.

At the end of the day, week, month, year, we should review our results.

Finally, we should look at our results in retrospective and make corrections as needed.

If that’s what we decided to do for work, it should also be what we decide to do for life.

If you need some ideas on how to do that, here you go:

* Organizing my daily, weekly, monthly, quarterly, yearly plans
* Planning and reviewing my day
* My annual reviews
* Bonus: there are several more related posts under the Lifestyle category of my blog


Why are you Writing Tests?!

I don’t think I’ve ever met a developer who hasn’t had to answer this question: “Why are you writing tests?!”. Some have given up the practice because they grew tired of that, others have moved on to places where they don’t have to fight this uphill battle. Fortunately, we also have developers such as my fellow Improver Harold, who believes in and follows the practice, and can articulate many of the reasons why we test from a developer’s point of view.

I have heard many reasons why tests are NOT written and I plan on writing individual posts to tackle those at a later time. For this post, I’d like to offer you my thoughts and answer to the initial question.

As with most developers, my answer used to be along the lines of “I write tests to make sure my code works!”. That answer evolved into incorporating “…and it also allows me to refactor my code. Look at how clean my code looks now!”.

However, bugs would still show up in my fully-tested code. Other developers would also have trouble fixing them because they couldn’t understand my tests.

After several years of that, I started seeing why it was so hard to get people’s buy-in on testing. Did you notice the word I bolded in the previous paragraphs? Yup, “my”. I was making it all about me.

Many people use the following analogy to justify writing tests: “Doctors scrub their hands before working with a patient, because that’s the right thing to do!”, or something along those lines.

Do the doctors do it for themselves? Nope.

So the short answer to our initial question here (“Why are you writing tests?”) should be: I am doing that for you!

Or, a slightly longer elaboration:
I am doing it to make sure what we are building reflects the needs of the business as we understand it now.

This inversion in the motivation changes the dynamics of the relationship considerably; if our practices bring value to others, we’re way more likely to get their buy-in.

This realization didn’t come to me overnight. As I check out my posts on testing, I realize the first one dates back to 2008 and in it I say it was 2003 when I first heard of unit tests. Maybe my motivation shifted when I went from Arrange-Act-Assert to Given-When-Then. From that, the next step had to be the “No GWT? No code!” approach.

To wrap up this post, I’ll drop the quote I have on my business card:

“What you do matters, but WHY you do it matters much more.” – unknown

And also

“People don’t buy what you do, they buy why you do it.” – Simon Sinek


Are you Testing Someone Else’s Code?

We normally hear that we should only be writing tests for our code, not someone else’s (external libraries or APIs). For example, if we are writing tests for code that calls out to an API, the test should mock the dependency on the API and only verify that the calls are made as expected and that the results are handled as expected. The test should NOT verify that the API produces the appropriate results for the given input; that would be testing someone else’s code.

I agree with all of that; for unit tests.

However, I’d still consider writing integration tests against that API, NOT to test someone else’s code, but to document our assumptions about the API.

Why? Because our code relies on those assumptions. That’s a dependency.

What happens if the API implementors decide to make changes that break our assumptions? Without integration tests validating those assumptions, all our unit tests would still pass, but we could end up with either defects or inaccurate results in production (which could go unnoticed for a long time).

Another added benefit from the practice of writing such tests is that, should a new version of the API come out, evaluating risk levels of consuming the new version becomes much simpler: just run the tests against the new version.

Last but not least, say an API offers a large variety of features that could be used; having tests that describe how we use that API makes it much easier for developers to learn the details of how we depend on it. Such understanding, again, helps with both assessing risks when consuming different versions of the API, as well as assessing a potential replacement of the API.
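As a sketch of what such an assumption-documenting test can look like: the endpoint and response shape below are entirely hypothetical, and the HTTP call is stubbed out so the example is self-contained.

```python
# Sketch of an integration test that documents our assumptions about a
# third-party API. The endpoint and response shape are hypothetical; in
# a real suite, fetch_user would make an actual HTTP call, e.g.
# requests.get(f"https://api.example.com/users/{user_id}").json()

def fetch_user(user_id):
    # Stand-in for the real API call, so this sketch runs on its own.
    return {"id": user_id, "name": "Ada", "active": True}

def test_user_endpoint_matches_our_assumptions():
    user = fetch_user(42)
    # Our code depends on these fields existing, with these types.
    assert isinstance(user["id"], int)
    assert isinstance(user["name"], str)
    assert isinstance(user["active"], bool)

test_user_endpoint_matches_our_assumptions()
```

Run the same test against a new version of the API, and any failure points straight at the assumption that broke.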

Dependency management is very important!


Let’s Connect?

If you’ve either been following my blog posts or have attended my talks, you probably got the gist of what floats my boat. If you haven’t, here’s a summary, straight out of my business card:

If you ask me a question related to anything on that list (or any topic I write/speak about), I’ll bleed your ears off!

How about we take a 15-minute coffee break to chat about any of those topics (you pick one!)?

If you feel like connecting, send me a direct message on any social network (you can find me easily on the main ones) and let’s set that up!


Multiple screens may NOT make you productive

Several people talk about how having multiple screens makes us more productive. But does it, really?

It’s not the number of screens that matters; it’s how you use them!

Let’s take my current setup as an example:

Those three active screens are the ones I use when doing most of my focused work. Let’s say this is how I use those screens:

Hey, we can see a Pomodoro Timer at the top-left of that picture, so this MUST be a very productive setup, right? I’m afraid not. Consider that my current focus is software development work. Let me walk you through the points I’m indicating on the picture:

1. Dead space. Unused real estate. If I’m on my focused time, I should probably not be seeing my exciting track photos, which change every 20 minutes; maybe a solid color would help keep my focus;

2. An email client. My current focus is NOT “email processing”, I shouldn’t keep the distracting email client open like that;

3. A messaging app taking up an entire monitor. Does that conversation pertain to the current task I’m focusing on? If not, then this app should not be there;

4. That is the browser window showing the software I’m building. That’s the result of my focused work. It could benefit from a little more real estate, no? To add insult to injury, maybe I’d even have the developer tools open, all squished, docked inside that same window!

5. The IDE. The thing where I produce the result of my current task. The code I’m working on cannot be seen without scrolling horizontally!

So, do the multiple screens make me more productive if used that way? Most certainly not.

Here’s a better setup I believe makes me more productive:

Let me walk you through it:

1. My Pomodoro Timer. Time-boxed task. The time I have left helps me stay focused;

2. A place to drop in notes, screenshots, links, etc., related to the task I’m working on;

3. Any research or supporting material I’m currently using. In that browser window, I make sure to only have tabs related to the task at hand;

4. My IDE. That’s the screen I’m looking at most of the time, so it has to feel comfortable, relaxing, easy on the eyes (not a lot of information or things other than the current code I’m working on);

5. The software that I’m building, which is the result of the code in #4;

6. The Developer Tools (console, debugger, etc.);

7. The terminal (console) window, so I can quickly see if my current changes have broken my local build (also supported by what I may see on #6).

As has been documented on the internets since 2007, I am very specific about how I organize windows and multiple screens. I organize them based on the focused task at hand, and I’m always looking for A) better ways to organize them, and B) processes and tools that make it easier.

If I’m working in Visual Studio, I may use the Windows Layout feature. Working either on a PC or Mac, I find ways to move windows around by only using the keyboard.

If I’m on the road, away from my normal setup, carrying only my laptop and my iPad, I turn my iPad into an extra screen (here and here).

I just heard about FancyZones in the Windows 10 PowerToys this morning, and I’ll be looking into adding that to my toolbox as well.


Daily Standup every other day?!

You’ve read that right. I have worked with teams that initially said things like “Yeah, we have our daily stand-up every other day!”, or “Yeah, we do Sprint Planning, but we don’t do Sprint Retrospectives…”.

In order to help those teams get their minds around Scrum and improve their adoption, I decided a few years ago to create a talk called “Beyond the Daily Stand-up: An Intro to Scrum”. I’ll be giving this talk as a free event on June 4, 3:30-4:30pm, as part of the Virtual Agile Shift.

That’s right, the conference had to be postponed due to the current pandemic, but it’ll still happen as a virtual conference, with daily talks, Monday through Thursday, during the month of June.

Check out the schedule, figure out what sessions you’ll attend, and sign up!


Code Coverage is Worthless!

Did I get your attention with that title? I hope so.

Let me clarify: most people use code coverage for the wrong reasons, making it worthless. I know I did that for a while.

Back when I first learned about writing tests, it didn’t take long until I heard about code coverage, and then the search for the magic code coverage percentage started:

“100% code coverage?” Nope, that’s impractical.
“50%, then?” Nope, too low.
“92.35%?” Yeah, that’s more like it! Well… not!

Seriously, I’ve seen some crazy numbers as the required code coverage policy out there.

Writing tests for the sake of bringing up code coverage will NOT:

  • make the code quality get better
  • deliver better value to the business
  • make refactoring easier

I have seen tests out there that have no assertions.

Those tests have hundreds of lines of code (usually involving some crazy, unreadable mock setups), and no assertions. Why? Simple: because developers had to satisfy the policy of XX% code coverage! The only thing those tests do is make sure no exceptions get thrown when those lines of code run. It’s pretty much a smoke test.

Such tests do NOT bring value. In many cases, the tests may be exercising lines of code for features that aren’t even used!

Think of new developers joining the project and having to go through all of that code trying to learn things, figuring out how things are done. Even existing developers after a while will have a hard time remembering why certain code is there.

So, when is code coverage worthwhile?

When writing tests for existing code!

Once a conscious decision has been made about what we should write tests for, start by writing a test that does a clean pass through the code (meaning, without it throwing exceptions). We’re likely to uncover dependencies we didn’t even know the code had. This will be a big integration test. I wouldn’t even fret about putting assertions in that test. Why? It’s very likely I don’t even know what the expected outcome of that code is at that moment.

With the first clean pass in place, look at the code coverage number. If we have about 30%, that’s too low, so we need to look into writing more tests that go through different branches of the code. Once we get to a number we feel comfortable with (that could be 70, 80, 90%… it really depends on the risks and costs of breaking changes), then we can start capturing the current outcome of that code, by writing assertions for it, bearing in mind that the outcome may not even be accurate, but it is what the code produces without any changes.

Now we can go ahead and start refactoring the code, making it more readable, without fear of breaking whatever it currently does. As we split it into smaller chunks of code, we identify opportunities to write new unit tests for those smaller pieces of logic.

Eventually, we’ll get to a point where that initially big integration test may either end up not being relevant anymore (and can be removed, replaced by the new unit tests), or, it can be refactored to something that more accurately describes the reason the code exists; the big picture.
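The whole flow can be sketched with a tiny, invented example; the legacy function below and its quirky discount rule are made up for illustration:

```python
# Characterization-test sketch for legacy code. The function and its
# discount rule are invented for illustration.

def legacy_price(quantity, unit_price):
    # Tangled legacy logic we don't fully understand yet.
    total = quantity * unit_price
    if quantity > 10:
        total -= total * 0.05  # undocumented bulk discount?
    return round(total, 2)

# Step 1: a clean pass -- just prove the code runs without throwing.
legacy_price(12, 9.99)

# Step 2: once coverage looks good enough, capture the CURRENT behavior
# (accurate or not) so refactoring can't change it silently.
def test_characterize_bulk_discount():
    assert legacy_price(12, 9.99) == 113.89
    assert legacy_price(2, 10.0) == 20.0  # no discount at low quantities

test_characterize_bulk_discount()
```

With that safety net in place, we can split `legacy_price` into smaller pieces and grow proper unit tests around them.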

Once the team starts using code coverage for the right reasons, then the metric can be changed over from “Code Coverage” to “Feature Coverage”. Knowing what features are covered by tests is a far more valuable measure.

If you choose to get one thing out of this post, may it be this: read Working Effectively with Legacy Code, by Michael Feathers. It’s still one of my all-time favorite books.
