Author's Bio: Anton Mamaenko

Sunday, December 11, 2016

How many Acceptance Tests do I need?

[Upd. 11 Dec, 16]: fixed the naming: I had erroneously written "Integration Tests" where the more appropriate term is "Acceptance Tests".
[Upd. 16 Dec, 16]: changed "fork" to "branch" for the flow

Overview

This post describes one of my guiding quantitative principles for writing acceptance tests, a principle that helps me judge how "good" my acceptance tests are.

The principle is: "I need a separate auto-test for every branch in the business logic flow."

I started forming this principle back in the days when I was still mainly a coder, regardless of my title at the time; it kept evolving through my managerial career and is still developing now that I code as a hobby. So I have positive experience with it both first-hand, while authoring the code itself, and from a higher-level, managerial perspective, looking at the resulting product.

Terminology

  • Auto-tests - any tests that are scripted in a computer language and executed automatically at will
  • Acceptance Tests - a subset of auto-tests where the scripted operations are expressed in high-level domain terms

When to start writing tests?

When starting to work on an application, an engineer typically analyzes the future structure and thinks through the architecture. At this point we have key components and data structures figured out.

The next step is to understand the data flow between key components. Sometimes it is a simple one-way flow, and sometimes it is a complicated web of two-way flows. The outcome of this phase is a list of all interactions between components.

Step three is to run the application model, composed of the components and data flows, on a sample input. This is the point where one should start writing auto-tests. The process of application design reduces the intricacies of code to a limited set of domain-specific terms. Thus, the first tests should be an expression of the desired business logic in those high-level terms. I call them the Acceptance Tests.
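To illustrate what "expressed in high-level domain terms" means, here is a minimal, purely hypothetical sketch: the test reads as domain vocabulary (add items, ask for the total), not as calls into internal plumbing. The `Cart` class and its methods are invented for this example only.

```python
class Cart:
    """Minimal stub so the example runs; in a real application this
    would be the production component under test."""

    def __init__(self):
        self._items = []

    def add(self, name, price):
        # Record an item under its domain name and price.
        self._items.append((name, price))

    def total(self):
        # Sum the prices of everything added so far.
        return sum(price for _, price in self._items)


def test_cart_totals_its_items():
    # The script is phrased in domain terms: add items, check the total.
    cart = Cart()
    cart.add("book", 10)
    cart.add("pen", 2)
    assert cart.total() == 12
```

The point is the vocabulary of the test body, not the stub behind it: a reader who knows only the domain can follow it.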

Separate auto-test for every branch in the flow

If I am designing a chat bot that helps the user to figure out the best route from point A to point B then my business logic will be the following:

1. Wait until requested by a user
2. Get A from the user
3. Check if A is valid, if not - return to step 2
4. Get B from the user
5. Check if B is valid, if not - return to step 4
6. Compute the best route(s)
7. Display the result to the user
8. Wait until the next request
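The flow above can be sketched as follows. This is only an illustration: `is_valid_location`, `compute_routes`, and `handle_request` are hypothetical names, and an invalid answer sends the bot back to the corresponding "Get ..." step, as described below.

```python
def is_valid_location(text):
    # Placeholder validation: accept any non-empty alphabetic string.
    return bool(text) and text.replace(" ", "").isalpha()


def compute_routes(a, b):
    # Placeholder route computation (step 6).
    return [f"{a} -> {b}"]


def handle_request(inputs):
    """Walk one user request through steps 2-7 of the flow.

    `inputs` is the scripted sequence of user answers; an invalid
    answer makes the bot re-ask at the same "Get ..." step.
    """
    answers = iter(inputs)
    a = next(answers)
    while not is_valid_location(a):   # step 3: invalid A, re-ask (step 2)
        a = next(answers)
    b = next(answers)
    while not is_valid_location(b):   # step 5: invalid B, re-ask (step 4)
        b = next(answers)
    return compute_routes(a, b)       # steps 6-7: compute and return routes
```

For example, `handle_request(["123", "Kyiv", "Lviv"])` rejects the first answer and produces the Kyiv-to-Lviv route.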

This is a fairly straightforward application with only two branch points. At steps 3 and 5 my application validates user input against some input data template and returns to the previous step if the user's input is invalid. This is important when writing Acceptance Tests.

Test scripts must be non-branching. That is, for each condition with two possible outcomes there should be two separate script executions, each making sure that its own test condition holds. In our example that means four auto-test executions passing through all possible workflows.

Note that having no branches does not mean that one cannot parametrize scripts or use predefined fixtures! Rather, it means that to test the logic properly, auto-tests must walk through each scenario. That gives me at least four separate scenarios.
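The four scenarios can be sketched as four straight-line test scripts, one per path through the two branch points. Everything here is hypothetical (`is_valid`, `run_bot`, and the scripted answers are invented for illustration); a tiny bot driver is included only so the example is self-contained.

```python
def is_valid(text):
    # Placeholder validator: non-empty alphabetic input counts as valid.
    return bool(text) and text.isalpha()


def run_bot(inputs):
    """Drive the bot with a scripted sequence of user answers,
    re-asking after invalid input, and return the computed route."""
    answers = iter(inputs)
    a = next(answers)
    while not is_valid(a):
        a = next(answers)
    b = next(answers)
    while not is_valid(b):
        b = next(answers)
    return f"{a} -> {b}"


# One straight-line scenario per branch outcome; note there are
# no `if` statements inside the tests themselves.
def test_valid_a_valid_b():
    assert run_bot(["Kyiv", "Lviv"]) == "Kyiv -> Lviv"


def test_invalid_a_then_valid():
    assert run_bot(["???", "Kyiv", "Lviv"]) == "Kyiv -> Lviv"


def test_invalid_b_then_valid():
    assert run_bot(["Kyiv", "123", "Lviv"]) == "Kyiv -> Lviv"


def test_invalid_a_and_b_then_valid():
    assert run_bot(["", "Kyiv", "!!", "Lviv"]) == "Kyiv -> Lviv"
```

Each test pins down exactly one combination of branch outcomes, so a failure points directly at the broken branch.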
