AutomatedTesting
The purpose of this page is to discuss aspects of the "philosophy" of automated testing - it is hoped that the discussion will serve as an overview of the aims of making use of automated testing techniques in our codebases and projects, and of what we can achieve by doing so.
Automated testing is the practice of writing logic that can be used to ascertain the correct function of a system. This is, purposefully, a rather broad definition, as many techniques exist that can aid in doing so at many differing scales with respect to the pieces that make up a system. But always the central theme is assurance.
We seek to achieve a high level of confidence in the function of our software systems at different stages of the process of their delivery to users: while building the system we are concerned with the behaviour of the small pieces we are both composing and relying upon - their operation at a fine level of detail - while from the perspective of a user performing an operation via a user interface it may be the overall result of an action undertaken or an operation initiated that is of concern.
The most commonly thought-of automated testing technique is unit testing. It is specifically concerned with the various building blocks that are being altered, recomposed, added or removed, and is very much targeted at aiding the developer of software in its implementation.
In order for the operation of a system at large to be correct we must be able to rely on the behaviour of the software components that are interfaced with. This leads to the somewhat central role for the testing of its constituent pieces. We refer to these pieces as units.
However, in building assurance we must also be concerned with the operation of larger portions of the software, particularly in terms of a whole feature as it would be experienced by a user. For the latter case we talk about system-level and/or end-to-end tests, while checking the behaviour of a number of units in concert we refer to as integration tests.
The first thing of note is that we purposefully use a generic term: "unit". This is a conscious decision to avoid any language- or runtime-specific terms, such as functions, modules, packages, libraries, etc., to ensure that we do not equate a unit with something specific. Units are what we decide are useful atoms to be tested, though they will often be the equivalent of e.g. a function in terms of their size.
A unit is thus a separable chunk of logic that can be tested in isolation. Such a definition is then necessarily impacted by concrete aspects of a codebase. Note also that the notion of a unit is independent of whether or not it is interfaced with by direct or indirect call.
This also means that as developers of the software it is our task to define the boundaries of our units as part of the process of authoring tests for them. This is critical to maintainable test suites.
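To make this concrete, here is a minimal sketch of a unit test, assuming a small hypothetical `slugify` function as the unit under test (Python's standard `unittest` framework is used purely for illustration - the document prescribes no particular language or framework):

```python
import unittest

def slugify(title):
    # Hypothetical unit under test: turn a title into a URL slug.
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    # The test's name states the behaviour it asserts.
    def test_words_are_lowercased_and_joined_with_hyphens(self):
        self.assertEqual(slugify("Automated Testing"), "automated-testing")
```

Here the boundary of the unit has been drawn around a single function, but as noted above the same shape of test applies to whatever atom we decide is useful to test.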
Revisiting and reiterating the central notion that tests are concerned with providing assurance, we can begin to build that out into a series of principles.
Given that we are primarily concerned with automated tests, we ought to by definition have the ability to run such tests at any point and thus any requirements/dependencies required for the operation of a test must be automated and repeatable as well.
a test must establish its own pre-requisites
Furthermore, our ability to run and re-run tests means that each test must have cleaned up after itself.
a test must leave no remnants of its execution
The above two tenets lead, in conjunction, to another and arguably more powerful principle: tests may not affect each other.
a test must be able to run in isolation and may not affect other tests
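The three tenets above can be sketched together in one illustration (again using Python's `unittest` as an assumed framework, with a hypothetical configuration-writing test as the example): the test establishes its own pre-requisites in `setUp`, leaves no remnants via `tearDown`, and as a consequence cannot affect any other test.

```python
import os
import shutil
import tempfile
import unittest

class ConfigWriterTest(unittest.TestCase):
    def setUp(self):
        # Tenet 1: the test establishes its own pre-requisites -
        # here, a fresh private working directory.
        self.workdir = tempfile.mkdtemp()

    def tearDown(self):
        # Tenet 2: the test leaves no remnants of its execution.
        shutil.rmtree(self.workdir)

    def test_writes_file_into_its_own_directory(self):
        # Tenet 3: everything happens inside this test's private
        # directory, so it can run in isolation and cannot affect
        # other tests.
        path = os.path.join(self.workdir, "app.conf")
        with open(path, "w") as fh:
            fh.write("debug = false\n")
        self.assertTrue(os.path.exists(path))
```

Because each test provisions and removes its own working directory, the suite can be run and re-run at any point and in any order.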
Since our aim is to identify aspects of the system under test that are not behaving as intended, a test should identify the cause of a particular issue.
a test should be focused on one aspect of behaviour
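As an illustration of focused tests, assuming a small hypothetical `Stack` unit, each test below checks exactly one aspect of behaviour, so a failure points directly at the broken property rather than at a catch-all assertion:

```python
import unittest

class Stack:
    # Hypothetical unit under test.
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        return self._items.pop()

    def size(self):
        return len(self._items)

class StackTest(unittest.TestCase):
    # One aspect of behaviour per test: a failing test names the
    # property that has been broken.
    def test_push_increases_the_size_by_one(self):
        stack = Stack()
        stack.push("a")
        self.assertEqual(stack.size(), 1)

    def test_pop_returns_the_most_recently_pushed_item(self):
        stack = Stack()
        stack.push("a")
        stack.push("b")
        self.assertEqual(stack.pop(), "b")
```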
Having discussed the rationale for why we would wish for automated testing and its value, it is important to also address, at a high level, the more practical aspects of being able to make use of it successfully.
Irrespective of the facilities that are available for doing so - be that support provided by a particular runtime or ecosystem, or facilities provided within a codebase to ease the creation of tests - it is worth clearly stating that the writing of tests is at the very least its own set of skills, which, while related to the development of software, are distinct from it.
With a basic set of principles enumerated we will turn to look at some key points in achieving automated tests. For the purposes of this high level tour we will reference these somewhat in the abstract as considerations rather than becoming derailed in details of how to manifest them in a particular codebase.
The challenges that present themselves are in reconciling the following concerns:
- Maintainability
- Coverage
- Performance
It is crucial that tests be written in such a way that they clearly communicate the behaviour they are intending to capture. That is, tests are very much for the reader.
A well written test should succinctly explain a property of the system, and a well structured set of test cases should describe a set of behaviours.
This is related to how well the various behaviours of the code being tested are validated. To illustrate by example, a piece of code is considered covered if all its behaviours are exercised by the test suite - not only its operating correctly and the results thereof, but also its failure conditions.
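A sketch of what covering both the success path and a failure condition might look like, assuming a trivial hypothetical `divide` unit (again using Python's `unittest` for illustration):

```python
import unittest

def divide(a, b):
    # Hypothetical unit with two behaviours: a successful result
    # and a failure condition.
    if b == 0:
        raise ZeroDivisionError("cannot divide by zero")
    return a / b

class DivideTest(unittest.TestCase):
    def test_returns_the_quotient(self):
        # Exercises the success path and its result.
        self.assertEqual(divide(10, 4), 2.5)

    def test_raises_when_the_divisor_is_zero(self):
        # Exercises the failure condition; without this test the
        # error branch would remain uncovered.
        with self.assertRaises(ZeroDivisionError):
            divide(1, 0)
```

Had only the first test been written, the code would exercise fine in the happy case while its failure behaviour went entirely unvalidated.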
Since automated tests in particular are intended to not only be run regularly but also to act as an aid to further development work on the specific code or area to which they relate, it is important that the barriers to their execution remain as low as possible.