Received wisdom holds that IT change programmes do not do enough testing. This view is typically based on analysis of the issues that surface when systems go live, and of ongoing maintenance problems, and it is used to argue that not enough time and money is spent on testing. However, dig into the detail and a different story emerges.
Most tests are viewed as unnecessary by the stakeholders who will operate the system once it has gone live. These are the people who will live with the consequences if serious defects reach the live environment, so received wisdom would expect them to opt for more testing. We have found the exact opposite.
This finding is based on our reviews of the testing on change programmes, in particular the tests that have been written and planned for execution. We include the programme stakeholders in the review and ask them their views on which tests should be run.
The analysis is astonishing. The chart below shows a typical set of results:
For this programme, the leadership decided that the tests deemed to be of no value should simply not be run. And in this case, a small number of tests were added: tests that had been omitted from the suite but were viewed as essential by the stakeholders.
In terms of effort, executing a worthwhile test often takes longer than executing a valueless one, and the newly added essential tests require additional effort. So in this example, although around half the tests were removed, the elapsed time spent testing was reduced by only about 45 per cent.
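The arithmetic behind that gap can be sketched with hypothetical numbers. Every figure below (suite size, counts removed and added, per-test durations) is illustrative, not taken from the programme; the point is only that removing half the tests need not halve the elapsed time when the surviving tests run longer and a few essential ones are added.

```python
# Illustrative sketch: removing ~50% of tests cuts elapsed time by
# less than 50%. All numbers are hypothetical.

total_tests = 1000
removed = 500              # stakeholders deem half the tests valueless
kept = total_tests - removed
added = 25                 # essential tests missing from the original suite

# Hypothetical average execution times (hours): worthwhile tests tend
# to take longer to run than the valueless ones being dropped.
valueless_hours = 1.0
worthwhile_hours = 1.1

before = removed * valueless_hours + kept * worthwhile_hours
after = (kept + added) * worthwhile_hours

reduction = 1 - after / before
print(f"Elapsed testing time reduced by {reduction:.0%}")  # ~45%
```

With these assumed numbers the suite shrinks by half, yet the elapsed time falls by roughly 45 per cent, matching the pattern described above.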
And this is a typical result: where we run workshops with pre-existing tests, the stakeholders usually ask for around half of them to be removed.
Of course, it is possible that these findings don't match your programme. You will know this if the business, technical and programme stakeholders have already got together and reviewed the tests and the testing suite is agreed by all parties.
If you haven't done this review, you are almost certainly wasting time and money. Naturally, there are variations. Over the last ten years at Acutest, our colleagues have worked with more than a million tests on hundreds of projects, and project size and structure have a clear impact on the efficiency of the testing:
- Very small projects typically have fewer unnecessary tests.
- Co-located teams perform more efficiently than remote test teams.
- Larger, independent test teams are usually the least efficient.
If you'd like to know more about this topic, or about how to review your programme's testing with a view to reducing time and cost, please contact us and we will be happy to help.