In the past 20 years, test automation has become increasingly important within the development process. For each activity in the overall testing process there are tools available: test management tools, bug tracking tools, model-based testing tools and capture & playback tools. And the list grows daily. This is logical: we do not want to fall behind the latest developments in the development process, and we want to keep improving the testing process, speeding it up and innovating as we go. But hasn't automation become a goal in itself, and haven't we forgotten the essence and value of our testing expertise?
If we look back at the former responsibilities of the tester, we see that they have not changed. Naturally, the methods and techniques have, but not the core responsibilities: preparing and executing the testing process with the ultimate goal of providing insight into the difference between the required and the actual status of the test object. It does not matter whether the test is based on a 100-page functional design or on 300 use cases of no more than two lines each. The essence of testing remains the same!
But can we as testers today, with automated testing, still say something about test coverage, and can we still provide a clear understanding of the difference between the required and actual status of the test object? And how can test tools support this?
Take, for example, the test tools that automatically play back test scripts for automated regression testing. Which test cases should we feed into these tools? Were they designed with a test specification technique and related to the different areas of risk? Were they generated automatically from a technical model, using a Model Based Testing tool? Or were they designed by the business, or generated with a scan tool? In other words, what exactly will be tested, and to what depth do these test cases cover the test basis? That is the most important question we should always ask ourselves when using a test tool.
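One way to keep that question answerable is to record, for every automated case, which specification technique produced it and which part of the test basis it covers. Below is a minimal sketch using pytest; the withdraw function, the requirement IDs and the risk labels are hypothetical examples, not part of any particular tool.

```python
# Minimal sketch (pytest). The system under test `withdraw`, the requirement
# IDs and the risk labels are hypothetical illustrations.
import pytest

def withdraw(balance, amount):
    """Hypothetical system under test: returns the new balance."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid withdrawal")
    return balance - amount

# Each automated regression case records the specification technique that
# produced it and the requirement / risk area it covers, so coverage of the
# test basis stays visible after the case disappears into the tool.
@pytest.mark.parametrize(
    "balance, amount, expected, technique, requirement",
    [
        (100,   1,  99, "boundary value analysis",  "REQ-017 (high risk)"),
        (100, 100,   0, "boundary value analysis",  "REQ-017 (high risk)"),
        (100,  50,  50, "equivalence partitioning", "REQ-017 (high risk)"),
    ],
)
def test_withdraw(balance, amount, expected, technique, requirement):
    assert withdraw(balance, amount) == expected
```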
Another example is the Model Based Testing tool. These tools are able to generate test cases from a model. But what kind of model is used? Many of these tools work with technical models or process models. The generated test cases will therefore only relate to the technical or process model, and not to the functional requirements of the system to be tested. If we only use these test cases, we only test the technical implementation of the process and cannot say anything about the functional test coverage.
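To make that limitation concrete: model-based generation essentially walks the model, so the generated cases can never cover more than the model expresses. The sketch below derives one test path per transition from a hypothetical state-machine model of an order process; it is an illustration of the principle, not the output of any specific MBT tool. Anything not in the model, such as functional rules about amounts or customer types, will never show up in a generated path.

```python
# Hypothetical model of an order process as a state machine:
# for each state, the actions that are allowed and the resulting state.
MODEL = {
    "created":   {"pay": "paid", "cancel": "cancelled"},
    "paid":      {"ship": "shipped", "refund": "cancelled"},
    "shipped":   {},
    "cancelled": {},
}

def generate_test_paths(model, start):
    """Generate one test path per transition (simple transition coverage)."""
    paths = []
    # Breadth-first search over reachable states, remembering how we got there.
    frontier = [(start, [])]
    visited = {start}
    while frontier:
        state, path = frontier.pop(0)
        for action, target in model[state].items():
            paths.append(path + [action])
            if target not in visited:
                visited.add(target)
                frontier.append((target, path + [action]))
    return paths

if __name__ == "__main__":
    for path in generate_test_paths(MODEL, "created"):
        print(" -> ".join(path))   # e.g. "pay -> ship"
```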
To summarize: when we use a tool that automates the regression test, the test cases must be prepared with one or more test specification techniques. This is only unnecessary if the test cases within the tool merely supplement an already structured functional test set.
If we use a Model Based Testing tool for our functional test preparation, the models from which the test cases are generated should be functional models. This assumes that the models are a 1:1 translation of the test basis. In that case the generated test cases are genuinely usable functional test cases, which we can in turn reuse within a test tool for automated regression testing.
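As a sketch of that reuse, functionally generated cases can be fed straight into an automated regression run. In the example below the case data, the requirement IDs and the order_status_after function are hypothetical; the point is that each generated case still refers back to the functional requirement it covers.

```python
# Minimal sketch (pytest), assuming the cases were generated from a functional
# model that mirrors the test basis 1:1. All names and IDs are hypothetical.
import pytest

# Generated functional test cases; each one carries the requirement it covers,
# so the automated regression run still reports functional coverage.
GENERATED_CASES = [
    pytest.param(["pay"],           "paid",      id="REQ-021-payment"),
    pytest.param(["pay", "ship"],   "shipped",   id="REQ-022-shipping"),
    pytest.param(["cancel"],        "cancelled", id="REQ-023-cancellation"),
    pytest.param(["pay", "refund"], "cancelled", id="REQ-024-refund"),
]

def order_status_after(actions):
    """Hypothetical system under test: replays actions on a new order."""
    transitions = {
        ("created", "pay"): "paid",    ("created", "cancel"): "cancelled",
        ("paid", "ship"):   "shipped", ("paid", "refund"):    "cancelled",
    }
    state = "created"
    for action in actions:
        state = transitions[(state, action)]
    return state

@pytest.mark.parametrize("actions, expected_status", GENERATED_CASES)
def test_order_process(actions, expected_status):
    assert order_status_after(actions) == expected_status
```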
If you have any questions about this subject, please contact us.
Silvio Cacace