‘One for All’ magic of Test Scripts

It is not necessary to create new test scripts for every test we perform; some test flows are highly reusable and appear in a wide variety of tests. At the same time, the implementation details of test automation diverge quickly down paths defined by the level of testing you're performing. Developers run unit tests in their IDEs, QA testers have their own tools and scripts, and operations teams rely on monitoring, all to make visible what is either broken or performing poorly.

When we say scripts can be reused, consider that many different test cases may begin with the same login sequence. Testing is a technical activity, and test scripts are technical artifacts, so whether a script can be reused is a technical question, not a matter of convenience.

Traditional stages of testing (i.e. unit/integration/interface/system) expose issues with different aspects of how software performs at certain points in the delivery cycle or under certain conditions. Maintaining test scripts is not an easy task. Feature changes that break scripts, code coverage and analysis, the compositionality of tests, and the scalability of scripts all affect how much our automated tests actually simplify our day-to-day software quality efforts.

So, does that mean the implementation details of each testing level genuinely differ from one another?

This question does not come as a surprise, as even experienced testers often fall prey to the 'this should be simpler' notion, which might be a great long-term direction but runs counter to the very reason we test in the first place. The statement usually goes:

“Why can’t I use this [unit/functional] test as a [load/performance] test? Shouldn’t it be as simple as that?”

Testing exists to make complicated things manageable, and checking software thoroughly does help keep it in its simplest form, which saves time and improves the testing itself. In other words, simplicity is a perspective on outcomes, not a technical approach to determining quality.

Take, for instance, the use of functional, interface-based web tests (like those created in TestComplete, Selenium, etc.) to make sure a feature still works before release. Interface tests by their very nature imply an interactive browser or window session, where concepts like 'click' and 'type' apply. In contrast, a load test is typically a representative set of traffic between computers, with no concept of a 'click', just a representation of what happened because of the click. The overhead of the 'interface' as part of the test dramatically increases the cost of each instance of the test you run.

Functional testing also often requires validation based on data retrieved from a back-end system as evidence that the app completed a logical process accurately. However, in a load test, the simple act of retrieving a database value for comparison to the app's results introduces both additional chatter (side traffic to your QA data, etc.) and increased latency (query plus comparison times).

With all these considerations, one realises how painstaking it is to quantify the performance impact of custom scripts: they themselves take time to execute and are hard to separate from the other performance metrics being measured. As expert testers put it, "The more custom script you use to run a test, the more you risk running into complications and hassles." This is why the testing industry typically separates functional testing from load testing, in strategy and in tool sets.

Reducing cost and scalability issues by maintaining different scripts for different stages of testing is reasonable compared with the alternative: unwieldy or incomprehensible scripts and test results.

Advantages of reusing a script:

- Common flows (such as a login sequence) are written and debugged only once.
- Fewer duplicated scripts mean less maintenance when features change.
- Scripts and test results stay readable instead of becoming unwieldy.

So, testers, go on and taste the magic of 'one for all'!

Reference: smartbear.com

