The Need to Perform Functional and Performance Testing Concurrently

Software testing plays a major role in verifying and validating software. There are many aspects to consider while carrying out testing. Functional aspects are usually given more importance than non-functional aspects, and the two are typically tested separately. However, considering both aspects together, in a practical way, can give an added advantage: better product quality.

Let us analyze various scenarios in the software testing life cycle where both functional and performance testing can be performed simultaneously.

Functional testing usually happens in a test environment identical to production. Verification and validation are performed based on the test outcomes, and both are important before any software release.

Bug leakage is a serious issue

The quality of a product lies in the hands of its testers. No one wants defects to leak into production, especially when the product has been functionally tested.

It is often seen that testers emphasize functional testing over performance testing. They may be progressing well on the functional side while performance testing remains neglected. This is usually due to delivery constraints or to test environments that are not an exact copy of production.

Given this disparity between functional and performance testing, one might raise the following concerns:

  • Is there any dependency between functional and performance testing?
  • What if the product is delivered with degraded performance?
  • Can performance testing coexist with functional testing?

There is no solid reason to say that functional testing takes precedence over performance testing. Over the years, it has simply been general practice to ignore non-functional aspects unless they are explicitly required. Moreover, if the software performs well and the client has no complaints about non-functional aspects, performance testing can seem unnecessary.

However, the following two concerns require some serious considerations.

  • Does performance impact the functional testing?
  • If the client is not satisfied with the performance, is it fine to carry out performance testing independently?

 

The importance of performance testing

Software is built on various architectural models, some of which are listed below.

  • Request–response models
  • Transaction models
  • Load-based models
  • Data replication models

The functional behavior of software built on these models depends on the system's performance.

A client often expects multiple testing iterations, which are accomplished by automating the functional testing. The idea is to optimize efficiency through automation, which requires minimal effort while delivering high output.

Understanding a Case Study

Let us analyze a situation where software is produced by one company and tested by another. However, there are limitations on how closely the two companies can work together; any dialogue between them is restricted to once or twice a month. In this situation, the system works on the request–response model.

Once the product had been developed, functional testing was done by the second company, which decided to release the product after two rounds of testing. Although the product was released and certified by the testing team, it came back with a lot of issues and bugs from the client's end.

So, where does the problem lie?

You might speculate on several possibilities, as discussed below.

  • Did the problem lie in the restricted interaction between the two companies?
  • Was the problem in requirements gathering?
  • Was the test environment adequate?

After analyzing the situation carefully, the following conclusions were drawn.

  • The provided test inputs were incomplete
  • The software was weak in terms of robustness
  • Independent applications were not synchronized
  • A lot of rework was done

This prompted corrective actions from the planning team, which suggested the following points.

  • There must be more frequent interaction between the development and testing teams
  • Functional testing must include all dependent applications, interconnected with each other
  • Functional testing must cover inputs ranging from simple to complex
  • The team must take performance testing seriously
  • Testing must be done on the integrated system as a whole
  • Tracked bugs must be re-tested at fixed intervals
  • Each iteration must fix the bugs from the previous one

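The points above, particularly testing the integrated system as a whole and varying the inputs, can be sketched in a single combined check: run the same functional suite at increasing levels of concurrent load and time each run. Everything here is illustrative; `lookup` is a hypothetical stand-in for an integrated call chain across dependent applications, and the load levels are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def lookup(order_id):
    # Hypothetical integrated call chain, e.g. order service -> inventory service.
    return {"order": order_id, "stocked": order_id % 2 == 0}

def run_suite(concurrency):
    """Run the functional suite under a given level of concurrent load."""
    inputs = list(range(50))
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(pool.map(lookup, inputs))
    elapsed = time.perf_counter() - start
    # Functional check: every response is still correct under this load.
    for i, r in zip(inputs, results):
        assert r["order"] == i and r["stocked"] == (i % 2 == 0)
    return elapsed

# Re-run the same functional checks at rising load levels and record timings,
# so functional and performance regressions surface in the same pass.
timings = {c: run_suite(c) for c in (1, 5, 20)}
print({c: round(t, 4) for c, t in timings.items()})
```

A regression in either dimension fails the same run: wrong results trip the functional assertions, while a jump in the recorded timings flags a performance problem, which is exactly the concurrent coverage the remedial points call for.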
This resulted in exposing more bugs in less time. Moreover, it had a positive impact on the system overall.

  • Software quality improved significantly, with optimized test-cycle time
  • Fewer issues surfaced post-implementation
  • Rework was replaced by iterations, each of which improved the product's quality