While working on test projects, I have seen many different reasons why automated testing is not used or performs inefficiently. I am not alone in these observations: the global analysis performed by Capgemini (available in the World Quality Report 2019-2020) confirms them. In this short article, I will share my experience, analyze these reasons one by one, and suggest solutions for specific situations.
In fact, this is a very common case. There are several reasons why automated testing is not performed regularly.
In this case, the failure to run the tests is not caused by technical reasons, as it might seem at first glance, but by flaws in the project's processes.
We can analyze these reasons in terms of project processes.
This problem suggests that the project does not have a general Testing Strategy that would cover such cases.
This indicates a problem in the development processes. Possibly, there is an issue with the execution of schema migration scripts during database deployment. Schema versioning support and automatic migration are what save many projects from firefighting during releases.
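A minimal sketch of such a versioned migration step (using sqlite3 and illustrative file names for brevity; real projects typically rely on dedicated tools such as Flyway or Liquibase):

```python
import sqlite3
from pathlib import Path

def apply_migrations(db_path: str, migrations_dir: str) -> None:
    """Apply numbered SQL migration scripts that have not been applied yet."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS schema_version (version INTEGER PRIMARY KEY)"
    )
    applied = {row[0] for row in conn.execute("SELECT version FROM schema_version")}

    # Scripts are named like 001_create_users.sql, 002_add_status_column.sql, ...
    for script in sorted(Path(migrations_dir).glob("*.sql")):
        version = int(script.name.split("_")[0])
        if version in applied:
            continue  # already applied; reruns during deployment stay safe
        conn.executescript(script.read_text())
        conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
        conn.commit()
    conn.close()
```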
Another possible reason is a failure to maintain backward compatibility during development. This is also an issue in the development processes.
Active development is underway, and existing tests must constantly be adapted to the changing interfaces.
In this case, several solutions are available:
The first solution: transfer as many automated tests as possible from the level of system tests (GUI, end-to-end tests) to the level of component tests.
For example, all API service requests can be tested with mocks in an isolated environment, preferably using “production-like” test data. Ideally, component testing is a required step in the CI/CD pipeline.
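A minimal sketch of such a component-level test (the `fetch_order_status` function and the billing service it calls are hypothetical; the external dependency is injected so the test can replace it with a stub and run in full isolation):

```python
# Hypothetical unit under test: calls an external billing service and
# maps its response to a domain status.
def fetch_order_status(order_id: int, http_get) -> str:
    response = http_get(f"https://billing.internal/orders/{order_id}")
    return "PAID" if response["status"] == "settled" else "PENDING"

def test_settled_order_is_reported_as_paid():
    # The external service is replaced by a stub: no network, no shared
    # environment, so the test survives GUI and deployment changes.
    stub_get = lambda url: {"status": "settled"}
    assert fetch_order_status(42, stub_get) == "PAID"

def test_unsettled_order_is_reported_as_pending():
    stub_get = lambda url: {"status": "open"}
    assert fetch_order_status(7, stub_get) == "PENDING"
```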
The second solution: change the approach to developing the automated tests themselves. The ideal option is to regenerate the test code (or part of it) from the changed interfaces. Complete code generation is still an unattainable dream, but a well-designed test framework makes it possible to identify changes quickly and adapt the tests in a short time (up to two days even for a very large number of changes).
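One possible shape of such partial generation, assuming the service publishes an OpenAPI specification (all names here are illustrative): regenerating test stubs from the spec makes renamed or added endpoints surface immediately as changed stubs.

```python
import json

HTTP_METHODS = {"get", "post", "put", "patch", "delete"}

def generate_test_stubs(openapi_path: str) -> str:
    """Emit a pytest stub for every operation found in an OpenAPI spec."""
    with open(openapi_path) as f:
        spec = json.load(f)

    lines = ["import pytest", ""]
    for path, operations in spec.get("paths", {}).items():
        for method in operations:
            if method not in HTTP_METHODS:
                continue  # skip non-operation keys such as "parameters"
            slug = path.strip("/").replace("/", "_").replace("{", "").replace("}", "")
            lines += [
                f"def test_{method}_{slug}():",
                f'    """{method.upper()} {path}"""',
                "    pytest.skip('generated stub: implement me')",
                "",
            ]
    return "\n".join(lines)
```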
Any project is subject to change, and manual regression testing after each development iteration takes time. I always suggest automating the Acceptance Test suite as the necessary minimum.
In addition, this automated suite can serve as the basis for load testing, which is often required even for the smallest projects.
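For example, a step from the automated acceptance suite can be reused as a load scenario. A minimal sketch with Locust, a Python load-testing tool (the `/dashboard` endpoint is a placeholder):

```python
from locust import HttpUser, task, between

class AcceptanceFlowUser(HttpUser):
    """Replays an acceptance-suite step under concurrent load."""
    wait_time = between(1, 3)  # simulated think time between requests

    @task
    def open_dashboard(self):
        # The same request an acceptance test performs once is now
        # executed by many simulated users in parallel.
        self.client.get("/dashboard")
```

Running `locust -f this_file.py --host https://staging.example.com` then drives the scenario with as many concurrent users as needed.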
I have become convinced that in many cases the complexity of test scenarios is directly proportional to the need for their automation. There are various reasons why scenarios can be difficult.
If the test data preparation process is complex, automating the testing process becomes even more beneficial.
If complex test data is prepared manually, there is always a chance of an error in the data itself, and a lot of time is then spent investigating the cause of the resulting defects.
In addition, if something happens to the environment or the database itself, manual data recovery is a very time-consuming job. The best approach I have come across is automated creation of test data immediately after database deployment, as part of the CI/CD process. In my experience, even the most sophisticated data preparation activities can always be automated.
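A minimal sketch of such a seeding step (sqlite3 and the `users` table are stand-ins for the real database; the essential property is idempotence, so the step can safely run after every deployment):

```python
import sqlite3

# Baseline test data, kept in version control next to the tests.
TEST_USERS = [
    ("alice@example.com", "admin"),
    ("bob@example.com", "viewer"),
]

def seed_test_data(db_path: str) -> None:
    """Recreate the baseline test data set after a deployment."""
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY, role TEXT)")
    # INSERT OR REPLACE makes the step idempotent: rerunning the pipeline
    # restores the data instead of duplicating it.
    conn.executemany("INSERT OR REPLACE INTO users (email, role) VALUES (?, ?)", TEST_USERS)
    conn.commit()
    conn.close()
```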
I also have experience automating complex testing scenarios, including those that are usually performed manually. Different challenges can make test scenarios look difficult to automate at first glance. Overcoming them is possible, however, as illustrated by the examples in the table below (a code sketch of the time-skip approach follows the table):
| Challenge in the test scenario | Possible solution |
| --- | --- |
| Each subsequent step of the scenario is performed a few days after the previous one, and every night a scheduled server task modifies the test data. | The testing team, together with developers and DevOps, creates a procedure to “skip time” the required number of days forward, and the test framework obtains the access needed to start server jobs at any time. Testing is then carried out in these steps: skip to the required date → perform the necessary testing operations → start the server job (several times, if needed) → check the expected results. |
| It is required to check how the system handles errors received from other services (a service timeout, a service that does not respond). | The test framework obtains the access needed to start and stop services on the server. Testing is then carried out in these steps: stop the service → send the request → check the expected error. |
| It is required to check the branched logic of chatbot answers in a mobile application. | The real challenge is finding a suitable automation interface (the GUI is usually not optimal) for a custom-developed chatbot. In my case, plain TCP was used. Automating chatbot tests extends coverage to the most valuable business scenarios. |
| It is required to check how scheduled server tasks execute at their specific run times. | An external API is created to manage such jobs, making it possible to trigger server job execution at any time. |
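As a sketch of the time-skip approach from the first row (every name here is hypothetical: `skip_time`, `run_server_job`, and the `api_client` fixture stand for project-specific hooks agreed with developers and DevOps):

```python
# Hypothetical hooks the test framework obtains from developers/DevOps.
def skip_time(days: int) -> None:
    """Shift the clock the application uses forward by the given number of days."""
    raise NotImplementedError("project-specific: e.g. an internal admin endpoint")

def run_server_job(name: str) -> None:
    """Trigger a scheduled server job immediately instead of waiting for it."""
    raise NotImplementedError("project-specific: e.g. the scheduler's API")

def test_invoice_becomes_overdue_after_grace_period(api_client):
    invoice = api_client.create_invoice(amount=100)   # step under test
    skip_time(days=14)                                # instead of waiting two weeks
    run_server_job("nightly-invoice-status-update")   # the nightly task, on demand
    assert api_client.get_invoice(invoice["id"])["status"] == "OVERDUE"
```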
The most important factor driving test automation is the short development cycle. Agile teams have only a few weeks to understand the requirements, make the code changes, and test those changes. If all testing were done manually, the time required would exceed the actual development time; alternatively, testing would have to be rushed, compromising quality.
Performance testing allows us to predict and monitor the system load in order to optimize infrastructure and development requirements. Our service seamlessly integrates performance testing into your existing testing processes.
We help to organize and coordinate CI/CD processes in the project, find and eliminate pitfalls, and significantly accelerate delivery.
We speed up the development of automated tests by changing the approach to writing them. Often, a fresh look at the test code, backed by experience from many projects, can speed up the development and maintenance of tests several times over.