The most important factor that drives test automation is the short development cycle. Agile teams have only a few weeks to get a grasp of the requirement, make the code changes, and test the changes. If all testing were to be done manually, the time required would surpass the actual development time. Alternatively, testing would have to be hurried, thus compromising on quality.
While working on test projects, I have seen many different reasons why automated testing is not used or performs inefficiently. These observations are not unique to me: the global analysis performed by Capgemini (available in the World Quality Report 2019-2020) confirms them. In this short article, I will share my experience, analyze these reasons one by one, and suggest solutions for specific situations.
Automated tests are written, but not run regularly
In fact, this is a very common case. There are several reasons why automated testing is not performed regularly:
Required test data is not present
Data on the test environment is not updated after the release of new functionality
Tests are not yet adapted for new releases (e.g., application interfaces have changed)
Related functionality has not been completed yet
In this case, the failure to run the tests is not caused by technical reasons, as it might seem at first glance, but by flaws in the project processes.
We can analyze these reasons in terms of project processes.
Required test data is not present
This problem suggests that the project does not have a general Testing Strategy that would cover:
Test Data Management in general
Reusing test data on all non-production environments
Test Data Set definition
Allowed testing types on production environments
How to retest defects discovered in production environments that are dependent on the data.
Data on the test environment is not updated after the release of new functionality
This indicates a problem in the development processes. Possibly, there is an issue with the execution of schema migration scripts during database deployment. Schema versioning support and automatic migration are what save many projects from firefighting during releases.
Another possible reason is the failure to provide backward compatibility during development. This is also an issue in development processes.
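To make the idea concrete, here is a minimal sketch of a versioned migration using Alembic, one common Python migration tool (the choice of tool, and the table and column names, are my own assumptions for illustration):

```python
"""Versioned schema migration: add a discount_percent column to orders.

A minimal Alembic sketch; the table and column names are hypothetical.
Keeping every schema change in a versioned, replayable script like this
lets the test environment catch up automatically after each release.
"""
from alembic import op
import sqlalchemy as sa

# Alembic revision identifiers (hypothetical values).
revision = "0042_add_discount_percent"
down_revision = "0041_create_orders"


def upgrade() -> None:
    # Added as nullable first, so already-deployed code that does not
    # know about the column keeps working (backward compatibility).
    op.add_column(
        "orders",
        sa.Column("discount_percent", sa.Numeric(5, 2), nullable=True),
    )


def downgrade() -> None:
    op.drop_column("orders", "discount_percent")
```

Because every schema change lives in a replayable, versioned script, the test database can be brought up to date automatically as part of each deployment instead of being patched by hand.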
Due to constant changes, there is no time to write automated tests for new functionality
Active development is underway and it is necessary to constantly adapt existing tests to the changed interfaces. In this case, several solutions are available:
The first solution: transfer as many automated tests as possible from the level of system tests (GUI, end-to-end tests) to the level of component tests.
For example, it is possible to test all API service requests using mocks in an isolated environment, preferably with “production-like” test data. Ideally, component testing is a required step in the CI/CD process.
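As an illustration, an isolated component test of this kind might look as follows in Python with the responses library, which mocks HTTP calls made via requests (the endpoint, function under test, and payload are hypothetical):

```python
import requests
import responses


def get_user_status(user_id: int) -> str:
    """Unit under test: calls a downstream service (hypothetical URL)."""
    resp = requests.get(f"https://users.internal/api/v1/users/{user_id}")
    resp.raise_for_status()
    return resp.json()["status"]


@responses.activate
def test_get_user_status_active():
    # Mock the downstream service so the test runs in isolation,
    # with no deployed environment or fragile GUI involved.
    responses.add(
        responses.GET,
        "https://users.internal/api/v1/users/42",
        json={"status": "active"},
        status=200,
    )
    assert get_user_status(42) == "active"
```

Tests like this run in milliseconds and do not break when the GUI changes, which is exactly why they belong in the CI/CD pipeline.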
The second solution: change the approach to developing the automated tests themselves. The ideal option is to regenerate the code (or a part of the code) of the tests from the changed interfaces. So far, complete code generation is an unattainable dream, but a correctly developed test framework allows us to quickly identify changes and adapt the tests quickly (within two days, even for a very large number of changes).
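One building block for quickly identifying interface changes is comparing the current API contract against the snapshot the tests were written for. Here is a minimal sketch, assuming the service publishes an OpenAPI spec (the file names are hypothetical):

```python
import json
from pathlib import Path


def list_operations(spec: dict) -> set[tuple[str, str]]:
    """Flatten an OpenAPI spec into (HTTP method, path) pairs."""
    return {
        (method.upper(), path)
        for path, item in spec.get("paths", {}).items()
        for method in item
        if method.lower() in {"get", "post", "put", "patch", "delete"}
    }


def diff_specs(old_file: Path, new_file: Path) -> None:
    """Print the endpoints that appeared or disappeared between releases."""
    old = list_operations(json.loads(old_file.read_text()))
    new = list_operations(json.loads(new_file.read_text()))
    for op in sorted(new - old):
        print("added:  ", *op)
    for op in sorted(old - new):
        print("removed:", *op)


if __name__ == "__main__":
    # Hypothetical file names: a snapshot committed with the tests,
    # and the spec published by the current build.
    diff_specs(Path("openapi.baseline.json"), Path("openapi.current.json"))
```

Running such a check on every build surfaces the exact endpoints whose tests need adapting, instead of letting the whole suite fail at once.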
The project is short-term. There is no point in covering it with automated tests
Any project is subject to change. Manual regression testing after each development iteration also takes time. I always suggest automating the Acceptance Test suite as a necessary minimum.
In addition, on the basis of this automated suite, load testing can be carried out, which is often required even for the smallest projects.
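For example, an automated acceptance scenario can be reused almost verbatim as a load script. A minimal sketch using Locust, a popular Python load-testing tool (the endpoints and payloads are hypothetical):

```python
from locust import HttpUser, task, between


class AcceptanceFlowUser(HttpUser):
    """Replays the steps of a (hypothetical) acceptance scenario under load."""

    wait_time = between(1, 3)  # think time between user actions

    @task
    def view_catalog(self):
        # Hypothetical endpoint taken from the acceptance suite.
        self.client.get("/api/v1/products")

    @task
    def place_order(self):
        self.client.post("/api/v1/orders", json={"product_id": 1, "qty": 1})
```

Run with, for example, `locust -f load_test.py --host https://test.example.com` to simulate many concurrent users of the same flow.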
Test scenarios are too complex to be automated
I have become convinced that, in many cases, the complexity of a test scenario is directly proportional to the need for its automation. There are various reasons why scenarios can be difficult:
The complexity of test data preparation
The complexity of conducting the scenario itself
If the test data preparation process is complex, automating it is all the more beneficial.
If complex test data is prepared manually, there is always a chance that there is an error in the data itself. In this case, a lot of time is spent investigating the cause of the defects. In addition, if something happens to the environment or the database itself, manual data recovery is a very time-consuming job. The best way I’ve come across is the automated creation of test data immediately after the deployment of the databases as part of the CI/CD process. In my experience, it is always possible to automate even the most sophisticated data preparation activities.
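As a sketch of that last point, a seeding script that the CI/CD pipeline runs right after database deployment might look like this (SQLite stands in for the project's real database here; the schema and data are hypothetical):

```python
"""Seed script run by the CI/CD pipeline right after database deployment.

A minimal sketch; the table, columns, and data are hypothetical.
Re-running it always leaves the environment in a known state, so broken
test data can be restored in minutes rather than recreated by hand.
"""
import os
import sqlite3  # stand-in for the project's real database driver

SEED_USERS = [
    ("alice@example.com", "premium"),
    ("bob@example.com", "basic"),
]


def seed(db_path: str) -> None:
    conn = sqlite3.connect(db_path)
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "CREATE TABLE IF NOT EXISTS users (email TEXT PRIMARY KEY, plan TEXT)"
        )
        # Idempotent insert: safe to run after every deployment.
        conn.executemany(
            "INSERT OR REPLACE INTO users (email, plan) VALUES (?, ?)",
            SEED_USERS,
        )
    conn.close()


if __name__ == "__main__":
    seed(os.environ.get("TEST_DB_PATH", "test.db"))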
I also have experience automating complex testing scenarios, including those that are usually performed manually. There are different challenges that may arise in test scenarios, rendering them difficult to automate at first glance. Overcoming them is possible, however, as illustrated in the list of examples below:
Challenge: Each subsequent step of the scenario is performed a few days after the previous one, and every night a scheduled server task that modifies test data is executed.
Solution: The testing team, in cooperation with developers and DevOps, creates a procedure to “skip time” the required number of days forward, and the automated test framework obtains the access needed to start server jobs at any time. The testing is then carried out in these steps: skip to the required date -> perform the necessary testing operations -> start the server job (several times, if needed) -> check the expected results.

Challenge: It is required to check how the system handles errors received from other services (e.g., a service timeout, or the service not responding).
Solution: The automated test framework obtains the access needed to start and stop services on the server. The testing is then carried out in these steps: stop the service -> send the request -> check the expected error (see the sketch after this list).

Challenge: It is required to check the branched logic of chatbot answers in a mobile application.
Solution: The real challenge is finding a suitable chatbot interface (the GUI is usually not optimal for automation) on a custom-developed chatbot; in my case, a plain TCP connection was used. Automating chatbot tests extends test coverage to the most valuable business scenarios.

Challenge: It is required to check how server tasks scheduled for a specific run time are executed.
Solution: An external API is created to manage such jobs, making it possible to trigger server job execution at any time.
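Here is a minimal sketch of the second pattern above, assuming the test host may control the dependent service via systemctl (in practice this is often an SSH call or a DevOps-provided API); the service name, endpoint, status code, and error payload are all hypothetical:

```python
import subprocess

import requests

SERVICE = "payments-stub"  # hypothetical name of the dependent service


def set_service_state(running: bool) -> None:
    # Assumes the test user is allowed to control services via systemctl.
    action = "start" if running else "stop"
    subprocess.run(["sudo", "systemctl", action, SERVICE], check=True)


def test_dependency_outage_is_handled_gracefully():
    set_service_state(False)  # simulate the downstream service being down
    try:
        resp = requests.post(
            "https://app.internal/api/v1/pay",  # hypothetical endpoint
            json={"amount": 10},
            timeout=30,
        )
        # The application should report a clean, expected error,
        # not crash or hang.
        assert resp.status_code == 503
        assert resp.json()["error"] == "PAYMENT_SERVICE_UNAVAILABLE"
    finally:
        set_service_state(True)  # always restore the environment
```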
Summary
In short, the development of automated tests can be sped up by changing the approach to writing them. Often, a fresh look at the test code, combined with experience from many projects, can accelerate the development and support of tests several times over.