Project background

This is a general-purpose CRM project. It includes a user portal and two mobile applications, one for internal and one for external CRM end users, with Live Chat and several chatbots that help users resolve delivery problems, place orders, and contact technical support.

Part of the CRM functionality is outsourced to third-party companies.

The Initial Testing Assessment

01.

From a business perspective

From the customer’s business perspective, the most important parts of the CRM are the mobile applications, the chatbots, and the dashboards on the user portal.

Less important are the development of and improvements to the external API, the reporting system, and the company’s internal portal.

02.

In terms of data

In a test environment, all data is created by the tests themselves.
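For example, a minimal pytest sketch, assuming a hypothetical /customers endpoint in the test environment, could create and clean up its own data:

```python
import uuid

import pytest
import requests

BASE_URL = "https://crm.test.example.com/api"  # hypothetical test-environment URL


@pytest.fixture
def customer():
    """Create a fresh customer through the API and remove it afterwards."""
    payload = {"name": f"test-customer-{uuid.uuid4().hex[:8]}", "email": "qa@example.com"}
    response = requests.post(f"{BASE_URL}/customers", json=payload, timeout=10)
    response.raise_for_status()
    record = response.json()
    yield record
    # Delete what the test created so every run starts from a clean state.
    requests.delete(f"{BASE_URL}/customers/{record['id']}", timeout=10)
```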

If problems arise in the production environment, defect root cause analysis is performed there.

Unit tests use a predefined set of test data.

The third-party companies to which part of the functionality is outsourced provide test accounts with a set of test data.

03.

From the application point of view

There are web applications, mobile applications, APIs, and external services that provide interfaces for:

  • mass e-mailing
  • creation of electronic documents using templates
  • IP telephony service

04.

In terms of technology

Unit tests cover each module and are written in the module’s main programming language.

Integration points and the main business scenarios are covered by end-to-end tests for both the web and the mobile applications.
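As an illustration, a minimal end-to-end web scenario, sketched here with Playwright (the project’s actual tooling may differ; the URL and selectors are hypothetical):

```python
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://portal.example.com/login")
    page.fill("#username", "qa_user")
    page.fill("#password", "qa_password")
    page.click("button[type=submit]")
    # Main business scenario: the dashboard must load after login.
    page.wait_for_selector("#dashboard")
    browser.close()
```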

Gap Analysis

After studying the current state of the project, the following gaps were identified:

  1. It is not possible to determine test coverage due to an inconsistent approach to writing and storing test cases. Manual testers use checklists, and test automation developers use Cucumber feature files.
  2. Automated testing in the test environment is not possible because the test data became incompatible after several application updates.
  3. Chatbot testing is almost entirely manual; only a few basic scenarios are automated for the mobile applications.
  4. There is no performance or security testing in the project.
  5. Testers who validate releases on the production environment have access to users’ personal data.

Opportunities and Solutions

After careful analysis of the existing testing processes, the following improvement steps have been proposed; a short illustrative sketch for each step follows the list:

  1. Introduce a unified approach to writing and storing test cases. Establish a transparent link between requirements and test cases, and implement automated execution reporting for the automated tests.
  2. Split the test data into master data (the initial data set in the database needed to start testing) and test data used for testing the mobile applications and chatbots.
  3. Deploy test environments using the Infrastructure-as-Code approach.
  4. Integrate static code analysis tools into the CI/CD pipelines to achieve compliance with security and performance requirements.
  5. Shift chatbot testing to the left: prepare the necessary test data with automated scripts before the chatbot tests start, send chatbot commands through the API, and check the expected results. Redistribute the testing scope: verify the full functionality of the chatbots via the API, and verify the integration and the presence of graphic elements and controls on the mobile devices.
  6. Implement performance testing of the chatbots. To eliminate the influence of mobile network connections, run the performance tests at the API level.
  7. Generate de-personalized test data from the production data and shift testing to the right: use production-like data in the test environments and check the correctness of data updates after each release of new functionality. This also means testers no longer need access to real personal data.
  8. Generate a large volume of test data and start performance testing of the web applications.
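For step 1, one possible convention, sketched with pytest markers and hypothetical requirement IDs, is to tag every test with the requirement it verifies so that execution reports can be grouped by requirement:

```python
import pytest

# Hypothetical convention: each test declares the requirement it covers.
# The marker would be registered in pytest.ini; a reporting plugin such as
# pytest-html or Allure can then publish grouped execution results.
requirement = pytest.mark.requirement


@requirement("CRM-1234")  # "Chatbot places an order"
def test_chatbot_places_order():
    ...


@requirement("CRM-1235")  # "Dashboard shows open tickets"
def test_dashboard_shows_open_tickets():
    ...
```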
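For step 2, the split might look like the sketch below: a shared master-data seed loaded once per environment, with per-test data created on top of it (the table names, fields, and db wrapper are illustrative):

```python
# master_data.py: the initial data set loaded once into a fresh environment.
MASTER_DATA = {
    "countries": ["DE", "PL", "US"],
    "order_statuses": ["new", "paid", "shipped", "delivered"],
    "chatbot_intents": ["delivery_problem", "place_order", "tech_support"],
}


def seed(db):
    """Load the master data into an empty database before any test runs."""
    for table, rows in MASTER_DATA.items():
        db.insert_many(table, [{"value": row} for row in rows])
```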
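For step 3, a full Infrastructure-as-Code setup would typically use a dedicated tool such as Terraform; as a lightweight Python stand-in, a disposable test environment can be declared in code with the Docker SDK (the image and settings are assumptions):

```python
import docker

client = docker.from_env()

# The environment is described in code, so it can be recreated identically
# for every test run and discarded afterwards.
db = client.containers.run(
    "postgres:15",
    name="crm-test-db",
    environment={"POSTGRES_PASSWORD": "test", "POSTGRES_DB": "crm"},
    ports={"5432/tcp": 5433},
    detach=True,
)
```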
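For step 4, a minimal pipeline gate could run an existing analyzer and fail the stage on findings; this sketch assumes Bandit, a Python security linter, and source code under src/:

```python
import subprocess
import sys

# Fail the CI/CD stage if static security analysis reports high-severity issues.
result = subprocess.run(
    ["bandit", "-r", "src", "-lll"],  # -lll: report only high-severity findings
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Static security analysis failed: fix the findings before merging.")
```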
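For step 5, a chatbot scenario tested at the API level could look like the following sketch; the /chatbot/messages endpoint and the response fields are assumptions:

```python
import requests

BASE_URL = "https://crm.test.example.com/api"  # hypothetical endpoint


def test_chatbot_recognizes_delivery_problem():
    # Send the same command the mobile app would send, bypassing the UI.
    response = requests.post(
        f"{BASE_URL}/chatbot/messages",
        json={"user_id": "qa-user-1", "text": "My order #123 was not delivered"},
        timeout=10,
    )
    response.raise_for_status()
    reply = response.json()
    assert reply["intent"] == "delivery_problem"
    assert "#123" in reply["answer"]
```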
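For step 6, API-level performance testing of the chatbots can be expressed with a load-testing tool such as Locust; the endpoint and payload are the same assumptions as above:

```python
from locust import HttpUser, between, task


class ChatbotUser(HttpUser):
    # Simulated users pause one to three seconds between messages.
    wait_time = between(1, 3)

    @task
    def send_message(self):
        self.client.post(
            "/api/chatbot/messages",
            json={"user_id": "load-user", "text": "Where is my order?"},
        )
```

Such a scenario would be started with, for example, locust -f chatbot_load.py --host https://crm.test.example.com, ramping up the number of simulated users until the response-time targets are violated.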
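For step 7, production records can be de-personalized before they reach the test environments; a sketch with the Faker library, assuming a flat customer record:

```python
from faker import Faker

fake = Faker()


def depersonalize(customer: dict) -> dict:
    """Replace personal fields with realistic fake values; keep everything else."""
    return {
        **customer,
        "name": fake.name(),
        "email": fake.email(),
        "phone": fake.phone_number(),
        "address": fake.address(),
    }
```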
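For step 8, a large volume of production-like data can be generated with the same library and fed to the web-application performance tests; the CSV layout is illustrative:

```python
import csv

from faker import Faker

fake = Faker()

# Generate 100,000 production-like customer records for the load tests.
with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["name", "email", "city", "registered_at"])
    for _ in range(100_000):
        writer.writerow(
            [fake.name(), fake.email(), fake.city(),
             fake.date_time_this_year().isoformat()]
        )
```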