
Agile and test automation can at times be like chalk and cheese if they are not adapted to the context on the ground: client expectations, the skills on the team, and process efficiency. Here is a recent experience from one of our testing engagements that I wish to share. The client had brought in a QA director with a clear mandate: automate all the functional regression tests of their flagship product.

Here was the catch: the product was built in an agile Scrum model, where user stories were prioritized and listed but the pipelines were getting built only at the very end of the sprint cycle. This pushed the QA phase to the tail end of the sprint, into a very constrained time window. More often than not, user stories spilled over and were completed only in the next sprint. The usual pattern is that user stories are tested functionally in the same sprint (say, Sprint N) and automated in the next (Sprint N+1). In this product's context, however, most functional tests were being validated and closed only in the next sprint (Sprint N+1), and test automation had to spill over into the one after that (Sprint N+2). The automation team went about automating the tests from scratch, slowly building a foundational framework that covered the key functionalities. The client wanted to see the value of their investment in test automation, yet the functional test team was not leveraging the automation scripts even in later sprints.

We entered the program with an independent assessment of the situation to understand why the automation scripts, though developed, were not being leveraged. It was a bit startling to hear the functional test team's initial comment: they were not using the automation scripts because they felt that if something failed in production, they, and not the automation team, would still be held responsible, so their faith in the automation scripts was low. My first impression was that the functional test team's distrust of the scripts was purely a matter of gut feel.

When we dove deeper into discussions with the functional and automation testers, we realized the problem was much larger and rooted in a lack of process adherence. There were loose ends on both sides (functional and automation teams), which we picked up, prioritized, and used to educate everyone involved about the current pitfalls and the benefits they could all realize if a course correction was made.

The first thing we did was put together a simple traceability matrix mapping each sprint to its user stories and to the functional test cases written for them, with regression test cases clearly tagged. This was carried out by the functional test team, though it involved some backtracking effort. Once this was done, we reviewed the automated test cases and advised the automation team to break the large test methods down into smaller methods that could be reused easily.
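To illustrate the kind of decomposition we advised, here is a minimal sketch in Python. The checkout flow, function names, and data are all hypothetical, not from the actual engagement; the point is splitting one end-to-end test method into small reusable steps that several focused tests can share.

```python
# Hypothetical example: reusable steps instead of one monolithic test
# method that logs in, fills the cart, and checks out all at once.

def login(session, user, password):
    """Reusable step: authenticate and return the updated session."""
    session["user"] = user
    session["authenticated"] = password == "secret"
    return session

def add_to_cart(session, item):
    """Reusable step: add an item to the session's cart."""
    session.setdefault("cart", []).append(item)
    return session

def checkout(session):
    """Reusable step: place the order if the session is valid."""
    if not session.get("authenticated") or not session.get("cart"):
        raise ValueError("cannot check out")
    return {"order_items": list(session["cart"])}

# Small, focused tests composed from the reusable steps.
def test_login():
    assert login({}, "alice", "secret")["authenticated"]

def test_add_to_cart():
    session = login({}, "alice", "secret")
    assert add_to_cart(session, "widget")["cart"] == ["widget"]

def test_checkout():
    session = add_to_cart(login({}, "alice", "secret"), "widget")
    assert checkout(session)["order_items"] == ["widget"]
```

Because each step is a plain function, a failure in checkout no longer masks a failure in login, and later sprints can reuse the same steps in new tests instead of copying an entire scenario.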

Next, we advised the automation team to reorient the test suite so that there was a direct one-to-one mapping between functional test cases and automated regression test cases. The automation team was asked to update the traceability matrix, making the mapping clear from sprint to user story to functional test case to automated test case. Moving on, we made sure the automated tests were triggered and the automated test run reports were published to the functional test team. At the same time, we had the functional test team perform manual regression for those initial pipelines and publish the manual functional test report.
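The traceability matrix described above can be sketched as a simple table. This is an assumed in-memory representation; the column names and the FT-/AT- identifiers are illustrative. The two helper checks mirror how we used the matrix: finding functional cases with no automated counterpart, and verifying the mapping stays one-to-one.

```python
# Minimal sketch of a sprint -> user story -> functional test case ->
# automated test case traceability matrix. IDs are made up for illustration.

traceability = [
    {"sprint": "Sprint 12", "user_story": "US-101",
     "functional_tc": "FT-101-01", "automated_tc": "AT-101-01"},
    {"sprint": "Sprint 12", "user_story": "US-101",
     "functional_tc": "FT-101-02", "automated_tc": "AT-101-02"},
    {"sprint": "Sprint 13", "user_story": "US-107",
     "functional_tc": "FT-107-01", "automated_tc": None},  # not yet automated
]

def automation_gaps(matrix):
    """Functional test cases that still lack an automated counterpart."""
    return [row["functional_tc"] for row in matrix
            if row["automated_tc"] is None]

def is_one_to_one(matrix):
    """True if no automated test case is mapped to two functional cases."""
    automated = [row["automated_tc"] for row in matrix if row["automated_tc"]]
    return len(automated) == len(set(automated))
```

In practice the matrix lived in a spreadsheet, but keeping it queryable like this is what let both teams see, at any point in a sprint, exactly which regression cases were covered by automation.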

We requested that the same exercise be carried out for a couple of regression cycles to validate that the functional and automated test results matched. This built confidence all around that the automated regression suite was indeed stable, robust, and designed to minimize the manual regression effort. When the effort spent on manual functional regression was compared against automated regression, there were clear indications that the manual effort was shrinking, delivering cost savings to the customer.
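The cycle-by-cycle comparison can be expressed as a small diff over the two sets of verdicts. This is a hedged sketch with invented test IDs and results; it simply shows the shape of the check we ran before declaring the automated suite trustworthy.

```python
# Hypothetical verdicts from one regression cycle: the same test cases
# run manually and via the automated suite.
manual_results = {"FT-101-01": "pass", "FT-101-02": "fail", "FT-107-01": "pass"}
automated_results = {"FT-101-01": "pass", "FT-101-02": "fail", "FT-107-01": "fail"}

def mismatches(manual, automated):
    """Test cases where the two runs disagree -- each one is a candidate
    for investigation (flaky script, environment drift, or a genuine bug)
    before the automated suite can replace the manual pass."""
    return sorted(tc for tc in manual if automated.get(tc) != manual[tc])
```

A clean run (an empty mismatch list) for a couple of consecutive cycles was the evidence that let the functional team start trusting the automated regression results.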

Once this was achieved, we moved on to maintaining the automation framework: the functional test team focused only on new user stories in upcoming sprints, while the regression automation team maintained the framework and added test cases from later sprints. Processes were adhered to, such as constantly updating the traceability matrix and keeping a clear mapping at any given point from sprint to user story to test case to automated test case.

Unravelling the basic process issues in the program and owning up to them led to a great deal of trust building with the customer and within the team, which in turn elevated us from a regular run-of-the-mill vendor to a trusted QA advisor.