In the past I have glossed over how to calculate complexity for an automated test, because in general it was not the area where people were the most confused or needed the most help. However, after recently speaking on the practice of prioritizing automated tests at ConTEST NYC, I think that providing some more guidance is going to be useful for a lot of test engineers.
First of all, complexity should not be confused with difficulty or with time to implement, although there are similarities. Complexity is what a lot of development teams use when they are calculating sprint or story points, so test engineers are often familiar with it. Since developing automated tests is exactly like developing product features, the same processes should be followed. I am going to break it down a little differently than I would for developing a feature, and use some slightly different terminology that is more test specific.
Development Effort

The first aspect to consider when calculating complexity is the actual development effort. How much time and effort will go into developing this test compared to others? Is it an hour, a day, a week, or more? Assign a value based on how it compares to other tests (low, about average, high, etc.).
Validation Effort

The next step is to consider the effort needed to validate the test. I hope that everyone is validating their automated tests, because skipping this step is one of the leading causes of unreliable tests. And to be clear, validating an automated test does not just mean running the test and checking that it passed. It is the same kind of effort you would put into validating a product feature. In many cases the validation effort may be much higher than the effort to develop the test. I have done whole talks on how to properly validate automated tests.
Modify your original development effort value up or down based on your estimate of the validation effort. Decrease the value if validation will be easier than normal, or increase it if validation is going to require more effort and time than normal.
Third-Party Integrations

Third-party integrations are any systems or services that the feature under test uses in order to function, but that live outside the product codebase and outside of the same development and deployment pipeline. This could be an external service like Facebook, Auth0, or Wayfair, or any of hundreds of other examples. Or it could be a service that is developed in house, but is maintained and deployed by a separate team. These can add complexity if we have to build more robustness into our tests to handle things like the service being down for maintenance, or being less than reliable.
Issues with these third-party services should not keep us from moving our product up the deployment pipeline if our feature is working correctly. Dealing with these issues and questions adds more complexity, so add an appropriate number of points to your complexity estimate to account for it.
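As one sketch of the kind of robustness this can require (the helper and its names are illustrative, not from any particular framework), a test can wrap calls to a flaky external service in a small retry helper so that a transient outage does not fail the run outright:

```python
import time


def with_retries(action, attempts=3, delay=1.0):
    """Call a flaky third-party action, retrying on failure.

    Tries up to `attempts` times with `delay` seconds between tries,
    and re-raises the last error if the service never recovers.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as error:  # in real tests, catch the service's specific errors
            last_error = error
            time.sleep(delay)
    raise last_error


# Hypothetical usage inside a test:
# token = with_retries(lambda: auth_service.get_token(), attempts=5)
```

Even a small helper like this is extra code that has to be written, validated, and maintained, which is exactly why these integrations should bump the complexity estimate.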
Risks / Unknowns
Consider next if there are any special risks or unknowns that would be involved in the development of this test. Perhaps the test requires using some testing techniques that we have never automated before. Maybe we would have to collaborate with another team or developer that is historically difficult to work with. In these cases, add a little to the test complexity to account for it.
Future Maintenance

The last item to consider, but not the least important, is future maintenance. Any test that we automate will require some kind of ongoing maintenance. This could be updating the test in response to product feature changes, or refactoring the test code as we add new tests in the future. It is work we will have to do, so we need to account for it. Is this test's code likely to need updates more often than usual? If so, account for that with an increase in the test complexity.
So that is it. Start by assigning a score for development effort, then modify it up or down for validation effort, third-party integrations, risks and unknowns, and future maintenance.
I am not going to delve into what scoring scales to use, or how to include test complexity when prioritizing automated tests, since I cover those elsewhere. They are best covered in their own articles.
One final note: do not take these four items as unbendable rules. Bear in mind that your application, your testing framework, and your team culture are different from everyone else's. There may be some extra items unique to your situation that you should consider when calculating test complexity. Go ahead and add those to the list and use them when calculating your test complexities.