1. Introduction
The development of computer science and software engineering, together with the increasing use of artificial intelligence and data mining technologies, has led to a wide range of applications that are critical to operations in business, healthcare, and education. Software development is a complex and expensive process, prone to errors and subsequent failure to meet user requirements [1]. Organisations, therefore, invest significant resources to ensure that software products are tested against set criteria, ensuring that they are of the best quality before being released to their clients and users [2]. Traditionally, testing has been a manual process, involving humans executing applications and comparing their behaviour against certain benchmarks. However, advances in technology and the constant desire to improve quality have introduced and increased the use of automated testing, which uses computer algorithms to detect bugs in software applications [3]. Automated testing can generally be categorised into two types: frameworks in which tools execute manually written test cases, and frameworks in which the tools automatically generate test cases. In this study, we focus on the first type, in which test cases are created manually. The phrase automated testing is used throughout the rest of the paper to refer to instances of automated testing that involve the manual creation and automated execution of test cases.
A key aspect of all software development processes is that they each include testing phases at different points in the development cycle. For example, the Waterfall process has a distinct test phase after development has taken place [4]. Another example is Agile, an iterative development process with repeated testing phases [5]. Although there are processes involving iterative and concurrent testing, many development processes assume users can specify a complete set of requirements in advance, ignoring the fact that requirements evolve as the project progresses and change with the client's circumstances. Manual testing and correction of errors, as well as the integration of changes, is feasible in small projects, as the code size is easy to manage. However, as client requirements change or more requirements are added, projects grow in complexity, yielding more lines of code and a higher probability of software faults (commonly called bugs). This results in the need for more frequent manual software testing. Consequently, there has been a shift to more flexible methodologies that combine testing with the completion of each phase to identify software problems before progressing to the next phase.
Automated software testing has many well-established benefits [6,7]; however, several organisations are still not using automation techniques due to the knowledge required, legacy challenges, and reluctance to change [8,9]. The results of the 2018 SmartBear State of Testing Report survey on test automation identified that automation is not yet as common as organisations desire [10]. Many factors still hinder its uptake and use, such as challenges in acquiring and maintaining expertise, cost, and selecting the correct testing tools and frameworks. Although previous studies present reasons why automated testing may not be used, there is an absence of literature focusing on different job roles and levels of experience and how they relate to factors preventing the adoption of automated testing. There is also ongoing research and debate among academics and professionals as to the merits of automated testing over traditional manual testing methods [9,11,12,13,14]. This research paper presents an empirical study to gain an understanding of the different attitudes of employees working in the software industry. The particular focus of this research is to understand whether there are common patterns surrounding different roles and levels of experience. Furthermore, this research aims to identify common reasons why automation is not being used.
At the end of this research, the following question will be answered: Do common themes emerge when investigating opinions on why automated testing is not used, with the focus being on the job role and level of experience? To answer this research question, a twenty-two-question survey has been created to collect attitudes toward automated testing (AtAT) from employees working in the software testing industry. The data are then thoroughly analysed by using quantitative techniques to determine key patterns and themes.
This paper is structured as follows. Section 2 presents and discusses existing work, grounding this study in the relevant literature. Section 3 describes and justifies the process adopted in this paper, which includes a two-stage analysis approach; this section also presents and discusses the results of the study in detail, identifying common themes relevant to the objective of this study. Section 4 provides a summary of key findings and discusses how these findings motivate future work. Finally, Section 5 concludes the work. The full set of participant responses is available in Appendix A.
4. Discussion and Findings
The purpose of this study was to identify key themes when investigating opinions on why automated testing is not used. Through this two-stage study, the nature of the relationship between a set of predictors, including software characteristics and nonsoftware issues, and practitioner support for or opposition to AT adoption was investigated. In this spirit, scholars have found that AT characteristics, e.g., functionality, usability, and adaptability, can have a strong effect on practitioners' support or opposition. In particular, we sought to test these predictors in different scenarios to gain an understanding of how the perceptions of individuals operating in different roles and with different levels of experience differ. To that end, it has been established that there are key identifiable patterns in attitudes toward automated testing among employees who assume different roles and have different levels of experience. These key findings can be used by employers in the software industry to better understand the viewpoints of their employees.
Based on the values in Table 6, the responses for technical roles are asymmetric, as technical roles believe that the reasons for not adopting AT are due to nonsoftware factors. However, the responses for nontechnical roles are symmetric, agreeing that both nonsoftware and software reasons are preventing adoption. We deduce that this could be down to the following reasons: (1) the questions on nonsoftware factors relate to cost, which all (i.e., not just nontechnical) employees agree with; (2) based on common practice in the IT sector, technical employees are often promoted to nontechnical (managerial) roles, meaning that they hold both technical and nontechnical attitudes; and (3) nontechnical staff might have less understanding of how capable technical people are, i.e., management lacks understanding of their employees' skills.
Based on the combination of the comprehensive basic analysis and the principal component analysis, we draw the key findings presented in the remainder of this section. Throughout this section, the original questions and their responses are cross-referenced by adding the question number in parentheses (e.g., q3 for question 3). Optional free-text responses provided by participants are analysed alongside the previously discussed quantitative information; since this section aims to establish key findings from the data, they are used to support the quantitative patterns. The complete free-text responses provided by 19 of the participants, together with a summary of their key themes, can be seen in Table A1.
Summary Point 1. Although technical employees are more likely to believe that testers require a high level of expertise and that open-source tools are challenging, they do not identify this as a factor preventing adoption. By contrast, nontechnical roles agree that an absence of expertise is preventing the use of automated testing.
When participants were asked whether they believe that a lack of skilled resources is preventing automated testing from being used, it is evident that management staff believe this to be true, while those with more technical expertise do not (q3). This is an interesting finding, as it confirms that technical and nontechnical staff perceive differently whether a lack of skilled resources is a prevention factor. This could be because nontechnical staff are unable to judge the expertise required and match it against the capabilities within their organisation. It could also be because technical staff overstate their ability without having significant experience in automated testing.
It is also evident that participants do not believe that automated testing is underused because people fail to realise its benefits (q9). Furthermore, technical roles do not believe that there is an issue with open-source tools; however, less technical roles are more likely to support this argument (q12). Additionally, there is a weak indication that those with technical expertise believe that a high level of expertise is required (q13). It is, however, evident that the majority of the participants believe that strong programming skills are required to undertake automated testing (q14). Nevertheless, when relating this to the results of the principal component analysis, it is evident that technical staff do not believe that technical reasons prevent the use of automated testing.
It is perhaps not too surprising that technical roles are more likely to believe that a high level of expertise is required. This is because they work closely with the technology and will have a comprehensive understanding of what knowledge is required. However, as demonstrated, technical roles are less likely to believe that skilled resources are preventing the use of automated testing, as they have already gone through the learning process and become competent testers. On the contrary, management is more likely to weigh the capability within their organisation against what is to be delivered; therefore, a lack of skilled resources might refer to there being insufficient resources available to deliver a project on time, rather than an absence of expertise preventing thorough software testing. In contrast, it is possible for those in technical roles to report that they have the expertise required to utilise automated testing tools; however, this raises the question of why the tools are not always used if the necessary expertise is available.
In terms of comments provided by the participants, 7 of the 19 responses were directed at the necessity and lack of expertise. All 7 of these responses, shown in Table A1, were provided by individuals performing technical roles. Interestingly, all agree that technical knowledge is important, but some draw attention to the fact that there is a lack of training and mentorship within testing roles. One response even highlights the importance of individuals being able to learn the necessary skills independently. A couple of responses also directly state that the management of people is extremely important in removing skill and expertise gaps, resulting in a more thorough and robust testing process.
Summary Point 2. Those with less experience are more likely to agree that individuals do not have enough time to participate in automated testing. Furthermore, the least experienced employees, together with those holding greater management responsibilities, agree that automated tests are time-consuming to learn.
Whether individuals have enough time to perform automated testing is polarised, with an even split between agreement and disagreement. However, those in more junior roles are more likely to agree with this statement (q4). Furthermore, when considering how difficult automated tests are to learn, the majority of people disagree that they are time-consuming to learn; however, the least experienced employees tend to agree, as do managers and CEOs (q15). This agrees with the results of the principal component analysis, whereby technical staff were identified as agreeing that nontechnical reasons are behind the failure to adopt automated testing.
This finding is consistent with the fact that workloads and deadline pressures differ between organisations and that people respond to and handle these pressures differently. The fact that junior employees are more likely to state that they do not have sufficient time to perform automated testing duties is explained by the fact that junior employees might take longer to perform testing duties. This could also be due to a lack of experience while the employee is learning the new skills necessary for their role, which could slow down testing. It is also possible that those with less experience are burdened with learning the knowledge and expertise needed to perform their entire role and therefore have little capacity to take on improvement activities. This may change as an individual gains more experience, becoming more efficient in their role and creating more space for learning and improvement.
Summary Point 3. Most of the participants agree that automated testing is expensive, with nontechnical roles more likely to agree that automated tests are expensive to use and maintain.
Most of the participants agree that commercial tools are expensive to use, but there is no discernible pattern (q11). However, there is a weak tendency for management roles to agree with the statement that test scripts are more expensive to generate (q19). This is further compounded when nontechnical roles agree that there are high maintenance costs for test cases and scripts (q20). This agrees with the presented principal component analysis, as technical employees agree that nontechnical reasons are responsible for not adopting automated testing, while nontechnical roles are split between believing that software and nonsoftware factors are responsible.
It is not surprising that most participants agree that automated testing is expensive. Furthermore, the pattern that management staff agree more strongly with this statement is explainable through their closeness to the financial operations of the business. It is, however, quite surprising that managerial staff believe that automated testing has high maintenance costs, since a fundamental aspect of automated testing is its reusability and ease of maintenance. This difference in perspective is likely to originate from management's limited understanding of these fundamental aspects of automated testing.
The comments provided by the participants also reflect the view that automated testing is expensive to perform and maintain, largely due to the cost of the testing team. One participant (#75) states that management does not see the amount of time wasted in automated software testing, and this could explain why nontechnical roles agree that automated tests are expensive to maintain: if they saw the amount of wasted time, they might have a better understanding of the true cost.
Summary Point 4. All but the most experienced employees disagree that automated testing tools and techniques lack functionality. Furthermore, experienced employees are more likely to disagree that problems arise due to fast revisions, whereas those with managerial roles agree.
When considering whether automated testing tools and techniques lack functionality, the more experienced employees are, in general, likely to agree, but overall the majority disagree (q16). When asked whether they believe that automated tools are reliable enough, there was a very strong tendency to disagree (q17). There is slight agreement that automated testing tools lack support for testing nonfunctional requirements (q18). When asked whether automated testing tools and techniques change too often, introducing problems that need fixing, the general trend is that a greater number of years of experience leads to an increased chance of disagreement; across the response categories, nontechnical roles agree or strongly agree (q21). This aligns with the findings of the principal component analysis, where nontechnical roles more strongly believe that software reasons are preventing the use of automated testing, whereas those in technical roles believe the issues are nonsoftware.
The reason why more experienced employees disagree that automated testing tools and techniques lack functionality is most likely that they have either fully mastered the tools or developed sufficient workarounds or alternative techniques. Furthermore, experienced staff do not believe that updates cause significant problems, which could be put down to their experience in handling revisions within automated testing frameworks. Nonfunctional requirements are a secondary feature set of automated testing tools and techniques, rather than being integral to their core use; this is most likely why the majority of participants do not see an issue with the lack of support for nonfunctional requirements.
Many comments were received regarding the capabilities of tools and techniques, and in general, they state that the tools, techniques, and frameworks do not lack functionality. Rather, they point to the complexity of tightly integrating the functionality within a project and how this can make it hard to reuse tests and fix revisions. Furthermore, it is evident that technical employees believe that those in managerial roles do not understand what is involved in implementing automated testing. It is also interesting that one response from an individual performing a technical role (#64) states that test scripts breaking is a good sign, as it clearly demonstrates that they are working. A comment from an individual in management (#81) states that product delivery is more important than testing, demonstrating that management's emphasis is on project completion rather than testing.
Summary Point 5. Only managerial staff believe that test preparation and integration inhibit the use of automated testing. Furthermore, only management staff disagree that frequently changing software requirements negatively impact automated testing.
In terms of utilisation, when considering whether difficulties in preparing test data and scripts inhibit their use, only nontechnical staff agree, and there is a balanced response from technical roles (q5). Furthermore, the majority of participants do not believe that a lack of the right automation tools and available frameworks prevents use (q6). When asked specifically whether the difficulty of integrating tools is a problem, nontechnical roles agree, and technical roles are balanced with a slight emphasis on disagreement (q7). The majority of participants also agreed that frequently changing requirements impact use (q8); however, nontechnical roles do not agree. There is also strong disagreement among technical staff that test scripts are difficult to reuse in different testing stages (q22). This finding also agrees with the principal component analysis, where it was identified that nontechnical staff more strongly believe the reasons for not adopting automated testing to be technical.
The fact that nontechnical employees believe that difficulties in both setting up and maintaining automated tests prohibit the use of automated testing tools is most likely down to the disconnect between nontechnical and technical staff in understanding the limitations of software testing. All participants believe that there are sufficient frameworks to meet their individual testing requirements. Interestingly, only management believes that changing requirements do not impact automated testing techniques. This difference most likely originates from a managerial misunderstanding of the impact of changing requirements throughout the software development cycle.
Comments provided by the participants do support the argument that those in testing roles understand the technical complexities involved and why automated testing might not be fully utilised. However, there are too few responses from management staff to confirm whether this is only the point of view of technical staff. Many reasons are given for poor utilisation, ranging from a lack of formal training and guidance, to a preference to view automated testing as secondary to manual testing, to automation being used for the wrong reasons, i.e., to replace manual testing rather than complement it.
Summary Point 6. Whether a lack of support prevents the use of automated testing is polarised.
There is neither clear agreement nor disagreement that a lack of support prevents the use of automated testing, although consultants tend to agree with this statement (q10). This is interesting, as it demonstrates that no majority, either by role or by experience, states that a lack of support prevents them from adopting and using automated testing within their organisation. The responses to this question are rather polarised, with people both agreeing and disagreeing, but overall few hold strong views. This is consistent with the principal component analysis, which determined that nontechnical and technical roles agree (technical roles more strongly) that nonsoftware factors, such as finance, expertise, and time, are preventing the adoption of automated testing.
Comments received from participants detail that training is a common factor limiting uptake, but the biggest theme is that nontechnical staff either do not understand or do not value test automation. As a result, automation is seen as an afterthought to manual testing and thus is not well supported by employers.