Review

Synthesizing the Attributes of Computer-Based Error Analysis for ESL and EFL Learning: A Scoping Review

1 Faculty of Education, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
2 Pusat Pengajian Citra Universiti, Universiti Kebangsaan Malaysia, Bangi 43600, Malaysia
3 Faculty of Education, Universitas Pendidikan Ganesha, Bali 81116, Indonesia
4 Faculty of Business and Accountancy, Lincoln University College, Petaling Jaya 47301, Malaysia
5 School of Media and Communication, Taylor’s University, Subang Jaya 47500, Malaysia
* Authors to whom correspondence should be addressed.
Sustainability 2022, 14(23), 15649; https://doi.org/10.3390/su142315649
Submission received: 29 September 2022 / Revised: 17 November 2022 / Accepted: 21 November 2022 / Published: 24 November 2022

Abstract

An error analysis (EA) is the process of determining the incidence, nature, causes, and consequences of unsuccessful language acquisition. Traditional EA techniques for English as a second language (ESL) and English as a foreign language (EFL) lack an orderly investigation of errors, because manual analysis is prone to human error. Consequently, computer-based error analysis (CBEA) was introduced into EA in linguistics to achieve accurate and instant analysis. Although many studies have concluded that CBEA holds numerous strengths, other studies have found that CBEA has certain limitations. However, the strengths and limitations of CBEA have not been clearly synthesized and outlined. Accordingly, this review aims to explore the strengths and limitations of CBEA to identify areas for improvement of computer applications toward an efficient EA procedure. This work also aims to synthesize the strengths and limitations of CBEA mentioned in a variety of articles into a single review to sustain its efficiency and to serve as a guide for teachers to benefit from the strengths and gain awareness of CBEA’s limitations. Stakeholders can thereby access broader perspectives on developing application software capable of addressing the deficiencies in EA and sustain CBEA’s efficiency for the benefit of all. For this purpose, Arksey and O’Malley’s procedure for a scoping review and the PRISMA framework were adopted to guide the filtering and selection of relevant previous studies. Sixty-two articles were selected through the processes of identification, screening, eligibility, and inclusion. The findings revealed seven strengths and six limitations of CBEA; however, CBEA can only perform the diagnostic part of EA. Human intervention is still required to perform the prognostic part to accomplish an efficient EA.

1. Introduction

Conventional teaching strategies are giving way to digital learning as teachers increasingly use technology to create an innovative and motivating environment for students to learn English. Describing this, Brock [1] explained that a computerized text analysis program, or computer-based error analysis (CBEA), has been employed since the 1980s, has evolved through various improvisations, and is now commonly employed for error analysis (EA) in the linguistic field. Brock [1] highlighted that CBEA is known by various names, such as writing analysis programs, text evaluation programs, grammar checkers, spellcheckers, text analysis programs, text checkers, text analyzers, style checkers, and error checkers. This tool can analyze and investigate language production to detect errors, thus providing recommendations to correct any errors. Researchers have demonstrated the various strengths of CBEA and have stated that it allows students to learn in an inquiry-based, constructive learning environment while reducing teachers’ workload. For instance, Chukharev-Hudilainen and Saricaoglu [2] described the functions of CBEA as straightforward, time-saving, and beneficial to increasing enthusiasm for learning English. Furthermore, as stated by Lee and Briggs [3], CBEA is thought to benefit learners and teachers, as learners can see deficiencies in their language use, and teachers can focus on pedagogical development. Teachers of English as a second language (ESL)/English as a foreign language (EFL) have a strong behavioral intention to use technology, because they believe that it will aid them in optimizing their teaching with the most recent material, thus improving teaching quality [4]. Given that teaching and learning have been digitized over the past decades, teachers can employ CBEA for EA to enhance their pedagogical knowledge and provide effective remediation to sustain ESL/EFL learning.
Although it has been demonstrated that CBEA has numerous strengths, certain limitations have also been discovered, and CBEA’s output is claimed to be impeded by imperfections. CBEA is claimed to fail to detect certain errors, such as content and organization errors in learners’ writing [5]. In addition, CBEA can only detect lower-level errors, whereas higher-level errors, such as errors in figurative language and complex sentences, are not detected [5]. Although CBEA can recognize error types and recommend how to alter incorrect words, it cannot supply missing ones. The strengths and limitations of CBEA must therefore be identified to improve and sustain its efficiency in the future.
Although the strengths and limitations of CBEA have been discussed separately in individual studies, reviews on this topic are limited, and the strengths and limitations of CBEA are scattered across different articles. This gap motivated us to carry out a scoping review of previous studies of CBEA and of researchers’ findings on its effectiveness, so as to identify the strengths and limitations of CBEA and present them in a single review. Therefore, this scoping review aims to explore the strengths and limitations of CBEA and determine the efficacy of CBEA in accomplishing EA processes. This work also aims to provide some insights into additional software features, such as suggestions for improvements that can help learners learn the language more efficiently. Moreover, this work can serve as a guide for teachers to benefit from the strengths while gaining awareness of CBEA’s limitations. Thus, this review addresses the following review question: What are the strengths and limitations of CBEA in the ESL/EFL learning context, and how efficiently can CBEA accomplish error analysis (EA) processes?

2. Background: Evolution of CBEA with the Advent of Technology—Three Strands of CALL Theory

Technology has long played a role in EA: computer-assisted language learning (CALL) was first introduced in the 1960s and has evolved over time, and when computer-based error analysis arrived in the early 1990s, error analysis was given a new dimension. When human resources are scarce, the number of documents is large, and the need to analyze learners’ errors is pressing, manual error analysis becomes impractical. CALL initially focused on language input and feedback. Traditional EA techniques lack an orderly investigation of errors, because errors are analyzed manually, and human errors are likely to occur in such monotonous tasks [6]. The first phase of CALL was developed in the 1950s and put into practice in the 1960s and 1970s. It was based on Behaviorist Theory, which placed an emphasis on repetitive language exercises [7]. This theory emphasized language proficiency, with most attention given to grammar and focus-on-form exercises. Behaviorist CALL claimed that a computer can perform repetitious actions with the same linguistic material without growing weary or making mistakes while also providing immediate feedback. Such material could be presented individually by the computer, enabling learners to work at their own pace and ensuring their success.
Following Behaviorist CALL, the dominant educational theory of the 1970s and 1980s was Communicative Theory [8]. John Underwood, the main proponent of Communicative CALL, laid down a set of principles that can be linked to CBEA [8]. The first principle stressed that CBEA should allow learners to concentrate more on using forms than on the forms themselves. The second principle asserted that learners learn grammar implicitly rather than explicitly. The third principle held that CBEA should enable and encourage learners to construct their own sentences rather than modify prefabricated language. The fourth principle stated that CBEA should not evaluate or judge every action taken by the students, nor should it offer them encouragement. Finally, the fifth principle stated that CBEA should be open to a range of student responses and should refrain from telling students that they are wrong.
The Interactive and Integrative Theory of CALL refers to a perspective that aims to better integrate technology and language skills [9]. Developed by Warschauer, Pennington, and Garrett, it is based on two then-recent technological advancements, namely, the Internet and multimedia, which enable students to browse at their own pace [9]. This theory describes the interaction between two devices: one to produce audio or images and the other to control them.
A wide range of software is available today for language learning. Each package has its own range of operations and is designed for a specific purpose. The Conference on Computational Natural Language Learning (CoNLL), for instance, organized a shared task on grammatical error correction aimed at automatically detecting and correcting grammatical faults in essays. Automatic writing evaluation (AWE) tools can identify learners’ writing errors and provide feedback to improve text revision success. Proofreaders are tools for detecting incorrect words, missing commas, capitalization errors, and verb errors. Automated speech recognition is used for English speech recognition, whereas automated computer scoring systems are used for spell checking. Thus, computer software is used for nearly all linguistic purposes, such as proofreading, grammar checking, language translation, spell checking, and writing style, to suit the goals of usage.
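Although the tools above differ in scope, most share a rule- and list-matching core. The following is a minimal, hypothetical Python sketch of that mechanism; it illustrates how rule-based checkers work in principle and is not the implementation of any tool named in this review.

```python
import re

# Hypothetical miniature "text checker" combining a word list (spelling)
# with one pattern-based rule (agreement). Purely illustrative: real tools
# use far larger lexicons and parsers, but the matching principle is similar.
KNOWN_WORDS = {"the", "students", "write", "writes", "essays", "every", "day"}

# Toy rule: plural subject "students" followed by a verb ending in -s
# (e.g., "students writes") is flagged as a subject-verb agreement error.
SV_AGREEMENT = re.compile(r"\bstudents\s+\w+s\b", re.IGNORECASE)

def check(text: str) -> list[str]:
    findings = []
    for word in re.findall(r"[A-Za-z']+", text):
        if word.lower() not in KNOWN_WORDS:
            findings.append(f"possible spelling error: {word!r}")
    for match in SV_AGREEMENT.finditer(text):
        findings.append(f"possible agreement error: {match.group(0)!r}")
    return findings

print(check("The students writes essays evry day."))
# ["possible spelling error: 'evry'", "possible agreement error: 'students writes'"]
```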
Analyzing the errors of second-language learners helps identify their level of proficiency in the target language. Education has evolved as a result of the development of various teaching methods to meet the varying demands of different generations, and teachers can utilize numerous online educational technologies to enhance the efficiency of their teaching and learning processes. Language teachers and researchers have employed computer software in their field and found it a great help in instantly and accurately detecting learners’ errors [10]. Error-detecting software has been developed not only to analyze various writing errors but also to provide feedback to writers or to immediately correct spelling in texts [5]. CBEA can potentially achieve efficient outcomes with higher accuracy and is a promising avenue for future research. Consequently, researchers persistently examine, develop, and update CBEA to reduce its limitations.

3. Materials and Methods

This scoping review was aimed at exploring the strengths and limitations of CBEA. A scoping review is a suitable method for identifying specific characteristics or ideas in studies, as well as surveying, reporting, or discussing these characteristics or concepts [11,12]. This scoping review was able to synthesize the strengths and limitations of CBEA from various articles into a single review.

3.1. Search Strategy

Certain inclusion and exclusion criteria were followed. The EA concentrated only on ESL/EFL learners’ errors. Owing to the rapid advancement of technology, the chosen articles were from the past seven years, 2016 to 2022; articles explaining theories and definitions from earlier years were nonetheless included to provide a thorough overview. All included articles were related to CBEA, and articles that did not fit the aforementioned criteria were excluded. Table 1 describes the selection criteria.
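For illustration, the criteria in Table 1 can be expressed as a simple programmatic filter over a hypothetical reference export; the record fields below are assumptions for the sketch, not data from this review.

```python
# Hypothetical screening filter applying the Table 1 criteria to records
# exported from a database search. The field names ("year", "population",
# "is_empirical", "topic") are illustrative assumptions, not review data.
records = [
    {"title": "Grammarly in EFL writing", "year": 2020,
     "population": "EFL", "is_empirical": True, "topic": "CBEA"},
    {"title": "Spellcheckers for L1 children", "year": 2018,
     "population": "L1", "is_empirical": True, "topic": "CBEA"},
    {"title": "A review of AWE tools", "year": 2021,
     "population": "ESL", "is_empirical": False, "topic": "CBEA"},
]

def include(record: dict) -> bool:
    return (
        2016 <= record["year"] <= 2022              # publication year: 2016-2022
        and record["population"] in {"ESL", "EFL"}  # ESL/EFL learners only
        and record["is_empirical"]                  # review papers excluded
        and record["topic"] == "CBEA"               # must relate to CBEA
    )

print([r["title"] for r in records if include(r)])
# ['Grammarly in EFL writing']
```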

3.2. Charting the Results

This review adopted the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flowchart as a guideline for the article and journal search procedure. The following four phases from PRISMA, as described by Moher, Liberati, and Tetzlaff [13], were employed to collect data for this study: the articles were carefully identified, screened, checked for eligibility, and included in the review. Figure 1 shows the PRISMA flow diagram explaining the steps of identifying the relevant studies involved in this review.
  • The first phase was “identification”, which required the selection and acquisition of materials from databases such as EBSCOhost, Google Scholar, ERIC, and Scopus. The search terms were “error analysis for ESL/EFL”, “second language errors”, “artificial intelligence for ESL/EFL error analysis”, “computer software for ESL/EFL error analysis”, “computer-aided ESL/EFL error analysis”, “technology for ESL/EFL error analysis”, “Grammarly for error analysis”, and “grammar checkers ESL/EFL”. Owing to the enormous number of references generated by the search, the inclusion and exclusion criteria were used to eliminate references that were irrelevant to the study [12].
  • In the second phase, i.e., “screening”, articles related to CBEA for ESL/EFL errors published from 2016 onward were screened by reading the abstracts. This process was aimed at ensuring that the results were trustworthy, as technology is constantly evolving, and CBEA frequently produces new results. The population chosen was mainly ESL/EFL learners’ language production analyzed with software to detect the errors. Given that this review required authentic information on CBEA, only empirical publications were included, and review papers were excluded.
  • In the third phase of the selection, i.e., “eligibility”, full-text articles were reviewed for the eligibility of the findings and information presented in the retrieved resources. Some publications were removed because they offered ambiguous or insufficient information, an issue raised by Munn and co-authors [11]. Some studies were unclear about the software that they employed, and their conclusions did not reflect the use of CBEA.
  • The fourth phase, i.e., “inclusion”, ended the process with a qualitative and quantitative synthesis of the articles to include the most appropriate resources. After evaluating the citations, full articles for those publications that were the “best fit” for the study question were retrieved [14]. Sixty-two papers (full texts) were chosen for inclusion in the review from the original 1839 references (mainly abstracts). Some articles could be ruled out simply by looking at the title or abstract. Table 2 provides a summary of the findings from the selected articles.

4. Results

Sixty-two articles were reviewed to identify the strengths and limitations of CBEA. To answer the first review question, the results show the seven most frequently reported strengths and the six most frequently reported limitations of CBEA. The strengths and limitations were identified by examining the actual patterns of use through a concordance system and statistical charting of the results. Most articles discussed the functions of the software for error analysis and corrective feedback together, because the software can detect errors in addition to providing feedback. Individual studies often revealed more than one strength, such as accurate and instant analysis, or more than one limitation, such as being unable to detect long and complex sentence errors as well as content and organization errors. Accordingly, the total number of supporting mentions exceeds the number of articles reviewed. The subsequent discussion of CBEA’s strengths and limitations is based exclusively on the findings of the articles chosen for this review.
On the basis of the results illustrated in Table 3 and Figure 2, the strengths of CBEA found in the articles were grouped into categories: 15 studies found that CBEA can provide solutions for errors (23.07%), 14 studies indicated that the analysis of CBEA is accurate and precise (21.54%), 10 studies found that CBEA provides instant analysis (15.39%), 8 studies found that CBEA reduces teachers’ workload (12.31%), 6 studies showed that CBEA is easy to use (9.23%), 6 studies showed that CBEA enables iteration (9.23%), and 6 studies found that CBEA can analyze big data (9.23%). Each strength is explained in detail below with the supporting articles and their points.
Based on the results illustrated in Table 4 and Figure 3, the limitations of CBEA found in the reviewed articles were grouped into categories: 15 studies (28.30%) indicated that CBEA is unable to analyze higher-level errors, 11 studies (20.75%) showed that it is unable to identify content and coherence errors, 10 studies (18.87%) found that it provides misleading feedback, 7 studies (13.21%) showed that it may miss certain errors by autocorrecting them, 6 articles (11.32%) highlighted the need for various software packages to conduct a complete CBEA, and 4 articles (7.55%) found that CBEA is more diagnostic than prognostic. The limitations are explained in detail below with the supporting articles and their points.
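The percentage bases are not stated explicitly above; they appear to be the totals of strength mentions (15 + 14 + 10 + 8 + 6 + 6 + 6 = 65) and limitation mentions (15 + 11 + 10 + 7 + 6 + 4 = 53) rather than the 62 articles, since one article can report several attributes. Under that assumption, the reported figures check out:

```latex
% Strength shares over 65 mentions; limitation shares over 53 mentions.
\[ \tfrac{15}{65} \approx 23.08\%, \quad \tfrac{14}{65} \approx 21.54\%, \quad \tfrac{10}{65} \approx 15.38\%, \quad \tfrac{8}{65} \approx 12.31\% \]
\[ \tfrac{15}{53} \approx 28.30\%, \quad \tfrac{11}{53} \approx 20.75\%, \quad \tfrac{10}{53} \approx 18.87\%, \quad \tfrac{4}{53} \approx 7.55\% \]
```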

5. Discussion

5.1. Strengths of CBEA

  • Accurate and Precise: Accuracy and precision are among the prominent features of CBEA, and they are vital for EA. The majority of the articles searched for this review concurred with this statement. CBEA was employed in the form of Grammarly, Moodle, and Criterion software in studies achieving a high level of precision [15,16,17]. Moreover, CBEA surpassed traditional strategies in terms of accuracy [18,19]. Given that accuracy is one of the vital parts of EA, the reliability of CBEA with the COCA software can be trusted [20]. CyWrite produces fast and accurate analyses and provides technical assistance for academic writing [21,22]. Furthermore, studies employing Grammarly and AntConc (Version 3.5.8) showed that CBEA is able to detect errors missed by humans while analyzing large amounts of text [9,23]. The main strength of CBEA is accuracy, as frequently mentioned in most of the studies reviewed. The findings of such studies indicated that the number of errors recognized by CBEA was substantially higher than that found during manual analysis.
  • Ease of Use: CBEA as employed in the form of Grammarly, COCA, Pigai, and WordSmith Tools (6.0) was found to be user-friendly by researchers, requiring little effort to operate the software [20,24,25,26]. Given that written texts were computerized prior to the CBEA processes, the software could automatically analyze the errors, minimizing the effort needed [27,28]. Manual EA requires teachers to check carefully through learners’ writings, and CBEA can alleviate this load when correctly handled.
  • Instant Analysis: CBEA can provide instant analysis, as opposed to manual EA, in which teachers must identify learners’ errors individually, a time-consuming and tedious task. Respondents of the studies that employed Grammarly, CyWrite, and Pigai software expressed their satisfaction with the time spent on the analysis and agreed that CBEA can produce instant analysis [29,30]. CBEA is designed to detect each error and deliver instant analysis with appropriate response alternatives [31,32]. Accordingly, if immediate results are required, the teacher can provide them to learners by adopting suitable tools to analyze the errors [33,34]. This creates a positive effect on the learners and motivates them to initiate corrective action and improve their language use [30].
  • Reducing Teachers’ Workload: Teachers experience tensions or discrepancies between classroom practices and beliefs due to contextual factors, such as time constraints, high-stakes examinations, and prescribed curricula [73]. Studies on Grammarly and Criterion showed that CBEA saves teachers time and allows them to concentrate on further actions on the basis of the EA results [35,36]. This matters because teachers typically devote extra time to carefully examining students’ errors to ensure that none are ignored [19]. Furthermore, teachers must devote a significant amount of time to analyzing a large number of samples, and language teachers can use software to help them effectively manage the work of analyzing students’ writings [5]. Given that CBEA can reduce teachers’ workload and save time, teachers can devote additional time to preparing teaching materials that are appropriate for correcting and improving learners’ errors.
  • Enabling Iteration: In questionable cases, teachers and students can reiterate the EA procedure for clarity, identify the linguistic areas where the most errors are made, and use the feedback to correct those areas [6]. A teacher stated that CBEA as employed in Criterion was helpful in monitoring her students’ written work by encouraging them to repeatedly identify errors and amend their work [36]. Teachers otherwise tend to avoid repetitive analyses due to time constraints [37]. Moreover, teachers can repeat CBEA as employed via n-gram/LSTM models to generate more trustworthy and concrete results that can be used to determine errors [38,39]. CBEA enables teachers and learners to obtain precise data on their errors and their causes by going over a text as many times as necessary.
  • Analysis of a Large Amount of Data: CBEA can analyze large datasets in a short period because the software is designed to handle large amounts of data [40,41]. CBEA takes less time to complete, and the analysis process can be completed more quickly than manual EA; thus, teachers can instantly move on to the next dataset [10]. A corpus-based study used the AntConc software to analyze large datasets and completed the analysis in a short time, and this capability has substantially aided ESL/EFL research [23,42] (a minimal sketch of this kind of corpus-scale pattern counting follows this list).
  • Providing Feedback: CBEA provides not only detailed information about each writing error but also extra writing judgments according to a set of writing objectives. Teachers and learners agreed that CBEA as employed in Moodle, Grammarly, and Inspector software is useful because it allows them to verify grammatical mistakes and instantly correct them [43,44]. Software packages, such as spell checkers, grammar checkers, electronic translators, and machine translation (MT), have helped learners autonomously analyze and revise their written work [44,45,46]. MT can assist learners with individualized feedback that they can relate to their second language translations to aid interpretations and paraphrases throughout the editing process [3]. Grammarly and DIALANG software enhanced learners’ involvement with their tasks and reduced learners’ struggle to overcome errors [47,48]. Long-term usage of CBEA software for EA can enhance learners’ language competency, because they can recognize the reasons for errors as well as solutions to improve them; it can also be an excellent way to help learners successfully learn their second language autonomously [49,50]. The authors of [51,52] found that learners experience convenience and confidence when using Grammarly to correct their errors, thus improving their writing quality.
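As a rough illustration of the corpus-scale pattern counting described under “Analysis of a Large Amount of Data”, the toy Python sketch below tallies one candidate error pattern across several texts at once. It is not AntConc or any other tool reviewed here, and its naive pattern deliberately over-matches, which foreshadows the accuracy caveats discussed in the next subsection.

```python
import re
from collections import Counter

# Toy corpus-scale pattern count: tally every "word-ending-in-s followed by
# word-ending-in-s" pair across a set of texts, a crude stand-in for
# concordance-style searching. Not AntConc or any reviewed tool.
pattern = re.compile(r"\b\w+s\s+\w+s\b", re.IGNORECASE)

corpus = {
    "essay_001.txt": "The students writes essays. My friends likes music.",
    "essay_002.txt": "James runs marathons. Teachers check drafts carefully.",
}

hits = Counter()
for name, text in corpus.items():
    for match in pattern.finditer(text):
        hits[match.group(0).lower()] += 1

print(hits.most_common())
# [('students writes', 1), ('friends likes', 1), ('james runs', 1)]
# Note the false positive "james runs": naive patterns over-flag, one of the
# accuracy caveats raised in the limitations below.
```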

5.2. Limitations of CBEA

  • Inability to Analyze Higher-Level Errors: CBEA is unable to detect errors in long and complex sentences, a limitation that adds difficulty for teachers in explaining the problem to their students. Studies on CBEA as employed in Grammarly, Gamet, and Pigai software demonstrated that it cannot easily detect semantic issues in texts and can only recognize surface-level errors while failing to cover major issues [37,53]. Findings from a study employing n-gram-based software indicated that homophone errors were overlooked, accounting for 16% of spelling errors [54] (see the homophone sketch after this list). CBEA also failed to identify incorrect multiword units or collocations [55,56]. The Grammarly and Pigai systems were confused by long and complex sentences, such as those involving idioms, collocations, and the passive voice [25,35]. CBEA is only effective at detecting errors and providing feedback at the surface level [19,57]. Thus, teachers should adjust the grade and manually recognize the student’s creativity in these situations.
  • Need for Various Software Packages: Another limitation of CBEA is that one application is insufficient to detect all errors in a document; spellcheckers, for instance, are meant to check spelling errors and not grammatical errors. Avila et al. [58] reported that one software package was not enough to obtain the required data for their study. When evaluating the performance of different grammar checking tools, Sahu [59] found that they were unable to detect errors in sentence structures. Crossley [29] highlighted that it is impossible for rule-based software to analyze all errors. Each program has restrictions in terms of analyzing errors because it is designed to detect particular types of errors [60]. The authors of [10] explained that no single feature set can predict a skill across all second-language writing datasets. Thus, CBEA can only be carried out for limited purposes. The authors of [61] asserted that users should be aware that CBEA tools serve a variety of purposes, and learners should carefully select the appropriate tool according to their specific goals.
  • Inability to Identify Content and Coherence Errors: CBEA is unable to judge content appropriateness or the flow of sentences within a paragraph, that is, whether the paragraph is coherent [62]. The reliability of CBEA as employed in Grammarly, Pigai, and Criterion was questioned with respect to the content and organization of learners’ writing [10,17,24]. Sahu [59] highlighted that CBEA remains at a development stage, because it cannot properly evaluate text structure, logic, or coherence. Results of studies on Grammarly, CyWrite, MyAccess, and Write&Improve demonstrated that human involvement in CBEA is necessary to identify errors such as disconnection between the topic and the content, since humans notice when a text lacks cohesion [63,64,65]. CBEA only detects programmed errors, whereas anything that is not in the program will not be detected; consequently, false detections deviate from the purpose of EA [66].
  • Autocorrection: Certain errors are automatically corrected by the software without the author’s knowledge, resulting in erroneous analyses. While explaining reasons for not employing software for EA, Shirban Sasi [67] mentioned that the software may automatically rectify various problems, such as spelling, punctuation, and even word choice, without the researcher’s knowledge. A study by Barrot [32] employing Grammarly required learners to turn off the autocorrect feature as a protocol to prevent the software from prescreening the text. Grammarly highlights errors in red, and students can simply click on these errors for Grammarly to correct them [52]. The authors of [22] described autocorrection as one of the various features of CBEA software that allows learners to autocorrect their errors, and the autocorrect features in software for ESL/EFL learning help learners correct their erroneous written text [68,69]. In studying the perspective of students using software to learn, Yunus and Hua [70] mentioned that the autocorrection feature can help learners correct their writing errors; however, it may not be suitable for error analysis. Thus, while autocorrection can be a strength for ESL/EFL learning, it is a limitation for the EA process, where the analyst is unable to collect a genuine result.
  • Misleading Feedback: CBEA as employed in grammar checkers often provides corrective feedback for erroneous words or sentences. However, researchers employing Grammarly, CyWrite, and Write&Improve found that, on certain occasions, the feedback given could divert the meaning of the sentence; this occurred when the suggested answers did not suit the intention of the sentence [33,45,67]. The feedback provided by Grammarly is not always in line with the intentions of the users [27,42,68]. Shelvam and Bahari [69] claimed that, on certain occasions, the software provides misleading feedback that needs improvement in the future. Furthermore, students must weigh each suggestion against the needs of the sentence, because suggested answers can be accepted or dismissed by users accordingly [35,69]. Systems are often confused by the difference between American and British spelling, whereby the same word can be flagged as erroneous or accepted [58]. Additionally, Musk [71] asserted that whether a spelling is judged correct depends on the language setting, which is not always configured correctly.
  • More Diagnostic Than Prognostic: The authors of [72] described the characteristics of EA as including diagnostic and prognostic aspects, and CBEA is said to be more diagnostic than prognostic. Considering that CBEA analyzes learners’ errors as a whole, it often provides the types of errors [72] but fails to identify their causes. Im and Can [62,72] employed human specialists to interpret the causes of the errors detected by CBEA, highlighting that human involvement is vital to completing CBEA processes. CBEA can accomplish the first three stages of Corder’s five-stage process, namely, collection, identification, and description; however, researchers still need to identify the causes of the errors and explain them to complete the process.
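The homophone finding in [54] is easy to reproduce in principle: a pure word-list spellchecker accepts any in-vocabulary word, so a “there/their” confusion passes silently. The Python toy below assumes a simple dictionary-lookup checker and is illustrative only.

```python
# Why homophone errors evade dictionary-based spellcheckers: both spellings
# are valid words, so a pure word-list lookup flags nothing. Toy example.
DICTIONARY = {"their", "there", "books", "are", "on", "the", "table"}

sentence = "There books are on the table."   # intended: "Their books ..."

flagged = [word for word in sentence.lower().rstrip(".").split()
           if word not in DICTIONARY]
print(flagged)  # [] -- the homophone error passes undetected

# Catching it needs context (part-of-speech or language-model signals),
# which is exactly the "higher-level" capability the reviewed tools lack.
```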

5.3. Review Question 2

To answer the second review question, CBEA could not accomplish all five stages of Pit Corder’s EA process. CBEA can efficiently accomplish the first three stages, namely, collection (collecting dataset), identification (identifying errors), and description (classification of errors). However, CBEA is unable to complete the fourth and fifth stages of the EA process, namely disclosure (identifying the causes of the errors) and evaluation (providing remediation to overcome the errors). The feedback from CBEA can be used to correct erroneous writing, but justification for the correction can only be provided by teachers [25,49]. Teachers can observe the learners’ language use, identify the causes of the error, and provide a remedy. Given this situation, human intervention is needed to accomplish the last two stages. Thus, this review revealed that CBEA needs human intervention to complete the EA process.
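The division of labor described here can be pictured as a pipeline in which CBEA covers the first three stages and the teacher the last two. The Python skeleton below is schematic only; every function name is a hypothetical placeholder, not part of any reviewed tool.

```python
# Schematic of Pit Corder's five-stage EA process as split in this review:
# stages 1-3 are automatable (diagnostic); stages 4-5 require a human
# (prognostic). All function names are illustrative placeholders.

def collect(texts):                 # stage 1: collection of the dataset
    return list(texts)

def identify(texts):                # stage 2: identification of errors (CBEA)
    return [{"text": t, "errors": ["<detected by software>"]} for t in texts]

def describe(analyses):             # stage 3: description/classification (CBEA)
    for analysis in analyses:
        analysis["error_types"] = ["<classified by software>"]
    return analyses

def disclose(analyses):             # stage 4: causes of errors -- human judgment
    raise NotImplementedError("requires a teacher's interpretation")

def evaluate(analyses):             # stage 5: remediation -- human judgment
    raise NotImplementedError("requires a teacher's remedial plan")

diagnostic = describe(identify(collect(["The students writes essays."])))
print(diagnostic)  # automated output ends here; teachers take over stages 4-5
```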

6. Pedagogical Implications

According to the results, CBEA was found to be a helpful method for conducting EA in terms of accuracy and the ability to produce an instant analysis of large datasets. Teachers have time constraints, and error analysis can be tedious and burdensome work for them. Although CBEA was found to be ineffective in identifying higher-level errors, such as errors in passive voice, homophones, content reliability, and complex sentences, it can accurately detect lower-level errors, such as mechanical and grammatical errors, which account for a large portion of ESL/EFL learning. Despite its limitations, CBEA can still be a helpful tool for teachers in reducing their workload by analyzing errors and providing feedback, enabling them to focus on identifying the causes of errors and providing remedial actions. Similarly, Behaviorist CALL argued that a computer can perform repetitious actions with the same linguistic material without growing weary or making mistakes while also providing immediate feedback. Hoang and Kunnan [66] explained that analyzing errors and providing feedback are intended to transform errors into learning opportunities by exposing students to potential errors in their writing and offering them language skills as they progress through the feedback. The principles of Communicative CALL can be used to explain this statement, as learners are able to learn grammar implicitly through the feedback provided. Learners, therefore, begin to focus on the use of forms, hence using their language skills authentically rather than fabricating them via the use of software.
The best option is an integrated pedagogy in which teachers serve as facilitators and students conduct the CBEA to understand their errors and resolve them using the software. For instance, teachers can explain the causes of the errors and the proper method to correct them, and they can ask students to work in groups to generate ideas before writing to overcome content issues. Accordingly, teachers can encourage critical thinking among students in addition to improving their language skills. CBEA can also be a tool for autonomous learning, and teachers should encourage learners to employ it in their learning. Teachers and learners should be aware of the strengths and limitations of CBEA when conducting EA, and teachers should guide learners in independently analyzing their errors, while being mindful of the limitations, to sustain ESL/EFL learning. However, teachers must realize that CBEA cannot detect everything, especially authentic errors, and they should not depend exclusively on it to accomplish the EA process. These limitations need not stop users from employing CBEA; rather, awareness of the strengths and limitations can serve as a guide for exploiting the advantages while avoiding the pitfalls. With the evolution of technology, stakeholders should look into the strengths and limitations of CBEA and develop software to overcome the current limitations. Policymakers should make an effort to introduce CBEA in a school context so that teachers and learners can benefit from it. Thus, CBEA is recommended at all levels of academia to sustain ESL and EFL learning.

7. Conclusions

Unlike manual EA, CBEA is a computer-assisted analysis, and some researchers have claimed that it is not as efficient as expected in sustaining language learning. In light of this, a scoping review was conducted on 62 studies to determine the strengths and limitations of CBEA, focusing on studies that employed various software. The results showed that CBEA can assist teachers in a variety of ways, the most essential of which are saving time and reducing workload. Moreover, CBEA can produce more accurate and precise results, as well as conduct real-time analysis. Furthermore, the availability of diverse software packages for specific purposes allows teachers to select the appropriate software on the basis of their teaching. However, some limitations exist, the most important of which is that teachers should not rely solely on CBEA to complete tasks; teacher participation is critical to avoid false results.
CBEA can perform the diagnostic aspect of EA; however, human intervention is required to perform the prognostic aspect and thereby produce a complete analysis report. The inability of CBEA to determine the causes of errors leads to an incomplete EA process, thus necessitating human involvement to identify the causes of the errors.
Although CBEA has been used in ESL/EFL language production for some time, developments aimed at achieving greater accuracy and more detailed analysis have been beneficial for teachers. Research on CBEA in schools where English is taught as a second or foreign language remains limited; therefore, ESL/EFL teachers and students are not fully aware of the efficacy of CBEA in detecting learners’ writing errors. A CBEA study in schools can be an eye-opener for teachers to adopt CBEA in their teaching practice. Thus, this tool is important for enhancing the pedagogical knowledge of teachers, particularly in understanding and tackling errors incurred by their learners.

Author Contributions

Conceptualization, K.H.T. and R.M.; Methodology, K.H.T.; formal analysis, K.H.T. and R.M.; validation, K.H.T., J.Y., J.C. and P.K.C.; writing—original draft preparation, K.H.T. and R.M.; writing—review and editing, K.H.T. and R.M.; visualization, J.Y., J.C. and P.K.C.; supervision, K.H.T.; and funding acquisition, J.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Brock, M.N. Computerised Text Analysis: Roots and Research. Comput. Assist. Lang. Learn. 1995, 8, 227–258. [Google Scholar] [CrossRef]
  2. Chukharev-Hudilainen, E.; Saricaoglu, A. Causal discourse analyzer: Improving automated feedback on academic ESL writing. Comput. Assist. Lang. Learn. 2014, 29, 494–516. [Google Scholar] [CrossRef]
  3. Lee, S.M.; Briggs, N. Effects of using machine translation to mediate the revision process of Korean university students’ academic writing. ReCALL 2020, 33, 18–33. [Google Scholar] [CrossRef]
  4. Song, S.J.; Tan, K.H.; Awang, M.M. Generic digital Equity Model in Education: Mobile-Assisted Personalized Learning (MAPL) through e-Modules. Sustainability 2021, 13, 11115. [Google Scholar] [CrossRef]
  5. Park, J. An AI-based English Grammar Checker vs. Human Raters in Evaluating EFL Learners’ Writing. Multimed. Assist. Lang. Learn. 2019, 22, 112–131. Available online: http://journal.kamall.or.kr/wp-content/uploads/2019/3/Park_22_1_04.pdf (accessed on 22 June 2022).
  6. Mohammed, A.A.; Al-Ahdal, H. Using Computer Software as a tool of Error Analysis: Giving EFL Teachers and Learners a much-needed Impetus. 2020. Available online: www.ijicc.net (accessed on 24 June 2022).
  7. Warschauer, M.; Healey, D. Computers and language learning: An overview. Lang. Teach. 1998, 31, 57–71. [CrossRef]
  8. Livingstone, K.A. Artificial Intelligence and Error Correction in Second and Foreign Language Pedagogy. In LINCOM Studies in Second Language Teaching; LINCOM: Raleigh, NC, USA, 2012. [Google Scholar]
  9. Garrett, N. Technology in the Service of Language Learning: Trends and Issues. Mod. Lang. J. 1991, 75, 74–101. [Google Scholar] [CrossRef]
  10. Lei, J.-I. An AWE-Based Diagnosis of L2 English Learners’ Written Errors. Engl. Lang. Teach. 2020, 13, 111. [Google Scholar] [CrossRef]
  11. Munn, Z.; Peters, M.D.J.; Stern, C.; Tufanaru, C.; McArthur, A.; Aromataris, E. Systematic Review or Scoping Review? Guidance for Authors When Choosing between a Systematic or Scoping Review Approach. BMC Med. Res. Methodol. 2018, 18, 143. [Google Scholar] [CrossRef]
  12. Peters, M.D.J.; Godfrey, C.M.; Khalil, H.; McInerney, P.; Parker, D.; Soares, C.B. Guidance for conducting systematic scoping reviews. Int. J. Evid. Based Healthc. 2015, 13, 141–146. [Google Scholar] [CrossRef] [Green Version]
  13. Moher, D.; Liberati, A.; Tetzlaff, J.; Altman, D.G. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA statement. BMJ 2009, 339, 332–336. [Google Scholar] [CrossRef] [PubMed]
  14. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef] [Green Version]
  15. Moon, D. Evaluating Corrective Feedback Generated by an AI-Powered Online Grammar Checker. Int. J. Internet Broadcast. Commun. 2021, 13, 22–29. [Google Scholar] [CrossRef]
  16. Sarré, C.; Grosbois, M.; Brudermann, C. Fostering accuracy in L2 writing: Impact of different types of corrective feedback in an experimental blended learning EFL course. Comput. Assist. Lang. Learn. 2021, 34, 707–729. [Google Scholar] [CrossRef]
  17. Aluthman, E.S. The Effect of Using Automated Essay Evaluation on ESL Undergraduate Students’ Writing Skill. Int. J. Engl. Linguistics 2016, 6, 54. [Google Scholar] [CrossRef] [Green Version]
  18. John, P.; Woll, N. Using Grammar Checkers in an ESL Context. CALICO J. 2020, 37, 193–196. [Google Scholar] [CrossRef]
  19. Almusharraf, N.; Alotaibi, H. An error-analysis study from an EFL writing context: Human and Automated Essay Scoring Approaches. Technol. Knowl. Learn. 2022, 1–17. [Google Scholar] [CrossRef]
  20. Satake, Y. How error types affect the accuracy of L2 error correction with corpus use. J. Second Lang. Writ. 2020, 50, 100757. [Google Scholar] [CrossRef]
  21. Feng, H.-H.; Saricaoglu, A.; Chukharev-Hudilainen, E. Automated Error Detection for Developing Grammar Proficiency of ESL Learners. CALICO J. 2016, 33, 49–70. [Google Scholar] [CrossRef] [Green Version]
  22. AlKadi, S.Z.; Madini, A.A. EFL Learners’ Lexico-grammatical Competence in Paper-based Vs. Computer-based in Genre Writing. Arab World Engl. J. 2019, 5, 154–175. [Google Scholar] [CrossRef] [Green Version]
  23. Ang, L.H.; Tan, K.H.; Lye, G.Y. Error Types in Malaysian Lower Secondary School Student Writing: A Corpus-Informed Analysis of Subject-Verb Agreement and Copula be. 3L Southeast Asian J. Engl. Lang. Stud. 2021, 26, 127–140. [Google Scholar] [CrossRef]
  24. Lu, X. An Empirical Study on the Artificial Intelligence Writing Evaluation System in China CET. Big Data 2019, 7, 121–129. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Dodigovic, M. Automated Writing Evaluation: The Accuracy of Grammarly’s Feedback on Form. Int. J. TESOL Stud. 2021, 3, 71–87. [Google Scholar] [CrossRef]
  26. Li, Y. Corpus-Based Error Analysis of Chinese Learners’ Use of High-Frequency Verb Take. Engl. Lang. Teach. 2022, 15, 21. [Google Scholar] [CrossRef]
  27. Cavaleri, M.; Dianati, S. You want me to check your grammar again? The usefulness of an online grammar checker as perceived by students. J. Acad. Lang. Learn. 2016, 10, 223. [Google Scholar]
  28. Mushtaq, M.; Mahmood, M.A.; Kamran, M.; Ismail, A. A Corpus-Based Analysis of EFL Learners’ Errors in Written Composition at Intermediate Level. 2019. Available online: https://www.researchgate.net/publication/330886433 (accessed on 29 June 2022).
  29. Crossley, S. Using human judgments to examine the validity of automated grammar, syntax, and mechanical errors in writing. J. Writ. Res. 2019, 11, 251–270. [Google Scholar] [CrossRef]
  30. O’Neill, R.; Russell, A. Stop! Grammar time: University students’ perceptions of the automated feedback program Grammarly. Australas. J. Educ. Technol. 2019, 35, 42–56. [Google Scholar] [CrossRef] [Green Version]
  31. Kraut, S. Two Steps Forward, One Step Back: A Computer-aided Error Analysis of Grammar Errors in EAP Writing. 2018. Available online: https://repository.stcloudstate.edu/engl_etds/143 (accessed on 2 July 2022).
  32. Barrot, J.S. Using automated written corrective feedback in the writing classrooms: Effects on L2 writing accuracy. Comput. Assist. Lang. Learn. 2021, 28, 1–24. [Google Scholar] [CrossRef]
  33. Wali, F.A.; Huijser, H. Write to improve: Exploring the impact of an automated feedback tool on Bahraini learners of English. Learn. Teach. High. Educ. Gulf Perspect. 2018, 15, 14–34. [Google Scholar] [CrossRef] [Green Version]
  34. Waer, H. The effect of integrating automated writing evaluation on EFL writing apprehension and grammatical knowledge. Innov. Lang. Learn. Teach. 2021, 1–25. [Google Scholar] [CrossRef]
  35. O’Neill, R.; Russell, A.M.T. Grammarly: Help or hindrance? Academic Learning Advisors’ perceptions of an online grammar checker. J. Acad. Lang. Learn. 2019, 13, A88–A107. [Google Scholar]
  36. Li, Z. Teachers in automated writing evaluation (AWE) system-supported ESL writing classes: Perception, implementation, and influence. System 2021, 99, 102505. [Google Scholar] [CrossRef]
  37. Gao, J. Exploring the Feedback Quality of an Automated Writing Evaluation System Pigai. Int. J. Emerg. Technol. Learn. 2021, 16, 322–330. [Google Scholar] [CrossRef]
  38. Santos, E.A.; Campbell, J.C.; Patel, D.; Hindle, A.; Amaral, J.N. Syntax and Sensibility: Using Language Models to Detect and Correct Syntax Errors; IEEE: Piscataway, NJ, USA, 2018. [Google Scholar]
  39. Yannakoudakis, H.; E Andersen, A.; Geranpayeh, A.; Briscoe, T.; Nicholls, D. Developing an automated writing placement system for ESL learners. Appl. Meas. Educ. 2018, 31, 251–267. [Google Scholar] [CrossRef] [Green Version]
  40. White, M.; Rozovskaya, A. A Comparative Study of Synthetic Data Generation Methods for Grammatical Error Correction. In Proceedings of the Fifteenth Workshop on Innovative Use of NLP for Building Educational Applications, Seattle, WA, USA, 10 July 2020; pp. 198–208. [Google Scholar] [CrossRef]
  41. Zhang, Z.V. Engaging with automated writing evaluation (AWE) feedback on L2 writing: Student perceptions and revisions. Assess. Writ. 2020, 43, 100439. [Google Scholar] [CrossRef]
  42. Jin, Y.H. Efficiency of Online Grammar Checker in English Writing Performance and Students’ Perceptions. Korean J. Engl. Lang. Linguistics 2018, 18, 328–348. [Google Scholar] [CrossRef]
  43. Lyashevskaya, O.; Panteleeva, I.; Vinogradova, O. Automated assessment of learner text complexity. Assess. Writ. 2020, 49, 100529. [Google Scholar] [CrossRef]
  44. Jayavalan, K.; Razali, A.B. Effectiveness of Online Grammar Checker to Improve Secondary Students’ English Narrative Essay Writing. Int. Res. J. Educ. Sci. 2018, 2, 1–6. [Google Scholar]
  45. Conijn, R.; Van Zaanen, M.; Van Waes, L. Don’t Wait Until it Is Too Late: The Effect of Timing of Automated Feedback on Revision in ESL Writing. In Transforming Learning with Meaningful Technologies; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2019; pp. 577–581. [Google Scholar] [CrossRef]
  46. Kokkinos, T.; Gakis, P.; Iordanidou, A.; Tsalidis, C. Utilising Grammar Checking Software within the Framework of Differentiated Language Teaching. ACM Int. Conf. Proceeding Ser. 2020, 234–240. [Google Scholar] [CrossRef]
  47. Karyuatry, J.P.I.; Rizqan, L. Grammarly As a Tool to Improve Students’ Writing Quality (Free Online Proofreader across the Boundaries). JSSH 2018, 2, 83–89. [Google Scholar] [CrossRef]
  48. Vakili, S.; Ebadi, S. Exploring EFL learners‘ developmental errors in academic writing through face-to-Face and Computer-Mediated dynamic assessment. Comput. Assist. Lang. Learn. 2019, 35, 345–380. [Google Scholar] [CrossRef]
  49. Lorena, P.G.; Ximena, C.S. Automated Writing Evaluation Tools in the Improvement of the Writing Skill. Int. J. Instr. 2019, 12, 209–226. [Google Scholar]
  50. Shang, H.F. Exploring online peer feedback and automated corrective feedback on EFL writing performance. Interact. Learn. Environ. 2022, 30, 4–16. [Google Scholar] [CrossRef]
  51. Bailey, D.; Lee, A.R. An Exploratory Study of Grammarly in the Language Learning Context: An Analysis of Test-Based, Textbook-Based and Facebook Corpora. TESOL Int. J. 2020, 15, 4–27. [Google Scholar]
  52. Pratama, Y.D. The Investigation of Using Grammarly as Online Grammar Checker in the Process of Writing. J. Engl. Lang. Educ. 2020, 1, 46–54. [Google Scholar]
  53. Choi, I.-C. Exploring the Potential of a Computerized Corrective Feedback System Based on a Process-Oriented Qualitative Error Analysis. STEM J. 2019, 20, 89–117. [Google Scholar] [CrossRef]
  54. Harvey-Scholes, C. Computer-assisted detection of 90% of EFL student errors. Comput. Assist. Lang. Learn. 2017, 31, 144–156. [Google Scholar] [CrossRef]
  55. Koltovskaia, S. Student engagement with automated written corrective feedback (AWCF) provided by Grammarly: A multiple case study. Assess. Writ. 2020, 44, 100450. [Google Scholar] [CrossRef]
  56. Nova, M.; Lukmana, I. The Detected and Undetected Errors in Automated Writing Evaluation Program’s Result. Engl. Lang. Lit. Int. Conf. (ELLiC) Proc. 2018, 2, 120–126. [Google Scholar]
  57. Thi, N.K.; Nikolov, M. How Teacher and Grammarly Feedback Complement One Another in Myanmar EFL Students’ Writing. Asia-Pacific Educ. Res. 2021, 31, 767–779. [Google Scholar] [CrossRef]
  58. Avila, E.C.; Lavadia, M.K.S.; Sagun, R.D.; Miraña, A.E. Readability Analysis of College Student’s Written Outputs using Grammarly Premium and Flesch Kincaide Tools. J. Phys. Conf. Ser. 2021, 1933, 012120. [Google Scholar] [CrossRef]
  59. Sahu, S. Evaluating performance of different grammar checking tools. Int. J. Adv. Trends Comput. Sci. Eng. 2020, 9, 2227–2233. [Google Scholar] [CrossRef]
  60. Kehinde, A.; Adesina, G.; Olatunde, O.; Olusayo, O.; Temitope, O. Shallow Parsing Approach to Automated Grammaticality Evaluation. J. Comput. Sci. Control Syst. 2017, 13, 11–17. [Google Scholar]
  61. Manap, M.R.; Ramli, F.; Kassim, A.A.M. Web 2.0 Automated Essay Scoring Application and Human ESL Essay Assessment: A Comparison Study. Eur. J. Engl. Lang. Teach. 2019, 5, 146–161. [Google Scholar] [CrossRef]
  62. Im, H.-J. The use of an online grammar checker in English writing learning. J. Digit. Converg. 2021, 19, 51–58. [Google Scholar] [CrossRef]
  63. Ghufron, M.A.; Rosyida, F. The Role of Grammarly in Assessing English as a Foreign Language (EFL) Writing. Lingua Cult. 2018, 12, 395. [Google Scholar] [CrossRef] [Green Version]
  64. Schmalz, V.J.; Brutti, A. Automatic Assessment of English CEFR Levels Using BERT Embeddings. In Proceedings of the Eighth Italian Conference on Computational Linguistics CliC-it 2021, Milan, Italy, 26–28 January 2022. [Google Scholar] [CrossRef]
  65. McCarthy, K.S.; Roscoe, R.D.; Likens, A.D.; McNamara, D.S. Checking It Twice: Does Adding Spelling and Grammar Checkers Improve Essay Quality in an Automated Writing Tutor? Springer International Publishing: Berlin/Heidelberg, Germany, 2019; Volume 11625. [Google Scholar] [CrossRef]
  66. Hoang, G.T.L.; Kunnan, A.J. Automated Essay Evaluation for English Language Learners: A Case Study of MY Access. Lang. Assess. Q. 2016, 13, 359–376. [Google Scholar] [CrossRef]
  67. Sasi, A.S.; Lai, J.C.M. Error Analysis of Taiwanese University Students’ English Essay Writing: A Longitudinal Corpus Study. Int. J. Res. Engl. Educ. 2021, 6, 57–74. [Google Scholar] [CrossRef]
  68. Karlina Ambarwati, E. Indonesian University Students’ Appropriating Grammarly for Formative Feedback. ELT Focus 2021, 4, 1–11. [Google Scholar] [CrossRef]
  69. Shelvam, H.; Bahari, A.A. A Case Study on the ESL Upper Secondary Level Students Views in Engaging with Online Writing Lessons Conducted Via Google Classroom. LSP Int. J. 2021, 8, 93–114. [Google Scholar] [CrossRef]
  70. Yunus, C.C.A.; Hua, T.K. Exploring a Gamified Learning Tool in the ESL Classroom: The Case of Quizizz. J. Educ. e-Learning Res. 2021, 8, 103–108. [Google Scholar] [CrossRef]
  71. Musk, N. Correcting spellings in second language learners’ computer-assisted collaborative writing. Classr. Discourse 2016, 7, 36–57. [Google Scholar] [CrossRef]
  72. Can, C. Agreement Errors in Learner Corpora across CEFR: A Computer-Aided Error Analysis of Greek and Turkish EFL Learners Written Productions. J. Educ. Train. Stud. 2018, 6, 77–84. [Google Scholar] [CrossRef] [Green Version]
  73. Philip, B.; Tan, K.H.; Jandar, W. Exploring Teacher Cognition in Malaysian ESL Classrooms. 3L Southeast Asian J. Engl. Lang. Stud. 2019, 25, 156–178. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram according to [13] for the article search and study selection process.
Figure 2. Strengths of CBEA and the number of supporting studies.
Figure 3. Limitations of CBEA and the number of supporting articles.
Table 1. Summary of the inclusion and exclusion criteria.

Variables | Inclusion Criteria | Exclusion Criteria
Population | ESL/EFL learners | Non-ESL/EFL learners
Publication Year | 2016–2022 | Before 2016
Focus | Empirical studies related to CBEA | Studies not related to CBEA and review articles
Table 2. Summary of information from the selected articles.
Table 2. Summary of information from the selected articles.
Ref. | Database | Year | Location | Research Design | Method and Software | Findings (Excerpts from the Articles)
[2] | ERIC | 2016 | USA | Quasi-experimental | Stanford CoreNLP | Less agile; a good evaluation mechanism is needed to identify issues. Effective in identifying technical errors. Effective feedback also needs to be “nonjudgemental”, “contextualized”, and “personal”, which is much more difficult to achieve, as it requires a level of teacher presence.
[3] | ERIC | 2020 | Korea | Quantitative | Machine Translation | Tool for accuracy in L2 writing. MT should not be regarded as a replacement for the traditional language learning classroom.
[5] | Google Scholar | 2020 | Korea | Quantitative | Grammarly | Reduces teachers’ workload. Unable to detect sentence-level errors. Incorrect suggestions and insufficient explanations. It has a long way to go before it can be fully developed.
[6] | Scopus | 2020 | Saudi Arabia | Quantitative | Grammarly | Able to detect errors missed in manual analysis. Users can repeat the process as many times as they want. Integrates seamlessly into the workflow with ease of use. Provides detailed and immediate feedback. A larger amount of data can be analyzed.
[10] | ERIC | 2020 | Taiwan | Quantitative | Grammarly | Instant analysis; analyzes a large amount of data. Can perform the first three steps of the procedure, although researchers may need to enumerate and analyze errors to complete the process.
[15] | Google Scholar | 2021 | Korea | Quantitative | Grammarly | High accuracy. Fails to detect tense shift and sentence structure errors. Teachers should make judicious decisions regarding how and when to use Grammarly, being fully informed of both its strengths and limitations.
[16] | ERIC | 2021 | France | Quantitative | Moodle | Students can self-analyze their writing. It produces an accurate output.
[17] | Google Scholar | 2016 | Saudi Arabia | Quantitative | Criterion | The Criterion® system has great potential for tracking progress and generating individualized student portfolios, including areas of strength and weakness.
[18] | Scopus | 2020 | Canada | Quantitative | Microsoft Word, Grammarly, Virtual Writing Tutor | Can accurately identify mechanical and grammatical errors. The system is unable to detect every error and cannot be relied upon alone.
[19] | Google Scholar | 2022 | Saudi Arabia | Quantitative | Grammarly | Not suitable as an independent assessment tool, only as a complementary tool. Achieves high accuracy compared to human raters. Grammarly cannot detect all errors, although it does offer valuable suggestions; thus, it is critical to be aware of its strengths and weaknesses.
[20] | Google Scholar | 2020 | Japan | Quantitative | COCA | Helps learners make appropriate adjustments to correct their errors. Analyzes a large amount of data.
[21] | ERIC | 2016 | USA | Quantitative | CyWrite and Criterion | CyWrite outperformed Criterion in detecting the four target error types: quantifiers, subject–verb agreement, articles, and run-on sentences.
[22] | Google Scholar | 2019 | Saudi Arabia | Mixed-method | Padlet (Web 2.0) | Reveals more errors. Helps students develop competency in writing. Various features, such as autocorrection and smart prediction.
[23] | Google Scholar | 2020 | Malaysia | Quantitative | AntConc (Version 3.5.8) | Corpus study is the solution for inaccuracies in analysis and avoids overlooking certain errors. Analyzes a large amount of data.
[24] | Scopus | 2019 | China | Mixed-method | Pigai | Clear and immediate feedback is provided, which saves time. The AWE system can only comment on grammar errors and basic word collocations; it cannot evaluate text structure, content logic, or coherence.
[25] | Google Scholar | 2021 | Spain | Mixed-method | Grammarly | Able to categorize errors and provide clear explanations. Occasionally presented errors related to hypercorrection. Over-flags feedback, making it less useful to the learner.
[26] | Google Scholar | 2022 | China | Quantitative | WordSmith Tools 6.0 | Easy to categorize all errors more accurately. Encourages autonomous learning. Helps to design new pedagogical tools.
[27] | Google Scholar | 2016 | Australia | Qualitative | Grammarly | Easy to use; enhances learners’ confidence in writing and understanding of grammatical concepts. Some suggestions are incorrect and hard to understand.
[28] | Google Scholar | 2019 | Pakistan | Quantitative | AntConc | Corpus methods enable wide data analysis. Easier, more efficient, and more objective.
[29] | Scopus | 2019 | USA | Quantitative | GAMET | Unable to capture complex errors that may occur across phrases or clauses within sentences, the semantics of missing words, or redundant words, which is a difficult task for rule-based software.
[30] | Google Scholar | 2019 | Australia | Mixed-method | Grammarly | Provides prompt feedback and reduces teachers’ workload. Improves students’ language learning.
[31] | Google Scholar | 2018 | USA | Quantitative | UAM CorpusTool | Easier, quicker, and more consistent than annotating by hand. Allows searching for examples of errors easily.
[32] | EBSCOhost | 2021 | Philippines | Quasi-experimental | Grammarly | Provides feedback. Enhances students’ writing performance. Can be systematically integrated into the teaching of writing.
[33] | Google Scholar | 2018 | Bahrain | Qualitative | Write & Improve | Less agile, but effective in identifying technical errors. Effective feedback is much more difficult to achieve, as it requires a level of teacher presence.
[34] | Google Scholar | 2021 | Egypt | Quantitative | Write & Improve | Provides support for apprehensive EFL writers. Provides immediate feedback.
[35] | Google Scholar | 2019 | Australia | Mixed-method | Grammarly | Provides immediate feedback, reducing teachers’ workload. Promotes greater autonomy in students. Tends to be multifarious and contentious. Inaccurate suggestions can be made relating to the use of the passive voice.
[36] | Google Scholar | 2021 | Canada | Quantitative | Criterion | Possible to check the number of times students revised their papers. Fails to capture some errors.
[37] | Google Scholar | 2021 | China | Mixed-method | Pigai | Helpful tool, but there are some flaws in identifying collocation errors suggesting syntactic use.
[38] | Google Scholar | 2018 | Canada | Quasi-experimental | n-gram/LSTM | Can generate accurate results with good runtime performance.
[39] | ERIC | 2018 | UK | Quantitative | Write & Improve | The system is used in an iterative fashion, as envisaged.
[40] | Google Scholar | 2020 | USA | Quantitative | Inverted Spellchecker, Pattern+POS | Outperforms the inverted spellchecker; analyzes a larger dataset.
[41] | Google Scholar | 2020 | China | Mixed-method | Pigai | Convenience and immediacy are its merits, and it also reduces teachers’ workload.
[42] | EBSCOhost | 2020 | Malaysia | Mixed-method | AntConc (Version 3.5.8) | Corpus study is the solution for inaccuracies in analysis and avoids overlooking certain errors. Analyzes a large amount of data.
[43] | Google Scholar | 2021 | Russia | Quantitative | Inspector | Does not always provide the best solutions. Encourages self-editing and improves learning.
[44] | Google Scholar | 2018 | Malaysia | Quasi-experimental | Grammarly | Overcomes the problem of delayed feedback. Helps school students with the assessment and grading of their essays.
[45] | Google Scholar | 2019 | Belgium | Quantitative | CyWrite | Although timely feedback has been argued to be most useful, this is not clearly reflected in the revision patterns or users’ satisfaction.
[46] | Google Scholar | 2020 | Greece | Quantitative | Greek Grammar Checker | Helps students regulate their learning. Cannot track all mistakes.
[47] | EBSCOhost | 2018 | Indonesia | Qualitative | Grammarly | Easy to use. Very helpful in minimizing the need for teachers to provide corrections on students’ essays. Students actively participate in the teaching–learning process.
[48] | EBSCOhost | 2019 | Iran | Qualitative | DIALANG | Learners can benefit from the affordance of computer-mediated dynamic assessment in overcoming their developmental errors.
[49] | ERIC | 2019 | Ecuador | Quantitative | Grammark and Grammarly | Improves learners’ writing performance. Human guidance is important to compensate for the limitations of AWE programs.
[50] | Google Scholar | 2019 | Taiwan | Qualitative | Pigai | Identifies vocabulary, collocation, and common grammatical errors. Provides immediate feedback and error corrections.
[51] | ERIC | 2022 | South Korea | Quantitative | Grammarly | Successful at identifying local-level errors. High-stakes testing results in more risk-taking with vocabulary and sentence complexity, which come at the cost of readability (i.e., clarity).
[52] | Google Scholar | 2020 | Indonesia | Qualitative | Grammarly | Students believe that Grammarly is easy to use. Corrects errors automatically.
[53] | Google Scholar | 2021 | Myanmar | Qualitative | ICALL | Reduces challenges regarding time constraints. AI only detects surface-level errors, whereas teachers’ feedback covers lower- and higher-level errors; an integration of both types of feedback is required.
[54] | ERIC | 2018 | Spain | Qualitative | CEA (n-gram) | Errors that are homophones (e.g., your versus you’re) or otherwise real words (were versus where) are missed by generic spellcheckers, accounting for 16% of spelling errors in the corpus. [See the illustrative sketch following this table.]
[55] | Google Scholar | 2020 | USA | Qualitative | Grammarly | Supplemental tool to facilitate lower-order concerns.
[56] | Google Scholar | 2018 | Indonesia | Mixed-method | Grammarly | Detects grammar, spelling, and punctuation errors. Can also detect the addition and omission of some syntactical items in a sentence. Can be misleading when it comes to long phrases, passive voice structures, and question structures.
[57] | Google Scholar | 2021 | Myanmar | Mixed-method | Grammarly | Has pedagogical potential as a tool that can facilitate teachers’ identification of surface-level errors.
[58] | Google Scholar | 2021 | Philippines | Quantitative | Grammarly Premium and Flesch–Kincaid tools | With Flesch–Kincaid Reading Ease tools that can be instantly integrated with Microsoft Office Word and Ubuntu programs, students can understand the readability level of their writing as well as their vocabulary and grammar competence. Likewise, if teachers can obtain a Grammarly Premium subscription, other aspects of students’ writing errors can be analyzed with regard to the correctness of punctuation, tone, clarity, engagement, and delivery of words.
[59] | Scopus | 2020 | India | Quantitative | Grammarly, Ginger, ProWritingAid | All apps fail to identify sentence structure errors. One cannot completely trust these apps for the identification and correction of grammar errors.
[60] | Google Scholar | 2017 | Nigeria | Quasi-experimental | Shallow Parser | The efficiency of operation of each of these systems varied widely. The scale of the operation is still too small, limiting its ability to tackle fundamental linguistic phenomena.
[61] | EBSCOhost | 2019 | Malaysia | Quantitative | PaperRater.com | Which application should be used depends on the users’ preferences and needs. Learners can independently check their errors and correct them.
[62] | Google Scholar | 2021 | Korea | Mixed-method | Grammarly | Feedback from online grammar checkers is not always accurate. A balance may be found whereby students focus on micro-level writing errors, and teachers focus more on macro-level errors, such as organization and idea development.
[63] | Google Scholar | 2018 | Indonesia | Quasi-experimental | Grammarly | More effective in reducing errors in relation to three indicators (diction, language use, and mechanics). Has less of an effect on content and organization and cannot detect whether the content is appropriate for the topic.
[64] | Google Scholar | 2021 | Taiwan | Quantitative | Error Taxonomy | Several errors, such as spelling, punctuation, and even word choice, might automatically be corrected by the software.
[65] | ERIC | 2016 | USA | Quantitative | MyAccess | In this study in particular, it missed 60.4% of errors that should have been attended to according to human judgment. The choice to use this software could be made as part of a combined pedagogy.
[66] | EBSCOhost | 2021 | Malaysia | Qualitative | Google Docs | Google Docs auto-corrects users’ grammatical errors while they are writing.
[67] | Scopus | 2020 | Malaysia | Mixed-method | Social networking sites (SNS) | The auto-correct feature of the profile can help learners to correct their errors/mistakes automatically.
[68] | Google Scholar | 2021 | Indonesia | Qualitative | Grammarly | One student’s perspective: “Each time I am typing, auto correct will automatically appear.”
[69] | Scopus | 2020 | Malaysia | Mixed-method | Social networking sites (SNS) | The auto-correct feature of the profile can help learners to correct their errors/mistakes automatically.
[70] | Google Scholar | 2021 | Indonesia | Quantitative | Grammarly | Effectively detects errors but requires human involvement to find the causes of the errors.
[71] | ERIC | 2016 | Sweden | Qualitative | Word (spellchecker) | Allows self-regulation and enhances learning. Occasionally, the spellcheck function gives rise to unnecessary corrections.
[72] | Google Scholar | 2018 | Turkey | Quantitative | CEA | The nature of EA could accommodate both diagnostic and prognostic features.
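One recurring technique behind the findings above, namely the n-gram methods of [38,40] and the real-word spelling errors highlighted by [54], is to score each word against its immediate context rather than against a dictionary alone: homophones such as your/you’re are valid words, so a purely dictionary-based spellchecker accepts them. The following Python sketch is a minimal illustration of this idea, not the implementation of any reviewed system; the confusion sets and bigram counts are invented for demonstration, whereas a real system would estimate such counts from a large reference corpus (e.g., COCA, as used in [20]).

# Minimal sketch of real-word error detection with bigram context.
# Illustrative toy only; the counts below are invented for demonstration.

# Confusion sets: real words that generic spellcheckers accept (cf. [54]).
CONFUSION_SETS = [{"your", "you're"}, {"were", "where"}]

# Toy bigram frequencies (word, next_word) -> count.
BIGRAM_COUNTS = {
    ("your", "book"): 120, ("you're", "book"): 1,
    ("your", "welcome"): 3, ("you're", "welcome"): 95,
    ("were", "going"): 80, ("where", "going"): 2,
}

def flag_real_word_errors(tokens):
    """Flag tokens whose confusable alternative fits the context better."""
    flags = []
    for i, word in enumerate(tokens[:-1]):
        nxt = tokens[i + 1]
        for conf in CONFUSION_SETS:
            if word in conf:
                for alt in conf - {word}:
                    if (BIGRAM_COUNTS.get((alt, nxt), 0)
                            > BIGRAM_COUNTS.get((word, nxt), 0)):
                        flags.append((i, word, alt))
    return flags

print(flag_real_word_errors("your welcome to stay".split()))
# -> [(0, 'your', "you're")]

On the sample input, the bigram “you’re welcome” outscores “your welcome”, so the first token is flagged together with a suggested replacement; contexts in which neither variant is clearly preferred are left alone, which is consistent with the reviewed finding that even context-based checkers still miss a share of real-word errors.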
Table 3. Number of studies supporting the strengths of CBEA.
Strengths of CBEA | Number of Studies (n) | Percentage (%)
Provide solutions | 15 | 23.07
Accurate and precise | 14 | 21.54
Instant analysis | 10 | 15.39
Reducing teachers’ workload | 8 | 12.31
Ease of use | 6 | 9.23
Enable iteration | 6 | 9.23
Analyze big data | 6 | 9.23
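Strengths such as “accurate and precise” and “instant analysis” largely reflect how cheaply local, rule-checkable patterns can be matched. The toy rule below is a hypothetical illustration rather than the rule inventory of any reviewed tool: it flags a third-person singular pronoun that is immediately followed by a base-form verb, and anything outside this two-token window passes through unflagged, foreshadowing the higher-level-error limitation tabulated in Table 4 and reported for rule-based software in [29].

# Toy rule-based check: flag a third-person singular pronoun followed
# immediately by a base-form verb ("he go" -> suggest "goes").
# Hypothetical illustration, not the logic of any tool reviewed above;
# real checkers use far larger rule inventories and a POS tagger.

THIRD_SG = {"he", "she", "it"}
BASE_FORMS = {"go", "have", "do", "want", "like"}  # tiny demo lexicon

def check_agreement(tokens):
    """Return (index, word, suggestion) for local agreement violations."""
    issues = []
    for i in range(len(tokens) - 1):
        if tokens[i].lower() in THIRD_SG and tokens[i + 1].lower() in BASE_FORMS:
            verb = tokens[i + 1]
            fixed = "has" if verb.lower() == "have" else verb + ("es" if verb.endswith("o") else "s")
            issues.append((i + 1, verb, fixed))
    return issues

print(check_agreement("Every day he go to school".split()))
# -> [(3, 'go', 'goes')]   caught: subject and verb are adjacent
print(check_agreement("He , as far as I know , want to stay".split()))
# -> []   missed: the intervening clause defeats the two-token window

The second call shows the characteristic failure mode: once a clause intervenes between subject and verb, the local pattern no longer fires, which is exactly the kind of cross-clause error that rule-based software struggles to capture.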
Table 4. Number of studies supporting the limitations of CBEA.
Limitations of CBEA | Number of Studies (n) | Percentage (%)
Inability to analyze higher-level errors | 15 | 28.30
Inability to identify content and coherence errors | 11 | 20.75
Misleading feedback | 10 | 18.87
Autocorrection | 7 | 13.21
Need for various software packages | 6 | 11.32
More diagnostic than prognostic | 4 | 7.55
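For transparency, each percentage in Tables 3 and 4 is, up to rounding, the number of supporting studies divided by the total number of mentions in that table (65 for Table 3 and 53 for Table 4; a study may support more than one attribute, so these totals are counts of mentions rather than of distinct articles). The short sketch below is included purely as a worked check of the arithmetic for Table 4.

# Recompute the percentage column of Table 4: each value is the count of
# supporting studies divided by the total number of mentions.
limitations = {
    "Inability to analyze higher-level errors": 15,
    "Inability to identify content and coherence errors": 11,
    "Misleading feedback": 10,
    "Autocorrection": 7,
    "Need for various software packages": 6,
    "More diagnostic than prognostic": 4,
}
total = sum(limitations.values())  # 53 mentions in total
for name, n in limitations.items():
    print(f"{name}: {n}/{total} = {100 * n / total:.2f}%")
# First line of output: 15/53 = 28.30%, matching Table 4.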
