Systematic Review

Effectiveness of Online Food-Safety Educational Programs: A Systematic Review, Random-Effects Meta-Analysis, and Thematic Synthesis

by Zachary Berglund, Senay Simsek and Yaohua Feng *
Department of Food Science, Purdue University, 745 Agriculture Mall Drive, West Lafayette, IN 47907, USA
* Author to whom correspondence should be addressed.
Foods 2024, 13(5), 794; https://doi.org/10.3390/foods13050794
Submission received: 15 January 2024 / Revised: 21 February 2024 / Accepted: 25 February 2024 / Published: 4 March 2024
(This article belongs to the Special Issue Risk Perception, Communication and Behavior on Food Safety Issues)

Abstract
Online food-safety educational programs are increasingly important for educating different populations as culture shifts toward greater technology use. However, the broad effectiveness of these programs has yet to be examined. A systematic review, random-effects meta-analysis, and thematic synthesis were conducted to identify the effect size of online food-safety educational programs on the knowledge, attitudes, and practices of consumers, food workers, and students, along with their respective barriers and recommendations. Online food-safety education was found to be of moderate to low effectiveness, with attitudes showing the lowest effect in all populations. Consumers struggled with staying focused, and it was found that messaging should focus on risk communication. Students struggled with social isolation and a lack of time, and it was recommended that videos be used. Food workers struggled with a lack of time for training and difficulty understanding the material, and future programs are recommended to implement shorter but more frequent trainings with simple language. Future online food-safety educational programs should focus on incorporating social elements, as social isolation can remain a major barrier to learning. They should also focus on changing participants’ attitudes toward risk perception and beliefs about the importance of food safety.

1. Introduction

Each year, numerous foodborne illnesses are attributed to mishandling or unsafe food handling behaviors [1,2,3]. In the U.S., more than 800 foodborne illness outbreaks are reported annually, and restaurants are implicated in over half of them. When carefully designed and delivered, food-safety educational programs can improve knowledge and change the food preparation behaviors of food handlers [4,5,6]. Prior to the COVID-19 pandemic, food safety educational programs were typically delivered in person; since then, more and more online food-safety educational programs have been developed and delivered [7,8,9,10].
Knowledge about the effectiveness of food safety educational programs is constantly evolving. Young et al. [6] found, with a meta-analysis, that food safety educational programs are effective at increasing the knowledge and inspection scores of various food handlers. Furthermore, tools and technologies have enabled the evolution of online educational programs, a form of asynchronous (self-paced, on a flexible schedule) or synchronous (conducted in real time) program that uses digital tools to deliver educational instruction without requiring the physical presence of the instructor. Previous literature reviews suggested that online educational programs can be as effective as in-person programs for most work-related topics, including food safety educational topics [11,12].
The effectiveness of online educational programs can vary in different subpopulations. Some of the previous meta-analysis literature containing non-food-related topics suggested that education levels, age, and gender can significantly influence the effectiveness of online educational programs [13,14]. Different populations have different needs, and tailoring educational programs to meet those needs is essential in promoting behavior change [15].
Little is known about the needs of different subpopulations and the effectiveness of online food-safety educational programs in different subpopulations. To address this gap of unknown needs and possible varying effectiveness, a systematic review, meta-analysis, and thematic synthesis were conducted to evaluate the effectiveness of online food-safety educational programs for various food handlers and identify areas for improving future online food-safety educational programs.

2. Materials and Methods

This article is reported following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines (Document S1) and the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) guidelines [16,17].

2.1. Systematic Review

2.1.1. Review Question, Approach, and Eligibility Criteria

This review was developed and adapted to the systematic review design from Young and Thaivalappil [18], Young et al. [19], and Young et al. [6]. The research questions were as follows: (1) What is the effectiveness of online food-safety educational programs among different food handler subpopulations? (2) What is needed to improve the effectiveness of those programs for these subpopulations?
The Population, Intervention, Comparison, Outcome (PICO) framework was used to define the review scope and eligibility criteria in Table 1 [20,21]. The populations of interest were students, food workers, and consumers. Students were defined as participants who attended a university or school classroom at the time of the study, primarily K-12 and college-level students. Studies focusing on youth under the age of 18 were also assumed to involve K-12 students. Food workers were defined as participants in a study aimed at food workers, managers, or other individuals who handle food or train others to handle food as part of their occupation. Consumers were defined as participants in a study targeted to the public who prepare or handle food for household consumption. The outcomes of interest were measures of knowledge, attitudes, and practices (KAP) following an online food-safety educational program. The types of interventions were training courses, workshops, educational messaging materials (e.g., emails and websites), and other theory-based or motivational interventions, so long as they were delivered through digital media with a trainer who was not physically present. Both asynchronous digital educational programs and synchronous virtual web-based workshops were included.
Studies that compared online food-safety educational programs with traditional in-person food safety programs were excluded from this review if they did not contain a defined pre-intervention score or negative control group. The following measures were considered valid for analysis if they measured a component of KAP: (1) inspection scores, (2) coded observed practices, (3) self-reported practices and survey values, and (4) test scores. Relevant study designs included any experimental study with an independent control group, including randomized controlled trials (RCTs) and non-randomized controlled trials (NCTs). Uncontrolled before-and-after (pre-post) studies (i.e., single-group pre-test and post-test comparisons) were also included due to the low availability of studies. Eligible sources of evidence were limited to peer-reviewed journal articles published in English.

2.1.2. Search Strategy

PICO was used to design keywords for the search strategy, as seen in Table 2. Following the initial keyword search, an additional reverse reference search was conducted by adopting the method from Young et al. to ensure no relevant articles were missed [22].
The titles and abstracts of all search results were screened before assessing acceptance into the analysis by using a pre-specified form containing two questions to determine whether the study met the eligibility criteria and to save a copy of a PDF of the full article. The following questions were included: (1) Does the study discuss an evaluation of an online food-safety educational program for consumers, students, or food workers? (2) Is the study a peer-reviewed journal article or literature review? Studies that answered both questions in the affirmative were saved in the Mendeley reference manager (Elsevier, Amsterdam, The Netherlands), organized by database and article type. Studies that failed to meet both criteria were removed. Saved references were exported as XML files, and duplicates were removed using Excel Version 2010 (Microsoft, Redmond, WA, USA). A team of three (one author and two trained research assistants) screened articles through a Qualtrics XM (Qualtrics, Provo, UT, USA) survey for criteria of acceptance into both forms of analysis and data collection. Each article was screened twice, once by the author and again by one of the two research assistants. The two reviewers for an article discussed disagreements regarding acceptance until they reached a consensus. The requirements for the meta-analysis ensured that the following were reported: (1) sample size, (2) a source for variance values, and (3) pre-intervention and post-intervention evaluation values. The criteria for acceptance into the thematic synthesis required that the study contain information about at least one of the following: (1) barriers to online educational programs, (2) feedback or perceptions of the online educational programs, and (3) benefits of training and experience. Additionally, articles were screened to ensure there was no in-person effect on the intervention. The survey also collected the numerical data required for the meta-analysis of the studies.
Articles for the meta-analysis were evaluated for risk of bias using ROBINS-I and ROB-2 templates based on the study design as required by the tools [23,24]. Articles included for the thematic synthesis were evaluated for quality using a modified critical appraisal skills programme (CASP) checklist [25]. These tools also assisted in collecting data about demographics and study characteristics.

2.1.3. Data Extraction

Detailed quantitative results (i.e., outcome data) for the efficacy of interventions were extracted from each study using the Qualtrics XM survey. Relevant outcome types included dichotomous and continuous measures. Dichotomous measures were noted by the table numbers in which the data were located, for later copying, while continuous measures were extracted with their associated sample size and variance. When variance was not available, other statistics (e.g., p and t-test values) that could be used to estimate an effect size were extracted. The missing statistics were calculated from the extracted values using the RevMan Calculator hosted by Cochrane [26].
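As one illustration of this kind of conversion (a hypothetical sketch of the sort of calculation such tools perform, not the RevMan Calculator's exact logic), an independent-samples t statistic can be turned back into an unadjusted standardized mean difference:

```python
import math

def smd_from_t(t: float, n1: int, n2: int) -> float:
    """Recover Cohen's d (unadjusted SMD) from an independent-samples
    t statistic and the two group sizes: d = t * sqrt(1/n1 + 1/n2).
    Illustrative only; real tools handle more designs and corrections."""
    return t * math.sqrt(1 / n1 + 1 / n2)
```

The recovered d can then be given the small-sample correction described in Section 2.2 to obtain Hedges' G.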

2.2. Random-Effects Meta-Analysis

The data collected were stratified into subgroups by populations of (1) students, (2) food workers, and (3) consumers. Within these subgroups, three main outcome types were considered: (1) knowledge, (2) attitudes, and (3) behaviors. Given that studies used different measurement instruments and scales, a form of the standardized mean difference (SMD) corrected for small sample sizes, Hedges' G, was selected as the primary effect size metric, as shown in Equation (1):
G = (1 − 3/(4(n1 + n2 − 2) − 1)) × (X̄2 − X̄1)/Sp
where X̄1 is the mean score of the control group (if no control group exists, the pre-intervention score is substituted), X̄2 is the mean score of the experimental group after treatment, and n1 and n2 are the sample sizes of the control and experimental groups, respectively. Sp is the pooled standard deviation defined in Equation (2) below as:
Sp = √(((n1 − 1)S1² + (n2 − 1)S2²)/(n1 + n2 − 2))
where S1 and S2 are the sample standard deviations of the scores for the control group and the experimental group, respectively. Hedges' G can be interpreted as follows: (1) G = 0.2 as a low effect, (2) G = 0.5 as a moderate effect, and (3) G = 0.8 as a large effect [27].
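Equations (1) and (2) can be sketched in code as follows (an illustrative Python version; the authors' calculations were performed in Excel and self-authored VBA):

```python
import math

def hedges_g(mean_ctrl: float, sd_ctrl: float, n_ctrl: int,
             mean_exp: float, sd_exp: float, n_exp: int) -> float:
    """Standardized mean difference with Hedges' small-sample correction."""
    # Pooled standard deviation, Equation (2)
    sp = math.sqrt(((n_ctrl - 1) * sd_ctrl**2 + (n_exp - 1) * sd_exp**2)
                   / (n_ctrl + n_exp - 2))
    # Small-sample correction factor, Equation (1)
    j = 1 - 3 / (4 * (n_ctrl + n_exp - 2) - 1)
    return j * (mean_exp - mean_ctrl) / sp
```

For example, two groups of 20 with means 10 and 12 and standard deviations of 2 yield G slightly below 1.0, a large effect under the thresholds above.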
Studies that reported dichotomous outcomes were converted to Hedges' G, assuming that all studies measured the same outcome construct. Within meta-analysis subgroups, some studies reported more than one relevant outcome measure. These measures were combined into single values with a fixed-effects model weighted mean, as suggested by Hedges et al. [28].
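A minimal sketch of such an inverse-variance weighted combination, assuming the outcomes within a study are treated as independent (the full method in Hedges et al. [28] addresses dependence between outcomes):

```python
def combine_fixed(effects: list, variances: list) -> tuple:
    """Fixed-effects (inverse-variance weighted) mean of several outcomes
    from one study, plus the variance of the combined value."""
    weights = [1 / v for v in variances]
    total = sum(weights)
    mean = sum(w * g for w, g in zip(weights, effects)) / total
    return mean, 1 / total
```

Each study then contributes a single effect size and variance to the subgroup analysis.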
A random-effects analysis was conducted within groups that contained more than one study to identify variance between studies and calculate an average effect. The initial estimate of the between-study variance (τ²) was calculated using the method described in Hedges et al. [28]. This estimate was refined using the restricted maximum-likelihood estimator (REML) adapted from Sidik and Jonkman, iterating until the convergence criterion defined in Equation (3) was met [29]:
|τ²_new − τ²_old| < 0.001
where τ²_new is the between-study variance estimate from the current iteration and τ²_old is the estimate from the previous iteration. A list of the statistical tests, statistics calculated (e.g., heterogeneity measures), and graphs created for the analysis can be found below in Table 3. All p-value tests considered statistical significance at p < 0.05. All calculations for the study were performed in Excel using the XRealStats package version 7.6 [30] and self-authored Visual Basic for Applications (VBA) code validated against textbook examples when needed. Finally, a Grading of Recommendations Assessment, Development and Evaluation (GRADE) was conducted for each group to evaluate the overall quality of the groups [21]. An author and another researcher assessed the evidence together to assign GRADE scores and reach a consensus on the evaluation.
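One common iterative REML update that stops on the criterion in Equation (3) can be sketched as follows (an illustrative Python version; the authors used self-authored VBA, and the exact update in their adaptation of Sidik and Jonkman may differ):

```python
def reml_tau2(effects: list, variances: list,
              tol: float = 1e-3, max_iter: int = 100) -> float:
    """Iterative REML estimate of between-study variance tau^2,
    stopping when |tau2_new - tau2_old| < tol (Equation 3)."""
    tau2 = 0.0
    for _ in range(max_iter):
        # Inverse-variance weights under the current tau^2
        w = [1 / (v + tau2) for v in variances]
        sw = sum(w)
        mu = sum(wi * y for wi, y in zip(w, effects)) / sw
        # REML update, truncated at zero
        num = sum(wi**2 * ((y - mu)**2 - v)
                  for wi, y, v in zip(w, effects, variances))
        tau2_new = max(0.0, num / sum(wi**2 for wi in w) + 1 / sw)
        if abs(tau2_new - tau2) < tol:
            return tau2_new
        tau2 = tau2_new
    return tau2
```

When the study effects are identical, the estimate collapses to zero; dispersed effects yield a positive τ² that feeds back into the random-effects weights.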

2.3. Thematic Synthesis

Thematic synthesis of all relevant articles was conducted using the approach described by Thomas and Harden [37], which aimed to develop “analytical themes” to advise the design of future online food-safety educational programs. Thematic synthesis is adaptable for a systematic review and captures first-order and second-order data constructs. The definitions of these orders, along with a third-order construct adapted from Britten et al. [38], can be found in Table 4. First-order data are generated from one’s own interpretation of a lived experience; examples include quotes from interviews or surveys. Second-order data are interpretations of someone else’s lived experiences; examples include researchers’ thematic analyses of interviews and observations.
The analysis consisted of identifying potentially relevant information, followed by the coding process. One of the authors and two trained research assistants reviewed each study and annotated the construct order of relevant qualitative data. Qualitative data were considered relevant if they met one of the following criteria: (1) they identified feedback or opinions about online educational programs as a medium, (2) they identified insight into online educational program design that would have been beneficial, or (3) they identified potential barriers or issues faced with online educational programs. Identified data were then coded by adapting the process from Chen et al. [41]. Created codes were grouped into deductive themes that could be classified as recommendations or barriers. The analysis was conducted using PDFs of full articles imported into the Nvivo 12 qualitative analysis software Version 12.6.1.970 (QSR International, Doncaster, Australia).
The Confidence in the Evidence from Reviews of Qualitative research (CERQual) approach [19,42] was adapted and used to assess how much confidence to place in each of the findings. CERQual is analogous to the GRADE approach used for the meta-analysis. CERQual evaluates four categories: (1) methodological limitations, (2) coherence, (3) adequacy of the data, and (4) relevance. Based on the assessment of the criteria, the confidence for evidence for a specific subgroup was determined to be one of the four levels: (1) high confidence, (2) moderate confidence, (3) low confidence, and (4) very low confidence.

3. Results

3.1. Search, Bias, Quality, and GRADE Results

After sorting and assessing the risk of bias, 22 articles from 1590 search results were identified, 15 for the meta-analysis and 14 for qualitative synthesis, with 7 articles being included in both analyses. A review flowchart of the process is shown in Figure 1. Detailed study characteristics can be seen in Table 5.
The locations of the studies collected were homogeneous. Fourteen (93.33%) of the articles for the meta-analysis were from studies conducted in the United States of America (USA). The last study (6.66%) was conducted in Turkey. Of the 15 studies identified for the meta-analysis, 3 (20%) were RCTs that provided the highest-quality evidence, 7 (46.66%) were pre-post designs, and 4 (26.66%) were NCTs. The kinds of programs included in the meta-analysis varied. Of 15 included studies, 4 (26.66%) are interactive computer programs, 4 (26.66%) are computer modules consisting of various digital media but are not necessarily interactive, 3 (20%) are videos, 2 (13.33%) are web-based programs, 1 (6.66%) is an online course, and 1 (6.66%) is a course delivered via email. Six (40%) of these studies are on the food workers group, five (33.33%) are on the students group, and five (33.33%) are on the consumers group. The study by Barret et al., 2020, is the only study featuring both consumers and students. The study reported separate measures for adults and youths [43]. The youths were assumed to be students, and adults were assumed to be consumers. The studies for the qualitative analysis primarily consisted of studies conducted in the USA. Of the 14 included studies, 10 (71.4%) were conducted in the USA. The remaining studies were conducted in England, Portugal, France, and India. Of the 14 qualitative studies, 3 (21.43%) studies were on consumers, 7 (50%) studies were on food workers, and 4 (28.57%) were on students.
Full risk of bias, GRADE, and CERQual results can be seen in Tables S2 and S3 and Spreadsheets S1–S4. Most studies included in the meta-analysis had a low risk of bias according to ROBINS-I and ROB-2; only 5 of the 15 studies had a moderate or serious risk due to confounding effects. For example, Costello et al. (1997) failed to consider participants’ level of education and ethnicity, which could be potential confounding domains [40]. GRADE and CERQual evaluations were conducted to assess the quality of the findings associated with each subpopulation. Low heterogeneity was identified among the studies in each subpopulation (all τ² < 0.1, I² < 20%), suggesting studies are adequately grouped within each category [31]. However, because of low sample sizes, the prevalence of NCT designs, and the reliance on self-reported outcomes, the GRADE ratings were adjusted downward; most effect sizes received negative GRADE scores, indicating very low confidence. Additionally, much of the qualitative evidence comes from second-order constructs shaped by researcher interpretation, which lowered confidence in the CERQual assessment.
The CASP checklist was used to check the quality of studies included in the thematic synthesis. A full list of results can be found in Table 6. The results showed that most studies were of moderate quality. The most frequently deficient quality criteria included the following: “Was the research design appropriate to address the aims of our research?” (70% Yes); “Has the relationship between research participants been adequately addressed?” (66.67% Yes); and “Were the qualitative data collected in a way that optimally answers our research questions?” (33% Yes).

3.2. Meta-Analysis Results

Information such as effect sizes for individual studies can be found in Figure S1. Publication bias tests indicated no bias, and these tests can be seen in Spreadsheet S5. The knowledge effect size (G = 0.58, p < 0.001, n = 12) suggests that online food-safety educational programs have a moderate effect. Additionally, the practice effect size (G = 0.42, p = 0.0008, n = 5) suggests that online educational programs have a moderate effect on improving food handling practices. However, the attitude effect size (G = 0.29, p = 0.078, n = 5) is a statistically non-significant low effect.

3.3. Subgroup Meta-Analysis and Thematic Synthesis Results

Based on the meta-analysis results, the effectiveness of online educational programs on each subpopulation is varied, as seen in Table 7. I2 is low across groups. However, the confidence intervals for I2 values are considerably larger. The meta-analysis results for consumers, students, and food workers, as well as the barriers and recommendations for these subpopulations, are reported in order.
Consumers have a knowledge effect size (G = 0.74) and practices effect size (G = 0.35) that indicate a moderate to possibly high effectiveness and low to possibly moderate effectiveness, respectively. They are the largest group and the most culturally diverse. However, consumers were noted to have experienced (1) technical difficulties and (2) difficulty focusing [51,60]. Recommendations for consumers were limited, and not enough evidence was collected to synthesize results.
Students had a knowledge effect size (G = 0.72) indicating a moderate-to-high effectiveness. Students had both an attitudes effect size (G = 0.23) and a practices effect size (G = 0.30) indicating low effectiveness for both outcomes. Additionally, the knowledge effect size has the largest between-study variance of all groupings (τ² = 0.094), which is reflected in the confidence interval (CI = 0.3). Three of the studies focused on college-aged students, while two focused on middle- or high-school students (Figure S1). Larger study-level knowledge effect sizes (G = 1.5, G = 0.94, G = 0.55) are associated with older college students, while smaller study-level knowledge effect sizes (G = 0.68, G = 0.34) are associated with younger K-12 students [43,48,53,54,56]. Students struggled with a lack of time, using virtual technology, and social isolation [42,44]. In Fajardo-Lira and Heiss, students reported that the training pace was “moving too quickly” to learn effectively, which resulted in some skipped material [48]. Debacq et al. also noted increased isolation and stress, as their students were asked to keep their cameras off in synchronous sessions, students’ computers would slow during aspects of the course, and internet connection problems were sometimes an issue [44].
Some recommendations identified for students were to (1) increase social interaction and (2) use pop culture, gamification, and videos. Increasing the amount of social interaction in training can be accomplished through incorporating more virtual one-on-one interactions and social media [44]. Debacq et al. recommended increasing the number of interactions between the instructor and students [44]. Mayer and Harrison [54] and Lynch et al. [53] suggested that social media or discussion boards can increase social interaction and create a social learning environment. Additionally, Debacq et al. noted that students appear to enjoy more pop-culture references and games in learning [44]. Mayer and Harrison discovered that students preferred videos as a method of information delivery and that they should be four to seven minutes long [54].
Food workers had a knowledge effect size (G = 0.38) and an attitudes effect size (G = 0.35) that both indicate low to possibly moderate effectiveness. Food workers also were noted to have experienced barriers with (1) turnover, (2) varying educational levels, and (3) a lack of time [50]. Fenton et al. (2006) stated that employees were given only one hour to complete the training [50]. Still, some employees needed more time because they had “difficulty reading the material”.
Some recommendations for food workers that were identified are to: (1) use extra resources, (2) use evaluations, and (3) use videos. Temen et al. noted that using extra resources to help those struggling with difficult content was shown to help [59]. Costello et al. recommended that evaluations are necessary to ensure that the content is understood [40]. Echoing the findings for students, Temen et al. found that including videos in training generated higher engagement and more improvement in their study [59].

4. Discussion

4.1. Search Results Indicate Potential Challenges with the Search and Analysis

This review used a structured approach to identify and synthesize available evidence on the effectiveness of online food-safety educational programs in the different subpopulations of students, food workers, and consumers. Additionally, the review employed a thematic synthesis to combine barriers and recommendations for these subpopulations. This review is the first to report an effect size for online-only food-safety educational programs. The types of programs investigated by this study are diverse and provide a broad picture of the effectiveness of online food-safety educational programs. However, most of the studies from the meta-analysis (n = 14) and qualitative synthesis (n = 11) were conducted in the USA. This finding is similar to many meta-analyses on different topics within food safety education, in which a majority of the studies were conducted in North America [6,63,64,65,66]. This attribute can limit external validity when extrapolating beyond the USA. The United States is an example of a Western, educated, industrialized, rich, and democratic (WEIRD) society [67]. WEIRD describes a large collection of cultural and environmental characteristics that may not represent other parts of the world in sociological and psychological terms. Cultural factors, such as those in Hofstede’s cultural taxonomy [68], have been demonstrated to affect the way students learn, their learning styles, perceptions, motivational orientation, and achievement, albeit in a potentially minor fashion [69,70,71,72,73]. The effect of cultural factors has also been explored in studies on workplace food safety culture and argued to merit consideration in studies on behavior change [74,75]. Additionally, sociocultural approaches in adult education emphasize how social, cultural, and political environments influence adult learning and development [76,77].
It is important to recognize that many of the findings from these studies may not be best extrapolated to other settings without appropriately considering differences in culture and environment underlying these different populations. As such, researchers may want to attempt to investigate effectiveness in other cultures since the nature of online training enables easy interaction with the rest of the world.
The search did not collect non-bibliographic sources (i.e., grey literature or PhD dissertations) or articles in a language other than English due to a lack of resources to enable these inclusions. Other studies may have been published with negative results or in other languages and may have been missed [78]. Pham et al. investigated a random sample of meta-analyses to explore the effects of limiting the search to bibliographic sources [79]. In their sample of meta-analyses from the agri-food public health area, up to five articles included in each study would have been excluded had the search been limited to bibliographic sources. This suggests that additional articles might have been found by searching non-bibliographic sources. Additionally, a meta-analysis by Young et al. (2019) [6] on food handler training and educational interventions identified only two non-English articles. However, they noted there might have been more articles in different languages not listed in the bibliographic databases they searched. The exact number of articles missed in the search is unknown, but it is likely that some were missed.
The impact of the study designs in this meta-analysis was reflected in the GRADE and CERQual assessments by the lowering of confidence in the findings. Our search collected only three RCTs, which provide the highest quality of evidence. With seven collected studies consisting of pre-post uncontrolled designs, some concern should be raised about potential bias in the grouped results, as these designs are susceptible to bias [80]. This was reflected in our GRADE assessment, as most of the final scores were negative, suggesting low confidence. Pre-post studies are likely more commonly used due to difficulties in recruiting participants [81]. However, this study design suffers from major internal validity issues by assuming the intervention directly causes the outcomes, ignoring uncontrolled factors [82]. Maier-Riehle and Zwingmann identified in their meta-analysis that effect sizes from single-group pre-post designs are likely overestimated [83]. It is recommended that future studies use randomized controlled trials and follow the Consolidated Standards of Reporting Trials for Sociological and Psychological Interventions guidelines to limit these issues [84]. The CERQual assessment of individual studies (Spreadsheet S2) identified that, on average, 94% of codes were based on second-order constructs. Furthermore, the modified CASP questions revealed that only 33% of the included studies collected their qualitative data in a way that was optimal for the analysis. This suggests that results are based on researcher interpretation of events without any guarantee of a validated analysis of primary qualitative data, which is a potential source of bias. This was reflected in the downgrade in the CERQual assessment.
While most studies were found to be good quality and to have a low to moderate risk of bias, these studies also may be featured in other meta-analyses with different assessments of bias and GRADE results [6,12,22,63,64]. According to Young et al. [6], GRADE requires some judgment to determine appropriate grading criteria and may cause different conclusions based on the researchers. While this study had two reviewers independently assess GRADE and ROBINS-I, this may not be enough to ensure commonality in assessment.

4.2. Both Knowledge and Behavior Effect Sizes Were Moderately Effective

Currently, no other meta-analysis calculates composite effect sizes for food safety educational programs that include different populations together. However, comparisons can be made with existing meta-analysis literature reviews of online food-safety educational programs in other populations (Table 8).
The knowledge effect size was found to be moderately effective, suggesting that online food-safety educational interventions are effective at communicating food safety information. This finding remains consistent with the effect sizes reported in previous meta-analyses [6,12,22,64]. While the knowledge effect size was effective, it is not the only factor that can change behavior [6,85]. Attitudes may play a role in food safety behavior change.
The attitudes effect size was low and not statistically significantly different from zero (G = 0.29, p = 0.078), suggesting that the ability of online food-safety educational programs to change how food handlers view food safety is minimal. This value is comparable to the attitudes standardized mean difference (SMD) effect size calculated in Insfran-Riverola et al. [12] (SMD = 0.29). However, Insfran-Riverola et al. [12] concluded that the effect was moderate, despite the calculated effect size being much closer to what is considered low effectiveness based on the criteria in Durlak [27] and Borenstein et al. [86]. In contrast, interpretations are cautioned to be made relative to the intervention [87], and larger effect sizes are seen in the results of previous meta-analyses [6,22,64], calling that interpretation into question. Regardless, the calculated effect sizes are similar across three other meta-analyses [6,22,63]. Specifically, the results are similar to the findings in Young et al.’s meta-analysis [6] on food handler food safety training, which described a low to moderate effect on attitudes in non-randomized controlled trials. Attitudes play an important role in determining food safety behaviors [88,89,90]. Changes in attitude-related variables are important factors that precede behavior change and are included in several models of behavior change [91,92,93]. Accordingly, several meta-analyses have shown that positive attitude change corresponds with a positive change in behavior [65,94,95]. This suggests that a low attitude effect size may be a barrier to improving behavior change outcomes in interventions. It is recommended that future studies focus on improving attitudes by stressing the importance of risk and good food safety practices [96].
The behavior effect size was found to be of moderate effectiveness. This is consistent with the observed-behavior effect sizes of Soon et al. [64] and Insfran-Rivarola et al. [12]. The effect size for randomized controlled trials of adult educational programs by Young et al. was of similar effectiveness [63]; however, all other effect sizes from Young et al. were considerably lower [63]. Additionally, Young et al. [6] and Young et al. [22] both calculated larger effect sizes for NCTs. The behavior effect size in the current meta-analysis was calculated from only a single RCT, two pre-post studies, and three NCTs. Given the study designs in Young et al. [6] and the NCT effect size in Young et al. [22], our calculated effect size for practices would be expected to be closer to those studies’ effects, because it is composed mainly of NCTs. Our smaller effect size may therefore suggest a difference in effectiveness between online and in-person food-safety educational programs, although confirming this would require a comparative meta-analysis. Despite any potentially lower effectiveness, online food-safety educational programs are an effective methodology for improving participants’ food safety behaviors. Additionally, many of the behaviors in the studies were self-reported, and such measures are heavily affected by the instrument used [97]. Furthermore, these measurements are likely to be heavily influenced by social desirability, which may cause overestimation [98,99,100]. However, Young et al. also note that some studies have shown agreement between observed and reported behaviors when valid and reliable measurement instruments are used [63]. It is not evident that the evaluations in the collected studies verified the surveys used to measure behaviors, which suggests that the actual behavior effect size may be lower than calculated.
Additionally, a previous meta-analysis separated inspection scores, observed behaviors, and self-reported behaviors into separate categories [6]. This separation was not feasible in the current meta-analysis due to the small number of such studies. For example, Duong et al.’s research was one of the few studies to report observed behaviors. Without assessed correlations among these measures, the effect of combining them on the reported effect sizes is unknown [47].
When compared with other meta-analyses of food safety interventions, the results suggest a recurring pattern seen in both online and in-person food-safety educational programs: knowledge change shows the highest effect size, behavior change the second highest, and attitude change the lowest [6,12,22,63,64]. This pattern appears in meta-analyses of food-safety educational programs regardless of delivery method, suggesting little difference between in-person and online programs in the relative effectiveness of different outcomes. While this observation may seem to conflict with the conclusion drawn from the behavior effect size, the pattern concerns the relative order of effectiveness, not the actual magnitude suggested by the behavior effect size.
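For readers unfamiliar with how pooled effect sizes such as those compared above are produced, the random-effects pooling step can be sketched in a few lines. The following is a minimal, illustrative DerSimonian–Laird implementation in Python; the function name and the example inputs are hypothetical and are not taken from the data of this review.

```python
import math

def random_effects_pool(effects, variances):
    """Pool per-study effect sizes (e.g., Hedges' g) with a
    DerSimonian-Laird random-effects model."""
    k = len(effects)
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                     # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0   # I^2 (%)
    return pooled, se, i2

# Hypothetical inputs: three studies with equal within-study variance.
pooled, se, i2 = random_effects_pool([0.2, 0.9, 0.5], [0.02, 0.02, 0.02])
```

Note that with equal within-study variances the random-effects pooled estimate reduces to the simple mean; the estimated between-study variance τ² instead widens the standard error, reflecting heterogeneity (reported as I²).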

4.3. Subpopulation Analysis

The current meta-analysis explored the effectiveness of online food-safety educational programs and the barriers and recommendations for different populations: consumers, students, and food workers. The implications of the meta-analysis and thematic synthesis results are discussed for each population in turn.
Consumers had a knowledge effect size of G = 0.74, similar to the large effect size of SMD = 0.87 reported by Young et al. for randomized controlled trials with adults [63]. This suggests that online food-safety educational programs are about as effective at changing consumers’ knowledge as food-safety educational programs of any format. Consumers also had a low to moderate effect size for practices, similar to the effect size for non-randomized controlled trials found by Young et al. [63]. This is surprising because the consumer studies consist of three pre-post studies and two RCTs. Due to the high heterogeneity noted in Young et al. [63], it is difficult to determine whether the results are similar because of overestimation bias from the pre-post studies or because of random chance.
Consumers faced (1) difficulties in focusing and (2) technical difficulties. Difficulty focusing is similar to barriers noted in the adult-learning literature, an adjacent topic since consumers are typically adults. Trepka et al. identified that consumers had difficulty focusing on learning when they also had to care for their children [60]. This would be classified as an external barrier, one that emerges from personal responsibilities or other influences beyond the learner’s control [101,102]. Another traditional type of barrier to adult learning is the institutional barrier, constructed by educational bodies, such as researchers, in ways that make participation challenging [103,104]. Technical issues may be considered an institutional barrier, as they emerge from the delivery format chosen by the educational body. Technical difficulties have been linked to lower test scores for adults in online training programs and can cause attrition in longer-term programs [105]. This suggests that technical issues can pose a large barrier to intervention effectiveness. Therefore, as technology grows and more tools become available, effort must also be invested in helping consumers learn to use the technology [104,106]. Technical issues are a barrier unique to online food-safety educational programs. To overcome this barrier, Zirkle and Fletcher state that having technical assistance in place and maintained is key to any successful online educational program [104].
While no recommendations were identified in the thematic synthesis for consumers, the principles of adult education may be applied when educating them. Knowles states that adults are independent and self-directed and need to know the rationale for what they are learning [107]. Knowles et al. outline the principles of andragogy, the art of helping adults learn: (1) the learner’s need to know, (2) the self-concept of the learner, (3) the learner’s prior experience, (4) readiness to learn, (5) orientation to learning, and (6) motivation to learn [108]. While many of these key assumptions lack comprehensive empirical support, they can serve as a starting place when designing consumer online food-safety educational programs [109]. Building on these older principles, Greene and Larsen proposed a modern framework called “virtual andragogy”, built upon Bloom’s taxonomy, andragogy, transformational learning, constructivism, and communities of practice; they indicate that successful online educational programs for adults will engage the following elements: (1) readiness to learn and understanding, (2) need to know and remembering, (3) experience and applying, (4) orientation to learning and analyzing, (5) self-concept and evaluating, and (6) motivation to learn and creating [110]. However, as a more recent theory, it has yet to be validated by other publications. These proposed frameworks can serve as design principles for online food-safety educational interventions for consumers. While consumers are an important population to educate to help reduce foodborne illnesses, the effectiveness, barriers, and recommendations for online food-safety educational programs differ for students.
Students had a large effect size for knowledge and low effect sizes for attitudes and behaviors. This suggests that students are learning about food safety but often are not changing their behaviors or their views on the importance of food safety. This appears to be the first effect size reported for online food-safety educational interventions for students. Nevertheless, other meta-analyses have examined the effectiveness of online educational programs for students. Ulum’s meta-analysis found a moderate effect size for student outcomes, although the subgroup analysis restricted to studies from the USA reported a low effect size [111]. Weightman et al.’s meta-analysis of online education for students’ information literacy skills found a large effect size (G = 0.92) [112]. These previous meta-analyses support the finding that online food-safety education is an effective way to deliver food safety information to students. A good comparison is difficult to find, as many studies instead focus on the comparative effectiveness of in-person versus online educational programs and typically measure only knowledge [13,113,114].
The results also showed that larger effect sizes were associated with college-level students (n = 3), while smaller effect sizes were associated with younger K-12 students (n = 2). This suggests that online educational interventions may be more effective for older students. While the sample size is small, the differences in magnitude suggest a hypothesis backed by other evidence and theory. This result is similar to a finding in Means et al.’s meta-analysis of online educational programs [13]. In their literature review, Turan et al. identify the traits for success in online education as (1) self-regulation, (2) satisfaction, (3) perceived flexibility, and (4) independence [115]. They find that the literature suggests self-regulation and perceived flexibility predict satisfaction, which is highly associated with (1) dropout rate, (2) motivation, (3) determination to complete a course, and (4) success rates [115]. Self-regulation develops slowly from childhood to adulthood, albeit with some heterogeneity [116,117]. This suggests that these traits are mostly possessed by adults and may explain why online educational interventions are more effective for older students. However, Barbour and Reeves found that younger students enrolled in an online school were more successful when they had higher self-regulation [118]. While more evidence is needed to confirm whether online food-safety educational programs are more effective for older students than younger ones, it is clear that teaching self-regulation to younger students is also important for their success.
One barrier identified for students was a loss of social interaction. Hermanto and Srimulyai identified this barrier in their study of students participating in an online educational program during COVID-19 [119]. Social interaction is an extremely important aspect of education, especially for adult learners such as college students [120,121], and it plays a huge role in personal motivation for learning [122]. Additionally, the loss of social interaction can compound negatively with other barriers [123]. For example, the current study found that the loss of social interaction was noted alongside increased stress. The exact effect of stress varies from person to person, but in some cases it can affect learning outcomes [124]. Stress can also negatively affect learners’ attitudes toward what is being taught, although the overall effect is not fully understood [124]. Furthermore, stress and difficulty focusing suggest an underlying lack of motivation to complete the training [120,125]. Training needs to be interactive and rewarding to be motivating and effective, especially for college students [120,125,126]. Overall, this barrier can present a huge problem by potentially limiting the effectiveness of the training. Ivanec noted in a survey of Croatian university students that those who perceived greater social isolation also reported greater difficulties with learning and self-regulation in studying [127]. As such, future studies should identify means to implement more social interaction in their online food-safety educational programs.
One recommendation for students that may help generate more social interaction is to use more videos, because they tend to generate engagement. Furthermore, Dailey-Hebert suggested that videos can improve interpersonal student–teacher relations while helping students learn [128]. Beyond recommendations for students, the current analysis also examined the effectiveness of food safety education programs for food workers.
Food workers had a low to moderate effect size for knowledge (G = 0.38). This is lower than the effect sizes from meta-analyses on food handlers, in which knowledge was found to have a large effect size [6,12,22,64]. This suggests that, compared with food-safety educational programs of any instructional format, online programs may be below average in effectiveness at improving food workers’ food safety knowledge. However, a more rigorous investigation is warranted, as the current meta-analysis was not designed as a comparative analysis. Furthermore, this small effect size may be due to a study by Feinstein et al. that was initially included in our meta-analysis results but removed as an outlier with a very large effect size [49]; this is discussed later in the limitations. Despite the effect size for this population being lower than in the other studies, the statistically significant effect suggests that online food-safety education is effective at increasing knowledge for food workers, albeit perhaps to a limited degree.
One of the barriers noted in this group is the high turnover rate, which lowers peer collaboration and communication [129]. Cornelissen found that peer collaboration and communication support the knowledge spillover that supports learning [130]. Additionally, turnover may reduce employees’ motivational investment in training. Royalty suggests that human capital theory predicts workers will be more likely to invest in job training the longer they expect to remain employed [131], and found that the likelihood of training success can be attributed to differences in turnover by education level rather than a pure interaction between education and training. Overall, high turnover has a negative impact on employee training.
Another noted barrier was varying education levels, which can cause difficulty understanding the educational content. Fenton et al. recommended reviewing the design of the training for appropriate difficulty, especially reading level [50]. One way to implement this would be adaptive learning paths. Such personalized training is becoming more popular due to newer technologies that can adapt and modify content as feedback is received [132]. Multiple studies have shown that automated adaptive learning paths benefit various populations [133,134,135]. However, no study has yet evaluated the effectiveness of such programs for food workers, which presents a potential opportunity to improve online food-safety educational programs for this population.
One other barrier noted in this population was a lack of time for training, a barrier also identified in other studies of food handlers and managers [136,137,138]. Given its prevalence, training that takes less time may help alleviate it [139]. Sandlin’s summary of the literature identified online educational programs as more time- and cost-efficient to implement than in-person alternatives [140]. Online educational programs are also flexible [141]. It may therefore be best to design online educational programs for food workers that are shorter than those currently implemented, which has been demonstrated to be possible without decreasing effectiveness [142,143].

4.4. Limitations

Despite the carefully developed protocol, some limitations need to be acknowledged. Some stem from methodological assumptions made as a compromise to include more studies in the analysis; others emerged from the collected studies themselves.
To accommodate more studies, p-values were used to estimate variance. However, some studies reported only threshold p-values rather than exact values; these reported values were treated as if they were the actual p-values, which conservatively overestimates the variance. Additionally, while none of the publication bias tests indicated publication bias, it remains a concern in any meta-analysis.
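This variance-estimation step can be illustrated with a short sketch, under the assumption that the underlying test statistic is approximately a two-sided z-test; the function name is illustrative and not part of the review’s protocol. Note that plugging in a threshold p-value (e.g., p = 0.05 when only “p < 0.05” is reported) yields the largest standard error consistent with that threshold, i.e., a conservative overestimate.

```python
from statistics import NormalDist

def se_from_p(effect, p, two_sided=True):
    """Back-calculate an approximate standard error from an effect size and
    its p-value, assuming the test statistic is z = effect / SE. Using a
    reported threshold p-value in place of the exact value overestimates SE,
    and hence the variance, conservatively."""
    tail = p / 2 if two_sided else p
    z = NormalDist().inv_cdf(1 - tail)   # critical z for the reported p
    return abs(effect) / z

# Example with the attitudes result reported above (G = 0.29, p = 0.078):
se = se_from_p(0.29, 0.078)   # roughly 0.16, so variance is about 0.027
```

The variance used in pooling is then simply the square of this standard error.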
Several limitations arose from the nature of the collected studies. Many studies failed to meet the inclusion criteria for the meta-analysis because of missing required metrics, confounding interventions, or failure to collect a pre-intervention score [44,51,55,59,144,145,146,147,148,149]. Therefore, not all available or collected data could be synthesized, which is the primary goal of a meta-analysis. The lack of available studies is also reflected in the small number of studies that could be included in the subpopulation analysis. One standard method used to evaluate outcomes in the collected studies is testing, and tests can suffer from a ceiling effect and regression to the mean (RTM) biases when calculating effect sizes [82]. The ceiling effect occurs when comparing pre-tests and post-tests: if the pre-test mean is high, it limits the magnitude of change that can be measured in the post-test. RTM refers to the tendency of extremely low or extremely high values to move toward the mean on re-measurement. For example, if a participant scores 90% on a pre-test but 85% on the post-test, this decrease is unlikely to reflect “unlearning,” the idea that the intervention worsened their knowledge, but is instead due to RTM. Future studies are recommended to consider these biases and make efforts to correct or control for them, as Barnett et al. suggested [150]. One example suggesting the existence of the ceiling effect in some studies is Feinstein et al. (2013), which was initially included in the food workers group but had a considerable effect size (G = 2.22), more than three standard deviations from the mean [49]. Including this study would have increased the food workers’ knowledge effect size to G = 0.78, closer to the effect sizes seen in other meta-analyses, but it was removed as an outlier. Feinstein et al. excluded from their reporting all participants who scored greater than 80% on the pre-test, whereas three of the five studies in the food workers group had pre-test means greater than 80% [49]. This exclusion of participants would explain why the study’s calculated effect size was so large compared with others in the group, as it reduces the impact of the ceiling effect. However, it is worth considering whether future studies should mimic this approach: researchers should identify whether the main objective of the educational program is to measure effectiveness in all participants or only in those who may need the knowledge.
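The RTM argument can be made concrete with a small simulation (illustrative parameters only; nothing here is drawn from the included studies). With no learning effect at all, selecting participants with high pre-test scores produces an apparent post-test decline:

```python
import random

def rtm_demo(n=20000, ability=0.85, noise=0.05, pre_cut=0.90, seed=7):
    """Demonstrate regression to the mean with NO real learning effect:
    pre- and post-test scores share the same true ability and differ only
    by measurement noise (scores capped at 100%). Conditioning on a high
    pre-test score makes the post-test mean fall back toward the true
    ability, mimicking apparent 'unlearning'."""
    rng = random.Random(seed)
    scores = [(min(1.0, ability + rng.gauss(0, noise)),
               min(1.0, ability + rng.gauss(0, noise))) for _ in range(n)]
    selected = [(pre, post) for pre, post in scores if pre >= pre_cut]
    pre_mean = sum(p for p, _ in selected) / len(selected)
    post_mean = sum(q for _, q in selected) / len(selected)
    return pre_mean, post_mean

pre, post = rtm_demo()   # pre mean above 0.90, post mean near the true 0.85
```

The simulated group’s post-test mean drops back toward the true ability even though nothing was taught or forgotten, which is exactly the bias a high-pre-score sample invites.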

5. Conclusions

Overall, our results present a rough estimate of the success of food safety training for different groups but suggest that online educational programs are effective at improving knowledge in almost all subpopulations. However, there is room for improvement in practices and attitudes. The results may suggest that online food-safety educational programs are more effective for older students than younger students, although more research is needed to confirm this. The findings from the meta-analysis also need to be interpreted with some caution due to the low number of RCTs and NCTs, as the results may be biased. Online food-safety educational programs for food workers are limited in effectiveness by a lack of time to conduct training, varying educational levels, and high worker turnover; it may be beneficial to design shorter trainings and to experiment with adaptive learning paths. Students struggle with technical difficulties and a lack of social interaction. Students can be more successful when taught self-regulation and when they build confidence in using technology. Furthermore, including more opportunities for social interaction, such as social media or message forums in the program, may reduce the impact of the loss of social interaction. Training courses should be shorter and incorporate more breaks to improve the experience. Additionally, designing the training to be more interactive, with activities, can bring benefits such as improved training results. To improve the results of future studies, scientists should carefully consider how they measure study outcomes and what the main objectives of their intervention should be. Ultimately, the end goal of training is to improve practices.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/foods13050794/s1, Document S1: PRISMA checklist, Table S1: Date databases accessed, Table S2: ROBINS-I assessment summary for studies in the meta-analysis, Table S3: ROB-2 assessment summary results for studies in the meta-analysis, Spreadsheet S1: GRADE-CERQual grouping assessment results, Spreadsheet S2: GRADE-CERQual individual studies results, Spreadsheet S3: GRADE grouping assessment, Spreadsheet S4: GRADE individual studies assessment, Spreadsheet S5: Publication bias tests and funnel plots, Figure S1: Study-level individual effect sizes.

Author Contributions

Conceptualization, Z.B. and Y.F.; methodology, Z.B.; software, Z.B.; formal analysis, Z.B.; data curation, Z.B.; writing—original draft preparation, Z.B.; writing—review and editing, Z.B., S.S. and Y.F.; visualization, Z.B.; supervision, Y.F.; project administration, Y.F.; funding acquisition, S.S. and Y.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the U.S. Department of Agriculture, National Institute of Food and Agriculture, grant numbers 2022-70020-37665, 2021-70020-35663, and 2020-70020-32263.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.

Acknowledgments

We would like to acknowledge Amalia Diaz and Reyhan Soewajarno for assistance with paper screening; Han Chen for advisement on the qualitative data analysis methodology and assistance with creating the codebook for the study; and Reyhan Soewajarno and Anthony James Vicino for assistance in identifying qualitative data.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Angelo, K.M.; Nisler, A.L.; Hall, A.J.; Brown, L.G.; Gould, L.H. Epidemiology of restaurant-associated foodborne disease outbreaks, United States, 1998–2013. Epidemiol. Infect. 2017, 145, 523–534. [Google Scholar] [CrossRef]
  2. Redmond, E.C.; Griffith, C.J. Consumer Food Handling in the Home: A Review of Food Safety Studies. J. Food Prot. 2003, 66, 130–161. [Google Scholar] [CrossRef]
  3. CDC. Contributing Factors: Preventable Causes of Foodborne Illness Infographic; CDC: Atlanta, GA, USA, 2017.
  4. Feng, Y.; Bruhn, C.M.; Elder, G.; Boyden, D. Assessment of Knowledge and Behavior Change of a High School Positive Deviance Food Safety Curriculum. J. Food Sci. Educ. 2019, 18, 45–51. [Google Scholar] [CrossRef]
  5. Archila-Godínez, J.C.; Chen, H.; Klinestiver, L.; Rosa, L.; Barrett, T.; Henley, S.C.; Feng, Y. An Evaluation of a Virtual Food Safety Program for Low-Income Families: Applying the Theory of Planned Behavior. Foods 2022, 11, 355. [Google Scholar] [CrossRef] [PubMed]
  6. Young, I.; Greig, J.; Wilhelm, B.J.; Waddell, L.A. Effectiveness of food handler training and education interventions: A systematic review and meta-analysis. J. Food Prot. 2019, 82, 1714–1728. [Google Scholar] [CrossRef] [PubMed]
  7. Saniga, K.; Chapman, B.; Stevenson, C.D. A Case Study of Food Safety Training Delivery Methods in Dairy Processing Plants. Food Prot. Trends 2023, 43, 23–32. [Google Scholar] [CrossRef]
  8. Ghimire, R.P.; Maredia, K.M.; Adeyemo, M.; Mbabazi, R. Participants’ evaluation of an online international food safety short course. Evaluation Program Plan. 2022, 92, 102089. [Google Scholar] [CrossRef] [PubMed]
  9. Hann, M.; Allison, R.; Truninger, M.; Junqueira, L.; Silva, A.; Lundgren, P.T.; Hugues, V.L.; Godard, M.; Fehér, Á.; Csenki, E.; et al. Educating Young Consumers about Food Hygiene and Safety with SafeConsume: A Multi-Centre Mixed Methods Evaluation. Educ. Sci. 2022, 12, 657. [Google Scholar] [CrossRef]
  10. Koch, A.K.; Mønster, D.; Nafziger, J.; Veflen, N. Food safety related efficacy beliefs, behaviors, beliefs in myths, and the effects of educational online interventions: Data from an online survey experiment with 1973 consumers from Norway and the UK. Data Brief 2022, 42, 108102. [Google Scholar] [CrossRef]
  11. Sitzmann, T.; Stewart, D.; Wisher, R. The comparative effectiveness of web-based and classroom instruction: A meta-analysis. Pers. Psychol. 2006, 59, 623–664. [Google Scholar] [CrossRef]
  12. Insfran-Rivarola, A.; Tlapa, D.; Limon-Romero, J.; Baez-Lopez, Y.; Miranda-Ackerman, M.; Arredondo-Soto, K.; Ontiveros, S. A systematic review and meta-analysis of the effects of food safety and hygiene training on food handlers. Foods 2020, 9, 1169. [Google Scholar] [CrossRef] [PubMed]
  13. Means, B.; Toyama, Y.; Murphy, R.; Bakia, M.; Jones, K. Evaluation of Evidence-Based Practices in Online Learning: A Meta-Analysis and Review of Online Learning Studies Center for Technology in Learning; US Department of Education: Jessup, MD, USA, 2010.
  14. Islam, M.A.; Asliza, N.; Rahim, A.; Liang, T.C.; Momtaz, H. Effect of Demographic Factors on E-Learning Effectiveness in a Higher Learning Institution in Malaysia. Int. Educ. Stud. 2011, 4, 112–121. [Google Scholar] [CrossRef]
  15. Arlinghaus, K.R.; Johnston, C.A. Advocating for Behavior Change with Education. Am. J. Lifestyle Med. 2018, 12, 113–116. [Google Scholar] [CrossRef] [PubMed]
  16. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  17. Tong, A.; Flemming, K.; McInnes, E.; Oliver, S.; Craig, J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med. Res. Methodol. 2012, 12, 181. [Google Scholar] [CrossRef] [PubMed]
  18. Young, I.; Thaivalappil, A. A systematic review and meta-regression of the knowledge, practices, and training of restaurant and food service personnel toward food allergies and Celiac disease. PLoS ONE 2018, 13, e0203496. [Google Scholar] [CrossRef]
  19. Young, I.; Waddell, L. Barriers and facilitators to safe food handling among consumers: A systematic review and thematic synthesis of qualitative research studies. PLoS ONE 2016, 11, e0167695. [Google Scholar] [CrossRef] [PubMed]
  20. Higgins, J.P.; Thomas, J.; Chandler, J.; Cumpston, M.; Li, T.; Page, M.J.; Welch, V.A. (Eds.) Cochrane Handbook for Systematic Reviews of Interventions; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  21. Pollock, A.; Berge, E. How to do a systematic review. Int. J. Stroke 2018, 13, 138–156. [Google Scholar] [CrossRef]
  22. Young, I.; Waddell, L.A.; Wilhelm, B.J.; Greig, J. A systematic review and meta-regression of single group, pre-post studies evaluating food safety education and training interventions for food handlers. Food Res. Int. 2020, 128, 108711. [Google Scholar] [CrossRef]
  23. Sterne, J.A.C.; Hernán, M.A.; Reeves, B.C.; Savović, J.; Berkman, N.D.; Viswanathan, M.; Henry, D.; Altman, D.G.; Ansari, M.T.; Boutron, I.; et al. ROBINS-I: A tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016, 355, i4919. [Google Scholar] [CrossRef]
  24. Sterne, J.A.C.; Savović, J.; Page, M.J.; Elbers, R.G.; Blencowe, N.S.; Boutron, I.; Cates, C.J.; Cheng, H.-Y.; Corbett, M.S.; Eldridge, S.M.; et al. RoB 2: A revised tool for assessing risk of bias in randomised trials. BMJ 2019, 366, l4898. [Google Scholar] [CrossRef]
  25. Long, H.A.; French, D.P.; Brooks, J.M. Optimising the value of the critical appraisal skills programme (CASP) tool for quality appraisal in qualitative evidence synthesis. Res. Methods Med. Health Sci. 2020, 1, 31–42. [Google Scholar] [CrossRef]
  26. Drahota, A.; Bellor, E. Finding_SDs.xls. Available online: https://training.cochrane.org/resource/revman-calculator (accessed on 18 November 2020).
  27. Durlak, J.A. How to Select, Calculate, and Interpret Effect Sizes. J. Pediatr. Psychol. 2009, 34, 917–928. [Google Scholar] [CrossRef] [PubMed]
  28. Hedges, L.V.; Higgins, J.P.T.; Rothstein, H.R.; Borenstein, M. Introduction to Meta-Analysis, 1st ed.; Wiley: Hoboken, NJ, USA, 2009. [Google Scholar]
  29. Sidik, K.; Jonkman, J.N. Robust variance estimation for random effects meta-analysis. Comput. Stat. Data Anal. 2006, 50, 3681–3701. [Google Scholar] [CrossRef]
  30. Zaiontz, C. Real Statistics Resource Pack Software (Release 7.6) [Computer Software]. 2020. Available online: https://www.real-statistics.com/ (accessed on 18 November 2020).
  31. Huedo-Medina, T.B.; Sánchez-Meca, J.; Marín-Martínez, F.; Botella, J. Assessing heterogeneity in meta-analysis: Q statistic or I2 index? Psychol. Methods 2006, 11, 193–206. [Google Scholar] [CrossRef]
  32. Salanti, G.; Del Giovane, C.; Chaimani, A.; Caldwell, D.; Higgins, J. Evaluating the quality of evidence from a network meta-analysis. PLoS ONE 2014, 9, e99682. [Google Scholar] [CrossRef]
  33. Borenstein, M.; Higgins, J.P.T.; Hedges, L.V.; Rothstein, H.R. Basics of meta-analysis: I2 is not an absolute measure of heterogeneity. Res. Synth. Methods 2017, 8, 5–18. [Google Scholar] [CrossRef]
  34. Higgins, J.P.T.; Thompson, S.G. Quantifying heterogeneity in a meta-analysis. Stat. Med. 2002, 21, 1539–1558. [Google Scholar] [CrossRef] [PubMed]
  35. Begg, C.B.; Mazumdar, M. Operating characteristics of a rank correlation test for publication bias. Biometrics 1994, 50, 1088–1101. [Google Scholar] [CrossRef]
  36. Egger, M.; Smith, G.D.; Schneider, M.; Minder, C. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997, 315, 629. [Google Scholar] [CrossRef]
  37. Thomas, J.; Harden, A. Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Med. Res. Methodol. 2008, 8, 45. [Google Scholar] [CrossRef]
  38. Britten, N.; Campbell, R.; Pope, C.; Donovan, J.; Morgan, M.; Pill, R. Using meta ethnography to synthesise qualitative research: A worked example. J. Health Serv. Res. Policy 2002, 7, 209–215. [Google Scholar] [CrossRef]
  39. Evans, E.W.; Lacey, J.; Taylor, H.R. Development and piloting of a support package to enable small and medium sized food and drink manufacturers to obtain third party food safety certification. Food Control 2021, 127, 108129. [Google Scholar] [CrossRef]
  40. Costello, C.; Gaddis, T.; Tamplin, M.; Morris, W. Evaluating the effectiveness of two instructional techniques for teaching food safety principles to quick service employees. Foodserv. Res. Int. 1997, 10, 41–50. [Google Scholar] [CrossRef]
  41. Chen, H.; Ellett, J.; Phillips, R.; Feng, Y. Small-scale produce growers’ barriers and motivators to value-added business: Food safety and beyond. Food Control 2021, 130, 108192. [Google Scholar] [CrossRef]
  42. Lewin, S.; Booth, A.; Glenton, C.; Munthe-Kaas, H.; Rashidian, A.; Wainwright, M.; Bohren, M.A.; Tunçalp, Ö.; Colvin, C.J.; Garside, R.; et al. Applying GRADE CERQual to qualitative evidence synthesis findings: Introduction to the series. Implement. Sci. 2018, 13, 2. [Google Scholar] [CrossRef]
  43. Barrett, T.; Feng, Y. Effect of Observational Evaluation of Food Safety Curricula on High School Students’ Behavior Change. J. Food Prot. 2020, 83, 1947–1957. [Google Scholar] [CrossRef]
  44. Debacq, M.; Almeida, G.; Lachin, K.; Lameloise, M.-L.; Lee, J.; Pagliaro, S.; Romdhana, H.; Roux, S. Delivering remote food engineering labs in COVID-19 time. Educ. Chem. Eng. 2021, 34, 9–20. [Google Scholar] [CrossRef]
  45. Dipietro, R.B. Return on investment in managerial training: Does the method matter? J. Foodserv. Bus. Res. 2006, 7, 79–96. [Google Scholar] [CrossRef]
  46. Dudeja, P.; Singh, A.; Sahni, N.; Kaur, S.; Goel, S. Effectiveness of an intervention package on knowledge, attitude, and practices of food handlers in a tertiary care hospital of north India: A before and after comparison study. Med. J. Armed Forces India 2017, 73, 49–53. [Google Scholar] [CrossRef]
  47. Duong, M.; Shumaker, E.T.; Cates, S.C.; Shelley, L.; Goodson, L.; Bernstein, C.; Lavallee, A.; Kirchner, M.; Goulter, R.; Jaykus, L.-A.; et al. An observational study of thermometer use by consumers when preparing ground turkey patties. J. Food Prot. 2020, 83, 1167–1174. [Google Scholar] [CrossRef] [PubMed]
  48. Fajardo-Lira, C.; Heiss, C. Comparing the effectiveness of a supplemental computer-based food safety tutorial to traditional education in an introductory food science course. J. Food Sci. Educ. 2006, 5, 31–33. [Google Scholar] [CrossRef]
  49. Feinstein, A.H.; Dalbor, M.C.; McManus, A. Assessing the Effectiveness of ServSafe® Online. J. Hosp. Tour. Educ. 2013, 19, 11–20. [Google Scholar] [CrossRef]
  50. Fenton, G.D.; LaBorde, L.F.; Radhakrishna, R.B.; Brown, J.L.; Cutter, C.N. Comparison of knowledge and attitudes using computer-based and face-to-face personal hygiene training methods in food processing facilities. J. Food Sci. Educ. 2006, 5, 45–50. [Google Scholar] [CrossRef]
  51. Kosa, K.M.; Cates, S.C.; Godwin, S.L.; Ball, M.; Harrison, R.E. Effectiveness of educational interventions to improve food safety practices among older adults. J. Nutr. Gerontol. Geriatr. 2011, 30, 369–383. [Google Scholar] [CrossRef] [PubMed]
  52. Liceaga, A.M.; Ballard, T.S.; Esters, L.T. Increasing Content Knowledge and Self-Efficacy of High School Educators through an Online Course in Food Science. J. Food Sci. Educ. 2014, 13, 28–32. [Google Scholar] [CrossRef]
  53. Lynch, R.A.; Steen, M.D.; Pritchard, T.J.; Buzzell, P.R.; Pintauro, S.J. Delivering food safety education to middle school students using a web-based, interactive, multimedia, computer program. J. Food Sci. Educ. 2008, 7, 35–42. [Google Scholar] [CrossRef]
  54. Mayer, A.B.; Harrison, J.A. Safe eats: An evaluation of the use of social media for food safety education. J. Food Prot. 2012, 75, 1453–1463. [Google Scholar] [CrossRef]
  55. Pádua, I.; Moreira, A.; Moreira, P.; Barros, R. Impact of a web-based program to improve food allergy management in schools and restaurants. Pediatr. Allergy Immunol. 2020, 31, 851–857. [Google Scholar] [CrossRef]
  56. Quick, V.; Corda, K.W.; Chamberlin, B.; Schaffner, D.W.; Byrd-Bredbenner, C. Ninja Kitchen to the rescue: Evaluation of a food safety education game for middle school youth. Br. Food J. 2013, 115, 686–699. [Google Scholar] [CrossRef]
  57. Smith, K.; Shillam, P. An evaluation of food safety training using videotaped instruction. Foodserv. Res. Int. 2000, 12, 41–50. [Google Scholar] [CrossRef]
  58. Strohbehn, C.H.; Arendt, S.W.; Ungku Zainal Abidin, U.F.; Meyer, J. Effectiveness of Food Safety Managerial Training: Face-to-face or computer-based delivery. J. Foodserv. Manag. Educ. 2013, 7, 7–19. [Google Scholar]
  59. Temen, T.; Cherrez, N.J.; Coleman, S. Assessment of an online piloted module targeted toward home-based food operators in Iowa. J. Food Sci. Educ. 2019, 19, 173–182. [Google Scholar] [CrossRef]
  60. Trepka, M.J.; Newman, F.L.; Huffman, F.G.; Dixon, Z. Food safety education using an interactive multimedia kiosk in a WIC setting: Correlates of client satisfaction and practical issues. J. Nutr. Educ. Behav. 2010, 42, 202–207. [Google Scholar] [CrossRef]
  61. Unusan, N. E-mail delivery of hygiene education to university personnel. Nutr. Food Sci. 2007, 37, 37–41. [Google Scholar] [CrossRef]
  62. Walker, B.L.; Harrington, S.S.; Cole, C.S. The Usefulness of Computer-Based Instruction in Providing Ed. J. Nurses Staff Dev. May 2006. Available online: https://journals-lww-com.ezproxy.lib.purdue.edu/jnsdonline/Fulltext/2006/05000/The_Usefulness_of_Computer_Based_Instruction_in.9.aspx (accessed on 11 August 2021).
  63. Young, I.; Waddell, L.; Harding, S.; Greig, J.; Mascarenhas, M.; Sivaramalingam, B.; Pham, M.T.; Papadopoulos, A. A systematic review and meta-analysis of the effectiveness of food safety education interventions for consumers in developed countries. BMC Public Health 2015, 15, 822. [Google Scholar] [CrossRef] [PubMed]
  64. Soon, J.M.; Baines, R.; Seaman, P. Meta-analysis of food safety training on hand hygiene knowledge and attitudes among food handlers. J. Food Prot. 2012, 75, 793–804. [Google Scholar] [CrossRef] [PubMed]
  65. Young, I.; Reimer, D.; Greig, J.; Turgeon, P.; Meldrum, R.; Waddell, L. Psychosocial and health-status determinants of safe food handling among consumers: A systematic review and meta-analysis. Food Control 2017, 78, 401–411. [Google Scholar] [CrossRef]
  66. Thaivalappil, A.; Waddell, L.; Greig, J.; Meldrum, R.; Young, I. A systematic review and thematic synthesis of qualitative research studies on factors affecting safe food handling at retail and food service. Food Control 2018, 89, 97–107. [Google Scholar] [CrossRef]
  67. Henrich, J.; Heine, S.J.; Norenzayan, A. The weirdest people in the world? Behav. Brain Sci. 2010, 33, 61–83. [Google Scholar] [CrossRef]
  68. Hofstede, G. Dimensionalizing Cultures: The Hofstede Model in Context. Online Read. Psychol. Cult. 2011, 2, 8. [Google Scholar] [CrossRef]
  69. Joy, S.; Kolb, D.A. Are there cultural differences in learning style? Int. J. Intercult. Relat. 2009, 33, 69–85. [Google Scholar] [CrossRef]
  70. Sulkowski, N.B.; Deakin, M.K. Does understanding culture help enhance students’ learning experience? Int. J. Contemp. Hosp. Manag. 2009, 21, 154–166. [Google Scholar] [CrossRef]
  71. Manikutty, S.; Anuradha, N.; Hansen, K. Does culture influence learning styles in higher education? Int. J. Learn. Chang. 2007, 2, 70. [Google Scholar] [CrossRef]
  72. Brok, P.D.; Levy, J.; Wubbels, T.; Rodriguez, M. Cultural influences on students’ perceptions of videotaped lessons. Int. J. Intercult. Relat. 2003, 27, 355–374. [Google Scholar] [CrossRef]
  73. Salili, F.; Chiu, C.; Lai, S. The Influence of Culture and Context on Students’ Motivational Orientation and Performance. Stud. Motiv. 2001, 48, 221–247. [Google Scholar] [CrossRef]
  74. Nyarugwe, S.P.; Linnemann, A.; Hofstede, G.J.; Fogliano, V.; Luning, P.A. Determinants for conducting food safety culture research. Trends Food Sci. Technol. 2016, 56, 77–87. [Google Scholar] [CrossRef]
  75. Griffith, C. Food Safety in Catering Establishments. In Safe Handling of Foods; CRC Press: Boca Raton, FL, USA, 2000; pp. 251–256. [Google Scholar] [CrossRef]
  76. Merriam, S.B.; Baumgartner, L. Learning in Adulthood: A Comprehensive Guide, 4th ed.; Jossey-Bass: Hoboken, NJ, USA, 2020; Available online: https://www.wiley.com/en-us/Learning+in+Adulthood%3A+A+Comprehensive+Guide%2C+4th+Edition-p-9781119490494 (accessed on 17 September 2021).
  77. Biniecki, S.Y.; Kang, H. Examining Adult Learning Through the Lens of Culture: A U.S. Perspective. Rocz. Andragogiczny 2015, 21, 133–142. [Google Scholar] [CrossRef]
  78. Lillguist, D.R.; McCabe, M.L.; Church, K.H. A comparison of traditional handwashing training with active hand-washing training in the food handler industry. J. Environ. Health 2005, 67, 13–16. [Google Scholar]
  79. Pham, M.T.; Waddell, L.; Rajić, A.; Sargeant, J.M.; Papadopoulos, A.; McEwen, S.A. Implications of applying methodological shortcuts to expedite systematic reviews: Three case studies using systematic reviews from agri-food public health. Res. Synth. Methods 2016, 7, 433–446. [Google Scholar] [CrossRef]
  80. Higgins, J.P.T.; Altman, D.G.; Gøtzsche, P.C.; Jüni, P.; Moher, D.; Oxman, A.D.; Savović, J.; Schulz, K.F.; Weeks, L.; Sterne, J.A.C.; et al. The Cochrane Collaboration’s tool for assessing risk of bias in randomised trials. BMJ 2011, 343, d5928. [Google Scholar] [CrossRef] [PubMed]
  81. Patel, M.X.; Doku, V.; Tennakoon, L. Challenges in recruitment of research participants. Adv. Psychiatr. Treat. 2003, 9, 229–238. [Google Scholar] [CrossRef]
  82. Frey, B.B. The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2018; ISBN 978-1-5063-2615-3. [Google Scholar]
  83. Maier-Riehle, B.; Zwingmann, C. Effect strength variation in the single group pre-post study design: A critical review. Die Rehabil. 2000, 39, 189–199. [Google Scholar] [CrossRef] [PubMed]
  84. Grant, S.; Mayo-Wilson, E.; Montgomery, P.; Macdonald, G.; Michie, S.; Hopewell, S.; Moher, D. CONSORT-SPI 2018 Explanation and Elaboration: Guidance for reporting social and psychological intervention trials. Trials 2018, 19, 406. [Google Scholar] [CrossRef] [PubMed]
  85. Zanin, L.M.; da Cunha, D.T.; de Rosso, V.V.; Capriles, V.D.; Stedefeldt, E. Knowledge, attitudes and practices of food handlers in food safety: An integrative review. Food Res. Int. 2017, 100, 53–62. [Google Scholar] [CrossRef] [PubMed]
  86. Borenstein, M.; Hedges, L.V.; Higgins, J.P.T.; Rothstein, H.R. When Does It Make Sense to Perform a Meta-Analysis? In Introduction to Meta-Analysis, 1st ed.; John Wiley & Sons, Ltd.: Chichester, UK, 2009; pp. 357–364. [Google Scholar] [CrossRef]
  87. Cohen, J. Statistical Power Analysis for the Behavioral Sciences, 2nd ed.; Lawrence Erlbaum Associates: Hillsdale, NJ, USA, 1988. [Google Scholar] [CrossRef]
  88. Kwol, V.S.; Eluwole, K.K.; Avci, T.; Lasisi, T.T. Another look into the Knowledge Attitude Practice (KAP) model for food control: An investigation of the mediating role of food handlers’ attitudes. Food Control 2020, 110, 107025. [Google Scholar] [CrossRef]
  89. Rimal, R.N.; Real, K. Perceived Risk and Efficacy Beliefs as Motivators of Change: Use of the Risk Perception Attitude (RPA) Framework to Understand Health Behaviors. Hum. Commun. Res. 2003, 29, 370–399. [Google Scholar] [CrossRef]
  90. Wilcock, A.; Pun, M.; Khanona, J.; Aung, M. Consumer attitudes, knowledge and behaviour: A review of food safety issues. Trends Food Sci. Technol. 2004, 15, 56–66. [Google Scholar] [CrossRef]
  91. Ajzen, I. The theory of planned behavior. Organ. Behav. Hum. Decis. Process. 1991, 50, 179–211. [Google Scholar] [CrossRef]
  92. Janz, N.K.; Becker, M.H. The Health Belief Model: A Decade Later. Health Educ. Behav. 1984, 11, 1–47. [Google Scholar] [CrossRef]
  93. Prochaska, J.O.; Velicer, W.F. The Transtheoretical Model of Health Behavior Change. Am. J. Health Promot. 1997, 12, 38–48. [Google Scholar] [CrossRef] [PubMed]
  94. Tamene, A.; Habte, A.; Woldeyohannes, D.; Afework, A.; Endale, F.; Gizachew, A.; Sulamo, D.; Tesfaye, L.; Tagesse, M. Food safety practice and associated factors in public food establishments of Ethiopia: A systematic review and meta-analysis. PLoS ONE 2022, 17, e0268918. [Google Scholar] [CrossRef] [PubMed]
  95. Zenbaba, D.; Sahiledengle, B.; Nugusu, F.; Beressa, G.; Desta, F.; Atlaw, D.; Chattu, V.K. Food hygiene practices and determinants among food handlers in Ethiopia: A systematic review and meta-analysis. Trop. Med. Health 2022, 50, 34. [Google Scholar] [CrossRef] [PubMed]
  96. Liu, Z.; Mutukumira, A.N.; Shen, C. Food safety knowledge, attitudes, and eating behavior in the advent of the global coronavirus pandemic. PLoS ONE 2021, 16, e0261832. [Google Scholar] [CrossRef] [PubMed]
  97. Schwarz, N. Self-reports: How the questions shape the answers. Am. Psychol. 1999, 54, 93–105. [Google Scholar] [CrossRef]
  98. Dharod, J.M.; Pérez-Escamilla, R.; Paciello, S.; Bermúdez-Millán, A.; Venkitanarayanan, K.; Damio, G. Comparison between self-reported and observed food handling behaviors among latinas. J. Food Prot. 2007, 70, 1927–1932. [Google Scholar] [CrossRef] [PubMed]
  99. DeDonder, S.; Jacob, C.J.; Surgeoner, B.V.; Chapman, B.; Phebus, R.; Powell, D.A. Self-reported and observed behavior of primary meal preparers and adolescents during preparation of frozen, uncooked, breaded chicken products. Br. Food J. 2009, 111, 915–929. [Google Scholar] [CrossRef]
  100. Simkins, L. The reliability of self-recorded behaviors. Behav. Ther. 1971, 2, 83–87. [Google Scholar] [CrossRef]
  101. Merriam, S.B.; Caffarella, R.S. Learning in Adulthood: A Comprehensive Guide, 2nd ed.; Jossey-Bass Publishers: San Francisco, CA, USA, 1999. [Google Scholar]
  102. Falasca, M. Barriers to adult learning: Bridging the gap. Aust. J. Adult Learn. 2011, 51, 584–590. [Google Scholar]
  103. Cross, K.P. Adults as Learners; Jossey-Bass: San Francisco, CA, USA, 1981. [Google Scholar]
  104. Zirkle, C.; Fletcher, E.C., Jr. Access Barriers Experienced by Adults in Distance Education Courses and Programs. In Handbook of Research on E-Learning Applications for Career and Technical Education: Technologies for Vocational Training; Wang, V., Ed.; IGI Global: Hershey, PA, USA, 2009; pp. 444–454. [Google Scholar]
  105. Sitzmann, T.; Ely, K.; Bell, B.S.; Bauer, K.N. The effects of technical difficulties on learning and attrition during online training. J. Exp. Psychol. Appl. 2010, 16, 281–292. [Google Scholar] [CrossRef]
  106. DeVito, K.M.; Vito, D. Implementing Adult Learning Principles to Overcome Barriers of Learning in Continuing Higher Education Programs. Online J. Workforce Educ. Dev. 2009, 3, 1. [Google Scholar]
  107. Knowles, M. The Modern Practice of Adult Education: Andragogy versus Pedagogy; Association Press: New York, NY, USA, 1970. [Google Scholar]
  108. Knowles, M.S.; Holton, E.F.; Swanson, R.A. The Adult Learner: The Definitive Classic in Adult Education and Human Resource Development, 5th ed.; Gulf: Houston, TX, USA, 1998. [Google Scholar]
  109. Blondy, L.C. Evaluation and Application of Andragogical Assumptions to the Adult Online Learning Environment. J. Interact. Online Learn. 2007, 6, 116–130. [Google Scholar]
  110. Greene, K.; Larsen, L. Virtual Andragogy: A New Paradigm for Serving Adult Online Learners. Int. J. Digit. Soc. 2018, 9, 1376–1381. [Google Scholar] [CrossRef]
  111. Ulum, H. The effects of online education on academic success: A meta-analysis study. Educ. Inf. Technol. 2022, 27, 429–450. [Google Scholar] [CrossRef] [PubMed]
  112. Weightman, A.L.; Farnell, D.J.J.; Morris, D.; Strange, H.; Hallam, G. A systematic review of information literacy programs in higher education: Effects of face-to-face, online, and blended formats on student skills and views. Evid. Based Libr. Inf. Pract. 2017, 12, 20–55. [Google Scholar] [CrossRef]
  113. Cavanaugh, C. The effectiveness of interactive distance education technologies in K-12 learning: A meta-analysis. Int. J. Educ. Telecommun. 2001, 7, 73–88. [Google Scholar]
  114. Zhonggen, Y.; Liheng, Y. A Meta-Analysis of Online Learning Outcomes and Their Gender Differences. Front. Psychol. 2021, 19, 33–50. [Google Scholar]
  115. Turan, Z.; Kucuk, S.; Karabey, S.C. The university students’ self-regulated effort, flexibility and satisfaction in distance education. Int. J. Educ. Technol. High. Educ. 2022, 19, 35. [Google Scholar] [CrossRef]
  116. Raffaelli, M.; Crockett, L.J.; Shen, Y.-L. Developmental Stability and Change in Self-Regulation from Childhood to Adolescence. J. Genet. Psychol. 2005, 166, 54–75. [Google Scholar] [CrossRef]
  117. Montroy, J.J.; Bowles, R.P.; Skibbe, L.E.; McClelland, M.M.; Montroy, J.J.; Morrison, F.J. The Development of Self-Regulation across Early Childhood. Dev. Psychol. 2016, 52, 1744–1762. [Google Scholar] [CrossRef]
  118. Barbour, M.K.; Reeves, T.C. The reality of virtual schools: A review of the literature. Comput. Educ. 2009, 52, 402–416. [Google Scholar] [CrossRef]
  119. Hermanto, Y.B.; Srimulyani, V.A. The Challenges of Online Learning During the Covid-19 Pandemic. J. Pendidik. Pengajaran 2021, 54, 46–57. [Google Scholar] [CrossRef]
  120. Samouilidi, M. Barriers to Adult Learning. Digital Repository. 1990. Available online: https://hydra.hull.ac.uk/resources/hull:6907 (accessed on 19 April 2022).
  121. Cook, D.A.; Artino, A.R. Motivation to learn: An overview of contemporary theories. Med. Educ. 2016, 50, 997–1014. [Google Scholar] [CrossRef] [PubMed]
  122. Noe, R.A. Trainees’ Attributes and Attitudes: Neglected Influences on Training Effectiveness. Acad. Manag. Rev. 1986, 11, 736–749. [Google Scholar] [CrossRef]
  123. Goodhart, C. Learning is a social activity. Rev. Behav. Financ. 2020, 12, 21–25. [Google Scholar] [CrossRef]
  124. Vogel, S.; Schwabe, L. Learning and memory under stress: Implications for the classroom. Npj Sci. Learn. 2016, 1, 16011. [Google Scholar] [CrossRef] [PubMed]
  125. Galbraith, M.W. Adult Learning Methods: A Guide for Effective Instruction; Krieger Pub Co.: Malabar, FL, USA, 1998; 408p. [Google Scholar]
  126. El Hajjar, S.T.; Alkhanaizi, M.S. Exploring the factors that affect employee training effectiveness: A case study in Bahrain. SAGE Open 2018, 8, 2158244018783033. [Google Scholar] [CrossRef]
  127. Ivanec, T.P. The Lack of Academic Social Interactions and Students’ Learning Difficulties during COVID-19 Faculty Lockdowns in Croatia: The Mediating Role of the Perceived Sense of Life Disruption Caused by the Pandemic and the Adjustment to Online Studying. Soc. Sci. 2022, 11, 42. [Google Scholar] [CrossRef]
  128. Dailey-Hebert, A. Maximizing interactivity in online learning: Moving beyond discussion boards. J. Educ. Online 2018, 15, 8. [Google Scholar] [CrossRef]
  129. Knight, D.K.; Becan, J.E.; Flynn, P.M. The impact of staff turnover on workplace demands and coworker relationships. Counselor 2013, 14, 20. [Google Scholar]
  130. Cornelissen, T. Do social interactions in the workplace lead to productivity spillover among co-workers? IZA World Labor 2016, 1, 314. [Google Scholar] [CrossRef]
  131. Royalty, A.B. The Effects of Job Turnover on the Training of Men and Women. Ind. Labor Relat. Rev. 1996, 49, 506–521. [Google Scholar] [CrossRef]
  132. Shemshack, A.; Spector, J.M. A systematic literature review of personalized learning terms. Smart Learn. Environ. 2020, 7, 33. [Google Scholar] [CrossRef]
  133. Chen, C.-M. Intelligent web-based learning system with personalized learning path guidance. Comput. Educ. 2008, 51, 787–814. [Google Scholar] [CrossRef]
  134. Vanbecelaere, S.; Cornillie, F.; Sasanguie, D.; Reynvoet, B.; Depaepe, F. The effectiveness of an adaptive digital educational game for the training of early numerical abilities in terms of cognitive, noncognitive and efficiency outcomes. Br. J. Educ. Technol. 2021, 52, 112–124. [Google Scholar] [CrossRef]
  135. Zahabi, M.; Razak, A.M.A. Adaptive virtual reality-based training: A systematic literature review and framework. Virtual Real. 2020, 24, 725–752. [Google Scholar] [CrossRef]
  136. Ghezzi, S.; Ayoun, B.; Lee, Y.M. Exploring Food Truck Food Safety Training and Practices in the United States: A Qualitative Study. Food Prot. Trends 2020, 40, 413–423. Available online: https://www.foodprotection.org/publications/food-protection-trends/archive/2020-11-exploring-food-truck-food-safety-training-and-practices-in-the-united-states-a-qualitative-s/ (accessed on 17 October 2020). [CrossRef]
  137. Bush, D.; Paleo, L.; Baker, R.; Dewey, R.; Toktogonova, N.; Cornelio, D. Restaurant Supervisor Safety Training: Evaluating a Small Business Training Intervention. Public Health Rep. 2009, 124, 152–159. [Google Scholar] [CrossRef]
  138. Ghezzi, S.; Ayoun, B.; Lee, Y.M. Food safety knowledge, training methods, and barriers to training: An examination of the U.S. food truck industry. J. Foodserv. Bus. Res. 2021, 24, 534–553. [Google Scholar] [CrossRef]
  139. Reynolds, J.; Dolasinski, M.J. Systematic review of industry food safety training topics & modalities. Food Control 2019, 105, 1–7. [Google Scholar] [CrossRef]
  140. Sandlin, C. An Analysis of Online Training: Effectiveness, Efficiency, and Implementation Methods in a Corporate Environment. 2013. Available online: https://dc.etsu.edu/honors/57 (accessed on 18 October 2020).
  141. Marín Díaz, V.; Reche Urbano, E.; Maldonado Berea, G.A.; Guadalupe Maldonado Berea, Y.A. Ventajas e inconvenientes de la formación online [Advantages and disadvantages of online training]. Rev. Digit. Investig. Docencia Univ. 2013, 1, 33–43. [Google Scholar] [CrossRef]
  142. Ferguson, J.M.; DeFelice, A.E. Length of online course and student satisfaction, perceived learning, and academic performance. Int. Rev. Res. Open Distrib. Learn. 2010, 11, 73–84. [Google Scholar] [CrossRef]
  143. Shaw, M.; Chametzky, B.; Burrus, S.; Walters, K. An Evaluation of Student Outcomes by Course Duration in Online Higher Education. Online J. Distance Learn. Adm. 2013, 16, 1–22. [Google Scholar]
  144. Minnens, F.; Luijckx, N.L.; Verbeke, W. Food supply chain stakeholders’ perspectives on sharing information to detect and prevent food integrity issues. Foods 2019, 8, 225. [Google Scholar] [CrossRef] [PubMed]
  145. Gallego, A.; Fortunato, M.S.; Rossi, S.L.; Korol, S.E.; Moretton, J.A. Case method in the teaching of food safety. J. Food Sci. Educ. 2013, 12, 42–47. [Google Scholar] [CrossRef]
  146. Mohammadi-Nasrabadi, F.; Salmani, Y.; Esfarjani, F. A quasi-experimental study on the effect of health and food safety training intervention on restaurant food handlers during the COVID-19 pandemic. Food Sci. Nutr. 2021, 9, 3655–3663. [Google Scholar] [CrossRef] [PubMed]
  147. Pivarnik, L.F.; Richard, N.L.; Gable, R.K.; Worobo, R.W. Knowledge and attitudes of produce and seafood processors and food safety educators regarding nonthermal processes. J. Food Sci. Educ. 2016, 15, 120–128. [Google Scholar] [CrossRef]
  148. Lee, Y.M.; Barker, G.C. Comparison of food allergy policies and training between Alabama (AL) and national restaurant industry. J. Culin. Sci. Technol. 2016, 15, 1–16. [Google Scholar] [CrossRef]
  149. Kenefick, H.W.; Ravid, S.; MacVarish, K.; Tsoi, J.; Weill, K.; Faye, E.; Fidler, A. On your time: Online training for the public health workforce. Health Promot Pract. 2014, 15 (Suppl. S1), 48S–55S. [Google Scholar] [CrossRef] [PubMed]
  150. Barnett, A.G.; van der Pols, J.C.; Dobson, A.J. Regression to the mean: What it is and how to deal with it. Int. J. Epidemiol. 2005, 34, 215–220. [Google Scholar] [CrossRef]
Figure 1. Flowchart of study selection for the meta-analysis and thematic synthesis.
Table 1. Inclusion criteria and exclusion criteria based on PICO 1 framework.
| Criterion | Inclusion | Exclusion |
|---|---|---|
| Language | English | PhD theses |
| P: Population | Is the population a type of food handler? | MS theses |
| I: Intervention | Does the study include an online educational program? Was the focus of the study a food safety topic (e.g., hygiene, regulations, practices, allergies, quality)? | Review articles (systematic reviews or meta-analyses); abstract-only records |
| C: Comparison | Does the study include a control group or a pre-training measure for the experimental group (quasi-experimental, RCT, or non-randomized trial)? | Studies in which an in-person training could confound the evaluation of the online educational program |
| O: Outcome | Does the study include one or more of the following outcomes: knowledge, attitudes, or practices (KAP)? | |
1 PICO is a common project design method for comparative studies and is endorsed by Cochrane for use in systematic reviews [20,21].
Table 2. Search keywords used for the systematic review.
| PICO Block | Keywords |
|---|---|
| Population | Industry OR Handler OR Worker OR Employee OR Processor OR Manufacturer OR Farm OR Restaurant OR Retail OR Business |
| Intervention | Workshop OR Intervention OR Instruction OR Education OR Curriculum OR Course OR Training OR Brochure OR Strategy OR Lesson |
| Digital Specifier | Webinar OR Digital OR Virtual OR Online OR Internet |
| Outcome | Practices OR Behavior OR Perception OR Attitude OR Knowledge |
| Topic | Safety OR Handling OR Preparation OR Hygiene OR Quality OR Technology OR Allergy |
| Food Term | Food |

All blocks were joined with the AND operator.
Table 3. Statistics, metrics, and plots calculated.
| Purpose | Statistics | Tests | Plots | Method Adapted From |
|---|---|---|---|---|
| Effect Sizes | Hedge's G | 95% CI, 95% PI | Forest Plots | Average effect [28]; intervals [20] |
| Between-Study Variance | τ2 | p-value | Funnel Plot | Initial [28]; REML [29] |
| Observed Heterogeneity | I2 | 95% CI | N/A | I2 [31,32,33]; interval [34] |
| Publication Bias | Begg's Ranking | p-value | Funnel Plot | Begg's ranking and p-value [35]; funnel plot [36] |
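The quantities in Table 3 can be made concrete with a short sketch. The following minimal Python example is an illustration only, not the authors' code: the review estimated τ2 with REML [29], whereas the simpler DerSimonian-Laird estimator is shown here for brevity, and the study means, standard deviations, and sample sizes are hypothetical.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardized mean difference with small-sample correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n_t - 1) * sd_t ** 2 + (n_c - 1) * sd_c ** 2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp                      # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)               # small-sample correction factor J
    g = j * d
    var_g = j ** 2 * ((n_t + n_c) / (n_t * n_c) + d ** 2 / (2 * (n_t + n_c)))
    return g, var_g

def pool_random_effects(effects, variances):
    """Random-effects pooling: DerSimonian-Laird tau^2 and I^2 from Cochran's Q."""
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                   # between-study variance (truncated at 0)
    i2 = max(0.0, 100 * (q - df) / q) if q > 0 else 0.0   # observed heterogeneity, %
    w_star = [1 / (v + tau2) for v in variances]    # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, 1.96 * se, tau2, i2              # pooled g, 95% CI half-width

# Two hypothetical pre/post studies
g1, v1 = hedges_g(82.0, 74.0, 12.0, 12.0, 30, 30)
g2, v2 = hedges_g(78.0, 73.0, 11.0, 11.0, 25, 25)
pooled, ci, tau2, i2 = pool_random_effects([g1, g2], [v1, v2])
```

With only two fairly homogeneous studies, Q falls below its degrees of freedom, so τ2 and I2 truncate to zero, mirroring the zero estimates reported for several subgroups in Table 7.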
Table 4. Definitions and examples for the different orders of qualitative data constructs.
| Order of Construct | Adopted Definition | Typical Forms | Example | Citation of Example |
|---|---|---|---|---|
| 1st level | Direct life experience or evidence; these are typically quotes from people | Quotes | "We don't have the resources in place at this time to take part in the programme" | [39] |
| 2nd level | Interpretations of 1st-level data; generalizations involving an interpretation of others' life experiences | Scientific observations, codes, or summaries | "In terms of practical application, the use of the computer training provided a much easier method to implement" | [40] |
| 3rd level | Top-level synthesis of 1st-level and 2nd-level interpretations | Themes from a synthesis | Themes from the results of this study | N/A |
Table 5. Meta-analysis and thematic synthesis study characteristics.
| Study | In Thematic Synthesis | In Meta-Analysis | Population | Digital Method | Type of Design | Outcomes | N | Country of Focus |
|---|---|---|---|---|---|---|---|---|
| [43] | No qualitative data found | Yes | Students, Consumers | Video | Pre-Post | KAP | Consumers = 581; Students = 349 | USA |
| [40] | Yes | Yes | Food Workers | Interactive Computer Program | NCT ** | K | Control = 15; Experimental = 14 | USA |
| [44] | Yes | No clear variance | Students | Virtual Course (Semi-Synchronous, Interactive) | Pre-Post | K | n = 30 | France |
| [45] | Yes | Yes | Food Workers | Interactive Computer Program | NCT ** | P | Control = 20; Experimental = 20 | USA |
| [46] | Yes | In-person component in intervention § | Food Workers | Video | Pre-Post | KP | n = 270 | India |
| [47] | Yes | Yes | Consumers | Video | RCT | P | Control = 210; Experimental = 182 | USA |
| [39] | Yes | In-person component in intervention § | Food Workers | Digital Quiz | Pre-Post | A | n = 11 businesses | England |
| [48] | Yes | Yes | Students | Web-based Program | Pre-Post * | K | Pre-Test = 19; Experimental = 19 | USA |
| [49] | No qualitative data found | Yes | Food Workers | Web-based Program | Pre-Post | K | Pre-Test = 343; Experimental = 343 | USA |
| [50] | Yes | Yes | Food Workers | Computer Module (Recorded PowerPoints with Video Clips) | RCT | KA | Control = 28; Experimental = 35 | USA |
| [51] | Yes | In-person component in intervention § | Consumers | Informational Website | RCT | AP | n = 446 | USA |
| [52] | No qualitative data found | Yes | Consumers | Online Course | Pre-Post | K | Pre-Test = 76; Experimental = 76 | USA |
| [53] | Yes | Yes | Students | Interactive Computer Program | Pre-Post | K | Pre-Test = 217; Experimental = 217 | USA |
| [54] | Yes | Yes | Students | Computer Module | NCT | KAP | Control = 93; Experimental = 278 | USA |
| [55] | Yes | No clear variance | Food Workers | Videos | Pre-Post | K | n = 146 | Portugal |
| [56] | No qualitative data found | Yes | Students | Digital Game/Interactive Computer Program | NCT ** | KAP | Control = 365; Experimental = 903 | USA |
| [57] | No qualitative data found | Yes | Food Workers | Video | Pre-Post | K | Pre-Test = 240; Experimental = 240 | USA |
| [58] | No qualitative data found | Yes | Food Workers | Computer Module (Recorded PowerPoints with Video Clips, Digital Documents) | Pre-Post | KA | Pre-Test = 40; Experimental = 20 | USA *** |
| [59] | Yes | In-person component in intervention § | Food Workers | Web-based Program | N/A ▪ (Post Only) | K | n = 21 | USA |
| [60] | Yes | No outcomes measured | Consumers | Computer Module | N/A ▪ | N/A ▪ | n = 180 | USA |
| [61] | No qualitative data found | Yes | Consumers | E-mail Course | Pre-Post | P | Pre-Test = 34; Experimental = 34 | Turkey |
| [62] | No qualitative data found | Yes | Consumers | Computer Modules | RCT | K | Control = 10; Experimental = 10 | USA *** |

* Design was an RCT in assignment between in-person and digital delivery, but the effect size is calculated from a pre-post test design. ** Clusters of participants were randomly assigned to the computer or face-to-face group; participants themselves were not randomly assigned. *** Assumed based on the country of the publishing author. K = Knowledge, A = Attitudes, P = Practices. ▪ N/A: information is not available, as it was not reported or was unclear. § The program contained an in-person or other non-online aspect that may have affected the numerical results and was not included in the analysis; qualitative data were checked to ensure that the context concerned the online food-safety educational aspect of the study.
Table 6. Modified CASP checklist results.
| Question | % No (n) | % Cannot Tell/Partially (n) | % Yes (n) |
|---|---|---|---|
| Was there a clear statement for the aims of the research? (CASP) | 0% | 0% | 100% (30) |
| Is the methodology of the study appropriate for the topic of interest? (CASP) | 3.33% (1) | 6.66% (2) | 90% (27) |
| Was the research design appropriate to address the aims of the research? (CASP) | 3.33% (1) | 3.33% (1) | 93.33% (28) |
| Was the research design appropriate to address the aims of our research? | 30% (9) | 0% | 70% (21) |
| Was the recruitment strategy appropriate to the aims of the research? (CASP) | 6.66% (2) | 3.33% (1) | 90% (27) |
| Were the qualitative data collected in a way that optimally answers our research questions? | 33.33% (10) | 33.33% (10) | 33.33% (10) |
| Were the data collected in a way that addresses the research issue? (CASP) | 0% | 0% | 100% (30) |
| Has the relationship between researcher and participants been adequately addressed? (CASP) | 33.33% (10) | 0% | 66.67% (20) |
| Have ethical issues been taken into consideration? (CASP) | 3.33% (1) | 10% (3) | 86.66% (26) |
| Was the data analysis sufficiently rigorous? (CASP) | 6.66% (2) | 6.66% (2) | 86.66% (26) |
| Is there a clear statement of findings? (CASP) | 0% | 3.33% (1) | 96.66% (29) |

n = 30, as these are the pooled assessments of two reviewers on 15 papers. (CASP) indicates the criterion was from the original CASP checklist.
Table 7. Effect sizes between each subpopulation.
| Group | Outcome | Avg. Hedge's G ± 95% CI | τ2 | I2 ± 95% CI |
|---|---|---|---|---|
| Consumers | Knowledge (n = 3) | 0.74 ± 0.11 ** | 0 * | 0 |
| Consumers | Attitudes (n = 0) | N/A § | N/A § | N/A § |
| Consumers | Practices (n = 3) | 0.35 ± 0.28 * | 0.0459 | 0 ± 83.62 |
| Food Workers | Knowledge (n = 5) | 0.38 ± 0.16 * | 0 * | 0 |
| Food Workers | Attitudes (n = 2) | 0.35 ± 0.26 * Г | 0 * | 0 |
| Food Workers | Practices (n = 0) | N/A § | N/A § | N/A § |
| Students | Knowledge (n = 4) | 0.72 ± 0.30 ** | 0.094307 | 18.67 ± 40.67 |
| Students | Attitudes (n = 3) | 0.23 ± 0.28 | 0.05508 | 2.75 ± 13.83 |
| Students | Practices (n = 3) | 0.30 ± 0.32 | 0.07344 | 4.49 ± 20.32 |
| All | Knowledge (n = 12) | 0.58 ± 0.19 * | 0.048676 | 15.07 ± 35.63 |
| All | Attitudes (n = 5) | 0.29 ± 0.26 | 0.05382 | 0 |
| All | Practices (n = 6) | 0.42 ± 0.22 ** | 0.043314 | 0 |

* Two-tailed p-value ≤ 0.05. ** Two-tailed p-value ≤ 0.01. Г Subgroup contains only 2 studies; variance and dispersion estimates are unstable in random-effects models with a small number of studies [33]. § N/A, as no studies in the subpopulation reported the outcome.
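As a rough consistency check on the significance flags and intervals reported above, the sketch below recovers the standard error from a reported 95% CI half-width and derives a two-tailed p-value and an approximate 95% prediction interval. This is an illustration under a normal approximation, not the authors' exact computation (a t-distribution with k - 2 degrees of freedom is conventional for prediction intervals [20]); the inputs are read from the "All, Knowledge" row of Table 7 (G = 0.58 ± 0.19, τ2 ≈ 0.048676).

```python
import math

def p_two_tailed(g, ci_half_width):
    """Two-tailed p-value from a pooled effect and its 95% CI half-width (normal approximation)."""
    se = ci_half_width / 1.96                       # CI half-width = 1.96 * SE
    z = abs(g) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))  # 2 * (1 - Phi(z))

def prediction_interval_95(g, ci_half_width, tau2):
    """Approximate 95% prediction interval for the true effect in a new study."""
    se = ci_half_width / 1.96
    half = 1.96 * math.sqrt(se ** 2 + tau2)         # widens the CI by the between-study variance
    return g - half, g + half

# Inputs as read from the "All, Knowledge" row of Table 7 (illustrative use only)
p = p_two_tailed(0.58, 0.19)
lo, hi = prediction_interval_95(0.58, 0.19, 0.048676)
```

The recovered p-value is far below 0.05, consistent with the asterisk flag, while the prediction interval remains above zero yet is much wider than the confidence interval because it adds the between-study variance.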
Table 8. Characteristics of meta-analyses focusing mainly on populations in the United States, covering both online and in-person studies.
| Citation | Population | Effect Size Used | Knowledge | Attitudes | Practices |
|---|---|---|---|---|---|
| Young et al., 2015 [63] | Consumers | SMD | RCT: Training, Adults = 0.87; Media Campaigns, Adults = 0.42. NRT: Adults = 0.44, Children = 0.24 | NRT: Training, Adults = 0.26. RCT: Media Campaign, Adults = 0.34 | RCT: Education, Adults = 0.68, Children = 0.2; Media Campaign, Adult intents = 0.36, Adults = 0.24. NRT: Education, Adults = 0.37, Children = 0.33 |
| Insfran-Rivarola et al., 2020 [12] | Food Handlers | SMD | 1.24 | 0.28 | Observed = 0.45; Self-reported = 0.8 |
| Young et al., 2020 * [22] | Food Handlers | SMD | 1.104 | 0.433 | 0.898 |
| Young et al., 2019 [6] | Food Handlers | SMD | RCT: 0.97; NRT: 1.77 | RCT: 0.12; NRT: 0.38 | RCT: 0.18; NRT: 1.16 |
| Soon et al., 2012 [64] | Food Handlers | Hedge's G | 1.284 | 0.683 | 0.718 ** |
* Study only includes single-group pre-post designs. ** The study labels these figures as hand-hygiene attitudes, but its study characteristics indicate that the studies in that chart measured behavioral outcomes.
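Table 8 mixes two closely related effect metrics: the plain standardized mean difference (SMD, i.e., Cohen's d) and Hedge's G, which multiplies d by a small-sample correction factor. A minimal sketch of that conversion, using a commonly cited approximation of the correction factor and made-up group sizes (the function name and example numbers are illustrative, not taken from the reviewed studies):

```python
def hedges_g(d, n1, n2):
    """Convert Cohen's d to Hedge's g via the small-sample correction factor J."""
    j = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # common approximation of J
    return j * d

# Hypothetical example: d = 0.50 from two groups of 20 participants each
g = hedges_g(0.50, 20, 20)
print(round(g, 3))
```

For groups of this size the correction shrinks d by only about 2%, which is why SMD and Hedge's G figures can reasonably be compared side by side in Table 8 when study samples are not very small.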