Article

Training Staff to Implement Free-Operant Preference Assessment: Effects of Remote Behavioral Skills Training

by Tangchen Li 1 and Sheila R. Alber-Morgan 2,*

1 Have Fun Learning, Powell, OH 43065, USA
2 Department of Educational Studies, The Ohio State University, Columbus, OH 43229, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(10), 1082; https://doi.org/10.3390/educsci14101082
Submission received: 24 July 2024 / Revised: 17 September 2024 / Accepted: 1 October 2024 / Published: 4 October 2024

Abstract

Behavioral Skills Training (BST) was used remotely to teach four special education teachers in China to conduct free-operant preference assessments. A multiple baseline across participants design demonstrated a functional relationship between remote BST and the percentage of assessment steps completed correctly. Additionally, two of the four participants demonstrated generalization. Limitations and future research directions are discussed.

1. Introduction

Applied behavior analysis is based on the science of learning and behavior. One assessment grounded in this science is the preference assessment [1]. Preference assessment is recommended for practitioners as an efficient procedure for identifying potential reinforcers from several stimuli [2]. Unfortunately, most professionals working with individuals with developmental disabilities have reported that limited time and lack of training are barriers to using preference assessments in practice [3]. Preference assessments can use a restricted-operant format or a free-operant format. In a typical restricted-operant format, the teacher presents the student with single, paired, or multiple stimuli over a series of trials and prompts the student to choose the item he or she most prefers [4]. For example, in a paired-stimuli assessment, two items are presented in each trial, and the student chooses the one he or she wants to engage with. The interventionist takes the preferred item back from the child after a predetermined amount of time (e.g., 15 to 30 s) before presenting the next trial. In a free-operant preference assessment, the student is given free access to a range of stimuli while the teacher records the items the student selects and the duration of time the student engages with each item [5]. Compared to restricted-operant assessments, free-operant assessments provide a quick, easy evaluation of student preferences. With free-operant assessments, preferences can be determined without removing or withholding preferred items or making students feel that demands are being placed on them by asking them to make discrete choices [6]. For these reasons, behavior problems are less likely to occur during a free-operant preference assessment [5].
Numerous training methods have been demonstrated to be effective for helping practitioners learn new instructional skills (e.g., performance feedback, self-monitoring, and goal setting). Behavioral skills training (BST) is a training procedure that promotes consistent improvement in implementation integrity [7]. A typical BST package includes four components: instructions, modeling, role-play, and feedback. During instructions, trainers describe the skills and explain the reasons for using them. Modeling is usually provided in vivo or through video: the trainer demonstrates the skill for the trainee to observe. During role-play, trainees are given opportunities to practice using the skills. For example, the trainer can take on the role of a student or client while the trainee uses the skill to implement an intervention or conduct an assessment. In some studies, this component is called rehearsal instead of role-play because multiple practice opportunities are provided. Feedback can be provided during or after role-play. Trainers usually reinforce correct responses and provide corrective feedback for incorrect responses. Trainers can also deliver feedback in different formats, such as in-person oral feedback, remote oral feedback, or an email or voice message [8].
BST can be used for training teachers, clinical staff, and parents to use various skills, including discrete trial training, prompting hierarchies, functional communication training, preference assessment, and visual analysis [8,9]. Several studies have demonstrated that BST is effective for training staff to implement preference assessments. Higgins et al. (2017) examined the effectiveness of a remote BST package on direct-care staff's implementation of a multiple stimulus without replacement (MSWO) preference assessment [10]. In an MSWO assessment, a student is presented with five to seven stimuli in the first trial. After the student chooses an item and engages with it, the instructor removes that item from the set and asks the student to select another item from the remaining stimuli. The procedure is repeated until one stimulus is left. The order in which the items are selected is taken as the student's order of preference. In Higgins et al.'s study, an immediate improvement in implementation fidelity of the MSWO procedures was observed after the BST training, and the effects were maintained during follow-up observations. In addition, participants were satisfied with the remote BST experience. In a similar study, Smith (2018) evaluated the effects of BST and didactic training on staff implementing an MSWO preference assessment [11]. The results of both studies demonstrated a functional relationship between the training and participants' treatment integrity scores and positive effects on generalization and maintenance. However, no previous study has evaluated the effects of BST on staff implementation of free-operant preference assessment.
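As a concrete illustration of the MSWO logic described above, the following minimal Python sketch (with hypothetical item names, not materials from any of the cited studies) shows how the order in which items are selected maps directly to a preference ranking.

```python
# Minimal MSWO scoring sketch (hypothetical item names). Because the
# chosen item is removed from the array after each trial, the selection
# order itself is the preference ranking.

def mswo_ranking(selection_order):
    """Return an {item: rank} map; rank 1 = first item selected."""
    return {item: rank for rank, item in enumerate(selection_order, start=1)}

# Example: a student chose the ball first, then the puzzle, then the book.
print(mswo_ranking(["ball", "puzzle", "book"]))
# {'ball': 1, 'puzzle': 2, 'book': 3}
```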
Remote training can be a cost-effective and efficient way to provide training. Previous studies evaluated the effects of remote training on the implementation of MSWO preference assessments and demonstrated that remote training effectively improved participants' implementation integrity [10,12,13]. Ausenhus and Higgins (2019) evaluated the effects of remote real-time feedback on clinical staff's procedural fidelity when implementing a brief MSWO preference assessment [12]. All four participants showed increased procedural integrity for implementing the preference assessment. A short training time (31–46 min) and few sessions (2–3) were sufficient for helping the clinical staff acquire and maintain the skill of conducting an MSWO preference assessment. Moreover, the social validity questionnaire results showed that the participants were satisfied with the remote training and would recommend the training procedure to others. Pizzella (2020) compared the effects of in-person and remote BST on participants' implementation of MSWO [13]. The two trainings were equally effective for increasing the percentage of correct steps, although the group that received remote training spent more time (63 more minutes on average) on the training because they watched the training video multiple times.
Previous research has demonstrated that BST is an effective training model and that it can be delivered remotely to train practitioners to implement interventions and assessments. However, no study has assessed the effectiveness of remote BST on implementing a free-operant preference assessment. Moreover, Sun et al. (2019) found that about 1 in 100 children in mainland China had been diagnosed with autism spectrum disorder [14], yet there is a lack of trained professionals and paraprofessionals to design and implement quality interventions for individuals with autism. It is critically important for intervention specialists to be well trained to implement appropriate assessments and interventions. Therefore, this study aimed to evaluate the effectiveness of remote BST on implementing free-operant preference assessments with a group of teachers in China. Specifically, the experimenter sought to answer the following research questions:
  • What are the effects of remote BST for special education teachers on their correct implementation and scoring of free-operant preference assessments?
  • What are the effects of remote BST on teachers’ maintenance of free-operant assessment implementation?
  • What are the effects of remote BST on teachers’ generalization of free-operant preference assessments?
  • What are the participants’ opinions of the remote BST training procedures?

2. Method

2.1. Participants

The participants were four special education teachers who lived in China and worked with children with developmental delays. After receiving approval for this research from The Ohio State University’s Institutional Review Board and permission from the school administrators, the experimenter recruited the participants by posting invitation letters to a discussion group. The teachers who were interested in participating met with the experimenter remotely, were provided information and opportunities to ask questions about the study, and signed consent forms to participate in the research.
Participant A was a special education teacher with eight years of teaching experience working at a center for children with developmental delays in Guangdong, China. She conducted one-on-one and group instruction with children between three and seven years old. Participant B was a paraprofessional with two years of teaching experience at a center in Beijing, China, who was also hard of hearing. Most of her experience was providing prompts and implementing behavior management interventions for children during group instruction. Participant C had five years of teaching experience working with children with moderate-to-severe disabilities and delivered one-on-one instruction to her students in Guangdong, China. Participant D was a teacher with three years of teaching experience who worked at a school for children with developmental delays in Beijing, China. She provided both one-on-one and group instruction to her students. Participants C and D were credentialed as Board Certified Assistant Behavior Analysts (BCaBAs). None of the four participants had any experience with implementing free-operant preference assessments.

2.2. Setting

Experimental sessions were conducted remotely through one-on-one videoconferencing calls. For each experimental training session, the experimenter was in the United States in her home, and the participants were in China. Participants A, B, and D participated in the preference assessment sessions from the center or school where they worked. Participant C participated in preference assessment sessions from her home.

2.3. Materials

Videoconferencing provided a live audio and visual connection between the participants and experimenter using Zoom 5.12.6. Remote sessions were conducted using an HP Notebook laptop, the laptop webcam, and the laptop microphone. All sessions were videotaped for scoring purposes. Additional materials included PowerPoint presentation slides, videos for video-modeling, a task analysis of the preference assessment procedure, preference assessment data sheets, scenarios (i.e., short paragraphs discussing potential preferred and nonpreferred stimuli), preference assessment stimuli, interval timers, a calculator, and writing utensils.

2.4. Dependent Variable

The dependent variable was the percentage of steps implemented correctly by the participant. Figure 1 shows the scoring checklist. For Step 7 on the checklist (i.e., place a check on the item that the student interacted with during the 10 s interval), a partial interval scoring form divided into 10 sections was provided to the participant. Each section indicated the stimuli selected during play. The participants observed the play behavior for five minutes (i.e., thirty 10 s intervals) and were scored on their accuracy of recording. Each opportunity to record engagement with an item during Step 7 (i.e., 10 opportunities) was counted when calculating the total percentage of correct steps.
For the last step of the task analysis, the participant was scored on the accuracy of identifying the preference hierarchy: highly preferred items (i.e., engaged with for the highest percentage of intervals), moderately preferred items (i.e., engaged with for a lower percentage of intervals), and low preferred items (i.e., not approached).
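To make the scoring rules concrete, the sketch below shows, in Python, one way engagement percentages and a preference hierarchy could be computed from 10 s partial-interval data. The classification logic follows the description above (highest percentage = highly preferred, not approached = low preferred); everything else, including the data format and item names, is an illustrative assumption rather than the study's actual scoring procedure, which is defined by the checklist in Figure 1.

```python
# Illustrative free-operant scoring sketch (assumed data format and item
# names; the study's checklist in Figure 1 defines the actual rules).

def engagement_percentages(intervals, items):
    """intervals: one set per 10 s interval, naming items engaged with."""
    n = len(intervals)
    return {item: 100.0 * sum(item in iv for iv in intervals) / n
            for item in items}

def preference_hierarchy(percentages):
    """High = highest engagement, low = never approached, else moderate."""
    top = max(percentages.values())
    hierarchy = {}
    for item, pct in percentages.items():
        if pct == 0:
            hierarchy[item] = "low (did not approach)"
        elif pct == top:
            hierarchy[item] = "high"
        else:
            hierarchy[item] = "moderate"
    return hierarchy

# Example with six intervals (a real 5 min session has 30).
data = [{"ball"}, {"ball"}, {"ball", "puzzle"}, {"puzzle"}, set(), {"ball"}]
pcts = engagement_percentages(data, ["ball", "puzzle", "book"])
print(pcts)                        # ball ~66.7, puzzle ~33.3, book 0.0
print(preference_hierarchy(pcts))  # ball: high, puzzle: moderate, book: low
```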

2.5. Experimental Design

A delayed multiple baseline across participants design was used to assess the effects of the remote training package on the acquisition and maintenance of free-operant preference assessment skills. The experimental conditions were baseline, intervention, maintenance, and generalization. Participants began training in a staggered format once their performance showed stable responding in baseline.

2.5.1. Baseline

The experimenter emailed the task analysis and the partial interval data collection sheet to each of the participants on the first baseline data collection session. Participants were encouraged to ask any questions they had before conducting the baseline free-operant preference assessment. Questions about the specific details of implementing the preference assessment were not answered during baseline to minimize interfering effects prior to training. After receiving a session request from the experimenter, the participants each conducted a videotaped session with a confederate (i.e., an adult who role-played as a student). The participants then submitted their videos and data sheets to the experimenter. The experimenter watched the video, collected procedural fidelity data, and compared the partial interval recording data. No feedback about the preference assessment procedure was provided to the participants during baseline. However, questions about the quality of the video or what equipment to use were discussed by the experimenter and the participants.

2.5.2. Intervention

A training-assessment session consisted of two components: a multimedia presentation lasting 20 to 30 min, which included instruction, modeling, and data collection practice, and a role-play task with a delayed feedback component. The experimenter delivered the multimedia presentation through videoconferencing and scheduled a session for participants to role-play with a confederate within one week. The experimenter repeated the training sessions until the participants demonstrated at least 88% accuracy on the task analysis steps during role-play.
The multimedia presentation consisted of a Microsoft PowerPoint presentation with information on the rationale for and use of positive reinforcement and the task analysis of the preference assessment. The experimenter delivered the presentation while videoconferencing with the participants. The presentation included a brief textual display of definitions of each component skill with narration, followed by a brief video model. The experimenter also gave the participants a chance to practice collecting 10 s partial interval data using the video model: the experimenter shared her screen and asked the participant to watch the video and take data simultaneously. They compared the data after watching the video and discussed any questions the participants had.

2.5.3. Role-Play with Delayed Video Feedback

The role-play was conducted following completion of the multimedia presentation. The participant recorded each role-play trial with a confederate and sent the video to the experimenter within one day of the session. The experimenter watched the video and scored the performance using the fidelity checklist. Within one day of receiving the role-play video, the experimenter met with the participant and provided feedback on the participant's performance during the last role-play session. For each trial of the video replay, the experimenter paused the video to provide descriptive feedback, which involved (a) showing a video clip of the trial, (b) stating the correct and incorrect responses regarding the implementation of the skills, and (c) obtaining confirmation that the participant observed the correct or incorrect response. The training was discontinued after trainees reached the mastery criterion (i.e., three consecutive sessions at or above 88% accuracy).
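The mastery rule just described can be read as a simple streak check. The Python sketch below is one interpretation of the criterion as stated in the text (three consecutive sessions at or above 88%), offered only as an illustration, not code used in the study.

```python
# Sketch of the stated mastery criterion: training ends once a trainee
# scores >= 88% on three consecutive role-play sessions.

def met_mastery(session_scores, criterion=88.0, run_length=3):
    streak = 0
    for score in session_scores:
        streak = streak + 1 if score >= criterion else 0
        if streak >= run_length:
            return True
    return False

print(met_mastery([72, 90, 88, 95]))   # True: sessions 2-4 are all >= 88
print(met_mastery([90, 85, 92, 100]))  # False: the 85 breaks the run
```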

2.5.4. Maintenance and Generalization

During the generalization sessions, the participants were asked to implement a free-operant preference assessment with a child instead of a confederate adult. Whether or not a participant had generalization sessions depended on the feasibility of implementing and video recording a session with a child. The experimenter asked all participants to conduct a generalization session, but only two could implement and video record a preference assessment session with a child (Participant C had one generalization probe during baseline and one during the intervention phase, and Participant D had one generalization probe during the intervention phase). There were two maintenance sessions for each participant. The first one was conducted two weeks after the last intervention session, and the second one was conducted four weeks after the last intervention session. The materials provided during the generalization and maintenance sessions were the same as the materials provided at the baseline sessions. Participants were asked to videotape the sessions for data collection purposes. Feedback was provided for generalization during the intervention phase. Feedback was not provided during the maintenance sessions.

2.5.5. Interobserver Agreement

Two observers, a graduate student and a Ph.D. graduate, were trained to collect interobserver agreement (IOA) data by reviewing the task analysis of the preference assessment, watching the video modeling, and participating in several practice opportunities. IOA was examined for 30% of the sessions across experimental phases and participants. IOA across participants was 91.6% (range of 86–100%) during baseline, 92.5% (range of 86–100%) during intervention, and 90% (range of 86–94%) during maintenance for Participants A, B, and C. No maintenance data were collected for Participant D.
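The article reports percentage IOA without stating the formula. A common point-by-point method, sketched below in Python, scores each checklist step by both observers and divides agreements by the total number of steps; this is an assumption offered only for illustration, not a description of the authors' calculation.

```python
# Hedged IOA sketch: point-by-point agreement, a common method when two
# observers score the same checklist steps (True = step performed correctly).

def point_by_point_ioa(observer_a, observer_b):
    assert len(observer_a) == len(observer_b)
    agreements = sum(a == b for a, b in zip(observer_a, observer_b))
    return 100.0 * agreements / len(observer_a)

print(point_by_point_ioa([True, True, False, True],
                         [True, False, False, True]))  # 75.0
```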

2.5.6. Procedural Fidelity

Another teacher observed the researcher for 30% of the baseline and training sessions to ensure that the experiment was being implemented as written. Procedural fidelity was assessed during the baseline phase and intervention phase. A procedural fidelity checklist was created and utilized. The observer marked on the checklist whether the experimenter followed each step of the experimental procedure. Overall, procedural fidelity across all participants was 100% for the behavioral skills training intervention.

3. Results

Figure 2 shows the percentage of steps completed correctly for all participants. Baseline data were stable for each participant prior to free-operant assessment training, ranging from 27% to 61% of steps completed correctly across participants. Baseline data ranged between 27% and 30% for Participants A and D, 50–55% for Participant B, and 61% on each baseline session for Participant C (one of which was a pre-training generalization probe).
After the intervention was introduced, all participants demonstrated an immediate and substantial increase to a range of 83% to 100% of steps completed correctly. All four participants met the mastery criterion, with 88% to 100% accuracy on the last three intervention sessions before maintenance. Participants A, B, and C demonstrated mastery in three sessions, and Participant D demonstrated mastery in four sessions. The first three participants also maintained implementation accuracy at or above 90%. Participant D was unable to complete the maintenance phase. Participant C and Participant D achieved generalization with 94% and 100% accuracy, respectively. Participants A and B had no opportunity to complete a generalization probe.
The experimenters also documented the time spent in training for each participant. Participant A attended a BST session (60 min) and three delayed feedback sessions (10 to 15 min per session) for a total of 98 min in training. Participant B attended a BST session (50 min) and three delayed feedback sessions (9 to 13 min per session) for a total of 83 min in training. Participant C attended a BST session (53 min) and three delayed feedback sessions (8 to 15 min per session) for a total of 86 min in training. Participant D attended a BST session (55 min) and four delayed feedback sessions (10 to 13 min per session) for a total of 90 min in training. The average time spent in training across participants was 89.25 min.
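The reported average is consistent with the four per-participant totals; a two-line check (values copied from the paragraph above):

```python
# Verifying the reported average training time from the four totals above.
totals = {"A": 98, "B": 83, "C": 86, "D": 90}  # minutes, as reported
print(sum(totals.values()) / len(totals))      # 89.25
```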

Social Validity

Each participant anonymously completed a social validity questionnaire following the final session. The social validity questionnaire consisted of six items to which the participants responded as either strongly disagree, disagree, neutral, agree, or strongly agree. The experimenter sent a link to the survey to the participants via email. No personal information was collected for the survey to preserve each participant's anonymity. Survey items included the following: I am satisfied with the remote training; I am satisfied with the process of arranging cameras and recording the free-operant assessment; I am satisfied with the process in which feedback was provided; the process of remote training is acceptable for learning other instructional skills; I recommend this process of remote training to other individuals; and I will use free-operant assessment in the future.
The results of the social validity survey indicated that all participants were satisfied with using videoconferencing to attend training (100%). Two of the participants (50%) expressed that they "agree" but not "strongly agree" with the process of arranging cameras and recording the free-operant assessment. All participants (100%) were satisfied with the delayed feedback, would like to learn other skills through remote training like the current experiment, would recommend remote training to other individuals who cannot receive on-site training, and would use the free-operant preference assessment in their practice.

4. Discussion

The current study examined the effects of using BST to remotely train four special education teachers to implement free-operant preference assessments. The results demonstrated a functional relationship between remote BST with delayed feedback and increased implementation fidelity. All four participants met the mastery criterion for implementing free-operant preference assessments. During baseline, the participants' implementation fidelity ranged from 27% to 61%. After training, implementation fidelity increased to a range of 83% to 100%.
These findings contribute to the preference assessment literature by demonstrating that remote training can be implemented effectively and efficiently. Specifically, the participants achieved mastery after three to four role-play and feedback sessions and only required an average of 89 min in total training time. These findings support previous research by showing that remote BST is effective for helping teachers acquire skills needed to conduct different types of preference assessments and extend this research to Chinese teachers implementing free-operant preference assessments [13]. Additionally, as indicated by the social validity surveys, BST training for implementing free-operant preference assessment was acceptable to the teacher participants in this study. Remote training should be considered when delivering instruction to teachers because it provides flexibility, convenience, and accessibility for individualized distance learning and professional development.

5. Limitations and Future Research

Despite the positive outcomes of this study, several limitations should be addressed in future research. The most important limitation was the minimal data on generalization to real students. Only three generalization probes were conducted, across two of the teachers. Two of the participants were unable to collect generalization probes with their students because parent permission to participate in the study was not obtained. For the two participants who received parent permission, generalization probes were limited by the changing pandemic situation at the time of the study. However, the few generalization probes that were conducted produced promising results. Future research should implement more generalization probes (during baseline and intervention) to determine whether a functional relationship exists for generalized outcomes.
Another important limitation that should be considered when interpreting the results is the variation of the participants’ background experiences. All four participants had at least two years of experience working with children with disabilities. Participants C and D were credentialed assistant-level behavior analysts. For this population, training was quick and effective. However, participants with less teaching experience or a limited background in behavioral interventions may take longer to reach mastery. Future research should examine the effectiveness and efficiency of preference assessment training for novice professionals.
Another important limitation was the implementation of a delayed baseline phase. Because the experimenter was unable to start baseline sessions for each participant during the same week, baseline was implemented in a staggered format. The experimental design would be stronger if all the participants started baseline at the same time. Future research should attempt to strengthen the experimental design by avoiding a delayed baseline condition, if possible.
In this study, the participants implemented the free-operant assessment for a 5-minute observation period. Based on some of the participants’ comments to the experimenter during the study, the 5-minute free-operant session might be too long for some young children with very short attention spans. Future research should examine the effects of training using shorter free-operant observation sessions. Shorter sessions might enable teachers to identify highly preferred items while decreasing the possibility of problem behaviors emitted by students during the assessment [15].
The purpose of conducting preference assessments is to identify potentially effective reinforcers, but the only way to determine whether the preferred items actually function as reinforcers is to conduct a reinforcer assessment. Specifically, the teacher would use the preferred items as reinforcers during behavior change interventions and examine the extent to which they are effective for producing behavior change. In this study, the participants did not have an opportunity to conduct a reinforcer assessment. Future research would be strengthened by including a reinforcer assessment condition implemented after potential reinforcers are identified.

6. Conclusions

Overall, this research supports previous findings that BST is an effective and practical approach for training practitioners to implement assessments and interventions, and that remote BST is a practical and effective way to build skills for practicing teachers. The current investigation demonstrated that remote BST effectively trained four teachers to implement free-operant preference assessments. Conducting free-operant assessments is an important skill that helps teachers identify the reinforcers that will work best for their students. Future research should examine remote BST training across a range of practitioners, settings, and student populations.

Author Contributions

Conceptualization, T.L.; methodology, T.L. and S.R.A.-M.; investigation, T.L.; writing—original draft preparation, T.L.; writing—review and editing, S.R.A.-M.; visualization, T.L.; supervision, S.R.A.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the research was approved by The Ohio State University’s Behavioral and Social Sciences Institutional Review Board (Protocol #2021B0392) on 12 January 2022.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

Data is unavailable due to privacy or ethical restrictions.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Cooper, J.O.; Heron, T.E.; Heward, W.L. Applied Behavior Analysis, 3rd ed.; Pearson Education Inc.: London, UK, 2020. [Google Scholar]
  2. Leaf, J.B.; Leaf, R.; Leaf, J.A.; Alcalay, A.; Ravid, D.; Dale, S.; Oppenheim-Leaf, M.L. Comparing paired-stimulus preference assessments with in-the-moment reinforcer analysis on skill acquisition: A preliminary investigation. Focus Autism Other Dev. Disabil. 2018, 33, 14–24. [Google Scholar] [CrossRef]
  3. Graff, R.B.; Karsten, A.M. Assessing preferences of individuals with developmental disabilities: A survey of current practices. Behav. Anal. Pract. 2012, 5, 37–48. [Google Scholar] [CrossRef] [PubMed]
  4. Ortiz, K.R.; Carr, J.E. Multiple-stimulus preference assessments: A comparison of free-operant and restricted-operant formats. Behav. Interv. 2000, 15, 345–353. [Google Scholar] [CrossRef]
  5. Roane, H.S.; Vollmer, T.R.; Ringdahl, J.E.; Marcus, B.A. Evaluation of a brief stimulus preference assessment. J. Appl. Behav. Anal. 1998, 31, 605–620. [Google Scholar] [CrossRef] [PubMed]
  6. Sautter, R.A.; LeBlanc, L.A.; Gillett, J.N. Using free operant preference assessments to select toys for free play between children with autism and siblings. Res. Autism Spectr. Disord. 2008, 2, 17–27. [Google Scholar] [CrossRef]
  7. Brock, M.E.; Cannella-Malone, H.I.; Seaman, R.L.; Andzik, N.R.; Schaefer, J.M.; Page, E.J.; Barczak, M.A.; Dueker, S.A. Findings across practitioner training studies in special education: A comprehensive review and meta-analysis. Except. Child. 2017, 84, 7–26. [Google Scholar] [CrossRef]
  8. Kirkpatrick, M.; Akers, J.; Rivera, G. Use of behavioral skills training with teachers: A systematic review. J. Behav. Educ. 2019, 28, 344–361. [Google Scholar] [CrossRef]
  9. Fetherston, A.M.; Sturmey, P. The effects of behavioral skills training on instructor and learner behavior across responses and skill sets. Res. Dev. Disabil. 2014, 35, 541–562. [Google Scholar] [CrossRef] [PubMed]
  10. Higgins, W.J.; Luczynski, K.C.; Carroll, R.A.; Fisher, W.W.; Mudford, O.C. Evaluation of a telehealth training package to remotely train staff to conduct a preference assessment. J. Appl. Behav. Anal. 2017, 50, 238–251. [Google Scholar] [CrossRef] [PubMed]
  11. Smith, S.G. The Effects of Didactic Training and Behavioral Skills Training on Staff Implementation of a Stimulus Preference Assessment with Adults with Disabilities (Publication No. 10937768). Master's Thesis, Utah State University, Logan, UT, USA, 2018. ProQuest Dissertations and Theses Global. [Google Scholar]
  12. Ausenhus, J.A.; Higgins, W.J. An evaluation of real-time feedback delivered via telehealth: Training staff to conduct preference assessments. Behav. Anal. Pract. 2019, 12, 643–648. [Google Scholar] [CrossRef] [PubMed]
  13. Pizzella, D. A Comparison of the Effectiveness, Efficiency, and Post-Training Outcomes of Traditional Behavioral Skills Training and Asynchronous Remote Training (Publication No. 28149930). Ph.D. Thesis, University of Missouri-St. Louis, St. Louis, MO, USA, 2020. ProQuest Dissertations and Theses Global. [Google Scholar]
  14. Sun, X.; Allison, C.; Wei, L.; Matthews, F.; Auyeung, B.; Wu, Y.; Griffiths, S.; Zhang, J.; Baron-Cohen, S.; Brayne, C. Autism prevalence in China is comparable to Western prevalence. Mol. Autism 2019, 10, 7. [Google Scholar] [CrossRef] [PubMed]
  15. Clay, C.J.; Schmitz, B.A.; Clohisy, A.M.; Haider, A.F.; Kahng, S. Evaluation of free-operant preference assessment: Outcomes of varying session duration and problem behavior. Behav. Modif. 2020, 45, 962–987. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Implementation fidelity checklist.

Figure 2. Percentage of correct steps implemented by the teachers. Data points show percentage of correct steps during baseline, squares show percentage of correct steps during post training, diamonds show percentage of correct steps during maintenance, and triangles show percentage of correct steps during generalization probes.