Intervention Fidelity

Learning support teachers were trained in all of the FORI lessons before administering them. Additionally, all lessons were written out as scripts to help ensure that the planned lessons were adhered to and that there was uniformity of instruction across teachers and groups. To determine the fidelity of the lessons, a minimum of three instructional sessions were observed each week. For each lesson, teachers were expected to include a minimum of two core FORI activities. Similarly, for the sessions involving a performance lesson, the teachers were to afford each student the opportunity to perform independently on a previously rehearsed fluency oriented task. Finally, students were inevitably absent for an occasional instructional day due, for example, to illness or their class being involved in another activity. When students missed a lesson, catch-up sessions were held to instruct them on the content of that lesson, either in a group or individually. Across all sites and over the period of the intervention, a total of four students required such catch-up sessions. Thus, all students received instruction on all FORI lessons.

#### *2.5. Teacher Professional Development*

Prior to the intervention, the learning support teachers in this study participated in approximately fifteen hours of professional development on fluency oriented reading instruction and aspects of reading intervention design. During the seminars, teachers were familiarised with the instructional models to be employed and were given sample lesson plans for the implementation of the proposed intervention. Seminars included videotapes that introduced oral reading fluency instructional strategies and that modelled the proper execution of fluency oriented instruction. Discussions were facilitated with the teachers regarding the integration of the proposed instructional approach into their own individual learning support programmes. During these seminars the teachers were encouraged to talk about what was going on in their respective classrooms and to work through any concerns and questions they had about implementing the proposed fluency oriented reading instruction. Materials for this instruction were identified (and in some cases designed) as part of the professional preparation for the intervention. In the course of these seminars, four levelled reading texts were identified and chosen as the focus for the intervention. These texts had reading levels that corresponded closely to those of other books already in use in the three schools. The texts, though short, were interesting enough to warrant discussion and vocabulary instruction on individual words and were selected based on their suitability for the type of reading instruction planned. They had carefully controlled language, repetitive patterns and repeated vocabulary and were suitable for fluency oriented reading instruction and for repeated reading in particular.

#### *2.6. Scoring the Motivation for Reading Questionnaires*

For analysis purposes, and to triangulate the findings from the surveys with qualitative data, an overall reading motivation percentage score for each construct was derived from both the student questionnaires and the teacher questionnaires before and after the intervention.

#### 2.6.1. Scoring the Student Survey (S-YRMQ)

The Student Survey (S-YRMQ) comprised twenty-two multiple choice items, with the set of potential answers for individual survey questions ranging from two to four possible responses. In order to quantify the level of motivation for each item, a percentage score was assigned to each response, dependent on the number of answers offered to students for that question. For example, in the sections assessing reading self-efficacy and reading orientation, zero percent (0%) was assigned to the most negative response, with one hundred percent (100%) representing the optimum positive answer. Items in the third section, which assessed students' perceived reading difficulty, were phrased in such a manner that a response of 'yes' or 'always' represented a high level of difficulty, and percentages were assigned accordingly. Examples of percentages assigned to individual responses across the range of multiple choice questions can be seen in Figure 5.
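The item-scoring rule described above can be expressed as a minimal sketch. The function name is hypothetical, and the sketch assumes percentages are spaced evenly across the available responses (e.g. two options map to 0% and 100%; four options to 0%, 33.3%, 66.7% and 100%); the exact values used in the study are those shown in Figure 5.

```python
def item_score(response_index: int, n_options: int, reverse: bool = False) -> float:
    """Map one multiple choice response to a 0-100 percentage score.

    response_index: 0 = most negative option, n_options - 1 = most positive.
    Spacing between options is assumed even (an illustrative assumption).
    reverse=True handles perceived-difficulty items, where 'yes'/'always'
    indicates a HIGH level of difficulty and the scale is flipped.
    """
    score = 100.0 * response_index / (n_options - 1)
    return 100.0 - score if reverse else score
```

For a four-option self-efficacy item, `item_score(3, 4)` gives 100%, while for a reverse-phrased difficulty item, `item_score(0, 4, reverse=True)` also gives 100%, since the most negative option there signals the least difficulty.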


**Figure 5.** Coding for motivation for reading survey (student form).

The overall reading motivation percentage score for each construct was derived by scoring each individual student response on each item and then calculating the average percentage score for all students in each construct. The pre-intervention motivation scores for the students in one research site (School A) across all three constructs are presented in Table 3 as an example. The percentages included in this table represent the student self-rating responses only, with the reading efficacy percentage score for one student (SB4) highlighted for illustrative purposes. The figure of 22 percent for this student represents an average score for this construct derived from responses to the six items featured in the section on reading efficacy.
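The construct score is simply the mean of the item percentages. The sketch below uses hypothetical item scores (not student SB4's actual responses, which are reported in Figure 6) chosen only to illustrate how six items can average out to a figure of around 22 percent.

```python
# Hypothetical percentage scores on the six reading-efficacy items
# (illustrative values only, not taken from the study's data).
item_percentages = [0.0, 33.3, 0.0, 50.0, 33.3, 16.7]

# The construct score is the mean of the item percentages.
construct_pct = sum(item_percentages) / len(item_percentages)
```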


**Table 3.** Example of student self-rating scores (pre-intervention).

The responses of this student (SB4) to questions on efficacy for reading, administered before the intervention, are presented in Figure 6 along with earned percentage scores.


**Figure 6.** Example of scoring of quantitative measures (Student SB4: reading efficacy).

#### 2.6.2. Scoring the Teacher Survey (T-YRMR)

The Teacher Survey (T-YRMR) comprised 20 statements organised in three sections reflecting the constructs of reading motivation assessed in the study. Teachers were asked to rate the likelihood of a particular behaviour occurring and were given a selection of four potential answers: (i) *No, never*, (ii) *No, not usually*, (iii) *Yes, sometimes*, or (iv) *Yes, always*. The optimum positive response (*Yes, always*) was assigned 100%, with scaled scores down to 0% for the most negative response. In instances where the statements were phrased in the negative form, e.g., '*the student avoids participation in reading activities*', 100% was assigned to the "*No, never*" response, with the scoring scaled down to 0% for the "*Yes, always*" response.
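This four-point scale, with its flipped scoring for negatively phrased statements, can be sketched as follows. The function name is hypothetical, and the intermediate values (33.3% and 66.7%) are an assumption of even spacing; the endpoints (0% and 100%) follow directly from the description above.

```python
# The four response options on the T-YRMR, from most negative to most positive.
RESPONSES = ["No, never", "No, not usually", "Yes, sometimes", "Yes, always"]

def teacher_item_score(response: str, negative_statement: bool = False) -> float:
    """Score one T-YRMR statement on a 0-100 scale.

    For positively phrased statements, 'Yes, always' -> 100 and
    'No, never' -> 0. For negatively phrased statements (e.g. 'the
    student avoids participation in reading activities') the scale is
    flipped, so 'No, never' -> 100 and 'Yes, always' -> 0.
    Intermediate responses are assumed to be evenly spaced.
    """
    idx = RESPONSES.index(response)
    score = 100.0 * idx / (len(RESPONSES) - 1)
    return 100.0 - score if negative_statement else score
```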

#### 2.6.3. Data Analysis for Interviews

The transcripts for interviews with teachers and parents and for conversational interviews with students for this phase of the study were coded in order to identify the source of the data and to ensure quotes could be traced back to the original transcript. In coding all these variables, for convenience, the letters A, B and C were assigned to the three schools to identify the three different sites. For example, using this method, data from the learning support teacher in School A was coded as LSA1, data from particular students in School B received codings such as SB1 or SB2, and data from a parent focus group in School C was coded as PFGC. Interviews were categorised according to four major themes reflecting the research questions for this phase of the study and were also assigned a descriptive code. For example, quotes referring to the motivational constructs of self-efficacy, reading orientation and perceived difficulty with reading were assigned SE, RO and PRD, respectively. After each piece of data had been assigned a code, a further layer of analysis was conducted to extract and deduce the meaning of each quote. Each quote that warranted inclusion was then numbered within the category for reference purposes.
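The source-coding convention above can be made explicit with a small parsing sketch. This is an illustrative reconstruction based only on the examples given in the text (LSA1, SB1, SB2, PFGC); the function name and the assumption that role prefixes are exactly LS, S and PFG are hypothetical.

```python
import re

# Role prefixes inferred from the examples in the text:
# LS = learning support teacher, S = student, PFG = parent focus group.
# School letter is A, B or C; a trailing number identifies an individual.
CODE_RE = re.compile(r"^(LS|PFG|S)([ABC])(\d*)$")

def parse_source_code(code: str) -> dict:
    """Split a source code such as 'LSA1' or 'PFGC' into its parts."""
    m = CODE_RE.match(code)
    if m is None:
        raise ValueError(f"unrecognised source code: {code!r}")
    role, school, number = m.groups()
    return {
        "role": role,
        "school": school,
        "index": int(number) if number else None,  # focus groups carry no number
    }
```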
