Article

Evidence of Sustainable Learning from the Mastery Rubric for Ethical Reasoning

by Rochelle E. Tractenberg 1,*, Kevin T. FitzGerald 2 and Jeff Collmann 3

1 Collaborative for Research on Outcomes and Metrics; Departments of Neurology; Biostatistics, Bioinformatics & Biomathematics; Rehabilitation Medicine, Georgetown University Medical Center, Suite 207 Building D, 4000 Reservoir Road NW, Washington, DC 20057, USA
2 Catholic Health Care Ethics; Pellegrino Center for Clinical Bioethics; Department of Oncology, Georgetown University Medical Center, Suite 236, Building D, 4000 Reservoir Road NW, Washington, DC 20057, USA
3 Professor Emeritus, Georgetown University, 37th & O Street, N.W., Washington, DC 20057, USA
* Author to whom correspondence should be addressed.
Educ. Sci. 2017, 7(1), 2; https://doi.org/10.3390/educsci7010002
Submission received: 8 July 2016 / Revised: 5 December 2016 / Accepted: 8 December 2016 / Published: 23 December 2016
(This article belongs to the Special Issue Consequential Assessment of Student Learning)

Abstract: Interest in sustainable learning has been growing over the past 20 years, but it has never been determined whether students, whose learning we are trying to sustain, can perceive either the sustainability of their learning or any of the features of this construct. A four-item survey was developed based on a published definition of “sustainable learning” and was sent to the 12 graduate students who had completed a new seminar in ethical reasoning. A thematic analysis of the narrative responses was submitted to a degrees-of-freedom analysis to determine the level and type of evidence for student perception of sustainability. Respondents (n = 9) endorsed each of the four dimensions of sustainable learning, and each gave examples of each dimension from outside of, and after the end of, the course. One respondent endorsed all dimensions of sustainable learning but was uncertain whether the course itself led to one particular sustainability dimension. While these results must be considered preliminary, because our sample is small and the survey is the first of its kind, they suggest that graduate students can and do perceive each of the four features of sustainability. The survey needs refinement for future/wider use, but this four-dimensional definition could be useful for developing, promoting, and assessing sustainable learning in higher education.

1. Introduction

“Sustainable learning” is defined as learning that continues beyond the end of formal instruction [1]. Interest in sustainable learning has been building in educational communities outside the United States (e.g., [2,3,4,5]), although some of the conversation focuses on the role of assessment in the sustainability of learning [2,3,6]; see also [7]. Within the United States (although not exclusively), sustainable learning has been studied as “transfer”, the application of learned skills or knowledge from the learned-in context to other contexts (excellently reviewed by Barnett and Ceci [8] and extensively discussed by Ambrose et al. [9], particularly in Chapter 4). These latter references are generally (but neither explicitly nor exclusively) focused on undergraduate education and workplace training. In medical education, a similar construct is “lifelong learning” (e.g., [10]); however, “lifelong learning” is not closely aligned with transfer or sustainability. In fact, within medical and nursing education at least, “lifelong education” is defined in a manner that is inconsistent with both “transfer” and “sustainability”, and instead focuses on maintaining competency with respect to the “state of the science” and “keeping up to date” [11] (p. 15), i.e., continuing to learn new things relevant to the profession (continuing professional development). Continuing professional development is an expectation in many fields, including medicine (e.g., [10]), nursing (e.g., [11]) and statistics [12]. Assessing the ongoing growth and development, beyond the end of formal instruction, of the knowledge, skills, and abilities that an educational experience was intended to initiate (i.e., the sustainability of that learning) is challenging in every educational context (undergraduate, graduate, post-graduate/professional). In a recent policy statement, the National Institute for Learning Outcomes Assessment (NILOA) [13] articulated five principles for documenting learning outcomes in higher education:
  • Develop/articulate specific actionable learning outcomes;
  • Connect learning goals with student work;
  • Articulate learning outcomes collaboratively;
  • Outcomes support assessment that generates actionable evidence; and
  • Outcomes are focused on improvement.
The sustainability of learning may be a plausible addition to existing objectives for the documentation and assessment of learning outcomes [13]. It could conceivably be integrated into the articulation of “specific, actionable learning outcomes”, or be considered “assessment that generates actionable evidence”. However, for either of these options to be viable, sustainability must itself be assessable. Although the conversation has been continuing for over 15 years [7], no evidence that students can perceive “sustainability”, or that it can be assessed reliably, has been published.
Knapper (2006) [1] declared “(l)ifelong learning means effective and sustainable learning”, but he did not define sustainability, whereas Schwänke (2008) [14] (pp. 1–2) described four distinct features of “sustainable learning” that include, and go beyond, transfer (a more cognitive model) and continuing professional development (a more clinical-education model). These four dimensions are:
  • Lifelong learning: an additional level of depth, or dimension, that you bring to a course or experience unrelated to the (primary) topic;
  • Changing your learning behavior as a result of the specific learning: describe how your learning (fact-finding, thinking, understanding of something, or approach to learning something new) changed;
  • A process of personal development continuing beyond the course: something you did, or initiated, for your own sense of learning (i.e., not taking a course as part of your program, but a learning or training experience that you sought, created, or identified—not already planned);
  • Deconstruction/reconstruction: an idea or concept that you thought you understood, but that you recognized you did not truly understand (deconstruction) and so sought to understand more deeply, and discovered an error in your original understanding that you remedied or sought to remedy (reconstruction).
While the construct “lifelong learning” is typically only trackable insofar as individuals attend workshops, read materials and/or answer multiple-choice questions on this content, or complete other similar unindividualized work, Schwänke’s definition represents an opportunity to explore sustainability in the (relatively) short term (see Appendix A). Moreover, this definition implicitly includes transfer, and implicitly defines “lifelong learning” in terms of transfer. It also explicitly incorporates metacognition, the knowledge of, and ability to regulate, one’s thinking [15,16]. Ambrose et al. (2010) [9], among others, argue that metacognition is one of seven principles that promote effective learning; the National Research Council (2001) [15] (p. 78) states that “(m)etacognition is crucial to effective thinking and competent performance”. As noted, continuing professional development and “lifelong learning”, evidence of which is obtained outside of the classroom and often in informal ways, typically do not include or emphasize metacognitive development or transfer.
In the context of higher education (undergraduate, graduate, and post-graduate training), while transfer might be a key objective (e.g., [8]), there are many challenges to be overcome before real sustainability in learning can be successfully integrated into instructional and learning objectives (see, e.g., [2,6,7,17,18]). A significant challenge is that many instructors are experts in their own discipline, but not in education or pedagogy. This can make them excellent instructors while also making it difficult, if not impossible, for them to promote sustainability in their students’ learning (see, e.g., [9] (Chapter 4)). A related challenge is that courses, particularly those “in the major”, are structured and aligned with a deepening interest, experience, and knowledge base that is reiterated and reinforced throughout a program. Because students may be training for a future in the discipline in which they are studying, their engagement with higher education to achieve their desired future, and not the actual instruction itself, may bring about the sustained learning. If student learning already seems sustainable, there is little reason to “improve” or even “standardize” this attribute. It is also difficult to identify a point in a degree program where sustainable learning would begin, or would best be initiated. These are also considerations in the assessability of sustainability.
However, there are instances where sustainability cannot be brought to a course by the student’s engagement with the discipline, nor can it come from the organization of a program of study: for undergraduates, these are the single courses outside the major that satisfy breadth or “general education” requirements. For graduate students and some post-graduate students, these might be a single required (“singleton”) course in statistics or in ethics/responsible conduct of research. Although two of the authors have taught both of these stand-alone “required” courses (Rochelle E. Tractenberg, statistics; Kevin T. FitzGerald, ethics), in this manuscript we focus on the potential to identify and assess sustainability from within the context of training in the responsible conduct of research (RCR), or research ethics.
Like other “required” courses, a course on RCR is often the only formal exposure students will have to the topic. However, unlike most required courses, a key assumption for training in research ethics, or RCR, is that the single course (whether it is three hours or a full semester) is sufficient to both promote and sustain research integrity for an entire career. Most other singleton courses are intended to serve as a survey, overview, or general introduction to the topic; if a student contemplates utilizing the topic of virtually any other singleton course for future work or study, further engagement with that topic would likely be recommended, or recognized as necessary, by the student or the institution. Our joint experience in and with graduate programs in the sciences is that this is rarely the case for courses in research ethics. The typical paradigm is a single course, with the same training required of all researchers, no matter what their career stage, role, or responsibilities, and with compliance as the main documented learning outcome.
The National Academy of Engineering [19] (p. 36) declares this to be a flawed model for ethics education, and Novossiolova and Sture (2012) [20] reviewed the literature around ethics education and described how this type of educational experience, even if successfully delivered, can fail to be brought to bear in practice. This failure may arise from the learning itself (e.g., [21]), or from perceived failures of the applicability of the ethics instruction to real-world contexts and situations. From a cognitive scientific perspective, the dominant training paradigm for “training in ethics” assumes that mastering the information associated with RCR topics (functioning at the “cognitive stage”) will lead to the habits of mind that characterize real mastery of the key constructs in RCR (functioning at the “autonomous stage” [22] (pp. 281–282)). This paradigm replaces the community value for integrity (e.g., [23]) with community value for “completing required training”. While this may also be true for other single courses (outside of a major program of study), sustainability of the learning that is initiated in training for the responsible conduct of research is a high, if unacknowledged, priority. A critical feature of the scholarly discussion of sustainable learning, in general and with respect to training in responsible conduct of research, is that it is entirely focused on what teachers can or do implement in their courses to promote sustainable learning. There has never been a study of whether or not the construct “sustainable learning” is meaningful, or even perceptible, to students.
One of the authors (Rochelle E. Tractenberg) created the Mastery Rubric [24], a tool that formally and explicitly combines the knowledge, skills, and abilities (KSAs) that a given curriculum seeks to deliver with a description of how performance of each of these KSAs should change as the learner moves from a more novice to a more expert level of achievement on each. A Mastery Rubric is a curriculum building and evaluation tool, similar to a traditional rubric (e.g., [25]) in that the desired knowledge, skills, and abilities (for a curriculum, rather than for an assignment or task) are outlined together with performance levels; these levels characterize the learner’s movement from novice to proficient [24], rather than from the worst to the best grade or score. In 2012, two of the authors (Rochelle E. Tractenberg, Kevin T. FitzGerald) published the Mastery Rubric for Ethical Reasoning (MR-ER) [26]. The MR-ER focuses on the KSAs that comprise ethical reasoning, and its performance levels follow a guild structure to support development from novice, through apprentice (beginner), to journeyman (independence). We included a fourth level, “master”, to differentiate the evidence required to support a claim that an individual has achieved the journeyman (independent functioning) level of performance of all reasoning KSAs from the evidence required to support a claim that an individual is not only capable of performing ethical reasoning independently (journeyman), but has also empirically demonstrated the ability to diagnose and remediate the reasoning of less-advanced reasoners, i.e., is capable of taking and training an apprentice (the master) [26,27].
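To illustrate this two-dimensional structure, consider a minimal sketch (in Python) of a rubric as a mapping from each KSA to performance-level descriptors. The KSA names and descriptor text below are hypothetical placeholders; only the four level names follow the MR-ER [26], which defines its own KSAs and descriptors.

LEVELS = ["novice", "apprentice (beginner)", "journeyman", "master"]

# Hypothetical KSAs and descriptors, for illustration of the structure only.
mastery_rubric = {
    "identify the ethical dimensions of a case": {
        "novice": "names an issue only with prompting",
        "apprentice (beginner)": "names and frames issues with scaffolding",
        "journeyman": "independently identifies and prioritizes issues",
        "master": "diagnoses and remediates others' issue identification",
    },
    "justify a decision": {
        "novice": "asserts a conclusion without support",
        "apprentice (beginner)": "supports a conclusion with guidance",
        "journeyman": "independently constructs a complete justification",
        "master": "evaluates and strengthens others' justifications",
    },
}

def descriptor(ksa: str, level: str) -> str:
    """Look up the expected performance of a KSA at a given level."""
    return mastery_rubric[ksa][level]

print(descriptor("justify a decision", "journeyman"))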
A Mastery Rubric-based curriculum would specifically encourage individuals to reflectively monitor their own development of the identified curricular objectives (knowledge, skills, and abilities) so that they have, or know to seek, multiple opportunities to learn, practice, and demonstrate their mastery of specific KSAs that are consistent with the instructional objectives [24]. Because it is intended to be public, i.e., accessible to both students and instructors, a Mastery Rubric can promote student metacognition by encouraging students to evaluate the level at which they perform specific KSAs (see, e.g., [4,7,15,24,27]).
We created a semester-long course based on the MR-ER that has now been completed by three cohorts of graduate (Masters and PhD) students in the biomedical sciences. We designed the course (syllabus in Appendix B) so that, over a semester, each of the ethical reasoning KSAs is taught, and students are given practice employing, and metacognitively reflecting on, each KSA, using a federally recommended topic list [28] to guide our selection of relevant case studies. While training in responsible research is not a focus of this manuscript, the manuscript outlines the first evidence that the MR-ER can support a course that might achieve the intention to encourage transfer, reflection, and self-monitoring in responsible conduct of research beyond the course itself (e.g., see [29]). The MR-ER-derived course happens to focus on research ethics, but the point of this study derives from the orientation of the Mastery Rubric towards supporting metacognition. Because of this orientation, and the dependence of sustainable learning on metacognition, we sought to provide the first empirical test, to our knowledge, of whether sustainable learning (as defined by Schwänke 2008 [14]; see also [2,19], and Appendix C) can be perceived by students.

2. Materials and Methods

The development of the survey, the structure of the course whose sustainability we sought evidence for from our students, and our analytic methods are described below; Appendix D summarizes how we met the consolidated criteria for reporting qualitative research (COREQ) [30].

2.1. Development of the Survey

We created a four-item survey based on the features of sustainable learning as defined by Schwänke (2008) [14] and enumerated earlier. As can be seen in Appendix A, our only modifications to the definitions of these features were to direct respondent attention to our course, to domains outside the course, and to the time after the course had ended. This focus preserved the original definition features, which was a priority for us, but it also made the survey so specific to our course that we were not able to administer it to a ‘control’ group. The survey was the only data collection instrument, and this study was its pilot test. To our knowledge, no one has ever surveyed students about whether “sustainability” is a characteristic of their educational experience that they can detect. In this “proof of concept” study, we sought only to determine whether students can perceive sustainability; if the evidence supported its perception by the students, then we planned to revise the survey so it could be made more general and used more widely.
Prior to distributing the survey, two of the authors (Rochelle E. Tractenberg & Kevin T. FitzGerald) obtained a foundation grant to engage doctoral students to participate in the course and to continue in a related project wherein portfolios were created based on the MR-ER. At this point, we obtained an exemption from our institutional review board, citing our intention to use any results and research publications to improve the course and our teaching. All data presented here were collected under this IRB exemption. All participants knew the two faculty who taught their class (co-authors Rochelle E. Tractenberg & Kevin T. FitzGerald) and knew of their interest in the course’s functioning; no students were from either instructor’s academic department. None of the students knew the third co-author (Jeff Collmann), who is a research collaborator with the other two co-authors and who completed all the thematic (content) analyses independently. There was no interaction between the coder and the survey respondents.

2.2. The MR-ER Derived Course

The ethical reasoning KSAs were derived from compendia of scholarly work reflecting ethical decision-making [31], as described in detail in [26]; they are also described as the component skills required for (any) case analysis by Ambrose et al. [9] (p. 99). The course and its assessments were designed around the elements of assessment validity outlined by Messick (1994) [32]:
  • What is/are the knowledge, skills, and abilities (KSAs) that students should possess (at the end of the curriculum)?
  • What actions/behaviors by the students will reveal these KSAs?
  • What tasks will elicit these specific actions or behaviors?
The syllabus (see Appendix B) reflects our purposes of providing instruction, and practice with formative feedback, around each KSA, with a final opportunity (highly scaffolded but, ultimately, summative) to argue, using work products from the semester as evidence, that the individual had moved from the Novice to the Beginner level (a change of one level on our four-level rubric). Moreover, the course was designed based on the manuscript the co-instructors (co-authors Rochelle E. Tractenberg & Kevin T. FitzGerald) published in 2012 [26], which is required reading for the course.
We specified to the students that the course focus is reasoning, not “ethics” or mastery of the factual material comprising the topics list from the National Institutes of Health (NIH, [28]). We never mentioned “sustainability”, “lifelong learning”, or “transfer” in any semester we offered the course, but we repeatedly discussed and modeled metacognition. One example of this emphasis is that we discussed each student’s case analysis in class meetings, focusing on their descriptions of their thought processes and on how their writing did and did not reflect their own thinking in a way that would be accessible to any reader of their essays. For the two semester courses that enrolled only doctoral students, we repeatedly pointed out that, just as important as their developing ethical reasoning abilities, students’ awareness of their own thinking would be a key skillset for them to deploy, teach, and model when they had students of their own in the future.
As outlined in the syllabus (Appendix B), the first meetings of the course were devoted to orienting students to the component knowledge, skills, and abilities in the Mastery Rubric for Ethical Reasoning, which were to be utilized in their case analyses each week; the NIH topics list guided our selection of cases for discussion and reflection. During the first seven weeks, one week was spent on each of the MR-ER KSAs in turn. We then asked the students to become more active in identifying weaknesses that they perceived in their own KSA performance. For the final third of the semester, each student focused their case analyses on whichever of the MR-ER KSAs they felt was most salient for considering, or resolving, the case, or the one for which they felt they needed additional practice or evidence of growth. Each meeting and weekly assignment emphasized one KSA (although all KSAs were included in each meeting), beginning with a brief lecture (20 min at most, in a 3-h seminar) on that KSA by faculty (co-authors Rochelle E. Tractenberg and/or Kevin T. FitzGerald). Following this overview, students demonstrated, and discussed, the metacognitive features of their case analysis, the target KSA, and its reflection in their own essays.
The first semester course was offered using the university online system: all students (plus one auditor and two instructors, co-authors Rochelle E. Tractenberg & Kevin T. FitzGerald) called into a single line, once per week, for a three-hour meeting. In the second and third semesters, all meetings (weekly three-hour discussions) were held on campus, led by the two instructors (the two auditors each co-led one session). The syllabus was similar to the one shown in Appendix B; we revised the first- and second-semester syllabi for clarity (not content or structure), based on input from the previous semester’s students. The final (current) syllabus appears in Appendix B. As outlined there, participants wrote ten 500-word essays (one for each meeting) during the semester, and a final 1000-word essay.

2.3. Subjects

Twelve individuals have completed our course to date, all of whom had previously completed either at least one online module on the responsible conduct of research or a semester course in which the NIH topical areas were discussed. A total of ten graduate students enrolled in three semester offerings of this course: three students from a Master’s program in Spring 2012; three students from a PhD program in Fall 2012; and four PhD students, from three programs at our university (n = 2) and a nearby university (n = 2), in Spring 2014. In addition to these ten students, we had two auditors. One of the Master’s students in the first semester (who had completed a PhD some 15 years previously) also audited the second semester, and two additional participants with PhDs (from the scientific community) contacted us requesting permission to audit the course; one of these individuals audited the first two semesters, and the other participated in our Spring 2014 semester. The two individuals who participated in both 2012 semesters specifically requested permission to continue, so as to further develop their reasoning skills for a second semester. Of the 12 participants in the course to date, all completed every written assignment; although 60% of attendees in each semester missed a single meeting, no student missed more than two meetings, and some missed none. The course was required only for the students in the first semester (enrolled in a Master’s program). None of the students were in the disciplines of any of the co-authors (respondents were not “our” students).
After obtaining an IRB exemption for the project, we emailed former students and requested representative experiences, if they had had any outside of and after our course, of each of the dimensions of sustainable learning described by Schwänke (2008) [14], outlined in the Introduction and elaborated in Appendix C (lifelong learning; changing your behavior as a result of acquiring new knowledge; a process of personal development continuing beyond the course; and deconstruction/reconstruction).
All students were emailed a request to participate in the survey; all who responded were emailed the document with the four questions; all respondents completed the survey on their own and emailed it back to one of us (co-author Rochelle E. Tractenberg). De-identified surveys were then sent, as a zipped file of uniquely numbered surveys, to the qualitatively trained anthropologist on our team (co-author Jeff Collmann), who completed the thematic analysis as described below (see Appendix D). This analysis was independently reviewed by one other co-author with experience in qualitative analysis (co-author Rochelle E. Tractenberg).

2.4. Analysis

The narrative responses to these open-ended questions were content-analyzed by an independent coder (co-author Jeff Collmann) who had not been involved with the courses and so did not know the respondents. These data were therefore anonymous to the coder. Each of the four survey items has a yes/no part and a narrative part, so each item can be summarized as the proportion of the sample that endorsed (said YES to) it. No software was used to analyze these brief responses; as themes emerged from the narrative responses, they were collated into a table which was then sent to the independent reviewer of the thematic analysis (co-author Rochelle E. Tractenberg) as a summary—so this representation of the data was anonymous to that analyst as well. The thematic table is described in the next section. The purpose of this analysis was to determine if sustainability was perceptible to this group of students.
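To make the tallying step concrete, a minimal sketch (in Python) of how the yes/no portion of each item can be summarized as an endorsement proportion follows; the dimension labels paraphrase [14], and the response values are illustrative placeholders rather than the study data.

DIMENSIONS = [
    "lifelong learning",
    "changing learning behavior",
    "continuing personal development",
    "deconstruction/reconstruction",
]

# One dict per respondent, mapping each dimension to True (YES) or False (NO);
# placeholder values only, not the study data.
responses = [
    {d: True for d in DIMENSIONS},  # a respondent endorsing all four items
    {d: (d != "deconstruction/reconstruction") for d in DIMENSIONS},  # one NO
]

def endorsement_proportions(responses):
    """Proportion of respondents who endorsed (said YES to) each item."""
    n = len(responses)
    return {d: sum(r[d] for r in responses) / n for d in DIMENSIONS}

print(endorsement_proportions(responses))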
We then created a Degrees of Freedom Analysis [33,34,35] matrix with the four features of sustainability [14] representing our “predictions”, and the collation of whether students perceived these features representing our “degrees”. The purpose of this analysis was to explore whether different dimensions of sustainability were more or less perceptible to students.
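The matrix itself can be represented simply: the four predicted features [14] are rows, respondents are columns, and each cell records whether the coded evidence matched the prediction. The sketch below (in Python) uses placeholder match values, not our actual codings, and reports how many respondents supported each prediction.

PREDICTIONS = [
    "lifelong learning",
    "changing learning behavior",
    "continuing personal development",
    "deconstruction/reconstruction",
]
RESPONDENTS = [f"Student #{i}" for i in range(1, 10)]  # nine respondents

# Initialize each cell to True (evidence matched the prediction); a real
# analysis would fill the cells from the coded narrative responses.
matrix = {p: {r: True for r in RESPONDENTS} for p in PREDICTIONS}
matrix["deconstruction/reconstruction"]["Student #9"] = False  # placeholder

for prediction, cells in matrix.items():
    n_matched = sum(cells.values())
    print(f"{prediction}: matched by {n_matched}/{len(RESPONDENTS)} respondents")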

3. Results

The survey is presented in Appendix A. It was derived explicitly from the definition of sustainable learning given by Schwänke (2008) [14]. Surveys were requested between two months and two years after the course was completed; responses were received from nine of the twelve course completers, all within six months of our initial inquiry (and all after the course ended). Three students from our first semester were lost to follow-up due to job changes (n = 2) and maternity leave (n = 1).
Table 1 shows the content analysis results.
Table 1 represents two key features of the survey responses. First, 100% of respondents answered YES to three of the four questions; the single “NO” response on one item reflected the respondent’s uncertainty about whether the attribute was a direct result of our course. That is, all nine respondents recognized each of the dimensions of sustainability with respect to our course; the majority (8/9) attributed the changes in every dimension to the course, and 1/9 endorsed the dimension but could not attribute it to our course.
The second feature arises from the themes in the narrative responses to all items: all respondents identified work and/or life applications of the course, which is representative of transfer (and of sustainability in a general sense). Three respondents (all students) focused on the impact of the course on their career planning when answering three of the four questions. Most examples refer to improvements in reasoning, either through direct use of the KSA principles or through routine examination of alternatives in work or daily life. The examples as a whole suggest behavioral as well as cognitive changes. Respondents describe themselves as acting differently in the world as a result of changes in their reasoning and problem-solving approaches. The survey prompted them to identify changes in their behavior that they attribute to the learning in the course, a process that reinforced the course’s fundamental emphasis on self-reflection and metacognition.
All of our respondents work in research contexts, so responses relating to “research” may be attributable either to work or to their career. For example, five of the eight respondents who provided a text response identified “work” (or “workplace”) applications of lifelong learning (e.g., “learned how to structure my thoughts and reflect upon them => improved proposal writing” (Student #8) and “work to understand research work at a deeper level” (Student #2)). Three of the eight specifically identified “daily life” applications (“‘lifelong learning’…almost occurs on a daily basis, and most likely would not have occurred at all if it was not for my involvement with the ethical reasoning course” (Student #4); “(L)istening to alternate perspectives and then making a concrete decision” (Student #6)). One respondent endorsed the item but provided no text response. All respondents endorsed the lifelong learning dimension of sustainability.
All nine respondents endorsed the learning behavior change dimension of sustainability, and responses again represented work (e.g., “be sure I understand more than “going through the motions” when acquiring these techniques and am seeking the details/theories/physics behind the techniques” (Student #9); “generate alternative actions in solving problems, including science problems” (Student #7)) and daily life (e.g., “more productive, during conflict, to seek understanding of the various conflicts and find common ground” (Student #8); “improved organization of my thoughts” (Student #2))—including other learning experiences (“increased interest in learning as a “messy” or social constructivist experience” (Student #1); “My learning behavior has been completely renovated…When I hear something new I run it through the KSAs” (Student #4)).
When describing an ongoing process of personal development, not only did all nine respondents endorse this dimension, but they also provided examples that differed from their earlier responses. The examples again related to career and work (e.g., “(t)o realize that career development is crucial” (Student #2); “ended up attending the (leadership workshop) and finding it to be both reinforcing and broadening in my understanding of leadership qualities I possess” (Student #9)), or to daily life including new learning experiences (e.g., “I see how important alternative perspectives are, and just how differently everyone has the ability to see one single person, place, or thing” (Student #4); “in a new learning environment where I have to think critically about subject areas that are new to me” (Student #6); “Enhanced perspective and cognitive awareness” (Student #7)).
Finally, while all nine respondents endorsed having had deconstruction/reconstruction experiences outside of, and after, our course, one respondent was uncertain whether these were due to the course itself (“I do not know if my realization of these errors in reasoning was due to this course” (Student #9)). This is itself a highly reflective and metacognitive response, and reflection and metacognition were two key elements of the course, while the deconstruction and reconstruction aspects were not. The other responses were again reflective of daily life for three respondents (“my understanding of other’s perspectives…By thoroughly dissecting another’s perspective and truly trying to walk in its shoes, I am learning much more about myself, in addition to others, than I could have never imagined” (Student #4); “I thought I understood the process of expressing myself and of reflection. I now write daily and reflect on it” (Student #8); and “I now know how to recognize moral dilemmas, understand the internal conflicts that created them and construct an approach that is ethically congruent” (Student #7)). The other four responses were more focused on work (e.g., “Securing funding requires a lot of different moving parts and my basic understanding of this entire process has given a greater appreciation for my discipline” (Student #6); “Now see the benefit of continuing scientific education, especially from experts at conferences. Increased understanding of how science affects policy and the importance of funding to science” (Student #2)).
Table 2 presents the summary of our data aligned with the predictions derived from the definition of sustainability that we used [14], in a Degrees of Freedom Analysis matrix [33,34,35]. Because each survey item included an endorsement (yes/no) component for its element of sustainability, the evidence (whether respondents did perceive that element) was unambiguous, and the three co-authors reached consensus immediately.
As was noted in the presentation of the content analysis of the narrative responses, every respondent indicated that yes, they perceived each of the four elements of sustainable learning. One respondent perceived the first element (lifelong learning) but did not give an example; one respondent perceived the last element (deconstruction/reconstruction), but said in the narrative response that they were not certain this was due to our course. Because we were most interested simply in whether students could perceive the sustainability dimensions, we considered the endorsement of the elements of sustainability to be supportive of the conclusion that these students, from this course, did in fact perceive all four aspects of sustainability.

4. Discussion

We surveyed nine of our 12 completers (to date) on four features of sustainable learning, and these preliminary results indicate that all respondents but one identified at least one experience meeting all four criteria; most described one or more experiences representing each criterion. These respondents come from doctoral programs in different disciplines, and include two respondents who had already completed a PhD prior to our course. The one respondent who did not endorse one of the sustainability dimensions did recognize it, but was unable to determine whether it was a direct result of our course. The four-dimensional model of sustainability outlined in 2008 [14] did not create a “neat” survey; in fact, the items are fairly complex as currently written (see Appendix A), and are impossible to administer to a “control” group because the language is so specific to the course we developed. However, all nine respondents were able to recognize each of the four dimensions and provide examples from their lives of how their learning in a course was sustained. These data suggest that sustainable learning is observable to students, and their narrative responses render it observable to others. Thus, sustainability of learning may be a plausible addition to existing objectives for the documentation and assessment of learning outcomes [13]. Moreover, if these preliminary results showing the perceptibility of sustainability can be replicated with a wider, more representative sample of students, then sustainable learning could be considered “assessable”, and student perception of it could be considered “actionable evidence of student learning” [36].
Although sustainable learning might be a focus in the education of educators (see, e.g., [2,3,4,5,6,7,37]), enduring effects of “RCR training” have never been shown; nor, to our knowledge, has any empirical evidence of the sustainability of learning in higher education been published to date. More than showing that our students continued to learn ethics content after their “requirement” was satisfied (which technically fits the “lifelong learning” model, and meets both the spirit and the letter of the National Institutes of Health requirements for training in RCR [28]), the responses of our students concretely represent their application of ethics content knowledge, reasoning, and metacognition independent of, and beyond, our course.
Continuing professional development is well known to be difficult to foster, monitor, and document in medical and nursing education [10,11], and in those fields it is mandated; in research ethics, the community value continues to emphasize a requirement-satisfaction model, rather than a developmental model or even actionable learning outcomes assessment [13]. The National Institutes of Health [28] require that “RCR training” span the scientist’s career, and Novossiolova and Sture (2012) [20] (pp. 80–81) recommended “continuing professional development” in ethics for scientists and called for research on how this might be achieved and assessed. Our work suggests that incorporating metacognition might promote the responsible conduct of research throughout a career, possibly serving as a model for professional development that actually continues. Moreover, this preliminary evidence suggests that it may be possible to obtain actionable evidence of learning outcomes that have heretofore been unassessed or unassessable, namely, whether the learning that student transcripts reflect endures.
Our results are clearly preliminary, since only twelve participants have completed our course so far. Our major themes are not surprising, since graduate students are focused on their careers and daily work (research, for these participants); this supports the interpretability of our results to the extent that they were predictable. With so narrow a focus and so small a sample, it is difficult to justify reporting minor themes. However, this survey is also the first of its kind, and one clear result is that future study of students’ perception of sustainability will require a revision of the survey, because it is too specific to our course for general use (including being too specific to permit surveying a plausibly comparable control group). Since this is a proof-of-concept study, the failure to include a control group is not a significant limitation but, rather, an important feature to address in future research on sustainability as an actionable type of learning outcome in higher education.
Further triangulation of these preliminary results about student perception of the sustainability of their learning comes from the intentions and hypotheses about the utility of the Mastery Rubric as a curriculum building and evaluation tool to support sustainable learning (see, e.g., [24,26]). We were unable to locate any peer-reviewed evidence of sustainable learning in higher education, either from instructors (showing that their instruction was sustainable/sustained) or from students (showing that student learning was sustained after a course not in the major/degree program). We also found no evidence in the literature that instructor or institutional learning objectives have explicitly mentioned or described sustainability of the learning that a course or curriculum was intended or designed to promote. In our future work, we plan to continue to focus on identifying features of sustainability from singleton courses, which may lead to actionable evidence of sustainability in learning outcomes that would be relevant for courses embedded in a program of study (i.e., not singleton courses).
The MR-ER [26] and the class that was created based on it are focused on decision-making and reasoning (found by Antes et al. (2010) [38] to be conspicuously absent, or to worsen, after RCR training), and not on the mastery of information alone. Both the course described here (see Appendix B) and the MR-ER itself were created with metacognitive development, and not the sustainable learning model, in mind (see, e.g., [39]). Student work turned in during the semester supports our claims that metacognitive skills, as well as the target KSAs, have been learned, whereas the syllabus only supports claims that the material and these skills have been covered [40] (p. 11). Although reports have been published of the failure of traditional “RCR training” to produce positive or lasting effects (see, e.g., [41]), we were unable to find any published peer-reviewed evidence of sustainability of learning with respect to the responsible conduct of research or ethics. Our sample is small, and sustainable learning is a highly abstract construct. We do not want to over-interpret the results, but triangulation supporting our interpretation derives from the fact that we obtained interpretable, consistent results from all respondents, even though the construct of “sustainable learning” is itself abstract and was never actually mentioned prior to the survey administration.
Appendix C contains a matrix that aligns Boud and Falchikov’s (2006) description of sustainable assessment ([2]; see also [18]) with the four-part model of sustainable learning based on Schwänke (2008) [14] that we used in this study. This Appendix describes how a Mastery Rubric, and particularly a course based on one, could support sustainability as described by both of these models.
However, three caveats must be considered. First, the survey we created was a direct representation of the four-part definition of sustainable learning, and was not created with formal survey-creation methodology. We sent the survey to our former MS and PhD students via email, and none of the respondents had any trouble understanding or answering the questions; however, the wording is complex, and responses might have been facilitated because our course involved so much writing and focused discussion on the topics in these four questions (although we never used the expression “sustainable learning”, since we had not heard of it until late in the final semester represented in these analyses). We are not convinced that this survey in its current form would be useful generally; in fact, we created a second version (not included) to administer to students in typical “RCR training” courses (i.e., without allusion to our specific course), but we did not deploy it because the questions did not seem clear, and modifications for clarification rendered the two surveys so inconsistent that responses might not have been directly comparable. We plan to use formal methods to develop a more generalizable survey that encompasses these four dimensions of sustainability, and we welcome readers to use or adapt ours to suit their own purposes and courses.
Our second caveat relates to our sample, which was highly self-selected (our course was required only for our first cohort and was an elective for the second two cohorts) and included auditors who contacted us specifically requesting the opportunity to participate in the course. None of these participants were ethics “majors”, so this course does fit our introductory comments about the singleton course outside a program of study; however, we cannot rule out that our respondents were themselves fully engaged in the paradigm, and that their engagement, rather than our course design, is the feature that led to the sustainability they all exhibited and described. The qualitative analysis of the narrative responses clearly documents the examples that respondents gave of their applications of the ethical reasoning KSAs outside of and after the course, but this might be due to their own characteristics, and not to the inherent objective of a course based on a Mastery Rubric to initiate this. In one sense, though, this self-selected and motivated-to-demonstrate-sustainability sample is an ideal way to test the theory that sustainability in learning can be perceived by students (because if these students could not perceive it, then it probably cannot be perceived by any students). However, we cannot attribute the sustainability that our results show to the role of the Mastery Rubric in the course design.
Finally, our responses were obtained between two months and two years after the course, not “after graduation”, the time frame suggested by Boud and Falchikov (2006) [2]. Our intention was simply to determine whether we could obtain evidence that students perceive sustainability in their own learning, using a concrete definition and a short-term view. The observation that 100% of respondents recognized and understood all four dimensions, although none of the terms or constructs in the definition of sustainability that we used for our survey (i.e., [14]) was ever mentioned in our course, supports our conclusion that this approach to studying the construct and its perceptibility by students was successful for this highly self-selected and small sample.
We do not wish to over-interpret these results, and we have plans to continue research into sustainable learning and how to both promote and assess it across courses and curricula in our institution. As was argued in the Introduction, singleton courses in graduate programs in particular seem to carry a special burden and expectation of sustainability. With this survey pilot-tested in our current sample, we can now begin to revise the survey for more generalizable administration. However, our main intention for the survey is not that it serve as an “endpoint” in intervention studies; it is to support the instructor’s ability to teach and assess in such a way as to promote sustainability. Our experience leads us to pursue this evidence by incorporating metacognition explicitly into other singleton courses (currently, an international graduate-level course in biostatistics and informatics). The proof-of-concept results here will support the integration of a revised version of this survey into efforts to estimate the effects of incorporating metacognition into courses at our institution and in an international setting. Those efforts will capture larger samples, and can therefore strengthen understanding of the construct of sustainable learning.
In conclusion, this qualitative analysis supports a claim that students, as well as instructors, can perceive sustainability in learning. A Mastery Rubric, when used to develop and/or evaluate a curriculum, permits the program or institution to document that, and how, important knowledge, skills and abilities have been “learned” (i.e., that teaching practice is aligned with educational outcomes [42]; see also Appendix C). If sustainability can be assessed by students with respect to their own learning, as these preliminary results suggest, then this feature of teaching and learning could be incorporated into models of learning outcomes assessment like that of the National Institute for Learning Outcomes Assessment [13,36].

Acknowledgments

Student participation in this project was supported by a grant to Rochelle E. Tractenberg & Kevin T. FitzGerald from the Mary Elizabeth Groff Surgical Medical Research and Education Charitable Trust; participation of Rochelle E. Tractenberg & Kevin T. FitzGerald in the project was supported by an NSF EESE grant (“A multidimensional and dynamic ethics education paradigm spanning the science career”; award #1237590) to Rochelle E. Tractenberg. Participation of Jeff Collmann was based upon work supported by the NSF under Grant No. SMA-1338507.

Author Contributions

Rochelle E. Tractenberg conceived and designed the study and survey; Rochelle E. Tractenberg & Kevin T. FitzGerald collected the data; Jeff Collmann was the principal data analyst and Rochelle E. Tractenberg independently reviewed the qualitative results. Rochelle E. Tractenberg, Kevin T. FitzGerald and Jeff Collmann collaboratively wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Post-Ethical Reasoning Course Sustainability Survey

Please read the following definitions of “lifelong learning”, “changing your behavior as a result of acquiring new knowledge”, “a process of personal development continuing beyond the course”, and “deconstruction/reconstruction”. Based on these definitions and thinking about your own reasoning in your daily life or work since you completed the ethical reasoning course, answer the questions below. Each question asks for ONE example, but if you can estimate how many times each type of experience has occurred, please do so! Describe as many experiences as you choose (no word limit!)—we only NEED one but will use everything!
Lifelong learning: an additional level of depth, or dimension, that the course’s emphasis on ethical reasoning skills and reflecting on your own thinking enabled you to bring to a course or experience unrelated to an ethical challenge.
Since the end of the ethical reasoning course, have you had at least one experience that fits the “lifelong learning” definition?
YES (ESTIMATED NUMBER: ) NO
Briefly describe how the course’s emphasis on ethical reasoning and your own reflection on your thinking led to an additional level of depth, thought, or reflection in another course or other experience. IF this has happened to augment your awareness of ethical challenges, please note that! However, this question is asking about your application of what you learned in our course to something other than ethical challenges.
Changing your learning behavior as a result of acquiring ethical reasoning KSAs: describe how your learning (fact-finding, thinking, understanding of something, or approach to learning something new) changed as a result of having completed the ethical reasoning course.
Since the end of the ethical reasoning course, have you had at least one experience that fits the “changing your learning behavior” definition?
YES (ESTIMATED NUMBER: ) NO
Briefly describe how the course led to changes in your learning. This question is asking about your application of the specific KSAs you learned in our course to other types of learning—including ethics (e.g., if you participated in a seminar or course on ethics) but not limited to that. For example, if your participation in/benefit from journal club meetings changed, use that!
A process of personal development continuing beyond the course: Something you did, or initiated, for your own sense of learning (i.e., not taking a course as part of your program, but a learning or training experience that you sought, created, or identified that was not already planned). Something you feel you will/would/would have benefitted from, that was not already in progress.
Since the end of the ethical reasoning course, have you had at least one experience that fits the “process of personal development beyond the course” definition?
YES (ESTIMATED NUMBER: ) NO
Briefly describe how the course led to your identification or creation of this experience (including, if appropriate, your recognition of an opportunity as useful that you had not considered to be “useful” before; maybe you only thought it would be “interesting”, but not specifically contributory to your own personal development). This question is asking about whether and how your level of awareness of your own reasoning, and of how to try to strengthen it, was changed after or beyond the course itself, including ethics (e.g., if you participated in another seminar or course on ethics) but not limited to that. For example, if your perception of an opportunity’s potential to contribute to your own development changed, use that! This can also include decisions NOT to participate in an opportunity because you perceived the potential to strengthen or increase your own skills to be less than you desired.
Deconstruction/reconstruction: An idea or concept that, prior to our course, you thought you understood, but that you recognized you did not truly understand (deconstruction) and so sought to understand more deeply, and discovered an error in your original understanding that you remedied or sought to remedy (reconstruction).
Since the end of the ethical reasoning course, have you had at least one experience that fits the “deconstruction/reconstruction” definition?
YES (ESTIMATED NUMBER: ) NO
Thinking about the time after the course ended, briefly describe an idea or concept that you thought you understood before taking our course, but that you later recognized you did not truly understand. Describe how the course or reasoning KSAs led you to recognize that you did not truly understand it, and whether and how these same KSAs helped you to reconstruct the concept without the original error.
This question is asking about your ability to identify ideas in your head that might require deconstruction (in order to determine if you truly understand them), and also about your ability to reconstruct them by seeking new information or experiences and rebuilding the concept without the original error or gap. For example, you may have chosen NOT to participate in work or learning opportunities because you originally perceived that there was no potential to benefit from them; however, developing the ability to isolate and reflect on your reasoning skills may have led you to perceive potential for learning in experiences that you were not capable of recognizing before the course.

Appendix B. Semester Course Syllabus

Ethical Reasoning for Biomedical Scientists
Meetings: Monday 9–12
Location
Instructors: Rochelle E. Tractenberg ([email protected]); Kevin T. FitzGerald ([email protected]).
Office Hours: By appointment
Overview: This 3-credit course (pass/fail) is designed around two key learning goals: (1) to initiate a career-long developmental path for the reasoning skills required to explain and justify decisions regarding ethical questions and dilemmas; and (2) to learn as much as possible about the nine areas that the National Institutes of Health has identified as representing the key elements of training in the responsible conduct of research, as well as other areas the course directors have identified as likely to be crucial to careers in biomedical research. You will also be asked to consider the role of “responsible conduct of research” in your own work and collaborations—and to consider the purpose of this training for you, your colleagues, and your mentees.
Prerequisite or co-requisite requirements: the NIH human subjects training, Collaborative Institutional Training Initiative (CITI) Modules and HIPAA training all provide important background, factual information that is central to your ability to complete case analyses (representing the majority of your written work for this course) but is not sufficient to constitute adequate training in the responsible conduct of research. These training options—which are required for your participation in any research at GU, HU, or through the GUHUCTTS—are available through: http://www.georgetown.edu/gumc/ora/irb/irbTraining.htm. See also: NIH RCR training policy notice at http://grants.nih.gov/grants/guide/notice-files/not-od-10-019.html.
Required text: Introduction to the Responsible Conduct of Research, Office of Research Integrity (can be downloaded at http://ori.hhs.gov/documents/rcrintro.pdf).
Orientation: The course is structured around, and assessments are based on, Rochelle E. Tractenberg & Kevin T. FitzGerald (2012): A Mastery Rubric for the design and evaluation of an institutional curriculum in the responsible conduct of research, Assessment & Evaluation in Higher Education, 37(8): 1003-1021. DOI:10.1080/02602938.2011.596923.
The paper is provided in the Blackboard course site; to link to this article: http://dx.doi.org/10.1080/02602938.2011.596923.
Mechanics of the course: This course will meet weekly for a three-hour discussion, where your active participation is required. All course materials will be emailed and/or made available through Blackboard.
Table B1. Course Objectives and Topics—Spring 2013.
Session Topics | Objectives
14 January: Introduction/Methods/Mechanics | Describe the course purposes and structure, and the case study method for teaching; introduce the Mastery Rubric and understand the structure of each week’s meetings, writings, and assessment. Assignment 1 given.
21 January: HOLIDAY
28 January: Policies regarding human subjects protection and international research. First writing assignment due. | Discuss the utility of the prerequisite knowledge (CITI, NIH, HIPAA) as a basis for adequate reasoning and case study discussions. Assignment 2 given.
4 February: Research misconduct and policies for handling misconduct | Federal (DHHS) definitions of misconduct, comparing and contrasting these with the NIH definition of “responsible conduct”, and discussion of how training supports (or fails to support) one or the other of these definitions. Discuss how training supports (or fails to support) the recognition of ethical or moral dilemmas. Assignment 3 given.
11 February: Equipoise, recruiting research subjects, therapeutic misconception/compensation and payment of subjects | Discuss decision-making frameworks and their relationships to case studies regarding the design of ethical clinical research, participant recruitment, and the concept of “informed” consent. Assignment 4 given.
18 February: HOLIDAY
25 February: Issues in animal research | Identify and evaluate alternative actions with respect to current developments in animal research models. Assignment 5 given.
1 March: Spring Break
11 March: Conflicts of interest | Discuss decision-making and the justifications for the identification, management, and/or removal of conflicts of interest. Assignment 6 given.
18 March: Mentor/mentee responsibilities and relationships | Reflect on decision-making in ethical dilemmas and how this supports, or fails to support, mentorship. Assignment 7 given.
25 March: Data acquisition and laboratory tools; management, sharing and ownership; privacy and confidentiality issues in data storing and sharing | Use the decision-making framework in the MR-RCR to work through case studies on data collection, management/storage, sharing, and ownership. Consider the funder and funding structure in data management, sharing, and ownership. Assignment 8 given. Students select one of seven reasoning skills to emphasize in assignments from now on.
1 April: HOLIDAY
8 April: Issues in Genetics and Genomics | Use the decision-making framework in the MR-RCR to discuss the interaction of personal values and social good in the scientific discoveries in genetics and genomics. Assignment 9 given.
15 April: Responsible authorship and publication, and peer review; collaborative research including collaborations with industry | Discuss decision-making for authorship and publication, and for your peer review of others (and the overall decision to obtain peer review), and the justifications for such decisions. Assignment 10 given.
22 April: The scientist as a responsible member of society, contemporary ethical issues in biomedical research, and the environmental and societal impacts of scientific research | Explore the “stewardship” model of the scientist with respect to scientific disciplines, societies of scientists, and society at large. Reflect on decision-making in ethical dilemmas and how this supports, or fails to support, stewardship. Final Assignment discussed/given.
29 April: Reasoning and the responsible conduct of research: training and research. Final writing assignment drafts due. | Discuss the course, the MR-RCR, and the sense that students have of what they have learned and whether/how they might continue to learn. Discuss other RCR training paradigms and RCR training opportunities.
6 May: Final writing assignment due.
Instructional Methods and Approaches: The course is structured around ethical reasoning, a framework for decision-making that follows a series of steps outlined in the Mastery Rubric for Responsible Conduct of Research. In each class meeting, we will refer to, and utilize, the knowledge, skills and abilities listed in the Mastery Rubric (see manuscript).
All students will submit a written analysis (500 words) of the case assigned for class. We will discuss this case as a group during the first 30–45 min of each class meeting; your case analysis will be the initial basis for your contributions to the discussion, and you will use it, together with the Mastery Rubric, to structure those contributions.
After the class meeting, students will submit a revision of their initial case analysis that reflects the group discussion of the analyses submitted prior to class; this second analysis will incorporate additional points from the assigned readings or elements of the discussion that the individual student agrees or disagrees are important, and will highlight whichever of the seven MR-RCR reasoning skills is the focus of that week’s work.
The first eight meetings of the course are focused on orienting students to the knowledge, skills, and abilities in the Mastery Rubric for the Responsible Conduct of Research, utilizing the NIH topics list as content on which to practice each of these. After the eighth meeting of the course, we will shift from exploring and initiating development in, and practice of, the knowledge, skills and abilities to refining individual confidence with each.
It is this second, revised submission, which includes a synthesis of facts with experience and discussion plus reflective reasoning, that is graded.
Student Performance Evaluation: Student achievement is determined by assessment of written case submissions (with scoring as outlined below), individual class discussion contributions, and the final assignment. Discussion contributions are opportunities for students to improve their abilities to assess their own performance. The final assignment asks each student to selectively use their second analyses from all meetings to propose, and support their perception of, the overall level in the MR-RCR they have achieved.
Assignments and Grading: The maximum total for the course is 100 points. Four points are possible for the second written analysis in each of 10 weeks (max = 40); however, if the first case write-up was not turned in on time, only three (of four) points are possible for that week’s second write-up. Formative feedback will be provided on the ungraded first analysis, in keeping with the Mastery Rubric, to support and strengthen the learner’s growing skill set.
Your active participation in each class meeting will be worth 4 points (max = 40). The final class meeting and the final essay will each be worth 10 points. Your final essay will be graded not on the level you believe you have attained, but on the quality of the argument supporting whatever level is identified. This is a course on reasoning, and the final project is a demonstration of your reasoning skills.
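The arithmetic of this scheme can be summarized in a short sketch; the following is ours, for illustration only (it is not part of the course materials, and the function and variable names are hypothetical):

```python
# Illustrative sketch (ours, not part of the syllabus): how the 100
# course points decompose, including the on-time rule for the weekly
# second analyses. All names are hypothetical.

WEEKS = 10          # weeks with a graded second analysis and participation
MAX_WEEKLY = 4      # per-week maximum for the second written analysis
LATE_CAP = 3        # cap when the first (ungraded) write-up was late

def weekly_points(earned: int, first_draft_on_time: bool) -> int:
    """Score one week's second analysis, applying the late-draft cap."""
    return min(earned, MAX_WEEKLY if first_draft_on_time else LATE_CAP)

def course_total(weekly, participation, final_meeting, final_essay):
    """Sum the four components: 40 + 40 + 10 + 10 = 100 points possible."""
    analyses = sum(weekly_points(s, ok) for s, ok in weekly)   # max 40
    discussion = sum(min(p, 4) for p in participation)         # max 40
    return (analyses + discussion
            + min(final_meeting, 10) + min(final_essay, 10))   # max 20

# Example: full marks everywhere, but week 3's first draft was late.
weekly = [(4, True)] * 2 + [(4, False)] + [(4, True)] * (WEEKS - 3)
print(course_total(weekly, [4] * WEEKS, 10, 10))  # -> 99
```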
Final Assignment: Use 1000 words to describe your current level of performance on each KSA, using your essays as evidence of that level and of how it has changed over the semester. Unlike previous assignments, this one asks you to compare your performance of each KSA at the start of the class with your performance now, and to describe, using your own homework, whether and how your KSA performance has changed over the semester. You want to say something like, “I am NOW at the Beginner level on each KSA, and I know this because (A) here is how that KSA has changed over the semester (using evidence from at least two essays from different points in the term); and (B) here is the evidence that my functioning on this KSA is at the Beginner level”.

Appendix C

Table C1. The Mastery Rubric (MR) and Sustainability in Learning and Assessment.
Sustainable Assessment Criteria (Boud and Falchikov 2006: 408–410), in rows, are crossed with the four Sustainable Learning Criteria (Schwänke, 2008): (1) Applies Learning (and Metacognition) from One Context to Another (general construct: transfer and metacognition); (2) Changes One’s (Learning) Behavior in Another Context after Learning from a First Context (general construct: transfer and metacognition); (3) Sustains Oneself as a Self-Motivated Learner (general construct: “lifelong learning”); (4) Engages in Construction, Deconstruction and Reconstruction of Knowledge (general construct: metacognition).
Engages with standards, criteria, and problem analysis: The MR outlines the standards; students must apply the standards to their work, with formative feedback, to make sure they have/generate evidence of learning. Because the standards are public/available to students, some responsibility for seeking new opportunities to build or demonstrate a KSA is ceded to students. In the course, students are challenged to evaluate their own homework as evidence supporting a claim of achievement.
Emphasizes importance of context: Recognizing how metacognition (taught and practiced around ethics in research) can inform research as well as future teaching. A focus on KSAs and how they should be changing over time supports the appreciation for contexts in which learning and demonstration of growth differ. Identification of the role of metacognition in learning highlights the types of teaching and learning these future faculty/scientists will continue to do/create.
Involves working in association with others: Identifying gaps in one’s own achievement leads to seeking new opportunities to learn/practice and demonstrate additional achievement; these opportunities may involve others. With a clear(er) idea of one’s own needs from an experience, involving others may be facilitated. Utilizing, and eventually requesting, specific formative feedback from those who are more expert; identifying individuals who are qualified as teachers and mentors.
Involves authentic representation and productions: Alignment and realignment of one’s work with the rubric; making a case for accomplishment (or identifying that additional training is needed) using one’s own work as evidence. Supports an appreciation for the authenticity of work as a representation of one’s achievement, and as truly representative of the construct under consideration.
Promotes transparency of knowledge: The MR is public, and the “possession” of knowledge is not sufficient to support claims of achievement. Seeking clarity in what is being learned (or taught) in another context after experiencing the importance of this clarity in an MR-derived course. Transparency of knowledge observed in the context of the course that was developed using the MR can promote an individual’s seeking and optimizing this in other contexts. Students perceive that weaker comprehension may result from lack of transparency, so the process of deconstruction and reconstruction can support additional transparency.
Fosters reflexivity: The MR requires reflection on the student’s part to identify, and/or create (via seeking of new opportunities), evidence to support claims of achievement. The MR provides a framework within which metacognition from contexts other than the course can be utilized. Reflection on learning (metacognitive development) is explicit—taught, practiced, and demonstrated—so that it can be used in other contexts. Reflection is critical for deconstruction, particularly to identify knowledge that is ripe for it; reconstruction also requires reflection, to ensure what is being “learned” is actually transparent and authentic.
Builds learner agency and constructs active learners: See above (“fosters reflexivity”).
Considers risk and confidence of judgment: The development of metacognition entails strengthening judgment skills relating to one’s own work. This focus can make risk more perceptible, and can thereby promote conscious improvement in judgment (about one’s work). Increasing awareness of the role of reflection, and development of the ability to judge one’s own work and needs, can lead to a more automatic use of this judgment, promoting ongoing learning.
Promotes seeking appropriate feedback: Developing metacognition concretely and explicitly in one context with formative feedback can promote a search for that feedback in other contexts. Having obtained concrete and specific instruction, practice, feedback, and the opportunity to demonstrate growth using this feedback leads to greater emphasis placed on seeking appropriate feedback. Seeking appropriate feedback is part of self-motivated learning; a model for what “future growth” in the target KSAs would look like provides a framework for the learner to seek this feedback, even if the expert from whom the feedback is sought is unaware of the MR or the model the learner is using/internalizing. Ideally, construction, and particularly deconstruction and reconstruction, of the ideas that learners have about assessment, and the relative importance and utilities of feedback (formative and summative), will follow from greater metacognition and reflection.
Requires portrayal of outcomes for different purposes: Learning metacognitive skills includes developing an awareness of how these skills can be brought to bear in other contexts and for other purposes. Experience with the utility of reflection and feedback in an MR-derived course, and the opportunities to engage in active reflection (particularly on assignments that were completed earlier in a term), helps learners to see that their work can reflect both the state of their knowledge and how their KSAs are changing over time.

Appendix D

Table D1. COREQ [30] Checklist for Sustainability and Mastery Rubric analyses.
No. Item: Guide Questions/Description
Domain 1: Research team and reflexivity
Personal Characteristics
1. Interviewer/facilitator: Students completed the survey on their own (no facilitation).
2. Credentials: All co-authors have PhDs.
3. Occupation: All co-authors are professors at the same institution (in different departments).
4. Gender: Course instructors are one male, one female. The thematic analyst (who never attended the course) is male.
5. Experience and training: Two co-authors have two PhDs; one is a statistician, cognitive scientist, and measurement expert who is also an internationally known research methodologist (with publications in both quantitative and qualitative methods). One co-author is a qualitatively trained anthropologist.
Relationship with participants
6. Relationship established: Two co-authors (AU1/AU2) co-taught the same course over three semesters. The course was required for the students in the first semester (enrolled in a master’s program). None of the students were in the disciplines of any of the co-authors (respondents were not “our” students).
7. Participant knowledge of the interviewer: All participants knew the two faculty who taught their class and knew of their interest in the course’s functioning. The course was designed based on a manuscript the co-instructors published in 2012 and this is required reading for the course.
8. Interviewer characteristics: All information about the co-authors’ instruction and participant knowledge of their research interests is included in the manuscript.
Domain 2: Study design
Theoretical framework
9. Methodological orientation and theory: Content analysis was the method of obtaining data from the survey; this was analyzed thematically and also entered into a Degrees of Freedom Analysis table.
Participant selection
10. Sampling: All students who completed the course were sent the survey.
11. Method of approach: All students who completed the course were contacted by email to inquire whether they would be willing to complete the survey.
12. Sample size: Nine of our 12 completers responded. Three students were lost to follow-up.
13. Non-participation: Three students could not be contacted.
Setting
14. Setting of data collection: We emailed the surveys, and students filled them in however and whenever they wanted.
15. Presence of non-participants: We do not know whether anyone was present when students filled in the surveys.
16. Description of sample: The key attributes of the sample are that the students all came from different disciplines and that they were all graduate students. Three individuals audited the course (one of these was lost to follow-up).
Data collection
17. Interview guide: The survey was the only data collection instrument, and this was its pilot test. No one has previously surveyed students about whether “sustainability” is a characteristic of their educational experience that they can detect.
18. Repeat interviews: No repeat interviews.
19. Audio/visual recording: The only data collection was the survey.
20. Field notes: No field notes.
21. Duration: No duration data are available.
22. Data saturation: We surveyed everyone who had ever completed the course, so saturation was not an issue.
23. Transcripts returned: NA.
Domain 3: Analysis and findings
Data analysis
24. Number of data coders: One coder worked on the data (because he was completely independent of the course).
25. Description of the coding tree: There are only four dimensions to the definition of “sustainability”, so only four brief narrative (open-ended) questions were available for each of the nine respondents. Coding was not sufficiently complicated for a “coding tree”.
26. Derivation of themes: All themes were derived from the data.
27. Software: No data management software was needed given the small amount of data.
28. Participant checking: Once the paper is published, we will share the results with the participants. None of the respondents inquired about or requested the results.
Reporting
29. Quotations presented: Participant quotations are included in the table summarizing the themes that emerged from the response analyses.
30. Data and findings consistent: The data were extracted (content analysis), and those results were then examined in a Degrees of Freedom Analysis framework to address the research question of interest (do students perceive features of sustainability?).
31. Clarity of major themes: The survey was highly focused, so our themes and results are similarly focused.
32. Clarity of minor themes: With so narrow a focus and such a small sample, minor themes are difficult to justify reporting. However, this survey is also the first of its kind, and one minor theme we did identify is that future study of student perception of sustainability will require a revision of the survey.

References

1. Knapper, C. Lifelong Learning Means Effective and Sustainable Learning: Reasons, Ideas, Concrete Measures. Seminar presented at the 25th International Course on Vocational Training and Education in Agriculture, Ontario, Canada, August 2006. Available online: http://www.ciea.ch/documents/s06_ref_knapper_e.pdf (accessed on 2 October 2013).
2. Boud, D.; Falchikov, N. Aligning assessment with long-term learning. Assess. Eval. High. Educ. 2006, 31, 399–413.
3. Boud, D.; Falchikov, N. (Eds.) Rethinking Assessment in Higher Education: Learning for the Longer Term; Routledge: New York, NY, USA, 2007.
4. Ecclestone, K. Instrumental or sustainable learning: The impact of learning cultures on formative assessment in vocational education. In Assessment, Learning and Judgment in Higher Education; Joughin, G., Ed.; Springer: Cham, Switzerland, 2009.
5. Peris-Ortiz, M.; Lindahl, J.M.M. (Eds.) Sustainable Learning in Higher Education: Developing Competencies for the Global Marketplace; Springer: Cham, Switzerland, 2015.
6. Boud, D.; Soler, R. Sustainable assessment revisited. Assess. Eval. High. Educ. 2016, 41, 400–413.
7. Carless, D.; Salter, D.; Yang, M.; Lam, J. Developing sustainable feedback practices. Stud. High. Educ. 2011, 36, 395–407.
8. Barnett, S.M.; Ceci, S.J. When and where do we apply what we learn? Psychol. Bull. 2002, 128, 612–637.
9. Ambrose, S.A.; Bridges, M.W.; DiPietro, M.; Lovett, M.C.; Norman, M.K. How Learning Works: Seven Research-Based Principles for Smart Teaching; John Wiley & Sons: San Francisco, CA, USA, 2010.
10. Macy Foundation. Continuing Education in the Health Professions: Improving Healthcare through Lifelong Learning [Monograph]. Conference sponsored by the Josiah Macy, Jr. Foundation, Southampton, Bermuda, November 2007. Available online: http://www.macyfoundation.org/docs/macy_pubs/pub_ContEd_inHealthProf.pdf (accessed on 21 October 2014).
11. Macy Foundation. Lifelong Learning in Medicine and Nursing: Final Conference Report; Association of American Medical Colleges and American Association of Colleges of Nursing, funded by the Josiah Macy, Jr. Foundation, 2010. Available online: http://www.aacn.nche.edu/education/pdf/MacyReport.pdf (accessed on 21 October 2014).
12. American Statistical Association. Guidelines for Voluntary Professional Accreditation by the American Statistical Association, Rev. 1; 2011. Available online: http://www.amstat.org/accreditation/pdfs/Guidelines_for_ASAVoluntary_Professional_Accreditation.pdf (accessed on 1 July 2012).
13. National Institute for Learning Outcomes Assessment. Higher Education Quality: Why Documenting Learning Matters; University of Illinois and Indiana University: Urbana, IL, USA, 2016.
14. Schwänke, U. Sustainable Learning—How Storyline Can Support It. Paper presented at the Nordic Storyline Conference, Gothenburg, Sweden, 2008; pp. 1–2. Available online: http://www.storyline-methode.de/mediapool/43/436167/data/Sustainable_learning_-_nachhaltiges_Lernen.pdf (accessed on 10 October 2013).
15. Schraw, G. Promoting general metacognitive awareness. Instr. Sci. 1998, 26, 113–125.
16. National Research Council. Knowing What Students Know; National Academy Press: Washington, DC, USA, 2001.
17. Medland, E. Assessment in higher education: Drivers, barriers and directions for change in the UK. Assess. Eval. High. Educ. 2016, 41, 81–96.
18. Nguyen, T.T.H.; Walker, M. Sustainable assessment for lifelong learning. Assess. Eval. High. Educ. 2016, 41, 97–111.
19. National Academy of Engineering. Ethics Education and Scientific and Engineering Research: What’s Been Learned? What Should Be Done? Summary of a Workshop; National Academies Press: Washington, DC, USA, 2009.
20. Novossiolova, T.; Sture, J. Towards the responsible conduct of scientific research: Is ethics education enough? Med. Confl. Surviv. 2012, 28, 73–84.
21. May, D.R.; Luth, M.T. The effectiveness of ethics education: A quasi-experimental field study. Sci. Eng. Ethics 2013, 19, 545–568.
22. Anderson, J.R. Cognitive Psychology and Its Implications, 6th ed.; Worth Publishers: New York, NY, USA, 2005.
23. Mayer, D.M.; Nurmohamed, S.; Treviño, L.K.; Shapiro, D.L.; Schminke, M. Encouraging employees to report unethical conduct internally: It takes a village. Organ. Behav. Hum. Decis. Process. 2013, 121, 89–103.
24. Tractenberg, R.E.; McCarter, R.J.; Umans, J. A Mastery Rubric for clinical research training: Guiding curriculum design, admissions, and development of course objectives. Assess. Eval. High. Educ. 2010, 35, 15–32.
25. Stevens, D.D.; Levi, A.J. Introduction to Rubrics: An Assessment Tool to Save Grading Time, Convey Effective Feedback and Promote Student Learning; Stylus Publishing: Portland, OR, USA, 2005.
26. Tractenberg, R.E.; FitzGerald, K. A Mastery Rubric for the design and evaluation of an institutional curriculum in the responsible conduct of research. Assess. Eval. High. Educ. 2012, 37, 1003–1021.
27. Tractenberg, R.E.; FitzGerald, K.T. Responsibility in the conduct of quantitative sciences: Preparing future practitioners and certifying professionals. Presented at the 2014 Joint Statistical Meetings, Boston, MA, USA. In Proceedings of the 2015 Joint Statistical Meetings, Seattle, WA, USA; pp. 4296–4309.
28. National Institutes of Health. Update on the Requirement for Instruction in the Responsible Conduct of Research; NOT-OD-10-019; 2009. Available online: http://grants1.nih.gov/grants/guide/notice-files/NOT-OD-10-019.html (accessed on 25 January 2012).
29. Sadler, D.R. Three in-course assessment reforms to improve higher education learning outcomes. Assess. Eval. High. Educ. 2015.
30. Tong, A.; Sainsbury, P.; Craig, J. Consolidated criteria for reporting qualitative research (COREQ): A 32-item checklist for interviews and focus groups. Int. J. Qual. Health Care 2007, 19, 349–357.
31. Santa Clara University. Ethical Reasoning. Available online: http://www.scu.edu/ethics/ (accessed on 29 November 2009).
32. Messick, S. The interplay of evidence and consequences in the validation of performance assessments. Educ. Res. 1994, 23, 13–23.
33. Campbell, D.T. Degrees of freedom in the case study. Comp. Political Stud. 1975, 8, 178–193.
34. Woodside, A.G. Case Study Research: Theory, Methods, Practice; Emerald Group: Bingley, UK, 2010.
35. Tractenberg, R.E. Degrees of Freedom Analysis in educational research: Ensuring the capstone project functions as assessment. 2016; in review.
36. Hutchings, P.; Kinzie, J.; Kuh, G.D. Evidence of student learning: What counts and what matters for improvement. In Using Evidence of Student Learning to Improve Higher Education; Kuh, G.D., Ikenberry, S.O., Jankowski, N.A., Eds.; Jossey-Bass: Somerset, NJ, USA, 2015; pp. 27–50.
37. Boud, D. Assessment for developing practice. In Education for Future Practice; Higgs, J., Fish, D., Goulter, I., Loftus, S., Reid, J.-A., Trede, F., Eds.; Sense: Rotterdam, The Netherlands, 2010; pp. 251–262.
38. Antes, A.L.; Wang, X.; Mumford, M.D.; Brown, R.P.; Connelly, S.; Devenport, L.D. Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Acad. Med. 2010, 85, 519–526.
39. Maclellan, E.; Soden, R. Psychological knowledge for teaching critical thinking: The agency of epistemic activity, metacognitive regulative behaviour and (student-centred) learning. Instr. Sci. 2012, 40, 445–460.
40. McKeachie, W.J.; Svinicki, M. McKeachie’s Teaching Tips, 12th ed.; Houghton Mifflin: Boston, MA, USA, 2011.
41. Schmaling, K.B.; Blume, A.W. Ethics instruction increases graduate students’ responsible conduct of research knowledge but not moral reasoning. Account. Res. 2009, 16, 268–283.
42. Hutchings, P. Aligning Educational Outcomes and Practices; Occasional Paper No. 26; National Institute for Learning Outcomes Assessment (NILOA), University of Illinois: Urbana, IL, USA; Indiana University: Bloomington, IN, USA, 2016.
Table 1. Results of the Content Analysis of Sustainability Survey responses.
Major themes from responses, grouped by sustainability dimension: Lifelong Learning (Work; Career; Daily Life), Change Learning (Improved Reasoning; Improved Expression), Personal Development (Enhanced Reflection; Focused on Career), De/Reconstruction (Work; Career; Reasoning). Each mark below represents one theme identified in that survey’s narrative responses.
Survey 1: 1 1 1 1
Survey 2: 1 1 1 1
Survey 3: 1 1 1 1
Survey 4: 1 1 1 1
Survey 5: 1 1 1 1 1
Survey 6: 1 1 1 1
Survey 7: * 1 1 1
Survey 8: 1 1 1 1
Survey 9: 1 1 1 **
Totals by theme: Work = 5, Career = 1, Daily Life = 2 (Lifelong Learning); Improved Reasoning = 7, Improved Expression = 2 (Change Learning); Enhanced Reflection = 7, Focused on Career = 3 (Personal Development); Work = 3, Career = 2, Reasoning = 3 (De/Reconstruction).
Notes: * No text response, but the item was endorsed (part 1 of each item); ** Respondent was unsure that this course led to de/reconstruction, so gave no text response but did endorse the item.
Table 2. Degrees of Freedom Analysis: do students perceive the four sustainability elements?
Sustainability Element | Respondent: 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Total
Lifelong learning | 1 * | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9
Change learning | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 9
Personal development | 1 | 1 | 1 | 1 | 1 ** | 1 | 1 | 1 | 1 | 9
De/reconstruction | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 *** | 9
Totals | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 4
Notes: * The item was endorsed, but no narrative response was included; ** two different themes emerged from this single narrative response, the only case in which this happened across all narrative responses; *** The item was endorsed, but the respondent was not sure that this course was the cause of the perceived sustainability element.
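To make the tabulation behind Table 2 concrete, the following minimal sketch is ours and purely illustrative (the uniform endorsement values are placeholders standing in for coded survey responses, not the study’s raw data):

```python
# Illustrative sketch (ours, not part of the published analysis):
# tallying endorsements into a Degrees of Freedom Analysis table.
# The response values are placeholders, not the study's raw data.

ELEMENTS = ["Lifelong learning", "Change learning",
            "Personal development", "De/reconstruction"]

# One dict per respondent: element -> True if the item was endorsed.
responses = [{e: True for e in ELEMENTS} for _ in range(9)]

# Row totals: how many respondents endorsed each sustainability element.
row_totals = {e: sum(r[e] for r in responses) for e in ELEMENTS}

# Column totals: how many elements each respondent endorsed.
col_totals = [sum(r[e] for e in ELEMENTS) for r in responses]

print(row_totals)  # {'Lifelong learning': 9, ..., 'De/reconstruction': 9}
print(col_totals)  # [4, 4, 4, 4, 4, 4, 4, 4, 4]
```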
