Examples of changes made to courses, programs, and the unit in response to data gathered from the assessment system
Initial Programs
Elementary and secondary (multiple and single) credential program faculty members examine PACT assessment data each year. (See Standard 2, Exhibit 5 for an example.) They look at candidate performance on each rubric criterion at three levels: across all programs, for elementary and secondary candidates separately, and by comparing the performance of two groups of secondary teacher candidates (traditional and Master's Credential Cohort). Across all candidates in 2008-2009 and 2009-2010, faculty in the Department of Learning and Teaching have noted that candidate performance is lower on the academic language criteria than on the other rubric criteria.
Faculty identified ways in which academic language could be taught more explicitly in EDUC 381C/581C and EDUC 384C/584C. The full-time faculty members responsible for those courses have made changes to course assignments and related assessments.
Advanced Programs
After using the clinical observation assessment instrument for four years, school counseling faculty asked in Fall 2010 to examine the data collected with the instrument, to ensure that the assessment items used for prepracticum, practicum, and internship fieldwork measured the knowledge, skills, and dispositions expected of school counseling candidates. The Director of Assessment Support summarized three years of data for faculty review. Faculty learned that not all of the items were observable in all of the fieldwork experiences; in fact, three items were never observed during the prepracticum. This led to a revision of the observation rubric used by prepracticum supervisors.
In 2008-2009 and 2009-2010, faculty in the Master of Education (M.Ed.) programs used assessment data to refine both the instruction for and the assessment of candidates' action research projects. They examined the 2007-2008 action research project assessment results and recognized that the rubric used to guide and assess the projects needed to be refined to specify the expectations for the AR projects more clearly.
The evolution of the current action research rubric illustrates how faculty in the Department of Learning and Teaching have used data to improve their programs and courses. First, they examined data from the 2007-2008 action research projects and determined that they needed a rubric that better captured all elements of an action research project. In Fall 2008, a team of faculty members worked with the Director of Assessment Support to draft a revised rubric. The draft was sent to other program faculty for edits and enhancement and underwent several revisions; all M.Ed. faculty approved the final version. The new rubric has been integrated into EDUC 580, where candidates develop their action research projects, and it appears in the Learning and Teaching Graduate Research Handbook. In addition, faculty decided that the first three criteria of the action research rubric would be used to assess the action research proposal in EDUC 500.
Candidate data from the 2008-2009 implementation of the rubric were examined to determine whether the previously used 75% cut score for passing was still reasonable. Faculty determined that, given the added rigor of the new action research criteria, a lower cut score needed to be set for 2009-2010, with a plan to raise the cut score in 2010-2011 and progressively higher in 2011-2012 and 2012-2013, when it would return to 75%. At that point, faculty will determine whether 75% should be maintained or a higher standard for candidate performance should be set. At the same time, faculty determined that all rubric criteria should be assessed on a 4-point scale, with varied weighting of the criteria. This change reduces bias in the assessment of action research projects, because evaluators score each criterion without knowing what the final weighted score will be.
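For illustration only, the short sketch below shows one way a weighted 4-point rubric score could be compared against a percentage cut score as described above. The criterion weights, example scores, and function names are hypothetical and do not come from the program's actual rubric; only the 4-point scale and the 75% threshold reflect the description in this section.

    # Hypothetical sketch: weighted rubric scoring against a percentage cut score.
    # The weights and scores below are invented for illustration; only the 4-point
    # scale and the 75% cut score come from the description above.

    CUT_SCORE = 0.75  # the 75% passing threshold

    def weighted_percentage(scores, weights):
        """Convert per-criterion scores (1-4 scale) into a weighted percentage."""
        max_points = 4
        earned = sum(s * w for s, w in zip(scores, weights))
        possible = sum(max_points * w for w in weights)
        return earned / possible

    # Example: five rubric criteria, two weighted more heavily than the others.
    scores = [3, 4, 2, 3, 4]
    weights = [2, 1, 1, 2, 1]

    pct = weighted_percentage(scores, weights)
    print(f"Weighted score: {pct:.0%} -> {'pass' if pct >= CUT_SCORE else 'below cut score'}")

Because each criterion is scored on the same 4-point scale before weights are applied, evaluators rate individual criteria without seeing the final weighted percentage, which is the bias-reduction feature noted above.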