University of Minnesota Morris


Assessment System

In 2001, the unit created a two-year assessment review plan in consultation with school district partners, the UMM Teacher Education Committee, and the UMM Assessment of Student Learning Committee. The unit has worked conscientiously to meet the goals of the plan; we have succeeded in many areas and continue to evaluate and improve our assessment system. The plan addressed three categories of work:

  • Review and improvement of current assessment plans: creating a system of review to ensure assessment of state standards, developing transition points (ElEd and SeEd) and key assessments, and creating a system to review and improve the validity and reliability of assessments.
  • Assessment of candidate effect on student learning: creating and implementing a reliable, valid, and systematic assessment.
  • Creation of an organized system of data collection, analysis, and evaluation.

Evaluation of the assessment system is ongoing and inclusive. The small size of the UMM Teacher Education Program faculty allows us to have an assessment committee of the whole. As a unit, we discuss our system, the assessment measures, data results, and the implications of the data. Based on these discussions, we have adapted assignments and rubrics, held reliability sessions prior to scoring shared assignments, created new assessment measures, and made changes to courses. We also consult with the two members of our support staff about ways to improve the collection, storage, and dissemination of the data. The support personnel are critical to the success of our system evaluation.

We also seek external feedback in evaluation of our work. From 2001 to 2003, the Teacher Education Program Assessment Committee (TEPAC) provided multiple perspectives and areas of expertise in the review and development of the assessment system. Members of TEPAC included representatives from several constituencies: TEP faculty, the UMM Teacher Education Committee, UMM Assessment of Student Learning Committee, the UMM Teacher Education Advisory Council (a group of teachers, administrators, and other school personnel), and UMM teacher education candidates. This group provided invaluable input and feedback in review of current practice and especially in the creation of a standardized assessment of candidate impact on student learning.

We hold annual meetings with the Teacher Education Advisory Council, an invited group of school partners and higher education faculty. These meetings focus on program improvement: UMM TEP faculty present information about changes in our program, reports on our progress and performance, and new information about state requirements or initiatives regarding teacher education. We also solicit feedback and suggestions.

Our higher education faculty colleagues also provide feedback and support for our assessment system. The UMM Assessment of Student Learning Committee monitors all program assessments and is currently leading assessment activities for our institution’s regional accreditation with the Higher Learning Commission. Four of the tenured/tenure-line teacher education faculty members serve on subcommittees for the self-study process, two as subcommittee chairs. The UMM Teacher Education Committee plays a supporting role in our assessment system: this committee of higher education faculty reviews the teacher education programs and provides feedback and suggestions, especially related to content preparation.

Candidate Proficiencies

The assessment system is centered on the conceptual framework and on the state standards, which are themselves based on national standards. Goals, instruction, and assessments are carefully aligned throughout the system. First, the conceptual framework is aligned to the standards of effective practice (Conceptual Framework Alignment). Next, as shown in the SEP Course Alignment (ElEd and SeEd), the standards are assigned to specific courses. Within course syllabi, assignments and assessments are linked to the relevant standards of effective practice and content standards. Finally, key assessments and follow-up surveys address the goals of the program and the state standards.

The Key Assessments Collection, Analysis, and Dissemination Plan describes and guides the unit’s assessment procedures. The elementary education and secondary education programs share assessments, with variations due mostly to program length. For decision-making purposes, all data are reviewed by the entire faculty of each program. Assessment results are also shared with the entire teacher education faculty for data analysis and discussion. Because the two programs (elementary and secondary) are closely related, with candidates combined in the prerequisite introductory course and the final professional development course, assessment results inform both programs. In keeping with the institution’s assessment system, program data are also submitted to the UMM Assessment of Student Learning Committee for public dissemination. Aggregated unit data are shared with candidates and school partners at advisory meetings. We continue to strive for improved dissemination of our assessment results and are improving our website for that purpose.

Key assessments are aligned to program standards:

  • GPA data overall and in content areas provide evidence of subject matter knowledge.
  • Praxis I and Praxis II scores assess subject matter mastery and professional pedagogy.
  • Performance-based assessments (including summative evaluations of student teaching, reflective portfolio, and an analysis of student learning assignment) align to Minnesota Standards of Effective Practice and provide evidence of candidate proficiency in impacting student learning.
  • Follow-up surveys of graduates and employers are aligned to performance-based assessments and program standards.

Other assessments, especially surveys of school partners, are conducted as needed.

The unit works to ensure that assessment procedures are fair, accurate, consistent, and free of bias. First, the TEP faculty members have aligned course and program assessments to state standards and the program’s conceptual framework. This thorough alignment supports consistency across courses and establishes validity in that goals, instruction, and assessments are aligned.

All key assessments are created and reviewed by the teacher education faculty as a whole. This inclusive process ensures that assessments are clear and well understood. It allows faculty members to understand the complete assessment system and recognize individual roles in assuring ongoing accuracy and alignment.

Multiple measures also help to establish accuracy and freedom from bias. Key assessments include standardized examinations, candidate self-assessments and reflections, performance tasks in field and clinical experiences, and follow-up surveys.

Candidate performance is evaluated by both cooperating teachers and university supervisors. Evaluation procedures are reviewed in a number of ways in order to establish consistency, fairness, and accuracy. Professional education faculty members who supervise student teachers review summative evaluation procedures at teacher education and discipline meetings. Discipline coordinators review the procedures with adjunct supervisors. Cooperating teachers receive information about the evaluation in the handbook, from the university supervisor, and from the discipline coordinators. A midterm evaluation of student teaching is also conducted to provide discussion points and identify areas of scoring inconsistency. On 2008 summative evaluations, scores assigned by cooperating teachers and university supervisors were within one rating level nearly 100% of the time. Scores on individual items were identical approximately half of the time, with especially strong agreement on items related to reflection, collaboration, and professionalism.

The faculty conducts ongoing analysis of key assessments and scoring guides. Assignments have been improved based on discussions of data and efforts to improve scoring consistency. Reliability sessions are conducted prior to scoring the Analysis of Student Learning and the final portfolios. In these sessions, faculty first review and discuss the scoring rubric. Samples are then scored individually, and results are shared. The process continues until reviewers assign the same scores to the samples.

Program Improvement

The UMM Teacher Education Program uses several assessments and evaluations to help manage and improve our operations and programs.

Candidate data are systematically collected to evaluate program elements and operations. At the course level, candidates complete the Student Opinion of Teaching (SOT) form for every class. (Beginning Fall 2008, the new form will be the Student Rating of Teaching.) These evaluations of teaching are reviewed by the academic dean and the division chair, and then given to the faculty member, who may use the information to make improvements to courses. Adjunct faculty members are also assessed by the candidates with the SOT. Candidate performance on key assessments is also used to judge program and unit effectiveness. In our ongoing data analysis, we discover performance trends that lead us to make changes at the course, program, and unit levels. For example, low candidate scores on portions of their analysis of student learning led to significant changes in the assignments and the scoring rubric for the next year. Special surveys also provide opportunities for candidates to give feedback on the program and unit. After revising and rescheduling the Analysis of Student Learning (based on candidate performance data), we conducted the 2008 survey of graduating seniors to collect their opinions and perspectives on the process. Their favorable comments supported our decision to continue the assignment, and their suggestions informed us of the need to clarify expectations.

The quality of faculty members and their instructional performance are part of the overall success of the programs. Tenured and tenure-line faculty members are evaluated as prescribed in the institution’s procedures for promotion and tenure. In 2007, the unit proposed new Criteria and Evaluation Procedures in line with institution and system requirements and expectations (currently awaiting UM system approval). Prior to tenure, faculty members create a portfolio to provide evidence of their success in teaching, service, and research. Tenured and tenure-line faculty members submit updated vitae annually and, under the unit’s proposed procedures, also write professional goals for the year. These are shared with the division chair and provide a guide for productivity, growth, and assessment. We are developing additional assessment procedures to evaluate the performance of adjunct faculty members and to use the resulting information for program improvement. This may include lesson observations by the discipline coordinator.

The data from follow-up studies are used for improvement at the program and unit levels. Annually, graduates and employers complete surveys that focus on performance. The graduates also rate the value of specific program components. These summarized data are distributed to the faculty at the program and unit levels for review, analysis, and discussion.

Feedback from school partners is solicited systematically both formally and informally. Every year, university supervisors meet with individual cooperating teachers to ask specific questions about the program, the performance of the candidate, communication, and other topics determined by the faculty. At semi-annual teacher education advisory council meetings, feedback and suggestions are also solicited, discussed, and used in program analysis. Occasional surveys are also administered to assess items of interest to the program and to provide an open opportunity for school professionals to share comments, concerns, or suggestions.

At the institutional level, data are also collected and analyzed. The institution collects data from a Graduation Exit Survey. This survey is given to graduates on all of the University of Minnesota campuses, and the results are disaggregated by campus. The results reveal that UMM graduates are very satisfied with their experience at UMM and give the highest ratings in the UM system on many items. These include overall satisfaction with experiences at UMM, quality and availability of faculty, and program quality. UMM first year students and seniors also participate in the National Survey of Student Engagement (NSSE). The UMM NSSE survey results also suggest that UMM graduates are extremely satisfied with their entire experience at UMM. On most of the items, UMM results either match or exceed national norms. A goal for UMM TEP is to collaborate with institutional data analysts to disaggregate the data further in order to examine the results for teacher education candidates.