Rubrics

What is a rubric? Rubrics assist the instructor in making explicit, objective, and consistent the criteria for performance that otherwise would be implicit, subjective, and inconsistent if only a single letter grade were used as an indicator of performance. Rubrics delineate what knowledge, content, skills, and behaviors indicate various levels of learning or mastery. Ideally, “grading” rubrics are shared with students before an exam, presentation, writing project, or other assessment activity. Awareness of what they are expected to learn helps students organize their work, encourages self-reflection about how and what they are learning, and allows opportunities for self-assessment during the learning process.

What are some of the criteria that may be used within a rubric to evaluate student work? Criteria can include sophistication, organization, grammar and style, competence, accuracy, synthesis, analysis, and expressiveness, among others.

What are the basic types of rubrics?

The TLT Group, via Penn State University’s Schreyer Institute for Teaching Excellence, clarifies the different types of rubrics in the following chart:

Holistic
  Purpose/Distinction*: To provide a single score based on an overall impression of learner achievement on a task.
  Focal Use: To provide overall evaluation guidelines that clarify how grades relate to performance/achievement, such as in course grades.
  View Samples: Course grading rubric; Presentation Rubric

Analytic
  Purpose/Distinction*: To provide specific feedback along several dimensions.
  Focal Use: To break assignments or scores down into separate components for grading (description, analysis, grammar, references, etc.).
  View Samples: Practicum Portfolio Rubric/Scoring Sheet**

General
  Purpose/Distinction*: To contain criteria that are general across tasks.
  Focal Use: Designed to provide general guidance as to expectations, such as for grading of written assignments.
  View Samples: Course grading rubric; Position Paper Scoring/Feedback Sheet**

Task-specific
  Purpose/Distinction*: Unique to a task/assignment.
  Focal Use: Designed to provide detailed guidance regarding a specific assignment or task.
  View Samples: Practicum Portfolio Rubric; Research Paper Scoring/Feedback Sheet
*For more information, please see the Schreyer Institute’s The Basics of Rubrics.
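
To make the holistic/analytic distinction above concrete, here is a minimal sketch in Python. The criterion names, level labels, and weights are hypothetical and chosen only for illustration; they are not drawn from the Schreyer Institute or TLT Group samples.

```python
# Minimal illustrative sketch: an analytic rubric scores each criterion
# separately, while a holistic rubric records one overall impression.
# All names, levels, and weights below are hypothetical examples.

LEVELS = {"unacceptable": 0, "developing": 1, "proficient": 2, "mastery": 3}

# Analytic rubric: each criterion is rated on the shared level scale and
# carries a weight toward the combined score.
ANALYTIC_WEIGHTS = {
    "organization": 0.25,
    "analysis": 0.40,
    "grammar_and_style": 0.20,
    "references": 0.15,
}

def analytic_score(ratings):
    """Combine per-criterion level ratings into one weighted score (0-3 scale)."""
    return sum(weight * LEVELS[ratings[criterion]]
               for criterion, weight in ANALYTIC_WEIGHTS.items())

def holistic_score(overall_level):
    """A holistic rubric assigns a single score from one overall impression."""
    return LEVELS[overall_level]

if __name__ == "__main__":
    ratings = {"organization": "proficient", "analysis": "mastery",
               "grammar_and_style": "developing", "references": "proficient"}
    print(round(analytic_score(ratings), 2))  # 2.2 on the 0-3 scale
    print(holistic_score("proficient"))       # 2
```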

The TLT Group also lists several steps for creating an assessment rubric:

  1. Identify the type and purpose of the rubric. Consider what you want to assess/evaluate and why (see matrix in the link above).
  2. Identify distinct criteria to be evaluated. Develop/reference the existing description of the course/assignment/activity and pull your criteria directly from your objectives/expectations. Make sure the distinctions between the assessment criteria are clear.
  3. Determine your levels of assessment. Identify your range and scoring scales. Are they linked to simple numeric base scores? Percentages? Grades or GPAs? (One illustrative way to map rubric points onto percentages and letter grades is sketched after this list.)
  4. Describe each level for each criterion, differentiating clearly between the levels of expectation. Whether holistically or specifically described, there should be no question as to where a product/performance would fall along the continuum of levels. (Hint: Start at the bottom (unacceptable) and top (mastery) levels and work your way “in.”)
  5. Involve learners in the development and effective use of the rubric. Whether it’s the first time you’re using a particular rubric or the 100th, engaging learners in the initial design or ongoing development of the assessment rubric helps increase students’ knowledge of expectations and makes them explicitly aware of what and how they are learning.
  6. Pre-test and re-test your rubric. A valid and reliable rubric generally takes time to develop. Each use with a new group of learners (or with a colleague) provides an opportunity to tweak and enhance the original rubric.
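
As a complement to step 3, the sketch below shows one hypothetical way to link a rubric total to a percentage and a letter grade. The maximum point value and grade cut-offs are invented for illustration; they are not prescribed by the TLT Group steps.

```python
# Hypothetical mapping from a rubric total to a percentage and a letter grade.
# The maximum points and grade cut-offs are illustrative only; actual scales
# vary by course, program, and institution.

MAX_POINTS = 12  # e.g. four criteria each rated on a 0-3 level scale

def to_percentage(total_points, max_points=MAX_POINTS):
    """Convert a raw rubric total into a percentage of the possible points."""
    return 100.0 * total_points / max_points

def to_letter(percentage):
    """Map a percentage onto a letter grade using example cut-offs."""
    for cutoff, letter in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if percentage >= cutoff:
            return letter
    return "F"

if __name__ == "__main__":
    pct = to_percentage(9.5)
    print(round(pct, 1), to_letter(pct))  # 79.2 C
```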

More examples of rubrics are available from the Association for the Assessment of Learning in Higher Education.

One Response to Rubrics

  1. Leanne McWatters says:

    Deleted text from this page:

    * Simple checklists can be used to record whether the relevant or important components of an assignment are addressed in a student’s work. Such a checklist might be used, for instance, to assess whether a laboratory report or writing sample contains all the assigned components. A checklist of this sort is categorical: it records whether an assignment’s specific requirements are present, but it doesn’t record quantitative information about the level of competence or relative skill level the student has demonstrated.
    * Simple rating scales record the level of student work or categorize it hierarchically. A simple rating scale, for instance, can indicate whether student work is deficient, adequate, or exemplary, or it can assign a numerical “code” to indicate the quality of the work. In most cases, when a numerical scale is used, it should contain a clear neutral midpoint (i.e., the scale should contain an odd number of rating points). However, survey designers should determine when this might not be appropriate; occasionally, such scales are intentionally designed without a midpoint in order to force a non-neutral response.
    * Detailed rating scales explicitly describe what constitutes deficient, adequate, or exemplary performance for each criterion. Detailed rating scales are especially useful when several faculty members are scoring student work, because they communicate common performance standards and therefore make the scores more consistent. It can be helpful to present detailed rating scales to students when an assignment is given or at the beginning of a semester. Departments may also want to consider sharing program rubrics with students as they begin their course of study. These scales will provide students with a clear description of what they are expected to learn and the criteria upon which their learning will be judged.
    * Holistic rating scales define deficient, adequate, or exemplary student work as an aggregate by assigning a single score to a constellation of characteristics that have been fulfilled to a substantial degree, rather than rating each criterion separately. Holistic rating scales are often used when evaluating student work that may vary so widely in form and content that the same criteria may not apply at all to some work. Capstone projects in an art program, for example, might vary so that they cannot all be judged using the same specific criteria. In that case, faculty could create a generic description of what constitutes exemplary work, adequate work, and so on, regardless of the individual project’s medium or focus.
