All faculty instructors have engaged in assessment at some point. There is no simple formula for assessment; the strategies you use depend on what you are assessing. Before considering specific strategies, ask yourself: what do you want to know, and why do you want to know it? Faculty, departments, university administration, and external funding and accreditation agencies use assessments to document learning, improve instruction, and measure the success of university programs and curricula. While assessment takes place at several levels (student, course, program, institutional), this guide focuses on how to demonstrate success in achieving course goals in the online environment.

“Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning.”
The University of Oregon (Huba & Freed, 2000)

Course assessment is valuable to the teaching process because it allows you to identify what is “working” in the classroom, make informed revisions in future courses, and document success for funding agencies and other stakeholders. (Poe & Stassen, n.d.)

Assessment tools can be used to:

  • evaluate how well your course facilitated learning, with the aim of demonstrating teaching quality to outside actors (funding and accreditation agencies, merit review committees). This type of assessment, which educators call summative assessment, provides a snapshot of student performance in the course.
  • evaluate students’ ongoing progress toward learning goals, with the aim of improving teaching strategies (Harlen & James, 1997). This type of assessment, called formative assessment, tells the instructor whether students are keeping pace with content and skills goals throughout the quarter. (Stassen et al., 2001, p. 18)

Assessing online courses draws on the same theories and techniques developed for the face-to-face learning environment. Effective teaching can be defined, very simply, as methods that promote student learning: all of the instructor behaviors that foster student achievement of the instructor’s and/or the institution’s educational objectives. Ideally, students will share at least some of these objectives. This definition of effective teaching includes curriculum and course development, advising, and supervision of student research as well as classroom performance. Given this broad definition, no single approach is sufficient for evaluating effective teaching. Rather, student ratings, self-reviews, peer evaluations, and objective criteria such as student performance are all useful tools for evaluating different aspects of teaching.

The most important sources of data for measuring effective teaching fall into three main types: students, peers, and the instructor (through self-reflection). Since measuring teaching is not an exact science, the more varied the data sources, the more useful the measurement is likely to be.

Important questions asked in choosing strategies and measures of assessment include:

  • What are your overarching goals for the course?
  • What are the desired learning outcomes?
  • What types of teaching methods would facilitate student mastery of learning outcomes?
  • What types of assignments, quizzes, exams, or exercises ask students to engage the desired skills and content?

Effective teaching can be measured through different types of direct and indirect assessment.

Direct Assessment

Direct assessment analyzes student performance and course-work as evidence of learning.  Direct assessment strategies measure the achievement of course outcomes and mastery of content.  They provide concrete and measurable evidence of what students have and have not learned from course instruction.

Direct assessment strategies should reflect, or “align” with, the following four elements:
  • over-arching goals: broad, generalized statements about what is to be learned
  • desired learning outcomes (DLOs): narrow, specific statements about concrete, measurable skills or content to be gained in the course
  • teaching methods: teaching strategies aimed at building desired knowledge or skills
  • student assessment strategies: tools and strategies that analyze student performance and products as evidence of teaching effectiveness (e.g., assignments, quizzes, exams)

Any course that you have ever created includes all of these elements; however, they may not be articulated in ways that inform your curriculum or demonstrate your teaching success to others.

The following matrix [Table A: Learning Outcome Assessment Matrix] will guide you in the process of aligning over-arching goals, desired learning outcomes, teaching methods and assessment strategies for your course. Table B is an example of a completed, yet simplified matrix [Table B] for a statistics course organized according to these principles. Table C is a more comprehensive sample of a history course [Table C].  Keep in mind that the matrix for your course will reflect the specific goals and methodologies of your discipline.

    Table A: Learning Outcomes Assessment Matrix (download here)


    Table B: Learning Outcome Assessment Matrix – Statistics Sample (download here)

    Table C: Learning Outcome Assessment Matrix – History Sample (download here; download template here)

When developing measurable learning outcomes, consider including a combination of lower- and higher-order thinking skills. For example, the statistics course [Table B] identifies two desired learning outcomes that are based on recall and memorization of basic content and concepts – what are considered lower-order thinking skills. The third desired learning outcome is based on the application of course content and concepts, in an effort to measure higher-order thinking skills. The sample course in Table C provides a greater balance of lower- and higher-order thinking in its desired learning outcomes, which would be more representative of a UCLA-level course. This approach draws on Bloom’s Taxonomy, a framework for organizing the kinds of knowledge and intellectual engagement expected from students. Bloom’s Taxonomy provides a hierarchy of skills that students develop in various university courses (see link).

Direct Assessment Tools

  • Pre- and Post-Assessments

    Pre- and post-assessments measure student learning by comparing results from tests conducted at the start and end of the course. This type of assessment identifies progress and/or mastery of desired learning goals among students with diverse educational backgrounds, and assesses the “value-added” by the course. Pre- and post-assessments should reflect the general goals of the course and align with the specific set of skills and content identified in the course’s learning objectives.

    Pre- and post-tests provide information that demonstrates student improvement as a result of learning during the course.  In the above diagram, the pre- and post-tests reveal the progress of hypothetical students A and B toward learning objectives.  While Student B gained greater mastery of course skills and content in comparison to Student A, Student B also began the course with a more comprehensive background in the material and relevant skill-sets. Student A, while not achieving the same level as Student B, actually showed greater improvement as a result of teaching strategies and course instruction.

    Table D provides a sample of pre- and post-test assessment content aligned with the overarching goals, desired learning outcomes and teaching methods in a learning outcome assessment matrix.

      Table D:  Learning Outcome Assessment Matrix / Pre-Post Test (download here)
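
    The value-added comparison above can be made concrete with a normalized-gain calculation (Hake’s gain), a common way to compare pre/post scores across students with different starting points. The scores below are hypothetical, chosen only to mirror the Student A / Student B scenario described above; they are not drawn from any actual course data.

```python
def normalized_gain(pre: float, post: float, max_score: float = 100.0) -> float:
    """Hake's normalized gain: the fraction of the possible improvement
    (max_score - pre) that the student actually achieved."""
    return (post - pre) / (max_score - pre)

# Hypothetical scores mirroring the scenario in the text.
student_a = {"pre": 20.0, "post": 65.0}   # weaker background, large improvement
student_b = {"pre": 70.0, "post": 85.0}   # stronger background, smaller improvement

gain_a = normalized_gain(student_a["pre"], student_a["post"])  # 45/80 = 0.5625
gain_b = normalized_gain(student_b["pre"], student_b["post"])  # 15/30 = 0.5
```

    Although Student B finishes with the higher raw score, Student A’s normalized gain is larger, which is exactly the sense in which Student A “showed greater improvement as a result of teaching strategies and course instruction.”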

  • Embedded Assessment Using Rubrics

    Rubrics are useful teaching tools for assessing student achievement of learning goals. Rubrics are performance standards that score individual components of student work against defined criteria. Rubrics can be used as grading guidelines and as a means of providing feedback on learning efforts. They communicate transparent expectations for student performance, pinpoint common weaknesses in work, and provide structured feedback. For sample rubrics, see here.

    In assessing student learning in online discussion, for example, it is extremely important to establish and communicate to students the rubric used to evaluate performance.  The following rubric offers a sample of clear guidelines for student participation in an online course.

    Table E: Rubric for Online Participation (download here)
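
    As a rough illustration of how a participation rubric translates criteria into a score, the sketch below computes a weighted average over per-criterion ratings. The criteria, weights, and 0–4 scale are hypothetical, not taken from Table E.

```python
# Hypothetical participation rubric: each criterion is rated 0-4,
# then combined using weights that sum to 1.0.
RUBRIC_WEIGHTS = {
    "frequency of posts": 0.25,
    "quality of argument": 0.40,
    "engagement with peers": 0.25,
    "timeliness": 0.10,
}

def rubric_score(ratings: dict) -> float:
    """Weighted average of per-criterion ratings on the 0-4 scale."""
    return sum(RUBRIC_WEIGHTS[criterion] * rating
               for criterion, rating in ratings.items())

score = rubric_score({
    "frequency of posts": 3,
    "quality of argument": 4,
    "engagement with peers": 3,
    "timeliness": 2,
})  # weighted average on the 0-4 scale (approximately 3.3)
```

    Publishing the weights alongside the criteria is one way to make the rubric’s expectations transparent: students can see exactly how much each dimension of participation contributes to the final score.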

  • Embedded Assessment Using Quiz Tools

    Quizzes assess students’ understanding of course material. They typically include a short list of multiple-choice, true/false, matching, and short-answer questions. Students are sometimes allowed to use notes and given a limited time to complete each question in the online system. These types of exercises offer instructors the opportunity to gauge comprehension of material and achievement of specific learning goals. (Mihram, n.d.) Instructors have also noted that frequent so-called “low-stakes” quizzes decrease testing anxiety and improve retention of course content. (Cherem, 2011) UCLA’s course management system, CCLE, offers a Quiz Tool that provides a variety of question types (from calculated to essay, matching, and true/false models) and modalities.

  • Embedded Assessment Using ePortfolios

    Faculty may also consider course-level ePortfolios (also known as electronic or digital portfolios) as a way to assess student work. ePortfolios are collections of electronic evidence demonstrating mastery of (or progress toward) knowledge and skills gained through course instruction. Like the traditional paper portfolio, an ePortfolio is composed of a selection of student work from throughout the course. Portfolios often also include indirect self-assessment activities, in which students consider their own achievement of learning outcomes. For examples of ePortfolio assignment prompts, see here. View a sample student portfolio from a Penn State course on Rhetoric and Civic Life. For more information on determining the audience, format, and rubric for evaluating student portfolios, contact the UCLA Office of Instructional Development.

  • Other types of direct assessment include traditional exams, essays, and final projects.

Using Grades As an Assessment Tool

Traditional grades provide few details that link students’ performance to individual teaching strategies, since they do not account for students’ prior skills and background. Grades are useful measures of assessment only if the instructor administers a pre-test to establish a baseline of students’ knowledge when entering the course. Even then, grades are imprecise measures of course success because they often reflect elements unrelated to learning outcomes, such as extra credit and penalties for absences.

Indirect Assessment

Indirect assessment analyzes how stakeholders (i.e., students, faculty, TAs, etc.) perceive the learning experience and achievement of course goals.  This type of assessment is satisfaction-based and relies on proxy signs that indicate learning.  Examples of indirect assessments include:

  • Surveys and Evaluations

    One way of gauging student learning and satisfaction is via anonymous mid-quarter student surveys and end-of-term evaluations of instruction. Mid-quarter evaluations can take a variety of forms: a simple survey asking students to describe what is working, what is not working, and suggestions for change can be conducted via the CCLE Survey Tool. The Survey Tool allows you to administer three types of online surveys: (1) Attitudes to Thinking and Learning Survey (ATTLS), aimed at encouraging self-reflection and evaluation of students’ learning styles; (2) Critical Incidents Survey, which asks students to review and comment on their engagement in class; (3) Constructivist Online Learning Environment Survey (COLLES), specifically designed to target students’ reflections on online learning. You may also design your own survey using a commercial online survey tool. Results from end-of-term online student evaluations conducted by the Evaluation of Instruction Program at the Office of Instructional Development are also useful measures of general student satisfaction with learning strategies and course design.

  • Minute Papers

    This evaluation tool can be used at the end of weekly modules to gauge student learning. It derives its name from the face-to-face version, which asks students to spend no more than one minute answering one or several short questions.

    The minute paper exercise asks students to respond briefly to a set of weekly questions including one or more of the following:

    1. What was the most important thing you learned during class?
    2. What unanswered questions do you have?
    3. What was the muddiest point for you?
    4. At what point this week were you most engaged as a learner?
    5. Summarize this week’s lesson in one sentence.
    6. What has been most helpful to you this week in learning the course materials?

    In the online environment, instructors may require that students complete these responses as text/word documents or via a CCLE Survey or Quiz.

  • Other types of indirect assessment include informal feedback strategies (incorporating specific questions into a lesson or the online course forum), focus groups, self-reflection assignments, and individual student interviews.


References

  • Angelo, T.A. & Cross, K.P. (1993). Classroom assessment techniques. San Francisco: Jossey-Bass.
  • Barchfeld-Venet, P. (2005).  Formative assessment: The basics. Alliance Access, 9(1), 2-3.
  • Cherem, B. (2011). Using online formative assessments for improved learning. Currents in Teaching and Learning, 3(2), 41-48.
  • Davis, B. G. (1993). Tools for teaching. San Francisco: Jossey-Bass. 
  • Lindholm, J. (2009). Guidelines for developing and assessing student learning outcomes for undergraduate majors. Retrieved from
  • Loeher, L. L., Haas, K., Valli-Marill, J., & Chang, J. (2006). UCLA Office of Instructional Development guide to evaluation of instruction.  Retrieved from
  • Mihram, D. (n.d.). Classroom assessment techniques. University of Southern California Center for Excellence in Teaching. Retrieved from
  • Mosteller, F. (1989). The ‘muddiest point in the lecture’ as a feedback device. On Teaching and Learning: The Journal of the Harvard-Danforth Center, 3, 10–21.
  • Pastor, V. (2011).  Best practices in academic assessment in higher education: A case in formative and shared assessment.  Journal of Technology and Science Education, 1(2), 25-39.
  • Poe, M., & Stassen, M. L. (n.d.). Teaching and learning online - communication, community, and assessment: A handbook for University of Massachusetts Amherst faculty.  Retrieved from
  • Race, P. (2013). The lecturer's toolkit: A practical guide to assessment, learning and teaching. London: Routledge.
  • Sadler, R. (1989). Formative assessment and the design of instructional systems. Instructional Science, 18, 119-144.
  • Simpson-Beck, V. (2011). Assessing classroom assessment techniques. Active Learning in Higher Education, 12(2), 125-132.
  • Stassen, M. L., Doherty, K., & Poe, M. (2001). OAPA handbook: Course-based review and assessment. UMass Amherst. Retrieved from
  • Wiggins, G. (1998). Educative assessment: Designing assessments to inform and improve student performance. San Francisco: Jossey-Bass.