Myth #3: Assessments are not essential to effective training

Both formative and summative assessments play a vital role in evaluating learner progress, providing feedback, and ensuring that training objectives are met.

  • Formative assessments are integrated throughout the learning experience to support retrieval practice in a low-stakes environment, meaning they have little or no impact on the overall score. Their purpose is to provide learners with targeted feedback on their progress and identify areas for improvement. For example, incorporating knowledge checks or practice activities allows learners to pause and confirm their understanding before advancing.

  • Summative assessments, on the other hand, are used to determine whether learners have achieved the intended learning outcomes or mastered the material. These assessments aim to evaluate learning for certification purposes and are often high-stakes.

Reality

  • Completing a workplace-based project or activity. Typically measured via direct observation and expert review. These assessments validate that the learner can apply their skills and perform on the job.

  • Completing a simulation, lab, or practice activity in a staged or simulated environment. Typically measured via AI or technology-mediated scoring and feedback. These assessments help confirm the development of capabilities and tangible skills. However, they don't guarantee that learners can transfer these capabilities to the workplace.

  • An activity that requires learners to answer questions mapped to specific competencies and learning outcomes. Typically measured via multiple-choice questions, open-ended questions, or decision-based branching scenarios.

    M. David Merrill recommends using multiple-choice questions for the following learning outcomes:

    • Identify components or parts. When you want learners to identify the location of parts with respect to the whole. For example, identify specific components of a technical architecture diagram.

    • Recognize good examples. When you want learners to distinguish examples from non-examples of recommended use cases. For example, identify appropriate and inappropriate use cases for serverless computing.

    • Make accurate predictions. When you want learners to predict a consequence given a set of conditions, or to infer faulted conditions from an unexpected consequence.

    Following are techniques for writing effective multiple-choice questions:

    • Multiple-choice questions are best suited for assessing simple learning outcomes, not complex tasks.

    • Each question should align with at least one terminal learning objective (TLO) or enabling learning objective (ELO).

    • Write each question (the stem) using a mini-scenario format to immerse learners in realistic contexts and relevant situations. 

    • Well-written multiple-choice questions use distractors, or incorrect answer choices, to identify learner misconceptions. Each answer choice (correct and incorrect) should address a specific, relevant concept or misconception.

    • Provide learners with a short (one- to two-sentence) explanation of why the answer they selected is correct or incorrect. Feedback for an incorrect answer should explain why the choice is wrong, for example by addressing the misconception behind it without revealing the correct answer.

  • Reflecting on one's beliefs about one's current knowledge and skill levels. Typically measured via surveys or interviews.

This section describes different assessment types, ordered from highest validity of performance measurement (workplace-based assessment) to lowest validity and accuracy (self-assessment).
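The distractor-and-feedback guidance above can be sketched as a small data structure. This is an illustrative example only, not part of any real quiz engine: the `Choice` and `Question` names, the scenario stem, and the `grade` helper are all hypothetical, showing how each answer choice might map to a specific misconception and carry its own feedback.

```python
from dataclasses import dataclass


@dataclass
class Choice:
    text: str
    is_correct: bool
    misconception: str  # the concept or misconception this choice targets
    feedback: str       # one- to two-sentence explanation shown on selection


@dataclass
class Question:
    stem: str           # mini-scenario that frames the question
    objective: str      # the TLO/ELO this item assesses
    choices: list[Choice]

    def grade(self, index: int) -> tuple[bool, str]:
        """Return whether the selected choice is correct, plus its feedback."""
        choice = self.choices[index]
        return choice.is_correct, choice.feedback


# A hypothetical item following the guidance: a mini-scenario stem, an
# objective mapping, and distractor feedback that names the misconception
# without giving away the correct answer.
q = Question(
    stem=("Your team runs a batch job that executes for 20 minutes once a "
          "day. Which compute option best fits this workload?"),
    objective="Recognize appropriate use cases for serverless computing",
    choices=[
        Choice(
            text="A short-lived serverless function",
            is_correct=False,
            misconception="Serverless fits every intermittent workload",
            feedback=("Incorrect. Serverless functions typically enforce "
                      "execution time limits that a 20-minute job may "
                      "exceed."),
        ),
        Choice(
            text="A scheduled container or VM-based job",
            is_correct=True,
            misconception="",
            feedback=("Correct. Long-running batch work fits scheduled "
                      "container or VM-based compute better than "
                      "time-limited functions."),
        ),
    ],
)

correct, feedback = q.grade(0)
```

Keeping the misconception alongside each distractor makes it easy to audit whether every incorrect choice earns its place, rather than serving as filler.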


Resources