Performance Architecture: Evaluation – Begin at the End

In the course of designing, developing, and implementing a project, at what points do you conduct your evaluations? We assume you do something like this:

  • Evaluate the current situation as part of setting goals and objectives, scoping the project, scheduling, identifying your team—what Performance Architects term front-end analysis
  • Regularly evaluate as the project continues to check milestones, revisit objectives, redirect or modify as needed
  • Evaluate again post-implementation to determine if the objectives and goals have been met and if the change(s) worked

Working Backwards

As you have likely experienced, not all evaluations yield the information we need to accurately determine if a project is successful. It is all too easy to feel good about the information an evaluation collects and then realize that we still don't know if what we've implemented is working.

Performance = Activity + Results

A common reason why evaluations can fall short is an incomplete understanding of what Performance is. If an employee attends training on a new system and shows up every day to learn and practice, that is an important, measurable Activity. But can the employee also correctly use the system to do work? That is the Result, and evaluations often stop at measuring the Activity rather than going further to identify and measure the Result.
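To make the distinction concrete, here is a minimal sketch in Python. The record type and its field names are our own hypothetical illustration, not part of any standard; the point is simply that an evaluation built on data like this captures both the Activity and the Result:

    from dataclasses import dataclass

    @dataclass
    class EvaluationRecord:
        """One learner's evaluation, separating Activity from Results."""
        attended_all_sessions: bool  # Activity: showed up to learn and practice
        practice_hours: float        # Activity: time spent practicing
        correct_tasks: int           # Result: work performed correctly in the new system
        attempted_tasks: int         # Result: all work attempted in the new system

        def summary(self) -> str:
            activity = "full attendance" if self.attended_all_sessions else "partial attendance"
            accuracy = self.correct_tasks / self.attempted_tasks if self.attempted_tasks else 0.0
            return f"Activity: {activity}; Result: {accuracy:.0%} of tasks performed correctly"

    # Measuring only the first two fields measures Activity alone; the Result
    # fields tell us whether the learner can actually do the work.
    print(EvaluationRecord(True, 12.0, correct_tasks=45, attempted_tasks=50).summary())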

Some other obstacles to building successful evaluations that produce actionable information include:

  • Incomplete or unclear objectives for the evaluation
  • Failure to evaluate at critical junctures
  • Flawed evaluation methodologies
  • Asking questions without knowing what you will do with the answers
  • Add your own here

Begin at the End

To prevent unpleasant surprises in evaluation, Performance Architects recommend building your evaluations at the beginning as part of your project framework. In our experience this best practice, though challenging, yields multiple benefits. It:

  • Aligns the desired results of your project with those of the evaluation(s)
  • Provides another form of progress check to ensure that you can use the evaluation information to sharpen the focus of your work as the project moves ahead
  • Ensures that you can work with the information gathered in the evaluation to continuously improve your project results

Example: Yellow Belt Training Evaluation

With the proliferation of Six Sigma training available on the Internet, we looked at examples of training evaluations used in some of these programs. Here is one from a Yellow Belt Training program:

[Sample evaluation form: Rate Yellow Belt Training]

As you read the evaluation, did you ask yourself:

  • What information will this provide?
  • What decisions will I be able to make based on the information?
  • What does the evaluation tell me about the learner's ability to use skills and knowledge from the training?

Admittedly, the sample evaluation asks questions typical of many post-training happy sheets. This is a polite term for evaluations that gather information about how learners felt about their training rather than finding out what they can now do differently or better on the job. Evaluations like this are more common than we'd like. Developing an evaluation of any kind—for a training program, a presentation, a project phase, or any other aspect of work—is a rigorous process that requires careful thought. The designer of this evaluation would have been greatly helped by some guidance in building an effective one.

Evaluating an Evaluation

To avoid the pitfalls of an ineffective evaluation, consider a tool to guide you in this work, one that will help you design evaluations that don't miss anything important. We offer two such tools: the 10 Criteria for Evaluating Six Sigma Projects and the Learning-Transfer Evaluation Model.

Let's begin with an abbreviated version of the 10 Criteria for Evaluating Six Sigma Projects. While this tool speaks to Six Sigma projects, the criteria can be applied to other business process evaluation designs. We find this tool especially useful because it describes specifically what does/does not constitute successful project work. See the complete tool at https://www.isixsigma.com/implementation/project-selection-tracking/10-criteria-use-evaluating-six-sigma-projects/


10 Criteria to Use for Evaluating Six Sigma Projects

Thomas Bertels and Arne Buthmann

A relatively simple 10-point checklist can be used for ongoing project evaluation at specific milestones as well as part of the lessons learned exercise after project completion. Anticipating potential project failures also can help drive an effective project selection.

1. Link to Strategic Imperatives

  • Low – The project has no visible impact on any of the key metrics for the organization.
  • Medium – It is not clear exactly how the project will help impact key metrics.
  • High – The project is built into the strategic plan and the goals/objectives of the organization.

2. Application of Six Sigma Tools

  • Low – The project team has neither a thorough understanding of the individual tools nor has it followed a logical and consistent thought process.
  • Medium – The team has followed a logical thought process. Most tools have been used correctly.
  • High – A review of the project demonstrates appropriate use of tools in the Six Sigma toolkit.

3. Active Sponsor Engagement

  • Low – The sponsor has had only marginal involvement with few interactions with the team.
  • Medium – The sponsor's engagement was primarily reactive. This project is not a high priority.
  • High – Highly visible engagement of the sponsor has been demonstrated throughout.

4. Team Actively Engaged

  • Low – The team leader is the main force. The team members have no clear understanding of the process and the tools being used.
  • Medium – There is a visible lack of engagement among parts of the team.
  • High – Work is distributed among team members according to interest and capability.

5. Broad Organizational Awareness of the Project

  • Low – The project is invisible to the rest of the organization. There is no formal communication plan.
  • Medium – Despite a communication plan, there is very little awareness of what the team is trying to accomplish.
  • High – Almost every member of the organization is aware of the project and understands how it will impact his or her area of responsibility.

6. Project Delivered the Anticipated Results

  • Low – The deliverables of the project do not meet the expectations laid out in the charter.
  • Medium – The deliverables fall short of expectations. The project sponsor agreed to move forward with the project regardless of this issue.
  • High – The project delivered the promised results.

7. Project Completed on Time

  • Low – While the project was eventually completed, the overall duration far exceeded the initial schedule.
  • Medium – The team has been struggling to complete specific phases.
  • High – The team completed the project within the allotted time and the project leader has managed the project effectively.

8. Successful Transition of Ownership to Process Owner

  • Low – No process owner has been identified and a formal hand-off has not occurred.
  • Medium – There are disagreements between the team and the process owner on how to manage the process once the team dissolves.
  • High – The process owner has accepted responsibility for the changes implemented by the team and is using the new methods and control systems to continuously improve the process.

9. Improvement Sustained Over Time

  • Low – The data suggests that either the changes introduced by the team have not been adopted by the organization or the team has failed to address the true root cause.
  • Medium – Overall the process performance is significantly better compared to the baseline of the project, but not all of the changes have been adopted by the organization.
  • High – The process owner is actively engaged in managing the new process and is driving continuous improvement efforts to extend the benefits already attained.

10. Replication of Results

  • Low – The team has not conducted a thorough analysis of whether and how the results of this project could be replicated.
  • Medium – The team has identified opportunities for replicating the results of the original project but does not have a comprehensive plan for how the organization can make this happen.
  • High – The team has developed a thorough plan that not only shows how the improvements could be replicated but also who will be involved. (isixsigma.com, edited)
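To show how the checklist might be put to work at a milestone review, here is a brief sketch in Python. The Low/Medium/High scale and the criterion names come from the tool itself; the numeric scoring and the flagging of weak criteria are our own illustrative additions, not part of the published checklist:

    from enum import Enum

    class Rating(Enum):
        LOW = 1
        MEDIUM = 2
        HIGH = 3

    CRITERIA = [
        "Link to strategic imperatives",
        "Application of Six Sigma tools",
        "Active sponsor engagement",
        "Team actively engaged",
        "Broad organizational awareness",
        "Delivered the anticipated results",
        "Completed on time",
        "Transition to process owner",
        "Improvement sustained over time",
        "Replication of results",
    ]

    def milestone_review(ratings: dict[str, Rating]) -> None:
        """Summarize a review: average score plus any Low-rated criteria."""
        average = sum(r.value for r in ratings.values()) / len(ratings)
        print(f"Average rating: {average:.1f} / 3.0")
        for criterion in CRITERIA:
            if ratings[criterion] is Rating.LOW:
                print(f"  Needs attention: {criterion}")

    # Example: a mostly healthy project with two weak spots flagged for action
    ratings = {c: Rating.HIGH for c in CRITERIA}
    ratings["Active sponsor engagement"] = Rating.LOW
    ratings["Broad organizational awareness"] = Rating.LOW
    milestone_review(ratings)

Run at each milestone, a tally like this turns the post-completion lessons-learned exercise into a comparison of snapshots rather than a reconstruction from memory.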

The Learning-Transfer Evaluation Model (LTEM)

The second model we offer is newly developed and designed for assessing learning evaluations. Like the 10 Criteria for Evaluating Six Sigma Projects, the LTEM is applicable to any project evaluation that involves the use of skills and knowledge in the performance of work tasks.

Our colleague, Will Thalheimer, developed the LTEM. Like all of Dr. Thalheimer's work, it is science-based. It is built to help users determine if their evaluation methods effectively provide valid feedback. (Thalheimer, p. 11)

We particularly like the organization of the LTEM because it illustrates what we advocate: begin at the end by developing your evaluations at the start of your project. In the LTEM, we work from Tier 8, which yields the most useful information, backward to Tier 1, which produces the least.

[Figure: The Learning-Transfer Evaluation Model]

Tiers 5 – 8

While your project may benefit from evaluations that address all eight Tiers, Tiers 5 – 8 offer the most payoff because they guide you to questions and measurements that will tell you how well your project's results match its goals.

Tier 5 – Decision Making Competence
You can evaluate Decision Making Competence during or immediately following learning by having learners make decisions in realistic scenarios. Evaluating again several days post-learning is the stronger test: learners who can still make the appropriate decisions at that point have achieved Decision Making Competence.

Tier 6 – Task Competence
Workers demonstrate Task Competence by making appropriate decisions and taking actions in two timeframes:

  • During training or immediately after
  • Several days after learning relevant skills and knowledge

Workers are considered Task Competent when they are still performing correctly several days after learning. However, having Task Competent workers does not guarantee they will consistently and correctly perform the task.

Tier 7 – Transfer
Transfer of learning occurs when the worker uses new skills and knowledge successfully to perform tasks on the job.

  • Assisted Transfer occurs when the worker is significantly supported in applying new skills and knowledge
  • Full Transfer occurs when the worker applies the learning fully and without prompting

Observing workers using their new skills and knowledge on the job at regular intervals is one way to objectively evaluate the success of transfer.

Tier 8 – Effects of Transfer
While we want learning transfer to occur, this Tier specifically asks us to:

  • Certify that learning has transferred to the job
  • Assess the results of the transfer, both positive and negative, as they impact other workers, the organization, the community, society, etc.

For example: Positive transfer occurs when a worker correctly uses the new skills and knowledge as part of a process and the results are improved by 30 percent. Negative transfer occurs when skills and knowledge are not successfully used and the process is not improved.

The remaining Tiers (1 – 4) in the LTEM demonstrate the weaknesses of the Yellow Belt training evaluation we looked at earlier. However, if you need the information these Tiers yield, by all means include them in your evaluations.
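Because the Tiers are ordered by the usefulness of the information they yield, even a very simple encoding of the model can sanity-check an evaluation plan before a project starts. In this sketch the tier names for 5 – 8 follow the model as described above; the checking function itself is our own hypothetical illustration:

    # Tiers 5 - 8 of the LTEM, the high-payoff range discussed above;
    # Tiers 1 - 4 are omitted here because they yield less useful information.
    HIGH_PAYOFF_TIERS = {
        5: "Decision Making Competence",
        6: "Task Competence",
        7: "Transfer",
        8: "Effects of Transfer",
    }

    def check_plan(planned_tiers: set[int]) -> None:
        """Warn when an evaluation plan never measures beyond Activity-style data."""
        reached = planned_tiers & HIGH_PAYOFF_TIERS.keys()
        if reached:
            top = max(reached)
            print(f"Plan reaches Tier {top}: {HIGH_PAYOFF_TIERS[top]}")
        else:
            print("Warning: no Tier 5-8 measure planned; this evaluation captures Activity only.")

    check_plan({1, 3})     # a typical happy-sheet plan -> warning
    check_plan({3, 6, 7})  # includes on-the-job measures -> "Plan reaches Tier 7: Transfer"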

10 Criteria Model vs. the LTEM

So which model will best help you build and then assess your project evaluations? It depends on the project you are evaluating and the results you are designing it to achieve.

Following is a comparison of the two models that makes their commonalities and differences readily apparent:

[Comparison table: the 10 Criteria Model and the LTEM]

Application Exercise

Consider a current or recently completed project and how you constructed or will construct its evaluations. In light of the two models we've explored here:

  • What are the primary similarities and differences between them?
  • What crucial evaluation information does management expect for your project?
  • Which model best supports the content of the evaluation(s) to be built for your project?
  • How might you combine elements from each model for a more valuable evaluation of your project?

Summary

Performance = Activity + Results. Performance Architects evaluate performance based on the Results we expect. Unfortunately, many people who construct evaluations for training programs, change management projects, and new business processes measure only the Activity of the workers trying out new skills and knowledge. When we include the measurement of the Results of the Activity, we significantly enhance the power of our evaluation.

Fortunately, there are models to help us assess the effectiveness of the evaluations we build. Two of these are:

  • 10 Criteria to Use for Evaluating Six Sigma Projects
  • The Learning-Transfer Evaluation Model

With some commonalities and some differences, these are valuable tools for improving existing project evaluations and constructing new ones.


References

Addison, R. and Haig, C. Performance Architecture: E-Value-ation – Measure What Matters. Retrieved from: https://www.bptrends.com/performance-architecture-e-value-ation-measure-what-matters/

Bertels, T. and Buthmann, A. 10 Criteria to Use for Evaluating Six Sigma Projects. Retrieved from: https://www.isixsigma.com/implementation/project-selection-tracking/10-criteria-use-evaluating-six-sigma-projects/

Thalheimer, W. (2018). The learning-transfer evaluation model: Sending messages to enable learning effectiveness. Retrieved from: https://WorkLearning.com/Catalog

Thalheimer, W. (2017). How effective are your smile sheets? Retrieved from: https://smilesheets.com/smile-sheet-diagnostic/

Yellow Belt Training Evaluation. Retrieved from: https://goleansixsigma.com/yellow-belt-training-evaluation/

Roger Addison & Carol Haig

Roger Addison has a Ph.D. in Educational Psychology from Baylor and is Certified in Performance Improvement Technologies (CPT). He is the co-author of Performance Architecture and an internationally respected performance improvement consultant. He is the founder and Chief Performance Officer of Addison Consulting. Previously he was the Senior Director of Human Performance Improvement for the International Society for Performance Improvement (ISPI), where he was responsible for educational programs and implementing performance improvement systems.

Carol Haig is a Certified Performance Technologist (CPT) and has more than 30 years of multi-industry experience partnering with organizations to improve their employees' performance. Carol is known for her superior skills in project management, analysis and problem/opportunity identification, and instructional design and facilitation. She has consulted with executives and line managers, established and managed training departments, trained trainers, written for professional publications, and mentored performance consultants. She is co-author of Performance Architecture.