
Creating and using supplier scorecards: a scorecard with well-developed performance indicators keeps the vendor focused on the customer's corporate goals.

Supplier scorecards are an integral piece of a supplier relationship management program.

Properly constructed and used, the scorecard assesses performance and customer satisfaction with individual suppliers, as well as compares suppliers across categories. The scorecard provides objective measurements of performance and indicates the supplier's conformance to requirements. The feedback from a scorecard frequently serves both as an incentive for a supplier's continuous improvement program and as input to the supplier's own quality programs.

Overview

The term "scorecard" is borrowed from the academic "report card" and is developed and used in much the same way. Students have several measures of performance within a subject: homework, quizzes, attendance, participation in class, and class behavior, all of which contribute to a subject "grade." There are several "subject" grades, of course, and all of the subject grades add up to an "overall grade."

In much the same way, we'd like to have measures of supplier performance: on-time deliveries, meeting service levels, customer satisfaction, responsiveness to problems, and so on. Some of these measures will be objective, such as response time and average availability. Others will be subjective, such as customer satisfaction. All of these measures, however, must be quantifiable in some way. These measures, which are usually composed of an objective and a target for the objective, are called performance indicators.

One frequent point of confusion with scorecards is the difference between performance indicators and another measure of supplier results: service levels. The usual distinction is that service levels are performance measurements written into the contract, usually with contractual consequences and/or penalties for failure to meet them. Service levels are usually more granular and are measured as a direct part of the contract.

In our comparison with a report card, service levels might correspond to questions on a test. Some examples of service levels are

* Time to resolution for severity-one errors: 95 percent of repairs will be completed within four hours (see the sketch after this list),

* System outages: No more than two monthly outages in any rolling six-month time frame, and

* System interruptions: No more than two interruptions to service that last longer than 10 minutes during systems availability in a two-month rolling period for all platforms.
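To make the first service level above concrete, here is a minimal sketch, in Python, of how such a measurement might be checked. The function name and data are hypothetical illustrations, not anything prescribed by this article.

```python
# Hypothetical sketch: check the severity-one service level listed above.
# Assumes repair times are recorded in hours; names and data are illustrative.

def meets_severity_one_sla(repair_hours, limit_hours=4.0, required_pct=95.0):
    """Return True if at least required_pct of repairs finished within limit_hours."""
    if not repair_hours:
        return True  # no severity-one errors in the period
    within = sum(1 for h in repair_hours if h <= limit_hours)
    return 100.0 * within / len(repair_hours) >= required_pct

# Example: 19 of 20 repairs within four hours is exactly 95 percent, which passes.
print(meets_severity_one_sla([1.5] * 19 + [6.0]))  # True
```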

Performance indicators are measurement criteria that may include summary components of the contract or factors that are apart from the contract. Performance indicators are the components of an overall scorecard.

Some examples of performance indicators are as follows:

* Product prices,

* Percentage of on-time deliveries in the last month,

* Customer satisfaction, and

* Percentage of defects and errors on delivered products.

As you can see, some performance indicators may also, in some form, be used as service levels.

The scorecard, then, presents the performance indicators and sums their individual scores into an overall rating of the supplier. Usually, a failing scorecard has no direct contractual consequence (unless one is built in) but has significant impact on the supplier-customer relationship.

Developing a Scorecard

The most important step is to choose the right criteria to measure. On-time deliveries are usually more important for suppliers delivering products than for those delivering services (although "on-time deliverables" are a consideration). Price may apply to all suppliers, as does customer satisfaction.

Performance against overall service levels applies only if the service levels have been written into the contract. Moreover, because the level of performance may affect the price paid for goods or services, different products and services may warrant different standards. If you are running a just-in-time inventory operation, it is critical that your stock arrives when it is needed: not too soon (storage costs) and not too late (plant shutdown). So your on-time delivery targets in this example might be 96 percent not too early and 99.5 percent not too late. However, if the supplies are not critical to core operations, the window might be broader, such as 90 percent of deliveries arriving on time.
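A minimal sketch of this windowed on-time measure follows, with assumed data shapes and tolerances rather than anything prescribed by the article:

```python
# Hypothetical sketch of the on-time-delivery window described above.
# Each delivery is represented by its deviation in days from the need date:
# negative values arrived early, positive values arrived late.

def on_time_percentage(deviations, max_days_early=2, max_days_late=0):
    """Percent of deliveries landing inside the allowed early/late window."""
    if not deviations:
        return 100.0
    on_time = sum(1 for d in deviations if -max_days_early <= d <= max_days_late)
    return 100.0 * on_time / len(deviations)

# Example: the three-days-early and one-day-late deliveries both miss the window.
print(on_time_percentage([-3, -1, 0, 0, 1]))  # 60.0
```

A non-critical supply category could simply widen max_days_early and max_days_late rather than holding deliveries to the tight just-in-time window.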

The first step is to determine the appropriate performance indicators for the category of supplier relationship. Column 1 of Table 1 (at the end of this article) lists several performance indicators that might be appropriate for any supplier who delivers commodity products--general inventory products, office furniture, desktop hardware equipment, and office supplies.

The second step is to determine the target ratings. These will vary by type of performance indicator. Some performance indicators will lend themselves to objective measures, e.g., 95 percent of the time, or 92 percent of the service levels met. Others will be more subjective, such as "Has the supplier provided good customer support?" See Column 2 of Table 1 for examples.

The final step is to determine how important each performance indicator is relative to the others. This can be done by assigning weights to each indicator. As a rule, it is desirable to have the weights add up to 100 percent. Then, when the ratings are applied, a formula converts each rating to a score, and the scores add up to a maximum of 100 points. This allows comparison across suppliers and categories of products, even when a different number of performance indicators is used: since every scorecard totals out of 100, any supplier can be compared to any other. (See Column 3 of Table 1 for examples.)

Now that the model is complete, the scorecard can move into actual use. To support this, several more columns need to be added: the actual performance during the time period, the rating, the score, and comments.

* The actual performance, when applicable, would be provided from data points collected during the performance rating period.

* The rating is set up to be 1-5. Where available, the actual performance should guide the rating.

1 - unsatisfactory performance

2 - marginal performance

3 - satisfactory performance

4 - very good performance

5 - outstanding performance

* The score is a calculation based on the weight. If a performance indicator has a weight of 20 percent, then, on a scale of 1-100, only 20 points are available in this category. If the supplier is perfect in this category, earning a 5 rating, then the score will be 20 points. A supplier receiving all perfect ratings (all 5s) will have a total score of 100. (A brief sketch of this calculation follows below.)

* Comments should be added to document issues behind ratings.

The rating and the comments are the only fields completed by the rater. The others are either preset or calculated. Table 1 (at the end of this article) illustrates a complete sample scorecard after the performance actuals are entered.
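Here is a minimal sketch of that weight-and-rating arithmetic. The function names are illustrative; the sample weights mirror Table 1, while the ratings are invented for the example.

```python
# Minimal sketch of the score calculation described above; names are assumed.

RATING_MAX = 5  # ratings run 1 (unsatisfactory) through 5 (outstanding)

def indicator_score(weight_pct, rating):
    """Convert a 1-5 rating into points; a 20% weight caps the score at 20 points."""
    return weight_pct * rating / RATING_MAX

def total_score(indicators):
    """Sum (weight, rating) pairs; weights totaling 100% cap the result at 100."""
    assert sum(w for w, _ in indicators) == 100, "weights must total 100 percent"
    return sum(indicator_score(w, r) for w, r in indicators)

# Example using the six weights from Table 1 with illustrative ratings.
sample = [(20, 4), (10, 5), (10, 4), (20, 3), (20, 4), (20, 5)]
print(total_score(sample))  # 82.0
```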

It is usually desirable to have multiple people rate suppliers in order to get a broader view of supplier performance. In such a case, scorecards are sent to all the internal reviewers, who each complete an individual scorecard. Upon return, the scorecards are consolidated across all reviewers by summing and averaging.
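The consolidation mechanics are left open here beyond "summing and averaging"; one simple reading, sketched below with hypothetical names, is to average each indicator's rating across reviewers before computing scores:

```python
# Hypothetical sketch: consolidate reviewers' scorecards by averaging each
# indicator's rating; the exact mechanics are a policy choice.

def consolidate(scorecards):
    """scorecards: one {indicator_name: rating} dict per reviewer."""
    indicators = scorecards[0].keys()
    return {name: sum(card[name] for card in scorecards) / len(scorecards)
            for name in indicators}

reviews = [{"On-Time Deliveries": 4, "Price": 5},
           {"On-Time Deliveries": 3, "Price": 5}]
print(consolidate(reviews))  # {'On-Time Deliveries': 3.5, 'Price': 5.0}
```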

Although the raw total is sometimes used in supplier relationship management programs, it is also common to assign a letter "grade" to suppliers once a total is calculated. This scale is typical:
     A - 90-100
     B - 80-89
     C - 70-79
     D - 60-69
     F - below 60


The most frequent alteration to the scale is at the lower end, where companies may determine that failing a supplier only below 60 is entirely too lenient. Rather, suppliers are expected to maintain a 70 or better.
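A short sketch of this grading step, including the stricter floor as an option (the parameter name is assumed):

```python
# Sketch of the common grading scale above, with an optional stricter floor.

def letter_grade(total, failing_below=60):
    """Map a 0-100 total to a letter; raise failing_below to 70 for the variant."""
    if total < failing_below:
        return "F"
    if total >= 90:
        return "A"
    if total >= 80:
        return "B"
    if total >= 70:
        return "C"
    if total >= 60:
        return "D"
    return "F"

print(letter_grade(82))                    # B
print(letter_grade(65, failing_below=70))  # F under the stricter variant
```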

From this point, overall supplier reporting will compare suppliers within categories and across categories, leading to supplier relationship management programs, continuous improvements, and remediation actions.

Supplier Performance Program

It is essential to the relationship between customer and supplier that the scorecard not be "sprung" on the supplier. It is often helpful to develop it with the supplier's input, thus building commitment to both the process and the ratings. During this development period, the customer and supplier will need to determine an appropriate frequency of use--monthly, quarterly, yearly--and establish procedures for the role of the scorecard in the overall supplier relationship management program.

Following the scorecard rating, the results should be reviewed with the supplier. This keeps the supplier informed of its performance, allows for mid-review-period (or mid-contract) corrections, eliminates excuses, and, in many cases, provides incentives for improved performance. A comparison to prior periods' scorecards also identifies performance trends--how did the supplier do in each of the performance indicators as well as overall? Is performance getting better or worse?

The outcome of the scorecard review with the supplier should identify any required corrective actions, remedial programs, and continuous improvement programs. Corrective actions, the first-level problem resolution processes, are used to improve the results of one or more individual poor performance scores.

Usually, the supplier prepares a plan that addresses specific actions and implements that plan with a timetable for deliverables or results. Remedial programs are used for suppliers whose overall results are failing; here, more extensive actions are needed to bring the scorecard up to a satisfactory performance level. Finally, continuous improvement programs are for satisfactorily performing suppliers and are used to raise the standard of performance.

For the customer, scorecard use should go beyond grading an individual supplier and move into comparing suppliers both within categories and across categories. There are many opportunities to produce charts and reports identifying performance trends and poor performers. The scorecard should also feed into supplier replacement--a non-performer should either be eliminated entirely or no longer used. The customer can re-RFP (request for proposal) to add new suppliers to a category as failing suppliers are "let go."

Conclusion

The use of scorecards is critical in a supplier relationship management program, in much the same way that report cards are used in public school systems. Providing quantitative metrics for performance improves relationships and can increase the incentive to perform. More importantly, it shifts focus onto the goals of the customer rather than leaving room for misaligned or complacent performance.

A scorecard with well-developed performance indicators works to keep the vendor focused on the customer's corporate goals because those goals are reflected in the ratings. If on-time deliveries and responsiveness to customer support requests are important, they become part of the scorecard, and the supplier's performance with the customer is evaluated based on those goals. Other customers may have different goals, such as low cost and low error rates. In addition, because the scorecard identifies problem areas, the supplier will now focus on areas for improvement and innovation.
Table 1. Sample Scorecard

Performance Indicator          Target                                   Weight   Actual   Rating   Score   Comments
On-Time Deliveries             95% / 90% / 85% / 80% / <75%             20%      92%
Price                          Lowest                                   10%
Product Availability           99.9% / 99.6% / 99.4% / 99.1% / <99.0%   10%      99.7%
Customer Support                                                        20%
Meets All Service Levels       95% / 92% / 90% / 85% / 75%              20%      86%
Overall Customer Satisfaction                                           20%
Total                                                                   100%

(Each tiered target lists the thresholds corresponding to ratings 5 down to 1. Customer Support and Overall Customer Satisfaction are rated subjectively, so no numeric target is listed. Rating and Comments are completed by the rater; Score is calculated from the weight and rating.)


About the Author

SHARON HORTON is a senior consultant and project manager at Contract Management Solutions, Inc., in Winter Park, Florida. She is a member of the NCMA Suncoast Chapter. Send comments on this article to cm@ncmahq.org.
