If you have been involved in any large project procurement, you have probably encountered weighted-scale ranking of proposals. These schemes attempt to bring objectivity to proposal evaluation while dealing with the large amount of diverse information each proposal contains.
They usually end up in a matrix such as:
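A minimal sketch of how such a matrix is typically computed, with invented criteria, weights, and scores purely for illustration:

```python
# Hypothetical weighted-scoring matrix. The criteria, weights, and 1-10
# scores below are invented for illustration, not from any real procurement.
criteria = {
    # criterion: (weight, score for Vendor A, score for Vendor B)
    "Price":             (30, 7, 9),
    "Technical fit":     (25, 9, 6),
    "Delivery schedule": (20, 6, 8),
    "Support":           (15, 8, 7),
    "References":        (10, 7, 8),
}

def weighted_total(vendor_index):
    """Sum of weight * score for one vendor (index 1 or 2 in the tuples)."""
    return sum(row[0] * row[vendor_index] for row in criteria.values())

print("Vendor A:", weighted_total(1))  # 745
print("Vendor B:", weighted_total(2))  # 765
```

The vendor with the highest weighted total "wins", which is exactly why the choice of weights matters so much.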
There are numerous problems with the approach:
In most cases the weights are arbitrary: unscaled, uncalibrated, and without any repeatable reference to the real world. There is also scale compression: low-weighted scores move the total far less than high-weighted ones, misleading scorers about the effect of their scores. This is nothing like university grading, where a course mark is weighted by the proportion of the academic program that the course represents.
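The compression effect can be seen by comparing the range of possible contributions per criterion. With the invented weights below (scores assumed to run 1 to 10), moving a low-weight score across its entire range shifts the total less than a two-point change on a high-weight one:

```python
# Invented weights for illustration; scores assumed to run 1-10.
weights = {"Price": 30, "Support": 15, "References": 5}

for name, w in weights.items():
    lo, hi = w * 1, w * 10
    print(f"{name:10s} weight={w:2d} contribution range {lo}..{hi} (span {hi - lo})")
```

Here the full 1-to-10 range on References spans only 45 points, while a two-point change on Price spans 60: a scorer's careful judgement on a minor criterion is nearly invisible in the total.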
The total can also be sensitive to low-weight 'herding': a cluster of high scores on low-weight factors can overwhelm a high score on a high-weight factor.
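The herding effect can be sketched numerically. In this hypothetical (invented weights and scores), Proposal X is outstanding on the single dominant criterion, yet Proposal Y wins on the strength of many minor criteria:

```python
# Hypothetical: one high-weight criterion versus six low-weight ones.
weights  = [40, 10, 10, 10, 10, 10, 10]  # one dominant criterion, six minor
scores_x = [10,  5,  5,  5,  5,  5,  5]  # outstanding where it matters most
scores_y = [ 5,  9,  9,  9,  9,  9,  9]  # mediocre on the key factor

def total(scores):
    """Weighted sum of one proposal's scores."""
    return sum(w * s for w, s in zip(weights, scores))

print("X:", total(scores_x))  # 700
print("Y:", total(scores_y))  # 740 -- Y wins despite half marks on the key factor
```

Y outranks X even though X scored twice as well on the criterion the weighting itself declares most important.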
One could argue that the weightings taken as a whole deal with this in terms of the project's objectives. But without evidence that the problems of scaling, calibration, and variability arising from arbitrary assignment have been addressed, the scheme remains vulnerable to inadvertent manipulation, or even outright error. I wonder how many large project procurements have gone off the rails because of this approach.
There are a few ways of overcoming these problems; I will address them in the next post on this topic.