As a starting point, each team's win-loss percentage is used. Each game is then given a rating for each team based on the scoreline and the strength of the opponent played; any win, no matter how weak the opponent, is better than any loss, no matter how strong the opponent. A team's game ratings are summed and run through another formula to normalize them, so that every rating lies strictly between 0 and 1. This normalized rating then takes the place of the win-loss percentage, and the process repeats until a definitive ranking is formed. In this way, not only are each team's opponents analyzed, but also the opponents' opponents, the opponents' opponents' opponents, and so on.
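As a rough illustration, here is a minimal Python sketch of one pass of that loop. The actual per-game formula and normalization are not given above, so `game_rating` and the logistic squashing are hypothetical placeholders, and `games` (a map from each team to its schedule) is an assumed input:

```python
import math

def game_rating(won, opp_rating, margin):
    # Hypothetical per-game formula: a win is always positive and a loss
    # always negative, so any win outrates any loss. The opponent's current
    # rating and the point margin nudge the value within each band.
    base = 1.0 if won else -1.0
    return base + 0.5 * opp_rating + 0.005 * margin

def iterate_once(ratings, games):
    # games maps each team to a list of (opponent, won, margin) tuples,
    # where margin is that team's point differential in the game.
    new_ratings = {}
    for team, schedule in games.items():
        total = sum(game_rating(won, ratings[opp], margin)
                    for opp, won, margin in schedule)
        # Squash the raw sum so every rating lies strictly between 0 and 1
        # (here via a logistic function; the post's actual formula may differ).
        new_ratings[team] = 1.0 / (1.0 + math.exp(-total))
    return new_ratings
```

Each call to `iterate_once` folds in one more level of opponents: the first pass uses opponents' win-loss percentages, the second uses ratings that already reflect the opponents' opponents, and so on.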
On each iteration of the ranking calculation, two metrics are computed: the separation between each team's normalized rating and that of the next-ranked team, and the difference between each team's normalized rating and its rating after the previous iteration. These determine when the rankings have converged. Specifically, the minimum separation is compared to the maximum difference; both numbers are strictly decreasing as the number of iterations increases. Once the maximum difference is smaller than the minimum separation by a factor of 10 (i.e. the largest change in any team's normalized rating is at most one tenth of the smallest gap between any two teams' ratings), I determine that the rankings have converged.
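Under the same assumptions, the convergence test could look like the sketch below; `record` is a hypothetical win-loss table used for the initial seed, and `iterate_once` and `games` are the placeholders from the sketch above:

```python
def has_converged(old, new):
    # Largest change in any single team's normalized rating this iteration.
    max_diff = max(abs(new[t] - old[t]) for t in new)
    # Smallest gap between adjacently ranked teams in the new ratings.
    ordered = sorted(new.values(), reverse=True)
    min_sep = min(a - b for a, b in zip(ordered, ordered[1:]))
    # Converged once the biggest change is at least 10x smaller than the
    # smallest separation between any two teams' ratings.
    return 10 * max_diff <= min_sep

# Seed with win-loss percentage, then iterate to convergence.
ratings = {t: record[t]["wins"] / record[t]["games"] for t in record}
new_ratings = iterate_once(ratings, games)
while not has_converged(ratings, new_ratings):
    ratings, new_ratings = new_ratings, iterate_once(new_ratings, games)
```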
These rankings, along with other advanced computer rankings, media polls, etc., can be found at Ken Massey's Ranking Composite (https://masseyratings.com/cf/compare.htm), where they are compared side by side with other, similar rankings and related statistics. I would like to thank Ken for his dedication to this massive project, which he has undertaken for decades. Be sure to check it out.