India and the ICC Ranking System
As the reality of an Indian defeat to South Africa became apparent, Ducking Beamers posed an interesting question about the nature of India's stay at number one on the official rankings. Given that the official rankings are supposed to be transparent and simple, this should be a relatively easy question to answer. Unfortunately, the official rankings are neither transparent nor simple. The formula is simple enough, but something is lost when assessing its merits as a predictor.
Firstly, ignoring Wikipedia's series points (which don't affect the maths), you'll note that the calculation varies depending on how closely matched the teams are. There is a reason for this, which I will get to later, but let's first note the formula for a standard rating:
series_result * (rating_opp + 50) + series_result_opp * (rating_opp - 50 )
This can be simplified greatly. Substituting series_result_opp = series_length - series_result:
series_result * (rating_opp + 50) + (series_length - series_result) * (rating_opp - 50 )
= series_result * rating_opp + series_result * 50 + series_length * rating_opp - series_result * rating_opp - series_length * 50 + series_result * 50
= series_result * 100 + series_length * (rating_opp - 50 )
In other words, the points are made of two parts: the result multiplied by 100, which holds true regardless of opposition (it is included in the alternative methods as well), and a rating adjustment for opposition that takes no account of the result. A strange choice. I won't say this doesn't work, but it strikes me as odd.
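The two forms can be checked against each other in a few lines (illustrative values only, not real series data):

```python
def points_full(series_result, series_length, rating_opp):
    """Original form: your wins weighted by (opp + 50), their wins by (opp - 50)."""
    series_result_opp = series_length - series_result
    return (series_result * (rating_opp + 50)
            + series_result_opp * (rating_opp - 50))

def points_simplified(series_result, series_length, rating_opp):
    """Simplified form: 100 per win, plus (opp rating - 50) per game."""
    return series_result * 100 + series_length * (rating_opp - 50)

# The two forms agree for any result, length and opposition rating.
for result, length, opp in [(2, 3, 110), (0, 5, 95), (4, 5, 60)]:
    assert points_full(result, length, opp) == points_simplified(result, length, opp)
```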
How, then, did India manage to get to number one? Well, oddly enough, on merit:
            Aus      Eng      Ind      Pak      NZ       SAf      Sri      WI       Ban
win pt      2550     1925     2300     700      1000     2300     1950     800      375
opp pt      1725     2093.5   1920.5   1087.5   1071     1817     1270     757      0
str pt      326      60       130.5    44       -22.5    106.5    81       0        0
weak pt     0        0        0        0        213      0        0        677      -166.5
games       39       39       35       22.5     28.5     34.5     29       29       21.5
avg win     65.38    49.36    65.71    31.11    35.09    66.67    67.24    27.59    17.44
avg opp     52.59    55.22    58.6     50.29    44.26    55.75    46.59    48.93    -7.74
rating      117.97   104.58   124.31   81.4     79.35    122.42   113.83   76.52    9.7
The avg win and avg opp are the key fields here. Note that India have almost the highest avg win (which, broadly speaking, is just a percentage of games won) and the highest opposition value. Their opponents have actually been harder than any other team's. (Note also that the ratings above are a little approximate, owing to rounding and other calculation difficulties.)
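The table's arithmetic can be reproduced directly: a team's rating is its total points divided by games played, which splits into avg win plus avg opp (the opposition component absorbing the strong/weak adjustments). Figures below come from the table above:

```python
# Reproducing three rows of the ratings table: rating = avg win + avg opp.
table = {
    #       win pt,  opp pt, str pt, weak pt, games
    "Aus": (2550, 1725.0,  326.0,   0.0, 39.0),
    "Ind": (2300, 1920.5,  130.5,   0.0, 35.0),
    "SAf": (2300, 1817.0,  106.5,   0.0, 34.5),
}

def rating(win, opp, strong, weak, games):
    avg_win = win / games                     # roughly a win percentage
    avg_opp = (opp + strong + weak) / games   # opposition-quality component
    return avg_win + avg_opp

for team, row in table.items():
    print(team, round(rating(*row), 2))       # Aus 117.97, Ind 124.31, SAf 122.42
```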
Should we then all acknowledge India as (at least for the moment) the undisputed test number one? Possibly not. Because this rating system is a long way from being infallible.
Let's start at the bottom. Bangladesh achieve an impossibly low rating, given that playing anyone else is worth, on average, an automatic 50 points. They don't receive those points, because granting them would break other parts of the system. If Bangladesh were rated in the normal way, their rating would be close to 50 (practically no points for winning, but an opposition component of 50). A team playing Bangladesh would then have a maximum of 100 + Bangladesh's rating - 50 points from that contest, or about 100. In other words, playing rubbish sides hurts your ranking, because the 50-point adjustment artificially limits it (the same now applies to New Zealand, Pakistan and the West Indies: the most you can get from them is about 130 points).
To get around this, the rating system does something odd: in a mismatch it ignores the opposition's rating altogether, and gives the stronger team the points for winning plus its own rating minus 90 (or its rating above 100, plus 10, since the rating system is centred, sort of). But Bangladesh, being rubbish, get the win points plus their own rating MINUS 10. Which is a negative number.
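A sketch of that two-case rule as I understand it (the 40-point gap is the ICC's published trigger for the mismatch case; treat the details as an approximation):

```python
def per_game_points(own_rating, opp_rating, won):
    """Points one team earns from a single game, per the two-case formula."""
    win_points = 100 if won else 0
    if abs(own_rating - opp_rating) < 40:
        # Normal case: 100 per win, plus (opponent's rating - 50) per game.
        return win_points + opp_rating - 50
    if own_rating > opp_rating:
        # Mismatch, stronger team: own rating - 90 per game (net +10 for a win).
        return win_points + own_rating - 90
    # Mismatch, weaker team: own rating - 10 per game (net +90 for a win).
    return win_points + own_rating - 10

# Bangladesh (~9.7) lose to a ~118-rated side: they earn 9.7 - 10, below zero.
print(per_game_points(9.7, 118, won=False))   # about -0.3
print(per_game_points(118, 9.7, won=True))    # 100 + 118 - 90 = 128
```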
And in case you don't think this is a great injustice, note this: if Bangladesh were to perform as a below-average side, winning 1 in 3 games (roughly the same as the three teams above them) for three whole years (the entire measured period of the ratings), their rating would be about 40, whereas those other teams would still be rated about 80. That is not right.
For teams playing against inferior opposition, it is possible to increase your rating endlessly, provided you maintain a win percentage above 90%. That break-even figure is assumed by the rating system regardless of the quality of the lowly-ranked opposition. Thus, if your ranking is high, it is much better to play Bangladesh than the West Indies, New Zealand or Pakistan, against whom a 90% win percentage is actually difficult.
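The arithmetic behind that 90% figure, using the mismatch points as described above (a sketch, not the official implementation):

```python
def expected_points(own_rating, win_rate):
    # Mismatch case: own rating + 10 for a win, own rating - 90 for a loss,
    # so expected points per game = own rating + 100 * win_rate - 90.
    return win_rate * (own_rating + 10) + (1 - win_rate) * (own_rating - 90)

# Whatever the current rating, 90% wins is the break-even point.
for r in (80.0, 110.0, 130.0):
    assert abs(expected_points(r, 0.90) - r) < 1e-9   # 90% wins: rating holds
    assert expected_points(r, 0.95) > r               # above 90%: rating climbs
    assert expected_points(r, 0.80) < r               # below 90%: rating falls
```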
Perhaps fewer readers will care about the much smaller injustice done to Australia, but note that it may soon happen to India, and worry. When Australia had a rating of 140, their opposition took 90 points per game played against them, regardless of the result, while Australia got the opposition's rating minus 50. In general, a team should garner as many points from each series as its rating would suggest, and so, while Australia remained a 140 team, their rating stayed at 140: playing India, a 110 team, Australia would get 60 points per game for the opposition, plus 80 (on average) from wins.
But when Australia became more or less equal with India, the points were redistributed, raising India to a 120 team and lowering Australia. Australia's points from winning dropped to just 50 per game, and India's increased to 50 (from 30). In that immediate period, however, the points for playing the opposition did not change: Australia continued to get 60 points for playing India, and India 90 for playing Australia. The rating change over-shoots a little.
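That over-shoot can be checked with the per-game arithmetic (ratings here are per-game averages, so a team "holds" its rating when it earns exactly that many points per game; the win rates are the illustrative ones from the text):

```python
def points_per_game(win_rate, opp_rating):
    # Normal case: 100 per win plus (opponent's rating - 50) per game.
    return 100 * win_rate + (opp_rating - 50)

# Before: Australia, a 140 team winning 80% against India, a 110 team.
print(points_per_game(0.8, 110))   # 140: Australia holds their rating
# After both sides converge on ~120, the stale opposition points linger:
print(points_per_game(0.5, 110))   # 110: Australia is dragged below 120
print(points_per_game(0.5, 140))   # 140: India is pushed above 120
```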
Now, this should not matter, because the rankings would balance out after a few series as different teams competed against each other. However, these ratings have a cut-off. Every August the ratings are rolled over: the fourth year is discarded and the second-oldest period given half its value. What happens then, as happened last August, is that the parts of the average maintaining Australia's high rating, in spite of the over-shoot, are discarded, and the average drops far below what it should be. Next August, Australia's 5-0 Ashes triumph (average points: 160) will disappear, and they'll probably drop to fourth again (or worse).
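A sketch of the rollover effect (the year totals and weights below are hypothetical illustrations drawn from the paragraph above, not the ICC's exact windows):

```python
def rating_from_years(points_by_year, games_by_year, weights):
    """Years ordered most recent first; rating is the weighted points average."""
    pts = sum(w * p for w, p in zip(weights, points_by_year))
    gms = sum(w * g for w, g in zip(weights, games_by_year))
    return pts / gms

# Hypothetical team: a dominant old year (160 points per game) props up the average.
points = [1100.0, 1380.0, 1200.0, 1280.0]   # total points, years 1..4
games  = [10.0, 12.0, 10.0, 8.0]            # games played, years 1..4

before = rating_from_years(points, games, [1, 1, 0.5, 0.5])   # older years half-weighted
after  = rating_from_years(points, games, [1, 1, 0.5, 0.0])   # year 4 discarded
print(round(before, 1), round(after, 1))    # 120.0 drops to about 114.1
```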
The oddest aspect of this, though, is not that the cut-off has strange effects, but that a cut-off is entirely unnecessary. The ratings are balanced against each other; if a better-rated team does worse than expected, its rating will fall. Results from three years ago already have very little bearing on the rating, because more recent results pull a rating into place like a pendulum. A weighting for the new rating, based on the number of games played in recent years, is both sufficient and better.
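One way to realise that suggestion (an assumed decay scheme, not the author's exact proposal): weight each game by a smooth recency factor, so old results fade gradually instead of vanishing at an annual cut-off.

```python
def weighted_rating(games, half_life_years=2.0):
    """games: (points_earned, years_ago) pairs; influence halves every half-life."""
    num = sum(p * 0.5 ** (age / half_life_years) for p, age in games)
    den = sum(0.5 ** (age / half_life_years) for _, age in games)
    return num / den

# A recent 130-point game outweighs a four-year-old 100-point one.
games = [(130.0, 0.0), (100.0, 4.0)]
print(weighted_rating(games))   # (130 + 0.25 * 100) / 1.25 = 124.0
```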
Apart from being completely unreliable for teams that never win, teams that always win, and teams whose rating has changed recently (or at any point in the past four years), the ICC ratings are a moderately accurate measure of a team's performance. This shouldn't be surprising, however. Of the dozen or so rating systems in existence, all are pretty good at predicting the easy things. Deciding which of India, Australia and South Africa is the best team, however, is not possible, and any rating system putting more than a couple of percentage points between them (as the ICC one does, incidentally) is wrong.
What astounds me about the ICC system, though, is that in trying to be simple it is actually complicated, and yet, despite that aim of simplicity, it is in many ways mathematically unsound. It works, in spite of itself. Which is an odd thing.
11th February, 2010 20:01:36