Compared to recent years, there is relatively little carping about this year's final BCS college football rankings; the winner of the titanic Jan. 4 Rose Bowl tilt between Texas and USC will certainly be the rightful national champion. Still, the system is goofy enough that Congress wants to get involved. I find the idea of holding Congressional hearings over this every bit as ludicrous as you do, but that doesn't mean we can't have an intelligent discussion about it.
For those of you who don't follow college football, matchups in the holiday bowl games are decided by an arcane mix of politics, money, and pseudo-objective criteria. The lower-tier bowls rely somewhat on conference tie-ins; the Outback Bowl, for example, matches the 4th-place team from the Big 10 against the 4th-place team from the SEC. The major bowls -- Rose, Sugar, Orange, and Fiesta -- constitute the Bowl Championship Series (BCS), and the 'national championship' game rotates among them. The champions of the six major college football conferences (ACC, Big East, Big 10, Big 12, Pac-10, and SEC) are each guaranteed a spot in a BCS bowl, and the remaining two berths are 'at-large' selections. It is a very big deal for the universities involved, as a BCS bowl berth can mean a $10-$20 million payoff, compared to $2 million or less for a second-tier bowl.
To introduce a semblance of objectivity to the selection process, the BCS introduced a numerical ranking system -- the 'BCS Formula' -- to select and seed the opponents in these four prime games. And that's when all hell broke loose. Every year at least one university has a major gripe that it was shortchanged. Last year it was undefeated Auburn, which was denied a berth in the national championship game. This year it's 10-1 Oregon, consigned to the second-tier Holiday Bowl while 9-2 Notre Dame enjoys a fat BCS paycheck from the Fiesta Bowl. These gripes go to the heart of the BCS formula.
The BCS formula is basically a weighted sum of two components: (1) a set of 'expert' opinion polls, and (2) a set of computer rating methods. Two polls are used in the formula: USA Today's sampling of 62 college coaches, and the Harris poll of 113 ex-athletes and writers. The computer component is a composite ranking based on 7 different statistical ranking methods.
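Roughly speaking (as best I can reconstruct from the BCS's published description of the 2005 formula), it boils down to:

BCS score = (Harris poll % + coaches poll % + computer composite %) / 3

where each term is the team's share of the maximum possible points in that component -- which means the two human polls together carry two-thirds of the weight.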
Both pieces have serious weaknesses. The 'expert' polls are frequently criticized for potential bias and lack of information. The folk wisdom is that coaches tend to over-rank their own teams and teams from their own conference. West coast teams often complain of 'East Coast bias': poll voters rank teams from the West artificially low because, thanks to time zone scheduling, they have never seen them on TV. Worst of all, these polls are vulnerable to politicking. For example, if the Oregon coach were a voter in the coaches poll and wanted a BCS spot, he might be inclined to rank his own team #1 and ignore Notre Dame altogether (and lobby fellow coaches to do likewise).
To counterbalance concerns about human bias, the BCS adds the computer ranking component. The underlying methods are ostensibly objective, but they are probably the most hated part of the BCS formula because few fans understand how they work; in truth, they are all based on some variant of linear regression, but statistical esoterica is little comfort to a fan who's spending New Year's Eve in Shreveport instead of Miami. That's why last year I advocated an alternative BCS ranking approach I called the 'Bounty Method.' Like the current computer rankings, it is objective; unlike them, it is based on a simple, intuitive 'points chase' that any fan can verify with a pocket calculator (using only the '+' key).
The idea goes like this: teams are like Old West gunslingers, each with his own 'Wanted' poster. On each is a reward based only on the number of showdowns the gunslinger has won, and a bonus reward based on the reward values of all his showdown victims. Each team starts the season with a bounty of 0, and the bounty values start to accumulate with each game.
At the beginning of the season I proposed a set of scoring weights:
OWN BOUNTY VALUE:
Each home win over a Division 1-A opponent: 10 points.
Each neutral-site win over a Division 1-A opponent: 11 points.
Each road win over a Division 1-A opponent: 12 points.
Each home win over a non-Division 1-A opponent: 5 points.
Each neutral-site win over a non-Division 1-A opponent: 5.5 points.
Each road win over a non-Division 1-A opponent: 6 points.
COLLECTED BOUNTIES: sum of all current bounty values of Division 1-A teams defeated during the season.
TOTAL RANKING POINTS: 5x(Own Bounty Value) + (Collected Bounties).
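To make the bookkeeping concrete, here is a minimal sketch of the calculation in Python. The game-record format and function names are my own illustration, not anything official:

```python
# Minimal sketch of the bounty method. A game is a tuple
# (winner, loser, site, loser_is_d1a); these names and this
# record format are illustrative assumptions.

# Own-bounty points for one win, keyed by (site, opponent in D-1A?).
WIN_POINTS = {
    ("home", True): 10, ("neutral", True): 11, ("road", True): 12,
    ("home", False): 5, ("neutral", False): 5.5, ("road", False): 6,
}

def total_ranking_points(games):
    """Return {team: total ranking points} for a season's list of games."""
    bounty = {}   # each team's own bounty value (starts at 0)
    victims = {}  # D-1A teams each team has defeated
    for winner, loser, site, loser_is_d1a in games:
        bounty[winner] = bounty.get(winner, 0) + WIN_POINTS[(site, loser_is_d1a)]
        if loser_is_d1a:
            bounty.setdefault(loser, 0)
            victims.setdefault(winner, []).append(loser)
    # Collected bounties: sum of each victim's current bounty value.
    return {team: 5 * bounty[team] + sum(bounty[v] for v in victims.get(team, []))
            for team in bounty}
```

Note the bounty metaphor at work: a team that beats strong opponents keeps collecting points as those opponents keep winning.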
I tracked the results weekly as the season unfolded. Using those weights, here is the final pre-bowl Top 25 (complete rankings here):
RANK TEAM TOTAL PTS
1 Southern Cal 1414
2 Texas 1385
3 Penn State 1173
4 Georgia 1143
5 Virginia Tech 1131
6 TCU 1082
7 LSU 1025
8 Ohio State 1020
9 West Virginia 1006
10 Oregon 997
11 Miami FL 975
12 Notre Dame 955
13 Wisconsin 927
14t Alabama 921
14t UCLA 921
16 Auburn 898
17 Florida 883
18 Boston College 872
19 Louisville 840
20 Michigan 837
21t Boise St 818
21t Tulsa 818
23 Texas Tech 815
24 Florida St 808
25t Central Florida 791
25t Georgia Tech 791
Fairly close to the consensus opinion polls, with a couple of exceptions (which I'll get to). I noted last week that the bounty method has a flaw: it doesn't account for differing numbers of games played. Most teams played an 11-game schedule while some played 12, and the 12-gamers had an advantage in that they got one more chance to increase their bounty value. To put things on an equal 11-game footing, I suggested deducting the 'worst win' of each team with 12 games -- i.e., the victory that contributed the least towards their total ranking points (a sketch of this computation follows the table). Here are the 12-gamers and their respective 'worst wins':
TEAM OPPONENT TOTAL PTS EARNED BY VICTORY
Akron Kent St 55
Boise St Portland St 25
Central Florida Tulane 68
Colorado New Mexico St 50
Florida St The Citadel 25
Fresno St Weber St 25
Georgia Kentucky 77
Hawai`i New Mexico St 50
LSU Appalachian St 25
Northern Illinois Tennessee Tech 25
San Diego St San José St 75
Southern Cal Arizona 77
Texas Rice 60
Tulsa Rice 70
Virginia Tech Duke 65
Wisconsin Temple 50
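Mechanically, a win's contribution to the total is 5 times its own-bounty points plus the victim's current bounty, so the 'worst win' is just the minimum over a team's victories. A sketch, reusing the illustrative names from the snippet above:

```python
def worst_win_points(team, games, bounty):
    """Smallest contribution to `team`'s total among its victories.

    Each win contributes 5x its own-bounty points plus the victim's
    current bounty (zero for non-D-1A victims).
    """
    return min(
        5 * WIN_POINTS[(site, loser_is_d1a)]
        + (bounty[loser] if loser_is_d1a else 0)
        for winner, loser, site, loser_is_d1a in games
        if winner == team
    )
```

Boise State's 25, for instance, is 5 x 5 for a home win over non-Division 1-A Portland State, with no bounty to collect.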
After deducting these 'worst win' points from their respective teams, the adjusted Top 25 is:
RANK TEAM TOTAL PTS
1 Southern Cal 1337
2 Texas 1325
3 Penn State 1173
4 TCU 1082
5t Georgia 1066
5t Virginia Tech 1066
7 Ohio State 1020
8 West Virginia 1006
9 LSU 1000
10 Oregon 997
11 Miami FL 975
12 Notre Dame 955
13t Alabama 921
13t UCLA 921
15 Auburn 898
16 Florida 883
17 Wisconsin 877
18 Boston College 872
19 Louisville 840
20 Michigan 837
21 Texas Tech 815
22 Boise St 793
23 Georgia Tech 791
24 Northwestern 788
25 Florida St 783
Complete rankings here. Again, fairly close to the consensus opinion polls, with one glaring exception: TCU sits at #4, compared to #14 in the BCS rankings. Lots of readers have tweaked me about this; TCU is a nice mid-major 10-1 team, so the opinion goes, but they are not top-5 material. The principal reason TCU ranks this high is that the initial scoring weights make no distinction among Division 1-A teams. To adjust for this, I tried a set of modified scoring weights that recognize an additional distinction between BCS-conference teams (ACC, Big East, Big 10, Big 12, Pac-10, SEC, and Notre Dame) and the remaining Division 1-A mid-majors:
OWN BOUNTY VALUE: 10 points for each win over a BCS opponent; 8 points for each win over the remaining D-1A teams; 5 points for non-Division 1-A victories. Add 1 point for each neutral-site win and 2 points for each road victory.
COLLECTED BOUNTIES: sum of all current bounty values of Division 1-A teams defeated during the season.
TOTAL RANKING POINTS: 5x(Own Bounty Value) + (Collected Bounties)
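In the sketch above, only the per-win weight table changes. Assuming a set of the BCS-conference schools plus Notre Dame (BCS_TEAMS is my own stand-in name, shown here with a few sample members), the per-win points become:

```python
# Modified per-win weights distinguishing BCS-conference opponents.
# BCS_TEAMS is an assumed set; fill it with the six BCS conferences'
# members plus Notre Dame.
BCS_TEAMS = {"Texas", "Southern Cal", "Notre Dame"}  # ...and the rest

SITE_BONUS = {"home": 0, "neutral": 1, "road": 2}

def win_points(site, loser, loser_is_d1a):
    """Own-bounty points for one win under the modified weights."""
    if not loser_is_d1a:
        base = 5
    elif loser in BCS_TEAMS:
        base = 10
    else:
        base = 8
    return base + SITE_BONUS[site]
```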
Using these alternative scoring weights (and applying the same 12th-game deduction as above), TCU drops to #11 in the Top 25 (complete rankings here):
RANK TEAM ADJUSTED TOTAL PTS
1 Texas 1271
2 Southern Cal 1267
3 Penn State 1117
4 Virginia Tech 1012
5 Georgia 1006
6t LSU 966
6t West Virginia 966
8 Ohio State 954
9 Miami FL 943
10 Oregon 941
11 TCU 916
12 Notre Dame 897
13 UCLA 873
14 Auburn 860
15 Alabama 851
16 Florida 833
17 Wisconsin 817
18 Louisville 810
19 Boston College 802
20 Michigan 783
21 Texas Tech 779
22 Georgia Tech 777
23 Florida St 763
24 Northwestern 734
25 Oklahoma 725
This is fairly consistent with the results of the complicated BCS formula. The precise values of the weights are a minor detail, however. The point is that it is possible to have a college football ranking method that (a) is completely objective, (b) ignores margin of victory (no extra credit for blowouts), and (c) is transparent and simple enough to be double-checked by any fan.
If anyone reading this is connected with the BCS, the NCAA, or Congress, I'd like to make a modest proposal: replace the current BCS formula with this simple method or a variant of it. The rules would be known before the season begins, and there would be no quibbling at the end. And if it takes legislation, please call it the BCS Bullshit Reduction Act of 2006.