As an avid college football fan, there is so much I love about the game: the stadiums, the pageantry, the marching bands, the tradition... Then there is the game itself: part wit and strategy, part toughness and resolve. However, like many fans today, I have grown increasingly frustrated with the rankings: innately flawed, dependent on subjective judgment and hearsay. Half the college football fans in America watch more games than the people who vote in the weekly polls. This setup would be nothing more than a nuisance, however, if the polls were just opinions, holding no sway. The problem is that rankings do matter. The highest-ranked teams at the beginning of the season control their own destinies - if they win out, they have close to a 100% chance of playing in the BCS championship game. And bias is always shown toward established, "traditional" powerhouses - if Texas, Ohio State and Vanderbilt go undefeated this year, Texas and Ohio State will play for the BCS championship. Replace Vanderbilt with Florida or Alabama, and Texas gets left out, because Ohio State will likely start this year ahead of UT in the preseason polls.
I set out to develop a ranking system that removes subjective bias. My goal was to fix several glaring problems I saw with current rankings. The three biggest problems with how we rank football teams are, in my opinion:
1. Prospective Rankings - especially rampant in early polls, rankings are based on how well a team is expected to do in the future. The thinking is as follows: "I think Oklahoma will have a great team this year. But ooh, look at that schedule. They are probably a top-15 team talent-wise, but with that harsh schedule, I think they'll lose at least 3 games - I'll rank them 20." The problem with this line of thinking is simple - if Oklahoma is a top-15 team, they should be ranked in the top 15. Are sportswriters and other AP voters simply afraid to be wrong at the end of the year?
2. Ugly Wins and Admirable Losses - there is a near-unanimous feeling among AP voters that if a team loses, they should be dropped in the rankings. Period. Spend one second thinking about this and it is preposterous. Case in point: in September 2009, no. 19 Nebraska played at no. 13 Virginia Tech. Virginia Tech prevailed, 16-15, on a last-minute bomb and subsequent touchdown. The next week, Nebraska dropped to no. 25. But wait: isn't the number 19 team supposed to lose to the number 13 team? A one-point loss by the number 25 team to the number 1 team should prove that the number 25 team is better than advertised (or the number 1 team is worse). Losing valiantly should not lower a team's ranking. Similarly, an ugly win over an inferior opponent may be a symptom of a larger problem, and could warrant a lower ranking. Win-and-you're-in is too exclusive (see #1) - ugly wins (and admirable losses) are just as important.
3. Statistics - winning and losing is important to the equation but, as in #2, how you win or lose is as well. How can Wisconsin's offense be compared to Arkansas'? They share no common opponents and run vastly different styles of attack. This problem of unequal comparisons is what brought me to this project in the first place - I was annoyed at ESPN's reporting of Texas as having the nation's best run defense several years ago, not because I disagreed (UT's run defense probably was the best in the nation that year) but because of their use of simple averages to make such a bold pronouncement. UT might have given up just 60 yd/game, but in the pass-happy (former) BigXII, did that really mean anything? It is important to look not just at how few rushing yards Texas allows but also, just as importantly, at how many yards Texas' opponents averaged rushing per game, and how many they managed when they played UT. A rushing defense comparison is only truly useful if it incorporates a team's season-average rush D, its individual-game rush D, and each opponent's season-average rush O (a rough sketch of the idea follows below). It is in this area that my analysis succeeds best. With my method, the following questions can finally be answered: Are SEC defenses that good, or are SEC offenses that bad? Are BigXII offenses that good, or are their defenses that bad? The truth has been there all along, in the statistics; I just found a way to coax it out.
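To make that concrete, here is a rough sketch in Python of one simple opponent-adjusted comparison: rate each defense by the ratio of yards it allowed in a game to what that opponent gained on the ground per game against everybody. The team names and numbers are made up purely for illustration, and the full method described above folds in more than this single ratio (the season-average rush D, for one):

    from collections import defaultdict

    def adjusted_rush_defense(games):
        """games: list of (defense, offense, rush_yards_allowed) tuples,
        one per game. Returns {defense: ratio}, where ratio averages
        (yards allowed in a game) / (that opponent's season rushing
        average). Under 1.0 means opponents were held below their
        usual output - something raw yards-per-game can't tell you."""
        # Pass 1: each offense's season rushing average over all its games.
        totals, counts = defaultdict(float), defaultdict(int)
        for _defense, offense, yards in games:
            totals[offense] += yards
            counts[offense] += 1
        season_avg = {o: totals[o] / counts[o] for o in totals}

        # Pass 2: per-game ratios, averaged for each defense.
        # (Simplification: a fuller version would exclude the game at
        # hand from the opponent's average, so a defense isn't measured
        # against a baseline it helped set.)
        ratios = defaultdict(list)
        for defense, offense, yards in games:
            ratios[defense].append(yards / season_avg[offense])
        return {d: sum(r) / len(r) for d, r in ratios.items()}

    # Made-up numbers: both defenses allow 60 rushing yards in a game,
    # but DefA did it against a 250 yd/game ground attack and DefB
    # against a 90 yd/game one - a simple average treats them the same.
    games = [
        ("DefA", "RunHeavyO", 60), ("DefB", "RunHeavyO", 440),
        ("DefB", "PassHappyO", 60), ("DefA", "PassHappyO", 120),
    ]
    print(adjusted_rush_defense(games))  # DefA ~0.79, DefB ~1.21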
This system is simply a method of obtaining information, finding correlations, and solving conundrums. It is not a predictor of future success - only a compiler of past events. But in the mounds of data college football generates each year, within first-down statistics and yards-per-rush, is a wealth of knowledge - the ability to make true comparisons in any way imaginable, and a few ways more. I hope the information I make available as the 2010 season unfurls provides points of controversy (the Big East might just be one of the best conferences) and intrigue (BigXII teams had by far the lowest penalty rate last year - but only in conference games... what does that mean???) for college football fans everywhere to discuss and debate.
...Is September here yet?