
Back in 2015, when Division I Football switched from the BCS to the College Football Playoff, fans breathed a sigh of relief. The mystery behind the computer rankings was gone.

Instead, they were replaced by a committee of humans who are actually able to watch the games. However, with the 30th edition of the CFP Committee Rankings coming out, I have some serious questions. We know the statistics that the committee values:

Wins, Wins over Ranked Opponents, Wins over Common Opponents, Strength of Schedule, Conference Championships, and everybody’s favorite, the eye test.

That’s fine and all, but what I really dislike is how the committee fails to put forth a formula for how these statistics and metrics go from just that to the rankings. One year, Notre Dame gets hate for not winning a conference title; another year, one-loss Ohio State is placed at the fifth seed, outside the playoff, after winning a conference championship. And this year, BYU and Cincinnati both have 7+ wins and zero losses, one has an excellent defense and one an excellent offense, and they sit right next to each other in the strength of schedule rankings, in the bottom 75% of the sport.

Yet the separation between the two teams is large, and the Committee won’t tell us what BYU needs to do to be more favorable to them. And… well, instead of staying on my soapbox, I decided to do what the Committee never did. After this week of College Football concluded, I put together what I call the “College Football Composite Rankings.”

For all of the stat-heads out there, here is how I factored data into my composite rankings:

25% AP Poll
25% Coaches’ Poll
5% Total Wins
6% Total Losses
7% Total Win Percentage
6% Total Conference Wins
5% Total Conference Losses
4% Points Per Game
4% Opponent’s Points Per Game
5% SRS (Simple Rating System)
2% SOS (Strength of Schedule)
2% EFF (Overall Efficiency)
1% OEFF (Offensive Efficiency)
1% DEFF (Defensive Efficiency)
1% STEFF (Special Teams Efficiency)

This is where the BCS-esque rankings come in. The data is compiled, and each team receives a final score expressed as a decimal out of 1.000, just like the BCS! For example, Alabama finished this week’s rankings with a score of .979 out of 1.000. The teams are then put in order, and I take the top 25 from my list to include in the rankings.
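For anyone who wants to follow along, the weighted sum above can be sketched in a few lines of Python. This is a minimal sketch, not the exact method: it assumes each metric has already been normalized to a 0–1 scale (the article doesn’t specify the normalization), and the dictionary keys are hypothetical names for the stats listed above.

```python
# Weights taken from the article's list; metric names are hypothetical.
WEIGHTS = {
    "ap_poll": 0.25, "coaches_poll": 0.25,
    "wins": 0.05, "losses": 0.06, "win_pct": 0.07,
    "conf_wins": 0.06, "conf_losses": 0.05,
    "ppg": 0.04, "opp_ppg": 0.04,
    "srs": 0.05, "sos": 0.02,
    "eff": 0.02, "oeff": 0.01, "deff": 0.01, "steff": 0.01,
}

def composite_score(metrics: dict) -> float:
    """Weighted sum of a team's normalized metrics, rounded to three decimals.

    Any metric missing from `metrics` contributes zero.
    """
    return round(sum(WEIGHTS[name] * metrics.get(name, 0.0)
                     for name in WEIGHTS), 3)
```

One detail worth noting: the weights as listed total 99%, not 100%, so even a team that maxed out every metric would score .990 rather than 1.000 under this sketch.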

However, I apply two more “swap rules.” Here they are:

Any non-Power Five team swaps down one spot with the team below it if the score differential is within .025. (This swap can only take place once, meaning that if No. 12 drops to No. 13 because of this clause, the clause cannot be used again to drop that team to No. 14.)

If a team has a head-to-head win over a team one or two spots ahead of it, the lower team can swap with that team if the score differential is within .050. (This clause can only be used once per team, and it is applied to the lower team when a team is ranked directly between two teams it holds head-to-head wins over.)
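The two swap rules above can be sketched in code as well. This is my own reading of the rules, not the author’s exact procedure: the `(name, score, is_power5)` tuple layout and the `beat` head-to-head map are hypothetical, and ambiguities the article leaves open (such as chained swaps) are resolved with simple one-pass, once-per-team logic.

```python
def apply_swap_rules(ranked, beat):
    """Apply both swap rules to an ordered list of teams.

    ranked: list of (name, score, is_power5) tuples, best team first.
    beat:   dict mapping a team name to the set of teams it has beaten.
    Returns a new list with the swaps applied.
    """
    teams = list(ranked)

    # Rule 1: a non-Power Five team drops one spot if its score is within
    # .025 of the team directly below it. Each team drops at most once.
    i = 0
    while i < len(teams) - 1:
        name, score, is_p5 = teams[i]
        if not is_p5 and score - teams[i + 1][1] <= 0.025:
            teams[i], teams[i + 1] = teams[i + 1], teams[i]
            i += 2  # skip past the dropped team so it isn't dropped again
        else:
            i += 1

    # Rule 2: a team with a head-to-head win over a team one or two spots
    # ahead swaps with it if the score gap is within .050, once per team.
    swapped = set()
    for j in range(len(teams) - 1, 0, -1):
        name, score, _ = teams[j]
        if name in swapped:
            continue
        for k in (j - 1, j - 2):
            if k < 0:
                continue
            ahead_name, ahead_score, _ = teams[k]
            if ahead_name in beat.get(name, set()) and ahead_score - score <= 0.050:
                teams[j], teams[k] = teams[k], teams[j]
                swapped.add(name)
                break

    return teams
```

For example, with a non-Power Five team B sitting .01 above a Power Five team C, rule 1 drops B below C; then a team D with a head-to-head win over B and a score within .050 of it jumps past B under rule 2.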

So, after using statistics, media rankings, and coaches’ rankings to determine the scores, and swapping teams with the rules listed above, I get the final poll for each week. Here are the 2020 Week 14 CFB Composite Rankings:

Final Week 14 Rankings:

1) Alabama (.979)
2) Notre Dame (.951)
3) Clemson (.897)
4) Ohio State (.849)
5) Florida (.840)
6) Texas A&M (.815)
7) BYU (.842)
8) Cincinnati (.836)
9) Miami (FL) (.754)
10) Georgia (.705)
11) Indiana (.695)
12) Iowa State (.676)
13) Coastal Carolina (.692)
14) Oklahoma (.642)
15) Marshall (.637)
16) Northwestern (.560)
17) USC (.513)
18) Louisiana (.517)
19) Oklahoma State (.506)
20) Wisconsin (.493)
21) Auburn (.436)
22) Tulsa (.453)
23) Texas (.432)
24) Oregon (.416)
25) Buffalo (.398)

Photo: CFB Playoffs