Performance Over Perception: A UX Approach to Fixing the College Football Playoff
December 16, 2024
Estimated read time: 8-10 minutes
College football is more than just a game—it’s the cornerstone of American sports culture, a passion that unites fans, families, and entire communities every Saturday in the fall (especially in the South). But year after year, its postseason system leaves fans divided, frustrated, and questioning its integrity. From the lone two-team finale of the BCS era, to the four-team playoff introduced in 2014, to the 12-team format debuting in 2024, one thing has remained consistent: controversy.
This year was no exception. As conference champions were crowned and the College Football Playoff (CFP) selections announced, I couldn’t stop asking myself “Why?” For a system designed to produce a field of “the best of the best,” the process feels fundamentally broken. We all thought the CFP would be better than the BCS, but the system still prioritizes subjective opinion and bias (just with a committee now rather than a coaches poll) instead of rewarding performance.
The Problem: Subjectivity Over Objectivity
Before diving into a solution, let’s take a moment to reflect on some of the glaring issues from this year’s selection process:
- South Carolina vs. Alabama: Despite nearly identical metrics, South Carolina outperformed Alabama in key areas like loss quality. Yet Alabama’s brand power kept them in the conversation.
- Clemson’s Inclusion: A 9-3 team with zero Top 10 wins and zero Top 25 wins found its way into playoff discussions simply because they won the ACC. Does that justify a potential first-round bye?
- The “Eye Test”: This subjective measure holds far too much weight, favoring historically dominant brands over consistent performance.
- Indiana’s Free Pass: With no ranked wins, a weak schedule, and no conference championship appearance, how did Indiana make the field over more deserving teams?
- Inconsistent Treatment of Losses: Georgia took its second loss on the road to a #15-ranked Ole Miss and dropped nine spots in the rankings. Meanwhile, Ohio State suffered its second loss at home to an unranked Michigan but fell just four spots. This glaring inconsistency underscores how certain teams are held to different standards.
These questions expose a fundamental flaw in the CFP selection process: it leans far too heavily on subjective criteria while failing to prioritize measurable, performance-based metrics.
As a UX designer by day (and an obvious Armchair Coach by night), I can’t help but see the CFP for what it truly is—a flawed interface at its core, a system used to help determine a National Champion. Instead of focusing on fairness and objectivity, the system gives outsized influence to human bias and subjective opinion. Decisions are driven by the infamous “eye test” and a team’s “brand power” (let’s call it what it is—TV ratings and ticket sales) rather than the actual results on the field. It’s not only frustrating; it’s a betrayal of what makes college football so great.
And this isn’t just about rankings—it’s about trust. Trust in the system. Trust that the work teams put in, the sacrifices players make, and the moments that define a season all actually mean something. Players (and fans) pour their hearts into this game week after week, and they deserve a system that rewards performance, not perception.
In UX design, we know that trust and fairness stem from data, not intuition. It’s about eliminating bias and building systems that are clear, consistent, and free of hidden agendas. Yet the CFP committee continues to rely on subjective measures, leaving us all to wonder: What’s the point of a competitive regular season and conference championships if perception outweighs performance?
A Data-Driven Solution
It doesn’t have to be this way. The solution is clear: data-driven seeding.
By using objective metrics, we can put performance first, eliminate bias, and ensure every team earns its shot. College football is built on effort, strategy, and grit—so why not let the numbers tell the story?
The CFP should reflect the sport’s core values: competition, excellence, and (arguably) fairness. A true playoff system celebrates what happens on the field, not decisions made behind closed doors. A data-driven approach doesn’t just fix the system—it honors what makes college football great.
Eliminating Bias and Letting the Numbers Speak
Like most UX designers, my instinct is to always ask “Why?” Why does a system meant to select the best teams in college football still spark outrage every year? Why does the subjective “eye test” outweigh actual performance data? And why does human bias for historically dominant brands overshadow the achievements of deserving teams?
When thinking about what could make the system better, I focused on a few factors commonly referenced by both fans and analysts:
- Account for Strength of Schedule: Not all conferences are created equal, and teams that play tougher schedules should be rewarded for it.
- Reward Conference Championships: These games are high-stakes battles that should matter—not just for the title, but for the risk and effort involved.
- Differentiate Wins and Losses: Beating a ranked opponent isn’t the same as beating an unranked one, just as losing to a powerhouse team isn’t the same as losing to an unranked one.
A fair system goes beyond numbers—it fosters transparency, consistency, and trust. By removing bias, we can ensure the best teams earn their place and the postseason reflects the core values of the sport. College football deserves better than the politics that plague its current system.
What an Algorithm Would Look Like
Borrowing from UX methodology, let’s consider an algorithm-driven approach to playoff selection that I’ve dubbed SMART Metrics—a nod to Georgia Head Coach Kirby Smart and his team’s continued underappreciation in the playoff system (looking at you, 2020 and 2023). SMART Metrics is designed to quantify each team’s performance using clearly defined, data-driven criteria, minimizing subjectivity and building trust in the system. Here’s how I calculated team rankings using the most basic metrics:
- Strength of Schedule (SoS): Adjusted scores based on data from TeamRankings.com at the end of the regular season.
- Conference Championships: Bonus points for reaching the conference championship game (10 points) and winning it (10 additional points), with no penalty for losing it. Conference champions shouldn’t automatically receive a bye, though, because not all conferences are created equal; the Top 4 teams overall should receive those byes.
- Quality Wins: 10 points for Top 10 wins, 5 points for Top 25 wins.
- Total Wins: 1 point for each regular season win.
- Penalties for Losses: -10 points for ranked losses, -15 for unranked losses.
- Committee Influence: Teams were awarded points based on their final committee seeding (e.g., the #1 seed received 12 points, #12 seed received 1 point).
These metrics resulted in a more transparent scoring system, free of brand bias and “feel-good” narratives about teams “playing their best football right now.” As any UX designer knows, consistency and clarity are key to user satisfaction.
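For readers who think in code, here’s a minimal Python sketch of how these criteria could roll up into a single score per team. The point values mirror the list above; the TeamRecord fields, the treatment of Top 10 and Top 25 wins as separate buckets, and the assumption that the SoS input arrives as an already-adjusted number are my own simplifications, not an official formula.

```python
from dataclasses import dataclass

@dataclass
class TeamRecord:
    name: str
    sos_score: float           # strength-of-schedule score (TeamRankings.com, end of regular season)
    made_conf_champ: bool      # reached its conference championship game
    won_conf_champ: bool       # won its conference championship game
    top10_wins: int            # wins over Top 10 opponents
    top25_wins: int            # wins over opponents ranked 11-25 (assumed separate from Top 10 wins)
    total_wins: int            # regular-season wins
    ranked_losses: int         # losses to ranked opponents
    unranked_losses: int       # losses to unranked opponents
    committee_seed: int | None = None  # final committee seed (1-12), or None if unseeded

def smart_metrics_score(team: TeamRecord) -> float:
    """Combine the SMART Metrics criteria above into one score."""
    score = team.sos_score                 # strength of schedule, already adjusted
    score += 10 * team.made_conf_champ     # bonus for reaching the conference title game
    score += 10 * team.won_conf_champ      # additional bonus for winning it (no penalty for losing)
    score += 10 * team.top10_wins          # quality wins: Top 10
    score += 5 * team.top25_wins           # quality wins: Top 25
    score += team.total_wins               # one point per regular-season win
    score -= 10 * team.ranked_losses       # penalty for losses to ranked teams
    score -= 15 * team.unranked_losses     # heavier penalty for losses to unranked teams
    if team.committee_seed is not None:
        score += 13 - team.committee_seed  # committee influence: #1 seed = 12 points, #12 seed = 1
    return score
```

Score every team’s record with smart_metrics_score, sort the results, and the seeding conversation is over before the talking heads can mention a single helmet logo.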
Human Bias in the CFP: A UX Problem
Human bias is an unavoidable flaw in any system that relies on subjective decisions, and the CFP selection process is a prime example. Teams like Alabama and Ohio State are routinely favored based on reputation rather than performance, much like designing a product that prioritizes stakeholder preferences over user research and actual needs. The result? Frustrated users—or in this case, disillusioned fans and overlooked teams.
Two glaring examples of this bias are South Carolina and Alabama’s playoff consideration and the inconsistent treatment of second losses for Georgia and Ohio State.
South Carolina vs. Alabama: Ignoring Performance
University of Michigan Athletics Director and 2024 CFP Committee Chairman Warde Manuel defended Alabama’s selection, stating that despite South Carolina’s strong finish—including a significant win over Clemson—their overall record and metrics didn’t surpass Alabama’s. But when evaluated through SMART Metrics, South Carolina objectively outperformed Alabama.
Both teams had similar stats: one Top 10 win and two Top 25 wins. However, South Carolina had no unranked losses (one of its losses came against a then-ranked Alabama), while Alabama had two unranked losses. Yet Alabama was favored, once again highlighting the committee’s reliance on reputation over performance. This is like ignoring usability testing feedback in favor of subjective opinion—a practice no UX designer would recommend.
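To make that loss-quality gap concrete, here’s a back-of-the-napkin comparison using only the loss penalties defined above. It assumes three losses apiece (both teams finished the regular season 9-3) and ignores the SoS, quality-win, and committee components that the full scores account for.

```python
# Loss penalties only, using the SMART Metrics values above:
# -10 per ranked loss, -15 per unranked loss (three losses apiece assumed).
def loss_penalty(ranked_losses: int, unranked_losses: int) -> int:
    return -10 * ranked_losses - 15 * unranked_losses

print(loss_penalty(ranked_losses=3, unranked_losses=0))  # South Carolina: -30
print(loss_penalty(ranked_losses=1, unranked_losses=2))  # Alabama: -40
```

On loss quality alone, South Carolina comes out 10 points ahead, which matches the direction of the full SMART Metrics scores later in this post (8.1 vs. 0.3).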
Georgia vs. Ohio State: Double Standards
Then there’s the inconsistent treatment of Georgia and Ohio State’s second losses. Georgia’s road loss to a ranked Ole Miss resulted in a nine-spot drop, leaving them at risk of being eliminated from playoff contention if they lost the SEC Championship. Meanwhile, Ohio State’s home loss to an unranked Michigan, where they scored just one touchdown, caused only a four-spot drop.
This disparity gave Ohio State a significant advantage, keeping them in playoff contention and even earning them a home playoff game. If losses were treated fairly, Tennessee likely would have hosted Ohio State instead.
The Eye Test: Subjectivity at Its Worst
These examples highlight the danger of subjective measures like the “eye test,” which often favors historically dominant brands over actual performance. It’s akin to a judge ignoring facts and convicting someone based purely on a hunch.
Rich Clark, CFP Executive Director, insists, “The committee’s job is to pick the best teams—not based on their jersey, what they’re wearing, what conference they’re in, even their record.” But history suggests otherwise. Year after year, subjective decisions and brand bias undermine the credibility of the process—just look at last year’s controversies.
It’s clear that the CFP’s reliance on reputation and perception erodes trust in the system. It’s time for a shift toward transparency, fairness, and a true commitment to rewarding performance on the field.
When applying SMART Metrics to the 2024 playoff field, I eliminated the subjectivity of Conference Champion byes. The result? A transparent, objective ranking that puts performance first. Here’s how the updated seeding looks:
- Georgia: 74.5 points
- Oregon: 64.7 points
- Texas: 46.1 points
- Penn State: 31.3 points
- Boise State: 28.9 points
- Arizona State: 26.6 points
- Notre Dame: 24.9 points
- SMU: 20.5 points
- Ohio State: 20.3 points
- Tennessee: 17.8 points
- Clemson: 16 points
- Indiana: 9.9 points
- South Carolina: 8.1 points
- Alabama: 0.3 points
- Ole Miss: -10 points
This algorithmic approach removes the inherent bias of the “eye test” and avoids rewarding teams based on brand recognition alone. For example, conference championships carried their rightful weight without over-penalizing teams for participating in them (like SMU).
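For the last step, here’s a small sketch of how that seeding could be produced in code: sort the score table from highest to lowest, take the top 12, and hand the first-round byes to the Top 4 overall rather than to conference champions. The numbers are copied straight from the list above.

```python
# 2024 SMART Metrics scores from the list above.
scores = {
    "Georgia": 74.5, "Oregon": 64.7, "Texas": 46.1, "Penn State": 31.3,
    "Boise State": 28.9, "Arizona State": 26.6, "Notre Dame": 24.9,
    "SMU": 20.5, "Ohio State": 20.3, "Tennessee": 17.8, "Clemson": 16.0,
    "Indiana": 9.9, "South Carolina": 8.1, "Alabama": 0.3, "Ole Miss": -10.0,
}

# Seed strictly by score: the Top 4 overall get first-round byes, not the conference champions.
field = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:12]
for seed, (team, score) in enumerate(field, start=1):
    bye = " (first-round bye)" if seed <= 4 else ""
    print(f"{seed:>2}. {team}: {score}{bye}")
```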
Flaws and Future Considerations
While this algorithm is a good starting point, it has its limitations. It doesn’t account for factors like road wins and losses, crowd noise, game margins, or injuries—elements a more refined model could layer in over time, much like iterative UX design.
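One way that iteration could look in practice: keep the base score untouched and layer optional, weighted adjustments on top of it. The factor names and weights below are illustrative placeholders I made up for this sketch, not part of SMART Metrics as defined above.

```python
# Hypothetical iteration on SMART Metrics: weighted adjustments layered on the base score.
# The factors and weights here are illustrative placeholders only.
ADJUSTMENT_WEIGHTS = {
    "road_wins": 2.0,      # small bonus per road win
    "road_losses": -1.0,   # softer extra penalty per road loss
    "avg_margin": 0.5,     # points per point of average scoring margin
}

def adjusted_score(base_score: float, factors: dict[str, float]) -> float:
    """Add weighted adjustment factors to a base SMART Metrics score."""
    return base_score + sum(
        ADJUSTMENT_WEIGHTS[name] * value
        for name, value in factors.items()
        if name in ADJUSTMENT_WEIGHTS
    )

# Example: a base score of 46.1 with 4 road wins, 1 road loss, and a +12 average margin.
print(adjusted_score(46.1, {"road_wins": 4, "road_losses": 1, "avg_margin": 12}))  # 59.1
```

Because each factor is just another weighted term, new ideas can be tested one at a time and rolled back if they don’t improve the rankings, which is exactly how an iterative design process should work.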
Even in its simplicity, the algorithm highlights how data can reduce subjectivity. Applied to the Top 8 teams from the 2023 College Football Playoff selections, it produces a fairer outcome—and likely the one America wanted.
- Washington: 68.4 points
- Michigan: 67.3 points
- Georgia: 55.8 points
- Florida State: 55.7 points
- Texas: 55.3 points
- Alabama: 49.6 points
- Oregon: 32.4 points
- Ohio State: 31.6 points
Compare this to the committee’s rankings, which drew controversy for excluding Georgia and Florida State—Georgia for a single loss in the Conference Championship and Florida State due to their quarterback’s season-ending injury. The algorithm, grounded in objective data, removes such bias and leaves little room for debate.
UX Lessons for the CFP Committee
1. User-Centered Design: Focus on Fans, Players, and Teams
The CFP system should be designed for its primary stakeholders: fans, players, and teams. Just as UX designers create systems that prioritize user needs, the CFP must focus on rewarding performance and creating a process fans trust. Transparent criteria like SMART Metrics empower users to understand and trust the system.
2. Consistency Builds Trust
In UX, consistency across interactions creates predictability and builds user confidence. The CFP, however, applies its criteria inconsistently, eroding trust. Standardizing evaluations—such as weighing wins and losses the same way for every team—ensures fairness and makes the process more reliable.
3. Transparency Creates Clarity
Confusing decision-making alienates users. The CFP’s reliance on vague measures like the “eye test” leaves fans and teams questioning its integrity. Adopting clear, objective metrics like SMART Metrics eliminates confusion and strengthens confidence in the system.
4. Iterative Improvement: Evolve with Feedback
In UX, systems are never static—they evolve through user feedback and testing. The CFP must follow a similar approach, evaluating its methods and refining them annually to address inconsistencies and biases. SMART Metrics is a starting point, but factors like road wins, game margins, and injuries can be iteratively added to improve accuracy.
A Better Future for the College Football Playoff
The College Football Playoff is in desperate need of a redesign—one grounded in fairness, transparency, and performance. By applying UX principles and embracing a data-driven approach like SMART Metrics, the CFP can eliminate bias and build trust among fans and teams alike.
Much like great design, a great playoff system should be clear, consistent, and user-focused. By embracing a data-driven approach, the CFP can eliminate controversy and celebrate what happens on the field.
College football deserves better, and it’s time to design a playoff experience that reflects the integrity of the sport itself.