TL;DR
- Points and finishing positions can hide who actually executed better; simulations let you compare teammates under equalised conditions.
- You’ll learn how to separate car performance, race luck, and driver execution using scenario-based modelling.
- You’ll learn which inputs matter most (qualifying conversion, tyre life, clean-air vs traffic, penalties/DNFs) and how to stress-test them.
- Run your own teammate comparison in the Drivers tool and read the results as ranges, not as a single “who’s faster” verdict.
Teammate comparisons are the cleanest dataset in Formula 1—same team, same engineering group, broadly the same car concept. And yet they’re also one of the easiest analyses to get wrong, because the public-facing outputs (points, podiums, headline results) are a noisy mix of execution and randomness. If you want to compare teammates fairly, you don’t need a hotter take. You need a way to hold conditions constant and ask: when the world is equal, who extracts more? That’s exactly what simulations and calculators are for, and it’s why RaceMate treats teammate comparison as a modelling problem first—then a narrative.
Why points alone are a bad teammate metric (even in the same car)
Points feel objective because they’re discrete and official. But points are a compressed record of a weekend: they bundle qualifying, race pace, tyre management, strategy, traffic exposure, Safety Car timing, reliability, penalties, and even whether a driver happened to be the one asked to take an “offset” strategy for the team.
That compression matters because F1 points are not linear. The gap between P2 and P4 is six points (18 vs 12) and can hinge on only a few seconds of pace across a full race distance; the gap between P10 and P12 is a single point, and can come down to one poorly timed Virtual Safety Car rather than any meaningful performance difference. Since 2025 there's no fastest-lap bonus point, which helps remove one incentive-driven distortion, but the points table is still a steep step function: small execution differences can look massive in points, and massive underlying differences can be disguised by a single DNF.
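To make the step function concrete, here's a minimal sketch using the current top-ten points scale (25-18-15-12-10-8-6-4-2-1, no fastest-lap bonus since 2025):

```python
# Current F1 points for the top 10 finishers (no fastest-lap bonus since 2025).
POINTS = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}

def points_for(position: int) -> int:
    """Points scored for a finishing position (0 outside the top 10)."""
    return POINTS.get(position, 0)

# A two-place difference is worth very different amounts depending on where it happens.
print(points_for(2) - points_for(4))    # P2 vs P4   -> 6
print(points_for(10) - points_for(12))  # P10 vs P12 -> 1
```

Same two-place gap on track, a six-to-one difference in points. That asymmetry is why a points table can amplify or bury a genuine pace difference.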
If you want to answer “who is better” in a way that stands up over time, you need to ask a more precise question: given equalised conditions, what is the expected advantage, and how often does it flip? That shift—from single outcomes to distributions—is the core reason to use an F1 calculator or simulator for teammate analysis.
What “equalised conditions” actually means in F1 modelling
Equalising conditions doesn’t mean pretending both drivers had identical weekends. It means removing or controlling variables that are not primarily about execution, so the remaining delta is easier to interpret.
In practice, equalisation usually means some combination of:
- Equal car performance assumptions: treat the baseline pace potential as the same, so you’re measuring who converts potential into lap time and points.
- Controlled reliability and incident noise: reduce the impact of DNFs, random punctures, or one-off failures so you don’t overfit to bad luck.
- Normalised strategy environment: remove the advantage of being the “priority” car on strategy, or test both drivers under the same strategic risk profile.
- Traffic and track-position sensitivity: separate “good in clean air” from “good in dirty air” by running scenarios with different overtaking difficulty.
You’re not trying to erase reality; you’re trying to answer a counterfactual: if we replay this season 1,000 times with the same underlying driver traits, what does the average gap look like, and what creates the tails?
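That replay can be sketched as a small Monte Carlo loop. Every number here is an assumption chosen for illustration (where the car "belongs" on a given weekend, the execution noise, the DNF rates), not a calibrated model:

```python
import random
import statistics

POINTS = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}

def race_points(position: int) -> int:
    """Points for a finishing position (0 outside the top 10)."""
    return POINTS.get(position, 0)

def season_margin(pace_edge: float, dnf_a: float, dnf_b: float,
                  races: int = 24, rng=random) -> int:
    """Driver A's points margin over B for one replayed season.

    pace_edge: A's average finishing-position advantage (illustrative units).
    dnf_a / dnf_b: per-race retirement probabilities (assumptions).
    """
    margin = 0
    for _ in range(races):
        car = rng.uniform(3, 7)  # where the car 'belongs' this weekend (assumption)
        pos_a = max(1, round(car - pace_edge / 2 + rng.gauss(0, 1.5)))
        pos_b = max(1, round(car + pace_edge / 2 + rng.gauss(0, 1.5)))
        pts_a = 0 if rng.random() < dnf_a else race_points(pos_a)
        pts_b = 0 if rng.random() < dnf_b else race_points(pos_b)
        margin += pts_a - pts_b
    return margin

# Replay the season 1,000 times with the same underlying traits.
margins = [season_margin(pace_edge=0.6, dnf_a=0.05, dnf_b=0.05) for _ in range(1000)]
print(statistics.mean(margins))    # the average gap
print(min(margins), max(margins))  # the tails
```

Even with a fixed underlying advantage, the tails are wide: a handful of DNFs or noisy weekends can produce seasons where the "slower" driver wins the head-to-head. That spread is the thing a single real season can't show you.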
To do that in a repeatable way, start your work inside the Drivers tool. The point is to build a comparison workflow you can reuse, not a one-off argument.
The RaceMate workflow: compare teammates like a modeller, not a commentator
A good teammate comparison has three layers: a baseline, an equalised re-run, and a stress test. The outcome isn’t “Driver A is faster.” The outcome is a map of where the advantage comes from and when it disappears.
1) Baseline: what happened (without pretending it’s the truth)
Begin with the straightforward comparison: qualifying head-to-head, race finishes, points, and any obvious context (grid penalties, DNFs, sprint weekends). This baseline is useful because it anchors your model in the real season shape.
But treat it as a starting distribution, not a verdict. If the baseline gap is driven by two races where one driver lost a massive points haul to a mechanical failure, you already know your next step: test what happens when that failure mode is equalised.
Use the Drivers tool to set the comparison window (season-to-date, last N races, or a specific era) and to keep the baseline consistent across sessions. The goal is not to cherry-pick—it’s to make sure your “what happened” snapshot is at least internally coherent.
2) Equalised run: hold the environment constant and isolate conversion
Once you have a baseline, re-run the comparison under controlled assumptions. In teammate terms, “conversion” is where most of the meaningful separation lives:
- Qualifying conversion: how often does a driver land the lap when the tyre is in the window and the fuel is low? This is execution under pressure, and it matters disproportionately on track-position circuits.
- Race conversion: given similar pace potential, who turns it into a clean stint structure—few mistakes, good tyre life, stable lap time under changing conditions?
- Opportunity conversion: when a big result becomes possible (Safety Car, alternate strategy, late-race restart), who captures it without adding unacceptable downside?
Equalised simulation is where you stop arguing about “luck” and start quantifying it. If Driver A beats Driver B by 40 points in reality, but only by 8 points under equalised reliability and penalty noise, you’ve learned something actionable: the headline gap was amplified by rare events. Conversely, if the gap stays large even after equalisation, that’s a stronger signal of execution.
Run this as a scenario in the Drivers tool, then look for two outputs: the expected advantage and the spread (how often the advantage flips). A gap that persists across many equalised runs is more meaningful than a gap that relies on one or two outlier weekends.
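Given a set of per-run points margins, both outputs take a few lines to compute. The margins below are made-up illustrative values, not real results:

```python
import statistics

def summarise(margins):
    """Summarise equalised runs: expected advantage and how often it flips."""
    expected = statistics.mean(margins)
    flip_rate = sum(m < 0 for m in margins) / len(margins)  # runs where B beats A
    return expected, flip_rate

# Illustrative per-run points margins (A minus B) from an equalised scenario.
margins = [8, 14, -3, 22, 6, -11, 9, 17, 2, -5]
expected, flip_rate = summarise(margins)
print(f"expected gap: {expected:+.1f} pts, flips in {flip_rate:.0%} of runs")
# -> expected gap: +5.9 pts, flips in 30% of runs
```

An expected gap of +5.9 points with a 30% flip rate is a very different claim from "A is better": it says the advantage is real but fragile.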
3) Stress test: change the track mix and see who benefits
The most common teammate-analysis mistake is assuming performance is uniform. It isn’t. Driver traits interact with track characteristics:
A qualifying-strong driver will look “dominant” on high track-position circuits because starting position locks in points. A driver with better tyre life may look average in qualifying but repeatedly overperform on high-deg, high-thermal races where stint management creates strategic flexibility. If you only look at points, you can mislabel track fit as driver superiority.
Stress-testing means re-running the comparison under different season environments: more high-deg races, more street circuits, more high-speed circuits, more rain variance, more Safety Car probability. You don’t need perfect forecasting; you need to see whether one driver’s advantage is robust.
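A stress test of this kind can be sketched as re-weighting the calendar. The per-race margins by circuit type are assumed numbers for illustration, standing in for what your equalised runs would estimate:

```python
# Illustrative per-race expected points margins (A minus B) by circuit type.
# These trait numbers are assumptions for the sketch, not measured values.
MARGIN_BY_TYPE = {"street": 2.1, "high_deg": -1.4, "high_speed": 0.6}

def season_gap(calendar: dict) -> float:
    """Expected season gap for a calendar given as {circuit_type: race_count}."""
    return sum(MARGIN_BY_TYPE[t] * n for t, n in calendar.items())

balanced  = {"street": 8, "high_deg": 8,  "high_speed": 8}
deg_heavy = {"street": 5, "high_deg": 14, "high_speed": 5}

print(season_gap(balanced))   # A ahead on the balanced calendar
print(season_gap(deg_heavy))  # the sign flips when high-deg races dominate
```

If the sign of the gap depends on the calendar mix, you haven't found a better driver; you've found a better fit for a particular set of circuits.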
This is where RaceMate’s platform approach matters: teammate comparison isn’t isolated from championship modelling. If you’re evaluating “who should be backed for the title push,” you’re implicitly asking how their traits will convert into points in the remaining calendar. Use the Drivers tool to identify which traits are driving the gap, then sanity-check the points impact by running a broader scenario in the Season Simulator.
Interpreting the output: what a “fair” teammate gap looks like
A fair simulation-based comparison doesn’t deliver a single finishing order. It delivers a range of outcomes and the reasons behind that range.
Here’s how to read it without turning it into fake certainty.
First, prioritise the median or expected gap over the “most likely” single outcome. In a high-variance sport, the mode can be misleading—one cluster of common outcomes can exist alongside a long tail of rare but decisive swings.
Second, look for flip frequency: how often does Driver B beat Driver A across runs? A driver who loses on average but wins a meaningful fraction of simulated seasons may be more valuable than the points table suggests—especially if their upside aligns with championship needs (for example, better at capitalising on chaotic races).
Third, separate pace delta from execution delta. Pace delta is the underlying speed potential; execution delta is how much of that potential becomes points when you add starts, traffic, tyre wear, and decision-making. In teammate comparisons, execution is often the differentiator.
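One way to make that split concrete is an additive sketch: estimate the expected gap from the pace component alone, then add the execution component, and attribute the difference to execution. The per-race figures below are assumptions for illustration:

```python
def expected_gap(pace_delta_pts: float, execution_delta_pts: float = 0.0,
                 races: int = 24) -> float:
    """Expected season points gap under a simple additive sketch:
    per-race gap = pace component + execution component (both assumed)."""
    return races * (pace_delta_pts + execution_delta_pts)

pace_only = expected_gap(pace_delta_pts=0.2)                      # speed potential alone
full = expected_gap(pace_delta_pts=0.2, execution_delta_pts=0.5)  # add conversion
print(full - pace_only)  # the share of the gap attributable to execution
```

In this toy split, execution accounts for most of the season gap despite the pace delta getting the headlines, which is exactly the pattern teammate comparisons tend to show.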
Finally, be honest about assumptions. If your model treats reliability as equal and strategy as neutral, say so. Those are choices, not facts. The purpose of a tool-first workflow is that you can change those choices and see whether your conclusion survives.
Common misunderstandings (and how simulations keep you honest)
Two myths dominate teammate debate.
The first is that “same car = perfectly equal.” In reality, teammates share a concept, not an identical experience. Upgrade timing, setup direction, parts allocation, and team-level strategy decisions all introduce asymmetry. Equalised simulations don’t deny that—they let you explore what the comparison looks like when you remove the asymmetry, so you don’t mistake circumstance for skill.
The second myth is that “simulation = prediction.” A RaceMate run is a structured what-if, not an oracle. The most valuable outcome is often learning that two drivers are separated by a small average gap but large variance, which tells you the debate shouldn’t be about who’s “better,” but about which driver profile is more robust under the remaining uncertainty.
If you want one practical rule: treat simulation outputs like you’d treat tyre degradation estimates—use them to guide decisions, not to claim inevitability.
Conclusion: make teammate analysis reusable, not reactive
The best teammate comparisons are calm, repeatable, and explicit about what’s being held constant. Points will always matter, but they’re a result, not a diagnostic. If you want to know who executed better—and why—run the same season under equalised conditions, stress it with realistic uncertainty, and read the gap as a distribution.
Start your comparison in the Drivers tool. Run an equalised scenario, then change one assumption at a time until you understand what’s actually moving the result. That’s how teammate debate stops being loud and starts being useful.