
Unit-Based vs. Kelly Criterion: Which Staking System Actually Works?

WagerBird Team
Mar 16, 2026 · 6 min read
1–3%: typical unit size as a share of bankroll
Kelly: maximizes growth when the edge estimate is accurate
Ruin: the risk if you oversize with bad estimates

Key Takeaways

  • Flat unit betting keeps things simple and protects you from catastrophic drawdowns.
  • The Kelly Criterion can maximize long-term growth but is extremely sensitive to mis-estimated edges.
  • Most serious bettors use a fractional Kelly or unit-based hybrid for practical risk management.

Bankroll Management: What the Conversation Gets Right and What It Keeps Getting Wrong

Every serious conversation about sports betting eventually arrives at the same two frameworks: fixed unit sizing and the Kelly Criterion. Understanding both is table stakes. Knowing their limits is where the real work begins.

This is not a knock on either approach. Both exist for legitimate reasons, and both contain real insight about how disciplined bettors should think about capital. The issue isn’t that these frameworks are wrong. The issue is that the conversation almost always stops before it gets to the part that actually matters — what drives the sizing decision in the first place, and whether the information feeding that decision is any good.

Start there, and everything downstream changes.

THE APPEAL OF FIXED UNITS

Fixed unit sizing is exactly what it sounds like. You designate a set percentage of your bankroll — typically somewhere between one and five percent — and that becomes your unit. Every wager gets the same allocation. You might stake $100 on a Monday night spread and $100 on a totals play three days later, regardless of how confident you are in either. The bet size is predetermined. The framework is the decision.

The appeal is discipline, and that appeal is legitimate. No single loss can materially damage your position. You’re not chasing. You’re not sizing up because you had a good week or because a matchup looks too clean to pass on. The system removes a layer of emotional decision-making that costs recreational bettors enormous amounts of money over time. Overconfidence is one of the most consistent and expensive behavioral patterns in sports betting, and flat staking addresses it directly by decoupling stake size from gut feeling.

There’s also a practical argument for it. Most bettors who try more sophisticated sizing systems don’t have the data infrastructure to support them. Without a reliable confidence estimate — a real one, not a feeling — variable sizing tends to become a mechanism for rationalizing larger bets on games you’re irrationally attached to. A flat unit prevents that. It imposes structure where structure is genuinely needed.

But here is the problem with stopping at flat units, and it’s a fundamental one.

Treating a 54% edge identically to a 74% edge isn’t discipline — it’s indifference to information. When the signal quality on a given play is meaningfully higher than your baseline, flat staking leaves real money on the table. Capital deployed without regard to edge magnitude isn’t protected capital. It’s under-deployed capital. The system keeps you safe from your worst decisions, but it also caps your best ones. Over a large enough sample, that inefficiency compounds in the wrong direction.

The math on this is not subtle. If you have ten plays rated at varying confidence levels and you stake each one identically, you are systematically over-allocating to your lowest-confidence positions and under-allocating to your highest-confidence ones. That is the opposite of what a market-trained approach would do, and it’s why unit-based thinking, however clean it looks on paper, runs fundamentally counter to how serious capital is actually deployed.
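
To make that mis-allocation concrete, here is a minimal sketch comparing a flat stake against edge-proportional stakes across the same total action. The win probabilities and the -110 price are invented for the illustration; nothing here comes from real market data.

```python
# Ten plays with varying (invented) win probabilities, all at -110 odds.
probs = [0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.60, 0.62, 0.64]
b = 0.909  # net odds at -110: profit per $1 staked

def ev_per_dollar(p):
    """Expected profit per $1 staked at net odds b."""
    return p * b - (1 - p)

total = 1000.0
flat = [total / len(probs)] * len(probs)          # $100 on every play
edges = [max(ev_per_dollar(p), 0.0) for p in probs]
prop = [total * e / sum(edges) for e in edges]    # stake proportional to edge

flat_ev = sum(s * ev_per_dollar(p) for s, p in zip(flat, probs))
prop_ev = sum(s * ev_per_dollar(p) for s, p in zip(prop, probs))
# Same $1,000 of total action; the edge-weighted allocation carries
# a meaningfully higher expected profit, and it stakes nothing on
# the one play that is actually negative-EV at this price.
```

The flat scheme puts $100 on the 52% play, which loses money at -110, exactly the over-allocation to low-confidence positions described above.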

WHERE KELLY COMES IN

The Kelly Criterion is the mathematically optimal answer to the problem flat staking creates. Kelly ties stake size directly to your estimated edge: the larger the advantage, the larger the position. When the edge is small, Kelly keeps you small. When the edge is real and substantial, Kelly scales up accordingly. Over a long enough sample, a bettor applying Kelly correctly will outgrow any flat-staking approach. That’s not opinion — it’s the mathematical property the formula was designed to produce.

The formula itself is straightforward. You take your estimated probability of winning, subtract your estimated probability of losing divided by the net odds (the profit per unit staked), and that gives you the fraction of your bankroll to deploy. Clean inputs, clean output. The logic is elegant, and the underlying principle — that bet size should be proportional to edge — is the correct principle.
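
The calculation reads even more plainly in code. This is the standard Kelly formula f* = p − q/b; the function name and the example numbers are illustrative, not from the article.

```python
def kelly_fraction(p_win: float, decimal_odds: float) -> float:
    """Full-Kelly bankroll fraction: f* = p - q/b, where b is the
    net profit per unit staked (decimal odds minus 1)."""
    b = decimal_odds - 1.0          # net odds: profit per $1 staked
    q = 1.0 - p_win                 # probability of losing
    f = p_win - q / b               # Kelly's formula
    return max(f, 0.0)              # never stake a negative edge

# A 54% winner at -110 (decimal 1.909) carries only a thin edge:
stake = kelly_fraction(0.54, 1.909)   # roughly 3.4% of bankroll
```

Note how quickly the recommendation falls to zero: at the same price, a 52% estimate is already a no-bet.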

The operative phrase is applying Kelly correctly. And this is where most bettors run into serious trouble.

The formula is only as good as your probability inputs. If your edge estimate is off — even marginally — Kelly compounds that error into position sizes that can turn aggressive faster than most bettors anticipate. The formula has no mechanism for humility. It takes whatever probability you hand it and optimizes accordingly. Hand it a number that’s 5% too high, and it recommends a stake that may be significantly larger than your true edge warrants. Do that across a slate of games and the variance exposure can overwhelm the actual advantage.
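
The sensitivity is easy to demonstrate. Using the same formula, compare the recommended stake for a true 54% play against the same play with a five-point overestimate; the numbers are illustrative.

```python
def kelly(p, b):
    """Full-Kelly fraction for win probability p and net odds b."""
    return max(p - (1 - p) / b, 0.0)

b = 0.909          # net odds at a standard -110 price
true_p = 0.54      # the actual win probability
est_p = 0.59       # the same play, overestimated by five points

true_stake = kelly(true_p, b)   # about 3.4% of bankroll
over_stake = kelly(est_p, b)    # about 13.9% of bankroll
# A five-point input error roughly quadruples the recommended
# position -- the formula amplifies estimation error, it never
# dampens it.
```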

This is not a theoretical concern. It’s the reason a framework that is mathematically optimal in theory has blown up more than a few sophisticated bettors in practice. The edge estimation problem isn’t a minor footnote to Kelly — it is the central challenge, and the formula itself doesn’t solve it.

THE FRACTIONAL KELLY COMPROMISE

This is why most serious practitioners use fractional Kelly. Half Kelly, quarter Kelly — the specific fraction varies, but the principle is consistent: preserve the edge-sensitive sizing logic while building in a buffer against the inevitable imprecision of probability estimation. You keep the core insight that larger edges warrant larger positions. You just don’t bet the full recommendation, because your probability estimates carry uncertainty that the formula doesn’t account for on its own.

Fractional Kelly isn’t a workaround for disciplined bettors who can’t stomach volatility. It’s a rational acknowledgment that your inputs have error bars. The fraction you apply is, in effect, an implicit confidence interval on your own edge estimates. Bet half Kelly and you’re saying: I believe my edge calculation is directionally correct, but I’m not willing to assume it’s precisely correct. That’s not timidity. That’s calibration.

The fractional Kelly approach also has meaningful psychological benefits that compound over time. Full Kelly can generate swing sizing — stakes that move dramatically from one play to the next as edge estimates fluctuate — and most bettors can’t execute that cleanly over a long season without emotional interference. Fractional Kelly smooths that volatility. The sizing still responds to edge, but the drawdown exposure stays manageable enough to maintain process discipline when losing stretches inevitably come.
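
The drawdown-smoothing claim can be checked with a quick simulation. This sketch stakes full Kelly versus half Kelly on the same simulated sequence of identical bets and records the worst peak-to-trough drawdown; the win probability, odds, and seed are all arbitrary choices for the illustration.

```python
import random

def max_drawdown(kelly_mult, n_bets=500, p=0.55, b=0.909, seed=7):
    """Worst peak-to-trough drawdown of a bankroll staking a given
    multiple of full Kelly on repeated identical bets (all
    parameters are illustrative)."""
    full = p - (1 - p) / b              # full-Kelly fraction
    rng = random.Random(seed)           # fixed seed: same bet outcomes
    bank, peak, worst = 1.0, 1.0, 0.0
    for _ in range(n_bets):
        stake = bank * full * kelly_mult
        bank += stake * b if rng.random() < p else -stake
        peak = max(peak, bank)
        worst = max(worst, 1 - bank / peak)
    return worst

full_dd = max_drawdown(1.0)   # full Kelly: deeper trough
half_dd = max_drawdown(0.5)   # half Kelly, same outcomes: gentler swings
```

Because the seed is fixed, both bettors see the identical win-loss sequence; the only difference is sizing, and the half-Kelly bankroll always suffers the shallower worst drawdown.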

WHAT BOTH FRAMEWORKS MISS

Here’s where the conversation has to go further than it usually does.

Both fixed units and Kelly answer the question of how much to bet. Neither one answers the harder question underneath it: what is the quality of the information driving that bet size?

Unit-based betting answers the sizing question with a fixed number. Kelly answers it with a formula. But in both cases, the bettor still has to generate the underlying edge estimate on their own. The model is only as good as what goes into it.

Most bankroll management content treats edge estimation as a given. You know your edge, now size appropriately. But in real sports betting markets — against a counterparty that employs quantitative analysts, monitors line movement globally, and sets prices based on far more information than any individual bettor has access to — the assumption that you know your edge is the assumption that does the most damage.

The price in a sports betting market is not neutral. It is not a prediction, and it is not a consensus estimate. The opening line is set by a market maker who has their own position, their own exposure, and their own view on where money is going to flow. The juice — the premium attached to one side of a market — is an expression of that view. When a book charges more to access a particular side, they are communicating something. Either they believe that side wins and want compensation for the risk, or they believe the public is going to hammer that side and they’re pricing it to extract maximum margin from predictable behavior.

The distinction between those two scenarios is the most important analytical read in sports betting. A side that is expensive because the book has genuine conviction — what we call a Conviction play — is a signal to follow. A side that is expensive because the book is trapping public money onto a predictable loser — what we call a Bait setup — is a signal to fade. The number on the board is secondary. The price tells you more.

This is where the Levitt research is critical. The 2004 study established something the betting industry still hasn’t fully absorbed: bookmakers are not trying to balance action. They are not simply setting lines to attract equal money on both sides and lock in the vig. They are taking positions. They exploit systematic bettor biases — the documented tendency of the public to overweight favorites, to overvalue popular teams, to chase narrative — and they price accordingly. They increase gross profits 20 to 30 percent by doing so. The line is not the prediction. The line is inventory management.

Once you understand that, the edge estimation problem looks fundamentally different. Your model’s output is one input. The book’s pricing behavior is another. When those two signals align — when your quantitative edge and the book’s expressed conviction point in the same direction — the confidence in that position is materially higher than either signal would warrant alone. When they diverge, particularly when your model shows a large edge and the book’s pricing suggests a trap, the right move is often to invert the model and follow the book’s read. Four or more points of model edge in a direction that conflicts with book psychology is not a strong play. It is a warning sign.

SIZING AS A FUNCTION OF A REAL READ

At WagerBird, the sizing decision begins with the confidence score — a calibrated measure of edge that integrates model output with book psychology signals and market context. Proportional sizing from a scored position isn’t a workaround for Kelly. It’s Kelly operationalized for how these markets actually function, where the price is a signal, the juice is a confession, and your probability estimate has to account for what the market maker already knows and is expressing through the cost of access.

The confidence score drives the position. Higher confidence warrants a larger allocation within the risk envelope you defined at the start. Lower confidence means a smaller stake or no stake at all. The system doesn’t treat a 59-point play the same as an 84-point play, and it shouldn’t. Those are different things with different expected values, and allocating to them identically isn’t discipline — it’s a missed opportunity at best and a structural inefficiency at worst.
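
The article does not specify how a confidence score translates into a stake, so the following is only a hypothetical sketch of the general shape: no allocation below a floor, then linear scaling up to a predefined risk cap. Every name, threshold, and number here is invented for illustration and is not WagerBird's actual method.

```python
def stake_from_confidence(score: float, bankroll: float,
                          max_risk_pct: float = 0.03,
                          floor: float = 50.0) -> float:
    """Map a 0-100 confidence score to a dollar stake.

    Hypothetical scheme: scores below `floor` get no allocation;
    above it, the stake scales linearly up to `max_risk_pct` of
    bankroll at a score of 100. All thresholds are illustrative.
    """
    if score < floor:
        return 0.0                                   # below floor: no bet
    scale = (score - floor) / (100.0 - floor)        # 0.0 at floor, 1.0 at 100
    return bankroll * max_risk_pct * scale

# A 59-point play and an 84-point play get very different stakes:
low = stake_from_confidence(59, 10_000)    # small allocation
high = stake_from_confidence(84, 10_000)   # several times larger
```

Under this toy mapping the 84-point play draws nearly four times the capital of the 59-point play, which is the structural point: identical staking on those two positions would ignore the difference in expected value.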

Risk control still comes first. That’s not a platitude — it’s the sequence. You define what you’re willing to put in action before a single position is sized. Define the total exposure, establish what a drawdown looks like, set the boundaries. That work happens before anything else. But once the risk envelope is established, the framework governing how you allocate within it matters enormously. Flat units leave edge on the table. Uncalibrated Kelly puts you in front of your own assumptions. And both approaches, applied without a serious read of the market, treat the sportsbook as a passive counterparty when it is anything but.

The goal is a system that survives long enough for the edge to compound. Survival is not separate from profitability — it is the prerequisite for it. A bettor who protects their position during bad variance and stays disciplined to their process during good runs gives their edge time to express itself. That requires precision about what the edge actually is, where it comes from, and what the market is telling you about whether you’ve actually found it.

Define the risk. Measure the edge — all of it, not just your model’s output. Then size accordingly.

That’s the framework. Everything else is noise.