Hacker News
The Kelly criterion: How to size bets (2019) (paulbutler.org)
271 points by airstrike on Feb 8, 2022 | hide | past | favorite | 122 comments



Past related threads:

Kelly Criterion – how to calculate optimal bet sizes - https://news.ycombinator.com/item?id=27431627 - June 2021 (129 comments)

Kelly Criterion - https://news.ycombinator.com/item?id=26834333 - April 2021 (194 comments)

Log Optimal Betting – An interactive demo of the Kelly criterion - https://news.ycombinator.com/item?id=23894425 - July 2020 (1 comment)

Kelly Criterion - https://news.ycombinator.com/item?id=21559782 - Nov 2019 (1 comment)

Kelly Criterion (2007) - https://news.ycombinator.com/item?id=18484631 - Nov 2018 (96 comments - great top subthread!)

Kelly Criterion in detail - https://news.ycombinator.com/item?id=13143821 - Dec 2016 (8 comments)

Kelly Criterion in detail - https://news.ycombinator.com/item?id=9401821 - April 2015 (1 comment)

Kelly Criterion For Sport Betting - https://news.ycombinator.com/item?id=2504222 - May 2011 (23 comments)

Ed Thorp: It is realistic to multiply your capital by 1000 using Kelly Criterion - https://news.ycombinator.com/item?id=1119318 - Feb 2010 (1 comment)


sorta an obsession, huh


Think of it more as spaced repetitive learning for the HN crowd.

I remembered what the Kelly Criterion was (an ideal betting method) but could not remember the formula. Now I might remember both the next time it comes up. And maybe use it in my life!

I.e., should I play the lottery?

1/45M - (1 - 1/45M)/1M

≈ 0.00000002 - 0.000001

= -0.00000098

so, no.
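A quick sanity check of that arithmetic, using the standard Kelly fraction f = p - q/b (the 1-in-45M jackpot probability and 1M-to-1 payout are the rough figures assumed above):

```python
# Kelly fraction for a simple bet: f = p - q/b, where p is the win
# probability, q = 1 - p, and b is the net odds paid out on a win.
# The 1-in-45M jackpot chance and 1M-to-1 payout are rough assumed figures.
p = 1 / 45_000_000
b = 1_000_000
f = p - (1 - p) / b
print(f)  # about -9.8e-07: negative, so the optimal Kelly bet is nothing
```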


Not really. I do it because a lot of readers like looking at past threads and because I think it's good to reflect the community accurately to itself—in this case, to point out how rich the archives are.


Trying to avoid site amnesia, like Reddit has, I assume.


Kelly criterion is how I explain the importance of insurance. Insurance is a net loss and you’re better off on average not having it, but you’re betting your entire wealth on that working out.


This is precisely why I absolutely always get house and car insurance but never ever pay for an extended warranty.


House (if you have a mortgage) and car insurance are not optional. The bet you'd be making by not having them has much higher stakes.


I haven’t carried collision insurance for decades. There was one exception, when I financed an electric car at 0% for a while. The lender required comprehensive insurance, most of the cost of which I viewed as the finance charge; I paid off the loan and dropped collision coverage once the balance got low enough to make the imputed interest rate unattractive.

You need liability insurance (until you’re ridiculously wealthy perhaps), but fire/structure coverage and collision coverage are much more optional if the home/car value is low enough relative to your net worth.


You can get minimal "liability-only" car insurance (at least in the USA) and still be legally covered.


Called third party in New Zealand, I assume it's basically the same thing.


Likewise, I have car insurance but not contents insurance for my house. I can just buy a bed and a laptop if my (rented) home burns down, everything else is really non essential and can be picked up again slowly.


Living in coastal Florida, it makes no sense. My goal is to pay off my mortgage and drop home owners insurance.


Absolutely correct. Measured in log wealth (call it growth, or utility, or whatever you want), insurance is sometimes a net gain -- even if the straight arithmetic expectation would indicate it's a loss.

I forget who, but one person I respect a lot in risk management said that "the only surprising thing about insurance is that people don't get more of it."

But of course, it's not always a net gain. And even a rule of thumb like "for expensive things insure, otherwise don't" has problems. That rule would have led me to get unemployment insurance, but doing the numbers I realised it would be a net loss, even considering the Kelly criterion.

You have to make the computations each time to bet rationally.


> the only surprising thing about insurance is that people don't get more of it.

That's a weird formulation. To me, the most surprising thing is that people insure... the wrong things. I mean, there are exactly two reasons to take insurance:

1. The economic effects of the event you insure against are beyond your comfort zone.

2. You are for some reason confident that the insurance company has miscalculated your risk profile.

Now, I'll ignore the second one. The question is: why do people take travel insurance that covers lost sunglasses? To me, it would seem obvious that the equilibrium is for people to choose deductibles as high as they can afford/are comfortable with, which means cheaper insurance for two reasons: the insurance company pays out less money and needs to do less work per paid-out dollar. But high-deductible insurance is almost non-existent.


> The question is: why do people take travel insurance that covers lost sunglasses?

I think the answer could be a quite mundane appeal to 'bundling'. I.e., I get very comprehensive travel insurance when travelling to the US due to the punitive cost of healthcare (your (1)). This comprehensive insurance can often cover unrelated and unneeded items, but it's hard to slice and dice the coverage exactly.


You're right, of course. I think the formulation was aimed more at general life events that we don't even think of as insurable. I can't recall any concrete examples from that person, whoever it was, but one thing I've long wanted is insurance against bad weather on family outings planned well in advance. Some days when I have more important meetings to get to farther away, I would also like insurance against unexpected traffic jams and other holdups.

Essentially, we make all these tiny little bets every day, and while it's possible to hedge them[1], it's complicated compared to just straight-up money-based insurance.

[1]: I can for example reserve a table at a fancy restaurant halfway to the important meeting, so if I get stuck on my way and miss the meeting I can at least get a nice meal out of it, at the cost of the table reservation.


Yep. If you can survive catastrophic loss (for some appropriate definition of survive), then insurance is always a losing proposition.


> always a losing proposition

It is not always, and in fact there are common cases where it has significant positive expected outcomes.

Let's assume that you have sufficient wealth (say $40MM) that you can pay for a massive medical bill out of pocket. Let's furthermore say that you know, due to a hereditary illness in your family, that you have a 90% chance of being on the hook for a very large (say $1MM) bill when you're in the age range 20-30.

An American health insurance company legally cannot charge you more just because of preexisting conditions or family history, so health insurance will be a winning proposition for you.

Similarly, if you have information that the insurance company does not have, then you can "win" at other forms of insurance. If you have an ex-boyfriend who is prone to stealing bikes or setting homes on fire, then insurance covering those will have a higher expected value to you, and the insurance company is unlikely to account for that increased risk.

If you happen to know you're a bad driver, but have never been in an accident (only close calls), you might look normal on paper, and thus get a rate that has positive expected returns for you.

Said another way, insurance is not always a losing proposition. It can be a winning proposition if the insurance company doesn't understand the risks correctly or if laws prevent the company from accounting for certain risks.


Another easy counterexample is an insurance company's public financial disclosures where the payouts > collected premiums.[0] Not sure I'd argue there was a moral information hazard, but, in retrospect, it's favorable risk transfer per $ for the insured.

[0] https://www.reuters.com/business/life-insurers-adapt-pandemi...


It isn't if you know your own risk is significantly worse than the average buyer. This is called adverse selection.


It's a really interesting (to me) example of how a trade can be mutually beneficial when there's no obvious difference (like trading an excess of chickens for something you lack). The odds on the bet are/can be the same for both sides yet getting insurance can make sense as can selling insurance.


There is an obvious difference though. As an individual, you provide a small amount to a collective pool kept by a third-party. You can't do much with the effective amount, but the combined wealth of all of an insurers premiums is quite a large amount of money.

As an insurer, you're essentially a deposit-only bank with some special cases that allow you to withdraw (make a claim). You have a large amount of money that you can now invest and make actually meaningful profits. What you trade for that is the assurance that should someone require a large sum of money for a loss, you will pay for it.

I don't really know where people get the idea that insurance companies are just sitting on the stacks of cash they pull in from premiums. You have a minimum reserve you have to keep to ensure you have enough to cover claims, just like a bank doesn't have your cash on hand all the time. (You all should watch It's a Wonderful Life sometime.) The long-run expected gain from a "properly priced" insurance product is net zero (in reality it's usually a loss). They make money off your money, not directly off selling insurance.


I dunno, for-profit insurance has always struck me as something predatory by nature. The only functional difference between the two sides is that the sellers of insurance have capital to absorb loss--sure, there's some overhead in organization and risk calculation, but fundamentally it's leveraging the wealth of a few people so they can extract some wealth from many more.


>fundamentally it's leveraging the wealth of a few people so they can extract some wealth from many more

The insurance company isn't paying you out of the coffers of their wealthy owners. They're paying you out of the coffers from you and all your fellow insurance buyers' premiums. The primary transferring of money is from the little guy... to the little guy. Any massive profits come from skimming a tiny bit of that at a very large scale. And competition pretty much dictates that that won't be predatory.


What I mean is that some group of people earns dividends from the profit of the insurance company (stockholders or private owners), the insurance company earns profit by charging each little guy more than they are expected to cost.

When I say 'inherently predatory', I don't mean that every insurance company is doing the equivalent of loan sharking, I mean that the concept of mandatory participation in a system that then also makes its own determination of how much wealth is reasonable to take from you in exchange for its service seems fundamentally immoral.

You know, when I write it like that, this applies to life in basically any situation --

* fully-capitalist (have to work, a market you have no control over determines your value)

* fully-communist (have to work, a government you have no control over determines your value)

* completely anarchist (have to work, the conditions of the world around you determine how much work you need to do to stay alive)

So I think what I'm actually saying might just be 'life isn't fair' which is kinda banal.


You can think of an insurer as someone who provides a service of pooling resources together to help those who got unlucky. That service is worth something and is not necessarily predatory.


I think the idea of that service being all three of mandatory, not owned by its participants, and profit-driven is what causes it to feel inherently predatory -- that there exist companies which take on risk on behalf of entities who cannot take on that risk (and therefore must participate), and then profit by charging those entities more than they are expected to cost and using the pool of wealth created as investment funds for which only owners of that company derive benefit.

The service is obviously not worthless--some entity has to assess risk, coordinate, and administrate--but it is predicated on terms that make it (I would assert) immoral to transfer more than is necessary from those that participate in the service to those that own the service.


I'm not sure I understand how Kelly applies to insurance, as by definition the Kelly of any -EV bet is 0. Can you elaborate on how to do a Kelly calculation with a wager of negative expected value? Or what am I missing?


In real life, a "bet" in this analysis can be any situation where your outcome depends on the result of a random event.

Every time you don't take insurance you're betting all your wealth on there being no ruin-level disaster.


The Kelly bet for any negative arithmetic EV is 0 when you are fully in control of the bet size. Sometimes you're forced to bet, and then you might want to hedge your bet to avoid large losses which will set you back more than the insurance, when you look at it from a growth perspective.

In other words, when you are choosing between "a loss" and "no loss", then the correct Kelly bet is of course "no loss".

However, when you are choosing between "a loss" and "a different loss", which is the case when it comes to insurance, then you need to whip out your slide rule and do the numbers.

This is an example I used with another group of people in another context:

----

Let's say you get the opportunity to try to hover a helicopter close to ground, for whatever reason. There's a real pilot next to you who will take control when you screw up (because hovering a helicopter is hard!)

However, there's a small (2 %) chance you will screw up so bad the other pilot won't be able to recover control and you crash the helicopter. You will be fine, but you will have to pay $10 k to repair the helicopter, if that happens.

You can get insurance before you go, which will cover $6 k of helicopter damage (so even with insurance, you have to pay $4 k in addition to the insurance premium if you crash), but cost you $150 up front.

Do you pay a $150 premium to reduce an unlikely (2 %) loss of $10 k down to a still sizeable $4 k?

----

If we do the arithmetic expectations, we'll find that the expectation is negative $200 if you don't go for the insurance, and negative $230 with the insurance. So always skip the insurance, right?

Not so fast. That would be correct if we had effectively infinite money in the bank: then we could repeat that "no insurance" bet over and over and realise the arithmetic expectation.

In practice, we don't always have an effectively infinite amount of money compared to the losses in question, so we need to consider how the growth of what we have is affected by the losses.

----

The answer then, according to the Kelly criterion, is "it depends". Specifically, it depends on how much money you have in the bank.

If you have more than $35 k in the bank, then the $10 k loss is small enough to not affect the growth of your money significantly. If you have less than that, the $10 k loss is sizeable enough that it's worth spending $150 to reduce it down to $4 k.

Going the other way around, if you have $20 k in your bank, you should be willing to spend as much as $186 on the insurance, to protect against the risk of halving your available money.

----

In terms of how the calculation is done: I find it easiest to do it the way Bernoulli did it back in the 1700s when he invented the mathematical formulation of the Kelly criterion: use the geometric expectation. (This is equivalent to the arithmetic expectation of the log.)

Here's the equation set up to compute what wealth is needed to decline the insurance in the helicopter example: https://www.wolframalpha.com/input?i=solve+w%5Ep+*+%28w+-+L%...

(Hopefully the symbols are obvious, but in case they are not:

- w = current wealth

- L = loss with no insurance

- p = probability of no adverse event

- q = 1 - p = probability of adverse event

- c = premium of insurance

- d = deductible of insurance)
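For anyone who wants to reproduce the roughly $35 k break-even figure without Wolfram Alpha, here is a bisection sketch in Python using the same symbols; all the numbers are the example's assumptions from above:

```python
import math

# All figures below are the helicopter example's assumptions:
p, q = 0.98, 0.02   # probability of no crash / crash
L = 10_000          # loss with no insurance
c = 150             # insurance premium
d = 4_000           # amount still paid out of pocket with insurance

def growth_gap(w):
    """Geometric-growth advantage of insuring at wealth w (positive means insure)."""
    with_ins = p * math.log(w - c) + q * math.log(w - c - d)
    without = p * math.log(w) + q * math.log(w - L)
    return with_ins - without

# Bisect for the break-even wealth where both choices grow equally fast.
lo, hi = L + 1, 1_000_000
for _ in range(100):
    mid = (lo + hi) / 2
    if growth_gap(mid) > 0:
        lo = mid  # still worth insuring here: break-even is higher up
    else:
        hi = mid
print(round(lo))  # roughly 35000: above that wealth, skip the insurance
```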


It's strange to me how you overcomplicate it all by thinking about growth instead of utility of money. Once you switch everything becomes easier and more intuitive. You will also see why Kelly is not a good guideline for most cases as it's utility that matters (by the very definition of it) and not growth.


I commented elsewhere already, but I have a blog post where I go through some examples of applications of the Kelly Criterion, including two that are related to insurance: https://blog.paulhankin.net/kellycriterion/


You're really betting your net worth, with odds of, say:

    p = 0.995: net worth - insurance payment
    p = 0.005: 0 (or worse)


It is not the same. The Kelly criterion is for when your capital is required to make you money; without capital you can't make bets so you'll be ruined forever. Without insurance you risk losing all your money, but if you have a day job, you'll eventually make it back again.


It is the same. It takes the same time for your savings to go from $1 k to $10 k as it does from $10 k to $100 k.

In other words, if you have $10 k in savings, and then skip the insurance and go down to $1 k, you might end up retiring with $10 k. Whereas if you do take the insurance and your savings stay at roughly $10 k, you can retire with $100 k – given the exact same day job, etc.

Your savings compound. That's the only requirement of the Kelly criterion, regardless of everything else.


> Your savings compound. That's the only requirement of the Kelly criterion, regardless of everything else.

The use case of the Kelly criterion is to determine the optimal size of your capital to put at risk, where the profit you're expecting to make is linearly related to that size.

In your example you have savings which will bring in some percentage no matter what. And you have the choice of taking an insurance or not. Taking the insurance would cut your profit, but most likely not in the linear way that the Kelly criterion assumes when it calculates the optimal bet size.


This paper discusses that exact issue:

Insurance makes wealth grow faster Ole Peters, Alexander Adamou https://arxiv.org/abs/1507.04655


Which, given how much people underestimate how easy it is to lose it all, starts to make a lot of sense.


Do what you can to avoid financial ruin.


I highly recommend the book "Fortune's Formula: The Untold Story of the Scientific Betting System That Beat the Casinos and Wall Street"



And if you want to dig deeper, I'd follow up with The Kelly Capital Growth Investment Criterion. It's just a collection of peer-reviewed science on the Kelly criterion and various applications. Very dense reading, but also the best overview of the subject I've seen.

https://www.amazon.com/Kelly-Capital-Growth-Investment-Crite...


Big book. Any of the chapters/papers within that you'd particularly recommend?


TLDR?


How should you bet if you're not comfortable with variance? I tried the "autobetter" simulation at the bottom of the page, always clicking heads, and it took 150 tries to max out. For most of the time it was way below the initial $25 investment: https://imgur.com/a/HKKHMRL


If you're not comfortable with variance, you should bet $0 each time.

This is only partially tongue-in-cheek, since the only way to reduce variance is to bet lower amounts, but this will also reduce your expected return, all the way down to zero at $0 bets.


A simple way would be to pick a strategy with a slightly different utility function from Kelly's (which implicitly uses log as the utility function).

The downside of this is that Kelly's utility function has some very nice properties. In particular, the total utility of a sequence of bets turns into a sum, which means you can ignore dependencies between events. Note, for instance, that the argument in this article still holds even with very strong dependencies between coin flips: the coin could repeat the pattern HTHTH forever and the Kelly criterion would still give the same optimal proportion, though obviously there are better strategies if you don't restrict yourself to blindly betting on heads.


The way to reduce variance is to keep some of your wealth out of the equation. In other words, put half of your money to the side, and then apply the Kelly criterion to that which remains.

This is a "fractional Kelly" strategy, which optimises growth under the condition that you want to reduce variance.
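A quick Monte-Carlo sketch of that tradeoff on the article's 60/40 even-money coin (the flip count, run count, and seed are arbitrary illustrative choices):

```python
import random

# Compare full Kelly (fraction 0.2 on a 60/40 even-money coin) with
# "half Kelly", i.e. setting half your wealth aside. Flip count, run
# count, and seed are arbitrary illustrative choices.
random.seed(0)
p, full = 0.6, 0.2

def final_wealth(fraction, flips=1000):
    w = 1.0
    for _ in range(flips):
        bet = fraction * w
        w += bet if random.random() < p else -bet
    return w

medians = {}
for frac in (full, full / 2):
    runs = sorted(final_wealth(frac) for _ in range(2000))
    medians[frac] = runs[len(runs) // 2]
    spread = runs[-200] / runs[200]  # ratio of ~90th to ~10th percentile
    print(f"fraction {frac:.2f}: median {medians[frac]:.3g}x, 90/10 spread {spread:.3g}x")
```

Full Kelly should show the higher median growth, and the half-Kelly runs a much tighter spread.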


Yes. The 'fully Kelly' betting is the fastest growth, but the variance is sky-high in practice. If I remember the math right, while betting using full Kelly stakes, you will have a 50% chance of losing half of your current bank at some point in the future. Most people (rightly or wrongly) don't have the stomach for such wild swings in fortune, and so using reduced stakes will lead to less variance while still being profitable.

Also, in the real world, you never accurately know how big your % edge actually is, so there's a real chance that you are over-betting when using Kelly stakes. Using reduced stakes is a way to compensate for this.


If you bet with the -1/x utility function, then the optimal strategy is to always bet about 10.1% of your money. I find this to give a good tradeoff between the variance and the rate of exponential growth.
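For reference, that 10.1% figure falls out of maximising E[-1/w] on the article's 60/40 even-money coin (assumed here to be the bet in question); the first-order condition has a closed form:

```python
import math

# Maximising E[-1/w] for an even-money bet of fraction f of wealth:
# E[u] is proportional to -p/(1+f) - q/(1-f); setting the derivative
# p/(1+f)^2 - q/(1-f)^2 to zero gives f = (sqrt(p)-sqrt(q))/(sqrt(p)+sqrt(q)).
p, q = 0.6, 0.4  # the article's biased coin (assumed)
f = (math.sqrt(p) - math.sqrt(q)) / (math.sqrt(p) + math.sqrt(q))
print(round(f, 3))  # 0.101, vs. the full Kelly (log-utility) fraction 2p - 1 = 0.2
```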


Not to mention that in real life, there is probably a minimum bet.


Would this work for calculating the odds of startup success?

Runway of 500k, a 5% chance, 100:1 odds, so don't bet more than ~$22k on each bet/pivot.


The only number you know there is the 500k… everything else is just a guess.


It is definitely one way to approach startup investment and why VC funds spread their bets out on so many startups. Almost all go to 0, some go 100x.


I think it would be a good heuristic if you were able to be that precise about expenses, but I think most startups aren’t.


(Author here) I agree, but I think there is some applicability of the Kelly Criterion to startups (at least qualitatively) in explaining why an entrepreneur who is all-in on a company may be more inclined to take an exit than their investors, who have spread their bets.


The Kelly criterion's growth curve is an inverted U that is flat near its peak, so it is rather tolerant of imprecise estimations!


An alternative derivation using dynamic programming for multi-round bets: https://adityam.github.io/stochastic-control/mdp/optimal-gam...


The dynamic programming and multi-round bets are a distraction there. Since it uses a logarithmic utility function the Kelly criterion is optimal for even a single bet.


No, the two betting strategies diverge. The Kelly criterion assumes an unlimited stream of future bets, while the dynamic programming approach assumes a fixed number of remaining bets.


The logarithmic utility function makes the Kelly criterion optimal even when there is only one remaining bet. For example say you have $x, the probability of winning is p, and you bet $y. Then you want to maximise

    p log(x+y) + (1-p) log(x-y).
Setting the derivative with respect to y equal to 0 yields

    p/(x+y) - (1-p)/(x-y) = 0.
This rearranges to give

    y = (2p - 1)x,
which is precisely the Kelly criterion.
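A quick numeric check of that closed form, grid-searching the single-bet objective (p = 0.6 and x = $100 are arbitrary example values):

```python
import math

# Grid-search the single-bet objective p*log(x+y) + (1-p)*log(x-y)
# over bets y in $0.10 steps; p and x are arbitrary example values.
# The maximiser should match the closed form y = (2p - 1)x = $20.
p, x = 0.6, 100.0
k = max(range(1000),
        key=lambda k: p * math.log(x + k / 10) + (1 - p) * math.log(x - k / 10))
best_y = k / 10
print(best_y)  # 20.0
```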


This is precisely the argument at the penultimate time-step in the dynamic programming solution of the multi-round case. The other interesting aspect is that the expected returns are logarithmic, i.e., with y = (2p - 1)x

    p log(x+y) + (1-p) log(x-y) = log(x) + C
where C is the Shannon capacity of the binary symmetric channel with cross-over probability p.

By the same argument, the expected log-wealth after T rounds will be

    log(x) + T C
So, in addition to the optimal strategy, we have also derived the rate of growth of wealth. This is also in tune with the motivation of Kelly's paper, where he was showing a relationship between Shannon capacity and optimal gambling (without using a dynamic programming argument).
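The identity can be verified numerically; with the Kelly bet y = (2p-1)x, the wealth after a win is 2px and after a loss 2(1-p)x, which gives exactly log(x) + C (p and x below are arbitrary example values):

```python
import math

# Check that p*log(x+y) + (1-p)*log(x-y) = log(x) + C for the Kelly bet,
# where C = log 2 + p log p + (1-p) log(1-p) is the BSC capacity in nats.
# p and x are arbitrary example values.
p, x = 0.6, 100.0
y = (2 * p - 1) * x
growth = p * math.log(x + y) + (1 - p) * math.log(x - y)
C = math.log(2) + p * math.log(p) + (1 - p) * math.log(1 - p)
print(abs(growth - (math.log(x) + C)) < 1e-12)  # True
```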


Correction: I should have said that the Kelly criterion and DP-based approaches diverge when there is a maximum amount of wealth that can be attained (e.g., $250 in TFA), not in scenarios where there are a fixed number of betting rounds remaining.


We can consider the limit T -> infinity to recover the setting with infinite number of bets (but need to look at the rate of growth, otherwise multiple strategies can give infinite returns).


In the result for multi-round bets, if we take the betting horizon T=1, we recover the result for a single bet. But the other way round is not obvious.


Here's a similar, but slightly more advanced, game I made for a lunch-and-learn session at work: https://static.loop54.com/ship-investor.html

Use the "link to this scenario" button if you want to compete with friends on the same seed.


The Kelly criterion is only optimal if you have logarithmic utility. People like to pretend that it somehow becomes optimal in other cases if you have multiple rounds, but when you do the calculations it just doesn't. Having the highest probability to win among the strategies that bet a constant amount is not a useful property, because you're not in a competition and don't have to bet a constant amount.

In particular, in that Haghani and Dewey study they mention, the optimal strategy actually involved playing a lot safer than the Kelly bet. (Although the participants weren't told about the cap at the start of the game, so it would be reasonable for them to take more risk. Incidentally, it strikes me as slightly unethical to mislead the participants in a way that makes it more likely that they'll risk losing more money.)


This is a common misconception. The Kelly criterion maximises growth when the wealth compounds multiplicatively. It makes no assumption about the shape of utility.

In other words, it's prescriptive, not descriptive. The Kelly criterion says that if you want to maximise growth, you should adopt log utility. If you don't have log utility, you won't maximise growth.

(Though, as you point out, maximising growth isn't the only goal possible -- limiting drawdown is another useful goal. The linear combination of "no bet" and "full Kelly" is an optimal frontier that maximises growth given a particular limit to drawdown.)


> The Kelly criterion maximises growth when the wealth compounds multiplicatively. It makes no assumption about the shape of utility.

I looked this up and you're right: the Kelly maximizes the expected geometric growth rate. However, the question to ask is: why do I want that? The obvious thing to want to maximize is not the expected geometric growth rate, but the expected amount of money (which it seems Kelly does NOT maximize).

The rationale to choose to maximize expected geometric growth rate is that it is equivalent to maximizing expected logarithmic utility, which is something that you actually want.


> why do I want that?

The reason why you want to do that is because, given unbounded amount of betting, the bankroll of someone following the Kelly strategy will, with probability 1, eventually permanently exceed that of any other strategy.


Yeah, but why would anyone care about that in real life? We don't have unlimited bets, and even if we did, getting to (for example) $10 billion quicker is more important than eventually, maybe in 100 years, catching up at the $200B level.


An interesting argument, and a good intuition for why the Kelly criterion is good.

As far as it relates to my comment and yours: I think you're advancing a third answer to the "why" question, not showing why "maximizing expected geometric growth" is a better "why" than "maximizing (logarithmic) utility".

With that said, I still think the "maximizing (logarithmic) utility" answer is the only good motivation. My reasoning is that most people (myself included) define rationality to be: maximizing expected utility. From this, we can derive the Kelly criterion.


Whether it is a third "why" or not is a bit philosophical. I think the property I stated leans heavily towards arguing for maximizing expected geometric growth, because this property has nothing to do with the notion of utility. Indeed, no matter what your utility function is, as long as it is strictly monotone, it will be the case that a Kelly strategy always eventually permanently yields a bankroll with more utility than any other strategy. This holds for the simple reason that when reinvesting profits into the next bet, you must reinvest actual money, not utility.

As for me, I often waffle between thinking this Kelly property I stated is important versus expected utility of some sort.


Only if that other strategy is also betting a constant fraction of its wealth each turn. There are other strategies that will beat Kelly most of the time. For example, follow a slightly riskier strategy than Kelly until you are either bankrupt or ahead of Kelly, then follow Kelly. More than 50% of the time you will get ahead of Kelly and then stay ahead.


Interesting. I think whether you follow a riskier strategy or not depends on the shape of the bets you are making. If you are picking up pennies in front of a steamroller (high probability of low payout and low probability of losses), then what you say seems to make sense. If you are playing lottery tickets (low probability of high payout and high probability of losses) then I guess you want to bet conservatively until the Kelly strategy falls below you and then you switch to the Kelly strategy.


It's nice that it maximizes "growth" but it's just a curiosity. It's useful because it maximizes a utility function which might for many situations be similar to log. There is little reason to ever try to maximize growth. It's also confusing. Utility of money is a simple concept people can understand and apply at personal or business level. Growth? Why would anyone try to maximize that?

I feel we had this conversation in the past and it feels to me you're a bit stuck in thinking about it the backwards way.


The rate of growth is defined as log(wealth/starting_wealth)/time, so it's a tautology that maximising it is the same as maximising log utility.


This was what I was thinking… the game presented no cost or limit to how many bets I could play, so I just chose a small number and mashed the bet button. Greatly reduced chance of bankruptcy, and I will eventually get to whatever target I want if I hit the bet button enough times.

Unless there is a cost to placing a bet, why take any risk at all?


It's probably an excess of analytic mentality, but I reached exactly the same conclusion. Because the game imposes no limits on the number of bets, the only way I lose is if I go bankrupt. Because a random walk will visit every point with probability 1, if I don't lose then I almost certainly win. Hence the only rational bet is the lowest nonzero amount.

The way they compare strategies (i.e. which is more likely to "be ahead" after N rounds) is more interesting, but it feels like they flipped the script on me :)


What about in the limit where the number of rounds tends to infinity?


The longer you have to reach the cap, the smaller your initial bet becomes, because you have more time to catch up. It doesn't converge to Kelly, it converges to 0.


Can you elaborate on these "other cases" with multiple rounds where logarithmic utility doesn't make sense?


The way it is used in movies and TV comes to mind.


I have a blog-post on the Kelly Criterion, and some (I think interesting) realistic examples of its application in decision-making: https://blog.paulhankin.net/kellycriterion/


Using the Kelly criterion is brutal though - the volatility is crazy.


Fractional kelly is the answer.


Time is money, so my definition of the best strategy is the one that gets you your $250 asap. Potentially giving up if your bankroll gets so small that it would take unreasonably long to climb back.

With this idea we can assign a dollar value to every possible bankroll. Those values should satisfy:

    V(250) = 250
    V(x) = max(x, max over 0<y<=x pV(x+y) + (1-p)V(x-y) - a)
I can't immediately see how to solve it though.
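One way to attack that recursion numerically is value iteration. A minimal sketch, assuming integer bankrolls, p = 0.6, and a hypothetical per-bet time cost a; wins that would overshoot the $250 cap are treated as reaching it:

```python
# Value iteration for the recursion V(x) = max(x, max over 0<y<=x of
# p*V(x+y) + (1-p)*V(x-y) - a). Assumptions: integer bankrolls, p = 0.6,
# a made-up per-bet cost a, and wins past the cap count as hitting the cap.
def solve_values(p=0.6, cap=250, a=0.01, iters=5000, tol=1e-9):
    V = [float(x) for x in range(cap + 1)]   # start from V(x) = x ("stop now")
    for _ in range(iters):
        new = V[:]
        for x in range(1, cap):
            best = float(x)                   # option: stop and keep x
            for y in range(1, x + 1):
                ev = p * V[min(x + y, cap)] + (1 - p) * V[x - y] - a
                best = max(best, ev)
            new[x] = best
        if max(abs(u - w) for u, w in zip(new, V)) < tol:
            return new
        V = new
    return V
```

The value of y achieving the max at each x recovers the bet-sizing policy.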


The Kelly criterion, which maximises long-term growth, also has the nice property that it gets to any particular desired level faster than any other strategy, on average.


I don't think this can be true. Consider if you start with x money and want to reach x+epsilon. Then the first bet in an optimal strategy should be to bet epsilon (if you win you reach your goal, and if you don't, the setback is smaller than Kelly's).


But what do you do if you don't win? Then the target level is two epsilon away -- do you bet that amount?

Maybe that works if the probability of success is high enough and the initial epsilon is small enough. If that is not the case, it starts to sound like a martingale type strategy.


If you start super close to the goal, then yes you would double for every loss... for a while. When it starts actually cutting into your bankroll the bets would grow slower and slower and then even start shrinking. It should not surprise me if, far from the goal, you would bet a fraction of your bankroll every time, like Kelly does.


I was betting 20% before reading the article and still went almost straight to zero. That’s gambling I guess.


If you knew you had a 60% chance of winning, wouldn't you want to bet a small amount to reduce variance?

I guess on the down side that is A LOT of clicking. It is similar to how when playing a better team in basketball, you want to slow down the pace of the game to limit possessions.


This depends on the distinction between a player having infinite plays (in which case the optimal strategy is to bet their minimum infinity times), and the player having a finite number of plays but considering the optimal play as that finite number of plays approaches infinity. It's not intuitive, and in retrospect, I think I could have explained it better in the post.


The assumption is that you want the maximum return for any given number of bets. If you have unlimited bets then sure, maximizing the return per bet is not really important. But where can you find unlimited bets with positive expected return?

There are other assumptions in there, like that there's no minimum bet and you can divide your bet to arbitrary precision. Always check your assumptions! I'm sure someone has modified the Kelly criterion for minimum bets and quantized bets, but a cursory search didn't turn it up immediately.
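For illustration, a naive sketch of such a modification (a rounding heuristic, not the exact discrete optimum; the parameter names are my own):

```python
# Quantizing the Kelly bet to a table minimum and chip size. This is a
# simple rounding heuristic, not a derived optimum under discreteness.
def quantized_bet(wealth, p, b=1.0, minimum=5.0, chip=1.0):
    f = p - (1 - p) / b               # ideal Kelly fraction for b-to-1 odds
    raw = f * wealth
    if raw < minimum:
        return 0.0                    # ideal bet below the table minimum: skip
    return chip * round(raw / chip)   # snap to the nearest chip multiple
```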


That assumption was never stated in the question. It didn’t say there was any cost to placing a bet, and I learned quite quickly that I could mash the bet button very fast… why would I take any risk at all when I can get a guaranteed win betting small and mashing the button?


Yep just bet a dollar every time. Over enough presses you’ll see the law of large numbers guarantee you getting something that looks like exponential growth.


If you bet a dollar every time, you'll average $0.20 of growth every click no matter how much money you accumulate. That is linear growth, not exponential.


That’s right. If you bet 10% every time you’d get exponential growth.
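The arithmetic behind both observations, at p = 0.6 with even-money payoffs: a flat bet grows expected wealth linearly, while a fractional bet compounds it. A small sketch:

```python
# Expected bankroll under two strategies at win probability p, even money.
# A flat $1 bet adds (2p - 1) = $0.20 expected per round (linear growth);
# betting fraction f multiplies expected wealth by 1 + f*(2p - 1) per round.
def expected_fixed(start, bet, p, rounds):
    return start + rounds * bet * (2 * p - 1)

def expected_fraction(start, f, p, rounds):
    return start * (1 + f * (2 * p - 1)) ** rounds
```

(Caveat: the fractional line tracks *expected* wealth; typical outcomes grow like the median, which is what Kelly actually optimizes.)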


The Kelly criterion is the solution for optimal bet size under the assumption that utility is a log function.

While it's often modelled as such, we know it's not perfect (utility of money is bounded, or even decreases past a certain point, while log is unbounded) and, more importantly, utility of money is different for different people.

That means that, while being an important illustration of the concept of utility of money, Kelly is most likely not a good practical guideline for most people. Many professional gamblers, for example, feel that something closer to 1/2 Kelly or 1/4 Kelly is better, but again it all depends on your personal situation and what money means to you.
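For reference, the formula itself is tiny, and fractional Kelly is just a scaling of it:

```python
# Standard Kelly fraction for a bet paying b-to-1 with win probability p.
# A "half Kelly" bettor simply stakes kelly_fraction(p, b) / 2.
def kelly_fraction(p, b=1.0):
    return p - (1 - p) / b
```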


Another comment already posted this misconception. The Kelly criterion assumes only two things: compounding and that the objective is maximised growth.

It makes no assumptions about utility functions.

There are several good reasons to use fractional Kelly strategies, but shapes of utility functions has nothing to do with it.


How do you define maximized growth and why is it important for anything if utility is not log?

Shapes of utility functions have everything to do with determining your bet sizes. Can you provide one reason to use Kelly that is not connected to the shape of the utility function?

Not to escalate but calling a very well known fact about Kelly a misconception doesn't make for a good start. I mean it's math you can verify or open Wikipedia and read the very first paragraph there.


How do I cash out? Do you support ACH transfers?


The first time I encountered Kelly Criterion was when I was scouring books on trading.

It got mentioned in one chapter in a book that happened to be a collection of interviews of traders, out of about 30 books on trading that I had read.

Bet sizing can be used for money management and risk control for professional traders.


I took a similar approach where I defined "ruin" as losing more than half my money and set a target "probability of ruin" of 5%. With a specific set of parameters, I think this ends up being very similar to the maximize-the-median approach.


It's 1.58 dollar heads double your money at 100 bets and 220 bets.

How did I do that?

I used to win money playing poker to pay for college books at the beginning of the semester, using the same math strategy: the bet always has to be between 1/15th and 1/20th if you have 60% odds.


I ended up like this by simply reducing my win wager to $10 and doubling my loss wager every time I lost.

https://i.imgur.com/QUbOopz.jpg

Essentially using the Martingale system.


For the first 50 throws, your Martingale-like strategy actually performs fairly competitively with Kelly – if we ignore the very serious consequence that it can lead to complete ruin. Kelly is red, and your system is black in this plot: https://i.xkqr.org/kelly-vs-martingale-50.png

Of course, given a longer opportunity to invest, Kelly wins every time by growing exponentially: https://i.xkqr.org/kelly-vs-martingale-100.png
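For anyone who wants to reproduce the comparison, here's a rough sketch; the parameters (p = 0.6, even money, $25 starting bankroll) are my assumptions about the game in the article:

```python
import random

# Kelly at p = 0.6 (bet 20% of the bankroll) vs a Martingale-style doubler.
# The doubler resets to its base bet after a win and doubles after a loss,
# going bust if a losing streak exhausts the bankroll.
def run_kelly(rounds, p=0.6, start=25.0, f=0.2, rng=None):
    rng = rng or random.Random(1)
    w = start
    for _ in range(rounds):
        bet = f * w
        w += bet if rng.random() < p else -bet
    return w

def run_martingale(rounds, p=0.6, start=25.0, base=1.0, rng=None):
    rng = rng or random.Random(1)
    w, bet = start, base
    for _ in range(rounds):
        if w <= 0:
            break                     # ruined: nothing left to bet
        bet = min(bet, w)             # can't bet more than the bankroll
        if rng.random() < p:
            w += bet
            bet = base                # after a win, reset to the base bet
        else:
            w -= bet
            bet *= 2                  # after a loss, double the stake
    return w
```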


I went with betting half my assets every time, and for whatever reason, the first go around I managed to cash out the first time at 7 flips. (I did better than 79% of folks) I was so spooked I ran the simulation again, this time just maxing out. After a couple dozen tries I cashed out at 5 attempts. I still only did better than 79% of participants in that study.

(I always went with 60% no matter what, in both extreme cases I never lost)


“When the probability of heads is 50%, there is no benefit from betting.”

:)


I did it in 23 flips, always betting heads, but raising my bet if I lost twice in a row. Also, the first couple of times I went all-in after a streak of 2.


I think it misses the explanation that you can play multiple times...


Optimizing your log wealth at a single step seems pretty good, even if you don't get multiple plays of the same game.


It's a nice piece of math, but it just confirms common sense. If you have a small edge and the payoff is very likely, you bet a decent chunk of cash. If you have a big edge and the payoff is unlikely, you bet a half-decent chunk. If you have a big edge and the payoff is very likely, you bet a double-decent chunk. If you have a small edge and the payoff is unlikely, you bet a little bit. If you don't have an edge, you don't bet.


You'd think the optimal strategy is common sense but for small probabilities and large payoffs it really isn't. Many people simply ignore very risky investments on the theory that they're just too risky, when the actual optimal thing is to invest small amounts if the payoff is big enough.


Do people really ignore this? The entire VC industry seems dedicated to these exact bets. The guys at Sequoia, for example, are approximating the Kelly criterion and likely don't ever think about it explicitly.


You're describing the anti-martingale which is a different strategy with a statistically expected profit of $0 for random coin flip type games.

The Kelly criterion is different and has a positive expected profit.


No, that's not what I'm describing at all. Re-read it, or re-read what martingale/anti-martingale is. I don't say anything about betting size based on a previous bet.


> If you have a big edge and payoff is unlikely

Why would the payoff be unlikely if you have a big edge?


Say there's a horse race. Paul Revere is listed as having 500:1 odds of victory. You, however, know that because the weather is clear, he actually has a 5% chance to win. It's a great idea to bet on Paul Revere, but you're still most likely going to lose your money.


Exactly. And common sense tells you a) you want to make the bet, but b) that you don't want to bet 50% of your bankroll on this. Maybe 5% makes sense.
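Plugging the parent's numbers into the Kelly formula bears this out:

```python
# Kelly sizing for the Paul Revere example: payout b = 500-to-1, true win
# probability p = 0.05. The standard formula f* = p - (1 - p)/b gives the
# fraction of bankroll to stake.
p, b = 0.05, 500.0
f_star = p - (1 - p) / b
print(f"{f_star:.4f}")   # prints 0.0481, i.e. roughly 5% of the bankroll
```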



