The Kelly criterion is only optimal if you have logarithmic utility. People like to pretend that it somehow becomes optimal in other cases if you have multiple rounds, but when you do the calculations it just doesn't. Having the highest probability of winning among the strategies that bet a constant fraction is not a useful property, because you're not in a competition and don't have to bet a constant fraction.
In particular, in that Haghani and Dewey study they mention, the optimal strategy actually involved playing a lot safer than the Kelly bet. (Although the participants weren't told about the cap at the start of the game, so they would be reasonable to take more risk. Incidentally, it strikes me as slightly unethical to mislead the participants in a way that makes it more likely that they'll risk losing more money.)
This is a common misconception. The Kelly criterion maximises growth when the wealth compounds multiplicatively. It makes no assumption about the shape of utility.
In other words, it's prescriptive, not descriptive. The Kelly criterion says that if you want to maximise growth, you should adopt log utility. If you don't have log utility, you won't maximise growth.
(Though, as you point out, maximising growth isn't the only goal possible -- limiting drawdown is another useful goal. The linear combination of "no bet" and "full Kelly" is an optimal frontier that maximises growth given a particular limit to drawdown.)
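To put rough numbers on that frontier, here is a sketch of the per-round expected log growth for fractions of the Kelly bet. The 60/40 even-money bet is a hypothetical example (for even odds, full Kelly is f = 2p - 1 = 0.2):

```python
import math

# Hypothetical even-money bet with win probability p = 0.6.
p = 0.6
kelly = 2 * p - 1  # full Kelly fraction for even odds: 0.2

def growth_rate(f):
    """Expected log growth per round when betting fraction f of wealth."""
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

for scale in (0.0, 0.25, 0.5, 0.75, 1.0):
    f = scale * kelly
    print(f"{scale:4.2f} * Kelly: f = {f:.3f}, growth = {growth_rate(f):.5f}")
```

Under these assumptions, half Kelly already captures roughly three quarters of the full-Kelly growth rate with half the bet size, which is why the frontier between "no bet" and "full Kelly" is attractive when drawdown matters.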
> The Kelly criterion maximises growth when the wealth compounds multiplicatively. It makes no assumption about the shape of utility.
I looked this up and you're right: the Kelly criterion maximizes the expected geometric growth rate. However, the question to ask is: why do I want that? The obvious thing to want to maximize is not the expected geometric growth rate but the expected amount of money (which it seems Kelly does NOT maximize).
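A quick numerical check of this point, assuming a hypothetical even-money bet with a 60% win probability: the fraction that maximizes expected log growth is the Kelly fraction, while expected wealth is maximized by betting as much as possible.

```python
import math

p = 0.6  # hypothetical even-money bet, 60% win probability

def expected_log_growth(f):
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

def expected_wealth_factor(f):
    # E[wealth multiplier] per round: p*(1+f) + (1-p)*(1-f) = 1 + f*(2p - 1)
    return p * (1 + f) + (1 - p) * (1 - f)

fractions = [i / 100 for i in range(100)]
best_log = max(fractions, key=expected_log_growth)
best_wealth = max(fractions, key=expected_wealth_factor)
print(best_log)     # 0.2, the Kelly fraction 2p - 1
print(best_wealth)  # 0.99, the largest fraction on the grid
```

Since the expected wealth multiplier is linear and increasing in f whenever the edge is positive, "maximize expected money" tells you to bet everything every round, which goes bust with probability 1 in repeated play.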
The rationale to choose to maximize expected geometric growth rate is that it is equivalent to maximizing expected logarithmic utility, which is something that you actually want.
The reason why you want to do that is because, given an unbounded number of bets, the bankroll of someone following the Kelly strategy will, with probability 1, eventually permanently exceed that of any other strategy.
Yeah, but why would anyone care about that in real life? We don't have unlimited bets, and even if we did, getting to (for example) 10 billion quicker is more important than eventually, maybe in 100 years, catching up at the 200 billion level.
An interesting argument, and a good intuition for why the Kelly criterion is good.
As far as it relates to my comment and the parent comment: I think you're advancing a third answer to the "why" question, not showing why "maximizing expected geometric growth" is a better "why" than "maximizing (logarithmic) utility".
With that said, I still think the "maximizing (logarithmic) utility" answer is the only good motivation. My reasoning is that most people (myself included) define rationality to be: maximizing expected utility. From this, we can derive the Kelly criterion.
Whether it is a third "why" or not is a bit philosophical. I think the property I stated leans heavily towards arguing for maximizing expected geometric growth, because this property has nothing to do with the notion of utility. Indeed, no matter what your utility function is, as long as it is strictly monotone, it will be the case that a Kelly strategy always eventually permanently yields a bankroll with more utility than any other strategy. This holds for the simple reason that when reinvesting profits into the next bet, you must reinvest actual money, not utility.
As for me, I often waffle between thinking this Kelly property I stated is important versus expected utility of some sort.
Only if that other strategy is also betting a constant fraction of its wealth each turn. There are other strategies that will beat Kelly most of the time. For example, follow a slightly riskier strategy than Kelly until you are either bankrupt or ahead of Kelly, then follow Kelly. More than 50% of the time you will get ahead of Kelly and then stay ahead.
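A quick Monte Carlo sketch of this idea, under assumed parameters (a hypothetical 60/40 even-money bet, Kelly fraction 0.2, a riskier fraction 0.4, and "behind" standing in for "bankrupt", since constant-fraction betting never literally hits zero):

```python
import random

random.seed(0)
p, kelly, risky = 0.6, 0.2, 0.4  # hypothetical 60/40 even-money bet
trials, rounds = 10000, 200
ahead = 0
for _ in range(trials):
    k_wealth = s_wealth = 1.0
    for _ in range(rounds):
        win = random.random() < p
        # Bet riskier than Kelly until ahead of Kelly, then match Kelly.
        f = kelly if s_wealth > k_wealth else risky
        k_wealth *= (1 + kelly) if win else (1 - kelly)
        s_wealth *= (1 + f) if win else (1 - f)
    if s_wealth > k_wealth:
        ahead += 1
print(ahead / trials)  # comfortably above 0.5
```

Once the strategy gets ahead and starts matching Kelly on the same flips, the wealth ratio is frozen, so it stays ahead forever; with these numbers a single winning first flip (probability 0.6) is already enough to lock in the lead. The catch, of course, is that the losing runs lose by more, so the expected growth rate is still lower than Kelly's.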
Interesting. I think whether you follow a riskier strategy or not depends on the shape of the bets you are making. If you are picking up pennies in front of a steamroller (high probability of low payout and low probability of losses), then what you say seems to make sense. If you are playing lottery tickets (low probability of high payout and high probability of losses) then I guess you want to bet conservatively until the Kelly strategy falls below you and then you switch to the Kelly strategy.
It's nice that it maximizes "growth" but it's just a curiosity. It's useful because it maximizes a utility function which might for many situations be similar to log. There is little reason to ever try to maximize growth. It's also confusing. Utility of money is a simple concept people can understand and apply at personal or business level. Growth? Why would anyone try to maximize that?
I feel we had this conversation in the past and it feels to me you're a bit stuck in thinking about it the backwards way.
This was what I was thinking… the game presented no cost or limit to how many bets I could play, so I just chose a small number and mashed the bet button. Greatly reduced chance of bankruptcy, and I will eventually get to whatever target I want if I hit the bet button enough times.
Unless there is a cost to placing a bet, why take any risk at all?
It's probably an excess of analytic mentality, but I reached exactly the same conclusion. Because the game imposes no limit on the number of bets, the only way I lose is by going bankrupt. Because a random walk with upward drift will reach any higher target with probability 1 unless it first hits zero, if I don't go bankrupt then I almost certainly win. Hence the only rational bet is the lowest nonzero amount.
The way they compare strategies (i.e. which is more likely to "be ahead" after N rounds) is more interesting, but it feels like they flipped the script on me :)
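The ruin probability for the minimal-bet strategy can be estimated with the standard gambler's-ruin formula for a unit-bet walk with win probability p > 1/2 against an unbounded target. The 60/40 coin and $25 bankroll below are taken as assumptions matching the Haghani-Dewey setup mentioned upthread; treat them as illustrative:

```python
# Gambler's ruin: a unit-bet walk with win probability p, starting at
# bankroll k, ever hits zero with probability ((1-p)/p) ** k when p > 1/2.
p, bankroll = 0.6, 25  # assumed 60/40 coin, $25 starting bankroll
ruin = ((1 - p) / p) ** bankroll
print(ruin)  # roughly 4e-5: betting the minimum almost never goes bust
```

So with minimum bets the chance of ever going bankrupt is on the order of a few in a hundred thousand, which is why mashing the smallest bet "wins" whenever there is no cost per bet or limit on rounds.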
The longer you have to reach the cap, the smaller your initial bet becomes, because you have more time to catch up. It doesn't converge to Kelly, it converges to 0.