Introduction
In this post, we will discuss how the edge ability Breaking the Limit (BL) influences the number of expected successes on a roll in the pen-and-paper role-playing game Shadowrun (5th edition). This post is part of a series where I compare Second Chance (SC) against Breaking the Limit. The introductory post can be found here and the discussion about Second Chance is here. Furthermore, there might someday be a blog post about how not to approach the math behind Breaking the Limit (I still feel the pain 🙈).
As a quick recap: if you want to do something complicated in Shadowrun, the dice will decide whether you succeed or not. You have a pool of six-sided dice (or d6 for short), which you roll. All 5s and 6s count as successes. If you pass a threshold you succeed; otherwise, you fail. Every character has a certain amount of edge, which they can use in tight spots to increase their chance of success. One option is called Breaking the Limit. This ability lets you reroll all your 6s, and if you roll another 6 you can roll again and again until your streak stops. What is the effect on your expected number of successes? Can we even find the probability mass function for this process?
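The mechanic is easy to simulate. Here is a minimal sketch in Python (the function name `roll_bl` is my own, not from any official source):

```python
import random

def roll_bl(pool: int) -> int:
    """Roll `pool` d6 with Breaking the Limit: 5s and 6s count as
    successes, and all 6s are rerolled until no more 6s appear."""
    successes = 0
    while pool > 0:
        rolls = [random.randint(1, 6) for _ in range(pool)]
        successes += sum(r >= 5 for r in rolls)  # 5s and 6s are successes
        pool = sum(r == 6 for r in rolls)        # only the 6s get rerolled
    return successes

print(roll_bl(12))
```

Each pass of the `while` loop is one round of rerolls; the loop terminates as soon as a round produces no 6s.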
The math
First, I want to point out a conceptual difference between BL and both rolling normally and using SC. For the latter two, if your dice pool consists of $n$ d6, you can have at most $n$ successes (mathematically speaking, the probability distribution has finite support). For BL this is no longer the case! Indeed, even for a single d6 and any positive integer $k$, there is a small but non-zero probability to hit exactly $k$ successes.
This observation can already give us an inkling that this problem is way harder to solve than the SC case. In fact, I have been unable to find a closed-form solution for the probability mass function which holds for an arbitrary dice pool size $n$ yet. If any of you find a solution, please reach out to me! I would be very interested 😜 However, as it turns out, we can still prove the existence of the expected value for the distribution and compute its exact value. The central ingredient is a recursion formula for $P_n(k)$, the probability of rolling exactly $k$ successes with a pool of $n$ dice.
While playing the game you generally roll your complete dice pool at once, count your successes, then reroll all your 6s at once, and continue like that. Thinking about the problem like this led me straight into a roadblock (see my blog post about how not to tackle the problem, once I write it). To end up with the recursion formula, I had to perform a conceptual shift. Assume I have a dice pool of $n$ dice and would like to compute the probability for $k \geq 1$ successes given this pool, i.e. I want to compute $P_n(k)$ (we handle the case $k = 0$ below). Let us assume that instead of rolling all dice at once, I start by rolling the first die. Then there are 3 cases:
- I can roll a 1-4, i.e. a fail, which means I still need $k$ successes, but only have $n-1$ dice left.
- I can roll a 5, which is a success, so I only need to roll $k-1$ successes with the remaining $n-1$ dice.
- I can roll a 6, which is also a success, with the added benefit that I can reroll the die. However, that is the same as saying that I get the die back for my pool. In summary, I now need to roll $k-1$ successes with $n$ dice.
Lastly, note that for $k = 0$ all my rolls need to be fails, which directly implies $P_n(0) = \left(\frac{4}{6}\right)^n = \left(\frac{2}{3}\right)^n$. This leads to the following central recursion relation for $n \geq 1$ and $k \geq 1$:

$$P_n(k) = \frac{4}{6}\, P_{n-1}(k) + \frac{1}{6}\, P_{n-1}(k-1) + \frac{1}{6}\, P_n(k-1)$$
Now we only need suitable initial values and we have a chance to get something out of this equation. This condition can be chosen to be

$$P_0(k) = \begin{cases} 1 & \text{if } k = 0 \\ 0 & \text{if } k \geq 1 \end{cases}$$
which is the degenerate case of an empty dice pool. In this case, we will always have 0 successes. Furthermore, we need $P_n(0) = \left(\frac{2}{3}\right)^n$ introduced above. I have tried and failed to use the above recursion formula to find a closed-form expression for $P_n(k)$. At the bottom of the article, you can find expressions for small $n$ and the hypotheses I have extracted from them. Since I am mainly interested in the expected value, I will forgo further discussion of the closed-form solution here.
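The recursion and the two boundary conditions translate directly into code. A short sketch using exact rational arithmetic (my own transcription, not the notebook linked below):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n: int, k: int) -> Fraction:
    """Probability of exactly k successes with a pool of n dice under BL."""
    if n == 0:
        return Fraction(1 if k == 0 else 0)  # empty pool: always 0 successes
    if k == 0:
        return Fraction(2, 3) ** n           # all n dice must roll 1-4
    # first die: a fail (4/6), a 5 (1/6), or a 6 (1/6, die goes back to the pool)
    return (Fraction(4, 6) * P(n - 1, k)
            + Fraction(1, 6) * P(n - 1, k - 1)
            + Fraction(1, 6) * P(n, k - 1))

print(P(1, 1))  # 5/18
print(P(2, 1))  # 10/27
```

Memoization via `lru_cache` keeps the recursion linear in $n \cdot k$ instead of exponential.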
For now, let us assume that the distributions defined by the recursion above have a finite expected value. Can we compute it? Yes! For a distribution defined on the non-negative integers with a probability mass function $p$, the expected value can be computed via

$$E = \sum_{k=0}^{\infty} k \, p(k)$$
For $E_n = \sum_{k=0}^{\infty} k \, P_n(k)$ we can use the recursion as follows:

$$E_n = \sum_{k=1}^{\infty} k \left( \frac{4}{6}\, P_{n-1}(k) + \frac{1}{6}\, P_{n-1}(k-1) + \frac{1}{6}\, P_n(k-1) \right) = \frac{4}{6}\, E_{n-1} + \frac{1}{6}\left(E_{n-1} + 1\right) + \frac{1}{6}\left(E_n + 1\right)$$

where we used the index shift $\sum_{k=1}^{\infty} k \, P(k-1) = \sum_{j=0}^{\infty} (j+1) \, P(j) = E + 1$ for the last two sums.
This equation can be solved for $E_n$:

$$\frac{5}{6}\, E_n = \frac{5}{6}\, E_{n-1} + \frac{1}{3} \quad \Longrightarrow \quad E_n = E_{n-1} + \frac{2}{5}$$

Together with $E_0 = 0$, this yields $E_n = \frac{2n}{5}$.
So after all this lengthy computation, we arrive at a rather simple formula for the expected value. There is just one caveat left: we haven't proven that the expected value actually exists yet. Maybe this already follows from the above computations; however, I'm not completely convinced that it does. Luckily, we can find out enough about $P_n(k)$ to prove the finiteness of $E_n$.
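Before tackling existence formally, we can at least check the formula numerically by truncating the series; since the terms decay exponentially, the truncation error is negligible. A sketch, again using my own transcription of the recursion:

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, k):
    # recursion for the BL success probabilities, as derived above
    if n == 0:
        return Fraction(1 if k == 0 else 0)
    if k == 0:
        return Fraction(2, 3) ** n
    return (Fraction(4, 6) * P(n - 1, k)
            + Fraction(1, 6) * P(n - 1, k - 1)
            + Fraction(1, 6) * P(n, k - 1))

def expected_successes(n: int, kmax: int = 120) -> float:
    # truncated series E_n ~ sum_{k=1}^{kmax} k * P_n(k)
    return float(sum(k * P(n, k) for k in range(1, kmax)))

for n in range(1, 6):
    print(n, expected_successes(n), 2 * n / 5)  # matches 2n/5
```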
Properties of $P_n(k)$
Let me start this section by simply writing down $P_n(k)$ for $n = 1, 2, 3$ and $k \geq 1$ (we know what happens at $k = 0$).

$$P_1(k) = \frac{5}{3} \left(\frac{1}{6}\right)^k$$

$$P_2(k) = \frac{5}{9}\,(5k - 1) \left(\frac{1}{6}\right)^k$$

$$P_3(k) = \frac{5}{54}\,(25k^2 - 15k + 14) \left(\frac{1}{6}\right)^k$$
For me, there seems to be a pattern, namely that all expressions consist of $\left(\frac{1}{6}\right)^k$ multiplied with a polynomial of degree $n - 1$. In addition, I noted that if you add the absolute values of the coefficients you end up with $\frac{5n}{3}$ for these first examples.
Although I was unable to prove the exact form of $P_n(k)$, you can easily prove via induction that $P_n(k)$ takes the form

$$P_n(k) = q_n(k) \left(\frac{1}{6}\right)^k \quad \text{for } k \geq 1,$$

where $q_n$ is a polynomial function. I will omit the proof in the post for the sake of brevity and focus instead on using the result. We want to prove that $E_n$ is finite, i.e. we want to show that the series

$$\sum_{k=1}^{\infty} k \, q_n(k) \left(\frac{1}{6}\right)^k$$
converges. Note that we absorbed the factor $k$ into $q_n(k)$, which yields yet another polynomial $\tilde{q}_n(k) = k \, q_n(k)$. However, using the ratio test for series, we see that any series whose terms are an exponential factor (with absolute value of the base smaller than 1) times a polynomial converges absolutely (the key observation is that $\frac{p(k+1)}{p(k)} \to 1$ for $k \to \infty$ for any non-zero polynomial $p$). With that, we have proven (or at least sketched a proof) that $E_n$ is indeed finite, which validates the above computations.
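The claimed form can also be checked numerically from the recursion: if $6^k P_n(k)$ is a polynomial in $k$ of degree $n - 1$, then its $n$-th finite differences must vanish identically. A quick sketch (again my own transcription of the recursion):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def P(n, k):
    # recursion for the BL success probabilities, as derived above
    if n == 0:
        return Fraction(1 if k == 0 else 0)
    if k == 0:
        return Fraction(2, 3) ** n
    return (Fraction(4, 6) * P(n - 1, k)
            + Fraction(1, 6) * P(n - 1, k - 1)
            + Fraction(1, 6) * P(n, k - 1))

def diffs(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

for n in (1, 2, 3, 4):
    vals = [P(n, k) * Fraction(6) ** k for k in range(1, n + 6)]
    d = vals
    for _ in range(n):  # a degree-(n-1) polynomial vanishes after n differences
        d = diffs(d)
    print(n, all(x == 0 for x in d))  # True for every n
```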
Computational confirmation
I have chosen to simulate the rolls to have an empirical sanity check of the above arguments. A jupyter notebook containing the code can be found here (bear with me, it's a bit ugly). In this first figure, you can see that the closed formulas I derived for small $n$ match the simulated probabilities quite nicely, except in the tails. However, this is to be expected because of our limited sample size. Furthermore, the effect is amplified by the logarithmic scale on the y-axis.
In this second plot, you can see that the formula for the expected value matches the simulation even for larger dice pool sizes $n$.
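For readers who want to reproduce the check without the notebook, here is a compact Monte Carlo sketch (seed and trial count are arbitrary choices of mine):

```python
import random

def roll_bl(pool: int) -> int:
    # 5s and 6s are successes; all 6s are rerolled until none remain
    successes = 0
    while pool > 0:
        rolls = [random.randint(1, 6) for _ in range(pool)]
        successes += sum(r >= 5 for r in rolls)
        pool = sum(r == 6 for r in rolls)
    return successes

random.seed(1)
trials = 50_000
for n in (4, 8, 12):
    mean = sum(roll_bl(n) for _ in range(trials)) / trials
    print(n, round(mean, 3), 2 * n / 5)  # simulated mean vs. exact E_n
```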
Conclusion
We were able to compute the expected number of successes as $E_n = \frac{2n}{5}$, where $n$ is the dice pool size. What I find interesting is that this is a rather small increase in expected successes compared to rolling only once (40% vs. 33% of the pool size), considering that in the Breaking the Limit case the probability mass function has infinite support. However, we saw that the probability decays exponentially with $k$, which explains the small effect. The above analysis ignored two properties of Breaking the Limit: first, the increased dice pool, and second, that this ability allows you to ignore 'limits' in the game. The first point is addressed in the main article. The second might lead you to use Breaking the Limit even though Second Chance is better from a pure expected-value standpoint.
Thank you for reading and I hope you enjoyed the post.