Shadowrun: Breaking the Limit

Author
Tim Adler
Publishing date
March 13, 2022

Introduction

In this post, we will discuss how the edge ability Breaking the Limit (BL) influences the expected number of successes on a roll in the pen-and-paper role-playing game Shadowrun (5th edition). This post is part of a series in which I compare Second Chance (SC) against Breaking the Limit. The introductory post can be found here and the discussion of Second Chance is here. Furthermore, there might someday be a blog post about how not to approach the math behind Breaking the Limit (I still feel the pain 🙈).

As a quick recap: if you want to do something complicated in Shadowrun, the dice decide whether you succeed. You have a pool of six-sided dice (or d6 for short), which you roll. All 5s and 6s count as successes. If you pass a threshold you succeed; otherwise, you fail. Every character has a certain amount of edge, which they can spend in tight spots to increase their chance of success. One option is called Breaking the Limit. This ability lets you reroll all your 6s, and if a reroll comes up 6 again you roll once more, continuing until your streak stops. What is the effect on your expected number of successes? Can we even find the probability mass function for this process?
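The exploding-6s mechanic is easy to simulate. Here is a quick Monte Carlo sketch in Python (the function name `roll_bl` and the trial count are my own choices, not from the rulebook):

```python
import random

def roll_bl(pool, rng):
    """Roll `pool` d6 with Breaking the Limit: every 5 or 6 is a
    success, and every 6 grants another roll of that die."""
    successes = 0
    while pool > 0:
        sixes = 0
        for _ in range(pool):
            die = rng.randint(1, 6)
            if die >= 5:
                successes += 1
            if die == 6:
                sixes += 1
        pool = sixes  # the 6s come back as a fresh (smaller) pool
    return successes

rng = random.Random(42)  # fixed seed so the estimate is reproducible
n_trials = 100_000
trials = [roll_bl(6, rng) for _ in range(n_trials)]
print(sum(trials) / n_trials)  # empirical mean successes for a pool of 6
```

Running this a few times with different seeds gives a stable estimate of the average; the math below will tell us exactly what number it converges to.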

The math

First, I want to point out a conceptual difference between BL on the one hand and both rolling normally and using SC on the other. For the latter two, if your dice pool consists of $n$ d6, you can have at most $n$ successes (mathematically speaking, the probability distribution has finite support). For BL this is no longer the case! Indeed, even for a single d6 and any positive integer $k$ there is a small but non-zero probability of hitting exactly $k$ successes.

This observation already gives us an inkling that this problem is much harder to solve than the SC case. In fact, I have not yet been able to find a closed-form solution for the probability mass function $p_\text{BL}(\cdot \mid n)$ that holds for an arbitrary dice pool $n$. If any of you find a solution, please reach out to me! I would be very interested 😜 However, as it turns out, we can still prove the existence of the expected value for the distribution and compute its exact value. The central ingredient is a recursion formula for $p_\text{BL}$.

While playing the game you generally roll your complete dice pool at once, count your successes, then reroll all your 6s at once, and continue like that. Thinking about the problem this way led me straight into a roadblock (see my blog post about how not to tackle the problem, once I write it). To end up with the recursion formula, I had to perform a conceptual shift. Assume I have a dice pool of $n$ dice and would like to compute the probability of $k \geq 1$ successes given this pool, i.e. I want to compute $p_\text{BL}(k \mid n)$ (we handle the case $k = 0$ below). Let us assume that instead of rolling all dice at once, I start by rolling the first die. Then there are three cases:

  1. I can roll a 1-4, i.e. a fail, which means I still need $k$ successes but only have $n-1$ dice left.
  2. I can roll a 5, which is a success, so I only need $k-1$ successes with the remaining $n-1$ dice.
  3. I can roll a 6, which is also a success, with the added benefit that I can reroll the die. However, that is the same as saying that I get the die back for my pool. In summary, I now need $k-1$ successes with $n$ dice.

Lastly, note that for $k = 0$ all my rolls need to be fails, which directly implies $p_\text{BL}(0 \mid n) = \left(\frac23\right)^n$. This leads to the following central recursion relation for $k \geq 1$ and $n \geq 1$:

\begin{aligned} p_\text{BL}(k \mid n) & = p_\text{fail} \cdot p_\text{BL}(k \mid n-1) + p_5 \cdot p_\text{BL}(k-1 \mid n-1) \\ & \quad + p_6 \cdot p_\text{BL}(k-1 \mid n) \\ & = \frac23 \cdot p_\text{BL}(k \mid n-1) + \frac16 \cdot p_\text{BL}(k-1 \mid n-1) \\ & \quad + \frac16 \cdot p_\text{BL}(k-1 \mid n) \end{aligned}

Now we only need suitable initial values to get something out of this equation. They can be chosen as

\begin{aligned} p_\text{BL}(k \mid 0) & = \begin{cases} 1 & \text{if } k = 0\\ 0 & \text{else} \end{cases}, \end{aligned}

which is the degenerate case of an empty dice pool: with no dice, we always have 0 successes. Furthermore, we need the condition $p_\text{BL}(0 \mid n) = \left(\frac23\right)^n$ introduced above. I have tried and failed to use the recursion formula to find a closed-form expression for $p_\text{BL}(k \mid n)$. At the bottom of the article, you can find expressions for $n \leq 3$ and the hypotheses I have extracted from them. Since I am mainly interested in the expected value, I will forgo further discussion of the closed-form solution here.
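Together with these initial values, the recursion can be evaluated exactly on a computer. Here is a minimal Python sketch (the function name `p_bl` is my own; `Fraction` keeps all arithmetic exact):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_bl(k, n):
    """Exact probability of k successes with a pool of n, per the recursion."""
    if n == 0:
        return Fraction(1 if k == 0 else 0)  # empty pool: always 0 successes
    if k == 0:
        return Fraction(2, 3) ** n           # all n dice must fail
    return (Fraction(2, 3) * p_bl(k, n - 1)
            + Fraction(1, 6) * p_bl(k - 1, n - 1)
            + Fraction(1, 6) * p_bl(k - 1, n))

print(p_bl(1, 1))  # 5/18
```

The recursion terminates because every call decreases either $k$ or $n$, and memoization via `lru_cache` keeps the whole computation fast.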

For now, let us assume that the distributions defined by the recursion above have a finite expected value. Can we compute it? Yes! For a distribution defined on the non-negative integers with a probability mass function, the expected value can be computed via

\mathbb{E}_\text{BL}[k \mid n] = \sum_{k=0}^\infty k \cdot p_\text{BL}(k \mid n).

For $n \geq 1$ we can use the recursion as follows:

\begin{aligned} \mathbb{E}_\text{BL}[k \mid n] & = \sum_{k=0}^\infty k \cdot p_\text{BL}(k \mid n) \\ & = 0 \cdot p_\text{BL}(0 \mid n) + \sum_{k=1}^\infty k \cdot p_\text{BL}(k \mid n) \\ & = \sum_{k=1}^\infty k \cdot (\frac23 \cdot p_\text{BL}(k \mid n-1) + \frac16 \cdot p_\text{BL}(k-1 \mid n-1) \\ & \quad + \frac16 \cdot p_\text{BL}(k-1 \mid n)) \\ & = \frac23 \cdot \mathbb{E}_\text{BL}[k \mid n-1] + \frac16 \sum_{k=1}^\infty k \cdot (p_\text{BL}(k-1 \mid n-1)\\& \qquad + p_\text{BL}(k-1 \mid n))\\ & = \frac23 \cdot \mathbb{E}_\text{BL}[k \mid n-1] + \frac16 \sum_{k=0}^\infty (k+1) \cdot (p_\text{BL}(k \mid n-1)\\ & \qquad + p_\text{BL}(k \mid n))\\ & = \frac23 \cdot \mathbb{E}_\text{BL}[k \mid n-1] + \frac16 \sum_{k=0}^\infty k \cdot (p_\text{BL}(k \mid n-1)\\& \qquad + p_\text{BL}(k \mid n)) + \frac16 \sum_{k=0}^\infty (p_\text{BL}(k \mid n-1) + p_\text{BL}(k \mid n))\\ & = \frac23 \cdot \mathbb{E}_\text{BL}[k \mid n-1] + \frac16 (\mathbb{E}_\text{BL}[k \mid n-1] + \mathbb{E}_\text{BL}[k \mid n])\\& \qquad + \frac16 \cdot 2\\ & = \frac56 \cdot \mathbb{E}_\text{BL}[k \mid n-1] + \frac16 \cdot \mathbb{E}_\text{BL}[k \mid n] + \frac13 \end{aligned}

This equation can be solved for $\mathbb{E}_\text{BL}[k \mid n]$:

\begin{aligned} \mathbb{E}_\text{BL}[k \mid n] & = \mathbb{E}_\text{BL}[k \mid n-1] + \frac25\\ & = \mathbb{E}_\text{BL}[k \mid n-2] + \frac25 + \frac25\\ & = \dots \\ & = \mathbb{E}_\text{BL}[k \mid 0] + \frac25 \cdot n \\ & = 0 + \frac25 \cdot n. \end{aligned}

So after all this lengthy computation, we arrive at a rather simple formula for the expected value. There is just one caveat left: we haven't proven yet that the expected value actually exists. Maybe this already follows from the above computations; however, I'm not completely convinced that it does. Luckily, we can find out enough about $p_\text{BL}$ to prove the finiteness of $\mathbb{E}_\text{BL}$.

Properties of $p_\text{BL}$

Let me start this section by simply writing down $p_\text{BL}$ for $n = 1, 2, 3$ and $k \geq 1$ (we know what happens at $k = 0$).

\begin{aligned} p(k \mid 1) & = \frac53 \cdot \left(\frac16\right)^k \\ p(k \mid 2) & = \frac53 \cdot \left(\frac16\right)^k \cdot \left(\frac53 k - \frac13\right)\\ p(k \mid 3) & = \frac53 \cdot \left(\frac16\right)^k \cdot \left(\frac{25}{18}k^2 - \frac{5}{6}k + \frac79\right) \end{aligned}
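These three expressions can be checked mechanically against the recursion. A small verification sketch (the helper names `p_bl` and `closed_form` are mine):

```python
from fractions import Fraction
from functools import lru_cache

@lru_cache(maxsize=None)
def p_bl(k, n):
    # the recursion derived earlier, with its two boundary conditions
    if n == 0:
        return Fraction(1 if k == 0 else 0)
    if k == 0:
        return Fraction(2, 3) ** n
    return (Fraction(2, 3) * p_bl(k, n - 1)
            + Fraction(1, 6) * p_bl(k - 1, n - 1)
            + Fraction(1, 6) * p_bl(k - 1, n))

def closed_form(k, n):
    """The conjectured expressions for n = 1, 2, 3 and k >= 1."""
    poly = {
        1: Fraction(1),
        2: Fraction(5, 3) * k - Fraction(1, 3),
        3: Fraction(25, 18) * k**2 - Fraction(5, 6) * k + Fraction(7, 9),
    }[n]
    return Fraction(5, 3) * Fraction(1, 6) ** k * poly

print(all(p_bl(k, n) == closed_form(k, n)
          for n in (1, 2, 3) for k in range(1, 25)))  # expect True
```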

To me, there seems to be a pattern: all expressions start with $\frac53 \cdot \left(\frac16\right)^k$ multiplied by a polynomial in $k$ of degree $n-1$. In addition, I noted that if you add up the absolute values of the coefficients, you end up with $n$ for these first examples.

Although I was unable to prove the exact form of $p_\text{BL}$, you can easily prove via induction that $p_\text{BL}$ takes the form

p_\text{BL}(k \mid n) = \frac53 \cdot \left(\frac16\right)^k \cdot f_n(k),

where $f_n$ is a polynomial function. I will omit the proof for the sake of brevity and focus instead on using the result. We want to prove that $\mathbb{E}_\text{BL}[k \mid n]$ is finite, i.e. we want to show that the series

\sum_{k=1}^\infty k \cdot \frac53 \cdot \left(\frac16\right)^k \cdot f_n(k) = \sum_{k=1}^\infty \left(\frac16\right)^k \cdot g_n(k)

converges. Note that we absorbed $\frac53 k$ into $f_n$, which yields yet another polynomial $g_n$. However, using the ratio test for series, we see that any series whose terms are an exponential factor (with base of absolute value < 1) times a polynomial converges absolutely (the key observation is that $\frac{g_n(k+1)}{g_n(k)} \to 1$ as $k \to \infty$ for any non-zero polynomial $g_n$). With that, we have proven (or at least sketched a proof) that $\mathbb{E}_\text{BL}$ is indeed finite, which validates the above computations.
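For completeness, the ratio-test computation is short: the ratio of consecutive terms of the series is

\frac{\left(\frac16\right)^{k+1} \cdot g_n(k+1)}{\left(\frac16\right)^k \cdot g_n(k)} = \frac16 \cdot \frac{g_n(k+1)}{g_n(k)} \xrightarrow{k \to \infty} \frac16 < 1,

so the series converges absolutely.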

Computational confirmation

I have chosen to simulate the rolls to have an empirical sanity check of the above arguments. A Jupyter notebook containing the code can be found here (bear with me, it's a bit ugly). In this first figure, you can see that the closed formulas I derived for $n \leq 3$ match the simulated probabilities quite nicely, except in the tails. However, this is to be expected because of our limited sample size. Furthermore, the effect is amplified by the logarithmic scale on the y-axis.

[Figure: simulated probabilities vs. the closed formulas for n ≤ 3, logarithmic y-axis]

In this second plot, you can see that the formula for the expected values matches even for larger $n$.

[Figure: simulated vs. predicted expected values for larger n]

Conclusion

We were able to compute the expected number of successes as $\frac25 n$, where $n$ is the dice pool size. What I find interesting is that this is a rather small increase in expected successes compared to rolling only once (40% vs. 33% per die), considering that in the Breaking the Limit case the probability mass function has infinite support. However, we saw that the probability decays exponentially like $\left(\frac16\right)^k$, which explains the small effect. The above analysis ignored two properties of Breaking the Limit: first, the increased dice pool, and second, that this ability allows you to ignore 'limits' in the game. The first point is addressed in the main article. The second might lead you to use Breaking the Limit even though Second Chance is better from a pure expected-value standpoint.

Thank you for reading and I hope you enjoyed the post.