Shadowrun: Second Chance

Author
Tim Adler
Publication date
March 13, 2022

Introduction

In this post, we will discuss how the edge ability Second Chance influences the expected number of successes on a roll in the pen-and-paper role-playing game Shadowrun (5th edition). This post is part of a series where I compare Second Chance against Breaking the Limit. The introductory post can be found here. As a quick recap: if you want to do something complicated in Shadowrun, the dice decide whether you succeed or not. You have a pool of six-sided dice (or d6 for short), which you roll. All 5s and 6s count as successes. If you pass a threshold, you succeed; otherwise, you fail. Every character has a certain amount of edge, which they can use in tight spots to increase their chance of success. One option is called Second Chance. This ability lets you reroll all your fails (i.e., all dice showing 1-4). What is the effect on your expected number of successes? Can we even find the probability mass function for this process? Let's find out, shall we? 🙂
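To make the basic mechanic concrete, here is a minimal Python sketch of a regular roll without any edge (the function name and the example threshold are mine, purely for illustration):

import numpy as np

def plain_roll(n_pool: int, threshold: int) -> bool:
    # Roll n_pool six-sided dice; every 5 or 6 counts as a success.
    roll = np.random.randint(1, 7, n_pool)
    n_successes = (roll >= 5).sum()
    # The test succeeds if the successes reach the threshold.
    return n_successes >= threshold

# Example: 8 dice against an (arbitrary) threshold of 3.
print(plain_roll(8, 3))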

The Computation

First, we need to formalize and define a couple of things: Let $n \in \mathbb{N}$ denote the number of dice, $k \in \{0, \dots, n\}$ the number of successes, and $p \in [0, 1]$ the probability of success (in our case $p = \frac13$, but the formula holds more generally). We would then like to compute the probability mass function (pmf) while using Second Chance, which we denote by $p_\text{SC}$. We get

$$\begin{aligned}p_\text{SC}(k\mid n) & = \sum_{l=0}^n p(l \text{ successes on 1st roll}) \cdot p(k - l \text{ successes on 2nd roll}\mid l \text{ successes on 1st roll})\\ & =\sum_{l=0}^n p_\text{b}(l\mid n,\ p) \cdot p_\text{b}(k-l\mid n-l,\ p)\\ & = \sum_{l=0}^k \binom nl \cdot p^l \cdot (1-p)^{n-l} \cdot \binom{n-l}{k-l} \cdot p^{k-l} \cdot (1-p)^{n-l-(k-l)}\\ & = \sum_{l=0}^k p^k (1-p)^{2n-l-k} \frac{n!\cdot(n-l)!}{l!\cdot(n-l)!\cdot(k-l)!\cdot(n-k)!}\\ & = p^k \sum_{l=0}^k (1-p)^{2n-l-k} \frac{n!}{l!\cdot(k-l)!\cdot(n-k)!} \cdot \frac{k!}{k!}\\ & = \binom{n}{k} p^k \sum_{l=0}^k \binom{k}{l} (1-p)^{2n-2k+(k-l)}\\ & = \binom{n}{k} p^k (1-p)^{2n-2k} \sum_{l=0}^k \binom{k}{l}(1-p)^{k-l} \\ & \stackrel{*}{=} \binom{n}{k} p^k(1-p)^{2n-2k} (1 + (1-p))^k\\ & = \binom{n}{k} p^k (1 - 2p + p^2)^{n-k}(2 - p)^k\\ & = \binom{n}{k} (2p - p^2)^k (1- (2p - p^2))^{n-k}\\ & = \binom{n}{k} q^k (1-q)^{n-k}\\ & = p_\text{b}(k \mid n,\ q)\end{aligned}$$

where $p_\text{b}(\cdot \mid n,\ p)$ denotes the probability mass function of the binomial distribution with $n$ rolls and a success probability of $p$, and we defined $q \coloneqq 2p - p^2$. In the first equation, we use the definition of Second Chance and the law of total probability. For the second equal sign, we use that the two dice rolls follow binomial distributions. Afterward, we plug everything in and collect terms; note that all summands with $l > k$ vanish, which is why the sum only runs up to $k$ from the third line onward. In equation (*), we apply the binomial theorem to get rid of the sum. For the last equation, we only note that this is the formula for a binomial distribution with success probability $q$, number of successes $k$, and number of rolls $n$. Surprisingly (at least for me), the distribution of successes after Second Chance is again a binomial distribution, just like for regular dice rolls. However, instead of the standard $\frac13$ chance of success, we have a new probability $q$, which evaluates to $q = \frac59$. In other words, with Second Chance the probability of rolling a success on any given die is larger than 50%!
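Before we move on, we can sanity-check the algebra numerically: the convolution sum from the first lines of the derivation has to match the binomial pmf with parameter $q$. Here is a quick sketch of such a check (the function name pmf_convolution is mine, not from the post's notebook):

from math import isclose
from scipy.stats import binom

def pmf_convolution(k: int, n: int, p: float) -> float:
    # Directly evaluate sum_l p_b(l | n, p) * p_b(k - l | n - l, p).
    return sum(binom.pmf(l, n, p) * binom.pmf(k - l, n - l, p)
               for l in range(k + 1))

n, p = 10, 1/3
q = 2*p - p**2  # = 5/9
# The direct sum and the closed form agree for every k.
assert all(isclose(pmf_convolution(k, n, p), binom.pmf(k, n, q))
           for k in range(n + 1))

Next, it would be great to compute the expected number of successes. However, since we have a standard binomial distribution, this is easy: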

$$\mathbb{E}_\text{SC}[k\mid n] = \mathbb{E}_\text{b}[k\mid n,\ q] = q \cdot n = (2p - p^2) \cdot n = \frac{5}{9} n$$
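To get a concrete feel for this (the numbers are mine, purely illustrative): with a pool of $n = 9$ dice, a regular roll yields $\frac{9}{3} = 3$ expected successes, while Second Chance lifts this to $\frac59 \cdot 9 = 5$.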

Comparison to Simulation

Maybe you are like me and do not fully trust your own computations. I mean, the math looks right, but perhaps I overlooked something? Or is there some subtle error hidden in one of the equations? Luckily for us, in this setting we can run experiments to check the computation! The most straightforward approach would be to spend an afternoon rolling dice at your table and counting the successes. In the evening, you can compare your distribution of successes to the one predicted by $p_\text{SC}$. Of course, this becomes tedious rather quickly, and depending on how fast you roll, you might not get enough rolls done for the law of large numbers to kick in (especially since you would need to repeat the experiment for every $n$ you are interested in). So instead, you can let a computer roll the dice. This is called a Monte Carlo (MC) simulation. You write a script that tells the computer to roll the dice a million times, you wait a short while, and then you have your success frequencies. Naturally, the script is another possible source of errors, but I assume that the chances of making an error in the script and one in the computation in such a way that the results coincide are relatively low. You can find a Jupyter notebook that compares the formula to the MC simulation here (please let me know if you find any errors). First, we implement a method that rolls the dice for us:

import numpy as np

def second_chance_mc(n_pool: int, n_sample: int) -> list:
    res = []

    for _ in range(n_sample):
        # First roll: n_pool d6, where 5s and 6s count as successes.
        roll = np.random.randint(1, 7, n_pool)
        suc = (roll >= 5).sum()
        # Second Chance: reroll all failed dice once and add the new successes.
        roll = np.random.randint(1, 7, n_pool - suc)
        suc += (roll >= 5).sum()
        res.append(suc)

    return res

The method second_chance_mc rolls n_pool dice, rerolls the failures once, and repeats this whole procedure n_sample times. As a result, we receive a list with the number of successes for each trial. Our formula can be easily implemented using scipy.stats.binom:

import pandas as pd
from scipy.stats import binom

p = 1/3
pa = 2*p - p**2  # the adjusted success probability q = 2p - p^2 = 5/9

def p_second_chance_comp(n_pool: int, p: float = pa) -> pd.Series:
    # Evaluate the binomial pmf with the adjusted probability for k = 0, ..., n_pool.
    return pd.Series({k: binom.pmf(k, n_pool, p) for k in range(n_pool+1)})
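As a quick smoke test (the variable names here are mine), we can put the empirical frequencies from the simulation next to the analytic pmf:

n_pool, n_sample = 8, 100_000
simulated = pd.Series(second_chance_mc(n_pool, n_sample))
# Relative frequency of each success count in the simulation.
empirical = simulated.value_counts(normalize=True).sort_index()
analytic = p_second_chance_comp(n_pool)
print(pd.DataFrame({"simulation": empirical, "formula": analytic}).fillna(0))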

With these two methods in place, we can plot the probability mass functions obtained via the formula and via our simulated rolls. For $n$ up to 16, this leads to the following chart:

[Figure: probability mass functions for n = 1 to 16, formula vs. MC simulation]
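The actual plotting code lives in the linked notebook; as a rough idea, a chart like the one above could be produced with matplotlib along these lines (the grid layout and styling are my assumptions):

import matplotlib.pyplot as plt

fig, axes = plt.subplots(4, 4, figsize=(12, 10), sharex=True)
for n_pool, ax in enumerate(axes.flat, start=1):
    analytic = p_second_chance_comp(n_pool)
    mc = pd.Series(second_chance_mc(n_pool, 10_000)).value_counts(normalize=True)
    ax.plot(analytic.index, analytic.values, "o", fillstyle="none", label="formula")
    ax.plot(mc.index, mc.values, "x", label="simulation")
    ax.set_title(f"n = {n_pool}")
axes.flat[0].legend()
fig.tight_layout()
plt.show()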

In the chart above, you can see that the crosses match up pretty nicely with the circles. So it really seems that our formula holds 😃 Last but not least, we can also compare the means (which of course coincide if the pmfs do, but still):

[Figure: expected number of successes, formula vs. MC simulation]

Conclusion

We have found that Second Chance lifts the expected number of successes for $n$ rolls from $\frac{n}{3}$ to $\frac{5n}{9}$, i.e., the per-die success probability rises from $\frac13$ to $\frac59$. Furthermore, the distribution of successes again follows a binomial distribution, which was not obvious to me from the start. Also, I would not have expected that a single reroll would lift the success probability above 50% ($0.5 < 0.\bar{5} = \frac59$).

This was the first half of the computations that I owed you from the introductory post. You can find the second half, which is a bit more involved and revolves around Breaking the Limit, here.

As always, I'm happy to hear from you. Feel free to reach out!