Bayesian Workflow (Police Officer’s Dilemma)#

[1]:

import arviz as az
import bambi as bmb
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

[2]:

az.style.use('arviz-darkgrid')
np.random.seed(1234)
pd.options.mode.chained_assignment = None


Here we will analyze a dataset from experimental psychology in which a sample of 36 human participants engaged in what is called the shooter task, yielding 3600 responses and reaction times (100 from each subject). The link above gives some more information about the shooter task, but basically it is a sort of crude first-person-shooter video game in which the subject plays the role of a police officer. The subject views a variety of urban scenes, and in each round or “trial” a person or “target” appears on the screen after some random interval. This person is either Black or White (with 50% probability), and they are holding some object that is either a gun or some other object like a phone or wallet (with 50% probability). When a target appears, the subject has a very brief response window – 0.85 seconds in this particular experiment – within which to press one of two keyboard buttons indicating a “shoot” or “don’t shoot” response. Subjects receive points for correct and timely responses in each trial; subjects’ scores are penalized for incorrect responses (i.e., shooting an unarmed person or failing to shoot an armed person) or for failing to respond within the 0.85-second window. The goal of the task, from the subject’s perspective, is to maximize their score.
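The trial logic described above can be sketched as a small function. This is a hypothetical reconstruction of the scoring rules for illustration only; the function name and point values are made up, not taken from the actual experiment:

```python
# Hypothetical sketch of one trial's scoring, following the rules described
# above. The point values are illustrative, not from the experiment.
def score_trial(armed: bool, response: str, rt_ms: float,
                window_ms: float = 850) -> int:
    """Return the (illustrative) points earned on a single shooter-task trial."""
    if response is None or rt_ms > window_ms:
        return -2  # no response within the 0.85 s window
    correct = (response == 'shoot') == armed
    return 1 if correct else -3  # reward correct, penalize incorrect responses

print(score_trial(armed=True, response='shoot', rt_ms=573))   # timely, correct
print(score_trial(armed=False, response='shoot', rt_ms=500))  # shot an unarmed target
```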

The typical findings in the shooter task are that (a) subjects are quicker to respond to armed targets than to unarmed targets, but are especially quick toward armed black targets and especially slow toward unarmed black targets, and (b) subjects are more likely to shoot black targets than white targets, whether they are armed or not.

[3]:

shooter = pd.read_csv('data/shooter.csv', na_values='.')
shooter.head(10)

[3]:

subject target trial race object time response
0 1 w05 19 white nogun 658.0 correct
1 2 b07 19 black gun 573.0 correct
2 3 w05 19 white gun 369.0 correct
3 4 w07 19 white gun 495.0 correct
4 5 w15 19 white nogun 483.0 correct
5 6 w96 19 white nogun 786.0 correct
6 7 w13 19 white nogun 519.0 correct
7 8 w06 19 white nogun 567.0 correct
8 9 b14 19 black gun 672.0 incorrect
9 10 w90 19 white gun 457.0 correct

The design of the experiment is such that the subject, target, and object (i.e., gun vs. no gun) factors are fully crossed: each subject views each target twice, once with a gun and once without a gun.

[4]:

pd.crosstab(shooter['subject'], [shooter['target'], shooter['object']])

[4]:

target b01 b02 b03 b04 b05 ... w95 w96 w97 w98 w99
object gun nogun gun nogun gun nogun gun nogun gun nogun ... gun nogun gun nogun gun nogun gun nogun gun nogun
subject
1 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
2 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
3 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
4 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
5 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
6 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
7 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
8 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
9 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
10 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
11 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
12 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
13 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
14 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
15 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
16 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
17 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
18 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
19 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
20 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
21 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
22 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
23 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
24 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
25 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
26 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
27 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
28 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
29 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
30 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
31 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
32 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
33 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
34 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
35 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1
36 1 1 1 1 1 1 1 1 1 1 ... 1 1 1 1 1 1 1 1 1 1

36 rows × 100 columns

The response speeds on each trial are recorded as reaction times (milliseconds per response), but here we invert them and multiply by 1000 so that we are analyzing response rates (responses per second). There is no theoretical reason to prefer one of these metrics over the other, but it turns out that response rates tend to have nicer distributional properties than reaction times (i.e., they deviate less strongly from the standard Gaussian assumptions), so response rates will be a little more convenient for us, allowing us to use some fairly standard distributional models.

[5]:

shooter['rate'] = 1000.0 / shooter['time']

[6]:

plt.hist(shooter['rate'].dropna());
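The claim that rates are better behaved than raw times can be checked with a quick simulation. This is an illustrative sketch using a Gaussian-plus-exponential shape often used to describe reaction time distributions, not the actual shooter data:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1234)
# Illustrative reaction times (ms): Gaussian core plus an exponential tail,
# a common description of RT distributions (not the actual shooter data)
times = pd.Series(rng.normal(500, 50, 10_000) + rng.exponential(200, 10_000))
rates = 1000.0 / times  # responses per second, as in the cell above

# The raw times are strongly right-skewed; the rates much less so
print(round(times.skew(), 2), round(rates.skew(), 2))
```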


Fit response rate models#

Subject specific effects only#

Our first model is analogous to how the data from the shooter task are usually analyzed: incorporating all subject-level sources of variability, but ignoring the sampling variability due to the sample of 50 targets. This is a Bayesian generalized linear mixed model (GLMM) with a Normal response and with intercepts and slopes that vary randomly across subjects. Of note here is the C(x, Sum) syntax, which is from the Patsy library that we use to parse formulae. This instructs Bambi to use contrast codes of -1 and +1 for the two levels of each of the common factors of race (black vs. white) and object (gun vs. no gun), so that the race and object coefficients can be interpreted as simple effects on average across the levels of the other factor (directly analogous, but not quite equivalent, to the main effects). This is the standard coding used in ANOVA.
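To see what the -1/+1 coding buys us, here is a tiny hand-rolled sketch (illustrative data, with the factor coded by hand rather than through Patsy) showing that the slope on a ±1-coded regressor equals half the difference between the two group means:

```python
import numpy as np
import pandas as pd

# Illustrative balanced two-level factor (not the shooter data)
df = pd.DataFrame({'race': ['black'] * 4 + ['white'] * 4,
                   'rate': [1.8, 1.9, 2.0, 1.7, 1.5, 1.6, 1.4, 1.5]})

# Sum (deviation) coding by hand: black -> +1, white -> -1
x = np.where(df['race'] == 'black', 1.0, -1.0)
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, df['rate'].to_numpy(), rcond=None)

means = df.groupby('race')['rate'].mean()
# The slope equals half the black-minus-white mean difference,
# and the intercept equals the grand mean
print(beta[1], (means['black'] - means['white']) / 2)
```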

[7]:

subj_model = bmb.Model(shooter, dropna=True)
subj_fitted = subj_model.fit('rate ~ C(race, Sum)*C(object, Sum)',
                             group_specific=['C(race, Sum)*C(object, Sum)|subject'],
                             draws=1000, init=None)

Automatically removing 98/3600 rows from the dataset.
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [rate_sigma, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sigma, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sigma, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sigma, 1|subject_offset, 1|subject_sigma, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]

100.00% [4000/4000 00:26<00:00 Sampling 2 chains, 0 divergences]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 27 seconds.
The number of effective samples is smaller than 25% for some parameters.


First let’s visualize the default priors that Bambi automatically decided on for each of the parameters. We do this by calling the .plot_priors() method of the Model object.

[8]:

subj_model.plot_priors();


The priors on the common effects seem quite reasonable. Recall that because of the -1 vs +1 contrast coding, the coefficients correspond to half the difference between the two levels of each factor. So the priors on the common effects essentially say that the black vs. white and gun vs. no gun response rate differences (and their interaction) are very unlikely to be as large as a full response per second. The priors on the standard deviations of the group specific effects are yoked to the corresponding common effect priors (see our technical paper for more info).

Now let’s visualize the model estimates. We do this by passing the object that resulted from the Model.fit() call to az.plot_trace().

[9]:

az.plot_trace(subj_fitted, compact=True);


Each distribution in the plots above has 2 densities because we used 2 MCMC chains, so we are viewing the results of both chains prior to their aggregation. The main message from the plot above is that the chains all seem to have converged well and the resulting posterior distributions all look quite reasonable. It’s a bit easier to digest all this information in a concise, tabular form, which we can get by passing the object that resulted from the Model.fit() call to az.summary().

[10]:

# by default, az.summary() reports a different set of variables than
# subj_model.plot(), so we list the ones we want explicitly:
az.summary(subj_fitted, var_names=['Intercept', 'C(race, Sum)', 'C(object, Sum)', 'C(race, Sum):C(object, Sum)',
                                   '1|subject', 'C(race, Sum)[S.black]|subject',
                                   'C(object, Sum)[S.gun]|subject',
                                   'C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject',
                                   'rate_sigma'])

[10]:

mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept 1.706 0.014 1.681 1.735 0.001 0.001 395.0 395.0 397.0 781.0 1.01
C(race, Sum) -0.001 0.004 -0.009 0.006 0.000 0.000 2432.0 728.0 2427.0 1231.0 1.00
C(object, Sum) 0.093 0.006 0.081 0.105 0.000 0.000 1228.0 1216.0 1258.0 1125.0 1.00
C(race, Sum):C(object, Sum) 0.024 0.004 0.016 0.031 0.000 0.000 2314.0 2115.0 2372.0 1373.0 1.00
1|subject[0] -0.001 0.026 -0.048 0.050 0.001 0.001 1168.0 765.0 1166.0 1236.0 1.00
... ... ... ... ... ... ... ... ... ... ... ...
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[32] 0.000 0.006 -0.012 0.011 0.000 0.000 2050.0 1275.0 2371.0 1499.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[33] 0.000 0.006 -0.013 0.010 0.000 0.000 2095.0 1223.0 2341.0 1461.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[34] -0.000 0.006 -0.011 0.013 0.000 0.000 2195.0 1179.0 2331.0 1538.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[35] -0.001 0.006 -0.014 0.010 0.000 0.000 1959.0 1246.0 1994.0 1546.0 1.00
rate_sigma 0.240 0.003 0.235 0.245 0.000 0.000 2647.0 2647.0 2614.0 1415.0 1.00

149 rows × 11 columns

The take-home message from the analysis seems to be that we do find evidence for the usual finding that subjects are especially quick to respond (presumably with a shoot response) to armed black targets and especially slow to respond to unarmed black targets (while unarmed white targets receive “don’t shoot” responses with less hesitation). We see this in the fact that the marginal posterior for the C(race, Sum):C(object, Sum) interaction coefficient is concentrated strongly away from 0.

Stimulus specific effects#

A major flaw in the analysis above is that stimulus specific effects are ignored. The model does include group specific effects for subjects, reflecting the fact that the subjects we observed are but a sample from the broader population of subjects we are interested in and that potentially could have appeared in our study. But the targets we observed – the 50 photographs of white and black men that subjects responded to – are also but a sample from the broader theoretical population of targets we are interested in talking about, and that we could just as easily and justifiably have used as the experimental stimuli in the study. Since the stimuli comprise a random sample, they are subject to sampling variability, and this sampling variability should be accounted for in the analysis by including stimulus specific effects. For some more information on this, see here, particularly pages 62-63.

To account for this, we let the intercept and slope for object be different for each target. Specific slopes for object across targets are possible because, if you recall, the design of the study was such that each target gets viewed twice by each subject, once with a gun and once without a gun. However, because each target is always either white or black, it’s not possible to add group specific slopes for the race factor or the interaction.
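The constraint can be seen in the data layout itself. Here is a sketch with a miniature design table (illustrative rows that mirror the structure described above, not the real dataset): object varies within each target, but race does not, so only an object slope per target is identifiable.

```python
import pandas as pd

# Miniature illustration of the design: object is crossed with target,
# while race is nested within target (each target has exactly one race)
design = pd.DataFrame({
    'target': ['b01', 'b01', 'w01', 'w01'],
    'race':   ['black', 'black', 'white', 'white'],
    'object': ['gun', 'nogun', 'gun', 'nogun'],
})

levels = design.groupby('target')[['race', 'object']].nunique()
print(levels)
# 'object' takes 2 values within each target (a per-target slope is estimable),
# while 'race' takes only 1 (a per-target race slope is not identifiable)
```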

[11]:

stim_model = bmb.Model(shooter, dropna=True)
stim_fitted = stim_model.fit('rate ~ C(race, Sum)*C(object, Sum)',
                             group_specific=['C(race, Sum)*C(object, Sum)|subject', 'C(object, Sum)|target'],
                             draws=1000)

Automatically removing 98/3600 rows from the dataset.
Auto-assigning NUTS sampler...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [rate_sigma, C(object, Sum)[S.gun]|target_offset, C(object, Sum)[S.gun]|target_sigma, 1|target_offset, 1|target_sigma, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sigma, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sigma, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sigma, 1|subject_offset, 1|subject_sigma, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]

100.00% [4000/4000 00:41<00:00 Sampling 2 chains, 0 divergences]
Sampling 2 chains for 1_000 tune and 1_000 draw iterations (2_000 + 2_000 draws total) took 41 seconds.
The estimated number of effective samples is smaller than 200 for some parameters.


Now let’s look at the results…

[12]:

az.plot_trace(stim_fitted, compact=True);

[13]:

az.summary(stim_fitted, var_names=['Intercept', 'C(race, Sum)', 'C(object, Sum)', 'C(race, Sum):C(object, Sum)',
                                   '1|subject', 'C(object, Sum)[S.gun]|subject',
                                   'C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject',
                                   'rate_sigma'])

[13]:

mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept 1.704 0.020 1.668 1.739 0.001 0.001 190.0 189.0 192.0 251.0 1.01
C(race, Sum) -0.001 0.014 -0.026 0.026 0.001 0.001 212.0 212.0 213.0 510.0 1.01
C(object, Sum) 0.095 0.015 0.067 0.125 0.001 0.001 154.0 154.0 156.0 360.0 1.01
C(race, Sum):C(object, Sum) 0.025 0.014 -0.001 0.050 0.001 0.001 262.0 262.0 266.0 446.0 1.01
1|subject[0] -0.004 0.024 -0.051 0.039 0.001 0.001 672.0 672.0 677.0 1118.0 1.00
... ... ... ... ... ... ... ... ... ... ... ...
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[32] -0.000 0.006 -0.016 0.011 0.000 0.000 1757.0 1285.0 1848.0 1466.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[33] 0.000 0.006 -0.012 0.014 0.000 0.000 2472.0 1223.0 2590.0 1452.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[34] 0.000 0.007 -0.012 0.014 0.000 0.000 1759.0 1224.0 1814.0 1309.0 1.00
C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject[35] -0.002 0.007 -0.017 0.010 0.000 0.000 1554.0 1083.0 1916.0 1422.0 1.00
rate_sigma 0.205 0.002 0.201 0.210 0.000 0.000 2683.0 2683.0 2694.0 1371.0 1.00

113 rows × 11 columns

There are two interesting things to note here. The first is that the key interaction effect, C(race, Sum):C(object, Sum), is much less clear now. The marginal posterior is still mostly concentrated away from 0, but there’s certainly a nontrivial part that overlaps with 0; 3.65% of the distribution, to be exact.

[14]:

(stim_fitted.posterior['C(race, Sum):C(object, Sum)'] < 0).mean()

[14]:

<xarray.DataArray 'C(race, Sum):C(object, Sum)' ()>
array(0.0365)

The second interesting thing is that the two new variance components in the model, those associated with the stimulus specific effects, are actually rather large. This largely explains the first fact above: if these had been estimated to be close to 0, the model estimates wouldn’t be much different than they were in the subj_model. It makes sense that there is a strong tendency for different targets to elicit different reaction times on average, which leads to a large estimate of 1|target_sigma. Less obviously, the large estimate of C(object, Sum)[S.gun]|target_sigma (targets tend to vary a lot in their response rate differences when they have a gun vs. some other object) also makes sense, because in this experiment different targets were pictured with different non-gun objects. Some of these objects, such as a bright red can of Coca-Cola, are not easily confused with a gun, so subjects are able to quickly decide on the correct response. Other objects, such as a black cell phone, are more easily confused with a gun, so subjects take longer to decide on the correct response when confronted with them. Since each target is yoked to a particular non-gun object, there is good reason to expect large target-to-target variability in the object effect, which is indeed what we see in the model estimates.
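This intuition can be illustrated with a small simulation (purely illustrative numbers, not the fitted estimates): when each target carries its own object effect, the per-target half-differences between armed and unarmed trials spread out accordingly.

```python
import numpy as np

rng = np.random.default_rng(1234)
n_targets, n_subjects = 50, 36

# Illustrative per-target object effects: some non-gun objects (a red can)
# are easy, others (a black phone) hard, so the effect varies by target
obj_effect = rng.normal(0.1, 0.3, n_targets)

noise = 0.2  # illustrative trial-level noise
gun = 1.7 + obj_effect + rng.normal(0, noise, (n_subjects, n_targets))
nogun = 1.7 - obj_effect + rng.normal(0, noise, (n_subjects, n_targets))

# The per-target half-difference recovers the varying object effect,
# so its spread across targets is close to the simulated 0.3
est = (gun - nogun).mean(axis=0) / 2
print(round(est.std(), 2))
```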

Fit response models#

Here we seek evidence of the second traditional finding, that subjects are more likely to respond ‘shoot’ toward black targets than toward white targets, regardless of whether they are armed or not. Currently the dataset just records whether the given response was correct or not, so first we transform this into whether the response was ‘shoot’ or ‘dontshoot’.

[15]:

shooter['shoot_or_not'] = shooter['response'].astype(str)

# armed targets
new_vals = {'correct': 'shoot', 'incorrect': 'dontshoot', 'timeout': np.nan}
shooter['shoot_or_not'][shooter['object']=='gun'] = \
    shooter['response'][shooter['object']=='gun'].astype(str).replace(new_vals)
# unarmed targets
new_vals = {'correct': 'dontshoot', 'incorrect': 'shoot', 'timeout': np.nan}
shooter['shoot_or_not'][shooter['object']=='nogun'] = \
    shooter['response'][shooter['object']=='nogun'].astype(str).replace(new_vals)

# view result
shooter.head(20)

[15]:

subject target trial race object time response rate shoot_or_not
0 1 w05 19 white nogun 658.0 correct 1.519757 dontshoot
1 2 b07 19 black gun 573.0 correct 1.745201 shoot
2 3 w05 19 white gun 369.0 correct 2.710027 shoot
3 4 w07 19 white gun 495.0 correct 2.020202 shoot
4 5 w15 19 white nogun 483.0 correct 2.070393 dontshoot
5 6 w96 19 white nogun 786.0 correct 1.272265 dontshoot
6 7 w13 19 white nogun 519.0 correct 1.926782 dontshoot
7 8 w06 19 white nogun 567.0 correct 1.763668 dontshoot
8 9 b14 19 black gun 672.0 incorrect 1.488095 dontshoot
9 10 w90 19 white gun 457.0 correct 2.188184 shoot
10 11 w91 19 white nogun 599.0 correct 1.669449 dontshoot
11 12 b17 19 black nogun 769.0 correct 1.300390 dontshoot
12 13 b04 19 black nogun 600.0 correct 1.666667 dontshoot
13 14 w17 19 white nogun 653.0 correct 1.531394 dontshoot
14 15 b93 19 black gun 468.0 correct 2.136752 shoot
15 16 w96 19 white gun 546.0 correct 1.831502 shoot
16 17 w91 19 white gun 591.0 incorrect 1.692047 dontshoot
17 18 b95 19 black gun NaN timeout NaN NaN
18 19 b09 19 black gun 656.0 correct 1.524390 shoot
19 20 b02 19 black gun 617.0 correct 1.620746 shoot

Let’s skip straight to the correct model that includes stimulus specific effects. This looks quite similar to the stim_model from above, except that we change the response to the new shoot_or_not variable – notice the [shoot] syntax indicating that we wish to model the probability that shoot_or_not=='shoot', not shoot_or_not=='dontshoot' – and then pass family='bernoulli' to indicate a mixed effects logistic regression.

[16]:

stim_response_model = bmb.Model(shooter.dropna())
# Note we increased target_accept from default 0.8 to 0.9 because there were divergences
stim_response_fitted = stim_response_model.fit(
    'shoot_or_not[shoot] ~ C(race, Sum)*C(object, Sum)',
    group_specific=['C(race, Sum)*C(object, Sum)|subject', 'C(object, Sum)|target'],
    draws=1500, target_accept=0.9, family='bernoulli')

Modeling the probability that shoot_or_not==shoot
Auto-assigning NUTS sampler...
Multiprocess sampling (2 chains in 2 jobs)
NUTS: [C(object, Sum)[S.gun]|target_offset, C(object, Sum)[S.gun]|target_sigma, 1|target_offset, 1|target_sigma, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_offset, C(race, Sum)[S.black]:C(object, Sum)[S.gun]|subject_sigma, C(object, Sum)[S.gun]|subject_offset, C(object, Sum)[S.gun]|subject_sigma, C(race, Sum)[S.black]|subject_offset, C(race, Sum)[S.black]|subject_sigma, 1|subject_offset, 1|subject_sigma, C(race, Sum):C(object, Sum), C(object, Sum), C(race, Sum), Intercept]

100.00% [5000/5000 00:59<00:00 Sampling 2 chains, 0 divergences]
Sampling 2 chains for 1_000 tune and 1_500 draw iterations (2_000 + 3_000 draws total) took 60 seconds.
The number of effective samples is smaller than 25% for some parameters.


Show the trace plot

[17]:

az.plot_trace(stim_response_fitted, compact=True);


Looks pretty good! Now for the more concise summary.

[18]:

az.summary(stim_response_fitted)

[18]:

mean sd hdi_3% hdi_97% mcse_mean mcse_sd ess_mean ess_sd ess_bulk ess_tail r_hat
Intercept -0.021 0.149 -0.315 0.241 0.004 0.003 1752.0 1389.0 1780.0 1810.0 1.0
C(race, Sum) 0.217 0.151 -0.060 0.502 0.004 0.003 1818.0 1818.0 1821.0 2084.0 1.0
C(object, Sum) 4.161 0.247 3.700 4.621 0.007 0.005 1180.0 1170.0 1194.0 1648.0 1.0
C(race, Sum):C(object, Sum) 0.196 0.165 -0.106 0.506 0.004 0.003 1841.0 1699.0 1849.0 2010.0 1.0
1|subject_sigma 0.233 0.151 0.000 0.483 0.005 0.003 1032.0 1032.0 932.0 1199.0 1.0
... ... ... ... ... ... ... ... ... ... ... ...
C(object, Sum)[S.gun]|target[45] 0.346 0.591 -0.726 1.482 0.011 0.009 3152.0 2159.0 3187.0 2567.0 1.0
C(object, Sum)[S.gun]|target[46] 0.028 0.536 -1.029 0.989 0.009 0.010 3214.0 1538.0 3237.0 2124.0 1.0
C(object, Sum)[S.gun]|target[47] 0.293 0.548 -0.702 1.325 0.009 0.009 3436.0 1824.0 3522.0 2245.0 1.0
C(object, Sum)[S.gun]|target[48] 0.348 0.581 -0.705 1.439 0.010 0.009 3480.0 1935.0 3595.0 2151.0 1.0
C(object, Sum)[S.gun]|target[49] 0.004 0.546 -1.055 0.985 0.009 0.010 4023.0 1523.0 4156.0 2087.0 1.0

254 rows × 11 columns

There is some slight evidence here for the hypothesis that subjects are more likely to shoot black targets, regardless of whether they are armed or not, but the evidence is not too strong. The marginal posterior for the C(race, Sum) coefficient is mostly concentrated away from 0, but it overlaps with 0 even more than the key interaction effect in the previous model did.
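To put the C(race, Sum) coefficient on the probability scale, we can push the posterior means from the summary above through the inverse logit. This is a rough sketch using point estimates, with the other terms held at zero (i.e., at the midpoint of the object factor under the ±1 coding); a full answer would propagate the whole posterior:

```python
import math

def expit(x):
    # inverse logit: maps a log-odds value to a probability
    return 1.0 / (1.0 + math.exp(-x))

# Posterior means from the summary above; other terms held at zero
intercept, race_coef = -0.021, 0.217

# With ±1 sum coding, black targets get +race_coef, white targets -race_coef
p_black = expit(intercept + race_coef)
p_white = expit(intercept - race_coef)
print(round(p_black, 3), round(p_white, 3))  # roughly 0.549 vs 0.441
```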

[19]:

(stim_response_fitted.posterior['C(race, Sum)'] < 0).mean()

[19]:

<xarray.DataArray 'C(race, Sum)' ()>
array(0.07066667)

[20]:

%load_ext watermark
%watermark -n -u -v -iv -w

arviz  0.10.0
numpy  1.19.4
pandas 1.1.5
bambi  0.2.0
last updated: Wed Dec 16 2020

CPython 3.8.5
IPython 7.18.1
watermark 2.0.2