The tab "Compute an E-value" computes the E-value: the minimum strength of association, on the risk ratio scale, that an unmeasured confounder would need to have with both the exposure and the outcome, conditional on the measured covariates, to fully explain away a specific exposure-outcome association. For outcome types other than relative risks, the approximate conversions used involve additional assumptions; see citation (2) for details.
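
As a sketch of the calculation described above, the point-estimate E-value for a risk ratio follows the closed-form expression from VanderWeele & Ding (2017); the function name below is illustrative, not part of any package:

```python
import math

def e_value(rr):
    """E-value for an observed risk ratio (point estimate only)."""
    # For protective exposures (RR < 1), take the reciprocal first,
    # since the E-value is defined on the scale of associations >= 1.
    rr = max(rr, 1 / rr)
    return rr + math.sqrt(rr * (rr - 1))

# Example: an observed RR of 3.9 yields an E-value of about 7.26,
# meaning a confounder associated with both exposure and outcome by
# risk ratios of 7.26 each could explain away the association.
print(e_value(3.9))
```

An analogous calculation applied to the limit of the confidence interval closer to the null gives the E-value for the confidence interval.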

Alternatively, you can consider the confounding strength capable of moving the observed association to any other value (e.g. attenuating the observed association to a true causal effect that is no longer scientifically important, or alternatively increasing a near-null observed association to a value that is of scientific importance). For this purpose, simply type a non-null effect size into the box "True causal effect to which to shift estimate" when computing the E-value.
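The shift to a non-null true value can be sketched by applying the same formula to the ratio of the observed estimate to the hypothesized true effect (oriented to be at least 1); this mirrors, under that assumption, what the calculator does when you supply a non-null target:

```python
import math

def e_value_nonnull(rr_observed, rr_true):
    """E-value for moving an observed RR to a specified non-null true RR."""
    # Ratio of observed to hypothesized true effect, oriented to be >= 1.
    ratio = rr_observed / rr_true
    ratio = max(ratio, 1 / ratio)
    return ratio + math.sqrt(ratio * (ratio - 1))

# Example: shifting an observed RR of 2.0 to a true RR of 1.5
# requires confounding of strength 2.0 on both associations.
print(e_value_nonnull(2.0, 1.5))
```

With `rr_true = 1.0` this reduces to the ordinary (null) E-value.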

Additionally, if you have substantive knowledge on the strength of the relationships between the unmeasured confounder(s) and the exposure and outcome, you can use these numbers to calculate the bias factor.
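When such substantive knowledge is available, the joint bounding factor of Ding & VanderWeele (2016) gives the maximum amount of bias the hypothesized confounding could produce; a minimal sketch (function name illustrative):

```python
def bias_factor(rr_ud, rr_eu):
    """Joint bounding factor B from Ding & VanderWeele (2016).

    rr_ud: maximum risk ratio relating the unmeasured confounder to the outcome
    rr_eu: maximum risk ratio relating the exposure to the unmeasured confounder
    """
    # B = (RR_UD * RR_EU) / (RR_UD + RR_EU - 1); the true RR can be
    # attenuated from the observed RR by at most a factor of B.
    return (rr_ud * rr_eu) / (rr_ud + rr_eu - 1)

# Example: confounder-outcome and exposure-confounder risk ratios of 2
# give a bias factor of 4/3, so an observed RR of 2.0 could be shifted
# to at most 2.0 / (4/3) = 1.5 by such a confounder.
print(bias_factor(2.0, 2.0))
```

The E-value is the value at which both risk ratios, set equal, make the bias factor just large enough to explain away the observed association.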

Note: You are calculating a "non-null" E-value, i.e., an E-value for the minimum
amount of unmeasured confounding needed to move the estimate and confidence interval
to your specified true value rather than to the null value.


Note: Using the standard deviation of the outcome yields a conservative approximation
of the standardized mean difference. For a non-conservative estimate, you could instead use the estimated residual standard deviation from your linear
regression model. Regardless, the reported E-value for the confidence interval treats the
standard deviation as known, not estimated.
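
For standardized mean differences, the approximate conversion RR ≈ exp(0.91 × d) (VanderWeele & Ding, 2017) is applied before the E-value formula; a sketch under that assumption (function name illustrative):

```python
import math

def e_value_from_smd(d):
    """Approximate E-value for a standardized mean difference."""
    # Approximate conversion from SMD to risk ratio: RR ~= exp(0.91 * d).
    rr = math.exp(0.91 * abs(d))
    return rr + math.sqrt(rr * (rr - 1))

# Example: a standardized mean difference of 0.5 corresponds to
# an approximate RR of about 1.58 and an E-value of about 2.53.
print(e_value_from_smd(0.5))
```

Using the outcome standard deviation in the denominator of d makes this conservative, as noted above; using the residual standard deviation instead removes that conservatism.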

In addition to using this website, you can alternatively compute E-values (VanderWeele & Ding, 2017) using the R package EValue (Mathur et al., 2018) or the Stata module EVALUE (Linden et al., 2020).

For more information on the interpretation of the E-value and further technical details, see Ding & VanderWeele (2016), Haneuse et al. (2019), VanderWeele et al. (2019a), and VanderWeele et al. (2019b).

Methods and tools are also available to conduct analogous sensitivity analyses for other types of biases, including:

- Selection bias (Smith & VanderWeele, 2019a; website or R package EValue)
- Measurement error (VanderWeele & Li, 2019; R package EValue)
- A combination of unmeasured confounding, selection bias, and measurement error simultaneously (Smith et al, 2020; R package EValue)

Finally, similar approaches are also available to assess biases in meta-analyses including:

- Unmeasured confounding in meta-analyses (Mathur & VanderWeele, 2020a; website or R package EValue)
- Publication bias in meta-analyses (Mathur & VanderWeele, 2020b; R package PublicationBias)

This website was created by Maya Mathur, Peng Ding, Corinne Riddell, Louisa Smith, and Tyler VanderWeele.

- Ding P & VanderWeele TJ (2016). Sensitivity analysis without assumptions. *Epidemiology*, 27(3), 368-377.
- Haneuse S, VanderWeele TJ, & Arterburn D (2019). Using the E-value to assess the potential effect of unmeasured confounding in observational studies. *Journal of the American Medical Association*, 321(6), 602-603.
- Linden A, Mathur MB, & VanderWeele TJ (2020). Conducting sensitivity analysis for unmeasured confounding in observational studies using E-values: The evalue package. *The Stata Journal*, in press.
- Mathur MB, Ding P, Riddell CA, & VanderWeele TJ (2018). Website and R package for computing E-values. *Epidemiology*, 29(5), e45.
- Mathur MB & VanderWeele TJ (2020a). Sensitivity analysis for unmeasured confounding in meta-analyses. *Journal of the American Statistical Association*, 115(529), 163-170.
- Mathur MB & VanderWeele TJ (2020b). Sensitivity analysis for publication bias in meta-analyses. *Journal of the Royal Statistical Society: Series C*, in press.
- Smith LH & VanderWeele TJ (2019a). Bounding bias due to selection. *Epidemiology*, 30(4), 509.
- Smith LH & VanderWeele TJ (2019b). Mediational E-values: Approximate sensitivity analysis for mediator-outcome confounding. *Epidemiology*, 30(6), 835-837.
- VanderWeele TJ & Ding P (2017). Sensitivity analysis in observational research: Introducing the E-value. *Annals of Internal Medicine*, 167(4), 268-274.
- VanderWeele TJ, Ding P, & Mathur MB (2019a). Technical considerations in the use of the E-value. *Journal of Causal Inference*, 7(2).
- VanderWeele TJ, Mathur MB, & Ding P (2019b). Correcting misinterpretations of the E-value. *Annals of Internal Medicine*, 170(2), 131-132.
- VanderWeele TJ & Li Y (2019). Simple sensitivity analysis for differential measurement error. *American Journal of Epidemiology*, 188(10), 1823-1829.