Relative risk
In [[statistics]] and mathematical [[epidemiology]], '''relative risk''' ('''RR''') is the risk of an event (or of developing a disease) relative to exposure. It is the [[ratio]] of the [[probability]] of the event occurring in the exposed group to the probability of the event occurring in the non-exposed group.
:<math>RR= \frac {p_\mathrm{exposed}}{p_\mathrm{non-exposed}} </math>
For example, if the [[probability]] of developing lung cancer among smokers was 20% and among non-smokers 1%, then the relative risk of cancer associated with smoking would be 20. Smokers would be twenty times as likely as non-smokers to develop lung cancer.
Another term for the relative risk is the '''risk ratio''', because it is the ratio of the risk in the exposed group to the risk in the unexposed group.
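The calculation is straightforward; the following minimal sketch (with made-up counts, purely for illustration) computes the relative risk from a 2×2 table of event counts and group sizes:
<syntaxhighlight lang="python">
# Illustrative sketch: relative risk from a hypothetical 2x2 table.
# The counts below are invented for demonstration only.
events_exposed, n_exposed = 30, 1000        # events and total in the exposed group
events_unexposed, n_unexposed = 10, 1000    # events and total in the unexposed group

p_exposed = events_exposed / n_exposed          # risk in the exposed group
p_unexposed = events_unexposed / n_unexposed    # risk in the unexposed group

relative_risk = p_exposed / p_unexposed
print(relative_risk)  # 3.0: the event is three times as likely in the exposed group
</syntaxhighlight>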
== Statistical use and meaning ==
Relative risk is used frequently in the statistical analysis of binary outcomes where the outcome of interest has relatively low probability. It is thus well suited to [[clinical trial]] data, where it is used to compare the risk of developing a disease in people not receiving the new medical treatment (or receiving a placebo) with the risk in people receiving an established (standard of care) treatment. Alternatively, it is used to compare the risk of developing a side effect in people receiving a drug with the risk in people not receiving the treatment (or receiving a placebo). It is particularly attractive because it can be calculated by hand in the simple case, but is also amenable to [[regression analysis|regression modelling]], typically in a [[Poisson regression]] framework.
In a simple comparison between an experimental group and a control group:
*A relative risk of 1 means there is no difference in risk between the two groups.
*An RR of < 1 means the event is less likely to occur in the experimental group than in the control group.
*An RR of > 1 means the event is more likely to occur in the experimental group than in the control group.
As a consequence of the [[Delta method#example|Delta method]], the [[Logarithm|log]] of the relative risk has a sampling distribution that is approximately [[normal distribution|normal]], with a variance that can be estimated from the number of subjects and the event rate in each group.<ref>See e.g. Stata FAQ on CIs for odds ratios, hazard ratios, IRRs and RRRs at http://www.stata.com/support/faqs/stat/2deltameth.html</ref> This permits the construction of a [[confidence interval]] (CI) which is symmetric around <math>\log(RR)</math>, i.e.
:<math>CI = \log(RR)\pm \mathrm{SE}\times z_\alpha</math>
where <math>z_\alpha</math> is the [[standard score]] for the chosen level of [[statistical significance|significance]] and SE is the [[Standard error (statistics)|standard error]] of <math>\log(RR)</math>. Taking the [[antilog]] of the two bounds of this log-scale interval gives the lower and upper bounds of a confidence interval for the relative risk itself, which is asymmetric around the point estimate.
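As a rough illustrative sketch (the counts are invented, and the large-sample standard-error formula for <math>\log(RR)</math> used below is the usual 2×2-table estimate rather than one stated in the text above), the interval can be computed as follows:
<syntaxhighlight lang="python">
# Sketch: 95% confidence interval for a relative risk via the delta method.
# Uses the usual large-sample standard error of log(RR) for a 2x2 table:
#   SE = sqrt(1/a - 1/n1 + 1/b - 1/n2)
# where a, b are event counts and n1, n2 the group sizes (illustrative values).
import math

a, n1 = 30, 1000    # events / total in the exposed group
b, n2 = 10, 1000    # events / total in the unexposed group

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
z = 1.96            # standard score for a 95% interval

log_lo = math.log(rr) - z * se_log_rr
log_hi = math.log(rr) + z * se_log_rr

# The antilog of the symmetric log-scale bounds gives an asymmetric CI for RR.
print(rr, math.exp(log_lo), math.exp(log_hi))
</syntaxhighlight>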
In regression models, the treatment is typically included as a [[dummy variable]] along with other factors that may affect risk. The relative risk is normally reported as calculated for the [[mean]] of the sample values of the explanatory variables.
=== Association with odds ratio ===
Relative risk is different from the [[odds ratio]], although it asymptotically approaches the odds ratio for small probabilities. In fact, the odds ratio has much wider use in statistics, since [[logistic regression]], often associated with [[clinical trial]]s, works with the log of the odds ratio, not the relative risk. Because the log of the odds ratio is estimated as a linear function of the explanatory variables, the estimated odds ratio associated with type of treatment would be the same for 70-year-olds and 60-year-olds in a logistic regression model where the outcome is modelled in terms of drug and age, even though the relative risk might be substantially different. In cases like this, statistical models of the odds ratio often reflect the underlying mechanisms more effectively.
Since relative risk is a more intuitive measure of effectiveness, the distinction is important especially in cases of medium to high probabilities. If action A carries a risk of 99.9% and action B a risk of 99.0% then the relative risk is just over 1, while the odds associated with action A are almost 10 times higher than the odds with B.
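Spelling out the arithmetic of this example:
:<math>RR = \frac{0.999}{0.990} \approx 1.009, \qquad OR = \frac{0.999/0.001}{0.990/0.010} = \frac{999}{99} \approx 10.1</math>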
In medical research, the [[odds ratio]] is favoured for [[case-control study|case-control studies]] and [[retrospective study|retrospective studies]]. Relative risk is used in [[randomized controlled trial]]s and [[cohort study|cohort studies]].<ref>Medical University of South Carolina. [http://www.musc.edu/dc/icrebm/oddsratio.html Odds ratio versus relative risk]. Accessed on: [[September 8]], [[2005]].</ref>
In statistical modelling, approaches like [[Poisson regression]] (for counts of events per unit exposure) have relative risk interpretations: the estimated effect of an explanatory variable is multiplicative on the rate, and thus leads to a risk ratio or relative risk. [[Logistic regression]] (for binary outcomes, or counts of successes out of a number of trials) must be interpreted in odds-ratio terms: the effect of an explanatory variable is multiplicative on the odds, and thus leads to an odds ratio.
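A minimal sketch of this distinction (not drawn from the sources above; it assumes the Python packages <code>numpy</code> and <code>statsmodels</code> and uses simulated data): fitting the same binary exposure with a log-link Poisson model and with a logistic model, the exponentiated exposure coefficient estimates a risk ratio in the first case and an odds ratio in the second.
<syntaxhighlight lang="python">
# Illustrative sketch (simulated data): one binary exposure, two models.
# True values here: risk ratio = 0.6/0.3 = 2, odds ratio = (0.6/0.4)/(0.3/0.7) = 3.5.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
exposed = rng.integers(0, 2, n)               # binary exposure indicator
risk = np.where(exposed == 1, 0.6, 0.3)       # true risk in each group
y = rng.binomial(1, risk)                     # binary outcome
X = sm.add_constant(exposed)

# Log-link Poisson model: exp(coefficient) estimates the rate/risk ratio.
poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("estimated risk ratio:", np.exp(poisson_fit.params[1]))   # approximately 2

# Logistic model: exp(coefficient) estimates the odds ratio instead.
logistic_fit = sm.GLM(y, X, family=sm.families.Binomial()).fit()
print("estimated odds ratio:", np.exp(logistic_fit.params[1]))  # approximately 3.5
</syntaxhighlight>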
==Size of relative risk and relevance==
In the standard or classical hypothesis-testing framework, the null hypothesis is that RR = 1 (the putative risk factor has no effect). The null hypothesis can be rejected in favor of the alternative hypothesis that the factor in question does affect risk if the confidence interval for RR excludes 1.
Critics of the standard approach, notably including [[John Brignell]] and [[Steven Milloy]], believe published studies suffer from unduly high [[type I error]] rates, and have argued for an additional requirement that the point estimate of RR should exceed 2 [http://www.numberwatch.co.uk/RR.htm] [http://www.numberwatch.co.uk/2005%20November.htm#RR] [http://www.junkscience.com/news/sws/sws-chapter2.html] (or, if risks are reduced, be below 0.5) and have cited a variety of statements by statisticians and others supporting this view. The issue has arisen particularly in relation to debates about the effects of [[passive smoking]], where the effect size appears to be small (relative to smoking), and exposure levels are difficult to quantify in the affected population.
In support of this claim, it may be observed that, if the base level of risk is low, a small proportionate increase in risk may be of little practical significance. (In the case of lung cancer, however, the base risk is substantial).
In addition, if estimates are biased by the exclusion of relevant factors, the likelihood of a spurious finding of significance is greater if the estimated RR is close to 1. In his paper "Why Most Published Research Findings Are False" John Ioannidis writes,<ref>{{Cite journal
| author = [[John P. A. Ioannidis]]
| title = Why Most Published Research Findings Are False
| url = http://medicine.plosjournals.org/perlserv/?request=get-document&doi=10.1371%2Fjournal.pmed.0020124
| journal = [[PLoS Medicine]]
| year = 2005
| volume = 2
| issue = 8
| pages = e124
| doi = 10.1371/journal.pmed.0020124
}}</ref> "The smaller the effect sizes in a scientific field, the less likely the research findings are to be true. [...] research findings are more likely true in scientific fields with [...] relative risks 3–20 [...], than in scientific fields where postulated effects are small [...] (relative risks 1.1–1.5)." "if the majority of true genetic or nutritional determinants of complex diseases confer relative risks less than 1.05, genetic or nutritional epidemiology would be largely utopian endeavors."
In assessing results claiming an increase of relative risk arising from exposure to a hazard, statisticians and epidemiologists consider a range of factors including the size of the effect, the level of statistical significance, whether the results arise from a clinical trial or observation of a population, the significance of possible confounding factors, the extent to which results have been replicated, and the presence or absence of a biomedical model for the claimed effect. Important confounding factors for observational studies of health risks include [[tobacco smoking]] and social class.
While few statisticians accept the general claim that a relative risk level greater than 2 is required before a finding of increased risk can be accepted, most agree with this view in relation to findings from single studies without biomedical support. Marcia Angell of the ''New England Journal of Medicine'' has stated:
:As a general rule of thumb we are looking for a relative risk of three or more [before accepting a paper for publication], particularly if it is biologically implausible or if it's a brand-new finding. [http://www.nasw.org/awards/1996/96Taubesarticle.htm]
The arguments of Milloy, Brignell and others, put forward in relation to passive smoking, have been criticised by epidemiologists. Their approach to epidemiology, involving efforts to discredit individual studies rather than addressing the evidence as a whole, was described in the ''[[American Journal of Public Health]]'':
<blockquote>A major component of the industry attack was the mounting of a campaign to establish a "bar" for "sound science" that could not be fully met by most individual investigations, leaving studies that did not meet the criteria to be dismissed as "junk science." The campaign also included attempts to characterize relative risks of 2 or less as highly questionable and not amenable to investigation by epidemiologic methods.<ref>{{cite journal
| author= [[Jonathan M. Samet]] & [[Thomas A. Burke]]
| title = Turning science into junk: the tobacco industry and passive smoking
 | journal = [[American Journal of Public Health]]
 | volume=91
 | issue=11
 | pages=1742–1744
 | year = 2001
| month = November
| pmid=11684591
| url = http://www.ajph.org/cgi/content/full/91/11/1742
}}</ref></blockquote>
These efforts were largely abandoned by the tobacco industry when it became clear that no independent epidemiological organization would agree to the standards proposed by Philip Morris et al.<ref>{{cite journal |author=Ong EK, Glantz SA |title=Constructing "Sound Science" and "Good Epidemiology": Tobacco, Lawyers, and Public Relations Firms |journal=American Journal of Public Health |volume=91 |issue=11 |pages=1749–57 |year=2001 |pmid=11684593 |doi=}}</ref>
===Statistical significance (confidence) and relative risk===
Whether a given relative risk can be considered [[statistical significance|statistically significant]] depends on the relative difference between the conditions compared, the number of measurements made (the sample size) and the noise associated with the measurement of the events considered. In other words, the confidence one has that a given relative risk is non-random (i.e. not a consequence of [[chance]]) depends on the [[signal-to-noise ratio]] and the sample size.
Expressed mathematically, the confidence that a result is not by random chance is given by the following formula by [[David Sackett|Sackett]]<ref>Sackett DL. Why randomized controlled trials fail but needn't: 2. Failure to employ physiological statistics, or the only formula a clinician-trialist is ever likely to need (or understand!). CMAJ. 2001 Oct 30;165(9):1226-37. PMID 11706914. [http://www.cmaj.ca/cgi/content/full/165/9/1226 Free Full Text].</ref>:
:<math>\mathrm{confidence} = \frac{\mathrm{signal}}{\mathrm{noise}} \times \sqrt{\mathrm{sample\ size}}</math>
For clarity, the above formula is presented in tabular form below.
'''Dependence of confidence with noise, signal and sample size (tabular form)'''
{| border="1" cellpadding="2"
!Parameter
!Parameter increases
!Parameter decreases
|-
|Noise
|Confidence decreases
|Confidence increases
|-
|Signal
|Confidence increases
|Confidence decreases
|-
|Sample size
|Confidence increases
|Confidence decreases
|}
In words, the confidence is higher if the noise is lower and/or the sample size is larger and/or the effect size (signal) is larger. The confidence in a relative risk value (and its associated confidence interval) is ''not'' dependent on effect size alone: if the sample size is large and the noise is low, even a small effect size can be measured with great confidence. Whether a small effect size is considered important depends on the context of the events compared.
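A quick numerical sketch of Sackett's heuristic (with arbitrary illustrative numbers): doubling the signal, halving the noise, or quadrupling the sample size each doubles the confidence figure.
<syntaxhighlight lang="python">
# Sketch of Sackett's heuristic: confidence = (signal / noise) * sqrt(sample size).
# The numbers are arbitrary and only illustrate how the three inputs trade off.
import math

def confidence(signal, noise, sample_size):
    return (signal / noise) * math.sqrt(sample_size)

baseline = confidence(signal=1.0, noise=2.0, sample_size=100)   # 5.0
print(confidence(2.0, 2.0, 100) / baseline)   # 2.0: doubling the signal doubles confidence
print(confidence(1.0, 1.0, 100) / baseline)   # 2.0: halving the noise doubles confidence
print(confidence(1.0, 2.0, 400) / baseline)   # 2.0: quadrupling the sample size doubles confidence
</syntaxhighlight>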
In medicine, small effect sizes (reflected by small relative risk values) are usually considered clinically relevant (if there is great confidence in them) and are frequently used to guide treatment decisions. A relative risk of 1.10 may seem very small, but over a large number of patients it will make a noticeable difference. Whether a given treatment is considered a worthy endeavour depends on its risks, benefits and costs.
==See also==
*[[Confidence interval]]
*[[Odds ratio]]
*[[Hazard ratio]]
*[[Number needed to treat]] (NNT)
*[[Number needed to harm]] (NNH)
*[[OpenEpi]]
*[[Epi Info]]
==References==
{{reflist}}
==External links==
*[http://www.cebm.utoronto.ca/glossary/ EBM glossary]
*[http://www.childrens-mercy.org/stats/journal/oddsratio.asp Odds ratio versus relative risk]
*[http://www.musc.edu/dc/icrebm/oddsratio.html Odds Ratio vs. Relative Risk] Medical University of South Carolina
*[http://www.medcalc.be/calc/relative_risk.php Relative risk online calculator]
[[Category:Epidemiology]]
[[Category:Biostatistics]]
[[de:Relatives Risiko]]
[[pl:Ryzyko względne]]
[[zh:相对危险度]]