October 1, 2010. Journal of Athletic Training

Sport: General sports topics

Category: Sport science

Author: R. Dawn Comstock

Authors' Reply

We read with interest the commentary provided by Drs Knowles, Kucera, and Marshall regarding the most appropriate use and interpretation of the injury proportion ratio (IPR). We appreciate the opportunity to enter into a methodologic debate over analytical techniques in the pages of a clinical journal, and we thank the Journal of Athletic Training (JAT) for allowing this epidemiologic sidebar. Knowles et al raise a concern regarding the possible misinterpretation of the IPR and suggest recommending against its use (without specifically stating same). This is reminiscent of the now-classic, and somewhat humorous, P value versus confidence interval debate of the mid 1990s.1-6 Although we agree with Knowles et al that the IPR is not the same as the injury rate ratio (IRR)—just as the P value is not the same as the confidence interval— we disagree with their assertion that the IPR is inferior to the IRR and should be interpreted with more caution. Just as history has borne out the fact that neither the P value nor the confidence interval is inferior to the other, IPRs and IRRs are simply different. Because Knowles et al appear to call for a restriction on the analytical tools available to researchers, and due to our confidence in the value and important contributions of both the IRR and IPR, the similarities to the prior debate prompted us to reply in the form of an homage. Below we liberally quote from that historical debate, with our changes reflecting the current topic presented in square brackets.

First, in response to the assertion by Knowles et al that the IPR is not the same as the IRR, ''[Knowles et al] stress an important point that we emphasized as well.''3 In this manuscript as well as our other manuscripts cited by Knowles et al, we used IPRs as well as IRRs precisely because the 2 provide clinicians, researchers, and policy makers with different methods of evaluating the burden of injury in populations of interest. Incidence provides one measure of burden: How many athletes presented with the injury of interest? The IRR provides another measure of burden: Which of 2 subgroups of athletes had a higher rate of the injury of interest (ie, a higher incidence of injury per unit of exposure)? The IPR provides yet another measure of burden: In which of 2 subgroups of athletes does the injury of interest represent a greater percentage of all injuries (ie, account for a higher proportion of the total number of injuries)? Each measure provides valuable, albeit different, information. ''Clearly the 2 views are compatible.''3
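Written out symbolically (the notation below, with subscripts 1 and 2 for the two subgroups being compared, is ours and simply restates the verbal definitions above):

$$
\text{rate}_i=\frac{\text{injuries of interest}_i}{\text{athlete-exposures}_i},\qquad
\text{IRR}=\frac{\text{rate}_1}{\text{rate}_2},\qquad
\text{IPR}=\frac{\text{injuries of interest}_1/\text{all injuries}_1}{\text{injuries of interest}_2/\text{all injuries}_2}.
$$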

We use data from a prior JAT publication to illustrate this point. In our published comparison7 of concussions among high school and collegiate athletes, we reported that the incidences were fairly similar, with 396 concussions reported to the high school surveillance system by 100 high schools and 482 concussions reported to the collegiate surveillance system by 180 collegiate institutions during the study period. The rate of concussion was significantly higher in collegiate athletes (0.43 per 1000 athlete-exposures) than high school athletes (0.23 per 1000 athlete-exposures) (IRR = 1.86, 95% confidence interval = 1.63, 2.12). Concussions represented a higher proportion of all injuries among high school athletes (8.9% of all injuries) than collegiate athletes (5.8% of all injuries) (IPR = 1.53, 95% confidence interval = 1.35, 1.74). So, in which subgroup of athletes was concussion a greater burden? If we were limited to using IRR as our only analytic tool, we would conclude that concussion posed a much greater burden to collegiate athletes. This is not an incorrect conclusion because collegiate athletes were at higher risk of concussion than high school athletes, as evidenced by the IRR. However, far more high school students played sports than college students did; therefore, if our goal was to determine the best allocation of clinical resources, incidence establishes that the burden of concussions was far greater among high school athletes. Similarly, if the goal was to drive injury-prevention efforts, we should conclude the burden of injury was higher in high school athletes; a successful concussion intervention in that population would have a greater effect on their health status because concussions represented a greater proportion of their injuries. All 3 measures, in conjunction, provide the most complete picture of the burden of concussion in these 2 populations of athletes. Thus, ''[We] don't see a [relative rate] versus [IPR] issue here; they both can play a role. The real question, as usual, is what is the most important biological question, and how do we efficiently, but not simplistically, summarize the data to address it?''5 Stated even more succinctly, ''More than one type of analysis may be required to obtain a full perspective on the data.''6
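As an illustrative sketch only, the figures quoted above are enough to approximately reproduce both ratios and their confidence intervals. The athlete-exposure and total-injury denominators below are back-calculated from the published rates and percentages rather than taken from the exact counts in Gessel et al,7 and the large-sample, log-scale confidence intervals are a standard construction that we assume here, not necessarily the exact method used in that paper.

```python
import math

def ratio_with_ci(events_1, total_1, events_2, total_2, rate=False, z=1.96):
    """Ratio of two proportions (or rates) with a large-sample CI on the log scale."""
    p1, p2 = events_1 / total_1, events_2 / total_2
    ratio = p1 / p2
    if rate:
        # Poisson-type standard error of log(rate ratio)
        se = math.sqrt(1 / events_1 + 1 / events_2)
    else:
        # Binomial-type standard error of log(proportion ratio)
        se = math.sqrt((1 - p1) / events_1 + (1 - p2) / events_2)
    lo, hi = (math.exp(math.log(ratio) + s * z * se) for s in (-1, 1))
    return ratio, lo, hi

# Concussion counts quoted in the text; denominators are approximations
# back-calculated from the reported rates (0.43 and 0.23 per 1000
# athlete-exposures) and injury proportions (5.8% and 8.9%).
college_ae  = 482 / 0.00043   # ~1.12 million collegiate athlete-exposures
hs_ae       = 396 / 0.00023   # ~1.72 million high school athlete-exposures
college_inj = 482 / 0.058     # ~8300 total collegiate injuries
hs_inj      = 396 / 0.089     # ~4450 total high school injuries

# IRR, collegiate vs high school rate: roughly 1.87 (1.64, 2.14); the
# published 1.86 (1.63, 2.12) differs only because the quoted rates are rounded.
print(ratio_with_ci(482, college_ae, 396, hs_ae, rate=True))

# IPR, high school vs collegiate proportion: roughly 1.53 (1.35, 1.74),
# matching the published value.
print(ratio_with_ci(396, hs_inj, 482, college_inj))
```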

Second, in response to the assertion of Knowles et al that the IPR can easily be misinterpreted, ''The problem of unthinking interpretation of epidemiologic analyses is not solved by the use of [rate ratios] since no method of presentation can force a reader or researcher to become a thoughtful interpreter of data. We should, however, at least provide data summarized in the appropriate form for readers who are thoughtful and, through that presentation, help to educate students of the discipline.''5 Unfortunately, the assertion by Knowles et al that the IPR is analogous to the proportional mortality ratio (PMR) is a misinterpretation of the IPR. Whereas the PMR compares the observed proportion of an outcome of interest in one subgroup with the expected proportion of that outcome in a comparison population, the IPR compares the observed proportion of an outcome of interest in one subgroup with the observed proportion of that outcome in a second subgroup. This is a slight but important distinction. Similarly, they claim that because the IPR is not based on incidence rates and because the sum of all proportionate causes must equal 1, the IPR inherently has more limited ''validity and generalizability'' than the IRR. This statement is erroneous. Validity (simplistically defined as a lack of systematic error) is challenged by factors such as bias and confounding, which equally affect the IRR and IPR. Generalizability (the ability to abstract universal statements from observations) similarly shows no preference for either analytic technique. Additionally, ''Ironically, the example presented by [Knowles et al to compare IPR with IRR] poses a different problem for [us] than the one they see.''5 In this example, their conclusions are based on several assumptions (eg, female participation over time, playing time by sex). We simply would never be comfortable basing scientific analyses upon such assumptions: ''Some epidemiologists may find a certain value in such computations in preliminary stages of analysis ... A thorough analysis would never stop there, however.''3 We feel that calculating IPRs using observed data is preferable to calculating IRRs using hypothetical data. Thus, we reported IPRs alone in this manuscript, rather than IRRs and IPRs, precisely because no reliable exposure data were available to enable accurate calculation of IRRs. However, reliable data were available to enable calculation of IPRs. We feel strongly that our manuscript, devoid as it is of IRRs, still makes an important contribution to the body of scientific knowledge regarding ice hockey injuries and that the presented IPRs provide insight into clinically important subgroup differences in patterns of ice hockey-related injuries. Again, we agree that IPRs and IRRs are different, and we assert that both should be interpreted with caution.
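In symbols (again our notation, paraphrasing the verbal definitions just given), the distinction is:

$$
\text{PMR}=\frac{\text{observed proportion of outcome } X \text{ in the subgroup}}{\text{expected proportion of outcome } X \text{ in a comparison population}},\qquad
\text{IPR}=\frac{\text{observed proportion of outcome } X \text{ in subgroup } 1}{\text{observed proportion of outcome } X \text{ in subgroup } 2}.
$$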

Finally, ''[we] doubt that there are any serious differences between [Knowles et al] and [us] in our philosophies of data analysis.''6 We conclude that both IRRs and IPRs can provide valuable information, and ''we [project] that both trends will continue.''3 Although we applaud the efforts of Marshall and colleagues8-13 to educate sports injury researchers and readers on methodologic approaches, we are concerned that some readers and editors may misinterpret educational guidance as methodologic dictums. Researchers must be allowed to use the analytical techniques they feel are most appropriate given their research question and the available data, whereas researchers and readers alike must be responsible for understanding the context of and correctly interpreting reported results. In conclusion, we believe ''[o]ur efforts are best spent discussing how to teach researchers to knit together biologic questions with quantitative evidence and to report that synthesis effectively. [We] think that debates about the strengths and weaknesses of various [analytical] summaries are valuable only insofar as they directly bear on that issue.''5

R. Dawn Comstock, PhD
Center for Injury Research and Policy
The Research Institute at Nationwide Children's Hospital
Ohio State University Colleges of Medicine and Public Health
700 Children's Drive, Columbus, OH 43205
e-mail: Dawn.Comstock@nationwidechildrens.org

Sarah K. Fields, JD, PhD
School of Physical Activity and Educational Services
A268 PAES Building, 305 West 17th Avenue, Columbus, OH 43210-1224
e-mail: fields.214@osu.edu


1. Savitz DA, Tolo KA, Poole C. Statistical significance testing in the American Journal of Epidemiology, 1970-1990 [letter to the editor]. Am J Epidemiol. 1994;139(10):1047-1052.

2. Witte JS, Thomas DC, Langholz B. Re: statistical significance testing in the American Journal of Epidemiology, 1970-1990 [letter to the editor]. Am J Epidemiol. 1995;142(1):101.

3. Poole C, Savitz DA. Two authors reply [letter to the editor]. Am J Epidemiol. 1995;142(1):102.

4. Greenland S. Dr. Greenland replies [letter to the editor]. Am J Epidemiol. 1995;142(1):102-103.

5. Goodman SN. Dr. Goodman replies [letter to the editor]. Am J Epidemiol. 1995;142(1):103.

6. Walter SD. Methods of reporting statistical results from medical research studies. Am J Epidemiol. 1995;141(10):896-906.

7. Gessel LM, Fields SK, Collins CL, Dick RW, Comstock RD. Concussions among United States high school and collegiate athletes. J Athl Train. 2007;42(4):495-503.

8. Marshall SW. Testing with confidence: the use (and misuse) of confidence intervals in biomedical research. J Sci Med Sport. 2004;7(2):135-137.

9. Knowles SB, Marshall SW, Guskiewicz KM. Issues in estimating risks and rates in sports injury research. J Athl Train. 2006;41(2):207-215.

10. Hopkins WG, Marshall SW, Quarrie KL, Hume PA. Risk factors and risk statistics for sports injuries. Clin J Sport Med. 2007;17(3):208-210.

11. Marshall SW. Power for tests of interaction: effect of raising the type I error rate. Epidemiol Perspect Innov. 2007;19(4):4.

12. Marshall SW. Injury case-control studies using ''other injuries'' as controls. Epidemiology. 2008;19(2):270-276.

13. Hopkins WG, Marshall SW, Batterham AM, Hanin J. Progressive statistics for studies in sports medicine and exercise science. Med Sci Sports Exerc. 2009;41(1):3-13.
