When failing to demonstrate statistically significant differences between treatment groups, investigators may resort to breaking the analysis down into smaller and smaller subgroups in order to find a difference.
While access to computer-based statistical packages can facilitate the application of increasingly complex analytic procedures, inappropriate use of these packages can result in abuses as well.
Every field of study has developed its accepted practices for data analysis.
Running multiple statistical tests makes it increasingly likely that at least one will yield a significant finding by chance alone rather than reflecting a true effect.
Integrity is compromised if the investigator reports only the tests that reached significance and neglects to mention the large number of tests that did not.
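This multiple-testing problem can be quantified. If each test is run at a significance threshold of α = 0.05 and every null hypothesis is in fact true, the probability of at least one spurious "significant" result grows quickly with the number of independent tests. A minimal sketch (the test counts below are illustrative, not drawn from any particular study):

```python
# Family-wise error rate (FWER): probability of at least one false
# positive among k independent tests, each run at significance level
# alpha, when every null hypothesis is actually true.
def family_wise_error_rate(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 20):
    print(f"{k:2d} tests -> P(at least one false positive) = "
          f"{family_wise_error_rate(k):.2f}")
```

With 20 subgroup tests at α = 0.05, there is roughly a 64% chance of at least one false positive, which is why reporting only the tests that "worked" is so misleading.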
Any bias occurring in the collection of the data, or in the selection of the method of analysis, will increase the likelihood of drawing a biased inference.
Bias can occur when recruitment of study participants falls below the minimum number required to achieve adequate statistical power, or when the follow-up period is too short to demonstrate an effect (Altman, 2001).
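The power consideration can be made concrete with a standard sample-size calculation for comparing two group means. The sketch below uses the common normal-approximation formula; the effect size of 0.5 and the 80% power target are illustrative assumptions, not values from the text:

```python
import math
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means (normal approximation):
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d) ** 2
    where d is the standardized effect size."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a medium effect (d = 0.5) with 80% power at alpha = 0.05:
print(n_per_group(0.5))  # -> 63 participants per group
```

A study that recruits well below this number is underpowered: a true effect may go undetected, and any "significant" findings that do emerge are less likely to be reliable.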
Ideally, investigators should have substantially more than a basic understanding of the rationale for selecting one method of analysis over another.
This allows investigators to better supervise the staff who conduct the data analysis and to make informed decisions. While methods of analysis may differ by scientific discipline, the optimal stage for determining appropriate analytic procedures is early in the research process; they should not be an afterthought.
According to Smeeton and Goda (2003), “Statistical advice should be obtained at the stage of initial planning of an investigation so that, for example, the method of sampling and design of questionnaire are appropriate”.
The chief aim of analysis is to distinguish whether an observed event reflects a true effect or a false one.