What Are Nonsignificant Findings?
Nonsignificant findings are those that do not meet the criteria the researcher set for significant results. Statistics traditionally used for hypothesis testing determine whether any difference seen is greater than might be seen by chance alone (Polit & Beck, 2014). In the case of nonsignificant findings, no apparent difference exists between the two groups studied in experimental research; in the case of observational research, no apparent relationship was found between two or more variables. Usually the significance level is set at p<0.05, or p<0.01 in selected situations, based on how much a researcher is willing to risk being wrong. Importantly, lack of statistical significance does not mean a finding lacks practical or clinical importance. Additional indicators, such as effect size or the magnitude of the change, can be useful (Sainani, 2012). Researchers should determine what would be an important clinical effect before conducting a study.
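The gap between statistical and clinical significance can be illustrated with a small sketch. The numbers below are hypothetical pain scores on a 0-10 scale, invented for illustration and not drawn from any cited study; with large groups, even a trivial 0.3-point difference reaches p<0.05 while the effect size (Cohen's d) remains small:

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    # Effect size: standardized mean difference using the pooled SD
    # (assumes roughly equal variances in the two groups)
    pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled

def two_sample_p(mean1, mean2, sd1, sd2, n1, n2):
    # Two-sided p-value from a large-sample z-test
    se = math.sqrt(sd1**2 / n1 + sd2**2 / n2)
    z = (mean1 - mean2) / se
    # Normal CDF via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical example: a 0.3-point difference, 400 patients per group
d = cohens_d(5.3, 5.0, 2.0, 2.0, 400, 400)
p = two_sample_p(5.3, 5.0, 2.0, 2.0, 400, 400)
print(f"p = {p:.3f}, Cohen's d = {d:.2f}")  # statistically significant, small effect
```

Here the test is "significant" (p is about .03), yet d is only 0.15, well below the conventional 0.2 threshold for even a small effect, which is why deciding in advance what change would matter clinically is so important.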
Hypothesis testing is not the only way to determine support or lack of support for an intervention. The use of confidence intervals also has been advocated (Clarke, 2012). A confidence interval reports a point estimate (such as the sample mean) together with a range of values, typically the 95% confidence interval, that is likely to contain the true population value. Some experts suggest authors avoid the terms statistically significant and not significant, and instead report p values, point estimates, and effect sizes. Authors are encouraged to discuss confidence in the evidence rather than significance (Cumming, 2013).
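As a sketch of this idea, the following computes a point estimate and an approximate large-sample 95% confidence interval for a mean, using only the Python standard library. The data are invented for illustration; for a sample this small, a t-based multiplier would be slightly more accurate than the 1.96 used here:

```python
import math
import statistics

def mean_ci_95(data):
    # Point estimate plus a large-sample 95% CI: mean +/- 1.96 * standard error
    m = statistics.mean(data)
    se = statistics.stdev(data) / math.sqrt(len(data))
    return m, (m - 1.96 * se, m + 1.96 * se)

# Hypothetical satisfaction scores (illustrative values only)
scores = [6.1, 5.8, 6.4, 5.9, 6.2, 6.0, 5.7, 6.3, 6.1, 5.9,
          6.0, 6.2, 5.8, 6.1, 6.3, 5.9, 6.0, 6.2, 6.1, 5.8]
point, (low, high) = mean_ci_95(scores)
print(f"mean = {point:.2f}, 95% CI = ({low:.2f}, {high:.2f})")
```

Reporting the interval rather than a bare "significant/not significant" verdict shows readers both the estimate and how precisely it was measured.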
What Can Account for Inconclusive Findings?
Obviously, researchers should consider that an intervention may not have the expected effect or that the hypothesized relationship between variables does not exist. On the other hand, lack of evidence of an effect does not mean there is no effect (Polit & Beck, 2014). It is impossible to distinguish a null effect from a very small effect. In addition, the null hypothesis could be false yet go unrejected (a Type II error) for a variety of reasons. Perhaps researchers conducted the study in a way that biased the results (Guyatt et al., 2011; Polit & Beck, 2014). For example, the sample could have been too small and therefore lacked the power to show a difference between groups or establish any relationship. The statistical tests could be weak or the measures used unreliable (Polit & Beck, 2014). The theory used to develop an intervention could be missing important concepts or variables.
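The role of sample size can be seen in a small Monte Carlo sketch (all numbers are illustrative assumptions, not from any cited study). With a real but modest effect, a small study detects it only a minority of the time, so a nonsignificant result may simply reflect low power:

```python
import math
import random

def simulated_power(true_diff, sd, n_per_group, trials=2000):
    # Monte Carlo estimate of power: the fraction of simulated studies whose
    # two-sided z-test (alpha = 0.05) detects a real difference of `true_diff`
    random.seed(42)  # fixed seed so the sketch is reproducible
    z_crit = 1.96
    hits = 0
    for _ in range(trials):
        g1 = [random.gauss(0.0, sd) for _ in range(n_per_group)]
        g2 = [random.gauss(true_diff, sd) for _ in range(n_per_group)]
        diff = sum(g2) / n_per_group - sum(g1) / n_per_group
        se = sd * math.sqrt(2 / n_per_group)
        if abs(diff) / se > z_crit:
            hits += 1
    return hits / trials

# A real, moderate effect (d = 0.5): power depends heavily on sample size
power_small = simulated_power(0.5, 1.0, 15)   # roughly 0.3
power_large = simulated_power(0.5, 1.0, 100)  # roughly 0.9
print(f"n=15 per group:  power = {power_small:.2f}")
print(f"n=100 per group: power = {power_large:.2f}")
```

In the small-sample case, most simulated studies of a genuinely effective intervention come back "nonsignificant," which is exactly the Type II error scenario described above.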
Reporting Nonsignificant Results
Researchers should be honest in reporting their findings, including any that are nonsignificant. In one study about wound care research, Lockyer, Hodgson, Dumville, and Cullum (2013) found 74% (32/43) of reviewed articles with no clear primary outcome included one statistically nonsignificant finding not reflected in the abstract. Authors of 71% (20/28) of the studies without statistical significance in the primary outcome used selective outcome reporting that could misrepresent results to readers. Readers should be careful to read the full report of a study, not just the abstract.
Researchers also should not recommend an intervention or practice that failed to demonstrate clinical significance. They should be particularly honest about the limitations of their study, and focus instead on what might be done differently in future research or whether replication of the study is warranted. The accumulation of evidence is the ultimate deciding factor for using research findings (Cumming, 2013). In this issue, Donaldson and co-authors (2017) and Eastwick and colleagues (2017) were admirably clear about the limitations of their studies.
Readers should not dismiss studies that report nonsignificant findings. These findings offer valuable information to the body of research on a subject. Reading a report of research critically and examining all findings are important for clinical practice.
Clarke, J. (2012). What is a CI? Evidence-Based Nursing, 15(3), 66.
Cumming, G. (2013). The new statistics: Why and how. Psychological Science, 25(1), 7-29.
Donaldson, J., Ingrago, C., Drake, D., & Ocampo, E. (2017). The effect of aromatherapy on anxiety experienced by hospital nurses. MEDSURG Nursing, 26(3), 201-206.
Eastwick, E., Leise, J., Sabo, J., Clute, L., & Stoj, P. (2017). The effect of gum chewing on bowel motility in postoperative elective colon resection patients. MEDSURG Nursing, 26(3), 185-189.
Guyatt, G.H., Oxman, A.D., Vist, G., Kunz, R., Brozek, J., Alonso-Coello, P., ... Schünemann, H.J. (2011). GRADE guidelines: 4. Rating the quality of evidence - study limitations (risk of bias). Journal of Clinical Epidemiology, 64(4), 407-415.
Lockyer, S., Hodgson, R., Dumville, J.C., & Cullum, N. (2013). "Spin" in wound care research: The reporting and interpretation of randomized controlled trials with statistically non-significant primary outcome results or unspecified primary outcomes. Trials, 14(1), 371.
Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research: Appraising evidence for nursing practice (8th ed.). Philadelphia, PA: Wolters Kluwer/Lippincott Williams & Wilkins.
Sainani, K.L. (2012). Clinical versus statistical significance. PM&R: Journal of Injury, Function & Rehabilitation, 4(6), 442-445.
Lynne M. Connelly, PhD, RN, is Associate Professor and Director of Nursing, Robert J. Dehaemers Endowed Chair, Benedictine College, Atchison, KS. She is Research Editor for MEDSURG Nursing.
Title Annotation: Understanding Research
Author: Connelly, Lynne M.
Date: May 1, 2017