
School Library Research: Publication Bias and the File Drawer Effect?

Introduction

The need for empirical investigation into the effect of school librarians on student outcomes is critical as school librarians compete for limited funding and resources in schools (American Association of School Librarians, 2014; Hughes, 2014; Morrison, 2015). Simply put, the sustainability of the school library profession relies upon credible and reliable findings. Although the effect of school librarians on student achievement has been investigated for over two decades, the vast majority of the studies have relied upon descriptive and correlational analyses (Morris & Cahill, 2017). Despite the fact that frequently cited state-commissioned school library studies have not withstood the peer review process, their findings have shaped many practitioners' and researchers' beliefs that "the mere presence of a librarian is associated with better student outcomes" (Johnston & Green, 2018; Lance & Kachel, 2018, p. 17; Lance, Rodney, & Hamilton-Pennell, 2001). Although correlational research may reveal strong associations that can inform the design of experimental studies in which researchers manipulate the independent variable to test causation (Conn, 2017), correlational analyses do not account for other underlying variables that may be causing or confounding the effect.

In 2014, the American Association of School Librarians (AASL) announced a need for a "credible way" (p. 9) to provide evidence of the "positive influence" of state-certified school librarians on student learning (p. 4). To work towards that goal, AASL, with funding provided by the Institute of Museum and Library Services (IMLS), invited 50 scholars across a broad spectrum of disciplines to explore how researchers could design studies to support causal inference. A white paper summarizes the outcomes of the 2014 CLASS Summit and acknowledges that although "over 25 correlational studies of library effects on student and teacher outcomes" had been "valuable in identifying possible effects and the features of libraries and librarians that may cause them, [the studies] are generally not able to rule out plausible alternative explanations in a credible way" (American Association of School Librarians, 2014, p. 15). Calls for accountability and shrinking school budgets place pressure on school librarians to provide evidence of what they do, in terms of instructional activities, or what they provide, in terms of informational resources, and how such offerings and/or resources influence student outcomes. In order to establish a credible link between the hypothetical cause (the school librarian) and the effect (improved student achievement), researchers must demonstrate that they have investigated and rejected all other plausible explanations except the investigated causal one (Murnane & Willett, 2011). This has yet to be accomplished in school library research.

Threats to Scientific Integrity: The Legitimacy of Published Results

It is critical for scholars exploring ways to provide credible evidence of the school librarian effect to be aware of discussions across the wider scientific community concerning pervasive errors and biases in data collection, analyses, and publication of research results that threaten the reliability of contemporary scientific inquiry. The widespread prevalence of such weaknesses, empirically confirmed by researchers across disciplines (Button, Bal, Clark, & Shipley, 2016; Higgitt, 2012; Ioannidis, 2005; Jha, 2012; Wicherts, 2017), leads many researchers and statisticians to question the trustworthiness and legitimacy of published results (Ekmekci, 2017; Francis, 2013; Gelman & Loken, 2014; Ioannidis, 2005). As similar discussions concerning these issues have not taken place within the school library research community, in this paper we explore two of these threats within the context of school library research. It is hoped this will foster an open and candid conversation along with critical reflection on commonly held school library beliefs.

The Challenge of School Library Research

For nearly two decades a preponderance of school library studies has upheld the belief that the "mere presence of a school librarian" improves student achievement (Lance, 2001; Lance & Kachel, 2018; Lance et al., 2001). However, this simplistic stance ignores the multifaceted role of today's state-certified school librarian and the large number of overlapping, interwoven, and often confounding variables that exist within the learning environment. "The number of different players who contribute to education, and the complexity of their interactions, make it difficult to formulate parsimonious, compelling theories about the consequences of particular educational policies" (Murnane & Willett, 2011, p. 19). Although it is difficult to isolate the influence of the school librarian apart from all other activities and academic interventions, school, student, and library-related variables must be included in investigations in an effort to rule out other plausible alternative explanations and establish a credible link between the hypothetical cause (the school librarian) and the effect (student achievement). Although it is difficult in quasi-experimental and observational research to rule out all plausible competing explanations for the hypothesized link between cause and effect, for "each rival explanation that you do succeed in ruling out explicitly, the stronger is your case for claiming a causal link between treatment and outcome" (Murnane & Willett, 2011, p. 38).

The reality is that school librarians often work in a school building (or multiple buildings), across grade levels and disciplines, and with a range of student populations, teachers, administrators, educational staff, aides, parents, guardians, and community stakeholders. School librarians are employed in a variety of community settings (i.e., rural, suburban, and urban), and may or may not have access to community and cultural resources. School librarians may have access to a wide range of physical and electronic resources, or a limited number. School librarians' schedules may be fixed, flexible, blended, or a combination thereof. Each scheduling configuration, along with the nature of instruction that takes place and the instructional resources utilized, needs to be examined to determine what effect the intervention--what the school librarian may do (e.g., the amount of time the school librarian spends with students, the independent variable)--has on student learning (the dependent variable). Student demographics, building climate, prior student achievement, and school librarian variables (certification, full-time status, years of employment, number of years in the building, educational background, etc.) must be considered along with the instructional, technological, and staffing resources of the district, school, and community. Other factors to consider include whether technology is available to students outside of school (personal devices and/or technology provided within the community, i.e., schools, public libraries, and/or afterschool centers), and the amount of time school librarians spend with students and teachers. Many of these factors may be determined by building or district administrators, who often exert influence on the collaborative culture of a school and the level of support that the school librarian receives (Huguet, 2017; Ketterlin-Geller, Baumer, & Lichon, 2015). Furthermore, school librarians bring different dispositions (values, beliefs, attitudes, and commitment), educational backgrounds, professional experiences, instructional styles, technological expertise, and beliefs regarding how students learn. These constructs and combinations of variables have the potential to influence student achievement, making it difficult to understand how the "mere presence" of a school librarian is all that is required to improve student performance. So how and why has the field come to rely on such a simplistic explanation?

Publication Bias

Publication bias--the tendency to publish studies with positive findings more frequently than those with negative or inconclusive results--is prevalent throughout scientific literature (Bial, 2018). In an oft-cited study, Ioannidis (2005) uses Bayes' theorem to argue that more than half of published research findings are false. While not all researchers agree with the magnitude of this conclusion, concerns that publication favors positive results have been supported by empirical evidence from researchers across scientific disciplines for the past sixty years. Researchers and statisticians (Gelman, 2015; Gelman & Loken, 2014; Sterling, 1959) have documented publication bias across a variety of academic disciplines including animal studies, biology, chemistry, physiology, and sociology (Fanelli, 2010; Scargle, 2000; Sterling, 1959; Sterling, Rosenbaum, & Weinkam, 1995), behavioral sciences (Rosenthal, 1979), education (Piotrowski, 2015; Welner & Molnar, 2007), special education (Cook & Therrien, 2017; Makel et al., 2016), ecology (Statzner & Resh, 2010), medicine (Ekmekci, 2017; Ioannidis, 2005, 2014; Jukola, 2015; Matosin, Frank, Engel, Lum, & Newell, 2014; Mlinaric, Horvat, & Smolcic, 2017), psychology (Button et al., 2016; Francis, 2013; Jha, 2012; Owuamalam, Rubin, & Spears, 2018; Rosenthal, 1979; Sterling, 1959; Sterling et al., 1995), and theatre and performance (Bial, 2018).
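
To make the arithmetic behind Ioannidis's (2005) argument concrete, the sketch below computes the positive predictive value (PPV) of a claimed finding from the pre-study odds, statistical power, significance level, and bias. The formula follows Ioannidis (2005); the numeric inputs are illustrative assumptions of ours, not figures drawn from any study cited here.

# Minimal sketch of Ioannidis's (2005) positive predictive value argument.
# The example values below are invented for illustration.

def ppv(prior_odds, power=0.8, alpha=0.05, bias=0.0):
    """Probability that a statistically significant claim is actually true.

    prior_odds: R, ratio of true to false relationships among those tested
    power:      1 - beta, probability of detecting a true relationship
    alpha:      type I error rate
    bias:       u, share of analyses reported as positive because of bias
    """
    true_positives = (power + bias * (1 - power)) * prior_odds
    false_positives = alpha + bias * (1 - alpha)
    return true_positives / (true_positives + false_positives)

# A well-powered field testing mostly plausible hypotheses:
print(round(ppv(prior_odds=1.0, power=0.8), 2))            # ~0.94
# An exploratory field testing unlikely hypotheses, with some bias:
print(round(ppv(prior_odds=0.1, power=0.4, bias=0.2), 2))  # ~0.18

Under the second, less favorable set of assumptions, fewer than one in five "significant" findings would be true, which is the sense in which Ioannidis argues that most published findings in such fields are false.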

Sterling (1959) explains that publication bias leads to a biased sample of research being shared with researchers and practitioners, noting that 97.3% of papers published in four major psychology journals reported statistically significant outcomes for their scientific hypotheses. Sterling also notes that positive outcome bias, which he describes as "malpractices," is pervasive throughout the sciences (p. 34). He explains further that the reality of negative results is that they "occur with lesser frequency in the literature than they may reasonably be expected to happen in the laboratory" (p. 34). When only positive studies are published, the aggregate result is a distorted view of the field in which success stories about a phenomenon appear more common than they actually are (Bial, 2018).

While it is beneficial to know what factors exert a significant positive effect on a phenomenon, it is equally important to know what factors, or combination thereof, have a negative and/or nonsignificant effect. As John Stuart Mill explained in 1843, "... we never really know what a thing is, unless we are also able to give a sufficient account of its opposite" (as cited in Mill, 2006, p. 735). Without knowing the full results of a study, scientists are left with an abridged and limited understanding of the phenomenon being investigated. Scholars, unaware of studies related to the investigation that may have yielded negative and/or inconclusive results, may conduct similar studies, attain comparable negative or inconclusive results, and waste time and intellectual effort that could have been directed elsewhere had the full corpus of results from the original study been shared.

The File Drawer Phenomenon

Rosenthal (1979) was the first to describe the phenomenon of the "file drawer problem" (p. 638). Rosenthal noted that studies published in the behavioral sciences reflect a biased sample that does not include the multitude of studies with negative, inconclusive, and/or nonsignificant results. Thus, many more negative results remain unpublished, left in file drawers back in the lab. While we can update the description to virtual files stored electronically, the reality is that negative results still remain largely unpublished and unseen by colleagues, practitioners, and readers. Revisiting the issue in 1995, Sterling examined data from 11 major journals and found that the practices leading to publication bias toward positive results had not changed in over 30 years (Sterling et al., 1995). Withholding negative, inconclusive, or nonsignificant findings distorts the understanding of research within a domain and causes the potential benefits of an intervention to be overestimated (Ekmekci, 2017). Rosenthal (1979) explains, "small numbers of studies that are not very significant, even when their combined p is significant, may well be misleading in that only a few studies filed away could change the combined significant result to a nonsignificant result" (p. 640). Consequently, it only takes a small number of studies left in the file drawer to "produce a significant bias" (Scargle, 2000, p. 91).
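
Rosenthal (1979) also proposed a simple "fail-safe N" calculation: how many unpublished studies averaging a null result would have to be sitting in file drawers to pull a significant combined result below significance. The sketch below implements that reasoning with Stouffer's method for combining Z scores; the four Z values are invented for illustration and do not come from any study discussed here.

import math

def combined_z(z_values):
    """Stouffer's method: combine per-study Z scores into one overall Z."""
    return sum(z_values) / math.sqrt(len(z_values))

def fail_safe_n(z_values, z_crit=1.645):
    """Number of additional zero-effect studies needed to drag the combined
    one-tailed result below significance at p = .05 (Rosenthal, 1979)."""
    k = len(z_values)
    return (sum(z_values) ** 2) / (z_crit ** 2) - k

published = [1.9, 2.1, 1.7, 2.3]        # four hypothetical "positive" studies
print(round(combined_z(published), 2))  # combined Z = 4.0, clearly significant
print(round(fail_safe_n(published), 1)) # ~19.7 filed-away null studies undo it

With only four published studies, roughly twenty unpublished null results would be enough to erase the combined significance, which is Scargle's (2000) point that even a modest file drawer can produce a substantial bias.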

Loss of Negative Data

It is clear that negative research findings, which are likely to far surpass the number of positive findings, continue to be left behind, unpublished, or tucked away in file drawers (Cook & Therrien, 2017; Rosenthal, 1979; Scargle, 2000; Sterling, 1959). Fanelli (2012) analyzed over 4,600 papers in all disciplines between 1990 and 2007 and verified a bias against negative and non-significant results. His empirical findings reveal that the overall frequency of positive support has grown by over 22%, and that "the increase was significantly stronger in the social sciences ... and in applied disciplines" (p. 895). The loss of negative data is concerning because results that do not conform to expectations and/or contradict a hypothesis are essential for scientific progress. Questioning results helps to foster and sustain a collective "self-correction process" (Fanelli, 2012, p. 892). Negative findings are important to consider as they encourage researchers to think critically, re-evaluate, challenge, correct, and/or validate currently held beliefs, and "fundamentally move us to an unabridged science" (Matosin et al., 2014, p. 171). It is, therefore, essential that all findings--positive, negative, and null--are available to scientists to ensure valid and complete research syntheses to inform policy, practice, and research.

Factors Contributing to the Distortion of Scientific Knowledge

Researchers hypothesize that increased competition for shrinking research funding, pressure to publish in high-ranking journals for tenure and promotion (Button et al., 2016; Fanelli, 2010), and journals' bias towards positive findings may all contribute to the distortion of scientific knowledge (Higgitt, 2012; Ioannidis, 2012; Mlinaric et al., 2017; Statzner & Resh, 2010).

Replication studies, pre-registration (publication of hypotheses along with the proposed design, data sources, and analyses prior to initiation of a study), and multiverse analysis (Steegen et al., 2016) are three ways that researchers across disciplines are attempting to address the problem of high researcher degrees of freedom in research design and analysis and to increase the publication of negative findings. Button et al. (2016) explain that the bias for positive results is influenced by a highly competitive culture in which scientists find themselves competing for the same positions and vying for similar funding streams. Researchers choose not to report negative findings in hopes of being published, hired, funded, and promoted (Fanelli, 2012). Psychological, social, and societal factors also contribute to publication bias. As human beings, scientists "tend to select information that supports their hypotheses about the world" (Fanelli, 2010, p. 1).
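
Returning to multiverse analysis (Steegen et al., 2016), the idea is to run the same question through every defensible combination of data-processing and analysis choices and report the full distribution of results rather than one preferred specification. The sketch below is a minimal, entirely hypothetical illustration: the data are simulated, the two analytic "forks" (an exclusion rule and an outcome definition) are assumptions of ours, and the correlation function requires Python 3.10 or later.

import itertools
import random
import statistics

random.seed(1)

def simulate_record():
    hours = random.uniform(0, 10)    # hypothetical librarian contact hours
    prior = random.gauss(70, 10)     # prior achievement score
    current = prior + 0.3 * hours + random.gauss(0, 8)
    return hours, prior, current

data = [simulate_record() for _ in range(200)]

# Two analytic "forks": which students to keep, and which outcome to analyze.
exclusion_rules = {
    "keep_all": lambda r: True,
    "drop_low_prior": lambda r: r[1] >= 55,
}
outcomes = {
    "raw_score": lambda r: r[2],
    "gain_score": lambda r: r[2] - r[1],
}

# One estimate per combination of choices: the "multiverse" of results.
for rule_name, outcome_name in itertools.product(exclusion_rules, outcomes):
    rows = [r for r in data if exclusion_rules[rule_name](r)]
    xs = [r[0] for r in rows]
    ys = [outcomes[outcome_name](r) for r in rows]
    est = statistics.correlation(xs, ys)
    print(f"{rule_name:>14} / {outcome_name:<10} r = {est:+.2f}")

Reporting all of the estimates side by side makes it harder to quietly select the single specification that happens to yield the most publishable result.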

According to system-justification theory, "people have an inherent need to support societal systems and to maintain the status quo" (Owuamalam et al., 2018, p. 91). Although we are taught to believe that scientists embrace the notion that scientific principles are subject to systematic and ongoing review and that new evidence is welcomed whether it supports, contradicts, or negates the original hypotheses, this is often not the reality of research practice. When confronted with the challenge of rejecting previously published ideas with new research, cognitive bias makes it difficult for researchers to "fight for the paradigm shift" (Matosin et al., 2014, p. 171; Owuamalam et al., 2018). However, Matosin et al. (2014) stress that as scientists we are accountable for sharing all data no matter the outcome, because a negative finding is still an important finding.

The Status of School Library Research

For decades, school library researchers and practitioners have sought to provide evidence of the positive effect of school librarians on student achievement. In a systematic review of over 300 peer-reviewed publications that explore the relationship between school librarians and student achievement, Stefl-Mabry and Radlick (2017) empirically identified the following weaknesses:

(1) lack of an underlying theory of action

(2) disproportionate reliance on descriptive data

(3) conflation of correlation with causation

(4) problems in measurement and statistical analyses

(5) absence of replication studies

(6) weak designs without comparability between library and non-library groups, and

(7) evidence of publication bias focusing on positive results.

Of the 300 studies, the authors found only 28 had designs that attempted to control for confounding variables. Though not discussed in the 2017 article, a matrix available on the authors' research project website (see: https://sites.google.com/view/slesany/2015-2018-project) indicates that of the 28 studies just 13 were peer-reviewed; only a handful refute or negate the widely stated, putatively strong positive effects of a school librarian on student achievement, and even fewer report effect sizes. "Effect size is the magnitude of the difference between groups" (Sullivan & Feinn, 2012, p. 279). Researchers need to report not only whether school librarians affect student outcomes, but also the direction (positive or negative) and magnitude of the effect. The focus on positive findings, largely driven by advocacy for the school library profession, has led many school library researchers and practitioners to believe that there is a preponderance of strong evidence that the presence of a school librarian in a school improves student achievement. However, as the AASL (American Association of School Librarians, 2014) explained, many school library studies rely on correlational and descriptive data, including "asking students how the school library has helped them succeed" as "evidence" that the school librarian has a positive effect on student achievement (Johnston & Green, 2018, p. 7). Despite the knowledge that correlation is not evidence of causation, many authors with correlational findings interpret their results as though they were causal (Conn, 2017). Conn explains that "the temptation is a seductive one," and whether consciously or unconsciously, authors may use causation language to "exaggerate the significance" and "elevate the importance of their work" (p. 731). Misinterpreting or misrepresenting correlational findings as causal "may impede future research and misinform practice" (Conn, 2017, p. 731). However, there was a time when the content of the school library research file drawer was shared, and positive and negative results related to the school librarian effect on student achievement were discussed openly.
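
Because effect sizes are reported so rarely in the reviewed studies, a minimal sketch of one common measure, Cohen's d (discussed by Sullivan & Feinn, 2012), may be useful. The two groups of test scores below are invented for illustration; they are not data from any school library study.

import math
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference between two independent groups."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

with_librarian = [74, 78, 71, 80, 76, 73, 79, 75]   # hypothetical test scores
without_librarian = [72, 70, 75, 69, 73, 71, 74, 68]
print(round(cohens_d(with_librarian, without_librarian), 2))  # d ~ 1.5 here

Reporting d (ideally with a confidence interval) alongside the p value tells readers not just whether two groups differ but by how much, in standard-deviation units.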

An Open File Drawer Regarding School Library Research

Nearly two decades ago, when describing the findings of the Alaskan state study, Lance, Hamilton-Pennell, Rodney, Petersen, and Sitter (1999) acknowledged that "community conditions proved to have the strongest impact" on student achievement (p. 4). In the Oregon state study as well, Lance, Rodney, and Hamilton-Pennell (2001) confirmed that poverty and minority status were the two most important predictors of student academic success. In addition, when Lance et al. (2001) factored in poverty and "racial ethnic minority groups" as variables in regression models, "none of the remaining community, school or library media predictors demonstrated any additional impact" on student academic achievement (p. 80). Lance et al. (2001) explained that this negative and "unexpected finding is consistent with a pattern uncovered in the original Colorado study" (p. 80). Such negative results, however, are rarely discussed or cited in contemporary school library research. Lance (2001), whose school library impact studies have had a profound influence on school library research, reported that "the evidence is mounting!" explaining that approximately 75 school library impact studies demonstrate "the value of the mere presence of a professionally trained and credentialed library media specialist" (p. 3). Yet he also cautioned that such correlations (authors' emphasis) "beg the question of what the school librarians are doing that makes a difference" (p. 3). Nearly two decades later, school library researchers are still trying to determine what it is that school librarians do, what resources school librarians provide, and how instructional strategies or the provision of information sources influence student learning (American Association of School Librarians, 2014).
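
The regression pattern Lance et al. (2001) describe, in which library predictors add nothing once poverty is controlled, is the classic signature of confounding. The sketch below reproduces that pattern with entirely simulated data: a "library staffing" variable correlates with achievement only because both are driven by a simulated community poverty variable. No numbers here come from the Lance studies, and the correlation function requires Python 3.10 or later.

import math
import random
import statistics

random.seed(2)
rows = []
for _ in range(500):
    poverty = random.gauss(0, 1)                      # community poverty level
    staffing = -0.7 * poverty + random.gauss(0, 0.7)  # richer communities staff libraries better
    score = -0.8 * poverty + random.gauss(0, 0.6)     # achievement driven here by poverty alone
    rows.append((poverty, staffing, score))

poverty_col, staffing_col, score_col = (list(col) for col in zip(*rows))
r_xy = statistics.correlation(staffing_col, score_col)   # staffing vs. achievement
r_xz = statistics.correlation(staffing_col, poverty_col)
r_yz = statistics.correlation(score_col, poverty_col)

# First-order partial correlation: staffing vs. achievement, poverty controlled.
partial = (r_xy - r_xz * r_yz) / math.sqrt((1 - r_xz ** 2) * (1 - r_yz ** 2))
print(f"zero-order r = {r_xy:+.2f}, partial r = {partial:+.2f}")

In this simulation the zero-order correlation is comfortably positive while the partial correlation is near zero, mirroring the "unexpected finding" described in the Colorado and Oregon reports.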

In 2005, Callison, editor of School Library Media Research, conducted an electronic interview with Lance to reflect on the status of school library research. Callison and Lance proposed recommendations to strengthen future school library research. Lance explained that, in the "Colorado-style" studies, the strongest predictor of test scores continued to be socio-economic factors and recommended that, "because it confounds the effects of so many other variables of interest," it was time to explore new methodological options to provide "stronger causal evidence" of the school library effect (2005, p. 8). Vividly describing a school as a living social organism, Lance warned that
   ... the organic interrelationship of the myriad factors impacting
   achievement suggests that it is an unrealistic oversimplification
   to suggest that changing one element of the situation alone can, or
   should, make such a dramatic difference. (p. 8)


However, 14 years later, school library researchers continue to espouse the belief that one element, the "mere presence" of a school librarian, leads to better student outcomes (Lance & Kachel, 2018, p. 17). The original negative results, describing the non-effect of the school librarian on student achievement when poverty, minority status, and other community conditions are factored in, appear to have vanished from contemporary publications and discussions. The file drawer appears to have closed. The question is "why?"

Disappearing Negative Findings

Perhaps the file drawer phenomenon, together with a low tolerance for null results, is to blame (Rosenthal, 1979). A more compelling reason may be, as Johnston and Green (2018) explain, that between 2004 and 2014 many school librarian positions were being cut and/or eliminated; therefore, "... studies were conducted to demonstrate the value of having a full-time school librarian to support students' learning" (p. 8). Yet despite the overwhelmingly positive results reported in multiple state impact studies, Kaplan (2010) reports that the studies had very little impact on the integration of the school library within the overall school curriculum and little impact on local or federal legislation related to school library media programs or school librarians (p. 58).

In order to support school libraries, future research should lend depth and sophistication to the relationships suggested by correlation and not focus on advocacy. Additional work must examine why, where, when, and how specific types of interactions occur between school librarians, students, teachers, and staff to determine what the effects are within the educational ecosystem. By exploring such dimensions as school librarians' demographics, educational background, professional development, experience, leadership, and dispositions, using techniques beyond correlational analyses--and in contexts larger than that of a school library--causal relationships can be uncovered, and realistic expectations can be set for how the school librarian and school library program may contribute to student outcomes.

Recommendations

Science is a collaborative discipline, and negative or inconclusive results are a natural consequence of experimentation. Just as with statistically positive results, authors, reviewers, and editors need to distinguish "true negative results from those obtained from a poorly designed and executed research" (Mlinaric et al., 2017, p. 450). The priority should be on the quality of the research theory and design of the study, not on preferable outcomes. Whether the results are positive or negative, it is recommended that effect size and confidence intervals be routinely reported (Mlinaric et al., 2017; Wicherts, 2017).

Some journals are beginning to pilot a peer review system in which the results of a study are not included in the initial review, an acknowledgment that the peer review process itself may be adding to the problem (Button et al., 2016). Button et al. (2016) suggest that, by withholding the results and discussion section, reviewers would be required to judge the submission entirely on the research design and would not be unduly influenced by the results. They also suggest that journals could move toward a system in which a publication commitment is made based solely on the research design, before the study has been conducted.

Academic social networking sites such as ResearchGate, as well as digital libraries in the form of preprint servers and institutional repositories, may affect how and which scholarly voices are heard in the 21st century. Such platforms allow scholars to share their findings in instances where publication bias has occurred, and some, like ResearchGate, have a specific option for uploading negative results (ResearchGate, n.d.), indicating that this problem is not unknown in the research community. Additional suggestions for researchers include preregistration or publication of study protocols with services such as the Registry of Efficacy and Effectiveness Studies (REES) (https://sreereg.icpsr.umich.edu/pages/about.php), which is a database of causal inference studies in education and related science fields. The purpose of REES is to increase "transparency and access to information about both ongoing and completed impact studies."

The direction of scientific research, no matter the field or discipline, should not be determined by the pursuit of positive findings. Publishing negative results saves colleagues time by showing what has already been explored. "Filing away" negative or inconclusive results denies scientists access to a full understanding of the phenomenon being investigated. Matosin et al. (2014) explain that as scientists we have a responsibility to "(1) publish all data, no matter what the outcome ... and (2) have a hypothesis to explain the finding" (p. 173). Published research that exclusively reports positive results and stores negative data away in "file drawers" distorts the authenticity of the investigation and results in an abridged version of reality. To increase what Ioannidis (2014) refers to as "true research findings," he recommends the following research practices:

* large-scale collaborative research

* adoption of a culture of replication

* registration (of studies, protocols, analysis codes, datasets, raw data, and results)

* sharing (of data, protocols, materials, software, and other tools)

* reproducibility practices

* containment of conflicted sponsors and authors

* more appropriate statistical methods

* standardization of definitions and analyses

* more stringent thresholds for claiming discoveries or "successes"

* improvement of study design standards

* improvements in peer review, reporting, and dissemination of research, and

* better training of scientific workforce in methods and statistical literacy (p. 2).

It is imperative that the field of school library research engage in a critical examination of its research and publication practices and work to adopt effective replicable research methodologies that can build a credible base of evidence that will be convincing not only to practitioners and library and information science researchers, but also to the wider scientific research community.

Conclusion

School library researchers globally describe a critical need "for concrete evidence now on exactly how school libraries and librarians do--or don't--add value to pupils' educational, social, and developmental wellbeing" (Gildersleeves, 2012, p. 406; Hughes, 2014). To date, the vast majority of school library studies suffer from significant limitations in research design and analysis. While researchers may feel more productive when they discover positive or statistically significant findings, reporting effect sizes and statistically significant negative findings is important to advance science. Negative results enable us to understand what variables do not contribute to or influence the school library effect, and they challenge us to think critically and dig more deeply into what, when, where, and how school librarians influence student outcomes within the context of other confounding variables within a school. Given the multifaceted professional role of the school librarian, it is not surprising that trying to definitively answer the question about the school librarian effect has been difficult.

The purpose of this article is not to minimize the importance of correlational research, which is often useful to determine how variables are associated. However, correlational research is, by design, a nonexperimental research method (Curtis, Comiskey, & Dempsey, 2016; Johnson & Christensen, 2017; Leedy & Ormrod, 2016). After over two decades, the field of school library research must move beyond correlational and descriptive research. At present, the challenge for educational researchers is to identify what it is that school librarians do in terms of instruction or the provision of resources that influences student outcomes.

An understanding of the inefficient research practices that have created a "crisis of confidence" in contemporary science (Wicherts, 2017, p. 1) can help school library and educational researchers design methodologies to improve the credibility and effectiveness of future scientific investigations (Ioannidis, 2014). This knowledge will help school library researchers ensure that appropriate rigor and safeguards are put into place as they labor to construct a robust, credible, and reliable theoretical foundation of school library research.

Joette Stefl-Mabry

State University of New York, Albany

Michael Radlick

Learning Technology Visions, LLC

Shannon Mersand

Yenisei Gulatee

State University of New York, Albany

References

American Association of School Librarians. (2014). Causality: School libraries and student success (CLASS). White Paper. American Association of School Librarians. Retrieved from https://eric.ed.gov/?id=ED561868

Bial, H. (2018). Guest editor's introduction: Failing better. Theatre Topics, 28(1), 61-62.

Button, K. S., Bal, L., Clark, A., & Shipley, T. (2016). Preventing the ends from justifying the means: Withholding results to address publication bias in peer-review. BMC Psychology, 4(1), 59.

Callison, D. (2005). Enough already?: Blazing new trails for school library research: An interview with Keith Curry Lance, Director, Library Research Service, Colorado State Library & University of Denver. School Library Media Research, 8.

Conn, V. S. (2017). Don't rock the analytical boat: Correlation is not causation. Western Journal of Nursing Research, 39(6), 731-732. doi:10.1177/0193945917701090

Cook, B. G., & Therrien, W. J. (2017). Null effects and publication bias in special education research. Behavioral Disorders, 42(4), 149-158. doi:10.1177/0198742917709473

Curtis, E. A., Comiskey, C., & Dempsey, O. (2016). Importance and use of correlational research. Nurse Researcher, 23(6), 20-25. doi:10.7748/nr.2016.e1382

Ekmekci, P. E. (2017). An increasing problem in publication ethics: Publication bias and editors' role in avoiding it. Medicine, Health Care, And Philosophy, 20(2), 171-178. doi:10.1007/s11019-017-9767-0

Fanelli, D. (2010). Do pressures to publish increase scientists' bias? An empirical support from US states data. PLoS ONE, 5(4). doi:10.1371/journal.pone.0010271

Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891-904. doi:10.1007/s11192-011-0494-7

Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, 57, 153-169. doi:10.1016/j.jmp.2013.02.003

Gelman, A. (2015). Statistics and research integrity. European Science Editing, 41(1), 13-14.

Gelman, A., & Loken, E. (2014). The statistical crisis in science. American Scientist, 102(6), 460-465.

Gildersleeves, L. (2012). Do school libraries make a difference? Some considerations on investigating school library impact in the United Kingdom. Library Management, 33(6/7), 403.

Higgitt, R. (2012, September 13). Fraud and the decline of science. The Guardian.

Hughes, H. (2014). School libraries, teacher-librarians and student outcomes: Presenting and using the evidence. School Libraries Worldwide, 20(1), 29-50. doi:10.14265.20.1.004

Huguet, B. C. S. (2017). Effective leadership can positively impact school performance. On the Horizon, 25(2), 96-102. doi:10.1108/OTH-07-2016-0044

Ioannidis, J. P. A. (2005). Why most published research findings are false. PLoS Medicine, 2(8), e124. https://doi.org/10.1371/journal.pmed.0020124

Ioannidis, J. P. A. (2012). Scientific inbreeding and same-team replication: Type D personality as an example. Journal of Psychosomatic Research, 73(6), 408-410. doi:10.1016/j.jpsychores.2012.09.014

Ioannidis, J. P. A. (2014). How to make more published research true. PLoS Medicine, 11(10), 1-6. doi:10.1371/journal.pmed.1001747

Jha, A. (2012). False positives: Fraud and misconduct are threatening scientific research. The Guardian.

Johnson, B., & Christensen, L. B. (2017). Educational research: Quantitative, qualitative, and mixed approaches (Sixth ed.). Thousand Oaks, CA: SAGE Publications, Inc.

Johnston, M. P., & Green, L. S. (2018). Still polishing the diamond: School library research over the last decade. School Library Research, 21, 1-63.

Jukola, S. (2015). Meta-analysis, ideals of objectivity, and the reliability of medical knowledge. Science & Technology Studies, 28(3), 101-121.

Kaplan, A. G. (2010). School library impact studies and school library media programs in the United States. School Libraries Worldwide, 16(2), 55-63.

Ketterlin-Geller, L. R., Baumer, P., & Lichon, K. (2015). Administrators as advocates for teacher collaboration. Intervention in School & Clinic, 51(1), 51-57. doi:10.1177/1053451214542044

Lance, K. C. (2001). Proof of the power: Recent research on the impact of school library media programs on the academic achievement of U.S. public school students. ERIC Digest (ED456861), 1-21.

Lance, K. C., Hamilton-Pennell, C., & Rodney, M. J. (1999). Information empowered: The school librarian as an agent of academic achievement in Alaska schools. Revised Edition. Juneau, AK. Retrieved from https://eric.ed.gov/?id=ED443445

Lance, K. C., & Kachel, D. E. (2018). Why school librarians matter: What years of research tell us. Phi Delta Kappan, 99(7), 15-20. doi:10.1177/0031721718767854

Lance, K. C., Rodney, M. J., & Hamilton-Pennell, C. (2001). Good schools have school librarians: Oregon school librarians collaborate to improve academic achievement. Salem, OR: Oregon Educational Media Association.

Leedy, P. D., & Ormrod, J. E. (2016). Practical research: Planning and design (11th ed.). Boston, MA: Pearson.

Makel, M. C., Plucker, J. A., Freeman, J., Lombardi, A., Simonsen, B., & Coyne, M. (2016). Replication of special education research: Necessary but far too rare. Remedial and Special Education, 37(4), 205-212.

Matosin, N., Frank, E., Engel, M., Lum, J. S., & Newell, K. A. (2014). Negativity towards negative results: A discussion of the disconnect between scientific worth and scientific culture. Disease Models & Mechanisms, 7(2), 171-173. https://doi.org/10.1242/dmm.015123

Mill, J. S. (2006). A system of logic ratiocinative and inductive: Books IV-VI (Liberty Fund pbk. ed.). Indianapolis, IN: Liberty Fund.

Mlinaric, A., Horvat, M., & Smolcic, V. S. (2017). Dealing with the positive publication bias: Why you should really publish your negative results. Biochemia Medica, 27(3), 447-452. doi:10.11613/BM.2017.030201

Morris, R. J., & Cahill, M. (2017). A study of how we study: Methodologies of school library research 2007 through July 2015. School Library Media Research, 20.

Morrison, O. (2015). Number of libraries dwindles in N.Y.C. schools. Retrieved June 25, 2019, from https://www.edweek.org/ew/articles/2015/03/18/numberof-libraries-dwindle-in-nyc-schools.html

Murnane, R. J., & Willett, J. B. (2011). Methods matter: Improving causal inference in educational and social science research. Oxford, UK: Oxford University Press.

Owuamalam, C. K., Rubin, M., & Spears, R. (2018). Addressing evidential and theoretical inconsistencies in system-justification theory with a social identity model of system attitudes. Current Directions in Psychological Science, 27(2), 91-96. doi:10.1177/0963721417737136

Piotrowski, C. (2015). Scholarly research on educational adaption of social media: Is there evidence of publication bias? College Student Journal, 49(3), 447-451.

ResearchGate. (n.d.). How to add research. Retrieved June 20, 2019, from https://explore.researchgate.net/display/support/How+to+add+research

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641.

Scargle, J. D. (2000). Publication bias (the 'file-drawer problem') in scientific inference. Journal of Scientific Exploration, 14(1), 91-106.

Statzner, B., & Resh, V. H. (2010). Negative changes in the scientific publication process in ecology: Potential causes and consequences. Freshwater Biology, 55(12), 2639-2653. doi:10.1111/j.1365-2427.2010.02484.x

Steegen, S., Tuerlinckx, F., Gelman, A., & Vanpaemel, W. (2016). Increasing transparency through a multiverse analysis. Perspectives on Psychological Science, 11(5) 702-712.

Stefl-Mabry, J., & Radlick, M. S. (2017). School library research in the real world--What does it really take? International Association of School Librarians Conference Proceedings. Long Beach, CA.

Sterling, T. D. (1959). Publication decisions and their possible effects on inferences drawn from tests of significance--Or vice versa. Journal of the American Statistical Association, 54(285), 30-34. doi:10.2307/2282137

Sterling, T. D., Rosenbaum, W. L., & Weinkam, J. J. (1995). Publication decisions revisited: The effect of the outcome of statistical tests on the decision to publish and vice versa. The American Statistician, 49(1), 108-112. doi:10.1080/00031305.1995.10476125

Sullivan, G. M., & Feinn, R. (2012). Using effect size--or why the P value is not enough. Journal of Graduate Medical Education, 4(3), 279-282. doi:10.4300/JGME-D-12-00156.1

Welner, K. G., & Molnar, A. (2007). Truthiness in education. Education Week. Retrieved from http://nepc.colorado.edu/files/edweek2-28-07.pdf

Wicherts, J. M. (2017). The weak spots in contemporary science (and how to fix them). Animals, 7(12), 1-19. doi:10.3390/ani7120090