An ounce of prevention: an associate editor's view.

It is my pleasure and honor to serve as the new associate editor of research for the Journal of Mental Health Counseling (JMHC) and to continue working with such distinguished colleagues as Drs. James Rogers, Loreto Prieto, and Victoria Kress. I have had the privilege of serving these past two years as a reviewer for JMHC, paying particular attention to the soundness of the statistical and methodological sections of submitted manuscripts. On average I have reviewed one manuscript every eight weeks. As I read these manuscripts, I was pleased to note the quality of preparation and the soundness of the studies conducted. Because no field can advance beyond the quality of its research, this encourages me as both researcher and counselor.

ROLE OF THE RESEARCH EDITOR

In general, I view the role of research editor as similar to that of every JMHC reviewer: I am a gatekeeper, checking both the quality and scope of research studies published in the journal. Any researcher who has questions about the appropriateness of a submission to JMHC may, of course, consult the Guidelines for Authors printed in each JMHC issue. I also encourage prospective authors to read Prieto (2005) and Rogers (2006). The Prieto article should be particularly useful for the novice researcher contemplating a future study, especially if the goal is submission to the JMHC.

Based upon my experiences as research mentor and consultant as well as reviewer of manuscripts for JMHC, I see my role as research editor as having a preventive component: If at all possible, I would like to help those submitting manuscripts to avoid mistakes that might cripple or destroy what could have been a useful study. Naturally, I cannot offer one-on-one advice to the many who submit work for publication in JMHC. What I can do is use opportunities like this editorial to offer words of encouragement, guidance, and caution that are seasoned by my own experience. Therefore, in what follows I offer JMHC contributors information that might help them to produce solid research studies. Though my advice cannot guarantee publication, I hope it will challenge researchers to examine both how they view themselves as researchers and their research practice.

"BUT I'M A COUNSELOR ..."

As both a student of professional counseling and a teacher of statistics, I have witnessed firsthand the fear among counselors-in-training about learning and using statistics. Often you can see it in their eyes, which say, "I entered the field of counseling because I want to counsel. Learning research methodology and statistics is an evil I do not wish to face." Indeed, both students and colleagues have said so to me directly. However, many of these same professionals recognize the need to inform the counseling profession competently by contributing sound research. So, how to face this daunting task? After all, most counselors in training complete only a handful of courses in research methods and statistics. Is it possible to conduct competent research after only a few statistics and research courses?

I believe so. This is admittedly an idea on which my thinking has come full circle since some unusual choices in my own graduate education. I completed my first research course as a master's student in school counseling. Until then I had had no exposure to research or statistics. After completing my master's training with its two research courses, I headed straight into full-time doctoral study in counselor education, which meant five required research and statistics courses. I planned to conduct my own studies in the future, not the least of which would be my dissertation, and I remember thinking, "Five courses is not enough. How can any counselor-in-training expect to be proficient in research without adding years onto an already demanding course of doctoral study?" My solution was to change my course of study mid-degree from counselor education to applied research and statistical methods. This decision entailed a career shift, but I felt I could be of more service to counselors-in-training and to those in the counseling and education professions generally if I had a solid background in research and statistics.

What I realized as a professor of statistics was that the question remained unanswered: How can professional counselors be expected to conduct their own research after only a handful of formal statistics and research courses?

"Distilled advice" is my answer: Pay attention to the suggestions and resources that recur most in teaching, consultation, and manuscript review situations. Based upon my own experiences, I believe these resources form a core of research competency. These are the tidbits I offer now to the contributors and readers of JMHC.

WHAT WAS THE QUESTION?

As Ben Franklin once noted, "An ounce of prevention is worth a pound of cure." In the case of research, there is no substitute for careful planning from the outset. I cannot emphasize this enough: First, formulate a clear, manageable research question, and write it down. Refer to the question often as you think about the population to be studied, a representative sample, instrumentation, and analysis. Fong and Malone (1994) examined research articles published in Counselor Education and Supervision (CES) from 1991 to 1992, and one of the common errors they found was the absence of clearly stated research questions.

How can this be? One possible explanation is that the researchers "knew" what they wished to study but did not refer regularly to a written research question as they planned and executed their studies. Or perhaps the researchers did not plan well but simply rushed to complete the study. I remember this temptation: My dissertation seemed to take so long to plan that I had to keep fighting the urge "just to get it done." However, instead of rushing and creating a possibly irredeemable situation, I asked two of my colleagues to help me plan, anticipating potential difficulties and alternate courses of action. The planning did take time, but once I began my study, the entire process ran smoothly; it was a success because of the planning.

I read the Fong and Malone (1994) study early in the writing of my dissertation, and I took their findings seriously. Besides the lack of clear research questions, they also identified as problems incorrect analyses and inflated Type I error. Their conclusion: "No matter how important the topic or how great the efforts of the researcher, the results [of poorly designed and executed studies] could not be used, and the discipline missed the opportunity to build upon its empirical base" (Fong & Malone, as cited in Schneider, 2002, p. 7). I encourage counseling researchers to read this study for themselves, keeping the conclusions in mind as they plan their own work.

A second work researchers might find helpful for streamlining the publication process is the editorial by Kline and Farrell (2005) on manuscript errors. Manuscript processing can be delayed when articles do not conform to submission guidelines. Keeping the formatting required by a journal that has rejected a manuscript, rather than reformatting it to meet the guidelines of the journal to which it is now being submitted, may seem to the researcher to "save time." Here I am reminded of one particular manuscript I reviewed. It was clear that the authors had paid little attention to the guidelines of JMHC or the Publication Manual of the American Psychological Association, fifth edition (APA, 2001). The inappropriate formatting conveyed to me as a reviewer a nonchalant attitude that made it difficult to overlook the formatting and concentrate on the merits of the piece. Following publication guidelines conveys respect for the journal's editorial staff and encourages in reviewers a clear-minded assessment of the study itself.

"I'LL JUST WRITE A SURVEY ..."

Survey construction seems to be an area researchers mistakenly believe requires little skill. As a research consultant I have often heard researchers say that they will "just" write a survey. On probing, I have usually found no intention of consulting the formal tenets of survey construction. There is no "just" to sound survey construction. Indeed, regarding CES submissions, Kline and Farrell (2005) stated that "Survey research ... demonstrated the highest rejection rate for all research manuscripts reviewed" (Survey Research section, para. 1). This finding does not surprise me, given the casual attitude many researchers seem to have toward survey construction.

It is the researcher's responsibility to provide evidence of formal procedure regarding construction of researcher-generated surveys (Kline & Farrell, 2005) as well as evidence that the survey instrument is reliable and valid. Reliability evidence shows that the survey is consistently measuring something; validity evidence demonstrates what that "something" is, including application of information elicited by the survey to decision making (Popham, 2005).

A useful resource for survey construction is Thorndike (1997). Factor analysis (both exploratory and confirmatory) can be used to examine reliability (Kline, 1994). SPSS also offers a "reliability analysis" procedure, including "scale if item deleted" statistics, that I have found particularly useful in distinguishing suitable from poor survey items. As for validity, exploratory and confirmatory factor analysis and canonical correlation analysis are useful for providing evidence in naming the construct measured, but keep in mind that whether a given use of the survey instrument is valid or appropriate is decided not by statistical analysis but by the researcher. Tabachnick and Fidell (2007) is a helpful resource for conducting these analyses.
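For readers who work outside SPSS, the same item-level screening can be illustrated in Python. The sketch below is only a minimal illustration using hypothetical pilot data (the item names q1 through q4 are invented): it computes Cronbach's alpha and the alpha obtained when each item is dropped in turn, which is the logic behind the "scale if item deleted" output mentioned above.

```python
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_item_deleted(items: pd.DataFrame) -> pd.Series:
    """Recompute alpha with each item dropped in turn: an item whose removal
    raises alpha is a candidate for revision or deletion."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )

# Hypothetical pilot data: 5 respondents by 4 items on a 1-5 scale.
items = pd.DataFrame({
    "q1": [4, 5, 3, 4, 2],
    "q2": [4, 4, 3, 5, 2],
    "q3": [5, 5, 2, 4, 3],
    "q4": [1, 3, 5, 2, 4],   # deliberately poorly behaved item
})
print(round(cronbach_alpha(items), 2))
print(alpha_if_item_deleted(items).round(2))
```

In this toy data, dropping q4 raises alpha markedly, which is exactly the kind of diagnostic the SPSS output provides when distinguishing suitable from poor survey items.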

A few more words of advice about survey research:

(1) Pilot the survey on a sample taken from the population of interest, not an alternative convenience population.

(2) Review the literature for potentially usable instruments before constructing a unique survey instrument.

(3) Use separate samples for construction/validation and for interpretation of survey outcomes.

(4) Provide reliability evidence for each and every sample.

UNIVARIATE OR MULTIVARIATE?

Assuming the chosen research question might be examined quantitatively, the researcher must decide whether to opt for univariate or multivariate analysis. In order to narrow this discussion, I will look at only one popular univariate analysis, ANOVA, and its multivariate counterpart, MANOVA. The difference is in the number of outcome variables. Often, complex research questions entail numerous outcome measures. If these measures are logically related, such as subscales of a given instrument, MANOVA is the better choice; separating the subscales into separate analyses would destroy the whole of the instrument. If the outcome measures are not logically related, putting all measures together in a single MANOVA is indefensible; separate ANOVAs should be used.

If the researcher selects ANOVA, the rationale should not be based on fear of MANOVA. In discussing the results of Fong and Malone (1994), Schneider (2002) noted, "The most common data analysis errors ... seriously ignored the issue of dependent variable interdependence by attempting to analyze data using ... multiple univariate means" (p. 6). There was a time when researchers computed multiple ANOVAs because computer technology did not allow for easy computation of MANOVA; that constraint no longer applies. Tabachnick and Fidell (2007) is again a useful resource for both conducting and interpreting multivariate analyses, including MANOVA. The text includes examples of SPSS and SAS output as well as examples of written reporting.
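To make the univariate/multivariate distinction concrete, here is a minimal sketch, not drawn from the editorial's own toolkit, of how a single MANOVA on two logically related outcomes might be run in Python with statsmodels; the data, the grouping variable `group`, and the subscale names `anxiety` and `depress` are all hypothetical.

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data: one grouping variable and two logically related subscale scores.
df = pd.DataFrame({
    "group":   ["tx", "tx", "tx", "ctl", "ctl", "ctl", "wait", "wait", "wait"],
    "anxiety": [10, 12, 11, 15, 16, 14, 13, 12, 15],
    "depress": [ 8,  9,  7, 13, 12, 14, 10, 11, 12],
})

# A single MANOVA keeps the two correlated outcomes in one omnibus test,
# rather than inflating Type I error with two separate ANOVAs.
fit = MANOVA.from_formula("anxiety + depress ~ group", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, Hotelling-Lawley trace, Roy's root
```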

In the case of MANOVA significance, researchers should use multivariate post hoc analyses (such as Hotelling's T²) if the post hoc interest is in the independent variables, and descriptive discriminant analysis (DDA) if the post hoc interest is in the relative contribution of MANOVA outcome variables. Even though ANOVA results are available as a part of MANOVA output, I suggest not using ANOVA, a univariate analysis, to try to understand the results of MANOVA, a multivariate analysis. ANOVA cannot preserve the interrelationship among MANOVA outcome variables because it can only examine each outcome separately, ignoring the interrelatedness that is tested in MANOVA (Schneider, 2002).
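Continuing the hypothetical example, the following rough sketch illustrates the descriptive discriminant analysis idea: fit a linear discriminant analysis on the MANOVA outcomes, then inspect structure coefficients (correlations between each outcome and the discriminant scores) to gauge each outcome's relative contribution to group separation. It uses scikit-learn for convenience and is only an approximation of a full DDA as treated in Schneider (2002).

```python
import numpy as np
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Same hypothetical outcomes as in the MANOVA sketch above, now examined post hoc.
df = pd.DataFrame({
    "group":   ["tx", "tx", "tx", "ctl", "ctl", "ctl", "wait", "wait", "wait"],
    "anxiety": [10, 12, 11, 15, 16, 14, 13, 12, 15],
    "depress": [ 8,  9,  7, 13, 12, 14, 10, 11, 12],
})
X = df[["anxiety", "depress"]].to_numpy(dtype=float)
y = df["group"].to_numpy()

lda = LinearDiscriminantAnalysis()
scores = lda.fit(X, y).transform(X)   # discriminant-function scores

# Structure coefficients: correlation of each outcome with the first
# discriminant function, read as its relative contribution to group separation.
structure = {name: np.corrcoef(X[:, j], scores[:, 0])[0, 1]
             for j, name in enumerate(["anxiety", "depress"])}
print({name: round(r, 2) for name, r in structure.items()})
```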

SAMPLE SIZE, EFFECT SIZE, AND POWER

From the outset of a quantitative study, the conscientious researcher considers issues of sample size, realizing that the sample will affect the power of the study. If the sample is too small, otherwise significant results will appear nonsignificant (a Type II error). In considering what sample size will be adequate, the researcher must also anticipate the effect size, that is, the magnitude of the effect on a particular measure as it departs from chance in the population associated with the sample studied. Though he discourages the use of rules of thumb in place of calculated effect sizes, Cohen (1992) realized that behavioral researchers needed a less technical guide for determining effect size and, by extension, the sample sizes suitable for achieving adequate power (generally defined as 1 - β = .80). He noted that a medium effect size "approximates the average size of observed effects in various fields" (p. 156). The Cohen article is an easy-to-use resource for determining adequate sample size for a number of univariate analyses, including ANOVA. Huck (2004) also offers an easy-to-read discussion of effect size and power. A more detailed discussion of power analysis, including power for given multivariate analyses, appears in Cohen (1988). Tabachnick and Fidell (2007) also offer sample size suggestions for achieving sufficient power in multivariate studies.
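Cohen's (1992) tables can also be reproduced computationally. The sketch below, which assumes a three-group one-way ANOVA design and Cohen's conventional medium effect size of f = .25, uses the statsmodels power module (my own illustration, not a resource named above) to solve for the total sample size needed for power of .80 and, conversely, for the power achieved with a smaller sample.

```python
from statsmodels.stats.power import FTestAnovaPower

power_calc = FTestAnovaPower()

# Total N needed to detect a medium effect (Cohen's f = .25) in a
# three-group one-way ANOVA at alpha = .05 with power = .80
# (roughly 52 participants per group in Cohen's 1992 table).
n_total = power_calc.solve_power(effect_size=0.25, k_groups=3,
                                 alpha=0.05, power=0.80)
print(round(n_total))

# Conversely: the power actually achieved with only 60 participants in total.
achieved = power_calc.solve_power(effect_size=0.25, k_groups=3,
                                  alpha=0.05, nobs=60)
print(round(achieved, 2))
```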

In situations where it is not possible to obtain an adequate sample, the researcher can use nonparametric procedures instead of parametric procedures like ANOVA. Though "nonparametric" literally means "without parameters," in practice these procedures are free of the distributional assumptions, and hence many of the sample size requirements, that parametric tests carry. Nonparametric procedures are also suitable for data measured at the lower levels of measurement (nominal and ordinal) rather than the higher levels (interval and ratio). SPSS offers several nonparametric options, the best known of which is the chi-square test. Moreover, because nonparametric statistics use basic mathematics, the researcher with only a moderate background in statistics can feasibly calculate them by hand. I encourage researchers to forgo the fear of hand calculation and investigate the idea of "doing it yourself." Conover (1999) offers a variety of nonparametric tests, complete with examples.
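In the spirit of "doing it yourself," here is a brief sketch of a chi-square test of independence computed by hand and then checked against SciPy; the 2 x 2 table of counts is hypothetical.

```python
import numpy as np
from scipy.stats import chi2, chi2_contingency

# Hypothetical 2 x 2 table: counseling modality (rows) by client improvement (columns).
observed = np.array([[18, 12],
                     [ 9, 21]])

# By hand: expected counts, the chi-square statistic, and its p value.
row_totals = observed.sum(axis=1, keepdims=True)
col_totals = observed.sum(axis=0, keepdims=True)
expected = row_totals * col_totals / observed.sum()
chi_sq = ((observed - expected) ** 2 / expected).sum()
dof = (observed.shape[0] - 1) * (observed.shape[1] - 1)
p_value = chi2.sf(chi_sq, dof)
print(round(chi_sq, 2), round(p_value, 4))

# The same test via SciPy; correction=False turns off the Yates continuity
# correction so the result matches the hand calculation exactly.
stat, p, df_, _ = chi2_contingency(observed, correction=False)
print(round(stat, 2), round(p, 4))
```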

If a study yields nonsignificant results, the researcher should provide evidence that the study had sufficient power. Power is the ability to detect an effect when one is present. If a study yields significant results, power was by definition sufficient; in the case of nonsignificance, there are two possibilities: (1) the nonsignificant finding is correct, or (2) the power was insufficient to detect an effect that exists. I therefore encourage researchers to address the interrelated issues of sample size, effect size, and power at the outset of a study rather than expend time, effort, and money on a study rendered virtually or completely uninformative by low power.

RESPECT QUALITATIVE RESEARCH

As a research consultant I have too often heard researchers say that they prefer to conduct qualitative rather than quantitative studies. Asked to elaborate, they often do not discuss the merits of qualitative research but instead speak of their fear of conducting a quantitative study. Many were unable to discuss the different types of qualitative studies or the general steps a researcher must take in preparing for and conducting a given type.

I invite researchers to submit for publication rigorously conducted qualitative studies on topics suited to JMHC. Furthermore, I encourage researchers to conduct qualitative studies for the answers they might provide to specific research questions, not simply to escape the fear of statistics. It is beyond the scope of this editorial to examine the numerous types of qualitative studies, but texts like Bogdan and Biklen's (1998) survey of qualitative research can provide a starting point for the researcher who wishes to understand qualitative methodology fully.

CONCLUDING REMARKS

Social learning theory tells us that a positive aspect of experience is its potential transferability. The points I have discussed here are based upon my years as research student, instructor, consultant, and reviewer. The editorial is not exhaustive, but its contents represent a core of research competency. I am convinced that by benefiting from my experience and the experiences of the researchers cited, researchers can forgo the heartache of learning firsthand what not to do when conducting research and submitting manuscripts.

I look forward to my tenure as JMHC's associate editor-research, and I wish JMHC readers well with all their research undertakings.

REFERENCES

American Psychological Association. (2001). Publication manual of the American Psychological Association (5th ed.). Washington, DC: Author.

Bogdan, R. C., & Biklen, S. K. (1998). Qualitative research for education: An introduction to theory and methods. Boston: Allyn & Bacon.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum.

Cohen, J. (1992). A power primer. Psychological Bulletin, 112, 155-159.

Conover, W. J. (1999). Practical nonparametric statistics (3rd ed.). New York: John Wiley and Sons.

Fong, M. L., & Malone, C. M. (1994). Defeating ourselves: Common errors in counseling research. Counselor Education and Supervision, 33, 356-362.

Huck, S. W. (2004). Reading statistics and research (4th ed.). Boston: Allyn & Bacon.

Kline, P. (1994). An easy guide to factor analysis. New York: Routledge.

Kline, W. B., & Farrell, C. A. (2005, March 1). Recurring manuscript problems: Recommendations for writing, training, and research [Electronic version]. Counselor Education and Supervision, 45. Retrieved July 25, 2008, from http://www.accessmylibrary.com/coms2/summary_0286-17343045_ITM

Popham, W. J. (2005). Classroom assessment: What teachers need to know. Boston: Allyn & Bacon.

Prieto, L. R. (2005). Research submissions to JMHC: Perspectives from the associate editor. Journal of Mental Health Counseling, 27, 197-204.

Rogers, J. R. (2006). Developing JMHC content-related submission guidelines. Journal of Mental Health Counseling, 28, 283-285.

Schneider, M. K. (2002). A Monte Carlo investigation of the Type I error and power associated with descriptive discriminant analysis as a MANOVA post hoc procedure. Unpublished doctoral dissertation, University of Northern Colorado.

Tabachnick, B. G., & Fidell, L. S. (2007). Using multivariate statistics (5th ed.). Boston: Allyn & Bacon.

Thorndike, R. M. (1997). Measurement and evaluation in psychology and education (6th ed.). Upper Saddle River, NJ: Merrill.

Mercedes K. Schneider, Ph.D., currently teaches for the St. Tammany Parish Public Schools, Louisiana. Email: deutsch29@aol.com.