Expert? What does that mean? Describing the Term "Expert" in Agricultural Communications, Education, Extension, and Leadership Research.

Introduction

The American Association for Agricultural Education National Research Agenda (Roberts, Harder, & Brashears, 2016) is a guide for researchers in the agricultural communications, education, extension, and leadership (ACEEL) disciplines. It was created to assist ACEEL researchers in addressing the complex problems that exist in agriculture. As such, ACEEL researchers are encouraged to design "high quality applied research" (Roberts et al., 2016, p. 7) programs with seven priorities in mind: public and policy maker understanding of agriculture and natural resources; new technologies, practices, and products adoption decisions; sufficient scientific and professional workforce that addresses the challenges of the 21st century; meaningful, engaged learning in all environments; efficient and effective agricultural education programs; vibrant, resilient communities; and addressing complex problems. In the spirit of conducting "high quality applied research" (Roberts et al., 2016, p. 7), researchers in ACEEL disciplines should not only address the research priorities outlined in the agenda, but also the ways in which social science research studies are conducted. Ensuring consistency, transparency, replicability, rigor, and integrity in social science research studies in ACEEL disciplines is a research priority not explicitly stated in the agenda, but arguably implicit to all studies. A research method is a systematic plan for conducting research, which can be quantitative or qualitative in nature (Bryman, 2012). Fraenkel, Wallen, and Hyun (2012) said a research method is a way of "testing ideas in the public arena" (p. 5), so it stands to reason that consistency, transparency, replicability, rigor, and integrity rest in researchers' diligent adherence to the parameters of the chosen research method and would therefore be a standard by which "high quality applied research" (Roberts et al., 2016, p. 7) is evaluated.

Content Analysis and Delphi Study Methods

There are many research methods at ACEEL researchers' disposal (e.g., causal-comparative, case study, experiment). Content analyses and studies using the Delphi method are widely used for researching phenomena that cannot be directly tested or observed and for which consensus or agreement is necessary. In content analysis, the data from the communication (e.g., newspaper article, students' written reflections) are analyzed by coders, either the researchers themselves or people retained by the research team, who have been trained to follow an explicit set of instructions (e.g., a codebook). Clear coding instructions ensure each coder follows the same processes and criteria to achieve an acceptable level of agreement (Bryman, 2012). Similarly, the primary objective of the Delphi method is to build consensus and consistency of opinion from a panel of experts regarding an area of interest or inquiry (Hasson, Keeney, & McKenna, 2000; Winzenried, 1997; Yang, 2003). The Delphi method is based on multiple rounds of questions used to gather responses, with the ultimate purpose of combining the responses into "one useful statement" (Saucier, McKim, & Tummons, 2012, p. 139). In both research methods, external reliability may be established, in part, on the expertise of the coders and panelists (Dalkey, 1969; Krippendorff, 2013; Linstone & Turoff, 1975). Expert coders and panelists are individuals who are chosen because they have specific backgrounds (e.g., educational, cultural) and possess professional proficiency, knowledge, experience, and/or familiarity with the phenomenon under investigation.
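
The literature cited here does not prescribe a particular agreement statistic, but as a minimal sketch of what "an acceptable level of agreement" might mean operationally, two commonly used measures, simple percent agreement and Cohen's kappa, can be computed for two trained coders who applied the same codebook to the same units of content. The coder labels and values below are hypothetical and purely illustrative.

from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Proportion of units on which the two coders assigned the same code.
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    # Agreement corrected for the agreement expected by chance.
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return (p_observed - p_chance) / (1 - p_chance)

# Hypothetical codes two coders assigned to the same ten units of content.
coder_1 = ["manifest", "latent", "latent", "manifest", "latent",
           "manifest", "manifest", "latent", "manifest", "latent"]
coder_2 = ["manifest", "latent", "manifest", "manifest", "latent",
           "manifest", "manifest", "latent", "manifest", "manifest"]

print(f"Percent agreement: {percent_agreement(coder_1, coder_2):.2f}")  # 0.80
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")           # 0.60

Whichever coefficient is chosen, reporting it alongside a description of who the coders were and how they were trained supports the consistency and replicability emphasized throughout this study.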

The term "expert" is defined by Merriam-Webster (2017) as "having, involving, or displaying special skill or knowledge derived from training or experience." Dalkey (1969), who originated the Delphi method, asserted at least 11 people were required to serve on the expert panel in a Delphi study to establish an acceptable level of external reliability. External reliability refers to the extent to which a study can be replicated with similar results to a preceding study (Bryman, 2012). For external reliability to be satisfied, procedures from the preceding study must be followed as closely as possible in the succeeding study, which is why debate exists over whether individuals serving as coders in a content analysis need to be experts. Krippendorff (2013), a leading developer of various content analysis techniques, emphasized the value coders with expert knowledge and experience bring to content analysis. Krippendorff (2013) also encouraged analysts to clearly describe why coders were selected so that future research teams could select coders with experiences and backgrounds similar to those of the original research. Additionally, Krippendorff (2013) recommended researchers select coders who have high cognitive abilities, high familiarity with the phenomenon of interest, and who are accessible in the general population. Potter and Levine-Donnerstein (1999) said the expert standard for coders should be driven by the type of content being examined (e.g., manifest, latent, projective). In cases where the content is projective--that which requires coders to access their pre-existing mental schema to make interpretations and judgments of the meaning of the content--coders who have expertise or a higher level of cognitive ability should be retained (Potter & Levine-Donnerstein, 1999).

Some research professionals have questioned whether calling on experts for content analysis coding is necessary. Bryman (2012) asserted that as long as the content analyst was trained on how to code the content, and inter-coder reliability was established at an acceptable level, anyone could serve as a coder. Similarly, experts may not be readily found in a population (Neuendorf, 2002); therefore, a coding scheme that was only usable by experts would limit the study. To resolve this issue, Neuendorf (2002) recommended researchers design coding schemes that are "usable by a wide variety of coders" (p. 116). Fraenkel et al. (2012) agreed, noting:
For all their study and training, what experts know is still based primarily on what they have learned from reading and thinking, from listening to and observing others, and from their own experiences. No expert, however, has studied or experienced all there is to know in a given field, and thus, even an expert can never be totally sure. All any expert can do is give us an opinion based on what he or she knows, and no matter how much this is, it is never all there is to know (p. 5).


A uniform method for describing the expertise coders and panelists bring to a study could assist ACEEL researchers in choosing the individuals to serve in those roles, while at the same time ensuring consistency, transparency, replicability, rigor, and integrity in research across ACEEL disciplines. If researchers are choosing coders and panelists based on convenience or a nomination, they may be missing the opportunity to have someone participate who can bring greater depth, experience, skill, or content knowledge to a study. Presently, the only way to know what coders or panelists bring to a study is through the researcher's description of their credentials in the methods or procedures. Therefore, investigating the ways ACEEL researchers describe content analysis coders and Delphi study panelists would be beneficial in providing consistency, transparency, replicability, rigor, and integrity in research studies using content analysis and Delphi study methods across ACEEL disciplines.

Statement of the Problem

Currently, a uniform way to quantify expertise does not exist in the ACEEL literature. For this reason, it is possible some ACEEL studies using content analysis and/or Delphi study methods lack consistency, transparency, replicability, rigor, and integrity. If researchers are choosing content analysis coders and Delphi study panelists based on convenience or a nomination, they may be missing the opportunity to have individuals participate who can bring a greater level of expertise to a study. Although not all studies require the contributions of an expert (Bryman, 2012; Fraenkel et al., 2012; Neuendorf, 2002), it is important that the level of expertise a coder or panelist provided to a study is clearly described in the literature so that researchers may replicate the study as precisely as possible in the future. Presently, the only way to know what level of expertise an individual brings to a study is through the researcher's description of the expert in the literature. Investigating the ways ACEEL researchers describe experts and/or the level of expertise content analysis coders and Delphi study panelists contribute to a research study would be beneficial in providing consistency, transparency, replicability, rigor, and integrity in research studies using content analysis and Delphi study methods across ACEEL disciplines. Therefore, the purpose of this study was to describe the ways in which ACEEL researchers using content analysis and Delphi study methods described the qualifications of the individuals who served as expert coders and panelists. This study is the first in a series of studies aimed at creating a tool, model, or system of definitions to serve as an indication of an individual's level of expertise so that expertise may be consistently and accurately reported in all ACEEL research studies.

Literature Review & Conceptual Framework

There is no over-arching definition of an expert or expertise in the ACEEL literature. Therefore, before investigating the ways ACEEL researchers are describing the individuals they are using as content analysts and Delphi study panelists (e.g., experts), it is important to first conceptualize expertise.

Expertise Explained

Expertise is a complex, multifaceted phenomenon researchers have sought to define for decades (Ericsson & Smith, 1991; Goldman, 2015; Herling, 2000; Hoffman, 1998; Weinstein, 1993). As a result, the literature is filled with hundreds of iterations of expertise and the characteristics constituting an expert. In his seminal research on expertise, Ryle (1945) substantiated the categorization of expertise in two ways: epistemic, or knowing that, and performative, or knowing how. Epistemic expertise is an individual's deep understanding of a construct, and performative expertise is an individual's ability to perform a task with impeccable skill and accuracy (Weinstein, 1993). Ericsson and Smith (1991) believed expertise was a product of practicing a skill or studying a body of knowledge--guided by those who are themselves considered to be experts--for a minimum period of 10 years. According to Herling (2000), expertise implies proficiency or a level of knowledge gained from having experience or training in a particular phenomenon, and that proficiency can be recognized or observed by others.

Indeed, expertise is founded in both an individual's knowledge of a subject or issue and the ability to apply certain skills in professional or vocational contexts (Goldman, 2015; Winch, 2010). Scardamalia and Bereiter (1991) hypothesized expert knowledge was a product of striving beyond one's comfort zone:
Experts acquire their vast knowledge resources not by doing what falls comfortably within their competence but by working on real problems that force them to extend their knowledge and competence. That is not only how they become experts, we suggest, but also how they remain experts and avoid falling into ruts worn by repeated execution of familiar routines (pp. 173-174).


Similarly, Camerer and Johnson (1991) asserted an expert is "a person who is experienced in making predictions in a domain and has some professional or social credentials" (p. 196). In terms of defining expertise in relation to cognitive development, Hoffman (1998) said expertise depended upon how the expertise was developed, as well as experts' knowledge structures and reasoning processes. Collins and Evans (2002) asserted expertise exists at three distinct levels: no experience, interactional experience, and contributory experience. Individuals with no experience lack any knowledge of a construct or practice. Those with interactional experience are not skilled practitioners, but they can articulate knowledge of a construct or practice even if they have no personal experience with it. For example, a person may be able to explain the use of a baseball bat even if they have never played the sport. Those with contributory experience, the third level, possess both the high-level knowledge and the performance skills required to weigh in on the science or scholarship of the construct or practice under examination.

Schon (1984) believed professionals use a form of tacit experiential knowledge he called knowing-in-action. Reflection is a competency necessary to evaluate and learn from experience, which aids in the acquisition of expertise (Schon, 1984). Reflective proficiency is a product of reflecting in action and reflecting on action. Therefore, experts reflect in the moments when events are occurring, as well as retrospectively using knowledge and experience gleaned from previous contexts and situations (Schon, 1984; Winch, 2010).

In summary, expertise is dynamic, domain specific, and characterized according to an individual's level of knowledge, experience, and problem-solving ability. Expertise can be used as an indicator of an individual's ability to effectively serve as a coder in an analysis of content or on a panel in a Delphi study. Researchers' choice of coders and panelists could be a reflection of their commitment to following the guidelines of their selected research method and to producing results that are consistent, transparent, replicable, rigorous, and grounded in academic integrity.

In the spirit of producing "high quality applied research" (Roberts et al., 2016, p. 7), researchers in ACEEL disciplines should examine the ways research is conducted. Ensuring consistency, transparency, replicability, rigor, and integrity is crucial in all research studies. As such, the conceptual framework of this study was established in the previous scholarship of ACEEL research professionals who have analyzed the premier ACEEL journals (Edgar, Edgar, Briers, & Rutherford, 2008; Edgar & Rutherford, 2011) in the following areas: curriculum (Cannon, Specht, & Buck, 2016; Shinn, Wingenbach, Briers, Lindner, & Baker, 2009); research themes and trends (Edgar, Rutherford, & Briers, 2009; Naile, Robertson, & Cartmell, 2010; Rodriguez & Evans, 2016; Williford, Edgar, Rucker, & Estes, 2016); prolific authors (Edgar et al., 2008; Harder & Roberts, 2006); theories, models, and methodologies used (Baker & King, 2016; Edgar, Rutherford, & Briers, 2009); and cited literature (Edgar & Cox, 2010; Edgar & Rutherford, 2011). Conceptually, this study focused on the ways ACEEL researchers describe the qualifications of the coders and panelists retained for studies that employ content analysis or the Delphi method.

Method

As with all research endeavors, choosing a method that is best suited to the line of inquiry is crucial to eliciting useful results. Although there were a number of methods at my disposal (e.g., grounded theory, content analysis, case study), I used a qualitative descriptive study design. Qualitative description has been identified as appropriate for research that is explanatory in nature, for answering research questions focused on phenomena that are not commonly understood, or when a straightforward description of a phenomenon is desired (Sandelowski, 2000). Researchers using qualitative description generally draw from a naturalistic perspective, which contends reality is best understood when examined contextually and in everyday terms (Sandelowski, 2000). The naturalistic paradigm comprises five fundamental principles: (a) certainties are multiple, constructed, and holistic; (b) the knower and the known are interactive and inseparable; (c) only time- and context-bound working hypotheses are possible; (d) all entities are in a state of mutual simultaneous shaping; and (e) inquiry is value bound. Further, the researcher in naturalistic inquiry serves as the research instrument used to study the phenomena because nonhuman instruments are unable to comprehend all of the certainties they can encounter; humans, however, can interpret and understand the meaning and bias that may exist in text (Lincoln & Guba, 1985).

I reviewed studies published in the Journal of Applied Communications, Journal of Agricultural Education, Journal of International Agricultural and Extension Education, Journal of Leadership Education, Journal of Extension, and North American Colleges and Teachers of Agriculture Journal from 2007 to 2017. These journals were selected because they comprise the "premier journals identified in the agricultural education discipline" (Edgar & Rutherford, 2011, p. 2). These years were chosen because electronic versions of the journals for these years were available online. Thus, keywords could be easily input into the online search function for each journal, making the journals "accessible" (Williford et al., 2016, p. 66). Criteria for inclusion in the population were publication in an ACEEL premier journal from 2007 to 2017 and use of content analysis or Delphi study methods to gather data. Potential articles were obtained by accessing the online journal archives: newprairiepress.org/jac/, www.jae-online.org, www.aiaee.org, www.joe.org, www.journalofleadership.org, www.nactateachers.org.

I conducted two separate keyword searches--first using the keywords content analysis and then using the word Delphi. Combined database searches across all journals yielded a population of 382 articles that contained the keywords content analysis and 141 articles that included the keyword Delphi. The paragraphs in which the keywords appeared were reviewed, and articles that contained the keywords but did not appear to use content analysis or the Delphi method to gather data were eliminated. Next, I read the method sections of the remaining articles and removed any articles that did not use content analysis or Delphi study methods. For example, in some articles the authors mentioned content analysis or Delphi study as methods they considered using but did not select; in other instances, the keywords appeared in the references section of the article and not in the methods section. Therefore, 126 articles using content analysis and 56 articles using Delphi methods comprised the sample for this study. A breakdown of the number of articles included in this study, by journal, is displayed in Table 1.
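
Purely as an illustration of the inclusion logic described above (the screening in this study was done by reading each article, and the records and field names below are hypothetical), an article was retained only when a keyword hit corresponded to a method actually used to gather data.

# Hypothetical records illustrating the two-stage screening: a keyword hit alone
# was not sufficient; the named method also had to be used to gather the data.
articles = [
    {"title": "Article A", "keyword": "content analysis", "method_used": "content analysis"},
    {"title": "Article B", "keyword": "content analysis", "method_used": None},       # keyword only in the references
    {"title": "Article C", "keyword": "Delphi", "method_used": "Delphi"},
    {"title": "Article D", "keyword": "Delphi", "method_used": "survey"},              # Delphi considered but not selected
]

sample = [article for article in articles if article["method_used"] == article["keyword"]]
for article in sample:
    print(article["title"], "-", article["keyword"])  # Articles A and C are retained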

Because the focus of this paper was to describe the ways in which ACEEL researchers described the qualifications of the coders and panelists used in their content analysis and Delphi studies, all articles were reviewed and the following items were documented: journal, study title, author(s), method, identification of who coded the data, a description of the coders' and panelists' qualifications, and identification of the literature used to support the researchers' selection of coders and panelists.

Further, my inductive analysis involved a two-cycle coding process (Saldana, 2009). First-cycle coding was descriptive and was used to extract the verbiage that described coders' and panelists' qualifications from the methods section of each journal article. Focused coding was used for the second cycle to elicit a deeper understanding of the data corpus. Focused coding was initiated during the peer review process, which was designed to help establish dependability. During the peer review, participants served as a system of checks and balances to ensure dependability, consistency, and quality in the coding (Creswell, 2007; Lincoln & Guba, 1985; Merriam & Tisdell, 2015). I provided each peer reviewer my codebook. Using my coding instructions, each peer reviewer randomly selected articles from each journal and checked my coding records to ensure that I had coded the data correctly and reported the descriptions accurately.

A doctoral candidate and a doctoral student in a college of agriculture and life sciences at a Southern land-grant institution participated in the peer review. In addition to participants' academic training in research principles and methods, each participant had worked in industry for more than 15 years before attending graduate school. Therefore, each peer reviewer brought a unique blend of academic and industry knowledge, skill, and problem-solving abilities to the peer review process. Inconsistencies would have been discussed as a group and rectified as necessary. However, there were no inconsistencies between my coding and the peer reviewers' coding, which resulted in consensual validation. Consensual validation is often the product of a peer review when the opinion of others not involved in the initial research process is sought and agreement that the description, interpretation, and evaluation of the data among them is reached (Creswell, 2014). My reflection journal containing process notes (i.e., methodological notes, trustworthiness notes, and audit trail notes) established confirmability (Lincoln & Guba, 1985).

Findings

From 2007 to 2017, researchers indicated using content analysis to collect data in 126 articles and the Delphi method in 56 articles. These articles came from the premier agricultural journals (Edgar & Rutherford, 2011): Journal of Applied Communications, Journal of Agricultural Education, Journal of International Agricultural and Extension Education, Journal of Extension, Journal of Leadership Education, and North American Colleges and Teachers of Agriculture Journal. In none of the content analysis articles published in JIAEE, JOLE, and NACTA that were analyzed for this study did the researchers provide an explanation of the coders' qualifications to perform a content analysis. Similarly, 92% (n = 49) of the articles published in JOE, 80% (n = 32) of the articles published in JAC, and 60% (n = 9) of the articles published in JAE in which researchers reported using the content analysis method did not provide an explanation of the coders' qualifications. In summary, 86% (n = 108) of the content analysis articles analyzed for this study did not include a description of the coders' qualifications. In contrast, 100% (N = 56) of the articles reviewed in the six premier journals that used the Delphi study method contained a description of the panelists' qualifications and/or the criteria used to select the people who served on the panel. A breakdown of the percentage of articles lacking a description of coders' and panelists' qualifications, by journal, is presented in Table 2.

Furthermore, 96% (n = 121) of the articles using the content analysis method did not contain a citation (e.g., Krippendorff, 2013; Neuendorf, 2002) that would either support or refute the inclusion or omission of a description of coders' qualifications. Of the articles using the Delphi study method, 79% (n = 44) did not include a citation that supported the researchers' selection of individuals to serve on the panel of experts (e.g., Dalkey, 1969; Linstone & Turoff, 1975). A breakdown of articles lacking a citation to support the researchers' selection of coders and panelists based on their qualifications, by journal, is presented in Table 3.
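
The summary figures reported above follow directly from the per-journal counts in Tables 1-3. As a simple arithmetic check (not part of the study's procedures, written here purely for illustration), the sketch below reproduces the whole-percent values for content analysis articles lacking a description of coders' qualifications, along with the two citation totals.

# Arithmetic check of the findings: counts of articles lacking a description of
# coders' qualifications (Table 2) against the totals per journal (Table 1).
lacking_description = {  # journal: (articles lacking a description, total content analysis articles)
    "JAC": (32, 40), "JAE": (9, 15), "JIAEE": (9, 9),
    "JOE": (49, 53), "JOLE": (4, 4), "NACTA": (5, 5),
}

def percent(part, whole):
    return round(100 * part / whole)  # whole-percent rounding, as reported in the findings

for journal, (lacking, total) in lacking_description.items():
    print(f"{journal}: {percent(lacking, total)}% (n = {lacking})")

total_lacking = sum(lacking for lacking, _ in lacking_description.values())
total_articles = sum(total for _, total in lacking_description.values())
print(f"TOTAL: {percent(total_lacking, total_articles)}% (n = {total_lacking})")  # 86% (n = 108)

# Citation totals reported for Table 3: 121 of 126 content analysis articles and
# 44 of 56 Delphi articles lacked a citation supporting coder/panelist selection.
print(f"Citations lacking (content analysis): {percent(121, 126)}%")  # 96%
print(f"Citations lacking (Delphi): {percent(44, 56)}%")              # 79%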

Examples of the qualification descriptions from the articles that provided a description of the coders' qualifications included:

Journal of Applied Communications

"Our research team was comprised of faculty members in agricultural communication programs located in the United States with varying years of experience in academics ranging from eight to less than one. All team members have been involved in developing coursework and curricula to some degree," (Cannon et al., 2016, p. 10).

"The primary researcher, a master's student in agricultural communications, coded every page. A co-coder, also a master's student in agricultural communications, was selected to code 20% of the pages to ensure inter-rater agreement," (Rogers, Rumble, & Lundy, 2016, p. 37).

Journal of Agricultural Education

"...two agricultural communications graduate students in the Department of Agricultural Education, Communications, and Technology at the University of Arkansas," (Pennington, Calico, Edgar, Edgar, & Johnson, 2015, p. 33).

"The researchers' professional backgrounds were beneficial during the content analysis process. One researcher had taught a preservice course that included instructional planning, and the other researcher had recently student taught," (Greiman & Bedtke, 2008, p. 51).

Journal of Extension

"A panel of expert reviewers made up of five Extension professionals, including 4-H and Family and Consumer Health Science agents, analyzed the data to identify emerging themes through content analysis," (Peterson & McDonald, 2009, para. 6).

"Two researchers, who were knowledgeable about recreation, fisheries, and related resource management issues, coded the data," (Woosnam, Jodice, Von Harten, & Rhodes, 2008, para. 11).

Conclusions & Discussion

The majority of studies in the premier agricultural journals that noted using content analysis did not describe the qualifications used to select coders or the credentials that would make the coders qualified to code the data. Researchers were also inconsistent in citing literature to support the inclusion or exclusion of a description of coders' and panelists' qualifications. Based on these findings, ACEEL researchers, agricultural education journal editors, and research professionals tasked with performing journal article reviews should consider how including a description of coder credentials could enhance the consistency, transparency, replicability, rigor, and integrity of ACEEL research. According to Roberts et al. (2011), "Researchers should clearly explain data collection processes and procedures for coding and analyzing data" (p. 4), which includes a clear description of the qualifications of the individuals who coded the data. In many instances, an article may have multiple authors, but only one or two of the authors participated in coding. In other instances, individuals not at all affiliated with implementing the study may have coded the data, yet their background, skills, and problem-solving abilities relevant to the study are not described. Some researchers believe that as long as coders have the cognitive ability to complete training and follow a set of instructions, often required in a quantitative content analysis, they are suitable coders (Bryman, 2012). Indeed, cognitive ability is important. However, researchers may not be able to account for such things as coding fatigue, poor work ethic, negative attitude, and/or inconsistent adherence to the coding instructions after the interrater reliability coefficient has been calculated. For content analysis studies to demonstrate the same rigor as other research methods, researchers should give greater consideration to the level of expertise the coders bring to the study and thoroughly describe that level to increase transparency and replicability.

In contrast, all of the researchers whose Delphi studies were analyzed described the qualifications a potential panelist needed to possess to be suitable to serve on the panel. Perhaps this is because the seminal authors (e.g., Dalkey, 1969; Linstone & Turoff, 1975) made it very clear that panels in a Delphi study must be comprised of experts to reach consensus, whereas selecting individuals with expertise is only a recommendation for researchers to consider when selecting coders for a content analysis.

There are several likely reasons researchers are not describing content analysis coders' qualifications: (a) providing a description of a content analysis coder's qualifications is not a fundamental requirement of the methodology; (b) researchers may not be choosing coders who have experience in the phenomenon under investigation; (c) it was determined having experience in the phenomenon under investigation would not enhance the coder's ability to adequately code the data; (d) researchers may rely on convenience or their ability to delegate coding tasks to those over whom they have authority (e.g., undergraduate and graduate students); and (e) researchers may believe expertise is implied or implicit in the very nature of conducting research--those who conduct research are typically working toward an advanced degree or have already achieved advanced degrees. Further, research team members' names and titles are included in the journal article either at the beginning or the end of the manuscript. Perhaps researchers believe the title (e.g., assistant professor, graduate student) is suggestive of expertise. This belief is erroneous because it does not consider the differences that exist in coders' levels of skill, cognitive ability, knowledge, and prior experience. For example, a traditional undergraduate student entering a master's program immediately following graduation would not possess the same level of prior experience or knowledge as a person entering a master's program after spending several years, or even decades, in industry. Yet, both individuals share the same "graduate student" title. Similarly, an assistant professor who has the cognitive ability and knowledge of a particular subject may not possess the same level of prior experience or skill in certain subject matter as an individual returning to school after spending decades in industry. For example, it is possible that some faculty possess interactional experience (e.g., not skilled practitioners but able to articulate knowledge; Collins & Evans, 2002) and some graduate students possess contributory experience (e.g., high-level knowledge and performance skills; Collins & Evans, 2002), which is one reason relying on an individual's title to ascertain expertise is problematic. The assumption can be made that the person with the more prestigious title has more expertise than the individual with a title that might imply they are a novice, when in fact that person could be considered an expert in certain contexts. Including a more complete description of coders' credentials could increase transparency and alleviate the potential for misunderstandings, assumptions, or confusion.

In light of the findings of this study, it would be advantageous to consider possible reasons why researchers are not consistently describing the qualifications their content analysis coders bring to a study. Do they not deem providing a description of coders' qualifications important? The case could be made that describing the qualifications of a coder is of equal importance to justifying the methodology choice, describing the method itself, comparing the method to other methods that could have been used in the study, or providing an interrater reliability coefficient. Similarly, are there reasons researchers are not consistently citing the literature to support their decision to provide an adequate description of the coders' qualifications? It is possible the omission of a citation or a description of expertise is due to space limitations in some journals. It could also be due to cultural differences in the research training academics receive in different parts of the world. It is also possible coders were selected based on availability, convenience, or to provide the coder with research experience--all acceptable reasons, but a citation would provide support for those choices, as well as indicate to the audience whose methodological recommendation (e.g., Krippendorff, 2013; Neuendorf, 2002) is being followed. Consistent inclusion of a citation regarding coders' expertise in content analyses, similar to what many research professionals provide when describing their choices for Delphi study panelists, would enhance the consistency, transparency, replicability, rigor, and integrity of the research published in the premier agricultural education journals.

The findings provide reason to hold researchers in ACEEL disciplines accountable for not providing a citation to support their decisions and their selection of certain individuals to serve as coders or panelists in a study that employs content analysis or Delphi study methods. However, journal editors and peer reviewers, who are the gatekeepers tasked with deciding which manuscripts are suitable for publishing, share in the responsibility of ensuring consistency, transparency, replicability, rigor, and integrity are ever present.

Recommendations

Based on the findings of this study, ACEEL researchers are encouraged to thoroughly describe the qualifications of their content analysis coders and should look to the ways researchers are describing the experts chosen for a Delphi study as an example of the level of detail to include. This will:

(a) Aid researchers in the decision-making process for future replication of the study.

(b) Improve consistency in the published work across all ACEEL disciplines.

(c) Ensure rigor by establishing that the coders were fully able to generate data appropriate for the level of analysis required to answer the research question.

(d) Provide transparency with the intention of making the research process as clear, accessible, understandable, and replicable as possible.

(e) Establish integrity, as much of the misperception that surrounds social science research stems from researchers who veil their methods in secrecy and academic jargon.

(f) Ensure researchers include the relevant literature supporting their decision not to include a description of coders' qualifications.

Further, researchers using the Delphi study method should continue to provide detailed descriptions of the qualifications their panelists bring to a research study, but be more consistent about including an appropriate citation. All researchers who use content analysis and Delphi study methods should be cognizant of the impact their choices of coders and panelists truly have on the study results.

Recommendations for future research include opening up the discussion of expertise to a broader group of ACEEL researchers. The insight and opinions of a broader group of ACEEL researchers on the topic of expertise would be beneficial in generating an over-arching protocol specific to the ways ACEEL researchers report coders and panelists' qualifications in studies using content analysis and Delphi study methods. For example, it is possible that coders in studies using content analysis are being chosen based on a level of skill or knowledge possessed, but researchers may not be providing a complete description in their manuscripts because of space limitations in some journals, or because journal editors and peer reviewers have not set a consistent standard of detail needed to ensure publication.

References

Baker, L. M., & King, A. E. (2016). Let's get theoretical: A quantitative content analysis of theories and models used in the Journal of Applied Communications. Journal of Applied Communications, 100(1), 51-63. doi.org/10.4148/1051-0834.1021

Bryman, A. (2012). Social research methods. New York, NY: Oxford University Press.

Buriak, P., & Shinn, G. C. (1993). Structuring research for agricultural education: A national Delphi involving internal experts. Journal of Agricultural Education, 32(2), 31-36. doi.org/10.5032/jae.1993.02031

Camerer, C. F., & Johnson, E. J. (1991). The process-performance paradox in expert judgment: How can experts know so much and predict so badly? In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 195-217). Cambridge, England: Cambridge University Press.

Cannon, K. J., Specht, A. R., & Buck, E. B. (2016). Agricultural communications: A national portrait of undergraduate courses. Journal of Applied Communications, 100(1), 6-16. doi.org/10.4148/1051-0834.1018

Collins, H. M., & Evans, R. (2002). The third wave of science studies: Studies of expertise and experience. Social Studies of Science, 32(2), 235-296.

Creswell, J. W. (2007). Qualitative inquiry and research design: Choosing among five approaches. Thousand Oaks, CA: Sage Publications, Inc.

Dalkey, N. C. (1969). The Delphi method: An experimental study of group opinion (No. RM-5888-PR). Santa Monica, CA: Rand Corporation.

Edgar, L. D., & Cox, C. (2010). Citation structure: An analysis of the literature cited in the Journal of Leadership Education from 2002 to 2006. Journal of Leadership Education, 9(1), 87-104.

Edgar, L. D., Edgar, D. W., Briers, G. E., & Rutherford, T. (2008). Research themes in agricultural education: Future gap analysis of the National Research Agenda. Journal of Southern Agricultural Education Research, 58(1). Retrieved January 28, 2018, from http://pubs.aged.tamu.edu/jsaer/pdf/Vol58/58-01-061.pdf

Edgar, L., & Rutherford, T. (2011). Citation structure: An analysis of the literature cited in the Journal of Applied Communications from 1997 to 2006. Journal of Applied Communications, 95(2), 34-47. doi.org/10.4148/1051-0834.1176

Edgar, L. D., Rutherford, T., & Briers, G. E. (2009). Research themes, authors, and methodologies in the Journal of Applied Communications: A ten-year overview. Journal of Applied Communications, 93(1&2), 21-34. doi.org/10.4148/1051-0834.1201

Ericsson, K. A., & Smith, J. (Eds.). (1991). Toward a general theory of expertise: Prospects and limits. Cambridge, England: Cambridge University Press.

Expert. (2017). In Merriam-Webster.com. Retrieved from https://www.merriam-webster.com/dictionary/expert

Fraenkel, J. R., Wallen, N. E., & Hyun, H. H. (2012). How to design and evaluate research in education. New York, NY: McGraw-Hill.

Goldman, A. I. (2015). Expertise. Topoi, 37(3). doi.org/10.1007/s11245-016-9410-3

Greiman, B. C., & Bedtke, M. A. (2008). Examining the instructional planning process taught in agricultural education teacher preparation programs: Perspectives of university faculty. Journal of Agricultural Education, 49(4), 47-59.

Harder, A., & Roberts, T. G. (2006). Seeing the forest for the trees: Authorship in the Journal of Agricultural Education. Poster session presented at the Southern Region Agricultural Education Meeting. Orlando, FL.

Hasson, F., Keeney, S., & McKenna, H. (2000). Research guidelines for the Delphi survey technique. Journal of Advanced Nursing, 32(4), 1008-1015.

Herling, R. W. (2000). Operational definitions of expertise and competence. Advances in Developing Human Resources, 2(1), 8-21.

Hoffman, R. R. (1998). How can expertise be defined? Implications of research from cognitive psychology. Exploring expertise. New York, NY: Macmillan.

Krippendorff, K. (2013). Content analysis: An introduction to its methodology. Thousand Oaks, CA: Sage Publications, Inc.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications, Inc.

Linstone, H. A., & Turoff, M. (Eds.). (1975). The Delphi method: Techniques and applications. Reading, MA: Addison-Wesley.

Merriam, S. B., & Tisdell, E. J. (2015). Qualitative research: A guide to design and implementation. San Francisco, CA: Jossey-Bass.

Naile, T. L., Robertson, J. T., & Cartmell, D. (2010). Examining JAC: An analysis of the scholarly progression of the Journal of Applied Communications. Journal of Applied Communications, 94(1&2), 49-60. doi.org/10.4148/1051-0834.1186

Neuendorf, K. A. (2002). The content analysis guidebook. Thousand Oaks, CA: Sage Publications, Inc.

Pennington, K., Calico, C., Edgar, L. D., Edgar, D. W., & Johnson, D. M. (2015). Knowledge and perceptions of visual communications curriculum in Arkansas secondary agricultural classrooms: A closer look at experiential learning integrations. Journal of Agricultural Education, 56(2), 27-42. doi: 10.5032/jae.2015.02027

Peterson, B., & McDonald, D. A. (2009). A focused interview study of 4-H volunteer performance appraisals. Journal of Extension, 47(5), 1-7.

Potter, W. J., & Levine-Donnerstein, D. (1999) Rethinking validity and reliability in content analysis. Journal of Applied Communication Research, 27(3), 258-284. doi:10.1080/00909889909365539

Roberts, T. G., Barrick, R. K., Dooley, K. E., Kelsey, K. D., Raven, M. R., & Wingenbach, G. J. (2011). Enhancing the quality of manuscripts submitted to the Journal of Agricultural Education. Journal of Agricultural Education, 52(3), 1-5.

Roberts, T. G., Harder, A., & Brashears, M.T. (Eds). (2016). American Association for Agricultural Education national research agenda: 2016-2020. Gainesville, FL: Department of Agricultural Education and Communication.

Rodriguez, L., & Evans, J. F. (2016). Coming of age: How JAC is reflecting a national research agenda for communications in agriculture, natural resources, and life and human sciences. Journal of Applied Communications, 100(1), 29-50. doi.org/10.4148/1051-0834.1020

Rogers, T. M., Rumble, J. N., & Lundy, L. K. (2016). Promoting commodities through comic books: A framing analysis of the Captain Citrus Campaign. Journal of Applied Communications, 100(4), 33-44. doi.org/10.4148/1051-0834.1240

Ryle, G. (1945). Knowing how and knowing that: The presidential address. Proceedings of the Aristotelian Society, 46(1), 1-16.

Saldana, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage Publications, Inc.

Sandelowski, M. (2000). Focus on research methods: Whatever happened to qualitative description? Research in Nursing & Health, 23(4), 334-340.

Saucier, P. R., McKim, B. R., & Tummons, J. D. (2012). A Delphi approach to the preparation of early-career agricultural educators in the curriculum area of agricultural mechanics: Fully qualified and highly motivated or status quo? Journal of Agricultural Education, 53(1), 136-149.

Scardamalia, M., & Bereiter, C. (1991). Literate expertise. In K. A. Ericsson & J. Smith (Eds.), Toward a general theory of expertise: Prospects and limits (pp. 172-194). Cambridge, England: Cambridge University Press.

Schon, D. A. (1984). The reflective practitioner: How professionals think in action. United States of America: Basic Books.

Shinn, G. C., Wingenbach, G. J., Briers, G. E., Lindner, J. R., & Baker, M. (2009). Forecasting doctoral-level content in international agricultural and extension education--2010: Viewpoint of fifteen engaged scholars. Journal of International Agricultural and Extension Education, 16(1), 57-71.

Stewart, J., Lambert, M. D., Ulmer, J. D., Witt, P. A., & Carraway, C. L. (2017). Discovering quality in teacher education: Perceptions concerning what makes an effective cooperating teacher. Journal of Agricultural Education, 58(1), 280-299.

Weinstein, B. D. (1993). What is an expert? Theoretical Medicine and Bioethics, 14(1), 57-73.

Williford, B. D., Edgar, L. D., Rucker, K. J., & Estes, S. (2016). Literature themes from five decades of agricultural communications publications. Journal of Applied Communications, 100(1), 80-92.

Winch, C. (2010). Dimensions of expertise: A conceptual exploration of vocational knowledge. New York, NY: Continuum International Publishing Group.

Winzenried, A. (1997). Delphi studies: The value of expert opinion bridging the gap--data to knowledge. Paper presented at the annual conference of the International Association of School Librarianship, Vancouver, BC, Canada.

Woosnam, K., Jodice, L., Von Harten, A., & Rhodes, R. (2008). Investigating marine recreational fishing stakeholders' perspectives across three South Carolina coastal regions: The first step towards collaboration. Journal of Extension, 46(2).

Yang, Y. N. (2003). Testing the stability of experts' opinions between successive rounds of Delphi studies. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.

Lori M. Costello

Texas A&M University

Tracy Rutherford

Texas A&M University - College Station

Lori Costello, Ph.D. is a curriculum development program assistant in the Department of Agricultural Leadership, Education, and Communications at Texas A&M University.

Tracy Rutherford, Ph.D. is a professor and associate department head for graduate and undergraduate programs in the Department of Agricultural Leadership, Education, and Communications at Texas A&M University.

https://doi.org/10.4148/1051-0834.2211
Table 1
Summary of Articles Included in this Study by Journal

Method            JAC  JAE  JIAEE  JOE  JOLE  NACTA  TOTAL

Content Analysis  40   15    9     53   4     5      126
Delphi             4   23   11     10   1     7       56

Note. JAC = Journal of Applied Communications, JAE = Journal of Agricultural Education, JIAEE = Journal of International Agricultural and Extension Education, JOE = Journal of Extension, JOLE = Journal of Leadership Education, NACTA = North American Colleges and Teachers of Agriculture Journal

Table 2
Percent of Articles Lacking a Description of Coders'/Panelists' Qualifications by Journal

                  JAC     JAE     JIAEE    JOE     JOLE    NACTA   TOTAL
Method            %   n   %   n   %    n   %   n   %    n  %    n  %    n

Content Analysis  80  32  60   9  100   9  92  49  100  4  100  5  86   108
Delphi             0   4   0  23    0  11   0  10    0  1    0  7   0    56

Note. JAC = Journal of Applied Communications, JAE = Journal of Agricultural Education, JIAEE = Journal of International Agricultural and Extension Education, JOE = Journal of Extension, JOLE = Journal of Leadership Education, NACTA = North American Colleges and Teachers of Agriculture Journal

Table 3
Percent of Articles Lacking a Citation to Support Selection of Coders/Panelists by Journal

                  JAC      JAE      JIAEE     JOE     JOLE    NACTA    TOTAL
Method            %    n   %    n   %     n   %   n   %    n  %    n   %    n

Content Analysis   95  38  100  15  100   9   94  50  100  4  100  5   96  121
Delphi            100   4   61  14  100  11   80   8  100  1   86  6   79   44

Note. JAC = Journal of Applied Communications, JAE = Journal of Agricultural Education, JIAEE = Journal of International Agricultural and Extension Education, JOE = Journal of Extension, JOLE = Journal of Leadership Education, NACTA = North American Colleges and Teachers of Agriculture Journal