
Use, misuse, and abuse of content analysis for research on the consumer interest.

Although content analysis possesses a long and respected history as a research method, studies that use it to address issues of consumer interest are often published with invalid conclusions or implications. This essay argues that the source of these problems is rooted in inferring causal relations from what is nothing more than descriptive data.


Content analysis is a methodological technique that I have used extensively in my own research career and one that I often encounter in papers that I review. Because of this, I believe I have developed the background and expertise to comment on how content analysis is being used by other researchers. While content analysis can be and has been applied successfully in any number of research settings, I also believe there is potential for its abuse, particularly regarding what content analysis results can and, even more importantly, cannot tell researchers, public policy officials, and other individuals interested in findings based on content analysis applications.

Rather than reviewing the literature to identify exemplars of appropriate content analysis use, it may be more instructive to expound more broadly on the uses, misuses, and abuses of the research method itself. Content analysis can be misused and even abused when applied to the questions and investigative work of interest to readers of this journal, as well as other publications that feature consumer interest research and related issues of public policy.

This concern about content analysis has become increasingly important to me because of a disconcerting sameness that I have begun to find in at least some of the research I read that features this methodology. Certain of my reviews have begun to exhibit similar thoughts, words, and phrases irrespective of the content analysis topic or focus, because the problems I identify occur again and again across the papers I read. This recurrence of errors in applying content analysis has led me to crystallize in my own mind what I believe are mistakes in how content analysis results are interpreted. Hence, the purpose of this article is to convey those thoughts, in the hope that this note will alert researchers to certain pitfalls manifested in content analysis-based investigations. These mistakes can be easily avoided and, for reasons explicated below, must be avoided because of the potential adverse consequences that arise when the technique is misused and misapplied.

I will begin by building a research exemplar that reflects the types of problems that I believe are being manifested in some content analysis research. The scenario that is repeated in content analysis research goes something like this. A public policy or consumer interest "problem" is cited that in almost every instance is well documented, factual in nature, current, and (most importantly) generally recognized as an issue that needs to be resolved. While I could cite actual problems that have characterized content analysis research I have read recently, for the sake of protecting the review process regarding those manuscripts, I will base my hypothetical scenario on a fictitious "problem" for which I have not reviewed a manuscript, at least not recently.

In the hypothetical scenario under consideration, the problem to be considered has to do with advertising heavily sugared foods to children. In the not-so-distant past, this issue was at the forefront of public policy research and debate. In my exemplar, the problem is described and documented; at one time, many of the foods being advertised to children did (and perhaps still do) contain ingredients, such as sugar, that are not necessarily conducive to healthy development in children.

The problem documentation is then followed by a discussion of a theoretical component that could help explain how children process ads for heavily sugared foods that target them. For example, there is a wealth of literature on developmental differences in when children can differentiate an advertisement from other content that is not targeting them with a selling proposition. These differences are usually explained by the child's age and other psychological factors; many of these papers commonly cite Piaget's pioneering work on the developmental stages manifested in children and teens.

After presentation of the problem and a narrative on how younger children may not understand that what they see is really a promotional device (that it is meant to encourage their consumption of the product, or at least a request for it to their parents), the research moves into the empirical component of the study. This component is often a content analysis of advertisements chosen to reflect the "problem" under consideration. In my hypothetical scenario, the set of stimuli would be ads for heavily sugared food products.

In the content analysis, themes, trends, and/or similarities in the substance of the ads under consideration would be documented and categorized according to the tenets of content analysis research. For example, the results might show that certain of the advertised foods tend to contain more sugar, that the sponsoring characters used in the ads tend to be fictional (or real), and/or that disclaimers of various types are used (or not used) in the ads. Up to this point in the development of the paper, most reviewers, like me, usually have few substantive problems with the manuscript.

After the results are reported, the authors then present the research "implications" that supposedly flow from the content analysis data. This is where the problems I find with such manuscripts typically surface. These concerns arise because of a disconnect between what the content analysis results can legitimately tell readers and how the findings are actually interpreted by the authors.

Specifically, in problematic papers, the descriptive content analysis results are interpreted as supporting causal relations that were not studied but are assumed to exist and to be inherent in the problem around which the study was formulated. Most commonly, this entails making insinuations and assumptions about how the ads "cause" children to believe or act in certain ways. With respect to my exemplar, this would entail assuming that the ads shape children's beliefs about the sponsoring characters, that the ads prompt children to request the heavily sugared foods featured in them, and/or that the improper use of disclaimers has dire consequences for the underage consumers of the product. I could go on.

Unfortunately for baseless conclusions such as these, content analysis is a descriptive methodological technique. It does not constitute an experimental design in which independent factors have been manipulated and from which causal inferences might be drawn. To make interpretations that go beyond the descriptive findings is to extend the content analysis methodology into realms for which it was never intended. There is nothing wrong with counting instances of certain characteristics across a set of advertisements, such as the frequency and type of cartoon characters appearing in the ads, and then offering a conclusion that reflects that descriptive content. It is inappropriate, however, to cite such statistics and then assert that these descriptive data somehow reveal or indicate how the ads are causally affecting the subjects of the investigation in a detrimental manner.

For example, a quite legitimate conclusion from a content analysis would be that 46% of the ads in the study utilized a cartoon or fictional character rather than a "real" person. An inappropriate conclusion would be that the use of cartoon characters in an advertisement under study will somehow confuse younger children into believing that the character wants them to eat more of the sugared foods depicted in the advertisement or that they must request the foods from their parents. It is possible that the ads being considered influence children in this manner. However, we have no way of knowing how the ads actually affected children, or will affect them, because these crucial questions were not asked or addressed in the study.
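To make the distinction concrete, the kind of tabulation a content analysis can legitimately support is sketched below. This is a minimal illustration only; the coded ads, field names, and values are hypothetical and stand in for whatever coding scheme a given study actually uses.

from collections import Counter

# Hypothetical coding sheet: each entry is one ad as coded by trained judges.
# The field names and values below are invented purely for illustration.
coded_ads = [
    {"ad_id": 1, "character_type": "cartoon", "disclaimer_present": True},
    {"ad_id": 2, "character_type": "real", "disclaimer_present": False},
    {"ad_id": 3, "character_type": "cartoon", "disclaimer_present": False},
    # ... the remaining coded ads in the sample ...
]

# Descriptive tabulation: how often each character type appears in the sample.
character_counts = Counter(ad["character_type"] for ad in coded_ads)
share_cartoon = character_counts["cartoon"] / len(coded_ads)

# A legitimate, purely descriptive conclusion:
print(f"{share_cartoon:.0%} of the sampled ads used a cartoon or fictional character.")

# What this tabulation cannot tell us: whether the cartoon characters confuse
# children or cause them to request the product. Answering those questions
# requires data collected from children themselves (for example, an experiment
# or survey), not from the ad content alone.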

The discussion sections of these problematic content analysis papers then attempt what can only be considered improper logical leaps, a series of mysterious non sequiturs, as they incorporate the theoretical perspective advanced earlier in the manuscript. The earlier discussion of how some children are too young, developmentally, to recognize the difference between a noncommercial cartoon character and one that is attempting to "sell" them something is used to conclude that all advertising using such cartoon entities is somehow more harmful than other sales messages. Again, we cannot draw such conclusions because we do not know from the content analysis results how the ads under consideration actually affect children. We might be able to ascertain this from other research, but not from the content analysis results, which provide descriptive findings and nothing more. Content analysis findings cannot and should not be used to infer what might be obtained from an experiment or other, more appropriate research methods.

For example, in many of the reviews that I write that feature content analysis misuse, I strongly suggest that the research might be strengthened by incorporating the viewpoints of actual respondents rather than relying solely on inferences about what the descriptive findings supposedly reveal. In my hypothetical example, why not include in the "study" actual children who represent the target audience of the ads? Why not ask them about their perceptions of the purpose of including cartoon characters in the ads? Why not ask children if looking at the advertisement makes them want the product being depicted in the ad? This is the critical component that is almost always lacking in content analysis research.

Carole Macklin showed that even very young children can be queried reliably and validly about marketing-related stimuli (e.g., see Macklin 1985, 1987). We might know so much more about how advertising actually affects vulnerable populations if content analysis researchers took this next step in the research process. Thus, while I am not discounting content analysis results per se, I am advocating that these findings be used as a springboard for additional work that dovetails with, and strengthens, what the content analysis might indicate, rather than being used inappropriately to make assertions about relationships that were not studied.

Overinterpretation of content analysis results represents bad science: the findings are extended into conclusions that cannot be supported by what was actually found in the content analysis study itself. Unfortunately, overinterpretation does not exhaust the ways in which content analysis can be misapplied. Additional detrimental outcomes arise when such research is published, especially when its baseless conclusions and recommendations enter the citation stream in subsequent publications.

For example, earlier in my career, a prior JCA editor asked me to review a previously published book. In my review, I noted that many of the conclusions offered were based entirely on the introspective interpretations of a set of advertisements rendered by a single "judge," who was, in this case, the book's author. Because of these procedures, I raised the possibility that when such restricted methodological tactics are used, equally valid, sometimes competing, and unfortunately (for the author) considerably less titillating conclusions can be rendered by others (I offered some in my review). This was content analysis misapplied and misinterpreted to a degree that I have not witnessed before or since, though I acknowledge and agree with Rotfeld and Stafford (2007) that researcher backgrounds can impinge on how studies are conducted and interpreted. Yet, even though this author's "conclusions" were based on nothing more than her own observations and perceptions of a series of advertisements, the book was later quoted and cited in other works without any cautionary notes on how these largely unsubstantiated conclusions were derived.

As academic researchers, it is our duty to ensure that such science is not passed on as fact in the studies we cite in our own investigations. Unfortunately, this still happens, and more often than we care to admit.

Given some of the thoughts I have shared here, some readers might conclude that it was I who reviewed one of their previously submitted manuscripts. However, because editors regularly share the comments written for authors with all the double-blind reviewers on a manuscript, I have often read remarks from other reviewers who echo some of these same thoughts. And sometimes content studies, with all their intrinsic limitations, are used so extensively that they crowd out other work. For example, Taylor (2002) notes that international advertising research has been characterized by an overuse of content analysis studies and too few experimental designs. Still, too few reviewers and authors are raising these concerns. The result is published research that is competently conducted from a methods perspective but is nevertheless replete with erroneous conclusions and implications that cannot be supported by valid interpretations of the data.

Content analysis is a useful option for investigators attempting to identify patterns, frequencies, or potential categories in advertising phenomena. It should not be used to infer causation, because the data gleaned from content analysis simply do not support such inferences. As investigators, it is incumbent upon us to advance the practice of research on issues of consumer interest in ways that add to our knowledge in a meaningful manner, while refraining from misusing the very techniques that are supposed to aid in this endeavor.

REFERENCES

Macklin, M. Carole. 1985. Do Young Children Understand the Selling Intent of Commercials? Journal of Consumer Affairs, 19 (2): 293-304.

--. 1987. Preschoolers' Understanding of the Informational Function of Television Advertising. Journal of Consumer Research, 14 (September): 229-239.

Rotfeld, Herbert Jack and Maria Royne Stafford. 2007. Toward a Pragmatic Understanding of the Advertising and Public Policy Literature. Journal of Current Issues and Research in Advertising, 29 (Spring): 67-80.

Taylor, Charles R. 2002. What Is Wrong with International Advertising Research? Journal of Advertising Research, 42 (6): 48-54.

Les Carlson is a professor of marketing at Clemson University, Clemson, SC (carlsol@clemson.edu).