
Using judgment profiles to compare advertising agencies' and clients' campaign values.

Newspaper business pages and trade publications frequently report misunderstandings between advertising agencies and their clients, reflected in lost accounts, redirected campaigns, and fluctuating sales figures. According to a recent study by the Association of National Advertisers, agency/client conflict "is becoming increasingly acute. . . . Agencies and their clients agree on virtually nothing, including such fundamental issues as goals and each other's strengths and weaknesses" (Goldman, 1993).

Researchers have sought reasons for agency/client dissonance at both organizational and interpersonal levels. Some have examined the impact of structural factors such as organizational size and product type on agency switching (Buchanan and Michell, 1991; Cagley and Roberts, 1984). Others have explored the effects of longevity on the agency/client relationship (Hotz, Ryans, and Shanklin, 1982; Wackman, Salmon, and Salmon, 1986; Verbeke, 1988). Still others have compared the priorities of clients and agencies regarding their campaign goals (Korgaonkar, Moschis, and Bellenger, 1984; Korgaonkar and Bellenger, 1985). Research has typically looked at "bad habits" on both sides (Hotz et al., 1982) - a broad range of organizational, managerial, and interpersonal factors that cause friction between agency/client partners.

In fact, a common thread that unites studies of agency/client conflict is the difficulty both sides experience in understanding their counterparts' value systems and priorities. For example, Michell, Cataquet, and Hague (1992) found that "dissatisfaction with agency performance" was the number-one reason for account termination. This attribute included lack of agency understanding about the client's business and advertising objectives, reflected in weak sales and image results. Hotz et al. (1982) found similar discontent and attributed it to poor communication on both sides. On the one hand, agencies have "a tendency not to listen"; on the other, "the agency is hindered in adequately performing its role because the client is not supplying the agency with the types of information necessary for good performance." Indeed, even if they think alike, agencies and clients do not always communicate their priorities clearly. Thus Bourland's (1993) meta-analysis of research on ad agency/client conflict highlighted "poor communication" as the major client complaint, with agencies' poor listening skills as an additional irritant. For their part, agencies' grievances included "lack of information" from clients, along with "indecisiveness."

Researchers have noted that both sides engage in guesswork, rather than systematic examination, to gauge their counterpart's goals and priorities. Such guesswork is particularly characteristic of agencies (Cagley, 1986; Wackman et al., 1986). Many losing agencies "do not appear to learn from their own or other agencies' losses" (Michell et al., 1992). As a result, they experience client dissatisfaction as "somehow exogenous and unexplainable" and "account losses in terms of Machiavellian conspiracies" (Doyle, Corstjens, and Michell, 1980). Studies that begin by comparing agency/client perceptions often end by recommending performance audits as a way to repair the poor communication that appears to underlie so many failed partnerships (Michell, 1986; Wackman et al., 1986; Ryan and Colley, 1967).

Like other researchers, we were interested in exploring the roots of agency/client misunderstanding. However, we did not focus on the structural, managerial, or interpersonal aspects of the collaboration. Rather, our study centered on the extent to which both clients and agencies share the values that they consider most important to successful campaigns. Individual players may have little control over interpersonal factors ("chemistry") or organizational factors (budget, turnover, mergers). However, they can exert some control over the cognitive aspect of the agency/client relationship: that is, the priorities and objectives they assign to their advertising campaigns.

Wackman et al. (1986) found that clients who retain agencies that agree with whatever they say attain "less productive working relationships." In fact, the give-and-take between agency and client ideas is important to the evolution of a successful campaign. However, such negotiation works best when the two sides share major priorities and know where their partners stand. Therefore, agency/client disagreements can sharpen if neither group fully understands where the other's priorities lie. Conversely, if agencies and clients clearly defined their own judgment values, and clearly understood their counterparts' values, their teamwork could be enhanced - and with it their campaigns' success.

Toward this end, we used multiple regression-based judgment analysis to derive decision profiles for a group of advertising agency consultants and their clients. First, we identified important decision factors about what constituted a good advertising campaign. Then we analyzed how a group of advertising agency professionals and their clients used these factors to assess campaigns. We thus isolated cognitive attributes of the agency/client relationship - values and priorities - from organizational factors or interpersonal "chemistry" that would be less amenable to individual control. We wished to compare agency judgments with those of clients, concerning attributes each group wanted to see in a good advertising campaign. These goals led to the following research questions:

1. What criteria did these professionals use to determine a sound advertising campaign? And how did they use (weight) those criteria?

2. How consistently did these professionals use the criteria to make decisions?

3. Were these professionals conscious of their own values? When faced with a plan for an actual advertising campaign, did they use the decision factors the way they thought they would?

4. Were there fundamental differences between the judgments of agencies and clients?


We used multiple regression-based procedures to analyze and compare how decision-makers weight cues; how the cues shape their final judgment; and how consistent the judgment patterns are. Our analysis was done with a judgment-analysis software package called POLICY-PC (Executive Decision Services, 1991).

To identify individual cues, we first consulted prior research for attributes of advertising campaigns that had previously proven important to clients and agencies. The studies by Korgaonkar et al. (1984, 1985) were especially important in this respect because they featured cognitive judgment components rather than organizational or emotional factors. We then conducted open-ended interviews with a pilot group of 17 agencies and clients. These sources confirmed five of the decision criteria that Korgaonkar et al. found central to successful advertising campaigns: market research, media planning, message/creativity, advertising budget, and relationship between client and agency.

Using a random number seed, we computer-generated different mixtures of these five cues to form 30 hypothetical cases. Each case represented a different proposed advertising campaign, with the levels of the cues depicted by bar graphs. For example, one campaign might show high creativity and involve good client/agency synergy, but the campaign might pose severe budgetary challenges and look weak in terms of both market research and media planning. Samples of three cases are given in Figure 1. Respondents were asked to rate each hypothetical case on a scale of 1 to 10, where 10 was the most favorable judgment. Finally, we asked respondents to weight each of the five decision cues by dividing 100 points among them. Because individual judgments could change depending on the specific campaign, we asked respondents to specify their current largest campaign and respond with that campaign in mind.
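As a rough sketch of this stimulus-generation step, the cases might be produced as below. The cue names, the 1-to-10 cue scale, and the generation routine are assumptions for illustration; POLICY-PC's actual algorithm is not documented in this article.

```python
import random

# Hypothetical cue names, assumed for illustration (see study's five criteria).
CUES = ["market_research", "media_planning", "message_creativity",
        "advertising_budget", "client_agency_relationship"]

def generate_cases(n_cases=30, seed=42):
    """Generate hypothetical campaigns from a random number seed, each cue
    drawn independently so that cue levels are roughly uncorrelated."""
    rng = random.Random(seed)
    return [{cue: rng.randint(1, 10) for cue in CUES} for _ in range(n_cases)]

cases = generate_cases()
print(len(cases))   # 30 hypothetical campaigns
print(cases[0])     # one campaign's mixture of cue levels
```

Drawing each cue independently keeps the cues uncorrelated across cases, which is what later allows a regression to separate their individual weights.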

Our initial sample was selected from the Agency Red Book (1993); our questionnaire gave both agencies and clients the option of naming their counterpart for us to survey. While not strictly random, this method increased the number of respondents who worked together and therefore made comparisons between their judgments more meaningful than a strictly random sample would have done.


From a one-flight mailing of 250 questionnaires, we received usable questionnaires from 57 agency consultants and 63 current or prior clients, a 48 percent response rate. The sample yielded a broad range of experience levels, product types, budgets, and audiences, with heaviest representation from senior managers in the consumer goods sector. Demographics appear in Table 1.

We first extracted individual judgment policies. Preliminary examination of the plotted data [TABULAR DATA FOR TABLE 1 OMITTED] suggested that a nonlinear formulation would result in superior models. The favorability ratings for the hypothetical cases formed the dependent variable and the five cues comprising the hypothetical cases formed the independent variables. Three statistics held particular interest: multiple R, which indicated whether the procedure had adequately captured the individuals' judgment policies; standardized beta weights for the five cues, which indicated how the individuals were using the judgment criteria; and function forms derived from the regression equations, which indicated how much of each cue elicited a favorable or unfavorable judgment.

Before analyzing the data, we checked the multiple R for each respondent's regression to see whether the procedure provided a good model for individuals' judgment processes. Of the 120 respondents, 96 had multiple Rs above .8, and another 18 had multiple Rs between .7 and .8. Thus the five variables in the regression produced an excellent decision model in the vast majority of cases.
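A minimal sketch of this policy-capture step appears below, using a purely linear model on synthetic data (the study's actual models also included nonlinear terms, and the respondent here is fabricated for illustration).

```python
import numpy as np

def judgment_policy(cue_matrix, ratings):
    """Derive one respondent's judgment policy: regress favorability ratings
    on the cue levels, and report the multiple R (how well the model captures
    the policy) plus standardized beta weights (how each cue is used)."""
    X = np.column_stack([np.ones(len(ratings)), cue_matrix])  # add intercept
    coefs, *_ = np.linalg.lstsq(X, ratings, rcond=None)
    predicted = X @ coefs
    multiple_r = np.corrcoef(predicted, ratings)[0, 1]
    # Standardize each slope: beta_j = b_j * sd(x_j) / sd(y)
    betas = coefs[1:] * cue_matrix.std(axis=0, ddof=1) / ratings.std(ddof=1)
    return multiple_r, betas

# Toy respondent who weights cue 2 (say, message/creativity) most heavily:
rng = np.random.default_rng(0)
cues = rng.integers(1, 11, size=(30, 5)).astype(float)
ratings = 0.6 * cues[:, 2] + 0.3 * cues[:, 3] + rng.normal(0, 0.5, 30)
r, betas = judgment_policy(cues, ratings)
print(round(r, 2))      # high multiple R: the model captures this policy well
print(betas.argmax())   # the heaviest-weighted cue
```

A multiple R above .8, as in most of the study's respondents, would indicate that the linear combination of cues reproduces the respondent's ratings closely.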

We then compared the decision structures of all clients as a group with all agency practitioners as a group. All 63 clients were averaged together to form a single client profile and analyzed as one individual; the same was done with all 57 agency respondents. As a note of caution, pooling the data this way tended to obscure individual discrepancies and elevate apparent agreement between the two sides. However, it produced overall decision profiles for each group and therefore allowed us to compare normative judgment patterns as our study intended.

First we compared agencies' and clients' judgments derived from their responses to the 30 hypothetical cases. The correlation between the two groups' derived judgments was .89 (see Table 2). This correlation indicated substantial, though imperfect, agreement. (Interpretations of correlations were made relative to the participating decision-makers; correlations in the .7 to .9 range are common for this type of study [Stewart, 1988]).
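The pooling and comparison steps might be sketched as follows; the numbers are illustrative, not the study's data (which averaged 63 clients and 57 agency respondents across 30 cases).

```python
import numpy as np

def composite_judgments(ratings_by_respondent):
    """Average each hypothetical case's ratings across respondents,
    yielding one composite 'group' judgment per case."""
    return np.asarray(ratings_by_respondent, dtype=float).mean(axis=0)

# Illustrative ratings: rows are respondents, columns are hypothetical cases.
client_ratings = [[7, 4, 9, 6], [6, 5, 8, 7], [8, 3, 10, 5]]  # 3 clients
agency_ratings = [[6, 4, 9, 7], [7, 5, 9, 6]]                 # 2 agencies
clients = composite_judgments(client_ratings)
agencies = composite_judgments(agency_ratings)
# Agreement between the two group profiles, as in the study's .89 figure:
agreement = np.corrcoef(clients, agencies)[0, 1]
print(clients)              # composite client profile, one value per case
print(round(agreement, 2))  # high but imperfect agreement
```

As the text cautions, averaging this way smooths over individual discrepancies, so the resulting correlation describes normative group profiles rather than any working pair.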

Examination of each group's standardized beta coefficients (see Table 3) and function forms shed light on where they did disagree and provided a normative decision profile for each professional type. The client group assigned heaviest weight to message/creativity (beta = .33). Advertising budget (beta = .26) and market research (beta = .22) were about equally important, followed distantly by media planning (beta = .12). Clients cared little about agency/client relationships (beta = .07). Coefficients of the nonlinear terms were significant at the .05 level throughout the analysis.

While substantially in agreement with clients' rankings, agency rankings of the five decision attributes did show some discrepancies. Agency professionals agreed with clients on the primacy of message/creativity (beta = .34) and advertising budget (beta = .25). They also weighted [TABULAR DATA FOR TABLE 2 OMITTED] media planning (beta = .10) about as lightly as clients did. However, they gave strikingly less weight to market research (beta = .13) and considered the agency/client relationship much more important (beta = .18) than clients did.

The function forms of the nonlinear regressions [ILLUSTRATION FOR FIGURE 2 OMITTED] showed how the level of each decision attribute affected each group's decisions. Both groups showed nearly linear positive function forms for message/creativity and advertising budget. Thus their rule of thumb was "the more, the better": they liked an advertising campaign in direct proportion to its level of creativity and budget effectiveness. The inverted Us of the market research and media planning function forms suggest that both agencies and clients were satisfied with moderate levels of research and planning: these factors held declining utility for both at high levels. Regarding agency/client relationships, agencies always felt "the more, the better," while their clients' enthusiasm actually dipped when offered high levels of this attribute.

Note that these judgment policies were derived from respondents' answers to the 30 hypothetical cases. Because we wanted to test the accuracy of self-reports, we did not ask respondents to define their judgment policies until after they had assessed the 30 hypothetical cases. At the questionnaire's end, they self-explicated their judgments by dividing 100 points among the five decision criteria and specifying function forms for each criterion. These specified weights and function forms were then applied to the 30 hypothetical cases to derive a conscious judgment model, representing what clients and agencies would claim they wanted from a campaign. We then compared this specified model to each group's derived judgments about the 30 hypothetical campaigns. Although some prior researchers (e.g., Leigh, MacKay, and Summers, 1984) have found little difference between derived and self-explicated weights in conjoint analysis, the agency/client literature presented enough evidence of "indecisiveness" to warrant checking this point.
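This specified-model check can be sketched as below, assuming linear function forms for brevity (the study also let respondents specify nonlinear forms; the two-cue data here are invented for illustration).

```python
import numpy as np

def specified_model_scores(cue_matrix, specified_points):
    """Score each hypothetical case using a respondent's stated weights
    (100 points divided among the cues), yielding the 'conscious' model."""
    weights = np.asarray(specified_points, dtype=float) / 100.0
    return cue_matrix @ weights

# Illustrative example with two cues and three hypothetical cases:
cues = np.array([[9.0, 3.0], [4.0, 8.0], [7.0, 7.0]])
stated = [70, 30]                           # self-explicated point allocation
actual_ratings = np.array([8.0, 5.0, 7.0])  # the respondent's derived judgments
predicted = specified_model_scores(cues, stated)
# Correlate what respondents said they valued with how they actually judged:
agreement = np.corrcoef(predicted, actual_ratings)[0, 1]
print(round(agreement, 2))  # -> 0.97
```

A correlation near 1.0, like the .95 and .92 figures reported below, would indicate that respondents' stated values closely match their actual use of the cues.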

Looking at agency staffers first, the correlation between their specified beliefs and their actual judgments was excellent (.95). Clients' beliefs and judgments agreed nearly as well (.92). Still, both groups' beta weights showed slight flaws in self-understanding that might cause each side to misrepresent its expectations to the other, despite a wish to communicate in good faith. For example, agencies specified budget as the most important attribute in the abstract, but their actual judgments elevated message/creativity (beta = .34) far higher than budget (beta = .25). Similarly, clients' actual judgments rated message/creativity higher (beta = .33) than they had specified (beta = .26). Communication would be further complicated by the nonlinearity of some judgments, as revealed by the inverted Us of the function forms for market research and media planning. In such cases, it would be very hard for either side to communicate just where "more is better" ended and where "more is worse" began.

These flaws were slight, however. Perhaps more important is that both groups' specified judgments showed good overall consensus (.87; see Table 2). Thus agencies and clients talking together would perceive their stated values to be similar. Agencies' derived judgments [TABULAR DATA FOR TABLE 3 OMITTED] also correlated quite well with what clients said they wanted (.85), whereas clients' derived judgments showed greater distance from the values agencies specified (.80). Again, this discrepancy is slight, since it is much more important for agencies to act compatibly with clients' stated wishes than the reverse.


With reference to the research questions, the five decision criteria used in the study - message/creativity, campaign budget, media planning, market research, and agency/client relationship - provided a sound model for these professionals' judgments about advertising campaigns.

Only two criteria - message/creativity and budget - heavily dominated both groups' decision-making, accounting for 59 percent of their overall judgments in terms of combined beta weights. Media planning counted for little on either side - roughly one-tenth of their overall judgments. The two sides did diverge on market research (twice as important to clients) and relationship (more than twice as important to agencies). These discrepancies are not surprising in themselves: we expect clients to worry about product development and agencies to worry about relationships. Such divergences about market research and relationships are appropriate to their respective roles and are more complementary than disruptive.

In the aggregate at least, both clients and agencies were consistent decision-makers with a sound grasp of their own decision values, who could communicate those values clearly to their partners.

Why, then, do we continue to read so much about agency/client conflict in newspapers, trade publications, and scholarly research? Our preliminary study appears to rule out cognitive disagreement as a source of conflict. By default, it points to other areas, such as interpersonal factors and organizational deficiencies, that might cause both groups to perceive poor communication or misunderstanding. In short, the good news is that agencies and clients think very much alike; the bad news is that they often believe they do not.

Given the sample size, it would be imprudent to generalize widely. Still, such judgment analysis offers a profile of the "typical" client and agency that might serve as a practical guideline for both sides. For example, Cagley (1986) pointed out that the selection process gives clients a good sense of agency priorities, but agencies lack the ability to interview clients, so "they must rely on perceptions of what they feel clients regard as important." Judgment analysis techniques might therefore prove useful in the prospecting process, particularly in giving agencies a sense of clients' priorities before entering a relationship. For existing client/agency pairs, such an approach could provide valuable information during performance audits.

This initial study looked at the broad picture to establish normative decision profiles for the typical client and agency. In the future, given large enough samples, it should be possible to develop judgment policies not only for clients and agencies overall but also for specific product types, job roles (e.g., agency creatives and agency management), levels of experience, length of working relationship, and campaigns with different purposes. Supplementary questionnaires might also compare perceived levels and areas of conflict with the actual levels and areas identified by methods used in this study. The researchers are continuing to build their database in order to explore such questions in the future.

In addition, because this study treated clients and agencies only in the aggregate, it subsumed individual differences that could become flash points for dissension. Comparison between working client/agency pairs - the next leg of our analysis - will examine how accurately each side perceives the other's values, and how these cognitive values shape satisfaction with the relationship.


Bourland, Pamela G. "The Nature of Conflict in Firm-Client Relations: A Content Analysis of Public Relations Journal, 1980-89." Public Relations Review 19, 4 (1993): 385-98.

Buchanan, Bruce, and Paul C. Michell. "Using Structural Factors to Assess the Risk of Failure in Agency-Client Relations." Journal of Advertising Research 31, 4 (1991): 68-75.

Cagley, James W. "A Comparison of Advertising Agency Selection Factors: Advertiser and Agency Perceptions." Journal of Advertising Research 26, 3 (1986): 39-44.

-----, and C. Richard Roberts. "Criteria for Advertising Agency Selection: An Objective Appraisal." Journal of Advertising Research 24, 2 (1984): 27-31.

Chevalier, Michel, and Bernard Catry. "Advertising in France: The Advertiser-Advertising Agency Relationship." European Journal of Marketing 10, 1 (1976): 49-59.

Colley, Russell H. "Squeezing the Waste Out of Advertising," Harvard Business Review 40, 5 (1962): 76-88.

Doyle, Peter, Marcel Corstjens, and Paul Michell. "Signals of Vulnerability in Agency-Client Relations." Journal of Marketing 44, 4 (1980): 18-23.

Executive Decision Services, Inc. Policy PC: Judgment Analysis Software, 3rd ed. Albany, NY: Executive Decision Services, 1991.

Goldman, Kevin. "The Client-Agency Relationship: Great Grist for Group Therapy." The Wall Street Journal July 29, 1993.

Hotz, Mary R., John K. Ryans, and William L. Shanklin. "Agency/Client Relationships as Seen by Influentials on Both Sides." Journal of Advertising 11, 1 (1982): 37-44.

Korgaonkar, Pradeep K., George P. Moschis, and Danny N. Bellenger. "Correlates of Successful Advertising Campaigns." Journal of Advertising Research 24, 1 (1984): 47-53.

-----, and Danny N. Bellenger. "Correlates of Successful Advertising Campaigns: The Manager's Perspective." Journal of Advertising Research 25, 4 (1985): 34-39.

Leigh, Thomas W., David B. MacKay, and John O. Summers. "Reliability and Validity of Conjoint Analysis and Self-Explicated Weights: A Comparison." Journal of Marketing Research 21, 4 (1984): 456-62.

Michell, Paul C. N. "Auditing of Agency-Client Relations." Journal of Advertising Research 26, 6 (1986): 29-41.

-----, Harold Cataquet, and Stephen Hague. "Establishing the Causes of Disaffection in Agency-Client Relations." Journal of Advertising Research 32, 2 (1992): 41-48.

Ryan, M. P., and Russell H. Colley. "Preventive Maintenance in Client-Ad Agency Relations." Harvard Business Review 45, 5 (1967): 66-74.

Standard Directory of Advertising Agencies. The Agency Red Book. Wilmette, IL: National Register Publishing Co., June-Sept. 1993.

Stewart, Thomas R. "Judgment Analysis: Procedures." In Human Judgment: The SJT View, Berndt Brehmer and C. R. B. Joyce, eds. Amsterdam, The Netherlands: Elsevier Science Publishers B. V., 1988.

Verbeke, Willem. "Developing an Advertising Agency-Client Relationship in the Netherlands." Journal of Advertising Research 28, 6 (1988): 19-27.

Wackman, Daniel B., Charles T. Salmon, and Caryn C. Salmon. "Developing an Advertising Agency-Client Relationship." Journal of Advertising Research 26, 6 (1986): 21-28.

PRISCILLA MURPHY is an associate professor and director of the Master of Journalism program in the School of Communications and Theater, Temple University, Philadelphia, PA. Her professional experience involves media, marketing, and financial communications for insurance and financial services companies, including the Paine Webber Group in New York and Blue Cross and Blue Shield of Florida. Her research interests encompass decision-making in conflict and crisis situations, and risk and environmental communication.

MICHAEL L. MAYNARD is an assistant professor in the department of journalism, public relations, and advertising at Temple University, Philadelphia. His area of research includes mass media analysis, the relationship between mass communication and culture, and cross national communication as well as textual and semiotic analyses of television and print advertising in Japan.
COPYRIGHT 1996 World Advertising Research Center Ltd.

Authors: Murphy, Priscilla; Maynard, Michael L.
Publication: Journal of Advertising Research
Date: Mar 1, 1996
