How advertising frequency can work to build online advertising effectiveness.
How to gauge advertising effectiveness on the web has consumed the thoughts and energy of many over the last few years. There has been no shortage of press coverage about the decline of banner advertising response rates. Some have likened banners to drops of rain in a stream, small splashes that are rapidly absorbed into obscurity. Advertisers investing in the medium, however, tend to view the internet in a more optimistic light. They look for opportunities to learn how the web can help them build stronger relationships with their customers and, ultimately, greater profits.
In reality, the internet is a medium that can work with other consumer touchpoints (e.g. TV, radio, direct mail, customer service) to achieve business and marketing goals. Because it is the newest of media, however, it is often isolated under a spotlight of the highest intensity. Most publicly cited internet performance metrics centre around click rates and provide little insight into whether advertisers' campaigns have been successful. This is unsurprising since that information is usually kept under proprietary wraps.
There are many metrics that help explain the effectiveness of web advertising. One is the difference in performance between creative executions: whether one banner pulls a higher response rate or does a better job of building awareness. Another is the number of sales leads resulting from people registering information on a site. Targeting advertising to the right prospects will also add to a campaign's impact. An analysis of media placement across OgilvyOne clients shows that response rates were almost double for ads placed near target-relevant content compared with those run in random rotation across the site. Additionally, the presence of offline media can potentially boost the impact of web advertising if both efforts are well coordinated. Finally, there is one area that holds much promise for improving online ad effectiveness: media scheduling, which encompasses the amount of advertising weight and its spacing over time. This paper will focus on the relationship between frequency of ad exposure and advertising effectiveness, and how media planners can use this information to develop scheduling guidelines.
Principles of media scheduling
It has been a longstanding goal of media planners to schedule advertising to get the greatest possible impact per dollar spent. At the heart of the guidelines for scheduling brand advertising have been the concepts of effective frequency and recency. Effective frequency implies that repeating messages to consumers will translate to learning and eventually result in action. Recency theory says that advertising is most effective when it occurs close to the time when consumers are ready to buy. Effective frequency is more focused on achieving changes in awareness levels that will eventually result in a sale, while recency strategies concentrate mostly on short-term sales results. In either case, too little advertising might prove ineffective while too much is wasteful overkill.
After establishing effective frequency or recency goals, a media planner has to have a sense of how many people are exposed to a medium over time, so that the goals can be achieved through appropriate scheduling. For traditional media, reach/frequency (R/F) models are available that enable planners to estimate what percentage of the target population is reached by a campaign and how often. Using the R/F model, the planner may construct several alternative media schedules within a prescribed budget, selecting the one that is best suited to meet the objectives. The major input variables are the number of weeks, media vehicles (e.g. TV networks/programmes, magazine titles, websites) and the amount of advertising weight placed on each. Whether the goal is to saturate the market quickly by building awareness across the masses or to drive home a point with a smaller target using heavy repetition, the planner has a road map of how audiences accumulate in order to fulfil communication objectives.
Paradigms for scheduling advertising on the web are still in development. In fact, the same can be said for traditional media as more and more work is being devoted to media mix models to identify the relative contribution of individual media to a campaign's success. The difference between the old and new is that traditional media have established axioms in place, while the internet is still defining itself. A big part of the challenge is determining the relationship between ad impressions bought and resulting awareness, click-through, sales or site registrations. There is virtually no research publicly available relating to this topic; therefore internet media planners will often buy advertising weight based on cost efficiency and content compatibility alone.
Two approaches to web marketing: direct response vs. branding
The foundation of any marketing/advertising program is a set of clearly stated campaign objectives. When setting objectives for web efforts, it is important to consider the extent to which campaign goals are direct response (DR) oriented or brand building in nature. From the DR perspective, e-commerce and lead generation on the web make transactions possible in the same way consumers would call an 800 number after watching a TV ad or return a business reply card (BRC) found in a magazine. From the brand builder's point of view, banner advertising and websites themselves help to increase brand awareness and purchase intent.
Many campaigns actually fulfil dual objectives. Unclicked advertising banners help to build brand awareness while clicked-on ads might result in a sale or serve as a lead. In reality, the likelihood is that more emphasis will be placed on one approach versus the other, depending on the campaign. For example, if a discount offer used to acquire more customers is at the heart of the campaign, then DR results will serve as the primary success metric.
The distinction between DR as opposed to brand building is key in establishing success metrics and, therefore, identifying the appropriate data to analyse. Tracking DR advertising results on the web is relatively simple since there is such an abundance of data. Response information (e.g. clicks, banner interaction) is electronically captured and reported through the ad serving process as well as by tracking consumer activity once they have reached an advertiser's website. Capturing the impact of branding is not as straightforward, since it requires the recruitment of respondents who fill out questionnaires online. There can be technical problems in survey administration and a reluctance among some sites to perform this research, for fear that its intrusiveness will alienate their visitors. OgilvyOne has learnt much in executing online surveys and has found some valuable insights that help to get around these obstacles. (See Appendix for a comparison of DR vs. branding data collection.)
Through two case study analyses, the next section explores how different levels of advertising frequency impact both DR and brand-oriented campaigns. Two internet web banner campaigns were selected to demonstrate the learning and how it might be applied for more effective media scheduling. Identity of the advertisers and the product brands and categories have been masked to maintain confidentiality.
Case Study 1 - Direct response
Advertiser A provides customers access to an online marketplace where they can engage in transactions on a regular basis. Heavier users may be on the system daily while others use the service weekly or even less frequently (they also have the option of transacting by telephone). Marketing efforts have centred around a direct response campaign which uses broadcast, print, direct mail and the internet to acquire customers. The focus of this case study will be on the internet only, concentrating on the relationship between advertising weight (impression) levels and achieving the desired goal on the site -- conversion to sale leads.
The customer acquisition process works as follows:
(1) Advertiser A runs banners on targeted internet sites.
(2) Consumers click on Advertiser A's banners.
(3) Consumers land on Advertiser A's web domain where they explore the site and consider subscribing to the service.
(4) Interested consumers electronically submit a request-for-application form, which is sent to them by post. For the purposes of this analysis, we will consider someone who has submitted a request-for-application form to be a sales lead. This submission is the last point at which they can be tracked and tied back to the banners and sites that delivered them to Advertiser A's domain.
Analysis framework and set-up
The objective of this analysis was to gain insight into optimal ad frequency levels and apply the learning to schedule future campaigns more effectively.
The method was to evaluate changes in sales leads resulting from shifts in frequency of ad exposure, i.e. ratio of advertising impressions to sites' unique users. This was done in two stages:
(1) Single site analysis -- Evaluation of one site with significant changes in impression levels and unique visitors to clearly illustrate their effects on leads.
(2) Multi-site reallocation of advertising weight -- Using the learning achieved in Stage 1, optimise a full schedule of sites to achieve more leads per dollar spent. A regression-based model was created for this purpose.
The following terms are used throughout the analysis:
* Impressions -- Number of banner ads run on each site, or opportunities to see.
* Unique users -- Number of different people visiting a website during a given month.
* Frequency of exposure -- Number of impressions divided by number of unique users.
* Cost per lead (CPL) -- The final cost for an individual to submit a subscriber application form. In effect, CPL is the cost to buy the banner impressions that resulted in people clicking, landing on the site and submitting the form.
* Cost per thousand (CPM) -- The dollar cost to buy a thousand impressions.
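The metric definitions above can be expressed in a few lines of code. The sketch below uses hypothetical figures for illustration; none of the numbers come from the case study.

```python
def frequency(impressions, unique_users):
    """Average frequency of exposure: impressions per unique user."""
    return impressions / unique_users

def media_cost(impressions, cpm):
    """Dollar cost of an impression buy at a given CPM (cost per thousand)."""
    return impressions / 1000 * cpm

def cost_per_lead(impressions, cpm, leads):
    """CPL: total media cost divided by the leads that cost produced."""
    return media_cost(impressions, cpm) / leads

# Hypothetical site: 500,000 impressions bought at a $20 CPM,
# reaching 250,000 unique users and producing 400 leads.
print(frequency(500_000, 250_000))      # average frequency: 2.0
print(media_cost(500_000, 20))          # media cost: $10,000
print(cost_per_lead(500_000, 20, 400))  # CPL: $25 per lead
```

One consequence worth noting: because cost scales with impressions and leads scale with the conversion rate, CPL reduces to CPM divided by (1000 x conversion rate), which is why response rate and CPM must be evaluated together.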
Effective frequency discussion
Effective frequency is the optimal number of banner exposures to an average user that results in action. Generally, it has been found that people tend to click on banners during the first few exposure opportunities (see Figure 1) and that, beyond that point, response rates diminish significantly.
The questions for the media planner then become 'How many impressions should I buy to achieve the best response?' and 'What about factoring in the price paid for those impressions (CPM)?'
To answer the first question, the planner should establish a guideline for how often people will be exposed to banners during the course of the campaign at a given impression level. For example, we have seen that response rates are highest at lower frequency levels; therefore an effective frequency limit, or cap, of three exposures per person might be established as a scheduling guideline. A 10:1 ratio of impressions to unique users, for instance, would appear to be too high given the relatively low click rates illustrated at the frequency level of 10 in Figure 1.
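A frequency cap translates directly into an impression budget per site: the audience size times the cap. A minimal sketch, with a hypothetical audience figure:

```python
def impressions_to_buy(unique_users, frequency_cap=3):
    """Impressions needed to expose the site's audience at the cap, on average."""
    return unique_users * frequency_cap

# Hypothetical site with 250,000 monthly unique users and a 3x cap.
print(impressions_to_buy(250_000))  # 750,000 impressions
```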
Next comes the question about how to factor in costs. Response rates should be evaluated hand in hand with CPM to maximise return on investment. For example, if a site's response rate is low relative to other sites on an advertising schedule, but its CPM is well below average, it may still make economic sense to schedule a substantial level of impressions because of low cost. This means that on the more cost-efficient sites it may pay to advertise at higher frequency levels if the cost to obtain leads is within reason. This will be discussed in greater detail later on.
Analysis period and data
* 14 sites with similar editorial content and visitor demographics.
* 13-week period during the first quarter of 2000.
* Varying visitor traffic and ad impression levels.
* Five waves of creative rotated randomly across the sites over the 13 weeks. Messaging was consistent throughout, with a discounted offer commencing week 5.
* Different research sources -- a key aspect of the analysis is the relationship between banner impressions scheduled on sites and the unique users of those sites, and these figures are used to calculate average frequency of exposure. These data come from two different sources. Banner impressions are culled from the log files of the third-party ad-serving company that placed the advertising across the sites on the media schedule. Unique user estimates were drawn from MediaMetrix, a panel of internet users whose web activity is electronically recorded and then projected to total US estimates. A study done by the FAST (Future of Advertising Stakeholders) committee in Autumn 1998 found wide disparities between the two sources with no systematic pattern to explain the differences. OgilvyOne feels, however, that the benefits of using these data to impose a discipline on the media planning process far outweigh the shortcomings of the sources' disparities.
* Historical period -- The data for this analysis covers activity during a 13-week period (the first quarter of 2000). Results may vary due to seasonality effects and/or changes in site dynamics like audience size, demographic composition and visit patterns.
* Modelled vs. tested results -- The modelled results in this report have not been tested in the marketplace; actual results may differ from those the model predicts.
Results: single site analysis
This example provides a clear illustration of how advertising frequency levels impacted CPL (cost per lead). Site A was chosen because, relative to other sites on the schedule, it showed extreme shifts in impression levels and unique users, which helped to isolate the impact of ad frequency on response and therefore on CPL. The following was learned:
* Best advertising results (lowest CPL) were achieved when frequency of banner exposure was relatively low.
* Results were poorest (highest CPL) when impressions were unusually high, and, therefore, frequency levels were high.
Figure 2 reveals a more detailed account of how changes in impressions and unique users impacted advertising frequency levels and, therefore, CPL.
(1) Site A grew more popular, with substantial increases in unique users over the course of the quarter. This influx of new visitors, combined with steady levels of ad impressions over the first few weeks, lowered the ratio of ad impressions to unique users, or ad exposure frequency. Lower frequency levels improved the rate of response, thereby lowering CPL. In effect, during the first five weeks of the campaign, banner frequency levels were apparently too high.
(2) A surge in advertising banner impressions during the last few weeks of the quarter was spurred by Advertiser A's need to match competitive pressure. This spike in advertising weight proved to be overkill as response rates declined. CPL rose dramatically despite the fact that in March 2000 unique visitors had gone up 33% compared to the previous month's level.
This example provided general principles about the role of advertising frequency in creating more effective impression scheduling in the future. There are limitations, however, for applying the learning to other sites because of variations in:
* Response rates within sites (week to week)
* Response rates across sites
* CPM across sites.
The next section will show how the relationship of advertising frequency to advertising results can be used to optimise response across the entire group of 14 sites within a fixed budget.
Multi-site reallocation of advertising weight
The single site analysis in the previous section demonstrated that the best results occur when frequency levels are relatively low rather than under a barrage of repetition. The task then became how to apply this learning to a marketplace example by including the other sites and their advertising costs. The objective of the exercise was to improve overall CPL by reallocating media weight from more costly, low-response sites to more productive sites, while staying within specific customer lead acquisition cost guidelines. This simulation was done as follows:
* A model was created -- The relationship between impressions, unique users and CPL for all 14 sites (using regression analysis) was established, creating a model to simulate the change in response resulting from additional advertising weight (impressions).
* Sites were classified according to response potential and cost efficiency
-- High potential: frequency level currently low; relatively few impressions compared to other sites; low to moderate CPM.
-- Low potential: frequency level at or above optimal; and/or high CPM; low response rate.
* Media weight was reallocated -- 10% of the budget was transferred from low-potential to high-potential sites.
* Maximum CPL was established -- At certain frequency levels, adding more impressions to a site has the effect of lowering response rate, thereby driving up CPL. For sites receiving more weight, a maximum CPL goal was set as a guideline for optimising results.
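The reallocation steps above can be sketched as follows. The site data, classification thresholds and even weighting of the shifted dollars are illustrative assumptions; the study's actual site figures are masked.

```python
# Hypothetical sites: (name, weekly budget $, CPL index, average frequency).
# CPL index: 100 = schedule-average cost per lead.
sites = [
    ("B", 10_000, 34, 0.5),   # high potential: low CPL index, low frequency
    ("F", 8_000, 76, 1.0),
    ("L", 12_000, 139, 3.0),  # low potential: high CPL index
    ("N", 9_000, 369, 3.0),
]

AVG_INDEX = 100

def classify(site):
    """High potential: below-average CPL and room to raise frequency."""
    _, _, cpl_index, freq = site
    return "high" if cpl_index < AVG_INDEX and freq < 3 else "low"

high = [s for s in sites if classify(s) == "high"]
low = [s for s in sites if classify(s) == "low"]

# Shift 10% of total budget from low- to high-potential sites:
# take proportionally from low sites, spread evenly across high sites.
total = sum(b for _, b, _, _ in sites)
shift = 0.10 * total
low_budget = sum(b for _, b, _, _ in low)
new_budget = {name: b for name, b, _, _ in sites}
for name, b, _, _ in low:
    new_budget[name] = b - shift * b / low_budget
for name, b, _, _ in high:
    new_budget[name] = b + shift / len(high)
```

In practice the study bounded each increase with the maximum-CPL guideline rather than spreading dollars evenly, so this is only the skeleton of the logic.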
Building the model
The model was built around the following dynamics between impressions, CPL and unique users:
* As impression levels increase, conversion rates tend to drop.
* As the conversion rate drops, the cost per conversion increases.
* At some impression level the cost per conversion equals the maximum acceptable price for leads (CPL).
* The ratio of impressions to unique visitors (average frequency) provides a common metric that can be used to compare sites.
The first step in developing the model was to aggregate weekly data from all 14 sites into a single dataset. The information in the dataset included the impression/unique visitor ratio and the conversion rate (leads/impressions) for that week (13 weeks in total). The unique visitors were obtained from syndicated panel-based research and the impression level and conversion rates were provided by a third-party ad-serving company.
A linear model and several non-linear curves were fitted to the dataset before a single non-linear curve was selected (significance level: 0.0001; R²: 0.593). After viewing the results for each of the individual sites, it became clear that the variation between sites was too large to allow a single set of parameters for all of them. The selected curve form was therefore applied to each site individually, with parameters re-estimated from that site's own data.
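The curve-fitting step might look like the following. The paper does not disclose the functional form it selected, so the exponential-decay form and the weekly observations here are assumptions for illustration only.

```python
import numpy as np

# Hypothetical weekly observations: average frequency vs conversion rate.
freq = np.array([0.5, 1, 2, 3, 5, 7, 10])
conv = np.array([0.012, 0.010, 0.007, 0.005, 0.003, 0.002, 0.0015])

# Assume conv(f) = a * exp(-b * f); fitting log(conv) against f
# makes this a simple degree-1 least-squares problem.
slope, intercept = np.polyfit(freq, np.log(conv), 1)
a, b = np.exp(intercept), -slope

def predicted_conv(f):
    """Predicted conversion rate at average frequency f."""
    return a * np.exp(-b * f)
```

Re-estimating `a` and `b` per site, as the study did, just means repeating this fit on each site's own weekly data.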
The next step was to express the number of impressions as a function of the number of unique visitors, the maximum allowable CPL (MAC) and the CPM rate. By comparing the MAC to the actual cost per lead it was possible to reallocate impressions from sites with high CPL to low-CPL sites without exceeding CPL guidelines for sites receiving more impressions.
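That inversion can be made concrete. From the metric definitions, CPL = CPM / (1000 x conversion rate), so the maximum allowable CPL (MAC) fixes a floor on the conversion rate; an assumed decay curve then yields the highest frequency, and hence the largest impression buy, a site can carry. The curve parameters and figures below are hypothetical.

```python
import math

def max_impressions(unique_visitors, mac, cpm, a, b):
    """Largest impression buy that keeps CPL within MAC.

    Assumes a fitted response curve conv(f) = a * exp(-b * f).
    Since CPL = CPM / (1000 * conv), the minimum tolerable conversion
    rate is CPM / (1000 * MAC); invert the curve for the frequency at
    that floor, then scale by unique visitors to get impressions.
    """
    conv_floor = cpm / (1000 * mac)
    if conv_floor >= a:  # even the first exposure converts too poorly
        return 0
    max_freq = math.log(a / conv_floor) / b
    return int(max_freq * unique_visitors)

# Hypothetical: 200,000 unique visitors, $50 max CPL, $20 CPM,
# curve conv(f) = 0.012 * exp(-0.25 f).
print(max_impressions(200_000, 50, 20, 0.012, 0.25))
```

A looser MAC or a cheaper CPM raises the frequency ceiling, which is why the cost-efficient Site D could absorb more impressions despite its already high frequency.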
Results - effect on individual sites
Ten per cent of advertising dollars was shifted from low- to high-potential sites. The changes in advertising impressions, average frequency and CPL are shown in Table 1. The response model was used to determine the change in CPL resulting from an increase or reduction in advertising impressions. Overall, the following became apparent:
* Advertising weight was successfully shifted from low- to high-potential sites without raising the CPL of high-potential sites beyond the 14-site average (100 Index). The model was instrumental in providing guidelines for this reallocation.
* Low-potential sites actually benefited from a reduction in impressions, as ad frequency went down and CPL declined as well.
* Even a site with a currently high level of ad exposure frequency (Site D) could handle more impressions, since the cost to advertise there was relatively low.
Results -- overall effects on ad schedule impact
Benefits of the reallocation were evident on a number of scores:
* Schedule CPM was lowered 6% due to the gravitation towards more cost-efficient sites.
* Overall CPL was reduced by 11%.
* The actual number of leads improved by 13%.
* This evaluation suggests that planning internet advertising impressions based on effective frequency goals and cost efficiency appears to be a sound approach to achieving more advertising impact per dollar spent. According to Forrester, internet ad budgets will grow from $5 billion in 2000 to $22 billion in 2004, with more than half of these dollars devoted to banner advertising (Li 1999). As more funds are allocated towards this medium, tactical media scheduling methods to improve advertising performance loom larger in importance.
* Many sites in a web schedule can handle more advertising weight. Using the model guidelines, a media planner can add enough advertising impressions to a site to generate more leads while staying within recommended CPL guidelines.
* Models can be created for a variety of success metrics. This analysis employed CPL as the primary measure of success. Similar models can be built around other metrics such as click-through, site registration, time spent, cost per sale. The thread of similarity between all these measures is determining some level of advertising exposure that is optimal, assuming that targeting and creative execution have already been taken into consideration.
* The model needs to be in-market tested. Using experimental design, a controlled test would provide an idea of how accurately the model could predict real behaviour.
* There is an opportunity to enhance the model by developing more precise response curves. In this analysis, a 'one-size-fits-all' curve was culled from results across all 14 sites on the schedule and was uniformly applied to select sites in the reallocation process. Since we know that response varies by site, individual site curves could be constructed by using more weeks of response data (e.g. 39 weeks versus 13).
Case Study 2 -- Brand/ad awareness
To date, the advertising/marketing effectiveness of the internet has been viewed mostly through the lens of the direct response discipline. As stated earlier, a lot of attention has been focused on click rates as a benchmark for success (or failure), although it is recognised that results posted on the website, such as sales or registration completion, are key performance indicators. These site-level success metrics, however, are rarely shared for proprietary reasons.
The questions then become 'What are the branding effects of banner advertising, considering that click rates are about 0.4%?' and 'How many exposures are needed to have some kind of impact on ad/brand awareness or even purchase intent?' In Case Study 1 it was mentioned that click rates were highest within the first few banner exposures. It would seem that the frequency threshold for moving brand and ad awareness with banners would be even higher, since banners tend to be smaller than most advertising forms (i.e. TV, print, billboards) and have limited motion and audio. This handicap is changing as we speak, however, as the growth of high-speed internet transmission such as cable modems and DSL lines provides better platforms for more sophisticated internet advertising forms. Rich media, for example, are banners with built-in functionality that draw consumers via interaction with mazes, word jumbles, golf games, and so on that weave brand messages into the activity. Enhanced speed and further development of rich media will significantly advance the internet on the advertising quality spectrum. However, the majority of people in the US currently access the web from home at the standard 56k speed, and rich media banners account for less than 10% of the internet's advertising inventory.
Advertiser B markets a technical product (Product B), and uses a variety of media and channels for customer acquisition and retention. At the time of the study, the internet was used primarily as a branding medium. Campaign banners contained simple, attractive messages that associated Advertiser B's product with new ways of doing business. When consumers clicked on banners, they were sent to a corporate site that provided more specific information about the product and company contact information, but there was no direct sales offer or e-commerce capability.
Analysis framework and set-up
The goal was to determine the branding impact of banner advertising on Advertiser B and Product B, keying in on the effect of repeat ad exposures.
* An online brand tracking study was used. Site visitors were recruited via a pop-up inviting them to participate in a ten-minute survey focusing on brand-oriented metrics. The incentive to participate was a contest drawing for cash prizes.
* Two sites that attract a high concentration of Advertiser B's target audience were selected for the media schedule. They agreed to host the surveys and work with the research supplier to implement the study.
For both Advertiser B and Product B the following were tracked in the study:
* Brand awareness (Brand B only)
* Purchase intent
* Product attributes.
Analysis period/survey dates
There was a six-week campaign with three waves of research:
* Benchmark pre-wave -- one week before the campaign start
* Post-wave 1 -- two weeks after campaign start
* Post-wave 2 -- five weeks after campaign start.
Capturing frequency of ad exposure
Cookies were dropped in respondents' browsers for the duration of the campaign. During that time, the cookies recorded how often people were exposed to Advertiser B's banners, making it possible to determine the impact of banner frequency on advertising effect when results were tabulated. (A cookie is a small text file written into a file on the web browser of the end user at the time advertising is served. What is written into the cookie is simply the ad banner ID number, a time stamp, and a frequency counter that captures how many times the banner was delivered to a browser. No other personal information is stored in that cookie when it is written, nor is any other information appended to that cookie later.)
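The counter the cookie carries can be schematised as below. The pipe-delimited value format and field names are illustrative, not the ad server's actual cookie layout.

```python
# Hypothetical cookie value format: "<banner_id>|<timestamp>|<count>"
def update_frequency_cookie(cookie_value, banner_id, now):
    """Increment the exposure counter each time the banner is served."""
    if cookie_value:
        bid, _, count = cookie_value.split("|")
        if bid == banner_id:
            return f"{banner_id}|{now}|{int(count) + 1}"
    return f"{banner_id}|{now}|1"  # first exposure: start the counter

cookie = None
for t in ("t1", "t2", "t3"):  # three ad deliveries to the same browser
    cookie = update_frequency_cookie(cookie, "banner42", t)
print(cookie)  # banner42|t3|3
```

At tabulation time, joining each survey respondent's counter to their answers is what lets results be broken out by frequency of exposure.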
Following are the composite results from both sites where the survey was executed. Overall, banner advertising had a positive branding effect on Brand B's awareness and attributes; the same was true for Product B's attributes, including purchase consideration.
Brand B -- Unaided brand awareness rose from 32% to 70%, while brand attributes grew:
* Expertise +17%
* Ease of use +15%
* Industry leader +13%
* Dependability +15%
Product B -- Brand attributes rose:
* Ease of use +23%
* Customer support +15%
* Purchase consideration +21%
* Innovation +15%
Impact of advertising frequency
Survey results were evaluated by frequency of banner exposure to gain insight into the number of repetitions that generate the most effect. For Brand B, shifts in unaided awareness were analysed across frequency levels, while the same was done for Product B's attributes (there was no awareness measurement of Product B). Figure 3 summarises the relationship between ad frequency and ad effects; because the data are proprietary, results are reported in percentage change rather than the absolute awareness or product attribute levels.
* For both Brand B and Product B, banner advertising had its greatest rate of impact between one and seven exposures; 80% of the effect was achieved by the time site visitors had seen seven ads.
* After seven repetitions, awareness and product attributes continued to improve although the rate of growth tailed off.
* Results suggest that repetition is a key factor in achieving branding objectives on the web, more so than for direct response-type campaigns. Although this study was executed only once, the findings make intuitive sense given the communication challenges banners face as an advertising form (e.g. size/position relative to content, limited motion, audio).
* For campaigns with brand-building objectives, media planners should schedule web impressions at higher frequency levels than they are accustomed to with direct response campaigns.
* This type of study should be repeated to confirm initial findings. Also, strong consideration in study design should be given to the impact of different creative approaches on results, to develop further the banner as an advertising medium. This could include the use of colour, message size and length, or rich media vs. standard banner formats.
Creative messaging, audience targeting and site selection are key tactical factors that contribute to the effectiveness of internet banner campaigns. Another important, but somewhat overlooked, tactic is the process of establishing and scheduling the optimal level of advertising weight, or frequency of exposure. Too much advertising repetition can be wasteful, while too little might prove ineffective. Currently, there is little or no industry information available about ad frequency's effect on internet advertising impact.
This paper has explored the relationship between advertising frequency and response through two case studies with different campaign approaches: one for direct response, the other for building brand awareness.
The direct response campaign (Case 1) demonstrated how lead generation was improved by scheduling advertising according to optimal frequency levels. The key findings were:
* An analysis of historical weekly campaign media impressions and acquired sales leads showed that best results (lowest cost per lead) were achieved when advertising frequency levels on sites were relatively low.
* Using data from the analysis, a response model was created that documented advertising effects at various levels of advertising frequency.
* The model was then used to demonstrate how to get more sales leads from the media schedule for the same amount of money spent. 10% of the budget for a 14-site schedule was reallocated by shifting dollars from low-potential to high-potential sites. Low-potential sites were considered those having relatively high frequency levels and moderate to high cost per thousand (CPM). High-potential sites tended to have the opposite characteristics. As a result, significant gains were made in the number of ad impressions and sales leads (see Table 3).
In the branding campaign (Case 2), an online study provided insight into how banner advertising frequency impacts awareness and affinity towards the brand/product. The key findings were:
* Banner advertising had its greatest impact between one and seven exposures. 80% of the effect was achieved by the time site visitors had seen seven ads.
* After seven repetitions, awareness and product attributes continued to improve, although the rate of growth tailed off.
The implications are:
* The growth rate of web advertising dollars is expected to outpace that of traditional media for years to come. Learning more about how frequency of ad exposure works can help to improve campaign impact for each dollar spent.
* In general, it appears that direct response campaigns require a lower level of advertising frequency to achieve campaign objectives compared to branding programs. This is not surprising, since response is likely to be more immediate with an enticing offer compared to the repeat exposures often required for people to recall brand messaging.
* When establishing advertising frequency goals, media planners should consider to what extent a campaign strategy is direct response as opposed to brand building. If the emphasis is on brand building, for example, then greater ad frequency should be scheduled than with a direct response-type effort.
* There are substantial opportunities to achieve better campaign results through frequency planning with direct response-type web campaigns, when compared with branding efforts. This is because a rich supply of response data is available in the form of clicks, registrations, sales, leads, and so on. The relationship between advertising impression levels and advertising response can then be modelled from this information. With brand-oriented campaigns, tracking the impact of banner frequency on advertising awareness is more difficult to set up and execute; therefore there is less information available.
* As web advertising forms evolve and creative executions become more versatile, branding and its measurement will become more important in stature.
This paper would not have been possible without the substantial contributions of the OgilvyOne Media Metrics & Analytics group, particularly Harry Case, for his work developing the response model.
Gerard Broussard has been with OgilvyOne for five years, and currently heads a group investigating how to help the media department purchase and schedule advertising media more effectively. Previous media research positions include spells at CBS TV Network, Grey Advertising and BBDO. He holds a BBA in Marketing and an MBA in Market Research from the City University of New York.
Li, Charlene (1999) Internet Advertising Skyrockets. Cambridge, MA: Forrester Research.
Reallocation of advertising weight to high-potential sites (shifting 10% of budget)

                                  Impressions   Avg frequency   Avg frequency   Cost per lead   Cost per lead
Site                              (% change)    (original)      (revised)       index           (revised, % change)
High potential (add impressions)
B                                 +21           0.5:1           1:1             34              +9%
D                                 +21           7:1             8:1             63              +11%
E                                 +14           3:1             3:1             75              Unchanged
F                                 +100          1:1             2:1             76              +18%
G                                 +100          1:1             2:1             92              +18%
Low potential (reduce impressions)
L                                 -26           3:1             2:1             139             -12%
N                                 -26           3:1             2:1             369             -12%
O                                 -26           1:1             1:1             492             -5%

Cost per lead index: individual site CPL divided by the average for all sites (100 = 14-site average).
Average frequency: ratio of impressions to unique visitors in the schedule.
Impact of reallocation on impressions and leads (shifting 10% of budget)

Schedule      Impressions index   Cost per lead index   Leads index
Original      100                 100                   100
Revised       106                 89                    113
% change      +6                  -11                   +13

Appendix: Measuring advertising impact on the web -- direct response vs. branding

Direct response (log files)
* Data source: third-party ad server; advertising site server; destination site server. Continuous measurement capturing virtually all behavioural activity.
* Data collected: click-through, sales, registrations, leads. A rich source of information, but limited to transactional activities.
* Report frequency: daily (if desired). Greater opportunity to mine ad-effect relationships from the data.
* User profiles: unique cookies; registrations.

Branding (online surveys)
* Data source: every nth visitor selected; pop-down menu; 7-8-minute questionnaire. Intermittent measurement, with limited opportunity due to intrusion during site visitation.
* Data collected: ad awareness, brand awareness, purchase intent, demographics. Provides cognitive data and demographics not available through log files.
* Report frequency: usually limited to a few weeks out of a campaign. Higher cost than mining log files; executed less frequently.
* User profiles: demographics; technographics; corpographics.
Publication: International Journal of Market Research, 22 December 2000.