
Automated pricing rules in electronic posted offer markets.


Electronic commerce affords the opportunity to automate the process by which buyers and sellers engage in an exchange at a mutually agreeable price. For example, buyers on the Internet can use software agents to automate the gathering of information on price and other product attributes in online retail markets that are traditionally organized as posted offer markets. (1) This technology obviously reduces the consumer transaction costs of comparing prices, which presumably would induce stronger competition among sellers. However, as Hal Varian remarked in the e-commerce magazine Wired, "Everybody thinks more information is better for consumers. That's not necessarily true" (Bayers, 2000, 215). The same technology that reduces consumer transaction costs can be employed by sellers to automate the monitoring of rivals' prices, which might mitigate or even overwhelm the procompetitive effect of reduced consumer search costs. In addition, electronic markets grant more than just ideal information on competitors' prices. With electronic commerce a seller can commit to implementing an automated algorithm that could possibly sustain tacitly collusive prices. In this article we study the supply-side effects of three automated pricing algorithms that use price information as an input that could be retrieved from consumer price-gathering technology in an electronic market. Two of the three algorithms that we study have the theoretical potential to facilitate collusion: low-price matching and a trigger strategy. The third and presumably more competitive algorithm, which we refer to as undercutting, is drawn from the common retail practice of beating competitors' prices.

Using the experimental method, this article explores how sellers deploy these three automated pricing algorithms in electronic markets. More specifically, we ask: (1) Do sellers prefer to set their price manually or to adopt automated algorithms that adjust price more frequently? (2) Are markets with automated pricing algorithms more competitive or less competitive than markets with manually posted prices? (3) Does increased commitment to an automated pricing mechanism facilitate tacit collusion?

To study the impact of automated pricing on seller behavior, we base our work on the model of price dispersion in retail markets by Varian (1980). Thus, our article also explores how well a theory of mixed strategy pricing organizes the behavior of sellers in a market with a nearly continuous stream of differentially informed fully revealing customers. (2)

As a preview of the results, we find that (1) sellers employ automated pricing algorithms more often than they manually set their own price, (2a) automated undercutting leads to prices similar to the game-theoretic prediction, (2b) automated low-price matching generates prices significantly higher than the game-theoretic prediction, (2c) automated trigger pricing results in market prices below the game-theoretic prediction, and (3) greater commitment to automated low-price matching shifts prices closer to the joint profit-maximizing outcome.

The structure of the article is as follows. Section II presents the three automated pricing algorithms we consider. The experimental design and procedures are in section III. Section IV presents and discusses our results, and section V briefly concludes.


II. AUTOMATED PRICING ALGORITHMS

The market environment is motivated by the model of sales in Varian (1980). (3) In this model there are n sellers that supply buyers who each desire to purchase at most one unit of a homogeneous product. Each seller has a commonly known, constant marginal cost c of supplying a unit to a buyer and posts a price p, which the buyer can accept or reject.

The private value v for each nonstrategic buyer is assumed to be a random variable, drawn independently from a known uniform distribution with support [v̲, v̄]. Buyers differ in the number of firms from which they receive price quotes prior to making a purchasing decision. Our framework enriches Varian's (1980) model to include three types of such buyers, indexed by i [member of] {1, 2, n}. A buyer of type i with value v randomly samples the prices of i sellers and purchases from the one offering the lowest price [p.sub.L] if [p.sub.L] [less than or equal to] v, in which case the buyer's utility is v - [p.sub.L]. If [p.sub.L] > v, then the buyer does not purchase and his or her utility is zero. The type 1 and type 2 buyers can be considered uninformed customers, using Varian's terminology, or customers who have some preference for one or two sellers beyond the homogeneous good that the sellers supply (but the sellers regard this preference as purely random). (4) The type n buyers correspond to customers who employ a software agent to search out the lowest price offered. We assume that a fraction [[omega].sub.i] of the buyers are of type i, with [summation over (i)] [[omega].sub.i] = 1. Details of the model and the equilibrium conditions can be found in the appendix.
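To make the buyer side concrete, the sampling-and-purchase rule above can be sketched in a few lines of Python. The function and variable names are our own illustrative choices, not part of the experimental software:

```python
import random

def buyer_purchase(prices, v, i, rng=random):
    """A type-i buyer samples the prices of i randomly chosen sellers and
    buys from the cheapest sampled seller if that price is at most the
    buyer's value v.  Returns (seller, utility), or (None, 0.0) if the
    lowest sampled price exceeds v.  `prices` maps seller -> posted price."""
    sampled = rng.sample(list(prices), i)            # the i sellers the buyer sees
    low = min(sampled, key=lambda s: prices[s])      # cheapest sampled seller
    if prices[low] <= v:
        return low, v - prices[low]                  # utility is v - p_L
    return None, 0.0
```

With i equal to the number of sellers (a type n buyer), the rule reduces to buying from the market's lowest-priced seller whenever that price is below the buyer's value.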

We investigate three well-established pricing algorithms for this exploration of the human behavior underlying automated pricing rules. These algorithms could be implemented on the Internet in that they require only two types of inputs: (1) the rivals' publicly posted previous-period prices, retrieved presumably by the same technology that buyers employ to compare all prices; and (2) no more than three additional parameters to process the competitors' prices and generate a new posted price. Two of the three algorithms are drawn from pricing strategies commonly found in the bricks-and-mortar retail economy, and the third has received considerable attention in economic theory. Furthermore, our simple rules provide a valuable benchmark for assessing the market performance of more sophisticated automated rules that remove the human element from price setting entirely. (5)

Undercut Algorithm

As casual reading of advertisements reveals, price-beating policies are common in bricks-and-mortar retail markets. Retailers pledge in some advertisements to beat any competitor's price by a percentage of the price difference, a fixed percentage, or a fixed dollar amount. (6) Price-beating policies have also been implemented on the Internet. For example, before it was acquired in November 1999, one online book retailer would automatically query its three dominant competitors to determine the price each currently posted for a specific book in which one of its own customers had expressed interest. After securing the competitors' prices, the retailer would undercut the lowest of the posted prices, often by 1%. It is a worthy empirical question whether removing this automated undercutting rival relaxed competition post-merger, but we do not address that here.

In our experiment, we implement the fixed dollar amount alternative because the Undercut strategy can also be interpreted as a static best response if a seller undercuts the lowest price last period by the smallest unit of account, unless the sale at that price leads to a lower expected profit than choosing the monopoly price. Because a seller may prefer to build in rivals' future responses to his or her lower prices, we generalize a static best response strategy to permit a seller to undercut last period's lowest price (unless it is his or her own) by any amount that he or she specifies. However, if that price is less than or equal to a lower bound price, then the price will be set at an upper bound price. The seller chooses both the upper and lower bounds. Hence, the seller specifies three parameters for the Undercut algorithm. The Undercut algorithm uses last period's prices as the target for beating prices in the current period, to avoid the infinite pricing loops that could occur if the algorithm were implemented within a period.
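A minimal sketch of one step of the Undercut rule, under the assumption that a seller whose own price is already the lowest leaves it unchanged (the text leaves this tie case implicit):

```python
def undercut_price(own_last, others_last, cut, lower, upper):
    """One period of the Undercut rule.  The seller's three parameters are
    `cut` (the undercut amount), `lower` (the lower bound price), and
    `upper` (the reset price).  A sketch; exact tie handling in the
    experimental software is an assumption here."""
    low = min(others_last)
    if own_last <= low:      # own price is already the lowest: leave it be
        return own_last
    if low <= lower:         # target at or below the lower bound: reset high
        return upper
    return low - cut         # beat last period's lowest rival price by `cut`
```

For example, with parameters (cut = 1, lower = 34, upper = 75), a rival low of 45 is undercut to 44, while a rival low of 30 triggers a reset to 75.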

Low-Price Matching Algorithm

A second and perhaps more common pricing policy in retail markets involves matching any competitor's price. The widespread view of low-price matching policies is that under a variety of conditions they can facilitate collusive outcomes. (7) Automating low-price matching strategies appears to enhance the likelihood of tacitly collusive prices because firms can commit to an algorithm that immediately implements low-price matching without the hassle costs of the bricks-and-mortar world that could impede collusion. In bricks-and-mortar markets, the burden of executing low-price matching rests with the consumer, and as Hviid and Shaffer (1999) show, collusion via low-price matching is impeded by search and other transaction costs. However, with automated pricing in an electronic market, sellers can swiftly query their competitors' prices and immediately execute the price matching without relying on the consumers.
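As a rough sketch, and assuming the automated rule never raises price above the seller's chosen initial price (the upward case is not spelled out in the text), the matching step is a one-liner:

```python
def match_price(initial, others_last):
    """One period of the Low-price Matching rule: post `initial` (the
    seller's single parameter), then match the lowest rival price whenever
    it undercuts the current own price.  That the price never rises above
    `initial` is our assumption about the rule's behavior."""
    return min(initial, min(others_last))
```

If all sellers adopt this rule with a common initial price, no seller's posted price ever falls below a rival's, which is the sense in which automated matching can hold prices at the collusive level.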

In an experimental study using telephone markets, Grether and Plott (1984) examine most-favored customer clauses, a common business practice somewhat related to low-price matching. Most-favored customer clauses guarantee each customer that no other customer will find a like quantity at a lower price from the same seller. Grether and Plott find that combining most-favored customer clauses with other potentially anticompetitive practices raises market prices. The key difference between low-price matching and most-favored customer clauses is that low-price matching explicitly guarantees common prices across sellers for a single buyer (if all sellers use it), whereas a most-favored customer clause pledges a common price across buyers for a single seller. Both may facilitate collusion, but we focus on how the former affects prices when automated in an electronic posted offer market.

Trigger Algorithm

The use of trigger strategies to explain tacit collusion in repeated games has been extensively studied, most notably by Friedman (1971, 1977), Rubinstein (1979), Fudenberg and Maskin (1986), and Aumann and Shapley (1992). Given the widespread attention to the analysis of trigger strategies, we include this algorithm to evaluate its performance in this framework. It is well known that the multiplicity of equilibria arising from the Folk theorem provides little guidance in predicting market outcomes, but the prominence of trigger strategies in the literature warrants exploring which of the multitude of equilibria may emerge when the strategy is automated in an electronic market. (8) With this algorithm a seller first sets the price, and if the minimum price of the other sellers is less than a seller-specified threshold, the price will be set at another price of the seller's choosing for the remaining periods over which the pricing rule is in effect. Thus a seller also specifies three parameters with this algorithm. This version of a (grim) trigger strategy is designed to be simple and to help facilitate its effectiveness in supporting collusion. More general trigger strategies (such as: set an initial price at [p.sub.0], and if the minimum of the other prices drops below a threshold, trigger a fixed number of rounds of punishment pricing before posting a new price) require two more parameters on which to coordinate, thereby increasing the difficulty of supporting collusion.
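The three-parameter grim trigger just described can be sketched as a small state machine; the names, and the assumption that the trigger fires when the minimum rival price is at or below the threshold, are ours:

```python
def trigger_step(p0, threshold, p_after, others_last, triggered):
    """One period of the (grim) Trigger rule.  The seller's three
    parameters are `p0` (the initial price), `threshold` (the triggering
    price), and `p_after` (the price after the trigger trips).  `triggered`
    carries the state: once tripped, the rule posts `p_after` for the rest
    of the block.  Returns (price, triggered)."""
    if triggered or min(others_last) <= threshold:
        return p_after, True
    return p0, False
```

A collusive use would set p0 high, the threshold just below p0, and p_after near cost; the "insurance" use observed in the experiment inverts this, with p_after above p0.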

To be comparable with the other two rules, the version of the trigger strategy implemented in this article involves a finite horizon, a difference from the standard models in the literature that utilize an infinite horizon. Nevertheless, we maintain the defining features of the trigger strategy in standard models, namely, that if a seller defects on a collusive price, rival sellers can punish by pricing competitively. In the appendix we verify that our Trigger algorithm is capable of supporting collusive behavior in equilibrium. The crux of the argument is related to how a grim trigger strategy supports collusion in an infinite horizon model: a defection is followed by a sufficient number of competitive pricing periods, and this threat of punishment is what supports collusion. In our automated finite horizon trigger strategy, any defection will occur in the first period and will be punished immediately for the remainder of the finite horizon. Collusion is supportable in equilibrium if the one-period gain from defection is less than the loss in profit from punishment over the remaining periods.
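The supportability condition in the last sentence reduces to a simple inequality. The profit figures below are hypothetical per-period numbers chosen only to illustrate it, not values from the experiment:

```python
def collusion_supportable(pi_collude, pi_defect, pi_punish, periods_left):
    """Finite-horizon supportability check: a defector earns a one-period
    gain (pi_defect - pi_collude) but forgoes (pi_collude - pi_punish) in
    each of the remaining punishment periods.  All profit arguments are
    per-period figures."""
    gain = pi_defect - pi_collude                        # one-period defection gain
    loss = (pi_collude - pi_punish) * (periods_left - 1) # punishment loss
    return gain < loss
```

With a long enough remaining horizon the punishment loss dominates the one-period gain, so collusion is supportable; with only one period left there is no punishment and defection pays.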


III. EXPERIMENTAL DESIGN AND PROCEDURES

To explore the behavioral element of deploying automated pricing algorithms in electronic markets, we conducted a controlled market experiment. The sessions were conducted at the Economic Science Laboratory during the spring 2000 semester using University of Arizona undergraduates as participants. Many of the subjects had prior experience in various unrelated economic experiments, but for some this was their first experiment.

Each computerized session consisted of computerized buyers and identically parameterized subject sellers. The electronic markets ran continuously for a predetermined amount of time that was not conveyed to the subjects. The subjects and the automated buyers traded in experimental dollars for a fictitious commodity. Every three seconds, which we call a period, a potential buyer entered the market, and so a sale could occur every three seconds. When a buyer entered the market, his or her value was randomly determined, as was his or her type, that is, the number of sellers from which price information would be collected prior to making a purchase. In the case of type 1 and type 2 buyers, the identities of the sellers to sample were also randomly determined. Although buyers arrived every period, subjects could only change the parameters of the algorithm after a block of 20 periods.

For what we denote as the Baseline treatment, sellers simultaneously chose a posted price for the entire block of 20 periods. The fixed number of periods (buyers) between updating posted prices in the Baseline algorithm represents the opportunity cost to the seller of continuously monitoring and adjusting prices manually each period. Having 1 buyer in each of 20 periods is isomorphic to having 20 independent buyers simultaneously making purchasing decisions in the Baseline treatment, as the posted prices remain unchanged. However, we chose to have buyers arrive sequentially for consistency between these baseline markets and the automated algorithm treatments where prices are updated between periods. The simultaneity of posting a single price for the entire block of 20 periods maintained the theoretical assumptions of the stage game for each block of 20 periods. Thus the simultaneous move game establishes the empirical foundation of the Baseline treatment vis-a-vis the theoretical predictions in the appendix.

In each of the algorithm treatments, the seller has the option of employing that algorithm or posting a single price as in the Baseline treatment. (9) For example, in the Undercut treatment, every seller has the option of manually setting his or her own price for a block of 20 periods or deploying the Undercut algorithm that will update prices every period within a block of 20 periods, but the Trigger and Low-price Matching algorithms were not available.

After each period, a seller received feedback in the form of the prices posted by the other sellers, his or her own price, his or her own profit for the period, and his or her own implemented algorithm. The parameters of the algorithms were private information. Figure 1 displays an example of the seller's screen. During an experiment only one algorithm and the Baseline rule were available to a subject. Figure 1 displays all four pricing rules for demonstration purposes only. At any point during the session subjects could examine price history and past algorithm performance by scrolling through the information displayed on their screens. During the 20 periods when an algorithm was fixed, a subject was able to select the algorithm and its parameters that he or she wanted to implement for the next block of 20 periods. Because each period lasted three seconds, a subject had one minute to determine the algorithm for the next block of 20 periods. Again, this fixed length of time for which an algorithm was in effect represents the opportunity cost of monitoring and reevaluating an automated algorithm.

One of the questions in which we are interested is whether sellers prefer to set their own price infrequently or adopt automated algorithms that adjust more frequently. Because the opportunity cost of changing the parameters of the algorithm (the length of time for which the algorithm is in effect) can plausibly interact with the inherent properties of the algorithm to affect market performance, we controlled for this interaction by simultaneously implementing the updated or new automated pricing algorithms for each subject; that is, all subjects were allowed to update their pricing rules at the same points in the session. Moreover, this feature of our design directly ties our automated pricing treatments to the Baseline treatment and the analytical framework on which it is based.

To summarize, the input parameters of each algorithm as they were presented to the subjects are as follows:

Baseline: Set my own Price at -----,

Undercut: Undercut the lowest price by ----- unless [less than or equal to] -----, then raise the price to -----,

Low-price Matching: Set my Price = ----- and then match the lowest price of the other sellers, and

Trigger: Set my Price = ----- unless minimum of the other prices is [less than or equal to] -----, then set my price at -----,

where the subjects fill in the blanks. An algorithm was implemented at the start of each block of 20 periods. If the subject made no changes to his or her algorithm, the previous block's algorithm served as the default for the current block.

Each session consisted of n = 4 sellers with a common per unit cost of c = 25. Buyers' values were uniformly distributed over the interval [25, 125]. The percentages of type 1, 2, and 4 buyers were 0.6, 0.2, and 0.2, respectively. For these parameters, Figure 2 displays the simulated, symmetric Nash equilibrium mixing distribution derived from the condition given by equation (A-1) in the appendix. The support of the equilibrium mixing distribution is [34, 75] (see equation [A-2]), and the median and mean prices of the equilibrium mixing distribution are approximately 46.1 and 47.8, respectively. We chose probabilities for the buyer types and the number of sellers so that the resulting equilibrium cdf F(p) had median and mean prices approximately (and purposely not exactly) in the middle of the interval [c, [p.sub.m]] = [25, 75], where [p.sub.m] is the monopoly price. We did not want the theoretical prediction to be either too competitive or too noncompetitive. Another advantage of setting n = 4 is that the likelihood of a buyer visiting a seller remains relatively high from period to period, thus keeping the subjects more involved in the experiment because the results are reported in real time.
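Given these parameters, a Monte Carlo sketch can estimate each seller's expected per-period profit for any vector of posted prices. This is only an illustration of the environment, not the equilibrium computation from the appendix, and ties among equal prices are broken by the random sampling order, an assumption on our part:

```python
import random

def expected_profits(prices, trials=20000, seed=0):
    """Monte Carlo estimate of each seller's expected per-period profit
    under the experiment's parameters: n = 4 sellers, unit cost c = 25,
    buyer values uniform on [25, 125], and buyer-type shares 0.6, 0.2,
    and 0.2 for types 1, 2, and 4."""
    rng = random.Random(seed)
    c = 25
    profit = [0.0] * len(prices)
    for _ in range(trials):
        v = rng.uniform(25, 125)                                # buyer's value
        i = rng.choices([1, 2, 4], weights=[0.6, 0.2, 0.2])[0]  # buyer's type
        sampled = rng.sample(range(len(prices)), i)             # sellers seen
        low = min(sampled, key=lambda s: prices[s])             # cheapest seen
        if prices[low] <= v:
            profit[low] += prices[low] - c
    return [p / trials for p in profit]
```

With all four sellers at the monopoly price of 75, total expected profit per period is 0.5 x 50 = 25; posting a lower price than one's rivals captures all type 4 buyers and raises one's own share.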

On entering the laboratory, a subject was seated at a computer terminal displaying self-paced instructions. Once all four subjects in a market completed the instructions, the subjects participated in a training phase that lasted for six 20-period blocks in which the only available algorithm was to set one's own price each block. If the subjects were participating in a Baseline experiment, then they simply continued setting their own price for an additional 45 blocks of 20 periods each. If the subjects were participating in one of the following treatments, Undercut, Low-price Matching, or Trigger, then the initial phase was followed by a second set of instructions for that specific algorithm treatment. After all subjects completed this second set of instructions, the experiment proceeded with 52 blocks of 20 periods each in which the subjects could either set their own price or implement the specific automated algorithm. In the first 20 periods of the second phase, subjects had to set their own price to provide a required starting point for the implementation of the Undercut algorithm: there must be prices posted in a previous period for the rule to undercut. (10) Hence, all experiments result in 51 blocks of 20 periods in which the appropriate algorithms can be implemented. Because the first set of instructions took approximately ten minutes and the second set took five minutes, the Baseline treatment sessions lasted one hour; the other three treatment sessions lasted one hour and fifteen minutes.

The experimental design consisted of 16 sessions: 4 sessions under each of the three algorithm treatments plus 4 sessions under the Baseline treatment. Sessions with various treatments were conducted simultaneously to control for extraneous factors. (11) The baseline sessions were conducted separately because these sessions were shorter in duration than the algorithm treatments.

Experimental dollars were converted to cash at a rate of 300 experimental dollars for US$1 at the conclusion of the experiment. In addition to the privately paid earnings, the subjects were also paid a $5 show-up fee. Table 1 reports the average earnings by treatment for all blocks, excluding the $5 show-up fee.


IV. RESULTS

For each period, our data include the algorithm (Baseline, Undercut, Low-price Matching, and Trigger) employed by each seller, the relevant input parameters, and all posted prices. We also have the random draws of the buyer's value and type, and the sellers visited by each buyer. The data permit us to compare posted prices, transaction prices, and profitability across institutions.

We summarize our results in what follows as a series of four findings. As a control for potential learning effects over the course of the experimental session, our analysis focuses exclusively on the last half of the session (25 blocks of 20 periods or 500 periods). We begin by comparing the Baseline treatment to the symmetric game-theoretic prediction. Figure 3 depicts the prices chosen by the 16 sellers in the Baseline treatment. We find, as Davis and Wilson (2000) and Kruse et al. (1994) do in mixed strategy pricing games, that the central tendencies of the Nash equilibrium mixed strategy distributions characterize pricing behavior fairly well. This is our first finding.

FINDING 1. When sellers set their own prices in the Baseline treatment, the resulting distribution of prices is a mean-preserving spread of the theoretical symmetric Nash equilibrium mixing distribution. However, the sellers' profits are less than the game-theoretic prediction.

Figure 4 provides the qualitative support for the first portion of this finding. The median and mean of all prices across all subjects and sessions in the Baseline treatment are remarkably close to the game-theoretic predictions. (12) The game-theoretic predictions for the mean and median are 47.8 and 46.1, respectively, whereas the Baseline treatment mean and median are 48.0 and 44.0, respectively. A two-sided Wilcoxon signed-rank test of the 16 sellers' median prices fails to reject the null hypothesis of a median equal to 46.1 (Z = -1.165, p = 0.2440). The observed frequencies of prices between 36 and 50, inclusive, in Figure 4 are very similar. The distribution of Baseline prices also exhibits the theoretical skew to the right. However, we observe that much of the predicted weight of prices between 51 and 70, inclusive, is shifted to prices less than 36. More specifically, more than four times as many prices below 36 are observed than the theory predicts. As Figure 5 illustrates, this shift in the weight to lower prices reduces the sellers' profits. Thirteen of the 16 sellers earn less than the symmetric game-theoretic prediction, and on average, they earn 16.1% less than the noncooperative equilibrium prediction. (13)

Finding 1 provides evidence that our modified version of Varian's model of sales organizes behavior in such markets fairly well. Having established this baseline, we turn our attention to studying how automating pricing decisions affects market performance. Finding 2 addresses our first question: Do sellers prefer to set their price manually for a block of periods or to adopt an automated algorithm that can adjust price every period?

FINDING 2. Sellers employ automated pricing algorithms more often than they manually set their own price.

Figure 6 displays the adoption rates of the algorithms for each subject in the automated algorithm markets. Of the 400 blocks for which the 16 sellers in a treatment could choose to automate their pricing or set their own price for the block, sellers implement the algorithm in 78% of the Undercut treatment blocks, 62.5% of the Low-price Matching treatment blocks, and 81% of the Trigger treatment blocks. These adoption rates are broadly distributed as 38 of the 48 sellers deploy an algorithm over 50% of the time. A two-sided Wilcoxon signed-rank test of the 48 sellers' adoption rates of algorithms rejects the null hypothesis of a mean adoption rate of 50% (Z = 4.500, p = 0.0000).

Our next finding addresses the second question we pose: Are markets with automated pricing algorithms more competitive or less competitive than markets with manually posted prices? As some preliminary evidence on this question, Figure 7 displays the smoothed transaction prices over the last 500 periods for each of the sessions. (14) Finding 3 reports that the levels of the Undercut and Baseline transaction prices appear to be very similar, whereas the Low-price Matching transaction prices are noticeably higher than the Baseline transaction prices. Trigger transaction prices seem to be lower than in the Baseline treatment.

FINDING 3. The Undercut algorithm leads to median prices statistically equivalent to the Baseline treatment and game-theoretic prediction. The Low-price Matching algorithm increases the median price above the median Baseline and game-theoretical levels, and the Trigger algorithm lowers market prices below the median Baseline and game-theoretical levels.

Figure 8 depicts the observed frequencies of prices for the last 500 periods, and Table 2 reports the descriptive statistics for those distributions. Because we expect a priori from the theory that the distribution of prices will be skewed to the right (and hence not Gaussian), we employ the Wilcoxon signed-rank test to test whether the median prices equal the game-theoretic median, and we use the Wilcoxon rank-sum test to test whether the median prices are the same for the algorithm and the Baseline treatment. Both are tested against the two-sided alternative. Table 3 reports the results of these statistical tests.
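For readers wishing to reproduce this style of test, both statistics are available in SciPy. The data below are simulated placeholders, not the experimental medians, and the sample size of 16 sellers per treatment follows the design described above:

```python
import numpy as np
from scipy import stats

# Hypothetical per-seller median prices (NOT the experiment's data), used
# only to illustrate the two tests described in the text.
rng = np.random.default_rng(0)
undercut_medians = rng.normal(45.0, 4.0, size=16)   # 16 sellers per treatment
baseline_medians = rng.normal(44.0, 4.0, size=16)

# Signed-rank test: are the treatment medians centered on the
# game-theoretic median of 46.1?  (two-sided by default)
stat_sr, p_sr = stats.wilcoxon(undercut_medians - 46.1)

# Rank-sum test: do the two treatments share a common median?  (two-sided)
stat_rs, p_rs = stats.ranksums(undercut_medians, baseline_medians)
```

The signed-rank test is the one-sample (or paired) rank test against a hypothesized center, while the rank-sum test compares two independent samples, matching the two uses in the text.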

1. The Undercut treatment has descriptive statistics (mean, median, and variance) that are qualitatively similar to those from the Baseline treatment. The median Undercut price is 45.0, which falls between the median price of 44.0 for the Baseline treatment and 46.1 for the game-theoretic prediction. Indeed, we fail to reject the null hypothesis that the Undercut median price is the same as the game-theoretic and the Baseline treatment medians. Table 3 reports the p-values of 0.2246 and 0.2649, respectively.

2. Figure 8 clearly depicts that the Low-price Matching algorithm shifts the distribution of prices toward higher prices. The overall median of the Low-price Matching treatment is 25% greater than the Baseline median price (55 vs. 44) and 17.3% greater than the game-theoretic prediction (55 vs. 46.1). Table 3 reports that this treatment effect is highly significant with p-values of 0.0004 and 0.0005, respectively.

3. As Table 2 and Figure 8 indicate, the Trigger algorithm and the Baseline treatment share a similar skew (2.06 vs. 1.91) and mean (47.1 vs. 48.0). However, there is considerably more weight on prices less than or equal to 40 such that the median Trigger price is significantly less than the game-theoretic prediction (p = 0.0313 in Table 3). With p = 0.0530, the evidence is a little less compelling that the median Trigger price is less than the Baseline treatment median.

This treatment effect is even more striking when considering the profits of the sellers. Using the data from the last 25 blocks of 20 periods, Figure 5 illustrates that 15 of the 16 Low-price Matching sellers are more profitable than the game-theoretic prediction, and the average seller in this treatment is 52.4% more profitable than the average Baseline counterpart (2,460 vs. 1,614). We also observe that the Trigger treatment depresses average seller profit by 12.1% from the average Baseline level (1,419 vs. 1,614) and that the average Undercut profit is close to the average profit in the Baseline treatment (1,710 vs. 1,614).

Analyzing the input parameters to the Undercut and Trigger algorithms also reveals some interesting behavior. In the Undercut treatment, the average amount by which the rule undercut the lowest prevailing price is 5.1. In only 54 of the 312 times that the algorithm was used did the sellers choose to undercut by a mere 1 experimental dollar, the smallest unit of account. The sellers apparently look back one period to determine what price to beat, but calculating that their rivals will also lower their price, they look forward with respect to how much they should undercut in the current period. The algorithms also undercut the competitors down to a price of 31.9 on average and reset the price to only 62.6 on average. Recall that the bounds of the equilibrium mixing distribution are 34 and 75. Because the Undercut distribution is fairly similar to the Baseline distribution, it is plausible to infer that the price-beating behavior adopted by the sellers describes a large portion of the pricing dynamics in the Baseline treatment.

We also observe that nearly one-third of the times that the sellers implemented the Trigger algorithm (95 out of 324), the initial price was set below the price to be implemented if the trigger was tripped, contradictory to the intuition of the Folk theorem. For example, one seller in the Trigger treatment employed the algorithm 23 (out of 25) times, with 18 of the 23 uses following this unanticipated pattern. Averaging across the 18 blocks, the initial price was set at 42.4. This subject would switch to a price of 72.7, if the minimum of the prices set by the other sellers fell below 37.2. We interpret this behavior as an attempt to avoid aggressive pricing. Although unexpected, it appears to be a reasonable response to failed attempts to employ the trigger strategy to enforce collusion. The apparent problem with implementing a noncooperative Trigger strategy is that the sellers have different preferences about where to set the triggering price and the adjusted price. Stigler (1964) provides some early analysis on the difficulties of sustaining collusion. The following comment summarizes these observations.

COMMENT. The Undercut algorithm is more than a simple best response, as it is used to beat the lowest competitor's price by more than 1 experimental dollar, and if that price is too low the algorithm on average resets the price below the monopoly level. The Trigger algorithm is often used as an insurance mechanism to raise a price as opposed to utilizing it as a collusive mechanism to punish defectors.

Sellers may use the Trigger algorithm as an insurance mechanism because the trigger phase was activated so frequently (perhaps due to a lack of consensus about the appropriate parameters), and one way to avoid such an ultracompetitive outcome would be to use the Trigger algorithm to raise prices if the minimum price is very low. An alternative formulation of the Trigger algorithm could impose more competitive reactionary prices. Though such a restriction could generate prices that are less competitive in the laboratory, there is no reason to believe such a restriction would exist in naturally occurring markets.

Our last finding addresses our third question: Can increased commitment to an automated pricing mechanism facilitate tacit collusion?

FINDING 4. Greater commitment to the Low-price Matching algorithm shifts prices closer to the joint profit-maximizing outcome.

In addition to finding that Low-price Matching prices are greater than the noncooperative prediction, we observe that higher rates of adopting the Low-price Matching algorithm are correlated with higher transaction prices. First, notice in Figure 7 that the Low-price Matching transaction prices in sessions 1 and 3 are generally higher than the transaction prices in sessions 2 and 4. (15) On average, sessions 1 and 3 transaction prices are 57.5 and 61.8, respectively, whereas the average sessions 2 and 4 transaction prices are 47.2 and 51.8, respectively. Second, observe in Figure 6 that sessions 1 and 3 more frequently adopt the Low-price Matching algorithm: the sellers in sessions 1 and 3 utilize the algorithm 80% and 76% of the time, respectively, versus adoption rates of 58% and 36% for sessions 2 and 4.

Surprisingly, although Low-price Matching is the most profitable algorithm, it is also the least frequently used of the three. However, as Finding 4 indicates, the effectiveness of the tacit collusion is weakened, but not eliminated, as fewer participants adopt the policy. A rogue seller who defects from the joint profit-maximizing price is subsequently punished by all sellers following a Low-price Matching strategy. However, collusion can be maintained even if some sellers do not follow a Low-price Matching policy. A seller has an incentive to free-ride on the cartel by setting a price equal to the collusive price and allowing other sellers to punish defectors. The free rider gains whenever another seller carries out the low-price matching and an uninformed buyer then happens to purchase from the free rider at the collusive price.

An implication of Finding 4 for the development of automated pricing mechanisms is that simplicity is a virtue for achieving tacit collusion. The Low-price Matching algorithm requires only one type of information as an input from the market: competitors' prices. This simple computerized algorithm appears to solve the problem of anticipating the pricing decisions of competitors in a mixed-strategy environment. Moreover, it provides swift and effective feedback to a maverick, low-price seller, thus making it easier to facilitate collusion.


This article reports the results of a laboratory experiment that investigates the market impact of automated pricing algorithms. The main result is that the information and technology provided by electronic commerce could change the way prices are determined. In our Baseline treatment, the overall distribution of prices is similar to the symmetric Nash equilibrium mixing distribution. This environment resembles the traditional bricks-and-mortar retail economy in that pricing behavior necessarily lags information and adjustments are made manually. However, when automated pricing algorithms are available, they are used more frequently than manually posted prices. The three pricing algorithms examined in this study generate the following ranking of average and median prices: Trigger < Undercut < Low-price Matching. The Undercut algorithm leads to a distribution of prices similar to that observed in the Baseline treatment, suggesting that this may be the strategy people implement when forced to adjust prices manually. The Trigger pricing algorithm, which has the theoretical potential for collusion, actually generates prices and profits well below the noncooperative game-theoretic prediction. Finally, the highest prices are sustained through an algorithm advertised to the public with competitive overtones, Low-price Matching. Furthermore, we find that the greater the commitment to Low-price Matching, the higher the average transaction price in the market.

This article's experimental results provide a first step toward understanding automated pricing behavior. Our experiment restricts attention to the seller's decision, leaving the question of how buyers respond to automated posted pricing and the effects of search costs for subsequent work. It also limits the insight gained from effects on social welfare: in this setting, full efficiency is achieved when every buyer makes a purchase at constant marginal cost, so the algorithm generating the lowest price is necessarily the most efficient because the buyers are fully revealing. Of course, algorithm performance could change as the number of sellers varies and as buyers' strategic behavior is considered. The experimental results also raise the question of pricing behavior when multiple algorithms are simultaneously available to all sellers, as would be the case on the Internet. We provide a preliminary insight into this interaction in Deck and Wilson (2000), which investigates transaction prices when Undercutting and Low-price Matching are both available to sellers. The initial indication is that Undercutting hampers but does not eliminate the ability of Low-price Matching to sustain collusion. Additionally, changing the relative frequency of informed and uninformed consumers could alter seller behavior. For example, with the Trigger algorithm a smaller percentage of informed buyers reduces the cost of not punishing, thereby making signaling more profitable. Similarly, the Low-price Matching algorithm may be less effective at facilitating collusion when there are fewer uninformed buyers. For example, Capra et al. (2001) find that prices are near the competitive level when only 16% of buyers are uninformed. As trade continues to expand in electronic markets and as information becomes more accessible, the need to understand the impact of these subtleties on automated pricing behavior will only increase.



In this appendix we present the stage-game symmetric Nash equilibrium for the model described in section II when there are two or more sellers, following the reasoning of Varian. (16) We first argue that no pure-strategy symmetric Nash equilibrium exists. Any price below cost or strictly above the monopoly price is strictly dominated by the monopoly price and so cannot be part of an equilibrium; we can therefore restrict attention to prices between cost and the monopoly price, inclusive. Suppose that each seller j sets price p_j with probability one, and without loss of generality let p_1 ≤ p_k for all k ≠ 1. If p_1 is strictly the lowest price, then seller 1 wants to raise his price by ε: because p_1 is strictly below the monopoly price, the expected gain from the buyers who continue to purchase at the higher price more than offsets the expected loss of sales to buyers whose values lie between p_1 and p_1 + ε. If seller 1's price is tied with another seller's, he wants either to undercut that price by ε, capturing the entire expected profit, or, if the price is too low, to raise his price and serve only type 1 buyers. Therefore, no pure-strategy symmetric Nash equilibrium exists. By the same logic, no mass points are possible in the mixed-strategy symmetric Nash equilibrium: the positive probability of a tie would always induce a seller to capture all of the profits at a price ε below his rival rather than split the profits at the tied price.

As a first step in deriving the symmetric mixed-strategy equilibrium, we find the profit that a seller can secure unilaterally, without randomly drawing a price from a density with corresponding cumulative distribution function (cdf) F(p). A seller is guaranteed only the type 1 buyers who visit him, to whom he is effectively a monopolist. The monopoly profit π_m for this market is found by maximizing π = [(v̄ - p)/(v̄ - v̲)](p - c), where the first factor is the probability that a buyer's value is greater than the price and the second factor is the seller's profit from a sale. Hence, π_m = (v̄ - c)^2/[4(v̄ - v̲)] at p_m = (v̄ + c)/2. With probability ω_1/n a type 1 buyer will visit a particular seller, and so the "security" profit of a seller is ω_1π_m/n, earned at p_m.
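The monopoly price and security profit are mechanical to verify. A minimal sketch, using illustrative parameters (buyer values uniform on [0, 100], zero marginal cost, n = 4 sellers) rather than the experiment's actual values:

```python
def monopoly(v_hi, v_lo, c):
    # Buyer values are uniform on [v_lo, v_hi], so Pr(value > p) = (v_hi - p)/(v_hi - v_lo).
    # Maximizing (p - c) * Pr(value > p) over p gives the monopoly price and profit.
    p_m = (v_hi + c) / 2
    pi_m = (v_hi - c) ** 2 / (4 * (v_hi - v_lo))
    return p_m, pi_m

def security_profit(v_hi, v_lo, c, w1, n):
    # A seller is a monopolist only to the type 1 buyers who visit him,
    # which happens with probability w1/n.
    _, pi_m = monopoly(v_hi, v_lo, c)
    return w1 * pi_m / n
```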

Next we determine the probability that a buyer will consider a purchase from a particular seller when the other sellers price according to F(p). We already noted that a type 1 buyer visits a particular seller with probability ω_1/n. A particular seller belongs to n - 1 of the (n choose 2) possible pairs, so a type 2 buyer samples that seller with probability (n - 1)/(n choose 2) = 2/n, and the seller's price is below the rival's in the pair with probability 1 - F(p). Hence, the probability of a potential sale to a type 2 buyer is (2/n)[1 - F(p)]ω_2. A seller has a lower price than all n - 1 rivals with probability [1 - F(p)]^(n-1), and so with probability [1 - F(p)]^(n-1)ω_n a seller could make a sale to a type n buyer. Therefore, the overall probability that a buyer will consider a purchase from a particular seller is (ω_1/n) + (2/n)[1 - F(p)]ω_2 + (n/n)[1 - F(p)]^(n-1)ω_n, which we denote by δ(F(p))/n, where δ(F(p)) = Σ_{i∈{1,2,n}} iω_i[1 - F(p)]^(i-1).

Finally, to find the equilibrium cdf, we equate the security profit to the expected profit at a price p:

(A-1) [(v̄ - p)/(v̄ - v̲)](p - c)[δ(F(p))/n] = ω_1π_m/n,

where (v̄ - p)/(v̄ - v̲) is the probability that a buyer's value is greater than the seller's price, p - c is the seller's profit from a sale, and δ(F(p))/n is the probability of being selected as a potential seller by a buyer.

Unfortunately, for nonzero weights on each buyer type, a closed-form solution for F(p) does not exist. In more naive environments where ω_i = 0 for some i, closed-form solutions do exist. If every buyer is type 1 (ω_1 = 1), then the sellers post the monopoly price because they are not competing with each other. In contrast, if there are no type 1 buyers, then p = c with certainty. Even without a closed-form solution, we can determine that the upper bound of the support is p_m, where F(p_m) = 1, and that the lower bound p* of the support is

(A-2) p* = p_m - √{(v̄ - v̲)π_m[1 - ω_1/(ω_1 + 2ω_2 + nω_n)]}

when F(p*) = 0.
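Although F(p) has no closed form, equation (A-1) determines it pointwise: δ(F) is strictly decreasing in F, so at each price in the support one can invert it numerically. A sketch by bisection, again with illustrative rather than actual parameters (w1, w2, wn are the weights on type 1, type 2, and type n buyers):

```python
import math

def support_lower_bound(v_hi, v_lo, c, w1, w2, wn, n):
    # Equation (A-2): the lowest price in the support of the equilibrium mixture.
    p_m = (v_hi + c) / 2
    pi_m = (v_hi - c) ** 2 / (4 * (v_hi - v_lo))
    return p_m - math.sqrt((v_hi - v_lo) * pi_m * (1 - w1 / (w1 + 2 * w2 + n * wn)))

def equilibrium_cdf(p, v_hi, v_lo, c, w1, w2, wn, n):
    # Invert equation (A-1) at price p. Since
    # delta(F) = w1 + 2*w2*(1 - F) + n*wn*(1 - F)**(n - 1)
    # is strictly decreasing in F, bisection on [0, 1] converges.
    pi_m = (v_hi - c) ** 2 / (4 * (v_hi - v_lo))
    target = w1 * pi_m * (v_hi - v_lo) / ((v_hi - p) * (p - c))
    delta = lambda F: w1 + 2 * w2 * (1 - F) + n * wn * (1 - F) ** (n - 1)
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if delta(mid) > target else (lo, mid)
    return (lo + hi) / 2
```

By construction the recovered cdf is 0 at the lower bound p* and 1 at the monopoly price p_m, and it is increasing in between.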


The joint profit-maximizing price p_m is the price a monopolist would charge. Suppose that n - 1 sellers employ the following trigger strategy: post an initial price of p_m, trigger a punishment phase if the minimum price of the other sellers is less than or equal to p_m - 1 (the next lowest price), and punish at a price equal to c. Because any price below p_m can potentially attract any type 2 and type n buyers away from the other sellers, the optimal deviation for a seller is to charge p_m - 1 initially and then charge p_m in the remaining periods when the other n - 1 sellers are pricing at cost. Automating the trigger strategy in a finite horizon of T periods inherently involves a strong element of commitment in that any deviation has to occur in period 1 and cannot occur in any later period. The expected profit from the one-period deviation at a price p_m - 1 is (p_m - 1 - c)[(v̄ - p_m + 1)/(v̄ - v̲)](ω_1/n + 2ω_2/n + ω_n), and (T - 1)ω_1π_m/n is the expected profit from the remaining T - 1 periods at the monopoly price while the other sellers are punishing at a price of c. Hence, for cooperation to be sustainable for T periods, the following inequality must hold:

(A-3) Tπ_m/n > (p_m - 1 - c)[(v̄ - p_m + 1)/(v̄ - v̲)](ω_1/n + 2ω_2/n + ω_n) + (T - 1)ω_1π_m/n.

The left side of the inequality represents the expected payoff from cooperation, and the right side denotes the expected payoff from the optimal deviation. There is no discount rate because our game involves a finite horizon and, as section III describes, all payments in the experiments are received simultaneously.
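Inequality (A-3) is straightforward to check numerically for a candidate parameterization. A sketch, with illustrative stand-in parameters rather than the experiment's:

```python
def cooperation_sustainable(T, v_hi, v_lo, c, w1, w2, wn, n):
    # Left side of (A-3): T periods of an equal share of the monopoly profit.
    # Right side: a one-period optimal deviation to p_m - 1, followed by
    # T - 1 periods selling only to captive type 1 buyers while the other
    # sellers punish at cost.
    p_m = (v_hi + c) / 2
    pi_m = (v_hi - c) ** 2 / (4 * (v_hi - v_lo))
    cooperate = T * pi_m / n
    deviate = ((p_m - 1 - c) * (v_hi - p_m + 1) / (v_hi - v_lo)
               * (w1 / n + 2 * w2 / n + wn)
               + (T - 1) * w1 * pi_m / n)
    return cooperate > deviate
```

With these stand-in numbers the deviation profit is front-loaded, so cooperation fails for a very short horizon but holds for longer ones, consistent with T entering both sides of (A-3).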







FIGURE. Histogram of Baseline Posted Prices and Simulated Game-Theoretic Predictions (Last 25 Blocks of 20 Periods)

            Baseline Treatment   Game-Theoretic Predictions

Mean             48.0                 47.8
Median           44.0                 46.1
Variance        272                   98.1

Note: Table made from bar graph


Average Payoffs (US$) by Treatment

Baseline             4 sessions   12.11
Undercut             4 sessions   13.53
Low-price Matching   4 sessions   18.00
Trigger              4 sessions   12.61

Note: These payoffs do not include the $5 payment for showing up on time.


Descriptive Statistics for Prices by Algorithm Treatment

                  Game-Theoretic                              Low-Price
                  Prediction           Baseline   Undercut    Matching   Trigger

Mean              47.8                 48.0       49.8        56.3       47.1
Median            46.1                 44.0       45.0        55.0       40.0
Variance          98.1                 273        270         137        382
Skewness          0.530                1.91       0.869       1.29       2.06
Number of Obs.    8,000 (simulation)   400        8,000       8,000      8,000


Wilcoxon Tests (Two-Sided) of Equivalent Medians

                      Signed-Rank Test       Rank-Sum Test
Algorithm             H_0: μ = 46.1          H_0: μ = μ_Baseline

Undercut              Z = 1.1636             Z = 1.1148
                      m = 16, p = 0.2446     p = 0.2649
Low-price Matching    Z = 3.4997             Z = 3.5704
                      m = 16, p = 0.0005     p = 0.0004
Trigger               Z = -2.1531            Z = -1.9348
                      m = 16, p = 0.0313     p = 0.0530

(1.) Some examples include the Web sites at www.,, and

(2.) See Davis and Holt (1996) and Cason and Friedman (2000) for examples of experimental studies on costly buyer search and price dispersion. Our focus is on the role of automating prices with differentially informed customers.

(3.) This model has been used to study pricing algorithms by Greenwald et al. (1999) and price dispersion in laboratory markets by Morgan et al. (2001).

(4.) For example, once a customer has submitted all of his or her personal information at, a buyer may prefer to only shop from instead of another Web site at which he or she would need to fill out a series of forms again.

(5.) See, for example, Tesauro (1999) who studies duopoly pricing dynamics with neural networks and Q-learning. The analysis of these models is limited to simulations, abstracting away the human behavior from deploying such algorithms. To be viable, these purely artificial environments must outperform human-based environments, thus motivating our study as a benchmark.

(6.) See Arbatskaya et al. (1999) for a study on the use of these policies.

(7.) See, for example, Salop (1986), Png and Hirshleifer (1987), Doyle (1988), Logan and Lutter (1989), and Dixit and Nalebuff (1991).

(8.) Due to antitrust scrutiny, sellers have strong incentives to conceal naturally occurring examples of trigger strategies, thereby making direct observations with field data difficult.

(9.) If the submitted algorithm would result in a posted price off the support of the buyers' values, then that period's price was set equal to the appropriate boundary of the values. This guarantees that the subject will not lose money as a result of an action considered to be an error.

(10.) The Low-price Matching and Trigger strategies do not use the price from the last period of the previous block because these algorithms explicitly set the price for the first period of the current block. These two algorithms reset in each block because once the algorithm either matches the lowest price or triggers punishment, the effect is permanent for the remainder of the block. The Undercut algorithm has the inherent potential to reset across blocks.

(11.) Due to participation constraints, all treatments could not be conducted at all sessions.

(12.) The game-theoretic distribution was simulated using 8,000 random draws, the same as the number of observed posted prices in the baseline treatment (20 periods x 25 blocks x 4 sellers x 4 sessions).

(13.) The analysis assumes that the individual decisions are independent within sessions. This assumption is consistent with the null hypothesis that each seller is using the independent mixed strategy derived in the appendix but at odds with the observed behavior in Figure 3.

(14.) The mixture of type 1, 2, and n = 4 buyers leads to a high enough variance in transaction prices such that it is difficult to compare effectively, in one figure, the levels of the raw data across sessions and over time. The data are smoothed using the two-sided linear filter proposed by Hodrick and Prescott (1997). Let [p.sub.t], represent the raw data on transaction prices and [s.sub.t] the smoothed series. The Hodrick-Prescott filter minimizes the variance of [p.sub.t] around [s.sub.t] subject to a penalty that constrains the second differences of [s.sub.t]. The penalty parameter was set at 14,400, the standard for high-frequency data.

(15.) Transaction prices on average in sessions 2 and 4 are still typically higher than prices in other treatments.

(16.) Baye et al. (1992) show that the Varian model of sales also has a continuum of asymmetric equilibria, the possibility of which we do not explore here. A form of the asymmetric equilibrium involves mixtures of mixed and pure strategies, with 100% mass points at the top of the support. Such strong predictions can be easily rejected by the data.


Arbatskaya, M., M. Hviid, and G. Shaffer. "On the Incidence and Variety of Low-Price Guarantees." Manuscript, 1999.

Aumann, R., and L. Shapley. "Long Term Competition: A Game Theoretic Analysis." University of California at Los Angeles Department of Economics Working Paper 676, 1992.

Baye, M., D. Kovenock, and C. De Vries. "It Takes Two to Tango: Equilibria in a Model of Sales." Games and Economic Behavior, 4(4), 1992, 493-510.

Bayers, C. "Capitalist Econstruction." Wired Magazine, 8(3), 2000. Available online at wired/archive/8.03/markets.html.

Capra, C. M., J. Goeree, R. Gomez, and C. Holt. "Learning and Noisy Equilibrium Behavior in an Experimental Study of Imperfect Price Competition." International Economic Review, 2001, forthcoming.

Cason, T., and D. Friedman. "Buyer Search and Price Dispersion: A Laboratory Study." Manuscript, Purdue University, 2000.

Davis, D., and C. Holt. "Consumer Search Costs and Market Performance." Economic Inquiry, 34, 1996, 133-51.

Davis, D., and B. Wilson. "Firm-Specific Cost Savings and Market Power." Economic Theory, 16(3), 2000, 545-65.

Deck, C., and B. Wilson. "Interactions of Automated Pricing Algorithms: An Experimental Investigation." Proceedings of the 2nd ACM Conference on Electronic Commerce, 2000, 77-85.

Dixit, A., and B. Nalebuff. Thinking Strategically. New York: Norton, 1991.

Doyle, C. "Different Selling Strategies in Bertrand Oligopoly." Economics Letters, 28, 1988, 387-90.

Friedman, J. "A Noncooperative Equilibrium for Supergames." Review of Economic Studies, 38, 1971, 1-12.

-----. Oligopoly and the Theory of Games. Amsterdam: North-Holland, 1977.

Fudenberg, D., and E. Maskin. "The Folk Theorem in Repeated Games with Discounting and with Incomplete Information." Econometrica, 54, 1986, 533-54.

Greenwald, A., J. Kephart, and G. Tesauro. "Strategic Pricebot Dynamics." Proceedings of the ACM Conference on Electronic Commerce, 1999, 58-67.

Grether, D., and C. Plott. "The Effects of Market Practices in Oligopolistic Markets: An Experimental Examination of the Ethyl Case." Economic Inquiry, 22(4), 1984, 479-507.

Hodrick, R., and E. Prescott. "Postwar U.S. Business Cycles: An Empirical Investigation." Journal of Money, Credit, and Banking, 29, 1997, 1-16.

Hviid, M., and G. Shaffer. "Hassle-Costs, The Achilles Heel of Price Matching Guarantees." Journal of Economics and Management Strategy, 8(4), 1999, 498-521.

Kruse, J., S. Rassenti, S. Reynolds, and V. Smith. "Bertrand-Edgeworth Competition in Experimental Markets." Econometrica, 62, 1994, 343-71.

Logan, J., and R. Lutter. "Guaranteed Lowest Prices: Do They Facilitate Collusion?" Economics Letters, 31, 1989, 189-92.

Morgan, J., H. Orzen, and M. Sefton. "An Experimental Study of Price Dispersion." Princeton University Woodrow Wilson School of Public and International Affairs Discussion Paper #213, 2001.

Png, I., and D. Hirshleifer. "Price Discrimination through Offers to Match Price." Journal of Business, 60, 1987, 365-83.

Rubinstein, A. "Equilibrium in Supergames with the Overtaking Criterion." Journal of Economic Theory, 21, 1979, 1-9.

Salop, S. "Practices that (Credibly) Facilitate Oligopoly Coordination," in New Developments in the Analysis of Market Structure, edited by J. Stiglitz and F. Mathewson. Cambridge, MA: MIT Press, 1986.

Stigler, G. "A Theory of Oligopoly." Journal of Political Economy, 72(1), 1964, 44-61.

Tesauro, G. "Pricing in Agent Economies Using Neural Networks and Multi-Agent Q-Learning." Proceedings of the IJCAI Workshop on Learning about, from and with Other Agents, 1999.

Varian, H. "A Model of Sales." American Economic Review, 70(4), 1980, 651-59.


* For helpful comments we thank two anonymous referees, Dan Kovenock, Mark Olson, Stan Reynolds, Bradley Ruffle, Chuck Thomas, John Wooders; seminar participants at the University of Arizona, University of Mississippi, Purdue University, and Virginia Commonwealth University; and participants at the 2000 North American Regional meetings of the Economic Science Association. The data and a sample copy of the instructions are available on request. We gratefully acknowledge financial support from the International Foundation for Research in Experimental Economics.

Deck: Assistant Professor, Department of Economics, Walton College of Business, University of Arkansas, Fayetteville, AR 72701. Phone 1-479-575-6226, Fax 1-479-575-3241, E-mail

Wilson: Associate Professor, Interdisciplinary Center for Economic Science, George Mason University, 4400 University Dr., MSN 1B2, Fairfax, VA 22030. Phone 1-703-993-4845, Fax 1-703-993-4851, E-mail
COPYRIGHT 2003 Western Economic Association International
Author:Deck, Cary A.; Wilson, Bart J.
Publication:Economic Inquiry
Date:Apr 1, 2003