# Conditional reasoning = conditional probability? Not necessarily.

Several prominent theories of conditional reasoning argue that reasoners compute and use conditional probability information to carry out conditional reasoning, which suggests that naive reasoners should be at least somewhat sensitive to the factors that affect conditional probability. In two experiments, reasoners solved "Mastermind problems" in which an array of colored buttons and feedback were provided, and participants were asked to make deductions about the colors and their locations in the array. In the first experiment, each problem contained a high-stationarity button that did not "move" in the array, although the conditional probability of its possible location in the array varied. In the second experiment, each problem contained at least one button whose location in the array was logically certain, while the stationarity of the button was varied. Experiment 1 showed that, contrary to the predictions of some dual-process theorists, the likelihood with which people made deductions about stationary buttons was not significantly influenced by those buttons' conditional probability. People with greater deductive competence were also no more likely to make correct logical deductions about stationary buttons than were people with less deductive competence. Experiment 2 showed that people were more likely to conclude that buttons that appeared in the array more frequently were code members than buttons that had appeared less frequently, even when the conditional probability of all buttons in the array was held constant at unity. People were also much more likely to correctly deduce high-stationarity buttons than low-stationarity buttons, again with conditional probability held constant at unity.
The pattern of responses suggests that, at least in this form of conditional reasoning, people rely more on superficial characteristics of the information presented, such as covariation patterns, than on conditional probability computation to make their deductive inferences. These findings are discussed in terms of their implications for some potential "boundary conditions" on conditional probability computation.

Evans (1999) defined reasoning as the ability of untrained individuals to understand and generate logical arguments, that is, inferences in which a conclusion must necessarily follow from some given premises. Such reasoning can occur in many forms, one of which is known as conditional reasoning, that is, reasoning about implications from the word "if." In conditional reasoning studies, a person may be given a conditional syllogism beginning with a premise containing the content "If P then Q." The person is also given an additional premise that either affirms or negates P or Q. The person may then be asked whether any conclusion necessarily follows from that information. In other paradigms, the individual may be given a potential conclusion about the content of the other term that was not included as a premise and asked to determine the likelihood that this conclusion necessarily follows from the premises, if the premises are assumed to be true. For example, consider the following case:

Premise: If P then Q

Premise: ~Q (i.e., "not Q")

Conclusion: ~P

This conclusion necessarily follows from the premises, and the syllogism is thus an example of a valid conditional reasoning process known as Modus Tollens (MT). There are three other conditional reasoning processes, one valid and two invalid. In the following case:

Premise: If P then Q

Premise: P

Conclusion: Q

The conclusion likewise necessarily follows, and so it must also be true if the premises are true. This valid process is known as Modus Ponens (MP). No valid conclusion follows from the conditional premise and "~P". So the reasoner who concludes that "~Q" is necessary in that case has used an invalid process known as Denying the Antecedent (DA). Finally, no valid conclusion follows from the conditional premise and "Q". A reasoner who concludes "P" in that situation has used an invalid reasoning process known as Affirming the Consequent (AC).

Dual-Process Theory. Over the last decade, a loosely affiliated family of theories (e.g., Evans & Over, 2004; Sloman, 1996, 2002; Stanovich & West, 2000, 2002; see Osman, 2004, for a critical review) has sought to discover the factors that enable the execution of such reasoning processes. Dual-process theory (sometimes known as dual-systems theory) explains reasoning in terms of two somewhat independent subsystems, frequently referred to as the analytic and the heuristic systems. Of the two, the analytic system seems to be slower, but more likely to contain processes and results of which the reasoner is aware. This system may operate in situations in which "logical" reasoning is explicitly called for (Osman, 2004; Sloman, 1996, 2002). It may be able to create and manipulate symbolic representations, computing which operations on such representations preserve the logical characteristic of necessity and which do not (Best, 2001). In contrast, the heuristic system consists of processes that operate pragmatically at a preconscious level, determining automatically what becomes represented as "relevant." These representations include both selective features of problem content and relevant associated knowledge retrieved from long-term memory (Evans, 2007; Evans & Over, 2004, p. 7). For example, Sloman (1996, 2002) has theorized that the heuristic system encodes and processes statistical regularities of the environment, along with frequencies and correlations among the various features of the world. Other theorists (Stanovich & West, 2000, 2002) have expressed a similar view.

In their search for specific mechanisms to account for the heuristic system's processing, theorists and researchers have proposed several possibilities. One proposal is that the heuristic system uses covariation information--patterns among stimuli that are characterized in terms of changes, constants, and the consequences of each. For example, Fugelsang and Thompson (2003) investigated causal reasoning (which is related to conditional reasoning) by presenting participants with 2 × 2 contingency tables in which the event frequencies had been manipulated to create high and low ΔP_c values (i.e., differences in the likelihood of the effect given that the cause was either present or absent). Although participants brought other information, such as prior beliefs, to bear in making causal judgments, they were nevertheless exquisitely sensitive to the effects of covariation in the 2 × 2 contingency tables.
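The ΔP_c statistic can be computed directly from the four cell frequencies of such a table. The sketch below is a minimal illustration of that computation; the frequencies are hypothetical placeholders, not Fugelsang and Thompson's stimuli:

```python
from fractions import Fraction

def delta_p(a, b, c, d):
    """Delta-P from a 2x2 contingency table of frequencies:
    a = cause present, effect present;  b = cause present, effect absent
    c = cause absent, effect present;   d = cause absent, effect absent
    Delta-P = P(effect | cause) - P(effect | no cause)."""
    return Fraction(a, a + b) - Fraction(c, c + d)

# Hypothetical high- and low-covariation tables (illustrative numbers only):
high = delta_p(18, 2, 2, 18)   # 18/20 - 2/20 = 4/5
low = delta_p(11, 9, 9, 11)    # 11/20 - 9/20 = 1/10
```

Using exact fractions rather than floats keeps the two conditional likelihoods, and their difference, free of rounding error.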

Conditional Probability. One mechanism in particular that has garnered a great deal of attention as a possible basis for heuristic conditional reasoning involves conditional probability. Evans and colleagues (Evans, Handley, & Over, 2003; Over & Evans, 2003) have argued that the typical conditional reasoning paradigm, in which the premises are given as true, may be fundamentally flawed because few premises in life can be assumed to be true. Instead of this typical paradigm, Evans proposes that conditional reasoning is accomplished heuristically by evaluating the strength of the conditional premise itself in terms of its conditional probability. That is, conditionals are evaluated with a degree of belief equal to the subjective conditional probability of Q given P: the probability that the conditional is judged to be true will be high when the conditional probability of Q given P (i.e., Prob(Q/P)) is high. According to this model, the willingness to endorse conditional inferences can be predicted from the conditional probability of the conclusion given the categorical premise, such that inference rates should be higher for conclusions with greater conditional probabilities. These predictions were generally supported in a series of studies (Evans, Handley, & Over, 2003). Thus, when given frequency information about a hypothetical deck of cards, people's estimates of the probable truth of statements such as "If the card is yellow, then it has a circle printed on it" were strongly predicted by the conditional probability of the consequent given the antecedent, as determined by the frequencies of the cards in the deck. This outcome would be hard to explain if the participants were not actually computing the conditional probability of the conditional statement from the cards' frequencies.

A similar analysis has been applied to each of the other conditional reasoning processes. Thus, under the assumptions made by Evans and others, the rate of acceptance for MP should be closely related to Prob(Q/P), while the rate of acceptance of DA should vary as a function of Prob(~Q/~P). Similarly, computed estimates of Prob(P/Q) should predict participants' willingness to endorse AC, and computed estimates of Prob(~P/~Q) should predict participants' willingness to endorse MT. Oaksford, Chater, and Larkin (2000) used three parameters, Prob(P), Prob(Q), and Prob(Q/P), to generate mathematical expressions for these conditional probabilities and found that inference rates varied as a function of the probabilities derived from those parameters.
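The flavor of this parametric approach can be sketched with standard probability identities. The following is my own reconstruction of that style of computation, not Oaksford et al.'s actual model code, and the parameter values are purely illustrative:

```python
def inference_probs(p, q, q_given_p):
    """Derive the conditional probability relevant to each inference form
    from the three parameters Prob(P), Prob(Q), and Prob(Q/P)."""
    pq = p * q_given_p            # Prob(P & Q)
    notp_notq = 1 - p - q + pq    # Prob(~P & ~Q), by inclusion-exclusion
    return {
        "MP": q_given_p,               # Prob(Q/P)
        "DA": notp_notq / (1 - p),     # Prob(~Q/~P)
        "AC": pq / q,                  # Prob(P/Q)
        "MT": notp_notq / (1 - q),     # Prob(~P/~Q)
    }

# Illustrative parameter values only:
rates = inference_probs(p=0.5, q=0.6, q_given_p=0.8)
```

On the account described above, the four derived values should order participants' willingness to endorse MP, DA, AC, and MT, respectively.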

Ohm and Thompson (2006) did not use this parametric method; rather, they computed the four conditional probabilities directly from participants' probability estimates of the four truth-table cases, expecting a close relationship between inference rates and the conditional probability appropriate for each inference. In other words, participants were given a statement like "If Julian helps Paul with his homework, then Paul will save Julian a seat in class," and then asked to indicate the four probabilities directly (such as the probability that Julian helps Paul and Paul saves Julian a seat, the probability that Julian helps Paul but Paul does not save Julian a seat, etc.), with the stipulation that the four probabilities must add up to 100%. Consistent with the results of Evans and colleagues, Ohm and Thompson (2006) showed that estimates of Prob(Q/P) could be used to predict the truth of pragmatically rich conditionals. Finally, they showed that conditional probabilities derived from truth-table estimates (their term for the probability indication task) could be used to predict inference patterns on a conditional reasoning task. Specifically, Prob(Q/P) predicted the acceptance rate of the MP process, while Prob(~Q/~P) (computed as ~P~Q/(~PQ + ~P~Q)) predicted the acceptance rate for DA. Similar computations were used for AC and MT. Ohm and Thompson found that, in the main, each of the four reasoning processes was associated with, and predicted by, a different conditional probability.
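This style of computation from the four truth-table estimates can be sketched directly. The percentages below are illustrative placeholders, not Ohm and Thompson's data:

```python
def conditional_probs(pq, p_notq, notp_q, notp_notq):
    """Derive the four inference-relevant conditional probabilities from a
    participant's estimates of the four truth-table cases
    (P&Q, P&~Q, ~P&Q, ~P&~Q), stipulated to sum to 100%."""
    assert abs((pq + p_notq + notp_q + notp_notq) - 100) < 1e-9
    return {
        "MP: Prob(Q/P)": pq / (pq + p_notq),
        "DA: Prob(~Q/~P)": notp_notq / (notp_q + notp_notq),
        "AC: Prob(P/Q)": pq / (pq + notp_q),
        "MT: Prob(~P/~Q)": notp_notq / (p_notq + notp_notq),
    }

# E.g., a participant who assigns 50%, 10%, 15%, and 25% to the four cases:
est = conditional_probs(50, 10, 15, 25)
```

Each denominator is simply the marginal for the conditioning event, so, for instance, the DA quantity is ~P~Q/(~PQ + ~P~Q), as in the computation described above.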

Taken together, these findings suggest that the computation of conditional probability may be a powerful tool for the heuristic system to use in reasoning domains in which the premises express pragmatically rich conditional statements. But despite the power and appeal of this approach, little is currently known about what the heuristic system is actually using, or is sensitive to, when conducting conditional reasoning in situations in which neither experiential knowledge nor memory can be applied. For example, the conditional probability account does not specify the conditions under which conditional probability could or could not be computed. Yet there are surely some boundary conditions under which the conditional probability computation would be defeated. Moreover, even if the account is true, knowing these boundary conditions is important for theoretical reasons. These are the concerns that animated the current series of experiments.

The Logical Deduction Game Mastermind and the Heuristic System. The two experiments that follow make use of the logical deduction game Mastermind. This section explains this deductive game and shows how the heuristic reasoning system could implement a covariation strategy, or a conditional probability strategy, to reason in the game.

In the standard version of the game Mastermind, players deduce the identity and location of four hidden colored buttons, known collectively as "the code," which are drawn either with or without replacement from a pool of six colors. When drawn with replacement, the solver must search through a space of 6^4 = 1296 possible codes; without replacement the search space is reduced to 6!/2! = 360 codes. To play, solvers first create their own hypotheses about the code by deploying buttons from the set in whatever sequential positions they wish. After this deployment the solver receives somewhat ambiguous feedback that he or she uses to shape future hypotheses. Each unit of red feedback tells the player that one of the colors that he or she has played is a button of the correct color, played in the correct location in the string. Each unit of white feedback tells the player that one of the colors that he or she has played matches a color in the code, but has not been played in the correct location within the string. In the following example, the single red feedback pin is given for the Red button, and the single white feedback pin is given for the Blue button. The player's goal is to deduce the code, that is, to produce a hypothesis whose feedback is four red feedback pins, by advancing as few hypotheses as necessary.

| Location | 1 | 2 | 3 | 4 | Feedback |
|---|---|---|---|---|---|
| Hypothesis | White | Red | Yellow | Blue | 1 red, 1 white |
| Code | Black | Red | Blue | Green | |
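For codes without repeated colors, the feedback rule just described can be captured in a few lines. The function below is my own sketch of that rule, not code from the article:

```python
from math import perm

def score(hypothesis, code):
    """Return (reds, whites) for a guess against a no-repeat code:
    reds = right color in the right position;
    whites = right color in the wrong position."""
    reds = sum(h == c for h, c in zip(hypothesis, code))
    whites = len(set(hypothesis) & set(code)) - reds
    return reds, whites

# The example above: one red (Red, correctly placed), one white (Blue, misplaced).
print(score(("White", "Red", "Yellow", "Blue"),
            ("Black", "Red", "Blue", "Green")))   # (1, 1)

# The search-space sizes mentioned earlier:
print(6 ** 4)       # 1296 codes with replacement
print(perm(6, 4))   # 360 codes without replacement
```

Subtracting the reds from the color-set overlap yields the whites only because no color repeats within a code; repeated colors would require the multiset version of the rule.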

Given that the subject of the game is "logical deduction," it seems clear that a person's analytic system should be involved in playing Mastermind successfully. Best (2001, 2009) has shown that a person's deductive ability, or deductive competence, is influential in determining the success with which a person plays Mastermind. However, the role of the heuristic system has not been studied extensively. The following section shows how covariation and conditional probability computation could be used by the heuristic system to make deductions in Mastermind.

To see how a covariation strategy could be applied to Mastermind, consider the following array:

| Location | 1 | 2 | 3 | 4 | Feedback |
|---|---|---|---|---|---|
| Hypothesis 1 | Green | Orange | Yellow | Blue | 1 red, 1 white |
| Hypothesis 2 | White | Green | Orange | Blue | 1 red, 1 white |

A reasoner may have advanced these two hypotheses in a particular hypothetical Mastermind game and received this feedback. A reasoner who wishes to consider which of the two buttons, Green or Blue, might be accounting for the red feedback pin may consider both buttons equally likely to be the one scoring the red feedback pin based on what happened in the first hypothesis. But after the second hypothesis, to continue with this analysis, the reasoner must weigh the likelihood that the Green button was scoring the red feedback in the first hypothesis and the Orange button was scoring it in the second, against the likelihood that it was the Blue button in both cases. Based on the pattern of change (the Green and Orange buttons that moved) and constancy (the Blue button that remained constant), the heuristic process of covariation may deduce that the Blue button is accounting for the red feedback and can thus be located in Position 4. That is, the association is based on the positive covariation between the presence of a certain colored button in a specific constant location (its "stationarity") and the awarding of the red feedback pin. Although this conclusion is not necessarily valid, it is certainly plausible. Best (1997) found that people were very sensitive to the number of appearances, the amount of red feedback (i.e., feedback about location), and the "movement" of the colors in the problem when they made deductions from these types of arrays. Specifically, people were very likely to conclude that a color played in the same location in each hypothesis must be located there, as long as the hypothesis was accompanied by at least one red feedback pin.

The reasoner may apply heuristic conditional reasoning to the array by framing the placement of the various buttons in terms of conditional probabilities. Consider the hypothetical game fragment shown above from the standpoint of a reasoner who wished to use a conditional probability heuristic to make inferences. This reasoner may wish to evaluate the truth of the following conditional statement: "If the Blue button is played in Position 4, then I'll receive at least one red feedback pin, *because the Blue button is correctly located in that position*." The italicized text is not necessarily part of the reasoning process, but it is included to show why the reasoner would want to evaluate the truth of the conditional statement in the first place: the reasoner is trying to figure out where the Blue button might be located. Starting from the schematic form:

Prob("If P then Q") = Prob(Q/P)

This expression is enacted as the following:

If the Blue button is played in Position 4, then I'll receive a pattern of feedback that includes a Red feedback pin.

And this statement is in turn equivalent to asking: "What is the probability of receiving this pattern of feedback, one that includes a red feedback pin, given that Blue has been played in Position 4?"

The actual computation of this probability involves finding the number of possible codes that are still "viable" (that is, the number of codes that are consistent with the hypotheses that have been advanced and the feedback that has been given). For example, the candidate code [Wh Or Yw Bl] is not viable because it is not consistent with the pattern of feedback that has been given. Starting from the number of viable codes, the next step is to count those in which a Blue button has indeed been played at Position 4. In this example, there are eight possible codes that are consistent with the hypotheses that have been advanced and the feedback that has been given in the hypothetical game fragment. These eight possible codes are:

Rd Yw Wh Bl
Wh Rd Yw Gn
Wh Rd Yw Or
Yw Rd Wh Bl
Yw Wh Rd Bl
Rd Gn Yw Wh
Wh Or Rd Yw
Wh Bl Yw Rd

Three out of the eight have the Blue button in Position 4, so the probability is 3/8.
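This enumeration can be reproduced mechanically. The sketch below (my own code, not the article's; the two-letter color abbreviations follow the list above) filters all 360 no-repeat codes against both hypotheses and their feedback:

```python
from fractions import Fraction
from itertools import permutations

COLORS = ("Rd", "Yw", "Wh", "Bl", "Gn", "Or")

def score(hypothesis, code):
    """(reds, whites) feedback, valid for codes without repeated colors."""
    reds = sum(h == c for h, c in zip(hypothesis, code))
    return reds, len(set(hypothesis) & set(code)) - reds

# The two hypotheses from the game fragment, each scoring 1 red, 1 white:
hypotheses = [
    (("Gn", "Or", "Yw", "Bl"), (1, 1)),
    (("Wh", "Gn", "Or", "Bl"), (1, 1)),
]

# A code is viable if it reproduces the observed feedback for every hypothesis.
viable = [code for code in permutations(COLORS, 4)
          if all(score(h, code) == fb for h, fb in hypotheses)]

blue_at_4 = [code for code in viable if code[3] == "Bl"]
prob = Fraction(len(blue_at_4), len(viable))
print(len(viable), len(blue_at_4), prob)   # 8 viable codes, 3 with Blue in Position 4 -> 3/8
```

Running the filter recovers the eight codes listed above, three of which place Blue in Position 4, giving the conditional probability of 3/8.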

Overview of Experiments and Hypotheses. In the two studies that follow, people first reasoned their way through a series of abstract, decontextualized conditional reasoning problems to establish their use of the valid (MP & MT) and invalid (DA & AC) conditional reasoning processes. After playing one conventional game of Mastermind, which was described as a game involving logical deduction, the participants saw a series of Mastermind problems (arrays like those above), and they had an opportunity to draw the inferences that they believed were valid. In the first experiment, participants solved a series of Mastermind problems in which the conditional probability of a particular button at a particular location was varied, while the stationarity (the constancy of the button's position in the array) was held at a high, constant value. If people are indeed sensitive to the variables that influence the conditional probability of a button's being located at a specific site in the Mastermind array, then the variability of the conditional probability should be reflected in the certainty of the participants' deductions. If, however, the participants' certainty about the stationary button remains high across the Mastermind problems, that suggests that people are not sensitive to the factors that affect conditional probability other than the covariation factor. This hypothesis was examined in Experiment 1. An alternative possibility is that people realize that the task requires logical deduction and therefore cannot be accomplished by the heuristic system. In that case, people who have logical ability, or deductive competence (Best, 2009), should be better able to track conditional probability than people who are less deductively competent.
In the second experiment, participants solved a series of Mastermind problems in which the deductive status of a particular button was held constant (that is, it was possible to deduce that a particular button must necessarily be assigned to a particular location), but the stationarity was varied. When a button can be assigned to a particular location, its conditional probability is necessarily 1.0. If people can compute conditional probability in the absence of stationarity, then the hypothesis is that they should be able to deduce the correct location of particular buttons in the array. An inability to do so suggests instead that people are actually sensitive to the covariation or simple associationistic information produced by stationarity differences rather than actual conditional probability differences.

Experiment 1: High Stationarity Buttons in Non-Determinant Problems

The participants solved a series of Mastermind problems in which the conditional probability of a particular button at a particular location was varied, while holding the stationarity of that button at a high, constant value. The study's objective was to determine people's ability to detect differences in conditional probability, when conditional probability was varied independently of some of the other cues (such as covariation) that are typically associated with it.

Method

Participants. Twenty-four traditionally-aged university students (5 men, 19 women) enrolled in upper-division Psychology courses at Eastern Illinois University voluntarily participated in return for extra credit in their courses, amounting to 1-2% of their total grade.

Materials and design. The experiment was carried out in three phases. In the first phase, the participants' logical reasoning capability was assessed with a set of 16 conditional reasoning problems, each of which had the same format. Participants read two "initial statements" of text that contained the problem's premises, followed by a line of text representing the problem's conclusion. One of the initial statements was always a conditional, if-then statement, and the other initial statement affirmed or negated one of the if-then statement's clauses. The conclusion then affirmed or negated the if-then statement's other clause.

Participants checked which of three alternatives they believed best expressed the conclusion's likelihood of being true, under the clearly-explained assumption that the information in both of the initial statements must be true: Alternative 1: It was impossible for the conclusion to be true; Alternative 2: The conclusion could be true or false--it was equally likely; and Alternative 3: The conclusion must be true. Eight problems used the "People in Cities" format (Johnson-Laird, Byrne, & Schaeken, 1992) and eight used the "Imaginary Blackboard" format (Braine & O'Brien, 1991). In the People in Cities format, the if-then statement described two hypothetical individuals who were always located in actual US cities, such as "If Alice is in Nashville, then Bob is in San Francisco." In the Imaginary Blackboard format, the if-then statement described the presence of specific numbers on an unseen blackboard, such as "If there is a 2 written on the blackboard, then there is a 7 written on the blackboard." Each group of eight problems contained two problems representative of each of the four conditional reasoning processes: Modus Ponens (MP), Modus Tollens (MT), Denying the Antecedent (DA), and Affirming the Consequent (AC). Each problem appeared on its own piece of printer paper. The two content areas were counterbalanced across the packets; the order of reasoning processes was randomized within each content area.

In the second phase, the participant was introduced to the logical deduction game and played one Mastermind game using the standard plastic tokens of the commercially available game. The "reduced space" of codes (360 possible codes) was used in both experiments.

In the third phase the participants were presented with four "Mastermind problems". Each problem consisted of three hypotheses and the appropriate feedback. These problems were presented to the participant as the beginning fragment of a hypothetical Mastermind game. Each problem was organized around a specific color that was played in the same location in each of the problem's three hypotheses.

This specific color is termed the "stationary" button, because it appeared in each of the problem's hypotheses in the same location. For each of the four problems there are a number of possible sequences of colors that are consistent with the hypotheses that have been played and consistent with the feedback that has been given. The sequences of colors that are consistent with the hypotheses and feedback constitute the set of potential codes that are still "viable" at that point in the play of the hypothetical Mastermind game. For each problem this set of potential codes was computed, essentially by "passing through" the Mastermind problem the entire set of 360 codes available at the outset.

By varying the number and placement of different colors that were played in each problem, it was possible to construct problems in which the number of viable codes remaining after three hypotheses differed. For some problems, a fairly high proportion of the remaining codes contained the stationary button at the same location in which it appeared in the problem. For other problems, the proportion of remaining codes containing the stationary button at the location in which it had appeared in the problem was low. In this way, the conditional probability of the stationary button's assignment to its location in the problem was varied while holding the number of appearances and location of the stationary button constant.

Table 1 shows the number of potential codes for each problem, and the number of those potential codes in which the stationary button can be found at the location in which it was played in the game. Physically, each Mastermind problem appeared on its own piece of paper, along with a response grid for the participants to indicate their deductions. An example of one of the problems appears in Appendix A.

Procedure. Participants were run individually. After being greeted by the experimenter, each participant was told that the experiment would consist of three phases. The experimenter first described the set of 16 conditional reasoning problems. The participants were instructed that all conditional statements in the problems were perfectly true "in the logical world of the study," and that they should use logical processes in answering. While the participants worked on these problems at their own pace, the experimenter left the lab for 10 minutes. After the participants completed the conditional reasoning problems, the experimenter explained the game of Mastermind, showing examples using the game's plastic tokens. The participants were told that they would play one game of Mastermind, and that the code would consist of four different colors (i.e., no repeated colors) drawn randomly from the pool of six colors. The experimenter also explained that participants would be permitted to advance up to 10 hypotheses to deduce the code. The experimenter stated that Mastermind was a game of logical deduction whose objective was to deduce the code in as few hypotheses as possible, not necessarily in the least amount of time. There was no overt time pressure to play quickly.

In the third phase, the participants were told that the information displayed represented the hypotheses of a Mastermind game that had already been played. The participants were told that their job in this phase of the experiment was to review the hypotheses that had been advanced, along with the feedback. Based on that information, the participants were instructed to deduce as much as they could about the color and placement of the buttons involved in the code. The participants indicated their responses on a form having two main sections. For each of the six colors in the pool, the participants indicated the likelihood of that color's being in the code by marking a spot on a bar with numbered indicators ranging from "0" to "100" in increments of ten. The participants were instructed to think of these indicators as percentages. For example, the participants were told to mark any color that they were positive was not in the code "0", and to mark any color they were positive was definitely in the code "100." Numbers between these values were to be used to indicate varying degrees of certainty in either direction with "50" indicating maximal uncertainty. The participants were invited to consider each color's inclusion independently in this way, and these ratings were used as the basis for the participants' "Inclusion Likelihoods" that were subsequently analyzed.

In the second section of the response form, the participants indicated their reasoning about a putative code member's position in the code. For each color whose Inclusion Likelihood as a code member the participant marked as 80 percent likely or higher on the form's top portion, the participant indicated where he or she thought the color was actually located in the code on a 4-place location matrix. For colors whose location the participant had positively deduced, he or she wrote "100" (i.e., perfectly certain) next to the number (1, 2, 3, etc.) corresponding to the location where the participant was sure the color was located. For colors whose location the participant was less sure of, he or she wrote some combination of numbers adding up to 100 next to the numbers (1, 2, 3, etc.) to indicate at which location he or she thought the color was most likely to be located. These deductions were used as the basis for the participants' "Assignment Certainty" scores. The participants were not under any time constraints in any of the three phases; most of the participants required approximately 1 hour to complete the experiment.

Results

Performance on Mastermind game. Approximately 75% of the participants were unfamiliar with the Mastermind game. Previous research (Best, 1990) has indicated that participants who report being unfamiliar with the game perform at the same level as those who are familiar with it. The single game of Mastermind was solved by 18 of the 24 participants (75%); the mean number of hypotheses advanced by those who solved it was 6.78. Both figures are broadly consistent with previous research on Mastermind play by novices (e.g., Best, 1990), suggesting that the single played game was sufficient to establish the minimal competence needed to understand the Mastermind problems.

Performance on logical deduction problems. Deductive competence was scored by awarding one point for each correctly answered problem from the set of 16 conditional reasoning problems. The overall mean number correct was 7.5, SD = 2.72 (maximum = 16, maximum for each reasoning process = 4, Modus Ponens M = 3.20, SD = .98, Modus Tollens M = 1.58, SD = 1.53, Denying the Antecedent M = 1.13, SD = 1.15, Affirming the Consequent M = 1.50, SD = 1.38). Higher numbers on the invalid reasoning processes (DA & AC) indicate that the participant avoided making those errors when he or she was faced with such a problem.

Tracking conditional probability. Each problem had one high stationarity button, that is, a button that was played in the same location in the array on each of the three hypotheses making up the problem. For each of these particular buttons, therefore, the number of appearances and covariation patterns were constant. But the conditional probabilities of the stationary buttons differed, as shown in Table 1. These conditional probability differences influenced the button's logical status. If the conditional probability is 1.0 (as it is for Problem 4), then the stationary button can be logically deduced at the position where it is played. But if the conditional probability is lower (as it is for Problems 2 and 3), then the stationary button cannot necessarily be logically deduced at the position where it appears in the problem. For Problem 1, in which the conditional probability is 0, the stationary button cannot possibly be located at the position in which it appears in the problem. If the heuristic system is "tracking" (i.e., is sensitive to) conditional probability computation, then these differences should impact the reasoner's willingness to conclude that the stationary button can actually be logically deduced as being located in the position in which it has appeared in the problem array.

To measure the possible impact of conditional probability on deduction, the data were scored in the following way. For each button in the problem that the participant reasoned was likely to be a code member (Inclusion Likelihood = 80% or higher), the participant was instructed to indicate an Assignment Certainty. If the participant was certain (i.e., had logically deduced) that a code member was deducible at a particular position in the code, then he or she marked "100" (indicating 100% sure) in the position for that button. The participant used numbers lower than 100 to indicate lower degrees of certainty about the deduction at that position, down to a lower bound of "25" (indicating 25% for each of the four positions, meaning that the participant was completely unsure about the position of the button). Of the 24 participants, seven did not give Inclusion Likelihood numbers of 80% or higher for at least one of the stationary buttons, and so these participants lacked Assignment Certainty scores for one or more of the four problems. These participants' data were excluded from the following analysis.

A one-way repeated measures ANOVA was carried out on the Assignment Certainty scores for the stationary button at its constant position (the position in which it was played on each of the problem's three rows). The hypothesis here is that the participants' Assignment Certainty scores should track conditional probability, with higher Assignment Certainty scores associated with greater conditional probability. However, this hypothesis was not supported, F(3,48) = 1.40, p = .256, ηp² = .08. Table 2 shows the mean Assignment Certainty score for each of the stationary buttons, aligned with its conditional probability. Although there was a trend for the participants to be somewhat less certain about the deduction of stationary buttons whose conditional probability was lower, the trend was not significant. Moreover, the stationary button whose assignment to its position in the problem was logically impossible (the Red button in Problem 1) was still assigned to this position with slightly more than 90% certainty. In addition, although the conditional probabilities of the stationary buttons spanned the full range from 0 to 1.00 across the set of problems, the range of mean Assignment Certainties for these buttons was much narrower: the difference between the highest and lowest mean Assignment Certainty scores was approximately 10%. This suggests that any tracking that may have been going on was dramatically attenuated relative to the changes in actual conditional probability.

Conditional reasoning processes and conditional probabilities. Although three of the four problems used in this set were not entirely logically deducible, the instructions given for the Mastermind game emphasized the importance of logical deduction. For this reason, the participants may have made the reasonable inference that the Mastermind problems could be solved with logic. Under this assumption, the hypothesis is that people who have at least one element of deductive competence (i.e., they can recognize and apply the valid conditional reasoning processes MP and MT) will do a better job of tracking conditional probability than will people who have less of this form of deductive competence. People with deductive competence are more likely than those without it to realize that, in logic, computing the conditional probability accurately is equivalent to a computation of logical necessity. In part, this is what is meant by the term "deductive competence": an ability to carry out the cognitive operations and manipulations on symbolic tokens such that the property of necessity over that set of tokens is retained or lost.

To test this hypothesis, each participant's score for the two valid conditional reasoning processes (Modus Ponens and Modus Tollens, maximum score = 4 for each process) was correlated with the participant's Assignment Certainty for the stationary button in each of the four problems. If the participant is applying valid conditional reasoning processes to solve the Mastermind problems, then scores on MP and MT processes should be significantly negatively correlated with Assignment Certainty of the stationary button on each Mastermind problem in which the conditional probability of the stationary button is .50, because theoretically the participant could use valid conditional reasoning processes to determine that the stationary button is not necessarily located in the position in which it appears on those problems and by doing so, lower his or her Assignment Certainty score. Conversely, scores on MP and MT should be significantly positively correlated with Assignment Certainty of the stationary button on the Mastermind problem in which the conditional probability of the stationary button is 1.0, because correct conditional reasoning can determine that the stationary button must be located in the position in which it appears. These correlations were computed for each participant who had an Assignment Certainty score on the problems in question, and this number ranged between 18 and 24 participants. The results of this analysis are shown in Table 3. First, although some of the correlations were in the predicted directions, contrary to expectations, none of the correlations were statistically significant. 
Correlations between MT and Assignment Certainty scores on the two Mastermind problems in which the conditional probability of the stationary button was .5 were negative, which is consistent with the idea that the participants were using valid conditional reasoning processes, but correlations between MP and Assignment Certainty scores on the same problems, which should also be negative, were actually weakly positive. Modus Ponens is correlated positively with Assignment Certainty scores on the Mastermind problem in which the conditional probability of the stationary button is 1.0 as expected, but contrary to expectations, the correlation between MT and Assignment Certainty on the same problem is negative.

Although there were no predictions about it, the correlations between MP and MT and the Assignment Certainty score for the Mastermind problem in which the conditional probability of the stationary button is 0 are also shown in Table 3 for the sake of completeness. These correlations show the same pattern as the previous ones (i.e., positive for MP and negative for MT). Although this pattern of correlations may be interpretable, there is certainly no clear evidence in the overall pattern to suggest that the participants who had one element of deductive competence (i.e., they could agree with the valid conclusion of a conditional reasoning problem when it was presented to them) had shifted to this mode to solve the Mastermind problems.

Discussion

If conditional reasoning depends on a computation of conditional probability, then it seems clear that people should be able to compute conditional probabilities, at least under some circumstances, and they should also, therefore, be sensitive to the variables that influence conditional probability. This leads to the hypothesis that people should be sensitive to differences in conditional probability, that is, they should be able to "track" conditional probability. Moreover, they should be able to use information about conditional probability to draw inferences in a task requiring conditional reasoning. This expectation was tested by asking people to reason (i.e., draw whatever valid conclusions they could) over a series of "Mastermind problems" in which the conditional probability of a button's assignment to a particular location was manipulated independently of its salience and number of appearances. However, as the ANOVA showed, contrary to the hypothesis, the stationary button's Assignment Certainty score was not clearly associated with higher conditional probability. When the conditional probability of the stationary button was 1.00 (CP = 1.00), the mean Assignment Certainty was at its highest (92%), and this value was marginally higher than the Assignment Certainty scores for the two CP = .5 problems (84% and 82%). But the CP = 0 problem had the second highest Assignment Certainty (90%), and this was only marginally lower than the Assignment Certainty for the CP = 1.00 problem. In other words, the participants were almost as sure about a button that couldn't possibly be located in its position as they were about a button that had to be located in its position. With regard to the stationary button, there is not much evidence that people were engaged in any sort of reasoning that was influenced by the stationary button's conditional probability.

One plausible countervailing interpretation is based on the notion that the task was described to the participants as a logical deduction task. Under these circumstances, it might be reasonable to assume that the participants were using the instructions as a basis to override their heuristic system (which is the one that might be housing their conditional probability computation) with their analytic system. This line of reasoning led to the hypothesis that individuals who have deductive competence should be better at tracking conditional probability than are people with less deductive competence because people with deductive competence may be using their analytic systems in their efforts to solve the Mastermind problems. However, this hypothesis was not supported: Assignment Certainty was not significantly positively correlated with scores on MP or MT reasoning when the conditional probability of the stationary button was 1.0, nor was Assignment Certainty significantly negatively correlated with MP and MT reasoning scores when the conditional probability of the stationary button was .5. These findings strongly suggest that the participants were not shifting to a primarily analytic mode to solve the Mastermind problems. This result has some implications for the previous finding, namely the failure to find a linkage between conditional probability and Assignment Certainty. If the participants had shifted out of their heuristic system to solve the Mastermind problems, then such a shift might explain why the participants were not tracking conditional probability in their Assignment Certainty scores: Only the people with deductive competence who were using their analytic system to solve the Mastermind problems should show the tracking effect. 
But the failure to observe a correlation between deductive competence (as measured by scores on MP and MT problems) and Assignment Certainty suggests even the participants with deductive competence had not shifted to the analytic system to solve the Mastermind problems. It may be that the cues in the task that were needed to provoke the shift to the analytic system were missing here. Best (2009) found that explicit cues could bring about a shift from the heuristic to the analytic system, and could also partially disable the activity of the heuristic system, when the heuristic system could actually be used to solve a Mastermind problem. But the cues in those experiments were triggered by the experimenter and were very salient. In the current experiments, the cue to shift to the analytic system may have been much more subtle, and therefore more easily missed, even by reasoners who possessed sufficient deductive competence to compute the conditional probability accurately.

Experiment 2: Low Stationarity Buttons in Logically Determinant Problems

Experiment 1 showed that people do not appear to track variations in conditional probability when stationarity is held constant in a complex conditional reasoning task. However, Experiment 1 did not show what might happen when stationarity was itself allowed to vary and conditional probability was held constant. As in Experiment 1, reasoners solved a set of four Mastermind problems. But in this set, the stationarity of one button was varied, while the conditional probability was held constant in three problems. The hypothesis is that people are more sensitive to the number of times a particular color appears in the array than they are to any particular color's conditional probability. This means that reasoners will be more or less likely to deduce that a particular color should be included in the code as a function of the number of times the color appears in the array, even though the conditional probability of the color is high.

Method

Participants. Thirty-three traditionally-aged university students (10 men, 23 women) enrolled in upper-division Psychology courses at Eastern Illinois University voluntarily participated in return for extra credit in their courses, amounting to 1-2% of their total grade.

Materials, Design, and Structure of Mastermind Problems. The experiment was carried out in three phases, the first two of which were identical to Experiment 1. In the third phase the participants were presented with four "Mastermind problems," similar to those presented in Experiment 1, but with some important differences. First, in Experiment 2 each problem consisted of four (rather than three) hypotheses and the appropriate feedback. More importantly, whereas each problem in Experiment 1 had a stationary button that was played in the same location in each hypothesis, only one problem in the current set was organized that way. Thus, unlike the problem set in Experiment 1, where the stationarity of one button was constant, but its conditional probability was varied, in Experiment 2, the stationarity was varied, but the conditional probability was held constant. This was accomplished by creating three problems that were logically determinant, that is, capable of being deduced by a reasoner whose deductive powers were sufficient to do so. In that case, there is only one code that is consistent with the hypotheses that have been advanced and the feedback given. Applying the conditional probability analysis that was used in Experiment 1, the conditional probability that a particular button can be located in a specific position in a logically determinant problem is 1.0. One problem in the set of four was not logically determinant. For Problem 1, a button was played in each of the four hypotheses in the same location, thus creating a high stationarity problem similar to those used in Experiment 1. For this problem, there were three codes that were consistent with the hypotheses advanced and feedback given. In two of those three codes, the button appeared in the same location that it had in the hypotheses, thus making the conditional probability for this high stationarity button .67.
The use of this problem enables a replication of one of the findings of Experiment 1: If participants are sensitive to stationarity, as they appeared to be in Experiment 1, then they should indicate with greater confidence that the high stationarity button can be located in its apparent position (the position where it was played in each hypothesis) compared to the less stationary, but logically-deducible, buttons in the other three problems. Table 4 shows the structure of the problems used in Experiment 2. Physically, each Mastermind problem appeared on its own piece of paper, along with a response grid for the participants to indicate their deductions. An example of one of the problems appears in Appendix B.

Procedure. The details of the procedure for Experiment 2 were the same as for Experiment 1. Participants required approximately 45-60 minutes to complete the experiment.

Results

Performance on Mastermind game. The single game of Mastermind that was played was solved by 26 of the 33 participants (79%); the mean number of hypotheses advanced by those who solved was 5.31. Both figures are broadly consistent with previous research on Mastermind play by novices (e.g., Best, 1990), suggesting that the single played game was sufficient to establish a minimal competence level to understand the Mastermind problems.

Performance on logical deduction problems. Deductive competence was scored by awarding one point for each correctly answered problem from the set of 16 conditional reasoning problems. The overall mean number correct was 10.27, SD = 3.09 (maximum for each conditional reasoning process = 4, Modus Ponens M = 3.79, SD = .48, Modus Tollens M = 2.85, SD = 1.18, Denying the Antecedent M = 1.64, SD = 1.48, Affirming the Consequent M = 2.0, SD = 1.58).

Performance on Mastermind problems: Inclusion likelihood. As in Experiment 1, there were two principal performance measures recorded on participants from the set of Mastermind problems. First, participants indicated on their response forms how likely they thought it was that each color in each problem was a member of the code (Inclusion Likelihood). For colors they thought must be located somewhere in the code (i.e., definitely codemembers), participants indicated "100%"; for colors that the participants were positive were not codemembers, they indicated "0%." The participants used numbers in between these values to indicate various degrees of certainty about each color. Second, for colors whose Inclusion Likelihood the participants rated at 80% or higher, they indicated where they thought that color was actually located in the code on a 4-place location matrix. For colors whose location the participant had positively deduced, he or she wrote "100" next to the number (1, 2, 3, etc.) corresponding to the location where the participant was sure the color was located. For colors whose location the participant was less sure of, he or she wrote some combination of numbers adding up to 100 next to the numbers (1, 2, 3, etc.) to indicate at which location he or she thought the color was most likely to be located. These numbers, the participants' Assignment Certainty scores, were the second measure taken on the participants from the Mastermind problems. Thus, the closer the participant's Inclusion Likelihood is to 100 for any given color, the more sure he or she is that the color is included in the code; the closer the Assignment Certainty is to 100 for any particular position, the more sure the participant is that the code member is located at that position.

Based on the results of Experiment 1, I hypothesized that the participants are far more sensitive to the effects of appearance number (the number of times a color appeared in the Mastermind array) than they are to conditional probability in making determinations of Inclusion Likelihood. A given color can have an appearance number ranging from a maximum of "4" (such a color appeared in each row of the problem's four rows) down to "1" (this color appeared only one time in one row in the problem). To assess the effects of appearance number on Inclusion Likelihood, the participants' mean Inclusion Likelihoods were computed for each of the codemember colors that appeared four times (in various positions), three times, and so on, in the three determinant problems (the single logically-indeterminant problem in this set was excluded from this analysis). These mean Inclusion Likelihood ratings were then analyzed in a one-way, repeated-measures ANOVA. This result is shown in Figure 1. The participants' Inclusion Likelihoods for codemember colors varied significantly as a function of the number of times the color appeared in the problem array, F(3,96) = 19.03, p < .001, ηp² = .373, even though the conditional probability of all the codemembers in these three determinant problems was held constant at 1.00. Generally, as the number of appearances increased from 1 to 4, the participants showed higher Inclusion Likelihoods for the color, with the exception of the codemembers that appeared 2 times in the array and had a higher-than-expected mean Inclusion Likelihood.
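The appearance-number measure is simply a count of how many times each color occurs across the rows of the problem array. A minimal sketch, using a hypothetical four-row array rather than one of the actual problems:

```python
from collections import Counter

# Sketch of the appearance-number count used in this analysis. The rows
# below are a hypothetical four-hypothesis array, not an actual problem.
rows = [("red", "blue", "green", "white"),
        ("red", "blue", "white", "green"),
        ("red", "green", "blue", "yellow"),
        ("red", "white", "blue", "green")]

# Count every occurrence of every color across all rows of the array.
appearances = Counter(color for row in rows for color in row)
print(appearances["red"])     # 4 (appears in every row)
print(appearances["yellow"])  # 1 (appears only once)
```

Mean Inclusion Likelihood was then computed separately for the colors at each appearance-number level (1 through 4) within the three determinant problems.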

This finding suggests that the Inclusion Likelihood for nonmember colors could also vary as a function of appearance number. To assess the effects of appearance number on Inclusion Likelihood for nonmembers, the participants' mean Inclusion Likelihoods were computed for each of the nonmember colors that appeared four times (in various positions), three times, and so on, in the three determinant problems. These mean Inclusion Likelihood ratings were then subjected to a one-way, repeated-measures ANOVA. As Figure 1 shows, the participants' Inclusion Likelihoods for nonmember colors varied significantly as a function of the number of times the color appeared in the problem array, F(2,56) = 19.33, p < .001, ηp² = .408, even though the conditional probability of all the nonmembers in these three determinant problems was held constant at 0. As the number of appearances increased from 1 to 4, the participants showed higher Inclusion Likelihoods for the color. At the highest appearance number (4) the participants' Inclusion Likelihoods were basically no different for codemembers than they were for nonmembers, even though the conditional probability for the codemembers was 1.0 and the conditional probability for the nonmembers was 0.

[FIGURE 1 OMITTED]

Performance on Mastermind problems: Assignment certainty. For each color in the problem that the participant reasoned was likely to be a code member (Inclusion Likelihood = 80% or higher), the participant was instructed to indicate an Assignment Certainty. If the participant was certain (i.e., had logically deduced) that a codemember was deducible at a particular position in the code, then he or she marked "100" (indicating 100% sure) in the position for that button. The participant used numbers lower than 100 to indicate lower degrees of certainty about the deduction at that position, down to a lower bound of "25" (indicating 25% for each of the four positions, meaning that the participant was completely unsure about which particular position the button was located in).

Each of the four Mastermind problems had at least one color button that appeared in each of its four rows (the high appearance button). In the case of the single logically indeterminant problem, the high appearance button appeared in the same position in each of its four rows.

Thus the stationarity for this high appearance color button was 4 (appearances) / 1 (number of different positions the color appeared in), or simply 4. For the other three problems used in this experiment, each of which was logically determinant, the stationarity of the high appearance color button was varied downward by having the high appearance button "move" to different positions in different rows of the Mastermind array. For example, the stationarity of a high appearance button that was played 4 times, but in 3 different positions in the problem array was 4/3 or 1.33. In this way, the stationarity of the high appearance color button was varied while holding its logical status, and hence its conditional probability, constant. Because three of the Mastermind problems used in this set were logically determinant, it was possible for the participants to correctly determine what position each color should be assigned to, regardless of the amount of its "movement" in the problem array. Based on the results of Experiment 1, the hypothesis regarding the relationship of conditional probability and stationarity is that the participants are actually tracking the movement of the high appearance color to a greater extent than they are computing its conditional probability. Hence, as the movement of the high appearance button increases, and its stationarity thus decreases, the participant's ability to correctly locate the button in the array should decrease, though the conditional probability of the button in the three logically determinant problems has been held constant at 1.0.
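The stationarity index defined above (number of appearances divided by the number of distinct positions occupied) can be sketched as:

```python
# Stationarity as defined above: the number of appearances of a color
# divided by the number of distinct positions in which it appeared.

def stationarity(positions_played):
    """positions_played: one array position per appearance of the color."""
    return len(positions_played) / len(set(positions_played))

# A button played four times, always in position 2 (high stationarity):
print(stationarity([2, 2, 2, 2]))            # 4.0
# A button played four times across three different positions:
print(round(stationarity([1, 3, 1, 4]), 2))  # 1.33
```

A perfectly stationary button played in all four rows thus scores 4, and the score falls toward 1 as the button "moves" into more distinct positions.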

Of the 33 participants, 15 did not give Inclusion Likelihood numbers of 80% or higher for at least one of the high appearance color buttons, and so these participants lacked Assignment Certainty scores for one or more of the four problems. For this analysis, any missing values were imputed using the "Replace Missing Values" routine in SPSS 17®; however, very similar results were obtained when the missing cases were excluded. Consistent with the hypothesis, the participants' assignment of buttons to their logically correct locations was influenced by the button's stationarity, even when the conditional probability of the high appearance button was held constant at 1.0, F(3,96) = 22.41, p < .001, ηp² = .412.

Figure 2 shows the participant's assignment certainty as a function of stationarity. In general, the participants were more likely to correctly assign the high appearance button to its correct location when its stationarity was high (that is, the button did not move around very much in the array), even though the conditional probability of the high appearance button for the three problems on the left of Figure 2 was constant at 1.0. The assignment certainty for the problem in which the stationarity was 1.33 was somewhat anomalously high (84%), and inconsistent with the results for the other three problems. Replicating and supporting the results of Experiment 1, the participants' assignment certainty was highest (86%) for the problem in which the high appearance button was the most stationary (the result on the right of Figure 2), which was interesting because this was the only problem in this set in which the conditional probability of the high appearance button was actually .67, thus making it the lowest of the conditional probabilities in this set of four problems.

[FIGURE 2 OMITTED]

Assignment Certainty and Logical Reasoning Processes. The results of Experiment 2 thus far have shown that peoples' inferences on the Mastermind problems are not particularly sensitive to the variables that actually influence conditional probability, and this in turn suggests that, at least under some circumstances, people may not be making conditional probability computations in conditional reasoning. One alternative explanation that needs to be explored has to do with the logical nature of the Mastermind problems themselves. Given that the Mastermind game is accurately described as being based on logical deduction, it may be the case that when people realize that the conditional probability computation is going to be very demanding of working memory and other resources, they turn instead to logical reasoning, which, although effortful itself, might be easier than the conditional probability computation in this case. Under that interpretation, it may be that the participants are not making conditional probability computations in this case because they are engaged in logical reasoning, involving computation of logical necessity rather than conditional probability, and thus using the analytic rather than the heuristic system. If that's the case, then the Assignment Certainty score of the high appearance button in each of the three deducible problems should be positively correlated with the participant's scores on the conditional reasoning task that the participant completed in Phase 2 of the experiment. Each of the four conditional reasoning processes (i.e., the two valid processes, MP and MT, and the two logically invalid processes, DA and AC) was measured four times in the set of 16 problems.
The participant's score on each process, ranging from 0 to 4, represented the number of correct answers he or she gave to problems measuring valid conditional reasoning (MP and MT), and the number of correct answers in which the participant avoided giving logically invalid answers (DA and AC). Each participant's score on each of these four processes was correlated with his or her Assignment Certainty score for the high appearance button in each of the three deducible problems.

These correlations are shown in Table 5. As Table 5 shows, contrary to the expectation that the participants may have shifted to using the analytic system under conditions of burdensome conditional probability computation, there were no significant positive correlations between the logical processes and Assignment Certainty scores of the high appearance button on the three deducible problems. The reasoning processes themselves were not generally positively correlated--a somewhat unexpected and previously unreported result. With the exception of the significant correlation between the DA and AC processes (which suggests that people who were better at avoiding one form of invalid conditional reasoning were also better at avoiding the other form as well), there was no significant correlation between any of the reasoning processes.

Discussion

The Inclusion Likelihood results showed that people were much more likely to include particular colors as being code members when the colors appeared more frequently in the array than when they did not, even though the conditional probability of the code members was held constant at 1.0 for the code members of the three deducible problems.

This phenomenon was also observed for the nonmembers: When colors whose conditional probability was 0 appeared more frequently in the array than did other nonmembers, people were significantly more likely to conclude that the more frequently appearing colors were code members. For the colors that appeared the maximum number of times in the array (four times), the Inclusion Likelihood means were no different for code members than they were for nonmembers, even though the conditional probability for the codemembers was 1.0 and the conditional probability for the nonmembers was 0. With regard to determining which colors were code members therefore, peoples' reasoning did not appear to be dependent on a computation of conditional probabilities.

Turning to Assignment Certainty, the findings show that when a color button was played the maximal number of times, its lack of "movement" in the array was significantly more predictive of the participant's ability to correctly assign it to its location in the deducible problems than was its conditional probability. In general, buttons that "moved around" more (low stationarity) were less likely to be correctly assigned to their location than were buttons that didn't move as much (high stationarity), even though the conditional probability was 1.0 for all code members in the three deducible problems. As the results showed, people had the highest Assignment Certainty (86%) for the color button that moved the least, even though its conditional probability was actually lower than 1.0.

One alternative interpretation of the results has to do with the analytic system. If the conditional probability computation is too challenging, then perhaps participants shift to the analytic system to do their reasoning. Based on the correlational analysis of the participants' scores on the conditional reasoning problems they faced in Phase 2 and their Assignment Certainty scores on the three deducible problems they attempted in Phase 3, there is not much evidence in the study that the participants were treating the Mastermind problems as a logical task, even though they were instructed to do so. Not only do the participants appear not to be computing conditional probability; even the participants who could have used logic to solve these problems were apparently not doing so. Because a logical analysis would clearly involve using the analytic rather than the heuristic system, the absence of significant correlations between the conditional reasoning problems and the Assignment Certainty scores strongly suggests that people were not simply shifting to the analytic system to solve the Mastermind problems. Rather, it seems to be the case that the participants are continuing to use their heuristic systems to compute their responses on the Mastermind problems. But these responses appear to be based on simpler heuristics such as the number of appearances, and the "movement" of certain colors in the array, rather than actual conditional probabilities.

One objection to this interpretation might be based on an assertion that conditional reasoning is not involved in playing Mastermind, or in solving Mastermind problems. Under that view, even if the participants did shift to their analytic systems, their conditional reasoning ability would not come into play. However, Best (2001) showed that performance on conditional reasoning problems is positively correlated with success in playing Mastermind. This finding in turn suggests that if the participants had shifted to their analytic systems in order to assign the high-appearance color in the three deducible Mastermind problems, the correlational analysis would have picked up that shift, with significant correlations between at least some of the reasoning processes and Assignment Certainty.

GENERAL DISCUSSION

The dual-process approach to understanding conditional reasoning (Evans, 2007; Evans & Over, 2004; Sloman, 1996, 2002; Stanovich & West, 2000, 2002) postulates that reasoning can be accomplished either by a fast, unconscious, domain-specific heuristic system, or by a slower, deliberate, domain-general analytic system. Generally, the heuristic system dominates the analytic system in reasoning situations: In addition to being slow, the analytic system is also effortful, and, although sometimes less accurate, the heuristic system's output seems "good enough" in most situations. The dual-process approach has not been very specific about the mechanisms by which the heuristic system might operate, but there are several candidates. For example, there is an extensive literature (e.g., Gigerenzer & Hoffrage, 1995) showing that likelihood judgments are very sensitive to frequency information, and that these judgments occur quickly and fairly effortlessly. Other research (Fugelsang & Thompson, 2003) has shown that people can make causal judgments by attending to covariation information presented in 2 x 2 contingency tables. The suppositional account of Evans and colleagues (Evans, Handley, & Over, 2003; Over & Evans, 2003) turns the question of conditional reasoning around. Evans theorizes that conditional reasoning is accomplished heuristically by evaluating the strength of the conditional premise itself in terms of its conditional probability. That is, Evans proposes that conditionals are evaluated with a degree of belief equal to the subjective conditional probability of Q given P. In a series of studies (Evans, Handley, & Over, 2003), Evans found that people's estimates of the probable truth of conditional statements like "If P then Q" were strongly affected by changes in frequency information that in turn affected the conditional probability of Q given P, or Prob(Q/P).
Such an outcome seems difficult to explain without recourse to conditional probability computation.
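On the suppositional account, a reasoner's degree of belief in "If P then Q" tracks Prob(Q/P), which can be read directly off presented frequencies. A minimal sketch of that computation, using an entirely hypothetical card distribution in the spirit of the Evans, Handley, and Over (2003) materials:

```python
# Hypothetical joint frequencies of card (color, shape) pairings;
# the counts are invented for illustration only.
freq = {("yellow", "circle"): 15, ("yellow", "diamond"): 5,
        ("red", "circle"): 10, ("red", "diamond"): 70}

# Degree of belief in "If the card is yellow (P) then it has a
# circle (Q)" is taken to be the conditional probability Prob(Q/P).
n_p = sum(n for (color, _), n in freq.items() if color == "yellow")
n_pq = freq[("yellow", "circle")]
prob_q_given_p = n_pq / n_p  # 15 / 20 = 0.75
```

Note that changing the frequencies of the non-P cards (the red ones) leaves Prob(Q/P) untouched, which is exactly the kind of selective sensitivity the suppositional account predicts.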

However, the current studies did not find evidence of conditional probability computation, or of sensitivity to differences in factors that influence conditional probability. In Experiment 1, participants' Assignment Certainty scores for the stationary buttons were all high in an absolute sense, and were not statistically different from each other, despite the fact that each stationary button was associated with a widely different conditional probability. The fact that the participants were much more certain in their logical deductions about the stationary buttons than they were about the buttons that "moved" in the array seems to be a strong indication that people were using covariation information to make logical deductions. In Experiment 2, there was some evidence that individuals were using frequency information to make deductions about which buttons must be included in the array: the participants' Inclusion Likelihood for buttons that were code members covaried significantly with the number of appearances made by the code member, even when the conditional probability was held constant at 1.0.

Consistent with the results from Experiment 1, the Assignment Certainty scores in Experiment 2 were not apparently influenced by a particular button's conditional probability. Even for buttons whose conditional probability was held constant at a high value (CP = 1.0), people's Assignment Certainty scores varied as a function of a given button's stationarity. When a button's conditional probability was high and its stationarity was high, people correctly assigned the button to its logically determined location. But when a button's stationarity was low, people were much less likely to assign the button to its necessary location correctly. In response to a potential criticism, namely that conditional probability computation was not being carried out in this case because the participants had shifted over to their analytic (logical) systems in order to do the necessary reasoning, there is not much evidence of such a shift taking place in these data: there were no significant positive correlations between the logical processes of MP and MT and the Assignment Certainty scores of the high-appearance button on the three deducible problems.

Even though there is no substantial evidence of conditional probability computation, and therefore little or no evidence for a suppositional account of conditional reasoning that relies on conditional probability computation, several important caveats should be borne in mind. First, computing the actual conditional probabilities in the Mastermind problems is genuinely difficult, and many or most people may not know how to do it. Supporting this assertion, Nickerson (2004) has shown that many people equate conditional probabilities (i.e., judge Prob(Q/P) = Prob(P/Q)) even when the two are clearly not equal. Fox and Levav (2004) have also shown that people are susceptible to a number of biases and other inadequacies in conditional probability computation. It may be that in reasoning domains that are "easier" or more intuitive than Mastermind, there is more evidence of accurate conditional probability computation.
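The confusion Nickerson describes is easy to exhibit with a small worked example. The joint frequencies below are hypothetical, chosen only to show that Prob(Q/P) and Prob(P/Q) come apart whenever the marginal frequencies of P and Q differ:

```python
# Hypothetical joint frequencies: counts of (P and Q), (P and not-Q),
# and (not-P and Q) cases, invented for illustration.
n_pq, n_p_notq, n_notp_q = 8, 2, 40

p_q_given_p = n_pq / (n_pq + n_p_notq)  # 8 / 10 = 0.8
p_p_given_q = n_pq / (n_pq + n_notp_q)  # 8 / 48, roughly 0.167
```

A reasoner who judges these two quantities to be equal is, in effect, ignoring how common Q is outside of P, which is the classic form of the error.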

Along these lines, it should be noted that most of the evidence for conditional probability computation comes from semantically rich, sentential domains (Evans & Over, 2004; Ohm & Thompson, 2006). It may be that conditional probability computation can be fast and relatively effortless in these pragmatically rich domains, but it seems clear that it is much harder to carry out in other domains. It is not clear that the Evans, Handley, and Over (2003) experiments were conducted in a semantically rich domain, but the conditional probability computation in those studies was nevertheless probably substantially easier than it is in Mastermind.

There is also some confusion over what the theorists might understand as conditional probability "computation." In "real" conditional probability computation, background beliefs should not bias the results, because the computation concerns only the numbers or frequencies of the events that have been presented. But Evans, Handley, Over, and Perham (2002) showed that belief-based information plays a role, and may even dominate, in judgments of conditional probabilities. When people use a quick and effortless heuristic system to make judgments about likely conditional probabilities, and when they instead apply an effortful analytic system, is a topic that will need to be investigated in much more detail in future work.

Computing the conditional probabilities in the Mastermind problems is genuinely difficult, and it is not surprising in retrospect that the participants either could not, or at least did not, compute them. That does not mean that people never compute conditional probabilities. Thus, it may be that in the suppositional account of Evans and others, which relies on sentential statements that clearly involve world knowledge, conditional probabilities are computed, or at least estimated and used. But in situations in which the materials do not have much semantic content beyond themselves, and in which the computational load is clearly burdensome, there may not be much conditional probability computation, estimation, or other use. Under those circumstances, the current series of studies serves to establish at least one pole of the boundary conditions under which conditional probability computation is not very likely to be observed.

APPENDIX A

Example of Problem Used in Experiment 1

                         Position Number
Row Number      1     2     3     4     Feedback
1               Bl    Rd    Yw    Wh    1 Rd, 2 Wh
2               Gn    Bl    Yw    Rd    1 Rd, 1 Wh
3               Or    Rd    Yw    Wh    1 Rd, 2 Wh

For each color, indicate the likelihood that the color is in the code on the following scales. For a color you are sure is NOT in the code at all, use "zero":

[ILLUSTRATION OMITTED]

Just for the colors that you indicated as 80% likely or greater, indicate what position you think the color is in:

             Likely (% 0-100) by Position
Color        1       2       3       4
Or           --      --      --      --
Wh           --      --      --      --
Rd           --      --      --      --
Gn           --      --      --      --
Yw           --      --      --      --
Bl           --      --      --      --

APPENDIX B

Example of Problem Used in Experiment 2

                         Position Number
Row Number      1     2     3     4     Feedback
1               Or    Yw    Rd    Bl    1 Rd, 1 Wh
2               Bl    Gn    Rd    Wh    1 Rd, 2 Wh
3               Yw    Gn    Rd    Wh    1 Rd, 2 Wh
4               Yw    Gn    Rd    Bl    1 Rd, 1 Wh

For each color, indicate the likelihood that the color is in the code on the following scales. For a color you are sure is NOT in the code at all, use "zero":

[ILLUSTRATION OMITTED]

Just for the colors that you indicated as 80% likely or greater, indicate what position you think the color is in:

             Likely (% 0-100) by Position
Color        1       2       3       4
Or           --      --      --      --
Wh           --      --      --      --
Rd           --      --      --      --
Gn           --      --      --      --
Yw           --      --      --      --
Bl           --      --      --      --

Author Note: Lisa Fisher, Christine Lowell, Rachel Miller, and Katie Zohner were the experimenters in Experiment 1. Lisa Fisher and Lindsay Nash were the experimenters in Experiment 2. I would like to thank all of them as well as two anonymous reviewers of this paper.

REFERENCES

Best, J. (2009). Need to override your heuristic system? Better bring your deductive competence. North American Journal of Psychology, 11, 543-582.

Best, J. B. (1990). Knowledge acquisition and strategic action in "Mastermind" problems. Memory & Cognition, 18, 54-64.

Best, J. B. (1997, May). Conditional reasoning in Mastermind. Poster presented at the annual convention of the Midwestern Psychological Association, Chicago, IL.

Best, J. B. (2001). Conditional reasoning processes in a logical deduction game. Thinking & Reasoning, 7, 235-253.

Braine, M. D. S., & O'Brien, D. P. (1991). A theory of If: A lexical entry, reasoning program, and pragmatic principles. Psychological Review, 98, 182-203.

Evans, J. St. B. T. (1999). Rational analysis of illogical reasoning. Contemporary Psychology APA Review of Books, 44, 461-463.

Evans, J. St. B. T. (2007). Hypothetical thinking: Dual processes in reasoning and judgment. New York: Psychology Press.

Evans, J. St. B. T., Handley, S. J., & Over, D. E. (2003). Conditionals and conditional probability. Journal of Experimental Psychology: Learning, Memory, & Cognition, 29, 321-335.

Evans, J. St. B. T., Handley, S. J., Over, D. E., & Perham, N. (2002). Background beliefs in Bayesian inference. Memory & Cognition, 30, 179-190.

Evans, J. St. B. T., & Over, D. E. (2004). If. Oxford, UK: Oxford University Press.

Fox, C. R., & Levav, J. (2004). Partition-edit-count: Naive extensional reasoning in judgment of conditional probability. Journal of Experimental Psychology: General, 133, 626-642.

Fugelsang, J. A., & Thompson, V. A. (2003). A dual-process model of belief and evidence interactions in causal reasoning. Memory & Cognition, 31, 800-815.

Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684-704.

Johnson-Laird, P. N., Byrne, R. M. J., & Schaeken, W. (1992). Propositional reasoning by model. Psychological Review, 99, 418-439.

Nickerson, R. S. (2004). Cognition and chance: The psychology of probabilistic reasoning. Mahwah, NJ: Erlbaum.

Oaksford, M., Chater, N., & Larkin, J. (2000). Probabilities and polarity biases in conditional inference. Journal of Experimental Psychology: Learning, Memory, & Cognition, 26, 883-899.

Ohm, E., & Thompson, V. A. (2006). Conditional probability and pragmatic conditionals: Dissociating truth and effectiveness. Thinking & Reasoning, 12, 257-280.

Osman, M. (2004). An evaluation of dual-process theories of reasoning. Psychonomic Bulletin and Review, 11, 988-1010.

Over, D. E., & Evans, J. St. B. T. (2003). The probability of conditionals: The psychological evidence. Mind & Language, 18, 340-358.

Sloman, S. A. (1996). The empirical case for two systems of reasoning. Psychological Bulletin, 119, 3-22.

Sloman, S. A. (2002). Two systems of reasoning. In T. Gilovich, D. Griffin, and D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 379-396). Cambridge, UK: Cambridge University Press.

Stanovich, K. E., & West, R. F. (2000). Individual differences in reasoning: Implications for the rationality debate? Behavioral and Brain Sciences, 23, 645-665.

Stanovich, K. E., & West, R. F. (2002). Individual differences in reasoning: Implications for the rationality debate? In T. Gilovich, D. Griffin, and D. Kahneman (Eds.), Heuristics and biases: The psychology of intuitive judgment (pp. 421-440). Cambridge, UK: Cambridge University Press.

John Best

Eastern Illinois University

Author info: Correspondence should be sent to: John Best, Department of Psychology, Eastern Illinois University, Charleston, IL 61920. E-mail: jbbest@eiu.edu

TABLE 1
Conditional Probabilities of Stationary Buttons in Experiment 1

Problem   Stationary   Number of Codes   Number of Codes in which       Conditional
Number    Button       Consistent with   Stationary Button can be       Probability
                       Feedback          Located in Apparent Position
1         Red          1                 0                              .00
2         Orange       2                 1                              .50
3         Yellow       4                 2                              .50
4         White        3                 3                              1.00

TABLE 2
Mean Assignment Certainty Scores for Stationary Buttons Aligned with Conditional Probabilities

Problem   Stationary   Conditional Probability     Mean          Stand. Dev.
Number    Button       Button can be Located in    Assignment    Assignment
                       Apparent Position           Certainty     Certainty
1         Red          .00                         90.41%        24.81%
2         Orange       .50                         83.94%        27.69%
3         Yellow       .50                         81.88%        32.79%
4         White        1.00                        92.18%        24.83%

TABLE 3
Correlation of Performance on Valid Conditional Reasoning Processes (MP and MT) with Assignment Certainty Scores

Problem   Stationary   Conditional Probability     Modus Ponens and   Modus Tollens and
Number    Button       Button can be Located in    Assignment         Assignment
                       Apparent Position           Certainty          Certainty
                                                   Correlation        Correlation
1         Red          .00                         .110               -.157
2         Orange       .50                         .145               -.178
3         Yellow       .50                         .136               -.356
4         White        1.00                        .240               -.354

TABLE 4
Structure of Mastermind Problems Used in Experiment 2

Problem   Button(s)     Number of     Number of Different   Conditional
Number                  Appearances   Locations             Probability
1         Red           4             1                     .67
2         Yellow        4             2                     1.00
3         Red, White    4             3                     1.00
4         Green         4             4                     1.00

TABLE 5
Zero-Order Correlations of Assignment Certainty and Reasoning Processes in Experiment 2

          St2      St1.33   St1      MP       MT       DA       AC
St2       1
St1.33    .126     1
St1       .234     .238     1
MP        -.006    -.136    -.200    1
MT        -.212    -.213    -.123    .326     1
DA        .004     -.155    -.033    -.155    .111     1
AC        .081     -.003    -.008    .082     -.067    .616 **  1

Note: Correlation flagged with two asterisks is significant at p < .01.

Publication: North American Journal of Psychology

Date: December 1, 2010