
Comments on "Common Statistical Errors and Mistakes: Valuation and Reliability"

The recent article, "Common Statistical Errors and Mistakes: Valuation and Reliability" (Fall 2013) by George Dell, MAI, SRA, posits that the complete population is obtained by a data set consisting of all the comparable sales in the market. However, the statistical population is never obtainable when the parameter being estimated is the market value of a specific property. Mr. Dell's thesis, "inferential models using sample statistics are not necessary for appraisal work. Descriptive models of populations are all that is required," is internally contradictory. Descriptive models only describe, they do not predict, estimate, or forecast. Only inferential models make inferences, i.e., predictions, estimations, and forecasts. A population only contains information about itself; as soon as a purported population is used to make inferences about an unknown quantity it becomes a sample and a larger population is necessarily assumed.

All inquiry starts with a question. In appraising that question is, what is the market value of a specific property? In statistics, the question to be answered determines the population of interest. Therefore, the ideal population of interest in a market value appraisal is the population of all possible prices that could be paid for the appraised property (under the assumptions of market value) as of the date of value estimate. This ideal population is illustrated by the bell curve in Figure 1 of Mr. Dell's article, with market value being one of its parameters (mean). To directly sample this population is impossible as it would require the selling of the property again and again on the date of value estimate. We cannot time travel to conduct such an experiment. It must be clearly understood that this ideal target population is a required conceptual construct that cannot be directly sampled, let alone obtained as Mr. Dell suggests. Statistical modeling assumes this ideal target population exists within a larger context or larger population. (The classic illustration of this relationship is a series of bell curves centered about a linear regression line.) No matter how large or complete, a set of comparable sales is used as a sample representative of the larger population. By statistically modeling the larger population, inferences can be made about the parameters of the ideal target population, i.e., all possible prices that could be paid for the appraised property.

Mr. Dell directs appraisers to use the notion of populations instead of samples and to disregard measures such as confidence intervals, etc., which indicate the reliability and accuracy of a statistical estimate. However, all advanced computer-driven methods and statistical regression are firmly within the inferential branch of statistics. Any model or technique that estimates an unknown quantity is inferential. If a data set, however large, does not contain the unknown quantity then, by definition, it is not the population but is a sample.

It is true that appraisers do not collect random samples. Appraisers do not randomly choose sales from within a set of comparable sales. This criticism of inferential statistics by Mr. Dell is a red herring. Statistical modeling does not require random sampling in the colloquial sense of the term, but it does require an unbiased representative sample. If we were investigating the widgets of a manufacturing process, random sampling would be required to protect against bias and to ensure representativeness. However, this is not the case in appraising. When properly selected, a set of comparable sales is assumed to already be an unbiased sample, representative of the population to be modeled. Each comparable sale price is assumed to have some degree of noise, randomness, variability, or uncertainty. Mathematically speaking, each sale price is a value taken on by an underlying random variable. In other words, each observed sale price is itself an element of an underlying population of all possible prices for which the property could have sold. This is sufficient for statistical modeling.

An unbiased representative sample is the requirement for reliable inferences about population parameters. What is and is not representative is determined by transparent inclusion and exclusion criteria based on the appraiser's expert judgment, and each sale is included or excluded solely on those criteria. As long as the appraiser's data set is unbiased and represents the comparable market segment, reliable statistical inferences can be made. But this data set must always be treated as a sample, never as the underlying mathematical population, when we are estimating an unknown quantity.

In his article, Mr. Dell suggests the use of prediction intervals and regression. Both of these are unquestionably based upon an assumed underlying abstract, infinite, and unobtainable population. Implicit in the concept of a prediction interval is the population of all possible sale prices of a property (under the assumptions of market value) as of the date of value estimate. The 95% prediction interval is the interval estimated to contain the sale price of the appraised property as of the date of value 95% of the time. This statement and the notion of a prediction interval have no meaning without the notion of a population consisting of all possible prices for which the property could sell. In other words, the assumptions of inferential statistics are absolutely required for predictive estimation.
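To make the interval concept concrete, here is a minimal Python sketch of a 95% prediction interval for a subject property from a simple regression on comparable sales. The figures are entirely hypothetical (not from either article), and the single-predictor model is an illustrative assumption, not an appraisal methodology.

```python
import numpy as np
from scipy import stats

# Hypothetical comparable sales: living area (sq ft) vs. sale price ($).
area  = np.array([1400, 1550, 1600, 1750, 1800, 1900, 2050, 2200], dtype=float)
price = np.array([210e3, 228e3, 235e3, 255e3, 262e3, 278e3, 299e3, 320e3])

n = len(area)
x_bar, y_bar = area.mean(), price.mean()
Sxx = np.sum((area - x_bar) ** 2)
b1 = np.sum((area - x_bar) * (price - y_bar)) / Sxx   # slope
b0 = y_bar - b1 * x_bar                               # intercept

resid = price - (b0 + b1 * area)
s = np.sqrt(np.sum(resid ** 2) / (n - 2))             # residual standard error

x0 = 1850.0                    # hypothetical subject property size
y0 = b0 + b1 * x0              # point prediction of its price

t_crit = stats.t.ppf(0.975, df=n - 2)
# Prediction interval: the leading "1 +" term carries the noise of a
# *new* observation, which a confidence interval for the mean omits.
half = t_crit * s * np.sqrt(1 + 1 / n + (x0 - x_bar) ** 2 / Sxx)
print(f"predicted price: {y0:,.0f}  95% PI: [{y0 - half:,.0f}, {y0 + half:,.0f}]")
```

Note that the interval formula is built from the t distribution, i.e., from a sampling model: this is the sense in which the prediction interval presupposes an underlying population beyond the observed comparables.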

Mathematics by its nature is an idealized abstraction. There are no perfect triangles in reality. The assumptions of statistics are no different. Appraisers should not be fooled into believing they have the full population for estimating market value. No one ever has the entire population in its true mathematical sense, and to claim as much is akin to claiming one knows where infinity ends. The unobtainable nature of the target population is at the very core of statistical modeling; it is the central problem of statistics, and its nature is what we make inferences about. We have already encountered cases in our practices where an "expert" claimed he need not provide any confidence intervals or other measures of accuracy for his statistically derived estimate because he had the entire population. However, estimates always have uncertainty. Estimates free of all uncertainty are called facts. Inferential statistics is in large part the science of uncertainty. Problems free of randomness or noise are deterministic, and estimation is not needed. To claim one has the complete population amounts to a claim of perfection free of all uncertainty. If the advice of the article is heeded by appraisers, it will be a major step backwards for the profession and will set appraisers down a path completely at odds with the underlying logical framework of not just statistics but estimation itself.

Matthew C. Trimble, MS

Oklahoma City, Oklahoma

Author's Response

I wish to thank Mr. Trimble for his comments on "Common Statistical Errors and Mistakes." A common search for truth is good for the profession.

Mr. Trimble's comments seem to rest on the assumption that something other than the competitive market segment (the comparables) is the relevant data population of interest. The difference in perception here may very well be the difference in critical thinking approaches--that of applied econometrics versus Mr. Trimble's pure mathematics. Effectively, his refutation rests on the largest of the common critical thinking errors challenged in the article--the need to create a fictitious population in order to apply complex and impressive advanced statistics.

I believe Mr. Trimble has misread my thesis statement. My thesis is that inferential models using sample statistics are not necessary for typical appraisal work. Although he states that descriptive models "only describe," The Appraisal of Real Estate, 14th edition, states, "descriptive statistics is concerned with data collection, presentation, and quantification" (p. 279).

Mr. Trimble states appraisers are interested in an abstract, unobtainable population, i.e., the specific population of interest in a market value appraisal is "the population of all possible prices that could be paid for the appraised property." However, what determines the population is the data you have to work with, and what he describes is the distribution of the dependent variable. His conceptual population (as he states) does not exist. I agree. However, in the discussion of frequentist statistical thinking ("selling of the property again and again"), pure math theory seems to override the reality of the problem, and the discussion exemplifies the problems of imposing a statistical-inferential solution onto a one-off appraisal question. Applied asset economics deals with actual sale data, realized in a real market--not from some imaginary "ideal target population" or hypothetical superpopulation.

Mr. Trimble's statement that all of the advanced computer-driven models are firmly within the inferential branch of statistics ignores today's reality--the exponential growth in software for big data, predictive modeling, machine learning, and data science methods using complete data sets. Further, the statement that statistical regression is firmly within the inferential branch of statistics is simply incorrect. There are three uses of regression: (1) description of a conditional distribution, given market-demand characteristics; (2) prediction as in estimation or forecasting where usefulness is the test; and (3) inference if an underlying sampling model is first applied. As it turns out, the third use is the most problematic, for the very reason that it is difficult to falsify a simple random sample in most valuation settings. As The Appraisal of Real Estate, 14th edition, states, "Regression models can either be predictive or structural. Predictive models are predominant in most valuation settings" (pp. 736--737).

I roughly agree with his statement that "any model or technique that estimates an unknown quantity is inferential." The problem here, once again, is the equivocal usage of the word inferential. For inferential reasoning the statement is true; for inferential statistics, it is not. In predictive methods, the unknown quantity cannot be part of the population. (If you know its value, you do not need to estimate it.)

The mathematics of inferential statistics requires a random sample. Mr. Trimble twice states that the set of comparable sales is assumed to already be an unbiased representative sample, and so can be used for statistical inferences. However, this just sweeps away a bothersome mathematical requirement with an assumption of compliance. There is a better way.

The appraisal problem is twofold: (1) get data on the competitive market, and (2) position the subject within that market. In data science, the identification of the competitive market segment is a classification problem, not a sampling problem. (Is it a comp or not a comp?) The estimate of subject value is a prediction-estimation problem. The reason the article stresses this distinction is to avoid the confusion of the two separate analytics required in appraisal: classification and prediction.

The Appraisal of Real Estate acknowledges that appraisers use judgment samples (pp. 99-100). Mr. Trimble's comments (as well as those of Dr. Wolverton, which follow) suggest that such expert judgment is close enough that the profession can just pretend the mathematical requirement is satisfied. However, the use of an imaginary population and imaginary sampling mechanism is one of the problems for professional appraisal credibility. Appraisal education that rationalizes this belief exacerbates the junk science problem in the courtroom and elsewhere. Conventional statistical inferences (such as standard errors, t-tests, etc.) depend on the assumption of random sampling. This is not a matter of debate but a matter of mathematical necessity. It is true that there is noise variability in a data set. The confusion here is between sampling variability and measurement variability. When using the complete data set, there is no sampling variability--and no need for inferential statistics.

Mr. Trimble asserts that the assumptions of inferential statistics are "absolutely required for predictive estimation" and that prediction intervals and regression are "unquestionably based upon an assumed underlying abstract, infinite, and unobtainable population." No source is provided for this mathematically inconsistent statement. Regression is simply a mathematical formula, no more and no less. Everything else is about modeling decisions. Comparable sales data are real, finite, and obtainable. Predictive model assumptions usually concern linearity, additivity, and distributional form. There is a fundamental reason that prediction intervals (versus confidence intervals) are appropriate for such data sets. They account for the previously mentioned data noise, comprising measurement error (such as that arising from transaction zones) (1) and nonlinearities inherent in utility functions--but not sampling error. He makes some generalizations and allusions to pure mathematical theory that resemble the theoretical-perfection positions warned about in my article. Usefulness is what matters, not abstract pure math theory about perfect triangles and infinity.

The final paragraph of Mr. Trimble's letter continues the confusion of predictive methods versus statistical inference--even as we agree on the tautology that "estimates always have uncertainty." If you visually estimate (predict) how far a home run ball flies, there is uncertainty. Yet no mathematics is available for that single event on which to build a confidence interval (because there is no sample). However, if you instruct a pitcher and batter to throw and hit in as nearly the same way as possible a hundred times (a controlled sample), and treat the results as random draws from the theoretically infinite population of hits, then you would have sampling variation and could estimate the mean distance via a confidence interval.
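The baseball thought experiment above can be sketched numerically. The hundred hits are simulated here, so every figure is hypothetical; the point is only that a confidence interval for the mean becomes computable once repeated draws exist:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical controlled sample: each swing's distance (in feet) is a draw
# from an unknown distribution; we simulate 100 such hits.
hits = rng.normal(loc=400.0, scale=15.0, size=100)

mean = hits.mean()
sem = hits.std(ddof=1) / np.sqrt(len(hits))          # standard error of the mean
t_crit = stats.t.ppf(0.975, df=len(hits) - 1)
ci = (mean - t_crit * sem, mean + t_crit * sem)      # 95% CI for the mean distance
print(f"mean distance: {mean:.1f} ft, 95% CI: ({ci[0]:.1f}, {ci[1]:.1f})")
```

With a single observed home run there is no `hits` array, no standard error, and hence no interval--which is the contrast the paragraph draws.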

Prediction error (in appraisal analysis) derives from measurement noise. Statistical error derives from sampling error. A prediction interval tells you where to expect the next data point. A confidence interval tells you how well you have estimated the mean.
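The distinction in the paragraph above can be demonstrated in a few lines of Python on hypothetical data: at the same confidence level, the prediction interval is wider than the confidence interval because it carries the noise of a new observation, not just uncertainty in the mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = rng.normal(100.0, 10.0, size=30)   # hypothetical observations

n = len(y)
s = y.std(ddof=1)
t_crit = stats.t.ppf(0.975, df=n - 1)

ci_half = t_crit * s / np.sqrt(n)           # CI: how well the mean is estimated
pi_half = t_crit * s * np.sqrt(1 + 1 / n)   # PI: where the next data point falls
print(f"CI half-width: {ci_half:.2f}, PI half-width: {pi_half:.2f}")
```

The extra "1 +" under the square root is the variance of a fresh observation; as n grows, the CI shrinks toward zero while the PI never shrinks below the data's own noise.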

The assumption that somehow, some way, inferential frequentist methods must be forced upon the two components of the appraisal problem is the overarching statistical mistake. It does not solve the problem presented and misdirects appraisal education. This inferential assumption harms the profession, and its prevalence in the appraisal literature and education is to be lamented. My position is that when you have all (or substantially all) the relevant sale data, you do not need to worry about samples. As Albert Einstein said, "If you can't explain it simply, you don't understand it well enough."

George Dell, MAI, SRA

San Diego, California

(1.) Richard U. Ratcliff, Valuation for Real Estate Decisions (Santa Cruz, California: Democrat Press, 1972).
COPYRIGHT 2014 The Appraisal Institute

Article Details
Author: Trimble, Matthew C.
Publication: Appraisal Journal
Article Type: Letter to the editor
Date: Mar 22, 2014
