
Improving the effectiveness and efficiency of appraisal reviews: an information systems approach.

Real estate investment decisions often involve millions of dollars. As the recent S&L debacle shows, these decisions are sometimes misguided. A real estate appraisal is an important decision-making tool. A good appraisal will identify the potential risks of a project; unfortunately, not all appraisals are reliable. Lenders often turn to reviewers to assess the accuracy of appraisal reports. Most institutional investors employ full-time reviewers, spending considerable amounts of money and time to ensure that the appraisal reports they receive are reasonable.

Despite the importance of appraisal reviewing, few tools are available to help reviewers. Most institutions have standardized review forms. In developing their forms, they must balance review completeness with review efficiency. A form that is too complex will be costly to complete and difficult to use in decision making. A form that is too simple will not help reviewers identify problems in a report. Unfortunately, some institutions emphasize efficiency at the expense of effectiveness, resulting in review forms that are overly simplistic.

Information System (IS) technology can improve the effectiveness and efficiency of appraisal reviews. The characteristics of such an IS are identified here. The nature of the review process is examined, and issues that are problems for reviewers but opportunities for system developers are identified. The structure of an IS that supports appraisal review is then outlined.


Before an IS can be developed, the task it supports must be understood. A review is a qualitative assessment of an appraisal. Ultimately, a reviewer's work is summarized in an overall judgment as to whether an appraisal should be accepted, changed to clarify issues or correct deficiencies, or rejected. This judgment is supported by a more detailed analysis in which a reviewer makes specific comments on attributes of an appraisal report. For example, were the rental comparables representative of the subject? Were future zoning plans considered? Was the reversionary capitalization rate appropriate? Was the discounted cash flow (DCF) analysis consistent with assumptions?

These judgments are based on information contained in (or identified as missing from) a report. A judgment might be supported by a quantitative analysis. For example, a reviewer's judgment that a DCF analysis is incomplete might be supported by a second analysis showing the effect of a change in the reversionary capitalization rate (a worked sketch of such an analysis follows the list below). The review process includes the following steps:

1. Selection of the attributes of the appraisal that will be considered, such as whether the land sale comparables have identified usable as well as total land area.

2. Gathering of information on these attributes from the report. In the preceding example, the reviewer might check the land sales comparable grid or the comparable write-up.

3. Evaluation of the sufficiency of the report on these attributes.
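To make the kind of quantitative support mentioned above concrete, consider a minimal sketch (in Python, with purely hypothetical figures) of how a reviewer might test the sensitivity of a DCF conclusion to the reversionary capitalization rate:

```python
# Effect of a change in the reversionary capitalization rate on the
# reversion value in a DCF analysis. All figures are hypothetical.

noi_at_reversion = 500_000   # projected NOI in the reversion year ($)
holding_period = 10          # years until reversion
discount_rate = 0.12         # annual discount rate

for cap_rate in (0.09, 0.10):  # appraiser's rate vs. a reviewer's alternative
    reversion_value = noi_at_reversion / cap_rate
    present_value = reversion_value / (1 + discount_rate) ** holding_period
    print(f"Cap rate {cap_rate:.0%}: reversion ${reversion_value:,.0f}, "
          f"present value ${present_value:,.0f}")
```

A one-point change in the reversionary rate moves the present value of the reversion by about 11% in this example, which is exactly the kind of effect a reviewer would want an appraiser to address.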

It should be noted that the process is not as structured as this list might suggest. In gathering information about one attribute, a reviewer might notice another piece of information that has a bearing on a report's evaluation. The reviewer may interrupt his or her work to consider the new information. He or she might jump to another part of the report, notice and record a weakness, and then return to the section of the report originally examined.

The attributes of the task must be considered. First, reviewers need to digest a lot of information. An appraisal might be 200 pages long and contain thousands of individual pieces of data. Somehow a reviewer has to make sense of this.

Second, not all appraisers include the same information in their reports or use the same format. Although there are Uniform Standards of Professional Appraisal Practice (USPAP) guidelines, information can be presented in many different ways. It can be difficult to find information or to notice that information is missing. For example, a subject's historical operating expenses might be listed in the History of the Subject Property section or the Income Approach section, or simply be mentioned in the Reconciliation and Conclusion section.

Third, reviewing requires a substantial amount of expert judgment. Reviewers must be knowledgeable to fully review an appraisal. Parts of the review process, however, require more expertise than others. Information can be found by those with relatively little expertise, for example, as long as they know what they are looking for. Evaluating this information, however, is more difficult.

Fourth, reviewing is fairly costly. A thorough review of a commercial property development appraisal can easily take two to three days to complete, with an effective cost in excess of $1,000.

Some attributes of reviewers and the environments in which they work must also be considered. As mentioned previously, reviewers are task experts. Further, their familiarity with computers varies; some reviewers rarely touch a keyboard while others are "power users."

Some reviewers have secretaries or administrative assistants, whose time is less expensive than the reviewers' own. Allocating review functions that require less expertise to the support staff can therefore increase a reviewer's productivity and reduce the overall cost of performing reviews.


Computers are of limited use in tasks that require expert judgment. While some information systems have been developed that reach expert levels of performance (e.g., the performance of the medical system MYCIN is comparable to that of human experts on infectious disease(1)), such "expert systems" have two primary disadvantages. One is that they are expensive to develop; even a simple system can cost several hundred thousand dollars to build. The other is that they perform best in constrained problem domains. Although MYCIN performs well in the domain of infectious disease, it is useless outside that problem space: it could analyze a complex series of diagnostic tests, but could not provide the simplest advice on how to set a broken arm.

It would be possible to develop an expert system for appraisal review(2) given the appropriate level of investment. The cost of development, however, increases with the complexity of the problem domain. Appraisal review is complex; there are many different property types, some of which require a large amount of information to appraise. A system capable of analyzing complete appraisals might cost well over $1 million to develop. Although systems that provide expert advice on pieces of appraisals (e.g., cash flow analysis on income-generating properties) could be developed more cheaply, they would not address the needs of investors for complete, accurate appraisal review.

This means that the expert judgment of human reviewers is needed. A computer's role is to help reviewers make judgments, not to make the judgments. This type of information system is known as a decision support system (DSS). As suggested earlier, reviewing involves 1) selecting a set of report attributes to check; 2) gathering information on these attributes; and 3) forming a judgment. The last of these is essentially up to a reviewer. Although a DSS could include some simple evaluative logic, its main role is to help with the first two steps.

Even experts sometimes forget to examine an important aspect of a complex report. A DSS can help compensate for the limitations of human memory by providing a comprehensive list of criteria for judging an appraisal report. Such a list should be generated by an expert reviewer and ideally would cover all aspects of appraisal review. This is a difficult requirement to meet, however, in light of the diversity of appraisal assignments. A practical alternative is to 1) make the list of attributes as complete as possible; and 2) allow reviewers to add their own attributes.
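A minimal sketch of such a checklist follows, assuming a simple design rather than describing any actual system; the attribute wordings are illustrative:

```python
# A review checklist seeded with standard attributes that a reviewer
# can extend for an unusual assignment.

STANDARD_ATTRIBUTES = [
    "Rental comparables are representative of the subject",
    "Future zoning plans are considered",
    "Reversionary capitalization rate is appropriate",
    "DCF analysis is consistent with stated assumptions",
    "Land sale comparables identify usable as well as total land area",
]

class ReviewChecklist:
    def __init__(self):
        # None marks an attribute that has not yet been evaluated.
        self.items = {attr: None for attr in STANDARD_ATTRIBUTES}

    def add_attribute(self, description):
        """Let a reviewer add an attribute of his or her own."""
        self.items.setdefault(description, None)

    def record_judgment(self, description, comment):
        self.items[description] = comment

    def unevaluated(self):
        """Attributes not yet checked -- a guard against forgotten items."""
        return [a for a, c in self.items.items() if c is None]
```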

A DSS also should help a reviewer find the information needed to make each judgment. This would be easiest if every appraisal report were in a standard format; unfortunately, this is not the case. A useful DSS would therefore help translate a report into an easily used standard format. This translation requires less expertise than an actual review and could be performed by support personnel, who should be able to add their own comments about the report, noting, for example, sections they do not understand.

This suggests that a DSS should have two relatively independent parts. Support personnel would use the first part to transcribe information from the appraisal report into a standard format. It should be broken into sections such as report purpose, appraiser information, zoning, income approach, cost approach, and so on. A conclusion of value section would also be useful as an overall summary. A prototype screen is shown in Figure 1.
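The transcription part might be organized around a fixed set of sections, as in this hypothetical sketch (section names follow the text above):

```python
# First part of the DSS: support personnel transcribe report data into
# a standard format, section by section.

from dataclasses import dataclass, field

SECTIONS = ["Report purpose", "Appraiser information", "Zoning",
            "Income approach", "Cost approach", "Conclusion of value"]

@dataclass
class StandardizedReport:
    data: dict = field(default_factory=lambda: {s: {} for s in SECTIONS})
    notes: list = field(default_factory=list)  # assistant's "unclear" notes

    def record(self, section, item, value):
        self.data[section][item] = value

    def note_unclear(self, section, remark):
        """Assistants flag passages they do not understand for the reviewer."""
        self.notes.append((section, remark))
```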

Reviewers would use the second part of a DSS to record their comments. Each section would list a series of judgments about the report. For example, there would be a section to record comments about the report purpose (e.g., the adequacy of the statement of purpose, whether the definition of value corresponds to USPAP requirements when appropriate). Figure 2 shows a prototype screen.

The third characteristic that a DSS should possess is support for the inherent "jumpiness" of the task. While using the system to record comments on one topic, a reviewer might notice a problem with another part of the appraisal that he or she wants to record immediately. The DSS should allow a reviewer to jump to the section that records comments on that issue, and then return to the original starting point. This type of navigation should be convenient and require little effort. It is important that the reviewer be in control rather than the DSS.
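A simple stack is enough to support this jump-and-return behavior; the sketch below assumes sections are identified by name:

```python
# Jump-and-return navigation: the reviewer, not the system, decides
# where to go, and can always return to the original starting point.

class Navigator:
    def __init__(self, start_section):
        self.current = start_section
        self.history = []              # sections to return to, in order

    def jump_to(self, section):
        """Move to another section, remembering where we were."""
        self.history.append(self.current)
        self.current = section

    def go_back(self):
        """Return to the most recently interrupted section."""
        if self.history:
            self.current = self.history.pop()
        return self.current

# Example: note a zoning problem mid-review, then resume.
nav = Navigator("Income approach")
nav.jump_to("Zoning")                  # record the stray observation here
nav.go_back()                          # back to the income approach
```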

In addition, a DSS should be easy to use. Reviewers and their support personnel cannot be expected to be computer experts. They will resist a system that is not convenient for them. A reviewer should be able to enter comments quickly and easily with a minimum of typing. A DSS should also have a useful help system that is available at all times.

One of the most significant developments in computing over the last decade is the graphical user interface (GUI). Examples of GUIs are Windows for the PC, System 7 for the Macintosh, and X Windows for Unix machines. A GUI relies on people's ability to accurately process visual information. Users generally find GUIs to be more convenient, more accurate, and faster than other interfaces. For example, to select a land measure (e.g., square feet, acres) a GUI program might list the choices and allow a user to select the appropriate one with a mouse. This operation involves no typing.
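A minimal sketch of such a selection dialog, written here in Python's Tkinter toolkit (the principle applies to any GUI):

```python
import tkinter as tk

# Selecting a land measure with the mouse; no typing is involved.
UNITS = ["Square feet", "Acres", "Square meters", "Hectares"]

root = tk.Tk()
root.title("Land measure")
choice = tk.StringVar(value=UNITS[0])
for unit in UNITS:
    tk.Radiobutton(root, text=unit, variable=choice, value=unit).pack(anchor="w")

selected = []
def ok():
    selected.append(choice.get())      # capture the choice before closing
    root.destroy()

tk.Button(root, text="OK", command=ok).pack()
root.mainloop()
print("Selected:", selected[0] if selected else None)
```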

A DSS also should allow for different levels of summarization during output. At times, a reviewer might want to print a single-page document summarizing his or her opinion of an appraisal report. At other times, a reviewer might want a more comprehensive list of a report's attributes.

Further, a DSS should possess some review intelligence when it can be integrated into the system at a reasonable cost. That is, the system itself should be able to note some problems within an appraisal. For instance, a DSS might notify the reviewer if it finds a capitalization rate greater than 20%.
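Such checks amount to simple validation rules. A sketch, with illustrative thresholds:

```python
# Built-in review intelligence: flag values outside plausible ranges.
# The thresholds here are illustrative, not prescriptive.

def check_report(data):
    warnings = []
    cap_rate = data.get("capitalization_rate")
    if cap_rate is not None and cap_rate > 0.20:
        warnings.append(f"Capitalization rate of {cap_rate:.0%} exceeds 20%")
    usable, total = data.get("usable_land_area"), data.get("total_land_area")
    if usable is not None and total is not None and usable > total:
        warnings.append("Usable land area exceeds total land area")
    return warnings

print(check_report({"capitalization_rate": 0.23}))
# ['Capitalization rate of 23% exceeds 20%']
```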

Consider how a review DSS might be used. Suppose a reviewer is asked to evaluate an appraisal report for an apartment construction project. First, the report is given to a knowledgeable assistant. He or she uses the DSS to record the report's data in a standard form.(3) If an aspect of the report is not clear, the assistant notes the fact. All information is stored in a computer file.

The assistant tells the DSS to print the result of his or her work, and the system outputs the data in the standard format. The output would include comments produced by the DSS itself noting unusual values or inconsistencies in the data. The reviewer receives the standardized report, the file used to produce it, and the original appraisal report. He or she reads the standardized report, examining the assistant's and the system's comments. The reviewer might revise the assistant's work, perhaps finishing sections that the assistant did not have the expertise to complete.

The reviewer uses the second part of the DSS to enter his or her qualitative comments on the appraisal report. When possible, the reviewer saves time by selecting a standard comment provided by the DSS (e.g., "adequately considered"), rather than typing an entry. The reviewer types comments only when necessary. During the process, the reviewer marks some comments for printing. For example, the reviewer could mark all of the negative comments.

Once the task is complete, the reviewer prints two documents. The first is a one-page summary, stating whether changes must be made to the appraisal report. The other is a list of questions and comments that is sent to the appraiser. When these issues are addressed or explained by the appraiser, the assistant can enter the new data into the file, and the reviewer can verify that the changes have been made.
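The two documents might be assembled along these lines (a hypothetical sketch, not a description of any actual prototype):

```python
# Produce the one-page summary and the list of comments marked for
# printing, to be sent to the appraiser.

def print_outputs(recommendation, comments):
    """comments: list of (section, text, marked_for_printing) tuples."""
    print("REVIEW SUMMARY")
    print(f"Recommendation: {recommendation}")   # accept / revise / reject
    print()
    print("QUESTIONS AND COMMENTS FOR THE APPRAISER")
    for section, text, marked in comments:
        if marked:
            print(f"- [{section}] {text}")

print_outputs("Revise", [
    ("Zoning", "Future zoning plans were not considered", True),
    ("Income approach", "Rental comparables adequately considered", False),
])
```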


Appraisals are used to make multimillion-dollar investment decisions. It is imperative that the information contained in appraisal reports be reliable. Several characteristics of a DSS, identified in this article, would help appraisal reviewers improve the accuracy and efficiency of their work.

The authors are currently developing such a system; it will run under Windows on IBM or compatible PCs. The three principles that guide its development are that it should 1) help make appraisal reviews more reliable and complete; 2) support the work flow of the review task; and 3) focus reviewers' attention on those parts of the task that require their expertise, while using assistants to perform the rest of the work. The DSS can also be a useful training tool, because it embodies the expertise of an experienced appraiser. This system should improve the quality of real estate investment decisions by providing reliable information in a cost-effective manner.

1. Bruce G. Buchanan and Edward H. Shortliffe, "The Problem of Evaluation," Rule-Based Expert Systems (Reading, Massachusetts: Addison-Wesley Publishing Company, 1984): 571-596.

2. Brent J. Dreyer, "Artificial Intelligence: The 'AI' MAI Appraiser," The Appraisal Journal (January 1989): 51-56.

3. Alternatively, an institution might require appraisers to summarize information in an appropriate format.

Kieran Mathieson, PhD, is assistant professor of management information systems at Oakland University, Rochester, Michigan. His research focuses on users' attitudes and beliefs about information systems. He received an MIS degree from the University of Queensland and an MBA and a PhD from Indiana University.

Brent J. Dreyer, MAI, is president of his own firm, The Appraisal Center, Inc., and has been active as a fee appraiser since 1978. He received a BS from Michigan State University.
