
    Part III: The Direct Comparison Model (Quality Point)


(This post is Part III, following the previous articles “Part I: Standardizing the Direct Comparison Approach” and “Part II: What You Should Know About the Direct Comparison Approach and Were Afraid to Ask?”)

     Summary

We identified in Parts I and II that there is a framework to the Direct Comparison Model (DCM). Research has shown that the typical body of the DCM is in the right order to determine a value range for any property once the correct variables and adjustments are applied, so nothing needs to change in that department. We know that variables are characteristics that help the appraiser explain and reduce the variance in the selling prices of the comparables. We also know that there are some rules to follow about variables if we want to continue along the line of data analysis. Further, buyers and sellers do not talk in terms of pinning each property characteristic down to a given dollar amount. However, they do talk in terms of what they think is good, not so good, fair or simply average when it comes to real estate. In many cases buyers may not even know why they paid what they did for the property. In other words, the interaction of buyers and sellers is not always totally explainable, even if the appraiser interviewed them all for each comparable. The reason is that buyers and sellers interact on a subjective level. They may not be able to articulate the specifics, but they either want to buy or want to sell and have settled upon an agreed price.

For the appraiser, that is a very interesting mystery to solve. How do we solve it when the buyers and sellers may not know themselves? Firstly, we treat sale price as a distinctly different entity from market value. Market value comes from the collective behaviour of a number of sales and is never set by one sale. If that were not so, then all the appraiser would have to do is find one sale. There is the problem: there is never one perfect sale, but a series of them that have characteristics that are similar and dissimilar all at the same time, and the appraiser cannot ignore any of them. So how does this whole business get sorted out if we have a good framework for analysis but no real means of making sense of a very opaque real estate marketplace?

We know appraisers are trying to work through this problem with the DCM. The creativity the author sees is encouraging: appraisers are moving in the right direction, but they need a tool that will bring their observations and their intuitive sense of real estate into focus. Therefore, the goal is to marry real estate observations to modern data analysis, whereby the data can be explored in a professional and meaningful way.

The only version of the DCM the author has ever found that provides evidence of “proof” of adjustments is “Quality Point” (QP). The method was first used by Gene Dilmore, an American appraiser who coined the phrase; he was working with Dr. Graaskamp of the University of Wisconsin at the time. A number of versions of QP have surfaced in the USA. In 2014, The Appraisal Journal published an article, “Qualitative Analyses in the Sales Comparison Approach Revisited,” by Gene Rhodes. The author has seen other articles from a professor in California and an appraiser in Minnesota. However, none of them match the version presented in this paper. The main reason is that there are automatic “tests” within QP that home in on whether or not the adjustments and variables are correct. We do not see this feature in other formats of the DCM. In other words, elsewhere nothing has really changed towards providing evidence of the adjustment process, and that is not the position appraisers want to find themselves in.

QP has been used in a number of court cases in the USA, and the author has presented it in many trials. One in particular was ARB case #3660 between OPAC (now MPAC) and Sifton Properties, which dealt with five fully leased office buildings. The ARB ruled that despite the fact that the buildings were income driven, the best evidence was the DCA (QP). In this case the owners had their assessments reduced by approximately $12 million. Thank you, Mr./Mrs. QP.

    Part III deals with the DCM using Quality Point

QP uses the traditional framework of the DCM as the basis for the adjustment process. However, adjustments are not made with arrows or +/- symbols; instead, the adjustment process is converted into a numeric format known as an “Ordinal Scale”. An Ordinal Scale has no association other than by membership. An example is 1-2-3-4-5-6-7. A seven is not seven times better than a one; it simply represents a category. There is no value attributable to the scale. It is a form of sorting. In QP the Ordinal Scale is used to represent a state or position such as:

    1 = Fair

    2 = Slightly Below Average

    3 = Average

4 = Slightly Above Average

    5 = Good

    6 = Very Good

    7 = Excellent

We know from experience that the words Fair through Excellent capture all possible combinations of the position of any real estate, from location and condition of the improvements to more subtle aspects such as zoning, official plan, site and building size, and age of building. These words are all-encompassing and there are no others.

All that is occurring is that we are converting words to numbers. Computers like numbers, not words. This is just a form of substitution.
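As a small illustration of that substitution (a sketch in Python rather than the Excel spreadsheet QP actually uses), the conversion is nothing more than a lookup table built from the labels and scores listed above; the three ratings shown are hypothetical.

```python
# Ordinal scale used in QP: a label-to-score lookup (a simple substitution).
QP_SCALE = {
    "Fair": 1,
    "Slightly Below Average": 2,
    "Average": 3,
    "Slightly Above Average": 4,
    "Good": 5,
    "Very Good": 6,
    "Excellent": 7,
}

# Example: an appraiser's word-based ratings for one comparable sale (hypothetical).
ratings = {"Location": "Good", "Condition": "Average", "Zoning": "Excellent"}

# Convert the words to numbers so the model can work with them.
scores = {variable: QP_SCALE[label] for variable, label in ratings.items()}
print(scores)  # {'Location': 5, 'Condition': 3, 'Zoning': 7}
```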

    There are three points worth noting here:

1. One needs to establish what is meant by a 1 or a 5 when dealing with real estate sales. For example, one variable may be the specific location of the sale within the neighbourhood. Those sales located on a corner would automatically receive a score of 5, those located near a corner would be a 4, those with a mid-block location would be a 1, and anything in between would be a 2 or a 3. We could also say that we need criteria regarding Location within the Neighbourhood. In other words, we give specific meaning to each individual score. Example:

    (A) 1 represents a location that is in an older area of the City with poor access to arterial roads.  New development has not occurred in years.

(B) 2 represents a location that is in an older area of the City with good access to arterial roads. New development has not occurred in years.

(C) 3 represents a location within the City with newer improvements than those of 1 and 2, and with good access to an arterial road.

    (D) 4 is the same as #3 but newer developments are occurring around the sale property.

    (E) 5 represents a location on a good arterial route.

    (F) 6 represents a location on a good arterial route that has significant new developments occurring.

(G) 7 represents a location on a very good arterial route that is improved with a considerable number of retail and commercial draws, or has very good access to a major roadway such as Highway #407 or #401.

The point to be made is that we can pinpoint the locations of the properties through the eyes of the buyer and determine whether or not the specific details of the general locations of the sale properties have any bearing on explaining price. Appraisers are data explorers.

2. The Ordinal Scale of 1-2-3-4-5-6-7 moves in increments of 1 unit. However, the sales data may not. For example, sale prices per square foot of building could be $19.00, $23.00, $25.00, $35.00, $46.00, $59.00 and $68.00. These sales are not moving at an increment of 1; they are moving at increments of $4.00, $2.00, $10.00, $11.00, $13.00 and $9.00. Obviously, the scale of 1-2-3-4-5-6-7 won’t quite fit. That is fine. All that is needed is to square the Ordinal Scale into 1-4-9-16-25-36-49.

     

This latter scale moves at increments of 3, 5, 7, 9, 11 and 13. It is not an identical match to the sales, but it is a lot closer. Changing the scale does not alter the data; it is known as re-expressing the scale to fit more closely with the data.

     

3. We can also use a scale to adjust for the differences between properties for Lot Size, Building Size or Front Feet of Site. We find, for example, the average of the building sizes of the comparables. If the average were 12,000 square feet, then any sale near the average would be given a score of 9 (for Average). If a sale had a building size of 8,000 square feet, then we would score it a 4 (for Slightly Below Average). Where this changes is when we use building size as a unit of comparison. There might be an inverse relationship whereby the smallest building among the sales is selling at the highest price per square foot of building and vice versa. We have an example of this later in the article.

    The whole idea is to get either the Qualitative or Quantitative variables to aid the appraiser in explaining price.
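To make points 2 and 3 concrete, here is a minimal sketch in Python. The squared scale and the 12,000-square-foot example are taken from above; the size_score helper and its percentage bands are illustrative assumptions, since the appraiser sets the actual criteria.

```python
# Re-expressing the ordinal scale: squaring 1-7 gives 1-4-9-16-25-36-49,
# whose increments (3, 5, 7, 9, 11, 13) widen the way sale prices often do.
ordinal = [1, 2, 3, 4, 5, 6, 7]
squared = [s ** 2 for s in ordinal]          # [1, 4, 9, 16, 25, 36, 49]

def size_score(building_sf: float, average_sf: float) -> int:
    """Score a quantitative variable (e.g., building size) relative to the
    average of the comparables. The percentage bands below are illustrative
    assumptions; the appraiser sets the actual criteria."""
    ratio = building_sf / average_sf
    if ratio < 0.65:
        return 1          # Fair (well below average)
    elif ratio < 0.90:
        return 4          # Slightly Below Average
    elif ratio <= 1.10:
        return 9          # Average
    elif ratio <= 1.35:
        return 16         # Slightly Above Average
    else:
        return 25         # Good (well above average)

print(size_score(12_100, 12_000))  # 9  -> near the 12,000 sq ft average
print(size_score(8_000, 12_000))   # 4  -> Slightly Below Average
```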

    Upper Part of the Traditional Direct Comparison Approach

Earlier we discussed the Unit of Comparison (selling price per square foot of building, selling price per front foot, etc.). Sometimes we don’t know the best one to use. With QP there is no need to spend a lot of time on computations to figure that out. All you have to do is try several of them, and QP will quickly tell you which unit of comparison reduces the variation in the adjusted selling prices the most. That usually takes about five minutes.
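Conceptually, the test is just a comparison of relative spreads. Here is a hedged sketch with invented sale figures (none of these numbers come from the article): compute each candidate unit of comparison and keep the one whose unit prices show the smallest standard deviation relative to their mean.

```python
import statistics

# Hypothetical comparables: (sale price, building sq ft, front feet of site).
sales = [(880_000, 20_000, 120), (1_050_000, 22_000, 150), (760_000, 16_500, 100)]

def relative_spread(values):
    """Standard deviation as a percentage of the mean (coefficient of variation)."""
    return statistics.pstdev(values) / statistics.mean(values) * 100

per_sq_ft = [price / sf for price, sf, _ in sales]
per_front_ft = [price / ff for price, _, ff in sales]

print(f"price per sq ft:    spread = {relative_spread(per_sq_ft):.1f}%")
print(f"price per front ft: spread = {relative_spread(per_front_ft):.1f}%")
# Whichever unit of comparison shows the smaller spread is the better starting point.
```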

In the upper body of the DCA we need to adjust (if required) for Motivation, Property Rights, Mortgage Financing and Time. In QP those are always set at 1.0 in the spreadsheet, which means no adjustment. If the appraiser feels that an upward adjustment of 10% is needed on the variable Motivation, all one has to do in the cell is change the 1.0 to 1.10. QP automatically adjusts the Unit of Comparison.
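A minimal sketch of how those multipliers behave (the 1.03 market-conditions factor mirrors the spreadsheet example later in the article; the 10% motivation adjustment and the starting price are used purely for illustration):

```python
# Upper-part adjustments are simple multipliers applied to the unit of comparison.
# 1.0 means "no adjustment"; 1.10 means a 10% upward adjustment.
adjustments = {
    "Property Rights": 1.0,
    "Financing Terms": 1.0,
    "Motivation": 1.10,        # appraiser judged a 10% upward adjustment was needed
    "Market Conditions": 1.03, # time adjustment
}

unadjusted_price_per_sf = 44.02   # hypothetical selling price per sq ft of building

adjusted = unadjusted_price_per_sf
for factor in adjustments.values():
    adjusted *= factor

print(f"{adjusted:.2f}")  # 44.02 * 1.10 * 1.03 = approximately 49.87
```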

So how does one know if a Motivation adjustment is required? QP automatically predicts the selling price of each sale; if a sale’s price cannot be predicted closely to its actual price, then the appraiser needs to find out why.

Perhaps a Motivation adjustment turns out to be required once the appraiser discusses the issue with the realtor. Sometimes a faster clue is that a particular sale has a much lower or higher unit-of-comparison price, all things being equal, than the other sales in the data set. That might be another signal that an adjustment is needed.

One of the hardest adjustments made in real estate analysis is Time. QP can handle it two ways. Firstly, the appraiser can make a trial adjustment for Time on the sales and then begin the body of the adjustment process. If the automatic test shows that the residual difference between the predicted selling price and the actual selling price is within acceptable parameters (5%), then that validates the Time adjustment. Alternatively, simply remove the Time adjustment and see what happens to the residuals; if they are farther apart, then the Time adjustment was appropriate.

Secondly, the appraiser makes no Time adjustment and monitors the standard deviation of the adjusted selling prices of the data, using the average adjusted unit of comparison as the base and the standard deviation (distance from the mean) as the measure of spread. If the standard deviation is 10% of the mean of the adjusted unit-of-comparison prices, a Time adjustment can be applied to the sales to see whether the 10% gets reduced to, say, 4%. If that is the case, then a Time adjustment is warranted.
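A minimal sketch of that second approach, with hypothetical sale prices, sale ages and a trial growth rate of 0.5% per month (all assumed figures, not taken from the article):

```python
import statistics

# Hypothetical comparables: (price per sq ft, months before the effective date).
sales = [(42.00, 18), (45.50, 12), (48.75, 6), (51.00, 1)]

def spread_pct(values):
    """Standard deviation as a percentage of the mean."""
    return statistics.pstdev(values) / statistics.mean(values) * 100

untrended = [price for price, _ in sales]

# Trial time adjustment: trend each sale forward at 0.5% per month (assumption).
monthly_rate = 0.005
trended = [price * (1 + monthly_rate) ** months for price, months in sales]

print(f"no time adjustment:   spread = {spread_pct(untrended):.1f}%")
print(f"with time adjustment: spread = {spread_pct(trended):.1f}%")
# If the spread drops materially after trending, the time adjustment is warranted.
```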

Another very tough adjustment is Building Size or Lot Size. It is easily handled in QP. Firstly, if the Unit of Comparison is the Selling Price Per Square Foot of Building, then perhaps no adjustment is needed because that gets sorted out through the basic mathematics of dividing the selling price of each sale by its number of square feet of building. Secondly, if the appraiser feels that an adjustment is still needed, then set the Excel spreadsheet to calculate the average building size of the comparables. Let’s assume it is 15,000 square feet. If Index #1 has a building size of 14,000 square feet, give it a score of 3 or 9 (depending upon the scale you are using) for Average. If Index #2 has a building size of 10,000 square feet, that would qualify for a score of 2 or 4 for Slightly Below Average. Thirdly, what happens if the building sizes interact with the selling prices to produce an opposite effect, whereby the larger the building size the smaller the selling price per square foot of building and vice versa? Try adjusting for that using symbols. We can see this in the graph of the data below. This is classic data analysis.

[Image: George Canning, Part II – image 1 (graph of the sales data showing the inverse relationship between building size and selling price per square foot)]

In QP all we do is reverse the scoring. Instead of using 1 = Fair, 2 = Slightly Below Average, 3 = Average, 4 = Slightly Above Average, 5 = Good, 6 = Very Good and 7 = Excellent, a 1 now becomes Excellent and a 7 becomes Fair, and everything falls into place. We simply use the Standard Deviation percentage to monitor the results. If we do not reverse the scale when the sales behave as in the graph, the Standard Deviation will be high. When we reverse the scoring, the Standard Deviation drops like a stone. It works every time!
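The reversal itself is trivial: on the 1-7 scale a score s simply becomes 8 - s, applied before squaring if the 1-49 scale is in use. A small sketch with hypothetical scores:

```python
def reverse_score(score: int) -> int:
    """Reverse a 1-7 ordinal score so that 1 becomes 7, 2 becomes 6, and so on.
    Used when a variable (e.g., building size) moves opposite to price per sq ft."""
    return 8 - score

original = [7, 6, 3, 2, 1]             # e.g., biggest building scored 7, smallest scored 1
reversed_scores = [reverse_score(s) for s in original]
print(reversed_scores)                  # [1, 2, 5, 6, 7]

# On the squared (1-49) scale, reverse first and then square.
squared_reversed = [reverse_score(s) ** 2 for s in original]
print(squared_reversed)                 # [1, 4, 25, 36, 49]
```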

Let’s talk about the word Average in the context of QP. An average of the adjusted selling prices is determined automatically by the model. However, an average of anything is not relevant unless you know the spread of those numbers. For example, suppose the average selling price of agricultural land in Ontario is $12,000 per acre, the lowest selling price is $3,000 and the highest is $70,000 per acre. The $12,000 is meaningless. However, if we said that the average selling price of agricultural land in Ontario is $12,000, the lowest selling price is $10,000 and the highest is $14,000, then the $12,000 becomes important because we know that the lowest and highest selling prices per acre are not far apart. So the question is: how far is the $12,000 from the low and high selling prices? Standard deviation can tell us that. In QP it is calculated automatically, both in dollars per unit of comparison and as a percentage. Therefore, if the average selling price of the adjusted sales is $45.00 and the standard deviation is 5%, then you know that the bulk of the adjusted selling prices lie between $42.75 and $47.25. Obviously, in the DCM the lower the standard deviation the better, because the smaller the range, the tighter and more meaningful the average adjusted selling price per unit of comparison.

    The Main or Lower Part of the DCM using Quality Point

This is the part of the model where the “rubber hits the road”. We said earlier that we need variables to help explain the price differences of the comparables, and that we can score these variables using a scale of 1-2-3-4-5-6-7 or 1-4-9-16-25-36-49, whichever best suits the data. There are two questions here:

1. How does the Unit of Comparison (Selling Price Per Square Foot of Building) get integrated into these scores?
    2. How is the Adjusted Selling Price Per Square Foot of Building created?

    In QP all of the above is done automatically. However, we need to know some things.

Firstly, each sale will have a Total Utility Score. It is similar to an examination mark: obviously, the higher the score, the harder the student studied and the better prepared they were. In real estate, the higher the Utility Score, the better the property is in terms of attributes (bigger lot size, bigger building, better location). We need a Utility Score because we need to ascertain each sale property’s characteristics relative to one another. A Total Utility Score is not much use in the DCM unless we can connect it to something very important, such as the Unit of Comparison. So as each sale is scored using the scale, its Unit of Comparison is divided by its total score. Why do we need that? Because in the traditional DCM the adjusted selling price of the Unit of Comparison (whatever that might be) has already been selected, so we need to stay on the same theme in Quality Point. It is all done automatically in QP. However, we are still missing a piece of the puzzle. How are the scores related to the variables? Good question. And how is the Utility Score created? Another good one. The best question is: how does the appraiser know if the adjustments made are correct?

We use a Solver program to do the above calculation. What is a Solver? Solver programs have been around for 30 years. Solver is an add-in feature in Excel; those who remember Lotus 1-2-3 will know it as the Optimizer. So what does a Solver do? It really does only two things: it makes the spread of a set of numbers larger or smaller. In the case of the DCM we want to reduce the variation in the selling prices of the comparables; that is the job of the DCM, remember? We could do the calculation by hand with 5 variables and 6 sales, but that would take about a week. Solver is set to do 20,000 calculations in about 1 second. Which method would the appraiser prefer? The Solver is set up automatically within QP. The only time it changes is when you add or subtract a variable, in which case it takes about 3 seconds to change the Solver add-in settings and about 45 seconds to add or delete a variable in the Excel spreadsheet that QP uses. That sounds a lot better than a week. To activate the Solver, all one does is call it up and press the button. Another 2 seconds. So what does the Solver do in those 2 seconds? We will demonstrate this now.

The Solver is set on one aspect of Quality Point: the variation of the adjusted selling prices of the comparables around their mean (average). After all, that mean is a very important number; it is the key to everything. So the Solver, in effect, says: “I am to reduce this variation as low as possible based upon the scores the appraiser gives each variable in the spreadsheet.” The Solver applies a percentage weight to each variable over and over again until the lowest spread in the adjusted selling prices of the comparables is reached. That does not necessarily mean the first result is the one the appraiser wants. After all, the sales could start off with a variation in selling price per square foot of 76%, and the first run through the Solver may only reduce it to 18%. Still not enough. However, it is starting to reduce the variation in the selling prices of the comparables. This is where the intuition and observational instinct of the appraiser come into play: the appraiser now has a tool with which to “test” his or her knowledge of the comparables quickly and correctly. Welcome to data analysis!
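For readers who want to picture the mechanics outside of Excel, here is a minimal sketch in Python that uses scipy.optimize in place of Excel’s Solver, with the selling prices and scores taken from the spreadsheet example shown below. It illustrates the idea only; it is not the QP spreadsheet, and QP’s exact Solver settings may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Selling prices per sq ft and the appraiser's scores (1-4-9-16-25-36-49 scale)
# for each variable, taken from the worked example that follows.
prices = np.array([44.02, 47.89, 58.21])
scores = np.array([[9, 9, 4, 9, 4],    # Sale 1: Location, Lot Size, Age, Zoning, Condition
                   [9, 16, 9, 1, 16],  # Sale 2
                   [9, 9, 25, 9, 25]]) # Sale 3

def spread(weights):
    """Coefficient of variation of the adjusted price per point.
    This is the quantity the Solver is asked to drive down."""
    utility = scores @ weights          # total utility score of each sale
    adjusted = prices / utility         # adjusted price per point
    return np.std(adjusted) / np.mean(adjusted)

n = scores.shape[1]
result = minimize(
    spread,
    x0=np.full(n, 1.0 / n),                                        # start with equal weights
    bounds=[(0.0, 1.0)] * n,                                       # each weight between 0% and 100%
    constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # weights sum to 100%
    method="SLSQP",
)

print("weights:", np.round(result.x, 2))
print(f"spread of adjusted prices: {spread(result.x):.1%}")
```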

Here is what it looks like in Excel:

     

                                             Sale #1      Sale #2      Sale #3

    TOP PART OF DCM
    Property Rights                              1.0          1.0          1.0
    Financing Terms                              1.0          1.0          1.0
    Motivation                                   1.0          1.0          1.0
    Market Conditions                           1.03         1.03         1.03
    Selling Price Per Sq Ft of Building       $44.02       $47.89       $58.21

    BOTTOM PART OF DCM
    Quality Ratings      Weights from the      Sale #1      Sale #2      Sale #3
    (Attributes)         Market (Solver)       (scores from the appraiser)
    Location             0.21 or 21%                9            9            9
    Lot Size             0.38 or 38%                9           16            9
    Building Age         0.09 or 9%                 4            9           25
    Zoning               0.28 or 28%                9            1            9
    Condition            0.05 or 5%                 4           16           25
    Total Weights        1.00 or 100%
    Total Utility Score                          8.31         9.77        11.22
    Adjusted Selling Price Per Sq Ft
    of Building Per Point                       $5.30        $4.90        $5.19

The Utility Score for each sale, such as the 8.31 of Index #1, is calculated as 21% x 9 + 38% x 9 + 9% x 4 + 28% x 9 + 5% x 4, with the results added together. (Hand-multiplying the rounded percentages shown gives approximately 8.4; the published 8.31 reflects the fact that the Solver’s weights carry more decimal places than the rounded figures displayed.) The average of the adjusted selling prices per square foot of building per point is monitored in a separate area of the spreadsheet, as shown below.

    ADJUSTED UNIT OF COMPARISON OUTPUT

    Average Price of Adjusted Sales                $5.03

    Standard Deviation Amount                      $0.19

    Standard Deviation as a Per Cent             4%
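For readers who want to trace the arithmetic, here is a minimal sketch that recomputes the Total Utility Scores, the adjusted prices per point and the output statistics from the rounded weights and scores shown above. Because the displayed weights are rounded, the printed results will differ slightly from the published figures.

```python
import statistics

# Figures from the spreadsheet above (weights and scores are the rounded values shown).
weights = [0.21, 0.38, 0.09, 0.28, 0.05]   # Location, Lot Size, Age, Zoning, Condition
scores = {                                 # per-sale quality ratings
    1: [9, 9, 4, 9, 4],
    2: [9, 16, 9, 1, 16],
    3: [9, 9, 25, 9, 25],
}
price_per_sf = {1: 44.02, 2: 47.89, 3: 58.21}

# Total Utility Score = weighted sum of the scores; adjusted price = price / utility.
utility = {s: sum(w * x for w, x in zip(weights, scores[s])) for s in scores}
adjusted = {s: price_per_sf[s] / utility[s] for s in scores}

mean_adj = statistics.mean(adjusted.values())
stdev_adj = statistics.pstdev(adjusted.values())

for s in scores:
    print(f"Sale {s}: utility {utility[s]:.2f}, adjusted ${adjusted[s]:.2f}/sq ft/point")
print(f"Average adjusted price: ${mean_adj:.2f}  "
      f"Std dev: ${stdev_adj:.2f} ({stdev_adj / mean_adj:.0%})")
```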

All of the above is completed automatically once the appraiser puts in the variables and the scores. However, we have not said one word about the subject property. The reason is that we are not interested in the subject property at this point in time. This is data analysis. What we are concerned with is balancing the comparable sales to one another by doing two things:

1. Lowering the spread in the Adjusted Selling Prices Per Square Foot of Building as far as possible, using the Standard Deviation to measure it. Remember, we started off with a difference of 142%, used a good Unit of Comparison and reduced it to 34%; now we have the sales after adjustments down to 4%.

2. Further testing our decisions about the variables selected and the scores assigned to each of them, by taking this information and predicting the selling price of each sale. Why do we want to do this? Prediction is everything in data analysis. If the appraiser can predict (done automatically) each sale’s Unit of Comparison as closely as possible, then the appraiser has selected the right variables and, MORE IMPORTANTLY, the right ADJUSTMENTS OR SCORES for each comparable. Let’s have a look.

     

PREVIOUSLY
                                             Sale #1      Sale #2      Sale #3
    Adjusted Selling Price Per Sq Ft
    of Building Per Point                       $5.30        $4.90        $5.19

    PREDICTED UNIT PRICING
    Predicted Price Per Sq Ft of Building      $41.79       $49.14       $56.42
    Selling Price Per Sq Ft of Building        $44.02       $47.89       $58.21
    Absolute Error (Residual)                   5.33%        2.61%        3.09%

     

The Predicted Selling Price Per Square Foot of Building (Unit of Comparison) is calculated by taking the total utility score of each comparable and multiplying it by the adjusted mean price per square foot of building per point. For Index #1 it is 8.31 (utility score) x $5.03 (mean) = $41.79. The 5.33% for Index #1 is simply the percentage difference between the two numbers (Predicted and Actual). Therefore, in Index #1 we missed by about 5%, in Index #2 by about 3% and in Index #3 by about 3%. These misses are relatively small and were only achieved by having the right scoring and the right variables selected. This is all done automatically by QP.
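A small sketch of that residual check, using the published utility scores, the $5.03 mean price per point and the actual selling prices from the tables above. Expressing the residual against the actual price is an assumption here, so the printed percentages only approximately match the published table.

```python
# Utility scores and actual prices from the worked example; $5.03 is the
# published mean adjusted price per point.
mean_price_per_point = 5.03
utility = {1: 8.31, 2: 9.77, 3: 11.22}
actual = {1: 44.02, 2: 47.89, 3: 58.21}

for sale in utility:
    predicted = utility[sale] * mean_price_per_point
    # Residual expressed against the actual price (an assumption; the published
    # figures appear to use a slightly different base for some sales).
    residual = abs(predicted - actual[sale]) / actual[sale]
    print(f"Index #{sale}: predicted ${predicted:.2f} vs actual ${actual[sale]:.2f} "
          f"-> residual {residual:.2%}")
```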

Dr. Whipple, who teaches at Curtin University in Australia, said it best about the residual analysis in the QP model:

    “Finally, residual analysis is a most important component of the technique. The assumption underlying the sales comparison approach is that recent buyer behaviour toward comparable sold properties will be the same as for the subject property. Residual analysis shows how well the model replicates the prices fetched for the comparable. If the replication is good, then the expectation is that it will produce an acceptable prediction of price for the subject property if the analogy has been validly constructed. Few valuers test the logic they adopt on actual transactions-this method allows them to do so and is a most desirable feature. The ultimate test of any method is the extent to which it produces results consistent with reality”:

     “Property Valuation and Analysis”, The Law Book Company Limited, 1995.

    The Subject Property

    The analysis of the sales data is completed when we have a very low standard deviation within the Adjusted Selling Price per Unit of Comparison and low residuals between the Predicted Selling Prices and the Actual Selling Prices of the Comparables. Now on with the Subject Property.

When the appraiser is completing the scoring of the variables for the comparables, there is no referencing of the subject property. This is not about looking at the differences between the subject property and each comparable; there is no need to do that, because that comparison happens later, when the appropriate scores for the subject property are determined. It is done by completing the Total Utility Score of the Subject Property. See below:

[Image: George Canning, Part III – image 2 (subject property utility score table)]

The variables are the same as those used with the Comparables, including the Weights from the market determined by the Solver. The score of 16 for Location of the subject property is determined either from the definition of the scale established before the DCM, or because one or two of the sales have a location similar to that of the subject property and were given scores of 16 each. In other words, we are just using the scores already allocated to the data. The score of 9 for Lot Size is based upon an average score: perhaps the average Lot Size of the comparables was 34,567 square feet and the subject’s Lot Size is 33,987 square feet, which is close enough to qualify the subject for a score of 9 for Average in this regard. Therefore, the scores of the subject property are the result of the analysis already performed on the comparable sales.

In order to convert the subject property into an indication of value, the Total Utility Score of 12.74 for the subject property is multiplied by the Average Adjusted Selling Price per Unit of Comparison of the sales data ($5.03), plus or minus the Standard Deviation amount ($0.19), and then by the Building Size of the Subject Property. This is all done automatically, and the Excel spreadsheet shows the Value Range of the Subject Property through three selling prices per square foot of building (in this example): the average ($5.03), the lower range ($5.03 - $0.19) and the upper range ($5.03 + $0.19).
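As a minimal sketch of this last step, here is the value-range calculation in Python using the figures quoted above; the 20,000-square-foot building size for the subject is a hypothetical figure added for illustration.

```python
# Figures from the example above; the subject building size is a hypothetical value.
subject_utility_score = 12.74
mean_price_per_point = 5.03        # average adjusted selling price per sq ft per point
stdev_per_point = 0.19             # standard deviation amount
subject_building_sf = 20_000       # hypothetical subject building size

for label, rate in [("lower", mean_price_per_point - stdev_per_point),
                    ("average", mean_price_per_point),
                    ("upper", mean_price_per_point + stdev_per_point)]:
    price_per_sf = subject_utility_score * rate    # $ per sq ft of building
    value = price_per_sf * subject_building_sf     # indicated value of the subject
    print(f"{label:>7}: ${price_per_sf:.2f}/sq ft -> ${value:,.0f}")
```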

    Conclusions

Quality Point analysis is the recommended standardization format for the DCM. It is part of a Continuing Professional Development course of the AIC and is also within the supplementary studies of the University of British Columbia’s Sauder School of Business. You will also find it in appraisal textbooks; see “Property Valuation and Analysis” by R.T.M. Whipple, Thomson Lawbook Company, Second Edition (it may be difficult to find).

The Quality Point spreadsheet has always been free to any appraiser who wants to shed the dogma of trying to make adjustments to comparable sales within the confines of a DCA format with no regard to testing and proper data analysis.

Appraisers would balk if they had their computers taken away and had to resort to using pen and paper to write an appraisal. One could argue that using “ad hoc” methods of adjustment within the DCM is the same thing when set against modern computerization and proper data analysis. For this author the decision is very clear: pen and paper against the computer, and DCA dogma against a strong and decisive computer model that is friendly, fast and FREE. All one has to do is step up and learn. No appraiser went back to pen and paper after using a computer. No appraiser goes back to the symbolic method of adjustments after using QP.

    AIC Disclaimer:

This post is part of the AIC’s innovative program to explore new and creative concepts for valuing real property within the broader context of advancing the profession to meet a complex marketplace and an evolving profession. To achieve this end, the author(s) of these blogs/articles have the freedom to raise, express and discuss ideas and opinions that are not necessarily endorsed by the Appraisal Institute of Canada (AIC) or compliant with its professional guidelines and standards. While the AIC edits all blogs/articles for literary correctness, it does not judge or edit the merits of the blog’s/article’s ideas or concepts. Readers are encouraged to discuss the ideas and contents of these blogs/articles online, and to share their own thoughts and ideas through the comment section below.

     

     

George Canning, AACI, P.App

George Canning is the principal of Canning Consultants Inc., a real estate appraisal and consulting firm. He has over 30 years of practical and diversified experience with several of the largest real estate firms in Southwestern Ontario. George now specializes in providing specialty consulting services to meet client needs that were not being met by traditional valuation methodologies. In particular, he provides solutions to complex real estate problems that in the past could not be reliably solved, using modern techniques. He is one of very few real estate appraisers/consultants who employ modern statistical methods and modelling tools with a common-sense approach based upon many years of analyzing real estate.


    1 COMMENT

    1. A very interesting article – thank you for the three in the series.

      The methodology described in this third article is very much statistical analysis and thus I do agree with you that when these various statistical variables are applied and analysed (including interpreting standard deviation) then the direct comparison approach becomes a statistical model rather than an approach.

I do have concerns, however, with using the “per square foot” or “per square metre” variable in valuation. A per-square-metre dollar value assumes that each component of the property or of vacant land is valued exactly the same, so that if 1 square metre of land is valued at 5,000 then 2 square metres will be valued at 10,000. In reality we (here in the Yukon anyway) have found this is not the case, and in fact we have to apply adjustments when determining the value of large parcels of land.
