Sunday, January 2, 2011

Using Metrics to Enhance Continuous Improvement

If you are like the vast majority of FMs, you are now much more familiar with Metrics and Continuous Improvement programs than you once were.  Hopefully you have metrics that accurately measure the important parts of your business, and a system for routinely analyzing and reporting the data.  Good data collection and analysis practices are at the very heart of meaningful metrics outputs.  Measure the wrong things, or measure incorrectly, and you are at risk of making wrong decisions.  Measure correctly and you have a gold mine of opportunity.

Ah, but that is the rub, isn’t it?  We have all of this information now, but what are we supposed to do with it?  Simply making charts to flash on the screen doesn’t seem like much value for the investment made in developing the data.  It’s not.  The real value of metrics programs is in the change they lead to.  Change, as in “Continuous Improvement.”

Organizations that link their Metrics and Continuous Improvement efforts use the knowledge gained from data to direct their improvement work.  This is a key linkage that increases the return on investment from the metrics program, helps fine-tune data processes, and increases the speed of Continuous Improvement.

Continuous Improvement programs that are targeted based on accurate data yield far more beneficial results than those that are not.  This is because data driven Continuous Improvement projects apply leverage to those processes in an organization that really matter and have the greatest potential to improve performance. 

As an example, let’s take a look at a classic FM service issue through the lens of a data-targeted Continuous Improvement process and one that is not.

The Issue:  Customer Satisfaction ratings on hot/cold work orders appear to be declining over time.

Untargeted Approach:  Analyze hot/cold work orders for Mean Time to Repair, Mean Time Between Failures, Mean Down Time, etc. to determine which sub-processes appear to be out of control.  Investigate and revise those processes.

Data-Targeted Approach:  Analyze Customer Satisfaction rating data at a fine level to determine which element(s) of the process is driving the rating decline.  Develop a Continuous Improvement project to address those elements.

The difference between the two approaches is that the untargeted approach assumes the entire process is part of the problem, while the targeted approach uses data to identify the key element(s) of the problem, thereby concentrating improvement efforts where they matter most.
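As a rough sketch of what the targeted approach looks like in practice, one might break survey scores down by process element and find the one dragging the average down.  The element names and scores below are entirely hypothetical, invented only for illustration:

```python
# Hypothetical survey breakdown: per-question satisfaction scores (1-5)
# for hot/cold work orders, grouped by process element.
ratings = {
    "response_time": [4, 5, 4, 4, 5],
    "fix_quality":   [4, 4, 5, 4, 4],
    "communication": [2, 3, 2, 3, 2],  # status updates / close-out notices
}

# Average score for each element of the process.
averages = {element: sum(scores) / len(scores)
            for element, scores in ratings.items()}

# The element with the lowest average is the candidate CI target.
target = min(averages, key=averages.get)
print(target)             # communication
print(averages[target])   # 2.4
```

In this made-up example the overall rating hides the fact that only one element (communication) is weak, which is precisely why fine-grained analysis concentrates the improvement effort where it matters.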

Interestingly, the untargeted approach may indeed improve the overall process yet fail to improve the Customer Satisfaction rating.  A targeted approach, for example, may reveal that customer dissatisfaction is rooted in poor communication and ambiguity about work order status or close-out. 

Metrics programs are about discovering data, seeking knowledge from the data, and then turning the knowledge into actionable wisdom.  Continuous Improvement programs are about applying systematic problem analysis methodologies to important issues in order to effect positive change in the Quality, Time, Cost and Customer Satisfaction dimensions. 

Using data to target Continuous Improvement efforts increases the metrics program ROI and accelerates the pace of operational improvement.  That seems like a positive outcome for both, and for you.


  1. I have a philosophical problem with the notion of continuous improvement. In principle it is a good thing, but in practice it has two significant problems working against it:

    1. Instability. Continuous improvement can lead to instability because systems are constantly being tinkered with. This is exactly what happened with Toyota, which has long been a leader in Kaizen (continuous improvement), when it had issues with accelerators last year.

    2. Diminishing returns. Nowhere in the continuous improvement literature do I see warnings about diminishing returns. Over time, each improvement you make to a system gives you less bang for your buck. Eventually, the cost of a continuous improvement effort can outweigh the value of the improvement being implemented.

    I would add that in linking data to institutional improvements, you should make use of *all* the available data. In your example of customer satisfaction for hot/cold calls, you are studying the customer satisfaction data itself. Survey data is always subjective. However, if you compared the results of customer satisfaction with reported room temperatures at the time the incidents were reported, you could learn whether or not there is a physical problem or a perception problem. The solution you choose to implement varies greatly from one to the other.

  2. While I agree that CI can seem never ending and thus lead to a feeling of instability (constant flux?) I do not believe the Toyota experience invalidates the principle. Toyota's problem came about not because CI didn't work but because management sabotaged it. They lost sight of their "True North" and paid the price, as they should have.

    The issue of diminishing returns is valid. Sometimes it is easy to keep flogging the same old horse when you really should be changing horses. Put another way: selecting which CI projects are allowed to go forward should be a studied exercise. The old axiom "don't let the good become the enemy of the best" comes to mind as good advice here.

    Finally, your last point was precisely the point! The example was meant to be illustrative only, not definitive. The only way one can get to the real source is by looking at all of the data objectively.

    Thanks for reading and commenting. I hope your New Year is off to a great start.

  3. Ken, I enjoyed reading your post, and I appreciated Jamie Todd Rubin's response. While I agree with him that in many areas, the notion of CI can yield diminishing returns, I believe that those diminished returns are still of significant enough value to support the importance of continuous improvement for facilities management. There are two reasons for this:

    a) A very small percentage of the universe of buildings has been benchmarked as part of a continuous improvement process with a good set of metrics; ergo, there is much room for improvement.

    b) In a field where technology is always advancing, new best practices are always evolving, thereby enabling continuous improvement.

    For metrics to do more than just be a scorecard (i.e., result in improvement, not just statistics), they must be tied into best practices.

    While I agree that the first years nearly always will yield the largest improvement, there will always be more that one can do, partly because of the way technology has evolved.

    We have been tracking improvement over a ten-year period for people who have benchmarked their facilities over time. In the area of operating costs, the median company that we have tracked reduced its operating expenses by 6% after its first year, and 31% after ten years, with some reduction achieved each year. This certainly supports the claim of continuous improvement.

    Similarly, we have found that for those who have benchmarked their energy consumption over the same time period, the median company reduced its consumption by 8% after one year and 40% after ten years.

    Yes, there is a cost involved to make the improvements, but the identified improvements should be evaluated and cost-justified the same way that any other facility improvement is examined. The companies surveyed did not implement everything -- they implemented only those that were proven to be cost effective.

    Bottom-line: By integrating metrics and best practices, one is able to demonstrate continuous improvement in the world of facilities.
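    [Editor's note: the ten-year figures quoted above are themselves consistent with the diminishing-returns pattern raised in the first comment. Assuming, purely for illustration, a constant annual reduction rate compounding over ten years (the `annual_rate` helper below is hypothetical, and only the percentages quoted in the comment are used), the implied steady-state annual reduction is well below the reported first-year reduction:]

    ```python
    # Implied compound annual reduction from a cumulative ten-year reduction:
    # (1 - r)**years = 1 - cumulative  =>  r = 1 - (1 - cumulative)**(1/years)
    def annual_rate(cumulative, years=10):
        return 1 - (1 - cumulative) ** (1 / years)

    opex = annual_rate(0.31)    # 31% cumulative operating-cost reduction
    energy = annual_rate(0.40)  # 40% cumulative energy-consumption reduction

    print(round(opex * 100, 1))    # ~3.6% per year, vs 6% in year one
    print(round(energy * 100, 1))  # ~5.0% per year, vs 8% in year one
    ```

    [So the average year after the first delivers roughly half the first year's gain, yet the gains remain real, which supports the commenter's point that diminished returns can still be worth pursuing.]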

  4. Ken, I'm trying to find some information and wondering if you can help me out. What is the industry average (biotech in particular, though I'll take any industry information) for the number of hours invested in continuous improvement efforts? For example, one company I know of is investing about 5% of labor hours strictly in continuous improvement projects, training, etc. Is this high or low?

  5. Good question. I'll do some digging and see what I run across. Check back here for updates or send me an email for direct contact.