An effective set of interlocking indicators provides feedback to individuals, groups, and the enterprise, directing the behavior of all.
Measures of performance have been used by management for centuries to review current operational capabilities. Such measures have been used to assess both departmental and corporate performance, as well as performance trends against plan.
In many industrial facilities, these measurements relate to safety (number of incidents), environmental performance (number of releases), costs (percentage of departmental budget used), and production (actual vs. targeted output). These measures are needed to determine not only whether resources and costs have been managed well for the production achieved, but also whether the assets, or plant, remain in good health. They also provide assurance that the asset policies in place today do not limit capabilities for tomorrow.
In order to define a complete set of performance measures, companies must ensure that simple, workable measures are in place. The real challenge is not only to select those indicators that satisfy budgetary goals, but also to build the activities needed to meet the levels of asset performance required to meet strategic goals.
Selecting the right measures is vital for effectiveness. Even more importantly, the metrics must be built into a performance measurement system that allows individuals and groups to understand how their behaviors and activities are fulfilling the overall corporate goals.
When built into management processes, performance metrics become a system that generates organizational behaviors conforming to what is measured, i.e., “you are what you measure.” Such a system encourages behaviors that produce a good score for the individual or for the department.
This may or may not, however, help to achieve strategic goals. Therefore, when building performance metrics, we must begin with the end result in mind. We need to focus on what we want as outcomes of our work processes. This presents a dilemma, as we do not work as a set of isolated departments, but in collaboration with others. Processes that begin with an individual are continued or completed by others. So, how do we effectively measure outcomes when a single individual or group is not controlling all the key steps?
Several basic frameworks have been proposed to build intelligent metrics that form sets of composite measures to simplify this problem. For example, the SMART test (see accompanying section “Building and Testing Performance Indicators”) is frequently used as a quick reference for judging the quality of a particular performance metric. These frameworks do not, however, address how the measures will interact to stimulate an effective network of key processes. How can individuals see the effects of their improvements if these get lost in the noise of company management reports?
One problem is that business processes are segmented, and many departments are collecting silos of information that produce metrics used only for the sake of measurement. These silos then reinforce divergent opinions of company performance and limit a common understanding of what new behaviors are needed. So, a major factor in implementing performance measurement is changing the way performance is measured and reported and how people view success within their own processes.
For many organizations, this is “where the rubber hits the road”: How can we build realistic, practical metrics which drive change? How can we articulate company objectives through enterprise-wide metrics in an integrated measurement system?
Asset performance metrics
An asset performance management (APM) initiative comprises business processes, workflows, and data capture that enable rigorous analysis to help define strategies based on best practices, plant history, and fact-based decision support.
An effective initiative must include three phases: Strategize, Execute, and Evaluate (SEE). All companies engage in some form of activities in the Execute phase, such as performing maintenance, inspections, and monitoring. However, most companies neglect the Strategize and Evaluate phases, where most of the real value of the work execution can be realized. In the Strategize and Evaluate phases, successful companies will not only perform analysis to determine appropriate asset strategies, but also use the information generated by the Execute phase to reevaluate their asset strategies and redefine how they manage their assets and process risks.
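The SEE cycle can be sketched in miniature as a closed loop. The function names, event records, and the failure-count rule below are illustrative assumptions for the sketch, not part of any particular APM product:

```python
# Illustrative sketch of the Strategize-Execute-Evaluate (SEE) loop.
# All names and rules here are hypothetical; a real APM system would
# persist strategies and pull execution history from maintenance records.

def strategize(asset, history):
    """Choose a maintenance strategy from plant history (assumed rule)."""
    failure_count = sum(1 for event in history if event["type"] == "failure")
    return "predictive" if failure_count > 2 else "preventive"

def execute(asset, strategy):
    """Perform the work and return the events it generated (stubbed here)."""
    return [{"asset": asset, "type": "inspection", "strategy": strategy}]

def evaluate(history):
    """Decide whether the current strategy is still effective."""
    failures = [e for e in history if e["type"] == "failure"]
    return len(failures) == 0  # effective only if no failures on record

history = [{"asset": "PMP-101", "type": "failure"}] * 3
strategy = strategize("PMP-101", history)      # Strategize
history += execute("PMP-101", strategy)        # Execute
if not evaluate(history):                      # Evaluate
    strategy = strategize("PMP-101", history)  # loop back and revise
```

The point of the sketch is the loop itself: execution generates the event history that the next round of strategizing consumes.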
As with many management issues, the key to building a set of performance metrics is to do it in stages, as noted in the accompanying section “Building and Testing Performance Indicators.” Clear corporate goals are important at this point, otherwise vague objectives will create impractical perspectives and metrics. Consider also current asset performance indicators: What is currently measured? How are these aligned with company objectives? For many, this lack of connectivity causes dissatisfaction with management reports and criticism of managers who “manage by the numbers.”
By contrast, well-organized metrics and scorecards provide operational measures that have clear cause-and-effect relationships with the desired outcomes. Each of these outcomes will build toward the goals of the perspective. And these metrics, if well chosen, will be the catalysts for change, providing warning signals to identify ineffective or failed asset performance strategies.
The best way to build this relationship is to “map” front-line activities all the way up to corporate goals. For many organizations undergoing strategic change, this may involve reorientation around new customer or supplier perspectives (company stakeholders). For others, a customer perspective may already have been built in, but the linkages to external (or internal) stakeholder metrics may have been lost since the change program was initiated.
Four tactical perspectives
The accompanying “Performance Indicators for Managing Risk and Improving Profitability” chart illustrates a high-level map developed for a chemical company using operational excellence goals of managing risks and improving profitability. From this strategic goal, perspectives have been defined which are specific to four functions: Operations, Reliability, Work Management, and Safety and Environmental.
Within these perspectives, each discipline can take charge of factors under its control by choosing the right metrics to measure its progress toward achieving the collective goal. At this stage, metrics should be reviewed to determine which will most effectively measure the desired outcomes. One question to consider is: “At what stage is it reasonable to expect the metric to be a meaningful measure of performance?” Clearly, some measures may not be effective if the work processes that generate their outcomes are still being built and learned.
So, for this example, the Operations objectives are to focus on delivering reduced operating costs, and managing the risks inherent in the process and in operational activities, while maximizing methanol output.
In the Safety and Environmental perspective, the focus is on providing the systems, procedures, and training which build operational awareness, skills, functional systems and capabilities to prevent, manage, and eliminate safety and environmental incidents.
And in the Work Management perspective, the focus is on efficiently completing maintenance work while minimizing the potential for future breakdowns and restoring assets to their operating condition.
Finally, for the Reliability perspective, the focus is to build the analytics and skills required to increase and improve plant uptime while preserving the integrity and life of plant assets.
From each of these perspectives, tactical metrics can be set to stimulate new outcomes, build new processes, and build skill development and learning—all with clear links to the goals of each individual perspective. Now the value of performance improvements can be easily seen and used to drive changes in functional behaviors and functional interactions.
Scorecards and performance reporting
With the perspectives aligned to corporate goals, key performance indicators (KPIs) can be organized into scorecards using well-considered metrics. KPIs can be chosen that directly achieve individual goals or fulfill shared objectives needed to maximize operating performance, such as asset availability, asset integrity, optimal process capability, or reduced utility consumption.
Metrics that build upon individual perspective goals need to be mapped from the lower-level operational measures to higher-level strategic measures. For example, within the Operations perspective, a strategic KPI candidate could be Plant Uptime. Obviously, this cannot be achieved by operations personnel alone or through a single new activity or the application of a new skill. But the lower operational KPIs of Reduced Startup Time and Increased Running Time at Optimal Output may be more under their control. Improvements in these metrics will result in more product produced for the available hours in the operating period.
But if the plant is not available, operations staff will be limited. This is where the visibility and ownership of shared KPIs will come into play. Here, tactical KPIs of Maintenance Compliance and skill/learning development of Defining/Measuring Deterioration Mechanisms will need to be adopted. The compliance metric will ensure that effective maintenance is executed in a timely manner before extensive collateral damage has occurred. And, once the deterioration mechanisms are clearly understood, they can be monitored and interpreted by reliability, operations, and maintenance to reduce impact on production hours and save maintenance resources through timely planning.
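As a rough illustration, the roll-up from operational measures to a strategic uptime figure might be computed as below. The KPI formulas, field names, and sample numbers are assumptions for the sketch, not the article's prescribed definitions:

```python
# Hypothetical roll-up of lower-level operational KPIs into a strategic
# Plant Uptime figure. Formulas and sample values are illustrative only.

def plant_uptime(period_hours, downtime_hours, startup_hours):
    """Strategic KPI: fraction of the period spent producing."""
    return (period_hours - downtime_hours - startup_hours) / period_hours

def maintenance_compliance(orders_completed_on_time, orders_scheduled):
    """Shared tactical KPI, owned jointly by maintenance and operations."""
    return orders_completed_on_time / orders_scheduled

# One 30-day operating period (720 hours), with assumed figures:
uptime = plant_uptime(period_hours=720, downtime_hours=36, startup_hours=12)
compliance = maintenance_compliance(orders_completed_on_time=47,
                                    orders_scheduled=50)
print(f"Plant uptime: {uptime:.1%}, maintenance compliance: {compliance:.0%}")
```

Reduced startup hours and reduced downtime both flow directly into the uptime figure, which is what makes the lower-level KPIs legible at the strategic level.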
From these integrated metrics, we now have to face the challenge of how to collect the data in a systematic manner and on a reasonably routine basis. Asset performance management systems provide transaction engines but often they leave many KPIs uncollected. This leaves manual processes to fill the gap, resulting in missing data records and inadequate information since, quite often, there is no time between scheduled work to record what has occurred. But if details are not recorded, later analysis is frustrated by poor or limited recorded information in the transactions and has to rely on less-reliable memories from craftsmen and contractors.
Solutions are needed here to pull basic data from enterprise resource planning (ERP) systems and supplement these with post-event information. Often, however, there are too many individual work items to get all of the details around each event, so facts should be built for the significant events from which asset performance insights can be evaluated.
As direct costs associated with events are often a fraction of production losses (lost profit opportunities), production incidents need to be mapped to their corporate impact instead of providing only measures of man-hours expended. A production incident tracking system is vital so these events can be tied to systematic assessments of actual and potential losses. Figure 1 shows how maintenance measures as well as production losses can be combined to give a full measure of the business impact of events over time.
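That mapping can be sketched as direct repair cost plus the profit lost during downtime, which together give the event's full business impact. The contribution margin and the event record below are invented for illustration:

```python
# Sketch of sizing the full business impact of a production incident:
# direct maintenance cost plus lost profit opportunity. The margin and
# the incident record are assumed values, not real plant data.

CONTRIBUTION_MARGIN_PER_TONNE = 120.0  # assumed profit per tonne of product

def business_impact(event):
    """Direct repair cost plus the profit lost to downtime."""
    lost_production = event["downtime_hours"] * event["rate_tonnes_per_hour"]
    return event["maintenance_cost"] + lost_production * CONTRIBUTION_MARGIN_PER_TONNE

incident = {"asset": "PMP-101", "maintenance_cost": 8_000.0,
            "downtime_hours": 10, "rate_tonnes_per_hour": 25.0}
print(business_impact(incident))
```

In this invented example the repair cost is well under a third of the total impact, which is exactly why man-hour measures alone understate production incidents.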
Routine reporting and automation of the KPIs provides management, reliability engineers, or asset performance analysts with the collective data. Then less time is spent collecting and more time is spent applying the data to achieve the benefits required for business success.
However, without a central location to collect, store, and report KPI data, it can be extremely difficult to manage metrics unified around a strategy map. The data must be accurate, trustworthy, and timely to make a beneficial contribution to a site’s or a company’s strategies aimed at asset performance improvement. So, having a system that holds all the data you need in one central location is important.
As shown in Fig. 2, using a “dashboard” approach with dial gauges and graphical trends gives highly visible feedback to groups and individuals on the operational and strategic performance achieved. These gauges appear on users' Web-based home pages, and graphic alerts can be auto-generated when changes in performance occur, signaling either successful improvements or failures of existing asset strategies.
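Behind the gauges, such an alert can amount to no more than comparing each KPI against a target band, as in this minimal sketch (the KPI names and thresholds are assumed, not taken from any product):

```python
# Minimal sketch of dashboard-style alerting: flag each KPI whose latest
# value has fallen below its target. Names and thresholds are assumptions.

def kpi_alerts(readings, targets):
    """Return the names of KPIs currently below target."""
    return [name for name, value in readings.items() if value < targets[name]]

readings = {"plant_uptime": 0.89, "maintenance_compliance": 0.96}
targets = {"plant_uptime": 0.92, "maintenance_compliance": 0.95}
alerts = kpi_alerts(readings, targets)  # each entry signals a strategy to review
```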
A petrochemical facility with approximately 50,000 assets employed rigid preventive maintenance (PM) and predictive maintenance (PdM) programs. A primary corporate objective was to achieve a minimum return on net assets (RONA) of 12 percent, but the company had floundered around the 8 percent mark for the past few years. Preventable failures and lost-production items were still prevalent.
The company initiated a focus on reliability to help achieve the expected RONA. A PM program was set up by functional location and scheduled in its maintenance management system. Predictive maintenance items such as vibration data collection and analysis, infrared switchgear inspections, ultrasonic thickness measurements, and oil sampling and analysis were routinely performed. The facility developed subject matter experts and established new work programs.
As a result, machine availability increased from poor earlier performance to between middle to top tier performance. However, critical machine failure still occurred randomly and unexpectedly. Production targets were affected, and RONA targets were still not achieved. Even though subject matter experts were in place and a focus on implementing preventive and predictive maintenance programs was occurring, unexpected failures were still affecting reliability and impacting production targets.
(A common problem with mature maintenance programs can be that they may not have been designed correctly, and that, on average, between 40 and 60 percent of the PM tasks typically serve very little purpose. This poses a significant issue for improving maintenance performance since no amount of perfection in planning and scheduling will make up for the inefficiencies of the maintenance program itself. Achieving 100 percent compliance with an initiative that is 50 percent useful and 50 percent wasteful is not good asset performance management.)
In spite of their efforts, preventable failures were still occurring at this facility. The production manager was given primary responsibility for achieving the 12 percent RONA. She quickly realized that meeting this goal would require interdepartmental cooperation: she needed all departments to buy into the goal and hold one another accountable as well. Within months, and after much discussion and many presentations to department heads, the KPIs began to show up on each department manager's scorecard.
Building an asset performance culture
With accountability comes responsibility, and in this case, the responsibility to achieve the corporate objective. Alignment processes were in full swing. A strategy was developed that employed KPIs to manage the effectiveness of the existing PM and PdM strategies. Knowing when to perform the PM, and when and what PdM data to routinely capture, is a must in order to be effective and eliminate the preventable failures that were occurring. If RONA was a key corporate objective, then how could this company align its PM strategies to achieve the target? How could KPIs be used in this effort?
This company decided to incorporate an asset performance management strategy by first defining which assets were critical to achieving production targets. It decided to first focus on those efforts that had a business impact and were keeping the plant from achieving a 12 percent RONA. Only 4000 assets (8 percent of the total) were identified, including rotating, fixed, electrical, and instrumentation.
After the highly critical assets were identified, specific PM and PdM schedules were put in place, rather than the typical “once-per-month” philosophy that was previously employed. KPIs were aligned in an effort to meet the corporate objective and included items such as the number of failures on highly critical equipment and the percentage of highly critical equipment with an optimized PM study completed.
As you can see, these are not the typical metrics that many companies employ. These metrics were designed to focus on the 4000 highly critical assets that were preventing the achievement of 12 percent RONA. The spotlight had now shifted from carrying out PM and PdM efforts on all 50,000 assets to the 4000 highly critical assets. Resources were aligned to employ methodologies to optimize PM plans for each of the critical assets.
When failures occurred in the facility on noncritical assets, significantly less attention was given to those events. Operators, maintenance technicians, process engineers, and management were refocusing their resource allocations on the 4000 highly critical assets. KPIs dictated which assets would receive the most focus based on consequence and impact of failure. This practical, easy-to-implement strategy, using KPIs, led to achievement of the desired 12 percent RONA.
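A criticality screen of this kind can be sketched as a simple likelihood-and-consequence score. The scoring scales and cutoff below are illustrative assumptions, not the facility's actual method:

```python
# Illustrative criticality screen: reduce a large asset register to the
# highly critical subset. Scoring fields and cutoff are assumed, not the
# facility's real criteria.

def is_highly_critical(asset, cutoff=75):
    """Score = likelihood x consequence, each on an assumed 1-10 scale."""
    return asset["likelihood"] * asset["consequence"] >= cutoff

assets = [
    {"tag": "PMP-101", "likelihood": 9, "consequence": 10},
    {"tag": "FAN-220", "likelihood": 3, "consequence": 4},
]
critical = [a["tag"] for a in assets if is_highly_critical(a)]
# PM/PdM resources and KPIs are then focused on the `critical` subset only
```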
Asset performance management benefits
Every organization needs a strategy for effective metrics mapped to corporate objectives in an asset performance management system that can track, trend, and analyze asset information for better business decision making. The data must be held in one central location, with asset events detailed enough to support easy querying and reporting.
For this company, the APM system promoted:
• A closed loop process of defining strategies, executing the performance of those strategies, and evaluating failed assets in order to determine if the initial strategy was ineffective and required change
• Use of a central repository for all technical and detailed asset event data
• Use of KPIs to measure progress and reinforce positive behaviors
• Elimination of departmental barriers (everyone had a common objective driven by corporate management)
• Use of statistical tools to evaluate asset performance and understand past failures and successes
• Delivery of corporate value (the RONA goal was achieved)
In this case, the benefits of this strategy included a clear understanding of a common goal, or an objective, and alignment of strategies and resources. The 12 percent RONA was now not just some corporate objective; it had real meaning to the site’s employees. Further, when targets were not met, a process was put in place to analyze the results and determine if the strategies for a specific asset needed to be changed. A continuous, closed loop business process was now in place.
KPIs in APM workflow
When KPIs are used in an APM workflow, an interface exists between the APM system and the maintenance management system. Data flows automatically between the two systems, generating key performance indicators. In one specific example for a pump (PMP-101), the strategy employed takes the tasks developed in the APM system and automatically enters them into the maintenance management system.
Typically, they are scheduled as work orders. In this workflow, all work order data is captured and sent to the APM system and a detailed data capture is performed regarding any event associated with PMP-101. This data is used to routinely capture KPIs that can be used as a set of criteria to automatically notify the user that an analysis is required. In the example, this would probably occur when a highly critical asset experienced a failure. The implementation of the APM system made the initiative much easier to achieve and adjustment of strategies simpler to accomplish.
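The notification step in that workflow might be sketched as follows; the interface, field names, and criticality list are hypothetical stand-ins for the real system integration:

```python
# Sketch of the APM-to-maintenance-system feedback for PMP-101: closed
# work orders flow back to APM, and a failure on a highly critical asset
# triggers an analysis notification. All interfaces shown are hypothetical.

HIGHLY_CRITICAL = {"PMP-101"}  # assumed criticality register

def on_work_order_closed(order, notify):
    """Called when the maintenance system reports a completed work order."""
    if order["asset"] in HIGHLY_CRITICAL and order["event_type"] == "failure":
        notify(f"Analysis required: failure on critical asset {order['asset']}")

messages = []
on_work_order_closed({"asset": "PMP-101", "event_type": "failure"},
                     notify=messages.append)
```

A routine inspection on the same pump, or a failure on a noncritical asset, would pass through without raising the notification.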
SEE for clarity
When organizations are given a framework to monitor asset performance and empowered with a strategy review process, improvements in asset performance occur and production process constraints wither away. Benchmark levels of performance are achieved, and what were formerly regarded as “ideal” levels of production are regularly achieved.
Effective scorecards are a powerful catalyst for making the need for change visible and the opportunity for improvement clear. Alone, the opportunity is a powerful motivator. But harnessing that power with an APM system is essential to prevent the process from becoming fragmented with multiple conflicting improvements, causing confusion and instability.
An APM platform focused on reinforcing the cycle of Strategize-Execute-Evaluate (SEE) brings clarity and structure to the situation. An asset performance culture begins with defining or refining asset strategies (the Strategize phase), which requires an understanding of the production requirements, process hazards, plant configuration, and the current organizational capabilities.
The Execute phase should capitalize on the information gathered when the asset strategies are applied and failures and successes occur. With an effective APM system, observations made by individuals in the Execute phase can be recorded and shared with various departments, offering new insights into how potential failures can be detected and prevented earlier.
With this culture in place, metrics and scorecards can progressively reveal opportunities for improvement in the Evaluate phase. Here the right metrics and scorecards replace the “blame culture” that surfaces in many situations where low-grade performance is revealed. With the right culture, following the cause-effect relationships in the strategy map can determine when and where systemic causes are being introduced and what improvements can be made.
These improvements can then cycle back to the Strategize phase where the strategies can be re-assessed to make effective holistic improvements to the asset configuration, the way failures are managed and mitigated, the organization's skills and capabilities, or the performance measurement system itself.
Anthony McNeeney is a senior APM consultant for the Asset Performance Management Consulting Group at Meridium, 10 S. Jefferson St., Roanoke, VA 24011, provider of asset performance management solutions; (540) 344-9205
BUILDING AND TESTING PERFORMANCE INDICATORS
As with many management issues, it is often best to build a solution in stages. Suggested stages for performance indicators are:
1. Define the links between corporate goals and major operational perspectives.
2. Map these strategic links to required processes in each perspective area.
3. Define a set of near-term and medium-term metrics which drive the new outcomes in each perspective.
4. Define the gaps and dependencies across the organization which will need to be bridged to result in corporate success.
5. Implement the metrics as individual and group scorecards and monitor to secure the strategic results.
Use the SMART test
S = Specific: clear and focused to avoid misinterpretation.
M = Measurable: can be quantified and compared to other data.
A = Attainable: achievable, reasonable, and credible under conditions expected.
R = Realistic: fits into the organization's constraints and is cost effective.
T = Timely: doable within the time frame given.
Key performance indicators should be trendable, observable, reliable, measurable, and specific.
Source : http://www.mt-online.com/component/content/article/103-april2005/639-selecting-the-right-key-performance-indicators.html?directory=90