Archive for the ‘Gary Cokins Space’ Category

Cut Through the Confusion to Unleash the Full Power of Performance Management

Saturday, April 16th, 2011

This is a post by Mr. Gary Cokins. Gary Cokins is a Product Marketing Manager with SAS, the leader in business analytics software and services. He is an internationally recognized expert, speaker, and author in advanced cost management and performance improvement systems. Gary also blogs regularly.

I keep coming across evidence that there are still serious misunderstandings about the potential scope and power of performance management initiatives.

Here’s an example: blogger Ann All recently posted this article titled “BPM + CRM = Improved (not Perfect) Customer Service.” In it, she advocates integrating a company’s customer relationship management (CRM) system with business process management (BPM) tools.

The article is a solid piece. However, when it comes to integration, why stop with just those two components? Why limit what information and solutions can be combined and integrated?

One reason for this myopia is because there’s still some confusion in the marketplace about the meaning of the term “performance management.” Just Google it and you’ll see what I mean.

The confusion begins with the alphabet soup of acronyms. We often see in the press and media the acronyms BPM for business performance management, CPM for corporate performance management, and EPM for enterprise performance management. But just as, in their various languages, the words merci, gracias, and danke all mean the same thing, so do these acronyms. Fortunately, IT research firms like IDC and Gartner are accepting the short version, and simply calling it PM — performance management.

In All’s article, the P in BPM refers to “process,” not “performance,” and indeed this acronym is in common usage with two distinct meanings. The former is a subset of the latter. Performance management should not be confused with the more mechanical business process management tools that automate the tasks of creating, revising, and managing workflow processes, such as customer order entry and accounts receivable.

A much more serious confusion is that many people give performance management a meaning that’s far too narrow. It’s often regarded as a CFO initiative that provides a bunch of measurement dashboards for feedback and better financial reporting. In fact, though, the scope of a performance management initiative is, or should be, much broader than that, as I’ll explain in a moment.

A similar, but more recent, confusion arises from the term being narrowly applied to a single function or department, as in marketing performance management or IT performance management.

Then there’s the historical baggage that the term carries. In the past, performance management most commonly referred to the job performance of individual employees and the methods used by the personnel and human resources functions for processes such as employee appraisals. But today, the term is widely accepted as covering enterprisewide performance — the performance of an organization as a whole. Clearly, employees’ performance is an important element in an organization’s success, but in the broad framework of performance management, human capital management is just one component.

Let me try to clear up the confusion.

The good news is that performance management is not a new methodology that everyone now has to learn; rather, it tightly integrates business improvement and analytic methodologies that executives and employee teams are already familiar with.

Think of it as an umbrella concept: It integrates operational and financial information into a single decision-support and planning framework. Its capabilities include strategy mapping, a strategic balanced scorecard, operational dashboards, costing (including activity-based cost management), budgeting, forecasting, and resource capacity requirements planning.

These methodologies fuel other core solutions such as CRM, supply chain management (SCM), risk management, business process management, and human capital management systems, as well as Lean management and Six Sigma quality initiatives. It’s quite a stew, but they all blend together.

Performance management increases in power the better these managerial methodologies are integrated, unified, and spiced with all flavors of analytics, such as segmentation and correlation analysis. But its ultimate power comes from including predictive analytics. Predictive analytics are important because organizations are shifting away from managing by control and reacting to after-the-fact data; they’re moving toward managing with anticipatory planning. The goal is to be proactive and make adjustments before problems occur.

Unfortunately, at most organizations performance management’s methodologies are typically implemented in a silo-like sequence and operate in isolation from each other. It’s as if the project teams and managers responsible for each methodology live in parallel universes. But we all know that there are linkages and inter-dependencies, so we know that they should all somehow be integrated. It’s like a jigsaw puzzle — everyone knows that the pieces should fit together. But, too often, the picture on the puzzle’s box is missing!

Performance management provides that missing picture of integration, both technologically and socially. It makes executing the strategy everyone’s Number 1 job; it makes employees behave as if they are the business owners.

Many organizations jump from one improvement program to the next, hoping that each new effort will be the magic pill that provides that elusive competitive edge. However, most managers acknowledge that pulling just one lever for improvement rarely results in substantial change — particularly long-term, sustained change. The key is to integrate and balance multiple improvement methodologies while blending them with analytics of all kinds, particularly predictive analytics.

In the end, organizations need top-down guidance with bottom-up execution. The way to get there is through integrating methodologies and applying analytics to complete the full vision of the analytics-based performance management framework.

Analytics Can Answer, “Why Can’t…?”

Sunday, February 20th, 2011

This is a post by Mr. Gary Cokins. Gary also blogs regularly.

Are you as curious as I am about why these problems have not been solved?
• Why can’t traffic intersection stoplights be more variable based on street sensors that monitor the presence, location, and speed of approaching vehicles? Then you would not have to impatiently wait at a red light when there is no cross-traffic.
• Why can’t a call center route your inbound phone call to a more specialized call center representative based on your phone number and previous call topics or transactions? And, once connected, why can’t that call rep offer you rule-based deals or suggestions most likely to maximize your customer experience? Then you might get a quicker and better solution to your call.
• Why can’t dentists and doctors synchronize patient appointment schedule arrival times to reduce the amount of time that so many people collectively sit idly in the waiting room? Then you could show up just before your treatment.
• Why can’t airlines better alert their ground crews for plane gate arrivals? Then you don’t have to wait, sometimes endlessly, for the jet bridge crew to show up and open the door.
• Why can’t hotel elevators time their floor positioning to when guests leave their rooms? Then you wouldn’t get stuck on a slow “milk-run” elevator stopping at floor after floor while an “express” elevator that could have taken you quickly to your floor arrives just behind it.
• Why can’t airport passport control managers regulate the number of agents in synchronization with the arrivals of international flights? Then you don’t have to wait in long queue lines, while later the extra staff shows up (sometimes).
• Why can’t retail stores, at least when men enter them, partner with credit card companies and their transaction histories and use recommendation algorithms, as Netflix does, to suggest what a customer might want? Then you might more quickly find what you are shopping for.
• Why can’t water, gas, and electrical utility suppliers to home residences provide instant monitoring and feedback so that households can determine which appliances or events (e.g., taking showers) consume relatively more or less? Then households could adjust their usage behavior.
• Why can’t personnel and human resource departments do better workforce planning on both the demand and supply sides? On the supply side, why can’t they predict, in rank order, the employees most likely to voluntarily resign next, based on statistical data (e.g., age, pay raise amount, or raise frequency) about employees who have already resigned? For those who will retire, isn’t this predictable? On the demand side, why can’t more accurate sales volume forecasts be translated into headcount capacity planning by skill type or job group? Then the workforce would match the needs without scrambling when mismatches occur.
• Why can’t magazines you subscribe to print at the time of production an issue customized to you that has advertisements (and maybe even articles) that you likely care more about based on whatever profile they may have about you? Then the magazine’s content may be more relevant to you.
• Why can’t your home’s refrigerator and pantry, using microchips and barcode scanners, keep track of what you purchased and the rate of usage? Then you could better replenish them when out shopping.
• Why can’t CFOs and financial controllers allocate expense to products, standard service lines, and customers with cause-and-effect factors (i.e., activity drivers)? Why do they continue to use broad-brush and averaged factors? Then users of the cost information could make better decisions compared to the flawed and misleading costs that are typically reported.
Are these a vision of the future? Not in all cases. With analytics software, advanced managerial accounting software, and communication technology, some if not all of these problems are already solvable. It is time for gut feel, intuition, guessing, and flawed information to be replaced by analytics, applied to better manage organizations and better serve their customers.
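The workforce-planning bullet above hints at a rank-ordering exercise that is easy to sketch. The following Python snippet is illustrative only: the predictor fields and weights are invented for this post; a real model would fit them (for example, with logistic regression) against historical resignation data.

```python
# Hypothetical attrition-risk ranking. The fields and weights below are
# made up for illustration; in practice they would be estimated from
# statistical data on employees who have already resigned.

def attrition_risk(employee):
    """Return a 0-1 risk score from a few hypothetical predictors."""
    score = 0.0
    score += 0.4 if employee["months_since_raise"] > 18 else 0.0
    score += 0.3 if employee["last_raise_pct"] < 2.0 else 0.0
    score += 0.3 if employee["years_to_retirement"] < 2 else 0.0
    return score

def rank_by_risk(employees):
    """Rank order: the employee most likely to leave comes first."""
    return sorted(employees, key=attrition_risk, reverse=True)

staff = [
    {"name": "A", "months_since_raise": 24, "last_raise_pct": 1.5, "years_to_retirement": 10},
    {"name": "B", "months_since_raise": 6,  "last_raise_pct": 4.0, "years_to_retirement": 20},
    {"name": "C", "months_since_raise": 12, "last_raise_pct": 1.0, "years_to_retirement": 1},
]
print([e["name"] for e in rank_by_risk(staff)])  # ['A', 'C', 'B']
```

The output is the rank-ordered watch list the post asks for; swapping the hand-tuned scoring function for a fitted model is what turns the sketch into real predictive analytics.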

Why Does Shaken Confidence Reinforce One’s Advocacy?

Sunday, January 2nd, 2011
Gary Cokins, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

These last few years I have been mystified as to why proven and valuable managerial methods and techniques are taking so long to be adopted by organizations. Examples are business analytics, the balanced scorecard, and activity-based costing. Organizations that use them proclaim improved performance and better decision making. There is even empirical evidence, in terms of financial results, from Dr. David Norton, co-author with Professor Robert S. Kaplan of the Balanced Scorecard book series, that companies that have implemented their methods achieve superior results compared to their competitors.

My unscientific research has revealed explanations including:
• Insufficient or poor quality data.
• Misperceptions that the techniques are too complex.
• Initial failures with prior pilot projects.
• Human nature’s resistance to change.
• Not wanting to be held accountable.
• Not wanting to be measured (or measured without involvement selecting the measures).
• Fear of knowing the truth.
• Lack of executive sponsorship or a willing and passionate champion.

Note that solving the last few “excuses” involves behavior modification and change management. One problem is that few of us have been trained in change management; we are typically specialists.

Related to this, I recently read some disturbing psychology research. It deals with why people cling even more strongly to their ideas after those ideas are proven wrong. The research was conducted by two faculty members, David Gal and Derek Rucker, of Northwestern University’s Kellogg School of Management. Using tests with a control group, they revealed that the more people doubt their own beliefs, the more, paradoxically, they are inclined to support and lobby for them. Their paper’s title, “When in Doubt, Shout” (Psychological Science, November 2010), summarizes the findings: test subjects who were confronted with evidence that challenged and disproved their beliefs subsequently advocated them even more aggressively than the control group did.

This both bothers me and confirms some of my own observations. I often attempt to persuade managers and executives that applying fact-based quantitative statistics and logical methodologies is far superior to making decisions based on intuition and gut feel, or to using inappropriate key performance indicators (KPIs) or flawed and misleading cost allocation methods. Too often, I get stonewalled or told about the obstacles I listed above. How can we transform a “Dr. No” into a “Dr. Know”?

I can accept resistance to change, up to a point. But maybe pride of ownership (and perhaps some ego) explains why, despite strong business cases for adopting the methods I described, managers and executives remain hesitant to march forward. Wouldn’t they like to gain insights, or know something about the future before their organization gets there? How valuable would it be to know things their competitors do not? Perhaps Gal and Rucker’s research gives a clue.

Why Are There Irrational Business Decisions?

Sunday, December 19th, 2010
This is a guest post by Mr. Gary Cokins. Gary Cokins is a Product Marketing Manager with SAS, the leader in business analytics software and services. He is an internationally recognized expert, speaker, and author in advanced cost management and performance improvement systems. Gary also blogs regularly.

I recently attended a presentation by Dr. Franck Schuurmans, a guest lecturer at the Wharton Business School and a consultant for Decision Strategies International. He captivated the audience with explanations as to why decision-makers make irrational business decisions.
A simple exercise he used with the audience to demonstrate one of his points was a list of ten questions, such as identifying the year Mozart was born. The assignment was to select a range wide enough that you had 90 percent confidence the correct answer fell within it. Mozart was born in 1756, so, for example, you could have narrowly selected 1730 and 1770, or broadly selected 1600 and 1900. The range was your choice. Surprisingly, the vast majority answered correctly on five or fewer of the ten questions. Why so badly? Most chose bounds that were too narrow. The lesson is that people are innately overconfident: they pick precise-looking ranges even when there is no penalty for choosing wide ones.
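The exercise is easy to score in code. This sketch uses invented questions and ranges; the point is only that a well-calibrated set of 90 percent confidence ranges should contain the true value about nine times in ten.

```python
# Score a set of 90%-confidence range answers: what fraction of the ranges
# actually contains the true value? The questions and guesses here are
# invented for illustration.

def hit_rate(guesses, truths):
    """Fraction of (lo, hi) ranges that contain the corresponding truth."""
    hits = sum(lo <= t <= hi for (lo, hi), t in zip(guesses, truths))
    return hits / len(truths)

truths  = [1756, 8848, 1969]   # Mozart's birth year, Everest height (m), moon landing
guesses = [(1800, 1850),       # too narrow AND wrong: misses 1756
           (8000, 9000),       # contains 8848
           (1960, 1975)]       # contains 1969

print(hit_rate(guesses, truths))
```

A hit rate of 2/3 on supposedly 90 percent ranges is exactly the overconfidence Dr. Schuurmans’s audience exhibited.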
Dr. Schuurmans wisely avoided lecturing deeply on the nuances of cognitive psychology and the theories of bounded rationality that earned Dr. Herbert Simon the 1978 Nobel Prize in Economics. He explained irrational decision-making in layman’s terms.
A takeaway I got was that humans have limited rationality between our ears – our brains were designed to hunt prey. Typically, people defer to mental shortcuts learned by discovery; the academic term is heuristics. So, for example, you decide to take an umbrella if the sky has dark clouds but not if it is sunny. Are you 100 percent sure? Probably good enough for the umbrella decision. But do you know, or just think you know? This example gives a glimpse of the limits of decision-making. Mental shortcuts, gut feel, intuition, and so on typically work, except when problems get complex.
When problems get complex, a new set of issues arises, and systematic thinking is required. What often trips people up is that they do not frame a problem before they begin collecting the information that will lead to their conclusions. There is often a bias or preconception: you seek data that will validate your bias. The adverse effect, as Dr. Schuurmans described it, is “We prepare ourselves for X, and Y happens.” By framing a problem first, you widen the options for formulating a hypothesis.
How is this relevant to applying business analytics, the emerging field of interest for improving organizational performance? A common misconception among information technology specialists is to equate applying business intelligence (BI) technologies with query and reporting techniques like data mining. But in practice, experienced analysts don’t use BI to search for a diamond in a coal mine. Instead, they first speculate that two things are related, or that some underlying behavior is driving a pattern seen in the data. They apply business analytics as confirmatory rather than as somewhat random exploration. This requires easy and flexible access to data, the ability to manipulate the data, and software to support the process. But IT tends to exhibit gatekeeper behavior, proclaiming, “We own the data, and if you want a report, we’ll write it for you.”
Without initial problem-framing and a confirmatory approach, mistakes are inevitable. Sadly, as Dr. Schuurmans observed, many often do not learn from their mistakes but repeat them with more gusto.
In his book, Predictably Irrational, author Dan Ariely observes: “We are all far less rational in our decision-making than standard economic theory assumes. Our irrational behaviors are neither random nor senseless – they are systematic and predictable. So wouldn’t economics make a lot more sense if it were based on how people actually behave? That simple idea is the basis of behavioral economics.”
I expand on Ariely’s quote by asking: wouldn’t getting the ROI out of our treasure trove of mostly unused raw and transactional data make a lot more sense if we properly applied business analytics?

Why Do Once Successful Companies Fail?

Saturday, December 11th, 2010
This is a guest post by Mr. Gary Cokins. Gary, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

How can one explain why seemingly successful companies, like Wang Labs and Digital Equipment, go bankrupt or fall from a leadership position? I continue to be intrigued by the fact that almost half of the roughly 25 companies that passed the rigorous tests to be listed in Tom Peters and Robert Waterman’s once-famous book, In Search of Excellence, today either no longer exist, are in bankruptcy, or have performed poorly. What happened in the 25 years since the book was published? Ponder this question: “How many companies on the original Standard and Poor’s (S&P) 500 list, created in 1957, are on that list today?” Answer: 74, just 15 percent. And of those 74, only 12 have outperformed the S&P index average. Pretty grim.

Invulnerable today, aimless tomorrow

Perhaps it is because when an organization is enjoying success, that success breeds aversion to taking wise and calculated risks. Each new day requires strategic adjustments to anticipate continuously changing customer needs and to counter competitors’ tactics. Risk management is about balancing risk appetite with risk exposure. If there is not enough risk-taking appetite, performance will eventually suffer. (And if risk appetite is excessive, well, the current global fiscal crisis is evidence of the outcome.) How can an organization sustain its long-term performance?

In his book Why Smart Executives Fail, based on his research, Sydney Finkelstein observed that the causes of failure are not that executives are unintelligent; they are typically quite smart. Nor are the causes necessarily unforeseeable events: companies that failed often knew what was happening but chose not to do much about it. Nor are the causes always errors in daily actions. The explanation involves the attitude of executives, including a breakdown in their reasoning, their strategic thinking, and their creation of a culture of metrics and deep analysis.

Prominent examples are Wang Labs and Digital Equipment. Wang Labs failed in part because it specialized in computers designed exclusively for word processing and did not foresee the general-purpose personal computers with word processing software that emerged in the 1980s, led by IBM. Digital Equipment was satisfied with its dominance of the core minicomputer market, a category it had introduced. However, Digital was slow to adapt its product line to the new market for personal computers (PCs). The company’s entry into the PC arena in 1982 was a failure, and later PC collaborations with Olivetti and Intel achieved mixed results.

Often no one challenges the status quo and asks the tough questions. Delusion and fear of the unknown can develop in organizations, affecting how they manage key relationships with customers and suppliers. My belief is that when it comes to considering whether to implement and integrate the various component methodologies that constitute performance management, there are actually two choices: to do it or not to do it. Many organizations neglect the fact that the choice not to act, which means continuing with the status quo and perpetuating the current way of making decisions, is also a decision.

In many cases, executives believe that if a control system is in place, it will do the job for which it was intended. However, in many organizations, systems and policies are constructed for day-to-day transactions, not for analyzing the abundance of raw data to make sense of what it all means. Sustainability is based on transforming data into analyzable information for insights and decision making. This is where business intelligence and enterprise performance management systems with embedded analytics, such as those from software vendors like SAS, fit in.

The emerging need for analytics

With today’s global recession, the stakes have never been higher for managers to make better decisions with analyzable information. Companies that successfully leverage their information out-think, out-smart, and out-execute their competitors. High-performing enterprises are building their strategies around information-driven insights that generate results from the power of analytics of all flavors, such as segmentation and regression analysis, and especially predictive analytics. They are proactive, not reactive.

Executives are human and can make mistakes, but in company failures these are not simply minor misjudgments. In many cases their errors are enormous miscalculations that can be explained by problems in leadership. Regardless of how decentralized some businesses claim their decision making to be, corporations can be brought rapidly to the brink of failure by executives whose personal qualities create risks rather than mitigate them. Finkelstein observes that these flaws can appear in honorable CEOs, such as An Wang of Wang Labs, as well as in rogue CEOs like Dennis Kozlowski of Tyco, Ken Lay of Enron, John Rigas of Adelphia, and Steve Hilbert of Conseco.

To sustain long-term success, companies need leaders with the vision and inspiration to answer, “Where do we want to go?” Then, by communicating their strategy to managers and employees, they can empower the workforce with analytical tools to correctly answer, “How will we get there?”

Is risk management a part of performance management?

Monday, October 4th, 2010

This is a guest post by Mr. Gary Cokins. Gary Cokins, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

A popular acronym is GRC — for governance, risk, and compliance. One can consider governance (G) as the stewardship of executives to behave in a responsible way, such as providing a safe work environment or formulating an effective strategy, and consider compliance (C) as operating under laws and regulations. Risk management (R), the third element of GRC, is the element more associated with enterprise performance management.
Governance and compliance awareness from government legislation such as Sarbanes-Oxley and Basel II is clearly on the minds of all executives. Accountability and responsibility can no longer be evaded. If executives err on compliance, they can go to jail. As a result, internal audit controls have been beefed up.
The “R” in GRC has characteristics similar to those of performance management. The foundations for both risk management and performance management share two beliefs:
1. The less uncertainty there is about the future, the better.

2. If you cannot measure it, you cannot manage it. If you cannot manage it, you cannot improve it.
A strong case can be made that risk management is a subset under the much broader umbrella of enterprise performance management. An example is Lora Bentley’s blog, “Risk Management Should Be Part of Strategic Planning, Performance Management.”
Performance management is typically perceived too narrowly as just better financial reporting and a bunch of dashboard dials. It is much broader, and is better defined as the integration of multiple methodologies (e.g., strategy maps, customer relationship management, activity-based costing), with each methodology embedded with business analytics, such as segmentation analysis, and especially predictive analytics. Their collective purpose is to achieve the strategy and to enable better decisions. (I describe this in my book, Performance Management: Integrating Strategy Execution, Methodologies, Risk, and Analytics.)
Risk management is not about minimizing an organization’s risk exposure. Quite the contrary: it is about exploiting risk for maximum competitive advantage. A risky business strategy and plan always carries a high price. Effective risk management practices are comprehensive in recognizing and evaluating all potential risks and in determining the balance of an organization’s risk appetite. The goal is less volatility, greater predictability, fewer surprises, and, arguably most important, the ability to bounce back quickly after a risk event occurs.
A simple view of risk is that more things can happen than will happen. If an organization can assign probabilities to the possible outcomes, then it can consider how it will deal with surprises – outcomes that differ from what it expects. It can evaluate the consequences of being wrong in its expectations. In short, risk management is about dealing in advance with the consequences of being wrong.
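That last idea can be sketched with a toy scenario table. The probabilities and profit impacts below are invented for illustration; the point is that enumerating outcomes lets an organization compute an expected result and pre-plan for the surprises, including the worst case.

```python
# "More things can happen than will happen": enumerate possible outcomes
# with probabilities, then compute the expected result and the worst case.
# All figures are invented for illustration.

scenarios = {                  # outcome -> (probability, profit impact in $k)
    "demand up":    (0.25,  500),
    "base case":    (0.50,  200),
    "demand down":  (0.20, -100),
    "supply shock": (0.05, -400),
}

expected = sum(p * impact for p, impact in scenarios.values())
worst = min(impact for _, impact in scenarios.values())

print(round(expected, 2), worst)  # expected result vs. the surprise to plan for
```

Managing only to the expected value ignores the pre-planning for the low-probability, high-impact outcome, which is exactly the "dealing in advance with the consequences of being wrong" the post describes.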

The Shift to Predictive Accounting

Tuesday, September 28th, 2010

This is a guest post by Mr. Gary Cokins. Gary Cokins, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

There is a widening gap between what the CFO and accountants report and what internal managers and employee teams want. This does not mean that information produced by the accountants is of little value. In the last few decades, management accountants have made significant strides, such as with activity-based costing (ABC), in improving the utility and accuracy of the costs they calculate and report. The gap is being caused by a shift in managers’ needs – from just needing to know what things cost (such as a product cost) and what happened – to a need for detailed information about what their future costs and profits will be and why. Examples include driver-based rolling financial forecasts and true marginal cost analysis for what-if scenarios.
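A driver-based rolling forecast of the kind mentioned above can be sketched in a few lines. The driver (customer orders) and the rate are invented for illustration; in practice the rate would be calibrated from activity-based costing data rather than assumed.

```python
# Driver-based forecasting sketch: instead of extrapolating last year's
# expense line, derive the cost from a forecast driver volume times a
# cost-per-driver rate. The driver and rate below are invented.

rate_per_order = 12.5                       # handling cost per customer order ($)
forecast_orders = [900, 950, 1000, 1100]    # driver volumes for the next four months

rolling_forecast = [round(v * rate_per_order, 2) for v in forecast_orders]
print(rolling_forecast)  # [11250.0, 11875.0, 12500.0, 13750.0]
```

When the sales forecast changes, only the driver volumes are revised and the cost forecast follows, which is what makes the forecast "rolling" and explainable in terms of why costs will move.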
Despite accountants advancing a step to catch up with managers’ increasing need for information to make good decisions, the managers have advanced two steps. To understand this widening gap, and more importantly how accountants can narrow and ideally close it, let’s examine the broad landscape of accounting.
What Is the Purpose of Management Accounting?

Contrary to beliefs that the only purpose of managerial accounting is to collect, transform, and report data, its primary purpose is first and foremost to influence behavior at all levels – from the desk of the CEO down to each employee – and it should do so by supporting decisions. A secondary purpose is to stimulate investigation and discovery by signaling relevant information (and consequently bringing focus) and by generating questions.
The widening gap between what accountants report and what decision-makers need involves the shift from analyzing descriptive historical information to analyzing predictive information, such as budgets and what-if scenarios. This shift is a response to a more overarching shift in executive management styles – from a command-and-control emphasis that is reactive (such as scrutinizing cost variance analysis of actual versus planned outcomes) – to an anticipatory, proactive style where organizational changes and adjustments, such as staffing levels, can be made before things happen and before minor problems become big ones. This involves predictive analytics.
An Accounting Framework and Taxonomy

The domain of accounting has three components: tax accounting, financial accounting, and managerial accounting. There are two types of data sources for all three components: (1) financial transactions and bookkeeping, such as purchases and payroll, and (2) nonfinancial measures such as payroll hours worked, retail items sold, or gallons of liquid produced.
The financial accounting component is intended for external reporting, such as for regulatory agencies, banks, stockholders, and the investment community. Financial accounting follows compliance rules aimed at economic valuation and, as such, is typically not adequate or sufficient for internal decision-making. The tax accounting component is in its own world of legislated rules.
Our area of concern – the management accounting component – can be divided into three categories: cost accounting, cost reporting and analysis, and decision support with cost planning. To oversimplify a distinction between financial and managerial accounting, financial accounting is about valuation, and managerial accounting is about value creation through good decision-making.
The managerial accounting component, our focus here, is composed of three elements. Each is a cost measurement procedure, using the source data inputs, that transforms incurred expenses (or their obligations) into calculated costs:

Cost Accounting represents the assignment of expenses into outputs, such as the cost of goods sold and the value of inventories. This primarily provides external reporting to comply with regulatory agencies.

Cost Reporting and Analysis represents the insights, inferences, and analysis of what has already taken place in the business in order to track performance.

Decision Support with Cost Planning involves making and taking decisions. It uses the historical cost reporting information in combination with other economic information, including forecasts and planned changes (e.g., to processes, products, services, and channels), to make the types of decisions that lead to a financially successful future.
The Cost Accounting element is deeply constrained by regulatory practices; it describes the past in accordance with principles of financial accounting (e.g., GAAP). The other two elements offer diagnostic support to interpret and draw inferences from, respectively, what has already taken place and what can happen in the future. Cost reporting and analysis is about explanation. Decision support with cost planning is about possibilities.
(An expansion on this, with a visual diagram I created on page 7, appears in “Evaluating and Improving Costing in Organizations,” published by the International Federation of Accountants, whose member bodies are accounting institutes like the AICPA.)
What? So What? Then What?

The value, utility, and usefulness of this information increases at an exponential rate from cost accounting to cost analysis to decision-based costing.
The cost reporting and analysis information places cost measurement data in context. It helps managers and employee teams clearly observe outcomes with a transparency they may never have seen before, or that differs dramatically from beliefs derived from their firm’s less mature cost allocation methods. Cost reporting displays the reality of what has happened and answers “What?” That is, what did things cost last period?
However, an obvious follow-up question should be, “So what?” That is, based on any questionable or bothersome observations of historical costs, is there merit to making changes and interventions? How relevant to improving performance is the outcome we are seeing?
But this leads to the more critical and higher value-added need to propose actions (to make and take decisions) surfaced from cost planning. This is the “Then what?” question, and it is what managers want to know. For example, what change can be made or action taken (such as a distributor altering its distribution routes), and what is the ultimate impact? Of course, changes will have multiple effects on service levels, quality, and delivery times, but the economic effect on profits and costs should also be considered. This gets to the heart of the widening gap between accountants and the decision-makers who use accounting data. To close the gap, accountants must change their mind-set from managerial accounting to managerial economics, which I nickname “decision-based costing.”
The Need for Managerial Economics

There is a catch. When the Cost Reporting and Analysis element shifts to become Decision Support with Cost Planning, then analysis shifts to the realm of decision support via economic analysis. For example, one needs to understand the impact that changes have on future expenses. Therefore, the focus now shifts to resources and their capacities. This involves classifying the behavior of resource expenses as fixed, semi-fixed, variable, etc., with changes in service offerings, volumes, mix, processes, and the like. This gets tricky. A key concept is this: The “adjustability of capacity” of any individual resource expense depends on both the planning horizon and the ease or difficulty of adjusting the individual resource’s capacity (i.e., its stickability). This wanders into the messy area of marginal cost analysis that textbooks oversimplify, but is complicated to accurately calculate in the real world.
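To make the capacity-adjustability idea concrete, here is a minimal Python sketch. The resource names, expense amounts, and adjustment lead times are invented for illustration; the point is only that each resource expense can be tagged with how long its capacity takes to adjust, and a planning horizon then determines which expenses are effectively adjustable.

```python
# Each resource: (annual expense, cost behavior, months needed to adjust capacity).
# All figures are hypothetical.
RESOURCES = {
    "direct labor":    (400_000, "variable",   1),   # overtime/temps adjust fast
    "supervisors":     (150_000, "semi-fixed", 6),   # step-fixed; slower to change
    "warehouse lease": (120_000, "fixed",      24),  # contractual; very "sticky"
}

def adjustable_expense(resources, horizon_months):
    """Total expense whose capacity can be changed within the planning horizon."""
    return sum(expense for expense, _behavior, lead_time in resources.values()
               if lead_time <= horizon_months)

# Over 3 months only direct labor is adjustable; over 12 months the
# supervisors' step-fixed capacity becomes adjustable too.
short_term = adjustable_expense(RESOURCES, 3)
medium_term = adjustable_expense(RESOURCES, 12)
```

The same expense is "fixed" on a short horizon and "adjustable" on a longer one, which is why a single fixed/variable label per account is too crude for decision support.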
In the predictive view of costs, changes in demand – such as the volume and mix of products and services ordered from customers – will drive the consumption of processes (and the work activities that belong to them). In turn, this will determine what level of both fixed and variable resource expenses are needed to supply capacity for future use.
Since decisions affect only the future, the predictive view is the basis for analysis and evaluation. This view is what managers are increasingly seeking, and what accountants are lagging at providing. The predictive view applies techniques like what-if analysis and simulations. The projections are based on forecasts and on consumption rates ideally calibrated from the historical, descriptive view, where the rate of operational work typically remains constant until productivity and process improvements affect it.
Closing the Accounting Gap

Key tests of a managerial accounting system’s capability should be: How does it handle economic projections? Can it classify resource expenses as variable, semi-variable, or fixed, and as avoidable or unavoidable (i.e., allowing for capacity adjustment decisions)? Does it isolate unused/idle capacity expenses?
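The last test, isolating unused capacity expense, can be sketched in a few lines. This is a hypothetical illustration (invented machine figures) of costing at a practical-capacity rate so that idle capacity is reported separately rather than buried in product costs.

```python
def capacity_cost_split(resource_expense, practical_capacity, used_capacity):
    """Split a resource's expense into used and unused (idle) capacity cost.

    Charging cost objects at the practical-capacity rate keeps idle
    capacity visible instead of spreading it across products.
    """
    rate = resource_expense / practical_capacity   # cost per capacity unit
    used_cost = rate * used_capacity
    idle_cost = resource_expense - used_cost
    return rate, used_cost, idle_cost

# Hypothetical figures: $180,000 machine expense, 6,000 practical machine
# hours, 4,500 hours actually used.
rate, used_cost, idle_cost = capacity_cost_split(180_000, 6_000, 4_500)
```

Here the $45,000 of idle capacity surfaces as its own line, prompting a capacity adjustment decision instead of silently inflating unit costs.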
Cost accounting system data is not the same thing as cost information that should be used for decision-making. The majority of value from cost information for decision-making is not in historical reports – the descriptive view. Its primary value is in planning the future (such as product and customer rationalization), marginal cost analysis for one-off decisions, or trade-off analysis between two or more alternatives. This requires the predictive view. There is a gap between what managers want and what accountants provide. Closing the gap should be a high priority for every CFO.

Help Desk – A User’s Guide to Analytics-based Performance Management

Wednesday, September 15th, 2010
 This is a guest post by Mr. Gary Cokins. Mr. Cokins is a Product Marketing Manager with SAS, the leader in business analytics software and services. He is an internationally recognized expert, speaker, and author in advanced cost management and performance improvement systems.

Imagine if an organization had a chief analytics-based performance officer and a department that assisted line managers in applying business analytics embedded in the various methodologies of its performance management framework. Further imagine if that department had a help desk. Here are some questions it might field, along with answers from help desk staff members:
I am in the pricing department. We determine what price the market can bear in an attempt to maximize our sales volume. Next, we also make sure that the price provides an adequate profit margin. My concern is this: Our cost accountants provide standard costs that are grotesquely flawed due to the way our sizable indirect and shared indirect expenses are allocated to products. I want true costs, not the reported costs. What should I do?
Almost every pricing department is in the same boat as yours, except in organizations where the accounting department has reformed its traditional costing into an activity-based costing (ABC) method. What most pricing departments do is create their own shadow product costing data, separate from the accountants’. They assign the various indirect expenses to the products that use more or less (or all or none) of them, thus applying cause-and-effect consumption relationships. Sadly, these pricing departments are doing the cost accounting department’s job. Needless to say, things get confusing because managers are always asking, “Whose cost data are you using?” Eventually, you may want to get up the nerve to suggest to the CFO and financial controller that they implement an activity-based costing system. But be prepared for their backlash argument: that they must use only their traditional costing method to comply with generally accepted accounting principles (GAAP). They don’t want to break the rules and go to jail. What they don’t realize is that if they trace and assign costs the correct way for internal managerial accounting, they won’t go to jail.
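As a hypothetical illustration of the cause-and-effect assignment described above (the products, activities, and figures are all invented), this sketch contrasts assigning indirect expenses by activity drivers with a single unit-volume broad average:

```python
# Two indirect expense pools and the activity drivers each product consumes.
# All figures are hypothetical.
overhead = {"machine setups": 60_000, "order processing": 40_000}

drivers = {
    "standard widget": {"machine setups": 10, "order processing": 100, "units": 9_000},
    "custom widget":   {"machine setups": 40, "order processing": 300, "units": 1_000},
}

driver_totals = {act: sum(d[act] for d in drivers.values()) for act in overhead}

def abc_overhead(product):
    """Assign each activity's expense in proportion to the driver consumed."""
    return sum(overhead[act] * drivers[product][act] / driver_totals[act]
               for act in overhead)

def broad_average_overhead(product):
    """Allocate all overhead on unit volume alone (the distorting average)."""
    total_units = sum(d["units"] for d in drivers.values())
    return sum(overhead.values()) * drivers[product]["units"] / total_units

abc_custom = abc_overhead("custom widget")            # 78,000
avg_custom = broad_average_overhead("custom widget")  # only 10,000
```

The low-volume custom product consumes most of the setups and orders, so the broad average understates its cost by a factor of almost eight, which is exactly the distortion that corrupts pricing decisions.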
I am the manager of our delivery operations. It baffles me that our marketing and sales departments seem to provide so many deals and special services to specific groups of customers who cause us so many headaches. We are constantly jumping through flaming hoops to do extraordinary activities for these customers (such as special packaging and expedited shipments). My guess is that if we added up all these extra costs and combined them with our regular costs, these customers may possibly even be unprofitable to us. What should I do?

You and our marketing and sales departments have a misconception. By only considering short-term profitability — and focusing too much on the current costs of customer acquisition and retention and not enough on a customer’s long-term economic value to shareholders — you are missing the big picture.
Classifying our customer segments into a 2-by-2 matrix, with the two axes “easy or hard” and “to acquire or to retain,” will influence managers to target mostly those customers who are easy to acquire and easy to retain. But this rests on the false assumption that acquisition and retention costs are the major driver of customer profitability. That would not be a problem if every customer segment were equally profitable. But they never are!
I suggest you try to educate someone in marketing and accounting to apply business analytics to segment customers into this 2-by-2 or 10-by-10 grid of acquisition and retention. But don’t stop there. Next, project the future for each segment using forecasting tools for future price, volume, cost, up-selling, cross-selling, and so on. This information can guide them to determine the optimal level of spending to maximize long-term customer profitability. Applying business analytics provides a competitive edge.
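A rough sketch of that advice follows. All thresholds and figures are invented; the point is to place segments on the acquire/retain grid and then project multi-year segment profit with a simple growth assumption, showing how a “hard” segment can still out-earn an “easy” one.

```python
def grid_cell(acquire_cost, retain_cost, threshold=100):
    """Place a segment on the 2-by-2 acquire/retain grid (hypothetical threshold)."""
    ease = lambda cost: "easy" if cost <= threshold else "hard"
    return (ease(acquire_cost) + "-to-acquire", ease(retain_cost) + "-to-retain")

def projected_profit(annual_margin, retain_cost, growth_rate, years=3):
    """Project a segment's net margin, growing with up-sell/cross-sell."""
    total, margin = 0.0, annual_margin
    for _ in range(years):
        total += margin - retain_cost
        margin *= 1 + growth_rate
    return total

seg_a = grid_cell(80, 60)    # easy to acquire, easy to retain
seg_b = grid_cell(150, 90)   # hard to acquire, easy to retain

# Projected forward, the "hard" segment can be the more profitable one:
profit_a = projected_profit(annual_margin=120, retain_cost=60, growth_rate=0.0)
profit_b = projected_profit(annual_margin=300, retain_cost=90, growth_rate=0.10)
```

Ranking segments only by the grid cell would have favored segment A; projecting margins and growth reverses the conclusion.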
I am the CEO. I am constantly challenged to make difficult decisions regarding trade-offs between cost-saving measures and high customer satisfaction levels. For example, how do we improve customer service levels and cost-saving process efficiencies while restricted to fixed contract-like budget constraints and profit targets? How do I balance short-term and long-term goals?
I think we need better performance measurements — and they should somehow be causally linked to our strategy as well as to each other as leading or lagging measures. We also need to rethink how we conduct our annual budgeting process.
As background, I understand your pain: external forces are producing unprecedented uncertainty and volatility. The speed of change makes calendar-based planning (and long planning cycle times with multiyear horizons) unsuitable for management purposes. Both short-term and long-term pursuits involve change.
• Short-term goals require agility to maintain the linkage of a constantly adjusting strategy with operational execution, while complying with and meeting aggressive performance-level expectations of stakeholders, including our board of directors’ demands for constantly increasing earnings.

• Long-term strategic objectives require continuous innovation, foresight of risk and opportunity, relentless process improvement, an eye on recruiting and retaining a motivated workforce, and leveraging partnerships and alliances of all kinds for interdependent mutual benefits.
Have you considered implementing a strategy map and its companion, the balanced scorecard? I think that a reason we struggle with executing our overall strategy is that most managers and employees cannot explain what our organization’s strategy is. As a result, they really do not know how the work they do contributes to our executive team’s strategic intent. Strategy maps, balanced scorecards, key performance indicators (KPIs), and dashboards are the components of performance management’s suite of solutions that address this. If we all understand what our strategy is and are involved in selecting projects to implement and key processes to improve (including the KPI measures and targets that will monitor our progress), then our behavior and priorities can be aligned in order to manage our corporate strategy. By approving funding for our project selections and applying driver-based budgeting for our processes (which would require implementing activity-based costing measures to get the appropriate data), you can link our budget to your strategy. If we use business analytics software, you can receive reliable rolling financial forecasts to parallel all the changes.
The Reality
I don’t believe that organizations need an analytics-based performance management help desk. They simply need to finish implementing the integrated components of their performance management framework and make sure that each component is embedded with business analytics.

Economist or Iconomist?

Sunday, September 12th, 2010
 This is a guest post by Mr. Gary Cokins. Gary Cokins, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

Have you heard the joke about economists? Many are now being called “iconomists” because they did not foresee the 2009 global credit crisis.

As volatility increases – such as oil prices, foreign currency exchange rates, and commodity prices – the task of macro-economic analysis becomes more challenging. The world is becoming more complicated in part due to the speed at which information flows. The same challenges apply to micro-economics, and that directly impacts the decisions individual commercial companies and public sector government agencies must deal with.

What is an executive to do? There is more uncertainty and risk. My belief is that there is little alternative but to become much more analytical. This means digging deeper into the mountains of data one already has, as well as becoming more proficient at predicting the future. Unfortunately, most companies are far from where they want and need to be when it comes to implementing business analytics. They are still relying on gut feel, rather than hard data, when making decisions. They are short on the skilled talent and the technologies to perform analytics. Executive leaders need to create a culture for metrics in their organizations.

Creating a culture for metrics
But what does this mean? It means that high-performing enterprises should build their competitive strategies around data-driven insights that generate results from the power of analytics of all flavors, such as segmentation and regression analysis. Commercial companies need to successfully leverage data to out-think, out-smart, and out-execute their rivals. Public sector organizations need to get more yield from their resources – more with less.

To create a culture for metrics also means clarifying some of the confusion in the marketplace about analytics, especially predictive analytics. For example, there is confusion about the difference between forecasting and predictive modeling. Here is a quick analogy to illustrate the difference:

• Forecasts tell you how many ice cream cones will be sold in July, so you can set expectations for planned costs, profits, supply chain impacts and other considerations.
• Predictive models tell you the characteristics of ideal ice cream customers, the flavors they will choose and coupon offers that will entice them.

If your goal is to do a better job of buying raw materials for the ice cream and having them at the factory at the right time, your company needs a forecasting solution. If the marketing department is trying to figure out how, where, and to which most attractive customers to market the ice cream, it needs predictive modeling.
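The ice cream distinction can be made concrete in a few lines. This is a toy illustration with invented numbers; the scoring rule is a hand-made stand-in for a trained predictive model (such as a logistic regression).

```python
# Forecast: how many cones will sell next July? (average of past Julys)
july_sales = [9_800, 10_400, 10_900]   # hypothetical history
forecast = sum(july_sales) / len(july_sales)

# Predictive model: which customer characteristics predict a coupon response?
def coupon_score(customer):
    """Score a coupon prospect; weights are invented, not fitted."""
    score = 0.0
    if customer["favorite_flavor"] == "chocolate":
        score += 0.6
    if customer["visits_per_month"] >= 4:
        score += 0.3
    if customer["has_kids"]:
        score += 0.2
    return score

frequent_family = coupon_score({"favorite_flavor": "chocolate",
                                "visits_per_month": 5, "has_kids": True})
rare_visitor = coupon_score({"favorite_flavor": "vanilla",
                             "visits_per_month": 1, "has_kids": False})
```

The forecast is one number about aggregate demand; the model is a ranking over individual customers. Different questions, different tools.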

Predictive analytics and risk management
Consider these real-world forecasting examples. The hospitality industry uses forecasting to determine demand for particular rooms or properties. Financial companies use it to generate accurate sales forecasts, which feed into the planning process. Retailers create forecasts to manage pricing, staffing and inventory.

Predictive modeling delivers a different set of answers. In retail, predictive modeling identifies the most profitable customers and the underlying reasons for their loyalty. In finance, credit scoring is a type of predictive modeling used to grow customer profitability and reduce risk exposure. In the life sciences, it helps companies find promising new molecular drug compounds.
Another source of confusion involves risk management. It is not only about minimizing an organization’s risk exposure. Quite the contrary: it is about exploiting risk for maximum competitive advantage. A risky business strategy and plan always carries a high price. Effective risk management practices are comprehensive in recognizing and evaluating all potential risks. The goal is less volatility, greater predictability, fewer surprises, and, arguably most important, the ability to bounce back quickly after a risk event occurs.

A simple view of risk is that more things can happen than will happen. If we can assign probabilities to possible outcomes, then we can consider how we will deal with surprises (outcomes that differ from what we expect). We can evaluate the consequences of being wrong in our expectations. In short, risk management is about dealing in advance with the consequences of being wrong. Most organizations cannot quantify their risk exposure and have no common basis for evaluating their risk appetite relative to their risk exposure. Risk appetite is the amount of risk an organization is willing to absorb to generate the returns it expects to gain. The objective is not to eliminate all risk, but to match risk exposure to risk appetite.
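The "more things can happen than will happen" view can be sketched as a scenario table: assign probabilities to outcomes, quantify the probability-weighted downside as exposure, and compare it to a stated risk appetite. All scenarios and figures here are hypothetical.

```python
scenarios = [
    # (probability, profit impact in $K) -- invented for illustration
    (0.60,  500),   # expected case
    (0.30,  100),   # mild downside
    (0.10, -800),   # severe downside, e.g. a supply disruption
]

# Probability-weighted outcome across all scenarios.
expected_value = sum(p * v for p, v in scenarios)

# Exposure here: probability-weighted loss across the downside scenarios only.
downside_exposure = -sum(p * v for p, v in scenarios if v < 0)

RISK_APPETITE = 100  # $K of probability-weighted loss we are willing to absorb
within_appetite = downside_exposure <= RISK_APPETITE
```

Note that a plan can have an attractive expected value while its downside exposure still exceeds appetite; quantifying both is what lets an organization match one to the other.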

Twenty years from now will we look back at the last half century and observe there were six IT eras: mainframes, minicomputers, PCs, ERP, the Internet, and now analytics? Forecast, predictive model, or speculation? My crystal ball is clear. Analytics will become mainstream.

The Tipping Point for Analytics-based Performance Management

Sunday, September 5th, 2010

 Gary Cokins, CPIM, is Global Product Marketing Manager for Performance Management at SAS, the world’s leader in business intelligence and analytical software. He is an internationally recognized expert, speaker, and author.

How does an organization decide to implement analytics-based performance management methodologies? To answer this, we can learn a lesson from Malcolm Gladwell, author of the best-selling book The Tipping Point, which describes how changes in mindset and perception can reach a critical mass and then quickly create an entirely different climate of opinion. Let’s apply Gladwell’s thinking to the question of whether the widespread adoption of analytics-based performance management is near its tipping point, or whether we will only know in retrospect, after it has happened.

Gladwell observed that to determine whether something is approaching its tipping point, look for an event or catalyst that causes people to reframe an issue. For example, just-in-time production control reframed manufacturing operations from classical batch-and-queue economic order quantity (EOQ) thinking to a method based on customer demand-pull product throughput acceleration. So, is analytics-based performance management reframing issues and nearing its tipping point? To answer this, we should first acknowledge that business analytics and performance management methodologies are not something new that everyone has to learn; rather, they are the assemblage and integration of existing quantitative techniques and methodologies that most managers are already familiar with. Collectively, these methodologies manage the execution of an organization’s strategy.

Multiple tipping points of analytics-based performance management components
Since analytics-based performance management comprises multiple methodologies, all interdependent and interacting, what is profound is that we are now experiencing multiple, concurrent sub-tipping points. Ultimately their collective weight is producing an overall tipping point for adopting performance management. These tipping points are:

The Balanced Scorecard (BSC) – Thanks to successes in properly implementing the combined strategy map and balanced scorecard framework (and there are plenty of improper implementations), executives are now viewing the BSC differently. Rather than a rush to put the massive number of collected measures (the so-called key performance indicators, or KPIs) on a diet and distill them down to the relevant few, executives now understand the strategy map and BSC framework as a mechanism to better execute their strategy: communicating it to employee teams in a way they can understand, and then aligning the employees’ work behavior, priorities, and resources with the strategy. By embedding KPI correlation analysis into its strategy map, an organization can rationalize the best vital few KPIs based on their explanatory contribution to the resulting financial KPIs.
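The KPI correlation analysis just mentioned can be sketched simply: rank candidate leading KPIs by the strength of their correlation with a lagging financial KPI. The data below is invented; a real exercise would use periods of actual scorecard history.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

quarterly_profit = [1.0, 1.4, 1.3, 1.8, 2.1]      # financial KPI (lagging)
candidate_kpis = {                                 # hypothetical KPI history
    "on-time delivery %": [88, 91, 90, 95, 97],    # tracks profit closely
    "employee headcount": [50, 52, 49, 51, 50],    # barely related
}

ranked = sorted(candidate_kpis.items(),
                key=lambda kv: abs(pearson(kv[1], quarterly_profit)),
                reverse=True)
best_kpi = ranked[0][0]   # the KPI with the most explanatory power
```

Correlation alone does not prove causation, of course; it is a screening device for nominating the vital few KPIs worth keeping on the scorecard.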

Decision-Based Managerial Accounting – Reforms to managerial accounting, led by activity-based costing (ABC), may once have been viewed as just a more rational way to trace and assign the increasing indirect and shared overhead expenses to products and standard service-lines (in contrast to misleading, flawed cost allocations based on cost-distorting broad averages). Today, reforms such as ABC are being reframed as essential managerial information for understanding which products, services, channels, and customers are more profitable or not – and why. There is a shift away from cost control toward cost planning and shaping, because most spending cannot be quickly changed. This means better capacity and resource planning and less historical cost variance analysis. Executives recognize that cost management is an oxymoron, like “jumbo shrimp” in the supermarket. You do not directly manage your costs; rather, you manage the quantity, frequency, and intensity of what drives your process workloads. ABC focuses on cost drivers, providing fiber-optic-like visibility and transparency into the currently hidden costs and their causes.

Customer Value Management – Customer relationship management (CRM) systems have been narrowly viewed as a way to communicate one-to-one with customers. However, executives have learned that it is more expensive to acquire new customers than to retain existing ones, and that their products and service-lines have become commodities that offer little competitive advantage. As a result, organizations are reframing CRM more broadly: as a way to analyze and identify the characteristics of their more profitable and valuable existing customers, and then apply these traits both to formulate differentiated, tiered treatments (such as marketing campaigns, deals, offers, and service levels) for existing customers and to target new customers who will possess relatively higher future potential value (which, incidentally, requires ABC data to calculate the customer lifetime value scores that differentiate prospects). This reframing places much more emphasis on micro-segmenting customers and on post-sale value-adding services with cross-selling and up-selling. Mass selling that snares unprofitable customers is out; it is being replaced by the recognition that one must not just grow sales but grow sales profitably.

Shareholder and Business Owner Wealth Creation and Destruction – The strong force of the financial capital markets in assigning financial value to organizations has caused executive teams and governing boards to realize that the old, traditional methods of placing value on a company are obsolete. Balance sheet assets now account for only a small fraction of a company’s market capitalization. A company’s future value is linked to its intangible assets, such as employee skills and innovation. As a result, executives are reframing their understanding of how to increase positive “free cash flow,” the financial capital markets’ metric of choice, in order to convert potential value (ideas and innovation) into realized value (financial ROIs). They have reframed the path to continuous shareholder wealth creation as governed by customer value management – viewing customers as investments (as in a stock portfolio) rather than things you spend money on hoping they earn you a profit.

Synergy from integrating the analytics-based performance management components
It is not a coincidence that each of the four tipping points above mentions interdependencies among the others. Transaction-based information systems, like enterprise resource planning (ERP) systems, although good for their designed purposes, do not display the information required to apply business analytics for decision analysis and, ultimately, decision-making. Transactional systems may provide some of the raw source data, but only by transforming that raw data into decision-based information can the potential ROI trapped in it be unleashed and realized financially. This in part explains the growing demand for analytics-based performance management systems as value-multipliers.