Think Shift

On Alignment

Time-honored approaches to strategy formulation, corporate performance management, planning, budgeting, and monitoring, together with their supporting methods and technologies, serve management with varying degrees of satisfaction. Shortcomings, where they exist, are mostly related to misaligned targets and conflicting goals. Methods such as the balanced scorecard attempt to structure strategy, name goals, and provide a framework for agreeing on the performance due to stakeholders. The role of executive leadership is to intervene so that limited resources are employed in the right place at the right time and competitive performance is incentivized.

On Defining Good Performance

We believe that the shortest description of good performance is quite simply “More with Less”. But correctly defining “More of what?” in each unique business context is the key to good management and to the achievement of enterprise targets. The question “How much of what?” follows. No level of experience alone is adequate for preparing performance recipes for the organization; it takes thorough analysis and the right management technology to evaluate the available data.

Simply assuming that we understand “What drives what?” is usually a cardinal mistake, as there is no definitive direction of cause and effect in business dynamics. Assuming simple, two-dimensional relations for “What drives what?” leads to statements such as:

“The higher the number of cheques written, the higher the deposits and the credit line volume.”

“The higher the number of payroll service accounts, the higher the car loans.”

“The shorter the coffee breaks, the higher the number of calls handled”.

Clearly, these statements will not always be valid, regardless of how well they may be documented as business fact.

Carefully reviewing all the performance factors in a multidimensional approach, using appropriate management analysis and technology, to define what exactly drives good performance is a necessary prerequisite to defining what will constitute performance improvement and, furthermore, what needs to be done to achieve that improvement.

On Enterprise Information Transparency

We believe that limited access to information is a performance malady. The impact of limiting access to performance information across departmental or transactional “silos” is well understood, yet often little is done to remedy the problem. Narrowly defined accountability, and the notion of performance privacy around that accountability, are often to blame. Sharing information only on a need-to-know basis is more often than not a bad idea.

Information transparency is a necessary prerequisite if an organization wants to achieve internal alignment of divisional or functional goals for total value, and be able to rapidly adapt to changes in the market. And this is, of course, where Alta Bering EPO™ makes a real difference.

Information transparency for value alignment
In today’s world – let alone tomorrow’s – “Everybody doing their bit” is just not good enough any more. The whole of the organization needs to work together for the whole of the organization to achieve outstanding results. For instance, if the customer relationship officer or teller is expected to refer daily banking customers to the insurance officer in the branch, should they not know how many of those referrals actually purchased insurance policies? Clearly, the teller would benefit from this information and refer more suitable customers.

Information transparency for fact-based adaptation
In the service industries, the complexity of product and service offers, along with the fleeting nature of the client base, presents both a challenge and an opportunity to compete better. Organizations striving to improve their competitive edge make great efforts to track customer behavior: large volumes of data are collected and leveraged to gain insight, and findings are fed into systems that sometimes reach the service agents in near real time. In contrast, the performance contracts, including targets and resource assignments, change much less frequently, in some cases quarterly, most often annually. Should these targets be adjusted more frequently to better address prevailing conditions? The answer is obviously “Yes!” The fundamental challenge lies in the onerous nature of the enterprise planning process. Reliable estimation of what the organization can achieve, given scarce resources, is the technical challenge that management technology needs to address.

On Measuring What Matters

A solution for both resource allocation and target allocation should be able to seek balance and to avoid single-direction goal definitions. It is when such one-directional goals make up the bulk of balanced scorecard content that problems start to occur.

Measuring what matters is relatively easy. However, recognizing that there is more than one formula for success has its practical difficulties. Given, for example, that the teller is expected to help cross-sell, should his or her performance be measured by “Number of transactions” or by “Number of customers served” alone? Clearly, both need to be taken into account. And there may yet be a host of other, seemingly peripheral measures that should be included in evaluating existing performance and in setting good targets for the desired performance improvement.

On Budgeting, Performance Planning, and Scorecarding

Budgeting and planning are dreaded chores for most managers. Nevertheless, a company cannot thrive without a plan. Planning-related processes are intended to answer two critical questions in managing the enterprise: Are internal resources being efficiently allocated? Do targets represent strong, efficient, but achievable goals?

For example, a bank branch can set targets for the number of relationship managers, the average quarterly volume of deposits expected from each relationship manager, and the average service transaction times at the branch. All of these are interrelated at the branch level. Furthermore, one cannot strike a value-maximizing balance among them unless their impact on overall enterprise efficiency is taken into account. A balanced scorecard cannot approximate efficient targets because its targets embed subjective judgments of what is achievable, and those judgments are captured in a static relationship. Most businesses end up either with targets that are too low, so money is left on the table, or with targets that are too high, which means demoralized managers with dysfunctional incentives and the risk of losing them to a competitor. Let’s look at this in more detail.

In most companies, targets are set by business unit managers and are generally understood to translate into bonus plan targets. So managers will go through an apparently objective review of historical growth for each line of business, and then extrapolate those results with a modest upward bias. But as no one wants to fall short of their targets, they will be inclined to understate the true potential of their business, at least to some extent. Senior management can impose a global constraint by ensuring that all the business unit targets add up to an earnings projection that can be sold to the investment community, but a great deal of internal negotiation may need to take place to arrive at such a goal.

The second problem is that if there are a dozen or more key performance indicators per business unit, then these KPIs need to be weighted according to some criteria. Considering that a large number of decision making units (DMUs), each of which may be a branch, a sales manager, a store, and so on, are all allocated target values, it is clearly unfair to apply the same standard weights to all of them, as specific conditions may vary widely from one DMU to another. Furthermore, if these weights are fossilized in a scorecard, the business can lose its responsiveness to changing conditions in an attempt to look good against the embedded metrics, especially if they are tied to variable pay.
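
One established way to let each DMU carry weights suited to its own circumstances, rather than a single standard set, is data envelopment analysis (DEA). The sketch below is a minimal illustration of that idea with invented branch data and SciPy’s linear-programming solver; it is not the Alta Bering EPO™ algorithm, just a way to see DMU-specific weighting in action.

```python
# Minimal DEA (CCR, multiplier form) sketch with invented data: every DMU gets
# the KPI weights most favourable to itself, and no DMU can exceed efficiency 1.
import numpy as np
from scipy.optimize import linprog

# Each row is a DMU (e.g. a branch): inputs = [staff, branch cost],
# outputs = [transactions, customers served, referrals converted].
inputs = np.array([[8, 120], [10, 150], [6, 90], [12, 200]], dtype=float)
outputs = np.array([[900, 300, 40], [1000, 280, 35],
                    [700, 260, 45], [1100, 310, 30]], dtype=float)

def dea_efficiency(dmu):
    """CCR efficiency of one DMU under its own most favourable weights."""
    n_dmu, n_in = inputs.shape
    n_out = outputs.shape[1]
    # Decision variables: output weights u (n_out of them), then input weights v.
    c = np.concatenate([-outputs[dmu], np.zeros(n_in)])      # maximise u.y_o
    a_eq = [np.concatenate([np.zeros(n_out), inputs[dmu]])]  # normalise v.x_o = 1
    b_eq = [1.0]
    a_ub = np.hstack([outputs, -inputs])                      # u.y_j - v.x_j <= 0
    b_ub = np.zeros(n_dmu)
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq,
                  bounds=[(0, None)] * (n_out + n_in), method="highs")
    return -res.fun                                           # efficiency in (0, 1]

for j in range(len(inputs)):
    print(f"DMU {j}: efficiency = {dea_efficiency(j):.3f}")
```

Because each DMU is scored under the weights most favourable to it, a low score cannot be blamed on an arbitrary standard weighting, which is precisely the fairness problem raised above.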

Conventional planning systems offer no unified analysis of the inherent trade-offs faced by a manager trying to respond to a shifting market. And even when planners agree on the trade-offs for one particular manager, they risk a local optimization that may undermine overall enterprise efficiency.

Executives should aim for global optimization – accounting for the constraints of all business units at once. Such optimization can’t be done with a spreadsheet – it requires a much more advanced modeling tool.
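
As a toy illustration of what global optimization means here, and with all numbers invented, the sketch below allocates a shared budget across three business units under one global constraint and per-unit capacity bounds. It shows the kind of coupled problem that a collection of independent spreadsheet plans cannot solve consistently; it is not the Alta Bering EPO™ model.

```python
# Toy global allocation: maximise total expected margin across three business
# units, subject to one shared budget and each unit's capacity bounds.
from scipy.optimize import linprog

margin_per_dollar = [0.12, 0.09, 0.15]        # invented expected margin rates
c = [-m for m in margin_per_dollar]           # linprog minimises, so negate
A_ub = [[1, 1, 1]]                            # global constraint: total budget
b_ub = [10_000_000]                           # 10M to allocate in total
bounds = [(1_000_000, 6_000_000)] * 3         # per-unit minimum and capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("allocation by unit:", res.x)
print("total expected margin:", -res.fun)
```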

The first problem can be solved by a mechanism that separates target setting for the budget from target setting for bonuses, but this can only be done reliably for the company as a whole. The problem of how to allocate an objectively set global target to the business units, and to the decision making units below them, remains.

This latter problem, as well as the issue of establishing local targets that are globally optimized, is solved with the Alta Bering EPO™ management technology. Alta Bering EPO™ generates resource allocations and targets in a dynamic manner, enabling managers to do their local best while serving the best interest of the enterprise as a whole.

On the Limits of Predictive Analysis – What to Do?

by Mahmut Karayel, Chief Scientist, Alta Bering

The ability to predict the future is a holy grail that mathematicians, scientists, and yes, businesspeople have been chasing for ages. Success has been sporadic at best.

Although coming up with better predictions is an honorable pursuit, we should realize that there are fundamental limits to the usefulness of those predictions. First, the timing, size, and impact of important events are very difficult to predict; the timing of big events, in particular, is almost impossible to predict with any consistency, yet the most useful predictions are precisely those that pin down the timing of game-changing events. Second, almost all predictive analytics is based on inductive reasoning, and inductive reasoning fails when it counts most: when we need to predict paradigm shifts or game-changing events. Deductive reasoning does not offer much help either. It applies largely to natural laws, whereas the more interesting and useful predictions all concern human behavior, and the inconsistency of human behavior eludes deductive reasoning.

How have we fared in our historic pursuit to foretell? In short, not so well:

We have been more successful in predicting short-term outcomes than longer-term outcomes. The production volume of a factory next month is more predictable than its production in 13 months. Change takes time, and this has been helpful to prediction: it takes time for the ocean to cool off, for trees to blossom, for people to change their behavior. But as societies become faster, better informed, and more reactive (and, specifically, less confident and less willful), prediction becomes even harder.

We have been more successful in predicting the behavior of systems that obey natural laws than of systems influenced by humans. Landing a module at a specific spot on the Moon, and calculating the exact amount of fuel it will take to do so, has proven easier than estimating the number of new bank clients who will open a savings account in response to a promotional offer.

Predicting the behavior of a large number of uniform and independent “agents” is easier than predicting the behavior of correlated “agents”. The independence assumption allows scientists to rely on the law of large numbers and the central limit theorem, making prediction somewhat routine. Predicting the outcome of an election from exit polls is a good example of this type of prediction. But what if there is dependence, or rather influence? What if, unknown to us, one interviewee had influenced the votes of tens of thousands of people while another voted in isolation? The recent debt crisis in Greece, the housing bubble and subprime mortgage crisis, and the popularity of a particular social networking site are all examples of interconnected behavior, the course of which is much harder to chart.
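
A small simulation, with invented numbers, makes the contrast concrete: when agents act independently, the aggregate outcome is very stable, but even a modest shared influence widens the spread of outcomes dramatically.

```python
# Compare the spread of an aggregate outcome for independent vs. correlated agents.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_trials, p = 10_000, 2_000, 0.5

# Independent agents: each one acts on its own with probability p.
independent = rng.binomial(n_agents, p, size=n_trials) / n_agents

# Correlated agents: a shared shock shifts everyone's probability together.
shocks = rng.normal(0.0, 0.05, size=n_trials)
correlated = np.array([rng.binomial(n_agents, min(max(p + s, 0.0), 1.0)) / n_agents
                       for s in shocks])

print("std of aggregate, independent agents:", round(independent.std(), 4))
print("std of aggregate, correlated agents :", round(correlated.std(), 4))
```

The law of large numbers shrinks the first spread toward zero as the number of agents grows; it does nothing about the second, because the shared influence never averages out.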

Lastly, we have relied too heavily on predicting the status quo, and in the process have failed to predict the game-changing events. The Japan tsunami and its long-term effects, 9/11 and the new world order it created, and the failure of Lehman Brothers are all examples of events we would have liked to have foreseen.

Those who have the means and the courage to compare predicted paths to actual outcomes (as a rather banal example, consider the price of oil) will observe that the forecasted paths are always much smoother than the actual observations. Our world is a more volatile place than our predictions would imply. This is undesirable regardless of the quality of the point estimate, since it falsely implies a more stable future than what is in store.
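
A toy example of this smoothness gap, using a simulated random-walk series with made-up numbers: the best point forecast of such a series is a flat line at the current value, while any realized path wanders over a far wider range.

```python
# Point forecast vs. one realised path of a driftless random walk.
import numpy as np

rng = np.random.default_rng(1)
start, months, monthly_vol = 80.0, 24, 4.0      # invented "oil-price-like" series

point_forecast = np.full(months, start)                             # smooth path
realised = start + np.cumsum(rng.normal(0.0, monthly_vol, months))  # volatile path

print("range of point forecast:", point_forecast.max() - point_forecast.min())
print("range of realised path :", round(realised.max() - realised.min(), 1))
```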

Unfortunately, the prediction profession has inadvertently misled the general population while attempting to remedy these shortcomings.

We have relied too heavily on induction. Reliance on induction (e.g., what will happen tomorrow is what is happening today) ignores scale, sustainability, and paradigm shifts. Historical experience, and the recognition that history repeats itself, are important. Yet this repetition has a bad habit of manifesting itself when least expected and with a new dialectic twist. Further, models typically do not take scale and sustainability into account. If one person becomes rich opening a nail-care salon, this does not mean that the next 1,000 people who open a nail salon will strike it rich. On the sustainability front, induction by itself is a dangerous way to deal with slow but important shifts, such as global warming. Small changes can be ignored and interpreted as routine noise, but if they are all ignored, a catastrophe will ensue. The classic example is the frog that does not jump out of slowly heating water until it is too late.

Another approach, commonly employed but proven fallacious, is to extend natural laws to human behavior. The assumption that individuals make independent and rational decisions has been a fundamental pillar of modern economic theory. There are numerous observations and studies that challenge the consistency, rationality, and independence of human behavior; economists insist on the assumption regardless. The truth is inconvenient. A physicist who refused to accept gravitational pull after seeing that everything falls when dropped would surely be mocked. Fortunately, at the time of this writing there is a great effort under way to modify these economic assumptions and place the theory on more solid footing.

So what is a practical manager to do? I will humbly suggest the cardinal rule: Be prepared to better deal with uncertainty, rather than assuming that predictions are correct. More specifically:

Assess the present before predicting the future. Accurate data is very important. Assessing today’s situation using accurate, up-to-date data, supported by unbiased comparative analytics, matters more than the prediction step that follows. Experienced people can (and surely will) draw their own conclusions from clearly presented data.

Visualize outcomes. Enumerating probable outcomes is often more fruitful than trying to predict what will happen. My high school buddy tossed a coin to “predict” which of two schools would accept him. The coin rolled downhill and leaned against a wall on its side. (As a bonus we got a lesson in interpretation: as it turned out, this meant “both”, not “neither”.) First know what can happen, then try to estimate what will. This takes experience, insight, imagination, and patience. Seek and reward analysts endowed with these traits.

Have a plan. Being prepared for a large portion of the possible outcomes is important. Your plan should detail how outcomes will be dealt with: prevention, mitigation, or transfer of risk. Setting the preparation level at 95% or 90% or 85% is a policy decision that you must be ready to make. This is the human part. Nothing ventured, nothing gained.
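
As a minimal sketch of how the preparation level becomes a concrete number, assuming outcomes can be simulated or enumerated (the distribution below is invented), one can read the chosen coverage level straight off the simulated outcome distribution:

```python
# Translate a policy choice of preparation level into a reserve figure.
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical simulated quarterly cash demand at a branch, in thousands.
outcomes = rng.lognormal(mean=4.0, sigma=0.4, size=100_000)

for level in (0.85, 0.90, 0.95):
    reserve = np.percentile(outcomes, level * 100)
    print(f"prepared for {level:.0%} of outcomes -> hold about {reserve:.0f}k")
```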

Execute quickly. It helps to have a plan to begin with. The majority of managerial errors happen not because prediction was poor, but because when the unexpectedly bad happens, managers tend to get stuck in one of many common dysfunctional modes:

  • Watching events unravel in shock
  • Being afraid to convey bad news
  • Denial
  • Looking for a scapegoat
  • Not seeking the required help and trying to do too much
  • Hoping the problem will go away by itself
  • Getting lost in confirmation bias, looking only for good (or bad) evidence rather than gathering all the facts

Incentivize those who are better at dealing with uncertainty. A common mistake in corporate compensation is to reward firefighting. Firefighting managers look really busy, like generals under fire in the heat of war. Very seldom is this justified; the manager is often fighting a fire that he or she created by not executing a plan. Other managers (admittedly a minority) look like they are not doing much, but keep hitting their targets. These are the ones who know their facts, can visualize outcomes, have a plan, and have already delegated execution. Many companies, especially in the finance industry, fail to reward these managers as they deserve.

Rather than waiting for scientists to develop prediction technology with predictable accuracy, it is better to be prepared for a variety of outcomes beyond the one that is predicted.

A successful manager must take risks. It is the nature of business. It is important before you take those risks to:

  • Demand accurate, up-to-date data
  • Visualize and know what can happen
  • Have a plan to deal with it
  • And execute quickly when it happens