Now what?

Companies often collect competitive intelligence but don’t use it. Is it just the thrill of the hunt, or are they at a loss as to how to use it to gain a field advantage?


The first requirement for being competitive is to know what others in your space are offering or plan to offer so you can judge the unique value proposition of your moves. This is just common sense.

The second requirement is to anticipate responses to your competitive moves so that they are not derailed by unexpected reactions. That’s just common sense, too.

The third requirement is to ask the question: Do we have common sense?

In my work in competitive intelligence I have met many managers and executives who made major decisions involving billions of dollars of commitments with only scant attention to the likely reaction of competitors, the effect of potential disruptors, new approaches offered by start-ups and the impact of long-term industry trends. Ironically, they spent considerable time deliberating potential customers’ reactions, even as they ignored the effect of other players’ moves and countermoves on these same customers. That is, until a crisis forced them to wake up.

In my experience, the competitive perspective is almost always treated as the least important aspect of managerial decision-making. Internal operational issues, including execution, budgets and deadlines, are paramount in a company’s deliberations, but what other players will do is hardly ever in focus. This island mentality is surprisingly prevalent among talented, seasoned managers.

The paradox is that companies spend millions acquiring competitive or market intelligence from armies of vendors and deploy the latest technology to disseminate the information internally. Some estimate the market for market research alone at $20 billion annually; specific competitor information accounts for another $2 billion. On the other hand, management never questions how this information is actually used by employees in brand, product, research and development (R&D), marketing, business development, sales, purchasing or any other market-facing function. Instead, management implicitly assumes the information is being used, and used optimally. Leadership is happy to ask that proposals and presentations be backed by data, and every middle manager is familiar with the requirement for a 130-slide appendix of tables, graphs and charts in presentations to top executives.

Yet no one asks: how much of the information purchased at high cost from the outside army of research vendors and consultants was ignored, missed, discounted, filtered or simply not used correctly?

What did Peter Drucker really say?

Peter Drucker is often quoted as coming up with the managerial bromide, “What gets measured gets managed.” Yet this does not actually represent his thoughts on measurement. Some have argued that he never actually said that at all; others have claimed that the quote has been mangled, and that in context, it was part of a larger lamentation that managers would only manage what was easy to measure, rather than what was important or useful. Regardless, it’s clear from Drucker’s writings that he worried that management often measures the wrong things, and believed that some critical aspects of management can’t actually be measured.

And the impact of competitive information on an organization’s decisions is one of those things that can hardly ever be measured: it is neither direct nor unambiguous. Since impact can’t be measured, and results therefore can’t be attributed directly to the competitive information, management resorts to measuring the wrong thing, exactly as Drucker feared.

For example, in several companies I have worked with, management measured output. How many reports did the analysts issue? How many research projects were completed within budget and on time? This is the equivalent of searching for your car keys under the street lamp simply because that’s where the light is.

The failure to measure the impact of competitive data leads to an interesting dilemma for companies: even when it is obvious that the company has missed an opportunity or been blindsided by a threat because it failed to consider competitive data, managers are at a loss as to how to improve the situation.

Improving decision quality, measured as the extent to which decision makers use all available competitive information, requires a focus on the usage rather than the production of intelligence. This is a major mindset leap for most companies, but it offers a way to improve decisions without directly measuring the elusive impact. Just ensuring that managers look at and consider the competitive perspective should, in principle, improve decisions. How can companies achieve that?

A simple yet powerful suggestion

Begin by auditing major decisions at the product/service or functional level. This competitive intelligence sign-off is simple to institutionalize, and it replaces the haphazard mass dissemination of information.

A competitive audit is the more basic level of review. The potential cost savings or growth opportunities afforded by institutionalized competitive reviews of major initiatives and projects can be significant. A byproduct of these reviews would be better use of costly information sources, or a rationalization of their cost.

That said, a company can’t force its managers to use information optimally. It can, however, ensure that they at least consider it. In many areas of the corporation, mandatory reviews are routine: regulatory, legal and financial reviews are considered the norm. Ironically, competitive reviews are not, even though the cost of failing to understand the competitive environment can be enormous.

Consider this admittedly extreme example. Financial institutions are known to spend billions on mandatory regulatory and legal reviews of their practices. How much do they spend on mandatory competition reviews? To judge by the measly performance of mega banks over the past two decades, compared with more locally focused smaller banks, not much (The Economist, March 5, 2015, “A world of pain: The giants of global finance are in trouble”).

Drucker did say, “Work implies not only that somebody is supposed to do the job, but also accountability.” If managers choose deliberately to ignore the competitive perspective, they should be held accountable. And it is only reasonable to ask top management to apply the same principle to itself: a systematic, mandatory, institutionalized strategic early warning review may keep major issues on the table.

Think about it this way: if competitive reviews had been mandatory at Sears, Motorola, Polaroid, AOL, Radio Shack and A&P, to name a few, would they have failed to change with the times? We will never know. Common sense suggests a company shouldn’t wish to find out.

Benjamin Gilad is the co-founder and president of the first training institution dedicated to the CIP competitive intelligence certification. A former associate professor of strategy at Rutgers School of Management, he is the author of Business War Games, Early Warning and Business Blindspots.