Confidence is a riddle. We want our leaders to be confident, yet overconfidence is the cognitive bias Daniel Kahneman says he would most like to eliminate if he had a magic wand. That’s because overconfidence creates…

…the kind of optimism that leads governments to believe that wars are quickly winnable and capital projects will come in on budget despite statistics predicting exactly the opposite.

Wikipedia’s entry on overconfidence links it to the Chernobyl nuclear accident, the sinking of the Titanic, and the Space Shuttle Challenger and Columbia disasters.

In business, overconfidence bias leads to bad strategic decisions and poor execution. Resources and time are poorly utilized. Internal relationships are damaged.

Decision Making Is Riskier In Technology Markets

Rapid change in technology markets increases the risk of bad decisions caused by overconfidence. Products are continuously iterated. Vendors enter and exit the market. And buyer needs and expectations change along with the evolving market.

With so much change, market knowledge and insights lose value quickly in technology businesses.

In this environment, the re-do is a common scenario. Sales, marketing, and product leaders who over-relied on their years of domain experience have to try again when making critical decisions like these:

  1. Prioritization. Where can I have the biggest impact as a sales, marketing, or product leader? What should we as a cross-functional executive team prioritize? The challenge for these leaders is that they know which issues or opportunities are important to others on the leadership team, and they’ve heard about problems that came up in individual deals. But they don’t have good data about the overall patterns – the macro view. How prevalent is this problem? How impactful is it? So they’re at risk of surprises and mad scrambles to handle competitive threats they didn’t prioritize.
  2. Change. After deciding what to prioritize, how should each issue or opportunity be addressed? How exactly do we reduce “no decisions”? Regain market share? Improve our demos? How do we stop losing on price? Can we beat a low cost competitor by better differentiating our offering? Or should we cut prices?
  3. Measurement. Did the changes we made work, or do we need to iterate? Do buyers perceive the changes? And if they do, did we get better or worse?

To avoid the lost trust and frustration that come with re-dos, teams need ways to incorporate new facts and patterns into their decision making.

Bad CRM Data Undermines Its Potential Value

In theory, the CRM should provide the market data that is needed to inform decision making and counterbalance overconfidence.

But in my experience, too many CRM implementations are contaminated with bad data. The problem begins at the foundation: the data about each individual deal in the CRM is dirty.

Incomplete data about won and lost reasons is the most common concern I hear from sales and marketing leaders. I pulled these quotes from recent conversations:

    • “…just 25-40% of completed deals have reasons”
    • “We don’t have a good understanding of why we lose. We have the sales guy’s response to why an opportunity was closed in our CRM, but it’s not complete.”
    • “We’re strong users of Salesforce. It’s great intelligence. But we don’t have great information about why we win or lose. There’s nothing substantial to act on.”

The other big issue is incorrect competitive data. For example, last month I cross-checked the primary competitor field in a client’s Salesforce against data we’d collected from buyers. The primary competitor was correctly identified in 67% of lost opportunities and just 25% of won opportunities.

In 70% of the wrong responses, the company’s top competitor was listed. This means the client’s product marketing and sales enablement resources are over-allocated to their top competitor. Meanwhile, other competitors aren’t being tracked accurately, giving them time and space to become bigger threats.
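A cross-check like this is straightforward to run once buyer-reported data exists. Below is a minimal sketch in pandas, assuming two hypothetical exports – a CRM opportunity report and a set of buyer responses – joined on a shared opportunity ID. The file names, column names, and the “Competitor A” label are placeholders, not fields from any particular CRM.

```python
import pandas as pd

# Hypothetical inputs: a CRM opportunity export and buyer survey responses,
# joined on a shared opportunity ID. Column names are placeholders.
crm = pd.read_csv("crm_opportunities.csv")     # opp_id, outcome, primary_competitor
survey = pd.read_csv("buyer_responses.csv")    # opp_id, buyer_reported_competitor

merged = crm.merge(survey, on="opp_id", how="inner")

# Does the CRM's primary competitor match what the buyer reported?
merged["crm_correct"] = (
    merged["primary_competitor"].str.strip().str.lower()
    == merged["buyer_reported_competitor"].str.strip().str.lower()
)

# Accuracy of the CRM field, split by won vs. lost opportunities.
print(merged.groupby("outcome")["crm_correct"].mean())

# Among the incorrect entries, how often is the top competitor named by default?
wrong = merged[~merged["crm_correct"]]
top_competitor = "Competitor A"  # placeholder for the client's top competitor
print((wrong["primary_competitor"] == top_competitor).mean())
```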

This is the reality for the technology businesses I work with. And I believe it’s a problem for most businesses, because the causes of bad CRM data are universal – distrust between buyers and sellers, distrust between sellers and their managers, disincentives in sales compensation, the high cost of data hygiene, and so on.

Use Multiple Data Sources To Reduce The Risk Of Bad Decisions

To reduce the risk of bad decisions, use multiple data sources to incorporate new facts and patterns into decision making.

Pipeline dropoffs and other sales performance metrics will of course come from the CRM.

Capturing other data accurately (e.g., shortlisted vendors, winning vendors, and won and lost reasons) usually requires substantial changes to CRM configuration, rep training, and ongoing enforcement.

Instead of the CRM, a buyer survey can be used to gather competitive data and won and lost reasons directly from buyers.

Whether the CRM or a win/loss survey is used, these quantitative approaches make a valuable contribution to decision making. They provide a large, continuous sample. Trends can be tracked over time. And crosstabs can be run on wins and losses by product, competitor, market segment, vertical, and even seller.
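For illustration, here is a minimal sketch of those crosstabs in pandas, assuming a hypothetical export of closed opportunities. The file name and columns (outcome, competitor, segment, close_quarter) are placeholders rather than fields from any particular CRM or survey tool.

```python
import pandas as pd

# Hypothetical export of closed opportunities; column names are placeholders.
deals = pd.read_csv("closed_opportunities.csv")  # outcome, competitor, segment, close_quarter
deals["won"] = deals["outcome"] == "won"

# Win rate by competitor and market segment.
print(pd.crosstab(deals["competitor"], deals["segment"],
                  values=deals["won"], aggfunc="mean").round(2))

# Win rate per competitor over time, to spot trends quarter by quarter.
print(pd.crosstab(deals["close_quarter"], deals["competitor"],
                  values=deals["won"], aggfunc="mean").round(2))
```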

As opportunities and issues are identified in reports from the CRM and/or survey, buyer interviews should be used to understand how to act on them – what exactly needs to change.

And to identify issues in the buying experience and generate deep insight into won and lost reasons, run win/loss buyer interviews annually. This deeper understanding should fuel a range of strategy improvements including market segmentation, differentiation, messaging, pricing, and product roadmap.

Customized, In-Depth Win/Loss Analysis