How to Conduct a Win/Loss Analysis

In this “How To” guide, I describe how we’ve learned to successfully conduct a Win/Loss Analysis based on B2B buyer interviews.

Win/Loss Analysis is defined as the process of analyzing won and lost B2B sales opportunities to learn how to win more deals and increase revenue.

Why You Should Take A Lean And Agile Approach To Win/Loss Analysis

The same agile and lean techniques—like Build-Measure-Learn—that transformed Product Management make a strong framework for approaching Win/Loss Analysis.

Here’s why:

  • You have go-to-market programs you’ve Built and you’re executing.
  • You’re collecting data about how they’re performing. You Measure conversions throughout your pipeline, such as opportunity win rate.
  • The role of Win/Loss Analysis is to help you Learn faster and better—what’s working, not working, and how to make fixes.

Then, the process iterates. After making improvements, you measure your progress to decide which issues or opportunities to focus on next.

A Seven-Step Template For How To Make An Impact With Win/Loss

The rest of this “How to” guide describes our seven-step process for conducting a Win/Loss Analysis. We’ve iterated this approach through years of in-the-trenches experience.

  1. Analyze your pipeline
  2. Set a goal and socialize it
  3. Select deals and contact champions
  4. Debrief buyers
  5. Analyze the data
  6. Align to act
  7. Measure results

Step 1. Analyze Your Pipeline

The first step is to identify a critical dropoff point in your pipeline.

Dropoffs anywhere—whether they’re at the top of the funnel or bottom of the funnel—can suppress revenue from new accounts.

Maybe you’re struggling to generate enough new sales opportunities at the top of the funnel, or to improve conversion at the bottom of the funnel.

Pipeline graphic: below-target conversion of MQLs into initial meetings ultimately leads to fewer new accounts.

Pipeline graphic: a low opportunity win rate is the classic trigger for Win/Loss Analysis.

Either way, you’ve got to pull data about how your pipeline is actually performing, analyze it, and identify the critical dropoff points.
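If that conversion data comes out of your CRM as raw stage counts rather than a finished dashboard, even a small script (or a spreadsheet) can surface the weakest link. Here is a minimal Python sketch; the stage names, counts, and target rates are hypothetical placeholders, not benchmarks:

```python
# Minimal sketch: find the weakest stage-to-stage conversion in a pipeline.
# All stage names, counts, and target rates are hypothetical placeholders;
# pull the real numbers from your CRM for the period you're analyzing.

stage_counts = {
    "MQL": 1200,
    "Initial Meeting": 240,
    "Opportunity": 120,
    "Proposal": 60,
    "Closed/Won": 15,
}

target_rates = {  # benchmark conversion into the *next* stage
    "MQL": 0.30,
    "Initial Meeting": 0.60,
    "Opportunity": 0.55,
    "Proposal": 0.35,
}

stages = list(stage_counts)
worst_step, worst_gap = None, 0.0

for current, nxt in zip(stages, stages[1:]):
    actual = stage_counts[nxt] / stage_counts[current]
    gap = target_rates[current] - actual
    print(f"{current} -> {nxt}: {actual:.0%} (target {target_rates[current]:.0%})")
    if gap > worst_gap:
        worst_step, worst_gap = f"{current} -> {nxt}", gap

print(f"\nBiggest dropoff vs. target: {worst_step} ({worst_gap:.0%} below target)")
```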

But, as Anthony Iannarino has observed,

“Deals are interesting. The bigger the deal and the closer it gets to being won, the more interesting it becomes. Down the stretch, your team needs help. They need resources, and they need modifications to the solution. This is the work Sales managers love to do because it results in a win.”

This is why Win/Loss Analysis is usually thought of as a tool for improving opportunity win rate.

That being said, you’re going to get the biggest benefit from Win/Loss Analysis when you’re using your own conversion data to identify and fix the biggest dropoff point—not a lower priority issue.

Step 2. Set a Goal and Socialize It

Now that you’ve spotted a problem worth solving, you’ll want to take the fastest path to improvement. This involves:

  • Quantifying the current shortfall
  • Setting a goal for improvement
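For example, a back-of-the-envelope version of that business case, using purely hypothetical figures, might look like this:

```python
# Hypothetical business case for a win-rate improvement program.
# Every figure below is illustrative; substitute your own pipeline data.

opps_per_quarter = 200     # sales-qualified opportunities per quarter
current_win_rate = 0.18    # the shortfall measured in Step 1
goal_win_rate = 0.24       # the improvement goal you socialize
avg_deal_value = 50_000    # average contract value, in dollars

extra_wins = opps_per_quarter * (goal_win_rate - current_win_rate)
incremental_revenue = extra_wins * avg_deal_value

print(f"Extra wins per quarter: {extra_wins:.0f}")
print(f"Incremental revenue per quarter: ${incremental_revenue:,.0f}")
```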

Without this business case, it’ll be harder to get budget.

It will also be harder to get support from Sales for the program (i.e., access to the accounts, and advice on who to interview at each account). And, getting other teams to follow through on the research findings will be harder. Remember, the buyer’s journey crosses functional lines—from Marketing to Sales to Product.

Socialize this initial plan with functional leaders and other stakeholders. Make sure to ask them what they think causes the conversion dropoff you’ve identified. Save their answers for later when you write the discussion guide.

Step 3. Select Deals and Contact Champions

We’ve found that twenty buyer interviews (ten Closed/Won and ten Closed/Lost) make a good baseline. A smaller sample of five or even ten interviews puts you at risk of onesie-twosie type comparisons between the losses and wins. With twenty interviews, there is a stronger contrast between factors that have positive or negative influences on outcomes. Starting with a baseline of twenty interviews is the fastest path to spotting issues and making changes.

Use your CRM to run an opportunity report (in Salesforce, for example, a standard opportunity report works well), screening out deals that closed more than 90 days ago. Interviews up to 180 days after close are fine, but the buyer’s memory will obviously be fresher when the deal closed within the last 90 days.

Screen all deals to make sure they include the same:

  • Product offering, and
  • Buyer segment (e.g., Midmarket vs Enterprise).

Obviously, if you’re concerned about losses against specific competitors, be sure to adjust your report filters accordingly.

For the opps that Closed/Lost, you’ll want to make sure the selected deals all progressed to the dropoff point you’re working to improve. So, for example, if your dropoff is at the last stage in the sales process, you’ll need to exclude opps that Closed/Lost at Discovery or another early stage in the sales process.
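If you prefer to apply these screens to a raw CRM export rather than build them into the report itself, here is a minimal sketch in Python with pandas. The file name, column names, and stage values are hypothetical and would need to match your own export:

```python
# Minimal sketch of the deal-selection screen, assuming a CSV export of
# closed opportunities. File name, columns, and values are hypothetical.
import pandas as pd

deals = pd.read_csv("closed_opportunities.csv", parse_dates=["close_date"])

cutoff = pd.Timestamp.today() - pd.Timedelta(days=90)
screened = deals[
    (deals["close_date"] >= cutoff)                # closed within 90 days
    & (deals["product"] == "Platform Edition")     # same product offering
    & (deals["segment"] == "Midmarket")            # same buyer segment
]

# Lost deals must have reached the dropoff stage you're trying to improve.
reached_late_stage = screened["last_stage_reached"].isin(["Proposal", "Negotiation"])
lost = screened[(screened["status"] == "Closed/Lost") & reached_late_stage]
won = screened[screened["status"] == "Closed/Won"]

# Aim for roughly ten of each to build the twenty-interview baseline.
shortlist = pd.concat([won.head(10), lost.head(10)])
print(shortlist[["account", "status", "close_date"]])
```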

Review this deal report with Sales and revise accordingly. Ask which accounts will be most receptive and which to avoid, particularly customers where there’s follow-on sales activity. Reps can also suggest contacts at each account who would be willing and able to share experience across the entire buying process. Depending on CRM hygiene and configuration, that person may be tagged as a Sponsor, Coach, or Champion.

When you’re ready to begin scheduling champions to participate, start your outreach with the more time-consuming Closed/Lost deals.

You’ll get a better response rate when your email is sent by someone who has authority (e.g., a Sales or Marketing leader) or has a relationship with the buyer (e.g., the rep). Writing a template email will make this easier all around.

Step 4. Debrief Buyers

There’s a tradeoff between interview length and response rate. I’ve found response rates are highest when I request a 25-minute interview, but that requires a tight delivery. Put all your questions into an interview script, then time it, and refine.

This is another benefit of focusing on a specific dropoff point in the pipeline. The interview can zoom in on the critical points in a months-long buying journey. For example, if you’re concerned that sales execution is causing lost opportunities, you can omit discussion of perceptions and influences early in the journey (e.g., a Gartner Magic Quadrant report).

As you prepare your interview questions, here are four tips to keep in mind:

  • Best-Worst Scaling can be a very helpful tool for putting answers to “What did you like most/least?” questions in context. After that question has been answered, follow up by asking which vendor was best on that issue and which was worst.
  • Why and how are incredibly valuable follow-ups. Your ability to take action—to know how something needs to be improved—lies in the answer to those questions.
  • Understand that buying usually begins rationally and ends emotionally. That is, buying teams will be thoughtful about their needs and criteria as they work towards a short list of vendors. Then, unless one vendor’s offering is clearly superior on those criteria, the final decision will be made on emotional factors.
  • Look for ways to make it easier for your buyer to cue memories. Using a documentary film metaphor helps here. Doing so makes it more likely you’ll get the real story rather than something made up in the moment.

Step 5. Analyze the Data

Coding or tagging is the first step in analyzing the interviews. You’ll want to code each buyer’s responses to the topics you discussed. So, for example, you’ll tag each of the decision criteria they used. Or each of the sticking points in the buying process.

The quick and free way to do this is with a spreadsheet. Alternatively, you can invest in specialized software like MAXQDA or NVivo, which have long been default choices for analyzing qualitative data, or in one of the newer SaaS options such as Dovetail and Delve.

I prioritize each issue by its prevalence and win rate, and use those two measures to place it into a four-quadrant matrix, Gartner Magic Quadrant style.
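If you are working out of a spreadsheet export rather than a QDA tool, here is a minimal Python sketch of that prevalence and win-rate calculation. The tagged interviews, criteria names, and 50% cut-offs are hypothetical; in practice you would pick cut-offs that split your own data into meaningful quadrants:

```python
# Minimal sketch: compute prevalence and win rate for each coded decision
# criterion, then bucket it into a 2x2 matrix. The tagged interviews below
# are hypothetical stand-ins for your coding spreadsheet or QDA export.

interviews = [
    {"outcome": "won",  "criteria": ["ease of use", "integration", "price"]},
    {"outcome": "lost", "criteria": ["price", "reporting"]},
    {"outcome": "won",  "criteria": ["ease of use", "support"]},
    {"outcome": "lost", "criteria": ["reporting", "integration"]},
    {"outcome": "lost", "criteria": ["price", "support"]},
]

stats = {}
for interview in interviews:
    for criterion in interview["criteria"]:
        s = stats.setdefault(criterion, {"mentions": 0, "wins": 0})
        s["mentions"] += 1
        s["wins"] += interview["outcome"] == "won"

total = len(interviews)
for criterion, s in sorted(stats.items(), key=lambda kv: -kv[1]["mentions"]):
    prevalence = s["mentions"] / total       # share of interviews mentioning it
    win_rate = s["wins"] / s["mentions"]     # share of those deals that were won
    quadrant = (
        ("high prevalence" if prevalence >= 0.5 else "low prevalence")
        + ", "
        + ("driving wins" if win_rate >= 0.5 else "driving losses")
    )
    print(f"{criterion:12s} prevalence {prevalence:.0%}  win rate {win_rate:.0%}  -> {quadrant}")
```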

Matrix graphic: causes of won and lost deals from anonymized win/loss studies.

For example, in this report excerpt we’ve plotted the nine most prevalent decision criteria on the four quadrants. Win rate is on the x-axis, and prevalence is on the y-axis.

  • Criteria in the right two quadrants are strengths for this vendor. These four criteria are how this vendor is truly differentiated, as judged by buyers who are voting with their money.
  • Criteria in the left two quadrants are weaknesses, across all buying contexts and all competitors. Together, these strengths and weaknesses are ultimately the main reasons this vendor wins and loses.
  • The criteria in the upper right-hand quadrant are driving wins; they should take a front-and-center role in the vendor’s positioning. The three criteria in the upper left-hand quadrant are driving losses. They’re feature deficits: they make it onto many buyers’ lists and should be fixed.

Step 6. Align to Act

A read-out of your win/loss findings is necessary but not sufficient.

Improved deal outcomes almost always require participation from multiple functions, so it’s essential to engage a cross-functional team of stakeholders in absorbing and understanding the buyer feedback and planning for action.

For this purpose, consider using Affinity Diagramming during a “buyer workshop.”

At the onsite we hold at a project’s conclusion, we also use Affinity Diagramming to generate a prioritized list of fixes and improvements, which we record in a draft action plan.

The workshop is also a good time to present your plan for measuring the program’s impact. Check your CRM to get an up-to-date reading of conversion at the dropoff point you originally identified. Use your action plan to record a date for the “after” measurement, once a new cohort of buyers has been exposed to the improvements you’ve made.

Step 7. Measure Results

Use the “after” reading scheduled in your plan to measure the program’s impact on outcomes.
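Concretely, the “after” reading is a comparison of conversion at the same dropoff point, for the new cohort, against the goal you set in Step 2. A tiny sketch, again with hypothetical counts:

```python
# Compare win rate before and after the changes, against the Step 2 goal.
# All counts are hypothetical placeholders for your own CRM data.

before_wins, before_opps = 36, 200   # cohort measured in Step 1
after_wins, after_opps = 52, 210     # cohort exposed to the improvements
goal_win_rate = 0.24                 # from the business case in Step 2

before_rate = before_wins / before_opps
after_rate = after_wins / after_opps

print(f"Win rate before: {before_rate:.0%}")
print(f"Win rate after:  {after_rate:.0%}")
print("Goal met" if after_rate >= goal_win_rate else "Goal not met; investigate why")
```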

If you met or surpassed your goal, congratulations!

If the Win/Loss Analysis didn’t improve your target metric, investigate why. Were there faulty conclusions about the causes of won and lost outcomes? Were all the planned changes actually made?

What’s Next?

We’ve now made a complete loop through the Build-Measure-Learn cycle.

You began by using Win/Loss Analysis to Learn how to fix the biggest dropoff point in your pipeline. Then, after analyzing the data and developing an action plan, you and your cross-functional team Built improvements. And then you Measured the results.

Now, consider applying your experience with Win/Loss Analysis to further improve your target metric. Or tackle a new issue or opportunity.