3 Essential Things You Need to Know About B2B Predictive Lead Scoring (Part 3)
By Bo Chipman on November 10
Over the last few years there has been a significant rise in the number of B2B sales and marketing organizations moving from traditional MAP and point-based lead scoring to predictive lead scoring. Many of these early adopters are now two or more years into using predictive lead scoring, and a number of common challenges and important lessons are starting to emerge.
Whether you are just now looking to make the leap to predictive analytics or are already down the path and in need of a reset, here are three essential things you need to consider:
- Driving adoption with your sales force
The most common reason that predictive lead scoring fails is not that the model or the underlying data is bad, but that the model outputs are not fully trusted or accepted by the sales force. Most experienced sales professionals take pride in their ability to distinguish good leads from bad ones, and asking them to put their faith in the outputs of a statistical model, which they’ve had little to no part in developing, is often too much to ask. No predictive model is perfect, and it never takes long for a skeptical sales rep to find a handful of reasons to confirm their initial doubts. Those negative first impressions are usually extremely difficult to recover from.
The most effective way to address this issue is to engage your sales force early and often in the model development process. An important first step is to gather from your reps what they consider to be the most important factors when assessing the quality of a lead. It’s imperative for them to understand that model development is an ongoing collaborative process, in which many of their initial assumptions will be tested and validated with data, and there will invariably be findings that challenge the conventional wisdom inside the company. There will also be new factors that emerge that have never been considered. Encouraging direct dialogue with the sales force and making them active participants in the process is critical to building trust and driving adoption.
- Aligning to the unique selling processes of the business
For lead scoring, most large B2B companies require a level of customization to account for the unique selling processes and complexities of their business. In our work, we’ve found the most effective solutions use a multi-level scoring approach that can be easily adapted to meet the unique needs of any B2B sales force. In that model, there are three levels of scoring that take place:
- Account – At the highest level, each account is scored based on the likelihood that it will buy one or more of the company’s products. The overall account score is based on attributes and behaviors associated with the account (e.g., industry, revenue growth), and also incorporates the quality of all contacts and opportunities associated with the account. In many cases, large accounts will need to be decomposed into separate buying units to accurately reflect where buying decisions are made inside the company.
- Contact – Inside the account, all contacts are scored based on their likelihood to buy one or more of the company’s products. The contact scores are primarily driven by attributes and behaviors of the contact (e.g., job title, campaign response) but they can also inherit attributes of the account (e.g., industry) if those variables are found to be strong predictors of likelihood to buy.
- Opportunity – The opportunity score goes one level deeper to predict the likelihood that an account or contact will buy a specific product. This model is particularly useful for sales reps who are focused on driving sales for a specific product or group of products.
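The relationship between the three levels above can be sketched in code. This is a purely illustrative example, assuming made-up field names and blending weights; it is not a real scoring model, only a picture of how an account score can roll up contact and opportunity quality alongside the account's own attributes.

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class Opportunity:
    product: str
    score: float  # likelihood the account/contact buys this specific product (0-1)

@dataclass
class Contact:
    job_title: str
    score: float  # likelihood to buy, from contact attributes and behaviors (0-1)

@dataclass
class Account:
    industry: str
    base_score: float  # from account-level attributes (e.g., industry, revenue growth)
    contacts: list = field(default_factory=list)
    opportunities: list = field(default_factory=list)

def account_score(acct: Account) -> float:
    """Blend account attributes with the quality of associated contacts and opportunities."""
    contact_avg = mean(c.score for c in acct.contacts) if acct.contacts else 0.0
    opp_avg = mean(o.score for o in acct.opportunities) if acct.opportunities else 0.0
    # Illustrative weights (assumptions): account attributes dominate,
    # with contact and opportunity quality as supporting signal.
    return 0.5 * acct.base_score + 0.3 * contact_avg + 0.2 * opp_avg

acct = Account(
    industry="software",
    base_score=0.8,
    contacts=[Contact("VP Engineering", 0.9), Contact("Analyst", 0.5)],
    opportunities=[Opportunity("Platform", 0.6)],
)
print(round(account_score(acct), 2))  # 0.5*0.8 + 0.3*0.7 + 0.2*0.6 = 0.73
```

In a real deployment the weights would be fit from historical win/loss data rather than hand-set, and large accounts would first be decomposed into buying units as described above.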
- Delivering actionable intelligence
Many sales and marketing practitioners have learned the hard way that lead scoring on its own is not enough. In order for a sales rep to have a well-informed and productive conversation with the buyer, they need to know why the lead is scored high (e.g., recent whitepaper download, priority vertical), what products and services they are most likely to buy, and what content is most likely to move them forward in the buying process. Many lead scoring solutions fall short by only providing a summary lead score with little context around what behaviors or attributes contributed to that score.
A better approach is to provide a short list of reason codes in the user interface for the sales rep. The reason codes should be organized into “fit” and “engagement” variables. The “fit” variables assess lead quality based on attributes of the buyer and/or the account (e.g., industry, job title), while the “engagement” variables assess lead quality based on specific behaviors observed with the account or buyer (e.g., campaign response, purchase behavior). Having that type of information at their fingertips, without having to do in-depth research on the account, allows sales reps to be more productive and effective with their time.
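The fit/engagement split described above can be sketched as follows. The factor names and weights here are illustrative assumptions, not output from any real scoring product; the point is only to show how the top contributing factors behind a score can be surfaced as grouped reason codes.

```python
# Illustrative (assumed) factor taxonomy: which scoring factors count as
# "fit" (who the buyer is) vs "engagement" (what the buyer has done).
FIT_FACTORS = {"priority_industry", "target_job_title", "company_size_match"}
ENGAGEMENT_FACTORS = {"whitepaper_download", "campaign_response", "recent_purchase"}

def reason_codes(factor_weights: dict, top_n: int = 3) -> dict:
    """Group the strongest contributors to a lead score into fit vs engagement."""
    top = sorted(factor_weights.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
    return {
        "fit": [f for f, _ in top if f in FIT_FACTORS],
        "engagement": [f for f, _ in top if f in ENGAGEMENT_FACTORS],
    }

# Hypothetical per-lead factor contributions, e.g. from a model explainer.
lead = {
    "priority_industry": 0.35,
    "whitepaper_download": 0.30,
    "target_job_title": 0.20,
    "campaign_response": 0.10,
}
print(reason_codes(lead))
# {'fit': ['priority_industry', 'target_job_title'], 'engagement': ['whitepaper_download']}
```

Presenting two or three codes from each group next to the summary score gives the rep the “why” behind the number without requiring them to dig through the raw data.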