9 Requirements for Effective Cross-Selling in Financial Services

As part of its Future of Financial Services series, the World Economic Forum recently released a comprehensive, far-reaching report on the "New Physics of Financial Services" and the impact that digital transformation and the rise of artificial intelligence are having on the financial services ecosystem.

There is no doubt that these trends will change the operating models of certain firms and continue to reshape the competitive dynamics of the industry in a number of ways over the medium term. In the short term, they are creating challenges and presenting opportunities for how traditional FSIs acquire, grow, and retain customers. This is especially true in the areas of cross-selling and retention. If traditional FSIs aren't accurately anticipating the needs of their customers or understanding their risk of defection, they are at greater risk of disintermediation by new entrants or competitors who do.

Cross-selling is the fastest, most profitable path to incremental revenue growth, period. Assuming a firm has a 30% wallet share within an account, attaining just 5% more in account share grows account revenues by 17% (moving from 30% to 35% share is a 5/30 ≈ 17% increase). And with the cost of customer acquisition generally estimated at 3x to 25x the cost of cross-selling, the economics of cross-selling are very compelling. Recognizing this, cross-selling has become a strategic priority for many financial services firms in recent years, yet many firms still appear to be far from realizing its potential.

Why is this? Some firms may be hesitant given the number of actions and warnings directed toward FS firms in the CFPB enforcement files, where the cross-selling culture was perhaps a bit too aggressive – "Detecting and Preventing Consumer Harm from Production Incentives," as the CFPB refers to it.

For others, it may be the challenge of overcoming organizational complexity: effective cross-sell programs require coordination across multiple lines of business, diverse functional areas, and disparate technologies and business processes.

In many instances, it may be the fact that cross-selling responsibility is often left to the "last mile" (the end of the buying journey), where relationship managers or sales resources simply don't have the time or skills to implement programs at scale. Or efforts are driven by product owners who take a product-centric view of cross-selling, as opposed to the customer-centric view that successful cross-selling programs demand.

Here then is our list of nine requirements for building effective and scalable cross-selling programs in the financial services industry:

1) Adopt a customer-centric view.

Too many cross-sell programs are still organized around lines of business and driven by a product-centric view of cross-selling. Effective cross-sellers build a customer-centric view of opportunity and take a longer-term view of customer value. The most effective cross-sell efforts are led by segment marketers who have responsibility for specific customer segments, working with the product marketing teams and sales channels to coordinate on execution.


2) Establish a single view of the customer.

Patently obvious, but many firms still have a difficult time building a unified view of the overall customer relationship. This includes all product usage and transactional history, service and support history, etc., as well as identifying and integrating external data sources that provide additional insights into buyer behavior and attitudes.

Identifying patterns of behavior across products is essential for understanding and anticipating customer needs. In turn, it informs segmentation and personas in #3 below.

But don't wait for the completion of an expensive, multi-year data warehouse project. Agile firms are taking advantage of low-cost storage and data lake architectures to quickly build data repositories, giving data science teams fast access for specific use cases without the processing overhead of large, inflexible data warehouses. Liberate insights from the tyranny of workflow tools and warehouses!



3) Build actionable buyer segments and personas.

Utilize the data from #2 – supplemented with primary research – to build actionable segmentations and personas. These will allow you to personalize your interactions with existing customers in the ways they have come to expect: based on the totality of their relationship with you and reflecting an understanding of their needs. Maintain segment and persona assignments in your customer database to keep the segmentation actionable.



4) Create a scalable analytical engine targeted to specific, prioritized use cases.

Use an agile, reproducible approach to developing and managing a library of predictive cross-selling and retention models. Consider segmentation, RFM (recency, frequency, monetary value), customer lifetime value (CLV), next-logical-product, retention, and marketing mix optimization models, among others.

Customer growth, share growth, wallet growth, account expansion – all of these strategic goals beg the same question: how do I get a given customer to buy more, or buy something new? Cross-sell models use data about the current installed base and compare it with data on accounts that have already upgraded. They are a close cousin of market basket models on the consumer side, analyzing how customers' "baskets" of products typically evolve as new items are added.
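To make the market-basket intuition concrete, here is a minimal sketch of a next-logical-product heuristic in Python; the holdings table and product names are hypothetical, and a production model would add propensity scoring and richer features.

```python
import pandas as pd

# Toy holdings table: one row per (customer, product) relationship
holdings = pd.DataFrame({
    "customer_id": [1, 1, 2, 2, 2, 3, 3],
    "product": ["checking", "savings", "checking", "savings",
                "mortgage", "checking", "mortgage"],
})

# Customer-by-product incidence matrix
basket = pd.crosstab(holdings["customer_id"], holdings["product"])

# Co-occurrence counts, then "confidence": the share of product-A holders
# who also hold product B (read across row A)
co_occurrence = basket.T @ basket
confidence = co_occurrence.div(basket.sum(axis=0), axis=0)
print(confidence.round(2))
```

The best next-logical-product candidate for a given holding is then the highest off-diagonal value in that product's row.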

Deploy a disciplined, scalable approach to managing your data science operations to ensure reproducibility, scalability, and accountability of your investments in AI.



5) Listen to your customers, stalk your competition, foresee your disruptors.

Robust cross-sell programs require a deep understanding of your accounts. This includes how they perceive your brand, what competitors are selling into those accounts and what new disruptors are emerging.

Best-in-class FSIs leverage "always-on" market intelligence capabilities that track customer feedback, competitive movements, and emerging disruptors. This intelligence is fed back into the product development process, innovation centers, marketing messaging, Account-Based Marketing activities, and sales enablement programs.


6) Create content aligned with personas and buyer’s journey.

Today’s FS customers engage with your organization via multiple channels. With 24/7 availability of your content and resources, it can be challenging to provide a consistent experience across every channel. Think of a potential buyer researching new savings and investment options to kick off 2019. This buyer can easily research opportunities online, get side-by-side comparisons from financial providers, chat online or over the phone with an investment expert, and then set up an appointment at the bank or financial institution for a face-to-face deep-dive discussion.

If the user experience across all these channels isn’t integrated – and customers receive different responses across different channels from the same company – the likelihood of successfully cross-selling or even retaining those customers goes down considerably. Utilizing a consistent framework and taxonomy to map content to the buyer’s journey is critical to ensure a consistent, personalized and relevant experience regardless of product, channel or stage in the journey.



7) Implement a disciplined cross-channel contact strategy and cadence.

Disciplined cross-sellers implement and adhere to well-structured contact strategies that are based on analytics and insight (i.e., using the scores developed in #4 above). These strategies help determine what the cadence should be, which channel should engage, and what the product/solution and message should be.

This requires close coordination between marketing and sales. One of the most successful cross-sell programs we have ever seen rescored its entire customer population each week using updated transactional and market response data, then applied a set of dynamic business rules to determine where each opportunity would be routed the following week: either into a well-defined nurture stream for the next logical product, or to a sales agent.

In either event, the contact strategy and cadence were well-defined for each product and each step in the buyer’s journey. Content and messaging for each segment were defined and utilized as “fuel” for each outreach and delivered into the marketing automation tools and the CRM (See #8 below). The business rules were dynamic and could be modified weekly based on underlying business conditions. This forced an interlock each week between sales and marketing and helped develop shared accountability for results.
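As a minimal sketch of that weekly rescore-and-route step, assuming a hypothetical fitted propensity model and illustrative thresholds (the real business rules were dynamic and retuned weekly):

```python
import pandas as pd

FEATURES = ["recency_days", "txn_count_90d", "responded_last_campaign"]  # hypothetical

def rescore_and_route(customers: pd.DataFrame, model) -> pd.DataFrame:
    """Score every customer, then apply routing rules for next week's outreach."""
    scored = customers.copy()
    scored["score"] = model.predict_proba(scored[FEATURES])[:, 1]
    scored["route"] = pd.cut(
        scored["score"],
        bins=[0.0, 0.3, 0.7, 1.0],  # business rules, adjustable each week
        labels=["hold", "nurture_stream", "sales_agent"],
        include_lowest=True,
    )
    return scored
```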



8) Insert insights and content into workflow tools.

The best insights and AI are useless unless they can be easily understood and acted upon by your sales and marketing channels. Delivering predictive analytics, relevant content and personalized messaging into existing customer contact workflow platforms is critical for successful cross-selling at scale. We have found that loosely coupled architectures are dramatically better over the long run than tight integrations with SaaS MarTech platforms.

Most companies use multiple platforms to manage the end-to-end customer journey. Inserting insight into each of these platforms – and gathering feedback from the customer interactions those platforms direct – is critical to managing the process holistically and understanding where the customer is in the journey at any given point in time.

Rather than working to integrate multiple systems, the better alternative is to deliver analytics and content via a set of standardized endpoints that any CRM or MarTech platform can use, then write thin integration layers for specific systems. When the next great piece of technology is rolled out – or when Salesforce raises its prices by 20% – it's no problem. It just requires updates to an adaptor layer, versus tearing out a bunch of proprietary Apex code from Salesforce and trying to remember what the developer was thinking.

Fortunately, all CRM and marketing automation systems—including Salesforce—share the same basic architecture. The objects Account, Contact, Lead, Opportunity, Product, etc. don’t really vary, and haven’t for 25 years.

Cross-selling recommendations also share the same DNA. Typically, the API calls for cross-selling comprise several microservices that, taken together, form the basis of the contact and content strategy outlined in #7, woven into the CRM / MarTech stack.
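Here is a minimal sketch of the loosely coupled pattern described above, in Python with hypothetical field names: one standard recommendation payload, plus a thin per-platform adaptor that is cheap to rewrite when a platform changes.

```python
from dataclasses import dataclass

@dataclass
class CrossSellRecommendation:
    """Standard payload returned by the analytics endpoints."""
    account_id: str
    next_logical_product: str
    propensity: float
    message_id: str  # ties back to the content strategy in #7

class SalesforceAdaptor:
    """Thin layer mapping the standard payload to one CRM's field names."""
    def to_payload(self, rec: CrossSellRecommendation) -> dict:
        return {
            "AccountId": rec.account_id,
            "RecommendedProduct__c": rec.next_logical_product,
            "PropensityScore__c": rec.propensity,
            "ContentId__c": rec.message_id,
        }
```

Swapping in a new CRM or MarTech platform then means writing one new adaptor, not tearing apart the analytics layer.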



9) Relentlessly test, measure, and track.

Finally, any successful cross-sell program – in fact, any sales and marketing program – requires a relentless focus on test-and-learn, agile pilots, and ongoing measurement and optimization from start to finish.

This process starts with establishing a pro forma ROI whenever a new cross-sell model is slated for development. Writing down what you are trying to accomplish, and estimating how its effectiveness will be measured, puts the entire team on much better footing for success. This should be done before a single line of code is written or a query is executed.


Implement agile pilots using test-and-learn methods such as A/B testing to quickly gain insight into the combinations of factors that drive the best results, and then scale. The direct marketer's classic learning method, the A/B test divides a campaign into test and control cells, and response rates are then compared using simple z-tests of proportions to pick a winner. This approach is simple and effective and, given sufficient volume, can be turned into a learning factory for the organization.
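For example, the winner-picking step might look like this, using the two-proportion z-test from statsmodels (cell counts are illustrative):

```python
from statsmodels.stats.proportion import proportions_ztest

responders = [320, 260]        # responders in the test cell vs. the control cell
cell_sizes = [10_000, 10_000]  # contacts per cell

z_stat, p_value = proportions_ztest(count=responders, nobs=cell_sizes)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")  # a small p-value => declare a winner
```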

Finally, develop and deploy a holistic, customer-centric marketing analytics framework that allows you to consistently track, measure, manage, and optimize all of the activities occurring with your customers across all products, marketing, and sales channels. This will provide visibility into overall results and allow you to make more informed decisions about how to effectively grow your installed base. Don't forget to optimize your customer contact strategies and cadence on a continuous basis.



Remember this: "If traditional FSIs aren't accurately anticipating the needs of their customers or understanding their risk of defection, they are at greater risk of disintermediation by new entrants or competitors who do."

An agile go-to-market plan against market disruptors, built on a customer-centric approach to cross-selling, will be key to success. Check out our latest whitepaper, "The Last Mile Opportunity," for five transformational principles to scale operations and build revenue success in the "last mile" of the customer buying journey.


Marketing Analytics Family Tree

Marketing analytics is a broad, “meta” field, combining elements of marketing strategy, data science, database management, digital technology, primary research, and psychology, to name a few. To help explain what it is, we’ve created this taxonomy of marketing analytics—a “family tree”—that breaks the field down from high-level to more detailed.

The taxonomy has four levels of hierarchy. The highest level splits analyses broadly into aggregated and discrete classes. Aggregated analyses look at data grouped together—for example, by month, product category, or customer segment. Discrete analyses look at the individual “data objects”—for example, leads, customers, or accounts. The next level down—call it “function”—looks at large categories of analytics that might typically be found on a Director’s business card. For example, “Director of Consumer Research”, or “Director of Customer Analytics.” The third level of the taxonomy, discipline, looks at a thematic area in that function, for example, “qualitative research” or “predictive prospecting.” Finally, at the lowest level are the specific analytics tasks or methodologies that an analyst might be doing on any given day, for example, “social listening” or “customer reactivation.”

Each task has a fairly detailed explanation of what it is below the tree. Where links to greater detail might be helpful, those have been added; but in many cases, they weren’t needed.

You'll notice that the terms "machine learning" and "artificial intelligence" don't appear in this taxonomy. This isn't because they were forgotten, but because they are techniques that can solve many of the discrete-type problems noted in the taxonomy. In some cases, specific tools are mentioned, like neuro testing; these were included because they are so uniquely suited to a specific task.

Assuming people find this hierarchy valuable, we are absolutely open to editing it and keeping it fresh with suggested adds, deletions, or merges. Please reach out with suggestions to our Chief Analytics Officer, Andy Hasselwander, at ahasselwander@market-bridge.com.





What About Small Data? Part 2

Getting Back to Growth by Playing Small Ball

The ADBUDG curve is a handy, 40-year-old heuristic for modeling marketing spend versus return. It was first used for broad-reach advertising. The concept is pretty simple:

  1. The curve starts out flat, as dollars are invested to get breakthrough with a group of consumers
  2. Then, the curve gets steeper as marginal returns reach profitable levels
  3. Finally, the curve flattens as the market is saturated with messaging, and the advertising no longer has much marginal effect.

[Figure: ADBUDG curve]
Certainly, both direct and broad-reach marketers know this curve, even if they've never heard the word "ADBUDG." There is a maximum amount of goods or services you can get the market to buy before marginal marketing dollars no longer drive a profitable return.
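For reference, here is a minimal sketch of the classic ADBUDG response function (Little, 1970); the parameter values are purely illustrative:

```python
import numpy as np

def adbudg(spend, floor=1.0, ceiling=10.0, gamma=2.5, delta=8.0):
    """S-shaped response: flat at first, steep in the middle, saturating at a plateau."""
    return floor + (ceiling - floor) * spend**gamma / (delta + spend**gamma)

for x in np.linspace(0, 5, 6):
    print(f"spend {x:.0f} -> response {adbudg(x):.2f}")  # response flattens at saturation
```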

However, this “plateau level”, at least on an aggregate basis, seems to be getting lower, year after year. Over the past two decades or so, at least four factors have conspired to compress this curve for marketers, whether B2C or B2B (I use the term “consumer” interchangeably):

  1. Consumers have become savvier, spotting obviously poor execution and dismissing it out of hand
  2. At the same time, consumer behavior has become more search-driven, turning the tables on the advertiser and waiting until they have a real need to find what they want on Amazon or Google
  3. The supply of quality interactions—be these on radio, telephone, television, or at retail—has gone down as consumers have shifted their behavior towards platforms like Netflix and Amazon, and have stopped picking up their phones, etc.
  4. At the same time, the marginal cost-per-touch for low-quality interactions (junk email, crap display ads, lousy ad time on the long-tail of the cable TV spectrum) has gone down, encouraging blasting consumers with touches, thereby further intensifying consumers’ programming to “ignore” marketing tactics.

The net effect of these trends and their impacts can be visualized as a series of curves getting flatter and flatter over time as the aggregate “ROMI,” or return on marketing investment, gets lower and lower.

[Figure: Downward-shifting ADBUDG curve]

A traditional way to deal with this problem is optimization: take that ADBUDG curve and find the best possible marketing mix, message, and consumers to target, improving ROI by lowering the cost of getting the same return. This works well over the short run. Everyone is happy, because marketing has increased its ROMI. But there is no noticeable impact on growth – the plateau of diminishing returns is simply reached sooner.

[Figure: Optimized ADBUDG curve]

Over the long run, though, there is a strategic problem – one that CPG companies, for example, have been facing for years: growth is harder and harder to come by. Yet the economy is growing, and consumers are buying things. So where have the dollars gone? They've gone to small competitors, who have marketing departments of one or two, no marketing mix models, and are limited to small, agile digital campaigns that go from ideation to execution in days, not months.

If you want to see a concrete example of this, go outside and look at anyone between the ages of 14 and 40 today, at the brands they display conspicuously, at the clothes they wear, at the music they listen to. You will notice that most are telegraphing their individuality in an extremely deliberate way. There is a term that Sigmund Freud coined a hundred years ago that I think perfectly describes consumers and companies as buyers today—the narcissism of small differences:

Every single consumer / company thinks that they are unique, even if they’re actually quite similar.

It’s up to marketers and sellers to recognize this, understand them, and get them what they need to feel like they are being treated as individuals.

The implication is that now more than ever, marketers and salespeople need to go “small” when everyone is talking about “big.” Big data, big advertising, scaled campaigns, and machine learning are great—but what they are so often missing is the touch of the artist, and careful, high-resolution insights-driven thinking.

I was talking to someone the other day whose wife is about to age into Medicare, and she's gotten hundreds of "idiotic" (his word) touches over the past few months from various companies blasting her with the same message, over and over. She is now completely turned off on Medicare Advantage. She'll buy something eventually, but only when she receives the right touch that acknowledges her as an individual – well, maybe not an individual, but at least as someone unique.

Companies optimizing their Medicare Advantage campaigns might put her in a lower decile, as she doesn’t respond to these touches, but she has the means and the need and will buy. Instead of optimizing in aggregate, or across coarse segments of tens of millions, these companies should think about micro-segmenting their campaigns, and understanding what she as a consumer actually needs and wants when it comes to health care; what her specific habits are; and how she lives her life. Simply acknowledging these differences will go a long way towards true optimization, and will drive incremental growth.

Concretely, this means micro-segmentation. This isn't huge enterprise segmentation; rather, it's a guerrilla, agile attempt at understanding small cells of customers and reaching out to them in unique ways, all measured rigorously. For each micro-segment, a marketer should strive for unique insights, including:

  • Core needs, wants, insights: What is the real, non-trivial, second- or third-level insight that makes this small segment of consumers care about what I'm talking about?
  • Channel mix: How do I build a go-to-market strategy that intercepts consumers and companies where they travel, and where they care about what I'm selling?
  • Content / message: How do I get the right, unique content and message in front of that cell of a few companies / a few hundred thousand consumers, where it will really resonate?
  • Product: It's not always possible, but can I build a product portfolio with enough diversity to acknowledge difference, while staying profitable?

This requires striving for breakthrough insights among small cells, and these insights then have to spread throughout a marketing organization that embraces “small ball”—the wins that come from looking for nuggets instead of the whole gold mine.

This does not mean giving up on analytics or data science—it’s actually the logical extension of it. Analytics goes from being huge and aggregate to micro and artful.

It does mean spending more time looking for the insight sources that will give marketers greater depth of understanding of markets, prospects, and customers. And it does mean bringing data scientists into qualitative research sessions.

So what about that ADBUDG curve getting squashed by oversaturation? If you play analytical “small ball,” you’ll be optimizing lots of little ADBUDG curves, one for each of the micro-segments. The saturation level for each of these is higher, so when they are summed up, the aggregate curve rises.

Some of this can be done with technology – for example, some of the very good targeting that is possible with Instagram today based on location, text mining, and imagery – but much of it still boils down to good old-fashioned insights work, and making cell sizes smaller. Another way to think about small-cell marketing is in terms of a campaign / micro-segment portfolio: each "fund" is optimized, and then the entire portfolio of campaigns is optimized for efficiency as a whole.

[Figure: Micro-segment ADBUDG growth curves]

One final note: I continue to believe that insights-driven, small-ball analytical marketing is ultimately an organizational challenge.

Marketing and sales organizations have to be built to think like customers, like individuals, while at the same time being relentlessly data-driven.

These two things are not mutually exclusive. The old trope of the “geeks” and the “creatives” is just plain wrong. Merging these two worlds, and successfully playing analytical small-ball, is a really good way to move a big company from efficiency to efficient growth.


10 Step Checklist for Creating Actionable Segmentations and Personas

For all of the talk around one-to-one marketing, human beings still need frameworks to understand their world, and marketers are no exception. The word "segmentation" might be ubiquitous among marketers, but it's difficult to find two people who agree on what segmentation really means and what makes a segmentation actionable. That being said, sales executives and marketers can probably all agree on what a bad segmentation looks like:

  • Bad segmentations are hard to describe in plain English. In other words, only the lead researcher can really describe what the segmentation does, and who the members of the segments are. Another dead giveaway is that the names of the segments or personas don’t really align with what makes them unique. If you can’t name something, it’s probably not really much of a thing.
  • Bad segmentations can’t be used by the business. This kind of segmentation might be brilliant academically, but when you sit down and start to try to implement it, you can’t really do anything meaningful. This is always a challenge for the PhDs who create brilliant latent models of beliefs and wants among buyers—can we actually find these people?
  • Bad segmentations become obsolete quickly. A segmentation that only lasts for a year or two is frankly wasted money. The costs to do the research and analytics, and the opportunity costs of training and implementation across the organization, make it imperative that a segmentation is long-lasting. I’ve found that longevity is rarely stated as an upfront goal at the start of segmentation work.
  • Bad segmentations are too broad or too specific. In some cases, the segmentation is too broad. For example, a segmentation might attempt to describe all of the behaviors of a customer, from media usage to IT behaviors to channel preferences to leisure choices. In other cases, a segmentation might only be useful in describing such a small set of behaviors that the use cases are just too limited.

So, what does a successful, actionable segmentation look like? A successful segmentation effort is first and foremost one that is adopted. Fortunately, there is a checklist that marketers and market researchers can follow to protect against the risk of a six-month segmentation effort ending with a thud (literally, the 100-page PowerPoint hitting the bottom of the shredder bin). These checks are divided into pre-analysis and post-analysis.

Pre-analysis:

Step 1

Set clear descriptive / predictive objectives upfront.

Before any work is started, before a question is written, or before a database query is made, everyone needs to be in agreement on what the segmentation will be used to predict / discriminate, and what it won’t. For example, a segmentation might be great for understanding technology buyers’ wants and needs when it comes to the technology itself, but lousy in predicting where they shop. Of course, segmentation can serve many purposes, but realistically, a segmentation with 3-10 segments can really only be used to predict a few major themes. One way to do this is with a simple table, for example, this table for a printer / copier buyer segmentation:

Will Be Used to Predict

  • Behavioral usage
  • Relationships with others in a company
  • Learning style and objectives
  • Preferences for generic / name brand consumables

Won’t Be Used to Predict

  • Channel preferences
  • Media consumption
  • Technology stack / integration
  • Industry use cases

If everyone signs off on this upfront, before anything else is done, it’ll be much clearer to researchers and stakeholders why this work is being done, and what it’ll be used for.

Step 2

Set actionability requirements upfront.

Actionability is the ability for a segmentation to be used to actually contact or describe specific customers. In other words, “Can the model be handed off to field sales, channel partners, digital, my advertising agency, my database team, etc?” If a segmentation model needs to be useful in tagging every customer in the database, the design must take this into account, by ensuring that “knowable” variables in various databases are included in the research (if primary research is actually being done.)

Concretely, as illustrated in the table below, create a list of knowable variables by customer interaction point, and include these in requirements for the research.

Interaction – Known Data to Include in Segmentation for Assignment

1. Website registration – IP, OS (from browser); previous website traffic (from cookie); age, gender, ZIP code, SIC code, title
2. E-commerce purchase – everything from (1), plus SKU(s) purchased and shipping method chosen
3. Retail channel – retailer, location; firm, title, age, gender, SIC code from loyalty card
Etc…

Another option is to conduct the research directly from the database, ensuring a 1:1 tie. Email is a great way to do this. An approach that can work well is a 50/50 split of existing customers and unknowns. It goes without saying that the sample would then need to be reweighted to reflect the true population after the research is completed, as sketched below.
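A minimal illustration of that reweighting step, with made-up population shares: each respondent's weight is their group's population share divided by its sample share.

```python
# Hypothetical shares: the true population is 20% existing customers,
# but the 50/50 research design over-samples them.
population_share = {"existing_customer": 0.20, "unknown": 0.80}
sample_share = {"existing_customer": 0.50, "unknown": 0.50}

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'existing_customer': 0.4, 'unknown': 1.6}
```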

Step 3

Decide where to be on the actionability / insights spectrum.

This is kind of an addendum to the second to-do on pre-building actionability, but leaders should think about whether or not to do any primary research at all. If there is a lot of behavioral data out there—think Amazon—it’s probably possible to create an extremely powerful and descriptive segmentation based on database information alone. However, for a CPG company whose primary concern is deep-seated psychological constructs and how they impact attitudes about cleaning one’s home, a database segmentation won’t cut it. There is no magic rule here, but again, it pays to have these open conversations among stakeholders up-front.

Step 4

Think of quota as the population you are describing.

After setting objectives and being clear on use cases, setting the quota is a critical, no-going-back decision that has to happen very early in the process. When you think about what setting quota actually means, you’re defining the population that you want to segment. In other words, by setting guidelines like those in this table…

Firm size: 10-100 (100), 101-500 (100), 501-1000 (100), 1001-5000 (100), 5000+ (50)

Title: CMO (100), CTO (100), VP of Product (100), Developers (remainder)

Industry: Tech (100), Pharma (100), Other (remainder)

…you are literally defining the population you are describing. You are also defining who you are not describing. In other words, if you want to be able to use this segmentation to describe DevOps Directors, you can’t be sure that it’s going to work—you’ll be extrapolating. So, spend a lot of time on the quota, and think about it as a description of the buyers you want to describe, rather than just cells to fill out.

Step 5

Get your screener right.

This is a close cousin to #4, but different. The screener is a magical piece of the equation because small changes to questions can have drastic implications for the results. For example, say you want to talk to decision-makers who have a say in which group health insurance plan a company should choose. You could ask "Describe your role in the decision-making process for choosing group health insurance for your company," and only allow in those who say they make the decision or are part of a committee. But what if you also added the options "I have veto power" (the executive who needs his doctor to be in the plan) or "the person who makes the decision reports to me"? You will get different populations for each of these screeners, and consequently different results. Again, there's no right or wrong answer – just be very choiceful, and make sure the executives who will choose to adopt or trash the segmentation are well aware of the decisions being made.

Step 6

Do qualitative first to ask better questions.

There seems to be an aversion among analytical marketers to qualitative research; maybe they think it isn’t actionable, or they don’t understand it. However, qualitative research is critical in setting the scope of analysis, and asking the right questions in the quantitative phase.

Qualitative can be done before and/or after a segmentation, but doing it before makes a lot of sense, simply because you'll end up asking better questions when it comes time to quantitatively describe your universe. Of course, qualitative research is an art in and of itself. Focus groups can be effective, but in a business-to-business, considered-purchase environment, I've found that in-depth interviews or ethnographies (where you literally go into a company and watch people work, peppered with questions) are even better. When you're writing your discussion guide, take the time to brainstorm all of the "why" questions that might dig up insights for the buyer group. Pick a seasoned moderator who understands the buyers you are talking to. Finally, have the executives, and the people doing the analysis, actually attend the research. It will make them much better at driving the questionnaire and analytics effort, building the intuition they need to make better decisions about the questionnaire, the analyses to perform, and so on.

Post-analysis:

Step 7

Spend a ridiculous amount of time naming your segments.

Naming things doesn’t sound very analytical, but it’s incredibly important when it comes to segmentation. If you can’t name a segment clearly, it will never stand the test of time. The naming exercise is best done by a group of people, sitting in front of the data, over several hours. Ask questions about why a name is applicable, or not. Does it just “sound good”, or is what the name describes actually seen in the data? If you can’t name the segment, there’s something wrong with the solution. You’ll know when you have good names—they’ll describe the segments perfectly, and the data will line up with the names across every crosstab you look at.

Step 8

Check that your solution can actually do its job.

Thought experiments can be done in a few hours but will save a lot of time down the road. Three are listed here; the point is to put the model through its paces before the business puts it through its paces.

  1. Prima facie validity.
    If you spent the right amount of time naming the segments, this one should be easy. Can you describe, in one or two sentences each, how each segment is unique, and how they should be treated differently by your company? If not, you’re in trouble.
  2. Segment difference.
    The segments should have clear differences in all of the variables / factors that matter. What are the variables that matter, you ask? Those should have been defined back in step one. If you see a lot of grey when looking at crosstabs, you have a problem. Go back and look at the algorithms used for clustering, the number of clusters, etc., to ensure that truly different segments exist.
  3. Go-to-market use cases.
    Do a thought experiment and see if different sales and marketing actions can now be taken with the new information. Is there a different campaign apparent for Segment A? Can we find enough of Segment B in the database to build a meaningful campaign? Could a sales rep build a pitch for Segment C that makes sense and will drive results?

Step 9

Check for reproducibility.

What if a segmentation solution doesn't hold together when presented with new data? One way to address this is to use the classic machine learning technique of train-test or train-test-validate splits. The catch is that primary research records are expensive, so a little extra budget will need to be burned on sample, but it's worth the peace of mind of knowing that a solution reproduced itself on a 300-n holdout sample. So, do your modeling using train / test, and then score the validation set; cross-tabs of descriptive statistics should be very close across the train, test, and validation data.
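A minimal sketch of that check with scikit-learn; the survey file, feature names, and cluster count are all hypothetical:

```python
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("survey_responses.csv")  # hypothetical research data
features = ["price_sensitivity", "brand_affinity", "tech_savvy"]  # hypothetical

train, holdout = train_test_split(df, test_size=300, random_state=42)
scaler = StandardScaler().fit(train[features])
km = KMeans(n_clusters=5, n_init=10, random_state=42).fit(scaler.transform(train[features]))

train = train.assign(segment=km.labels_)
holdout = holdout.assign(segment=km.predict(scaler.transform(holdout[features])))

# Segment profiles should look nearly identical across the two samples
print(train.groupby("segment")[features].mean().round(2))
print(holdout.groupby("segment")[features].mean().round(2))
```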

Step 10

Get the outputs right.

A PowerPoint deck isn't enough. For a segmentation solution to be adopted far and wide, two sets of outputs need to be created: communications outputs and operational outputs. The lists below cover some frequently used outputs in both categories, but marketers should, by all means, get creative – it's only through rich, well-thought-out communication that segments will be used and addressed appropriately throughout the organization.

Communications outputs:

  • An executive summary of project goals and the key outputs
  • A master PowerPoint describing the segmentation methodology, the segments themselves, and the go-forward strategies
  • One-page summaries of each resulting segment that bring the segment to life, potentially as a persona (e.g., "Bob the IT guy")
  • A one-page summary of the entire effort that can be put up on bulletin boards, etc., across the organization

Operational outputs:

  • Models that score prospects, customers, inbound call center contacts, etc.
  • Differentiated go-to-market strategies / routes-to-market for each segment
  • Differentiated creative / content guides for each segment

As I reflect back over my career and think about "segmentations I've known," it's shocking how few of these steps were followed, or even articulated, by huge organizations and their consultants and research providers. If you follow these ten steps, the chances of a catastrophically bad segmentation effort – which happens more often than anyone in marketing would like to admit – will be really low. More importantly, the chances that the segmentation will be robustly adopted – which is, after all, the most important measure of success – will be much better.


How to Build a Product-Centric Data Science Organization

Realistically, most data science is heads-down, unpredictable activity. Typically, a data scientist is given an objective, such as "tell me the part-worth of TV in my advertising mix" or "come up with a classifier to put a lead into the correct segment," and has to figure out how to solve the problem with a vast array of tools and potential data sources. This might take hours or days, but the iterations and deep thought required to get to an answer are substantial and differ (at least in my experience) from task to task.

What am I driving at? Data science is at its core an individual voyage of discovery. Big-S Science provides frameworks around which to structure this voyage – the old "hypothesis / background / procedure / results / discussion" framework that we were all taught in high school chemistry. But ultimately, inside of "procedure," there are hours and hours spent arms-deep in the data, pounding away at Stack Overflow, that are really hard to codify.

This is a tough challenge for organizations because, in my experience, organizations don’t function well or scale relying on the “individual hero” or “craftsman” model.

To scale, organizations need to productize—that is, find common approaches, algorithms, methods, data structures—to solve common problems.

These common approaches can be configurable. Yet ultimately, a product approach needs to scale, improve itself, and be reused by others.

I’m not saying that data scientists need to abandon all creativity and individuality. I do think that truly scalable data science organizations are possible. They will ultimately make everyone, including the super smart creative data scientist, happier in their job. There will be less reinventing the wheel, less manual work, and a better understanding of value provided across the organization.

Fortunately, a lot of the infrastructure and best practices for scalability already exist. We can borrow the best parts of the software development lifecycle, specifically the Agile methodology, to evolve into a “product-centric” data science organization. I’ve had success building this kind of organization, and below are my nine best practices that work, along with some specific tools, frameworks, and processes that go along with them.

1) Implement Version Control

It still surprises me how many data science organizations don't use version control. Whatever you're using – Git hosted on GitHub, Bitbucket, etc. – code sitting around on C:\ drives or some SharePoint site, untracked, is the lowest-hanging fruit there is. Every data scientist should not only be using version control but should have a branching strategy: Don't commit to master! Do have a coherent naming convention! Do add commit messages that describe what you did!

2) Separate Projects from Libraries

Data scientists should do individualistic, heads-down work, but they also need to be trained to notice when their work has gone from a one-off to something reusable, and then transition it to a library (or package). Libraries and packages have different requirements when it comes to documentation (i.e., readme files), parameterization, and general code elegance. To help spot when a "project" is turning into a "product," code reviews are a great help.

3) Implement Reproducibility

A data science organization should avoid, at all costs, producing reports, PowerPoints, dashboards, etc. that were created by dragging, dropping, and clicking. Instead, invest the extra time to build that PowerPoint programmatically – using officeR in R or python-pptx in Python – or to code that dashboard with a tool like Shiny. If you need a report generated periodically, build it in something like R Markdown or knitpy, or just share a Jupyter notebook. This will pay dividends both when someone asks you, literally, "how did you get that number," and whenever you want to reuse anything you just built.
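As a minimal sketch with python-pptx (the slide content and file name are placeholders):

```python
from pptx import Presentation
from pptx.util import Inches

prs = Presentation()
slide = prs.slides.add_slide(prs.slide_layouts[5])  # "Title Only" layout
slide.shapes.title.text = "Weekly Cross-Sell Readout"

# Because the number is generated in code, "how did you get that number"
# always has a reproducible answer.
box = slide.shapes.add_textbox(Inches(1), Inches(2), Inches(8), Inches(1))
box.text_frame.text = "Test: 3.2% response  |  Control: 2.6% response"

prs.save("weekly_readout.pptx")
```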

4) Implement Agile Project / Product Management

If I had to pick, this might be the most important best practice. There are many aspects to Agile, but the concepts of the sprint, the backlog, and strategic priorities, arrayed on some kind of shared board, really help data science teams work. For data science, I like a less structured tool like Trello better than a more structured tool like Jira; lists can evolve flexibly, and sprints can be less rigorously defined. If a data science team is split across a bunch of projects, those can be separate boards or lists, depending on how you want to roll. What matters is that everyone can clearly see what everyone else is working on, along with what is up next, for a longer-term picture. Writing good functional requirements on each card or ticket is an art in and of itself. While it shares some things in common with software, writing up tasks and stories for data science has its own unique tips and tricks (out of scope for this blog).

5) Write Down Procedures

Duh, right? But this is often missed. Every data scientist might have their own way of doing things, and that's fine within reason, but ultimately, procedures, environments, security protocols, etc. all need to be written down. I've had great success keeping these as markdown readmes on GitHub, but as long as there's a single source of truth, and people know where to find it, you're good.

6) Have Code Style Guidelines

It's not essential, but standardizing code can help a lot in productization. For example: Are comments on separate lines or to the right of code? What is the right level of commenting? How important is it that data scientists make code Pythonic? Should we put helper functions into one file, or split them up (and at what point)? This might be something that evolves over time, as lead data scientists develop a point of view that is evidence-based and not just personal preference.

7) Have Standups and Demos

Again, basic Agile stuff, but be sure to have your data science team get together in the morning to go around the horn, talk about what they're doing today (and any blockers they may have), and just generally keep on the same page. I've had people push for this to be a "just the facts" meeting, and I get that, but I personally err on the side of letting people talk. Ideas are created, people are cross-pollinated, and ultimately a few extra minutes of talking leads to non-linear gains in productivity, in my opinion.

8) Have Standard Data Definitions

If you're dealing with the same data structures over and over, don't let every data scientist have their own way of describing the data. Using an example from the sales and marketing world: if we're constantly looking at opportunities, take the time to define an XML (or flat) definition of an opportunity. Keep it in version control (in the libraries section) and reuse it. Have your database team or developers write an endpoint to represent it, and use it in all your code – in the long run, this parameterizes your variables and makes them product-ready. Important: Don't write a different data definition for every different system. Spend a couple of hours writing an adaptor from each system to your standard definition so that others can figure it out.
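A minimal sketch of the same idea in Python, with hypothetical field names; the pattern is identical for an XML schema or a database endpoint:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    """The one shared definition every project and library reuses."""
    opportunity_id: str
    account_id: str
    stage: str
    amount: float

def from_crm_export(row: dict) -> Opportunity:
    """Adaptor: map one system's column names onto the standard definition."""
    return Opportunity(
        opportunity_id=row["OppId"],
        account_id=row["AcctId"],
        stage=row["StageName"],
        amount=float(row["Amount"]),
    )
```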

9) Enlist the Data Team

The folks that are responsible for building your data warehouse/lake should also get in on the fun. While I find that database developers tend to do their own thing, all of the SQL code they are writing should be in the same version control system, and it’s helpful to cross-pollinate with the data scientists. A lot of times, huge light bulbs go off when the data scientists tell the database folks why they need a certain view. Conversely, data scientists who grumble about latency or speed might see the light when they hear the database engineer’s side of the story.

There are probably many more best practices I could share, but those listed above are the lowest-hanging fruit. At MarketBridge, we have the added layer of essentially exposing these best practices to our clients, making them a part of our product-centric data science team. It's how we make sure that our results are actionable and reproducible. It's also how we get prototypes from a data science-generated idea into a formalized product. That's a topic for another time.

What About Small Data? Part 1

Big data remains all the rage. Since exploding onto the scene in roughly 2012 with the popularization of the Hadoop framework, big data has dominated the "LinkedIn press." This myopia is certainly not without its merits; machine-generated data contain vast amounts of signal just waiting to be extracted and put to use. Indeed, many machine learning applications require real-time, streaming data for use as features, and likewise need at the very least hundreds of thousands of training examples.

Big Data vs. Small Data

Big data sets are, first and foremost, really big. Observations range from the 100,000s (at a minimum) to the hundreds of billions. Big data sets are typically semi-structured, and while data munging is required, I've found that it tends to be pretty straightforward. Likewise, when missing data occur, it's usually not that big a deal – assuming the missing data don't have a pattern, it's safe to delete the observations. Finally, when it comes to finding more signals, it's usually a matter of finding another vendor or "thing" generating data, assuming you can key on something in the main data set.

Small data is still out there, though! You can’t just make a small data problem big. Small data isn’t collected in hours; it usually takes at minimum weeks to collect it. The number of observations ranges from the 1,000s to the 100,000s; and the number of “1’s” in the data set can sometimes be really, really small—think 10 or 100. Small data tends to be relational. Missing data is precious, and thus can’t just be ignored. And finally, finding more signals usually takes a lot of creativity.

Big data vs. small data, at a glance:

  • Data collected in: seconds, minutes, or hours (big) vs. days, weeks, or months (small)
  • Number of observations: 100,000s to 100s of billions (big) vs. 10s to 100,000s (small)
  • Typical structure: semi-structured (big) vs. relational (small)
  • Data munging effort: moderate (big) vs. hard (small)
  • Missing data: ignore or interpolate (big) vs. not so fast… (small)
  • Finding more signals: find another vendor or "thing" (big) vs. get creative (small)
  • Topical areas: B2C, digital (big) vs. B2B, sales, events, above-the-line (small)
  • Limiting factors: processing power and storage (big) vs. time and creativity (small)

Many of the best problems out there today—the ones that will yield the most incremental fruit, in terms of leads, opportunities, loyal customers, dollars, etc.—have to deal with small data. At MarketBridge, we’re experts on small data, and I want to share some of the best practices for working in this messier, potentially more lucrative ROI realm. In this blog post, I’ll go through the first best practice, “providing insight along the recall-precision gradient.”

Part 1: Provide Insight Along the Recall-Precision Gradient


For our example, we’ll look at a wins and losses dataset for a large, considered-purchase solution designed for small and medium businesses. The training data consists of 10,000 accounts that have been touched by inside sales, yielding 35 wins, over a period of six months. From a feature perspective, assume that there is data on marketing stimulus; firmographic data; and competitive situation in the account.

As a data scientist, I'm tasked with providing a scored list of accounts, based on the training data, for a new set of 10,000 target companies. The list will be used by sales reps to prioritize their activities. I develop my awesome model and give Joe, a sales director, 1,000 likely accounts to call. I tuned the model for maximum recall – that is, to pick up every potential buyer, because I don't want to miss any revenue. Joe asks me how I built the model; I tell him I extracted a bunch of features predicting likelihood to buy, and there was a bunch of code and statistics, and his eyes glaze over and he says, "just give me the list."

A week later, I go by Joe’s desk and ask him how the list did. He is furious. He’s called 50 people on the list, and not one was interested. I tell him that the list was optimized for recall and to make sure he didn’t miss a single likely buyer. He walks away in a huff. How do we get around this problem?

Well, obviously, we retune our model to optimize for precision, right? I want to minimize the number of false positives in my model, so Joe doesn’t call a bunch of people who have no interest in what he’s selling. So, in this case, I give him a list of 35 predicted positives. The problem with this model is that he’s missing a lot of people he should be calling, and he tells me that he wants the “not so perfect” leads too, because he can actually adjust his activities to win in deals that might not have been perfect fits. I call this the “adaptive component” of high-touch marketing; it often goes hand-in-hand with small data analytics, and I’ll get back to this in a future post. And anyway, he’ll be done making calls in four days, but then what should he do?

In looking at this problem, we can learn a lot from the tradeoffs that epidemiologists make when building tests for diseases. Of course, the optimal goal for any model is perfect recall and perfect precision: all positive cases are predicted correctly (recall), and no false positives are generated (precision). In the real world, this simply doesn't happen; we are constantly trading off between capturing all positives and avoiding false positives – between being aggressive and being conservative in our predictions.

An epidemiologist might tune a model to predict the presence of an extremely contagious disease, where the consequences of a false negative are grave, for maximum recall (true positives / (true positives + false negatives)). Likewise, she might tune a model to predict the presence of a disease that isn't at all contagious, and extremely rare, to maximize precision (true positives / (true positives + false positives)) and avoid scaring a lot of people who are actually totally healthy.

Multiple Models to Activate Human Intelligence

So what’s the key? Luckily, I don’t have to give Joe just one list. Instead I should give him three, along the recall-precision gradient:

  • A "primo" list, maximizing precision – that is, "few false positives"
  • A "likely suspect" list, trading off precision and recall, perhaps maximizing F1 score
  • And a "wide net" list, maximizing recall – that is, "get all of the likely buyers into a list"

Technically (using Python in this example), the lists could be generated by running a grid search (via `GridSearchCV` from scikit-learn, for example), maximizing recall (wide net), F1 (likely suspect), and precision (primo), as sketched below. Of course, this is just general guidance; there's nothing magical about the number three, or about maximizing these exact metrics. The point is: give practitioners choices along the recall-precision decision boundary and teach them how to use this newfound intelligence.
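A minimal sketch, substituting synthetic data for the real account features and a random forest as a stand-in classifier:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in: ~0.4% positives, echoing the rare wins in 10,000 accounts
X, y = make_classification(n_samples=10_000, weights=[0.996], random_state=0)

param_grid = {"max_depth": [3, 5, 10], "min_samples_leaf": [1, 5, 20]}
models = {}
for metric in ["precision", "f1", "recall"]:  # primo, likely suspect, wide net
    search = GridSearchCV(
        RandomForestClassifier(class_weight="balanced", random_state=0),
        param_grid,
        scoring=metric,
        cv=5,
    ).fit(X, y)
    models[metric] = search.best_estimator_  # one list per point on the gradient
```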

1) Primo Model

                 Predicted Negative   Predicted Positive   Total True
True Negative                 9,950                   10        9,960
True Positive                    15                   25           40
Total Predicted               9,965                   35       10,000

2) Likely Suspect Model

                 Predicted Negative   Predicted Positive   Total True
True Negative                 9,860                  100        9,960
True Positive                     1                   39           40
Total Predicted               9,861                  139       10,000

3) Wide Net Model

                 Predicted Negative   Predicted Positive   Total True
True Negative                 9,000                  960        9,960
True Positive                     0                   40           40
Total Predicted               9,000                1,000       10,000

Now, Joe has three lists that he can use to tune his calling strategy. He can use the primo list of 35 first, perhaps putting maximum effort into tuning what he says and the content he provides. He can then go on to the likely suspect list of 139, perhaps realizing that there might be something he needs to tweak for these. Maybe the reps making the calls in the training data weren't quite handling these calls correctly, so Joe uses some human intelligence to boost his performance. And finally, he might send the 1,000 wide-net prospects "keep warm" emails to nurture them and bring them along.

This is obviously an overly simple example, but this heuristic has worked very well for MarketBridge’s clients handling small data.

One final note: upon reading this, one of my friends asked me a good question: "Why not just provide the probability of a win? Why the three lists?" My answer: there's nothing wrong with providing the win probability too, but I've found that explaining things via three lists, tuned to three different points on the decision gradient, works better. In simple terms, it helps activate the "human intelligence" component in the small data world and drives better adoption and usage.


Coming Full Circle: Bringing Art Back to Marketing Science

Yesterday was my first day (back) on the job at MarketBridge leading marketing analytics. I’ve been gone for nine years, and it is truly great to be back at this special organization. This return to my roots, with person after person telling me “welcome home,” brought on a wave of nostalgia, and caused me to do quite a bit of reflection on the differences in our industry between then and now. It’s also caused me to think about what it means to come full circle.

I looked out over a room of much younger faces and told a few stories of the early days of marketing analytics. I'm sure I bored the hell out of everyone. I talked about how "back then" (in the dark ages, 15 years ago) you needed an expensive statistics software license (e.g., SAS or SPSS), bare-metal servers, and some fairly arcane knowledge to do marketing analytics. Relatively few people were doing really interesting stuff.

Today, that software is free, anyone can buy time and space on AWS for pennies, and pretty much any model you can think of has a package or library available in R or Python. Consequently, the term "marketing science," which some friends of mine at IBM cooked up, isn't a novelty at all anymore.

All of the tools, Stack Overflow articles, LinkedIn groups, and degree programs have certainly made marketing science and analytics a lot more accessible for organizations. For example, building a propensity model is certainly a lot easier and less expensive. If the goal is to score a lead on its likelihood to close, and assuming the data exists, an analyst with a few years of experience can do a serviceable job with a Jupyter notebook and an AWS login.

Today, data science and marketing go together like peas and carrots.

However, the state of the art in marketing science has changed in the intervening years too. Just like animals and plants rapidly adapt to environmental stresses in nature, buyers have evolved as well. While storage, processing power, and algorithms have all been getting cheaper, so have marketing touches. Buyers have become increasingly immune to crude, and even “clever” tactics. Scoring models and attribution are now table stakes.

Buyers now want to be taught, not told. They want to explore, not be led. They want to be rewarded for their time, not feel dirty after a ten-minute prospecting call.

I am calling this new reality Marketing Analytics 2.0, mainly because I’m not very creative.

What does this mean?

In this new world, marketing scientists will face new, more interesting challenges. This will mean pushing beyond the obvious data sources to find new signals that define buyer needs at a more meaningful level. It will mean going back to some of our “basic” tools, such as rich buyer journey research or deep customer insights, and driving these “artful tools” back into decision models. And it will mean putting the power of analytics into more hands throughout the marketing and sales department. Think of an analytics “toolkit,” usable by creatives, researchers, pre-sales people, etc., that brings all “analytics components” to bear on whatever problems these practitioners are working on. In other words, bringing human intelligence together with artificial intelligence.

These are just some initial thoughts, but I’m increasingly convinced that Marketing Analytics 2.0 will bring an explosion of creativity to organizations and that it will ultimately drive better outcomes for both companies and buyers. This new era in marketing analytics won’t be defined by better algorithms. It will be defined by more comprehensive, creative thinking, and by remarrying marketing analytics with the creative side of marketing.

I’m really excited to have come full circle, and to start this journey, again.

 

Why MarketBridge Hired a Chief Analytics Officer

Ok, forget the lofty title. The last thing most companies need is another new C-level position. Nevertheless, every company from a Fortune 500 titan to a 20-person law firm needs the equivalent of a CAO.

Why? And what is the role of a Chief Analytics Officer?

Here is the unspoken fact: the data analytics “knowledge gap” between Boards of Directors/CEOs and their in-the-trenches employee teams is expanding every day. The upside opportunities and downside risks of data analytics are huge and game-changing. As both the CEO of a private company and a Director of a large public company, I see both sides. And to be honest, I learn every day how little I know and how much I need to learn.

The Chief Analytics Officer (CAO) is a rare breed of executive who can bridge this gap between a) deep knowledge of rapidly advancing data and analytics technologies and applications, and b) fundamental strategic business decisions that affect revenue growth, customer loyalty, and brand trust.

Let’s start with the upside opportunities. In every industry, emerging new competitors are challenging legacy incumbents with new, data-driven strategies. Intelligent data analytics are being embedded in both products themselves, and in how those products are taken to market.

As devices and internet-enabled “things” proliferate exponentially, the data that these devices generate expands at an even faster rate. It takes constant diligence to keep up with which data are relevant and available. The next data stream you don’t know about could be an opening for a competitor who finds it first. Likewise, inside the organization, data must be used to segment, target, and engage with customers and prospects in real time. Buyers are being hammered with message after message, and marketing fatigue set in long ago. With the right data—and the right business know-how to use these data—the marketing and sales pipeline can go from being a race to the bottom to a value-driving experience.

With so much at stake, ask yourself a simple question: how well does your company’s Board/CEO really understand big data, machine learning, and artificial intelligence? Does the CEO have the time to stay on top of all of the new data sources coming online, and all of the ways that analytics can move prospects from “ignore” to “teach me”?


The job of the Chief Analytics Officer, in short, is to:

  1. Monitor data signals coming online both inside and outside the enterprise, and to proactively connect these to profit-driving business actions.
  2. Recruit, develop, and retain a world-class data science team that can react to challenges in a responsive, agile way.
  3. Strategically deploy big data, machine learning, and related disciplines across the enterprise, in business-impacting ways.
  4. Drive knowledge of data science across disciplines outside of the data science “center of excellence.”
  5. Future-proof the organization by anticipating its data and machine learning/algorithm needs 3-5 years down the road.

 

At MarketBridge, we welcome Andy Hasselwander on board, not only to help us develop and execute our own data analytics opportunities but, more importantly, to help our C-level clients and their teams keep pace with and get ahead of the competition.

 

5 CEO Principles for Developing an Applied Analytics Strategy

As AI and Facebook’s use of data both gain greater attention from the media, customers, investors, and regulators, it’s time for CEOs to get deeply engaged in an Applied Analytics Strategy. So what is an Applied Analytics Strategy? Applied analytics is the strategic use of data for decisions within a given environment: in this case, business, marketing, and sales decisions. Yet too many C-level execs are abdicating major strategic decisions to their data scientists, data vendors, and software suppliers. Claiming “lack of expertise” in applied analytics is no longer an acceptable position. CEOs and their leadership teams must roll up their sleeves and get engaged.

5 basic principles CEOs must embrace:

1) Deep customer data analytics is a competitive requirement.

Yes, CEOs need to be very concerned about customer privacy, but leading competitors (particularly cloud-based start-ups) are pushing the envelope on predictive and AI applications to better target and serve your customers.

2) Customers expect you to know them better.

Underneath privacy concerns, there is still a growing customer expectation that you have the data on hand to understand customer interests, and to sell to and service them better, ethically and productively. Amazon, Netflix, Google, etc. are conditioning consumers (and therefore B2B buyers) to expect more targeted content and tailored solutions based on their data profile.

3) Don’t let your strategy be driven by data and software vendors.

Too often I see mid-level executives, absent a top-down applied analytics strategy, spending on what vendors want to sell them rather than on what the business needs. With everything moving to the cloud, data and software vendor overload may actually be taking your business backward. The 80/20 rule (20 percent of your activities will account for 80 percent of your results) applies to both data and software.

4) Reverse engineer your data and Applied Analytics Strategy.

Rather than buying what vendors promise, talk to your front-line marketing, sales, customer service, and operations executives to determine what they need to succeed. For example, your sales team needs three basic questions answered by Applied Analytics: a) whom should we target, b) what product(s) and messaging should we use for these unique prospects, and c) how should they be engaged (face to face, phone, email, website, etc.)?

5) You already own a data gold mine.

The most powerful data is already inside your internal systems. Unfortunately, this data is often siloed, either physically or politically. Specific data on existing customers and their patterns is within reach, and using that information, you can make informed assumptions about new customers. There is a wealth of powerful, usable data in your existing systems: CRM, purchase history, customer service inquiries, product usage (including IoT), website downloads, and social media dialogue.

CEOs and their leadership teams can no longer defer their Applied Analytics Strategy to the “analytics experts” alone. Get engaged, get knowledgeable, and make smarter investments.

 

 

AI Will Eat Millions in B2B Sales & Marketing Spend

Get ready for the disruptive change coming to Sales & Marketing budgets…

Most B2B revenue funnels are built under the assumption that 99% of sales and marketing efforts are wasted on leads that never close (Forrester). Many companies have built expensive “demand waterfall” management software systems, processes, and large staffs that cycle through 199 dead-end opportunities to find one single deal.

It’s the old needle-in-a-haystack model.

But what if you had a metal detector to tell you within +/- 10 inches where the “needle” is?

Artificial intelligence (AI) may well be the metal detector that renders millions of investment dollars in demand waterfall infrastructure obsolete. Think of it this way: if a company can use AI to identify the “most likely 5 buyers” from among every 200 prospects, that entire demand creation and lead management waterfall infrastructure can be downsized – big time! List purchasing, marketing automation, content creation, telemarketing staff, etc. – boom, all reduced significantly. And it’s already starting: our clients are seeing declining ROI on email marketing, outbound calling, and endless content creation. Why? Because these tactics are incredibly inefficient!
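
To put a rough number on the downsizing claim, here is a toy sketch; the scores are random stand-ins and the top-5 cutoff is just the ratio from the paragraph above.

```python
import numpy as np

# Illustrative: 200 scored prospects, of which we pursue only the top 5.
rng = np.random.default_rng(7)
scores = rng.random(200)  # stand-in for model-predicted buying likelihood

pursue = np.argsort(scores)[::-1][:5]  # the "most likely 5 buyers"
deferred = len(scores) - len(pursue)

print(f"Actively work {len(pursue)} prospects; "
      f"defer or drop {deferred} ({deferred / len(scores):.0%} of the funnel)")
```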

I will write more in future posts, but simply stated, AI can deliver the right buyers for each opportunity, leading to shortened sales cycles and increased conversion rates without all the noise of “tire kickers.” The revenue and cost impact is huge.