9 Requirements for Effective Cross-Selling in Financial Services

As part of its Future of Financial Services series, the World Economic Forum recently released a comprehensive and far-reaching report on the “New Physics of Financial Services” and the impact that digital transformation and the rise of Artificial Intelligence are having on the financial services ecosystem.

There is no doubt that these trends will change the operating models of certain firms and continue to impact the competitive dynamics of the industry in a number of ways in the medium term. In the short term, they are creating challenges and presenting opportunities for how traditional FSIs acquire, grow and retain customers. This is especially true in the area of cross-selling and retention. If traditional FSIs aren’t accurately anticipating the needs of their customers or understanding their risk of defection, they are at greater risk of disintermediation by new entrants or competitors who do.

Cross-selling is the fastest, most profitable path to incremental revenue growth, period. Assuming a firm has a 30% wallet share within an account, capturing just five more points of share grows account revenues by roughly 17%. And with the cost of new customer acquisition generally estimated at 3x to 25x the cost of cross-selling, the economics of cross-selling are very compelling. Recognizing this, cross-selling has become a strategic priority for many financial services firms in recent years – yet many firms still appear to be far from realizing its potential.
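To make the arithmetic concrete, here is a minimal sketch with illustrative numbers (assuming a fixed total account wallet):

```python
# Illustrative wallet-share math (hypothetical numbers, fixed total wallet).
total_wallet = 1_000_000          # total annual spend in the account ($)
current_share = 0.30              # 30% wallet share today
new_share = current_share + 0.05  # gain five points of share

current_revenue = total_wallet * current_share   # $300,000
new_revenue = total_wallet * new_share           # $350,000

growth = (new_revenue - current_revenue) / current_revenue
print(f"Revenue growth from +5 points of share: {growth:.1%}")  # ~16.7%
```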

Why is this? Some firms may be hesitant given the number of actions and warnings directed at FS firms in the CFPB’s enforcement files, where the cross-selling culture was perhaps a bit too aggressive – “Detecting and Preventing Consumer Harm from Production Incentives,” as the Bureau refers to it.

For others, it may be the challenges associated with overcoming organizational complexity that spans multiple lines of business, diverse functional areas and disparate technologies and business processes that must be coordinated to deliver effective cross-sell programs.

In many instances, it may be that cross-selling responsibility is left to the “last mile” (the end of the buying journey), where relationship managers or sales resources simply don’t have the time or skills to implement programs effectively at scale. Or, efforts are driven by product owners who take a product-centric view of cross-selling as opposed to the customer-centric view that characterizes successful cross-selling programs.

Here then is our list of nine requirements for building effective and scalable cross-selling programs in the financial services industry:

1) Adopt a customer-centric view.

Too many cross-sell programs are still organized around lines of business and driven by a product-centric view of cross-selling. Effective cross-sellers build a customer-centric view of opportunity and take a longer-term view of customer value. The most effective cross-sell efforts are led by segment marketers who have responsibility for specific customer segments, working with the product marketing teams and sales channels to coordinate on execution.


2) Establish a single view of the customer.

Patently obvious, but many firms still have a difficult time building a unified view of the overall customer relationship. This includes all product usage and transactional history, service and support history, etc., as well as identifying and integrating external data sources that provide additional insights into buyer behavior and attitudes.

Identifying patterns of behavior across products is essential for understanding and anticipating customer needs. In turn, it informs segmentation and personas in #3 below.

But don’t wait for the completion of an expensive, multi-year data warehouse project. Agile firms today are taking advantage of low-cost storage and data lake architectures to quickly build data repositories. This gives data science teams quick access for specific use cases without the processing overhead associated with large, inflexible data warehouses. Liberate insights from the tyranny of workflow tools and warehouses!
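As a hedged sketch of what that quick access can look like in practice, assuming cross-product transaction and service history have already been landed as Parquet files in object storage (the bucket paths and column names below are hypothetical):

```python
import pandas as pd

# Hypothetical paths into a data-lake landing zone (e.g., S3 or ADLS).
transactions = pd.read_parquet("s3://fsi-data-lake/raw/transactions/2018/")
service_history = pd.read_parquet("s3://fsi-data-lake/raw/service_events/2018/")

# Join product usage and service history into a single customer view
# for a specific cross-sell use case -- no warehouse change request required.
customer_view = transactions.merge(service_history, on="customer_id", how="left")
print(customer_view.groupby("product_line")["amount"].sum())
```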

For more information on the Promise of the Marketing Data Platform, Click Here


3) Build actionable buyer segments and personas.

Utilize the data from #2 – supplemented with primary research – to build actionable segmentation and personas. These will allow you to personalize your interactions with existing customers in the ways they have come to expect: based on the totality of their relationship with you and reflecting an understanding of their needs. Maintain the assignment of segments and personas in your customer database to ensure the segmentation is actionable.

For more information on Creating Actionable Segmentations and Personas, Click Here


4) Create a scalable analytical engine targeted to specific, prioritized use cases.

Use an agile, reproducible approach to developing and managing a library of predictive cross-selling and retention models. Consider segmentation, RFM, CLV, next-logical-product, retention, and marketing mix optimization models, among others.

Customer growth, share growth, wallet growth, account expansion—all of these strategic goals raise the same question: how do I get a given customer to buy more, or buy something new? Cross-sell models use data about the current installed base and compare it with data on other accounts that have upgraded. They are a close cousin of market basket models on the consumer side, analyzing how customers’ “baskets” of products typically evolve as new items are added.
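As one illustrative sketch of the idea (not a production recommender), a simple next-logical-product score can be derived from product co-occurrence in the installed base; the file and column names below are hypothetical:

```python
import pandas as pd

# Hypothetical holdings table: one row per (customer_id, product) currently owned.
holdings = pd.read_csv("holdings.csv")  # columns: customer_id, product

# One-hot matrix of product ownership per customer.
basket = pd.crosstab(holdings["customer_id"], holdings["product"]).clip(upper=1)

def next_logical_products(owned_product: str, top_n: int = 3) -> pd.Series:
    """Share of owners of `owned_product` who also own each other product."""
    owners = basket[basket[owned_product] == 1]
    scores = owners.mean().drop(owned_product).sort_values(ascending=False)
    return scores.head(top_n)

print(next_logical_products("checking_account"))
```

The same conditional logic underpins more sophisticated approaches; the point is simply that the model compares what an account owns today with what similar accounts went on to buy.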

Deploy a disciplined, scalable approach to managing your data science operations to ensure reproducibility, scalability, and accountability of your investments in AI.

For more information on Creating a Product-Centric Data Science Organization, Click Here


5) Listen to your customers, stalk your competition, foresee your disruptors.

Robust cross-sell programs require a deep understanding of your accounts. This includes how they perceive your brand, what competitors are selling into those accounts and what new disruptors are emerging.

Best-in-class FSIs leverage “always-on” market intelligence capabilities that track customer feedback, competitive movements, and emerging disruptors. This intelligence is fed back into the product development process, innovation centers, marketing messaging, Account-Based Marketing activities, and sales enablement programs.


6) Create content aligned with personas and buyer’s journey.

Today’s FS customers engage with your organization via multiple channels. With 24/7 availability of your content and resources, it can be challenging to provide a consistent experience across every channel. Think of a potential buyer researching new savings and investment options to kick off 2019. This buyer can easily research opportunities online, get side-by-side comparisons from financial providers, chat online or over the phone with an investment expert, and then set up an appointment at the bank or financial institution for a face-to-face deep-dive discussion.

If the user experience across all these channels isn’t integrated – and customers receive different responses across different channels from the same company – the likelihood of successfully cross-selling or even retaining those customers goes down considerably. Utilizing a consistent framework and taxonomy to map content to the buyer’s journey is critical to ensure a consistent, personalized and relevant experience regardless of product, channel or stage in the journey.

For more information on Creating a Consistent Customer Experience, Click Here


7) Implement a disciplined cross-channel contact strategy and cadence.

Disciplined cross-sellers implement and adhere to well-structured contact strategies that are based on analytics and insight (i.e., using the scores developed in #4 above). These strategies help determine what the cadence should be, which channel should engage, and what the product/solution and message should be.

This requires close coordination between marketing and sales. One of the most successful cross-sell programs we have ever seen rescored its entire customer population each week using updated transactional and market response data, then applied a set of dynamic business rules to determine where each opportunity should be routed the following week: either into a well-defined nurture stream for the next logical product or to a sales agent.

In either event, the contact strategy and cadence were well-defined for each product and each step in the buyer’s journey. Content and messaging for each segment were defined and utilized as “fuel” for each outreach and delivered into the marketing automation tools and the CRM (See #8 below). The business rules were dynamic and could be modified weekly based on underlying business conditions. This forced an interlock each week between sales and marketing and helped develop shared accountability for results.
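A minimal sketch of that kind of weekly score-and-route loop is below; the thresholds, field names and routing rules are hypothetical stand-ins for the richer rules the actual program used:

```python
import pandas as pd

def route_opportunities(scored: pd.DataFrame,
                        sales_threshold: float = 0.7,
                        nurture_threshold: float = 0.3) -> pd.DataFrame:
    """Apply simple, editable business rules to weekly cross-sell scores."""
    def rule(row):
        if row["cross_sell_score"] >= sales_threshold:
            return "route_to_sales_agent"
        if row["cross_sell_score"] >= nurture_threshold:
            return "nurture_stream_next_logical_product"
        return "no_contact_this_week"

    scored = scored.copy()
    scored["next_action"] = scored.apply(rule, axis=1)
    return scored

# Each week: rescore the base with fresh transactional and response data,
# then hand the routing table to marketing automation and the CRM.
weekly_scores = pd.DataFrame({
    "customer_id": [101, 102, 103],
    "cross_sell_score": [0.82, 0.45, 0.12],  # hypothetical model outputs
})
print(route_opportunities(weekly_scores))
```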

To view the FS Cross-Sell case study, Click Here


8) Insert insights and content into workflow tools.

The best insights and AI are useless unless they can be easily understood and acted upon by your sales and marketing channels. Delivering predictive analytics, relevant content and personalized messaging into existing customer contact workflow platforms is critical for successful cross-selling at scale. We have found that loosely coupled architectures are dramatically better over the long run than tight integrations with SaaS MarTech platforms.

Most companies use multiple platforms to manage the end-to-end customer journey. Inserting insight into each of these platforms—and gathering feedback from the customer interactions those platforms direct—is critical to managing the process holistically and understanding where the customer is in the process at any given point in time.

Rather than working to integrate multiple systems, the better alternative is to deliver analytics and content via a set of standardized endpoints that any CRM or MarTech platform can use, then write quick integration layers for specific systems. When that next great piece of technology is rolled out—or when Salesforce raises its prices by 20%—it’s no problem. It just requires updates to an adapter layer, versus tearing out a bunch of proprietary Apex code from Salesforce and trying to remember what the developer was thinking.

Fortunately, all CRM and marketing automation systems—including Salesforce—share the same basic architecture. The objects Account, Contact, Lead, Opportunity, Product, etc. don’t really vary, and haven’t for 25 years.

Cross-selling recommendations also share the same DNA. Typically, the API calls for cross-selling comprise several microservices that, taken together, form the basis of the contact and content strategy outlined in #7, woven into the CRM/MarTech stack.
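To illustrate the loosely coupled pattern, here is a sketch of a canonical recommendation object with thin per-platform adapters around it; the field names are illustrative, not any vendor’s actual schema:

```python
from dataclasses import dataclass

@dataclass
class CrossSellRecommendation:
    """Canonical recommendation payload produced by the analytics layer."""
    account_id: str
    next_logical_product: str
    propensity_score: float
    talking_points: list

def to_salesforce_task(rec: CrossSellRecommendation) -> dict:
    # Thin adapter: map the canonical object to CRM-shaped fields.
    # Field names here are illustrative rather than a vendor schema.
    return {
        "WhatId": rec.account_id,
        "Subject": f"Cross-sell: {rec.next_logical_product}",
        "Description": "; ".join(rec.talking_points),
        "Priority": "High" if rec.propensity_score >= 0.7 else "Normal",
    }

def to_marketing_automation(rec: CrossSellRecommendation) -> dict:
    # A second adapter for a hypothetical marketing automation platform.
    return {
        "contact_ref": rec.account_id,
        "campaign": f"nurture_{rec.next_logical_product}",
        "score": rec.propensity_score,
    }

rec = CrossSellRecommendation("ACCT-001", "home_equity_line", 0.81,
                              ["Rate promotion ends this quarter"])
print(to_salesforce_task(rec))
print(to_marketing_automation(rec))
```

When the platform changes, only the adapter functions change; the canonical object and the models behind it stay put.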

For insights on the Microsoft-SAP-Adobe Open Data Alliance, Click Here


9) Relentlessly test, measure and track.

Finally, any successful cross-sell program – in fact, any sales and marketing program – requires a relentless focus on test and learn, agile pilots, on-going measurement and optimization from start to finish.

This process starts with establishing a proforma ROI when any new cross-sell model is slated for development. Writing down what you are trying to accomplish, and estimating how its effectiveness will be measured, puts the entire team on much better footing for success. This should be done before a single line of code is written or a query is executed.

To learn more about Measuring Return on Analytics, Click Here

Implement agile pilots using test-and-learn methods such as A/B testing to quickly gain insight into the combinations of factors that drive the best results, and then scale. The A/B test, every direct marketer’s core learning method, divides marketing into test and control cells, and the responses are then compared using a simple z-test of proportions to pick a winner. This approach is simple and effective, and given sufficient volume, it can be turned into a learning factory for the organization.
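A hedged sketch of the underlying calculation, with made-up counts (in practice most teams would reach for a library routine such as statsmodels’ proportions_ztest rather than hand-rolling it):

```python
from math import sqrt
from scipy.stats import norm

# Hypothetical results: conversions / contacts in each cell.
test_conv, test_n = 230, 5000        # 4.6% response in the test cell
control_conv, control_n = 180, 5000  # 3.6% response in the control cell

p1, p2 = test_conv / test_n, control_conv / control_n
p_pooled = (test_conv + control_conv) / (test_n + control_n)
se = sqrt(p_pooled * (1 - p_pooled) * (1 / test_n + 1 / control_n))

z = (p1 - p2) / se
p_value = 2 * norm.sf(abs(z))  # two-sided test of equal proportions

print(f"lift: {p1 - p2:.3%}, z = {z:.2f}, p-value = {p_value:.4f}")
```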

Finally, develop and deploy a holistic, customer-centric marketing analytics framework that allows you to consistently track, measure, manage and optimize all of the activities occurring with your customers across all products, marketing and sales channels. This will provide visibility into overall results and allow you to make more informed decisions on how to effectively grow your installed base. Don’t forget to optimize your customer contact strategies and cadence on a continuous basis.

To learn more about building a Marketing Analytics Framework, Click Here.


Remember this: “If traditional FSIs aren’t accurately anticipating the needs of their customers or understanding their risk of defection, they are at greater risk of disintermediation by new entrants or competitors who do.”

Having an agile go-to-market plan to counter market disruptors by building a customer-centric approach to cross-selling will be key to success. Check out our latest whitepaper, “The Last Mile Opportunity,” for five transformational principles to scale operations and build revenue success in the “last mile” of the customer buying journey.


Cyborgs Will Beat Robots: Building a Human-AI Culture

There are two competing AI narratives bouncing around the internet. On the one hand, AI is seen as a future scourge, a technology that once unchained will push humanity past a singularity. Past this singularity, we cannot predict what will happen—but many think it won’t be good [1].

The other camp is dominated by AI optimists like Ray Kurzweil, who believe that human-machine integration is inevitable, is a great thing that will usher in a new golden age for humanity, and has been happening for years. Many people don’t realize that their brains have already been rewired with a Google API; when we don’t know something, we’ve gotten incredibly good at opening a browser, executing a pretty optimal search, and finding the answer (if there is one)—dramatically increasing the productivity and intelligence of those who use this API wisely. This camp still sees a singularity on the horizon, but in their view, humans and machines will merge, creating “cyborgs” that integrate the best elements of human intelligence and artificial intelligence, and this is a good thing.

I wrote this article to help companies and executives navigate this coming cyborg transformation. Just as in past technology waves, the companies that succeed will not be the ones with the best algorithms; the algorithms will largely become table stakes. In this new reality, the winners will do a better job of transforming their employees into better “AI interfacers.” In other words, the companies staffed with motivated employees who understand how to use AI and are equipped to interface with the technology will ultimately stand out from competitors by developing better use cases, integrating AI into their value-added business processes, and using AI in concert with human intelligence to drive better outcomes.

Good News: We Are Still Early

Early in the personal computer revolution, the distance between the most advanced computer engineer and a 12-year-old kid messing around with his Apple IIe wasn’t really that large. It probably seemed huge at the time, but the reality was that the basics of that machine were still simple, and someone with a soldering iron and a few screwdrivers could actually tinker, maybe upgrading the RAM or adding a graphics card. Try doing that in 2019 with a MacBook Pro. The components could be seen. The circuits could be understood. Programming languages, while clunky by today’s standards, were BASIC (sorry).

I would argue we’re roughly at the Apple IIe stage right now with artificial intelligence. A hobbyist can download open-source software like Python, the scikit-learn library, Jupyter, and Git, and be off and running building an OCR (optical character recognition) algorithm. In fact, one could argue that AI technology is more democratized than PC technology was in the mid-1980s. At that time, it would cost at least a few thousand dollars to get up and running with a good IBM clone, and programming languages had to be purchased as physical boxes of floppy disks. Learning to program or build hardware required physical books; today, it’s possible to take free courses on AI from Stanford on YouTube, and any error typed into Google returns an immediate solution courtesy of Stack Overflow.
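As a sketch of how low that barrier really is, the classic scikit-learn handwritten-digits example, a toy cousin of OCR, fits in about a dozen lines:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small built-in dataset of 8x8 handwritten digit images.
digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42)

# Train a support vector classifier and check holdout accuracy.
model = SVC(gamma=0.001)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2%}")
```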

In other words, an interested, talented person can achieve basic artificial intelligence literacy today pretty easily, if they put their mind to it, and the distance between there and a self-driving car isn’t insurmountable. Granted, millions of developer hours have been spent tweaking each neural net and environmental sensor on that car driving around Pittsburgh, but a tinkerer can basically explain the theory behind how it all works, if they want to. The net-net is that it’s still possible to build an army of AI citizen scientists at your company who will fully embrace the unknown advancements of the next decade—and that not doing so will put your company at risk of faltering, just as slow movers on technology did in the 1990s.

New Role: The AI Interfacer

Companies that successfully transitioned from offline to digital in the 1990s and 2000s all had one thing in common: they built a strong layer of interface employees. We’ve all been there: Bob is the master of database X. He works 70 hours a week; he can answer any question; people worship him, and he has total job security. However, that database never reaches its full potential. Hundreds of reports are written, but few are used. Integrations happen, but fall down over the last mile. The problem in this scenario is that few people have the skills (or the interest) to meet him halfway. There are no interfacers for Bob.

The company Bob works at spends millions on expensive proprietary software and armies of consultants to install and configure it. The bare-metal servers at this company are just as powerful as the servers at their competitor—and yet, it just never seems to “click.” The competitors pull away, and before you know it, this company is on the trash heap. Sound familiar?

This analogy extends to AI flawlessly. An AI system can be built to (in theory) predict the perfect marketing touch at a given point, or detect fraud with uncanny accuracy, but without human advocates and interfacers feeding the algorithm data, providing improvement suggestions, and driving adoption, these systems will fail—or at the very least, they won’t evolve.

AI interfacers are to 2019 what computer literate employees were to 1989, or what database-literate people were to 1999. They may not be developing machine learning algorithms, but they know what a machine learning algorithm does. They may not be on the team developing the self-driving car, but they can explain how a self-driving car is put together. They are the key to AI’s success over the last mile.

AI Interfacers come in five flavors, not mutually exclusive:

  • User: Can interface with AI endpoints and integrate them into their day-to-day processes;
  • Explainer: Understands how machine learning algorithms are trained and validated, and how these can chain together to form systems, and most importantly, teaches others about them;
  • Product Manager: Can see how systems and processes can be improved by AI, and can prioritize these improvement points;
  • Data Gatherer: Understands how artificial intelligence gets information from the world (IoT, big data, environmental sensors, users, etc.);
  • Prototyper: Can prototype simple AI systems using machine learning algorithms (in other words, tinker).

The AI User is equivalent to someone who liked and was facile in using email in 1989, or an SAP power user in 1999. These are individuals who, instead of running away from AI, actually attempt to integrate it into their day-to-day work, realizing that it will make their jobs easier and allow them to surf to higher value-added activities (and perhaps get a promotion).

The AI Explainer is a natural teacher who understands how AI elements are knit together within the core business processes of the company, and evangelizes these stories to others. He is the executive who tells the same story over and over again at staff meetings until it has been internalized; the line manager who explains to the sales rep why the AI-based next-logical-product algorithm works; the new employee who teaches upwards to their 45-year-old supervisor what machine learning really is, using simple, approachable language.

The AI Product Manager might not be an actual product manager, but has that DNA. They are constantly stepping back and seeing how AI does and could improve existing processes. They are passionate about driving better performance and outcomes, and tell the stories across the company that drive innovation.

The AI Data Gatherer sees how information flows through the company—from customers, marketing campaigns, the supply chain, IoT, etc.—and makes connections. They see potential signal for learning algorithms, and they see how AI algorithms can feed data into other systems. For example, this individual might see that internet-enabled cooling units report on energy usage every hour; she surmises that when units spike above two standard deviations for long periods, another chiller might be required. She recommends to the cross-sell AI team that they use these data in their algorithm, along with her hypothesis.
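A minimal sketch of the feature she is describing, using hypothetical telemetry columns and her two-standard-deviation rule of thumb:

```python
import pandas as pd

# Hypothetical hourly telemetry: one row per (unit_id, timestamp, kwh).
usage = pd.read_csv("chiller_telemetry.csv", parse_dates=["timestamp"])

def sustained_spike_flag(df: pd.DataFrame, min_hours: int = 24) -> pd.Series:
    """Flag units whose usage stays more than two standard deviations above
    their own mean for a sustained stretch of hours."""
    def per_unit(g: pd.DataFrame) -> bool:
        threshold = g["kwh"].mean() + 2 * g["kwh"].std()
        above = g.sort_values("timestamp")["kwh"] > threshold
        # Length of the longest consecutive run of above-threshold hours.
        longest_run = above.groupby((~above).cumsum()).sum().max()
        return bool(longest_run >= min_hours)

    return df.groupby("unit_id").apply(per_unit)

# Candidate input for the cross-sell model, keyed by unit/account.
print(sustained_spike_flag(usage).head())
```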

The most advanced non-engineer role is the Prototyper—the individual who is comfortable tinkering and messing around with AI technology. This is usually a business power user who is impatient for results. These individuals can frustrate engineering teams (think: “stepping on my turf”), but at successful, agile companies, interdisciplinary work is encouraged. We ask AI engineers to understand the business problem; successful companies encourage business leaders to get their hands dirty (in a safe environment, of course).

Principles for Building Your Bench of AI Interfacers

There were several traits that companies that successfully built up a strong bench of digital natives had in common, and a few traits that struggling companies shared as well. There is no reason to expect that the core principles have changed, but I’ve adapted them for AI.

The actions below are all totally doable. None of them require spending millions of dollars on a quantum computer, or hiring 50 new developers to go “do some AI stuff.” Rather, they are mainly HR and management actions. If they don’t get done, it’s probably because, like most things worth doing, they don’t drive immediate ROI. They are cultural changes that must be driven from the top (the first Do below).

Do’s

  1. Hire a Lifetime Learner CEO / Exec Team. It all starts at the top. If you have a CEO who won’t take the time to understand AI at a foundational level—how it works, how it learns, existing use cases—then you’ll be toast. Keep in mind, I’m not talking about hiring a programmer data scientist—I’m talking about someone with an insatiable thirst for learning who never gets tired of reinventing her skillset.
  2. Hire New Cohorts, Every Year. Companies that don’t hire young people for prolonged periods quickly fall behind new waves. AI is no exception. I first heard the term “digital native” in 2004, from a technology company marketing executive who lamented his inability to make the transformation to digital. This company had kept old managers in their seats for years (they were the original crew) and now needed a talent infusion. If he’d hired one or two 22-year-olds every year, he wouldn’t have been playing catch-up.
  3. Have a Citizen-AI Training Curriculum. One thing that didn’t exist ten years ago was the MOOC. If you wanted a marketing manager to learn the basics of ad exchanges, she either had to learn on the job or go take a course at a university. Today, motivated learners can take AI courses from basic to fairly advanced, essentially for free. As a manager, it’s your duty to (1) create a curriculum based on existing MOOCs and post it on your intranet / wiki, and (2) give employees the time and space they need to get up to speed.
  4. Co-Create, Foster Agency. If an AI-based next logical call algorithm is implemented in a call center, don’t allow it to be cynically jammed in with an explanation of “just do it.” This will drive resentment. Instead, train users on how the algorithm was built. What are its inputs? What algorithms were used to train the model? How do we know it works? Involve your employees in co-creating the AI interfaces; you’ll find that they quickly surface problems and blind spots, and will happily use it / work with it. Analogies for this exist all over, but perhaps the most powerful is the Andon Cord used in lean manufacturing whereby any employee can “stop the line” to identify problems with production.
  5. Force Human Interaction Interfaces. If AI algorithms are only allowed to talk to one another, we might actually get to the “grey goo” scenario pretty quickly, and I’m only half kidding. Rather, focus on human-understandable interfaces. The Google search example I started with is a good example of a human-AI interface that is mutually reinforcing. Concretely, building out a next-logical-product algorithm in a CRM system shouldn’t just spit out a SKU. Expose the key inputs and the predictive factors, and allow the human to adjust parameters and see how the model changes. Perhaps most importantly, make the model’s reasoning something a person can inspect and question (see the sketch after this list).
  6. Promote Tinkering. Siloes and a “guild mentality” kill innovation. Most Silicon Valley companies have done a good job promoting a tinkering culture. However, in too many other places, “stay in your lane” dominates, causing people who stick their neck out to get whacked. AI is no exception. If you want people to stay around, let them play around. Make sure you have safe spaces set up where nothing can be broken—but innovation beats parochialism any day of the week.
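Picking up the CRM example from Do #5, here is a hedged sketch of what a human-readable recommendation payload could look like; the feature names and the deliberately simple linear scoring model are assumptions for illustration:

```python
import numpy as np

# Hypothetical logistic-style scoring: coefficients from a trained model.
feature_names = ["months_since_last_purchase", "has_checking", "avg_balance_k", "web_visits_30d"]
coefficients = np.array([-0.04, 0.9, 0.02, 0.15])
intercept = -1.2

def recommend_with_reasons(customer_features: np.ndarray, sku: str, top_k: int = 3) -> dict:
    """Return the recommended SKU plus the factors that drove the score,
    so the salesperson sees *why*, not just *what*."""
    contributions = coefficients * customer_features
    score = 1 / (1 + np.exp(-(intercept + contributions.sum())))
    top_idx = np.argsort(-np.abs(contributions))[:top_k]
    return {
        "recommended_sku": sku,
        "propensity": round(float(score), 3),
        "key_factors": [(feature_names[i], round(float(contributions[i]), 3)) for i in top_idx],
    }

customer = np.array([2.0, 1.0, 35.0, 6.0])  # illustrative customer
print(recommend_with_reasons(customer, sku="premium_savings"))
```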

Don’ts 

  1. Don’t Go Build Stuff Just Because AI. Perhaps the fastest way to alienate your workforce, and make them AI opponents rather than AI proponents, is to hit the panic button and go off half-cocked on an AI initiative without a clear business reason. A lot of companies did this last year with blockchain. “We need to do something with blockchain, because… blockchain!” (Guilty. Mea culpa.) So don’t do this with AI. Wait for the real use cases. If your employees are excited about it, it’ll be a lot easier, and it’s a really good indication that it’s worth doing.
  2. Be Cautious of Black Boxes. Proprietary black boxes may be awesome, but even more so than with enterprise software, companies need to use extreme caution before committing to them. AI is, by its very nature, opaque. Buying from a vendor who won’t expose the inner workings adds another level of opacity, and will make it much harder for employees to interface and find agency. It’s fine to test out proprietary solutions, but be aware of what you’re committing to.
  3. Don’t Build a Monolith. Finally, don’t build the one AI ring to rule them all. When I see IBM advertising Watson as the solution to everything, I definitely get Lord of the Rings flashbacks. I understand the argument for centralizing everything, but if you’re trying to build a cyborg organization, it seems like a giant mistake. Instead, building smaller AIs that humans can work with directly, AIs that communicate with one another but aren’t a hive mind, seems a safer way to go—in more ways than one.

Conclusion

Companies that successfully navigate the coming AI transformation will build an army of AI Interfacers, made up of power users, product managers, teachers, data plumbers, and tinkerers, who will drive a positive feedback loop between the power of AI and human intelligence. These companies will make the creation of this culture a priority, with concrete management, HR, and technology decisions designed to prioritize the human-AI interface, not the raw power of the algorithms. These “Cyborg Companies” will emerge as the clear winners over the coming decade.

[1] In his book Superintelligence (2014), Nick Bostrom laid out many potentially dangerous outcomes for an unchained, general-intelligence AI: a “grey goo” of endlessly self-replicating nanomachines that takes over the planet; a resource-consuming algorithm gone awry whose sole goal is factoring prime numbers, eventually building a Dyson Sphere around the sun to achieve its objective; and even more malicious scenarios evoking devious, trickster AIs who fool researchers into mailing them what they need to build a machine to escape their human prison. This is pretty dark, and while I do think we need to be worried about these dangers, they aren’t the focus of this article.

The Opportunities and Challenges of Marketing Analytics for FinServ Firms

The arrival of the 21st CMO Survey in our inbox prompts us to lift our heads up from our day-to-day work with Financial Services clients and think retrospectively about where the industry has come in the last year. It affords us an opportunity to compare our “on-the-ground” client experiences with the experiences of the financial services survey respondents. What are the trends we are seeing within our clients, and how do those compare to the surveyed firms? What are the best practices that are emerging? What are some of the unique marketing analytics and data challenges in this industry, and how can we help clients solve them?

More importantly, it helps us sharpen our focus on those issues that will be critical to achieving breakthrough go-to-market improvements moving forward into 2019.  Specifically, what roles can and should analytics, customer data and insights play in helping to drive performance and growth in the Financial Services sector, and how can companies ensure they are receiving maximum value from their investments in analytics/data?

With that context, here are some highlights based on the survey findings, and our thoughts about implications for the financial services industry moving forward into 2019.

FinServ Expects to Double Marketing Analytics Investment in the Next Three Years – But Challenges Remain

Despite the hype and promise of AI, Predictive Analytics and Big Data, survey results suggest that spending on analytics as a % of total marketing budget has not changed significantly over the last 6 years. (For FinServ respondents, this mean was 7.05% in the August 2018 survey).

FinServ over-indexes on analytics spend versus many other industries in the survey, but we have found that market leaders spend significantly more than 7% on analytics and data. As an industry with a long actuarial history in credit and risk analytics, the extension of these capabilities to sales and marketing has come relatively quickly, as most FinServ firms have a strong history of working with data and data science. That said, our experience has shown that credit risk and actuarial experience do not necessarily translate into effective marketing analytics as FinServ firms look to build their marketing analytics competency. Different skill sets are required to convert customer insight into effective customer engagement across multiple touchpoints.

Despite several years of relatively flat investment in marketing analytics, FinServ survey participants expect to more than double their investments in marketing analytics over the next three years to realize the promise of advanced analytics. Still, these investments come with a number of challenges. The biggest barrier to increasing investment in analytics/data identified by survey participants is the “lack of processes and tools to measure success through analytics”. Without clear ROI, executives are hesitant to increase investment – something our Chief Analytics Officer discussed in an earlier blog (see below).

See how leaders are instrumenting marketing analytics efforts to measure “Return on Marketing Analytics”

To justify investment increases, analytics leaders in FinServ firms must put measurement frameworks, tools and processes in place to make analytics a truly measured function, with clear quantifiable benefits to the enterprise.  Functional groups like the Marketing Accountability Standards Board are working to “establish marketing measurement and accountability standards across industry and domain … for the guidance and education of business decision-makers and users of performance and financial information”.  This includes guidelines for analytics.  FinServ firms should consider participation in promulgation of these standards, and/or adoption of these standards as they are deployed.

The second most cited challenge to increased investment is the “lack of people who can link marketing analytics to marketing practice”.  At MarketBridge, we refer to this as “activation”. Without the ability to drive analytic insight into action across multiple points of customer engagement, FinServ companies (and indeed, companies in all industries) will continue to underinvest in analytics.

Solving the activation challenge will require investment in workflow integrators and/or partners who can embed analytic output into the multiple platforms, technologies and workflows that support customer engagement across the entire customer journey.   This is especially true when providing insights to sales teams and intermediaries.  Analytic insights and prescriptive guidance must be communicated to salespeople in their vernacular, and with sufficient context and explanation to promote adoption – otherwise insights will be ignored. This “last mile” activation continues to be a barrier to adoption for many firms.  Leading companies are now spending nearly 50% of their analytics investment on driving insights into customer-facing business processes and workflows.

Investments in Data Will Increase – But the Mix is Changing

58.3% of FinServ respondents believe that their investment in online customer data will increase over the next two years as FinServ firms continue to look for timely customer signals, while only 25% believe their investment in third-party customer data will increase.

Clearly the hunt is on to find more timely, verifiable and predictive customer intent signal data online. This will require more agile methods of identity resolution, increasingly flexible data storage and stronger data governance. Central, flexible data repositories such as data lakes should be well-stocked with transactional data, but FinServ firms should also be experimenting with less structured data sources, such as call center recordings that can benefit from speech analytics, particularly in high-value customer interactions.

“Trusting Relationship” Remains a Top Customer Priority – Followed Closely by “Excellent Service”

No surprise that Trusting Relationship is still a top priority for FS customers in light of the 2008 global meltdown. Despite significant regulatory changes intended to restore liquidity and protections to the system – the passage of Dodd-Frank, the creation of the CFPB, and the (recently vacated) attempt to implement the DOL Fiduciary Rule – trust is still an issue for nearly everyone in this sector.

Trust extends beyond “do no financial harm” to include more abstract harms such as the misuse of customer information and abuse of trust. Proper protection and use of customer data, security, and privacy will remain top priorities in an increasingly digital and interconnected world – especially for financial services firms.

See how financial services firms can use content to increase customer trust: “Why FinServ Businesses Need To Rethink Content. Period.”

Interestingly, 73% of those surveyed believe “Excellent Service” is in the top 2 list for their customers.  When viewed through the data and analytics lens, this means FS firms must do a better job of developing a 360° view of their customers across multiple lines of business and multiple channels of customer engagement.  This remains a major challenge for most of the clients we work with today, and will likely persist into 2019 and beyond.

Market leaders will need to recognize their customers wherever and whenever they choose to engage, understand the full extent of that customer relationship, and anticipate their needs to deliver Excellent Service and to build a Trusting Relationship.  This will require not only breaking down internal data, technology and organizational silos, but also increased investments in gathering and utilizing online customer data and signals to understand and anticipate customer needs.

The Majority of Organic Growth (78%) Is Coming from Existing Products (57%) and New Products (20%) into Existing Markets

Financial Services firms are investing roughly three-quarters of their marketing spend in driving growth from existing markets. This means a continued focus on retention and cross-sell activities – areas where predictive analytics can be especially powerful in helping to identify, target and prioritize white space, cross-sell and new product opportunities within the existing client base.

The good news is that the application of analytics to these growth areas can create tremendous efficiencies, thus freeing up additional investment capacity for other higher-cost growth areas.  (See for example, the cross-sell case study included in this newsletter for an example of the significant productivity improvements analytics can drive when applied to this important use case.)

Building a prioritized roadmap of analytics use cases is critical for Financial Services firms. Deploying analytics into marketing and sales workflow and driving adoption are key.  The resulting efficiencies in existing markets can be deployed against more expensive growth in new products and new markets.

In summary, FinServ firms will likely be increasing their investments in marketing analytics and customer data –  but they must do so in a disciplined, prioritized and measured fashion.  These investments must be aligned with strategic customer priorities, like building trusting relationships and providing excellent customer service. This will require increased investments in customer data and agile data management capabilities to support the increase in online customer data that must be tracked, stored and analyzed.  Emphasis and focus must be placed on activating the resulting insight at each customer touchpoint, and on building a centralized capability to manage the customer experience across multiple channels.   Value will only be realized if companies can convert these insights into action, and coordinate that action across all of the customer touchpoints.

Beware of False Profits, Which May Come to You in AI Clothing


With all of the hype around AI, don’t overlook the importance of Human Intelligence to ensure your analytics efforts are addressing the right problems

With apologies to Matthew 7:15 for the tacky paraphrase, companies today must remember to look beyond the science of AI and machine learning alone to identify areas where analytics will help drive revenue growth.

Recent McKinsey research identifies “Analytics Translators” as a key role to help companies derive value from their increasing investments in data science, AI and deep learning by translating insights into action. They also talk about the “perfect union” between creativity and analytics that is cross-pollinated in market leaders today.   At MarketBridge, we call this the “Human Intelligence” side of the equation.

In the rush to build data science skills and capabilities, companies must not lose sight of the need for creativity and a deep understanding of the business and its customers to both identify the best focus areas for AI, as well as to effectively “translate” analytic insights into business outcomes in new and creative ways.

Case in point

We recently worked with a large distribution company that had observed a slow decrease in customer spend for a specific product across many of their accounts. Over time, these accounts were exhibiting small decreases in both order frequency and volumes in the category, but in many instances these small category decreases were masked by offsetting increases in other categories within the account.

By the time this SKU-specific decline was brought to the account manager’s attention, it was often too late – the customer was sourcing from other competitors, and remote sites were not conforming with centralized purchase agreements.

The category owners wanted to develop models that would help predict which accounts were likely to decline by a certain percentage in that category in the next quarter. Armed with that data, they could point their account reps to those accounts sooner, to identify and mitigate the root cause of any decline wherever possible.

We built a machine learning model, first flagging those accounts that had exhibited that type of slow decline, and then using an algorithm to pick out the features that predicted the decline.

Once that model was developed and validated, the entire population was scored monthly on their probability of exhibiting this slow decline, and those accounts with a higher risk profile were flagged for treatment by both sales and marketing (e.g., category promotions, rebates, account reviews) to try to retain that category business.
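A simplified sketch of that modeling flow is below; the feature names, file name and risk cutoff are hypothetical, and the real project used a richer feature set and validation process:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical account-level training table; `slow_decline` is the label
# flagged from historical order frequency and volume trends.
accounts = pd.read_csv("category_accounts.csv")
features = ["order_freq_trend", "avg_order_size_trend", "tenure_months", "num_sites"]

X_train, X_test, y_train, y_test = train_test_split(
    accounts[features], accounts["slow_decline"], test_size=0.3, random_state=7)

model = GradientBoostingClassifier(random_state=7)
model.fit(X_train, y_train)
print(f"Holdout accuracy: {model.score(X_test, y_test):.2%}")

# Monthly scoring run: flag higher-risk accounts for sales/marketing treatment.
accounts["decline_risk"] = model.predict_proba(accounts[features])[:, 1]
at_risk = accounts[accounts["decline_risk"] > 0.6]  # hypothetical cutoff
```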

One of the key indicators the models identified was that non-seasonal reductions in order frequency and average order size for these items were predictive of longer-term category decline. While not a surprise, the process of systematically evaluating the portfolio and identifying and ranking those accounts at greater risk put a much greater focus on the issue. Armed with the outputs from that model, the client was able to implement marketing and sales programs that helped reduce decline in those higher-risk accounts by identifying “early warning signals” much sooner and flagging those accounts for engagement.

But this “decliners-only” view of the issue did not allow the client to easily see another important dynamic in the marketplace. When we looked at overall customer activity within the category, we uncovered another important reality.

There were a number of customers who had significant increases in their order frequency, volumes and average order value within this category. As a category owner, one might be tempted to think that the firm was doing very well with those customers: they were using more of the product, they were engaging more frequently, and category revenues were going up. What could be wrong?

In fact, what the analysis showed was that many clients who exhibited these apparently positive behaviors ultimately contracted to zero. This binary behavior—going from a great account to nothing—was much more serious than small declines and shifts, and could have been missed entirely if the right question hadn’t been asked. When we looked deeper, it became apparent that the increase in overhead and effort associated with managing this category prompted buyers to give up trying to manage it and instead hand it off to a third party under a value-added managed services contract.

Even though the client provided similar solutions, their decliners- and category-only view of the problem blinded them to the opportunities within their own portfolio to migrate the customer to a different solution—until it was too late.

So, what are some of the best practices you can put into place to avoid falling into a similar trap? Here are some suggestions based on work we have done across numerous clients and best practices we see in the marketplace:

1. Start each analytics initiative with a “Delphi Method” brainstorming session

We like to start each analytics initiative with a brainstorming session, where we bring together a number of cross-functional stakeholders to review the problem statement and hypothesize and prioritize scenarios that should be investigated in the analytic process.

In the example above, that team included category managers and marketers, account reps and category product specialists, market intelligence personnel, and customer service and support. Using a structured process such as Delphi allows you to identify, capture and prioritize hypotheses from customer and category “experts”, which can and should be tested in the analytic process.

2. Identify external data sources—including talking to customers—that provide additional insight into customer behavior

In the case above, we incorporated a third-party data source that tracked search data for those clients with increasing spend. We were able to identify select clients that were actively searching for managed solutions as their category usage increased, and we incorporated that signal into the next set of models we developed, which were focused on target accounts for the managed solutions category.

An even more obvious way to gather external signal is to actually call customers. This can be formal “qualitative research”, or it can be just picking up the phone, or talking to the sales reps who talk to customers every day.

3. Expand beyond line-of-business organizational boundaries

In many instances, the focus of an analytics initiative is limited by “who pays for it”. In this instance, one category was looking to maximize its LOB revenue rather than looking at the portfolio of business with the customer and understanding how to maximize the overall market basket with that customer. Looking at relationships between LOBs, product usage and purchase patterns is key to identifying relationships that may not be readily apparent when viewed through a single-LOB lens.

Integrating data science expertise with business domain knowledge and creativity is key to driving value from your analytics investments, and ensuring that you are focused on solving the right problems.   Combining AI and HI, data and content, creativity and analytic rigor—while often challenging—will yield much greater returns.

Measuring Return on Analytics

The Challenge: What Value Does Data Science Drive?

First of all, a note on scope and audience: this article has to do with marketing analytics / data science (I use the terms interchangeably), and is written as such. While these concepts should absolutely be useful for executives in other areas (biotech, manufacturing, operations, etc.), all of the specific examples are marketing-related. It’s also worth pointing out that this article is written with a management / executive audience in mind. Some of the concepts get technical, but not too technical. I try to provide concrete best practices and even algorithms to measure what analytics and data science actually get an enterprise in terms of financial results.

With that out of the way, there is a little talked-about challenge in these days of massive hype around data science—what does all this data, integration, instrumentation, and fancy modeling actually get me, as an executive allocating budgets? I have spoken to many CMOs who absolutely understand how critical marketing analytics is, but struggle to convince their peers in the C-Suite, or the Board, how specifically it will drive results.

This isn’t because analytics isn’t driving results; it’s because generally these CMOs (or CROs, or VPs of Marketing Operations) haven’t put together a cohesive measurement framework. In fact, higher-level executives may understand why analytics is important, but they simply lack the vocabulary or frame of reference to ask the right questions to heads of analytics, data science, etc. This communication breakdown is common; McKinsey recently identified the role of the “analytics translator” as a must-have role to address this issue.

Measurement of analytics and data science is a classic case of “the cobbler’s children have no shoes.” Analytics is all about data and measurement, yet it rarely thinks about measuring itself. It’s all very meta. Ignoring this issue, however, is increasingly becoming untenable. And, when the next recession hits—and it will—analytics and data science might find itself moving from a “golden child” with countless unfilled job openings to a “nice-to-have” utility that is fighting for dollars in the annual planning cycle.

With this in mind, I’ve laid out three concrete steps that heads of analytics or data science can start taking, now, to lay the groundwork to make analytics a truly measured function, with quantifiable benefits to the enterprise. As a bonus, putting these steps in place will help make the analytics function much more responsive to feedback, resulting in better performance.

Three Steps to Take to Start Tracking Return on Analytics

These steps are based on observations of analytics departments and groups that have actually started to successfully measure the value they drive.

1. Setting the Goalposts: Start Each Analytics Engagement by Filling Out a “Pro Forma ROI” Form

At the outset of most every data science project, stakeholders, modelers, and data people all have a good idea of the goal. However, just like with any kind of coding, ignoring documentation leads to bad places. Writing down what you are trying to accomplish, and estimating how its effectiveness will be measured, puts the entire team on much better footing for success. This should be done before a single line of code is written or a query is executed.

The pro forma ROI form should have a few key components. First, a clear name for the project by which everyone can refer to it. Second, an executive summary of what is being done. Third—and here’s where the ROI comes in—the specific metric that the project is attempting to influence. For example, “increase lead conversion rate”, “increase new customer acquisition”, or “increase call center utilization rate.” It’s not enough to just specify the metric, however. It’s important to write down:

  • Where the metric is coming from (the specific system(s), table(s), and field(s));
  • What the current measured value is;
  • The objective change to the metric (e.g., from 5% to 7%)
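One lightweight way to capture this, sketched below, is a small structured record that travels with the project (a shared document or wiki template works just as well); the field and project names are illustrative:

```python
from dataclasses import dataclass

@dataclass
class ProFormaROI:
    """Filled out before any code is written or any query is run."""
    project_name: str
    executive_summary: str
    target_metric: str       # e.g., "lead conversion rate"
    metric_source: str       # the system(s), table(s), and field(s) it comes from
    current_value: float
    objective_value: float   # the estimate -- a reference point, not a promise
    notes: str = ""

example = ProFormaROI(
    project_name="Next Logical Product v1",
    executive_summary="Score existing customers for the most likely next product.",
    target_metric="cross-sell conversion rate",
    metric_source="CRM opportunity table, stage = Closed Won",
    current_value=0.05,
    objective_value=0.07,
)
```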

Some data scientists might object to writing down an objective for the change; it might be a W.A.G. or “wild-assed guess.” However, in my experience, these “WAGs” are wonderful to have around as reference points throughout the work. They clarify “what we thought success would look like.” If the achieved results are much lower, or non-existent, the team has a clear record of what didn’t work—which is just as important as a record of what did work. The image below shows a sample of this “pre-ROI worksheet.” Note that the entire worksheet can be downloaded at the bottom of the article as a PDF.

Forcing yourself to say what you expect from a project is a valuable process.

2. The Feedback Loop: Post-Project Debriefs and Follow-Ups

It almost goes without saying that after the project is complete, a post-mortem should be conducted. But again, in so many analytics organizations we’ve seen, this isn’t really done, at least not systematically. It’s much more exciting for most people to rush to the next cool challenge, instead of slowing down for several hours or even days, and taking stock of what was accomplished.

Here, the advantage of the pro forma ROI form is very apparent. It might have been weeks, months, or even over a year since the project commenced. What were the objectives again? What metrics did we say we were going to use to evaluate success? It is interesting to see how closely the project stayed to its original objectives. Keep in mind, veering off the original objectives isn’t a bad thing.

First, write down what the results of any models were. What was the lift identified in the validation set? Were there any in-market tests? What were those results? If the project wasn’t an optimization project, write down a summary of what was accomplished. How many segments were identified? What was the assignment rate? This should all be entered on a post-mortem form, similar to the pro-forma form. This step can be done immediately upon completion of the work.

Second—and this is the tough part—call internal stakeholders two or three months after the work has been delivered, to understand exactly how it is being used, or how the models are performing. This is simply not done by most analytics managers or executives, but it has the single most important impact for the ongoing performance of analytics. By calling and seeing how the work is being used—and having an actual verbal conversation, with questions and answers and back and forth—the true learnings will emerge.

After doing the pro forma ROI, the initial project post-mortem, and collecting actual user feedback, managers can now begin collecting digestible, three-part project summaries. These summaries can be kept in a database format if you want to be fancy, but they can just as easily be documents (I prefer markdown). They can be put on a wiki, but they don’t have to be; they can be left in a Dropbox folder, too. The important thing is to do them.

3. Bringing it all Together: Build a Return-on-Analytics Dashboard

Having a wiki or a folder of analytics projects is a great start, but after a few of these have been done, it’s a great benefit to have an accessible summary of the “current value of analytics” to the company.

This is not a dashboard of all of the cool models and ROI calculations that have been created, piled together in one location. Rather, it is a systematic accounting of the benefits of those efforts, grouped by theme. Some of these benefits can be qualitative, but it’s better if most are quantitative. The dashboard should fit one page, perhaps with links back from the summarized statistics to the detailed project readouts—the three-part documents mentioned above. An example dashboard is shown in the image below:

A Return on Analytics dashboard makes it clear what analytics and data science has done for the company—and where it's headed.

Ideally, the dashboard should be built on a reproducible stack, just like the analytics themselves (i.e., no click-and-drag dashboarding tools; instead, use a code-based framework like D3 or Shiny for R).

In summary, the dashboard should be the single source of truth for the inevitable question that comes up around planning time: “Can you show me what analytics got us last year?”

This brings up an interesting question of timeframe. As any reporting specialist knows, a report on performance can either be in an “income statement” format (what did analytics deliver to the company over a given timeframe, for example, one year), or in a “balance sheet” format (what is the current value of all of my analytics models and efforts). The simplest approach is the balance sheet approach. This is just a bubbled-up version of the three-part summaries, summed together. The income statement approach is more challenging, because it requires a lot more analysis. To tackle this, you’ll need to estimate the collective performance of models over a time period. I suppose it’s possible to completely automate this, but it would be a big lift. I’d love to hear about anyone who has successfully done so.

Side Note: The Three Components of a Dashboard

In my experience, an analytics or data science team has three primary functions:

  1. Surfacing Insights
  2. Optimizing Operations
  3. Measuring Results

Almost any project can be placed into one of these buckets. For example:

  • Segmentation, exploratory analytics, persona development, digital listening, and text analysis for themes all fit in the “Surfacing Insights” bucket.
  • Propensity models, media mix models, lead scoring, test-and-learn, and pricing optimization all fit in the “Optimizing Operations” bucket.
  • ROMI (return on marketing investment), incrementality analysis, web analytics, and brand health measurement all fit in the “Measuring Results” bucket.

Herein lies a problem: of these three families, only “Optimizing Operations” has a cut-and-dried, calculable benefit to the company—for example, propensity model A increased response rate by 0.11%, which drove an additional 561 applications over a year. It’s harder to state the value of an insight, or the value of truly understanding the contribution of above-the-line TV advertising to sales effectiveness.

So, what to do about Surfacing Insights and Measuring Results? For these, the manager should rely on usage metrics. Everyone knows a successful segmentation is one that is used; thus, the analytics dashboard should reflect the places in which a model or an analysis is actively being used. For example, if an insights project surfaced a set of personas, simply list the campaigns or places where those personas are in the field. This certainly isn’t rocket science—but I guarantee you that it is rarely, if ever, written down.

I leave the specifics of coding this dashboard to the data scientists; everyone will have a cool way to compile the statistics gathered in post-mortems, from stakeholders, and from actual model performance. Bringing all of these data together is a fun challenge that, once framed out for the team, will be gladly tackled.

Conclusion: Measured Things Perform Better

Analytics teams can suffer from an aimless, drifting mentality. Open-ended research and tackling the toughest problems is great, until it isn’t. As data science—and marketing analytics—matures, it increasingly needs to move from a “cool utility” to a measured source of business value.

Some may revolt at this concept. “It’ll get in the way of our actual work,” or “analytics is a utility; we just do what business stakeholders tell us to do” might work for now, but it won’t work for much longer. The best analytics / data science executives should be instrumenting their functions now, both to prove their value when the time comes and to (1) give their teams a clear sense of direction and purpose, and (2) identify best practices by looking, clear-eyed, at the results of the work that has been done.

The form below allows you to download a nicely formatted PDF of the pro-forma / post-mortem document mentioned above.

Marketing Analytics Family Tree

Marketing analytics is a broad, “meta” field, combining elements of marketing strategy, data science, database management, digital technology, primary research, and psychology, to name a few. To help explain what it is, we’ve created this taxonomy of marketing analytics—a “family tree”—that breaks the field down from high-level to more detailed.

The taxonomy has four levels of hierarchy. The highest level splits analyses broadly into aggregated and discrete classes. Aggregated analyses look at data grouped together—for example, by month, product category, or customer segment. Discrete analyses look at the individual “data objects”—for example, leads, customers, or accounts. The next level down—call it “function”—looks at large categories of analytics that might typically be found on a Director’s business card. For example, “Director of Consumer Research”, or “Director of Customer Analytics.” The third level of the taxonomy, discipline, looks at a thematic area in that function, for example, “qualitative research” or “predictive prospecting.” Finally, at the lowest level are the specific analytics tasks or methodologies that an analyst might be doing on any given day, for example, “social listening” or “customer reactivation.”

Each task has a fairly detailed explanation of what it is below the tree. Where links to greater detail might be helpful, those have been added; but in many cases, they weren’t needed.

You’ll notice that the terms “machine learning” and “artificial intelligence” don’t appear in this taxonomy. This isn’t because they were forgotten; rather, they are techniques that can be applied to many of the discrete-type problems noted in the taxonomy. In some cases, specific tools are mentioned, like neuro testing, because they are so uniquely suited to a particular task.

Assuming people find this hierarchy valuable, we are absolutely open to editing it and keeping it fresh with suggested adds, deletions, or merges. Please reach out with suggestions to our Chief Analytics Officer, Andy Hasselwander, at ahasselwander@market-bridge.com.

Download a high-res, printable version of the Marketing Analytics Family Tree below.

What About Small Data? Part 2

Getting Back to Growth by Playing Small Ball

The ADBUDG curve is a handy, 40-year-old heuristic for modeling marketing spend versus return, first used for broad-reach advertising. The concept is pretty simple:

  1. The curve starts out flat, as dollars are invested to get breakthrough with a group of consumers
  2. Then, the curve gets steeper as marginal returns reach profitable levels
  3. Finally, the curve flattens as the market is saturated with messaging, and the advertising no longer has much marginal effect.

ADBUDG curve
Certainly, both direct and broad-reach marketers know of this curve, even if they’ve never heard of the word “ADBUDG.” There is a maximum amount of goods or services you can get the market to buy before marginal marketing dollars do not drive a profitable return.
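For readers who prefer code to curves, here is a small sketch of the classic ADBUDG response function (Little’s formulation) in Python. The parameter values are invented for illustration only: min_resp is the response at zero spend, max_resp is the saturation plateau, and c and d control how quickly the S-curve rises.

```python
# ADBUDG response function: flat start, steep middle, saturating plateau.
def adbudg(spend: float, min_resp: float, max_resp: float, c: float, d: float) -> float:
    """Response at a given spend level. Parameters here are illustrative, not calibrated."""
    return min_resp + (max_resp - min_resp) * spend**c / (d + spend**c)

if __name__ == "__main__":
    # Hypothetical curve: response plateaus near 10,000 units as spend grows.
    for spend_mm in range(0, 6):
        units = adbudg(spend_mm, min_resp=500, max_resp=10_000, c=2.5, d=8.0)
        print(f"${spend_mm}M spend -> ~{units:,.0f} units")
```

The plateau (max_resp) is exactly the saturation level discussed next.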

However, this “plateau level”, at least on an aggregate basis, seems to be getting lower, year after year. Over the past two decades or so, at least four factors have conspired to compress this curve for marketers, whether B2C or B2B (I use the term “consumer” interchangeably):

  1. Consumers have become savvier, spotting obviously poor execution and ignoring it out-of-hand
  2. At the same time, consumer behavior has become more search-driven, turning the tables on the advertiser and waiting until they have a real need to find what they want on Amazon or Google
  3. The supply of quality interactions—be these on radio, telephone, television, or at retail—has gone down as consumers have shifted their behavior towards platforms like Netflix and Amazon, and have stopped picking up their phones, etc.
  4. At the same time, the marginal cost-per-touch for low-quality interactions (junk email, crap display ads, lousy ad time on the long tail of the cable TV spectrum) has gone down, encouraging marketers to blast consumers with touches and thereby further intensifying consumers’ programming to “ignore” marketing tactics.

The net effect of these trends and their impacts can be visualized as a series of curves getting flatter and flatter over time as the aggregate “ROMI,” or return on marketing investment, gets lower and lower.

Downward ADBUDG curve

A traditional way to deal with this problem is optimization. In other words, take that ADBUDG curve and find the best possible marketing mix, message, and consumers to target, optimizing ROI by lowering costs to get the same return. This works well over the short run. Everyone is happy, because marketing has increased its ROMI. But there is no noticeable impact on growth—the plateau of diminishing returns is simply reached sooner.

Optimized ADBUDG curve

However, over the long run, there is a strategic problem—one that CPG companies, for example, have been facing for years—and that is, growth is harder and harder to come by. Meanwhile, the economy is growing, and consumers are buying things. So where have the dollars gone? They’ve gone to small competitors, who have marketing departments of one or two, no marketing mix models, and are limited to small, agile digital campaigns that go from ideation to execution in days, not months.

If you want to see a concrete example of this, go outside and look at anyone between the ages of 14 and 40 today, at the brands they display conspicuously, at the clothes they wear, at the music they listen to. You will notice that most are telegraphing their individuality in an extremely deliberate way. There is a term that Sigmund Freud coined a hundred years ago that I think perfectly describes consumers and companies as buyers today—the narcissism of small differences:

Every single consumer / company thinks that they are unique, even if they’re actually quite similar.

It’s up to marketers and sellers to recognize this, understand their buyers, and give them what they need to feel like they are being treated as individuals.

The implication is that now more than ever, marketers and salespeople need to go “small” when everyone is talking about “big.” Big data, big advertising, scaled campaigns, and machine learning are great—but what they are so often missing is the touch of the artist, and careful, high-resolution insights-driven thinking.

I was talking to someone the other day whose wife is about to age into Medicare, and she’s gotten hundreds of “idiotic” (his word) touches from various companies over the past months blasting them with the same message, over and over. She is now completely turned off by Medicare Advantage. She’ll buy something eventually, but only when she receives the right touch that acknowledges her as an individual—well, maybe not an individual, but at least as someone unique.

Companies optimizing their Medicare Advantage campaigns might put her in a lower decile, as she doesn’t respond to these touches, but she has the means and the need and will buy. Instead of optimizing in aggregate, or across coarse segments of tens of millions, these companies should think about micro-segmenting their campaigns, and understanding what she as a consumer actually needs and wants when it comes to health care; what her specific habits are; and how she lives her life. Simply acknowledging these differences will go a long way towards true optimization, and will drive incremental growth.

Concretely, this means micro-segmentation. This isn’t a huge, enterprise segmentation; rather, it’s a guerilla, agile attempt at understanding small cells of customers and reaching out to them in unique ways, all measured rigorously. For each micro-segment, a marketer should strive for unique insights, including:

  • Core needs, wants, insights: What is the real, non-trivial, second- or third-level insight that makes this small segment of consumers care about what I’m talking about?
  • Channel mix: How do I build a go-to-market strategy that intercepts consumers and companies where they travel, and where they care about what I’m selling?
  • Content / message: How do I get the right, unique content and message in front of that cell of a few companies / a few hundred thousand consumers, where it will really resonate?
  • Product: It’s not always possible, but can I build a product portfolio with enough diversity to acknowledge differences, while staying profitable?

This requires striving for breakthrough insights among small cells, and these insights then have to spread throughout a marketing organization that embraces “small ball”—the wins that come from looking for nuggets instead of the whole gold mine.

This does not mean giving up on analytics or data science—it’s actually the logical extension of it. Analytics goes from being huge and aggregate to micro and artful.

It does mean spending more time looking for the insights sources that will give marketers a greater depth of insight into markets, prospects, and customers. It does mean bringing data scientists into qualitative research sessions.

So what about that ADBUDG curve getting squashed by oversaturation? If you play analytical “small ball,” you’ll be optimizing lots of little ADBUDG curves, one for each of the micro-segments. The saturation level for each of these is higher, so when they are summed up, the aggregate curve rises.

Some of this can be done with technology—for example, some of the very good targeting that is possible with Instagram today based on location, text mining, and imagery—but much of it still boils down to good old-fashioned insights work, and making cell sizes smaller. Another way to think about small-cell marketing is in terms of a campaign / micro-segment portfolio; each “fund” is optimized, and then the entire portfolio of campaigns is optimized for efficiency, as a whole.

Growth Micro ADBUDG curve
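To see the “portfolio of little curves” mechanics in code, here is a rough sketch that sums a few hypothetical micro-segment ADBUDG curves and compares them to a single coarse curve. Every parameter is invented, and the equal budget split is a deliberate simplification; the point is only that a set of well-targeted small curves can add up to a higher combined plateau.

```python
# Compare one coarse ADBUDG curve to the sum of several micro-segment curves.
# All parameters are invented for illustration; budgets are split naively.
def adbudg(spend, min_resp, max_resp, c, d):
    return min_resp + (max_resp - min_resp) * spend**c / (d + spend**c)

# Hypothetical micro-segments: (plateau, shape c, scale d). Tighter targeting is
# represented by a smaller d, i.e., the segment responds at lower spend.
micro_segments = [(3_000, 2.5, 0.3), (2_500, 2.0, 0.2), (4_000, 3.0, 0.4)]

def micro_response(total_spend, segments):
    per_segment = total_spend / len(segments)  # naive equal split; optimize in practice
    return sum(adbudg(per_segment, 0, plateau, c, d) for plateau, c, d in segments)

if __name__ == "__main__":
    for spend in (1, 2, 4, 8):  # total spend in $M
        coarse = adbudg(spend, 0, 6_000, 2.5, 8.0)  # one one-size-fits-all curve
        micro = micro_response(spend, micro_segments)
        print(f"${spend}M: coarse ~{coarse:,.0f} vs micro-segmented ~{micro:,.0f}")
```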

One final note; I continue to believe that insights-driven, small-ball analytical marketing is ultimately an organizational challenge.

Marketing and sales organizations have to be built to think like customers, like individuals, while at the same time being relentlessly data-driven.

These two things are not mutually exclusive. The old trope of the “geeks” and the “creatives” is just plain wrong. Merging these two worlds, and successfully playing analytical small-ball, is a really good way to move a big company from efficiency to efficient growth.

 

10 Step Checklist for Creating Actionable Segmentations and Personas

For all of the talk around one-to-one marketing, human beings still need frameworks to understand their world, and marketers are no exception. The word “segmentation” might be ubiquitous among marketers, but it’s difficult to find two people who agree on what segmentation really means and what makes one actionable. That being said, sales executives and marketers can probably all agree on what a bad segmentation looks like:

  • Bad segmentations are hard to describe in plain English. In other words, only the lead researcher can really describe what the segmentation does, and who the members of the segments are. Another dead giveaway is that the names of the segments or personas don’t really align with what makes them unique. If you can’t name something, it’s probably not really much of a thing.
  • Bad segmentations can’t be used by the business. This kind of segmentation might be brilliant academically, but when you sit down and start to try to implement it, you can’t really do anything meaningful. This is always a challenge for the PhDs who create brilliant latent models of beliefs and wants among buyers—can we actually find these people?
  • Bad segmentations become obsolete quickly. A segmentation that only lasts for a year or two is frankly wasted money. The costs to do the research and analytics, and the opportunity costs of training and implementation across the organization, make it imperative that a segmentation is long-lasting. I’ve found that longevity is rarely stated as an upfront goal at the start of segmentation work.
  • Bad segmentations are too broad or too specific. In some cases, the segmentation is too broad. For example, a segmentation might attempt to describe all of the behaviors of a customer, from media usage to IT behaviors to channel preferences to leisure choices. In other cases, a segmentation might only be useful in describing such a small set of behaviors that the use cases are just too limited.

So, what does a successful, actionable segmentation look like? A successful segmentation effort is first and foremost one that is adopted. Fortunately, there is a checklist that marketers and market researchers can follow to protect against the risk of a six-month segmentation effort ending with a thud (literally, the 100-page PowerPoint hitting the bottom of the shredder bin). The checks are divided into pre-analysis and post-analysis.

Pre-analysis:

Step 1

Set clear descriptive / predictive objectives upfront.

Before any work is started, before a question is written, or before a database query is made, everyone needs to be in agreement on what the segmentation will be used to predict / discriminate, and what it won’t. For example, a segmentation might be great for understanding technology buyers’ wants and needs when it comes to the technology itself, but lousy in predicting where they shop. Of course, segmentation can serve many purposes, but realistically, a segmentation with 3-10 segments can really only be used to predict a few major themes. One way to do this is with a simple table, for example, this table for a printer / copier buyer segmentation:

Will Be Used to Predict

  • Behavioral usage
  • Relationships with others in a company
  • Learning style and objectives
  • Preferences for generic / name brand consumables

Won’t Be Used to Predict

  • Channel preferences
  • Media consumption
  • Technology stack / integration
  • Industry use cases

If everyone signs off on this upfront, before anything else is done, it’ll be much clearer to researchers and stakeholders why this work is being done, and what it’ll be used for.

Step 2

Set actionability requirements upfront.

Actionability is the ability for a segmentation to be used to actually contact or describe specific customers. In other words, “Can the model be handed off to field sales, channel partners, digital, my advertising agency, my database team, etc.?” If a segmentation model needs to be useful in tagging every customer in the database, the design must take this into account, by ensuring that “knowable” variables in various databases are included in the research (if primary research is actually being done).

Concretely, as illustrated in the table below, create a list of knowable variables by customer interaction point, and include these in requirements for the research.

Interaction / Known Data to Include in Segmentation for Assignment

  1. Website Registration: IP, OS (from browser); previous website traffic (from cookie); age, gender, ZIP code, SIC code, title
  2. E-Commerce Purchase: All from (1), plus SKU(s) purchased and shipping method chosen
  3. Retail Channel: Retailer, location; firm, title, age, gender, SIC code from loyalty card
  Etc…

Another option is conducting the research from the database itself, ensuring a 1:1 tie; email is a great way to do this. An approach that can work well is a 50/50 split of existing customers and unknowns. It goes without saying that the sample would need to be reweighted to reflect the true population after the research is completed.
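For a concrete picture of what that tagging requirement enables, here is a minimal sketch of a segment-assignment (typing) model: it trains a classifier on survey respondents who already carry segment labels, using only the “knowable” fields, and then scores the full customer file. The file names and columns are illustrative assumptions, not a prescribed design.

```python
# Sketch: assign research-derived segments to database customers using only
# "knowable" fields. Column and file names are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

KNOWABLE = ["age", "gender", "zip_code", "sic_code", "title"]

respondents = pd.read_csv("survey_respondents.csv")   # KNOWABLE fields + "segment" label
customers = pd.read_csv("customer_database.csv")      # KNOWABLE fields only

X = pd.get_dummies(respondents[KNOWABLE])
y = respondents["segment"]

clf = RandomForestClassifier(n_estimators=300, random_state=42)
print("Assignment accuracy (5-fold CV):", cross_val_score(clf, X, y, cv=5).mean())

clf.fit(X, y)
X_db = pd.get_dummies(customers[KNOWABLE]).reindex(columns=X.columns, fill_value=0)
customers["assigned_segment"] = clf.predict(X_db)
```

If the cross-validated assignment accuracy is poor, that is an early warning that the “knowable” variables were not designed into the research.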

Step 3

Decide where to be on the actionability / insights spectrum.

This is an addendum to Step 2 on building in actionability, but leaders should think about whether to do any primary research at all. If there is a lot of behavioral data out there—think Amazon—it’s probably possible to create an extremely powerful and descriptive segmentation from database information alone. However, for a CPG company whose primary concern is deep-seated psychological constructs and how they impact attitudes about cleaning one’s home, a database segmentation won’t cut it. There is no magic rule here, but again, it pays to have these open conversations among stakeholders up-front.

Step 4

Think of quota as the population you are describing.

After setting objectives and being clear on use cases, setting the quota is a critical, no-going-back decision that has to happen very early in the process. When you think about what setting quota actually means, you’re defining the population that you want to segment. In other words, by setting guidelines like those in this table…

Firm Size     Quota     Title          Quota         Industry    Quota
10-100        100       CMO            100           Tech        100
101-500       100       CTO            100           Pharma      100
501-1000      100       VP, Product    100           Other       Remainder
1001-5000     100       Developers     Remainder
5000+         50

…you are literally defining the population you are describing. You are also defining who you are not describing. In other words, if you want to be able to use this segmentation to describe DevOps Directors, you can’t be sure that it’s going to work—you’ll be extrapolating. So, spend a lot of time on the quota, and think about it as a description of the buyers you want to describe, rather than just cells to fill out.

Step 5

Get your screener right.

This is a close cousin to Step 4, but different. The screener is a magical piece of the equation because small changes to questions can have drastic implications on the results. For example, say you want to talk to decision-makers who have a say in which group health insurance plan a company should choose. You could ask, “Describe your role in the decision-making process for choosing group health insurance for your company,” and only allow in those who say they make the decision or are part of a committee. But what if you also added an option for “I have veto power” (the executive who needs his doctor to be in the plan), or “the person who makes the decision reports to me”? You will get a different population with each of these screeners, and consequently, different results. Again, there’s no right or wrong answer—just be very choiceful, and make sure the executives who will choose to adopt or trash the segmentation are well aware of the decisions being made.

Step 6

Do qualitative first to ask better questions.

There seems to be an aversion among analytical marketers to qualitative research; maybe they think it isn’t actionable, or they don’t understand it. However, qualitative research is critical in setting the scope of analysis, and asking the right questions in the quantitative phase.

Qualitative can be done before and/or after a segmentation, but doing it before makes a lot of sense, simply because you’ll end up asking better questions when it comes to quantitatively describing your universe. Of course, qualitative research is an art in and of itself. Focus groups can be effective, but in a business-to-business, considered-purchase environment, I’ve found that in-depth interviews or ethnographies (where you literally go into a company and watch them work, peppered with questions) are even better. When you’re writing your discussion guide, take the time to brainstorm all of the “why” questions that might dig up insights for the buyer group. Pick a seasoned moderator who understands the buyers you are talking to. Finally, have the executives, and the people doing the analysis, actually attend the research. It will make them much better at driving the questionnaire / analytics effort; they’ll be building the intuition they need to make better decisions about the questionnaire, the analyses to perform, etc.

Post-analysis:

Step 7

Spend a ridiculous amount of time naming your segments.

Naming things doesn’t sound very analytical, but it’s incredibly important when it comes to segmentation. If you can’t name a segment clearly, it will never stand the test of time. The naming exercise is best done by a group of people, sitting in front of the data, over several hours. Ask questions about why a name is applicable, or not. Does it just “sound good”, or is what the name describes actually seen in the data? If you can’t name the segment, there’s something wrong with the solution. You’ll know when you have good names—they’ll describe the segments perfectly, and the data will line up with the names across every crosstab you look at.

Step 8

Check that your solution can actually do its job.

Thought experiments can be done in a few hours, but will save a lot of time down the road. There are three listed here, but the point is to put the model through its paces, before it’s actually put through its paces.

  1. Prima facie validity.
    If you spent the right amount of time naming the segments, this one should be easy. Can you describe, in one or two sentences each, how each segment is unique, and how they should be treated differently by your company? If not, you’re in trouble.
  2. Segment difference.
    The segments should have clear differences in all of the variables / factors that matter. What are the variables that matter, you ask? Those should have been defined back in step one. If you see a lot of grey when looking at crosstabs, you have a problem. Go back and look at the algorithms used for clustering, the number of clusters, etc., to ensure that truly different segments exist.
  3. Go-to-market use cases.
    Do a thought experiment and see if different sales and marketing actions can now be taken with the new information. Is there a different campaign apparent for Segment A? Can we find enough of Segment B in the database to build a meaningful campaign? Could a sales rep build a pitch for Segment C that makes sense and will drive results?

Step 9

Check for reproducibility.

What if a segmentation solution doesn’t hold together when presented with new data? One way to address this is the classic machine learning technique of train-test (or train-test-validate) splits. The catch is that primary research records are expensive, so a bit of extra budget will need to be burned—but it’s worth the peace of mind of knowing that a solution repeated itself on a 300-n holdout sample. So, do your modeling on the train / test data, then score the holdout set; cross-tabs of descriptive statistics should be very close across the splits.
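A minimal sketch of that holdout check, assuming a numeric, respondent-level survey file and a k-means solution (both illustrative choices, not a prescription): fit on the training split, assign the 300-n holdout to the nearest cluster centers, and compare the two sets of segment profiles side by side.

```python
# Reproducibility check: fit segments on a training split, score a holdout
# sample, and compare segment profiles. Assumes `survey_responses.csv` is a
# numeric, respondent-level data file (hypothetical).
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("survey_responses.csv")
train, holdout = train_test_split(survey, test_size=300, random_state=42)

scaler = StandardScaler().fit(train)
km = KMeans(n_clusters=5, n_init=20, random_state=42).fit(scaler.transform(train))

train_labels = km.labels_
holdout_labels = km.predict(scaler.transform(holdout))

# The two sets of segment profiles (variable means by segment) should look very similar.
print(train.groupby(train_labels).mean().round(2))
print(holdout.assign(segment=holdout_labels).groupby("segment").mean().round(2))
```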

Step 10

Get the outputs right.

A PowerPoint deck isn’t enough. For a segmentation solution to be adopted far and wide, two sets of outputs need to be created: communications outputs and operational outputs. The lists below include some frequently used outputs in both categories, but marketers should, by all means, get creative—it’s only through rich, well-thought-out communication that segments will be used and addressed appropriately throughout the organization.

Communications Outputs

  • An executive summary of project goals and the key outputs
  • A master PowerPoint describing the segmentation methodology, the segments themselves, and the go-forward strategies
  • One-page summaries of each resulting segment that bring the segment to life, potentially as a persona (e.g., Bob the IT guy)
  • A one-page summary of the entire effort that can be put up on bulletin boards, etc., across the organization

Operational Outputs

  • Models that score prospects, customers, inbound call center contacts, etc.
  • Differentiated go-to-market strategies / routes-to-market for each segment
  • Differentiated creative / content guides for each segment

As I reflect back over my career and think about “segmentations I’ve known,” it’s shocking to me how few of these steps were followed, or even articulated, by huge organizations and their consultants and research providers. If you follow these ten steps, the chances of a catastrophically bad segmentation effort—which happens more often than anyone in marketing would like to admit—will be really low. More importantly, the chances that the segmentation will be robustly adopted—which is, after all, the most important measure of success—will be much better.

Download the full checklist to share with your internal team:

What About Small Data? Part 1

Big data remains all the rage. After exploding onto the scene in roughly 2012 with the popularization of the Hadoop framework, the big data lens still dominates the “LinkedIn press.” This focus is certainly not without its merits; machine-generated data contain vast amounts of signal just waiting to be extracted and put to use. Indeed, many machine learning applications require real-time, streaming data for use as features and, at the very least, hundreds of thousands of training examples.

Big Data vs. Small Data

Big data sets are, first and foremost, really big. Observations range from the 100,000s (at a minimum) to the 100s of billions. Big data sets are typically semi-structured, and while data munging is required, I’ve found that it tends to be pretty straightforward. Likewise, when data are missing, it’s usually not that big a deal—assuming the missing data don’t have a pattern, it’s safe to delete the observations. Finally, when it comes to finding more signals, it’s usually a matter of finding another vendor or “thing” generating data, assuming you can key on something in the main data set.

Small data is still out there, though! You can’t just make a small data problem big. Small data isn’t collected in hours; it usually takes at minimum weeks to collect it. The number of observations ranges from the 1,000s to the 100,000s; and the number of “1’s” in the data set can sometimes be really, really small—think 10 or 100. Small data tends to be relational. Missing data is precious, and thus can’t just be ignored. And finally, finding more signals usually takes a lot of creativity.

                          Big Data                          Small Data
Data Collected In         Seconds, Minutes, Hours           Days, Weeks, Months
Number of Observations    100,000s – 100s of Billions       10s – 100,000s
Typical Structure         Semi-Structured                   Relational
Data Munging Effort       Moderate                          Hard
Missing Data              Ignore or Interpolate             Not so fast…
Finding More Signals      Find Another Vendor or “Thing”    Get creative
Topical Areas             B2C, Digital                      B2B, Sales, Events, Above-the-Line
Limiting Factors          Processing Power, Storage         Time, Creativity
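To make the “Not so fast…” row concrete: with a few thousand rows, dropping incomplete records can throw away a meaningful share of the signal, so it’s usually better to impute and keep a “was missing” indicator. A minimal sketch on simulated data (all numbers and column names invented):

```python
# Small-data missing values: impute rather than drop, and keep an indicator
# column so "was missing" can itself carry signal. Data here are simulated.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

rng = np.random.default_rng(0)
X = pd.DataFrame({"revenue_mm": rng.normal(50, 20, 2_000),
                  "employees": rng.normal(200, 80, 2_000)})
X.loc[rng.choice(2_000, 300, replace=False), "revenue_mm"] = np.nan  # simulate gaps

imputer = SimpleImputer(strategy="median", add_indicator=True)
X_imputed = pd.DataFrame(imputer.fit_transform(X),
                         columns=["revenue_mm", "employees", "revenue_mm_missing"])
print(X_imputed.head())
```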

Many of the best problems out there today—the ones that will yield the most incremental fruit, in terms of leads, opportunities, loyal customers, dollars, etc.—have to deal with small data. At MarketBridge, we’re experts on small data, and I want to share some of the best practices for working in this messier, potentially more lucrative ROI realm. In this blog post, I’ll go through the first best practice, “providing insight along the recall-precision gradient.”

Part 1: Provide Insight Along the Recall-Precision Gradient


For our example, we’ll look at a wins-and-losses dataset for a large, considered-purchase solution designed for small and medium businesses. The training data consist of 10,000 accounts that have been touched by inside sales over a period of six months, yielding 35 wins. From a feature perspective, assume there are data on marketing stimulus, firmographics, and the competitive situation in each account.

As a data scientist, I’m tasked with providing a scored list of accounts based on training data, for a new set of 10,000 target companies. The list will be used for sales reps to prioritize their activities. I develop my awesome model and give Joe, a sales director, 1,000 likely accounts to call. I tuned the model for maximum recall — that is, I want to pick up every potential buyer because I don’t want to miss any revenue. Joe asks me how I did the model, and I tell him I extracted a bunch of features predicting likelihood to buy, and there was a bunch of code and statistics, and his eyes glaze over and he says, “just give me the list.”

A week later, I go by Joe’s desk and ask him how the list did. He is furious. He’s called 50 people on the list, and not one was interested. I tell him that the list was optimized for recall and to make sure he didn’t miss a single likely buyer. He walks away in a huff. How do we get around this problem?

Well, obviously, we retune our model to optimize for precision, right? I want to minimize the number of false positives in my model, so Joe doesn’t call a bunch of people who have no interest in what he’s selling. So, in this case, I give him a list of 35 predicted positives. The problem with this model is that he’s missing a lot of people he should be calling, and he tells me that he wants the “not so perfect” leads too, because he can actually adjust his activities to win in deals that might not have been perfect fits. I call this the “adaptive component” of high-touch marketing; it often goes hand-in-hand with small data analytics, and I’ll get back to this in a future post. And anyway, he’ll be done making calls in four days, but then what should he do?

In looking at this problem, we can learn a lot from the tradeoffs that epidemiologists make when building tests for diseases. Of course, the optimal goal for any model is perfect recall and perfect precision. In other words, all positive cases are predicted correctly (recall), and no false positives are generated (precision). In the real world, this simply doesn’t happen; we are constantly making tradeoffs between erring on the side of capturing all positives, and not predicting any false positives, or, being aggressive vs. being conservative in our prediction.

An epidemiologist might tune a model to predict the presence of an extremely contagious disease, where the consequences of a false negative are grave, for maximum recall (true positives / (true positives + false negatives)). Likewise, she might tune a model to predict the presence of a disease that isn’t at all contagious, and extremely rare, to maximize precision (true positives / (true positives + false positives)) and avoid scaring a lot of people who are actually totally healthy.

Multiple Models to Activate Human Intelligence

So what’s the key? Luckily, I don’t have to give Joe just one list. Instead I should give him three, along the recall-precision gradient:

  • A “primo” list, maximizing precision, that is, “few false positives”
  • A “likely suspect” list, trading off precision and recall, perhaps maximizing F1 score
  • And a “wide net” list, maximizing recall, that is, “get all of the likely buyers into a list”

Technically (using Python in this example), the lists could be generated by running a grid search (via `GridSearchCV` from Scikit-Learn, for example), maximizing recall (wide net), F1 (likely suspect), and precision (primo), as sketched below. Of course, this is just general guidance; there’s nothing magical about the number three, or about maximizing these exact metrics. The point is to give practitioners choices along the recall-precision decision boundary and teach them how to use this newfound intelligence.
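Here is a compact sketch of that grid search. The estimator, parameter grid, and file names are placeholders (assumptions, not a prescription); the point is the three scoring objectives producing three lists.

```python
# Three models along the recall-precision gradient via GridSearchCV.
# Data loading, estimator, and grid are illustrative placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

df = pd.read_csv("training_accounts.csv")          # hypothetical wins/losses file
X, y = df.drop(columns="won"), df["won"]

param_grid = {"C": [0.01, 0.1, 1, 10], "class_weight": [None, "balanced"]}

lists = {}
for name, scoring in [("wide_net", "recall"),
                      ("likely_suspect", "f1"),
                      ("primo", "precision")]:
    search = GridSearchCV(LogisticRegression(max_iter=5000),
                          param_grid, scoring=scoring, cv=5)
    lists[name] = search.fit(X, y).best_estimator_

# Score the new target accounts with each model to produce the three lists.
targets = pd.read_csv("target_accounts.csv")        # hypothetical scoring file
for name, model in lists.items():
    targets[f"{name}_flag"] = model.predict(targets[X.columns])
```

In practice you would also want probability scores and careful cross-validation given so few positives, but the three-list framing is what travels well to the sales floor.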

1) Primo Model

                   Predicted Negative    Predicted Positive    Total Actual
Actual Negative    9,950                 10                    9,960
Actual Positive    15                    25                    40
Total Predicted    9,965                 35                    10,000

2) Likely Suspect Model

                   Predicted Negative    Predicted Positive    Total Actual
Actual Negative    9,860                 100                   9,960
Actual Positive    1                     39                    40
Total Predicted    9,861                 139                   10,000

3) Wide Net Model

                   Predicted Negative    Predicted Positive    Total Actual
Actual Negative    9,000                 960                   9,960
Actual Positive    0                     40                    40
Total Predicted    9,000                 1,000                 10,000

Now, Joe has three lists that he can use to tune his calling strategy. He can use the primo list of 35 first, perhaps putting maximum effort into tuning what he says and the content he provides. He can then go on to the likely suspect list of 139, perhaps realizing that there might be something he needs to tweak for these. Maybe the reps making the calls on the training data were not quite handling their calls to these correctly, so Joe uses some human intelligence to boost his performance. And finally, he might send the 1,000 wide-net prospects “keep warm” emails to nurture them and bring them along.

This is obviously an overly simple example, but this heuristic has worked very well for MarketBridge’s clients handling small data.

One final note; upon reading this, one of my friends asked a good question: “Why not just provide the probability of the win? Why the three lists?” My answer: there’s nothing wrong with providing the probability of the win, too, but I’ve found that explaining things via three lists, built at three different points along the recall-precision gradient, works better. In simple terms, it helps activate the “human intelligence” component in the small data world and drives better adoption and usage.

 

Coming Full Circle: Bringing Art Back to Marketing Science

Yesterday was my first day (back) on the job at MarketBridge leading marketing analytics. I’ve been gone for nine years, and it is truly great to be back at this special organization. This return to my roots, with person after person telling me “welcome home,” brought on a wave of nostalgia, and caused me to do quite a bit of reflection on the differences in our industry between then and now. It’s also caused me to think about what it means to come full circle.

I looked out over a room of much younger faces and told a few stories of the early days of marketing analytics. I’m sure I bored the hell out of everyone. I talked about how “back then” (in the dark ages, 15 years ago) you needed an expensive statistics software license (e.g., SAS or SPSS), bare-metal servers, and some fairly arcane knowledge to do marketing analytics. Relatively few people were doing really interesting stuff.

Today, that software is free, anyone can buy time and space on AWS for pennies, and pretty much any model you can think of has a package or library available in R or Python. Consequently, the term “marketing science,” which some friends of mine at IBM cooked up, isn’t a novelty at all anymore.

All of the tools, Stack Overflow articles, LinkedIn groups, and degree programs have certainly made marketing science and analytics a lot more accessible for organizations. For example, building a propensity model is certainly a lot easier and less expensive. If the goal is to score a lead on its likelihood to close, and assuming the data exists, an analyst with a few years of experience can do a serviceable job with a Jupyter notebook and an AWS login.

Today, data science and marketing go together like peas and carrots.

However, the state of the art in marketing science has changed in the intervening years too. Just like animals and plants rapidly adapt to environmental stresses in nature, buyers have evolved as well. While storage, processing power, and algorithms have all been getting cheaper, so have marketing touches. Buyers have become increasingly immune to crude, and even “clever” tactics. Scoring models and attribution are now table stakes.

Buyers now want to be taught, not told. They want to explore, not be led. They want to be rewarded for their time, not feel dirty after a ten-minute prospecting call.

I am calling this new reality Marketing Analytics 2.0, mainly because I’m not very creative.

What does this mean?

In this new world, marketing scientists will now face new, more interesting challenges. This will mean pushing beyond the obvious data sources to find new signals that define buyer-needs at a more meaningful level. It will mean going back to some of our “basic” tools, such as rich buyer journey research or deep customer insights, and driving these “artful tools” back into decision models. And, it will mean putting the power of analytics into more hands throughout the marketing and sales department. Think of an analytics “toolkit” that can be used by creatives, researchers, pre-sales people, etc., that brings all “analytics components” to bear on whatever problems these practitioners are working on. In other words, bringing human intelligence together with artificial intelligence.

These are just some initial thoughts, but I’m increasingly convinced that Marketing Analytics 2.0 will bring an explosion of creativity to organizations and that it will ultimately drive better outcomes for both companies and buyers. This new era in marketing analytics won’t be defined by better algorithms. It will be defined by more comprehensive, creative thinking, and by remarrying marketing analytics with the creative side of marketing.

I’m really excited to have come full circle, and to start this journey, again.