Big Data’s Big Problem

There’s a common myth in discussions about the earth’s natural resources: that we run the risk of exhausting them. Even cursory research quickly dispels this. We do not run the risk of “running out” of our resources; instead, we face the dilemma of not being able to extract them affordably. As the easier deposits of gold, phosphorus, or oil are exhausted, the cost of mining these elements from more difficult locations will price them beyond consumption on the world market. It quite literally becomes the “water, water everywhere, but not a drop to drink” scenario.

The world of Big Data faces a similar problem. In fact, so close is the parallel that many have called Big Data the “next natural resource” and identified its extraction problem as the growing skills gap inside organizations. This single issue, the shortage of data-literate professionals who can adequately understand insights and data relevance, continues to grow as the biggest problem facing the industry. In many of our market research reports for companies in this space, we have seen increased interaction with and sharing of articles from people such as Jordan Morrow, Global Head of Data Literacy at Qlik. As the foundation for a survey he recently conducted, Morrow cited the accuracy (and the danger) of a 2017 Forbes article warning, “Increasingly, this data literacy divide will impede organizations of all shapes and sizes from reaping higher rewards from their data investments.”


As a case in point, note the volume of engagement around a single blog post published by Snowflake earlier this year. In it, the author argues that one of the biggest hurdles to an enterprise organization achieving maturity in the data and cloud space is simply the absence of resources outside the IT team who can help plan and develop a data architecture.

Figure A

So what can sellers of big data solutions and services do to help potential buyers enter this space confidently and successfully? Based on the current trends we see as we monitor the industry’s top competitors, we offer the following three recommendations:

1. Focus content creation efforts strongly around DIY and free training content

Earlier this year, we noted multiple providers in the data space offering content intended to help companies navigate the internal skills gap. In January, there was strong engagement around Amazon’s release of free digital training courses for AWS Big Data Training & Certification. The courses included an introduction to Amazon EMR and offered a path into paid classroom training on Data Warehousing on AWS and Security Operations. In the same period, there was strong engagement around content from Teradata, which used YouTube as the core venue (promoted via Twitter) for a ten-part series of how-to videos illustrating tasks such as spinning up Teradata on AWS, loading and querying data, and deleting a cluster.

2. Develop certifications and training acknowledgments

In addition to providing a data and analytics platform, institute courses that yield an official certification from your brand, designating an individual as a professional. As an example, consider how much energy Tableau puts into singular events like “Certifiably Tableau Day” on March 15, promoted using #certifiablyTableau. All of Tableau’s social media content that day consisted of testimonials from people certified by Tableau, sharing how the certification helped chart and validate their career paths and differentiate them in job searches. The goal is to get more people to understand the nature of the “professional” data scientist and to help close the skills gap inside organizations by training more people on the platform. Similarly, Informatica saw strong engagement around a blog post suggesting that even current data scientists need additional learning, and that seeking further skills development and certifications makes them an in-demand asset inside the enterprise.

3. Guide prospects and clients through the skills gap challenge

Finally, providers of big data services should realize that part of the reason for the skills gap is that companies are not entirely sure whom to hire or what skill sets to seek. The role of the data scientist is still somewhat fluid in definition, and understanding whom to hire is something companies need help with. Note the graph below showing engagement for original content published by MicroStrategy. The twin peaks in the middle of the measurement period coincided with the MicroStrategy World conference, where the company published a map for #IntelligentEnterprise. The map offers a collection of technologies and techniques to help customers navigate the rough waters of building a successful data-driven organization. The engagement around this content shows how strongly companies want to be educated and led as they stand up their big data capabilities.

Figure B

Conclusion

There is no question that the world of big data, and the advances it offers companies, is the future. But the speed with which a company realizes these benefits will depend directly on whether it can staff appropriately to actually mine the data for relevance. Sellers of Big Data solutions need to make helping enterprise companies close the skills and data literacy gap a priority; small steps (as noted above) can be taken immediately toward this goal.

 

The Promise of the Marketing Data Platform

Sales and Marketing Datapalooza

After twenty years of continuous innovation, marketing and sales technology is ubiquitous, extremely powerful, and very expensive. A typical sales and marketing organization spends upwards of 25% of its budget on technology, broadly defined as the software, data, and infrastructure to communicate, track, optimize, and analyze go-to-market activities.

These technologies include (but are not limited to) CRM (customer relationship management), marketing automation, partner relationship management, email marketing, web analytics and management, mobile marketing platforms, content hubs, research platforms, data science platforms, and data warehouses. Each of these tools provides robust data storage and analytics capabilities—making it possible to build dashboards, reports, and models in Salesforce.com, Eloqua, Marketo, and the like.

Figure 1: Sales and Marketing Datapalooza, or an incomplete list of sales and marketing data sources. The array of data sources required to instrument all of the sales and marketing efforts of a company is dizzying.

However, these dashboards and reports immediately become siloed, as access to each is available only through the few domain experts in these tools. For larger enterprises, the solution to this problem has traditionally been a custom-built marketing data warehouse, integrating all sources via batched loads from the various systems, and accessed via a business intelligence platform. These solutions work, but they are expensive to build and maintain, and when new technology is added, a new IT project must commence, delaying access to integrated data for months or years.

One other solution that has emerged in the past two years is the sales and marketing data lake. In this solution, data are purposefully left unstructured. Concretely, a JSON instance of an inbound lead is left in its unaltered format. Keying it to other objects, and applying lookup tables to make it intelligible, is left either to a data scientist further downstream or to the reporting system that eventually displays the data to an executive. This is a tempting solution because it bypasses the expensive and complex process of structuring and maintaining the data warehouse, but it’s kind of a copout; while it’s true that reproducible approaches to data science and reporting make this a more scalable solution in 2018 than it was ten years ago, a mess is still a mess.
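
To make the tradeoff concrete, here is a minimal Python sketch of what “leaving the JSON unaltered” means in practice (all field names and lookup values are hypothetical): the raw lead lands exactly as received, and the keying and lookup work is deferred to whoever consumes it downstream.

```python
import json

# A raw inbound lead, landed in the data lake exactly as received.
# Field names are hypothetical, for illustration only.
raw_lead = '''{
  "lead_id": "L-20180917-042",
  "src": "webform",
  "email": "jane.doe@example.com",
  "utm": {"campaign": "fall_promo", "medium": "cpc"},
  "ts": "2018-09-17T14:03:22Z"
}'''

# The structuring work the data lake defers: keying the lead to other
# objects and applying lookup tables, done downstream by an analyst or
# a reporting job rather than at load time.
CAMPAIGN_LOOKUP = {"fall_promo": "2018 Fall Promotion"}

def normalize(raw: str) -> dict:
    lead = json.loads(raw)
    return {
        "lead_id": lead["lead_id"],
        "email": lead["email"],
        "campaign_name": CAMPAIGN_LOOKUP.get(lead["utm"]["campaign"], "Unknown"),
        "created_at": lead["ts"],
    }

print(normalize(raw_lead))
```

Every consumer of the lake ends up repeating some version of normalize() for itself, which is exactly the mess referred to above.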

The PEPM Trap

Compounding the difficulty of integrating the data exhaust of multiple platforms is the never-ending march of rising PEPM (per employee, per month) licensing fees from software vendors. In the not-so-distant past, enterprises could control their software cost exposure by purchasing software outright and choosing either to include maintenance or to “go it alone.” This hemmed in financial risk and made it easier to pick and choose which software was truly needed at any given time.

Over the past ten years, however, this model has essentially disappeared in favor of PEPM models. Shareholders and VCs love the PEPM model because it drives great financial projections and multiples on spreadsheets. As companies grow, revenue grows linearly. Of course, this isn’t enough—every company is trying to take more and more of the PEPM pie.

Salesforce.com is an egregious offender. Anyone who has attempted to provision additional licenses for users who need only a small percentage of Salesforce.com functionality (e.g., “just some of the data, for one use case”) knows it is impossible to get anything under $40 PEPM—an outrageous fee when one screen might be used per week.

This greed (because that’s what it is) has hobbled PEPM software providers over the long run, because it has made organizations unwilling to keep all of their data with them. The more data is stored, and the more customizations that are made, the higher the switching costs to move to another platform; and thus the PEPM thumbscrews are applied.

Sales and Marketing is a Unique Business Function

Every consultant laughs about the adage “my company is unique”—because it’s heard at the beginning of every new client engagement. However, laugh as they might, consultants do face a very significant learning curve at the outset of each engagement: learning the norms, standard operating procedures (SOPs), unique customer value propositions, and technology footprint of the new client. That said, there are many more similarities than differences, and this is why enterprise software exists.

In many functions, enterprise software is mature. Accounting, operations, and HR, for example, have clear procedures, best practices, and governing bodies. There is a right way of doing accounting (GAAP); there is a right way of managing high-quality operations and manufacturing (ISO-9001). There is not, however, a “right” way of doing sales and marketing. Sales and marketing is perhaps the fastest moving area of business, because its goal—gaining the attention of customers—is one of the most dynamic things imaginable.

This is why sales and marketing technology platforms proliferate faster than a CMO can keep up with—and why sales and marketing data are such a mess.

The Solution: A Standardized Sales and Marketing Data Platform

Even though sales and marketing technology proliferates wildly, the basic data elements of sales and marketing do not. New data elements are certainly added, but the basic data structures of understanding stimulus, response, prospects, customers, companies, and transactions remain wonderfully constant. Elements of these basic structures are already built into CRM systems. Certainly, Salesforce.com would love to be your sales and marketing data platform. However, a truly unified data platform is frustratingly hard to find.

At MarketBridge, we have spent 20 years understanding every sales and marketing data element under the sun—from web leads to outbound direct marketing to weekly time series—and we realized that there is a “Platonic ideal” sales and marketing data structure. In fact, we started structuring our clients’ databases using this structure years ago. It made reporting and modeling scalable and reproducible.

Our PlayCaller platform is the marketing and sales data platform we use for all of our analysis, reporting, and optimization. PlayCaller is, at its simplest, a Platonic-ideal sales and marketing data structure that can accommodate almost any type of company that spends money on marketing and has customers.
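
To make the idea of a constant core structure concrete, here is a toy sketch of the entities named above: companies, customers and prospects, stimulus, response, and transactions. The field choices are illustrative guesses, not PlayCaller’s actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Company:
    company_id: str
    name: str
    industry: str

@dataclass
class Customer:                  # covers prospects and customers alike
    customer_id: str
    company_id: str              # key to Company
    email: str
    is_prospect: bool

@dataclass
class Stimulus:                  # any outbound touch: email, call, ad
    stimulus_id: str
    customer_id: str
    channel: str
    sent_at: datetime

@dataclass
class Response:                  # inbound behavior tied to a stimulus
    response_id: str
    stimulus_id: str
    action: str                  # e.g. "click", "form_fill", "meeting"
    occurred_at: datetime

@dataclass
class Transaction:
    transaction_id: str
    customer_id: str
    amount: float
    closed_at: datetime
```

New data elements become new fields or new entities, but the spine of the model stays put.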

Figure 2: The PlayCaller Marketing Data Platform

Of course, for a data platform to be helpful, it has to have more than a universal schema. Thus, we built PlayCaller to have several other features to help break free of the PEPM, siloed tyranny that most modern sales and marketing organizations face:

  1. Loosely Coupled. PlayCaller intentionally has no partnerships with existing sales and marketing software vendors. This ensures that you won’t be trapped inside one system. If you switch from Salesforce.com to Microsoft Dynamics, just reverse the flow from read to write, and you’re good to go (see the sketch after this list).
  2. No PEPM. Our clients pay for the data they use, period. Less data = lower fees. This fully leverages the promise of cheap cloud storage.
  3. Backward Compatible. We never deprecate a field or a data element, so your imports don’t suddenly break.
  4. Forward-Thinking. Continuous releases extend the data entities available in PlayCaller to work with the latest technology. The marketing technology that no one is even dreaming of today will make it into PlayCaller shortly after our first client needs it, depending on priority.
  5. Best Practices Built In. We make sure to code and document best practices for arranging data, building models, and reporting on results. They aren’t used for every company or every use case, but when we need to create a rolling panel time-series regression, that “module” is parameterized and can be deployed in a day or two.
  6. Develop and Production-Deploy Models. Data science and artificial intelligence are as hot as ever, but the translation layer from idea to production remains ridiculously cumbersome in most environments. In PlayCaller, models can be tested and deployed, and fully scored lists (or objects, available via API) become available quickly. In summary: one place to get data, develop models, and deploy to production.
  7. Single Source-of-Truth Reporting. With PlayCaller, executives, managers, and practitioners have one place to look for results, with no PEPM fees for additional licenses.
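
As a schematic of what “loosely coupled” buys you (an illustration only, not PlayCaller’s actual code; both connector classes are hypothetical), reads and writes go through narrow interfaces, so switching CRMs means swapping a connector rather than rebuilding the platform:

```python
from typing import Iterable, Protocol

class Source(Protocol):
    def read_contacts(self) -> Iterable[dict]: ...

class Sink(Protocol):
    def write_contacts(self, contacts: Iterable[dict]) -> None: ...

class SalesforceConnector:
    def read_contacts(self) -> Iterable[dict]:
        # ...pull contacts from the CRM's export or API...
        return [{"email": "jane@example.com", "stage": "MQL"}]

class DynamicsConnector:
    def write_contacts(self, contacts: Iterable[dict]) -> None:
        for contact in contacts:
            pass  # ...push each contact into the new CRM...

def migrate(source: Source, sink: Sink) -> None:
    # "Reverse the flow from read to write": what one system was
    # reading, the other now receives.
    sink.write_contacts(source.read_contacts())

migrate(SalesforceConnector(), DynamicsConnector())
```

The point of the design is that migrate() knows nothing about either CRM; it sees only the two narrow interfaces.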

Conclusion: Focus on the Data Platform, Not the PEPM User Interface Software

Many executives buy into the myth that buying “that next piece of software” will solve all of their data problems. In fact, the opposite is true. By relying on CRM, marketing automation, and other “user interface” software companies to fix the data, the data just becomes more tightly entangled with their world.

By focusing instead on creating a loosely-coupled sales and marketing data platform, with inputs from and outputs to PEPM, user-interface systems, many benefits are gained:

  • Data scientists can work much more quickly and deploy better models
  • Executives get a clear understanding of all-up ROI
  • A true 360-degree view of the customer is made available, across sales and marketing
  • PEPM software fees are curtailed
  • If using a “platonic ideal” marketing data model, tables, fields, models, and reports don’t have to be reinvented over and over again across companies

The Last Mile Problem: 7 Steps to Closing the Insights-to-Outcomes Gap

Changing front-line behaviors with data-driven insights will be critical to realizing the benefits from your investments in analytics.  It’s harder than you think!

Perhaps you are one of those companies in the CMO Survey that is planning a 218% increase in analytics spend over the next three years, or perhaps you are part of the breakaway group that already invests more than 25% of your IT budget on analytics.

Either way, make sure you are investing enough in embedding data-driven insights into sales and marketing workflow and processes and enabling the changes in front-line behaviors that are necessary to achieve desired outcomes.

This “last mile” problem is one of the key challenges that many B2B companies with complex sales and marketing processes have trouble addressing. Without focus on, and investment in, last mile adoption, companies severely limit the return on their data and analytics investments. McKinsey estimates that analytics leaders spend more than 50% of their analytics budget on solving these activation issues.

Below you will find seven of the best practices for addressing last mile challenges that we have observed in working with numerous B2B clients across industries in the last 10 years.  Adopting these will help you close the Insights-to-Outcomes gap that is inhibiting many companies from realizing the full potential of AI and predictive analytics.

1) Begin With The End

Many companies start their analytics journey with data, and no doubt data accuracy, quality, and completeness are critical to analytics success. Dun & Bradstreet’s 6th Annual B2B Marketing Data Report suggests that 89% of B2B companies now agree that “Data Quality is Increasingly Important to the Sales and Marketing Organization.” That’s a bit like saying that gas is important for the car (or maybe electricity these days).

Best-in-class companies start with a very clear roadmap of what business issues they must improve with analytics – and in what order. This is what we call an Applied Analytics Strategy, and it needs to be driven down from the top of the organization.

What business use cases and processes will most benefit from greater insight, and what impact will that have on business outcomes?  Is it a cross-sell problem? A renewal or retention problem? A customer acquisition problem? All of the above?

With this prioritized roadmap, they then determine the type of insights they will need to change outcomes – and the data strategy that will be required to furnish those insights.

For more on Applied Analytics Strategy, read 5 CEO Principles for Developing an Applied Analytics Strategy.

2) Data, Data Everywhere, Not a Drop to Drink

Building a 360 view of the customer is still a challenge for most B2B companies today. In a recent survey, 69% of enterprises said they are unable to provide a comprehensive, single customer view today.

A sound data strategy – prioritized based on the analytics roadmap – is key to driving investments in data.  Many firms are deploying cloud-based data lakes as an answer to today’s disparate systems and proliferating martech stack.

Vendors are scrambling to facilitate data interoperability in a way that suits their business models (e.g. Salesforce buying Mulesoft and the recent Open Data Initiative announced by Microsoft, Adobe and SAP).

For more on ODI, read our thoughts on the MSA Data Alliance.

While companies wait for the marketplace to catch up, they must be rigorous in defining a clear data ontology for their prioritized use cases, and a master data model for key domains that support their analytic efforts (for example, accounts, opportunities, leads, etc.).

These must be supported with a governance model that assigns responsibility and accountability for data quality and accuracy.

More data isn’t always better.  Make sure you are leveraging the data that you already have – and focus on execution. Evaluate additional data sources carefully, and only add them in when they generate significant gain in the underlying models. And get rid of spreadsheets.

3) Mind the Gap

Data silos aren’t the only challenge.  Many companies still operate in organizational silos.  The gap between sales and marketing is still very wide at many B2B companies today – from a process, technology, data and reporting perspective.

Connected sales and marketing is still a thing.  Create shared processes, shared data and insights and most importantly, shared measurements that align around the end-to-end customer experience.

Consider the creation of a Go-to-Market Council that meets on a regular basis to coordinate marketing and sales activities and coverage for specific customer segments. For example, with one client we host a monthly call to review prior month’s results, next month’s objectives, campaigns and offers, and sales resource availability.

Based on that session, contacts and opportunities are assigned to defined contact strategies and dynamically allocated across marketing and sales teams and platforms, with shared objectives and KPIs. This “Activation Hub” coordinates customer engagement across all platforms and helps ensure all teams are singing from the same hymnal (see #5 below).

4) Play To Your Audience

Getting salespeople or partners to change their behavior can be a real challenge.  Most are creatures of habit.  Inserting analytic insights into sales motions and expecting sales people to change their call patterns without change management and sales enablement support is a sure recipe for failure.  Translating analytic output into sales context is critical to drive the adoption required to change outcomes.

Market leaders engage their sales resources early in the analytics process.  This includes initial input and hypothesis generation when an analytic approach is being determined for a specific business use case; input into presentation of analytic insight within existing tools; and process re-engineering ideation as processes become more data-driven.

It also involves piloting the approach with a subset of your best reps to get feedback about both the insights and the process before it scales to the entire team.

Finally, it requires active enablement and change management from the marketing or sales enablement teams to ensure adoption. This enablement and translation may come in many forms, but key examples include:

  • Converting analytic output into business context that marketers, salespeople, and partners will understand and internalize
  • Aligning appropriate messaging and content with “who / when” models so that “the right touch” isn’t killed by “the wrong stuff”
  • Embedding analytics into the existing sales and marketing workflow so that minimal process change is required
  • Consolidating cross-channel customer engagement dispositions into a single view, to minimize switching windows and applications and to provide a holistic view of customer engagement
  • Reporting positive business outcomes back to users so they believe in and trust the insights

5) Sing From the Same Hymnal

Aligning execution across all customer touchpoints is a significant challenge for almost every B2B organization today, for the reasons outlined above. It involves multiple technologies and platforms, disparate data sources, multiple resources and departments.

Creating a consistent cross-channel customer journey is as much a business strategy as a technology initiative. Gartner describes this as the emerging Customer Engagement Hub, which will enable companies to deliver a cross-channel experience.

This is a subject for another blog, but we wanted to share with you our vision of what is included to help convert insights to outcomes and deliver on the Applied Analytics Strategy.

6) Rinse and Repeat

Test and learn.  Test and learn.  Test and learn.  Don’t wait for the perfect analytics solution.  Use an agile approach in your deployment and refine continuously.

Don’t forget your control groups.  We have found many companies don’t like to have a holdout group that doesn’t get the “benefit” of data-driven insights, making it more difficult to identify real differences in performance resulting from analytics.
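
Part of the resistance melts away once teams see how simple the measurement becomes when a holdout exists. A minimal sketch, with fabricated conversion rates and group sizes:

```python
import random

random.seed(7)

# Hypothetical campaign results: the treated group received
# analytics-driven outreach, the holdout group did not.
treated = [random.random() < 0.062 for _ in range(5000)]  # ~6.2% conversion
holdout = [random.random() < 0.048 for _ in range(1000)]  # ~4.8% baseline

def rate(outcomes):
    return sum(outcomes) / len(outcomes)

lift = rate(treated) - rate(holdout)
print(f"treated: {rate(treated):.1%}, holdout: {rate(holdout):.1%}")
print(f"incremental lift attributable to the insights: {lift:.1%}")
```

Without the holdout line, there is no defensible way to claim the lift came from the analytics rather than the market.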

Maintain discipline in your measurement so that you can validate actual performance drivers, and push your analytics teams to manage their activities with a “product-centric” approach to data science that will ensure your analytics and data investments can scale to support the entire business.

For more on building a Product-Centric Data Science Organization, read here.

7) Trust But Verify

Defining the likely ROI for a specific business use case for advanced analytics is one of the first steps in the process.  Setting the goalposts for what you believe the outcomes should be is crucial, and should be a collaborative effort between the analytics team and the sales and marketing sponsors.  What is our baseline performance today?  How much of an improvement do we think we can make by applying analytics?  What are the levers that will drive that improvement?  If we achieve that level of performance, what are the quantifiable benefits to the business? This should all be documented at the beginning of the project.

For more on Measuring Return on Analytics, read here.

Make sure you set up a measurement framework and capability that will allow you to evaluate your performance against those targets, and more importantly help you identify where the gaps are against your initial assumptions.

For instance, we recently worked with a client to prescribe sales outreach within a defined time horizon before contract expiration.  We tracked activity to make sure target outreach volumes were occurring. When the modeled conversion rate improvements weren’t achieved initially, we were able to identify that a significant volume of outreach was still occurring to customers outside of the prescribed window, thereby dampening results.  This was subsequently addressed by additional enablement and management compliance.
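
A toy version of the compliance check that surfaced this kind of issue might look like the following; the account data are fabricated and the 90-day window is an assumed parameter for illustration.

```python
from datetime import date, timedelta

WINDOW = timedelta(days=90)  # assumed prescribed outreach horizon

touches = [
    {"account": "A-100", "touch_date": date(2018, 4, 2), "expires": date(2018, 6, 15)},
    {"account": "A-101", "touch_date": date(2018, 1, 5), "expires": date(2018, 6, 30)},
    {"account": "A-102", "touch_date": date(2018, 5, 20), "expires": date(2018, 6, 1)},
]

# A touch is "in window" only if it lands within WINDOW days before
# the contract expiration (and not after expiration).
out_of_window = [
    t for t in touches
    if not (timedelta(0) <= t["expires"] - t["touch_date"] <= WINDOW)
]

share = len(out_of_window) / len(touches)
print(f"{share:.0%} of outreach fell outside the prescribed window")
```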

This was a textbook example of the last mile challenge – if you cannot effectively change front-line behaviors, you will not be able to close the Insights-to-Outcomes gap and deliver on your investments in analytics and data.

 

MarketBridge Research: The Best B2B Marketing List Providers


Objective

We are trying to understand the best B2B lists for acquiring new customers. We are interested in lists used to conduct email, direct mail, and telephone campaigns, across small business to Large Enterprise. We are asking B2B marketers to rate the effectiveness, accuracy, breadth, and cost of various list providers.

Once the data are collected, we will report back to survey respondents with the overall results. Where possible, we will provide crosstabs of results by industry and targeted company size. This survey is 100% free, as are the results.


 

The survey will close on December 15th, 2018, and we anticipate having results by January 5th, 2019.

FinServ Digital Transformation: It’s All About the Data

CEOs and their leadership teams in FinServ companies—from banking to insurance to credit—are moving on to the next wave of digital transformation: data strategy and execution. While every company is at different stages of digital maturity (as are individual BUs and functions within each company), one trend is absolutely clear:

The future of FinServ digital transformation will be “make or break” based on the execution of data-driven go-to-market strategies and marketing systems

FinServ leadership teams are at a point where they need to ask some probing questions about technology strategy.  Below I have outlined a non-tech perspective on digital transformation and data that we have found useful for our senior exec clients.

1. The Good News: New Product Opportunities, Improved Workflow, Better Customer Experience

As McKinsey Digital accurately states “financial services companies have rich sets of exclusive information on their customers (key demographic details, where they live, their lifestyle preferences). When used responsibly, with respect for regulatory constraints and privacy concerns, this data can be analyzed for insights.”

Our experience is that FinServ leaders can find “customer intelligence gold” by tapping into the entire supply chain ecosystem of customer data: from digital/social media content consumption to customer transactions to downstream distributor sales and marketing (agents, advisors, etc.).  These opportunities include:

  • New Products: Most banks, insurance companies, credit card providers, etc. have demonstrated their ability to leverage customer data – profiles, purchase patterns, etc. – to identify new cross-sell and “flanker” product opportunities.  In fact, it is increasingly becoming a competitive disadvantage not to have a growing, broader product line to take advantage of (and amortize) the lifetime cost of customer acquisition.
  • Greater Automation: Workflow efficiency and customer experience across the value chain have significantly improved:
    • Direct marketing and sales
    • Customer management platforms
    • Distribution management (e.g. agents, advisors)
    • Underwriting, claims processing
  • Predictive Purchasing Analytics: More data means an enhanced ability to develop artificial intelligence applications to improve customer targeting, next best product offers, underwriting, etc. Smarter, data-driven decision-making will be the norm.

2. The Bad News: New Competitors and Distributed, Fragmented Systems and the Last Mile Challenge

The proliferation of “digital apps” has some downside as well.  Among the challenges our clients are most concerned about are:

  • New FinServ Competitors: While we are still in the early innings of “FinTech” innovation, companies are already seeing cloud-based and SaaS players like PayPal, Betterment, and even Amazon change the competitive landscape and economics.  While in 2018 perhaps only 5% of a legacy FinServ company’s customers use the “next gen” software apps, these new entrants are retraining customers in how to purchase and use FinServ products.  The tipping point is coming…
  • Fragmented Systems of Record: Traditional FinServ ERP and CRM systems are now being “augmented” with new SaaS software apps.  The problem: every new cloud app may streamline a workflow but also collects a vast amount of customer data in separate silos.  Bringing data together from multiple disparate systems can be a labor intensive, multi-year, multi-million dollar investment.
  • Inability to “Activate” Predictive Analytics: Even when “customer data gold” is aggregated and analyzed to create powerful predictive insights, landing that analytic insight into front-line workflow systems (e.g. marketing, web sites, CRM systems, agent/advisor systems, etc.) is a big challenge. We call this activating predictive analytics by embedding into day-to-day employee, agent/advisor, etc. workflow systems, and even customer-facing applications.

3. The Next Big Question: How Should Big FinServ Companies Integrate Their Data Ecosystem?

Digital transformation efforts are leading FinServ companies and their third-party partners to acquire and/or build many new, fragmented systems.  The next BIG challenge is bringing all this data together—or at least coordinating it—to not only create a 360° seamless customer experience, but also to enable internal data science operations to monetize its value.

We will get into managing the ecosystem of FinServ customer data through marketing data platforms in future blogs, but let me tee up two options that execs must consider:

  1. Build a totally customized data warehouse that requires massive numbers of bodies, hours, years, and $$$$. This, of course, is what many software and data management vendors encourage—and it makes sense, until it doesn’t. After all, every need must be met, right? Keep in mind, this is how the Pentagon buys fighter planes.
  2. Think and act like a nextgen FinTech software startup and aim to build a scalable and extensible data lake, enhanced in an agile way based on business needs in a matter of weeks, not years. The first version might only contain 10% of what’s needed, but the next one, released in two weeks, will have 12%, and so on. The most important data elements will be addressed first. And it’ll all be built on cheap, scalable cloud storage (e.g., AWS or Azure).

Clearly we have a bias to #2! This is how we have been building marketing data platforms for years—and it’s nice to see Agile imperatives starting to penetrate huge companies.

Thoughts on the Microsoft-SAP-Adobe Open Data Alliance

This week, Microsoft, SAP, and Adobe (I’ll call it the MAS alliance moving forward) announced an “Open Data Initiative” or ODI. The general consensus is that this is a competitive response to the increasing domination of Salesforce.com in the CRM and marketing technology space, whose PEPM (per employee, per month) SaaS (software-as-a-service) CRM platform has continued to eat up share. Oracle is also not included in this alliance. I think this is a huge event in the marketing technology / CRM / data science world. In this article I’m taking a marketing technologist / analytics perspective to try to deconstruct this and ask some questions.

The ostensible reason for the Open Data Initiative is to make machine learning, AI, and general data transfer much easier between and across systems. The promise seems to be, “if you buy Microsoft, SAP, and Adobe, your data scientists will be able to develop models much faster, and real-time AI algorithms will be able to use data seamlessly from across the systems.” This is a nice promise.

Any marketing operations / data executive knows the frustration that comes with knitting disparate customer-related systems together. Thousands of hours of coding and testing seem to be required with each new system, and even at that point, a lot of manual download / transform / upload analyst time seems to be required. One thing is for sure: This is a needed solution that, if it works, will be in high demand.

More specifically, it seems that three guiding principles have been stated for the ODI:

  1. Control: Every organization owns and maintains complete, direct control of their data
  2. AI Support: Customers can enable AI-driven business processes to derive insights and intelligence from unified behavioral and operational data
  3. Partner Ecosystem: A broad partner ecosystem should be able to easily leverage an open and extensible data model to extend the solution

It’s clear that these three differentiators were chosen deliberately as a response both to Salesforce, and to very real pain-points of organizations, particularly when it comes to marketing / customer data. Let’s go through each in a bit more detail:

Control

This is the most obvious response to Salesforce, whose PEPM walled garden frustrates CTOs who have to pay $80 for an employee who needs to log in once a month to fill in a campaign form. It’s true that apps can be built outside of Salesforce via API, but even so, Salesforce makes it very clear that it wants everything to stay inside the system. APIs—particularly for small companies—are rate limited and can be clunky. Most smaller companies, and even some larger ones, still use bulk manual imports and exports with Data Loader. MAS seems to be saying “you own your data, for reals.” We’ll see how true that is.

AI Support

AI is at a point where “websites” were in 1997. Seemingly overnight, $20M firms arose whose only competency was coding HTML and standing up some active server pages. Today, most technologists still aren’t clear on whether AI modules will be provided mainly by vendors, or developed internally. Software providers clearly want marketers to use (and pay for) their AI solutions, instead of hiring data scientists and building bespoke algorithms. The MAS alliance is smartly promising that AI endpoints will be easily accessible to companies that choose a MAS solution. Of course, they’re still hoping that these companies use the built-in AI being developed by each, or the AI being developed on the Azure cloud. But, by providing an “option of freedom”, they’re putting executives’ minds at ease.

Partner Ecosystem

Finally, the partner ecosystem approach is really about the open and extensible data model. A trend we increasingly see is the desire for a “platonic ideal data model” for customer and marketing data. This is absolutely possible, just as it is for accounting data (XBRL) or healthcare enrollments (HIPAA 834). (Yes, I know these are both old and need refreshing, but they’re still standards.) By providing this open and extensible data model, companies can build applications and data exchanges without fear of deprecation, meanwhile trusting that other companies will build their software using the same object and data definitions. This is really smart, but it could set off a VHS/Betamax war if Salesforce follows suit and announces its own standard.

Other Observations

Structured Marketing Data Platform vs. Data Lake

What’s also interesting about this announcement is what it’s not, which is a marketing data platform. It’s clear that each member of the MAS alliance wants to keep its part of the sandbox separate, thank you very much. The idea of a co-hosted real-time data store between the three—think an “AI and data exchange real-time clearing house”—was probably discussed but discarded. It’s too risky, too expensive, and frankly too heavyweight and tightly coupled to be successful.

However, a standards-based partnership does support a data lake approach, in which real-time data are fed “raw” from each system into a high-speed, cheap-storage cloud environment, which coincidentally (lol) Microsoft is happy to provide. I’m still not convinced that data lakes are the long-term future for scalable marketing analytics and AI, because of the mess that results and the human data cleaning it requires—but this could change that, and make the data lake concept more realistic for more conservative enterprises.

Speed and Uptake

One thing that will be interesting is to see if the standard data model is quickly deployed, documented, and used. MAS probably knows how many large enterprise customers they have in common, and I’m sure several are lined up to pilot and prototype. Forbes points out that this isn’t just a paper launch, and that a lot of work has already been done.

Blockchain?

One other thing to watch is blockchain. Blockchain is totally overhyped, but this is a use case where it could work. If Microsoft is sharing an opportunity back and forth with Adobe in anything other than a one-way handoff, a distributed ledger could make that opportunity in essence a “shared object” that remained secure. It would be clear who wrote what to the record, and when—even with the record existing simultaneously in two (or three) systems. It’s kind of like a quantum realm of marketing data. It’s both a particle and a wave… or maybe I’m torturing that analogy.

Competitive Response

Lastly, I’ll predict that Salesforce.com further strengthens its partnership with Amazon. Microsoft clearly wants all of the cloud storage and processing business for the MAS partnership, and Salesforce is still storing most of its massive CRM data on its own proprietary (and obsolete) cloud of siloed Oracle databases. That’s changing, as Salesforce has committed to put more of its cutting edge products on AWS—but will the walls come down more, and will Salesforce think about decoupling its data storage from its UX? If it does, those PEPM fees will go with it, but it would be an interesting counter-move to the open-extensible alliance that emerged—albeit in an inchoate form—from Microsoft, Adobe, and SAP.

 

Building Data Intuition for Marketers

The more I interact with seemingly sophisticated companies that fat-finger their attempts at personalization, the more I appreciate the efforts of companies that really get me. You would think I’m referring to the independent corner coffee shops, but it’s when a behemoth enterprise somehow manages to establish a relationship with me that I’m truly impressed. As a marketer, I know that the first prerequisite to “getting” a customer is a robust, accurate, and densely populated customer database. Increasingly, these data are built on the backs of consumers’ digital activities—the sites they visit, their musical habits, or whether they are a home remodeler or a fly fisherman.

The Face Validity Test

However, even with all of that data, the feeling of “you get me” is still elusive. One reason is that too many sophisticated companies forget the basics because they are so sophisticated. Marketers assume that the models and AI components they get from data scientists are right if they show a lift. Likewise, data scientists assume marketers know their customers at an intuitive level. The real customer is all too often lost in the middle. So, grab some coffees, and ask your data or data science partner out on a “data date.” Your goal is to get to know each other—and the real customer hidden in the data.

Here are three conversation starters:

1. Get and examine a tidy data extract

It’s impossible to get an intuitive understanding of data stored in normalized relational tables, or spread out in JSON files in a data lake. A solution is the tidy data extract:

  • Each row is a customer, and each column is something about that customer. The data will start out in a relational database, and that’s OK. But you want something you can see, touch, feel and eyeball quickly.
  • You don’t want a sample of 1 million records (if your customer base is that large), but you also don’t want a sample of 10 records. With too many records, both you and Excel will be overwhelmed; with too few, you won’t see enough variation in the data to really get it.
  • This should be a point-in-time view, and it doesn’t have to be real-time.

Go left to right, top to bottom. See the column headings. Eyeball the data. Does it mean anything to you? Are you capturing what you thought you were capturing? Do you need to ask for clarification? Are there many blank fields? Producing this kind of extract should be easy for your data partner. It might seem like a “no-brainer” to them, but you’ll immediately see texture that isn’t apparent in models, charts, or deciles.
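
Producing the extract is typically a few lines of code for your data partner. Here is a minimal sketch in Python with pandas, using hypothetical file and column names:

```python
import pandas as pd

# Hypothetical relational tables; in practice these come out of your
# CRM or marketing database, not CSVs with these exact names.
customers = pd.read_csv("customers.csv")      # one row per customer
emails = pd.read_csv("email_touches.csv")     # many rows per customer

# Collapse the one-to-many touch table into per-customer columns...
email_stats = (
    emails.groupby("customer_id")
          .agg(emails_received=("email_id", "count"),
               emails_opened=("opened", "sum"))
          .reset_index()
)

# ...then join, so each row is a customer and each column says
# something about that customer: a tidy extract you can eyeball.
tidy = customers.merge(email_stats, on="customer_id", how="left")

# A few thousand rows is enough to see texture without drowning.
tidy.sample(n=min(5000, len(tidy)), random_state=42).to_csv(
    "tidy_extract.csv", index=False
)
```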

2. Get the data dictionary

Yes, you just opened the documentation can of sour gummy worms—I guess that’s why data partners often make a face when they hear “data dictionary.” Still, it’s worth asking for one, because if one doesn’t exist, there’s a quick-and-dirty way to get one started.

Take the entire first-row headings of your customer data extract and transpose them on to a new spreadsheet. Start defining as best you can each of the column headings in plain English, with whatever limited knowledge you have. Give it a once-over and pass it on to your data-keeping friend for “review and clarification.” Kindly ask her to fill the remaining blanks, as well as some additional info. You may have to go through a few iterations of this back-and-forth, but do your best to make it thorough and keep it simple.
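
If you want to jump-start the skeleton yourself, a few lines of pandas will transpose the headings and leave blanks for the definitions. File and column names here are illustrative:

```python
import pandas as pd

extract = pd.read_csv("tidy_extract.csv")

# One row per column of the extract, with blanks for the definitions
# you will draft and your data partner will review.
dictionary = pd.DataFrame({
    "field": extract.columns,
    "example_value": [
        extract[c].dropna().iloc[0] if extract[c].notna().any() else ""
        for c in extract.columns
    ],
    "plain_english_definition": "",  # you fill this in first
    "allowable_values": "",          # your data partner clarifies
})

dictionary.to_csv("data_dictionary_draft.csv", index=False)
```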

It’ll be the effort of writing down the definitions that will be valuable. This is college study habits 101—study the information by writing it down. What does this field mean? What is the acceptable range of values? You’ll find yourself full of insights after an exercise like this, ready to test in customer-facing go-to-market efforts.

3. Understand data coverage

Whereas the data dictionary is an English-sentence version of what the data is supposed to show, data coverage metrics uncover the truth: the percentage of customer records with data populated for a given field. Any data scientist will tell you that missing data is the bane of their existence. A variable looks solid, and then 20% of the records are NAs. None of the options is good in this case (remove the records, interpolate, take the average, or set to zero for numerics).

This is why it’s critical to find the gaps and begin to understand the strengths and weaknesses of your data asset. As you work through this understanding, you can start with the basics, as in “what’s filled out vs. what’s not,” and put a percentage to it. Here are some examples, with a short code sketch after the list.

  • Text fields are usually the most basic. For example, [Customer_Email_Address] might have a 76% coverage rate. Coverage doesn’t imply deliverability, but it’s a start.
  • Factor fields, like “segment” or “gender,” are also easy to understand. In a database, these will still be stored as text, but they are different in practice—essentially pick-lists with a set of allowable values. In some cases, you’ll find a value like “unknown” that is distinct from an actual missing record. This might mean something different, so make sure you understand whether “unknown gender” means the customer doesn’t want us to know or we never asked.
  • Continuous variables like income are trickier. Your data friend might claim that 100% of records have income populated—but what if most of those are 0? To avoid these kinds of surprises, ask for a distribution of the data, like a histogram, or at the very least statistics like the mean, median, and percentiles (usually the 10th and 90th percentiles are good).
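
A quick sketch of how the coverage numbers might be computed, including the zero trap for continuous variables (file and column names are again illustrative):

```python
import pandas as pd

extract = pd.read_csv("tidy_extract.csv")

# Coverage: percent of customer records with each field populated.
coverage = extract.notna().mean().mul(100).round(1)
print(coverage.sort_values())

# "Populated" is not the same as "meaningful": check for zeros and
# ask for the distribution, not just the coverage rate.
if "income" in extract.columns:
    pct_zero = (extract["income"] == 0).mean() * 100
    print(f"income populated but zero: {pct_zero:.1f}%")
    print(extract["income"].describe(percentiles=[0.1, 0.5, 0.9]))
```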

These three exercises are just starter topics. Grabbing a coffee—or a conference room—with your data counterpart every month or so to get your collective hands dirty will go a long way to creating customized-feeling, one-on-one interactions with customers. It is this raw feel, more than models or deciles, which will drive the intuition you need as a direct marketer. Remember, it’s not always fancy models and AI that drive authentic, real-feeling experience. Spending time at “ground level” with your customer data isn’t necessarily sexy, but it drives real-world insights.

Let’s Talk About the HiPPO In the Room. Five Steps to Activate Data-Driven Sales and Marketing

Increasing Data and Analytics Usage is the #1 Marketing and Sales Priority for B2B Companies

No one will be surprised to hear that – according to a recent Forrester study1 – 82% of B2B companies believe that increasing the use of data and analytics for marketing and sales is a top priority over the next 12 months – with 22% saying it is critical!

What may surprise many—according to the same study—is that 48% of B2B companies still use their intuition over data to guide their decisions. In today’s world of Big Data, Machine Learning and AI, why do so many still rely on intuition over insight?

Perhaps for some it is the HiPPO effect – the Highest Paid Person’s Opinion – and a latent organizational authority bias.2 In many organizations, decisions often still come down to the dominant HiPPO in the room, where authority trumps data and insight.

Of course, the study identified a number of more “traditional” organizational challenges in activating data-driven decision-making (since I’m guessing HiPPO was not a category on the survey!).

Commonly cited challenges included:

  • Lack of executive sponsorship (hey – maybe these are the HiPPOs after all!)
  • Lack of mature analytic capabilities
  • Poor data management practices and data quality issues
  • Multiple technology platforms resulting in data silos and challenges in activating multiple channels with any resulting insight
  • Complexity in managing the volume and velocity of data inside and outside the organization
  • Organizational silos and inefficient processes

And the other 52% who are using analytics and insight to make decisions instead of relying on their HiPPOs? Those analytics leaders reported significant improvement in key sales and marketing metrics over the laggards: sales cycle time, return on marketing investment, customer retention and loyalty, etc.

Avoiding the Trough of Disillusionment

The analytics view certainly appears to be worth the organizational climb based on the results that the industry leaders are achieving.

That said, the Gartner Hype Cycle suggests that Predictive Sales Analytics, Predictive B2B Marketing Analytics, Machine Learning, and other Data Science capabilities are just now entering the Trough of Disillusionment.

So how can companies making those increased investments in data and analytics for sales and marketing in 2019 avoid the Trough of Disillusionment and move rapidly up the Slope of Enlightenment toward the Plateau of Productivity?

Collectively we refer to this as the Activation Challenge: How do organizations extract greater value and return from their substantial investments in data and analytics, and overcome the HiPPO effect by activating data-driven decision-making directly into their sales and marketing processes?

Here are five best practices to get started with analytics operationalization based on our experiences working with clients in this B2B space over the last ten years:

1) Think Big, But Start Small

As our Chief Analytics Officer so eloquently noted in an earlier blog post, Small Data is still where a lot of the ROI is hidden in B2B sales and marketing. Many of the best problems out there today—the ones that will yield the most incremental lift in terms of leads, opportunities, loyal customers, and dollars—deal with small data.

Evaluate additional data sources carefully, and only add them in when they generate significant gain in the underlying models. Work slowly, getting wins on the board (and publicizing them) based on business impact, not the “best data” or the “coolest models.”

2) Lean on Agile Best Practices

The core tenets of the Agile methodology are pretty simple: stay close to your customers and understand their stories; work in short cycles; don’t overplan; and show your work aggressively and often. Successful marketing analytics organizations follow these to the letter. Unfortunately, many marketing analytics organizations we see operate with an isolated, siege mentality, only coming out for air and water when they need more data. This almost guarantees that analytics won’t make its way into the parts of the business that are actually talking to customers. Specifically, gather feedback from your pilot participants on a frequent basis (we typically do debrief calls with all pilot participants every other week during a pilot program).

Agile is a product development framework, and marketing analytics is really about building analytic products. They’re not applications with screens and buttons (although they can be), but they are assets that are continually improved. Taking a “product-centric” approach to your data science and analytics capabilities ensures that scale is built over time, and that results will be reproducible back to data using source code.

3) Connect the Dots

Organizations that spend all of their time integrating data don’t get anything done, but the reverse is also true: try doing analytics when you’re spending six hours a day hunting down CSV files from Joe in Accounting. Fortunately, new techniques in data integration—for example, the semi-structured “data lake” concept using cheap cloud storage—allow companies to build an integrated view of their prospects and customers an order of magnitude faster and cheaper than was possible even 18 months ago.

Creating an integrated view of customer engagement touchpoints will enable your organization to see how your customers are engaging across all of your channels and will allow data scientists to ask and answer better questions. Some of the best organizations create an Insights Center of Excellence that is responsible for the “data step”, allowing data scientists to work on solving problems, not munging data.

4) Translate For Your Internal Audience

Several years ago, we developed a killer Likelihood to Purchase model for a large Asset Management firm. Each month, we scored all customers on their “likelihood to purchase” in the next 30 days. We piloted it with half of the sales force.

The client measured the model results for six months (thereby avoiding any “fox-in-the-henhouse” measurement bias). At the end of the pilot, the client reported that 98% of the advisors we predicted were going to purchase in the next 30 days actually purchased in the next 30 days! The model far exceeded their expectations. In fact, I remember telling the head of distribution that had I come into our first meeting telling them we could predict with 98% accuracy who would purchase in the next 30 days, they would have likely thrown me out.

But an interesting thing happened on the way to the predictive analytics Plateau of Productivity with this client – the pilot sales teams did not significantly change their call patterns, even when armed with this killer insight and knowledge.

Why, you may ask? As I have long said, “the sales force has an infinite capacity to absorb all productivity gains.” In this case, the highly-educated, highly-compensated salesforce was a bloat of HiPPOs who knew better, and simply continued to call on who they always called on. (Did you know a herd of hippos is called a bloat? I didn’t, but it certainly seems fitting in this case!)

We learned that we needed to do a much better job educating the sales force on what to do with the insights, and why, in terms they could understand. To be successful, data and analytics investments must also be accompanied by active enablement of your sales (and marketing!) channels to ensure that insights are translated to meaningful actions, without requiring everyone to have a PhD.

Without this translation layer, even the best models will not produce any outcome at all!

Enablement and translation come in many forms, but some examples that work include:

  • Converting analytic output into business context that marketers, salespeople, and partners can understand and internalize
  • Aligning appropriate messaging and content with “who / when” models so that “the right touch” isn’t killed by “the wrong stuff”
  • Embedding analytics into the existing sales and marketing workflow so that minimal process change is required
  • Consolidating cross-channel customer engagement dispositions into a single view, to minimize switching windows and applications and to provide a holistic view of customer engagement
  • Reporting positive business outcomes back to users so they believe in and trust the insights

5) Work With Trusted Partners

It sounds self-serving, I know, but according to Forrester, analytics leaders rely on trusted external partners for both data and analytic services.

Every client we work with has its own data science team(s). We are not their competitors; we are their partners. We are often described as an accelerant for driving results in market. A champion/challenger approach to analytics should yield greater results.

Avoid “black box” solutions that cannot be brought in-house or internally managed by your own data science team.

And as noted above, without appropriate alignment and activation of your sales (and marketing) channels, even the best data and analytics will not deliver results if you can’t get your sales and marketing resources (and eventually your customers!) to change their behavior.

Ensure your partners have experience in enabling your sales and marketing channels with analytics and insight, content and messaging – not just in developing analytic models.

Summary

Banishing the HiPPOs from sales and marketing will ultimately lead to less waste, better growth, happier customers, and happier employees. Getting there, however, doesn’t happen by installing the latest machine learning technique alone—it happens when analytics is translated into a context that internal users can understand, so they don’t fall back on intuition or opinion alone.

1 The B2B Data Activation Priority: A Forrester Consulting Thought Leadership Paper Commissioned by Dun & Bradstreet, May 2018

2 Forbes.com, Data-Driven Decision Making: Beware Of The HIPPO Effect!, Bernard Marr, October 26, 2017

Hype Cycle, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity are trademarks of Gartner

What About Small Data? Part 2

Getting Back to Growth by Playing Small Ball

The ADBUDG curve is a handy, 40-year-old heuristic for modeling marketing spend vs. return. It was first used for broad-reach advertising. The concept is pretty simple:

  1. The curve starts out flat, as dollars are invested to get breakthrough with a group of consumers
  2. Then, the curve gets steeper as marginal returns reach profitable levels
  3. Finally, the curve flattens as the market is saturated with messaging, and the advertising no longer has much marginal effect.

Figure: The ADBUDG curve

Certainly, both direct and broad-reach marketers know of this curve, even if they’ve never heard of the word “ADBUDG.” There is a maximum amount of goods or services you can get the market to buy before marginal marketing dollars do not drive a profitable return.
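
For the curious, the functional form behind the heuristic, due to John D. C. Little, fits in a few lines. A minimal Python sketch with made-up parameters, showing the flat, then steep, then flat shape:

```python
# ADBUDG response to spend x: floor b, ceiling a, shape c, and
# half-saturation d. All parameter values below are made up.
def adbudg(x: float, a: float = 100.0, b: float = 20.0,
           c: float = 2.0, d: float = 50.0) -> float:
    return b + (a - b) * x**c / (d + x**c)

for spend in [0, 2, 5, 10, 20, 50]:
    print(f"spend {spend:>3} -> response {adbudg(spend):6.1f}")
```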

However, this “plateau level,” at least on an aggregate basis, seems to be getting lower year after year. Over the past two decades or so, at least four factors have conspired to compress this curve for marketers, whether B2C or B2B (I use the term “consumer” for both):

  1. Consumers have become savvier, spotting obviously poor execution and dismissing it out of hand
  2. At the same time, consumer behavior has become more search-driven: consumers turn the tables on the advertiser, waiting until they have a real need and then finding what they want on Amazon or Google
  3. The supply of quality interactions—whether on radio, telephone, television, or at retail—has gone down as consumers have shifted their behavior towards platforms like Netflix and Amazon, and have stopped picking up their phones
  4. At the same time, the marginal cost-per-touch for low-quality interactions (junk email, crap display ads, lousy ad time on the long tail of the cable TV spectrum) has gone down, encouraging marketers to blast consumers with touches and further conditioning consumers to “ignore” marketing tactics

The net effect of these trends can be visualized as a series of curves getting flatter and flatter over time, as the aggregate “ROMI,” or return on marketing investment, gets lower and lower.

Downward ADBUDG curve

A traditional way to deal with this problem is optimization: take that ADBUDG curve and find the best possible marketing mix, message, and consumers to target, lowering costs to get the same return. This works well over the short run. Everyone is happy, because marketing has increased its ROMI. But there is no real impact on growth—the plateau of diminishing returns is simply reached sooner.

Optimized ADBUDG curve
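
In terms of the sketch above, optimization amounts to shifting the curve left: the same ceiling is reached at lower spend. A rough way to see it, with the delta values again being illustrative assumptions:

```python
# Optimization shifts the ADBUDG curve left: same ceiling (r_max), but the
# steep region arrives at lower spend (smaller delta). ROMI rises; the
# plateau does not. All parameter values are illustrative assumptions.
def adbudg(spend, r_max=100.0, gamma=2.5, delta=50.0):
    return r_max * spend**gamma / (delta + spend**gamma)

for s in [2, 5, 10, 20]:
    base = adbudg(s, delta=50.0)  # pre-optimization curve
    opt = adbudg(s, delta=10.0)   # optimized mix / message / targeting
    print(f"spend={s:>3}  baseline={base:5.1f}  optimized={opt:5.1f}  "
          f"lift={opt / base:3.1f}x")
```

At low spend the lift is large, and marketing looks great; at high spend both curves sit on essentially the same plateau, which is exactly the long-run strategic problem described next.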

Over the long run, however, there is a strategic problem—one that CPG companies, for example, have been facing for years—and that is that growth is harder and harder to come by. Yet the economy is growing, and consumers are buying things. So where have the dollars gone? They’ve gone to small competitors, who have marketing departments of one or two, no marketing mix models, and are limited to small, agile digital campaigns that go from ideation to execution in days, not months.

If you want to see a concrete example of this, go outside and look at anyone between the ages of 14 and 40 today, at the brands they display conspicuously, at the clothes they wear, at the music they listen to. You will notice that most are telegraphing their individuality in an extremely deliberate way. There is a term that Sigmund Freud coined a hundred years ago that I think perfectly describes consumers and companies as buyers today—the narcissism of small differences:

Every single consumer / company thinks that they are unique, even if they’re actually quite similar.

It’s up to marketers and sellers to recognize this, understand these buyers, and get them what they need to feel like they are being treated as individuals.

The implication is that now, more than ever, marketers and salespeople need to go “small” when everyone is talking about “big.” Big data, big advertising, scaled campaigns, and machine learning are great—but what they are so often missing is the touch of the artist, and careful, high-resolution, insights-driven thinking.

I was talking to someone the other day whose wife is about to age into Medicare, and she’s gotten hundreds of “idiotic” (his word) touches over the past months from various companies blasting her with the same message, over and over. She is now completely turned off on Medicare Advantage. She’ll buy something eventually, but only when she receives the right touch that acknowledges her as an individual—well, maybe not an individual, but at least as someone unique.

Companies optimizing their Medicare Advantage campaigns might put her in a lower decile, as she doesn’t respond to these touches, but she has the means and the need and will buy. Instead of optimizing in aggregate, or across coarse segments of tens of millions, these companies should think about micro-segmenting their campaigns, and understanding what she as a consumer actually needs and wants when it comes to health care; what her specific habits are; and how she lives her life. Simply acknowledging these differences will go a long way towards true optimization, and will drive incremental growth.

Concretely, this means micro-segmentation. This isn’t huge, enterprise-scale segmentation; rather, it’s a guerrilla, agile attempt at understanding small cells of customers and reaching out to them in unique ways, all measured rigorously. For each micro-segment, a marketer should strive for unique insights, including:

  • Core needs, wants, insights: What is the real, non-trivial, second- or third-level insight that makes this small segment of consumers care about what I’m talking about?
  • Channel mix: How do I build a go-to-market strategy that intercepts consumers and companies where they travel, and where they care about what I’m selling?
  • Content / message: How do I get the right, unique content and message in front of that cell of a few companies / a few hundred thousand consumers, where it will really resonate?
  • Product: It’s not always possible, but can I build a product portfolio with enough diversity to acknowledge difference, while staying profitable?

This requires striving for breakthrough insights among small cells, and these insights then have to spread throughout a marketing organization that embraces “small ball”—the wins that come from looking for nuggets instead of the whole gold mine.

This does not mean giving up on analytics or data science—it’s actually the logical extension of it. Analytics goes from being huge and aggregate to micro and artful.

It does mean spending more time looking for the insight sources that will give marketers a greater depth of insight into markets, prospects, and customers. It does mean bringing data scientists into qualitative research sessions.

So what about that ADBUDG curve getting squashed by oversaturation? If you play analytical “small ball,” you’ll be optimizing lots of little ADBUDG curves, one for each of the micro-segments. The saturation level for each of these is higher, so when they are summed up, the aggregate curve rises.
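
A toy version of that summing argument, with all numbers as illustrative assumptions: split the same budget across micro-segments whose tailored touches earn each cell a somewhat higher saturation ceiling, and the summed return beats the single aggregate campaign.

```python
# "Small ball" arithmetic (all numbers are illustrative assumptions):
# one blanket campaign vs. the same budget split across micro-segments.
def adbudg(spend, r_max, gamma=2.5, delta=10.0):
    return r_max * spend**gamma / (delta + spend**gamma)

SEGMENTS, BUDGET = 5, 50.0

# Aggregate play: one campaign against the whole market, ceiling 100.
aggregate = adbudg(BUDGET, r_max=100.0)

# Small ball: budget split evenly; each cell's ceiling is assumed to be
# 30% above its pro-rata share, because the tailored message resonates.
per_cell = adbudg(BUDGET / SEGMENTS, r_max=(100.0 / SEGMENTS) * 1.3)

print(f"aggregate return:  {aggregate:6.1f}")            # ~ 99.9
print(f"small-ball return: {SEGMENTS * per_cell:6.1f}")  # ~126.0
```

The assumed 30% ceiling lift is, of course, the whole ballgame: it is the payoff of treating each cell as unique, and it has to be earned with the insight work described above.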

Some of this can be done with technology—for example, some of the very good targeting that is possible with Instagram today based on location, text mining, and imagery—but much of it still boils down to good old-fashioned insights work and making cell sizes smaller. Another way to think about small-cell marketing is in terms of a campaign / micro-segment portfolio: each “fund” is optimized, and then the entire portfolio of campaigns is optimized for efficiency as a whole.
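
One simple way to sketch that portfolio view, where the segment parameters, step size, and budget are all illustrative assumptions: allocate each budget increment greedily to whichever micro-segment curve offers the best marginal return.

```python
# Portfolio-style allocation across micro-segment ADBUDG curves:
# greedily fund whichever segment offers the best marginal return.
# Segment parameters, step size, and budget are illustrative assumptions.
def adbudg(spend, r_max, gamma=2.5, delta=10.0):
    return r_max * spend**gamma / (delta + spend**gamma)

segments = [(30.0, 5.0), (25.0, 12.0), (20.0, 8.0), (35.0, 20.0)]  # (r_max, delta)
spend = [0.0] * len(segments)
STEP, BUDGET = 0.5, 40.0

for _ in range(int(BUDGET / STEP)):
    gains = [adbudg(s + STEP, r, delta=d) - adbudg(s, r, delta=d)
             for s, (r, d) in zip(spend, segments)]
    spend[gains.index(max(gains))] += STEP  # fund the best marginal use

for (r, d), s in zip(segments, spend):
    print(f"ceiling={r:4.1f}  spend={s:5.1f}  return={adbudg(s, r, delta=d):5.1f}")
```

Because S-shaped curves are not concave at low spend, greedy allocation is only a heuristic here; the point is to illustrate the portfolio idea, not to stand in for a production optimizer.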

Growth Micro ADBUDG curve

One final note: I continue to believe that insights-driven, small-ball analytical marketing is ultimately an organizational challenge.

Marketing and sales organizations have to be built to think like customers, like individuals, while at the same time being relentlessly data-driven.

These two things are not mutually exclusive. The old trope of the “geeks” versus the “creatives” is just plain wrong. Merging these two worlds, and successfully playing analytical small ball, is a really good way to move a big company from efficiency to efficient growth.


5 CEO Principles for Developing an Applied Analytics Strategy


As AI and Facebook’s data usage both gain greater attention from the media, customers, investors, and regulators, it’s time for CEOs to get deeply engaged in an Applied Analytics Strategy. So what is an Applied Analytics Strategy? Applied analytics is the strategic use of data for decisions within a given environment; in this case, business, marketing, and sales decisions. Yet too many C-level execs are abdicating major strategic decisions to their data scientists, data vendors, and software suppliers. Claiming “lack of expertise” in applied analytics is no longer an acceptable position. CEOs and their leadership teams must roll up their sleeves and get engaged.

Here are five basic principles CEOs must embrace:

1) Deep customer data analytics is a competitive requirement.

Yes, CEOs need to be very concerned about customer privacy, but leading competitors (particularly cloud-based start-ups) are pushing the envelope on predictive and AI applications to better target and serve your customers.

2) Customers expect you to know them better.

Underneath the privacy concerns, there is still a growing customer expectation that you have the data on hand to understand customer interests, and to ethically and productively sell to and service them better. Amazon, Netflix, Google, etc. are conditioning consumers (and therefore B2B buyers) to expect more targeted content and tailored solutions based on their data profile.

3) Don’t let your strategy be driven by data and software vendors.

Too often I see mid-level executives operating without a top-down applied analytics strategy, spending on what vendors want to sell them rather than on what they need. With everything moving to the cloud, data and software vendor overload may actually be taking your business backward. The 80/20 rule (20 percent of your activities will account for 80 percent of your results) applies to both data and software.

4) Reverse engineer your data and Applied Analytics Strategy.

Rather than buying what vendors promise, talk to your front-line marketing, sales, customer service, and operations executives to determine what they need to succeed. For example, your sales team needs three basic questions answered by Applied Analytics: a) whom should we target, b) what product(s) and messaging should we use for these unique prospects, and c) how should they be engaged (face to face, phone, email, website, etc.)?

5) You already own a data gold mine.

The most powerful data is already inside your internal systems. Unfortunately, this data is often siloed, either physically or politically. Specific data on existing customers and their patterns is within reach, and using that information, you can make informed assumptions about new customers. There is so much powerful data in your existing systems that can be used: CRM, purchase history, customer service inquiries, product usage (including IoT), website downloads, and social media dialogue.

CEOs and their leadership teams can no longer defer their Applied Analytics Strategy to the “analytics experts” alone. Get engaged, get knowledgeable, and make smarter investments.