
What Uber Teaches Us About Great Sales and Buyer Enablement

The “last mile” of revenue generation (getting a qualified lead to close) is always the biggest hurdle. Whether B2B or B2C, this ultimately requires some level of personal relationship development and product customization. Yet many times, businesses don’t realize the costs associated with qualifying a lead in the first place – regardless of whether or not it closes!

Consider a real-life scenario:

I recently flew from DC to Atlanta. I needed to get from Point A (DC) to Point B (Atlanta). The total trip cost was about $470 or 74 cents per mile:

| Leg                 | Miles | Cost | Cost/Mile |
|---------------------|------:|-----:|----------:|
| Airfare, DCA to ATL |   600 | $380 |     $0.63 |
| Uber to DCA         |    12 |  $40 |     $3.33 |
| Uber from ATL       |    20 |  $50 |     $2.50 |
| TOTAL               |   632 | $470 |     $0.74 |

Notice that the “last mile(s)” – getting me from my specific location in DC to my specific destination in Atlanta – cost me 4–5X more per mile than “bulk” airfare. Yes, we all intuitively know this – but do we realize the same economics apply to sales? After all, prospects need to go from point A (unqualified, unknown) to point B (closed deal) as well. Bulk demand generation is always measured in “cost per qualified lead” – the same as cost per mile above. Say that’s $100 per lead on a $10,000 potential transaction (1%). If you only close 1 in 7 leads, that’s $700 (7%) in lead-gen cost per deal closed. And that doesn’t even include the cost-to-sell (sales rep salary, benefits, commission, and supporting infrastructure).

The math is simple – I want to get to Atlanta at the lowest cost per mile, and your business wants to get to close at the lowest cost per lead. But generate as many leads as you want at whatever cost per lead… if your sales team can’t close the last mile, total cost-to-sell goes through the roof!
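To make the lead-side arithmetic explicit, here is a quick back-of-the-envelope sketch in Python, using the illustrative numbers above:

```python
# Back-of-the-envelope lead economics, using the illustrative numbers above.
cost_per_lead = 100      # bulk demand-gen cost per qualified lead ($)
deal_size = 10_000       # potential transaction value ($)
close_rate = 1 / 7       # deals closed per qualified lead

# Lead-gen cost expressed per CLOSED deal, not per lead.
lead_cost_per_deal = cost_per_lead / close_rate
print(f"Lead-gen cost per closed deal: ${lead_cost_per_deal:,.0f}")       # ~$700
print(f"As a share of deal value: {lead_cost_per_deal / deal_size:.0%}")  # ~7%
```

Note this still excludes the cost-to-sell itself; every point of last-mile close-rate improvement pulls that $700 figure down directly.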

What Uber Can Teach Us About Sales Enablement and Buyer Enablement

For the sales rep (the Uber driver), the Uber platform provides pretty effective sales enablement.

Uber’s Driver Experience

  1. Finds the buyer
  2. Ensures the buyer’s ability to pay (payment method on file)
  3. Qualifies the buyer’s integrity (passenger rating)
  4. Maps the route to complete the sale
  5. Completes a frictionless payment process (set price, optional tip)
  6. Rinse, repeat….

Not only has Uber created a seamless experience for the driver (i.e. the sales rep), but also for the buyer. Consider what the Uber app does for buyer enablement:

Uber’s Rider Experience

  1. Matches the buyer (rider) with the most convenient seller (driver)  – without worrying about “sales territories!”
  2. Ensures minimum product standards (car quality, safety)
  3. Qualifies seller integrity (driver rating)
  4. Provides transparent pricing before purchase
  5. Offers value-added services (custom music, driver background info)
  6. Completes frictionless payment
  7. Allows immediate vendor rating
  8. Cross-sells other products/services (Uber Eats, Uber credit card)

It’s also important to point out that buyers are NOT forced into negotiations with sellers, as they can be with traditional taxi drivers. When you pull the pieces together, one thing is for sure –

Key Takeaway

While lead generation boasts a hefty budget, with costs at every intersection, the “last mile” is where you can expect to spend the extra dollars. When you think about it, Uber did not invent the ride-share (i.e. taxi) experience; instead, they reinvented the way buyers and sellers interact in the “last mile.” Without investing in technology, process, and a “frictionless” buying experience, well, consider that taxi meter still running!

P.S. I love talking to Uber drivers – very interesting entrepreneurs with diverse backgrounds – but Uber’s buyer enablement is what really sets it apart from traditional last mile transportation services.

How good is your buyer enablement?

 

Cyborgs Will Beat Robots: Building a Human-AI Culture

There are two competing AI narratives bouncing around the internet. On the one hand, AI is seen as a future scourge, a technology that once unchained will push humanity past a singularity. Past this singularity, we cannot predict what will happen—but many think it won’t be good [1].

The other camp is dominated by AI optimists like Ray Kurzweil, who believe that human-machine integration is inevitable, is a great thing that will usher in a new golden age for humanity, and has been happening for years. Many people don’t realize that their brains have already been rewired with a Google API; when we don’t know something, we’ve gotten incredibly good at opening a browser, executing a pretty optimal search, and finding the answer (if there is one)—dramatically increasing the productivity and intelligence of those who use this API wisely. This camp still sees a singularity on the horizon, but in their view, humans and machines will merge, creating “cyborgs” that integrate the best elements of human intelligence and artificial intelligence, and this is a good thing.

I wrote this article to help companies and executives navigate this coming cyborg transformation. Just like in past technology waves, the companies that succeed will not be the ones with the best algorithms; the algorithms will largely become table stakes. In this new reality, the winners will do a better job of transforming their employees into better “AI interfacers.” In other words, the companies staffed with lots of motivated employees who understand how to use AI, and who are equipped to interface with the technology, will ultimately stand out from competitors by developing better use cases, integrating AI into their value-added business processes, and using AI in concert with human intelligence to drive better outcomes.

Good News: We Are Still Early

Early in the personal computer revolution, the distance between the most advanced computer engineer and a 12-year-old kid messing around with his Apple IIe wasn’t really that large. It probably seemed huge at the time, but the reality was that the basics of that machine were still simple, and someone with a soldering iron and a few screwdrivers could actually tinker, maybe upgrading the RAM or adding a graphics card. Try doing that in 2019 with a MacBook Pro. The components could be seen. The circuits could be understood. Programming languages, while clunky by today’s standards, were BASIC (sorry).

I would argue we’re roughly at the Apple IIe stage right now with artificial intelligence. A hobbyist can download open source software like Python, the scikit-learn library, Jupyter, and Git, and be off and running building an OCR (optical character recognition) algorithm. In fact, one could argue that AI technology is more democratized than PC technology was in the mid-1980s. At that time, it would cost at least a few thousand dollars to get up and running with a good IBM clone, and programming languages had to be purchased as physical boxes of floppy disks. Learning to program or build hardware required physical books; today, it’s possible to take free courses on AI from Stanford on YouTube, and any error typed into Google returns an immediate solution courtesy of Stack Overflow.
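As a flavor of how low the barrier is, here is a minimal sketch of the kind of “hello world” a hobbyist could have running in minutes – a toy digit classifier (a simple cousin of OCR) built on scikit-learn’s bundled dataset:

```python
# Toy OCR: classify 8x8 handwritten digits with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

digits = load_digits()  # 1,797 labeled 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42)

model = SVC(gamma=0.001)  # a classic, simple baseline classifier
model.fit(X_train, y_train)
print(f"Test accuracy: {accuracy_score(y_test, model.predict(X_test)):.2%}")
```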

In other words, an interested, talented person can achieve basic artificial intelligence literacy today pretty easily, if they put their mind to it, and the distance between there and a self-driving car isn’t insurmountable. Granted, millions of developer hours have been spent tweaking each neural net and environmental sensor on that car driving around Pittsburgh, but a tinkerer can basically explain the theory behind how it all works, if they want to. The net-net is that it’s still possible to build an army of AI citizen scientists at your company who will fully embrace the unknown advancements of the next decade—and that not doing so will put your company at risk of faltering, just as slow movers on technology did in the 1990s.

New Role: The AI Interfacer

Companies that successfully transitioned from offline to digital in the 1990s and 2000s all had one thing in common: they built a strong layer of interface employees. We’ve all been there: Bob is the master of database X. He works 70 hours a week; he can answer any question; people worship him, and he has total job security. However, that database never reaches its full potential. Hundreds of reports are written, but few are used. Integrations happen, but fall down over the last mile. The problem in this scenario is that few people have the skills (or the interest) to meet Bob halfway. There are no interfacers for Bob.

The company Bob works at spends millions on expensive proprietary software, and on armies of consultants to install and configure it. The bare metal servers at this company are just as powerful as the servers at their competitor – and yet, it just never seems to “click.” The competitors pull away, and before you know it, this company is on the trash heap. Sound familiar?

This analogy extends to AI flawlessly. An AI system can be built to (in theory) predict the perfect marketing touch at a given point, or detect fraud with uncanny accuracy, but without human advocates and interfacers feeding the algorithm data, providing improvement suggestions, and driving adoption, these systems will fail—or at the very least, they won’t evolve.

AI interfacers are to 2019 what computer literate employees were to 1989, or what database-literate people were to 1999. They may not be developing machine learning algorithms, but they know what a machine learning algorithm does. They may not be on the team developing the self-driving car, but they can explain how a self-driving car is put together. They are the key to AI’s success over the last mile.

AI Interfacers come in five flavors, not mutually exclusive:

  • User: Can interface with AI endpoints and integrate them into their day-to-day processes;
  • Explainer: Understands how machine learning algorithms are trained and validated, and how these can chain together to form systems – and, most importantly, teaches others about them;
  • Product Manager: Can see how systems and processes can be improved by AI, and can prioritize these improvement points;
  • Data Gatherer: Understands how artificial intelligence gets information from the world (IoT, big data, environmental sensors, users);
  • Prototyper: Can prototype simple AI systems using machine learning algorithms (in other words, tinker).

The AI User is equivalent to someone who liked and was facile with email in 1989, or an SAP power user in 1999. These are individuals who, instead of running away from AI, actually attempt to integrate it into their day-to-day, realizing that it will make their jobs easier and allow them to surf to higher value-added activities (and, perhaps, get a promotion).

The AI Explainer is a natural teacher who understands how AI elements are knit together within the core business processes of the company, and evangelizes these stories to others. He is the executive who tells the same story over and over again at staff meetings until it has been internalized; the line manager who explains to the sales rep why the AI-based next logical product algorithm works; the new employee who teaches upward to their 45-year-old supervisor what machine learning really is, using simple, approachable language.

The AI Product Manager might not be an actual product manager, but has that DNA. They are constantly stepping back and seeing how AI does and could improve existing processes. They are passionate about driving better performance and outcomes, and tell the stories across the company that drive innovation.

The AI Data Gatherer sees how information flows through the company – from customers, marketing campaigns, the supply chain, IoT, etc. – and makes connections. They see potential signal for learning algorithms, and they see how AI algorithms can feed data into other systems. For example, this individual might see that internet-enabled cooling units report on energy usage every hour; she surmises that when units spike above two standard deviations for long periods, another chiller might be required. She recommends to the cross-sell AI team that they use these data in their algorithm, along with her hypothesis.
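As an illustration only, here is one way her hypothesis might be encoded as a candidate model feature, assuming the hourly readings land in a pandas DataFrame (the column names are hypothetical):

```python
import pandas as pd

def capacity_strain_flag(readings: pd.DataFrame, min_hours: int = 6) -> pd.Series:
    """Flag units whose energy use ran more than two standard deviations
    above their own mean for a sustained stretch of hours.
    Expects columns: unit_id, timestamp, kwh (hypothetical names)."""
    readings = readings.sort_values(["unit_id", "timestamp"])
    by_unit = readings.groupby("unit_id")["kwh"]
    z_score = (readings["kwh"] - by_unit.transform("mean")) / by_unit.transform("std")
    spiking = z_score > 2
    # Count consecutive spiking hours within each unit;
    # (~spiking).cumsum() gives each run of spikes its own group id.
    run_length = spiking.astype(int).groupby(
        [readings["unit_id"], (~spiking).cumsum()]).cumsum()
    # One boolean per unit: did any sustained spike occur?
    return (run_length >= min_hours).groupby(readings["unit_id"]).any()
```

A flag like this becomes one candidate input to the cross-sell model; the data science team can then test whether it actually carries signal.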

The most advanced non-engineer role is the Prototyper – the individual who is comfortable tinkering and messing around with AI technology. This is usually a business power user who is impatient for results. These individuals can frustrate engineering teams (think: “stepping on my turf”), but at successful, agile companies, interdisciplinary work is encouraged. We ask AI engineers to understand the business problem; successful companies encourage business leaders to get their hands dirty (in a safe environment, of course).

Principles for Building Your Bench of AI Interfacers

There were several traits shared by companies that successfully built a strong bench of digital natives, and a few traits that struggling companies had in common. There is no reason to expect that the core principles have changed, but I’ve adapted them for AI.

The actions below are all totally doable. None of them require spending millions of dollars on a quantum computer, or hiring 50 new developers to go “do some AI stuff.” Rather, they are mainly HR and management actions. If they don’t get done, it’s probably because, like most things worth doing, they don’t drive immediate ROI. They are cultural changes that must be driven from the top (the first Do below).

Do’s

  1. Hire a Lifetime Learner CEO / Exec Team. It all starts at the top. If you have a CEO who won’t take the time to understand AI at a foundational level—how it works, how it learns, existing use cases—then you’ll be toast. Keep in mind, I’m not talking about hiring a programmer or data scientist—I’m talking about someone with an insatiable thirst for learning who never gets tired of reinventing her skillset.
  2. Hire New Cohorts, Every Year. Companies who don’t hire young people for prolonged periods of time quickly fall behind new waves. AI is no exception. I first heard the term “digital native” in 2004, from a technology company marketing executive who lamented his inability to make the transformation to digital. This company had kept old managers in seat for years (they were the original crew) and now needed a talent infusion. If he’d hired one or two 22-year-olds every year, he wouldn’t have been playing catch-up.
  3. Have a Citizen-AI Training Curriculum. One thing that didn’t exist ten years ago was the MOOC. If you wanted a marketing manager to learn the basics of ad exchanges, she either had to learn on the job or go take a course at a university. Today, motivated learners can take AI courses from basic to fairly advanced, essentially for free. As a manager, it’s your duty to (1) create a curriculum based on existing MOOCs and post it on your intranet / wiki, and (2) give employees the time and space they need to get up to speed.
  4. Co-Create, Foster Agency. If an AI-based next logical call algorithm is implemented in a call center, don’t allow it to be cynically jammed in with an explanation of “just do it.” This will drive resentment. Instead, train users on how the algorithm was built. What are its inputs? What algorithms were used to train the model? How do we know it works? Involve your employees in co-creating the AI interfaces; you’ll find that they quickly surface problems and blind spots, and will happily use it / work with it. Analogies for this exist all over, but perhaps the most powerful is the Andon Cord used in lean manufacturing whereby any employee can “stop the line” to identify problems with production.
  5. Force Human Interaction Interfaces. If AI algorithms are only allowed to talk to one another, we might actually get to the “grey goo” scenario pretty quickly, and I’m only half kidding. Rather, focus on human-understandable interfaces. The Google search example I started with is a good example of a human-AI interface that is mutually reinforcing. Concretely, building out a next logical product algorithm in a CRM system shouldn’t just spit out a SKU. Expose the key inputs and the predictive factors; allow the human to adjust parameters and see how the model changes (a minimal sketch of this idea follows this list).
  6. Promote Tinkering. Siloes and a “guild mentality” kill innovation. Most Silicon Valley companies have done a good job promoting a tinkering culture. However, in too many other places, “stay in your lane” dominates, causing people who stick their neck out to get whacked. AI is no exception. If you want people to stay around, let them play around. Make sure you have safe spaces set up where nothing can be broken—but innovation beats parochialism any day of the week.

Don’ts 

  1. Don’t Go Build Stuff Just Because AI. Perhaps the fastest way to alienate your workforce, and make them AI opponents rather than AI proponents, is to hit the panic button and go off half-cocked on an AI initiative without a clear business reason. A lot of companies did this last year with blockchain. “We need to do something with blockchain, because… blockchain!” (Guilty. Mea culpa.) So don’t do this with AI. Wait for the real use cases. If your employees are excited about it, it’ll be a lot easier, and it’s a really good indication that it’s worth doing.
  2. Be Cautious of Black Boxes. Proprietary black boxes may be awesome, but even more so than with enterprise software, companies need to use extreme caution before committing to them. AI is, by its very nature, opaque. Buying from a vendor who won’t expose the inner workings adds another level of opacity, and will make it much harder for employees to interface and find agency. It’s fine to test out proprietary solutions, but be aware of what you’re committing to.
  3. Don’t Build a Monolith. Finally, don’t build the one AI ring to rule them all. When I see IBM advertising Watson as the solution to everything, I definitely get Lord of the Rings flashbacks. I get why some would want everything centralized, but if you’re trying to build a cyborg organization, this seems like a giant mistake. Instead, building smaller AIs that humans can work with directly – AIs that communicate with one another but aren’t a hive mind – seems a safer way to go, in more ways than one.

Conclusion

Companies that successfully navigate the coming AI transformation will build an army of AI Interfacers, made up of power users, product managers, teachers, data plumbers, and tinkerers, who will drive a positive feedback loop between the power of AI and human intelligence. These companies will make the creation of this culture a priority, with concrete management, HR, and technology decisions designed to prioritize the human-AI interface, not the raw power of the algorithms. These “Cyborg Companies” will emerge as the clear winners over the coming decade.

[1] In his book Superintelligence (2014), Nick Bostrom laid out many potentially dangerous outcomes for an unchained, general-intelligence AI: a “grey goo” of endlessly self-replicating nanomachines that takes over the planet; a resource-consuming algorithm gone awry whose sole goal is factoring large numbers, eventually building a Dyson Sphere around the sun to achieve its objective; and even more malicious scenarios evoking devious, trickster AIs who fool researchers into mailing them what they need to build a machine to escape their human prison. This is pretty dark, and while I do think we need to worry about these dangers, they aren’t the focus of this article.

The Last Mile Problem: 7 Steps to Closing the Insights-to-Outcomes Gap

Changing front-line behaviors with data-driven insights will be critical to realizing the benefits from your investments in analytics.  It’s harder than you think!

Perhaps you are one of those companies in the CMO Survey that is planning a 218% increase in analytics spend over the next three years, or perhaps you are part of the breakaway group that already invests more than 25% of your IT budget on analytics.

Either way, make sure you are investing enough in embedding data-driven insights into sales and marketing workflow and processes and enabling the changes in front-line behaviors that are necessary to achieve desired outcomes.

This “last mile” problem is one of the key challenges that many B2B companies with complex sales and marketing processes have trouble addressing. Without focus and investment on last mile adoption, companies are severely limiting the return on their data and analytics investments.  McKinsey estimates that analytic leaders spend more than 50% of their analytics budget on solving these activation issues.

Below you will find seven of the best practices for addressing last mile challenges that we have observed in working with numerous B2B clients across industries in the last 10 years.  Adopting these will help you close the Insights-to-Outcomes gap that is inhibiting many companies from realizing the full potential of AI and predictive analytics.

1) Begin With The End

Many companies start their analytics journey with data, and no doubt data accuracy, quality and completeness are critical to analytics success. DNB’s 6th Annual B2B Marketing Data Report suggests that 89% of B2B companies now agree “Data Quality is Increasingly Important to the Sales and Marketing Organization.” That’s a bit like saying that gas is important for the car (or maybe electricity these days).

Best-in-class companies start with a very clear roadmap of what business issues they must improve with analytics – and in what order. This is what we call an Applied Analytics Strategy, and it needs to be driven down from the top of the organization.

What business use cases and processes will most benefit from greater insight, and what impact will that have on business outcomes?  Is it a cross-sell problem? A renewal or retention problem? A customer acquisition problem? All of the above?

With this prioritized roadmap, they then determine the type of insights they will need to change outcomes – and the data strategy that will be required to furnish those insights.

For more on Applied Analytics Strategy, read 5 CEO Principles for Developing an Applied Analytics Strategy.

2) Data, Data Everywhere, Not a Drop to Drink

Building a 360 view of the customer is still a challenge for most B2B companies today. In a recent survey, 69% of enterprises said they are unable to provide a comprehensive, single customer view today.

A sound data strategy – prioritized based on the analytics roadmap – is key to driving investments in data.  Many firms are deploying cloud-based data lakes as an answer to today’s disparate systems and proliferating martech stack.

Vendors are scrambling to facilitate data interoperability in ways that suit their business models (e.g. Salesforce buying MuleSoft, and the recent Open Data Initiative announced by Microsoft, Adobe and SAP).

For more on ODI, read our thoughts on the MSA Data Alliance

While companies wait for the marketplace to catch up, they must be rigorous in defining a clear data ontology for their prioritized use cases, and a master data model for key domains that support their analytic efforts (for example, accounts, opportunities, leads, etc.).

These must be supported with a governance model that assigns responsibility and accountability for data quality and accuracy.
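As a deliberately simplified illustration, the master data model for those key domains can start as nothing more than agreed-upon entities, fields, and stewards – sketched here as Python dataclasses with hypothetical fields:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class Account:
    account_id: str    # the single ID used across CRM, billing, and marketing
    name: str
    industry: str
    data_steward: str  # who is accountable for quality, per the governance model

@dataclass
class Opportunity:
    opportunity_id: str
    account_id: str    # foreign key back to Account: one ID, everywhere
    stage: str
    amount: float
    close_date: Optional[date] = None
```

The specific technology matters far less than the agreement itself: one ID per entity, one accountable owner per domain.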

More data isn’t always better.  Make sure you are leveraging the data that you already have – and focus on execution. Evaluate additional data sources carefully, and only add them in when they generate significant gain in the underlying models. And get rid of spreadsheets.

3) Mind the Gap

Data silos aren’t the only challenge.  Many companies still operate in organizational silos.  The gap between sales and marketing is still very wide at many B2B companies today – from a process, technology, data and reporting perspective.

Connected sales and marketing is still a thing. Create shared processes, shared data and insights, and most importantly, shared measurements that align around the end-to-end customer experience.

Consider the creation of a Go-to-Market Council that meets on a regular basis to coordinate marketing and sales activities and coverage for specific customer segments. For example, with one client we host a monthly call to review prior month’s results, next month’s objectives, campaigns and offers, and sales resource availability.

Based on that session, contacts and opportunities are assigned to defined contact strategies and dynamically allocated across marketing and sales teams and platforms, with shared objectives and KPIs. This “Activation Hub” coordinates customer engagement across all platforms and helps ensure all teams are singing from the same hymnal (see #5 below).

4) Play To Your Audience

Getting salespeople or partners to change their behavior can be a real challenge. Most are creatures of habit. Inserting analytic insights into sales motions and expecting salespeople to change their call patterns without change management and sales enablement support is a sure recipe for failure. Translating analytic output into sales context is critical to driving the adoption required to change outcomes.

Market leaders engage their sales resources early in the analytics process.  This includes initial input and hypothesis generation when an analytic approach is being determined for a specific business use case; input into presentation of analytic insight within existing tools; and process re-engineering ideation as processes become more data-driven.

It also involves piloting the approach with a subset of your best reps to get feedback about both the insights and the process before it scales to the entire team.

Finally, it requires active enablement and change management from the marketing or sales enablement teams to ensure adoption. This enablement and translation may come in many forms, but key examples include:

  • Converting analytic output into business context that marketers, salespeople, and partners will understand and internalize
  • Aligning appropriate messaging and content with “who / when” models so that “the right touch” isn’t killed by “the wrong stuff”
  • Embedding analytics into the existing sales and marketing workflow so that minimal process change is required
  • Consolidating cross-channel customer engagement dispositions into a single view, to minimize switching windows and applications and to provide a holistic view of customer engagement
  • Reporting positive business outcomes back to users so they believe in and trust the insights

5) Sing From the Same Hymnal

Aligning execution across all customer touchpoints is a significant challenge for almost every B2B organization today, for the reasons outlined above. It involves multiple technologies and platforms, disparate data sources, multiple resources and departments.

Creating a consistent cross-channel customer journey is as much a business strategy as a technology initiative. Gartner talks about this as the emerging Customer Engagement Hub, which will enable companies to deliver a cross-channel experience.

This is a subject for another blog, but we wanted to share with you our vision of what is included to help convert insights to outcomes and deliver on the Applied Analytics Strategy.

6) Rinse and Repeat

Test and learn.  Test and learn.  Test and learn.  Don’t wait for the perfect analytics solution.  Use an agile approach in your deployment and refine continuously.

Don’t forget your control groups. We have found that many companies don’t like to have a holdout group that doesn’t get the “benefit” of data-driven insights, which makes it more difficult to identify the real differences in performance that result from analytics.

Maintain discipline in your measurement so that you can validate actual performance drivers, and push your analytics teams to manage their activities with a “product-centric” approach to data science that will ensure your analytics and data investments can scale to support the entire business.

For more on building a Product-Centric Data Science Organization, read here.
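For the measurement itself, even a simple treated-vs-holdout comparison with a significance test goes a long way. A minimal sketch (the numbers are illustrative):

```python
# Minimal lift readout: treated (insight-driven) vs. holdout conversion.
from statistics import NormalDist

def conversion_lift(conv_treated, n_treated, conv_holdout, n_holdout):
    """Two-proportion z-test comparing treated vs. holdout conversion rates."""
    p_t, p_h = conv_treated / n_treated, conv_holdout / n_holdout
    pooled = (conv_treated + conv_holdout) / (n_treated + n_holdout)
    se = (pooled * (1 - pooled) * (1 / n_treated + 1 / n_holdout)) ** 0.5
    z = (p_t - p_h) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return {"treated": p_t, "holdout": p_h,
            "relative_lift": p_t / p_h - 1, "p_value": round(p_value, 4)}

# Illustrative numbers: 9% vs. 7% conversion on 2,000 accounts each.
print(conversion_lift(conv_treated=180, n_treated=2000,
                      conv_holdout=140, n_holdout=2000))
```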

7) Trust But Verify

Defining the likely ROI for a specific business use case for advanced analytics is one of the first steps in the process.  Setting the goalposts for what you believe the outcomes should be is crucial, and should be a collaborative effort between the analytics team and the sales and marketing sponsors.  What is our baseline performance today?  How much of an improvement do we think we can make by applying analytics?  What are the levers that will drive that improvement?  If we achieve that level of performance, what are the quantifiable benefits to the business? This should all be documented at the beginning of the project.

For more on Measuring Return on Analytics, read here.

Make sure you set up a measurement framework and capability that will allow you to evaluate your performance against those targets, and more importantly help you identify where the gaps are against your initial assumptions.

For instance, we recently worked with a client to prescribe sales outreach within a defined time horizon before contract expiration. We tracked activity to make sure target outreach volumes were occurring. When the modeled conversion-rate improvements weren’t achieved initially, we were able to identify that a significant volume of outreach was still occurring to customers outside of the prescribed window, thereby dampening results. This was subsequently addressed with additional enablement and management compliance.
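The compliance check that surfaced this issue can be as simple as comparing each touch to the prescribed window – a sketch below, assuming activity and contract data can be joined (field names hypothetical):

```python
import pandas as pd

def out_of_window_touches(touches: pd.DataFrame, window_days: int = 90) -> pd.DataFrame:
    """Return outreach that happened outside the prescribed pre-expiration window.
    Expects datetime columns touch_date and contract_end (hypothetical names)."""
    days_to_expiry = (touches["contract_end"] - touches["touch_date"]).dt.days
    outside = (days_to_expiry > window_days) | (days_to_expiry < 0)
    return touches.assign(days_to_expiry=days_to_expiry)[outside]
```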

This was a textbook example of the last mile challenge – if you cannot effectively change front-line behaviors, you will not be able to close the Insights-to-Outcomes gap and deliver on your investments in analytics and data.