Cyborgs Will Beat Robots: Building a Human-AI Culture

There are two competing AI narratives bouncing around the internet. On the one hand, AI is seen as a future scourge, a technology that once unchained will push humanity past a singularity. Past this singularity, we cannot predict what will happen—but many think it won’t be good [1].

The other camp is dominated by AI optimists like Ray Kurzweil, who believe that human-machine integration is inevitable, is a great thing that will usher in a new golden age for humanity, and has been happening for years. Many people don’t realize that their brains have already been rewired with a Google API; when we don’t know something, we’ve gotten incredibly good at opening a browser, executing a pretty optimal search, and finding the answer (if there is one)—dramatically increasing the productivity and intelligence of those who use this API wisely. This camp still sees a singularity on the horizon, but in their view, humans and machines will merge, creating “cyborgs” that integrate the best elements of human intelligence and artificial intelligence, and this is a good thing.

I wanted to write this article to help companies and executives navigate this coming cyborg transformation. Just like in past technology waves, the companies that succeed will not be the ones with the best algorithms; the algorithms will largely become table stakes. In this new reality, the winners will do a better job of transforming their employees into “AI interfacers.” In other words, the companies with lots of motivated employees who understand how to use AI, and who are equipped to interface with the technology, will ultimately stand out from competitors by developing better use cases, integrating AI into their value-added business processes, and using AI in concert with human intelligence to drive better outcomes.

Good News: We Are Still Early

Early in the personal computer revolution, the distance between the most advanced computer engineer and a 12-year-old kid messing around with his Apple IIe wasn’t really that large. It probably seemed huge at the time, but the reality was that the basics of that machine were still simple, and someone with a soldering iron and a few screwdrivers could actually tinker, maybe upgrading the RAM or adding a graphics card. Try doing that in 2019 with a MacBook Pro. The components could be seen. The circuits could be understood. Programming languages, while clunky by today’s standards, were BASIC (sorry).

I would argue we’re roughly at the Apple IIe stage right now with artificial intelligence. A hobbyist can download open source software like Python, the scikit-learn library, Jupyter, and Git, and be off and running building an OCR (optical character recognition) algorithm. In fact, one could argue that AI technology is more democratized than PC technology was in the mid-1980s. At that time, it cost at least a few thousand dollars to get up and running with a good IBM clone, and programming languages had to be purchased as physical boxes of floppy disks. Learning to program or build hardware required physical books; today, it’s possible to take free courses on AI from Stanford on YouTube, and almost any error typed into Google returns an immediate solution courtesy of Stack Overflow.
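
To make that concrete, here is roughly what that hobbyist’s first experiment might look like: a minimal sketch using scikit-learn’s bundled digit images (a toy stand-in for real OCR, not a production recipe):

```python
# A hobbyist-grade OCR experiment: train a classifier on scikit-learn's
# bundled 8x8 handwritten-digit images and check its accuracy.
from sklearn.datasets import load_digits
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labeled digit images, shipped with scikit-learn
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=42)

model = SVC(gamma=0.001)  # a simple support vector classifier
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print(f"Held-out accuracy: {accuracy_score(y_test, predictions):.1%}")
```

That’s the whole experiment: a dozen lines, free tools, a Saturday afternoon.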

In other words, an interested, talented person can achieve basic artificial intelligence literacy pretty easily today, if they put their mind to it, and the distance between there and a self-driving car isn’t insurmountable. Granted, millions of developer hours have been spent tweaking each neural net and environmental sensor on that car driving around Pittsburgh, but a tinkerer can basically explain the theory behind how it all works, if they want to. The net-net is that it’s still possible to build an army of AI citizen scientists at your company who will fully embrace the unknown advancements of the next decade—and that not doing so will put your company at risk of faltering, just as slow technology movers did in the 1990s.

New Role: The AI Interfacer

Companies that successfully transitioned from offline to digital in the 1990s and 2000s all had one thing in common: they built a strong layer of interface employees. We’ve all been there: Bob is the master of database X. He works 70 hours a week; he can answer any question; people worship him, and he has total job security. However, that database never reaches its full potential. Hundreds of reports are written, but few are used. Integrations happen, but fall down over the last mile. The problem in this scenario is that few people have the skills (or the interest) to meet him halfway. There are no interfacers for Bob.

The company that Bob works at spends millions on expensive proprietary software, and on armies of consultants to install and configure it. The bare metal servers at this company are just as powerful as the servers at their competitors, and yet it just never seems to “click.” The competitors pull away, and before you know it, this company is on the trash heap. Sound familiar?

This analogy extends flawlessly to AI. An AI system can (in theory) be built to predict the perfect marketing touch at a given moment, or to detect fraud with uncanny accuracy, but without human advocates and interfacers feeding the algorithm data, suggesting improvements, and driving adoption, these systems will fail—or at the very least, they won’t evolve.

AI interfacers are to 2019 what computer-literate employees were to 1989, or what database-literate people were to 1999. They may not be developing machine learning algorithms, but they know what a machine learning algorithm does. They may not be on the team developing the self-driving car, but they can explain how a self-driving car is put together. They are the key to AI’s success over the last mile.

AI Interfacers come in five flavors, which are not mutually exclusive:

  • User: Can interface with AI endpoints and integrate them into their day-to-day processes;
  • Explainer: Understands how machine learning algorithms are trained and validated, and how they can chain together to form systems, and, most importantly, teaches others about them;
  • Product Manager: Can see how systems and processes can be improved by AI, and can prioritize these improvement points;
  • Data Gatherer: Understands how artificial intelligence gets information from the world (IoT, big data, environmental sensors, users);
  • Prototyper: Can prototype simple AI systems using machine learning algorithms (in other words, tinker).

The AI User is the equivalent of someone who embraced email and was facile with it in 1989, or an SAP power user in 1999. These are individuals who, instead of running away from AI, actually attempt to integrate it into their day-to-day, realizing that it will make their jobs easier and allow them to surf to higher value-added activities (and perhaps get a promotion).

The AI Explainer is a natural teacher who understands how AI elements are knit together within the core business processes of the company, and evangelizes these stories to others. He is the executive who tells the same story over and over again at staff meetings until it has been internalized; the line manager who explains to the sales rep why the AI-based next-logical-product algorithm works; the new employee who teaches their 45-year-old supervisor what machine learning really is, using simple, approachable language.

The AI Product Manager might not be an actual product manager, but has that DNA. They are constantly stepping back to see how AI improves, or could improve, existing processes. They are passionate about driving better performance and outcomes, and they tell the stories across the company that drive innovation.

The AI Data Gatherer sees how information flows through the company—from customers, marketing campaigns, the supply chain, IoT, etc.—and makes connections. They see potential signals for learning algorithms, and they see how AI algorithms can feed data into other systems. For example, this individual might see that internet-enabled cooling units report on energy usage every hour; she surmises that when a unit spikes above two standard deviations for a sustained period, another chiller might be required. She recommends to the cross-sell AI team that they use these data in their algorithm, along with her hypothesis.
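
Her spike heuristic barely needs any machinery at all. A minimal sketch, with hypothetical readings and made-up numbers:

```python
import pandas as pd

# Hypothetical hourly energy readings (kWh) from one internet-enabled
# cooling unit: a steady baseline around 4 kWh, with a sustained spike.
values = [4.2, 4.0, 4.3, 4.1, 4.4, 4.2, 4.0, 4.3, 4.1, 4.2,
          12.0, 12.4, 12.1,
          4.3, 4.1, 4.2, 4.0, 4.4, 4.2, 4.1]
readings = pd.Series(
    values, index=pd.date_range("2019-06-01", periods=len(values), freq="h"))

mean, std = readings.mean(), readings.std()
spikes = readings > mean + 2 * std  # hours more than two std devs above mean

# A spike that persists across consecutive hours is the signal she would
# hand to the cross-sell team, with her "needs another chiller" hypothesis.
sustained = spikes & spikes.shift(1, fill_value=False)
print(readings[sustained])
```

The point isn’t the code; it’s that someone close to the data spotted the signal and knew which team to hand it to.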

The most advanced non-engineer role is the Prototyper—the individual who is comfortable tinkering and messing around with AI technology. This is usually a business power user who is impatient for results. These individuals can frustrate engineering teams (think: “stepping on my turf”), but at successful, agile companies, interdisciplinary work is encouraged. We ask AI engineers to understand the business problem; successful companies encourage business leaders to get their hands dirty (in a safe environment, of course).

Principles for Building Your Bench of AI Interfacers

Companies that successfully built up a strong bench of digital natives had several traits in common, and struggling companies shared a few traits of their own. There is no reason to expect that the core principles have changed, but I’ve adapted them for AI.

The actions below are all totally doable. None of them requires spending millions of dollars on a quantum computer, or hiring 50 new developers to go “do some AI stuff.” Rather, they are mainly HR and management actions. If they don’t get done, it’s probably because, like most things worth doing, they don’t drive immediate ROI. They are cultural changes that must be driven from the top (see the first Do below).

Do’s

  1. Hire a Lifetime Learner CEO / Exec Team. It all starts at the top. If you have a CEO who won’t take the time to understand AI at a foundational level—how it works, how it learns, existing use cases—then you’ll be toast. Keep in mind, I’m not talking about hiring a programmer or data scientist—I’m talking about someone with an insatiable thirst for learning who never gets tired of reinventing her skillset.
  2. Hire New Cohorts, Every Year. Companies that don’t hire young people for prolonged periods quickly fall behind new waves. AI is no exception. I first heard the term “digital native” in 2004, from a technology company marketing executive who lamented his inability to make the transformation to digital. This company had kept the same managers in their seats for years (they were the original crew) and now needed a talent infusion. If he’d hired one or two 22-year-olds every year, he wouldn’t have been playing catch-up.
  3. Have a Citizen-AI Training Curriculum. One thing that didn’t exist ten years ago was the MOOC. If you wanted a marketing manager to learn the basics of ad exchanges, she either had to learn on the job or take a course at a university. Today, motivated learners can take AI courses from basic to fairly advanced, essentially for free. As a manager, it’s your duty to (1) create a curriculum based on existing MOOCs and post it on your intranet / wiki, and (2) give employees the time and space they need to get up to speed.
  4. Co-Create, Foster Agency. If an AI-based next-logical-call algorithm is implemented in a call center, don’t allow it to be cynically jammed in with an explanation of “just do it.” This will breed resentment. Instead, train users on how the algorithm was built. What are its inputs? What algorithms were used to train the model? How do we know it works? Involve your employees in co-creating the AI interfaces; you’ll find that they quickly surface problems and blind spots, and will happily work with the result. Analogies for this exist all over, but perhaps the most powerful is the Andon Cord used in lean manufacturing, whereby any employee can “stop the line” to flag problems with production.
  5. Force Human Interaction Interfaces. If AI algorithms are only allowed to talk to one another, we might actually get to the “grey goo” scenario pretty quickly, and I’m only half kidding. Rather, focus on human-understandable interfaces. The Google search example I started with is a good example of a human-AI interface that is mutually reinforcing. Concretely, building out a next-logical-product algorithm in a CRM system shouldn’t just spit out a SKU. Expose the key inputs and the predictive factors; allow the human to adjust parameters and see how the model’s output changes (a rough sketch follows this list). Perhaps most importantly, let humans feed what they learn back into the system, so the human-AI loop reinforces itself in both directions.
  6. Promote Tinkering. Silos and a “guild mentality” kill innovation. Most Silicon Valley companies have done a good job of promoting a tinkering culture. However, in too many other places, “stay in your lane” dominates, and people who stick their necks out get whacked. AI is no exception. If you want people to stay around, let them play around. Make sure you have safe spaces set up where nothing can be broken—but innovation beats parochialism any day of the week.
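
To make Do #5 concrete, here is a minimal sketch of a human-facing model interface. Everything in it is a stand-in: the feature names, the synthetic training data, and the random-forest model are illustrative assumptions, not a prescription. The point is the shape: a prediction, the factors behind it, and a knob the human can turn.

```python
# A human-readable interface for a hypothetical next-logical-product model:
# show the prediction *and* the factors behind it, then let a person
# adjust an input and watch the output change. All names and data are made up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FEATURES = ["orders_last_90d", "avg_order_value", "support_tickets"]

rng = np.random.default_rng(0)
X = rng.random((500, len(FEATURES)))              # stand-in for CRM data
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)   # synthetic "will buy" label
model = RandomForestClassifier(random_state=0).fit(X, y)

def explain(customer):
    """Print the cross-sell likelihood plus the model's key inputs."""
    proba = model.predict_proba([customer])[0, 1]
    print(f"Cross-sell likelihood: {proba:.0%}")
    for name, weight in zip(FEATURES, model.feature_importances_):
        print(f"  {name}: importance {weight:.2f}")

customer = [0.8, 0.6, 0.2]
explain(customer)      # the recommendation, with its reasoning exposed
customer[0] = 0.2      # the human turns a knob...
explain(customer)      # ...and sees how the prediction moves
```

An interface like this turns a SKU-spitting oracle into something an AI User can interrogate and an AI Explainer can teach from.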

Don’ts 

  1. Don’t Go Build Stuff Just Because AI. Perhaps the fastest way to alienate your workforce, and make them AI opponents rather than AI proponents, is to hit the panic button and go off half-cocked on an AI initiative without a clear business reason. A lot of companies did this last year with blockchain. “We need to do something with blockchain, because… blockchain!” (Guilty. Mea culpa.) So don’t do this with AI. Wait for the real use cases. If your employees are excited about it, it’ll be a lot easier, and it’s a really good indication that it’s worth doing.
  2. Be Cautious with Black Boxes. Proprietary black boxes may be awesome, but even more so than with enterprise software, companies need to use extreme caution before committing to them. AI is, by its very nature, opaque. Buying from a vendor who won’t expose the inner workings adds another level of opacity, and will make it much harder for employees to interface and find agency. It’s fine to test out proprietary solutions, but be aware of what you’re committing to.
  3. Don’t Build a Monolith. Finally, don’t build the one AI ring to rule them all. When I see IBM advertising Watson as the solution to everything, I definitely get Lord of the Rings flashbacks. I understand the appeal of centralization, but if you’re trying to build a cyborg organization, it seems like a giant mistake. Instead, building smaller AIs that humans can work with directly, AIs that communicate with one another but aren’t a hive mind, seems a safer way to go—in more ways than one.

Conclusion

Companies that successfully navigate the coming AI transformation will build an army of AI Interfacers, made up of power users, product managers, teachers, data plumbers, and tinkerers, who will drive a positive feedback loop between the power of AI and human intelligence. These companies will make the creation of this culture a priority, with concrete management, HR, and technology decisions designed to prioritize the human-AI interface, not the raw power of the algorithms. These “Cyborg Companies” will emerge as the clear winners over the coming decade.

[1] In his book Superintelligence (2014), Nick Bostrom laid out many potentially dangerous outcomes for an unchained, general-intelligence AI: a “grey goo” of endlessly self-replicating nanomachines that takes over the planet; a resource-consuming algorithm gone awry whose sole goal is computing ever-larger prime numbers, eventually building a Dyson Sphere around the sun to achieve its objective; and even more malicious scenarios evoking devious, trickster AIs that fool researchers into mailing them what they need to build a machine and escape their human prison. This is pretty dark, and while I do think we need to worry about these dangers, they aren’t the focus of this article.