
Mind the gap.

Anyone familiar with travel by passenger rail will recognize how common it is for conductors (or the computerized voices representing them) to warn riders that there will be a gap between the train and the platform when the train pulls into the station. We are reminded to mind that gap or risk injury to feet, legs, luggage or more.

In the world of cognitive computing, we need more conductors to issue warnings of another kind of gap that could bring injury to this emerging industry: the gap created by the current shortfall of available talent required to create AI systems. We don't currently have enough people with the education and experience to fill cognitive computing development teams. Perhaps more serious: We are not always sure at this point exactly what kind of talent we might need.

Cognitive computing faces a range of challenges as it moves from computer science labs and pilot projects and edges out into the mainstream. The media is full of articles on the promises and perils of the technology. We can read about a few real successes in the field and a few spectacular misses, like the scandal surrounding IBM and the MD Anderson Cancer Center in Houston, Texas. We can read about how the technology will take all of our jobs or about how the impact of the machines taking on human tasks will not be a big deal. In all the ruckus, much less attention has been directed to the gritty details of what we might call the cognitive sausage factory: How and by whom are these new AI systems, for better or for worse, going to be built?

Until we reach the point at which machines can design and implement new systems automatically and autonomously, we will be dependent on humans to do the heavy lifting of design, preparation, curation and implementation of new-generation systems.

Breakthroughs

Granted, we have already seen some truly fabulous new capabilities come to light: in image recognition and analysis, for example, where the reading of various kinds of X-rays and complex medical scans, the interpretation of reconnaissance photos and the identification of defects in semiconductor fabs may well be fully automated in the near future. That breakthrough was in no small part the result of Google's (google.com) determination to make a big bet on the newly accessible technology of deep neural networks. The bet involved building teams of hundreds of researchers, buying companies whose people were deeply knowledgeable about the strengths and weaknesses of the technique and investing in new hardware infrastructure to support the processing of analytics across big data loads.

But those seemingly overnight advances in image processing and similar ones in language facilities like auto-translation have not been accompanied by breakthroughs across the wider spectrum of applications. The intense interest in autonomous vehicles of all kinds, most dramatically self-driving cars, has captured the imagination of the public and the investment dollars of both technology and transport giants, yet even in that high-stakes field, the sense of a real breakthrough to practical adoption is missing. How, then, do we understand the roadblock to more general progress throughout industry?

The simple fact is that the problems AI systems address are gnarly, multifaceted and require true innovation. As the definition proposed by the Cognitive Computing Consortium (cognitivecomputingconsortium.com) states: "Cognitive computing makes a new class of problems computable." They are often problems that have never been computerized, the last bastions of manual, often highly skilled labor. The world was rocked 40 years ago when computing came to the accountants' spreadsheet. The relentless spread of intelligent automation will now impact our professional and consumer lives in ways that will be similarly hard to predict.

Where will the professionals come from who will create this innovation and spread those new applications far and wide? Clearly we don't have enough of such people now, and there are several reasons why the shift to cognitive computing is not going as quickly as many have predicted.

Beginning or end?

The Boston Globe (bostonglobe.com) reports that at MIT (mit.edu), the Intro to Machine Learning course is among the most popular offerings in 2017, with 700 signing up per semester, requiring four instructors and 15 TAs. While machine learning in general is the primary ingredient in the secret sauce that powers AI, it is "only" an engine, and the complexities of building the whole car around it are often daunting. For example, the current issue of the MIT Technology Review (technologyreview.com) leads with a feature raising the question of whether we are at the end of the "AI boom" instead of the beginning--based on the perception that current progress is coming primarily from 30-year-old technology. The learning systems of today are heavily dependent on curated data, for example, and tend toward fragility, unpredictable changes in direction and output, and "black box" operations. There is a need for a new generation of more flexible intelligence.

The students at MIT will certainly be exposed to the issues top of mind in the machine learning community, but by the end of the course, will they be masters of the emerging discipline of XAI, or explainable AI, which posits that all AI programs should be able to explain themselves automatically to their human designers? The list of issues to address as a neophyte AI developer or AI data scientist is daunting.

So, one reason that progress toward broad-based adoption of cognitive computing is slow across many industries is simply the "gap," the shortfall in trained people. But another perhaps equally key reason is that the talent that has been coming into the market is unevenly distributed.

The companies that have already bet their businesses on machine learning systems are hiring and acquiring as many talented people as they can find, in both research and applied technology. They are not only the top internet firms--Amazon (amazon.com), Google, Facebook (facebook.com)--but also the likes of Microsoft (microsoft.com) and Apple (apple.com), Intel (intel.com) and Salesforce (salesforce.com), and beyond those giants are the many specialized consumer-facing sites like TripAdvisor (tripadvisor.com), Netflix (netflix.com), Uber (uber.com) and more. What all of those firms share is that they are making extraordinary profits with that technology, and they can therefore afford to pay outsized wages to the scarce people they can find with the requisite skills. This effect raises a high barrier to most "normal" industrial firms that are not benefiting as directly from the new attention-driven business models.

For most of industry, the calls of the conductor are coming through loud and clear: "Mind the gap." In many cases, firms face the kind of existential risk once experienced by the buggy whip manufacturers. But in virtually all cases, finding strategies to partner or acquire or develop the talent to compete in cognitive computing is an imperative. Coming to an accurate understanding of the competence and work profiles that will lead teams to successful AI outcomes is becoming a foundation for the next generation of the business. And getting the enterprise feet and luggage across the gap could be a big part of the legacy accomplishments for today's executives.

By Hadley Reynolds

Hadley Reynolds is principal analyst of NextEra Research, e-mail hadleyr@nexteraresearch.com.
COPYRIGHT 2018 Information Today, Inc.
No portion of this article can be reproduced without the express written permission from the copyright holder.

Article Details
Title Annotation: Cognitive computing
Author: Reynolds, Hadley
Publication: KMWorld
Date: Jan 1, 2018
Words: 1218
