
Intelligent Systems: The Big Picture: The future of intelligent systems is exciting, but making it happen will require hard work.

I've spent the last 15 years working on ideas for delivering artificial intelligence (AI) solutions to several industries, focused on associative memory-based learning and reasoning. My primary work, first at Saffron Technology and then at Intel Corporation after it acquired Saffron, focused on the enterprise--defense and commercial--in areas such as intelligence analysis, product issue and defect resolution, unplanned maintenance, personalization of customer recommendations, and fraud and anti-money-laundering investigations, including peer reviews and customer-centric behavior analysis. This is a broad range of topics, but it barely scratches the surface of the possibilities for applying advanced machine learning and end-to-end AI systems within the enterprise.

This terrific experience provides a basis for understanding industry and customer challenges that can be addressed by AI. Saffron's advanced concepts extended application thinking beyond the specific-purpose deep learning and supervised learning that dominate the field today. The ability to see sparse connections between people, places, and things within massive amounts of data and across varied data sources, and understand their context, was exciting. At the same time, progress in perceptual capabilities, such as image recognition, has continued to advance new methods of observation, sense making, and prediction across applications.

AI is not new; Wikipedia's Timeline of AI starts with antiquity. Aristotle is credited with defining the syllogism and deductive reasoning around 384-322 BC. Visionary activity around AI was dense in the mid-20th century, with Vannevar Bush's "As We May Think" offering a prescient vision of a future where computers assist humans in many ways; John von Neumann asserting, "Tell me precisely what a machine can't do and I will program it to do it"; Alan Turing proposing the Turing Test as a measure of machine intelligence; Isaac Asimov publishing the Three Laws of Robotics; and John McCarthy coining the term artificial intelligence for his seminal Dartmouth College conference on the topic. Multiple AI winters have come and gone, and we've emerged now into a new awakening.

Yet we have overimagined what AI is at this point. Machine learning does not equal AI. Deep learning does not equal AI. Ultimately the goal of AI is to make decisions in a way that mimics the way humans learn, think, reason, decide, and learn again from experience. When we look at AI through this lens, it is easy to see that what we're doing today with machine learning and deep learning is a component of AI, but we have much more to design, build, and implement before we can claim to have achieved true AI--a seamless, end-to-end learning, reasoning, and predicting machine.

A systematic approach to enterprise AI--perceiving, learning, abstracting, reasoning, deciding, and acting across the organization--is yet ahead of us. We will add to this the capability to learn from outcomes of previous actions using feedback loops and advanced human-machine interactions through intuitive interfaces. With these tools, we may be able to apply the experience of one to many as knowledge systems become smarter with each experience. One objective we can achieve is individualized, customer-centric analysis and decision making.

Building the Future of AI

As much as we have accomplished with speech, computer vision, and natural language processing, we are touching only a part of learning and reasoning with current approaches to machine learning and perceptual computing. Research in deep reinforcement learning, transfer learning, sequential episodic memories, analogical reasoning, and more points the way to evolved learning and reasoning capabilities that can serve as a foundation for end-to-end AI systems.

DARPA mapped out the next steps of the AI journey in a 2018 report, updated in February 2019, Powerful but Limited: A DARPA Perspective on Artificial Intelligence (Launchbury 2018). DARPA has been a leader in developing rule-based and statistical learning-based AI technologies, and its work in this area continues--the agency allocated $2 billion in funding for AI R&D in September 2018. The agency sees AI developing in three waves: handcrafted knowledge, statistical learning, and contextual adaptation with developments in perceiving, learning, abstracting, and reasoning differentiating each wave from the last (Figure 1). Understanding the waves provides context for understanding the state of AI today.

The First Wave: Handcrafted Knowledge

The first wave of AI focused on handcrafted knowledge. Engineers created expert systems, now commonly referred to as rule-based systems, by defining sets of rules that represented human knowledge within well-defined domains. The idea of these expert systems was that knowledge learned from a few would be available to many. To achieve that goal, engineers interviewed experts in a given field to find out what they did and how they did it; in other words, the experts provided explicit knowledge--knowledge that is explainable.

The first wave had some successes. American Express used an expert system to provide support for credit decisions. Airlines adopted the technology for scheduling and aircraft maintenance. Manufacturers applied expert systems in asset management and maintenance. Enterprise software vendors extended this thinking into rules-based or best practices-based approaches to applications.

The first wave also encountered several challenges, the most important being the lack of dynamic learning capability and tacit knowledge--skills, ideas, and experiences that are not codified or easily expressed. (1) For instance, early first-wave systems included medical systems that provided diagnostic support to doctors. Research indicates that doctors have an imperfect knowledge of how they solve diagnostic problems--their personal body of knowledge includes both explicit and tacit knowledge (Groopman 2007). They make decisions using a combination of general principles, including accepted clinical protocols, and experience-based reasoning to confirm or disconfirm diagnoses based on the patient's symptoms. They work from explicit knowledge, but their tacit knowledge, knowledge they can't explain or sometimes don't even consciously know they have, is critical to their diagnostic decision making. As a result, those expert systems had gaps in their knowledge and capabilities. First-wave expert systems had no capacity to learn from tacit knowledge that could not be expressed in the rule definition.

The Second Wave: Machine Learning

AI is currently in the second wave, the wave of machine learning; work in this wave comprises special-purpose knowledge systems that can learn from information in the data they are fed--information we may not even know is there, not unlike tacit knowledge. Deep learning, an approach to building and training multilayered neural networks of decision-making nodes, is a hallmark of work in this wave.

Two important changes drive the second wave's advancements: power-efficient computer processing and big-data scale and availability. Deep learning is data hungry, relying heavily on training models supported by data, most often in large volumes and most often based on supervised learning. As the second wave advances, unsupervised learning and reinforcement learning will become more central to its progress.
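To make "data hungry" concrete, here is a minimal sketch of second-wave supervised learning; the dataset, model, and sizes are my illustrative choices, not drawn from the article:

```python
# A minimal sketch of supervised learning (illustrative choices throughout):
# the model learns only from the labeled examples it is fed, and its
# accuracy generally degrades as the volume of training data shrinks.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)  # ~1,800 labeled images of digits
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# A small neural network classifier, trained on labeled data only
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```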

Also included in the second wave is perceptual computing--the ability of a computer to recognize what is present around it. Perceptual technology is increasingly in use in security and industry. For instance, Intel, with its Mobileye and Movidius platforms, is doing important work in visual cognition, or computer vision. Many industries--health care, oil and gas, utilities, farming, insurance, and many more--are deploying some form of computer vision, using deep learning-assisted devices to diagnose patients, survey operations, monitor oil rigs or insured assets, and more, providing access to critical information quickly with greater precision and less risk to human life.

The second wave has powered advancements in business and society. Voice-activated assistants like Siri, Alexa, and Cortana help find information, order products, play music, and perform language translation throughout the day. Image recognition, natural language processing, cyber security, and autonomous platforms are also outcomes of the second wave.

And there is more progress ahead. The second wave has brought nuanced classification and prediction capabilities, but we do not yet have contextual capability, and reasoning ability remains minimal. All four of these capabilities--classification, prediction, contextualization, and reasoning--will be required to support complete intelligence. The third wave will begin to address these important components of true AI.

The Third Wave: Contextual Adaptation

DARPA describes the third wave as 1) systems that construct explanatory models for classes of real-world phenomena, 2) systems that learn and reason as they encounter new tasks and situations, and 3) systems that enable natural communication among machines and people. This wave will be largely enabled through cloud computing; it will combine end-to-end thinking around data management, natural language processing, and data and knowledge representation stores with robust analytic machine learning tools and human/machine dialogue via rich communication platforms. It is also where we will experience a closer representation of complete intelligence--end-to-end decision-making systems that include contextually adaptive learning and reasoning systems that learn online and offline from complex and sparse data. These systems will become more collaborative, human-to-machine and machine-to-machine, as they incorporate directed, collaborative, and delegatory interactions based on the use cases served.

These advancements will be built on contextual adaptive reasoning; DARPA's current plan doubles down on AI at this stage, investing $2 billion in the study of contextual reasoning systems (White 2018). Contextual adaptation is what you do every day as you adapt to the situations in front of you. You work with known rules and models in known circumstances, but when novel situations occur, you adapt. You probably use experience-based reasoning to make sense of a situation and decide what to do about it. Let me explain this using the analogy of mountain climbing.

Reinhold Messner is an expert mountain climber. He has climbed every major peak over 8,000 meters in the world, including Mount Everest. He endured the loss of his brother during their first Himalayan climb. Thanks to him and others, there is great topographic and experiential knowledge about Mount Everest, including the two pathways to the top, one from the north and one from the south. The southern route is the easiest way to make the ascent. Many people have attempted it, many people have accomplished it, and at least 297 people have lost their lives attempting to do so.

Why have so many died when the path is so well known, so thoroughly mapped? Because unexpected events happen. In a dynamic world, what appears to be stable, calm, and normal may not be. Mount Everest may look calm and peaceful from the base, but it can be chaotic and deadly during a climb. Climbers survive those conditions through contextual adaptation--starting from the knowledge and the models they have and adapting in real time as needed.

Contextual adaptation is how humans compete and win in the world generally. It's also how true AI will learn. Humans use a variety of tools to adapt, including rules and models as well as the values, ethics, and morals that we bring into a situation. One of the most powerful tools humans have that commercial instances of deep learning cannot yet replicate is one-to-few example learning. Machine learning is well known for requiring thousands, if not millions, of examples to identify something reliably. Teaching a machine what a tiger looks like can take thousands or millions of repetitions, but a child can learn to identify a tiger in one shot. Once a child sees a tiger and is told, "That's a tiger," they've got it. We don't have to teach that over and over again.

That kind of one-shot learning is an important capability in contextual adaptation. It is important because it allows us to recall a single instance from the past and apply the lessons from it later, in a different situation. AI currently can't do that. To be truly intelligent, systems must be able to recognize a new episode's resemblance to a situation they've seen before and transfer knowledge from that first occurrence to the new situation. How many times do bad events--unscheduled maintenance, for example--have to occur before the system can understand they are bad and act to prevent them from recurring?
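As a toy sketch of the memory-based idea behind that recall-and-transfer step: store a single labeled episode per class, then label new situations by similarity to those memories. The embed function here is a stand-in assumption for a pretrained feature extractor, and the vectors are invented:

```python
import numpy as np

def embed(x):
    # Stand-in for a pretrained feature extractor (an assumption for this
    # sketch); a real system would use a network trained on unrelated data.
    return np.asarray(x, dtype=float)

# One labeled episode per class -- the "see a tiger once" memory.
memory = {
    "tiger": embed([0.9, 0.1, 0.8]),
    "housecat": embed([0.2, 0.7, 0.1]),
}

def one_shot_classify(x):
    # Label a new observation by its most similar stored episode
    # (cosine similarity against each remembered example).
    v = embed(x)
    sims = {label: np.dot(v, m) / (np.linalg.norm(v) * np.linalg.norm(m))
            for label, m in memory.items()}
    return max(sims, key=sims.get)

print(one_shot_classify([0.85, 0.15, 0.75]))  # -> tiger
```

The point is the architecture, not the toy numbers: a single remembered episode plus a similarity measure is enough to label a new situation.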

The ability to adapt to situations, to understand relationships and their context, to assess potential outcomes, and to learn from those outcomes is the field of cognitive synergy. To a large degree, this is what DARPA is trying to accomplish in the third wave.

As part of realizing the third wave, we will utilize foundational work from current machine learning approaches.

The Five Major Tribes of Algorithmic Learning Models

Pedro Domingos (2015) provides a way of categorizing machine learning algorithms and their purpose in the world. He enumerates five major classes of algorithmic learning:

* Connectionists focus on learning data representations by creating artificial neural networks that can learn and decide on their own. Deep learning, the most widely known example of this approach, is used in image recognition, machine translation, and natural language processing.

* Symbolists focus on inverse deduction, starting with a set of premises and conclusions and working backward to fill the gaps. Examples include expert systems and knowledge graphs.

* Evolutionaries apply the ideas from genetic science to data processing to create evolutionary algorithms that constantly evolve and adapt to unknown conditions, processes, and data.

* Bayesians focus on uncertainty, using probabilistic inference to predict the likelihood of given outcomes, allowing use of a priori knowledge or beliefs to help in this calculation.

* Analogizers use similarity analysis to reason from experience. Hofstadter and Sander (2013) position analogy-making as the pilot light of creativity, an essential capability for humans' ability to adapt to a changing world.

Algorithms in these five categories are represented in most open-source machine learning platforms today. They provide the starting point for accelerating the development of cognitive decision systems that work across silos and user personas. Great progress has been made in machine learning during the last 10 years, but challenges remain--among them data wrangling, algorithmic bias, explainability and transparency, and privacy and data ownership.
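As a rough illustration of that point, four of the five tribes map directly onto familiar open-source estimators; the library and model choices below are mine, not Domingos's:

```python
# Illustrative mapping of Domingos's tribes to scikit-learn estimators.
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier      # connectionist
from sklearn.tree import DecisionTreeClassifier       # symbolist (rule induction)
from sklearn.naive_bayes import GaussianNB            # Bayesian
from sklearn.neighbors import KNeighborsClassifier    # analogizer (similarity)
# Evolutionary algorithms live in separate libraries such as DEAP.

X, y = make_classification(n_samples=500, random_state=0)
models = {
    "connectionist": MLPClassifier(max_iter=1000, random_state=0),
    "symbolist": DecisionTreeClassifier(random_state=0),
    "bayesian": GaussianNB(),
    "analogizer": KNeighborsClassifier(),
}
for tribe, model in models.items():
    model.fit(X, y)
    print(tribe, round(model.score(X, y), 3))
```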

Data Wrangling

Data is complicated in every large corporate environment. Data is located everywhere in the enterprise, and the data environment is made even more complicated by legacy data, often associated with acquisitions that were never merged into corporate data structures. Data that could be important often lives in places where it is not immediately available, including local spreadsheets, emails, and other methods of communication. The first step in any AI solution, then, is solving the data problem. What we confirmed in our work at Saffron is that 80 percent of the work is on the data side, doing what is commonly referred to as data wrangling: transforming and mapping data from one raw form to another with the intent of making it more appropriate and valuable for a variety of downstream purposes, such as analytics.
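A minimal, hypothetical example of what that wrangling looks like in practice; the records, column names, and cleanup rules below are invented for illustration:

```python
import pandas as pd

# Raw, inconsistent records of the kind scattered across an enterprise.
raw = pd.DataFrame({
    "Customer ID": ["A-1", "a-1", "B-2", None],
    "order_total": ["$120.50", "95", "n/a", "40.00"],
    "date": ["2019-01-03", "2019-01-05", "2019-01-07", "2019-01-09"],
})

# Map the raw form into a clean, analysis-ready form.
clean = (
    raw.dropna(subset=["Customer ID"])           # drop unusable rows
       .assign(
           customer_id=lambda d: d["Customer ID"].str.upper(),
           order_total=lambda d: pd.to_numeric(
               d["order_total"].str.replace("$", "", regex=False),
               errors="coerce"),                 # "n/a" becomes NaN
           date=lambda d: pd.to_datetime(d["date"]))
       [["customer_id", "order_total", "date"]]
)
print(clean)
```

Multiply this by hundreds of sources and formats, and the 80 percent figure becomes easy to believe.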

New data preparation tools are emerging to help. Forrester describes data preparation solutions as "the Rocket Fuel for Data Activation" (Little, Leganza, and Perdoni 2018). Metadata cataloging tools, for example, are included in this category; they are available from cloud services, analytics, and data management vendors. With such tools, you do not need to move your data or model it. Machine learning-enabled cataloging helps organize the information about the data to make it accessible to end users. Vendor solutions include models that work across cloud and onsite data stores and tools that have the ability to integrate with existing tools such as vendor-specific catalogs, business intelligence applications, and data science platforms. These tools are designed to accelerate the process from data to insight to value; expect to see growth in the availability and robustness of tools from new and existing vendors. This is great news for enterprises that rely on data accessibility and collaboration to support democratization of decision making.

AI Ethics: Bias, Explainability, and Transparency

Another barrier to the widespread adoption of AI is trust. To build trust, systems must be demonstrably free of bias, their results must be explainable, and the processes they use to arrive at those results must be transparent to users and stakeholders.

Inclusivity does matter. Code bias--subtle biases around race and gender that creep into the coding of an algorithm--arises from the mere fact that the engineers who create the code are of a particular race and gender, and their perspectives naturally shape the way they think and the way they code. Through that coding, those biases are manifested in AI-driven decisions that affect human lives on a daily basis. This is a highly sensitive topic in human-centric decisions such as loan approval or hiring. If the algorithms making loan recommendations or reviewing and selecting candidate resumes aren't explainable, they cannot be trusted. These are early examples of areas where unintentional bias in algorithm design can affect humans on a day-to-day basis.

As awareness of the problem rises, there have been some efforts to address it. Joy Buolamwini at the MIT Media Lab formed the Algorithmic Justice League to address this issue and to raise awareness. Organizations such as IBM, Intel, Microsoft, Google, IEEE, the Association for Computing Machinery, and others are working on the problem and on defining standards. The academic community is working on teaching (or unteaching) bias in computer science programs. Microsoft recently announced the creation of a new Chief AI Ethicist position in its organization. IEEE recently published a special issue on the topic, "Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems" (Winfield et al. 2019). Guidelines and recommendations from technology vendors, technology and trade associations, and academia are emerging, as are new approaches in other countries. This is an exciting time for discussing, understanding, and codifying standards. This work is necessary. Without trust, explanation, and transparency, AI adoption will stall.

And progress is critical. As AI takes on more decision-making capabilities, transparency and explainability will be essential. AI may present hidden risks for the enterprise, especially in areas that affect people as well as in decisions that are subject to audit, regulatory compliance, privacy, and ethical limitations. Machine learning processes work with large volumes of training data. The algorithmic process, which is designed to learn a particular function, can select or use data in ways that are biased. It can tell you a result, sometimes with great accuracy, but it is difficult and not always practical to explain the answer when a large number of attributes are in play. End users may not understand how the answer was derived or the circumstances under which it might work or not work. These are things we naturally want to know as we rely on AI-generated decision recommendations. This fundamental element of trust is something we must solve for in order for AI to experience broad adoption in critical fields.

Looking ahead, explainable algorithms and data models will be a natural part of our analytic platforms. The data or evidence will be available for examination, and the algorithm will have built-in transparency: we will understand how it works. We will be able to understand in natural language why or how the AI makes a particular recommendation. The trust components will become a standard in enterprise AI system selections. DARPA is one organization providing leadership and R&D funding for explainability as part of its focus on the third wave of AI.
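One simple, already-available technique points in that direction: permutation importance, which asks how much a model's accuracy drops when each input is scrambled. This is my illustrative choice of method, not one named by DARPA or the article:

```python
# Sketch of permutation importance as a basic explainability technique.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The features the model leans on most -- raw material for a plain-language
# explanation of why it decides as it does.
for i in result.importances_mean.argsort()[::-1][:3]:
    print(data.feature_names[i], round(result.importances_mean[i], 3))
```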

Data Ownership and Privacy: The Data of Me

Which brings us to "our" data--the new battlefield for ownership of individual data. Today, we don't "own" our personal data. It is fragmented and not available, in a meaningful and associated way, to the individuals who generated it. For instance, couldn't we all benefit from having our medical data at our fingertips, combined with wellness data, fitness data, and whatever else we choose, to create a better understanding of our personal health? Wouldn't our personal financial wellness benefit from a consolidated data set that could help us understand our buying, saving, and investing patterns? We need our personal data consolidated for good financial planning, health and wellness planning, and family planning. Today, that data cannot easily be organized across applications, understood, and used in our personal decision making, privately and securely.

The other side of the "data of me" is the ability to protect our own data. Stronger rules for data protection will give individuals more control over their personal data and businesses a more level playing field. The European Union led the way with the General Data Protection Regulation (GDPR), which took effect in May 2018. Pushed by efforts like the GDPR, vendors are disclosing data privacy practices more fully. Startups are working to bring blockchain to bear to democratize access to data while allowing individuals to protect it. Others are working to develop the ability for AI networks to access large stores of data without giving big companies control of that data. If you're one of those big companies, this is a point of disruption. For individual citizens, these ideas have the potential to benefit us all in the future.

Conclusion

During my work in the enterprise software industry, I have not seen an industry transformation as great as the one underway today, wrought by AI. Supply chains, for example, are being disrupted across industries--from automotive, where battery-driven power is eliminating the need for catalytic converters, to health care, where intelligent medicines and devices are disrupting traditional services and diagnostics.

More change is ahead, across industries, across economies, and across the globe. Every industry will participate in this transformation and help achieve this great shift. I am a great enthusiast for the AI advancements yet to be realized that will help us as consumers, citizens, and workers.

Yet even as there are so many exciting things ahead, it is important to be grounded in reality. Making these next advancements happen will require very hard work. As Reinhold Messner has said, "The mountain is always farther away than it looks; it's always taller than it looks, and it's always harder to climb than it looks." Progress will take smart and thoughtful people--subject matter experts, data scientists, physicists, technologists--and leadership. We also need to be smart consumers who are willing to adopt this technology, provide feedback, and challenge the status quo.

Gayle Sheppard is a global technology executive with expertise in artificial intelligence (AI) and enterprise software. She has founded, created, or contributed to startup and Fortune 100 companies focused on AI platforms and solutions in business and consumer markets and the digitization of business. She has held executive leadership roles at global enterprise software companies, emerging businesses within Fortune 50 technology companies, and startups, and has led start-up and Fortune 100 global business units ranging from $30 million to $1 billion in revenue while building core organizations to support profitable growth. For the last 15 years, she has focused on building and growing enterprise AI platforms and products in manufacturing, financial services, and government. She holds a BS in business administration with major concentrations in operations research and finance from the University of South Florida. gayle@aixi.co

DOI: 10.1080/08956308.2019.1613115

References

Angioni, G. 2004. Doing, thinking and saying. In Nature Knowledge: Ethnoscience, Cognition and Utility, ed. Glauco Sanga and Gherardo Ortalli, pp. 243-248. Berghahn Books.

Domingos, P. 2015. The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. New York: Basic Books.

Groopman, J. 2007. How Doctors Think. New York: Houghton Mifflin.

Hofstadter, D., and Sander, E. 2013. Surfaces and Essences: Analogy as the Fuel and Fire of Thinking. New York: Basic Books.

Launchbury, J. 2018. Powerful but Limited: A DARPA Perspective on Artificial Intelligence. Updated February 2019. DARPA. https://www.darpa.mil/about-us/darpa-perspective-on-ai

Little, C., Leganza, G., and Perdoni, R. 2018. The Forrester Wave: Data Preparation Solutions, Q4 2018. Forrester.com, October 31. https://www.forrester.com/report/The+Forrester+Wave+Data+Preparation+Solutions+Q4+2018/-/E-RES141619

White, E. 2018. DARPA to invest $2B in AI technology. Federal Newscast, September 10. https://federalnewsnetwork.com/federal-newscast/2018/09/darpa-to-invest-2-billion-in-ai-technology/

Winfield, A., Michael, K., Pitt, J., and Evers, V., eds. 2019. Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems. Special issue, Proceedings of the IEEE 107(3). https://proceedingsoftheieee.ieee.org/view-recent-issues/march-2019/#articles

(1) Good resources on this topic include Ritesh Chugh's research (http://cqu.academia.edu/RiteshChugh) and Angioni (2004).

Caption: Gayle Sheppard is thinking about how deep learning will enable the AI of the future to use data to contextually connect people, places, and things.

Caption: FIGURE 1. DARPA's framework for artificial intelligence (Source: DARPA)