
Being Intelligent with Artificial Intelligence.

Software as a service has been around for a long time. Competing by embedding fresh intelligent features in systems, processes and products has been the order of the day for some time. It is good promotion and draws attention, and that attention is often converted into revenue. Consider the skyrocketing prices of iPhones.

Each brand tries, in its own way, to prevent customer migration, sometimes by offering superior services and sometimes by making it excruciatingly difficult to migrate the data involved. It is understandable, therefore, that interaction with data in user systems has become a fine art. There are prerequisites and consequences. The prerequisites are, of course, analytics; the consequences come in the form of machine learning and an increasing reliance on the presumed magic of algorithms.

A recent CIO survey in the public domain seeks to establish that deployment of artificial intelligence has tripled in the last three years. Enterprise adoption of artificial intelligence has gone up from 10 percent in 2015 to 37 percent in 2019; between 2018 and 2019 alone, it rose from 25 percent to 37 percent. There is no reported survey, in public columns or otherwise, either of the types of risks or of the percentage rise in risks during the same period. Such surveys, when they do happen, have all the pretenses of a scientific methodology, but the sample size is rarely disclosed, nor is the geographical spread of the surveyed territory discussed.

Notwithstanding these limitations of pretense at the scientific method, intuition is declared dead, and the skills of interpretation and inference, indeed any clinical sense, are buried deep.

Whereas everyone is conscious of security snags, and many are talking of privacy rights, people tend to walk past technology troubles. It is considered unfashionable to talk of data difficulties. The underlying assumption seems to be that if someone is having a problem related to data, it could only be a function of his lack of skill! This is a remarkably comforting approach to business promotion but a distressingly dubious one for a scientific approach to skill formation.

It is clear that artificial intelligence does surpass average human competencies in select parts of jobs. But it clearly does not supplant the human dexterity of making comparisons beyond the boundaries of algorithms. On 'mass oriented' platforms such as YouTube and Facebook, algorithms are used to create engagement with humans. This has novelty value, and for many it offers the joy of 'discovering patterns' or even of coming to grips with 'predictive validity'. Irrespective, the whole effort denies the joy of co-creation.

On the user side, the impact of these instrumentalities across generations, from teenagers to octogenarians, is still to be flagged, let alone studied.

A few institutions of artificial intelligence have attempted to contain risks by limiting the types of machine learning algorithms deployed. This is perhaps a sound approach to business ethics; however, there is no regulatory dimension to it, either in the form of self-imposed limits by the profession or of government mandate.

The entire paradigm of applying predictive validity to human endeavors is a palpably intoxicating approach to understanding animate, living, intelligence-prone entities. The easy solution offered is that, in the age of big data, it is for the government to take strong measures. This is, prima facie, a glib solution, usually coming from those who have never really dabbled in governance.

Even more amusing is the attempt to decode human emotions to promote sales.

The answer lies in designing transparency requirements with clarity and integrity, and in designing systems that flag their own obsolescence and degradation.

Perhaps no discussion like the one undertaken in the foregoing sentences can be complete without a reference to the utmost necessity of designing nuanced controls. The experts are only beginning to recognise this. The wishful belief that machine learning models will sort out customer-facing problems can at best be relevant only to the commonest problems of the simplest users. The core issue remains the level of confidence the system inspires. Clearly, artificial intelligence owes its primacy to encoding patterns in data, but the psychotherapist does that too, perhaps equally if not more deftly, by simple or not-so-simple techniques of psychoanalysis.

Perhaps one of the bigger dangers of artificial intelligence, as potent as its advantages, is the possibility of AI techniques being used to develop 'stereotyped' approaches to ethnic issues. If that were to happen, data analysis would have come full circle. In such cases the danger is of the baby being thrown out with the bath water.

Abundant caution would have us remind ourselves that, after all is said and done, the postmodernism of artificial intelligence is too nascent a discipline to apply to any domain save inanimate objects. For its relevance to the life sciences, it could, at best, be just one more approach to problems that are definitionally complex and that therefore require several vantage points of perception.

Dr. Vinayshil Gautam, FRAS (London)
COPYRIGHT 2019 Foundation for Organisational Research & Education
No portion of this article can be reproduced without the express written permission from the copyright holder.
Copyright 2019 Gale, Cengage Learning. All rights reserved.

Article Details
Title Annotation:From the Desk of the Honorary Chief Editor
Author:Gautam, Vinayshil
Date:Apr 1, 2019
