Closing the Gap between Expectation and Reality When It Comes to AI and Software 2.0
Traditionally, developers have written software as a series of hard-coded rules: If X happens then do Y. The human instructs the machine, line by line. That's Software 1.0. But Software 2.0 recognizes that--with advances in deep learning--we can build a neural network that learns which instructions or rules are needed for a desired outcome. The argument made by 2.0 proponents like Andrej Karpathy, director of AI at Tesla, is that we won't really write code anymore. We'll just be finding data and feeding it into machine learning systems. In this scenario, we can imagine the role of software engineer morphing into "data curator" or "data enabler." Whatever we call ourselves, we'll be people who are no longer writing code.
However, software engineering is not going away anytime soon. Even if a new role evolves--be it Software 2.0 engineer, data scientist 2.0, etc.--there are ways in which this technology shift will empower the practitioner of Software 1.0. In fact, it's not clear that software engineering, in the near future at least, will be completely different from what we do now. Yes, we'll have help from deep learning neural network systems, but they'll help us do our current job better rather than replace us entirely.
How will machine learning shape software development?
It's a new world, sure, but we're not planning to live in an episode of Black Mirror. In fact, general office assistants are already scheduling your day and starting your conference calls. There are even AI-powered systems on the web that can generate a logo for your business and refine that logo based on your feedback.
Today, your phone automatically checks your spelling and suggests the next word. When you're writing code, a similar tool highlights possible errors. Someone who does pair programming for Pivotal would naturally be drawn to think about Software 2.0's impact on the way they work. Considering the advances in machine learning and conversational interfaces, it's conceivable that a machine could one day be one half of a pair programming team.
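To make the idea concrete, here is a minimal sketch of next-word suggestion of the kind a phone keyboard does: count which word tends to follow each word in past text, then suggest the most frequent follower. This is a toy bigram model of my own construction, not how any particular product works; real keyboards use far richer language models.

```python
# Toy next-word suggester: learn bigram frequencies from past text,
# then suggest the most common follower of a given word.
from collections import Counter, defaultdict

history = "write the tests then make the tests pass then refactor the code"

followers = defaultdict(Counter)
words = history.split()
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1          # count each observed word pair

def suggest(word):
    """Suggest the most common next word seen after `word`, if any."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(suggest("the"))  # "tests" follows "the" most often in the history
```

Feeding it more representative text improves the suggestions, which is exactly the "find data and feed it in" dynamic the Software 2.0 argument describes.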
For years, we've been using automated helpers to refactor and save time writing boilerplate code. And we're now welcoming the emergence of AI-driven assistants in more complex software development as well. Lately, they have been appearing among product teams in the form of supercharged IDE features that can suggest better code completion.
The role of AI assistants
Now imagine a far more advanced AI assistant playing a much larger role in the future. As you're writing code, your machine partner might determine what kind of function you're writing and fill the rest in for you, based on your style, using high-level predictive analysis. Essentially the machine writes the rest of the code for you, then you approve it.
Another area an AI assistant could help with is test-driven development. A human could write the tests while the machine partner iterates millions of times to find the right piece of code to solve those tests. Instead of doing both jobs--writing the tests and making the tests pass--you'd have a machine partner that does the latter. That would be helpful. You'd spend less time on implementation code and more time on understanding and solving business problems.
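The division of labor described above can be sketched in a few lines: the human contributes executable tests, and a hypothetical machine partner iterates over candidate implementations until the tests pass. The candidate space here is just three primitive operations, standing in for the vastly larger search a real Software 2.0 system would perform; all the names below are illustrative, not an actual tool's API.

```python
# Toy sketch of TDD with a machine partner: human writes the tests,
# machine searches candidate implementations until one satisfies them.
import operator

def human_written_tests(f):
    """The human's contribution: executable examples of the desired behavior."""
    return f(2, 3) == 5 and f(0, 0) == 0 and f(-1, 1) == 0

def machine_partner(tests, candidates):
    """The machine's contribution: iterate until an implementation passes."""
    for candidate in candidates:
        if tests(candidate):
            return candidate
    return None  # no candidate satisfied the tests

# A tiny candidate space of primitive operations stands in for code search.
found = machine_partner(human_written_tests,
                        [operator.mul, operator.sub, operator.add])
print(found is operator.add)  # the search recovers addition
```

The tests are the specification; the human's real work shifts to writing tests that capture the business problem precisely enough that the search cannot "pass" with the wrong behavior.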
Way down the line, Software 2.0 might even help guide test-driven development and suggest the next test to be run, giving you the reasons why. Let's imagine the marketing people go to the development team and say they want such and such functionality. If they can express what they want in a way the machine can understand--which is getting easier all the time--the machine could help you choose the tests that are needed and suggest next steps.
Rebuff, not replace
This raises the ultimate concern: will machines just replace software engineers altogether? The reality is more likely that, at best, we'll get to something like 99% competence. But that still means failure 1% of the time, which means unpredictability. And that means you need a monitoring system to ensure that the code which is written actually works. Maybe this is a new role for software engineers, similar to what Andrej alludes to in his post: monitoring the code and helping the machine learning system achieve closer to a 100% accuracy rate.
Now that we've outlined the conceivable benefits, the next question arises: what parts of software programming can be moved to the deep learning 2.0 framework and what should remain in the traditional 1.0 framework? Today, it's clear that these deep learning neural networks do well in supervised learning settings, if they're provided training data with good examples and bad examples so they can learn what to output correctly. Google, for one, is using deep learning throughout its product suite.
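A minimal sketch of that supervised setting: the system is shown labeled good (1) and bad (0) examples and adjusts its weights until it reproduces the labels. A single perceptron learning the OR function stands in here for a deep network; production systems would use a framework such as TensorFlow or PyTorch, but the learn-from-labels loop is the same idea.

```python
# Toy supervised learning: a perceptron fits labeled examples of OR.
# Training data: inputs paired with their desired outputs.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # weights
b = 0.0          # bias
lr = 0.1         # learning rate

for _ in range(20):                      # a few passes over the data
    for (x1, x2), target in examples:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = target - pred              # supervised signal: label minus prediction
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

# After training, the perceptron reproduces the labels it was shown.
learned = [1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
           for (x1, x2), _ in examples]
print(learned)  # matches the target labels [0, 1, 1, 1]
```

Everything the model "knows" came from those labeled pairs, which is why the quality of the training data dominates the quality of the result.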
But those systems are only as good as the training data. And, as one of my colleagues pointed out, improving a model's performance frequently involves improving the underlying code and deployment environment, as well as improving the training data. In fact, some machine learning systems are getting so good that they're actually bumping up against the human-caused flaws in the training data.
The reality is that neural networks are not a silver bullet. Rather, we need to design neural networks to work with other solutions. There are certain parts of software development that will work really well with deep learning and there are other parts that won't. If we look again at pair programming, there are many different ways to complete a problem by sharing the reins with someone else.
Software development is a process of constant collaboration with other colleagues. Every time a new pair comes together, the partners bring different experiences and different approaches to tackling a problem. The more pairs you bring together, the more solutions you get.
With Software 2.0, we're adding a new partner to help developers do their job better. We envision the rise of a more energetic collaborative environment that leads to ever more, and ever more effective, solutions. And that's good for everyone.
Ian is a Data Scientist and Engineering Director at Pivotal and has worked on a wide range of topics from fraud detection to transport & logistics. He now runs Pivotal's R&D engineering team in Dublin working on Cloud Foundry.
Title Annotation: Database and Network Intelligence: Opinion
Publication: Database and Network Journal
Date: Jun 1, 2019