Is it possible to derive any generally applicable approach to innovation from the story of how the idea of software developed? An approach that could perhaps be used to steer, accelerate or boost digital change in the future? “Not really,” says Dr Manuel Bachmann, a researcher at the University of Basel and lecturer at the University of Lucerne, in his book “The triumph of the algorithm – how the idea of software was invented”.
Innovation comes about through “centuries of remoulding”
The only identifiable principle in the more than 2,000-year history of the development of computer programs is: “Screw up your eyes and examine any lack of clarity in existing ideas to see whether there is any as yet unrecognised potential for making them more precise.” The concept of software that has so radically changed our world is the result of just such a process of centuries-long “remoulding”.
Before the “triumph of the algorithm” could arrive, a great deal of “working, adjusting, refocusing, rearranging, adding and deleting, modifying and bending” had to go on until, finally, the mathematical problems of a program-controlled machine were solved, at least in principle.
Now, however, we face numerous new challenges – such as the digitisation of society, artificial intelligence, blockchains and robotic process automation. The same applies here: we have to latch on to existing concepts and develop them further, which will in turn lead to yet more new challenges.
New risks posed by machine learning algorithms
Take, for example, the use of machine learning algorithms: with “deep learning”, a computer program is supposed to learn independently from the incoming data it analyses; it identifies additional parameters itself and continuously refines its decision-making. Probabilities come into play, combined with experience gained from earlier processing runs.
This means that machine learning is something fundamentally different from programming as we currently understand it: instead of being given clear rules to work with, these computer programs are supposed to derive new insights from an enormous number of examples. And that is where the first errors can creep in: what a system like that learns depends on its training data – and that data may itself be the result of biased decisions. So, unconsciously as it were, the AI system takes on the biases inherent in its training data.
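To make the difference concrete, here is a deliberately simplified, purely hypothetical Python sketch (not taken from the book or from any real system): a classifier is trained on invented “hiring” examples instead of explicit rules, and because those examples contain a bias, the learned model reproduces it.

```python
# Purely hypothetical sketch: a classifier learns from invented "hiring" examples
# instead of explicit rules. Because the invented history contains a bias - one
# group was only accepted with a higher qualification score - the learned model
# reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

qualification = rng.uniform(0, 1, n)   # qualification score, 0..1
group = rng.integers(0, 2, n)          # group membership, 0 or 1

# Biased historical decisions: group 1 needed a score above 0.7, group 0 above 0.5.
accepted = (qualification > np.where(group == 1, 0.7, 0.5)).astype(int)

X = np.column_stack([qualification, group])
model = LogisticRegression().fit(X, accepted)

# Two equally qualified candidates who differ only in group membership:
candidates = np.array([[0.6, 0], [0.6, 1]])
print(model.predict_proba(candidates)[:, 1])
# The model assigns the second candidate a lower acceptance probability:
# the bias in the examples has become part of the learned decision rule.
```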
Researchers led by Aylin Caliskan of Princeton University demonstrated this quite impressively in April 2017 using apparently neutral texts: to train their software, the scientists used one of the largest collections of digitised language, the “Common Crawl” corpus with 840 billion words drawn from the English-language Internet. The artificial intelligence was supposed to learn by itself which expressions belong together semantically.
The surprising result was that the AI made implicit value judgments. It often assigned positive attributes to flowers and to European or American first names, while insects and African-American names were associated with negative connotations. The AI also suggested that male names were semantically more closely associated with career-related terms, mathematics and science, while it tended to associate female names more with family and art.
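The team’s actual method, applied to word embeddings trained on that corpus, is considerably more elaborate; the following toy sketch with hand-made vectors merely illustrates the underlying idea that semantic “belonging together” can be measured as the similarity of word vectors.

```python
# Toy sketch of the principle only - not the researchers' code. In a word-embedding
# model every word is a vector, and "belonging together semantically" can be
# measured as the cosine similarity of two vectors. The tiny hand-made vectors
# below are invented for illustration.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

vectors = {
    "flower":     np.array([0.9, 0.1, 0.0]),
    "insect":     np.array([0.1, 0.9, 0.0]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.2, 0.8, 0.1]),
}

def association(word):
    # Positive: the word sits closer to "pleasant"; negative: closer to "unpleasant".
    return cosine(vectors[word], vectors["pleasant"]) - cosine(vectors[word], vectors["unpleasant"])

print("flower:", round(association("flower"), 3))   # positive
print("insect:", round(association("insect"), 3))   # negative
```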
Algorithms need to explain their recommendations
This may sound like scientists just playing around, but it draws attention to a serious challenge facing future developers. It is obvious that today’s deep learning systems do not – as had been hoped – automatically make what is objectively the best decision. Instead, they can be misled by distortions in the training data.
When it comes to driverless cars, or to medical or military applications, that could easily put lives at risk. That’s why a programme called “Explainable Artificial Intelligence” has been launched in the USA, intended to produce algorithms that disclose how they arrived at a particular decision. “It’s in the nature of these machine learning systems that they produce a lot of false alarms,” says David Gunning, Program Manager at the US military research agency DARPA, “so people need to be able to track and understand how an algorithm arrived at its recommendation.”
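The methods being funded under that programme go far beyond this, but as a purely hypothetical illustration of the goal: even a simple linear model can break a single recommendation down into per-feature contributions that a human can then inspect.

```python
# Hypothetical illustration, far simpler than the methods DARPA is funding: for a
# linear model, a single recommendation can be broken down into per-feature
# contributions (coefficient times input value), which a human can inspect.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented "loan approval" data: income, existing debt, years with current employer.
X = np.array([[60, 10, 5], [30, 20, 1], [80, 5, 10], [25, 30, 0],
              [55, 15, 3], [70, 8, 8], [20, 25, 1], [65, 12, 6]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 1, 0, 1])
feature_names = ["income", "debt", "tenure"]

model = LogisticRegression(max_iter=1000).fit(X, y)

applicant = np.array([[40.0, 18.0, 2.0]])
recommendation = "approve" if model.predict(applicant)[0] == 1 else "decline"
contributions = model.coef_[0] * applicant[0]

print("recommendation:", recommendation)
for name, value in zip(feature_names, contributions):
    print(f"  {name}: {value:+.3f}")   # signed contribution to the decision score
```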
*ARTIFICIAL INTELLIGENCE (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.