Artificial intelligence

Screw up your eyes and examine any lack of clarity

Is it possible to derive a generally applicable approach to innovation from the story of how the idea of software developed – one that could perhaps be used to steer, accelerate or boost digital change in the future? “Not really,” says Dr Manuel Bachmann, a researcher at the University of Basel and lecturer at the University of Lucerne, in his book “The triumph of the algorithm – how the idea of software was invented”.

Innovation comes about through “centuries of remoulding”

The only identifiable principle in the more than 2,000-year history of the development of computer programs is: “Screw up your eyes and examine any lack of clarity in existing ideas to see if there is any as yet unrecognised potential for making them more precise.” The concept of software that has so radically changed our world is the result of just such a centuries-long process of “remoulding”.

Before arriving at the “triumph of the algorithm”, a great deal of “working, adjusting, refocusing, rearranging, adding and deleting, modifying and bending” had to go on until the mathematical problems of a program-controlled machine were finally solved, at least in principle.

Now, however, we face numerous new challenges – such as the digitisation of society, artificial intelligence, blockchains and robotic process automation. The same principle applies here: we have to latch on to existing concepts and develop them further – which will, in turn, lead to new challenges.

New risks posed by machine learning algorithms

Take, for example, the use of machine learning algorithms: with “deep learning”, a computer program is meant to learn autonomously from the incoming data it analyses; it identifies additional parameters on its own and continuously refines its decision-making. Probabilities then come into play, combined with experience gained from preceding processes.
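
As a rough illustration of this continuous refinement – a minimal sketch, not taken from the article, with invented data – the toy perceptron below adjusts its decision rule with every incoming example instead of following fixed, hand-written rules:

```python
def train_incrementally(stream, lr=0.1):
    """Refine a linear decision rule one incoming example at a time."""
    w = [0.0, 0.0]   # weights, adjusted as data arrives
    b = 0.0          # bias term
    for features, label in stream:               # label is +1 or -1
        score = w[0] * features[0] + w[1] * features[1] + b
        prediction = 1 if score > 0 else -1
        if prediction != label:                  # refine only on mistakes
            w = [w[i] + lr * label * features[i] for i in range(2)]
            b += lr * label
    return w, b

# Hypothetical data stream: points above the diagonal are labelled +1.
stream = [((1.0, 2.0), 1), ((2.0, 1.0), -1), ((0.5, 1.5), 1), ((1.5, 0.5), -1)]
print(train_incrementally(stream))
```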

This means that machine learning is fundamentally different from programming as we currently understand it: instead of being given clear rules to work with, these computer programs are supposed to derive new findings from an enormous number of examples. And that is where the first errors can creep in: what such a system learns depends on its training data – and that data may itself be the product of biased decisions. So, unconsciously as it were, the AI system takes on the biases inherent in its training data.
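
How such a bias propagates can be shown in miniature. In the following sketch – the screening scenario, feature names and data are all invented – the toy learner simply memorises, for each feature value, the label it saw most often, so skewed training decisions become a skewed “rule”:

```python
from collections import Counter, defaultdict

def fit(examples):
    """'Learn' by counting which label each feature value most often received."""
    counts = defaultdict(Counter)
    for feature, label in examples:
        counts[feature][label] += 1
    # Majority label per feature value = the learned "rule".
    return {f: c.most_common(1)[0][0] for f, c in counts.items()}

# Hypothetical, deliberately skewed screening decisions used as training data.
training = ([("name_group_A", "invite")] * 9 + [("name_group_A", "reject")] * 1
          + [("name_group_B", "invite")] * 3 + [("name_group_B", "reject")] * 7)

model = fit(training)
print(model)  # {'name_group_A': 'invite', 'name_group_B': 'reject'}
```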

Researchers led by Aylin Caliskan of Princeton University demonstrated this quite impressively in April 2017 using apparently neutral texts: to train their software, the scientists used one of the largest collections of machine-readable language, the “Common Crawl” corpus, with 840 billion words drawn from the English-language Internet. The artificial intelligence was supposed to learn by itself which expressions belong together semantically.

The surprising result was that the AI made implicit value judgments. It often assigned positive attributes to flowers and to European or American first names, while insects and African-American names carried negative connotations. It also suggested that male names were more closely associated semantically with career-related terms, mathematics and science, while it tended to associate female names more with family and art.
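
The measurement behind such findings can be sketched in a few lines. The following simplified example follows the spirit of the association test used by Caliskan’s team: it compares how close target words sit to “pleasant” versus “unpleasant” attribute words in a vector space. The tiny hand-made vectors are purely illustrative stand-ins for real word embeddings:

```python
import math

def cosine(u, v):
    """Cosine similarity: how close two word vectors point."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def association(target, pleasant, unpleasant, vecs):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    pos = sum(cosine(vecs[target], vecs[p]) for p in pleasant) / len(pleasant)
    neg = sum(cosine(vecs[target], vecs[n]) for n in unpleasant) / len(unpleasant)
    return pos - neg

# Invented 2-d "embeddings" in which 'flower' drifts towards pleasant words.
vecs = {
    "flower": (0.9, 0.1), "insect": (0.1, 0.9),
    "love":   (1.0, 0.0), "gentle": (0.8, 0.2),   # pleasant attributes
    "hatred": (0.0, 1.0), "ugly":   (0.2, 0.8),   # unpleasant attributes
}
print(association("flower", ["love", "gentle"], ["hatred", "ugly"], vecs) > 0)  # True
print(association("insect", ["love", "gentle"], ["hatred", "ugly"], vecs) > 0)  # False
```

In the actual study, the similarities came from word embeddings trained on the Common Crawl corpus; the same arithmetic, applied at that scale, surfaced the biases described above.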

Algorithms need to explain their recommendations

This may sound like scientists just playing around, but it draws attention to a serious challenge facing future developers. It is obvious that today’s deep learning systems do not – as had been hoped – automatically make what is objectively the best decision. Instead, they can be misled by distortions in the training data.

When it comes to driverless cars, or medical or military applications, that could easily put lives at risk. That is why a program called “Explainable Artificial Intelligence” has been launched in the USA, intended to make algorithms disclose how they arrived at a particular decision. “It’s in the nature of these machine learning systems that they produce a lot of false alarms,” says David Gunning, Program Manager at the US military research agency DARPA, “so people need to be able to track and understand how an algorithm arrived at its recommendation.”
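
What such a disclosure might look like, in miniature: for a simple linear scoring model, every feature’s contribution to a decision can be listed explicitly. This is only an illustrative sketch – the feature names, weights and threshold are invented, not DARPA’s method:

```python
def explain(weights, features, threshold=0.5):
    """Break a linear score down into per-feature contributions."""
    contributions = {name: weights[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    decision = "alarm" if score > threshold else "no alarm"
    # Sort by absolute impact, so the biggest reasons come first.
    return decision, sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

# Invented model: how strongly each input pushes towards raising an alarm.
weights  = {"signal_strength": 0.8, "sensor_noise": -0.3, "prior_alerts": 0.4}
features = {"signal_strength": 0.9, "sensor_noise": 0.5, "prior_alerts": 1.0}

decision, reasons = explain(weights, features)
print(decision)                      # alarm
for name, contribution in reasons:
    print(f"  {name}: {contribution:+.2f}")
```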

*ARTIFICIAL INTELLIGENCE (AI, also machine intelligence, MI) is intelligence displayed by machines, in contrast with the natural intelligence (NI) displayed by humans and other animals. Colloquially, the term “artificial intelligence” is applied when a machine mimics “cognitive” functions that humans associate with other human minds, such as “learning” and “problem solving”.

Articles on Innovation

  • From Ada to Zuse: the computer has many mothers and fathers (15.03.2018)
    However, many more years went by before the first real mainframe computers saw the light of day, and the final breakthrough had many mothers and fathers.

  • The polymath’s calculator (15.03.2018)
    This was what is called a pinwheel calculator for the four basic arithmetic operations, which could be used to enter up to 8-digit numbers and display up to 16-digit results.

  • Computer pioneers in Switzerland (15.03.2018)
    Information science in Switzerland owes its birth and early growth primarily to the farsightedness and drive of the Professor of Mathematics, Eduard Stiefel.

  • Basis for the first programming languages (15.03.2018)
    “All Cretans are liars,” said the Cretan Epimenides. Is the statement by Epimenides true or false?

  • The concept of the algorithm (15.03.2018)
    Although nowadays algorithms are primarily associated with software and computers, their origins lie much further in the past.

  • Innovation doesn’t happen by chance (15.03.2018)
    Even though in the history of science there have been some spectacular discoveries made by chance from time to time – from penicillin to Teflon to Viagra – these tend to be the exception.

  • How new things come about (15.03.2018)
    The source of innovation is “epistemic” recycling. This is a term from psychology and denotes the kind of curiosity that is directed at delivering more information to the organism and enabling it to acquire new knowledge.

  • How software was born (15.03.2018)
    In order to compete with the German encryption machine “Enigma”, just a few weeks after arriving at Bletchley Park Turing ordered a machine to be constructed – the hardware.
