Optimists and pessimists in AI research

Transformations, patterns of development, and emergent properties in nature never progress as smooth movements.

I have now created an AI category.

Those who dismiss artificial intelligence have lately picked up a tailwind. They believe it is all hype and a bubble, that a new crisis is on the way, that an AI winter will hit the field, and more in that vein. Here, for example, is someone who writes that biological systems cannot be understood without simulating the individual cell down to the atomic level; hence it will take another 100 years of computer development before we can simulate a human brain. See Bharath Ramsundar's calculations.

One way to misunderstand everything, while still staying roughly within the bounds of science, is to extrapolate development "linearly". Here "linear" need not be taken in the narrow sense; a smooth exponential curve counts just as well.

That is: you look back over some period, fit a mathematical growth curve to it, and then project forward from that curve. As a rule, this never survives a reality check more than a few years out. Trends, software, population growth and so on cannot be predicted from simple smooth curves. The world does not work that way; it is put together from a combination of leaps forward and setbacks, with the occasional stretch of steady development in between. (Just look at nature.)
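To make the point concrete, here is a minimal sketch in Python of the procedure being criticised, with invented numbers standing in for any real data series: fit an exponential to a few years of history and project it forward.

```python
import numpy as np

# Invented yearly measurements of some capability index.
# The history happens to look cleanly exponential.
years = np.array([2010, 2011, 2012, 2013, 2014, 2015])
values = np.array([1.0, 2.1, 3.9, 8.2, 15.8, 33.0])

# Fit a straight line to log(values): log y = a*year + b,
# i.e. assume y = exp(b) * exp(a*year).
a, b = np.polyfit(years, np.log(values), deg=1)

def extrapolate(year):
    """Project the fitted exponential forward."""
    return float(np.exp(a * year + b))

# In-sample the fit is tight; a decade out it is pure faith.
for y in (2016, 2020, 2026):
    print(y, round(extrapolate(y), 1))
# The curve dutifully prints enormous numbers for 2026, but it
# encodes no mechanism: one leap or one setback, of the kind the
# text describes, invalidates the whole projection.
```

Nothing here is wrong mathematically; the failure lies in assuming that the process generating the data stays fixed.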

This is also why we cannot predict what the next leap will be; we can only assume with certainty that leaps will come, just as economic crises inevitably follow from capitalism. The coming leap in understanding the whole human machinery, to a level where everything can be simulated down to the atomic scale, need not work by treating the whole thing as a cloud of atoms. On the other hand, it is plainly true that we will probably not invent an artificial intelligence before we have a full understanding of our own human intelligence.

Note: a nerdy group in Århus has started a study circle on artificial intelligence.

https://www.newscientist.com/article/mg23130824-100-will-ais-bubble-pop-deep-learnings-hype-machine-in-overdrive/

Will AI’s bubble pop? Deep learning’s hype machine in overdrive

The hype around artificial intelligence is building – but we don’t yet know if it will fulfil its potential

By Sally Adee

In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that, its powers will be incalculable.

Such rumours of superhuman artificial intelligence have been doing the rounds lately, but this prediction doesn’t come from AI oracles du jour Nick Bostrom or Elon Musk (New Scientist, 25 June, p 18). It was made in 1970 by the man widely considered to be the “father of artificial intelligence” – Marvin Minsky.

But eight years later, the cutting edge was still only the Speak & Spell, an educational toy that used rudimentary computer logic. When the chasm between Minsky’s promise and reality sank in, the disappointment destroyed AI research for decades – a situation so dire it has since been dubbed the “AI winter”.

Today there are whispers that something similar might be on its way, fuelled by the excitement surrounding deep learning, the technique that enabled an AI to beat the world champion at the board game Go earlier this year. “I can feel the cold breeze on the back of my neck,” says Roger Schank, professor emeritus at Northwestern University in Evanston, Illinois. But are these the grumblings of veterans who missed out on the true AI revolution? Or harbingers of something real?

The original AI winter was brought about by two factors. First, a research monoculture focused on a technique called rule-based learning, which tried to emulate basic human reasoning. This showed great promise in the lab, hence the breathless predictions of Minsky and others.

As these prognostications piled up, the UK Science Research Council commissioned a report to evaluate the claims. The result was damning. The Lighthill report of 1973 revealed that for all the potential that rule-based learning showed in lab problems, these were all it could handle. In reality, it was undone by complexity.

Governments stopped funding university AI research. Graduate students sought greener pastures in disciplines that garnered more respect. The remaining scientists talked about their work in hushed tones and deliberately eschewed the phrase artificial intelligence. It would be another two decades before the field recovered.

The rehabilitation began in 1997 when IBM’s Deep Blue AI defeated the reigning chess champion. In 2005, an autonomous car drove itself for 131 miles. In 2011, IBM’s Watson defeated two human opponents on the game show Jeopardy! But what catapulted AI into the mainstream, in 2012, was deep learning.

This technique is based on collections of algorithms that make sense of the world by filtering information through a hierarchy called a neural network. This is not too dissimilar to how our brains make sense of things.

Imagine you are looking at a cat. Sensory information gets filtered through several layers of specialised neurons. The first layer scours for edges, say. If it finds enough of these, it passes that on to a higher-level layer of neurons, which consolidates data from several such lower-order layers. Integrating all this allows our brain to decide whether to categorise what we’re looking at as “cat” or “not cat”.
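To illustrate the layered filtering just described, here is a minimal sketch in Python/NumPy. Everything in it is hypothetical (random weights, a random “image”); a real network would have learned its weights from data. The point is only the structure: each layer consolidates the output of the layer below it, ending in a single “cat or not cat” decision.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "image": 64 pixel intensities standing in for sensory input.
image = rng.random(64)

# Three stacked layers. The weights are random here purely to show
# the structure; in a trained network they would be learned from data.
W1 = rng.normal(size=(32, 64))  # layer 1: 64 inputs -> 32 low-level features
W2 = rng.normal(size=(16, 32))  # layer 2: 32 -> 16 higher-order features
W3 = rng.normal(size=(1, 16))   # layer 3: 16 -> a single "cat score"

def relu(x):
    # Pass a feature upward only if it was detected strongly enough,
    # loosely analogous to "if it finds enough edges, it passes that on".
    return np.maximum(0.0, x)

h1 = relu(W1 @ image)      # edge-like features, say
h2 = relu(W2 @ h1)         # combinations of lower-order features
score = (W3 @ h2).item()   # consolidated evidence, one number

print("cat" if score > 0 else "not cat")
```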

Neural networks had been in the research community for decades, but it wasn’t until improvements in processing speeds enabled them to be stacked on top of one another that things got interesting.

Once they had this more sophisticated structure, researchers could train them on millions of images, so that eventually they would be able to recognise unfamiliar objects in never-before-seen pictures.
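For readers who want to see what “train” means here, a deliberately tiny sketch follows: training amounts to repeatedly nudging the weights to reduce the error on labelled examples. The data below is made up and the model is a single layer, whereas the systems in the article run this over millions of images and many stacked layers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up labelled data: 100 tiny "images" of 16 pixels each,
# plus a constant bias feature. The label follows a simple rule
# (first pixel brighter than 0.5), so the toy model can learn it.
pixels = rng.random((100, 16))
X = np.hstack([pixels, np.ones((100, 1))])
y = (pixels[:, 0] > 0.5).astype(float)

w = np.zeros(17)   # one weight per pixel, plus the bias
lr = 0.5           # learning rate

for step in range(1000):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))  # predicted probability of "1"
    grad = X.T @ (p - y) / len(y)       # gradient of the log-loss
    w -= lr * grad                      # nudge the weights downhill

p = 1.0 / (1.0 + np.exp(-(X @ w)))
print(f"training accuracy: {np.mean((p > 0.5) == (y > 0.5)):.2f}")
```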

A new dawn

This was achieved in 2012 to much fanfare. A neural network was able to recognise cat faces in video streams – despite never being trained on cats. People began to talk about how deep learning, given enough processing power, would lead to a machine able to develop concepts, and thus an understanding of the world. Two years later, Google bought DeepMind, the firm that went on to win at Go, for $500 million.

These early successes have sparked an AI gold rush – based on some bold claims. One start-up promises to turn cancer into a manageable long-term disease rather than an outright killer, another wants to reverse ageing, while another has ambitions to predict future terrorists by their facial features. What unites them is the idea that, given the right combination of algorithms, some solution to these so far intractable problems will pop out.

“The black magic seduction of neural networks has always been that by some occult way, they will learn from data so they can understand things they have never seen before,” says Mark Bishop at Goldsmiths, University of London. Their complexity (157 layers in one case) helps people suspend disbelief and imagine that the algorithms will converge to form some kind of emergent intelligence. But it’s still just a machine built on rule-based mathematical systems, says Schank.

In 2014, a paper that could be seen as the successor to the Lighthill report punctured holes in the belief that neural networks do anything even remotely akin to actual understanding.

Instead, they recognise patterns, finding relationships in data sets that are so complex that no human can see them. This matters because it disproves the idea that they could develop an understanding of the world. A neural network can say a cat is a cat, but it has no concept of what a cat is. It cannot differentiate between a real cat and a picture of one.

The paper isn’t the only thing giving people déjà vu. Schank and others see money pouring into deep learning and the funnelling of academic talent.

“When the field focuses too heavily on short-term progress by only exploring the strength of a single technique, this can lead to a long-term dead end,” says Kenneth Friedman, a student at the Massachusetts Institute of Technology, who adds that the AI and computer science students around him are flocking to deep learning.

It’s not just the old guard that’s worried. Dyspeptic rumblings are coming from the vanguard of machine learning applications, including Crowdflower, a data cleaning company, which wonders if AI is suffering from “hyper-hype”.

But this fear that the AI bubble is about to pop – again – is not the mainstream view.

“I don’t think it’s clear that there is a bubble,” says Miles Brundage at the Future of Humanity Institute’s new Strategic AI Research Center in Oxford. Even if there is, he thinks the field is still safe for the moment. “I don’t think we’re likely to see it run out of steam anytime soon. There’s so much low-hanging fruit, and excitement and new talent in the field,” he says.

Even the Cassandras insist they don’t want to undersell it. “I’m impressed by what people have achieved,” says Bishop. “I never thought I’d ever see them crack Go. And face recognition is at nearly 100 per cent.”

But these applications are not what has everyone so excited. Instead, it’s the lure of curing cancer and ending ageing. Even if AI can meet these expectations, there are still obstacles. “What people don’t acknowledge is how inefficient deep learning is,” says Neil Lawrence at the University of Sheffield, UK. Or how difficult it is to get enough data to meet the claims some of the firms are making, especially in medicine, where privacy concerns prove a huge roadblock to obtaining sufficient amounts of big data.

Will this shortfall herald a proper winter? It’s hard to say: “People have been disillusioned by AI in the past without a proper winter setting in,” says Bishop. It will likely depend on how much disappointment people, and funding bodies, are able to tolerate.

To most in the field, though, right now won’t seem like the time to be worrying about an AI winter. In fact, AI’s main problem currently seems to be that investors can’t print money fast enough for the gold rush. But don’t say you haven’t been warned.

The semantics of machine intelligence

Why do we believe machines are on the verge of understanding the world around them? A lot of it comes down to the metaphors we use. There’s machine learning. Deep learning. Neural networks. Cognitive computing.

“Cognition means thinking. Your machine is not thinking,” says Roger Schank at Northwestern University in Illinois. “When people say AI, they don’t mean AI. What they mean is a lot of brute force computation.” Patrick Winston at the Massachusetts Institute of Technology describes such terms as “suitcase words”: definitions so general that any meaning can be packed into them. Artificial intelligence is the prime example. Machine learning is similar – it doesn’t mean learning in the traditional sense. And while there are some parallels between the two, neural networks are not neurons.

It’s not just semantics. Tell people a machine is thinking, and they will assume it is thinking the way they do – and can distinguish, for example, a white van from a bright sky, as a self-driving car failed to do in May. This mismatch can have serious – or in the case of the car, fatal – results. If it happens enough, it could pop the AI bubble (see main article).

“The beginning and the end of the problem is the term AI,” says Schank. “Can we just call it ‘cool things we do with computers’?”

This article appeared in print under the headline “Will AI’s bubble pop?”
