Wednesday, March 26, 2008

ARTHUR C. CLARKE: CONNECTED

This was my cover story and centre-page spread for 'Connected', the digital supplement of The Daily Telegraph, Tuesday, January 7, 1997. The article was an extended feature based on a newly published book of the time: HAL's Legacy: 2001's Computer as Dream and Reality, edited by David G. Stork, with a foreword by Arthur C. Clarke (MIT Press, £16.59), published to celebrate the 30th anniversary of the creation of HAL. Entitled 'Hal's heir - the quest for artificial intelligence', it asked: 'How close are we to building anything like the famous cinematic computer?' The full text reads as follows:

"I am a HAL Nine Thousand computer, Production Number 3.1. I became operational at the HAL plant in Urbanu, Illinois, on January 12, 1997."

Sunday marks the true birthday of the most famous computer in cinematic history.

In Stanley Kubrick's film adaptation of Arthur C. Clarke's novel 2001: A Space Odyssey, HAL was born on January 12, 1992; but it is the date given in the novel — 1997 — that is being celebrated by researchers as an opportunity to evaluate progress — or lack of it — in the field of artificial intelligence (AI) in that time. Where are the thinking, talking, chess-playing, lip-reading computers like HAL — or preferably, since he also committed murder, not like HAL?

One of the prime movers behind the celebration is David G. Stork, chief scientist and head of the Machine Learning and Perception Group at the Ricoh California Research Centre. He has edited a stimulating collection of essays by luminaries from the computer, perception and AI communities — HAL's Legacy: 2001's Computer as Dream and Reality — to be published, in print and on the Web, for the event. Each asks questions about our progress towards creating intelligent machines, telling us much not only about HAL and 2001 but also about ourselves.

Kubrick's film was released in 1968 — the year of the assassinations of Martin Luther King and Robert Kennedy, and the first photograph of the whole Earth from space, taken by Apollo astronauts on the way to the Moon. Computers at that time were not a daily reality for the ordinary person. Most were huge machines that ran on solid-state micro-electronics and used punched cards and tape to input data. The keyboard and video display monitor were new developments. The personal computer, the mouse and the software explosion lay in the future, and the Internet was merely a twinkle in the eyes of a handful of American researchers.

HAL is a child of these times and his conception underlines the folly of predicting the future by extrapolating from the present. Even so, 2001, and HAL in particular, continue to fascinate, despite the anachronisms and misconceptions.

Stork writes: "2001 is, in essence, a meditation on the evolution of intelligence from the monolith-inspired development of tools, through HAL's artificial intelligence, up to the ultimate (and deliberately mysterious) stage of the star child."

The consensus in the late Nineties, however, is that HAL — reflecting ancient dreams and nightmares — will not be ready by 2001. Beyond that, opinions diverge. Some believe it is only a matter of time before intelligent computers emerge; others that it will never happen because the whole concept is flawed. In many fields we have made great strides, in others pitifully small steps. Artificial intelligence, says Stork, "is a notably hazy matter that we don't even have a good definition for". It is also "one of the most profoundly difficult problems in science".

One of his major contributors is one of the godfathers of AI, Marvin Minsky, who believes that while good progress was made in the early days, the researchers became overconfident. They prematurely moved towards studying practical AI problems such as chess and speech recognition, "leaving undone the central work of understanding the general computational principles — learning, reasoning and creativity — that underlie intelligence".

"The bottom line," says Minsky, "is that we haven't progressed too far toward a truly intelligent ma­chine. We have collections of dumb specialists in small domains; the true majesty of general intelligence still awaits our attack." He believes that if we work really hard, we can have such an intelligent system in four to 400 years.

Stephen Wolfram, the principal architect of the Mathematica computer system, believes the answer to building HAL lies in the domain of systems in which simple elements interact to produce unexpectedly complex behaviour. He uses the example of the human brain, in which the relatively simple rules governing neurons have evolved into a complex cognitive system.
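
Wolfram's best-known demonstration of that idea is the elementary cellular automaton. As a minimal sketch of my own, rather than an example drawn from the book, the few lines of Python below run Rule 110: each cell updates according to one simple local rule, yet the pattern that emerges is remarkably complex.

```python
# A minimal sketch of the point that simple interacting elements can yield
# complex behaviour -- my illustration, not an example from the book.
# Each cell of an elementary cellular automaton (Rule 110) follows one tiny
# rule based only on itself and its two neighbours, yet the overall pattern
# that emerges is famously intricate.
RULE = 110
WIDTH, STEPS = 64, 32

row = [0] * WIDTH
row[-1] = 1  # start from a single live cell at the right edge

for _ in range(STEPS):
    print("".join("#" if cell else "." for cell in row))
    # Each new cell is read straight off the rule number, using the
    # 3-bit neighbourhood (left, centre, right) as an index.
    row = [
        (RULE >> (4 * row[(i - 1) % WIDTH] + 2 * row[i] + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```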

Ray Kurzweil, who developed the first commercial large-vocabulary speech-recognition system, believes the way to tackle the task is to reverse-engineer the brain, scanning an entire brain down to the level of nerve cells and the interconnections. We would then need merely to encode all the information into a computer to make a virtual brain every bit as intelligent.

David J. Kuck, a distinguished computer scientist, believes that given the rapid increase in computing power, we could soon build a computer the size and power of HAL. "If automobile speed had improved by the same factor as computer speed has in the past 50 years," he writes, "cars that travelled at highway speed limits would now be travelling at the speed of light."

He believes progress in the 21st century will be slower, with gains coming from software and parallel processing, which is used in the human brain. To give some comparison, the brain has between a thousand billion and 10 thousand billion neurons, plus many more interconnecting synapses. The fastest computer at present has 100 billion switches — 10 per cent of the brain's capacity — but Kuck believes that in the future, the physical capacity of computers will match that of the brain.
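
As a back-of-the-envelope check of my own (not a calculation from the book), the quoted figures line up as follows:

```python
# Rough arithmetic behind the comparison above: 100 billion switches set
# against a thousand billion to 10 thousand billion neurons.
neurons_low = 1e12       # "a thousand billion" neurons
neurons_high = 1e13      # "10 thousand billion" neurons
switches = 100e9         # fastest computer: 100 billion switches

print(f"against the lower estimate: {switches / neurons_low:.0%}")   # 10%
print(f"against the upper estimate: {switches / neurons_high:.0%}")  # 1%
```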

The only manufacturers that could at present build HAL are IBM or Intel. "However," Kuck writes, "it is not obvious that a HAL-like system will ever be sufficiently interesting to induce governments to fund its development."

HAL's voice is a holy grail for many researchers. Making computers produce natural-sounding speech is remarkably difficult. We have developed programs that work adequately for short utterances or single words, but in sentences machines cannot yet convey the human subtleties of stress and intonation. The greatest problem is the machine's inability to comprehend what it is saying or hearing. And while we have made several important strides in speech recognition, no system remotely approaches HAL's proficiency at speechreading (lipreading) in silence.

A successful automatic speech-recognition system requires three things: a large vocabulary, a program that can handle any voice and the ability to process continuous speech. We have the first two — and will get the third by early 1998, the book predicts.

Making computers see has also proved to be extremely difficult. There has been success in what researchers call "early" vision — edge and motion detection, face tracking and the recognition of emotions. Full vision would include the ability to analyse scenes.

Success has, however, been marked in chess. There are more possible combinations in the game than there are atoms in the universe. Humans play chess by employing explicit reasoning, linked to large amounts of pattern-directed knowledge. The most successful chess computers use brute force, searching through billions of alternative moves.
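
The brute-force principle itself is easy to sketch. The toy Python program below is my own illustration rather than anything from the book or from Deep Blue: it exhaustively searches the complete game tree of a tiny take-away game, the same idea chess machines apply to vastly larger trees with specialised hardware, pruning and evaluation.

```python
# A toy sketch of brute-force game-tree search (my illustration, not Deep
# Blue's actual program): every legal move is tried and the tree is searched
# to the end. The game here is tiny -- players alternately take 1-3 stones
# and whoever takes the last stone wins -- so exhaustive search is feasible.
from functools import lru_cache

@lru_cache(maxsize=None)
def best_score(stones: int) -> int:
    """+1 if the player to move can force a win, -1 if not."""
    if stones == 0:
        return -1  # no stones left: the previous player took the last one and won
    # Brute force: examine every legal move and keep the best outcome.
    return max(-best_score(stones - take) for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    for pile in range(1, 13):
        verdict = "win" if best_score(pile) == 1 else "loss"
        print(f"pile of {pile:2d}: {verdict} for the player to move")
```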

The first machine to defeat a grandmaster in tournament play was IBM's Deep Thought, which began playing in 1988. The current champion computer is its successor, Deep Blue, which is capable of examining up to 200 million chess positions a second. Murray S. Campbell, a member of the team that built it, says Deep Blue is actually a system of 32 separate computers (or nodes) working in concert, with 220 purpose-built chess chips, running in parallel.

Garry Kasparov first took on IBM's chess machines in 1989, against Deep Thought, in a contest he viewed as "a defence of the whole human race". He lost a game to Deep Blue for the first time last year. Campbell believes man-machine contests will end some time next century. It is only a matter of time before the world's best chess player is a machine, he says, but concedes that "until computers possess the ability to reason, strong human chess players will always have a chance to defeat a computer".

Stork's primary motivation for the book was aesthetic, he says, likening the exercise to that of art historians providing fresh insights into a subtle painting. 2001 illustrates many key ideas in several disciplines of computer science.

"The Internet and the World Wide Web have changed the way people view communication and technology," says Stork. "2001 expressed the anxiety [of the Six­ties] of what computers were and what their potential was. Like much science fiction, it was a metaphor for the salient issues of the present."

He believes the biggest mistake made by early AI researchers was "not to cast the problem as more of a grand endeavour to build useful intelligent machines. The search raises the deepest human questions since Plato."

When Stork saw 2001 in the year of its release, he was "awed. It was overwhelming, and supremely beautiful. It was also mythic and very confusing". The film "shows us and reminds science that it is part and parcel of the highest human aspiration. It also raises the question: is violence integral to the nature of intelligence? It is thus related to Kubrick's A Clockwork Orange, which merges violence and aesthetics. It suggests the link can be severed — but at a terrible cost."

The computing pioneer Alan Turing predicted in the Forties that by early next century, society would take for granted the pervasive intervention of intelligent machines. By the end of this century, scant years away, we will be talking to our PCs and, by 2010, working with translating telephones. Our most advanced programs today may be comparable with the minds of insects but the power of computation is set to increase by a factor of 16,000 every 10 years for the same cost.
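
As a rough calculation of my own rather than one from the article, a 16,000-fold increase per decade works out at roughly a 2.6-fold increase each year, or a doubling of computing power about every eight to nine months:

```python
# A small arithmetic check (mine, not the article's) of the growth figure:
# a 16,000-fold increase in computing power per decade for the same cost.
import math

decade_factor = 16_000
annual_factor = decade_factor ** (1 / 10)                      # growth per year
doubling_months = 12 * math.log(2) / math.log(annual_factor)   # implied doubling time

print(f"annual growth factor: {annual_factor:.2f}x")           # about 2.6x
print(f"implied doubling time: {doubling_months:.1f} months")  # about 8.6 months
```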

Many of HAL's capabilities can already be realised; others will be possible soon. Building them all into one intelligent system will take decades. If we are to achieve that, we must give computers understanding; but to program them with understanding, we must first understand the nature of our own human consciousness. That could take some time.
