
Cloud Computing Pioneer Dies


John McCarthy, creator of the Lisp programming language and a pioneer of utility computing, the forerunner of today's cloud computing, died Sunday in Stanford, California. He was 84.

McCarthy was an important figure in the fields of artificial intelligence (AI) and the design of computer languages.

McCarthy was born September 4, 1927, in Boston. He received his undergraduate degree in mathematics at Caltech in 1948, and a PhD in mathematics from Princeton University in 1951. Four years later, he was the principal author of a 1955 proposal that is credited
with coining the term "artificial intelligence."

"The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it," the proposal began. "An attempt will be made to
find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group
of scientists work on it together for a summer." McCarthy's coauthors were Marvin Minsky, Nathaniel Rochester, and Claude Shannon.


The resulting study took place at Dartmouth College in 1956. Over the next four years, McCarthy designed and implemented Lisp, a powerful computer language that is still widely used today. In 1965, he founded Stanford's Artificial Intelligence Laboratory. In
1971, McCarthy was awarded the Turing Award, regarded by many as computer science's Nobel Prize, for his AI work.

McCarthy was a professor at MIT from 1958 until 1962. While at MIT, McCarthy helped launch Project MAC, which created one of the world's first general-purpose time-sharing systems. Today, Project MAC is known as the MIT Computer Science and Artificial Intelligence
Laboratory (CSAIL).

McCarthy thought that time sharing, in which multiple users could access a powerful centralized machine, was just a stepping stone on the road to "utility computing." This was the concept, revolutionary 50 years ago, that computers would one day be a generally
available resource that could support an information economy. The key to achieving that vision, McCarthy said, was to split the computer's attention into many slices so a single machine could serve the needs of many.
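
The idea is easy to picture with a toy round-robin scheduler. The sketch below is purely illustrative, not code from Project MAC or any historical time-sharing system; the function name, the job list, and the one-unit slice are invented for the example. One machine cycles through several users' jobs, giving each a short slice of attention before moving on, so every user sees steady progress.

```lisp
;;; Toy illustration of time-slicing (not historical code; names and numbers are invented).

(defun run-round-robin (jobs slice)
  "JOBS is a list of (name . remaining-work) pairs; SLICE is the work done per turn.
Each job receives one slice at a time until its work is finished."
  (loop while jobs
        do (let* ((job  (pop jobs))
                  (name (car job))
                  (left (- (cdr job) slice)))
             (format t "~a gets a slice~%" name)
             (when (plusp left)
               ;; Unfinished jobs rejoin the back of the queue and wait their turn.
               (setf jobs (append jobs (list (cons name left))))))))

;; Three hypothetical users sharing one machine, one unit of work per slice:
(run-round-robin (list (cons "alice" 3) (cons "bob" 2) (cons "carol" 1)) 1)
;; Prints: alice, bob, carol, alice, bob, alice -- each user keeps making progress.
```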

"McCarthy was one of the people pushing it, getting more people to use fewer computers, was how we put it," recalls Minsky, who cofounded Project MAC with McCarthy.

"When we started at MIT, if you had an idea for a program, you would punch a bunch of cards," Minsky says. "You would put [the cards] in a hopper, and sometime within the next day, somebody would put those cards into a computer somewhere and it would print
out the results. So it was two to three days between your thoughts. If the theory was going to take 20 to 30 steps, how long would it take? Sixty or 90 days. Some people had higher priority and they could do two to three runs a day. But once we had time-sharing,
you could run a program and think about it in 30 seconds, and make a little change, edit it, and start it up again. Now you are doing it 100 times a day instead of twice in three days. He sped up the interaction speeds by a factor of hundreds. That all took
place in a very small number of years between 1960 and 1963."

MIT Professor Emeritus Fernando Corbató, who worked with McCarthy, adds: "He inspired the work that I and others did with his articulate and eloquent spelling out of the vision of interactive computing and of the means to accomplish it. McCarthy's sharp and incisive mind is what I will always remember."

Today, McCarthy is best remembered for his contributions to Lisp, a computer language in which both programs and data are described with the same notation. This makes it easy for Lisp programs to process, analyze, and extend other programs—or even themselves.
Early AI researchers believed that the ability to self-reference was a fundamental requirement for creating thinking machines.
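
That property is easy to see in a couple of lines. The sketch below is ordinary Common Lisp written for this article, not code from McCarthy's original Lisp; the expression and the SWAP-OP helper are made up for illustration. Because the program is just a list, another function can walk it, rewrite it, and hand the result back to the evaluator.

```lisp
;;; A program and the data describing it share one notation: this is just a list ...
(defvar *program* '(+ 1 (* 2 3)))

(eval *program*)            ; => 7, because the same list is also a runnable program

;;; ... so one program can take another apart and rewrite it.
;;; SWAP-OP is an invented helper that turns every * into a +.
(defun swap-op (form)
  (cond ((atom form) form)  ; numbers and symbols pass through unchanged
        ((eq (first form) '*) (cons '+ (mapcar #'swap-op (rest form))))
        (t (cons (first form) (mapcar #'swap-op (rest form))))))

(eval (swap-op *program*))  ; => 6, i.e. the rewritten program (+ 1 (+ 2 3))
```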

One of the great criticisms of AI is that its proponents systematically overestimated what they would be able to achieve and underestimated how long progress would actually take. Despite rapid successes in the 1960s and 1970s with the creation of programs that
could play chess, prove mathematical theorems, and even impersonate a psychiatrist, AI researchers were never able to create a system that could understand a photograph, think as a human, or learn like a child.

"If my 1955 hopes had been realized, human-level AI would have been achieved before many (most?) of you were born," McCarthy stated in a 2006
slide presentation that is archived on his website
. The presentation, entitled "Human-Level AI Is Harder than It Seemed in 1955," suggests that one of the reasons AI failed to realize the promises of its proponents was that "we humans are not very good
at identifying the heuristics we ourselves use."
