Oct. 25, 2008
Computer scientists and engineers -- and one neurophysiologist -- met at Cornell Oct. 12-14 for a Symposium on Computing Challenges sponsored by the Kavli Institute at Cornell for Nanoscale Science. It was probably something like what happened when scientists got together in 1980 to discuss ways of going to Mars. The meeting was a brainstorming session, where participants, including many Cornell students, threw out ideas ranging from the obvious to the fanciful for the future of hardware and software design.
According to Sandip Tiwari, Cornell's Charles N. Mellowes Professor in Engineering and director of the National Nanotechnology Infrastructure Network (NNIN), who organized the symposium, the challenge is that we are approaching what might be a "gotcha" in Moore's law, which says that the number of devices we can cram onto a chip will continue to increase exponentially. Soon we may be able to get a trillion devices, some as small as 30 atoms across, on a postage-stamp-sized piece of silicon. But they can't all operate at once without generating too much heat, Tiwari pointed out, and such small devices may behave differently from one instant to the next, or even fail intermittently. How do we design such devices and create software that can work with them?
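To get a feel for the arithmetic, the short Python sketch below shows how steady doubling works its way up to a trillion; the starting count of roughly 2 billion transistors per chip in 2008 and the two-year doubling period are illustrative assumptions, not figures cited at the symposium.

```python
# Rough back-of-the-envelope sketch of Moore's law. The doubling period of
# two years and the starting point of about 2 billion transistors per chip
# in 2008 are assumptions for illustration, not numbers from the symposium.

devices = 2e9        # assumed transistors on a leading-edge 2008 chip
year = 2008
target = 1e12        # the "trillion devices" mentioned above

while devices < target:
    devices *= 2     # device count doubles each period
    year += 2        # assumed doubling period of two years

print(f"Under these assumptions, a trillion-device chip arrives around {year}.")
# Under these assumptions, a trillion-device chip arrives around 2026.
```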
"For 30 or 40 years hardware and software development have been largely separated," said Cornell computer science professor Bart Selman. "Maybe computer scientists and hardware engineers have to get back together."
Discussion was stimulated by invited talks on cutting-edge research in such areas as quantum computing, where the states of individual atoms or even individual subatomic particles represent the ones and zeros of binary arithmetic; adaptive computing, where hardware adjusts to the sort of problem it's given; and approximate computing, where you take an "acceptable" answer to a complex problem in order to solve it in a reasonable time. Most of this is well in the future, speakers said. Quantum computing, for example, is 99 percent reliable, but it needs to be made 99.999 percent reliable, according to Hans Mooij of the Technical University at Delft, the Netherlands.
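The speakers did not tie approximate computing to any particular algorithm, but the idea is easy to illustrate. The Python sketch below uses a simple nearest-neighbor heuristic for the traveling salesman problem -- a stand-in example chosen for this article, not something presented at the symposium -- trading a guaranteed-best tour for an acceptable one found almost instantly.

```python
import math
import random

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def greedy_tour(points):
    """Nearest-neighbor heuristic: always visit the closest unvisited city.
    Fast, but only approximately optimal -- the 'acceptable answer' idea."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        unvisited.remove(nxt)
        order.append(nxt)
    return order

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(200)]
order = greedy_tour(cities)
print(f"approximate tour length: {tour_length(cities, order):.2f}")
# An exact search over all 199! orderings is hopeless; the heuristic answers instantly.
```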
In a provocative kickoff, Olaf Sporns, professor of psychology at Indiana University, Bloomington, described the architecture of the human brain. Using magnetic resonance imaging, he explained, neurophysiologists are mapping the brain's "white matter" -- the fibers that connect neurons -- and have found that many small interconnected clusters of activity are in turn connected to one another in what Cornell's Steven Strogatz, the Jacob Gould Schurman Professor of Theoretical and Applied Mechanics, has dubbed a "small-world network." Just as anyone in the United States is an average of "six degrees of separation" from anyone else, each node of the brain is only two or three steps away from any other.
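That two-or-three-step property is easy to demonstrate. The sketch below (plain Python, with made-up parameters) builds a Watts-Strogatz-style network -- a ring in which each node talks only to its near neighbors, plus a few randomly rewired shortcut links -- and measures the average number of hops between nodes; a handful of shortcuts is enough to put every node only a few steps from any other.

```python
import random
from collections import deque

def small_world(n=200, k=4, p=0.1, seed=0):
    """Ring lattice with a few random 'shortcut' links (Watts-Strogatz style)."""
    rng = random.Random(seed)
    edges = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):       # link each node to its k nearest neighbors
            a, b = i, (i + j) % n
            if rng.random() < p:             # occasionally rewire to a random node instead
                b = rng.randrange(n)
                if b == a or b in edges[a]:
                    continue
            edges[a].add(b)
            edges[b].add(a)
    return edges

def average_path_length(edges):
    """Mean shortest-path length over reachable node pairs, via breadth-first search."""
    total, pairs = 0, 0
    for source in edges:
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in edges[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

print(f"average hops between nodes: {average_path_length(small_world()):.1f}")
```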
Symposium participants saw parallels in the structure of "combinatorial" problems, where a computer has to find the best combination of many different, interrelated variables, and in maps of the Internet created by Cornell computer scientist Jon Kleinberg, who spoke on the third day of the meeting. It's important to consider such ideas, Tiwari said, because the brain is probably the most efficient computer around in terms of energy use.
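To see why such combinatorial problems strain even the fastest hardware, consider the toy sketch below (illustrative only, not code from the symposium): it checks every combination of a handful of yes/no variables against a list of constraints, and the number of combinations doubles with each variable added.

```python
from itertools import product

# A tiny constraint-satisfaction problem: each clause lists (variable, required value)
# pairs, and at least one pair in every clause must be satisfied (a toy SAT instance).
clauses = [
    [(0, True), (1, False)],                 # x0 OR NOT x1
    [(1, True), (2, True)],                  # x1 OR x2
    [(0, False), (2, False), (3, True)],     # NOT x0 OR NOT x2 OR x3
    [(3, False), (1, True)],                 # NOT x3 OR x1
]
num_vars = 4

def satisfies(assignment):
    """True if every clause has at least one literal matching the assignment."""
    return all(any(assignment[v] == val for v, val in clause) for clause in clauses)

# Brute force: try every combination of truth values. With n variables there are
# 2**n combinations, which is why these problems explode so quickly as n grows.
solutions = [combo for combo in product([False, True], repeat=num_vars) if satisfies(combo)]
print(f"checked {2 ** num_vars} combinations, found {len(solutions)} solutions")
```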
What participants took home was a collection of ideas to try, mostly to be applied to such high-level computing problems as managing world finance, predicting the weather or programming self-driving vehicles. We don't need this stuff to run Windows.