From an interview with Eliezer Yudkowsky (the world’s leading paranoid on the dangers of AI):
Richard Hamming used to go around annoying his colleagues at Bell Labs by asking them what were the important problems in their field, and then, after they answered, he would ask why they weren’t working on them. Now, everyone wants to work on “important problems”, so why are so few people working on important problems? And the obvious answer is that working on the important problems doesn’t get you an 80% probability of getting one more publication in the next three months. And most decision algorithms will eliminate options like that before they’re even considered. The question will just be phrased as, “Of the things that will reliably keep me on my career track and not embarrass me, which is most important?”
And to be fair, the system is not at all set up to support people who want to work on high-risk problems. It’s not even set up to socially support people who want to work on high-risk problems. In Silicon Valley a failed entrepreneur still gets plenty of respect, which Paul Graham thinks is one of the primary reasons why Silicon Valley produces a lot of entrepreneurs and other places don’t. Robin Hanson is a truly excellent cynical economist and one of his more cynical suggestions is that the function of academia is best regarded as the production of prestige, with the production of knowledge being something of a byproduct. I can’t do justice to his development of that thesis in a few words (keywords: hanson academia prestige) but the key point I want to take away is that if you work on a famous problem that lots of other people are working on, your marginal contribution to human knowledge may be small, but you’ll get to affiliate with all the other prestigious people working on it.
And these are all factors which contribute to academia, metaphorically speaking, looking for its keys under the lamppost where the light is better, rather than near the car where it lost them. Because on a sheer gut level, the really important problems are often scary. There’s a sense of confusion and despair, and if you affiliate yourself with the field, that scent will rub off on you.
Academia does plenty of good things, but the opportunity cost of our systemic incentives toward 'safe' research (I include both the derivative and the esoteric) is rather staggering.
Edit, 8-13-11: A friend blogs,
The answer comes down to ethics. Service as an ethic is alien to so many academics. “I serve.” They don’t get it. Some do. A few. But a number of my friends have gone into the academy for longer or shorter periods of time, and the observations have always been similar – it’s not a place of scholarship and diligent service, but rather of all sorts of politics and backbiting where you desperately try to carve out your own private sphere in a confusing bureaucratic jungle.
I think academia used to have a strong shared sense of duty (Sebastian uses the term 'warrior ethic,' where service to a noble cause is its own reward), but for several reasons this has largely eroded or isn't sustainable in today's academy. It's still present, but it's much weaker. We could point to institutional factors, a changing demographic of who goes into academia, a crossover from our increasingly mercenary private-sector culture, or getting more of what we pay for, but at the end of the day, it seems like many people in academia think of it primarily as a career, not as service. It's a big loss.
“From an interview with Eliezer Yudkowsky (the world’s leading paranoid on the dangers of AI)”
Somewhat related question: What do you think is the probability of an existential risk (http://www.existential-risk.org/faq.html) occurring before the end of this century?
I think the modern mind tends to avoid answering such questions, which is unfortunate. I'd put a non-trivial chance on it; if pressed, perhaps 40%.