Update: Eliezer reminded me that the focus of his talk will be the following, as stated in the original post "Hard AI":
"Recursive Self-Improvement and the World's Most Important Math Problem." In 1965, I. J. Good suggested that smarter minds can make themselves even smarter, leading to a runaway positive feedback that Good termed the "intelligence explosion". But how do you build an Artificial Intelligence such that it remains stable and friendly through the ascent to superintelligence? Eliezer Yudkowsky talks about the implications of recursive self-improvement, and how it poses the most important math problem of this generation.
One of my observations at last year's IA-AI Accelerating Change Conference was that most people lean toward intelligence augmentation rather than machine intelligence surpassing our capabilities.
Is that because we feel more comfortable with us being a central part of the next stage of evolution, or is it really fact-based?
Tomorrow, Friday, February 24th, at the Future Salon, Eliezer Yudkowsky will make the case for the machines. I am curious which way the always interesting and stimulating Future Salon audience will lean. Come and join us.