Update: Eliezer reminded me that the focus of his talk will be as stated in the original post, Hard AI:
"Recursive Self-Improvement and the World's Most Important Math Problem." In 1965, I. J. Good suggested that smarter minds can make themselves even smarter, leading to a runaway positive feedback that I. J. Good termed the "intelligence explosion". But how do you build an Artificial Intelligence such that it remains stable and friendly through the ascent to superintelligence? Eliezer Yudkowsky talks about the implications of recursive self-improvement, and how it poses the most important math problem of this generation.
One of my observations at the IA-AI Accelerating Change Conference last year was that most people lean towards intelligence augmentation rather than machine intelligence surpassing our capabilities.
Is that because we feel more comfortable with ourselves being a central part of the next stage of evolution, or is it really fact-based?
Tomorrow, Friday, February 24th, at the Future Salon, Eliezer Yudkowsky will make the case for the machines. I am curious which way the always interesting and stimulating Future Salon audience will lean. Come and join us.
Eliezer Yudkowsky is one of the foremost thinkers on the Singularity. He is a cofounder and current Research Fellow of the Singularity Institute for Artificial Intelligence. Alongside Artificial Intelligence, Yudkowsky's interests include Bayesian probability theory, Bayesian decision theory, human rationality, and evolutionary psychology.
Yudkowsky is the author of the papers Levels of Organization in General Intelligence and Creating Friendly AI. See Eliezer Yudkowsky's professional site and personal site.
A Future Salon has the following structure: 6-7 pm networking with light refreshments, proudly sponsored by SAP; 7-9+ pm presentation and discussion. Location: SAP Labs North America, Building D, Room Southern Cross, 3410 Hillview Avenue, Palo Alto, CA 94304 [map]. As always, free and open to the public. Improve your commute by sharing it with a fellow Futurist: check the Ride Board for opportunities.
If you can't be there in person, join the webcast and chat:
Webcast link: http://mfile.akamai.com/14947/sdp/finnern.com/salon_02_2006.mov
IRC chat as always:
Server: irc.freenode.net
Channel: #futuresalon
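If you'd rather join the chat from a script or a plain text console instead of a graphical IRC client, here is a minimal sketch in Python; the port (6667, the IRC default) and the nickname are assumptions, not salon specifics, and any ordinary IRC client works just as well:

```python
import socket

HOST, PORT = "irc.freenode.net", 6667  # standard IRC port (assumed)
NICK = "salon_guest"                   # pick any unused nickname

# Connect and register with the server (RFC 2812 NICK/USER handshake),
# then join the Future Salon channel.
sock = socket.create_connection((HOST, PORT))
sock.sendall(f"NICK {NICK}\r\n".encode())
sock.sendall(f"USER {NICK} 0 * :Future Salon guest\r\n".encode())
sock.sendall(b"JOIN #futuresalon\r\n")

# Print incoming traffic; answer server PINGs so the connection stays
# open (simplified: assumes a PING arrives at the start of a read).
while True:
    data = sock.recv(4096).decode(errors="replace")
    if not data:
        break
    if data.startswith("PING"):
        sock.sendall(data.replace("PING", "PONG", 1).encode())
    print(data, end="")
```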
By "lean", do you mean they believe that it's more likely that IA will mature before AGI, or that they would prefer if it did?
Don't our preferences count as "facts" in the matter?
And by Yudkowsky's "case", do you mean the case of why artificial/autonomous general intelligence will come before enhancement, or why it SHOULD?
I would really like to be there; unfortunately, I'm poor and in Canada. I really hope the "webcast" comes across in a format that does not discriminate against GNU/Linux users.
Posted by: Nato Welch | February 23, 2006 at 20:03
The previous blog post, "Hard AI", is a much better description of my planned talk. I had not planned to address the entirely separate issue of IA vs. AI, though I'll be happy to take the question during Q&A. I also note that this post contains no link to RSVP. On the whole, I recommend that you click on the "Hard AI" link above.
Posted by: Eliezer Yudkowsky | February 24, 2006 at 12:02