Next Future Salon, Friday, February 24th: Hard AI with Eliezer Yudkowsky,
"Recursive Self-Improvement and the World's Most Important Math
Problem." In 1965, I. J. Good suggested that smarter minds can make
themselves even smarter, leading to a runaway positive feedback that Good termed the "intelligence explosion". But how do you build an
Artificial Intelligence such that it remains stable and friendly through
the ascent to superintelligence? Eliezer Yudkowsky talks about the
implications of recursive self-improvement, and how it poses the most important math problem of this generation.
This one is going to be really interesting, because Eliezer has thought deeply, and for a very long time, about Artificial Intelligence and what will happen when machine intelligence surpasses our own in the not-so-distant future. The prospect scares the hell out of a lot of people, which is one more reason for us to take a closer look. Please add your link to the companion Wiki page.
Eliezer Yudkowsky is one of the foremost thinkers on the Singularity. He is a cofounder and current Research Fellow of the Singularity Institute for Artificial Intelligence. Alongside Artificial Intelligence, Yudkowsky's interests include Bayesian probability theory, Bayesian decision theory, human rationality, and evolutionary psychology.
Yudkowsky is the author of the papers Levels of Organization in General Intelligence and Creating Friendly AI. Eliezer Yudkowsky's professional site, personal site.
A Future Salon has the following structure: 6-7 pm networking with light refreshments, proudly sponsored by SAP; 7-9+ pm presentation and discussion. SAP Labs North America, Building D, Room Southern Cross, 3410 Hillview Avenue, Palo Alto, CA 94304 [map]. As always, free and open to the public. Improve your commute by sharing it with a fellow Futurist; check the Ride Board for opportunities. Please RSVP so we can get enough food and drinks.