« Hard AI Future Salon 24th Feb | Main | Electric Sheep in New York »


Nato Welch

By "lean", do you mean they believe that it's more likely that IA will mature before AGI, or that they would prefer that it did?

Don't our preferences count as "facts" in the matter?

And by Yudkowsky's "case", do you mean the case of why artificial/autonomous general intelligence will come before enhancement, or why it SHOULD?

I would really like to be there; unfortunately, I'm poor and in Canada. I really hope the "webcast" comes across in a format that does not discriminate against GNU/Linux users.

Eliezer Yudkowsky

The previous blog post, "Hard AI", is a much better description of my planned talk. I had not planned to address the entirely separate issue of IA vs. AI, though I'll be happy to take the question during Q&A. I also note that this post contains no link to RSVP. On the whole, I recommend that you click on the "Hard AI" link above.
