« Hard AI Future Salon 24th Feb | Main | Electric Sheep in New York »


Nato Welch

By "lean", do you mean they believe that it's more likely that IA will mature before AGI, or that they would prefer if it did?

Don't our preferences count as "facts" in the matter?

And by Yudkowsky's "case", do you mean the case of why artificial/autonomous general intelligence will come before enhancement, or why it SHOULD?

I would really like to be there; unfortunately, I'm poor and in Canada. I really hope the "webcast" comes across in a format that does not discriminate against GNU/Linux users.

Eliezer Yudkowsky

The previous blog post, "Hard AI", is a much better description of my planned talk. I had not planned to address the entirely separate issue of IA vs. AI, though I'll be happy to take the question during Q&A. I also note that this post contains no link to RSVP. On the whole, I recommend that you click on the "Hard AI" link above.
