AI Safety

Ozzie Gooen

Eli Lifland, on Navigating the AI Alignment Landscape

Recently I had a conversation with Eli Lifland about the AI alignment landscape. Eli is a forecaster at Samotsvety and has been investigating this landscape. I’ve known him for the last 8 months or so and have appreciated many of his takes on AI alignment strategy. This

Nuño Sempere

My highly personal skepticism braindump on existential risk from artificial intelligence

Links to the EA Forum post and personal blog post. Summary: This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like * selection effects at the level of which