Quantified Uncertainty Research Institute
QURI is a nonprofit research organization studying forecasting and epistemics to improve the long-term future of humanity.
Straightforwardly eliciting probabilities from GPT-3
I explain two straightforward strategies for eliciting probabilities from language models, in particular GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this. Straightforward strategies: Look at the probability of a yes/no completion. Given a binary question, like
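A minimal sketch of that first strategy, assuming the legacy OpenAI Completions API (pre-1.0 `openai` SDK); the prompt template, model name, and function name are illustrative rather than the post's exact code:

```python
import math
import openai  # legacy (pre-1.0) SDK; assumes OPENAI_API_KEY is set in the environment

def p_yes(question: str, model: str = "text-davinci-002") -> float:
    """Estimate P(yes) for a binary question from the model's next-token logprobs."""
    prompt = f"{question}\nAnswer (Yes or No):"
    resp = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=1,
        temperature=0,
        logprobs=5,  # return the top-5 candidate tokens with their log-probabilities
    )
    top_logprobs = resp["choices"][0]["logprobs"]["top_logprobs"][0]  # {token: logprob}
    mass = {"yes": 0.0, "no": 0.0}
    for token, logprob in top_logprobs.items():
        key = token.strip().lower()
        if key in mass:
            mass[key] += math.exp(logprob)
    total = mass["yes"] + mass["no"]
    # Renormalize over just the yes/no tokens; fall back to 0.5 if neither appears.
    return mass["yes"] / total if total > 0 else 0.5
```

Renormalizing over only the Yes/No tokens discards any probability mass the model puts on other completions, which is part of what keeps this a rough, straightforward estimate rather than a calibrated one.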
Six Challenges with Criticism & Evaluation Around EA
Continuing on "Who do EAs Feel Comfortable Critiquing?"
Eli Lifland, on Navigating the AI Alignment Landscape
Recently I had a conversation with Eli Lifland about the AI alignment landscape. Eli is a forecaster at Samotsvety and has been investigating this landscape. I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy. This
Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism
Cross-posted on the EA Forum. Misha and I recently recorded a short discussion about large language models and their uses for effective altruists. This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and I edited our
An in-progress experiment to test how Laplace’s rule of succession performs in practice.
Summary: I compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are
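For reference, a minimal sketch of the rule as applied here (zero resolutions in n years, so Laplace's rule gives 1/(n+2) for the next year); the function and parameter names are mine, not the post's:

```python
def p_resolved_within(years_open: int, horizon: int = 1) -> float:
    """Laplace's rule of succession with zero successes so far:
    P(resolved next year | open for `years_open` years) = 1 / (years_open + 2).
    Chaining that year by year gives the probability of resolution within `horizon` years."""
    p_still_open = 1.0
    for i in range(horizon):
        p_still_open *= 1 - 1 / (years_open + i + 2)
    # The product telescopes to (n + 1) / (n + horizon + 1).
    return 1 - p_still_open

# Example: a conjecture posed 50 years ago has an implied chance of roughly
# 1 - 51/61 ≈ 0.16 of being resolved within the next 10 years:
# p_resolved_within(50, horizon=10)
```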
My highly personal skepticism braindump on existential risk from artificial intelligence
Links to the EA Forum post and personal blog post. Summary: This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like * selection effects at the level of which
Interim Update on our Work on EA Cause Area Candidates
The story so far: * I constructed the original Big List of Cause Candidates in December 2020. * I spent some time thinking about the pipeline for new cause area ideas, not all of which is posted. * I tried to use a bounty system to update the list for next year but
The QURI Logo and Reflections on 99designs
A retrospective look at designing the QURI logo
Who do EAs Feel Comfortable Critiquing?
Many effective altruists seem to find it scary to critique each other
Probing GPT-3's ability to produce new ideas in the style of Robin Hanson and others
Also posted in the EA Forum here. Brief description of the experiment: I asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as: 1. Variations on "if you never miss a plane, you've been
Can EAs use heated topics, in part, as learning opportunities?
Given that we're going through this anyway, maybe we could at least learn from it.
EA Could Use Better Internal Communications Infrastructure
EA is lacking in many areas of enterprise infrastructure. Internal communication feels like a pressing one.