Quantified Uncertainty Research Institute
QURI is a nonprofit research organization studying forecasting and epistemics to improve the long-term future of humanity.
Patrick Gruban, on Effective Altruism Germany and Nonprofit Boards in EA
Here’s a discussion between Patrick Gruban and me. Patrick is the co-director and managing director of Effective Altruism Germany and the co-founder and managing director of Rosy Green Wool. He is a serial entrepreneur with over 25 years of experience in different fields, including software development, visual arts, …
Squiggle 0.7.0
New functions, ESM modules, a better Playground editor, and several fixes
Owain Evans on Ideas for Language Models
A varied discussion about Truthful AI, AI composition, and possibilities for language models like ChatGPT
Accuracy Agreements: A Flexible Alternative to Prediction Markets
A simple proposal to convert money into complex forecasts
Some estimation work on the horizon
Too much work for any one group
Estimation for sanity checks
I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate …
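To make the contrast concrete, here is a minimal sketch of such a sanity check; every number below is a made-up placeholder, not a figure from the post:

```python
# Hypothetical sanity check: all numbers are made-up placeholders.
claimed_people_reached = 10_000_000  # claim: a program reached 10M people
budget_usd = 50_000                  # its stated budget

implied_cost_per_person = budget_usd / claimed_people_reached  # = $0.005
plausible_floor_usd = 0.50  # assumption: even mass outreach costs ~$0.50/person

# A sanity check only asks whether the claim is clearly off, not what the
# right number is (that would be a Fermi estimate).
if implied_cost_per_person < plausible_floor_usd / 10:
    print("Clearly off: implied cost per person is far below any plausible floor.")
```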
Winners of the Squiggle Experimentation and 80,000 Hours Quantification Challenges
In the second half of 2022, we announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 Hours’ top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post …
Use of “I’d bet” on the EA Forum is mostly metaphorical
Epistemic status: much ado about nothing.
Straightforwardly eliciting probabilities from GPT-3
I explain two straightforward strategies for eliciting probabilities from language models, in particular GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this. Straightforward strategies: look at the probability of the yes/no completion. Given a binary question, like …
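As a sketch of the yes/no-completion strategy, assuming OpenAI’s legacy (pre-1.0) Python client and a GPT-3-era model — the model name, prompt format, and helper function here are illustrative assumptions, not the post’s exact code:

```python
import math

import openai  # assumes the legacy (pre-1.0) OpenAI Python client

# openai.api_key = "..."  # set your API key first


def p_yes(question: str, model: str = "text-davinci-003") -> float:
    """Estimate P(yes) for a binary question from next-token logprobs."""
    prompt = f"{question}\nAnswer (yes or no):"
    resp = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=1,
        temperature=0,
        logprobs=5,  # also return the top-5 candidate tokens with logprobs
    )
    top = resp["choices"][0]["logprobs"]["top_logprobs"][0]
    # GPT-3 tokens usually carry a leading space; if a candidate is not in
    # the top-5 list, fall back to a very low logprob.
    lp_yes = top.get(" yes", -100.0)
    lp_no = top.get(" no", -100.0)
    # Renormalize over just the yes/no mass to get a probability.
    return math.exp(lp_yes) / (math.exp(lp_yes) + math.exp(lp_no))


print(p_yes("Will it rain in London tomorrow?"))
```

Renormalizing over just the two answer tokens deliberately discards probability mass the model assigns to other completions; that is a simplification, not a requirement of the approach.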
Six Challenges with Criticism & Evaluation Around EA
Continuing on “Who do EAs Feel Comfortable Critiquing?”
Eli Lifland, on Navigating the AI Alignment Landscape
Recently I had a conversation with Eli Lifland about the AI alignment landscape. Eli has been a forecaster at Samotsvety and has been investigating this landscape. I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy. This …
Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism
Cross-posted on the EA Forum. Misha and I recently recorded a short discussion about large language models and their uses for effective altruists. This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and I edited our …