tl;dr: I present relative estimates for animal suffering and for the 2022 top Animal Charity Evaluators (ACE) charities. I am doing this to showcase a new tool from the Quantified Uncertainty Research Institute (QURI) and to present an alternative to ACE's current rubric-based approach. Introduction and goals: At QURI, we…
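To give a taste of what a relative estimate looks like, here is a minimal Python sketch, with numpy standing in for Squiggle; the two charity names and the 90% confidence intervals are hypothetical placeholders, not numbers from the post:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 10_000

def lognormal_90ci(low, high, size=N):
    """Sample a lognormal from a 90% confidence interval,
    mimicking Squiggle's `low to high` syntax."""
    mu = (np.log(low) + np.log(high)) / 2
    sigma = (np.log(high) - np.log(low)) / (2 * 1.645)
    return rng.lognormal(mu, sigma, size)

# Hypothetical relative values, in arbitrary units of impact.
value_charity_a = lognormal_90ci(1, 10)    # reference charity
value_charity_b = lognormal_90ci(0.5, 50)  # more uncertain charity

# The relative estimate: how many of charity A is charity B worth?
ratio = value_charity_b / value_charity_a
print(np.percentile(ratio, [5, 50, 95]))
```

One appeal of this style over a rubric is that the output keeps its uncertainty: you get a 90% interval on "B is x times as valuable as A" rather than a single score.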
Too much work for any one group
I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate…
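To make the contrast concrete, here is a toy Fermi estimate of the classic piano-tuners variety; all the inputs are rough assumptions of mine, just to illustrate the genre:

```python
# Toy Fermi estimate: how many piano tuners in a city of ~3M people?
population = 3_000_000
pianos = population / 100          # assume ~1 piano per 100 people
tunings_per_year = pianos * 1      # assume each piano is tuned ~once a year
tunings_per_tuner = 4 * 5 * 50     # 4 tunings/day, 5 days/week, 50 weeks/year
tuners = tunings_per_year / tunings_per_tuner
print(f"Rough number of piano tuners: {tuners:.0f}")  # ~30
```

A sanity check, by contrast, would not bother arriving at the final number; it would only ask whether a claimed figure, say 10,000 tuners, is off by orders of magnitude.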
In the second half of 2022, we announced the Squiggle Experimentation Challenge and a $5k challenge to quantify the impact of 80,000 Hours' top career paths. For the first contest, we got three long entries. For the second, we got five, but most were fairly short. This post…
Epistemic status: much ado about nothing.
I explain two straightforward strategies for eliciting probabilities from language models, in particular GPT-3; provide code; and give my thoughts on what I would do if I were being more hardcore about this. Straightforward strategies: look at the probability of the yes/no completion. Given a binary question, like…
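As a rough illustration of the yes/no strategy, here is a minimal sketch against the legacy OpenAI completions API that GPT-3 used; the prompt wording, model name, and the renormalization over the two answer tokens are my assumptions, not necessarily the post's exact code:

```python
import math
import openai  # legacy (pre-1.0) completions API, as used with GPT-3

def prob_yes(question: str, model: str = "text-davinci-003") -> float:
    """Estimate P(yes) for a binary question from next-token logprobs."""
    prompt = f"Question: {question}\nAnswer (yes or no):"
    response = openai.Completion.create(
        model=model,
        prompt=prompt,
        max_tokens=1,
        temperature=0,
        logprobs=5,  # return the top 5 candidate tokens with their logprobs
    )
    # Logprobs for the first (only) generated token, as a token -> logprob map.
    top = response["choices"][0]["logprobs"]["top_logprobs"][0]
    p_yes = math.exp(top.get(" yes", -math.inf))
    p_no = math.exp(top.get(" no", -math.inf))
    if p_yes + p_no == 0:
        return float("nan")  # neither answer made the top tokens
    return p_yes / (p_yes + p_no)  # renormalize over the two answers
```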
Continuing on "Who do EAs Feel Comfortable Critiquing?"
Summary: I compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then, in a few years, I intend to check whether the probabilities implied by Laplace's rule, which depends only on the number of years since a conjecture was created, are…
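For reference, here is the arithmetic Laplace's rule implies, under the assumption (mine, for illustration) that each year a conjecture stays open counts as one failed Bernoulli trial:

```python
def p_resolved(years_open: int, horizon: int = 1) -> float:
    """P(conjecture resolved within `horizon` years), by Laplace's rule:
    after n failures and no successes, P(success on the next trial) = 1 / (n + 2)."""
    p_unresolved = 1.0
    for n in range(years_open, years_open + horizon):
        p_unresolved *= 1 - 1 / (n + 2)
    return 1 - p_unresolved

# e.g. a conjecture that has been open for 100 years:
print(p_resolved(100, horizon=5))  # ~0.047, i.e. ~5% within 5 years
```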
Links to the EA Forum post and personal blog post. Summary: This document seeks to outline why I feel uneasy about high existential risk estimates from AGI (e.g., 80% doom by 2070). When I try to verbalize this, I view considerations like selection effects at the level of which…
The story so far:
* I constructed the original Big List of Cause Candidates in December 2020.
* I spent some time thinking about the pipeline for new cause area ideas, not all of which is posted.
* I tried to use a bounty system to update the list for next year, but…
Also posted in the EA Forum here. Brief description of the experiment: I asked a language model to replicate a few patterns of generating insight that humanity hasn't really exploited much yet, such as: 1. Variations on "if you never miss a plane, you've been…"