Research

Ozzie Gooen

Relative Value Functions: A Flexible New Format for Value Estimation

We just published a blog post outlining the details of relative value functions. This was touched on in the presentation I sent out yesterday. https://forum.effectivealtruism.org/posts/EFEwBvuDrTLDndqCt/relative-value-functions-a-flexible-new-format-for-value The post has several code sections and tables, so I won’t attempt to copy that over here. Instead,

Ozzie Gooen

Presentation: Estimating Everything Everywhere Always

An overview of our plan at QURI

Ozzie Gooen

Conveniences in Thought and Communication

Communication often makes winners and losers. I propose we use the term "convenience" to help clarify what's going on.

Ozzie Gooen

Accuracy Agreements: A Flexible Alternative to Prediction Markets

A simple proposal to convert money into complex forecasts

Nuño Sempere

Some estimation work in the horizon

Too much work for any one group

Nuño Sempere

Estimation for sanity checks

I feel very warmly about using relatively quick estimates to carry out sanity checks, i.e., to quickly check whether something is clearly off, whether some decision is clearly overdetermined, or whether someone is just bullshitting. This is in contrast to Fermi estimates, which aim to arrive at an estimate

Nuño Sempere

Use of “I’d bet” on the EA Forum is mostly metaphorical

Epistemic status: much ado about nothing.

Nuño Sempere

Straightforwardly eliciting probabilities from GPT-3

I explain two straightforward strategies for eliciting probabilities from language models, and in particular for GPT-3, provide code, and give my thoughts on what I would do if I were being more hardcore about this. Straightforward strategies Look at the probability of yes/no completion Given a binary question, like
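The first strategy the post names, reading off the probability of a yes/no completion, amounts to taking the log-probabilities the model assigns to the "Yes" and "No" tokens and renormalizing over just that pair. A minimal sketch of that renormalization step (the function name is illustrative; the log-probabilities would come from the model API, and the post itself provides the original code):

```python
import math

def prob_yes(logprob_yes: float, logprob_no: float) -> float:
    """Convert the log-probabilities a language model assigns to "Yes"
    and "No" completions into a single probability of "Yes", by
    renormalizing over just those two options (a softmax over the pair)."""
    p_yes = math.exp(logprob_yes)
    p_no = math.exp(logprob_no)
    return p_yes / (p_yes + p_no)
```

For instance, if the model assigns "Yes" a logprob of -0.5 and "No" a logprob of -1.5, this yields roughly a 73% probability of "Yes".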

Nuño Sempere

Six Challenges with Criticism & Evaluation Around EA

Continuing on “Who do EAs Feel Comfortable Critiquing?”

Ozzie Gooen

Eli Lifland, on Navigating the AI Alignment Landscape

Recently I had a conversation with Eli Lifland about the AI Alignment landscape. Eli Lifland has been a forecaster at Samotsvety and has been investigating said landscape. I’ve known Eli for the last 8 months or so, and have appreciated many of his takes on AI alignment strategy. This

Ozzie Gooen

Misha Yagudin and Ozzie Gooen Discuss LLMs and Effective Altruism

Cross-post on the EA Forum Misha and I recently recorded a short discussion about large language models and their uses for effective altruists. This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and myself edited our

Nuño Sempere

An in-progress experiment to test how Laplace’s rule of succession performs in practice.

Summary I compiled a dataset of 206 mathematical conjectures together with the years in which they were posited. Then in a few years, I intend to check whether the probabilities implied by Laplace’s rule—which only depends on the number of years passed since a conjecture was created—are
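The probabilities the excerpt refers to are easy to state: for a conjecture that has gone n years unresolved, Laplace's rule of succession (with zero "successes" in n "trials") assigns probability 1/(n + 2) to resolution in the next year, and the year-by-year no-resolution probabilities telescope to (n + 1)/(n + k + 1) over the next k years. A minimal sketch of this, as my own illustration rather than the post's code:

```python
def p_resolved_next_year(years_open: int) -> float:
    """Laplace's rule of succession with zero successes in `years_open`
    trials: probability of a success (resolution) on the next trial,
    i.e. 1 / (n + 2)."""
    return 1 / (years_open + 2)

def p_resolved_within(years_open: int, k: int) -> float:
    """Probability of resolution at some point in the next k years.
    The successive no-resolution probabilities (n+1)/(n+2), (n+2)/(n+3),
    ... telescope to (n+1)/(n+k+1), so this is 1 - (n+1)/(n+k+1)."""
    return 1 - (years_open + 1) / (years_open + k + 1)
```

For example, a conjecture posited 10 years ago gets probability 1/12 of being resolved next year, and 5/16 of being resolved within the next 5 years.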