Epistemic status: speculative fiction. It's difficult to imagine how human epistemics and AI will play out. On one hand, AI could provide much better information and general intellect. On the other hand, AI could help people with incorrect beliefs preserve those false beliefs indefinitely. Will advanced AIs attempting
Epistemic Status: Early idea. A common challenge in nonprofit/project evaluation is the tension between social norms and honest assessment. We've seen reluctance among effective altruists to publicly rate certain projects for fear of upsetting someone. One potential tool to use could be something like an
Update: I recently posted this to the EA Forum, LessWrong, and my Facebook page, each of which has some comments. Epistemic Status: A collection of thoughts I've had over the last few years, lightly edited using Claude. I think we're at the point in this discussion
Thanks to Slava Matyuhin for comments. Summary 1. AIs can be used to resolve forecasting questions on platforms like Manifold and Metaculus. 2. AI question resolution, in theory, can be far more predictable, accessible, and inexpensive than human resolution. 3. Current AI tools (combinations of LLM calls and software) are
Introduction Summary Mathematical models are important tools for reasoning and decision-making across diverse fields. With the advent of large language models (LLMs), there's an opportunity to integrate these AI systems with formal mathematical modeling, potentially enhancing the capabilities and applications of both. However, evaluating the quality and effectiveness
Podcast Link. There was recently a lengthy thread on the EA Forum, between Eli Lifland and me, about the value of forecasting as a potential cause area. We thought it would be interesting to expand on this in a podcast episode. Some Summary Points * Open Phil's expanded forecasting
Epistemic Status: Early. The categories mentioned come mostly from experience and reflection, as opposed to existing literature. On its surface, a utility function is an incredibly simple and generic concept. An agent has a set of choices with some numeric preferences over them. This can be Von Neumann–Morgenstern (VNM)
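The "simple and generic" framing above can be sketched in a few lines. The choices and utility values here are invented purely for illustration; real VNM-style treatments work with preferences over lotteries, but the core picture is just numeric preferences over options:

```typescript
// A toy utility function: the agent has a fixed set of choices and a
// numeric preference (utility) over each. Values are illustrative only.
type Choice = "apple" | "banana" | "cherry";

const utility: Record<Choice, number> = {
  apple: 3,
  banana: 1,
  cherry: 2,
};

// The agent simply picks the available choice with the highest utility.
function bestChoice(choices: Choice[]): Choice {
  return choices.reduce((best, c) => (utility[c] > utility[best] ? c : best));
}

console.log(bestChoice(["banana", "cherry"])); // "cherry"
```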
First, my presentation from Manifest 2023 is now on YouTube. Enjoy! Second, I’m holding an intimate Squiggle Introduction Workshop in Berkeley this Thursday, at FAR Labs. Come and practice estimating things you care about. Sorry for the lack of other updates recently. Slava and I have been focused on
While working on Squiggle, we’ve encountered many technical challenges in writing probabilistic functionality in JavaScript. Some of these challenges are solved in Python and must be ported over, and some apply to all languages. We think the following tasks could be good fits for others to tackle. These are
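As one illustration of the kind of primitive involved (this specific example is mine, not taken from the Squiggle codebase): sampling from a normal distribution, which Python inherits from NumPy/SciPy but JavaScript must implement by hand, e.g. via the Box-Muller transform:

```typescript
// Sampling from a normal distribution via the Box-Muller transform --
// an example of probabilistic functionality that has to be written
// (or ported) by hand in JavaScript. Illustrative sketch only.
function sampleNormal(mean: number, stdev: number): number {
  // Use 1 - Math.random() to avoid log(0), since Math.random() can return 0.
  const u1 = 1 - Math.random();
  const u2 = Math.random();
  const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
  return mean + stdev * z;
}

// Monte Carlo sanity check: the sample mean should land near 5.
const samples = Array.from({ length: 100_000 }, () => sampleNormal(5, 2));
const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
console.log(mean.toFixed(2));
```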
There’s a lot of discussion on the EA Forum and LessWrong about epistemics, evidence, and updating. I don’t know of many attempts at formalizing our thinking here into concrete tables or equations. Here is one (very rough and simplistic) attempt. I’d be excited to see much better
Recently I participated in the EA Strategy Fortnight, with this post on the EA Forum. There are some good comment threads there I suggest checking out. Note: Our main work recently has been on making a “Squiggle Hub” that we intend to announce in the next few weeks, so there’
tl;dr: I present relative estimates for animal suffering and 2022 top Animal Charity Evaluators (ACE) charities. I am doing this to showcase a new tool from the Quantified Uncertainty Research Institute (QURI) and to present an alternative to ACE’s current rubric-based approach. Introduction and goals At QURI, we’