Recent Updates

Squiggle AI & Sonnet 3.7

We've updated Squiggle AI to use the new Anthropic Sonnet 3.7 model. In our limited experimentation so far, this model seems capable of producing significantly longer Squiggle models (roughly 200 lines before vs. around 500 lines now), but at a correspondingly higher cost and runtime. Frustratingly, this means that with the default Bubble Tea example it often can't fully debug its first results within our limits, though other prompts seem to fare better.

The system still makes a decent number of mistakes, especially with search & replace edits for some reason (I've been surprised that this aspect has proven so challenging so far). It also often hits Anthropic's rate limits for our tier (I believe Tier 4). I get the impression that non-Enterprise tier limits are simply far too low for many interesting experiments and usage patterns.
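To make the search & replace issue more concrete: roughly speaking, the model proposes edits as a snippet to find plus a snippet to substitute, and an edit can only apply if the search text matches the current code exactly. Here's a minimal sketch of that kind of edit applier (illustrative only, not Squiggle AI's actual implementation), which shows where this tends to go wrong:

```typescript
// Simplified sketch of a search & replace edit applier (hypothetical;
// not Squiggle AI's real code). The model supplies `search` and `replace`
// strings; the edit is rejected unless `search` appears verbatim in the code.
type Edit = { search: string; replace: string };

function applyEdit(code: string, edit: Edit): string {
  if (!code.includes(edit.search)) {
    // The common failure mode: the model's `search` text differs from the
    // actual code by whitespace, a variable name, or a hallucinated line,
    // so the edit can't be applied at all.
    throw new Error("Search text not found; edit rejected.");
  }
  // Replaces the first exact occurrence.
  return code.replace(edit.search, edit.replace);
}
```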

We haven't explored the new "Extended Thinking" functionality yet. In theory this could improve performance with code generation and fixing, but we'll need to do further testing to find out.

We've increased the price and time limits for now. If usage gets too high, we'll scale them back.

Feel free to try the new version out! Any feedback is greatly appreciated. Do expect errors, and remember that if you hit our rate limits, you can use your own Anthropic key.

Effective Altruism Global

Last weekend I attended EA Global: Bay Area 2025, where I held a 1-hour Squiggle workshop, similar to the one I gave there previously but with more emphasis on Squiggle AI. Approximately 60 people showed up, and around 10 stayed afterward for my office hours.

I had a few good conversations with enthusiastic Squiggle users. I also learned about a few groups potentially interested in working on or funding the field of Epistemic AI. I really hope this field starts taking off; it feels like it's been a long time coming.

Other Work

In the last two months, we've spent a lot of time on both funding applications and some EA client work. Funding has been a significant bottleneck recently. Fingers crossed that this gets resolved in the next few months so we can get back into intense work on our main projects.

One benefit of fundraising is that it forced us to spend time thinking through and outlining potential projects for the next year. Overall, I'm most bullish on work that could lead to significant advances in AI and Epistemics. This means less attention on regular Squiggle features, and more on experiments and evaluations that might scale with better LLMs.

AI is progressing quickly and chaotically right now. This makes long-term planning very difficult, but it also means that the right projects and writing could have outsized impact.

Epistemic AI Hackathon, March 3

Some friends are holding an open "AI for Epistemics" hackathon soon, and I'm planning to attend. I personally plan to use the event primarily as an opportunity to chat with others in the space. I assume it will be a friendly event, more focused on encouraging work in the area than on being highly competitive.
