Pivot Archive

Zoa Research

S24 · Pivot 3 of 3
3 people · Active · Hiring
Data Science · AI
85° · Major Pivot
Before

Realistic RL environments for open-horizon tasks

After

Powerful quantitative forecasting models

Full description — before

We are entering what Rich Sutton terms the "Era of Experience", in which agents learn from continuous feedback in realistic environments. In our view, the key bottleneck to entering this era is high-feedback, realistic environments that agents can inhabit. So far, coding and math have proven to be rich training environments, but we believe open-horizon environments are the critical next step — environments with a continuous, open-ended goal, much like the world we live in. At Zoa Research, we're focusing on one of the highest-feedback open environments in the world: trading. We believe trading is one of the best domains for training models to develop research taste and intuition from real-world continuous feedback. We're building the infrastructure to make this happen.

Sam's ex-girlfriend introduced him to Greg back at Carnegie Mellon in 2017, and while that relationship didn't last, their friendship has. After college, Greg went to Harvard Law School, while Sam worked for three years at Jane Street on their options desk, building and leading a satellite dev team.

Full description — after

Historically, quantitative models are domain-specific. Brilliant people spend their best years testing features, tuning hyperparameters, and iterating architectures within a narrow domain. But scale is the panacea: large models will find patterns that people, and specialized models, cannot. Forecasting generalizes. Zoa trains cross-domain event-forecasting engines.

*Automating Iteration*

LLMs — embedded in multi-agent optimization loops and evaluated against fixed policies — can automate the build-test-improve modeling cycle. Think AlphaEvolve for forecasting problems.

*Sample-Efficient General Models*

Today's forecasting models are narrowly crafted with deep human priors, but larger models will outperform state-of-the-art specialized models. Unlike existing event models, our models leverage data from across contexts and rely less on human intuition. And compared to LLMs, our models are built with more inductive priors and rely more heavily on inference-time compute, improving sample efficiency.

*Why It Matters*

In the real economy, our models could be useful for forecasting supply-chain volatility, energy supply and demand, even earthquake risk. Science is, Ian Hacking writes, the taming of chance: the process of iteratively updating priors (something like: identify uncertainty, conceive an experiment to reduce that uncertainty, execute, update). If science is uncertainty reduction, forecasting is a critical measure of progress. Better forecasting improves our ability to select interesting experiments (roughly, those with the greatest expected uncertainty reduction) and to update priors. Our models will be used by labs and academics in data-heavy domains.
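The experiment-selection framing above ("greatest expected uncertainty reduction") can be made concrete. The sketch below is our own illustration, not Zoa's method: it ranks two hypothetical experiments on a binary hypothesis by expected information gain, i.e. prior entropy minus expected posterior entropy after one observation. All names and numbers are invented for the example.

```python
import math

def entropy(p):
    # Shannon entropy (bits) of a Bernoulli(p) variable.
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def expected_info_gain(prior, p_pos_if_true, p_pos_if_false):
    """Expected entropy reduction about a binary hypothesis H after
    observing one binary outcome with the given likelihoods."""
    p_pos = prior * p_pos_if_true + (1 - prior) * p_pos_if_false
    p_neg = 1.0 - p_pos
    # Posterior P(H | outcome) by Bayes' rule, guarding zero-probability outcomes.
    post_pos = prior * p_pos_if_true / p_pos if p_pos > 0 else prior
    post_neg = prior * (1 - p_pos_if_true) / p_neg if p_neg > 0 else prior
    expected_posterior = p_pos * entropy(post_pos) + p_neg * entropy(post_neg)
    return entropy(prior) - expected_posterior

# Hypothetical experiments: (P(positive | H true), P(positive | H false)).
experiments = {
    "decisive assay": (0.95, 0.05),  # strongly separates the hypotheses
    "weak survey":    (0.55, 0.45),  # barely informative
}
ranked = sorted(experiments,
                key=lambda k: expected_info_gain(0.5, *experiments[k]),
                reverse=True)
```

With a 50/50 prior, the decisive assay yields about 0.71 bits of expected gain versus roughly 0.01 for the weak survey, so it ranks first — the "interesting experiment" in the description's sense.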

Category shift
LLM Development Tools → AI Investment & Research
Summary

Zoa Research shifted from building realistic RL/trading environments to building general-purpose, cross-domain forecasting models—moving from RL environments as a product to actual predictive modeling engines. This is a meaningful product pivot, even though there is conceptual continuity in their AI/forecasting expertise.

Detected 7 months ago · 2025-07-31
Company journey — 3 pivots
Current

Powerful quantitative forecasting models (viewing)

84.5° · Major Pivot · 2025-07-31
117.9° · Near Reinvention · 2025-02-06
Started as

AI medical summaries for injury lawyers