philippdubach
February 2026: AI Scaling Laws, the Variance Tax, and Japan's $5T Repatriation Risk
On why scaling assumptions break, how volatility destroys returns, and what Japan's yield shift means for Treasuries
The agentic AI arms race dominated the last few weeks. Opus 4.6 shipped with a 1M token context window and agent teams; GPT-5.3-Codex launched hours later; Kimi K2.5 went open-source at 1T parameters. Claude Code crossed $1B in annualized revenue within six months. What matters isn't the benchmarks: it's that multi-model workflows are becoming the default. The most expensive assumption in AI remains that scaling laws hold indefinitely, and a new paper finds the same compute-complexity tradeoffs in return predictability, so scaling laws aren't just an AI story. I also dug into how Netflix and Spotify actually build recommender systems in 2026: hybrid stacks where cheap classical models do the filtering and expensive LLMs only touch the final candidates.
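The retrieve-then-rerank pattern behind those hybrid stacks fits in a few lines. A minimal sketch, with made-up names and a stub where the LLM reranker would sit in production:

```python
# Hypothetical sketch of a two-stage recommender: a cheap classical model
# filters the full catalog, and only the k survivors reach the expensive
# stage. All names are illustrative; rerank() stands in for an LLM call.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def retrieve(user_vec, catalog, k=2):
    """Stage 1: score every item against a precomputed embedding (cheap)."""
    return sorted(catalog, key=lambda item: dot(user_vec, item[1]), reverse=True)[:k]

def rerank(user_context, candidates):
    """Stage 2: the expensive model sees only k candidates, never the
    whole catalog. Placeholder ordering; in production, an LLM scorer."""
    return sorted(candidates, key=lambda item: item[0])

catalog = [("a", (1.0, 0.0)), ("b", (0.9, 0.4)), ("c", (0.0, 1.0)), ("d", (0.5, 0.5))]
top = rerank("likes sci-fi", retrieve((1.0, 0.2), catalog, k=2))
```

The economics are the point: stage 1 runs over millions of items per request, so it must be cheap; stage 2 is per-token expensive, so it only ever sees a handful.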
On the quant side, I wrote about the variance tax, where volatility systematically destroys compound returns (g ≈ μ − ½σ²), which explains why leveraged ETFs underperform over time even when average returns look fine. AQR's latest data shows PE returning 4.2% versus 3.9% for public equities, a 30bp illiquidity premium that barely justifies years of lockup. And Japan's $5T in foreign assets now face repatriation pressure as domestic yields rise above 3%, threatening to flip Japanese institutions from marginal Treasury buyers to marginal sellers.
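The variance tax is easy to check numerically. A small Monte Carlo sketch (the 10% mean and 40% vol are illustrative, not from the article): draw lognormal returns calibrated to a given arithmetic mean, then compare the arithmetic average, the realized compound growth, and the μ − ½σ² approximation:

```python
import numpy as np

# Illustrative parameters: 10% arithmetic mean return, 40% volatility.
mu, sigma, n = 0.10, 0.40, 1_000_000
rng = np.random.default_rng(0)

# Lognormal gross returns calibrated so the *arithmetic* mean return is mu.
nu = np.log1p(mu) - 0.5 * sigma**2
gross = np.exp(rng.normal(nu, sigma, n))

arith = gross.mean() - 1                # ~ mu: the return that looks fine
geo = np.exp(np.log(gross).mean()) - 1  # realized compound growth rate
approx = mu - 0.5 * sigma**2            # the variance-tax approximation
```

With these numbers the arithmetic mean sits near 10% while compound growth lands around 1.5%, in the same ballpark as the ≈2% the approximation predicts; the residual gap is the error of ln(1+μ) ≈ μ at μ = 10%.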
We also launched the Agentic Coding Tool Index (ACTI), an open-source survey tracking AI coding tool adoption. The January results (270+ responses): 90% report productivity gains, heavy users see 2.9x higher productivity, and Claude Code jumped from 35% to 69% adoption while GitHub Copilot dropped from #1 to #4.
Below: what I've been writing, working on, and reading across AI, quant finance, and macro.
What I've been writing
The Most Expensive Assumption in AI
Sara Hooker's research challenges the trillion-dollar scaling thesis. Compact models now outperform massive ones as diminishing returns hit AI.
The Variance Tax
Variance drain is the hidden cost of volatility: why a portfolio averaging +10% can lose money. The ½σ² formula explains the gap between paper and ...
Big in Japan
Japan holds $5 trillion in foreign assets. With 30-year JGB yields now above 3%, the carry trade that defined Japanese investing faces new friction.
Enterprise AI Strategy is Backwards
85% of AI projects fail. Only 26% translate pilots to production. The winners automate the coordination layer where employees spend 57% of their wo...
What I've been working on
ACTI - Agentic Coding Tool Index
An open-source monthly survey tracking AI coding tool adoption among professional developers. 270+ responses in January with Claude Code at 69%, 90...
What I've been reading
■ Prediction Markets for Economic Monitoring: NBER finds Kalshi prediction market estimates on economic data and Fed funds rates match or beat economist consensus and Fed funds futures. Real-time updating is the edge.
via NBER
■ De-dollarization: Is the US dollar losing its dominance?: JPMorgan's take: downtrend, not downfall. USD share in reserves at a two-decade low.
via J.P. Morgan
■ How to Use the Sharpe Ratio: Lopez de Prado, Lipton, and Zoonekynd (ADIA) on why most Sharpe ratio analyses are statistically flawed. Covers the Probabilistic Sharpe Ratio, minimum track record length, and multiple testing corrections.
via SSRN
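The Probabilistic Sharpe Ratio has a closed form, so it is easy to try on your own track record. A sketch of the Bailey and Lopez de Prado statistic, assuming you already have the skewness and (non-excess) kurtosis of your returns:

```python
from math import erf, sqrt

def probabilistic_sharpe_ratio(sr, sr_star, n, skew, kurt):
    """PSR (Bailey & Lopez de Prado): probability that the true Sharpe
    ratio exceeds the benchmark sr_star, given an observed per-period
    Sharpe sr over n observations. kurt is raw (non-excess) kurtosis,
    equal to 3.0 for normally distributed returns."""
    denom = sqrt(1 - skew * sr + (kurt - 1) / 4 * sr**2)
    z = (sr - sr_star) * sqrt(n - 1) / denom
    return 0.5 * (1 + erf(z / sqrt(2)))  # standard normal CDF
```

For example, an observed per-period Sharpe of 0.1 over 101 normally distributed observations gives roughly an 84% probability that the true Sharpe is above zero; fat tails or negative skew shrink that number.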
■ The Science and Practice of Trend-following Systems: Sepp and Lucic's unified framework for classifying trend-following systems. Covers European, American, and Time Series Momentum approaches with data through mid-2025.
via SSRN
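The simplest member of the Time Series Momentum family the paper classifies fits in a few lines. A toy sketch (illustrative only, not the authors' implementation):

```python
import numpy as np

def tsmom_signal(prices, lookback=12):
    """Toy Time Series Momentum rule: go long (+1) an asset whose trailing
    `lookback`-period return is positive, short (-1) when it is negative,
    flat (0) when unchanged. Richer variants vary the lookback, smoothing,
    and position sizing."""
    prices = np.asarray(prices, dtype=float)
    past_ret = prices[lookback:] / prices[:-lookback] - 1.0
    return np.sign(past_ret)
```

On a steadily rising price series every signal is +1, on a falling one every signal is -1; the interesting behavior, and the paper's subject, is everything in between.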
■ A Unified Theory of Order Flow, Market Impact, and Volatility: A single framework connecting order flow, price impact, and volatility. The kind of paper that reframes how you think about market microstructure.
via arXiv
■ Compute, Complexity, and the Scaling Laws of Return Predictability: Scaling laws aren't just an AI phenomenon. This paper finds analogous compute-complexity tradeoffs in return predictability, directly relevant to the scaling debate in AI.
via SSRN
For the full academic reading list covering economics, finance, and technology papers, check out reading.philippdubach.com. For weekly AI strategy updates, subscribe to The AI Lab.
Thank you for reading. Have a wonderful week!
— Phil