Hello Builders, issue #3
Your weekly briefing on the signals that actually matter in AI: distilled, contextualized, and built for people who are shipping.
This week delivered an unusually dense concentration of enterprise-grade AI developments, anchored by AWS re:Invent 2025 and NeurIPS 2025. Google’s Gemini 3 broke the 1500 Elo barrier on reasoning benchmarks, DeepSeek released a 671B-parameter open-weight model at ~90% lower cost than proprietary alternatives, and AWS and Google Cloud announced an unprecedented multicloud interconnect partnership.
This week’s signal in the noise
Model economics transformed: DeepSeek V3.2 at $0.21/$0.32 per million tokens (10-30x cheaper than proprietary models, MIT license). Gemini 3 hit 1501 Elo (first to break 1500). Mistral Large 3 launched at 80% below OpenAI pricing, with an Apache 2.0 license.
Infrastructure developments: AWS Trainium3 delivers 4.4x performance boost; AWS-Google Cloud multicloud interconnect enables 1-100 Gbps private links in minutes. Google TPU v7 scales to 42.5 exaflops. H100 pricing cut by 44%.
$1.7B+ funding: Black Forest Labs ($300M, $3.25B valuation), Harvey ($160M, $8B valuation). OpenAI acquired Neptune.ai—external services sunsetting, migration required. Marvell acquired Celestial AI for photonic interconnects.
Regulatory/research updates: EU AI Act Annex III deadlines delayed 16 months (now Dec 2027). NeurIPS findings suggest RLVR is hitting a capability ceiling. NVIDIA achieved 4-bit LLM training matching 8-bit performance.
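The pricing shift in the first bullet is easy to sanity-check with back-of-envelope arithmetic. The sketch below uses DeepSeek V3.2's published rates from this briefing; the proprietary reference rates and the example workload are illustrative assumptions, not figures from the source.

```python
# Quick sanity check of the "10-30x cheaper" claim.
# Rates are USD per million tokens. DeepSeek V3.2 rates are from this
# briefing; the proprietary rates below are assumed flagship-tier numbers.

DEEPSEEK_V32 = {"input": 0.21, "output": 0.32}
PROPRIETARY = {"input": 3.00, "output": 10.00}  # assumption for comparison

def job_cost(rates, input_tokens, output_tokens):
    """Cost in USD for a workload, given per-million-token rates."""
    return (rates["input"] * input_tokens
            + rates["output"] * output_tokens) / 1_000_000

# Example daily workload: 2M input tokens, 0.5M output tokens.
cheap = job_cost(DEEPSEEK_V32, 2_000_000, 500_000)
flagship = job_cost(PROPRIETARY, 2_000_000, 500_000)
print(f"DeepSeek V3.2: ${cheap:.2f}/day, proprietary: ${flagship:.2f}/day, "
      f"ratio: {flagship / cheap:.1f}x")
```

With these assumed reference rates the multiple lands around 19x, comfortably inside the 10-30x range quoted above; the exact factor depends on the input/output mix of your workload.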
Model releases reshape the competitive landscape
December 1, 2025, marked what industry observers dubbed “Super Sunday”—exactly three years after ChatGPT’s launch—with multiple frontier-model announcements in a 48-hour window.
Google Gemini 3 Pro achieved an LMArena Elo of 1501, becoming the first model to break the 1500 barrier. Its GPQA Diamond score of 91.9% surpasses human expert performance (~89.8%), while the extended reasoning variant “Deep Think” reached 93.8% on the same benchmark. The model supports a 1-million-token context window.

DeepSeek V3.2 represents the most significant open-weight release of the week. At 671B total parameters (37B active per token) with a Mixture-of-Experts architecture, it achieves parity with GPT-5 on reasoning benchmarks while pricing at $0.21/million input tokens and $0.32/million output tokens, roughly 10-30x cheaper than proprietary alternatives. The cheap-model trend continued with this week’s release of Mistral Large 3, priced approximately 80% below OpenAI’s flagship pricing.
https://blog.google/products/gemini/gemini-3/
https://api-docs.deepseek.com/news/news251201
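The “671B total, 37B active” figure is a property of Mixture-of-Experts routing: a learned router sends each token to a small subset of expert networks, so per-token compute tracks the active parameters, not the total. The toy sketch below illustrates the mechanism; the sizes, top-K routing, and softmax gating are generic MoE conventions, not details of DeepSeek’s actual architecture.

```python
import numpy as np

# Toy Mixture-of-Experts layer: only K of E experts run per token, which
# is why per-token cost tracks active parameters (~37B for DeepSeek V3.2)
# rather than the 671B total. All sizes here are illustrative.

rng = np.random.default_rng(0)
E, D, K = 8, 16, 2  # experts, hidden dim, experts active per token

router_w = rng.normal(size=(D, E))                      # routing weights
experts = [rng.normal(size=(D, D)) for _ in range(E)]   # toy experts: one matmul each

def moe_forward(x):
    """Route token vector x to its top-K experts, mix by softmax gate."""
    logits = x @ router_w
    topk = np.argsort(logits)[-K:]              # indices of the K best experts
    gates = np.exp(logits[topk] - logits[topk].max())
    gates /= gates.sum()                        # softmax over selected experts only
    return sum(g * (x @ experts[i]) for g, i in zip(gates, topk))

y = moe_forward(rng.normal(size=D))
print(y.shape)  # (16,) — full-width output, but only 2 of 8 experts executed
```

In this toy layer, 6 of the 8 expert matmuls are skipped for every token, the same sparsity pattern that lets a 671B-parameter model price like a much smaller one.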
Infrastructure investments accelerate across hyperscalers
AWS re:Invent 2025 dominated infrastructure news. Trainium3, AWS’s first 3nm AI chip, delivers a 4.4x performance boost, and the AWS-Google Cloud multicloud interconnect partnership integrates AWS Direct Connect with Google Cloud Cross-Cloud Interconnect, enabling private links between platforms in minutes rather than weeks. Bandwidth ranges from 1-100 Gbps with MACsec encryption and quad redundancy. Meanwhile, Google’s TPU v6e (Trillium) reached general availability, and TPU v7 (Ironwood), previewed this week, scales to a total capacity of 42.5 exaflops. Capital expenditure across the eight major hyperscalers is projected to increase 44% year-over-year to $371 billion in 2025, and JPMorgan estimates $5 trillion in global data center and AI infrastructure spending over the next five years.
https://www.webpronews.com/amazon-and-google-launch-multicloud-networking-for-ai-and-outage-resilience/
Funding rounds signal continued investor conviction
Approximately $1.7 billion in disclosed funding closed during the week across enterprise AI infrastructure, vertical applications, and agentic AI platforms. Black Forest Labs raised $300 million in Series B at a $3.25 billion valuation; the company’s FLUX image generation models power Adobe, ElevenLabs, and Grok. Eon closed a $300 million Series D round led by Elad Gil at an approximate $4 billion valuation for its cloud data backup business. Harvey, the legal AI platform, secured $160 million at an $8 billion valuation (up from $5B in June), led by Andreessen Horowitz; the company serves 50+ of the top 100 AmLaw firms and surpassed $100 million ARR in August. Together, these three rounds total $760 million in a single week, a strong signal of market validation.
https://techcrunch.com/2025/12/01/black-forest-labs-raises-300m-at-3-25b-valuation/
https://techcrunch.com/2025/12/04/legal-ai-startup-harvey-confirms-8b-valuation/
Research breakthroughs from NeurIPS challenge training assumptions
NeurIPS 2025 (Nov 30 – Dec 7) produced several findings with direct enterprise implications. The Gated Attention paper from Alibaba’s Qwen team received Best Paper honors for introducing head-specific sigmoid gating after scaled dot-product attention. Tested across 30+ experiments on models up to 15B parameters trained on 3.5 trillion tokens, the technique eliminates the “attention sink” phenomenon, enables larger learning rates, and improves long-context extrapolation—all with minimal implementation overhead. The approach is already deployed in production Qwen3-Next models.
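The core idea of the gating technique can be sketched compactly: compute standard scaled dot-product attention, then multiply each head’s output by a sigmoid gate derived from the layer input, so heads can attenuate or switch off entirely. This is a minimal sketch based on the paper’s description; the shapes and the linear gate parameterization are simplifying assumptions, not the Qwen team’s exact implementation.

```python
import numpy as np

# Sketch of head-specific sigmoid gating applied AFTER scaled
# dot-product attention (SDPA), per the Gated Attention paper's core
# idea. Gate parameterization (linear map + sigmoid) is an assumption.

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(x, wq, wk, wv, wg):
    """x: (T, D); per-head weights w*: (H, D, d). Returns (H, T, d)."""
    q = np.einsum("td,hde->hte", x, wq)
    k = np.einsum("td,hde->hte", x, wk)
    v = np.einsum("td,hde->hte", x, wv)
    d = q.shape[-1]
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d)) @ v   # standard SDPA
    gate = 1.0 / (1.0 + np.exp(-np.einsum("td,hde->hte", x, wg)))  # sigmoid, per head
    return gate * attn  # gating after attention lets a head damp its own output

rng = np.random.default_rng(1)
T, D, H, dh = 4, 8, 2, 4
w = lambda: rng.normal(size=(H, D, dh)) / np.sqrt(D)
out = gated_attention(rng.normal(size=(T, D)), w(), w(), w(), w())
print(out.shape)  # (2, 4, 4): heads x tokens x head dim
```

Because the gate sits after the attention output rather than inside the softmax, a head can output near-zero regardless of its attention pattern, which is the mechanism the paper credits for removing the “attention sink.”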
A critical RLVR (Reinforcement Learning from Verifiable Rewards) study received Best Paper Runner-Up recognition for demonstrating that RLVR improves sampling efficiency but does not elicit fundamentally new reasoning patterns. Base models achieve higher pass@k scores when k is large; distillation—not RLVR—introduces genuinely new reasoning capabilities. This finding suggests current post-training methodologies may be approaching fundamental limits.
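The pass@k metric behind this finding is the standard unbiased estimator: draw n samples per problem, count c correct, then pass@k = 1 − C(n−c, k)/C(n, k). The sketch below uses made-up per-problem success counts chosen to illustrate the dynamic the paper describes: RLVR dominates at small k (sampling efficiency), but the gap collapses as k grows.

```python
from math import comb

# Unbiased pass@k estimator: n samples per problem, c correct.
# pass@k = 1 - C(n-c, k) / C(n, k).
# The success counts below are illustrative, not data from the paper.

def pass_at_k(n, c, k):
    if n - c < k:
        return 1.0  # not enough wrong samples to fill a failing draw of size k
    return 1.0 - comb(n - c, k) / comb(n, k)

n = 200
base_c = 10   # base model: solves the problem rarely, but can solve it
rlvr_c = 60   # RLVR-tuned model: much higher sampling efficiency

for k in (1, 10, 100):
    print(f"k={k:3d}  base={pass_at_k(n, base_c, k):.3f}  "
          f"rlvr={pass_at_k(n, rlvr_c, k):.3f}")
```

At k=1 the RLVR model is far ahead; by k=100 both approach 1.0 on this problem. The paper’s stronger claim is that across a problem set, base models retain nonzero success on problems the RLVR model can no longer solve at all, which is why base pass@k eventually wins at large k.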
DeepSeekMath-V2 demonstrated self-verifiable mathematical reasoning with a meta-verification system that checks verifier accuracy and reduces hallucinations. On the 2024 Putnam Competition, it scored 118/120 points, compared to a top human score of 90. The model achieved gold-medal performance at IMO 2025 (83.3%, 5/6 problems) and the China Mathematical Olympiad 2024.
https://blog.neurips.cc/2025/11/26/announcing-the-neurips-2025-best-paper-awards/

