Hello Builders, issue #2
Your weekly briefing on the signals that actually matter in AI: distilled, contextualized, and built for people who are shipping.
This week’s AI focus is on three converging forces: “agentic” models that can drive your UI like an operator, open and highly specialized models that challenge frontier systems on niche tasks, and an apparent acceleration in AI industrial policy from both the US and Europe. For technology leaders, this is the week when AI stops being just an API call and increasingly becomes an operating layer across devices, infrastructure, and regulation.
This week’s signal in the noise
Agentic, on-device models move from research demos to deployable products (Microsoft’s Fara-7B).
Open, specialized models push frontier-level performance in narrow domains like math (DeepSeek Math V2 and the wider DeepSeek stack) while frontier model vendors sharpen price/performance and context length for long-running enterprise workloads (Anthropic Claude Opus 4.5).
Policy and regulation are shifting rapidly, with the EU warning that it is “missing the boat” and the US simultaneously ramping up national AI programs while weakening state-level oversight.
The human factor in AI adoption remains critical; without deliberate communication, upskilling, and change management, it can easily become the main brake on a company’s AI journey.
On-device agentic models for computer use
Microsoft quietly released Fara-7B, a 7B-parameter “agentic” small language model designed to act as a computer-use agent: it can see your screen, click, scroll, type, and navigate the web much like a human operator. Trained on a large synthetic dataset of multi-step web interactions, the model runs locally on Copilot+ PCs and is also available via Microsoft’s Foundry and open platforms like Hugging Face. This is a concrete signal that “UI-as-API” agents are becoming a product category rather than an experiment. You should start classifying internal applications by how “agent-compatible” their UIs are (deterministic layouts, clear affordances, instrumented telemetry). Local execution also strengthens the case for on-device agents in sensitive workflows, but it opens new surface areas for monitoring, auditability, and access control; your endpoint security and DLP strategies will need to adapt.
https://www.microsoft.com/en-us/research/blog/fara-7b-an-efficient-agentic-model-for-computer-use/
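The observe-act pattern behind computer-use agents like Fara-7B can be sketched as a simple loop: capture the screen, ask the model for the next UI action, execute it, and repeat until the task is done. The sketch below is purely illustrative; every name here (`Action`, `ScriptedPolicy`, `run_agent`) is an assumption for explanation, not Microsoft’s actual API, and the model is replaced by a scripted stand-in.

```python
from dataclasses import dataclass

@dataclass
class Action:
    kind: str            # "click", "type", "scroll", or "done"
    target: str = ""     # element description, e.g. "search box"
    text: str = ""       # text to type, if any

class ScriptedPolicy:
    """Stand-in for the model: maps a screen observation to the next action."""
    def __init__(self, script):
        self.script = list(script)

    def next_action(self, observation: str) -> Action:
        # A real agent would feed the screenshot to the model here.
        return self.script.pop(0) if self.script else Action(kind="done")

def run_agent(policy, observe, execute, max_steps: int = 10) -> list[Action]:
    """Observe the screen, pick an action, execute it, repeat until done."""
    trace = []
    for _ in range(max_steps):
        action = policy.next_action(observe())
        trace.append(action)
        if action.kind == "done":
            break
        execute(action)  # in a real agent: drive the UI (click/type/scroll)
    return trace

# Usage: a scripted "search the web" episode against a fake UI.
log = []
policy = ScriptedPolicy([
    Action("click", target="search box"),
    Action("type", target="search box", text="Fara-7B"),
    Action("click", target="search button"),
])
trace = run_agent(policy, observe=lambda: "fake screenshot",
                  execute=lambda a: log.append(a.kind))
```

The loop also shows why monitoring matters: the `trace` list is exactly the kind of action telemetry your audit and DLP tooling would need to capture.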
Specialized reasoning over monolithic LLMs
China’s DeepSeek released DeepSeek Math V2 (https://www.scmp.com/tech/tech-trends/article/3334553/deepseek-releases-first-open-ai-model-gold-level-scores-maths-olympiad), an open model specialized for mathematical problem-solving, reportedly delivering performance comparable to that of International Math Olympiad (IMO) gold medalists on benchmark suites. This follows the wider DeepSeek family of open models, which have already drawn attention for their strong reasoning capabilities at significantly lower compute and cost profiles than many Western frontier models.
This continues a strategic trend: highly specialized open models that outperform giant generalist models on narrow but economically important domains (quant, optimization, verification, scientific computing). It’s a strong signal to shift from a single “one-model-fits-all” architecture to a portfolio of domain models. DeepSeek’s cost and hardware efficiency also underline that GPU scarcity is no longer an excuse for not experimenting with strong reasoning systems. This is even more true given a recently published paper (https://x.com/omarsar0/status/1993695515595444366?t=lpD7UiqjTbYd453g-DPvpw&s=33) showing that 12B models, with the right training, can exhibit reasoning capabilities comparable to those of frontier models.
On the frontier side, Anthropic introduced Claude Opus 4.5, which combines lower API pricing with longer context windows and improved reasoning, explicitly targeting long-running chats and multi-step workflows that previously hit context or budget ceilings. This is yet another reason to avoid standardizing on a single LLM for all your applications. Plan a Q4–Q1 refresh of your model cost models and routing strategies.
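The portfolio-of-models idea above can be sketched as a thin routing layer: send each request to a specialized model when its domain is detected, else fall back to a generalist frontier model. Everything in this sketch is an illustrative assumption: the model names, the per-token costs, and the naive keyword classifier (a production router would more likely use embeddings or a small classifier model).

```python
# Illustrative routing table: model names and costs are made up for the sketch.
ROUTES = {
    "math":    {"model": "specialist-math",  "cost_per_1k_tokens": 0.2},
    "code":    {"model": "specialist-code",  "cost_per_1k_tokens": 0.3},
    "default": {"model": "frontier-general", "cost_per_1k_tokens": 1.5},
}

MATH_HINTS = ("prove", "integral", "theorem", "equation")
CODE_HINTS = ("stack trace", "refactor", "unit test", "compile")

def classify(prompt: str) -> str:
    """Naive keyword classifier; a real router would use embeddings or an LLM."""
    p = prompt.lower()
    if any(h in p for h in MATH_HINTS):
        return "math"
    if any(h in p for h in CODE_HINTS):
        return "code"
    return "default"

def route(prompt: str) -> dict:
    """Pick a model and return routing metadata for cost tracking."""
    domain = classify(prompt)
    cfg = ROUTES[domain]
    return {"domain": domain, "model": cfg["model"],
            "cost_per_1k_tokens": cfg["cost_per_1k_tokens"]}
```

Logging the returned metadata per request is what makes the “Q4–Q1 cost-model refresh” tractable: you can see exactly which traffic could move to a cheaper specialist.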
“Missing the boat” on competitiveness
European Central Bank President Christine Lagarde warned that the EU is risking its economic future by falling behind the US and China in AI adoption and development (https://www.reuters.com/business/eu-missing-boat-ai-jeopardising-its-future-lagarde-warns-2025-11-24/). She highlighted the risk of Europe becoming structurally dependent on foreign AI infrastructure, compute, chips, and hyperscale data centers, if it remains primarily a buyer rather than a builder, and called for open standards, cheaper energy, and integrated capital markets. This is a clear signal that AI capacity is now viewed as strategic infrastructure, on par with energy or telecoms. Expect a new wave of national and EU-level incentives, consortia, and regulatory adjustments to accelerate domestic AI capability. Companies operating in Europe should actively track these programs (funding, tax treatment, regulatory sandboxes) and, where possible, position flagship AI initiatives as part of the region’s industrial policy story.
Europe is still far from matching the US’s aggressive “Genesis Mission” (https://www.federalregister.gov/documents/2025/11/28/2025-21665/launching-the-genesis-mission), a federal effort to turn national scientific and engineering data into discoveries using AI, led by the Department of Energy and powered by public and private supercomputers. US AI policy is simultaneously turbocharging federal AI projects and weakening decentralized, state-level regulation.
The slow pace of AI adoption
A Harvard study (https://hbr.org/2025/11/leaders-assume-employees-are-excited-about-ai-theyre-wrong) found that 76% of executives believe employees are excited about AI, yet only 31% of individual contributors agree. The same survey reveals a deep organizational blind spot: 75% of leaders describe their company as “employee-centric”, but only 23% of employees share that view. Only 30% of employees feel informed about their organization’s AI strategy, versus 80% of leaders who believe they are. The study concludes that truly employee-centric organizations are up to seven times more likely to succeed with AI. Daniel Goleman attributes the origin of this friction to the psychological impact of change and to the role Emotional Intelligence could play in a successful strategy (https://www.kornferry.com/insights/this-week-in-leadership/the-big-ai-roadblock-in-our-heads).

