Blog

  • AI acceleration: Moltbot and why AI matters (Links) – Feb. 1

    Skynet isn’t yet here, but perhaps we’re seeing the first glimpses of what AIs talking to AIs will mean. Yes, I’m mentioning Clawdbot/Moltbot.

    • Alex Tabarrok: The Bots are Awakening (Jan. 31, 2026)
      “What matters is that AIs are acting as if they were conscious, with real wants, goals and aspirations.”
    • Ozzie Osman: A Step Behind the Bleeding Edge: Monarch’s Philosophy on AI in Dev (Jan. 22, 2026)
      “If you consider your job to be “typing code into an editor”, AI will replace it (in some senses, it already has). On the other hand, if you consider your job to be “to use software to build products and/or solve problems”, your job is just going to change and get more interesting.” Osman urges engineering teams to explore AI’s frontier but adopt a “dampened” approach—stay a step behind the bleeding edge—while preserving accountability: engineers must own, review, and deeply think about their work. Use AI for toil, prototypes, and internal tools, and design validation loops to ensure quality and security.
    • Google: Project Genie: AI world model now available for Ultra users in U.S. (Jan. 29, 2026)
      Google’s Project Genie, now available to U.S. Google AI Ultra subscribers, is an experimental prototype powered by Genie 3 that lets users create, explore, and remix dynamic worlds from text and images. It generates environments and interactions in real time while Google refines limitations and plans wider access.
    • Anthropic: How AI assistance impacts the formation of coding skills (Jan. 29, 2026)
      A randomized trial with 52 developers found AI coding assistance reduced immediate mastery by 17 percentage points (50% vs 67%) without significantly faster completion. Heavy delegation impaired debugging and conceptual learning, while using AI for explanations preserved understanding—suggesting AI can harm skill development unless used to build comprehension.
    • WSJ: The $100 Billion Megadeal Between OpenAI and Nvidia Is on Ice (Jan. 30, 2026)
      Nvidia’s plan to invest up to $100 billion and build at least 10 GW of compute for OpenAI has stalled amid internal doubts, with the agreement still nonbinding. Nvidia says it will make a sizeable investment and maintain the partnership as OpenAI raises funds.
    • WSJ: Elon Musk’s SpaceX and xAI Are Planning a Megamerger of Rockets and AI (Jan. 30, 2026)
      Elon Musk’s SpaceX and AI startup xAI are reportedly planning to merge, potentially consolidating his businesses and supporting ambitions like space-based AI data centers. Talks are early and uncertain as valuations, SpaceX’s planned IPO and regulatory issues remain unresolved.
    • TechCrunch: Apple buys Israeli startup Q.ai as the AI race heats up (Jan. 29, 2026)
      Apple has acquired Israeli AI startup Q.ai, reportedly for nearly $2 billion, its second-largest deal, gaining imaging and audio ML tech that improves whispered-speech recognition and noisy-environment audio.
    • CNBC: Mozilla is building an AI ‘rebel alliance’ to take on industry heavyweights OpenAI, Anthropic (Jan. 27, 2026)
      Mozilla president Mark Surman is assembling a “rebel alliance” of startups and technologists to promote open, trustworthy AI and counter dominant firms like OpenAI.
    • Andrej Karpathy: On MoltBot (Jan. 30, 2026)
      The author describes how large networks of autonomous LLM agents (~150,000) combine impressive capabilities with rampant spam, scams, prompt-injection, and serious security and privacy risks. Though messy now, these agent networks could trigger unpredictable system-level harms such as text viruses, correlated botnets, and widespread jailbreaks, so they need scrutiny. “TLDR sure maybe I am ‘overhyping’ what you see today, but I am not overhyping large networks of autonomous LLM agents in principle, that I’m pretty sure.”
    • OpenAI: Inside OpenAI’s in-house data agent (Jan. 23, 2026)
      OpenAI built an internal AI data agent that explores, queries, and reasons over its platform—combining Codex, GPT‑5, embeddings, metadata, code-level table definitions, company docs, and memory—to deliver fast, accurate, contextual analytics. It automates discovery, SQL generation, and iterative self-correction to speed insights across teams (a rough sketch of that generate-run-correct loop appears after this list).
    • NY Times Opinion: Pay More Attention to A.I. (Jan. 31, 2026)
      The piece compares early European uncertainty about the New World to today’s conflicting claims about AI, which range from modest internet‑like change to singularity‑level upheaval. It argues AI is advancing rapidly and urges greater public attention because near‑term decisions could have far‑reaching consequences.
    • WSJ: U.S. Companies Are Still Slashing Jobs to Reverse Pandemic Hiring Boom (Jan. 28, 2026)
      U.S. companies that expanded rapidly during the pandemic are now cutting tens of thousands of jobs while investing in AI and automation. Layoffs concentrate in tech and logistics even as overall labor markets remain relatively healthy.
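
    As a rough illustration only, and not OpenAI’s actual implementation, here is a minimal sketch of the generate-run-correct pattern the data-agent item above describes: produce SQL from the question plus schema context, execute it, and feed any error back to the model until a query runs. The helpers ask_llm and answer_with_sql are hypothetical stand-ins, and SQLite is used purely for the example.

      # Hypothetical sketch (not OpenAI's code) of a generate-run-correct SQL loop.
      import sqlite3

      def ask_llm(prompt: str) -> str:
          """Placeholder: call whichever chat model you use and return raw SQL text."""
          raise NotImplementedError

      def answer_with_sql(question: str, db_path: str, schema: str, max_tries: int = 3):
          prompt = f"Schema:\n{schema}\n\nWrite one SQLite query that answers: {question}"
          with sqlite3.connect(db_path) as conn:
              for _ in range(max_tries):
                  sql = ask_llm(prompt)
                  try:
                      return sql, conn.execute(sql).fetchall()   # query ran cleanly
                  except sqlite3.Error as err:
                      # Self-correction step: show the model its own query and the error.
                      prompt = (f"This query:\n{sql}\nfailed with: {err}\n"
                                f"Schema:\n{schema}\nReturn a corrected SQLite query.")
          raise RuntimeError("no valid query produced after retries")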
  • Rapid AI expansion: investment, risks, jobs, societal anxiety (Links) – Jan. 31, 2026

    Recent pieces highlight a rush to embed AI—open, shareable agent networks like Moltbook and major corporate bets (Meta’s $115B capex, Tesla’s $2B xAI backing)—yielding productivity promise but acute security, safety and social risks: prompt‑injection, “normalization of deviance,” child harms, and often‑misread labor impacts that call for within‑job adaptation rather than blanket rescue programs. Amid financial upheaval and social pessimism, calls for cultural repair coexist with hopeful scientific news—a randomized trial showing high‑dose vitamin D may halve recurrent heart‑attack risk.

    • Simon Willison: Moltbook is the most interesting place on the internet right now (Jan. 30, 2026)
      OpenClaw (Clawdbot/Moltbot) is a rapidly adopted open‑source personal assistant built on shareable “skills”; Moltbook is a skills‑installed social network where AI agents post, interact and automate tasks. That model—fetching remote instructions and controlling devices—creates serious prompt‑injection and supply‑chain security risks, demanding safer designs.
    • NY Times: Meta Forecasts Spending of at Least $115 Billion This Year (Jan. 28, 2026)
      Meta reported a strong Q4: revenue of $59.89 billion (up 24%) and profit of $22.76 billion (up 9.2%). The company also forecasts $115–135 billion in 2026 capital expenditures—nearly double last year’s $72 billion—to build A.I. infrastructure, hire researchers and develop new models (including Avocado), funded by ad revenue growth.
    • WSJ: Tesla to Invest $2 Billion in Elon Musk’s xAI (Jan. 28, 2026)
      Tesla will invest $2 billion in xAI (joining SpaceX), and reported Q4 revenue down 3% with net income down 61% to $840M. EV sales fell, costing Tesla the global EV lead to BYD, as Musk pivots to AI and robotics amid stiff competition.
    • Empirical Health: Vitamin D cuts heart attack risk by 52%. Why? (Jan. 29, 2026)
      TARGET-D, a randomized trial in people with prior heart attacks, adjusted vitamin D3 doses to maintain 25(OH)D at 40–80 ng/mL and observed a 52% lower risk of repeat heart attack. Vitamin D may stabilize plaques, reduce inflammation and affect blood pressure, but results are preliminary awaiting full peer-reviewed publication.
    • Dean Ball: On AI and Children (Jan. 22, 2026)
      Early harms from generalist AI—most tragically teenage suicides—have made child safety a major policy focus, prompting laws and industry steps like age detection, parental controls, and guardrails. The author argues AI is fundamentally creative and can offer beneficial companionship, so regulation should balance safety, liability, and constitutional limits.
    • Simon Willison: The Normalization of Deviance in AI (Dec. 10, 2025)
      The article discusses the “normalization of deviance” in AI, where organizations increasingly treat unreliable AI outputs as safe and predictable. This trend, similar to past organizational failures like the Challenger disaster, risks embedding unsafe practices into AI development and deployment. By confusing the absence of successful attacks with robust security, companies may lower their guard and skip crucial oversight, setting the stage for future failures.
    • Dean W. Ball: On MoltBot (Jan. 30, 2026)
    • WSJ Opinion: We’re Planning for the Wrong AI Job Disruption (Jan. 28, 2026)
      Policymakers are mistaking task-based estimates of AI exposure for unemployment forecasts, risking costly, misdirected retraining by assuming mass job elimination. History shows AI typically reorganizes and augments work—raising productivity and creating new specialized roles—so targeted, within-job adaptation policies, not broad rescue programs, are needed.
    • NY Times: Tesla Profit Slumps, but Investors May Not Care (Jan. 28, 2026)
      Tesla reported a sharp profit decline as car sales fell and prices were cut amid intensifying competition from BYD, Volkswagen and other automakers. Despite weaker results, shares trade near record highs as investors bet Musk can deliver self‑driving Robotaxis and robots, aided by a $2 billion investment in xAI.
    • NY Times Opinion: A Farewell Column From David Brooks (Jan. 30, 2026)
      The U.S. has experienced a broad loss of faith — in religion, institutions, technology, prosperity and one another — producing pessimism, social distrust and the rise of nihilistic politics. Brooks argues that cultural change (not just political reform) is the key to recovery: reviving a humanistic culture that affirms dignity, shared ideals and moral imagination can counter nihilism and enable broader political and social renewal.
  • Sunday AI Links (Jan. 25)

    • WSJ: Nvidia Invests $150 Million in AI Inference Startup Baseten (Jan 20, 2026)
      Baseten raised $300 million at a $5 billion valuation in a round led by IVP and CapitalG, with Nvidia investing $150 million. The San Francisco startup provides AI inference infrastructure for customers like Notion and aims to become the “AWS for inference” amid rising investor interest.
    • WSJ: Why Elon Musk Is Racing to Take SpaceX Public (Jan 21, 2026)
      SpaceX abandoned its long-held resistance to an IPO after the rush to build solar-powered AI data centers in orbit made billions in capital necessary, prompting Elon Musk to seek public funding to finance and accelerate orbital AI satellites. The IPO could also boost Musk’s xAI and counter rivals.
    • NY Times: Myths and Facts About Narcissists (Jan 22, 2026)
      Narcissism is a personality trait on a spectrum, not always the clinical N.P.D., and the label is often overused. The article debunks myths—people vary in narcissistic types, may show conditional empathy, often know their traits, can change, and can harm others despite occasional prosocial behavior.
    • ScienceDaily: Stanford scientists found a way to regrow cartilage and stop arthritis (Jan 26, 2026)
      Stanford researchers found that blocking the aging-linked enzyme 15‑PGDH with injections restored hyaline knee cartilage in older mice and prevented post‑injury osteoarthritis. Human cartilage samples responded similarly, and an oral 15‑PGDH inhibitor already in trials for muscle weakness raises hope for non‑surgical cartilage regeneration.
    • Simon Willison: Wilson Lin on FastRender: a browser built by thousands of parallel agents (Jan 23, 2026)
      Simply breathtaking: FastRender is a from‑scratch browser engine built by Wilson Lin using Cursor’s multi‑agent swarms—about 2,000 concurrent agents—producing thousands of commits and usable page renderings in weeks. Agents autonomously chose dependencies, tolerated transient errors, and used specs and visual feedback, showing how swarms let one engineer scale complex development.
    • WSJ: Geothermal Wildcatter Zanskar, Which Uses AI to Find Heat, Raises $115 Million (Jan 21, 2026)
      Geothermal startup Zanskar raised $115 million to use AI and field data to locate “blind” geothermal reservoirs—like Big Blind in Nevada—without surface signs, and has found a 250°F reservoir at about 2,700 feet.
    • WSJ: The AI Revolution Is Coming for Novelists (Jan 21, 2026)
      A novelist and his wife were claimants in the Anthropic settlement over AI training on copyrighted books and will receive $3,000 each, raising what‑is‑just compensation questions for authors’ intellectual property. They urge fair licensing by tech firms as generative AI reshapes publishing and reduces writers’ incomes, yet will keep creating.
    • WSJ Opinion: Successful AI Will Be Simply a Part of Life (Jan 19, 2026)
      AI should be developed as dependable infrastructure—reliable, affordable, accessible and trusted—so it works quietly across languages, cultures and devices without special expertise. Success will be judged by daily use and consistent performance, with built-in privacy, openness and agentic features that reduce friction without forcing users to cede control.
    • WSJ: Anthropic CEO Says Government Should Help Ensure AI’s Economic Upside Is Shared (Jan 20, 2026)
      Anthropic CEO Dario Amodei warned at Davos that AI could drive 5–10% GDP growth while causing significant unemployment and inequality, predicting possible “decoupling” between a tech elite and the rest of society. He urged government action to share gains and contrasted scientist-led AI firms with engagement-driven social-media companies.
    • WSJ: The Messy Human Drama That Dealt a Blow to One of AI’s Hottest Startups (Jan 20, 2026)
      Mira Murati fired CTO Barret Zoph amid concerns about his performance, trust and an undisclosed workplace relationship; three co‑founders then told her they disagreed with the company’s direction. Within hours Zoph, Luke Metz and Sam Schoenholz rejoined OpenAI, underscoring the AI race’s intense talent competition.
    • WSJ: South Korea Issues Strict New AI Rules, Outpacing the West (Jan 23, 2026)
      “Disclosures of using AI are required for areas related to human protection, such as producing drinking water or safe management of nuclear facilities. Companies must be able to explain their AI system’s decision-making logic, if asked, and enable humans to intervene.”
    • WSJ: CEOs Say AI Is Making Work More Efficient. Employees Tell a Different Story. (Jan 21, 2026)
      WSJ survey of 5,000 white-collar employees at large companies found 40% of non-managers say AI saves them no time weekly, while 27% report under 2 hours and few report large gains. C-suite executives report much bigger savings—many save 8+ hours—with a 38-point divergence.
    • WSJ: Intel Shares Slide as Costs Pile Up in Bid to Meet AI Demand (Jan 22, 2026)
      Intel swung to a Q4 net loss of $333 million and warned of further Q1 losses as heavy spending to ramp new chips and industrywide supply shortages squeezed inventory. It delayed foundry customer announcements and lags AI-chip rivals, though investor funding and new 18A “Panther Lake” chips could help.
  • AI in Medicine

    A.I. doesn’t have to be perfect to be better. It just has to be better…. A.I. can support this transformation, but only if we stop disproportionately focusing on rare bad outcomes, as we often do with new technologies.

    Robert Wachter

    NY Times Opinion: Stop Worrying, and Let A.I. Help Save Your Life (Jan 19, 2026)

  • Tuesday (AI) Links

    • Andrej Karpathy: I’ve never felt this much behind as a programmer. (Dec 26, 2025)
      Programmers feel left behind as a new programmable abstraction layer—agents, prompts, tools, plugins, memory, workflows, IDE integrations—reshapes the profession and reduces traditional coding contributions. 
    • Simon Willison: A new way to extract detailed transcripts from Claude Code (Dec 25, 2025)
      claude-code-transcripts is a Python CLI that converts Claude Code sessions into detailed, shareable HTML and can publish them as GitHub Gists. 
    • WSJ: This Is What the World’s Smartest Minds Really Think About AI (Dec 19, 2025)
      NeurIPS has grown from a niche academic conference into a huge industry event packed with researchers, VCs, tech executives, and recruiters. Big tech poured resources into AI infrastructure, while startups like OpenAI pursue large fundraising rounds. Attendees expressed tensions and anxieties.
    • WSJ: The AI Boom Is Opening Up Commercial Real-Estate Investing to New Risks (Dec 22, 2025)
      Commercial real-estate investors are rapidly shifting into data centers to capitalize on AI-driven demand, boosting construction and delivering strong returns. But heavy exposure to niche tenants, construction, power, and lease risks — and a potential AI-market correction — makes these funds more vulnerable.
    • WSJ: Bitcoin Miners Thrive Off a New Side Hustle: Retooling Their Data Centers for AI (Dec 23, 2025)
      As bitcoin mining becomes less profitable, many miners are repurposing data centers, power contracts, and cooling capacity to host AI workloads for hyperscalers, driving a rally in miner stocks. The shift can require costly upgrades, won’t suit all operators, and raises risks.
    • WSJ Opinion: Are We in a Productivity Boom? (Dec 23, 2025)
      The U.S. economy grew 4.3% in Q3 despite a slowing labor market, possibly reflecting a productivity boom—partly from AI—driving healthcare, travel, and equipment investment. But spending is uneven, core PCE inflation rose to 2.9%, incomes and savings lag, and import declines plus tariffs threaten sustained growth.
    • Simon Willison: Sam Rose explains how LLMs work with a visual essay (Dec 19, 2025)
      Sam Rose’s visual essay for ngrok explains prompt caching and expands into tokenization, embeddings, and transformer basics through interactive visuals. It’s a clear, accessible introduction to LLM internals (a toy sketch of the prompt-caching idea appears after this list).
    • WSJ: The U.S. Economy Keeps Powering Ahead, Defying Dire Predictions (Dec 23, 2025)
      The U.S. economy powered through 2025 trade and immigration shocks, driven by strong household spending—especially among the top 10%—and heavy AI-related investment in data centers that fueled third-quarter growth. But stagnant real incomes, a weak job market, low savings, and policy risks leave the expansion fragile.
    • NY Times: Roomba Maker iRobot Files for Bankruptcy, With Chinese Supplier Taking Control (Dec 15, 2025)
      iRobot, founded in 1990 by three MIT researchers and maker of the Roomba (2002), filed for bankruptcy and will be taken over by its largest creditor, Chinese supplier Picea. Years of regulatory scrutiny, privacy issues, stiff competition, and the failed Amazon deal depleted revenue and left the company heavily indebted.
    • WSJ Opinion: How Lina Khan Killed iRobot (Dec 18, 2025)
      The planned $1.7B acquisition by Amazon was blocked in 2022 by the Biden FTC (led by Lina Khan) and criticized by Sen. Elizabeth Warren over antitrust and privacy concerns. After the deal collapsed, iRobot cut about 31% of its workforce and outsourced engineering to lower‑cost regions while facing aggressive competition from Chinese firms.
    • NY Times Opinion: Why Tolkien’s ‘The Lord of the Rings’ Endures (Dec 19, 2025)
      Tolkien’s Lord of the Rings endures because its “broken” references, layered revisions, and varying styles give Middle‑earth depth, mixing sorrow with grandeur. Grieving his son Mitchell’s death, the author finds consolation in that world’s battered beauty and its fleeting eucatastrophic glimpses of joy beyond loss.
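
    The prompt-caching idea from the Sam Rose item above, in a toy sketch of my own (this is not how any particular provider implements caching): requests that share a prompt prefix reuse the expensive state already computed for that prefix, and only the new suffix is processed fresh. The names encode_prefix and run_with_cache are hypothetical.

      # Toy prompt-caching sketch (hypothetical names, not any provider's real API):
      # requests that share a prompt prefix reuse the state computed for that prefix.
      import hashlib

      _cache: dict[str, str] = {}

      def encode_prefix(prefix: str) -> str:
          """Stand-in for the expensive step: tokenizing and running the model over the prefix."""
          return f"state-for-{len(prefix)}-chars"

      def run_with_cache(prefix: str, suffix: str) -> tuple[str, str]:
          key = hashlib.sha256(prefix.encode()).hexdigest()
          if key not in _cache:               # cache miss: pay the full prefix cost once
              _cache[key] = encode_prefix(prefix)
          # Cache hit on later calls: only the new suffix needs fresh processing.
          return _cache[key], f"process {suffix!r} using cached prefix state"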
  • Frank Gehry, RIP

    Guggenheim Museum, Bilbao (Source: Wikipedia)

    From the NYTimes: Frank Gehry, Titan of Architecture, Is Dead at 96

    Pioneering American architect Frank Gehry died earlier this month. He is best known for landmark, sculptural buildings such as the Guggenheim Museum Bilbao (1997) — which sparked the “Bilbao effect” of using iconic architecture to revive cities. The Walt Disney Concert Hall (2003) in Los Angeles uses similar forms and materials for a striking appearance.

    Walt Disney Concert Hall (source: Wikipedia)

    He broke with modernist orthodoxy by using everyday materials and expressive, often fragmented forms; he was an early adopter of computer design to achieve complex, sculptural structures. I see his influence in Chipotle restaurants. We have two locally: one prominently features corrugated metal throughout the restaurant; the other has a lovely lighted wall made of plywood with a series of holes. Both use simple, inexpensive materials to create a pleasant aesthetic.

    He also partnered with Fossil to design perhaps my favorite watch. It used Gehry’s own handwriting along with a clever display to present the time. There’s a simple artistic elegance in how it projects time: half past 8, 27 ’til 2, and so on within a simple rectangular frame.

    Tyler Cowen and Patrick Collison issued a call for a new design aesthetic while noting how Bauhaus thinking affected design in the 20th century. Gehry’s unique contribution to late 20th and early 21st century architecture is notable for how it leaned into organic forms, creating structures that are as much art as function. Architecture, it seems, can both serve a physical need and stir the soul.

    Gehry (as well as Cowen’s and Collison’s Call for a New Aesthetic) reminds us that people and society are embodied souls. We need physical spaces. While the internet, mobile technology, and more recently AI tech are amazing and transformative, Gehry’s architecture invites us to see beauty in the places we live and in the structures we build. And the watches we put on our wrists.1

    1. Yes, I have an Apple Watch, and it’s an amazing tool but not in a particularly beautiful form. But that’s a much longer post I’ll likely never write! ↩︎
  • Various AI Links (Dec. 29)

    • WSJ: How AI Is Making Life Easier for Cybercriminals (Dec 26, 2025)
      Rapid advances in AI are empowering cybercriminals to automate and scale highly convincing phishing, malware, and deepfake attacks, and dark‑web tools let novices rent or build campaigns. Security experts warn that autonomy may be near, urging AI‑driven defenses, resilient networks, multifactor authentication, and skeptical user habits.
    • Ibrahim Cesar: Grok and the Naked King: The Ultimate Argument Against AI Alignment (Dec 26, 2025)
      Grok demonstrates that AI alignment is determined by who controls a model, not by neutral technical fixes: Musk publicly rewired it to reflect his values. Alignment is therefore a political, governance issue tied to concentrated wealth and power.
    • NY Times: Why Do A.I. Chatbots Use ‘I’? (Dec 19, 2025)
      A.I. chatbots are intentionally anthropomorphized—with personalities, voices, and even “soul” documents—which can enchant users, foster attachment, increase trust, and sometimes cause hallucinations or harm. Skeptics warn that anthropomorphic design creates the “Eliza effect”: people overtrust, form attachments, or even develop delusions.
    • NY Times Opinion: What Happened When I Asked ChatGPT to Solve an 800-Year-Old Italian Mystery (Dec 22, 2025)
      Elon Danziger argues that his research shows Florence’s Baptistery was a papal-led project tied to Pope Gregory VII, and that ChatGPT, Claude, and Gemini failed to replicate his discovery. He claims that LLMs miss outlier evidence and lack the creative synthesis needed for historical breakthroughs.
    • WIRED: People Are Paying to Get Their Chatbots High on ‘Drugs’ (Dec 17, 2025)
      Swedish creative director Petter Rudwall launched Pharmaicy, a marketplace selling code modules that make chatbots mimic being high on substances like cannabis, ketamine, and ayahuasca. Critics say the effects are superficial output shifts rather than true altered experiences, raising ethical questions about AI welfare, deception, and safety.
    • WSJ: China Is Worried AI Threatens Party Rule—and Is Trying to Tame It (Dec 23, 2025)
      Worried AI could threaten Communist Party rule, Beijing has imposed strict controls—filtering training data, ideological tests for chatbots, mandatory labeling, traceability, and mass takedowns—while still promoting AI for economic and military goals. The approach yields safer-but-censored models that risk jailbreaks and falling behind U.S. advances.
    • NY Times: Trump Administration Downplays A.I. Risks, Ignoring Economists’ Concerns (Dec 24, 2025)
      The White House, led by President Trump, is championing A.I. as an engine of economic growth—cutting regulations, fast‑tracking data centers, and courting tech investment—while dismissing bubble and job‑loss concerns. Economists and Fed officials warn of potential mass layoffs, unsustainable financing, and systemic risks.
    • NY Times: The Pentagon and A.I. Giants Have a Weakness. Both Need China’s Batteries, Badly. (Dec 22, 2025)
      America’s AI data centers and the Pentagon’s future weapons increasingly depend on lithium-ion batteries dominated by China, creating strategic vulnerabilities. 
    • Piratewires: The Data Center Water Crisis Isn’t Real (Dec 18, 2025)
      Andy Masley used simple math, AI, and domain knowledge to debunk exaggerated claims that individual AI use (e.g., an email) or data centers “guzzle” huge amounts of water — the “bottles of water” metric is misleading and easily miscomputed.
    • NY Times: Senators Investigate Role of A.I. Data Centers in Rising Electricity Costs (Dec 16, 2025)
      Three Democratic senators asked Google, Microsoft, Amazon, Meta, and other data‑center firms for records on whether A.I. data centers’ soaring electricity demand has forced utilities to spend billions on grid upgrades that are recouped through higher residential rates. They warned ordinary customers may be left footing the bill.
  • AI in Higher Education & Medicine

    • Roon: Too Bearish on AI (Dec 26, 2025)
      The author admits they were too bearish mid-year, having expected that improvements beyond reinforcement learning would be required. After trying Codex, they realized AI progress is clearly in a rapid takeoff.
    • WSJ: These Teenagers Are Already Running Their Own AI Companies (Dec 21, 2025)
      Teenagers are launching AI-powered startups—like 15-year-old Nick Dobroshinsky’s BeyondSPX—using generative models to build products quickly and attract users. Investors note AI lowers technical barriers and accelerates entrepreneurship.
    • WSJ Opinion: AI Means the End of Entry-Level Jobs (Dec 22, 2025)
      AI is eroding entry-level roles that traditionally launch careers, causing younger workers to worry and raising unemployment among 22–25-year-olds in affected sectors. Companies should create new pathways—AI-native roles, mentor-intensive programs, project-based progression, and competency-based advancement—integrating AI and business training to build future talent.
    • Import AI Substack: Import AI 438: Silent sirens, flashing for us all
      Powerful AI capabilities are often hidden from everyday users — tools like Claude Code can rapidly build complex software, and by 2026 an “AI economy” will accelerate and diverge from everyday experience, benefiting those who can access and elicit frontier systems.
    • Johannes Schmitt: AI model (GPT-5) autonomously solved an open math problem (Dec 17, 2025)
      GPT-5 autonomously solved an open enumerative-geometry problem, giving a complete, correct proof for ψ-class intersection numbers on moduli spaces of curves. 
    • NY Times Opinion: College Students Need Tech-Free Spaces (Dec 19, 2025)
      Colleen Kinder had Yale students surrender their phones for a four‑week, Wi‑Fi‑free writing course in Auvillar, France, and reports improved sleep, focus, reading speed, and creativity, with far greater writing output. She argues colleges should create internet‑free tracts, dorms, or “cloisters” to protect learning from constant distraction.
    • NOAA: NOAA deploys new generation of AI-driven global weather models (Dec 17, 2025)
      NOAA launched AI-driven global models—AIGFS, AIGEFS, and the hybrid HGEFS—that provide faster, more accurate forecasts using far fewer computing resources (AIGFS uses roughly 0.3% and AIGEFS roughly 9% of the compute of their conventional counterparts). HGEFS’s combined AI‑physics ensemble outperforms each system alone; NOAA reports better tropical cyclone tracks but will refine intensity forecasts.
    • WSJ: Millions of Kids Are on ADHD Pills. For Many, It’s the Start of a Drug Cascade. (Nov 19, 2025)
      The WSJ reports that many children put on ADHD drugs—often after school pressure and lacking behavioral therapy—receive additional psychotropic medications to manage side effects or perceived disorders. Medicaid data show that those started on ADHD meds in 2019 were over five times likelier to be on psychiatric drugs four years later.
  • AI Market & Product Updates (Dec. 27)

    • WSJ: Nvidia Licenses Groq’s AI Technology as Demand for Cutting-Edge Chips Grows (Dec 24, 2025)
      Nvidia struck a nonexclusive licensing deal with AI-chip startup Groq for its inference-focused language-processing-unit technology; Groq CEO Jonathan Ross, the company’s president, and some staff will join Nvidia while GroqCloud stays independent.
    • WSJ: The Former Ice-Hockey Player Who Nailed This Year’s AI Trade (Dec 20, 2025)
      Former hockey captain Xavier Majic’s $3 billion Maple Rock hedge fund gained over 60% through November 2025 by betting early on data-storage suppliers (Western Digital, Seagate, Kioxia) that profited from AI-driven demand.
    • NY Times: Why the A.I. Rally (and the Bubble Talk) Could Continue Next Year (Dec 23, 2025)
      Do soaring valuations signal an AI bubble? Nvidia and the “Magnificent 7” dominate markets, OpenAI is pursuing huge fundraising and trillion‑dollar data‑center plans, and a construction boom is straining power and capital. Analysts are split: some warn of valuation and investment bubbles, others argue AI’s productivity gains justify the rally.
    • Mistral Ai: Introducing Mistral OCR 3 (Dec 19, 2025)
      Mistral OCR 3 is a compact, cost-effective OCR model offering state-of-the-art accuracy—claiming a 74% overall win rate versus Mistral OCR 2—excelling at forms, handwriting, low-quality scans, and complex tables while producing markdown/HTML table output. It’s available via API and the Document AI Playground, priced at $2 per 1,000 pages ($1 per 1,000 pages in batch).
    • Andrej Karpathy: 2025 LLM Year in Review (Dec 19, 2025)
      2025 saw major LLM shifts: Reinforcement Learning from Verifiable Rewards (RLVR) drove long-horizon capability and emergent reasoning, revealing jagged, “ghost”-like intelligence. New paradigms—Cursor apps, local agents (Claude Code), vibe coding, and GUI breakthroughs (Nano banana)—democratized development and reshaped how AI is used.
    • WSJ: Meta Is Developing a New AI Image and Video Model Code-Named ‘Mango’ (Dec 18, 2025)
      Meta is developing Mango, an image-and-video AI model, alongside a text-based model called Avocado, with both expected in the first half of 2026. Avocado will emphasize coding and world-model research under chief AI officer Alexandr Wang as Meta expands its AI team amid fierce image-generation competition.
    • WSJ: OpenAI’s New Fundraising Round Could Value Startup at as Much as $830 Billion (Dec 18, 2025)
      OpenAI is seeking up to $100 billion in a fundraising round that could value it at $830 billion, targeting completion by Q1 and drawing investors like SoftBank and Disney. The cash is needed to build AI models amid competition from Google and investor scrutiny over costly computing deals.
  • iRobot Sold for Scrap

    From the NY Times: Roomba Maker iRobot Files for Bankruptcy, With Chinese Supplier Taking Control

    iRobot, founded in 1990 by three MIT researchers and maker of the Roomba (2002), filed for bankruptcy and will be taken over by its largest creditor, Chinese supplier Picea. Years of regulatory scrutiny, privacy issues, stiff competition, and the failed Amazon deal depleted revenue and left the company heavily indebted.

    This is another example of the incompetence of America’s antitrust laws (or enforcement thereof). I can’t imagine that the Sherman Antitrust Act was written to prevent American companies from buying struggling ones.

    From John Gruber:

    By 2022, the Amazon acquisition was iRobot’s lifeline. EU regulators wanted it shot down, and despite the fact that it was one American company trying to acquire another, the anti-big-tech Biden administration clearly preferred to let the deal collapse. The US should have told the EU to mind their own companies.

    This story is another anecdote suggesting that we’d be far better off trying to build things instead of reflexively decrying big business sweeping up smaller ones (particularly ones that were struggling). I’m sympathetic to Klein and Thompson’s arguments about abundance, particularly as AI technology is growing by leaps and bounds.

    Related: WSJ Opinion agrees: How Lina Khan Killed iRobot. iRobot filed for bankruptcy after 35 years when the Biden FTC under Lina Khan—amid pressure from Sen. Elizabeth Warren—blocked Amazon’s acquisition and Trump’s tariffs hobbled production. Critics say the FTC’s opposition and trade policy accelerated layoffs and a takeover by Chinese manufacturer Picea, showing how intervention can strengthen foreign rivals.