Blog

  • Tuesday AI Links (Nov. 4)

  • Various (AI) Links (Nov. 2)

  • Various (AI) Links (Oct. 31)

  • The Atlantic: Tilly Norwood Is Not Ready for Its Closeup

    Yesterday, I posted about AI as the Bicycle of the Mind, suggesting that AI could be democratizing, particularly for lower budget filmmakers: “AI tools have the potential to unlock more creativity for countless filmmakers who aren’t named Spielberg or Lucas.”

    Today, I saw The Atlantic article, Tilly Norwood Is Not Ready for Its Closeup (October 25, 2025). Sharon Waxman’s conclusion is that AI isn’t ready…yet.

    But ultimately, as Tilly Norwood demonstrated and insiders affirmed, the AI models available just aren’t Hollywood-caliber—yet. “Hollywood studios have a very, very high bar of technical quality that AI currently doesn’t get. But it will,” Weintrob said.

    Netflix, however, seems to be committed to expediting the improvement of AI quality:

    This month, Netflix announced that it is merging its visual-effects studio, Scanline, with its research lab, Eyeline, to expedite its own AI-led efforts. The race to get ahead goes on.

    But I think the most interesting part is Waxman’s conversation with low-budget producers:

    Producers—mainly of low-budget films rather than major studio productions—told me that the technology is helping them reduce their spending on visual effects.

  • The Bicycle of the Mind

    Steve Jobs famously described the computer as a “bicycle for the mind.” In an interview decades ago, he compared the efficiency of various species traveling a mile, noting that humans were far from the most efficient.

    But when you gave a human a bicycle, the energy required to travel that same distance dropped dramatically — surpassing nearly every other creature. He then talked about humans as tool builders.

    Jobs used this analogy to explain how computers empower people and “amplify” human creativity, allowing us to do extraordinary things. Looking at technology today, it’s clear his prediction was on the mark. Computers have indeed enabled humans to create, design, and communicate in ways that were once unimaginable. Reach into your pocket (or purse) and grab your smartphone. That phone is far more than a device used to make calls. Personally, I have over 25,000 photos and several thousand videos.

    Computers have given rise to entirely new professions — designers, photographers, programmers, content marketers — jobs that simply didn’t exist a generation ago. The same appears likely with AI.

    AI: The Next Bicycle of the Mind

    Artificial intelligence tools represent another step in human tool building. AI has the potential to democratize creativity in ways that were previously unthinkable.

    Just weeks ago, OpenAI released Sora 2 (following Google’s Nano Banana), which similarly focuses on faithfully rendering a person’s likeness. These systems allow creators to upload a photo of a person and generate remarkably accurate, lifelike images — trying on different outfits or hairstyles, or even placing themselves in imaginative settings, a huge leap from earlier models. You can create fantastical scenes — climbing Mount Everest, eating dinner on the Titanic, etc. — things that defy reality but are fun. These tools give everyone, not just professional artists, the ability to create.

    There are dedicated apps for Sora and Meta AI, both of which host a growing number of AI-generated photos and videos (and a lot of AI slop).

    Creative Industries and AI

    The implications go far beyond personal creativity. Filmmakers, for instance, can now generate entire scenes — a cheering crowd, a packed stadium — with minimal cost. What once required massive budgets and production teams (here’s a story about the stadiums in Ted Lasso) can now be achieved with AI tools.

    George Lucas waited more than 10 years between Star Wars: Episode VI and Episode I because the technology he needed to capture his creative vision simply didn’t exist. After seeing Jurassic Park, he realized that computer-generated imagery had advanced enough to make his vision possible. AI tools have the potential to unlock more creativity for countless filmmakers who aren’t named Spielberg or Lucas.

    The Productivity Curve

    Economist Jason Furman recently discussed the possibility of a productivity J-curve in relation to AI — where initial productivity may decline as we adopt these tools, but long-term gains will follow. 

    Filmmakers adopting AI today may not see immediate results — it takes years to produce a film — but these technologies are entering creative pipelines now. In a few years, we should begin seeing the results: imaginative, visually stunning works produced at lower costs. (As an aside, the WSJ reports on the new film company, B5 Studios, which plans to create content more quickly and at lower cost.)

    The same pattern applies to app development and web creation. Coding agents like OpenAI’s Codex or Anthropic’s Claude Code are dramatically lowering barriers for developers, and Anthropic showcases customers who have built impressively good products with Claude. Apple is integrating Claude Code into Xcode, paving the way for a new wave of iPhone apps from creators who previously lacked the resources to build them.

    AI in Education and Creativity

    For university and educational institutions, these advances offer tremendous opportunities. Creative professionals can produce higher-quality work with fewer resources. Students in creative programs can now create visually rich, engaging projects that would have been technically or financially impossible just a few years ago.

    And the possibilities extend beyond visual arts and programming into writing. Every aspiring writer now has access to an editor, proofreader, and creative partner through AI. A budding novelist can write a first chapter and instantly receive feedback, grammatical corrections, and stylistic suggestions. AI becomes a bicycle for the mind — not replacing editors, but extending editorial support to those who previously lacked such resources.

    Of course, professional authors like John Grisham and J.K. Rowling will continue to rely on human editors and publishers. But for new authors, AI can help them polish their work and realize their creative ideas.

    The Human Potential

    As leaders, the challenge is to encourage people to see these tools not as job killers or creativity crushers, but as amplifiers of human potential. AI, like the computer before it, can help extend human flourishing.

    It’s a tool that can make us more creative, more expressive, and more capable of bringing our ideas to life. Like the bicycle that allows humans to move faster and farther than ever before, AI is the next great vehicle for the mind — empowering us to go places we never could have reached on our own.

  • Tuesday Links (Oct. 28)

  • Monday links (Oct. 27)

  • Sunday Links (Oct. 26)

  • AI Catastrophe?

    I love the quote from George Mallory about climbing Mt. Everest:

    When asked by a reporter why he wanted to climb Everest, Mallory purportedly replied, “Because it’s there.”

    We all know that it didn’t turn out so well for Mr. Mallory, and 100 years later, this meme:

    Perhaps the same can be said for AI scientists: why do you build even more powerful AI systems? Because the challenge is there!

    The race to build these systems is on. Companies left and right are dropping millions on talent in their attempts to build superintelligence labs. Meta, for example, has committed enormous sums to this effort. OpenAI (the leader), Anthropic (the safety-minded one), xAI (the rebel), Mistral (the Europeans), DeepSeek (the Chinese), Meta, and others are building frontier AI tools. Many are quite indistinguishable from magic.

    Each of these companies purports, for one reason or another, to be the best and most trustworthy organization to reach superintelligence. Elon Musk (xAI), for example, has been quite clear that he only trusts the technology if he controls it. He even attempted a long-shot bid to purchase OpenAI earlier this year. Anthropic is quite overtly committed to safety and ethics, believing it is the company best suited to develop “safe” AI tools.

    (Anthropic founders Dario and Daniela Amodei and others left OpenAI in 2021 in response to concerns about AI safety. They focused on so-called responsible AI development as central to all research and product work. Of course, their AI ethics didn’t necessarily extend to traditional ethics like not stealing, but that’s a conversation for another day.)

    I’m not here to pick on the Amodeis, Musk, Meta, or any of the AI players. It’s clear that they’ve created amazing technologies with considerable utility. But there are concerns at a far higher level than AI-induced psychosis on an individual level or pirating books.

    Ezra Klein recently interviewed Eliezer Yudkowsky on his podcast, in a bonkers conversation that positions AI not as just another technology but as something with a high probability of leading to human extinction.

    The interview is informative and interesting, and if you have an hour, it’s worth listening to in its entirety. But I was particularly struck by the religious and metaphysical part of the conversation:

    Klein:

    But from another perspective, if you go back to these original drives, I’m actually, in a fairly intelligent way, trying to maintain some fidelity to them. I have a drive to reproduce, which creates a drive to be attractive to other people…

    Yudkowsky:

    You check in with your other humans. You don’t check in with the thing that actually built you, natural selection. It runs much, much slower than you. Its thought processes are alien to you. It doesn’t even really want things the way you think of wanting them. It, to you, is a very deep alien.

    Let me speak for a moment on behalf of natural selection: Ezra, you have ended up very misaligned to my purpose, I, natural selection. You are supposed to want to propagate your genes above all else.

    I mean, I do believe in a creator. It’s called natural selection. There are textbooks about how it works.

    I’m familiar with a different story in a different book. It’s about a creator and a creation that goes off the rails rather quickly. And it certainly strikes me that less able creators (humans) have created something that behaves in ways that diverge from the creators’ intent.

    Of course, the ultimate creator I mention knew of coming treachery and had a plan. So for humanity, if AI goes wrong, do we have a plan? Yudkowsky certainly suggests that we don’t.

    I’m still bullish on AI as a normal technology, but there are smart people in the industry telling me there are big, nasty, scary risks. And because I don’t see AI development slowing, I find these concerns more salient today than ever before.

  • Thursday Links (Oct. 22)