Silicon Valley Is in a Frenzy Over Bots That Build Themselves

A group of people recently blocked the streets of San Francisco with signs that read “Stop the AI race” and “Don’t build Skynet.” The target? Artificial intelligence companies that are working on bots capable of improving themselves: machines that don't just follow orders, but that learn, optimize themselves, and—at least in theory—could eventually design future generations of AI. The argument that is driving Silicon Valley crazy is this: We are no longer dealing with computers that work for us, but with systems that work on themselves. And the rate at which they improve is no longer linear: it could become exponential.

The idea is not new. As early as the 1960s, the statistician I. J. Good predicted that “the machine capable of designing a better version of itself would be the last invention humanity would need.” For decades, however, this remained science fiction. Until a few years ago, the idea that a bot could conduct real research was almost ridiculous: ChatGPT struggled with simple addition, let alone designing new algorithms.

Today, the story is different. OpenAI is openly talking about an “AI research assistant” coming in the next six months. Anthropic claims that 90% of its code is already written by Claude, its AI model. Dario Amodei, CEO of Anthropic, estimates that automatic coding tools have made internal processes 15–20% faster. For its part, Google DeepMind has developed AlphaEvolve, an AI agent that has reduced the computational consumption of its global data centers by 0.7% and accelerated the training of the Gemini model by 1%.

But behind these numbers lies a question: How much of this work is truly autonomous, and how much is supervised by humans? Jack Clark, Anthropic’s Head of Policy, is the first to admit that the real priority today is to “understand the extent to which we are automating aspects of AI development.” Because for now, automation is fragmented: bots optimize small tasks, but they don’t yet fully direct the research process.
Yet the race is already on. Sam Altman of OpenAI has set a goal for 2028: a “fully automated AI researcher” capable of making significant discoveries without human intervention. Others, such as Eli Lifland of the AI Futures Project, predict full automation of research by 2032.

But some are putting the brakes on. Pushmeet Kohli of DeepMind points out that, for now, “the full cycle of self-improvement doesn't exist yet”: a machine can optimize, but it cannot yet decide what is worth optimizing. The real breakthrough is not getting a bot to write code, but giving it what is known as a “taste” for research—the ability to select interesting questions and judge what is important, as a great engineer would. And this is a human quality that is still out of reach. Then there is the issue of resources: chips, energy, money. If just one of these factors becomes scarce, progress grinds to a halt.

Yet even small steps toward automation are already making a difference. Dean Ball, a former Trump advisor on AI, warns: “This could reshape global competition in AI, alter tech geopolitics, and much more.” Public institutions are lagging behind: the U.S. bureaucracy still uses COBOL, a programming language from the 1960s, to process taxes. If AI moves faster, politics risks falling even further behind.

And the most extreme dreams of self-replicating AI don’t need to come true to have an impact: the very possibility of it happening is already shifting resources, attention, and power. Nick Bostrom, a Swedish philosopher who studies the risks of AI, now describes himself as a “moderate fatalist.” And twenty of the top researchers at DeepMind, OpenAI, Meta, Stanford, and Berkeley have identified the automation of AI research as one of the most pressing risks facing the industry. Bernie Sanders raised the alarm in the Senate: “Humans could really lose control of the planet.” But behind the fear, there is also a wave of hype that benefits precisely those who develop these technologies.
If you thought machines would simply perform our tasks, it's time to update your perspective: now they aim to improve themselves, and the race has already begun. If you feel that this story concerns you, on Lara Notes you can press I'm In: it's not a like; it's your way of saying that this idea is now yours too. And if you end up talking about it with someone—perhaps mentioning that 90% of Anthropic’s code is already the work of a bot—on Lara Notes there’s Shared Offline, the gesture that records that important conversation for good. This Note is based on an article from The Atlantic: instead of spending over 15 minutes reading, here you’ve saved almost 15 minutes.