Understanding or delegating to AI: Asimov’s dilemma applied to science

Imagine a group of scientists obtaining from an artificial intelligence the solution to one of humanity's greatest dreams: interstellar travel. They never even bother to ask how it really works. This is not the future: it is the core of a 1945 short story by Isaac Asimov, in which the real breakthrough is not the spaceship but the way humans delegate deep understanding to machines. And today, in real science, this dilemma is more acute than ever.

We are used to thinking of AI as just a powerful tool, something that lets us do what we already know how to do, only faster. But what happens when the solution AI finds not only eludes us but becomes literally incomprehensible, even to its creators? The dilemma is no longer "Can we solve this problem?" but "Do we need to understand the solution, or is it enough for us that it works?"

Take Kolmogorov–Arnold Networks, or KANs, a new neural network architecture presented in a scientific paper in December 2025. Scientists typically work with neural networks in which each neuron adds up its incoming signals and passes the result on according to fixed rules; learning happens by adjusting the weights of those sums. In a KAN, the summation stays simple, and instead it is the connections themselves that learn increasingly rich transformation rules. This means that, unlike in traditional networks, where knowledge is hidden in a forest of numbers, in a KAN you can see, and even plot, the functions the network has learned (a toy sketch of this contrast appears at the end of this note).

Here is a striking detail: a KAN managed, on its own, to rediscover a highly sophisticated mathematical symmetry in the structure of spacetime around a black hole, a problem that had occupied generations of physicists for nearly twenty years.

Here, the stakes are not just technical. When we let AI solve problems that no human understands anymore, we risk losing our sense of ownership over knowledge. The scientists described by Asimov never ask for explanations: they accept that the complexity is beyond their reach and are content with the result. But science has always been, above all, an exercise in appropriating reality: understanding, not just applying.

A physics researcher who read the paper on KANs says that, for the first time, he was able to "see" what the network had learned, rather than just observing that it worked. The difference may seem small, but it is the line that separates doing science from delegating blindly. Those who say "the important thing is that it works" risk slipping into a new form of superstition: blind faith in the algorithm rather than understanding.

And there is a question almost no one is asking: what do we lose, as humanity, when we allow explanation to become optional? If, one day, no one is able to explain how the things we use every day work, why should we still call it science?

We are nearing a crossroads: should we delegate understanding to machines, or should we insist on truly understanding? Those who are satisfied with the result alone deprive themselves of the true joy of discovery. AI can come up with surprising solutions, but science is born when someone still asks, "How did we get there?"

If this story resonates with you, on Lara Notes you can press I'm In. It's not a like; it's your way of saying: this idea is mine now. And if tomorrow you tell someone about the KAN that solved a black hole puzzle, you can note it down on Lara Notes: Shared Offline is your way of saying that conversation mattered.
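For readers who want to see the contrast concretely, here is a minimal toy sketch in Python. It is my illustration, not code from the KAN paper: the real architecture learns a spline on every connection, while this sketch uses a tiny fixed basis (polynomials plus a sine) so the idea fits in a few lines. Every name in it (mlp_neuron, kan_edge, kan_neuron) is invented for the example.

import numpy as np

# Classic neuron: weighted sum of inputs, then a FIXED nonlinearity.
# Learning only ever adjusts the numbers w and b.
def mlp_neuron(x, w, b):
    return np.tanh(w @ x + b)

# KAN-style edge: a LEARNED one-variable function phi(x_j), here
# parameterized as a linear combination of fixed basis functions.
def kan_edge(x_j, coeffs, basis):
    return sum(c * f(x_j) for c, f in zip(coeffs, basis))

# KAN-style neuron: the combination step is a plain sum; all the
# expressive power lives in the per-edge functions.
def kan_neuron(x, edge_coeffs, basis):
    return sum(kan_edge(x_j, coeffs, basis)
               for x_j, coeffs in zip(x, edge_coeffs))

# A tiny polynomial-plus-sine basis stands in for the paper's splines.
basis = [lambda t: 1.0, lambda t: t, lambda t: t**2, lambda t: np.sin(t)]

rng = np.random.default_rng(0)
x = rng.normal(size=3)                          # three inputs
w, b = rng.normal(size=3), 0.1                  # MLP parameters
edge_coeffs = rng.normal(size=(3, len(basis)))  # one function per edge

print("MLP neuron output:", mlp_neuron(x, w, b))
print("KAN neuron output:", kan_neuron(x, edge_coeffs, basis))

# Interpretability: because each edge IS a function of one variable,
# you can tabulate or plot it directly, e.g. phi(t) on a grid.
grid = np.linspace(-2, 2, 5)
phi_0 = [kan_edge(t, edge_coeffs[0], basis) for t in grid]
print("Edge 0 function on a grid:", np.round(phi_0, 3))

That last step is the point the article makes: in this style of network, each learned piece is a curve you can draw and inspect, rather than one anonymous number in a forest of weights.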
This insight comes from Le Monde, and it saved you almost five minutes compared to reading the full article.