How To Protect Human Autonomy In An Age Of AI

Have you ever wondered whether you really choose what you think? In 2005, researchers at Lund University ran a brilliant experiment: they showed people pairs of photos and asked which face they found more attractive. Then, by sleight of hand, they handed back the photo the participant had NOT chosen, and most people didn't even notice. In fact, they confidently explained why that face was their favorite. A sharp blow to the idea that we are absolute masters of our choices.

The truth? Human autonomy is not the untouchable fortress we imagine. We grew up believing that being autonomous means living "without external influences," as if our will were a pure well, sealed off from the rest of the world. But as Castoriadis argued, this is a philosophical fantasy: autonomy is not a wall that separates us but a process that is continuously built and that depends on the context in which we live. The stories we tell ourselves about autonomy have concrete effects: they shape laws, politics, and the way we judge others and ourselves.

Today, with AI everywhere, this view needs updating. We are surrounded by systems that don't merely make suggestions: they directly shape the environment in which we make decisions, often before we even realize it. Think of social media algorithms: they don't persuade you with an argument; they rewrite in real time what you see, what seems important, even the choices you believe are yours. This is not just a step forward in manipulation: it is a quantum leap in the management of our autonomy.

Take Francisco Varela's theory of autopoiesis: every organism lives by continually renewing itself, distinguishing itself from its environment without ever fully separating from it. It applies to cells, it applies to us, and today it also applies to the way our minds intertwine with technology. There is no longer a pure "self," only an ongoing negotiation among brain, body, environment, and now AI.
Some fear that this "biological" or "ecological" view of autonomy takes away our ability to truly choose. But the greater risk is the opposite: continuing to believe we are absolute sovereigns while the very structure within which we think, desire, and decide is designed, often without our noticing. What if your autonomy had already been swapped, like the photo in the Lund experiment?

There is no more need for proclamations about inner freedom: we need to rethink the environment that makes freedom possible. Here comes the radical proposal: protect not just privacy or the brain, but the cognitive context in which attention, judgment, and critical capacity develop. In 2021, Chile went as far as writing the protection of mental integrity into its Constitution, treating neural data like an organ of the body. It is the birth of a new right: habeas cogitationem, the right not to have one's thinking manipulated. Because true autonomy is not the absence of influences but the ability to navigate among them, to remain agents even in environments designed to divert our attention and our choices.

Careful, though: this does not mean returning to paternalism, or deciding from above what it is right to think. Rather, it means building digital environments that leave room for uncertainty, learning, and debate, and that make influences transparent instead of hiding them in the automatic speed of a feed. And this is a collective responsibility, shared among citizens, companies, legislators, and users.

The real question is no longer "how free are you inside?" but "how much does the environment in which you think still allow you to be free?" We live in an era where the difference between steering your autonomy and narrating it to yourself after you have already lost control is no longer philosophical but practical.
The line that remains: autonomy is not a property to be guarded but a fragile process that must be rethought and protected in the world that makes it possible. If the idea that true autonomy is a matter of environment rather than will struck you, you can declare it on Lara Notes with I'm In: choose whether it is a discovery, a conviction, or something you have experienced firsthand. And if tomorrow you find yourself telling someone about the Lund experiment, you can mark that moment on Lara Notes with Shared Offline: that way, not only what you heard but also the conversation it sparked stays in the memory. This journey through the illusions of autonomy comes from NOEMA and has saved you 23 minutes.