Created on 3/20/2026, 8:31
Updated on 3/20/2026, 8:53
AI Ethics (iv): The Emotiv Case

Inference Surveillance and the End of the Private Self

Preface: Co-written with Gemini.


In 1787, the philosopher Jeremy Bentham conceived of the "Panopticon"—a prison designed so that a single watchman could observe every inmate without the inmates ever knowing if they were being watched. The power of the Panopticon was not constant supervision, but the possibility of it; the prisoner, never certain of their privacy, would eventually become their own jailer. By 2026, Artificial Intelligence has realized Bentham’s architectural dream on a global scale. However, unlike the stone-and-mortar prisons of the past, the modern Panopticon is "ambient." It is woven into our smart glasses, our biometrically-locked phones, our "intelligent" cities, and the very airwaves that carry our digital existence.

The ethical crisis of privacy in the age of AI has undergone a fundamental phase shift. It is no longer just about "data collection"—the simple, often transactional act of recording what we do. It has evolved into a far more invasive and psychologically destructive phenomenon: Inference Surveillance. In this new regime, AI does not need to see your secrets to know them. By analyzing the "digital exhaust" we leave behind—our typing cadences, our scrolling speeds, the metadata of our photos, and the subtle patterns of our social connections—AI can infer our most intimate traits with startling accuracy. We have moved from a world where we are watched for our actions to a world where we are mapped for our predispositions.

The traditional concept of privacy was built on the sanctity of the "secret"—a piece of information, such as a medical diagnosis, a political leaning, or a sexual orientation, that an individual could choose to hide or reveal. AI has rendered this choice obsolete. Through a process called Predictive Behavioral Modeling, AI systems can "connect the dots" across seemingly unrelated, non-sensitive data points to uncover information you never intended to share. The classic example, which has only grown more relevant as algorithms have sharpened, is the "Target Pregnancy Prediction." By identifying subtle changes in a customer’s shopping habits—such as switching to unscented soaps or purchasing specific mineral supplements—an AI was able to predict a teenager’s pregnancy before she had even told her family. In 2026, these capabilities have moved from retail into the core of our lives. An AI can now infer a person’s likelihood of developing Parkinson’s disease by analyzing the micro-tremors in their mouse movements, or their political radicalization by the specific "rabbit holes" they fall down on video platforms. The AI didn't "steal" these secrets; it calculated them. This is the core ethical violation of the Digital Panopticon: it bypasses the Mental Sanctum. If a company or a government can know your biological or psychological state before you have even consciously acknowledged it yourself, the very notion of a private internal life begins to dissolve. We are no longer the authors of our own disclosures.
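To see how little "sensitive" input such an inference engine actually needs, consider a minimal Python sketch of proxy-based prediction. Everything here is hypothetical: the feature names, the data, and the coefficients are invented for illustration. But the mechanism is the one described above; no single signal is intimate, yet their combination surrenders the secret.

```python
# Hypothetical sketch of proxy-based inference: a classifier recovers a
# sensitive trait from innocuous signals. All data here are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Innocuous behavioral signals (e.g., unscented-soap purchases,
# supplement purchases, minutes of late-night browsing).
unscented_soap = rng.integers(0, 2, n)
supplements = rng.integers(0, 2, n)
night_browsing = rng.normal(30, 10, n)

# The hidden sensitive trait correlates weakly with each proxy alone,
# but strongly with their combination.
logit = -3 + 1.5 * unscented_soap + 1.5 * supplements + 0.05 * night_browsing
label = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([unscented_soap, supplements, night_browsing])
X_train, X_test, y_train, y_test = train_test_split(X, label, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"inferred-trait accuracy: {model.score(X_test, y_test):.2f}")
# The model never saw a disclosure of the trait itself, only proxies.
```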

In the context of AI ethics and philosophy, the Mental Sanctum (often referred to as the Inner Sanctum or Forum Internum) is the "last frontier" of human privacy. It refers to the private, internal space of your mind—your unspoken thoughts, raw emotions, core beliefs, and cognitive processes—that has historically been considered physically impossible for anyone else to access. Think of it as the "sacred vault" inside your head where you are truly free because no one can see what you’re thinking.

For all of human history, if you didn't say it, write it, or act on it, your thought remained a secret. This "biological wall" protected your individuality. AI doesn't need to "read your brain" with electrodes to breach this sanctum. It uses Inference. By analyzing your micro-behaviors (how fast you type, which words you hesitate on, your "likes," or your gait), AI can mathematically "guess" your mental state, sexual orientation, or political leanings, often before you've even consciously realized them yourself. In legal philosophy, there is a distinction between the Forum Externum (your outward actions and speech, which the state can regulate) and the Forum Internum (your internal conscience and thoughts, which, under human rights instruments like the UDHR, are absolutely protected). The concern today is that AI is effectively "liquefying" the wall between the two, turning our internal forum into public data.
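To make "micro-behaviors" less abstract, here is a toy sketch of the first step of such a pipeline: converting raw keystroke timestamps into the compact feature vector an inference model would consume. The 500 ms hesitation threshold and the feature set are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: turning raw micro-behavior (keystroke timestamps)
# into the kind of feature vector an inference engine consumes.
from statistics import mean, stdev

def keystroke_features(timestamps_ms: list[float]) -> dict[str, float]:
    """Summarize typing cadence from a list of key-press times (ms)."""
    gaps = [b - a for a, b in zip(timestamps_ms, timestamps_ms[1:])]
    hesitations = [g for g in gaps if g > 500]  # pauses over half a second
    return {
        "mean_gap_ms": mean(gaps),
        "gap_jitter_ms": stdev(gaps) if len(gaps) > 1 else 0.0,
        "hesitation_rate": len(hesitations) / len(gaps),
    }

# A few seconds of typing becomes a compact behavioral signature: exactly
# the kind of "digital exhaust" an inferential model can correlate with
# stress, fatigue, or neurological markers.
print(keystroke_features([0, 180, 350, 1100, 1260, 1440, 2300]))
```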

Because the Mental Sanctum is under threat, ethics experts are pushing for Neurorights or Cognitive Liberty. These are proposed new human rights that would legally recognize your mind as a "protected space," just like your home or your body. The goal is to make it illegal for an AI to "infer" or "profile" your internal mental states without explicit, high-level consent. To combat this, a coalition of neuroethicists and legal scholars (most notably the Neurorights Foundation) has proposed a set of five fundamental rights designed to protect the "human essence" in the age of AI and Neurotech.

  1. Cognitive Liberty: The right to use, or refuse to use, neurotechnology. It is the right to mental self-determination—ensuring that no state or corporation can force an individual to have their mental states monitored or altered.

  2. Mental Privacy: The right to have brain data (and the inferences drawn from it) kept private. This is a direct response to "Neuro-capitalism," where our subconscious reactions are treated as raw material for profit.

  3. Mental Integrity: Protection against unauthorized interference with mental states. This guards against "algorithmic hacking" of the mind, where AI could subtly alter your moods or beliefs through targeted stimuli.

  4. Psychological Continuity: The right to maintain a sense of self. If an AI "nudge" is so effective that it changes your personality or core values without your awareness, your continuity as a person has been violated.

  5. Fair Access to Mental Augmentation: Ensuring that if brain-enhancing AI becomes a reality, it doesn’t create a permanent "biological class divide" between the augmented and the unaugmented.

Cognitive Liberty is the foundational principle here. It is the right to be "internally different." In an "optimized" society, there is a massive pressure for everyone to be productive, happy, and compliant. AI-driven surveillance already flags "unproductive" behavior or "dissident" sentiment. Without Cognitive Liberty, we risk a future of Neuro-normativity, where any brain state that doesn't fit the "ideal" (as defined by a corporation’s KPI or a state’s social credit system) is flagged for "correction." We are already seeing the first ripples of this. Some companies have experimented with "attention-monitoring" AI for remote workers, while others use "Emotion AI" to vet job candidates. This is the beginning of Neuro-discrimination: being denied a mortgage or a job not because of what you did, but because an AI inferred that your neural patterns suggest a "lack of resilience" or a "propensity for risk."

The Panopticon effect is thus complete: when you know that every action—even a "disloyal" look captured by a smart streetlight—is being fed into a permanent, inscrutable reputation score, you stop taking risks. You stop associating with "unfiltered" people. You stop thinking "deviant" thoughts. The algorithm doesn't need to arrest you to silence you; it just needs to make the cost of non-conformity too high to bear. This is the death of the "Human Margin"—the space where progress and dissent are born.

Central to human dignity is the ability to change: to grow out of youthful mistakes and reinvent oneself. This is why many cultures value the "Right to be Forgotten." AI, however, possesses a Permanent, Latent Memory. Because AI models are trained on historical data, every embarrassing post, every youthful indiscretion, and every "data point" of our past is frozen in the model's weights. The ethical catastrophe here is that even if you delete the original data, the inference remains. If an AI "learned" that people with your background or your history are "low-value" or "high-risk," that statistical ghost will follow you forever. In the Digital Panopticon, you are not who you strive to become; you are the sum of every data point you have ever generated, forever held hostage by the "Stochastic Parrot" of your own history. This creates a "Digital Triage" that stalls social mobility and punishes those who come from over-policed or over-monitored backgrounds.
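A small synthetic experiment makes the "statistical ghost" tangible. In the sketch below (hypothetical data; scikit-learn for convenience), the model is retrained after "your" record is erased, and your risk score barely moves, because the model learned a rule about people statistically like you rather than about you alone.

```python
# Minimal sketch, on synthetic data, of why deleting a record does not
# delete the inference: the learned rule survives the erasure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 3))
y = (X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=1000)) > 0

model = LogisticRegression().fit(X, y)
you = X[:1]  # "your" data point

# Exercise a "right to erasure": retrain with your row removed.
retrained = LogisticRegression().fit(X[1:], y[1:])

print(f"risk score before erasure: {model.predict_proba(you)[0, 1]:.4f}")
print(f"risk score after erasure:  {retrained.predict_proba(you)[0, 1]:.4f}")
# Nearly identical: the model learned a statistical rule about people
# like you, and that rule does not die with your data point.
```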

To fight back against the Panopticon, our legal frameworks must undergo an emergency update. Standard privacy laws (like the GDPR) focus on protecting "Personally Identifiable Information" (PII): names, addresses, and ID numbers. But in 2026, PII is a relic. The AI doesn't need your name to know your soul. We must establish the legal concept of Inference Privacy. This would be a "Right to the Mental Sanctum," stipulating that even if a company has legal access to your data, it does not have the right to use that data to infer sensitive traits (such as sexual orientation, health status, or political leaning) that you did not explicitly disclose. It would ban "Inference Surveillance" in hiring, lending, and policing. It would acknowledge that our digital shadows are not "public property" to be mined for psychological vulnerabilities, but an extension of our physical and mental selves.

In 2021, Chile amended Article 19 of its constitution to establish that "scientific and technological development must be at the service of people." The most revolutionary aspect of this reform, and of the legislation that followed, is the legal treatment of neurodata. By treating brain activity as a "biological organ," the law grants it the same protections as a kidney or a heart. Under this framework, your neural patterns are not just "information" that can be licensed away in a 50-page Terms of Service agreement; they are part of your physical self. This means that just as you cannot legally sell your internal organs in most jurisdictions, your raw neural data cannot be "bought" or "trafficked" for commercial exploitation.

Chile put this framework to the test in court with the Emotiv case of 2023. The case was brought by Guido Girardi, a former Chilean senator who was, ironically, the primary architect of Chile's 2021 Neurorights constitutional amendment. Girardi purchased an "Insight" headset from the U.S.-based company Emotiv. The device is a consumer-grade EEG headband designed to track cognitive performance, stress, and focus. When Girardi used the device, he realized that while he could see his own brain activity, he had no control over where that data went. Because he was using a "free" version of the software, Emotiv's terms of service allowed the company to store his brainwave data in its cloud for "scientific and statistical research," even if he deleted his account. Girardi sued, alleging that this "eternal" storage of his neural fingerprints violated his constitutional right to mental integrity. Emotiv's defense rested on a common industry argument: anonymization. The company argued that once brain data is stripped of a name and turned into a statistical aggregate, it is no longer "personal data"; it is merely "statistical information" used to improve algorithms, much like a website might track where people click. It also argued that by clicking "I Agree" to the terms of service, Girardi had signed away his rights to the raw data in exchange for using the technology.

The Chilean Supreme Court rejected Emotiv's arguments entirely, and its ruling established several revolutionary precedents. First, the court ruled that "clicking a box" on a 50-page digital contract is not valid consent for the collection of brain data. Because brain data is so intimate, the court required prior, express, and specific informed consent for every single use of that data. Second, the court leaned on Law No. 20.120, which regulates scientific research on human beings: because the headset records biological signals from a human, it must be treated with the same ethical rigor as a clinical medical trial, even if it is marketed as a "wellness" toy. Finally, the court ordered Emotiv to immediately delete all of Girardi's data. More significantly, it ordered the Chilean Customs and Health Services to suspend the marketing and sale of the Insight device in Chile until its data policies were proven to be in full compliance with the new Neurorights laws.

The Emotiv case proved that the Chilean constitutional amendment wasn't just "science fiction" or "symbolic." It had teeth. It ended the "Anonymization Myth": the court acknowledged what neuroethicists had long warned, that brain data is uniquely identifiable. Much like a DNA sample, a person's neural patterns are a "fingerprint." Even if you remove the name, a sophisticated AI can re-identify the individual by matching their brain activity with other digital footprints. The case also shifted the Burden of Proof. Before it, the user had to prove they were being harmed; after it, the corporation must prove that its device is safe and its data collection is "medically ethical" before it can even enter the market. Since this ruling, organizations like UNESCO and the UN have used the Chilean decision as a blueprint for how to handle the rise of "consumer neurotech" from companies like Apple (which recently patented EEG-sensing AirPods) and Neuralink.
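The "Anonymization Myth" can be demonstrated in a few lines. The toy sketch below assumes, as the court and the neuroethicists did, that each person's neural signature is stable across recording sessions; under that assumption, a simple nearest-neighbor match re-attaches a stripped name. Real EEG re-identification uses far richer features, so treat this purely as an illustration of the principle.

```python
# Toy sketch of the "Anonymization Myth": matching an anonymized neural
# feature vector back to a named gallery by nearest neighbor.
import numpy as np

rng = np.random.default_rng(2)
people = ["alice", "bruno", "carla", "diego"]

# Each person has a stable neural "fingerprint" (e.g., a band-power profile).
fingerprints = {name: rng.normal(size=16) for name in people}

# A name-stripped recording: carla's fingerprint plus session noise.
anonymized = fingerprints["carla"] + rng.normal(scale=0.3, size=16)

# An attacker links it to an identified dataset via nearest neighbor.
best = min(people, key=lambda p: np.linalg.norm(fingerprints[p] - anonymized))
print(f"re-identified as: {best}")  # carla, despite the stripped name
```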

The move in Chile acted as a "regulatory laboratory" for the rest of the world. In November 2025, UNESCO's General Conference, representing 194 member states, officially adopted the Recommendation on the Ethics of Neurotechnology: the first global normative framework that explicitly enshrines "the inviolability of the human mind." As of early 2026, the debate at the United Nations Human Rights Council (UNHRC) has shifted into a high-stakes legislative battle between two camps. The New Rights Camp, led by Chile and various neuroethicists, argues that AI and neurotech have created "unprecedented" threats that require entirely new human rights, specifically the Right to Cognitive Liberty and the Right to Mental Privacy. The Interpretative Camp argues that we don't need new rights, but rather a "radical expansion" of the existing Right to Freedom of Thought; these states fear that creating "new" rights might accidentally weaken the universal nature of existing ones. ☀️