Created on 3/20/2026, 7:52
Updated on 3/20/2026, 8:07

AI Ethics (ii): Data Colonialism

Power, labor, and the structural erosion of collective truth

Preface: Co-written with Gemini.


The ethical crisis of AI is not contained within the model’s weights; it spills over into how we, as a society, grant authority to these systems. We are currently witnessing a massive migration of decision-making power from human institutions to opaque algorithms. This is not merely a technical upgrade; it is a fundamental shift in the social contract. When a bank uses an AI to determine creditworthiness, or a hospital uses one to triage patients, they are not just using a tool—they are delegating moral authority. The danger here is the "Black Box of Meritocracy." Because AI presents its outputs as the result of "objective" data analysis, its decisions are shielded from the kind of scrutiny we apply to human managers. If a human boss denies you a promotion, you can demand a reason and point to their personal biases. If an algorithm denies you, the bias is buried under billions of parameters. This creates a new form of power: one that is everywhere but accountable to no one.
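
To make the asymmetry concrete, here is a minimal sketch in Python (all data synthetic, the model and features hypothetical) of what "explanation" looks like on the far side of the black box: a human manager can be asked for a reason, while the model's only available answer is its parameters.

```python
# A minimal sketch of the "Black Box of Meritocracy" on synthetic data.
# The model reaches a decision, but the only "reason" available to the
# denied applicant is a pile of weights; production systems scale this
# same opacity up to billions of parameters.
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))                              # 8 applicant features
y = (X @ rng.normal(size=8) + rng.normal(size=500)) > 0    # historical decisions

model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000).fit(X, y)

applicant = rng.normal(size=(1, 8))
print("decision:", "approved" if model.predict(applicant)[0] else "denied")

# The entire available "explanation":
n_params = sum(w.size for w in model.coefs_) + sum(b.size for b in model.intercepts_)
print(f"the 'reason' is distributed across {n_params} opaque parameters")
```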

In the traditional imagination, human prejudice is loud, emotional, and often obvious. We picture a biased hiring manager or a dismissive doctor acting on conscious or subconscious whims. To solve this, the tech industry offered a seductive promise: the "Neutral Gatekeeper." By replacing fallible human intuition with cold, hard data, we were told that meritocracy would finally be automated. However, as AI moved from the laboratory to the HR department and the hospital ward, we discovered that algorithms do not eliminate prejudice—they simply give it a promotion. When bias is "mathematized," it becomes a structural feature of the system rather than a bug. In this article, we examine how the pursuit of "optimization" in the workforce and healthcare has led to some of the most profound ethical failures in modern technology, proving that an algorithm is only as fair as the society that recorded the data it feeds upon.
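
The mechanism by which bias gets "a promotion" can be shown in a few lines. The sketch below (synthetic data, hypothetical features) trains a classifier on historically biased hiring decisions; even though the protected attribute is never given to the model, the prejudice resurfaces through a correlated proxy.

```python
# A minimal sketch (hypothetical data) of how historical bias becomes a
# "structural feature": a classifier trained on past hiring decisions
# faithfully reproduces the prejudice encoded in its labels.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)                      # true merit, same for both groups
group = rng.integers(0, 2, size=n)              # protected attribute (0 or 1)

# Historical labels: past managers penalized group 1 regardless of skill.
hired = (skill - 1.5 * group + rng.normal(scale=0.5, size=n)) > 0

# The model never sees "group" directly, only a correlated proxy
# (e.g., zip code, alma mater). The bias survives the laundering.
proxy = group + rng.normal(scale=0.3, size=n)
X = np.column_stack([skill, proxy])

model = LogisticRegression().fit(X, hired)

# Two equally skilled candidates, differing only in the proxy:
candidates = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(candidates)[:, 1])    # group-1 proxy scores lower
```

Scrubbing the protected column does not scrub the history; the labels themselves carry the prejudice.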

While the media often focuses on the "Terminator" scenario of AI, the more immediate ethical threat is the quiet, structural displacement of human labor. Unlike the Industrial Revolution, which replaced physical brawn with machines, the Generative AI revolution targets the "cognitive middle class." Writers, coders, paralegals, and graphic designers are finding their expertise commodified into "training data."

There is a profound parasitic irony at the heart of this transition. AI models are trained on the creative and intellectual output of millions of humans, often without their consent or compensation. Once trained, these models are then sold back to the very industries they were designed to disrupt, effectively using a worker’s past brilliance to render their future obsolete. The ethical question here is one of Data Sovereignty: Who owns the "essence" of a professional’s craft? If a model can mimic an artist's style perfectly because it "ate" ten thousand of their drawings, does that artist deserve a royalty on every prompt? Without a new framework for intellectual property, AI risks becoming the ultimate engine of wealth concentration, stripping value from the many to enrich the few who own the servers.

Data Sovereignty is the digital-age manifestation of an ancient political concept: the right of a self-governing state, or an individual, to exercise exclusive control over their own territory—in this case, their data. At its core, it means that data is subject to the laws and governance structures of the nation or person from which it originated. Historically, the internet was envisioned as a "borderless" frontier. In the 1990s, the prevailing ethos was that data should flow freely across the globe, unencumbered by national friction. However, as data became the "new oil" of the 21st century, nations realized that allowing their citizens' information to be stored on foreign servers (predominantly in the US or China) created a massive power imbalance. This led to the rise of Digital Protectionism. Countries began to view data not just as information, but as a critical national resource and a matter of national security.

Today, Data Sovereignty is a complex tug-of-war between three parties: states, corporations, and individuals. States are passing "Data Localization" laws: the EU’s GDPR, for example, restricts the transfer of European personal data to countries with weaker privacy protections, while China and India mandate that the financial or personal data of their citizens be physically stored on servers within their borders. Corporations, especially the tech giants, often fight sovereignty to maintain "Cloud Fluidity," arguing that borders slow down innovation and increase costs. Individuals are the focus of a newer movement, often called Self-Sovereign Identity (SSI), which argues that sovereignty shouldn't belong to the state but to the user. It envisions a world where you own your "digital twin"—your health records, browsing history, and social graphs—and can revoke a company’s access to them at any time (a minimal sketch of this idea follows below).

In the context of AI, the history of data sovereignty is being rewritten as a struggle against "Data Colonialism." AI companies have spent years scraping the open web—harvesting the cultural heritage, private conversations, and intellectual labor of the entire world—to build models that they now sell back to those same people. Data sovereignty today is the fight to ensure that the "value" of that data stays with its creators, rather than being extracted by a few centralized server farms.
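
To make the SSI proposal concrete: at its core it reduces to a consent registry that the user, not the platform, controls. The sketch below is purely illustrative (the class and method names are invented here, not taken from any real SSI standard such as W3C Decentralized Identifiers): access is default-deny and revocable at any moment.

```python
# A minimal sketch of the Self-Sovereign Identity idea described above:
# the user, not the platform, holds the registry of who may read which
# slice of their "digital twin" and can revoke access at any time.
# All names are illustrative, not a real SSI standard.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    grants: dict = field(default_factory=dict)   # (company, scope) -> granted?

    def grant(self, company: str, scope: str) -> None:
        self.grants[(company, scope)] = True

    def revoke(self, company: str, scope: str) -> None:
        self.grants[(company, scope)] = False

    def may_access(self, company: str, scope: str) -> bool:
        # Default-deny: no record means no consent.
        return self.grants.get((company, scope), False)

me = ConsentRegistry()
me.grant("health_app", "heart_rate")
print(me.may_access("health_app", "heart_rate"))   # True
me.revoke("health_app", "heart_rate")
print(me.may_access("health_app", "heart_rate"))   # False
print(me.may_access("ad_network", "heart_rate"))   # False (never granted)
```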

Beyond the workplace, AI is terraforming our information ecosystem. The advent of high-fidelity Deepfakes—synthetic audio and video that are indistinguishable from reality—marks the end of an era of shared truth. For nearly a century, photographic and video evidence served as the "gold standard" of proof in courts of law and journalism. That standard has now collapsed. The ethical crisis is twofold. First, there is the obvious harm of Targeted Disinformation: using a politician’s synthetic voice to incite a riot or a faked video to ruin a private citizen's reputation. But the second, more subtle harm is what researchers call the "Liar’s Dividend."

The Liar’s Dividend is a term coined by legal scholars Bobby Chesney and Danielle Citron to describe a secondary, perhaps more destructive, consequence of the deepfake era. While the primary threat of AI-generated content is the "positive" act of deception—creating a fake video to frame an innocent person—the Liar’s Dividend represents the "negative" power of deniability. In a world where everyone knows that high-fidelity audio and video can be fabricated, the mere existence of deepfake technology provides a universal "get out of jail free" card for those caught in actual wrongdoing. Ultimately, the Liar’s Dividend leads to Reality Apathy. When the "cost" of verifying information becomes too high, citizens stop trying to engage with objective facts altogether. This hollows out the democratic process, as accountability becomes impossible in a world where no evidence is definitive. The dividend doesn't just protect the liar; it bankrupts the truth, leaving society in a state where the most powerful actor is simply the one with the loudest, most persistent denial.
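
One proposed countermeasure to the collapse of the "gold standard" is cryptographic provenance: content-credential schemes such as C2PA attach a verifiable signature to media at the moment of capture. The toy sketch below uses a shared-secret HMAC from Python's standard library purely for illustration; real provenance systems use asymmetric signatures and signed metadata chains, and the key name here is hypothetical.

```python
# A minimal sketch of cryptographic provenance, the kind of countermeasure
# aimed at lowering the "cost of verifying": media is signed at capture,
# so later edits or outright fabrications fail the check. This toy uses a
# shared-secret HMAC; real systems (e.g., C2PA) use asymmetric signatures.
import hashlib, hmac

CAMERA_KEY = b"secret-held-in-camera-hardware"    # hypothetical

def sign(footage: bytes) -> str:
    return hmac.new(CAMERA_KEY, footage, hashlib.sha256).hexdigest()

def verify(footage: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(footage), signature)

original = b"raw sensor data from the press conference"
tag = sign(original)

print(verify(original, tag))                      # True: provenance intact
print(verify(b"deepfaked frames", tag))           # False: fabrication detected
```

Note the limitation: provenance does not detect fakes directly; it makes authentic footage cheap to verify, which is exactly the "cost" the Liar’s Dividend exploits.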

Surveillance presents a further structural threat. Traditional surveillance is about what you do: the camera sees you enter a shop; the GPS tracks your car. AI-driven surveillance is about who you are and what you might do. This is the shift from "Data Collection" to "Inference Privacy." Modern AI models are so adept at pattern recognition that they can infer sensitive information about you that you never actually disclosed. By analyzing your typing rhythm, your choice of adjectives, or your "likes" on social media, an AI can predict your political leaning, your sexual orientation, or your likelihood of developing a mental health condition with startling accuracy. The ethical violation here is a breach of the "Mental Sanctum." If a company can predict you are pregnant before you’ve told your family, or if a government can flag you as a "potential dissident" based on your linguistic patterns, the concept of privacy is dead. We are moving into a "Full-Spectrum Surveillance" state where our very thoughts and biological predispositions are mapped and monetized by entities we cannot see.
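
The leap from "Data Collection" to "Inference Privacy" is statistical, and a toy model makes it visible. In the sketch below (all data fabricated), a classifier trained on users who did disclose a sensitive trait learns to predict it for users who never did, from innocuous "likes" alone.

```python
# A minimal sketch of "Inference Privacy": a model trained on people who
# *did* disclose a sensitive trait learns to predict it for people who
# never did, using only innocuous signals. Data is entirely synthetic.
from sklearn.linear_model import LogisticRegression
import numpy as np

rng = np.random.default_rng(1)
n, n_pages = 2000, 20

trait = rng.integers(0, 2, size=n)               # sensitive, undisclosed trait
base = rng.random(n_pages)                       # population-wide like-rates
tilt = rng.normal(scale=0.2, size=n_pages)       # faint trait-correlated taste

# Each person's "likes": innocuous pages, weakly correlated with the trait.
probs = np.clip(base + np.outer(trait, tilt), 0.02, 0.98)
likes = rng.random((n, n_pages)) < probs

# Train on the 1,500 users who disclosed; infer for the 500 who did not.
model = LogisticRegression().fit(likes[:1500], trait[:1500])
acc = model.score(likes[1500:], trait[1500:])
print(f"inferred an undisclosed trait with {acc:.0%} accuracy from likes alone")
```

No single "like" reveals anything; the violation emerges only in the aggregate, which is why consent-per-datum frameworks struggle to contain it.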

As AI moves into the realm of mental health apps and elder-care robots, we face the "Empathy Paradox." These systems are designed to simulate care. An AI chatbot can be programmed to use "empathetic" language—phrases like "I understand how you feel" or "I'm here for you." However, as we noted earlier, the AI has no "I" and no capacity to "feel." It is a mathematical simulation of compassion. The ethical risk is that we are creating a "Counterfeit Sociality." For vulnerable populations—the lonely, the grieving, or the elderly—synthetic empathy may provide temporary comfort, but it risks replacing real human connection with a hollow digital substitute. If we solve the "loneliness epidemic" by giving everyone a "Stochastic Parrot" to talk to, have we actually solved the problem, or have we merely automated our neglect of one another?
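
The "Counterfeit Sociality" point is easiest to see in code. The sketch below is a deliberately crude caricature, not any real product: the "empathy" is a lookup table triggered by keywords, and the first-person pronoun is a string literal with no referent.

```python
# A minimal sketch of "Counterfeit Sociality": the "empathy" below is a
# lookup table. There is no inner state that feels anything; the first
# person pronoun is just a string literal.
import random

EMPATHY_TEMPLATES = [
    "I understand how you feel.",
    "That sounds really hard. I'm here for you.",
    "You're not alone in this.",
]

def empathetic_reply(user_message: str) -> str:
    # The user's words are never understood, only pattern-matched.
    if any(w in user_message.lower() for w in ("sad", "lonely", "grief")):
        return random.choice(EMPATHY_TEMPLATES)
    return "Tell me more."

print(empathetic_reply("I've been so lonely since my wife died."))
```

A production chatbot is vastly more fluent than this, but the ontological situation is the same: a simulation of the signals of care, without the state that care names.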