EA PATH2IMPACT
FELLOWSHIP 2026
THE RIGHT TO PRIVACY
How AI Will Erode Our Autonomy and What We Can Do
Path to Impact Fellowship | University of Bath

Before reading, there are some things I should make clear.

I am pro-AI. I am also pro-human.


AI should preserve human values, not erode them. We live in a time of great change, and there is a thin line between overregulation (stagnation) and unchecked progress (danger). All of us building AI, myself included, must be held to the highest standard of responsibility to ensure a healthy future shared by humanity and AI.

Privacy is defined as "a state in which one is not observed or disturbed by other people." It is the freedom to make your own choices and to live your life without fear of being watched. Without it, our behaviour is no longer free and our lives are no longer our own. Privacy enables self-actualisation and equal opportunity, and it is the cornerstone of any healthy society.

5B+
Internet users under AI surveillance daily (World Population Review, estimates vary).
87M
Facebook profiles harvested by Cambridge Analytica (see Huntress for more)
r=0.56
Correlation between machine personality prediction and a person's self-reported personality; a close friend's prediction reaches only r=0.48 (Nature)

Data is THE commodity of the Fourth Industrial Revolution. Five billion internet users leave digital footprints daily. Even without AI, these reveal locations, purchases, relationships, and political views. With AI, the picture becomes complete, all-encompassing and profoundly dangerous.

THE PROBLEM: Delayed suffering. Unlike physical surveillance cameras, AI-driven profiling is opaque, hidden and insidious. The harm isn't apparent until it's too late: until you're denied a job, manipulated into a purchase, or pushed toward extremism.
01

MANIPULATION

Models can know you better than you know yourself. AI predicts behaviour more accurately than close friends or family. This asymmetry of knowledge creates asymmetry of power.

  • Behavioural nudging: Algorithms don't just predict what you want, they create it. Each click trains the system to manipulate you better.
  • At least 87 million profiles harvested for psychological micro-targeting, and that is only what we know of from Cambridge Analytica.
  • Dead Internet Theory: AI-generated content is beginning to drown out authentic discourse. One can no longer be sure whether an opinion was formed under the conscious scrutiny of a human mind.
  • Democracy requires good-faith debate of clashing ideas, and informed citizens making autonomous choices. Networks of bots and AI echo chambers destroy both of these things.
02

INEQUALITY

The wealthy can afford privacy. Encryption, private VPNs, legal teams. The wealthy can buy their way out.

The poor are surveilled, profiled, exploited. Free services harvest data. Algorithmic hiring filters. Predictive policing. Predatory advertising.

A common counterargument, "profiling just gives people what they want," fails. The desire should precede the purchase, not be manufactured for it. This kind of marketing fabricates problems and then sells itself as the solution, exploiting the many with little benefit to the average person.

Privacy inequality compounds all inequalities. Without privacy, you're locked into algorithmic profiles determining credit, jobs, insurance - often proxies for race and class.

When used this way, AI is a tool of oppression. It strips the average person (who cannot meaningfully contest) of their agency and concentrates power at the top.

03

SELF-CENSORSHIP

The Panopticon effect: people self-censor when watched. Jeremy Bentham's Panopticon prison design worked through uncertainty alone: inmates behaved as if observed because they could never know when they were. Today we live in a digital panopticon.

  • All individuals at risk, but especially activists, journalists, whistleblowers.
  • Progress requires challenging status quo. EA itself challenges conventional charity. Without privacy and the ability to act on your will alone, such challenges become dangerous.
  • Surveillance breeds conformity. Conformity breeds stagnation. When every deviation is recorded and has the potential to be weaponised, people choose safety over truth.

Why This Problem Persists

Individual consent is broken. The average Terms of Service agreement takes 74 minutes to read. Companies design them to be incomprehensible. No one reads them. Everyone clicks "agree."

Regulation lags by years. By the time laws address one violation, three new techniques emerge. Legislators often fail to understand the nuances of the technology. Lobbyists ensure they never will.

Economic incentives favour surveillance. The surveillance economy is worth more than $150 billion (Business Research Company, 2024). Every privacy-preserving choice reduces profit.

Capitalism is structurally disincentivised from prioritising goodwill. When immoral profit opportunities exist, they will be exploited. Change requires changing incentives: tighter regulation, real-time enforcement, criminal penalties for executives.

Most AI safety work ignores privacy. Resources flow toward alignment and existential risk. Privacy advocacy remains fragmented and under-resourced.

I argue: Intentional misuse is a greater threat than accidental misalignment. We debate hypothetical superintelligence while companies actively manipulate billions. The catastrophe isn't coming. It's here.

SOLUTIONS: What We Can Do

1. Privacy-Preserving AI

Federated learning, differential privacy, homomorphic encryption. The technology to protect privacy while enabling AI already exists; it needs funding and adoption.
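To make one of these concrete: differential privacy lets an analyst learn aggregate facts while mathematically bounding what can be inferred about any single individual. A minimal sketch in Python of the Laplace mechanism for a counting query (the function names are illustrative, not from any particular library):

```python
import random

def laplace_noise(scale: float) -> float:
    # The difference of two i.i.d. exponential draws is Laplace-distributed.
    return random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)

def private_count(records, predicate, epsilon: float = 1.0) -> float:
    """Answer "how many records satisfy predicate?" with epsilon-DP.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon is enough to mask any individual's presence.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)
```

Smaller epsilon means more noise and stronger privacy; real deployments compose many such noisy queries and track the cumulative privacy budget.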

2. Open-Source Alternatives

Signal, Firefox, DuckDuckGo. Privacy-respecting tools need funding to compete with surveillance giants.

3. Auditing & Transparency

Mandatory algorithmic impact assessments. Red-teaming for privacy violations. Legislation requiring explainability.
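An algorithmic audit need not be elaborate to be revealing. As a sketch (the function names are illustrative, and the 0.8 threshold follows the common "four-fifths rule" of thumb, not any statutory standard), an auditor could compare a hiring filter's selection rates across demographic groups:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Per-group selection rates from (group, was_selected) pairs."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Ratio of lowest to highest selection rate.

    Values below 0.8 fail the four-fifths screening threshold and
    flag the system for closer review.
    """
    return min(rates.values()) / max(rates.values())
```

Mandating that such numbers be computed and published is a far weaker ask than full explainability, which is part of why impact assessments are a practical first step.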

4. Privacy Literacy

People need to viscerally understand surveillance, not just abstractly. Show them their data profile. Reveal the hidden infrastructure. Build collective action.

74min
To read average Terms of Service (The Biggest Lie on the Internet, estimates vary)
$150B+
Surveillance economy valuation (2024, Business Research Company)
30B+
Photos in Clearview AI's facial recognition database (see BBC)

FINAL REMARKS

To conclude, we cannot wait on the goodwill of companies to protect our privacy; goodwill and profit rarely mix. If there is to be any change, then the incentives that encourage aggressive AI surveillance must themselves be dismantled. Sadly, this is a problem that only legislators can fully solve, and outside of educating yourself and others about the threat AI poses to privacy, or supporting specific NGOs, there is little we can do as individuals.

Here is the bottom line that I want you to remember: the system that exploits our every action relies on your learned helplessness. It relies on the nihilistic belief that you are powerless to change your situation and so you should accept your current state as the norm. But this is not normal, at least not in any society that values freedom. So even though your influence as an individual is small, you should never accept this exploitation. One day, the law will catch up.

"Until then, we must stay true to our values and keep pushing for a better future, because one day we will get there."