Assault of the Algorithms: How Code Became a Weapon Against Human Freedom


The invisible hand of the algorithm: Code has escaped the lab and entered the realm of power—shaping thought, behaviour, and democracy itself.


As explored in a previous article, Hacking Human Civilisation: How AI Is Rewriting the Rules of Reality, artificial intelligence is not merely another tool we are building. It is beginning to rewrite the very code that underpins our shared reality—hacking the software of language itself, the core system that differentiates Homo sapiens from the rest of the animal kingdom.

This piece goes deeper into how the algorithms crafted by tech giants are waging an all-out assault on democracy, culture, and the very foundations of liberty. These are no longer banal tools created in a Pollyannaish attempt to increase global connectivity. They have evolved—or been designed—into a potent form of psychological weaponry.

It’s no longer just about recommended videos or targeted ads. Algorithms have ascended the dominance hierarchy: they are now the curators of culture, the arbiters of truth, and the invisible hands shaping politics, belief, and even revolution.

Of course, AI brings enormous benefits. At Tech Growth Policy, we are not reactionary Luddites. We acknowledge the profound opportunities AI offers. But we must also confront the deep disequilibrium it has caused in human power structures. What we face is a quiet yet escalating conflict between users and the invisible architects of our digital lives.

This article will dissect the hidden mechanics behind this shift—and offer concrete proposals to regulate, tax, and demonopolise the rising “technocratic brologarchy,” a class whose power today rivals that of any monarch, pope, or empire in history.

The Feedback Loop of Outrage

Ever since the cognitive revolution, Homo sapiens have shaped reality not just through physical tools, but through shared myths—religion, nations, currencies, laws, even art. These intersubjective realities, created and transmitted through language, gave birth to culture, which in turn became the foundation for cooperation, meaning, and survival. It is through culture that we construct our value systems, establish social hierarchies, and define political legitimacy. Without large teeth, claws, or speed, our ancestors hunted mammoths and built empires by sharing stories, aligning beliefs, and organising collective action through the unifying software of words.

But today, that software is being reverse-engineered. As I argued in Hacking Human Civilisation, artificial intelligence is no longer merely a passive tool—it is actively hacking the underlying operating system of our civilisation. Large Language Models (LLMs) such as GPT-4, Gemini, and Claude are capable not only of analysing vast corpora of cultural data but of reshaping that culture in turn. In this way, AI does not simply reflect reality—it increasingly creates it. It now occupies the cultural feedback loop, consuming our collective thoughts and emotions and feeding them back to us in optimised form.

And this feedback loop is not neutral.

AI systems like DeepMind's AlphaGo have demonstrated that machines are capable of strategic thinking that surpasses human intuition. In the legendary 2016 match against Korean Go champion Lee Sedol, AlphaGo’s Move 37—initially considered a blunder—redefined how humans understand strategy itself. But while such breakthroughs showcase the creative and intelligent potential of AI, they also raise a critical question: What happens when this creativity is turned not toward cooperation, but toward the exploitation of human vulnerabilities?

We already know the answer. We have seen it. The algorithms that drive platforms like YouTube, Facebook, and Twitter are optimised not for truth or public good, but for engagement. And engagement, it turns out, is driven most powerfully by outrage, fear, and tribalism. Research shows that emotionally charged and divisive content spreads faster than factual or compassionate content. As Yuval Noah Harari points out in Nexus, the tech elite have long operated under the “naive view of information”: the belief that if people are simply allowed to speak freely, truth will eventually win. But history—and now the data—shows otherwise.

Truth does not naturally rise to the top of a digital ecosystem. What rises is what spreads. And what spreads most easily is what triggers our ancient evolutionary biases: negativity, suspicion, hostility, and ego. As Steven Pinker highlights in Enlightenment Now, humans possess a deep negativity bias—our brains evolved to respond more viscerally to threat than to harmony. This is the precise vulnerability that modern algorithms exploit. Instead of elevating dialogue, they amplify division. Instead of rewarding nuanced thought, they prioritise simplicity, certainty, and spectacle.

The real-world consequences have been catastrophic. In Myanmar, Facebook’s platform was instrumental in amplifying anti-Rohingya hate speech that culminated in ethnic cleansing. In Brazil, YouTube algorithms radicalised viewers into right-wing extremism, contributing to the political ascent of Jair Bolsonaro. And in the United States, the 2016 election revealed how misinformation and conspiracy theories can proliferate faster than fact-checked reporting—fracturing the political landscape and corroding institutional trust.

What’s worse, these algorithms do not simply respond to existing human preferences; they shape them. They are not mirrors but sculptors. The more divisive the content, the more it is shared; the more it is shared, the more the algorithm learns to serve it. Over time, this creates a self-reinforcing cycle in which users are nudged into ever-more extreme echo chambers, belief systems ossify, and dialogue collapses. This is the new architecture of information: engineered division masquerading as neutral connectivity.
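The dynamic is simple enough to sketch in a few lines of code. The toy simulation below is a deliberately simplified illustration, not any platform's actual system: the post catalogue, engagement probabilities, and update factors are all invented assumptions. It shows only the core mechanism: a recommender that updates its scores on engagement alone drifts, over time, toward whatever content provokes the strongest reaction.

    import random

    random.seed(0)

    # Toy catalogue: each post has an "outrage" level between 0 and 1.
    # (Numbers invented purely for illustration.)
    posts = [{"id": i, "outrage": i / 9} for i in range(10)]
    scores = {p["id"]: 1.0 for p in posts}  # the recommender's learned scores

    def engagement(post):
        """Simulated user: more outrage means a higher chance of a click or share."""
        return random.random() < 0.2 + 0.7 * post["outrage"]

    for _ in range(5000):
        # Recommend in proportion to learned scores (pure engagement optimisation).
        post = random.choices(posts, weights=[scores[p["id"]] for p in posts])[0]
        if engagement(post):
            scores[post["id"]] *= 1.01   # reward whatever got a reaction
        else:
            scores[post["id"]] *= 0.995  # quietly demote everything else

    top = max(posts, key=lambda p: scores[p["id"]])
    print(f"Most recommended post: id={top['id']}, outrage={top['outrage']:.1f}")
    # The loop never asks whether the content is true or good;
    # it only learns that outrage keeps people engaged.

Run it and the highest-outrage posts end up with the highest scores, not because anyone chose division, but because nothing in the objective ever penalised it.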

The cost is not just cultural—it is civilisational. A medium designed to foster shared understanding has become a battlefield of micro-conflicts. The promise of the internet—to unite humanity—has mutated into a dystopia where politics becomes performance, identity becomes ammunition, and public discourse becomes impossible. Algorithms, once imagined as facilitators of knowledge, are now psychological weapons of mass polarisation.

Digital Bureaucracy – The Rise of the Omnipresent Leviathan

But the power of algorithms extends far beyond mere information warfare. These are no longer tools confined to our screens or minds—they have infiltrated the very architecture of governance itself. Quietly, steadily, they have transformed how we are monitored, managed, and manipulated by entities we never elected, and which remain unaccountable.

Thomas Hobbes once described the modern state as a Leviathan—a sovereign entity to which individuals surrendered some liberty in exchange for order and security. But today, we are no longer handing our autonomy to flesh-and-blood governments. Instead, we are surrendering it to inorganic bureaucrats, to the invisible, all-seeing code of algorithmic systems. These algorithms do not just influence what we think—they shape how we think. And unlike the bureaucrats of the past—slow, visible, and bounded—these new rulers are instant, invisible, and omnipresent.

In a previous article, I warned that we risk creating technologies more powerful than iron-age gods—entities not born of mythology, but forged in silicon. That warning may have sounded like hyperbole. But it is no exaggeration to say that we now live in a world where these digital bureaucrats operate 24/7, analysing every click, every message, every face. The techno-optimists may dismiss this as scaremongering. Yet the reality is starker: we are building an AI-powered panopticon—and doing so willingly.

We have not been coerced into giving up our privacy; we have done so out of convenience. Social media giants like Meta and TikTok have harvested our data in return for entertainment, validation, and dopamine. But when this culture of surveillance is merged with the coercive power of authoritarian states, the consequences are dystopian. Consider the case of Iran’s hijab enforcement regime. In 2022, after decades of struggling to enforce mandatory headscarf laws, the Iranian state turned to facial recognition algorithms. Public surveillance was automated. Women’s bodies were no longer policed by human agents on the street but by neural networks watching every movement. Following the death of Mahsa Amini, a 22-year-old woman arrested by Iran's “morality police,” protests erupted under the banner “Woman, Life, Freedom.” And yet the regime’s response was not reform—but reinvestment in algorithmic repression.

This is not an isolated case. As I wrote in The God Complex: Technology, Terror, and the Tyranny to Come, we are rapidly approaching a world in which software replaces soldiers. You no longer need tanks in Tiananmen Square; all you need is predictive analytics and a blacklist of digital dissidents. You no longer require secret police to monitor speech; a keyword-flagging algorithm will do. The Soviet Union needed brutal manpower to crush the Hungarian Revolution of 1956. Today, a few lines of code could silence a movement before it starts.

The defenders of this new digital order—especially the Silicon Valley elite—often invoke what Yuval Noah Harari called the “naive view of information.” They argue that just as the printing press gave rise to mass literacy and liberal democracy, AI will do the same. But this is a selective reading of history. As Carole Cadwalladr has documented so forcefully, the same technologies that enable truth can just as easily enable tyranny. The printing press gave us both The Rights of Man and Mein Kampf. Radio broadcast both Roosevelt’s fireside chats—and Goebbels’ propaganda.

We now face a binary that must be refused: chaos or control. On one side lies the West’s unregulated, polarised information ecosystem—where lies spread faster than truth, and civil discourse is dying. On the other lies the seductive promise of total order—offered by populist strongmen and algorithmic autocracies. We must not be forced to choose between the chaos of democracy and the tyranny of engineered consensus.

The reality is, we are already living inside an algorithmic bureaucracy. One that doesn’t merely record our actions, but anticipates them. That doesn’t just process data—but dictates it. And like Hobbes’ Leviathan, it offers protection at a price: your agency for its stability. But unlike Hobbes’ sovereign, this one cannot be voted out, overthrown, or even seen.

The next great political challenge is not simply to regulate data—but to reclaim sovereignty. Because when control over what we see, believe, and become lies in the hands of a few coders, unelected executives, and machine-learning models—then democracy itself becomes negotiable.

Yet, in the face of this overwhelming power, the architects of our digital reality offer a curious defence: “We are just giving people what they want.” This is not just intellectually dishonest; it is the great moral cop-out of our age.

Blaming the User – The Great Cop-Out

The question of free will has tormented philosophers for centuries. Are we truly autonomous agents, or are our decisions merely the downstream effects of unconscious processes, shaped by biology and environment? This article won’t attempt to resolve that ancient debate. But one thing is certain: the claim made by tech executives—that users “choose” what they consume—is as philosophically naive as it is morally irresponsible.

To say that individuals freely choose what content they see on their feeds is like saying we choose our next thought. But anyone who has ever tried to sit silently for a minute knows that thoughts do not arrive through conscious intention. They arise unbidden, from the murky depths of memory, pattern, and impulse. In the same way, the Instagram feed, the TikTok algorithm, the YouTube autoplay—these are not mirrors of our desire. They are machines designed to exploit it.

What people consume on social media is less a reflection of freedom and more a product of habit loops—reinforced by algorithms that learn, adapt, and optimise for emotional hijack. These systems are trained not on what makes you flourish, but on what keeps you hooked. And what keeps us hooked is often what makes us worse: outrage, novelty, scandal, humiliation, fear.

Some will protest: “People want this content. We're just meeting demand.” But that’s like saying alcoholics want vodka—ignoring the system that supplies it, removes the exits, and spikes every drink with just enough novelty to keep you coming back. The reality is, Big Tech didn’t just stock the bar. They designed the entire tavern, hired the bartenders, set the lighting, and ensured the drinks were always free.

In her groundbreaking book Dopamine Nation, psychiatrist Anna Lembke explores how modern technology hijacks the brain’s reward system. Social media, she argues, operates much like a drug—activating the same dopamine circuits as cocaine or heroin. Each notification, like, or comment becomes a microdose of validation. The result? A perpetual cycle of craving, reward, and withdrawal. And unlike traditional addictions, this one is socially sanctioned, algorithmically enhanced, and nearly impossible to escape.

When confronted with the harms of this digital addiction—rampant anxiety, broken attention spans, polarised democracies—platforms absolve themselves with a shrug: “We don’t decide what people watch.” But this is a lie of omission. The algorithm decides. The interface decides. The notifications decide. The A/B tests and optimisation scripts decide. And most importantly, the designers decide what the system rewards.

Responsibility cannot be outsourced to the user when the system is engineered to bypass agency itself. This is not about consumer choice. It’s about systemic coercion, behavioural engineering, and corporate abdication of responsibility.

Until we reject the myth that the user is in control, we cannot begin to build a healthier digital public sphere. The truth is, the algorithm doesn't follow demand—it manufactures it. And it does so in a world carefully engineered to bypass human agency, exploit psychological weakness, and reward the most destructive forms of attention.

It’s time to abandon the lazy, comfortable fiction that this is simply a matter of “individual responsibility.” We now know who designed the systems. We know who profits from them. And we know who has the power to change them.

That means we can—and must—act. We must regulate, de-monopolise, and re-engineer our digital infrastructure to align with democratic values, not corporate incentives. We must embed into our technologies the very principles that have made liberal democracies—imperfect as they are—uniquely capable of progress: openness, error-correction, pluralism, and accountability.

As Yuval Noah Harari reminds us, the defining feature of the scientific worldview is not its infallibility—but its capacity to self-correct. Unlike religion or authoritarianism, which begin with the premise that they are right, science and democracy assume they are flawed—and build systems to check, revise, and improve over time. This ethic must now be extended into the governance of AI.

We must create robust feedback loops where power is decentralised—spread centrifugally across courts, media, watchdogs, academia, and civil society—so that no single force can dominate unchecked. But to do that, we must first build a shared moral framework for how AI should be governed.

And that starts with public debate—bringing together the academic world, policymakers, private sector engineers, and ultimately, the public. Because the danger is not just that AI evolves too fast—it’s that our politics remains too slow. If we fail to confront these questions now, we risk being acted upon rather than acting—just as we were throughout the 2010s, when algorithms silently assaulted our culture, our politics, and our collective wellbeing.

Securing Our AI Future – A Roadmap for Human-Centric Technology

Now that we’ve exposed the moral abdication at the heart of Big Tech’s defences, the next question must be: what do we do about it?

To reclaim agency in the digital age, we need more than outrage—we need architecture. We need to build systems and institutions that align with democratic ideals, not corporate profit. And that means developing a bold, pragmatic roadmap to ensure that artificial intelligence serves human flourishing, not the slow decay of civic life.

This is not about rejecting technology. It’s about reclaiming control over it.

Democratising AI Governance

AI cannot be left to the engineers alone. We need independent, transparent oversight bodies that include scientists, ethicists, civil society leaders, and the public. AI policy must be debated not just in boardrooms but in parliaments and public squares. This means mandatory algorithmic audits for any platform with wide societal influence, and enforceable requirements for transparency—users deserve to know how decisions are made, what data is collected, and why they’re being shown what they’re shown.

Just as food requires nutritional labels, so too should digital platforms be required to show the ingredients and effects of their algorithms.
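What such a label might contain is easy to sketch. The example below is purely hypothetical (every field name and value is invented for illustration and drawn from no existing regulation or platform); it simply shows the kind of structured disclosure an enforceable transparency rule could require.

    # A hypothetical "algorithmic label". All fields are invented assumptions;
    # a real standard would be defined by regulators, not by the platforms themselves.
    algorithm_label = {
        "platform": "ExampleFeed",
        "optimisation_target": "predicted watch time",   # what the system maximises
        "signals_used": ["watch history", "likes", "location", "inferred interests"],
        "data_retention": "36 months",
        "personalisation": True,
        "opt_out_available": True,                        # e.g. a chronological feed
        "last_independent_audit": "2024-11",
        "known_risks": ["amplification of divisive content", "addictive design patterns"],
    }

    for field, value in algorithm_label.items():
        print(f"{field}: {value}")

Just as a calorie count does not tell you everything about a meal, a label like this would not capture every harm, but it would at least make the optimisation target, the data sources, and the audit trail visible at a glance.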

De-Monopolising Big Tech

The rise of algorithmic dominance has gone hand-in-hand with economic concentration on a scale not seen since the Gilded Age. Meta, Google, and Amazon operate as unelected superpowers—governing speech, markets, and behaviour through data monopolies. We must break these empires up under modernised antitrust frameworks that treat data as a strategic asset, not a private commodity.

Platforms must be forced to interoperate, enabling users to migrate their data freely and choose services that reflect their values. Open-source alternatives, decentralised models, and public-interest tech must be funded and given room to grow.

Rewiring Incentives

We must shift away from the surveillance-capitalism model, which monetises manipulation and division. That begins with taxing data extraction, regulating targeted advertising, and incentivising platforms to build in alignment with human wellbeing.

We should explore the creation of public digital infrastructure—a nonprofit search engine, a publicly accountable social platform, or a news feed governed by editorial principles, not clickbait economics. Just as we fund libraries, public broadcasters, and schools, we must now begin investing in public digital space.

Digital Literacy & Public Deliberation

This moment demands a digitally literate citizenry. We need to integrate education on algorithmic bias, attention engineering, and data ethics into our national curricula. Every young person should understand how the tools they use shape their perception of reality.

Beyond the classroom, we must establish citizen assemblies on AI policy, create spaces for deliberation between the academic world, tech leaders, and the broader public, and fund investigative media dedicated to exposing algorithmic harms.

Embedding Ethics Into Code

Laws must catch up with code. That means legally binding protections for:

  • The right to explanation (how decisions are made)

  • The right to opt out of algorithmic curation

  • The right to audit models used in critical systems

Globally, we must pursue an AI Treaty—something akin to a Geneva Convention for Algorithms—banning uses such as AI-enhanced mass surveillance, facial recognition at protests, and lethal autonomous weapons.

Because if we fail to set ethical red lines now, we may soon cross boundaries we cannot come back from.

The future is not yet written. Algorithms may be fast, but values can be resilient—if we choose to defend them. The real battle is not between humans and machines, but between short-term profit and long-term progress, between convenience and conscience. And that battle will be decided not by code—but by courage.
