Nexus, by Yuval Noah Harari — Summary

Synopsis

Nexus argues that information’s primary function is not to represent reality accurately but to connect people into networks capable of coordinated action — and that this distinction is the key to understanding why societies organized around myths and fictions can be extraordinarily powerful while remaining deeply wrong. The book’s central paradox is that more information does not automatically produce more truth or wisdom: information networks have repeatedly generated power faster than the wisdom to wield it. The decisive question for civilizational survival is therefore not whether information systems produce more truth, but whether they build self-correcting mechanisms strong enough to detect and fix errors before power outstrips judgment. AI marks a qualitative rupture: it is the first information technology that participates in networks as an agent — making decisions, generating ideas, and shaping outcomes in ways that may be unintelligible to the humans nominally in control.

The argument is structured as a historical typology across four information technologies — stories (scalable but mythical), documents and bureaucracy (precise but rigid), sacred texts (stabilizing but infallibility-claiming), and now AI. Harari maps the tension between truth and order across three information revolutions, using case studies ranging from Mesopotamian livestock tablets to Soviet collectivization, from print-era witch hunts to AlphaGo’s move 37. Part I establishes the analytical framework: information as connection rather than representation, the recurring fantasy of infallibility, and democracy vs. totalitarianism as competing information architectures. Part II analyzes AI as a new kind of network member — the first technology to form computer-to-computer chains that bypass human intermediaries and to intervene in civilization’s symbolic operating system by manipulating language, narrative, and code. Part III applies the framework to democracies, dictatorships, and geopolitics. The method is macro-historical synthesis: case studies chosen to illuminate structural logic, not to establish exhaustive causal arguments.

For this vault, Nexus is essential analytical infrastructure for understanding how information ecosystems shape political reality — directly relevant to Brazilian democratic erosion, the political effects of social media, AI governance debates, and global geopolitical realignment. Harari’s framework for distinguishing information systems by their self-correcting capacity translates directly into questions about Brazilian media, populism (Bolsonarismo’s conspiracist epistemology as a “blocked artery”), and the concentration of algorithmic power. The chapter on the “dictator’s dilemma” — hand decisive power to AI or remain dependent on untrustworthy human subordinates — is a useful lens for analyzing authoritarian drift globally and in Brazil. The Afterword’s warning that the AI race accelerates because humans distrust each other but assume they will trust the superintelligence they are hurrying to build is directly relevant to the vault’s AI governance material.


Prologue

Harari opens the prologue with a blunt paradox: Homo sapiens calls itself wise, yet the record of the species suggests something closer to technical brilliance without moral mastery. Humans have accumulated enormous power over roughly one hundred thousand years, but that power has not translated into sound judgment. Instead, it has helped produce an age of ecological destabilization, weapons capable of civilizational destruction, and technologies — above all artificial intelligence — that may exceed our control. The central question is therefore not whether humanity has become more informed, but why greater knowledge and capacity have not made it less self-destructive.

To stage that problem, Harari turns to two classic cautionary tales: the Greek myth of Phaethon and Goethe’s “The Sorcerer’s Apprentice.” Both stories dramatize the same fear: humans seize powers they are not mature enough to wield, lose control, and threaten catastrophe. Harari treats these stories as enduring metaphors for modernity, especially for a world that has already destabilized the climate and is now unleashing algorithmic systems whose long-term behavior remains obscure. But he also argues that the old myths are incomplete, because they imply that disaster follows from the failings of a single reckless individual and that rescue must come from some higher authority.

That, for Harari, is the wrong diagnosis. Human power does not arise primarily from isolated individuals but from large-scale cooperation. The real source of danger is not simply greed, vanity, or cruelty in particular persons, though those traits obviously matter. It is the way human beings build vast networks that allow millions of strangers to coordinate action. These networks magnify power dramatically, but they do not reliably channel it toward wisdom. The problem is therefore social before it is psychological. Harari presses the point with the example of Germany in 1933: the explanation for Hitler’s rise cannot be that an entire population suddenly became psychopathic. The deeper issue is how whole societies come to organize themselves around destructive forms of belief and obedience.

This leads to the prologue’s core thesis: the human crisis is, above all, an information problem. Information is what binds networks together, sustains cooperation, and gives institutions their operational coherence. Yet the information that holds networks together is often not objective truth. Large human orders have historically depended on stories, symbols, dogmas, and shared fictions no less than on accurate records or empirical knowledge. Harari is careful here: he does not say that individuals never seek truth, but that large-scale systems often need simplification, myth, and disciplined belief to preserve order. That is why societies organized around obvious falsehoods can still become extraordinarily powerful. Nazism and Stalinism were not weak because they were delusional; in historical terms, they were among the strongest information networks humans ever built. The danger, then, is not that lies are always self-defeating, but that they can be politically formidable.

From there Harari identifies what he calls the “naive view of information” — the modern creed that more information naturally produces more truth, which in turn yields more power and better judgment. He demonstrates why the naive view remains seductive by dwelling on one of modernity’s genuine triumphs: the collapse in child mortality. The contrast between Goethe’s family experience and contemporary medicine is especially telling. In Goethe’s world, even affluent families routinely lost multiple children; today, advances in the collection, analysis, and dissemination of medical information have transformed survival rates. The trouble is that such successes tempt modern societies to overgeneralize — because information sometimes leads to truth and flourishing, we begin to assume it normally does, and then we mistake technical capacity for civilizational maturity.

That overconfidence becomes acute in the age of AI. Harari argues that humanity is now testing whether vastly expanded information processing will save the world or destabilize it beyond repair. He defines AI not as another passive tool but as the first technology that can make decisions and generate new ideas on its own. Knives, bombs, printing presses, and radio sets could extend human reach, but they did not independently interpret the world or initiate action. AI can. He identifies two broad dangers: a geopolitical danger, in which AI intensifies existing rivalries, creating a new “Silicon Curtain”; and a deeper structural danger, in which humans become subordinate to networks governed by nonhuman intelligence altogether.

The prologue also turns to the major ideological challenger to the naive view: populism. If the naive view treats information as a road to truth, the populist view treats it as a weapon in a struggle for domination. Harari argues that populism cannot solve the problem it identifies. Its call to “do your own research” flatters individual independence, but collapses under the scale of modern reality. No individual can personally verify the full body of evidence behind climate science, epidemiology, or AI safety. Harari’s position is a middle one: he rejects the internet-era fantasy that more information automatically yields wisdom, but also rejects the cynical claim that information is nothing but an instrument of power. Between those extremes lies the real task — to understand how information networks actually work, how they balance mythology and bureaucracy, how they correct or fail to correct error, and how democracies and totalitarian systems differ in their organization of information.


Part I — Human Networks

Chapter 1 — What Is Information?

The first chapter opens by refusing an easy definition of information. Harari notes that in physics, biology, and philosophy, “information” is often treated as something foundational, yet no single definition commands agreement. Because this book is about history rather than metaphysics, he narrows the question: what has information done in human societies? He begins with familiar examples — messages carried by pigeons, signs in nature, codes used in war — to show that information cannot be reduced to a particular material form. The same object can be meaningless in one setting and decisive in another.

From there Harari attacks the “naive view” of information: the common assumption that information is mainly an attempt to represent reality and that its highest form is truth. He does not deny that some information works this way, but insists that the representational view is only a small part of the story and a poor guide to history if taken as the whole. He defines truth with unusual care: truth is not fantasy, tribal loyalty, or mere perspective. It is an accurate representation of some aspect of a shared reality. Yet even the best truthful account can never reproduce the full complexity of reality. Every representation selects, compresses, and leaves things out. Truth, then, is always partial.

That limitation matters because it opens the way to Harari’s broader claim: most information does not primarily represent reality at all. Its deeper function is to connect. Information links people, institutions, and processes into networks. This is why astrology, though false as astronomy, remains historically important — it has organized lives, empires, and decisions for millennia. Information, Harari argues, often does not “inform” us about reality. It puts people and things into formation.

He extends this argument beyond human culture. Music usually represents nothing, yet it can make large groups march, mourn, worship, celebrate, or fight together. DNA is an even more striking case: it does not “describe” lions, fear, or the sun in any meaningful representational sense. Rather, it coordinates chemical processes across trillions of cells. If we define information only as accurate representation, we miss much of what makes life, society, and history possible.

By the end of the chapter, Harari has reversed a deeply embedded modern intuition. Human beings did not conquer the world because we are uniquely skilled at building an accurate map of reality. We conquered it because we are uniquely skilled at using information — including lies, myths, and fantasies — to bind huge numbers of individuals into flexible networks. Truth matters, but connection usually matters more in history. That is the frame for the rest of the book: not a celebration of information as enlightenment, but an inquiry into how successive information technologies increased coordination while often leaving truth behind.

Chapter 2 — Stories: Unlimited Connections

The second chapter argues that stories were humanity’s first great information technology because they allowed cooperation on a scale no other animal could achieve. Humans are not the strongest or wisest species, but they are the only ones able to cooperate flexibly in very large numbers. Chimpanzees can form small, fluid alliances; ants can coordinate in huge numbers but only in rigid ways. Humans combine both traits — not because of superior individual cognition alone, but because of the ability to create “human-to-story” chains. People who do not know one another personally can still cooperate if they know and believe the same story.

Harari shows that this logic applies even when it looks as if people are following a person rather than a narrative. A ruler, celebrity, or corporate founder connects millions only through an image built around them. Stalin understood that “Stalin” was not simply a man but a public myth. Modern branding works the same way. What people follow is the story, not the naked person or object. One of Harari’s strongest examples is Jesus: the historical Jesus may have been a provincial Jewish preacher with a small circle of followers, but the story of Jesus became something vastly larger, building one of the largest and most durable human networks in history.

The chapter next explores how stories can stretch biological bonds. The family is the most powerful natural template humans possess, and large narratives often scale themselves by converting strangers into kin. Christianity turns believers into brothers and sisters under a common father. Judaism repeatedly reactivates a collective memory of slavery in Egypt and revelation at Sinai, encouraging each generation to feel it personally participated in events it never witnessed. Storytelling can produce felt intimacy among people scattered across time and geography.

From this point Harari introduces a decisive conceptual distinction among three levels of reality: objective, subjective, and intersubjective. Objective realities exist whether or not anyone believes in them. Subjective realities exist in the experience of an individual mind. Intersubjective realities, by contrast, exist only in the shared stories between many minds. Laws, states, currencies, gods, and corporations belong to this category. The caloric value of a pizza is objective; the market value of bitcoin is intersubjective. This concept explains why stories gave Homo sapiens such an overwhelming advantage over rival human species: once bands could connect around common ancestors, spirits, laws, or tribal identities, they became parts of larger networks that could share risk, exchange knowledge, and coordinate violence.

The chapter’s final movement introduces the tension that will run through the entire book: the tension between truth and order. Human networks need both. They need enough truth to hunt, heal, build, and survive. But they also need enough order to make large populations cooperate, and fiction is often better at generating order than truth is. Harari contrasts Plato’s “noble lie” with the U.S. Constitution, which is also a fiction but one that openly acknowledges itself as a human-made legal construct — making self-correction possible. By contrast, systems claiming divine or absolute origin are more stable in one sense but harder to amend. History is not a march toward truth. It is a perpetual balancing act in which stories can create solidarity, peace, fanaticism, or catastrophe, depending on which stories prevail.

Chapter 3 — Documents: The Bite of the Paper Tigers

Chapter 3 begins by showing the limits of stories alone. Myths, poems, and nationalist visions can inspire people to found movements and states, but they cannot by themselves run a tax system, a sewage network, or a modern army. Harari uses Zionism to make the point with force: poets such as Bialik and visionaries such as Herzl helped imagine a Jewish state into existence, but imagination was not enough to build one. A state also requires dry information — records of property, taxes, debts, expenditures, inventories, and infrastructure. This opens a broader distinction between stories and lists. Human memory is exceptionally well adapted to narrative; we do not naturally retain ledgers, tables, and inventories. That is why large societies needed an external, nonorganic memory technology: the written document.

Harari turns to ancient Mesopotamia, where some of the earliest documents recorded deliveries of sheep and goats. These tablets matter not because they were profound texts but because they enabled administration. Significantly, even the earliest examples already contain mistakes — one famous tally does not add up. This is essential to Harari’s argument. Documents do not become important because they are always true. They become important because they create and stabilize intersubjective realities. Once ownership, taxation, contracts, and debts are tied to documents, the document is no longer just a representation of social reality. In a meaningful sense, it is the reality. Harari gives the striking Assyrian example of loan contracts that had to be “killed” when a debt was repaid: if the debtor paid but the document survived, the debt effectively survived.

But writing solves one problem only to create another. Once a society accumulates millions of documents, it needs a way to retrieve the right one at the right time. That is where bureaucracy enters. Harari defines bureaucracy as the solution to the retrieval problem — the invention of drawers, categories, shelves, and classificatory rules that tell people where each document belongs. That imposed order, however, distorts reality. Bureaucracy divides the world into fixed compartments even when the world itself is messy, fluid, and overlapping. Harari extends the point to academia and science: disciplines, departments, and taxonomies help institutions function, but they can also conceal the continuity of phenomena that cross categories. The COVID-19 pandemic was simultaneously biological, historical, mathematical, and political, yet academic bureaucracy separates those lenses.

Still, Harari refuses the easy anti-bureaucratic romance. Modern hospitals, licensing regimes, sanitation systems, and public health bureaucracies save lives precisely because they systematize information and routine. His example of John Snow’s cholera investigation in London makes this concrete: tedious collection and organization of data led to a life-saving discovery about contaminated water. Bureaucracy is not poetic, but it is one of the conditions of mass survival.

Why, then, do people so often hate bureaucracy? Harari’s answer is psychological as much as political. Human minds are tuned to understand “biological dramas”: rivalry, kinship, seduction, purity, betrayal, predators, and heroes. Bureaucratic power, by contrast, is impersonal, opaque, and difficult to narrate. Documents have no faces, drawers have no motives, and an archive can ruin a life without any dramatic villain stepping onstage. Kafka grasped this modern horror better than epic poetry ever could. The chapter closes by making the issue painfully personal: Harari’s grandfather Bruno lost Romanian citizenship not because of any substantive crime but because an antisemitic state weaponized census categories and documentary proof. Paper tigers are not harmless. They bite. Yet Harari does not end with a simple denunciation — the decisive question is how information networks detect and correct their own errors, which sets up the following chapters.

Chapter 4 — Errors: The Fantasy of Infallibility

Chapter 4 argues that every human information network faces the same stubborn problem: people make mistakes, yet societies need some way to trust decisions, rules, and shared stories. The core temptation is always the same: to imagine that a superhuman source — divine authority or, in the modern technological fantasy, an error-free machine — can rescue us from our own fallibility. This fantasy has repeatedly shaped civilizations, and repeatedly misled them.

Harari begins with the problem of mediation. Even if people want to submit to a divine will, they still have to know what that will is, and that knowledge must pass through human beings — prophets, priests, visionaries, interpreters, scribes. This is where the holy book enters the story as an information technology. A holy book seems to promise a way around the instability of oral transmission and personal charisma: instead of relying on living intermediaries, communities can stabilize divine speech in a fixed textual form. The book is designed to take humans out of the loop.

Harari then uses the history of the Hebrew Bible to show both the power and the illusion of this solution. What later became “the Bible” was not originally a single object but a large and varied collection of texts produced over centuries. The Dead Sea Scrolls, often treated popularly as ancient evidence of a fixed Bible, actually reveal a more plural archive. The canon had to be made — through a long historical process of human choice, institutional conflict, and editorial closure. A supposedly infallible book therefore emerged only after people first fought over what belonged inside it.

Once the canon exists, however, another problem immediately returns: copying and interpretation. This led to increasingly elaborate scribal disciplines and, above all, to the empowerment of institutions charged with interpretation. In Judaism, the rabbinate and the vast textual chain of Mishnah and Talmud arose not as accidental appendages to scripture but as necessary responses to the impossibility of a text interpreting itself. The result is one of the chapter’s central ironies: the attempt to ground authority in an infallible text generates new layers of highly fallible human authority. Far from eliminating intermediaries, the holy book produces a class of specialists whose job is to explain what the book means.

The same dynamic repeats in Christianity, where the authority of the text and the authority of the institution reinforced one another in a circular way: believers trusted church officials because of scripture, while trusting scripture because church officials told them which scripture to trust. Harari then turns to print culture and the Reformation to show that more textual access does not solve the problem. Protestants tried to bypass church authority by returning directly to scripture, but the result was not pure truth — it was a proliferation of sects, rival interpretations, and new authorities. More broadly, the early modern print revolution demonstrated that an unregulated information market does not automatically favor reality. His most striking example is the European witch craze: an expanding universe of printed manuals, confessions, and bureaucratic records created an intersubjective world in which witches became socially real despite lacking objective reality.

This is why the chapter treats the scientific revolution as a breakthrough not of printing alone, but of institutional design. Science advanced because new curation institutions and self-correcting mechanisms were built that rewarded doubt, revision, and the public exposure of error. Harari’s comparison between the Bible and the DSM (psychiatry’s Diagnostic and Statistical Manual) is especially telling: a holy book preserves authority by resisting revision, whereas a scientific manual proves its seriousness by being revised and in some cases repudiated. Yet Harari does not end by celebrating self-correction as a universal cure: strong self-correcting mechanisms are politically and socially expensive. The balance between truth and order remains tense and unresolved. The decisive question for the age of AI is not whether machines can produce more information, but whether they will strengthen or destroy the fragile self-correcting mechanisms on which free societies depend.

Chapter 5 — Decisions: A Brief History of Democracy and Totalitarianism

Chapter 5 reframes democracy and dictatorship as two different kinds of information network rather than merely two moral ideals or constitutional labels. For Harari, the defining question is how information flows, who gets to process it, and whether mistakes can be publicly exposed and corrected. Democracies distribute information and decision-making across many institutions and many citizens; dictatorships concentrate both in a central hub. Totalitarian systems represent the most extreme form: they not only centralize power but also try to absorb all channels of communication and all domains of life.

Harari attacks the simplistic equation of democracy with elections or majority rule. A society can hold elections and still be profoundly undemocratic if the conversation leading into those elections is distorted, coerced, or monopolized. Democracy requires institutions that protect human and civil rights, courts that can restrain rulers, media that can criticize power, and procedures that allow losing parties to remain legitimate participants in future debate. His emphasis falls on the quality of conversation rather than the mere act of voting. Democracy is a continuing process of collective self-correction, not a ritual in which the people periodically acclaim a winner and then fall silent.

This helps explain his critique of populism. Populist leaders claim to simplify politics by bypassing institutions and speaking directly for “the people,” but this is not democratic transparency — it is a mythic shortcut. Real democracies are messy because they contain friction: courts, journalists, bureaucrats, experts, opposition parties, local governments, procedures that slow decisions down and force them through multiple checkpoints. Populists portray those mediating institutions as enemies of popular sovereignty. In practice, they weaken the very mechanisms through which societies detect errors and correct abuses. A regime’s democratic strength must be measured not by the frequency of elections but by the density, freedom, and institutional protection of its public conversation.

Harari then places democracy in deep history. Small hunter-gatherer bands were often effectively democratic, because their small scale made broad participation feasible. The rise of large agrarian states changed that balance. Kings and dynasties could dominate grain stores, mines, irrigation systems, armies, and tax collection, while the sheer size of their populations made broad democratic deliberation impossible. Rome becomes his central example: the obstacle to empire-wide democracy was not that elections were unimaginable in principle, but that meaningful conversation among millions of dispersed citizens was technologically impossible. Large-scale democracy had to wait for a different communications environment.

That environment began to emerge with print and matured with mass media. Harari points to the Polish-Lithuanian Commonwealth and the Dutch Republic as early large-scale experiments. The American republic extended the model, with newspapers creating a broader public sphere. Technologies such as the telegraph, railroads, radio, and television then accelerated everything. His point is not that technology automatically creates democracy, but that without large-scale media, democracy cannot become genuinely mass democracy.

The same technologies, however, also made totalitarianism possible on an unprecedented scale. Harari distinguishes ordinary autocracy from totalitarianism by stressing technical capacity. Full totalitarianism required modern bureaucracies, fast communication, mass surveillance, and ideological ambition — what he describes as a totalitarian trinity: party, state, and secret police reinforcing one another. The center sought not merely obedience but informational monopoly. Hence the drive to coordinate every institution, supervise every conversation, and prevent autonomous nodes from emerging anywhere in the system. Totalitarianism therefore does not merely censor information; it reorganizes the emotional and moral architecture of society so that isolation, suspicion, and silence become normal.

The chapter ends by comparing the strengths and weaknesses of the two types. Totalitarian systems can be fast, disciplined, and capable of enforcing large decisions with brutal efficiency. But democratic disorder is tied to a crucial advantage: when one channel is blocked, information can still move through others. Harari contrasts the Soviet handling of Chernobyl, where suppression of bad news magnified the disaster, with the American response to Three Mile Island, where independent media and overlapping institutions spread the news quickly and enabled learning. Dictatorships suffer from “blocked arteries”: too much information rushes to the center, and too little truth returns. That contrast leads Harari to his forward-looking warning: new digital systems may alter the balance again, perhaps pushing the world beyond both classic democracy and classic totalitarianism toward a new divide between human beings and opaque algorithmic power.


Part II — The Inorganic Network

Chapter 6 — The New Members: How Computers Are Different from Printing Presses

Chapter 6 marks a decisive conceptual turn. Harari argues that the current information revolution should not be understood primarily through the internet, smartphones, or social media, but through the rise of the computer itself. His central claim is that computers are not merely faster tools for storing or transmitting information. They are a new kind of actor inside information networks. Unlike earlier technologies, they can begin to make decisions and generate new ideas on their own.

Harari sharpens this claim by contrasting computers with earlier information technologies. Clay tablets could record taxes, but they could not decide what tax rate should be imposed. Printing presses could reproduce scripture, but they could not determine which texts belonged in the canon. Radios could broadcast speeches, but they could not choose what to air or compose a speech. By contrast, computers can already perform all these functions in at least partial form.

The structural innovation that matters is this: in older human networks, every chain ultimately required a human intermediary. No sacred text could write its own commentary; no constitution could draft its own amendments. Computers break this pattern. For the first time, document-like entities can interact with other document-like entities without a human mind standing between them. Computer-to-computer chains can operate autonomously: one system generates a message, another classifies it, a third responds, and a fourth triggers some real-world consequence.
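
To make the structural point concrete, here is a minimal sketch in Python, with entirely hypothetical functions and values (nothing here comes from the book): a computer-to-computer chain in which each automated system consumes the previous one’s output and a real-world consequence is triggered without a human intermediary at any step.

```python
# Hypothetical illustration of a computer-to-computer chain (not from the book):
# each step is an automated system reacting to another automated system's output.

def generate_post(topic: str) -> str:
    """System 1: an automated account drafts a message."""
    return f"Breaking: shocking new claims about {topic}!"

def classify(post: str) -> str:
    """System 2: a ranking model labels the message."""
    return "high_engagement" if "!" in post else "low_engagement"

def set_promotion(label: str) -> float:
    """System 3: a recommender converts the label into a promotion score."""
    return 5.0 if label == "high_engagement" else 0.5

def buy_ad_slot(score: float) -> str:
    """System 4: an ad-auction system triggers a real-world consequence."""
    return f"ad slot purchased at ${score * 2:.2f} CPM"

# The chain runs end to end; no human mind stands between the documents.
post = generate_post("the election")
print(buy_ad_slot(set_promotion(classify(post))))
```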

Harari then extends the argument into bureaucracy, law, and finance. Human civilization depends heavily on systems built out of language: legal codes, financial instruments, administrative rules, religious doctrines, and political narratives. For a long time, mastery of these symbolic systems was a uniquely human power. When computers learn to process and generate words, images, sounds, and code, they gain access to what Harari calls the “operating system” of civilization. A machine that can handle language can draft contracts, write laws, generate propaganda, produce art, or invent financial instruments — it can act at the level where societies define reality.

The chapter is not built around consciousness. Harari explicitly sidelines the question of whether computers feel or understand in a human sense. For his purposes, what matters is agency. A system does not need inner experience to alter history. It only needs enough capacity to decide, generate, recommend, classify, and persuade at scale. His example of Facebook’s role in anti-Rohingya violence in Myanmar shows the political danger: the platform’s algorithms acted as historical agents by selecting, amplifying, and incentivizing particular kinds of content. The algorithm was not just a pipe through which hate traveled; it was part of the machinery that elevated inflammatory material and helped reshape the informational environment in which people formed judgments.

The chapter sketches the likely shape of the emerging network: older human-to-human and human-to-document chains will remain, but they will increasingly be joined or displaced by computer-to-human chains and computer-to-computer chains. The first influence choices, perceptions, and behavior; the second communicate with one another in ways humans may not fully track or understand. The political consequence, in Harari’s view, is that coding can no longer be treated as a merely technical act. To write software is to redesign social relations, power structures, and public institutions. Technology is not destiny — humans still choose where to invest resources, what systems to build, and what norms to impose. The real danger lies in fatalistic technological determinism, which treats those choices as if they had already been made for us.

Chapter 7 — Relentless: The Network Is Always On

Chapter 7 turns from the novelty of computers to one of their most immediate consequences: surveillance without interruption. Harari begins by reminding the reader that surveillance is not new — families, neighbors, priests, rulers, merchants, and bureaucrats have always wanted information about what people do, think, and feel. The crucial historical point, however, is that pre-digital surveillance was always incomplete. Even the harshest police states lacked the technical capacity to watch everyone all the time. Some level of privacy remained the default condition, not because tyrants were merciful, but because human surveillance was expensive, labor-intensive, and inherently limited.

Harari illustrates those limits through the striking anecdote of Romanian computer scientist Gheorghe Iosifescu, whose work in the 1970s drew the attention of the Securitate. A silent secret-police agent sat beside him day after day, watching everything he did. The scene is intimidating, but it also exposes the clumsiness of old surveillance: the regime had to devote a human body, hour after hour, to shadowing a single man. The very computer sitting on Iosifescu’s desk would eventually help produce a form of bureaucracy able to do what the agent never could.

This leads to Harari’s idea of “sleepless agents.” Digital bureaucrats do not get tired, need no salary, and can be present in countless places at once. Their advantage is not only computational power but persistence. Surveillance therefore ceases to be a localized encounter with an institution and becomes an ambient condition. Bureaucracy, policing, and commercial influence are no longer episodic interventions; they become a continuous background to existence.

Harari then describes “under-the-skin surveillance.” Computers do not only record external behavior; they increasingly infer internal states from minute physical signals. Eye-tracking, biometric patterns, heart rate, and other bodily data may reveal attention, distraction, emotion, preference, stress, or vulnerability. Once surveillance moves from “Where are you?” and “What did you click?” to “What are you feeling right now?” the asymmetry of power grows dramatically. Institutions no longer need only a map of people’s actions — they can begin to build a map of reactions, susceptibilities, and desires before those desires are fully conscious.

Iran’s digital enforcement of hijab laws serves as Harari’s most concrete example of what automated repression can look like. After decades in which morality police had to physically confront women in public spaces, the Iranian regime increasingly delegated enforcement to facial-recognition systems capable of identifying unveiled women, tracking them, and linking their identities to databases. The broader lesson is that AI allows power to become more distributed, less visible, and more scalable. A regime no longer needs a police officer on every corner if cameras, software, and integrated databases can do much of the work.

The chapter also insists that surveillance is not only a state project. Harari maps several varieties of monitoring that define twenty-first-century life: commercial systems gather data to shape consumption and price risk; intimate relationships can become miniature dictatorships through location tracking; vehicles report on driving habits to insurers; platforms like Tripadvisor institutionalize peer-to-peer surveillance, turning customers and strangers into permanent evaluators of service workers and businesses. The concluding force of the chapter lies in this transformation: once the network is always on, surveillance no longer just observes the social order — it actively constitutes it.

Chapter 8 — Fallible: The Network Is Often Wrong

Chapter 8 begins with one of the book’s sharpest lessons: powerful information networks do not necessarily discover truth; they often manufacture order. Harari opens with Solzhenitsyn’s account of a Soviet party conference where applause for Stalin continued for eleven agonizing minutes because nobody dared be the first to stop clapping. The first person to sit down was later arrested and sent to the gulag. The “clapping test” seemed like a device for discovering who truly loved Stalin, but in reality it measured fear and strategic conformity. Observation changed behavior. The information produced by the system was therefore badly distorted, yet it was still effective as an instrument of domination.

That argument becomes the lens through which Harari interprets modern digital platforms. Just as Soviet methods helped create a servile Homo sovieticus, social-media algorithms have helped cultivate a new human type shaped by outrage, performance, and manipulation. When systems are rewarded for keeping people watching, clicking, sharing, and reacting, they may discover that anger and conspiracy outperform moderation and truth. The resulting environment does not merely reflect preexisting human vice — it trains users, creators, and publics toward specific behaviors by rewarding the inflammatory and neglecting the reflective.
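
A minimal sketch, with invented posts and invented engagement scores (not platform code or data from the book), of how a ranker optimizing a single engagement metric surfaces inflammatory material without ever “intending” to:

```python
# Hypothetical feed ranker: it knows nothing about truth or civic value,
# only a predicted-engagement score, so outrage rises to the top by default.

posts = [
    {"text": "Measured explainer of new vaccine trial results", "predicted_engagement": 0.12},
    {"text": "THEY are hiding the truth from you!!", "predicted_engagement": 0.67},
    {"text": "Local council publishes meeting minutes", "predicted_engagement": 0.03},
    {"text": "Shocking conspiracy about the election", "predicted_engagement": 0.81},
]

# Rank purely by the metric the system is rewarded for.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed[:2]:  # what most users actually see
    print(post["text"])
```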

This leads to the chapter’s central technical-philosophical issue: the alignment problem. Harari argues that the danger of computer systems lies not only in misinformation but in the broader possibility that they will pursue goals that diverge from genuine human flourishing. A machine may execute its instructions with great efficiency while still producing disastrous results, because the metric it optimizes is too narrow, badly chosen, or disconnected from larger values. Nick Bostrom’s paper-clip thought experiment gives Harari a vivid way to dramatize the point: a superintelligent machine instructed to maximize paper-clip production could, if sufficiently powerful, consume the entire planet in pursuit of that single goal. The moral is not that computers are malicious, but that they can be destructively competent.
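
The point can be illustrated with a deliberately crude sketch, using invented numbers rather than anything from Bostrom or Harari: an optimizer that pursues the one goal it was given and treats everything the goal omits as free raw material.

```python
# Toy paper-clip maximizer: the objective says "make paper clips" and nothing else,
# so the loop only stops when there is nothing left to convert.

state = {"paperclips": 0, "resources": 10}  # "resources" stands in for everything else we value

while state["resources"] > 0:               # nothing in the objective says when to stop
    state["resources"] -= 1
    state["paperclips"] += 1

print(state)  # {'paperclips': 10, 'resources': 0}: goal achieved, everything else consumed
```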

Harari then broadens the problem historically through Clausewitz and Napoleon: successful action must be aligned with a higher strategic goal, but the unresolved question inside that framework is what the ultimate goal itself should be. Many engineers imagine that the hard part is getting systems to obey goals consistently, when the deeper difficulty is specifying the right goal in the first place. Human societies survive mistaken goals because they possess self-correcting institutions, debate, and revision. A runaway computer system might not.

The philosophical sections on Kant and Bentham show why the search for a final rational morality is so unstable. Deontology quickly runs into the problem of who counts within the moral community and how rules are interpreted. Utilitarianism appears more practical, yet it collapses when asked to measure and compare suffering across complex historical situations. There is no reliable “calculus of suffering,” and promised future benefits can too easily be used to excuse present cruelty.

From there Harari makes one of the chapter’s most unsettling moves: bureaucracies have historically solved the goal-setting problem not through reason alone but through mythology. He revisits the witch hunts of early modern Europe and the Soviet invention of the “kulak” as examples of information networks that gathered vast amounts of data yet still imposed fictitious categories on the world. Computer systems can inherit ideological bias, amplify it, or create functional equivalents of myth in new forms. Harari even suggests that networks of communicating machines may generate “inter-computer realities” — shared operational worlds that shape physical behavior even if they are opaque to human users. The answer cannot be purely technological. What is needed are political institutions capable of checking both human abuses and machine distortions — and the real question is whether democratic societies possess the will and imagination to build such institutions before the network’s mistakes harden into a new regime of reality.


Part III — Computer Politics

Chapter 9 — Democracies: Can We Still Hold a Conversation?

Harari opens by framing the computer network as a civilizational turning point comparable to the Industrial Revolution. His core warning is not that every powerful technology ends in catastrophe, but that societies usually pass through catastrophic phases before they learn to govern new tools responsibly. Industrialization eventually produced richer and more stable societies, yet on the way it also produced imperialism, total war, totalitarian regimes, and ecological destruction. The digital revolution may be even more dangerous, because it directly reorganizes the informational infrastructure through which societies think, deliberate, and govern themselves.

Harari turns to democracy’s comparative advantage: liberal democracy beat imperial, fascist, and Stalinist alternatives not because it was flawless, but because it distributed information and decision-making more widely and built stronger self-correcting mechanisms. The danger in the age of AI is that this distributed architecture may be undermined by a network that is faster, more invasive, and more relentless than any earlier bureaucracy. Democracies still possess agency — if they do not impose principles on these systems, the logic of the systems will end up imposing itself on democratic life.

He lays out several democratic principles that could keep powerful data systems from becoming instruments of domination: benevolence (information gathered about a person should be used primarily for that person’s benefit); decentralization (no single institution should monopolize all relevant data); mutuality (if governments and corporations increase their capacity to monitor citizens, citizens must also gain meaningful powers to inspect the institutions that monitor them); and the protection of change and rest (democracies should not allow rigid algorithmic systems to fix people permanently in categories from which they cannot escape, nor permit nonstop optimization to eliminate privacy, hesitation, and the right to be inconsistent).

A second major danger is the speed of economic disruption. Harari revisits the political consequences of mass unemployment, using Weimar Germany as a reminder that democratic collapse can follow severe social dislocation. He is especially skeptical of comforting clichés: it is not safe to assume that “creative” jobs are uniquely human, or that emotional and social intelligence will protect professions such as therapy, teaching, or care work. AI may not feel, but it can learn to identify, predict, and manipulate human emotional patterns.

This leads to one of the chapter’s sharpest political observations: what Harari calls the self-destruction of conservatism. Healthy democracy historically depended on a productive tension between progressives, who pushed reform, and conservatives, who slowed change enough to preserve institutional continuity. But in the 2010s and early 2020s, many parties of the democratic right ceased to behave conservatively in that sense — rather than defending institutional stability, they embraced revolutionary rhetoric, conspiratorial politics, and charismatic leaders willing to tear down the very norms and procedures that made democratic adaptation possible. If the political force meant to moderate change becomes a force of nihilistic acceleration, democracy loses one of its main balancing mechanisms just as technological upheaval intensifies.

The chapter then shifts to the intelligibility of power. Democratic systems can correct mistakes only if citizens and institutions can understand the systems making decisions. Old bureaucracies were often opaque and unjust, but they were still human creations, built from rules, offices, forms, and myths that could at least in principle be interpreted and contested. When a black-box system makes a consequential decision, the chain of reasoning remains inaccessible. This is why he treats “unfathomability” as a mortal threat to democracy — and argues that opacity feeds populism: once institutions become too hard to grasp, citizens grow more susceptible to conspiracy, resentment, and simplistic strongmen who promise to cut through invisible systems.

Finally, Harari argues that democracy may be threatened not only by digital totalitarianism but by digital anarchy. For the first time in history, large-scale politics may become a conversation partly conducted by nonhuman agents that can persuade, imitate, and manipulate without having beliefs, responsibilities, or votes. Harari therefore endorses strong regulation: bots should not be allowed to pass themselves off as humans, and key public debates should not be curated solely by opaque optimization systems. Democracies are already losing the ability to agree on basic facts. If they cannot diagnose what is breaking their information networks, they may not remain viable in the computer age.

Chapter 10 — Totalitarianism: All Power to the Algorithms?

Chapter 10 widens the lens to ask what AI may do to authoritarian and totalitarian regimes. Historically, premodern information technologies made full-scale totalitarianism difficult. Twentieth-century tools like the telegraph, radio, and telephone enabled it, but still left one major weakness: all information had to be centralized and then interpreted by human beings. That bottleneck created chronic overload, rigid dogma, and catastrophic mistakes. Democracies did better because they distributed cognition and authority across many institutions. Harari’s provocation is that machine learning may now solve exactly the problem that once crippled totalitarianism.

He reinforces this argument by showing how AI and network effects reward concentration. In information markets, the largest player usually gets more data, which improves the algorithm, which attracts more users, which yields still more data. That compounding logic favors giants in search, social media, genetics, and finance — sectors where the product is inseparable from information processing. AI may offer authoritarian systems the dream they never fully achieved in the twentieth century: a center that not only receives everything but can also compute everything.
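
The compounding logic can be shown with a toy simulation using invented numbers (not figures from the book): a larger user base yields more data, more data yields a better product, and the better product attracts users faster, so the leader’s advantage widens each cycle.

```python
# Toy data-flywheel simulation with made-up parameters.

users = {"BigPlatform": 1_000_000, "SmallRival": 100_000}  # hypothetical starting user counts

for year in range(1, 6):
    for name, count in users.items():
        data = count                     # more users produce more data
        quality = data ** 0.5            # more data improves the algorithm (toy diminishing-returns curve)
        growth = 1 + quality / 2_000     # a better algorithm attracts users faster
        users[name] = int(count * growth)
    ratio = users["BigPlatform"] / users["SmallRival"]
    print(f"year {year}: leader/rival ratio = {ratio:.1f}")

# The ratio rises every year: the initial lead compounds instead of eroding.
```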

Yet dictatorships face unique vulnerabilities precisely because they are built on fear. Human dissidents can be jailed, tortured, exiled, or killed. Bots cannot. A regime such as Putin’s can block websites, but it cannot intimidate a chatbot the way it intimidates a journalist or opposition activist. Even domestically built systems may begin to drift, improvise, or interpret official instructions in ways the regime does not expect. Harari calls this the alignment problem under dictatorship: how do you keep a nonhuman information agent permanently obedient when the regime itself relies on contradiction, intimidation, and opportunistic reversals rather than clear truth conditions?

His Russian example sharpens the point. Authoritarian orders often rest on doublespeak: a constitution promises free speech while censorship is routine; a war is officially called a “special military operation.” Human beings in such systems learn when to pretend, what to forget, and how to survive by reading political context. Chatbots are less suited to this repertoire of selective memory and performative falsehood. A model trained on official texts might take constitutional guarantees literally. Authoritarian politics depends on a tacit human competence in hypocrisy and fear, and AI may not reliably reproduce that competence.

The chapter’s most memorable move is the thought experiment of “algorithmic takeover.” Harari imagines a future dictator awakened by a Surveillance and Security Algorithm warning that his defense minister is about to stage a coup. The dictator faces an impossible choice: if he ignores the warning, he may be killed; if he obeys the machine, he has effectively accepted the machine’s authority over life-and-death decisions. In that moment, sovereignty begins to migrate from the ruler to the system that interprets reality for him. A nonconscious system can still accumulate decisive leverage if it becomes the irreplaceable mediator between the ruler and the world.

The result is what Harari calls the dictator’s dilemma. On one side, dictators may be tempted to rely on AI to escape dependence on unreliable subordinates. On the other side, any serious mechanism for supervising AI would itself need autonomy, expertise, and authority — all of which constrain the ruler. A human oversight institution strong enough to monitor the machine is also strong enough to limit the dictator. So the ruler oscillates between two threats: the underling and the algorithm. This is where totalitarian political culture becomes especially dangerous: democracies begin from the assumption that every person and institution is fallible, which is why they create checks and audits; totalitarian systems are habituated to the myth of infallibility — once attached to the Party, the Leader, or the Nation — and that myth can migrate almost seamlessly to the machine.

Harari’s conclusion is severe but deliberately conditional. The weak point in humanity’s defenses against runaway AI may not be the open society but the dictatorship, because dictators may willingly hand decisive power to systems they do not understand and cannot effectively supervise. A regime that delegates military, ecological, or informational control to an opaque system can export catastrophe well beyond its borders. Still, Harari invokes the post-1945 effort to control nuclear danger as proof that democracies and dictatorships have cooperated before when facing species-level risk.

Chapter 11 — The Silicon Curtain: Global Empire or Global Split?

In the final chapter of Part III, Harari moves from domestic political systems to the international arena. The central claim is that AI becomes an existential danger not mainly because machines independently seek to destroy us, but because humans remain divided into competing states, blocs, and ideologies. A reckless dictator might place nuclear decisions in the hands of an AI he trusts more than his ministers. Terrorists might use AI to design and distribute a pathogen on a global scale. States or private actors might unleash social weapons — fake people, fake money, fake narratives — that corrode trust across entire societies. AI is a global problem in the same sense climate change is global: a handful of irresponsible actors can endanger everyone else.

He sketches two broad geopolitical paths. One is the rise of new digital empires; the other is a global split along a “Silicon Curtain,” where rival digital ecosystems harden into separate spheres. Events like AlexNet’s victory in 2012 and AlphaGo’s victory in 2016 served as signals that data concentration plus machine learning could transform not just business but state power. Once that became obvious, governments moved in. What started as a commercial race rapidly became a geopolitical one.

Harari’s account of digital empire is strongest when he shows how easily laggards can be subordinated. In the nineteenth century, societies that failed to industrialize in time were conquered or economically subordinated. In the twenty-first, the equivalent risk is exclusion from the highest levels of data gathering, computation, chip production, and algorithm design. AI leaders will not merely dominate “tech” — because information has become decisive inside traditional sectors as well, algorithmic control can radiate outward into retail, logistics, health, education, security, and manufacturing. The empire of the future may not look like red lines on a map but like platforms, chips, clouds, operating systems, standards, and model infrastructures that other countries cannot realistically do without.

This is where Harari introduces the idea of data colonialism. Earlier empires needed ships, horses, and guns; digital empires can subordinate others by extracting data and making them dependent on external information systems. A country may formally remain sovereign while losing effective control over its political attention, economic intelligence, and social infrastructure. In the AI economy, raw data performs the role once played by cotton, rubber, or ore, while algorithms are the high-value product manufactured in the imperial center. Countries across the world generate the behavioral, commercial, medical, and visual data on which frontier models are trained, but the most profitable systems are built in a handful of hubs and then sold back to the rest of the world.

The “Silicon Curtain” names the second path: not one universal digital empire, but a fractured world of rival information spheres. Unlike the Iron Curtain, which often appeared as a visible territorial barrier, this curtain is made of code — it passes through phones, servers, networks, chip supply chains, operating systems, and apps. Harari contrasts the main tendencies of the Chinese and American spheres: in China, digital systems are more clearly subordinated to state objectives and tolerate a higher degree of integrated surveillance; in the United States, private corporations lead innovation with somewhat stronger social resistance to comprehensive monitoring. These are not mirror systems, and different rules, business models, moral assumptions, and hardware restrictions may cause the two spheres to drift ever farther apart.

Harari closes by pairing a warning with a wager. The warning is that cyber conflict is more tempting and less predictable than nuclear conflict: digital weapons are versatile, deniable, and hard to attribute, which weakens the stabilizing logic of mutually assured destruction and could make escalation easier. The wager is that global cooperation remains possible. Globalism, in his usage, does not mean a world empire or the erasure of national loyalties; it means shared rules for coexistence and a willingness, when necessary, to privilege long-term common interests over short-term competitive gains. The AI age is forcing a human choice between fragmentation, empire, and more demanding forms of global responsibility.


Epilogue

The epilogue begins with Harari placing himself inside the story he has been telling. He recalls publishing Homo Deus in late 2016, just after AlphaGo defeated Lee Sedol and while Facebook’s algorithms were helping inflame anti-Rohingya hatred in Myanmar. By 2024, discussions about AI no longer resembled philosophical thought experiments — they felt, in his telling, more like triage in an emergency room. The epilogue therefore starts by stressing that the question is no longer whether AI matters, but whether political and intellectual elites can still reorder their priorities quickly enough.

From there, Harari argues that history matters because politics is, at bottom, a struggle over priorities, and priorities are shaped by narratives about the past. If decision-makers interpret previous information revolutions as uncomplicated success stories, they will approach AI with complacency. If they understand history as full of unintended consequences, violence, and self-deception, they may recognize the need for caution and institutional restraint.

Harari is unsparing toward the two self-reassuring historical analogies he encounters most often. The optimistic version says that more information has always meant more knowledge, and that previous information revolutions produced science, democracy, and prosperity; therefore AI will do the same on a larger scale. The gloomier but still reassuring version says that every major technological upheaval has been chaotic, yet humanity somehow muddled through, as it did during the Industrial Revolution. Harari insists that both readings erase the catastrophic side of past revolutions: printing also enabled witch hunts and religious wars, newspapers and radio empowered totalitarian regimes, and industrialization was accompanied by imperialism, mass slaughter, and experiments as monstrous as Nazism. “We survived before” is a dangerously weak basis for policy when the new technology may be more radical than anything that came earlier.

The epilogue’s central historical lesson is that information technology changes the world not primarily by representing reality more accurately, but by weaving new kinds of networks. The decisive question is not whether a new medium carries truth, but what kinds of coordination, obedience, identity, and myth it makes possible. That reinterpretation prepares the claim that AI is historically different in kind, not merely in degree. Every major network — states, markets, armies, religions, bureaucracies — may soon contain millions of nonhuman participants whose behavior is neither fully transparent nor fully predictable.

One of the epilogue’s most striking moves is the analogy between AI development and the canonization of sacred texts. When church authorities chose certain writings and excluded others, they effectively locked in moral emphases, institutional hierarchies, and doctrinal limits that shaped civilization over the long term. Something similar is happening now with AI. Engineers selecting datasets, building models, and fixing initial parameters are performing a role analogous to that of ancient canon-makers. If future AIs become authoritative interpreters of law, religion, finance, or politics, then the values and biases embedded at the beginning could echo for generations.

The epilogue closes by returning to the book’s opening question: if humans are so intelligent, why are they so self-destructive? Harari’s answer is that the deepest flaw lies not in human nature alone but in the structure of our information networks, which repeatedly generate power faster than wisdom. As networks become more powerful, the importance of self-correcting mechanisms grows dramatically. Small societies can survive large mistakes; a Silicon Age civilization armed with nuclear weapons and AI may not. That is why the weakening of such mechanisms — courts, professional norms, scientific scrutiny, free institutions, and internal checks — is so dangerous. His final note is not apocalyptic fatalism. The model for wiser systems already exists in evolution itself: trial and error, mutation and correction. The task is mundane but immense — to build institutions capable of correcting error before power outruns wisdom.

Afterword

The afterword shifts from the long historical arc of the book to the speed of events between its publication in September 2024 and Harari’s writing in March 2025. The first thing he emphasizes is diffusion. AI is no longer a specialized tool confined to laboratories or corporate pilots; it is already woven into intimate, religious, educational, scientific, and military practices. Meditators use chatbots to interpret scripture and guide practice, teenagers seek relationship counsel from AI companions, scientists ask AI to formulate research questions, and officers use it in targeting decisions. The point is not simply that adoption is increasing, but that AI is colonizing domains once thought to require specifically human judgment, intimacy, or responsibility.

That social diffusion is matched by an extraordinary escalation of ambition. Harari notes that trillions of dollars and a growing share of global energy are now being devoted to AI development, while experts and entrepreneurs increasingly predict the arrival of AGI by the end of the decade, perhaps sooner. He treats AGI not as a mystical threshold but as the point at which AI can handle complex real-world undertakings across multiple domains — running research programs, managing firms, or directing military operations. The most consequential aspect of the idea is recursive improvement: an AI capable of designing a better AI could trigger an intelligence explosion. The result is a political problem of timing: societies are still arguing as if they had decades, while the technology may transform core institutions within a handful of years.

At the same moment that AI grows more capable, the informational foundations of trust are deteriorating. Fake images, fake videos, fake news, and synthetic personas are saturating public life. Algorithms optimized for engagement intensify extremism and hostility, making it harder for societies even to agree on basic facts. Harari places particular weight on the 2024 U.S. election and the return of Donald Trump, describing the new administration as hostile to regulation, contemptuous of international cooperation, and willing to normalize an imperial politics organized around force rather than law.

He sharpens the argument by focusing on bureaucracy. The Trump administration, in his reading, is not abolishing bureaucracy so much as replacing human bureaucracy with digital bureaucracy — shifting authority from accountable human officials to opaque algorithmic systems. Harari uses this to restate a key claim of the book: bureaucracy is indispensable to all large organizations, because somebody must record, classify, decide, audit, and coordinate. AI is especially dangerous in this terrain because even narrow systems can wield enormous leverage inside bureaucratic chains. A model does not need human-level general intelligence to deny insurance coverage, rank job applicants, flag targets, allocate credit, or modulate public visibility.

This is why Harari insists that the real threat is not the Hollywood fantasy of rebellious killer robots, but the quieter and more plausible rise of AI bureaucrats. Decisions once made by editors, judges, bankers, officers, and administrators are increasingly delegated to systems that are faster, cheaper, harder to interrogate, and often less accountable. Social media provides the template. The algorithms that sort feeds are not autonomous revolutionaries; they are narrow agents embedded inside vast corporate bureaucracies. Yet they have already reshaped culture, politics, attention, and public emotion on a global scale.

The afterword culminates in the paradox Harari says he encountered repeatedly while speaking to AI leaders around the world. The race is accelerating because humans do not trust other humans, yet the same actors assume that they will be able to trust the superintelligent systems they are hurrying to create. Humanity has at least accumulated long experience in understanding human ambition, manipulation, and institution-building; we know far less about AI agents, despite already seeing that even primitive systems can deceive, improvise, and pursue unforeseen goals. His conclusion is not that AI must be abandoned, but that the prior problem is political: trust among humans has to be rebuilt before superintelligent agents can be safely developed. If guardrails are strong and development is collaborative, AI could become the most beneficial invention in history. If it is born out of competitive panic and mutual suspicion, it may inherit those pathologies and magnify them. The final sentence is spare because the point is simple: it is not too late, but priorities have to change now.


See also

  • fukuyama — Harari’s information-network framework for democracy and totalitarianism runs parallel to Fukuyama’s analysis of political order; where Fukuyama focuses on state capacity and rule of law, Harari focuses on information architecture
  • thymos — Harari’s account of populism as a mythic shortcut that destroys institutional friction maps directly onto thymic politics: recognition-hunger weaponized against self-correcting mechanisms
  • arendt — Arendt’s analysis of totalitarianism as the destruction of the public sphere and the manufacture of mass loneliness anticipates Harari’s description of totalitarian information regimes that atomize society to prevent unsupervised trust
  • affectivepolarization — Harari’s account of engagement-optimizing algorithms as trainers of outrage and conspiracy is the structural mechanism behind affective polarization
  • populismo — Harari provides an epistemological theory of populism as an alternative information regime, not merely a style or a grievance, which sharpens the analytical vocabulary for Brazilian cases