Homo Deus: A Brief History of Tomorrow – Yuval Noah Harari
Thoughts: As with Sapiens, this book contains plenty of truisms stated as facts. Some assertions are supported with a half-dozen references, while others have none. In spite of this, there are many stimulating and worthwhile ideas - Harari takes a big, zoomed-out view of the world and its history. He writes in a clear, no-nonsense style (though he has a habit of describing a potential future so forcefully that it implies certainty, only to reach the end of the story and say something to the effect of “right, so we don’t know whether this will happen - it’s only one of a range of possible futures”) and employs rather idiosyncratic definitions of “religion” and “humanism”.
(The notes below are not a summary of the book, but rather raw notes - whatever I thought, at the time, might be worth remembering.)
Harari, Yuval Noah. 2015. Homo Deus: A Brief History of Tomorrow. Signal.
1. The New Human Agenda
- 3: The biological poverty line, “below which people succumb to malnutrition and hunger”
- 17: Knowledge has become the most valuable resource, more so than natural resources. Since knowledge cannot be captured via conquest as farmland and oilfields can, the profitability of war has decreased.
- 21: good analogy: “Terrorists are like a fly that tries to destroy a china shop. The fly is so weak that it cannot budge even a single teacup. So it finds a bull, gets inside its ear and starts buzzing. The bull goes wild with fear and anger, and destroys the china shop.”
- 25: whereas death was once explained as, “alas, it was his time to go”, we now recognize that humans die, in Harari’s words, “due to some technical glitch” - some system in the body that has failed.
- 27: even when people die in natural disasters or accidents, such deaths tend to be treated as a person’s or a system’s fault - someone’s negligence, or the absence of a safeguard that should have been in place, led to the death
- 28: to look up/learn about: Google’s sub-company Calico, whose stated mission is to “solve death”
- 30: if the human lifespan lengthens considerably, it will have deep social implications - Harari ventures that serial monogamy will become more common (otherwise, couples will routinely celebrate their 70th, 80th, 100th anniversaries together), careers will need to be rethought (30-30-30 years of school-work-retirement will hardly be a useful model)…
- 34: the pursuit of happiness has changed. In contrast to Epicurus, who thought of the pursuit of happiness as a solitary endeavour, Jeremy Bentham (18th C) “declared that the supreme good is ‘the greatest happiness of the greatest number’”
- 61: three-parent babies - babies with nuclear DNA from two parents and mitochondrial DNA from a third
- 63: so far, we’ve managed to largely avoid some of the apocalypses that have become possible due to technology - nuclear war hasn’t happened, trade in human organs has remained a peripheral activity, eugenics is no longer seriously discussed. If we think carefully about the future, we can avoid other potential nightmare scenarios.
- 67: “knowledge that does not change behaviour is useless. But knowledge that changes behaviour quickly loses its relevance.”
- 74: “This is the best reason to learn history: not in order to predict the future, but to free yourself of the past and imagine alternative destinies.”
Part I: Homo Sapiens Conquers the World
2. The Anthropocene
- 85: external events (climate change, asteroid impacts, etc.) have changed the course of evolution, but the process of evolution has remained the same. The process of genetic modification, guided by intelligent beings, is unprecedented since life began on earth.
- 97: as far as we can tell, all mammals, as well as at least some reptiles and fish, can experience emotion
- 98: Harari notes that human beings can be thought of as very complicated algorithms, with cells and neurons following instructions that, taken together, lead to behavior.
- 116: humans have historically valued animal lives less than human lives by arguing that animals are unfeeling, unconscious, etc.; similarly, the lives of people with less agency have been valued less than the lives of people who hold power. Superhuman intelligences may quite possibly appear in the coming decades/centuries, and we would do well to consider what value will be given to “lesser” human beings when that happens.
3. The Human Spark
- There’s a lengthy discussion of consciousness etc. that makes up the first part of this chapter. The subject is addressed much more clearly in Part 5 of Sean Carroll’s The Big Picture
- 138: Scientists have been able to identify neural patterns that occur only during consciousness. This has allowed them to communicate, for example, with stroke victims who remain conscious but cannot communicate by any other means.
- 140: Harari notes that it’s not a coincidence that Alan Turing - a gay man in 1950s Britain, when homosexuality was criminalized - proposed the test now known as the Turing Test: testing a computer on whether it can pass as human is analogous to testing a gay man on whether he could pass as straight.
- 142: cf. p. 138: “Initial tests on monkeys and mice indicate that at least monkey and mouse brains indeed display the signatures of consciousness.”
- 164-165: Harari notes that many primates (humans included) tend to have strong egalitarian values, but when humans form large societies, inequalities tend to develop, and empires/societies can be quite efficient/stable in spite of these inequalities.
- 168: Harari says that “realities” can fit into three categories: “objective”, “subjective”, and “intersubjective”. Intersubjective entities arise through the communication of many humans, rather than the individual experiences of one or a few humans. E.g. money - bills and coins objectively exist, but they count as money only because everyone values them.
- 177: Harari states that intersubjective reality will influence/leave its mark on objective reality as humans modify genes, create brain/machine interfaces etc.
Part II: Homo Sapiens Gives Meaning to the World
4. The Storytellers
- 183-184: for much of humanity’s evolutionary past, human cooperation was limited because any institutions existed only in people’s minds, and people can keep track of only a certain number of conventions/traditions/debts etc. Around 5000 years ago, the Sumerians began using writing and money, which removed some of the limitations of memory and allowed more people to cooperate.
- 187: Harari suggests that through writing, human societies became algorithms: decisions were made by decentralized networks of people rather than by individuals.
- 194: Originally, written language described reality. But as literate societies grew and became more complex, language began more and more to shape reality, creating real-world effects. e.g. legal codes etc.
- 198-199: human societies rely on a balance of truth and fiction: fictions/myths are required to coordinate large groups of people, but if you distort reality too much, it can put you at a disadvantage compared to clearer-sighted competitors.
- 202: philosophers who arrived at a view of reality fairly similar to the modern one (i.e. good/bad things can happen because of larger processes or by chance, rather than as punishments/rewards from a deity): Herodotus, Thucydides, Sima Qian
5. The Odd Couple
- Similar to Chapter 3, the subject of reason vs. values (which Harari frames as Science vs. Religion) is discussed much more clearly in Part 6 of The Big Picture. Introducing Hume’s Guillotine would have helped.
- 213: Harari defines “religion” as a belief in “some system of moral laws that wasn’t invented by humans, but that humans must nevertheless obey.” He argues that all human societies have a religion in this sense.
- 228: Several philosophers (Harari singles out Sam Harris) argue that science can always resolve ethical questions, “because human values always conceal within them some factual statements”
6. The Modern Covenant
- 233: “modernity is a surprisingly simple deal…: humans agree to give up meaning in exchange for power.”
- 240: humans grew up in an environment with an effectively static amount of resources, causing us to tend to think of situations as zero-sum games. “Accordingly, traditional religions such as Christianity and Islam sought ways to solve humanity’s problems with the help of current resources, either by redistributing the existing pie, or by promising a pie in the sky”
- in modern understanding, however, we recognize that economic growth is possible - the pie can be made bigger
- 246: Harari notes a correspondence between societies’ values and their games: pre-modern societies play games like chess and mancala, where you begin with a set number of pieces and can never have more. In contrast, modern (capitalist) games tend to involve growth and investment.
- j: if games help us train to solve problems in real life, it would be worth playing games with a range of such approaches, since some problems in real life have significant zero-sum components
- 247-248: a fixed-pie view of the world holds that there are two kinds of resources in the world: raw materials and energy. Harari notes, however, that there are in fact three kinds of resources: raw materials, energy, and knowledge. The first two are limited, but knowledge can be increased without being used up.
- 248: “The greatest scientific discovery was the discovery of ignorance. Once humans realized how little they knew about the world, they suddenly had a very good reason to seek new knowledge, which opened up the scientific road to progress”
- 249-250: historically, scientific/technological progress has allowed for economic growth while staving off mass ecological/social collapse, but Harari notes that this is not a law of nature, and there’s no guarantee that this process will go on forever.
7. The Humanist Revolution
- 258: Science/modernity has discovered that there is probably no great cosmic plan for the universe, or meaning behind existence. But Harari notes that human societies need a sense of meaning to sustain order, so philosophers/artists/etc bend over backwards trying to convince people that there is meaning after all.
- 272: Harari characterizes the central purpose of education within a humanist society as teaching individual humans how to think for themselves.
- 275-277: Harari characterizes various systems of knowing that have operated in the world (particularly in Europe/the West):
- in medieval Europe: Knowledge = Scriptures * Logic
- following the Scientific Revolution: Knowledge = Empirical Data * Mathematics (but cannot solve ethical issues - it can only observe how the world behaves)
- to resolve ethical issues, according to Humanism: Knowledge = Experiences * Sensitivity
- 278: Harari states that experiences and sensitivity feed on each other: one cannot experience anything without sensitivity, and one needs a variety of experiences in order to develop sensitivity.
- 279: our ethical knowledge develops along these lines: no one is born with a ready-made conscience, but if we pay attention to how it feels when we interact with others, we can develop a moral sensitivity
- 281: Harari identifies James Joyce’s Ulysses as “the apogee of [the] modern focus on inner life rather than external activities” - as it describes a single day in the life of two people, who don’t do all that much.
- 289-290: Harari identifies three main varieties/offshoots of humanism:
- liberal humanism (the most common/popular in today’s world)
- socialist humanism
- evolutionary humanism
- 290: all of which hold that “human experience is the ultimate source of meaning and authority”. Liberal humanism holds liberty as one of its central values; socialist and evolutionary humanism note that individuals have a wide range of often contradictory desires that need to be resolved.
- 292: “People feel bound by democratic elections only when they share a basic bond with most other voters. If the experience of other voters is alien to me… then even if I am outvoted… I have absolutely no reason to accept the verdict. Democratic elections usually work only within populations that have some prior common bond, such as shared religious beliefs or national myths.”
- cf. recent US election
- 293-294: Socialist humanism holds that focussing on individual experiences/values is selfish and indulgent, and that people should focus on the wellbeing of others. A socialist humanist recognizes that “my current political views, my likes and dislikes, and my hobbies do not reflect my authentic self. Rather, they reflect my upbringing and social surroundings.”
- 295-296: Evolutionary humanism holds that the experiences/desires of the fittest/most powerful should take precedence over the experiences/desires of the less powerful, leading to humans that are increasingly well-evolved to handle modern reality.
- 319: Harari predicts that the next great revolution in human progress will depend on advances in biotechnology and computer algorithms, and that it will likely create enormous inequalities. “In the twenty-first century, those who ride the train of progress will acquire divine abilities of creation and destruction, while those left behind will face extinction.”
- 321-322: Harari predicts that while religion will retain some influence in the future, it will play an increasingly minor role, noting that changes in the moral positions of various religions have been driven by ideas emerging outside of religion, and that religion is not likely to lead to new discoveries/technologies.
- 322: mentioned: Donna Haraway’s “A Cyborg Manifesto”, as an example of a source of new moral ideas emerging outside of traditional religion
Part III: Homo Sapiens Loses Control
8. The Time Bomb in the Laboratory
- 328: Asserting that humans have free will is not an ethical judgement but a purported factual description of reality. As evidence has come in, however, this assertion has become increasingly untenable.
- 330: Humans may be able to act on their desires, but humans don’t really seem able to choose their desires.
- cf. Hank Green’s video on wanting to want something
- 333: Scientists have been able to create remote-controlled rats by implanting electrodes in the sensory and reward centres of their brains, which can then be stimulated. Since the actions are caused by stimulating the rats’ brains’ reward centres, one could make a strong argument that the rats aren’t feeling coerced - they “want” to do all the things they are being instructed to do by the researchers.
- 335: helmets have been developed to stimulate or inhibit activity in certain brain regions, allowing soldiers to have enhanced focus etc.
- 335-336: To look up: article in the New Scientist by journalist Sally Adee, who tried the helmet and found it a “near-spiritual experience”, with her usual inner monologue of self-doubt etc. silenced.
- 342-343: to learn more about: the work of Daniel Kahneman, who posits two “selves” within each of us: the experiencing self and the narrating self
- experiencing self: moment-to-moment consciousness
- narrating self: “self” which tells stories based on memories
- 343-344: the Narrating Self tends to follow the peak-end rule when remembering how an experience felt: it takes the most intense sensation during the experience and the final sensation, and averages the two. This leads to people remembering a longer colonoscopy as less unpleasant when the probe is left in, causing only mild discomfort, for several extra minutes at the end of the procedure - the milder ending lowers the remembered average (see the sketch at the end of this chapter’s notes).
- 345: evolution has hacked this bit of human neurochemistry in mothers giving birth: the brain delivers an enormous rush of oxytocin at the end of the birth, averaging/balancing out the agony experienced during childbirth.
- 347: people generally identify with their narrating self even while the experiencing self tends to call the shots in-the-moment. This is part of the reason we have a hard time building habits: The narrating self creates plans, and the experiencing self decides whether or not to follow them.
- 349-351: the “Our Boys Didn’t Die In Vain” syndrome: when a mistake is made that costs lives (particularly an ill-advised military campaign), the person in charge has two options: either admit that the mistake happened and change course, or double down, insisting that the lives were not lost in vain (cf. the sunk cost fallacy). The outcome of the second choice is that you become more invested in the story that you didn’t make a mistake
- 351: religions have leveraged this tendency: people are asked to make large sacrifices, and once they do, they’ll be likely to continue making sacrifices, since changing their behavior would be akin to admitting that their first sacrifice was a mistake. (cf. cognitive dissonance)
- 355: Harari states that Dawkins, Pinker and other “champions of the new scientific world view” take pains to continue to hew to the ethical ideas of Locke/Rousseau/Jefferson/etc. in spite of evidence that free individuals probably don’t exist.
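- (Not from the book) A minimal sketch of the peak-end rule described on pp. 343-344, assuming an experience is recorded as a simple list of numeric intensity ratings; the function name and the sample data here are made up for illustration:

```python
from typing import Sequence

def remembered_intensity(samples: Sequence[float]) -> float:
    """Peak-end rule: the remembered intensity of an experience is roughly
    the average of its most intense moment and its final moment
    (total duration is largely ignored)."""
    if not samples:
        raise ValueError("need at least one sample")
    return (max(samples) + samples[-1]) / 2

# Hypothetical discomfort ratings (0 = none, 10 = worst), one per minute.
short_procedure = [2, 5, 8, 8]           # ends at the peak
longer_procedure = [2, 5, 8, 8, 3, 1]    # same peak, but a milder ending

print(remembered_intensity(short_procedure))   # (8 + 8) / 2 = 8.0
print(remembered_intensity(longer_procedure))  # (8 + 1) / 2 = 4.5
```

- Under this toy model, the longer procedure involves strictly more total discomfort, yet the peak-end average predicts it will be remembered more favourably - the pattern behind the colonoscopy example above.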
9. The Great Decoupling
- 361: Harari notes that until recently, high intelligence has been coupled with developed consciousness - the only agents that were able to play Go and design rockets were conscious agents. But with the rise of AIs, there are now many tasks that can be done better by unconscious AIs than by humans.
- 362: “Armies and corporations cannot function without intelligent agents, but they don’t need consciousness and subjective experiences.”
- 365: to look up: Mindojo - which is “developing interactive algorithms that will not only teach me maths, physics and history, but will simultaneously study me and get to know exactly who I am”, suggesting different exercises/questions based on a person’s past responses.
- 370: to look up: Mattersight Corporation: a customer-service provider where callers are asked to answer a few questions at the beginning of a call and are then routed - based on the content of their answers, tone of voice, etc. - to an agent whose personality is compatible with the caller’s current mood.
- 371: Harari notes that people have been able to find new jobs even as new technologies emerged, because there have always remained tasks that humans are better at doing than machines. But he states that this isn’t a law of nature, so we should be prepared for the possibility that machines will become better than humans at just about everything.
- he suggests that it can be productive to divide human capabilities into physical and cognitive ones. Since the industrial revolution, machines have mostly become better than humans at physical tasks while remaining unable to do the thinking. Now, domains are beginning to emerge in which computers are better at the thinking than humans are.
- 375: “As time goes by it becomes easier and easier to replace humans with computer algorithms, not merely because the algorithms are getting smarter, but also because humans are professionalizing. Ancient hunter-gatherers mastered a very wide variety of skills in order to survive, which is why it would be immensely difficult to design a robotic hunter-gatherer. Such a robot would have to know how to prepare stone tools, find edible mushrooms in a forest and track down prey.” Now, since jobs are becoming more specialized, it becomes easier to design/train algorithms to accomplish the much narrower range of tasks required to do the job.
- 405: Harari suggests that whereas in the 20th century, medicine aimed to heal the sick, in the 21st century, “medicine is increasingly aiming to upgrade the healthy.”
- 405-406: “Healing the sick was an egalitarian project, because it assumed that there is a normative standard of physical and mental health that everyone can and should enjoy…. In contrast, upgrading the healthy is an elitist project, because it rejects the idea of a universal standard applicable to all and seeks to give some individuals an edge over others.”
10. The Ocean of Consciousness
- 411-412: “Just as the spectrums of light and sound are far broader than what we humans can see and hear, so the spectrum of mental states is far larger than what the average human perceives. [Harari describes how humans can see only a small slice of the electromagnetic radiation spectrum.] Similarly, the spectrum of possible mental states may be infinite, but science has studied only two tiny sections of it: the sub-normative and the WEIRD.”
- 416: Article mentioned: “What Is It Like to Be a Bat?” by philosopher Thomas Nagel. “One of the most important articles about the philosophy of mind, [in which Nagel] points out that a Sapiens mind cannot fathom the subjective world of a bat.”
11. The Data Religion
- 434: Harari contends that “capitalism did not defeat communism because capitalism was more ethical, because individual liberties are sacred or because God was angry with the heathen communists. Rather, capitalism won the Cold War because distributed data processing works better than centralized data-processing, at least in periods of accelerating technological change.”
- 440: Harari suggests that the human species can be interpreted as a “single data-processing system, with individual humans serving as its chips.”
- 440-441: he suggests that throughout history, this system has improved through four basic methods:
- “Increasing the number of processors”
- “Increasing the variety of processors”
- “Increasing the number of connections between processors”
- “Increasing the freedom of movement along existing connections”
- 441-442: Sapiens have been increasing the number of processors throughout history. As they spread throughout the globe, they increased the variety of processors through cultural radiation (without significantly increasing connections or freedom of movement). Over the past few centuries, improved communication systems have been increasing the number of connections, and improved transportation systems have been increasing freedom of movement.
- 443: “We often imagine that democracy and the free market won because they were ‘good’. In truth, they won because they improved the global data-processing system.”
- 454: “Dataism now gives humanists a taste of their own medicine, and tells them: ‘yes, God is a product of the human imagination, but human imagination in turn is just the product of biochemical algorithms.’ In the eighteenth century, humanism sidelined God by shifting from a deo-centric to a homo-centric worldview. In the twenty-first century, Dataism may sideline humans by shifting from a homo-centric to a data-centric view.”
- 456: “when you read the Bible you are getting advice from a few priests and rabbis who lived in ancient Jerusalem. In contrast, when you listen to your feelings, you follow an algorithm that evolution has developed for millions of years, and that withstood the hardest quality-control tests of natural selection…. Your feelings are not infallible, of course, but they are better than most other sources of guidance. For millions upon millions of years, feelings were the best algorithms in the world.”
- 462: “in the past, censorship worked by blocking the flow of information. In the twenty-first century censorship works by flooding people with irrelevant information…. In ancient times having power meant having access to data. Today having power means knowing what to ignore.”
Notes
- 491: book to check out: Matthew Crawford, The World Beyond Your Head: How to Flourish in an Age of Distraction (London: Viking, 2015).
Posted: Oct 18, 2022. Last updated: Aug 31, 2023.