The best introduction to AI and its future uses and risks. Whether you're brand new to the field or have some knowledge already, it's an amazing book. Way more in-depth than I originally expected. The first chapter about the Omegas should be required reading for any founder/programmer.
Had our universe never awoken, then it would have been completely pointless - merely a gigantic waste of space. Should our universe permanently go back to sleep due to some cosmic calamity or self-inflicted mishap, it will become meaningless.
Perhaps life will spread throughout the cosmos and flourish for billions of years - and perhaps this will be because of decisions we make here on our little planet during our lifetime.
Quantum mechanics forbids anything from being completely boring and uniform.
It takes only twenty doublings to make a million.
Thirty to make a billion.
Forty to make a trillion.
(If you can double your money 37 times, you'll be the richest person on Earth)
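The doubling arithmetic above is easy to verify; a quick sketch (the $1 starting stake behind the 37-doublings claim is my assumption, not the book's):

```python
# Repeated doubling: 2**n blows past familiar milestones quickly.
def doublings_to_reach(target, start=1):
    """Number of doublings needed for `start` to reach `target`."""
    n, value = 0, start
    while value < target:
        value *= 2
        n += 1
    return n

print(doublings_to_reach(10**6))   # 20 doublings to a million
print(doublings_to_reach(10**9))   # 30 doublings to a billion
print(doublings_to_reach(10**12))  # 40 doublings to a trillion
print(2**37)                       # $1 doubled 37 times: ~$137 billion
```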
Life = a process that can retain its complexity and replicate.
Evolution rewards life that's complex enough to predict and exploit regularities in its environment, so a more complex environment will lead to the evolution of more complex and intelligent life.
Cultural evolution has emerged as the dominant force shaping our human future, rendering our slow biological evolution almost irrelevant. Over the past couple of millennia, changes in human behavior have come from technological advancement, not biological evolution.
Life 1.0 = Biological stage. Evolves its hardware and software (animals)
Life 2.0 = Cultural stage. Evolves its hardware but designs its software (human civilization)
Life 3.0 = Technological stage. Designs both hardware and software (AI)
For the first time, we might build technology powerful enough to permanently end the scourges of poverty, disease, and war - or to end humanity itself.
We can't say with great confidence that the probability of creating superhuman general AI this century is zero.
The average AI researcher thinks we'll see human-level AI by 2055.
The case for AI governance/regulation: as long as we're not 100% sure superhuman AI won't happen this century, it's smart to start safety research now to prepare for the eventuality. To support a modest investment in AI-safety research, people don't need to be convinced the risks are high, just that they're non-negligible.
Machines can obviously have goals. The behavior of a heat-seeking missile is best explained as a goal to hit a target.
The real worry isn't malevolence, but competence. An AI may be very good at attaining its goals, so we need to ensure that its goals are aligned with ours. You're probably not an ant hater who steps on ants out of malice. But if you're building a hydroelectric dam and there's an anthill in the region that will be flooded, too bad for the ants.
There's no agreement on what intelligence is, even among intelligent intelligence researchers.
Intelligence = the ability to accomplish complex tasks.
Comparing the intelligence of humans and computers: Humans win hands-down on breadth, while machines outperform us in a small but ever-growing number of narrow domains.
Intelligent behavior is inexorably linked to goal attainment.
Intelligence is all about information and computation, not flesh, blood or carbon atoms. There's no fundamental reason why machines can't one day be at least as intelligent as us, nor made of different materials.
Substrate independence: information can take on a life of its own, independent of its physical medium. Computation is substrate-independent.
Over the past six decades, computer memory has gotten half as expensive every couple of years. Hard drives are 100 million times cheaper and memory storage has become 10 trillion times cheaper. If you could get such a "99.99999999999% off" discount on all your shopping, you could buy all real estate in NYC for about 10 cents and all the gold that's ever been mined for around a dollar.
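The "99.99999999999% off" figure is just the 10-trillion-fold price drop rewritten as a discount percentage:

```python
# A 10-trillion-fold price drop expressed as a discount.
factor = 10**13                   # memory ~10 trillion times cheaper
discount = (1 - 1 / factor) * 100  # percent off the original price
print(f"{discount:.11f}% off")     # 99.99999999999% off
```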
Auto-associative memory: retrieving data by specifying something about what is stored, not so much where.
You can implement any well-defined function simply by connecting together enough NAND (Not-And) logic gates.
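NAND's universality is easy to demonstrate; here is a minimal sketch building the other basic gates out of NAND alone:

```python
# Every gate below is built purely from NAND.
def nand(a, b):
    return int(not (a and b))

def not_(a):      # NOT x = NAND(x, x)
    return nand(a, a)

def and_(a, b):   # AND = NOT(NAND)
    return not_(nand(a, b))

def or_(a, b):    # OR = NAND(NOT a, NOT b)
    return nand(not_(a), not_(b))

def xor(a, b):    # XOR built from four NANDs
    m = nand(a, b)
    return nand(nand(a, m), nand(b, m))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_(a, b), or_(a, b), xor(a, b))
```

The same construction scales up: since AND, OR, and NOT suffice for any truth table, and each is expressible in NANDs, so is any well-defined Boolean function.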
Once technology gets twice as powerful, it can often be used to design and build technology that's twice as powerful in return, triggering repeated capacity doubling in the spirit of Moore's Law.
Something that occurs just as regularly as the doubling of our technological power is the claim that the doubling is about to end.
We're nowhere near the limits of computation, as imposed by the laws of physics.
Just as we don't fully understand how our children learn, we still don't fully understand how neural networks learn, and why they occasionally fail.
Intelligent agents = entities that collect information about their environment and process it to decide how to act back on their environment.
Deep reinforcement learning: getting a positive reward increases your tendency to do something again and vice versa.
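That reward rule can be sketched in a few lines; a toy two-armed bandit (the arm payoffs, learning rate, and exploration rate are illustrative assumptions, not anything from the book):

```python
import random

random.seed(0)

# Toy reinforcement learning: a positive reward nudges an action's
# value estimate up, so the agent repeats that action more often.
payoff = {"A": 0.2, "B": 0.8}   # hidden reward probabilities (assumed)
value = {"A": 0.0, "B": 0.0}    # the agent's learned value estimates
alpha, epsilon = 0.1, 0.1       # learning rate, exploration rate

for _ in range(2000):
    if random.random() < epsilon:          # occasionally explore
        action = random.choice(["A", "B"])
    else:                                  # otherwise exploit the best estimate
        action = max(value, key=value.get)
    reward = 1 if random.random() < payoff[action] else 0
    value[action] += alpha * (reward - value[action])  # move toward reward

print(value)  # the better arm "B" ends up with the higher estimate
```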
Within a year of beating the World Champion at Go, DeepMind's AlphaGo system had played all twenty top players in the world without losing a single game.
Since the Turing test is fundamentally about deception, it's been criticized for testing human gullibility more than true artificial intelligence.
Verification = "Did I build the system right?"
Validation = "Did I build the right system?"
If any military power pushes ahead with AI weapon development, a global arms race is inevitable. The endpoint of this trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. They're ideal for tasks such as assassinations, destabilizing nations, subduing populations and ethnic cleansing.
Kennedy emphasized that hard things are worth doing when success will greatly benefit the future of mankind.
If you're already top dog, it makes sense to follow the maxim "If it ain't broke, don't fix it". Those who stand to gain most from an arms race aren't superpowers but small rogue states and terrorists. Once mass-produced, small AI-powered killer robots are likely to cost little more than a smartphone.
The reason that the Athenian citizens had lives of leisure where they could enjoy democracy, art, and games was that they had slaves to do much of the work.
Digital technology drives inequality in three different ways:
Career advice for future kids: go into professions that machines are currently bad at and seem unlikely to get automated in the near-future.
There's evidence that greater equality makes democracy work better: when there's a large well-educated middle class, the electorate is harder to manipulate and it's tougher for people to buy undue influence over the government
It should be possible to make everyone as happy as if they had their personal dream job, but once one breaks free of the constraint that everyone's activities must generate income, the sky's the limit.
3 Steps to take over the world:
Since it's hard to dismiss step one as forever impossible, it therefore becomes hard to dismiss the other two.
History reveals a trend towards more coordination over larger distances. New transportation technology makes coordination more valuable and new communication technology makes coordination easier.
Globalization is merely the latest example of this multi-billion year trend of hierarchical growth.
The most fundamental driver of decentralization will remain: it's wasteful to coordinate unnecessarily over large distances.
For AI, the laws of physics will place an upper limit on technology, making it unlikely that the highest levels of the hierarchy would be able to micromanage everything.
We won't get an intelligence explosion until the cost of doing human-level work drops below human-level hourly wages. Once the cost of having computers reprogram themselves becomes cheaper than paying human programmers to do the same, the human can be laid off.
A good system of governance balances four concerns:
The Catholic Church is the most successful organization in human history in the sense that it's the only one to have survived for two millennia.
Exterminating 100% of humanity would be infinitely worse than exterminating 90%. It would've killed all descendants that would otherwise have lived in the future, perhaps during billions of years on billions of trillions of planets.
"In the long run we are all dead" - John Maynard Keynes
The annual probability of accidental nuclear war is 0.1% with our current behavior. That means the probability we'll have one in the next 10,000 years is ~99.995%.
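The 10,000-year figure follows directly from compounding the 0.1% annual risk:

```python
# Compounding a 0.1% annual risk of accidental nuclear war over 10,000 years.
p_annual = 0.001
years = 10_000
p_at_least_once = 1 - (1 - p_annual) ** years
print(f"{p_at_least_once:.5%}")  # ~99.995%
```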
We can't trust our fellow humans never to commit omnicide: nobody wanting it isn't necessarily enough to prevent it.
We’ve dramatically underestimated life’s future potential. We're not limited to century-long life spans marred by disease. Life has the potential to flourish for billions of years, throughout the cosmos.
There is reason to suspect that ambition is a rather generic trait of advanced life. Almost regardless of what it's trying to maximize, it will need resources. It has an incentive to push its technology to its limits, to make the most of the resources it has. After this, the only way to further improve is to acquire more resources by expanding into ever-larger regions of the cosmos.
We could meet all our current global energy needs by harvesting the sunlight striking an area smaller than 0.5% of the Sahara Desert.
"We should expect that within a few thousand years of its entering the stage of industrial development, any intelligent species should be found occupying an artificial biosphere that completely surrounds its parent star."
If your stomach were even 0.001% efficient at converting the mass of your food into energy (E = mc²), you'd only need to eat a single meal for the rest of your life.
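The claim comes from E = mc²: even a tiny fraction of a meal's mass-energy dwarfs daily metabolic needs. A rough check (the 1 kg meal and ~2,000 kcal/day figures are my assumptions):

```python
# How long 0.001% of a meal's mass-energy (E = m c^2) would sustain you.
c = 3e8                      # speed of light, m/s
meal_mass = 1.0              # kg, assumed meal size
efficiency = 1e-5            # 0.001% mass-to-energy conversion
daily_need = 2000 * 4184     # ~2,000 kcal/day in joules (assumed)

energy = efficiency * meal_mass * c**2
days = energy / daily_need
print(f"{days:,.0f} days, about {days / 365:.0f} years")
```

Even with these modest assumptions, one meal would cover roughly three centuries of metabolism.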
The cost of computation drops when you compute slowly, so you'll ultimately get more done if you slow things down as much as possible. (Best excuse for procrastination I've ever heard!)
If we don't improve our technology, the question isn't whether humanity will go extinct, but merely how.
Nature always prefers the optimal way when it chooses to do something. It always optimizes (maximizes or minimizes) some quantity.
A hallmark of living systems is that they maintain or reduce entropy by increasing the entropy around them. Life maintains or increases complexity by making its environment messier.
If you start with one and double just three hundred times, you get a quantity exceeding the number of particles in our Universe.
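A quick check, using the commonly cited ~10^80 estimate for the number of particles in the observable Universe:

```python
import math

# Three hundred doublings vs. the ~10^80 particles in the observable Universe.
print(2**300 > 10**80)     # 2^300 is about 10^90
print(math.log10(2**300))  # ~90.3 orders of magnitude
```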
Our cosmos invented life to help it approach heat death faster.
Not only do we as humans contain more matter than all other mammals except cows, but the matter in our machines, roads, and buildings appears on track to soon overtake all living matter on Earth.
Almost all goals can be better accomplished with more resources, so we should expect a superintelligence to want resources almost regardless of what ultimate goal it has.
The ethical views of many thinkers can be distilled into four principles:
A fast-forward replay of our 13.8-billion-year cosmic history:
Societies that have survived until the present tend to have ethical principles that were optimized for promoting their survival and flourishing.
Consciousness = Subjective experience
The two mysteries of the mind:
Any theory predicting which physical systems are conscious (the "pretty hard problem") is scientific, as long as it can predict which of your brain processes are conscious.
If consciousness is the way that information feels when it's processed in certain ways, then it must be substrate-independent. It's only the structure of the information processing that matters, not the structure of the matter doing the processing.
Since there can be no meaning without consciousness, it's not our Universe giving meaning to conscious beings, but conscious beings giving meaning to our Universe.
Science gathers knowledge faster than society gathers wisdom.
Mindful optimism is the expectation that good things will happen if you plan carefully and work hard for them.