Claim #1: Humans have a very general ability to solve problems and achieve goals across diverse domains.

We call this ability “intelligence,” or “general intelligence.” This isn’t a formal definition — if we knew exactly what general intelligence was, we’d be better able to program it into a computer — but we do think that there’s a real phenomenon of general intelligence that we cannot yet replicate in code.

Alternative view: There is no such thing as general intelligence. Instead, humans have a collection of disparate special-purpose modules. Computers will keep getting better at narrowly defined tasks such as chess or driving, but at no point will they acquire “generality” and become significantly more useful, because there is no generality to acquire.

Why this claim matters: Humans have achieved a dominant position over other species not by being stronger or more agile, but by being more intelligent. If some key part of this general intelligence was able to evolve in the few million years since our common ancestor with chimpanzees lived, this suggests there may exist a relatively short list of key insights that would allow human engineers to build powerful generally intelligent AI systems.

Claim #2: AI systems could become much more intelligent than humans.

We expect (a) that human-equivalent machine intelligence will eventually be developed (likely within a century, barring catastrophe), and (b) that machines can become significantly more intelligent than any human.

Alternative view #1: Brains do something special that cannot be replicated on a computer.

Short response: Brains are physical systems, and if certain versions of the Church-Turing thesis hold, then computers can in principle replicate the functional input/output behavior of any physical system. Also, note that “intelligence” is about problem-solving capability: even if there were some special human feature that computers couldn’t replicate, this would be irrelevant unless it prevented us from designing problem-solving machines.

Alternative view #2: The algorithms at the root of general intelligence are so complex and indecipherable that human beings will not be able to program any such thing for many centuries.

Short response: This seems implausible in light of evolutionary evidence: blind evolutionary processes produced general intelligence in the few million years since our last common ancestor with chimpanzees, which suggests the relevant algorithms are not impossibly intricate.

Alternative view #3: Humans are already at or near the limit of physically possible intelligence. Thus, although we may be able to build human-equivalent intelligent machines, we won’t be able to build superintelligent machines.

Short response: It would be surprising if humans were perfectly designed reasoners, for the same reason it would be surprising if airplanes couldn’t fly faster than birds. Simple physical calculations bear this intuition out: for example, it seems quite possible, within the bounds of physics, to run a computer simulation of a human brain at thousands of times its normal speed.
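As a rough illustration of that back-of-envelope reasoning, consider how neuron signaling compares to commodity silicon. The figures in the sketch below are commonly cited order-of-magnitude estimates chosen for this example, not measurements from the text; the point is only that the headroom is so large that a “thousands of times faster” emulation does not obviously collide with physical limits.

```python
# Back-of-envelope sketch: how much hardware speedup is available over biology?
# All figures are rough, illustrative order-of-magnitude assumptions.

NEURON_MAX_FIRING_HZ = 200          # cortical neurons typically fire well under ~200 Hz
AXON_SIGNAL_SPEED_M_S = 100         # myelinated axon conduction, order of 100 m/s

TRANSISTOR_SWITCH_HZ = 3e9          # commodity processors clock at a few GHz
ELECTRONIC_SIGNAL_SPEED_M_S = 2e8   # signals in wires travel at an appreciable fraction of light speed

clock_ratio = TRANSISTOR_SWITCH_HZ / NEURON_MAX_FIRING_HZ
signal_ratio = ELECTRONIC_SIGNAL_SPEED_M_S / AXON_SIGNAL_SPEED_M_S

print(f"Switching-speed headroom:     ~{clock_ratio:,.0f}x")   # ~15,000,000x
print(f"Signal-propagation headroom:  ~{signal_ratio:,.0f}x")  # ~2,000,000x

# Even if simulating one neuron-second costs millions of elementary operations,
# ratios this large leave ample room for running a brain emulation thousands of
# times faster than real time.
```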

Why this claim matters: Human-designed machines often knock the socks off biological creatures at the tasks we care about: automobiles cannot heal or reproduce, but they sure can carry humans a lot farther and faster than a horse. If we can build intelligent machines specifically designed to solve the world’s largest problems through scientific and technological innovation, then they could improve the world at an unprecedented pace.

Claim #3: If we create highly intelligent AI systems, their decisions will shape the future.

Humans use their intelligence to create tools, plans, and technology that allow them to shape their environments to their will (and fill them with refrigerators, cars, and cities). We expect that systems which are even more intelligent would have even more ability to shape their surroundings, and thus that smarter-than-human AI systems could wind up with significantly more control over the future than humans have.

Alternative view: An AI system would never be able to out-compete humanity as a whole, no matter how intelligent it became. Our environment is simply too competitive; machines would have to work with us and integrate into our economy.

Short response: We do not doubt that an autonomous AI system attempting to accomplish simple tasks would initially have strong incentives to integrate with our economy. But those incentives hold only so long as cooperation remains the system’s best strategy; they would weaken or vanish if the system accrued a strong enough technological or strategic advantage.