We’re creating a new form of AI that understands the world like people do, and using it to amplify human intelligence, reverse-engineer how the mind and brain work, and improve human well-being. We’re able to do this thanks to the emerging field of probabilistic programming, which we helped to pioneer, and which can be much more efficient, safe, and controllable than machine learning with neural networks.

What does it mean to make AI that understands the world like people do? And what does this have to do with probability?

Our minds receive limited data and have limited attention and knowledge. Yet we perceive physical objects, infer the probable beliefs and feelings of other people, and learn concepts, all with a reliability, speed, and efficiency that vastly outstrip today’s AI. Even four-year-old children can see the world more reliably than today’s best autonomous driving systems.

Probability is needed because of the uncertainty that comes from limited data, processing power, and knowledge. We experience uncertainty pervasively: when driving in fog or at night, when learning a new skill or concept, when meeting someone new. In fact, cognitive scientists have shown us that underneath our illusions of certainty, our mind is continually making lightning-fast guesses and bets.
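The fog-driving case can be made concrete with a minimal sketch of probabilistic updating. This is a hypothetical illustration (not code from our systems): Bayes' rule combines a prior belief about whether the road ahead is clear with a few noisy glimpses. The function name and the likelihood numbers are invented for the example.

```python
def posterior_clear(prior_clear, glimpses,
                    p_glimpse_if_clear=0.9, p_glimpse_if_blocked=0.3):
    """Update a belief that the road is clear from noisy glimpses through fog.

    Each glimpse is True (looked open) or False (looked blocked); the two
    keyword arguments say how reliable a glimpse is under each hypothesis.
    """
    p_clear, p_blocked = prior_clear, 1.0 - prior_clear
    for looked_open in glimpses:
        # Weight each hypothesis by how well it explains this glimpse.
        like_clear = p_glimpse_if_clear if looked_open else 1 - p_glimpse_if_clear
        like_blocked = p_glimpse_if_blocked if looked_open else 1 - p_glimpse_if_blocked
        p_clear *= like_clear
        p_blocked *= like_blocked
    return p_clear / (p_clear + p_blocked)  # normalize to a probability

# Starting from a 50/50 prior, three open-looking glimpses and one blocked-looking
# one still leave the "clear" hypothesis favored, but with honest uncertainty.
belief = posterior_clear(0.5, [True, True, False, True])
```

Each glimpse is a tiny "bet" in exactly the sense above: the belief moves a little with every piece of evidence rather than flipping between certainty and ignorance.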

What is probabilistic programming?

Probabilistic programs are a new symbolic medium — like the alphabet, musical notation, mathematical formulae, and hieroglyphs — for representing uncertain knowledge and stochastic processes for perceiving and thinking with that knowledge.

There are two main ideas: probabilistic programs and traces. A probabilistic program is like a board game — a set of symbolic rules that define a vast (possibly infinite) space of possible games. A trace is one sequence of moves in the game. These probabilistic programs are embedded in meta-programs that explore the space of possible moves, searching for the most probable or useful traces in order to model the world accurately or make good decisions.
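The board-game analogy can be sketched in a few lines of plain Python. This is a toy illustration, not any particular probabilistic programming system's API: the "game" makes two random moves (pick a coin, flip it), each move is logged into a trace along with its log-probability, and summing those gives the probability of that whole sequence of moves.

```python
import math
import random

def game(trace, rng):
    """A two-move probabilistic 'board game'. Every random choice is one move,
    recorded in the trace as (name, value, log-probability)."""
    bias = rng.choice([0.3, 0.7])                  # move 1: which coin?
    trace.append(("bias", bias, math.log(0.5)))    # two coins, equally likely
    heads = rng.random() < bias                    # move 2: flip that coin
    p_heads = bias if heads else 1 - bias
    trace.append(("flip", heads, math.log(p_heads)))
    return heads

def run(seed):
    """Play the game once, returning the trace and its total log-probability."""
    trace = []
    game(trace, random.Random(seed))
    log_prob = sum(lp for _, _, lp in trace)
    return trace, log_prob
```

A meta-program in this picture is just code that calls `run` many times (or replays moves selectively) and keeps the traces whose probability, or usefulness, is highest.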

We developed probabilistic programs to encode and scale the “game rules” that underpin human knowledge, human reasoning, and someday also human values. And because probabilistic programs are probabilistic, we can do this even when we are uncertain what those game rules are. For example, we often write probabilistic meta-programs — meta-games in which moves can generate other games and play those games. Humans do this, too.

Can machines write and learn probabilistic programs themselves?

Yes, this is common. It requires a probabilistic meta-program that generates probabilistic programs. Many important applications of probabilistic programming use this idea — for example, our systems that learn probabilistic programs to predict econometric data, and our systems that learn to perceive 3D objects, both work this way.
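A minimal sketch of the idea, with all names invented for illustration (this is not our synthesis system): the meta-program samples small candidate programs from a tiny grammar of arithmetic steps, scores each candidate against observed data, and keeps the one that explains the data best.

```python
import random

# A tiny "grammar": the building blocks a generated program may compose.
PRIMITIVES = [
    ("x",   lambda x: x),
    ("x+1", lambda x: x + 1),
    ("2*x", lambda x: 2 * x),
    ("x*x", lambda x: x * x),
]

def sample_program(rng, max_depth=2):
    """Sample a candidate program: a pipeline of 1..max_depth primitives."""
    steps = [rng.choice(PRIMITIVES) for _ in range(rng.randint(1, max_depth))]
    name = " then ".join(s for s, _ in steps)
    def prog(x):
        for _, f in steps:
            x = f(x)
        return x
    return name, prog

def synthesize(data, trials=500, seed=0):
    """Meta-program: generate many candidate programs, keep the best scorer."""
    rng = random.Random(seed)
    best = None
    for _ in range(trials):
        name, prog = sample_program(rng)
        errors = sum(prog(x) != y for x, y in data)
        if best is None or errors < best[0]:
            best = (errors, name, prog)
    return best

# Data secretly generated by x -> x*x + 1; the search rediscovers that rule.
errors, name, prog = synthesize([(0, 1), (1, 2), (2, 5), (3, 10)])
```

Real systems replace the brute-force loop with smarter inference over program space, but the division of labor is the same: one program proposes programs, another scores how well each proposed program explains the world.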

How does probabilistic programming relate to today’s neural network AI?

Neural networks are prediction machines — similar to autocorrect on a phone, but looking at thousands of words instead of just the last couple. Because the internet has so much text, in some cases, they can approximate or predict the things a person will probably say or perceive — but they don’t necessarily understand or model the world like people.

Some scientists think that given enough data, neural networks trained to predict data will eventually locate all the same potential patterns that probabilistic programs would. Nobody knows how large the network would need to be, or how costly it might be to train. Today’s networks already cost hundreds of millions of dollars to build.

Probabilistic programming can be much cheaper and more energy efficient. It can start with more knowledge about how the world actually works, as opposed to needing to learn it from patterns in the data. For example, unlike neural networks, our probabilistic programs start with the assumption that there is a world out there, and that world has rules, even if they cannot be known with perfect certainty.

How will this matter for human society?

Many problems with today’s AI may be solved by scaling AI that understands the world like people do, based on probabilistic programming. For example, we could have safe autonomous driving systems that see better than humans, and conversational AI that explains the probable implications of data in terms of human understanding of the world.

Probabilistic programs are much more efficient and controllable than today’s AI. Because they are symbolic, humans can read, understand, and edit them, and modify them to encode human values. This makes it possible for society to regulate AI.