


He joined the faculty of Princeton in 2011. Anna joined the university as a clinical psychologist, and the couple bought a house three blocks from the computer science building. At this point, it’s been his home longer than anywhere else. They would go on to have two children – one in 2016, the other in 2018.

Once settled in New Jersey, Braverman began to work in information complexity, the field that Rao had introduced him to in 2008. Information complexity spun out of the larger discipline of information theory, which began with the trailblazing work of Claude Shannon, who in 1948 laid out the mathematical framework for one person to send a message to another through a channel. That paper sought to quantify information, and introduced the world to the concept of a “bit” as its basic, conserved unit. The fundamental challenge, Shannon wrote, was to communicate data from one point to another with exact or near-exact precision. That’s easy enough if the data contain something “simple,” like a stream of random digits, and the channel is noiseless.

But the task quickly becomes tricky if those data have complicated statistical relationships (like the hundreds of thousands of words in the English language) or if the channel is noisy. Borrowing the tools of statistical mechanics, Shannon formalized the idea of data compression and adapted the idea of “entropy” as a quantity related to the amount of information in a message. His key insight, Braverman said, was to separate the entropy of the message itself from the capacity of the channel — the rate at which information is sent. In those terms, a communication task can be executed as long as the entropy of the message does not exceed the capacity of the communication channel. That idea naturally suggests another question: What’s the least amount of entropy you need to send a message through a channel?
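
To make that condition concrete, here is a small illustrative Python sketch (not from the article; the message and channel rate are made-up values) that estimates the Shannon entropy of a message from its symbol frequencies and compares the resulting bit count to an assumed channel capacity.

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Entropy in bits per symbol: H = -sum(p_i * log2(p_i))."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# A message with repeated structure has lower entropy than a random-looking one.
msg = "the cat sat on the mat"
h = shannon_entropy(msg)      # bits per character
total_bits = h * len(msg)     # bits needed, in the idealized limit

# Hypothetical channel that delivers 100 bits per second.
capacity_bits_per_sec = 100
print(f"entropy: {h:.2f} bits/char, ~{total_bits:.0f} bits total")
print(f"minimum transmission time: {total_bits / capacity_bits_per_sec:.2f} s")
```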

“Shannon showed that if the conversation is one-way, where only one party speaks, then you can compress the data optimally,” meaning with the least possible amount of entropy, said Omri Weinstein, a theoretical computer scientist at the Hebrew University of Jerusalem and one of Braverman’s first doctoral students at Princeton.
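
One concrete way to see that optimality (an illustrative sketch, not anything from Weinstein or Braverman): Huffman coding, a classic prefix code built from the symbol frequencies, compresses a one-way message to an average length within one bit per symbol of its entropy.

```python
import heapq
from collections import Counter
from math import log2

def huffman_code(message: str) -> dict:
    """Build a prefix code whose average length is within
    one bit per symbol of the Shannon entropy."""
    counts = Counter(message)
    # Heap entries: (frequency, tiebreaker, {symbol: partial codeword}).
    heap = [(freq, i, {sym: ""}) for i, (sym, freq) in enumerate(counts.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        f0, _, zeros = heapq.heappop(heap)
        f1, _, ones = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in zeros.items()}
        merged.update({s: "1" + c for s, c in ones.items()})
        heapq.heappush(heap, (f0 + f1, next_id, merged))
        next_id += 1
    return heap[0][2]

msg = "abracadabra alakazam"
code = huffman_code(msg)
counts, n = Counter(msg), len(msg)
avg = sum(counts[s] * len(code[s]) for s in counts) / n
ent = -sum((c / n) * log2(c / n) for c in counts.values())
print(f"entropy: {ent:.2f} bits/symbol, Huffman average: {avg:.2f} bits/symbol")
```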

Shannon was thinking about noisy telephone lines. Braverman focused on extending Shannon’s work beyond one-way transmission. In particular, he wanted to think not only about how data is sent, but about how it’s shared and manipulated, as an algorithm does. He could see ways that Shannon’s ideas could be applied to the field of communication complexity, which asks: What is the communication cost of computation? This question more closely reflects what happens billions of times every day on the internet: Two parties, say a buyer and seller, communicate with each other to accomplish some transaction. They care only about the result — knowing how much money was paid, and to whom — not how they get there. Reaching that goal could require repeated back-and-forth messages. Braverman saw that the formalism of information theory could be adapted to the scenario, but it would be complicated. What math underlies this communication? At first this challenge suggested a set of discrete, approachable problems, not unlike the one-way situations that Shannon originally explored.
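
As a toy illustration of what "the communication cost of computation" means (a made-up example, not Braverman's formalism): each party holds private data, they exchange messages according to a protocol until the result is known, and the cost is the total number of bits sent.

```python
# Illustrative sketch: a buyer holds a private bid, a seller holds a private
# asking price, and together they want to compute one bit -- does a deal
# happen? -- while we count the bits exchanged.

def deal_protocol(bid: int, ask: int, num_bits: int = 32):
    """Naive protocol: the buyer sends its entire bid, the seller replies with 1 bit."""
    transcript_bits = 0

    # Buyer -> Seller: the bid, encoded in num_bits bits.
    transcript_bits += num_bits

    # Seller -> Buyer: a single bit announcing the outcome.
    deal = bid >= ask
    transcript_bits += 1

    return deal, transcript_bits

result, cost = deal_protocol(bid=120, ask=100)
print(f"deal: {result}, communication cost: {cost} bits")
# Communication complexity asks: what is the *least* number of bits any
# protocol must exchange, in the worst case, to compute the result?
```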

“Originally my motivation was to solve some problems, but I enjoy building theories more, so it turned out that there was a theory to be built,” Braverman said. Building a theory meant solving problems, devising proofs, finding connections to other areas, and recruiting graduate students to help. “I spent a good part of the next 10 years working on it.”

Finding Answers

Braverman’s biggest contribution was to build a broad framework articulating the general rules that describe the limits of interactive communication — rules that suggest new strategies for compressing and securing data when it’s sent online by algorithms.

Previous researchers had studied how two people might send information back and forth, especially in straightforward situations where one person might not know anything, or where the two had no overlapping information. In the 1970s, computer scientists studied and solved scenarios in which the two people have some overlapping information to begin with. But Braverman and his collaborators were the first to treat these exchanges as computational tasks, rather than as data transmission tasks.

For example: Imagine two people each have a list of animals and a protocol, which is a way to send messages back and forth. They want to determine what animals they have in common, if any, and how much effort it will take to figure that out. In this situation, each person has some information to start with (they know they’re talking about animals), and every message adds to that information. That increase in information is connected to the communication cost in bits, and a paper by Braverman and Rao in 2011 made that connection tighter than ever before.
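
One standard way to state that connection (a formalization drawn from the information-complexity literature, paraphrased here rather than quoted from the paper): if the two parties’ inputs X and Y are drawn from a distribution μ and Π denotes the transcript of messages exchanged by a protocol π, the internal information cost is

$$\mathrm{IC}_{\mu}(\pi) \;=\; I(\Pi ; X \mid Y) \;+\; I(\Pi ; Y \mid X),$$

the total amount each party learns about the other’s input from the conversation. Roughly speaking, Braverman and Rao showed that this quantity equals the amortized communication cost: the number of bits per copy needed to solve many independent copies of the problem approaches the information cost.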




