Friday, February 12, 2016

How many degrees of separation?

As many of you know, the good folks at Facebook have been investigating the "degrees of separation" issue in human social networks. They recently announced that a given Facebook member is separated from any other member by an average of 3.57 intermediaries (for me, the number is 2.95, which is rather astonishing given my hermit nature). Duncan Watts, a mathematician and sociologist who has studied this problem (Six Degrees: The Science of a Connected Age), has some interesting observations.

First, there is a semantic issue. X degrees of separation (as the matter is often put) corresponds to X-1 intermediaries: a chain running A to B to C to D puts A and D at three degrees of separation but only two intermediaries. That's mere semantics and implies, of course, that FB's announcement, which was stated in terms of intermediaries, corresponds to 4.57 degrees of separation. A second distinction, however, is much more interesting, namely that between the algorithmic and topological versions of the problem:
Second, though, Milgram’s experiment was a subtly but importantly different test than the one run by Facebook. Whereas the latter measured the length of the shortest possible path between two people — by exhaustively searching every link in the underlying Facebook graph — the former is simply the shortest path that ordinary people could find given very limited information about the underlying social network. There are, in other words, two versions of the small-world hypothesis — the “topological” version, which refers only to underlying network structure, and the “algorithmic” version, which refers to the ability of people to search this underlying structure. From these definitions, it follows that algorithmic (search) paths cannot be shorter than topological paths and are almost certainly longer. Saying that the world has gotten smaller because the shortest topological path length is 4.5 not 6 therefore makes no sense — because the equivalent number would have been smaller in Milgram’s day as well.
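To make the two versions concrete, here is a rough Python sketch of my own (nothing from Watts or Facebook; the toy network, its size, and the "hand it to whoever seems closest" rule are all illustrative assumptions). It builds a small ring network with a few random long-range links, then compares the true shortest path found by exhaustive breadth-first search (the topological question) with the path found by greedy forwarding, in which each node knows only its own neighbors and passes the message to whichever neighbor sits closest to the target, roughly in the spirit of Milgram's letter chains (the algorithmic question).

```python
import random
from collections import deque

def ring_small_world(n=200, k=2, extra_links=40, seed=1):
    """Toy network: a ring where each node knows its k nearest neighbours on
    each side, plus a handful of random long-range 'weak tie' shortcuts."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for d in range(1, k + 1):
            adj[v].add((v + d) % n)
            adj[(v + d) % n].add(v)
    for _ in range(extra_links):
        a, b = rng.sample(range(n), 2)
        adj[a].add(b)
        adj[b].add(a)
    return adj

def bfs_distance(adj, src, dst):
    """Topological version: exhaustive breadth-first search for the true
    shortest path length between src and dst."""
    seen, queue = {src: 0}, deque([src])
    while queue:
        v = queue.popleft()
        if v == dst:
            return seen[v]
        for w in adj[v]:
            if w not in seen:
                seen[w] = seen[v] + 1
                queue.append(w)
    return None  # unreachable

def greedy_search_distance(adj, src, dst, n):
    """Algorithmic version: each node sees only its own neighbours and forwards
    the message to whichever neighbour sits closest to dst on the ring."""
    ring = lambda a, b: min(abs(a - b), n - abs(a - b))
    current, hops = src, 0
    while current != dst:
        # the ring always offers a neighbour strictly closer to dst,
        # so this loop terminates
        current = min(adj[current], key=lambda w: ring(w, dst))
        hops += 1
    return hops

if __name__ == "__main__":
    n = 200
    adj = ring_small_world(n)
    src, dst = 0, n // 2
    print("topological (BFS) path length:   ", bfs_distance(adj, src, dst))
    print("algorithmic (greedy) path length:", greedy_search_distance(adj, src, dst, n))
```

The greedy route can only tie or exceed the BFS route, which is exactly the point: search paths cannot be shorter than topological paths and are almost certainly longer.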
And then there is this:
In a nutshell what we showed is that it is easy to turn a “large” world into a “small” one, just by adding a small fraction of random, long-range links, reminiscent of Mark Granovetter’s famous “weak ties.” The flip side of our result, however, is that once the world has already gotten small — as it was already by the 1960's — it is extremely hard to make it smaller. Obviously Facebook did not exist in 2003 so possibly since then something has indeed changed. But I suspect that the difference will be small.
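That "easy to make small, hard to make smaller" claim is easy to illustrate numerically. The snippet below is only an illustration, using the standard Watts–Strogatz model as implemented in the networkx library, with node count, neighborhood size, and rewiring probabilities chosen arbitrarily by me: it rewires a growing fraction p of a ring lattice's ties into random long-range links and reports the average separation.

```python
# Illustrative only: a Watts-Strogatz ring lattice in which a fraction p of the
# ties are rewired into random long-range links (the "weak tie" shortcuts).
import networkx as nx

n, k = 1000, 10  # 1000 people, each initially tied to their 10 nearest neighbours
for p in (0.0, 0.01, 0.1, 1.0):
    G = nx.connected_watts_strogatz_graph(n, k, p, seed=42)
    avg = nx.average_shortest_path_length(G)
    print(f"fraction of ties rewired = {p:<4}  average separation = {avg:.2f}")
```

In runs like this, the average separation falls from tens of steps at p = 0 to something close to the fully rewired value once only a few percent of the ties have become shortcuts; the first few long-range links do nearly all of the work, and after that the world is already about as small as it is going to get.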
So?
Why does any of this matter? There are three reasons. First, the two versions of the small-world hypothesis — topological and algorithmic — are relevant to different social processes. The spread of a sexually transmitted disease along networks of sexual relations, for example, does not require that participants have any awareness of the disease, or intention to spread it; thus for an individual to be at risk of acquiring an infection, he or she need only be connected in the topological sense to existing infectives. On the contrary, individuals attempting to “network” — in order to locate some resources like a new job or a service provider — must actively traverse chains of referrals, and thus must be connected in the algorithmic sense. Depending on the application of interest, therefore, either the topological or algorithmic distance between individuals may be more relevant — or possibly both together. Second, whereas the topological hypothesis has been shown to apply essentially universally, to networks of all kinds, the algorithmic hypothesis is largely (although not exclusively) concerned with social networks in which human agents make decisions about how to direct messages.
