Pattern-detectors didn’t work out? Dig deeper, try cliodynamics. Cliodynamics didn’t work out? Well, keep trying.
There’s a certain problem associated with analysing social networks, and I don’t know if you’ve noticed it yet. It concerns the nominal inclusion of culture in a scientific analysis that falls short of being able to “game” cultural factors.
Most social networks, like communities in the real world, are parametrised in terms of simple factors, and they’re available to “game” on the web through platforms like Facebook.
On Facebook, you can “Like”, “Share”, and “Comment” in place of engaging with something, and you can choose different kinds of objects to engage with: images, status updates, questions you can answer, polls you can vote in, events you can attend, and so on.
All of this, like I said, is a simplification of what really goes on. Sitting in my chair and pecking away at my keyboard, I am constantly distracted. I can look out the window, I can push my door shut, I can put on some music, and so on. In other words, if I’m the nuclear unit, there’s a lot I can accomplish offline, some of which can be modeled somewhat accurately and published for others to interact with online.
As more such nuclear units assemble on one platform, the simplicity of the system, or the network, begins to get multiplied, stacked, and coupled.
It’s like a ball of yarn: first you start with one thread, then entwine it with another, and then another, and then another, until you’ve got different threads intertwined to different extents. Although all of them followed the same physical process, entwining, what you’ll actually observe is that they’ve become knotted.
Now, if you define twining as an involvement of two entities that can be undone by pulling at just one of them, and knotting as an involvement of two entities that can be undone only by pulling at both, then you’ll see that somewhere along the way, as simplicity stacked over itself, complexity emerged.
A social network will behave the same way as the ball of yarn.
Let’s say you’re developing a social networking platform with
· The parameters “age”, “height”, “country”, and “sex”,
· Whose values may be 18-99 (years), 50-250 (cm), one of ~200 countries, and M/F,
You end up with approximately 6.48 million distinct combinations to start with. Once all these people are on the platform, let’s say they can interact with
· The objects of type “image”, “audio”, “video”, “status”, and “event”,
· In terms of their option to “Like”, “Comment” or “Share”,
You end up with at least 15 different types of behaviour (five object types times three actions) for each user. So, imagine the truly mind-boggling number of patterns of online behaviour that could emerge from interactions on this platform. The point is that depending on some initial conditions (i.e., the “properties” of the set of people who make up this network) and some preferential attitudes amongst them (e.g., “21-year-olds are most likely to interact with other 21-year-olds”), when and how – but not what kind of – complexity emerges in the network is determinable to some extent.
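As a sanity check, the arithmetic behind these figures can be sketched quickly. One assumption of mine: the ranges must be rounded down to 81 ages and 200 heights to arrive at the 6.48 million figure, since counting both ranges inclusively (82 ages, 201 heights) would give roughly 6.59 million instead.

```python
# Back-of-the-envelope count of the platform's starting state space.
# Ranges rounded the way the 6.48 million figure implies.

ages = 99 - 18        # ~81 distinct ages (18-99 years)
heights = 250 - 50    # ~200 distinct heights (50-250 cm)
countries = 200       # roughly 200 countries
sexes = 2             # M/F

profiles = ages * heights * countries * sexes
print(f"{profiles:,} distinct profiles")   # 6,480,000

object_types = 5      # image, audio, video, status, event
actions = 3           # Like, Comment, Share
behaviours = object_types * actions
print(f"{behaviours} behaviour types per user")  # 15
```

Each additional parameter multiplies the count again, which is exactly the “simplicity stacking over itself” described above.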
Whether the nature of this complexity is of any real value is the problem I was talking about in the first line.
Such complexities are emergent phenomena: they cannot be deliberately crafted but instead arise from “multiply simple” systems. This is because the phenomena are only described as being complex; their internal structure, if unknotted, may be understood one knot at a time. What’s interesting (and of note) is that truly complex structures can only emerge, and because they are exclusively emergent, they evade predictability very well.
Thus, even in the social network I described above, the probability of a particular complex structure emerging instead of all the other possibilities is quite low – especially given that the system starts out simple, with 6.48 million distinct user profiles and a total distinct-choice volume of at least 194.4 million. Superimposed on top of this is a layer of factors determined by reality and circumstance, which further multiplies the complex phenomena.
My point is this: how can one determine whether any information gleaned from this mass of data will be useful?
On a (tenuously) related note, consider this piece in 'The Economist', which states that 2012 had the lowest number of air-travel accidents since 1945, and that such “safe performance” fares much better than that of travel by train or car. While the piece goes on to illustrate beautifully the many lurking fallacies in advertising air travel as such, one point is of value to us: “The accident rate for the airline industry is now so low that someone taking a flight a day could theoretically expect 14,000 years of trouble-free flying.”
That’s a useful bit of information: it makes people feel safe. However, you know that when you board a plane, that number makes no difference to you, because you suddenly realise that it’s the circumstances of the present that matter. The same is true of the stock market, although in a different context: it offers large amounts of predictable content and trends, but when the market crashes, the crash is usually compared to previous crashes, patterns are picked out, and analyses are drawn up to insure against risk in people’s minds so that they may feel comfortable about investing again.
The Dow Jones Industrial Average has always tracked the US economy closely, and the DJIA plot above shows that it has been evidently cyclical, only operating at different volumes, over the last century.
In our social-network, too, the existence of patterns (we needn’t recognise them but only know that they’re there) serves to fuel a notion that the world of social interactions can be modeled to generate useful knowledge that can then be applied to other fields. But is this really possible? The sheer number of possibilities, and then the further inclusion of changes due to emergent phenomena, is capable of rendering any conclusion firmly subjective and almost completely irreproducible.
In other words, if you can’t predict the probability of pertinent issues occurring on a circumstantial basis, then is a field of study founded on that assumption meaningful when it has no tools to offer other than picking out patterns in hindsight? Remember watching Moneyball and thinking “Can baseball really be reduced to some numbers?”*
One such field of study is cliodynamics. According to Wikipedia, it is “a new multidisciplinary area of research focused at mathematical modeling of historical dynamics.” It gets its name from Clio, one of the nine Muses of Greek lore. Cliodynamics takes it upon itself to mathematically model long-term trends in human history, so as to put mankind in a position to predict future occurrences of those trends.
Forgive me at this point: my undergraduate degree was in mechanical engineering, and I can’t help but look at this self-proclaimed purpose thermodynamically. Cliodynamics treats the entire world as one system, bringing to mind the words of Immanuel M. Wallerstein, the noted American sociologist.
“We must invent a new language [to encompass] three supposedly distinctive arenas [of society, economics, and politics.] … One question, therefore, is whether we will be able to justify something called social science in the twenty-first century as a separate sphere of knowledge.”
Herein lies the answer. In thinking of social phenomena in terms of numbers, we are inherently attempting to quantify social interactions in the same way we have quantified machines. This is not possible because there is a lot of emergent information that we cannot parametrise. However far cliodynamics may go in quantifying historical events in terms of parameters, its inability to model, or rather remodel, emergent phenomena will always leave it lacking in prophetic power.
It’s like you’re digging for gold at a particular spot. You know for some reason that that’s where all the gold is, but for every foot you dig and don’t find anything, you’re convinced that you haven’t dug deep enough. Similarly, if something eludes capture in a logical framework, we simply think we haven’t gone far enough. Social-media predictions didn’t work out? Dig deeper, try large-scale network analysis and pattern detection. Pattern-detectors didn’t work out? Dig deeper, try cliodynamics. Cliodynamics didn’t work out? Well, keep trying while I push math to the next meaningless level.
Right now, I still can’t find it in me to be completely unsupportive of “digging deeper”. How can you ever know if you’ve dug deep enough?
In the world-systems analytical framework Wallerstein was talking about, we are quantifying certain aspects that are essentially cultural, and the shortcomings of our language leave out important bits of information. This language can be any logical framework – such as Tamil, English, C++ or Boolean algebra. What we’re performing is essentially a cultural analysis – of history, of the social network – and we’re refusing to include culture, which is evidently emergent.
For instance, consider this pictographic representation of emotions that don’t find exact expression in the English language (compiled and visualised by Pei-Ying Lin; source: Parrott, W. (2001), ‘Emotions in Social Psychology’, Psychology Press, Philadelphia).
As a result, we’re treating the social sciences as something scientific, and trying to make history useful in terms of forecasting the nature of long-term trends, when history is irreconcilably resistant to being used that way.