Niculae, Vlad, et al. “Linguistic harbingers of betrayal: A case study on an online strategy game.” arXiv preprint arXiv:1506.04744 (2015).
The paper discusses a very interesting research question, that of friendships, alliances and betrayals. The key idea is that between a pair of allies, conversational attributes such as positive sentiment, politeness, and focus on future planning can foretell the fate of the alliance (i.e., whether one of the allies will betray the other). Niculae et al. analyze 145K messages exchanged between players across 249 online games of “Diplomacy” (a war-themed strategy game) and train two classifiers: one that distinguishes betrayals from lasting friendships, and another that distinguishes the seasons preceding the last friendly interaction from earlier seasons.
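To make the setup concrete, the sketch below shows in Python (scikit-learn) how dyad-level linguistic cues of the kind the authors describe, such as sentiment, politeness, and future-planning markers, could feed a binary betrayal classifier. The feature names and toy data here are my own hypothetical illustration, not the authors' actual feature set or pipeline.

```python
# Hypothetical sketch: classifying betrayal vs. lasting friendship from
# dyad-level linguistic cues (sentiment, politeness, planning markers).
# Feature names and data are illustrative, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Each row: [avg_positive_sentiment, avg_politeness, future_planning_rate]
# aggregated over one ally's messages in a season (toy values).
X = rng.normal(size=(200, 3))
# Label: 1 if the relationship ends in betrayal, 0 if it lasts (toy labels).
y = (X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

clf = make_pipeline(StandardScaler(), LogisticRegression())
clf.fit(X, y)

# Coefficients indicate how each cue shifts the predicted odds of betrayal.
print(dict(zip(["sentiment", "politeness", "planning"],
               clf[-1].coef_[0].round(2))))
```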
Niculae et al. do a good job of defining the problem in the context of Diplomacy, specifically the “in-game” aspects of movement, support, diplomacy, orders, battles, and acts of friendship and hostility. I feel that unlike the real world, a game environment leads to a very clear and unambiguous definition of betrayal and alliance. While this makes it easier to apply computational tools like machine learning for making predictions in such environments, the developed approach might not be readily applicable to real-world scenarios. While discussing relationship stability in “Diplomacy”, the authors point out that the probability of a friendship dissolving into enmity is about five times greater than that of hostile players becoming friends. I feel this statistic is very much context dependent and might not carry over to similar real-world scenarios. Additionally, there seems to be an implicit “in-game” incentive for deception and betrayal (“solo victories” being more prestigious than “team victories”). The technique described in the paper only uses linguistic cues within dyads to predict betrayal, but there might be many other factors leading to a betrayal. Although difficult, it might be interesting to see whether the deceiving player is actually being influenced by another player outside the dyad (perhaps by observing the betrayer’s communication with other players). There might also be other “in-game” reasons to betray, for example, one of the allies becoming too powerful (the fear of a powerful ally taking over a weak ally’s territory might drive the weak ally to betray first). The point being, only looking at player communication might not be a sufficient signal for detecting betrayal, even more so in the real world.
Also, there are many other aspects of communication in the physical world, such as body language, facial expressions, gestures, eye contact, and tone of voice. These verbal and non-verbal cues are seldom captured in computer-mediated textual communication, although they might play a big role in decision making and in acts of friendship as well as betrayal. I feel it would be really interesting if the study could be repeated on a cooperative game that supports audio/video communication between players instead of only text. Also, I believe the “clock” of the game, i.e., the time available to finish a season and make decisions, is very different from the real world. The game might afford the players a lot of time to deliberate and choose their actions; in the real world, one may not have this privilege.
Additionally, the accuracy of the logistic-regression-based classifier discussed in Section 4.3 is only 57% (5% higher than chance), and I feel this might be because of under-fitting; hence it might be interesting to explore other machine learning techniques for classifying betrayals using linguistic features. While the study tries to address a very important and appealing research question, I feel it is quite difficult to predict lasting friendships, eventual separations and unforeseen betrayals (even in a controlled virtual game), principally because of inherent human irrationality and strokes of serendipity.
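As a rough illustration of that suggestion, the sketch below compares the cross-validated accuracy of a logistic regression against a more flexible gradient-boosting model on the same kind of hypothetical linguistic features. It is a sketch under assumed toy data, not a reproduction of the paper's experiment; a more flexible model only helps if the cues carry non-linear signal that the linear model misses.

```python
# Hypothetical sketch: comparing a linear classifier with a more flexible
# model on toy linguistic features, to probe the under-fitting concern.
# Data and features are illustrative, not the paper's dataset.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))                               # toy cue features per dyad-season
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] > 0).astype(int)   # toy non-linear signal

for name, model in [("logistic regression", LogisticRegression()),
                    ("gradient boosting", GradientBoostingClassifier())]:
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2f}")
```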