This paper describes a study that uses the online game Diplomacy to ask whether betrayal can be detected in advance from the wording of messages exchanged between the betrayer and the victim. After explaining the game and its interactions, the authors describe their methodology and present their findings. Of most interest to me were the findings that the betrayer was more likely to express positive sentiment before the betrayal, that an imbalance in the number of exchanged messages also plays a role, that future betrayers don’t plan as far ahead as their victims (based on linguistic analysis of in-game discussion of future plans), and that a computational model could predict a betrayal in advance more accurately than human judges could.
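To make the prediction setup concrete, here is a minimal sketch (not the authors’ code) of how one might train a classifier on the kinds of conversational features the paper highlights. Everything below is invented for illustration: the synthetic data, the feature names, and the effect sizes; the paper’s actual feature set and model differ.

```python
# Hypothetical sketch: a simple betrayal classifier over per-relationship
# conversational features loosely mirroring the cues the paper reports
# (sentiment, politeness, message imbalance, discussion of future plans).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200
betray = rng.integers(0, 2, size=n)          # 1 = alliance ends in betrayal
sentiment  = rng.normal(0.2 * betray, 1.0)   # betrayers slightly more positive
politeness = rng.normal(0.2 * betray, 1.0)   # betrayers slightly more polite
imbalance  = rng.normal(0.3 * betray, 1.0)   # betrayers send more messages
planning   = rng.normal(-0.2 * betray, 1.0)  # betrayers discuss the future less
X = np.column_stack([sentiment, politeness, imbalance, planning])

clf = LogisticRegression()
# Cross-validated accuracy; the interesting comparison in the paper is
# against a human baseline near chance, not against perfect prediction.
print(cross_val_score(clf, X, y=betray, cv=5).mean())
```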
As with some of the earlier papers in this course, I appreciated that the authors included descriptions of the tools they used, including the Stanford Sentiment Analyzer and the Stanford Politeness classifier. I don’t anticipate using either of those in our course project, but it is still nice to know that they exist for potential future projects.
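For anyone curious what that tooling looks like in practice, here is a minimal sketch of sentence-level sentiment scoring. The original Stanford Sentiment Analyzer ships with the Java CoreNLP toolkit; I am assuming Stanza, the Stanford NLP group’s Python library, which bundles a comparable sentiment model. The example message is my own invention.

```python
# A minimal sketch, assuming Stanza (the Stanford NLP Python library);
# the in-game message below is invented for illustration.
import stanza

stanza.download('en')  # one-time model download
nlp = stanza.Pipeline('en', processors='tokenize,sentiment')

doc = nlp("I completely trust you. Let's support each other again this turn.")
for sentence in doc.sentences:
    # sentiment: 0 = negative, 1 = neutral, 2 = positive
    print(sentence.text, sentence.sentiment)
```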
The authors don’t argue that their findings are fully generalizable, but they do claim that their framework can be extended to a broad range of social interactions. I didn’t find that claim well substantiated. In Diplomacy, a betrayal is a single obvious action in which a pair of allies is suddenly no longer allied. Betrayals in many human relationships, however, are more nuanced than a single action and often unfold over longer timescales. I’m not certain how well the framework would apply to such circumstances, where far more than a few lines of text precede the betrayal.
I appreciated the note in the conclusion that the authors do not expect betrayal detection to be solvable with high accuracy, since a highly accurate detector would amount to a “recipe” for avoiding betrayal in relationships. I hadn’t thought about it that way when reading through their results, but it makes sense. I wonder how far that logic extends to other problems in computational social science: how many problems are computationally unsolvable simply because solving them would violate some common aspect of human behavior?