Reflection #7 – [02/13] – [Hamza Manzoor]

[1]. Niculae, V., Kumar, S., Boyd-Graber, J., & Danescu-Niculescu-Mizil, C. (2015). Linguistic harbingers of betrayal: A case study on an online strategy game. arXiv preprint arXiv:1506.04744.

This research paper explores linguistic cues in the strategy game ‘Diplomacy’ to examine patterns that foretell betrayal. In Diplomacy, each player chooses a country and tries to win the game by capturing territories from other players. Players form alliances and break them, sometimes through betrayal. The authors try to predict a possible betrayal based on sentiment, politeness, and other linguistic cues in the conversations between players. They collected data from two online platforms, comprising 145k messages from 249 games. They predict betrayal with 57% accuracy and report several interesting patterns, for example that the eventual betrayer expresses more positive sentiment before the betrayal, uses fewer planning markers, and behaves more politely.

I thoroughly enjoyed reading this paper… until Section 4, where the authors explain their modeling. I felt that the modeling was either poorly performed or poorly explained. All they say is that “expert humans performed poorly on this task,” but what did those experts actually do, and what does “poorly” mean? They then built a logistic regression model after univariate feature selection, and their best model achieves a cross-validation accuracy of 57% and an F1 score of 0.31. What is this “best model”? They never explain it or which features went into it. Secondly, is 57% accuracy good enough? A simple coin toss would give us 50%. They also report various findings about the eventual betrayer, such as being more polite before the betrayal, but what about the cases where betrayal does not happen? I felt they only explained the statistics for cases with betrayals.
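To make my confusion concrete, here is a minimal sketch of what “logistic regression after univariate feature selection” with cross-validation might look like. This is not the authors’ actual pipeline: the data are synthetic and the feature names are placeholders I chose to mirror the cues discussed in the paper.

```python
# A minimal sketch (not the authors' pipeline) of univariate feature
# selection followed by logistic regression, evaluated with cross-validation.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
feature_names = ["positive_sentiment", "politeness", "planning_markers",
                 "discourse_markers", "n_messages", "n_requests"]
X = rng.normal(size=(500, len(feature_names)))   # per-relationship features (synthetic)
y = rng.integers(0, 2, size=500)                 # 1 = relationship ended in betrayal

model = Pipeline([
    ("select", SelectKBest(f_classif, k=4)),     # univariate feature selection
    ("clf", LogisticRegression(max_iter=1000)),
])

scores = cross_validate(model, X, y, cv=5, scoring=["accuracy", "f1"])
print("accuracy: %.2f  F1: %.2f" % (scores["test_accuracy"].mean(),
                                    scores["test_f1"].mean()))
```

On random data like this the accuracy hovers around 50%, which is exactly why I would have liked the paper to report a chance or majority-class baseline alongside the 57% figure.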

Finally, can we generalize the results of this research? I can claim with near certainty that everyone will say “NO,” because human relations are much more complex than simple messages of attack or defense. I appreciate that the authors acknowledge they do not expect betrayal detection to be solvable with high accuracy. But supposing 57% is significant enough, can we generalize it to real-world scenarios similar to Diplomacy? For example, a betrayal by an office colleague who works on the same project as you but takes all the credit to gain a promotion. Can we detect that kind of betrayal from linguistic cues? Can we replicate this research in similar real-life scenarios?


Reflection #3 – [02/13] – [Anika Tabassum]

Summary:

The paper identifies the linguistic and discourse cues that generally occur before a betrayal. Niculae et al. study user behavior in a popular online diplomacy game and analyze the context, sentiment, language, and discourse used by both parties (victim and betrayer) across different seasons of the game. They observe that betrayals generally occur when an imbalance in sentiment, politeness, and discourse markers arises between the parties. This study is a first step toward a model that can forecast betrayals and help us understand friendships in diplomatic relations in real life.

 

Analysis:

This paper tries to find the impact of linguistic cues before the occurrence of betrayal. To understand this, the authors compute the sentiment, discourse, and politeness of the users on both sides of an online relationship. I like the idea of the paper and the analysis showing how an imbalance in the communication between two parties precedes a betrayal by one of them. My question is: will this also be applicable in real life? Online games are played mostly by inexperienced teenagers who know very little about diplomacy and complex relationships. As a future research direction, we could also think of building a model on the series of events in historical diplomatic records to predict betrayals in diplomatic and political relations in real life.


Reflection #7 – [02-13] – [Patrick Sullivan]

“Linguistic Harbingers of Betrayal: A Case Study on an Online Strategy Game” by Niculae et al. investigates how language can be used to predict the future interactions and choices of users within an online game.

I wonder whether these results can go beyond online games to in-person, everyday interactions between friends. Many people are relatively unconcerned with the game’s outcome, either because they are non-competitive by nature or because they have a separate motivation for playing. These players would act quite differently when placed in a situation that demands more commitment. I feel the authors hoped this research would extend beyond the game table, but I do not see a strong connection, and I think a new study of real-world relationships would be needed to find linguistic patterns that can be safely generalized.

There is also the question of whether these findings can be extended to other games. Many games do not have a “Prisoner’s Dilemma” setup and therefore would not entice players to betray one another. Even for games that do have such a win-lose structure, games are not always played between strangers or anonymously. An interesting demonstration of how trust evolves naturally from simple systems operating over long periods of time can be seen in “The Evolution of Trust”, an interactive graphic by Nicky Case. It shows how players who cooperate in games actually fare better in the long run than players who do not. Perhaps the same kind of small simulation that “The Evolution of Trust” uses could be applied to simulate a multitude of Diplomacy games and see whether the results match up with human behavior. These simulations could even account for miscommunication or overall player strategies. This might offer an alternative perspective on the original paper’s findings by suggesting the type of communication and strategy that would most benefit Diplomacy players, and also verify whether real Diplomacy games and players can be effectively simulated in order to find better modes of communication (whether by building more trust or by betraying more).
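As a rough illustration of the kind of simulation I have in mind, here is a toy sketch in the spirit of “The Evolution of Trust”: a repeated cooperate/defect game with a miscommunication probability. The payoffs and strategies are my own illustrative assumptions, not anything taken from the paper or from Nicky Case’s piece.

```python
# A toy repeated cooperate/defect tournament with noisy moves ("miscommunication").
import itertools
import random

PAYOFF = {("C", "C"): (2, 2), ("C", "D"): (-1, 3),
          ("D", "C"): (3, -1), ("D", "D"): (0, 0)}

def always_cooperate(history):            # history = opponent's past moves
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    return history[-1] if history else "C"

def play(s1, s2, rounds=50, noise=0.05, rng=random.Random(0)):
    """Play repeated rounds; 'noise' flips a move, modeling miscommunication."""
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)
        if rng.random() < noise: m1 = "D" if m1 == "C" else "C"
        if rng.random() < noise: m2 = "D" if m2 == "C" else "C"
        p1, p2 = PAYOFF[(m1, m2)]
        score1, score2 = score1 + p1, score2 + p2
        h1.append(m1); h2.append(m2)
    return score1, score2

strategies = {"cooperate": always_cooperate, "defect": always_defect,
              "tit_for_tat": tit_for_tat}
totals = {name: 0 for name in strategies}
for (n1, f1), (n2, f2) in itertools.combinations(strategies.items(), 2):
    a, b = play(f1, f2)
    totals[n1] += a; totals[n2] += b
print(totals)   # round-robin totals across pairings
```

A fuller version would add more strategies, replicate each pairing many times, and let strategy populations evolve over generations, which is where the cooperation-wins effect shows up most clearly.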

I highly recommend taking the time to view “The Evolution of Trust”, as it is a great demonstration of some core facets of sociology and communication, and is applicable to everyone, not just computer scientists.


Reflection #7 – [02/13] – Aparna Gupta

Niculae, Vlad, et al. “Linguistic harbingers of betrayal: A case study on an online strategy game.” arXiv preprint arXiv:1506.04744 (2015).

This research paper explores linguistic cues in ‘Diplomacy’, a strategy game in which players form alliances and break those alliances through betrayal. The authors try to predict a possible betrayal based on the following attributes: positive sentiment, politeness, and structured discourse. However, in my opinion, there can be other factors, such as a player’s body language and facial expressions, that could also signal a possible betrayal.

The authors collected data from two online platforms; the dataset comprises 145k messages from 249 games. Diplomacy is unique in that all players submit written orders that are executed simultaneously; there is no randomness. Hence the outcome of the game depends only on the players’ communication, cooperation, and moves.

In Section 3 of the paper, the authors discuss relationships and stability and how interactions within the game define the relationship between players. They use external tools for sentiment analysis and politeness classification, and they build a binary classifier to predict whether a player is going to betray another player. Such computations might give satisfactory results in a game scenario; however, they cannot easily be extended to real-life scenarios.
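For context, here is a minimal sketch of how raw messages could be turned into the kind of per-player conversational features such a classifier would consume. VADER stands in for the Stanford tools the authors actually used, and politeness_score() is a hypothetical placeholder rather than a real politeness classifier.

```python
# A minimal sketch of per-message scoring and per-season aggregation.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)   # lexicon needed by VADER
sia = SentimentIntensityAnalyzer()

def politeness_score(message: str) -> float:
    """Placeholder: plug in a real politeness classifier here."""
    return 0.0

def season_features(messages):
    """Average sentiment/politeness over one player's messages in a season."""
    if not messages:
        return {"sentiment": 0.0, "politeness": 0.0, "n_messages": 0}
    sent = sum(sia.polarity_scores(m)["compound"] for m in messages) / len(messages)
    pol = sum(politeness_score(m) for m in messages) / len(messages)
    return {"sentiment": sent, "politeness": pol, "n_messages": len(messages)}

# The paper's key signal is the *imbalance* between the two sides of a dyad:
betrayer = season_features(["Great plan, thank you so much!", "Sounds perfect."])
victim = season_features(["Let's move on Munich next turn, then Berlin."])
imbalance = {k: betrayer[k] - victim[k] for k in betrayer}
print(imbalance)
```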

In the end, the paper explores relationships in a war-based strategy game, which doesn’t quite map onto the real world and feels rather unrealistic.


Reflection #6 – [02/13] – [Md Momen Bhuiyan]

Paper: Linguistic Harbingers of Betrayal: A Case Study on an Online Strategy Game

Summary:
This paper tries to find linguistic cues that can predict “betrayal” in an online game called “Diplomacy”. Diplomacy is an online war-strategy game with a somewhat unusual order of actions: all players perform their moves at the same time. This makes it very similar to the Prisoner’s Dilemma, and for this reason players make and break alliances. The main communication medium in the game is messaging. The authors collected messages between users to find out what kind of language cues players use before betraying an alliance. They found that betrayers express more positive sentiment and politeness, but less argumentation and planning, in their messages. Based on these attributes, the authors build a model to predict betrayal that achieves about 57% cross-validation accuracy.

Reflection:
The authors’ choice of Diplomacy was a good source for analyzing the interaction between betrayer and victim. The authors are very careful about the effect of time on the relationship, which is surprising given that they simply ignore the status of the game when predicting betrayal. One result of the study that doesn’t make sense to me is that betrayers plan less while victims plan more. This is counter-intuitive in the sense that if the victim plans more, they should have a better grasp of the different situations in the game. This raises the question: what is the effect of a player’s level of experience on betrayal? It seems unlikely that a novice player would betray their alliances. Another noticeable issue in the paper’s tables is the absence of coefficient values showing the predictive power of the positive and negative features. Although this paper provides some interesting insight into the behavior of a betrayer, it doesn’t seem to have any direct application in real life.
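As an illustration of the kind of coefficient table I would have liked to see, here is a brief sketch that fits a logistic regression on standardized features and ranks them by sign and magnitude. The data and feature names are synthetic placeholders, not the paper’s.

```python
# Fit a logistic regression and inspect signed coefficients per feature.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
names = ["positive_sentiment", "politeness", "planning_markers",
         "argumentation", "n_messages"]
X = rng.normal(size=(400, len(names)))     # synthetic per-relationship features
y = rng.integers(0, 2, size=400)           # 1 = relationship ended in betrayal

X_std = StandardScaler().fit_transform(X)  # comparable coefficient magnitudes
clf = LogisticRegression(max_iter=1000).fit(X_std, y)

for name, coef in sorted(zip(names, clf.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name:20s} {coef:+.3f}")       # sign shows direction of prediction
```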


Reflection #7 – [02/13] – Jiameng Pu

Niculae, V., Kumar, S., Boyd-Graber, J., & Danescu-Niculescu-Mizil, C. (2015). Linguistic harbingers of betrayal: A case study on an online strategy game. arXiv preprint arXiv:1506.04744.

Summary:

The paper explores linguistic cues that indicate fickle interpersonal relations, such as close friends becoming enemies. Since data defining relationships between friends or enemies are not widely accessible, the researchers turn to a war-themed strategy game in which friendships and betrayals are orchestrated primarily through language. By studying dyadic interactions in the game and analyzing the language used when players form alliances and betray each other, they characterize subtle signs of imminent betrayal in players’ conversations and examine temporal patterns that foretell betrayal. From the conversation scenarios in Diplomacy (the war-themed strategy game), we can see that betrayers unconsciously reveal their planned treachery, while the eventual victims are rarely able to notice these signals.
They find that sudden changes in the balance of conversational attributes such as positive sentiment, politeness, and structured discourse signal imminent betrayal. The researchers then provide a framework for analyzing communication patterns and explore linguistic features that are predictive of whether friendships will end in betrayal. They also discuss how to generalize the methods to other domains and how automatically predicting relationships between people can help advance the study of trust and relationships using computational linguistics.

Reflection:

“Despite people’s best effort to hide it, the intention to betray can leak through the language one uses.” This reminds me of a related idea: we might use the same strategy to detect people’s relationship patterns. Under normal circumstances, people relate to each other in different patterns. For example, some relationships are balanced, like friends, colleagues, and relatives, while others are unbalanced, like leaders and subordinates or professors and students. From the content of conversations, we could extract the same kinds of linguistic features, such as sentiment, argumentation, discourse, politeness, and talkativeness, to predict the likely relationship between people. By analyzing patterns of interpersonal relationships, we could gain a deeper understanding of the current state of people’s lives and whether those patterns change over time, which is a broader macro-sociological question.
Apart from detecting relationship patterns, we might use specific semantic features to study other information hidden in conversations, such as the trust, familiarity, and intimacy between people.
Since the paper’s intuition is that a stable relationship should be balanced, it makes sense that its predictions of betrayal are based on signals of imbalance in the dyad’s communication patterns. However, my concern is whether the mentioned semantic features provide a complete and efficient predictive analysis. Are there other usable properties, e.g., humor or straightforwardness? Alternatively, in addition to detecting the imbalance between the two sides of a conversation, we could analyze changes in the betrayer’s own speech over time, i.e., the imbalance in their speech patterns before and after the decision to betray. This could address tricky cases where, for example, one person is simply less polite by nature than the other party in the conversation.
In addition to logistic regression, which is often used for binary classification, the support vector machine (SVM) is another classic classification algorithm. Since they have different strengths, we could design a controlled comparison to choose the better classifier. Similarly, the semantic features could be varied in a controlled manner to select the combination of linguistic features that is most effective for predicting betrayal.
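Below is a minimal sketch of the controlled comparison suggested above, assuming the per-dyad features have already been extracted: the same synthetic data evaluated with logistic regression and an SVM under identical cross-validation folds.

```python
# Compare logistic regression and an RBF-kernel SVM on the same CV folds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))             # placeholder linguistic features
y = rng.integers(0, 2, size=300)          # 1 = betrayal

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
models = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "svm_rbf": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv, scoring="f1")
    print(f"{name:20s} mean F1 = {scores.mean():.2f}")
```

The same loop could be wrapped around different feature subsets to run the controlled feature-combination study in a single experiment.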


Reflection #7 – [02/13] – Vartan Kesiz-Abnousi

Review:

Niculae, Vlad, et al. “Linguistic harbingers of betrayal: A case study on an online strategy game.” arXiv preprint arXiv:1506.04744 (2015).

Summary

The authors are trying to find linguistic cues that can signal betrayal. To this end, they use online data from a game called “Diplomacy”, in which the users are anonymous. The dataset is ideal in that friendships, betrayals, and enmities are formed and there is a great deal of communication between the users. The authors are interested in dyadic communication, i.e., between two people. This textual communication might provide verbal cues of an upcoming betrayal. Indeed, the authors find that certain linguistic cues, such as politeness, precede betrayals.

Reflections

I have never played the game, and I had to read the rules in order to understand it. The research focuses on dyadic communication and betrayal. But are the conversations public or private? Can players see you communicating with someone? To get a better understanding I read the rules of the game, and the answer is as follows: “In the negotiation phase, players communicate with each other to discuss tactics and strategy, form alliances, and share intelligence or spread disinformation about mutual adversaries. Negotiations may be made public or kept private. Players are not bound to anything they say or promise during this period, and no agreements of any sort are enforceable.” [1]. So there is an extra layer of choice besides communication, namely whether the negotiations are private or public. These choices might not be captured in dyadic communication.

The authors make a serious point with respect to game-theoretic models that attempt to capture decision making and interactions. However, I believe the authors’ approach and the game-theoretic approaches are complementary and do not necessarily contradict each other. Decision theory is a highly abstract, mathematical discipline, and its models are rigorous in the sense that they follow the scientific method of the hard sciences. In addition, just for clarification, the vanilla Prisoner’s Dilemma is not a “repeated game”, as the authors stress. It can, however, be formulated as a repeated game, and the Nash equilibrium changes when that happens. Something that should be stressed is that Diplomacy, as studied here, is not a repeated game either: the online game was not played repeatedly by the same players over and over again. Had the players played the game repeatedly, they might eventually have changed their behavioral strategies, which in turn might have affected the linguistic cues. In game theory, repeated games, i.e., games with the same agents and rules played over and over again, have different equilibria than static games. For this reason, I am not sure the method can be generalized even for this particular game, let alone for other games where the rules are different.
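To illustrate the one-shot versus repeated distinction, here is a small sketch (my own illustration with made-up payoffs, not anything from the paper) showing that in a single round of the Prisoner’s Dilemma, defection is the best response to either move, whereas repeated play opens the door to history-dependent strategies.

```python
# One-shot Prisoner's Dilemma: defection dominates, so (D, D) is the equilibrium.
PAYOFF = {  # (row player's payoff, column player's payoff)
    ("C", "C"): (2, 2), ("C", "D"): (-1, 3),
    ("D", "C"): (3, -1), ("D", "D"): (0, 0),
}
MOVES = ("C", "D")

def best_response(opponent_move):
    """Row player's payoff-maximizing move against a fixed opponent move."""
    return max(MOVES, key=lambda m: PAYOFF[(m, opponent_move)][0])

for opp in MOVES:
    print(f"against {opp}: best response is {best_response(opp)}")
# Prints 'D' in both cases, so the one-shot equilibrium is mutual defection.
# In a repeated game, strategies that condition on history (e.g. tit-for-tat)
# can sustain cooperation, which is why the equilibrium analysis changes.
```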

Idiosyncrasy of the game’s rules: in order to win, you eventually need to capture most of the territories, so the other players anticipate that you might eventually become their enemy. This naturally has an impact on the interactions. In the real world you don’t expect – at least I don’t – to be surrounded by enemies who want to “conquer your territories”. The game induces “betrayal incentives”. If we designed the game differently, it is likely that the linguistic features that signal “betrayal” would change. There is an entire field, mechanism design, dedicated to how changing the rules of a game yields different results (equilibria).

The authors focus on friendships that have at least two consecutive and reciprocated acts of friendship. Should all acts of friendship count the same way? Are some acts of friendship more important than other acts? In other words, should there be a “weight” on an act of friendship?

The authors focus on the predictive factors of betrayal. I wonder how we could use this to inform people on how to maintain friendships. The article makes the implicit assumption that friendships necessarily end due to betrayal. This is natural, because these terms are used in the context of “Diplomacy” (the game). In the real world, there could be many reasons why friendships end. It would be interesting to develop a predictive behavioral algorithm that predicts the end of friendships due to misunderstandings.

The authors are trying to understand the linguistic aspects of betrayal, and as a result they do not use game-specific information. However, if this information is not taken into account, the model is likely to be misspecified. By controlling for these effects, we could get a clearer picture of the linguistic aspects of betrayal.

Questions

  1. What if the players read this paper before they play the game? Would this change their linguistic cues?
  2. Should all acts of friendship count the same way? Are some acts of friendship more important than other acts?
  3. What if the authors did control for game-specific information as well? Would this alter the results? Based on some analyses of the same game, it seems that if you select certain countries, you will ultimately have to betray your opponent. For instance, “adjacency” is apparently an important factor that determines friendships and enmities. An adjacency map can be seen in the linked visualization. [2]
  4. What if the users knew each other and played the game again and again, making it a repeated game? Would this change the linguistic cues, once players have information about behavioral patterns from previous rounds?
  5. Can players visually see other players interacting and the length of that interaction? What if they can?

[1] Wikipedia

[2] http://vizual-statistix.tumblr.com/post/64876756583/i-would-guess-that-most-diplomacy-players-have-a


Reflection #7 – [02/13] – Ashish Baghudana

Niculae, Vlad, et al. “Linguistic harbingers of betrayal: A case study on an online strategy game.” arXiv preprint arXiv:1506.04744 (2015).

In very much the same vein as The Language that Gets People to Give: Phrases that Predict Success on Kickstarter by Mitra et al., the authors in this paper look for linguistic cues that foretell betrayal in relationships. Their research focuses on the online game Diplomacy, which is set in the pre-World War I era. An important aspect of this paper is understanding the game and its intricacies. Each player chooses a country, forms alliances with other players, and tries to win the game by capturing territories in Europe. Central to the game are these alliances and betrayals, and the conversations that happen when a player becomes disloyal to a friend.

The paper draws on prior research on extracting politeness, sentiment, and linguistic cues for several of its features, and it was instructive to see some of these social computing tools used in this research.

The authors find that there are subtle signs that predict betrayal, namely:

  1. An imbalance of positive sentiment before the betrayal, where the betrayer uses more positive sentiment;
  2. Less argumentation and discourse from the betrayer;
  3. Less planning markers in the betrayer’s language;
  4. More polite behavior from the betrayer; and
  5. An imbalance in the number of messages exchanged.

Intuitively, I can relate to observations #2, #3, and #5. However, positive sentiment and polite behavior would perhaps not indicate betrayal in an offline context. I do wish these results had been explained better, with more examples to show why they make sense.

I also felt that the machine learning model for predicting betrayal could have been described better. I could not immediately understand the feature extraction mechanism — were linguistic cues used as binary features or count features? Assuming it wasn’t a thin-slicing study and they used count features, did they normalize the counts by the number of times the two players spoke? Additionally, they compared the performance of their model against the players themselves (who were never able to predict a betrayal, i.e., their accuracy was 0%). While 0% -> 57% seems like a big jump, the machine learning model could have predicted at random and still obtained roughly 50% accuracy. This raises the question of how accurate the model really is and which features it found important.
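To make the two questions concrete, here is a hedged sketch, on synthetic counts, of the two encodings I have in mind (cue counts normalized by the number of messages versus binary presence indicators), along with the dummy baselines that the 57% figure should be compared against.

```python
# Rate features vs. binary features, each compared against dummy baselines.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 400
cue_counts = rng.poisson(2.0, size=(n, 5))        # e.g. planning markers, requests, ...
n_messages = rng.integers(5, 60, size=(n, 1))     # messages exchanged per dyad
y = rng.integers(0, 2, size=n)                    # 1 = betrayal

X_rate = cue_counts / n_messages                  # count features, normalized
X_binary = (cue_counts > 0).astype(int)           # binary presence features

for label, X in [("rate", X_rate), ("binary", X_binary)]:
    acc = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
    print(f"{label:7s} logistic regression accuracy = {acc:.2f}")

# With imbalanced classes, "predict the majority class" can already beat 50%,
# so a reported accuracy should be read against this kind of baseline too.
for strategy in ["most_frequent", "stratified"]:
    base = cross_val_score(DummyClassifier(strategy=strategy), X_rate, y, cv=5).mean()
    print(f"dummy ({strategy}) accuracy = {base:.2f}")
```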

Papers in computational social science often need to define (otherwise abstract) social constructs precisely, and quantitatively. Niculae et al. attempt to define friendships, alliances, and betrayals in this paper. While I like and agree with their definitions with respect to the game, it is important to recognize that these definitions are not necessarily generalizable. The paper studies a small subset of relationships online. I would be interested in seeing how this could be replicated for more offline contexts.


Reflection #7 – [02/13] – Pratik Anand

This paper is about decoding real-world human interaction using thought experiments and gaming scenarios. A game of Diplomacy, with its interactions, provides a perfect opportunity to learn about changes in player communication in relation to an oncoming betrayal.
Can betrayal really be automatically predicted from tonal changes in communication? Is it generalizable to other real-world scenarios, or even artificial scenarios like other games?
The paper shows that the game supports long-term alliances but also offers lucrative solo victories, leading to betrayals. Alliances break down over time, which is intuitive. The paper provides clear definitions of friendship, betrayal, and the parties involved (victim and betrayer) based on the communications as well as the game commands. The structure of the discourse provides enough linguistic clues to determine whether a friendship will last for a considerable period of time. The authors also fit a logistic regression model for predicting betrayal. A few questions arise: are the linguistic cues general enough, given that people talk differently even within a strictly English-speaking nation? A similar question applies to betrayal: betrayers are usually more polite before the act, but this could be specific to this game and may or may not apply elsewhere, even in other games.
The paper makes a point about sudden yet inevitable betrayal, where more markers are provided by the victim than by the betrayer: the victim uses more planning words and is less polite than usual. In the context of this game, long-term planning is a measure of trust, so can this be generalized to the conclusion that more trust results in inevitable betrayal? That seems far-fetched, even with plenty of anecdotal evidence.

Lastly, I believe the premise is highly unrealistic and not at all comparable to real-world scenarios. The proposition that betrayal can be predicted is also highly doubtful and cannot be relied upon for real-world communication. Moreover, since the system is based on linguistic evaluations, it can be gamed, making the prediction pointless.


Reflection #7 – [02/13] – [John Wenskovitch]

This paper describes a study that uses the online game Diplomacy to learn whether betrayal can be detected in advance through the choice of wording used in interactions between the betrayer and the victim.  After explaining the game and its interactions, the authors describe their methodology and list their findings.  Of most interest to me were the findings that the betrayer is more likely to express positive sentiment before the betrayal, that an imbalance in the number of exchanged messages also plays a role, that future betrayers don’t plan as far ahead as their victims (based on linguistic analysis of in-game future plans), and that it is possible to computationally predict a betrayal in advance more accurately than humans can.

As with some of the earlier papers in this course, I appreciated that the authors included descriptions of the tools they used, including the Stanford Sentiment Analyzer and the Stanford politeness classifier.  I don’t anticipate using either of those in our course project, but it is still nice to know that they exist for potential future projects.

The authors don’t argue that their findings are fully generalizable, but they do make a claim that their framework can be extended to a broad range of social interaction.  I didn’t find that claim well substantiated.  In Diplomacy, a betrayal is a single obvious action in which a pair of allies is suddenly no longer allied.  However, betrayals in many human relationships are often more nuanced than a single action, and often take place over longer timescales.  I’m not certain how well this framework will apply to such circumstances when much more than a few lines of text precede the betrayal.

I appreciated the note in the conclusion that the problem of identifying a betrayal is not a task that the authors expect to be solvable with high accuracy, as that would necessitate the existence of a “recipe” for avoiding betrayal in relationships.  I hadn’t thought about it that way when reading through their results, but it makes sense.  I wonder how fully that logic could be extended to other problems in the computational social science realm – how many problems are computationally unsolvable simply because solving them would violate some common aspect of human behavior?
