Reflection 11 – [Aparna Gupta]

[1] King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.

[2] Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

Reflection #1

Paper 1, by King et al., presents an interesting approach to reverse-engineering censorship in China. The experiment performed by the authors looks more like a covert operation to analyze how censorship works in China. King et al. created accounts on various social media websites and submitted posts from them to analyze whether they get censored or not. The authors even created their own website and conducted interviews. Their approach was unique and interesting. However, I was not convinced why the authors only considered posts submitted between 8 AM and 8 PM China time. What about content posted before 8 AM and after 8 PM? What I found interesting in the paper is the collective action hypothesis versus the state critique hypothesis. Non-familiarity with the language is a major drawback in understanding it. The authors report that Chinese social media organizations will hire 50,000–70,000 people to act as human censors, which is quite interesting, and quite few considering the number of internet users in China.

Reflection #2

Paper 2, by Hiruncharoenvate et al., presents a non-deterministic algorithm for generating homophones that create a large number of false positives for censors. They claim that homophone-transformed weibos posted on Sina Weibo remain on the site three times longer than their previously censored counterparts. The authors conducted two experiments: first, they posted original posts and homophone-transformed posts and found that although both were eventually deleted, the homophone-transformed posts stayed up three times longer; second, they showed that native Chinese speakers on AMT were able to understand these homophone-transformed weibos. I wonder how this homophone-transformation approach would work in other languages. The dataset consists of 11 million weibos collected from FreeWeibo. Of all the social science papers we have read so far, I found this paper the most interesting and its approach well structured. It would be interesting to implement this approach in other languages as well.

 


Reflection #11 – [03-27] – [Meghendra Singh]

  1. King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.
  2. Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

The first paper presents a large-scale experimental study of Chinese social media censorship. The authors created accounts on multiple social media sites and submitted various texts, while observing which texts got posted and which got censored. The authors also interviewed employees of a bulletin-board software company and other anonymous sources to get a first-hand account of the various strategies used by social media websites to censor certain content. This approach is analogous to reverse-engineering the censorship system; hence the title of the paper is appropriate. The key hypothesis that this study tries to prove is that of collective action potential, i.e., the target of censorship is people who join together to express themselves collectively, stimulated by someone other than the government, and who seem to have the potential to generate collective action in the real world [How censorship in China allows government criticism but silences collective expression].

Overall, I find the paper to be an interesting read, and Figure 1 gives a nice overview of the various paths a social media post can take on Chinese discussion forums. The authors find that most social media websites used hand-curated keyword matching for automatic review of user-posted content. The most interesting fact was that large Chinese social media firms will be hiring 50,000 to 75,000 human censors, and that the Chinese Communist Party’s propaganda department, major Chinese news websites, and commercial corporations had collectively employed two million “public opinion analysts” (professionals policing public opinion online) as early as 2013 [1]. This implies that for every 309 Internet users in China there was one human censor (there were approximately 618 million Internet users in China in 2013) [2]. With regard to the histogram presented in Figure 4, beyond the reasons presented in the paper for the high number of automated reviews on government websites, it may be the case that these websites receive far more posts than private websites. I believe a large number of posts would lead to a greater number of posts being selected for automatic review. Additionally, if a person has an issue with a government policy or law, trying to publish their disagreement on a government forum might seem more appropriate to them. Now, given the fact that phrases like “change the law” (变法) and “disagree” (不同意) are blocked from being posted even on Chinese social media sites, I believe any post showing concern or disagreement with a government policy or law on a government website is highly likely to be reviewed. Moreover, given the long-tailed (power-law-like) nature of Chinese social media (as shown in the pie chart below from [King et al. 2013]), I feel the majority of small private social media websites would be acting as niche communities (e.g., food enthusiasts, fashion, technology, games), and it is unlikely that individuals would post politically sensitive content on such communities.

The second paper discusses an interesting approach to evading censorship mechanisms on Sina Weibo (a popular Chinese microblogging website). The authors cite the decision tree of Chinese censorship from the first paper and highlight the fact that homophone substitution can be used to evade keyword-based automatic review and censorship mechanisms. The paper details a non-deterministic algorithm that can generate homophones for sensitive keywords that may be used to filter microblogs (weibos) for review by censors. The authors show that the homophone transformation does not lead to a significant change in the interpretability of the post by conducting Mechanical Turk Human Intelligence Task experiments. The key idea here is that if the censors tried to counter the homophone-transformation approach by adding all homophones of all blocked keywords to the blocked-keyword list, they would end up censoring as much as 20% of the daily posts on Sina Weibo. This would be detrimental for the website, as it implies losing a significant number of daily posts and users (if the users are banned for posting the content). The authors suggest that the only approach that would censor homophone-transformed posts without sabotaging the website’s daily traffic is to employ human censors. This would impose 15 additional human-hours of effort per day on the censors for each banned word, which is substantial as there are thousands of banned words.
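The nondeterministic substitution step can be sketched in a few lines of Python. The homophone table below is purely illustrative (the real algorithm derives candidates from shared pinyin pronunciations and corpus statistics): “河蟹” (river crab) is a well-known stand-in for “和谐” (harmony), while the entries for “自由” (freedom) are invented for this sketch.

```python
import random

# Illustrative mapping from a sensitive keyword to homophone candidates.
# "河蟹" (river crab) is a famous stand-in for "和谐" (harmony);
# the candidates for "自由" (freedom) are invented for this example.
HOMOPHONES = {
    "和谐": ["河蟹"],
    "自由": ["孜油", "紫油"],
}

def transform(post, table=HOMOPHONES, rng=random):
    """Replace each sensitive keyword with a randomly chosen homophone,
    so two postings of the same weibo need not share a blocked string."""
    for keyword, candidates in table.items():
        if keyword in post:
            post = post.replace(keyword, rng.choice(candidates))
    return post
```

Because the choice is random, a censor cannot simply add one substitute per keyword to the blocklist; every candidate has to be blocked, which is where the false-positive explosion described above comes from.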

In Experiment 1, the authors stopped checking the status of posts after 48 hours. A question I have is: do all posts ultimately get read by some human censor? If so, is there a justification for the 48-hour threshold for considering a post uncensored? As the authors suggest in the study limitations, posts by established accounts (especially those having a lot of followers) might be scrutinized (or prioritized for review/censorship) more. It would be interesting to see if there exists a correlation between the number of followers an account has and the time at which their sensitive posts get deleted.

Furthermore, in the results for Experiment 1, the authors specify that there is a statistically significant difference between the publishing rates of the original and transformed posts; in terms of raw numbers, however, we don’t see a huge difference between the number of original (552) and transformed (576) posts that got published. It would be interesting to repeat Experiment 1 a couple of times to see if these results remain consistent. Additionally, I feel we might be able to apply a generative adversarial network (GAN) here: a generator producing different transformations of an original “sensitive” weibo that have high interpretability yet can fool the discriminator, while the discriminator acts like a censor and decides whether or not the generated weibo should be deleted. However, I am not sure about the exact architecture of the networks or the availability of sufficient training data for this approach.

Addendum: An interesting list of terms blocked from being posted on Weibo.


Reflection #11 – [03-27] – [Patrick Sullivan]

“Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions” by Chaya Hiruncharoenvate et al.
“Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation” by Gary King et al.

It seems obvious that as long as massive and automatic censorship is possible without incurring any major cost, the censor will remain powerful. However, if the only action the censor can effectively employ is through human actors, then it should eventually be defeated by any anti-censorship group (barring some extreme response). This is because a larger anti-censor group with the same tools available will be able to focus its efforts on overwhelming the censor. Similarly, a small anti-censor group can be innocuous or unassuming and focus on remaining undetected. There is another issue for any powerful and growing censor: the increased chance that anti-censor groups will infiltrate and sabotage the censor’s goals. However, a censor can employ computational methods to judge content en masse and in great detail as an effective guard against all of these points.

This leads me to believe that the censorship seen in this research is not sustainable and is only kept alive through computational methods. The direct way of defeating any such censorship is to defeat the machines currently driving it. I think this is the greatest implication of this research by both Hiruncharoenvate et al. and King et al. By understanding and breaking down a censor’s computational tools in this manner, a censor would only be able to revert to human-censor methods. And when this censorship is unacceptable to people, they have the strategies I listed above to actually defeat the censor. This is a necessary point to make because, without an accompanying anti-censorship movement of people, defeating the computational tools of the censor is meaningless. So in this case, the computer adversaries are best defeated by computational approaches, and the human adversaries are best defeated by human approaches. I think special consideration should be taken for problems that match this description, because not tackling them in the best way proves to be an incredible waste of time and energy.

I also have trouble estimating whether the censorship is really working as intended. From King’s findings, if China is very concerned about calls for collective action, then it is surprising that it is less concerned with what could be the ‘seeds’ of outrage or activism. China may censor a movement’s calls for action, but it strangely allows the spread of criticism and information that could motivate a movement. This seems problematic because it does not address the underlying concerns of the people, but instead just makes it more difficult to act on them. Also, the censorship targets publicly viewed posts on social media, but doesn’t seem to have any focus on the private direct messages and communication being used as well. In the case of a rebellious movement forming, I think this kind of direct and more private communication would naturally come about once a large group has a unifying criticism of the government.


Reflection #11 – [03/27] – [Hamza Manzoor]

[1] King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.

[2] Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

Summaries:

In the first paper, King et al. conducted an experiment on censorship in China by creating their own social media websites. They submitted different posts on these websites and observed how they were reviewed. The goal of their study was to reverse-engineer the censorship process. The results of their study show that posts that invoke collective action, like protests, are censored, whereas posts containing criticism of the state and its leaders are published.

In the second paper, Hiruncharoenvate et al. performed experiments to manipulate keyword-based censoring algorithms. They make use of homophones of censored words to get past automated reviews. The authors collected censored weibos and developed an algorithm that generates homophones for the censored keywords. The results of their experiments show that posts with homophones tend to stay up three times longer, and that native Chinese speakers do not have any trouble deciphering the homophones.

Reflections:

Both these papers use deception to manipulate “The Great Firewall of China”. The first paper felt like the plot of a movie, where a secret agent invades another country to “rescue” its citizens from a so-called tyrant oppressor. In my opinion, the research conducted in both of these papers is ethically wrong on many levels. There is a fine line between illegal and unethical, and I think these papers might have crossed that line. Creating a secret network and providing ways to manipulate the infrastructure created by a government for its own people is wrong, in my opinion. How is it different from Russian hackers using Facebook to manipulate election results, except for the fact that these research papers are in the name of “free speech” or “research”? Had the Russians written a research paper titled “Large-scale experiment on how social media can be used to change users’ opinions or manipulate elections”, would that justify what they did? No.

Moving further, one question I had while reading the first paper was: if the authors already had access to the censorship software, then why did they create a social network to see which posts are blocked, when the same software was used to block posts on the existing social networks in the first place? Or did I misunderstand? Secondly, being unfamiliar with the Chinese language, I find the use of homophones in the second paper interesting, and since we have two Chinese speakers presenting tomorrow, it would be nice to know whether all words in Chinese have homophones. Also, is this only the case in Mandarin, or in all Chinese languages? I believe we cannot replicate this research in other popular languages like English or Spanish.

Furthermore, in the second paper, the main idea behind the use of homophones is to deceive the algorithms. The authors claim that the algorithms are deceived by the different word, but native speakers were able to get the true meaning by looking at the context of the sentence. This makes me wonder: with new deep learning techniques it is possible to infer the context of a sentence, so will this approach still work? Secondly, after some time the Chinese government will learn that people are using homophones, and feeding those homophones to the algorithms should not be too difficult.

Finally, it was interesting to see in the first paper that posts that invoke collective action, like protests, are censored, whereas posts containing criticism of the state and its leaders are published. So, essentially, the Chinese government is not against criticism but against protests. Now, a question of ethics for the other side: is it ethical for governments to block posts? And how is what the Chinese government is doing different from other governments cracking down on their protestors? Allowing protests and then cracking down on them seems even worse than disallowing protests altogether.


Reflection #11 – [03/27] – [Md Momen Bhuiyan]

Paper #1: Reverse-engineering censorship in China: Randomized experimentation and participant observation
Paper #2: Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions

Reflection #1:
This paper tries to fill the knowledge gap in modeling the censorship framework in China. The authors perform a randomized experiment on 100 websites owned by both the Chinese government and the private sector to find out how the censorship works. They also conduct interviews with people, as well as with censors themselves, to get a better idea of the steps of censorship in China. For the posts, the authors focus on four cases: posts with or without a collective action component, crossed with posts for or against the government. The authors tried to control the language, topic, and timing of the posts as much as possible. From the results, it seems there is a 40% prior probability of a post falling under automatic review. Despite this, sites seem to rely more on human action for censorship, as their automatic keyword-matching systems don’t perform well at separating different kinds of posts. The government puts more constraint on the censorship of collective action, like protests, while all the other types of posts have an equal probability of being censored. The authors tried to account for all edge cases in their study.

Reflection #2:
This paper uses the reverse-engineered knowledge from the previous paper to evade censorship. The paper introduces a non-deterministic (randomized) algorithm using homophones (words that sound alike). According to their experiments, the homophones are not easily detectable by the automatic algorithm, while remaining understandable to users. From the cost perspective, this adds an additional 15 hours of human labor per homophone per day. Although this approach seems good, China is already known for an abundance of cheap labor, so even if this adds extra cost to the system, it would only work on systems managed by private entities. The authors’ use of the most frequent homophones seems clever. But it depends on how users would react if more posts were censored due to the blocking of all possible variants of censored words. Given that they have already complied with the current state of censorship, I wouldn’t argue against that.
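The cost argument can be made concrete with some back-of-the-envelope arithmetic. The 15 human-hours per homophone per day comes from the paper; the blocklist size and shift length below are my own illustrative assumptions.

```python
# 15 extra human-hours per banned word per day is the paper's estimate;
# a 3,000-word blocklist and an 8-hour shift are assumed for illustration.
hours_per_word_per_day = 15
banned_words = 3_000
shift_hours = 8

extra_hours_per_day = hours_per_word_per_day * banned_words  # 45,000 hours
extra_censors_needed = extra_hours_per_day / shift_hours     # 5,625 people
```

Even under these rough assumptions, countering homophones by hand would require thousands of additional full-time censors per day, which is the sense in which the cost is substantial for private entities.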


Reflection #11 – [03/27] – [Jamal A. Khan]

  • Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.
  • King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.

Both of the papers assigned for the next class are about Chinese censorship and, in a sense, have a heroic tone in how the ideas are put forward. I didn’t quite like the way the ideas were staged, but that is irrelevant and subjective.

Regardless of the tone of the writing, or my likes and dislikes about it, I like the first paper’s idea of using the semantics of the Chinese language itself as a deterrent against censorship. The complexity of the language has come as a blessing in disguise. Before I get into the critical details of the paper, I would say that the approach of the authors is sound and has been well demonstrated. Therefore, this reflection will focus on what can be done (or undone, in my case) using the paper as a base.

Since the title itself states that the purpose of the paper is to “bypass” censorship, a natural question is, “Does this method still work?” A naive approach to breaking this scheme, or at the very least majorly cutting down the human cost that the authors talk about, would be to build a homophone-replacement detector. This is very much possible with the recent advances in word-embedding schemes (referring to the works in [1], [2], and especially [3]) and their ability to detect similarity in the usage of words. These embeddings per se do not look at what a word is but rather at how it occurs, to deduce importance, similarity, and, in the case of [3], hierarchy as well, when mapping to an arbitrary-dimensional vector space (meaning they could deduce what a random smiley means as well). Hence, to these embeddings, homophones are very similar words IF they are used in the same context (which they are!). Since the proposed solution in the paper relies on the reader being able to deduce the meaning of a sentence from the context of the article or the situation/news trends, embeddings will be able to do so as well, if not better, and hence the system would censor the posts. So, I guess my point is that this method might be outdated now; the only overhead the censoring system would have to bear is training a new embedding model every day or so.
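A minimal sketch of this detection idea: with distributional embeddings, a banned word and its homophone substitute end up close in vector space because they occur in the same contexts. The three-dimensional vectors below are invented for illustration; a real system would use trained word2vec/GloVe vectors over the actual corpus.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Invented toy embeddings: the substitute word is distributionally close
# to the banned word because it appears in the same contexts; the
# unrelated word is not.
emb = {
    "banned":     [0.90, 0.10, 0.30],
    "substitute": [0.85, 0.15, 0.28],
    "unrelated":  [-0.20, 0.90, 0.10],
}

def likely_substitutes(word, vocab, threshold=0.95):
    """Flag vocabulary words whose embedding sits within `threshold`
    cosine similarity of the banned word's embedding."""
    v = vocab[word]
    return [w for w, u in vocab.items()
            if w != word and cosine(u, v) >= threshold]
```

A censor could re-train such embeddings daily and extend the blocklist with everything this flags, which is the (fairly small) overhead mentioned above.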

The other question is, “Do the censorship practices still function the same way?” Now that NLP tasks are dominated by sequence models (deep learning models based on bi-directional RNNs, for example), it might be possible to detect such substitutions automatically even better now. I feel this question needs further exploration, and there is no direct answer.

Another natural question to ask is: does this homophonic approach extend to other languages as well? For Urdu (and Punjabi), English, and to some extent Arabic, the languages I know myself, I’m not sure such a variety of homophones exists. If it doesn’t, then a straight follow-up question is: can we develop language-invariant censorship-avoidance schemes? I feel this could be some very exciting work. Maybe some inspiration can be drawn from schemes such as [4].

The second paper, by King et al., I must say is pretty impressive. The amount of detail in the experiment design, the considerations undertaken, and the way the results are presented are pretty much on point. Now, I’m not too familiar with Chinese censorship and its effects, so I can’t make much of the results. The thing that is surprising to me is that posts with collective action potential are banned while those critiquing the government are not. Why? Another surprising finding was the absence of a centralized method of censorship, and this leads me back to my original question: with newer NLP techniques powered by deep learning emerging, will the censor hammer come down harder? Will these digital terminators be more efficient at their job? In the unfortunate case that this dystopian scenario were to come true, how are we to deal with it?

I guess, with both papers combined, an ethical question needs to be discussed: Is censorship ethical? If no, then why? If yes, then under what circumstances and to what extent? It would be nice to hear other people’s opinions on this in class.

 

[1] Efficient Estimation of Word Representations in Vector Space: https://arxiv.org/pdf/1301.3781.pdf

[2] GloVe: Global Vectors for Word Representation: https://nlp.stanford.edu/pubs/glove.pdf

[3] Poincaré Embeddings for Learning Hierarchical Representations: https://arxiv.org/pdf/1705.08039.pdf

[4] Unobservable communication over fully untrusted infrastructure: https://www.cs.utexas.edu/~sebs/papers/pung_osdi16_tr.pdf


Reflection #11 – [03/27] – [Vartan Kesiz-Abnousi]

[1] Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

[2] King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.

 

Summary[1]

 

Before the publication of this paper, researchers did not understand how the censorship apparatus works on sites like Sina Weibo, the Chinese version of Twitter. The censored weibos were collected between October 2, 2009 and November 20, 2014, comprising approximately 4.4K weibos. The two experiments that the authors run rely on this dataset: namely, an experiment on Sina Weibo itself, and a second experiment where they ask trained users from Amazon Mechanical Turk to recognize the homophones. The second dataset consists of weibos from the public timeline of Sina Weibo, from October 13, 2014 to November 20, 2014, accumulating 11,712,617 weibos.

 

[Figure: Venn diagram showing the relationships between homophones (blue circle) and related linguistic concepts.]

 

Reflections[1]

I had never heard of the term “homophone” used this way before. Apparently, morphs—created through techniques such as decomposition of characters, translation, and nicknames—have been in wide usage to circumvent adversaries, and homophones are a subset of such morphs. The Venn diagram provides further insight. Overall, three questions are asked. First, are homophone-transformed posts treated differently from ones that would otherwise have been censored? Second, are homophone-transformed posts understandable by native Chinese speakers? Third, if so, in what rational ways might Sina Weibo’s censorship mechanisms respond? One question that I have is whether the tf-idf score is the best possible choice for their analysis. Why not an LDA? I didn’t find a discussion of this modeling choice, even though it is consequential for the results. The algorithm, as the authors acknowledge, has a high chance of generating homophones that have no meaning, since they did not consult a dictionary. I find this to also have a serious impact on the model. This might look like a detail, but I think it might have been a better idea to keep the Amazon Mechanical Turk instructions only in Mandarin, instead of asking in English that non-Chinese speakers not complete the task. It would also have been helpful to have all the parameters of the logit model in a table.
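For reference, the tf-idf weighting questioned here is simple to state: a term scores highly when it is frequent in a document but rare across the corpus. A minimal sketch follows, with documents as token lists; the particular un-smoothed idf variant is my own assumption, not necessarily the authors’ exact formula.

```python
import math

def tf_idf(term, doc, corpus):
    """tf-idf of `term` in `doc` (a token list) relative to `corpus`
    (a list of token lists): term frequency times inverse document
    frequency. Assumes `term` occurs somewhere in the corpus, so the
    document frequency is at least 1."""
    tf = doc.count(term) / len(doc)
    df = sum(1 for d in corpus if term in d)  # documents containing term
    return tf * math.log(len(corpus) / df)
```

An LDA, by contrast, would model topic co-occurrence rather than raw term rarity, which is presumably what the question above is getting at.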

 

Questions[1]

  1. Is the usage of homophones particularly widespread in Mandarin, compared to the Indo-European language family? Furthermore, can these methods be applied to other languages?
  2. How complex a conversation can occur with the usage of homophones? Is there a language-complexity metric, with complexity defined as how effectively ideas are conveyed?
  3. An extension of the study could be an examination of the linguistic features of posts containing homophones.

 

Summary [2]

The paper written by King et al. has two parts. First, they create accounts on numerous Chinese social media sites, then randomly submit different texts and observe which texts are censored and which are not. The second task involves the establishment of a social media site that uses Chinese media’s censorship technologies. Their goal is to reverse-engineer the censorship process. Their results support the hypothesis that criticism of the state, its leaders, and their policies is published, whereas posts about real-world events with collective action potential are censored.

 

Reflections [2]

This is an excellent paper in terms of causal inference and its overall structure. Gary King is a renowned author in experimental design studies aimed at drawing causal inference. For the experimental part, they first introduce blocking based on writing style. I didn’t find much about the writing style in the supplemental material. They also have a double dichotomy that produces four experimental conditions: pro- or anti-government, and with or without collective action potential. It is the randomization that allows them to make causal claims in the study.

Questions [2]

  1. How do they measure “writing style” when they introduce blocking?


Reflection #11 – 03/27 – Pratik Anand

Paper 1 : Reverse-engineering censorship in China: Randomized experimentation and participant observation

Paper 2 : Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions

Both papers play two parts of a larger process: observation and counter-action.
The first paper deals with the researchers trying to understand how Chinese censorship works. In light of the lack of any official documentation, they use experimentally controlled anecdotal evidence to build a hypothesis of the functioning of “The Great Firewall of China”. They use some well-known topics with a history of censorship and post discussions on various Chinese forums. They had some interesting observations: for instance, corruption cases, as well as both criticism and praise of the government, are more heavily censored than sensitive topics like Tibet and border disputes. This could reflect the priorities of the government regarding censorship and could help in bypassing it for the majority of sensitive cases.
Non-familiarity with the language creates difficulty in understanding the nuances of the language used in surviving posts versus banned posts. Since the censorship seems to depend primarily on keyword matching, with external techniques as subsidiaries, might global advancement in NLP research be more harmful than useful here? With the help of advanced NLP, censorship tools can go beyond keywords and infer context from statements.
This brings us to the second paper, on bypassing censorship. The authors make use of homophone substitutions to fool auto-censorship tools. Language is again a barrier to fully grasping the effects of homophone substitutions. However, it can be inferred that only a limited number of substitutions are possible for each word to be replaced. This creates a problem if those substitutions become popular: the censorship tools can easily ban them. A recent example is the removal of the two-term restriction on the presidency in China. People started criticizing it using the mathematical terminology of 1, 2, …, N to represent arbitrarily many terms, with messages like “Congratulating Xi Jinping on getting selected as President for the Nth time”. The censor tools not only recognized the context of this very subtle joke but also blocked the letter N for some time. Hence, this shows that no matter how robust and covert a scheme is, if it gains enough traction, it will come into focus and get banned. There is a need to find ways that cannot be countered even after they have been exposed.


Reflection #11 – [03/27] – [John Wenskovitch]

This pair of papers falls under the topic of censorship in Chinese social media.  King et al.’s “Reverse-Engineering Censorship” article takes an interesting approach toward evaluating censorship experimentally.  Their first stage was to create accounts on a variety of social media sites (100 total) and submit messages from around the world to see which were censored and which were untouched.  Accompanying this analysis are interviews with confidential sources, as well as the creation of their own social media site by contracting with Chinese firms and then reverse-engineering their software.  Using their own site gave the authors the ability to understand more about posts that are reviewed and censored and accounts that are permanently blocked, which could not be done through typical observational studies.  In the “Algorithmically Bypassing Censorship” paper, by contrast, the authors make use of homophones of censored keywords in order to get around detection by keyword-matching censorship algorithms.  Their process, a non-deterministic algorithm, still allows native speakers to recover the meaning behind almost all of the original untransformed posts, while also allowing the transformed posts to survive three times longer than their censored counterparts.

Regarding the “Reverse-Engineering” paper, one choice in their first stage that puzzled me was the decision to submit all posts between 8AM and 8PM China time.  While it wasn’t the specific goal of their research, submitting some after-hours posts could have generated interesting information about just how active the censorship process is in the middle of the night.  That includes all of the potential branches – censored after posting, censored after being held for review, and accounts blocked.

From their results, I’m not sure which part surprised me more:  that 63% of submissions that go into review are censored, or that 37% that go into review are not censored and eventually get posted.  I guess I need more experience with Chinese censorship before settling on a final feeling.  It seems reasonable that automated review will capture a fair number of innocuous posts that will later be approved, but 37% feels like a high number.  Their note that a variety of technologies are used in this automated review process would imply high variability in the accuracy of the automated review system, and so a large number of ineffective solutions could explain why 37% of submissions are released for publication after review.  On the other hand, the authors chose to make a number of posts about hot-button (“collective action”) issues, which is the source of my surprise regarding the 63% number.  Initially I would have expected a higher number, because despite the fact that the authors submit both pro- and anti-government posts, I would suspect that additional censorship might be added in order to un-hot-button these issues.  Again, I need more experience with Chinese social media to get a better feeling of the results.

Regarding the “Algorithmically Bypassing” paper, I really enjoyed the methodology of taking an idea that activists are already using to evade censorship and automating it so that more users can apply it at scale.  Without being particularly familiar with Mandarin, I suspect that creating such a solution is easier in Mandarin than it would be in a language like English, which has fewer homophones.  However, it did remind me of the images that are shared frequently on Facebook that are something like “fi yuo cna raed tihs yuo aer ni teh tpo 5% inteligance” (generally seen with better scrambled letters in longer words, in which the first and last letters are kept in the correct position).

I felt that the authors’ stated result that posts typically live 3x longer than an untransformed equivalent censored post was impressive until I saw the distribution in Figure 4.  A majority of the posts do appear to have survived with that 3x longer time statistic.  However, the relationship is much more prevalent for surviving 3 hours rather than 1, while many fewer posts exist in the part of the curve where a post survives for 15 hours rather than 5.  A case of giving a result that is accurate but also a bit misleading.


Reflection #11 – [03/27] – [Ashish Baghudana]

King, Gary, Jennifer Pan, and Margaret E. Roberts. “Reverse-engineering censorship in China: Randomized experimentation and participant observation.” Science 345.6199 (2014): 1251722.
Hiruncharoenvate, Chaya, Zhiyuan Lin, and Eric Gilbert. “Algorithmically Bypassing Censorship on Sina Weibo with Nondeterministic Homophone Substitutions.” ICWSM. 2015.

Summary 1

King et al. conducted a large-scale experimental study of censorship in China by creating their own social media website, submitting different posts, and observing how these were reviewed and/or censored. They obtained technical know-how in the use of automatic censorship software from the support services of the hosting company. Based on user guides, documentation, and personal interviews, the authors deduced that most social media websites in China conduct an automatic review through keyword matching, with the keywords generally hand-curated. They reverse-engineered the keyword list by posting their own content and observing which posts got through. Finally, the authors find that posts that invoke collective action are censored, whereas criticisms of the government or its leaders generally are not.
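The keyword-matching automatic review described above can be sketched in a few lines. This is a toy illustration: the banned-keyword list, the `auto_review` helper, and the example posts are all invented placeholders, not the software the authors reverse-engineered.

```python
# Toy sketch of keyword-based automatic review. The banned-keyword
# list and the example posts are hypothetical stand-ins.
BANNED_KEYWORDS = {"collective action", "protest", "rally"}

def auto_review(post: str) -> str:
    """Hold a post for review if any banned keyword matches; else publish."""
    text = post.lower()
    if any(kw in text for kw in BANNED_KEYWORDS):
        return "held for review"
    return "published"

print(auto_review("Join the rally downtown tomorrow"))  # held for review
print(auto_review("The weather is lovely today"))       # published
```

The point of the sketch is how coarse such matching is: it captures the 63%/37% ambiguity the paper discusses, since any post mentioning a listed phrase is held, whether or not it actually calls for collective action.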

Reflection 1

King et al. conduct fascinating research in the censorship domain. (The paper felt as much a covert spy operation as research work). The most interesting observation from the paper is that posts about collective action are censored, but not those that criticise the government. This is labeled the collective action hypothesis vs. the state critique hypothesis. This means two things – negative posts about the government are not necessarily filtered, and positive posts about the government can still get filtered. The paper also finds that automated reviews are not very useful. The authors observe a few edge cases too – posts about corruption, wrongdoing, senior leaders in the government (however innocuous their actions might be), and sensitive topics such as Tibet are automatically censored. These may not bring about any collective action, either online or offline, but are still deemed censor-worthy. The paper makes the claim that certain topics are censored irrespective of whether the posts are for or against the topic.

I came across another paper by the same set of authors from 2017 – King, Gary, Jennifer Pan, and Margaret E. Roberts. “How the Chinese government fabricates social media posts for strategic distraction, not engaged argument.” American Political Science Review 111.3 (2017): 484-501. If censorship is one side of the coin, then bots and sockpuppets constitute the other. It would not be too difficult to imagine “official” posts by the Chinese government that favor their point of view and distract the community from more relevant issues.

The paper threw open several interesting questions. Firstly, is there punishment for writing posts that go against country policy? Secondly, the Internet infrastructure in China must be enormous. At that scale, how do they ensure that each and every post goes through the censorship system?

Summary 2

The second paper, by Hiruncharoenvate et al., carries the idea of keyword-based censoring forward. It builds on the observation that activists have employed homophones of censored words to get past automated reviews. The authors develop a non-deterministic algorithm that generates homophones for the censored keywords. They suggest that countering homophone transformations would cost Sina Weibo an additional 15 hours per keyword per day. They also find that posts with homophones tend to stay on the site 3 times longer on average. The authors round out the paper by demonstrating that native Chinese readers faced no confusion while reading the homophones – i.e., they were able to decipher their true meaning.
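The core transformation can be sketched roughly as follows. This is a minimal sketch, not the authors' code: the homophone table is a tiny invented placeholder (a real table would map Chinese characters sharing a pinyin pronunciation), and `transform` is a hypothetical helper.

```python
import random

# Invented placeholder table: each censored keyword maps to a few
# same-sounding variants. Real tables are built from pinyin lookups.
HOMOPHONES = {
    "刘晓波": ["流小波", "留晓玻"],
}

def transform(post: str, table=HOMOPHONES, rng=random) -> str:
    """Nondeterministically replace each occurrence of a censored keyword
    with a randomly chosen homophone variant."""
    for word, variants in table.items():
        while word in post:
            post = post.replace(word, rng.choice(variants), 1)
    return post
```

Because the variant is chosen at random, two posts carrying the same keyword rarely look identical, which is what makes deterministic counter-blocking expensive for the censor.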

Reflection 2

Of all the papers we have read for the Computational Social Science course, I found this paper to be the most engaging, and I liked the treatment of the motivations, design of experiments, results, and discussion. However, I also felt disconnected because of the language barrier. I feel natural language processing tasks in Mandarin can be very different from those in English. Therefore, I was intrigued by the choice of algorithm (tf-idf) that the authors use to obtain censored keywords, and by the downstream processing that follows. I am curious to hear from a native Chinese speaker how the structure of Mandarin influences NLP tasks!
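The tf-idf step mentioned above can be sketched like this. The corpora and tokens here are invented toy data, and the exact weighting the authors use may differ; the idea is just to surface terms over-represented in censored posts relative to the full collection.

```python
import math
from collections import Counter

# Toy corpora of tokenized posts (invented tokens, not real data).
censored = [
    ["protest", "protest", "square"],
    ["protest", "march", "protest"],
    ["protest", "gather", "protest"],
]
uncensored = [["weather", "lunch"], ["movie", "weather"]]

def tf_idf_scores(target_docs, background_docs):
    """Score each term by its frequency in the target (censored) corpus,
    weighted by inverse document frequency over all documents."""
    all_docs = target_docs + background_docs
    n = len(all_docs)
    tf = Counter(w for doc in target_docs for w in doc)
    df = Counter(w for doc in all_docs for w in set(doc))
    return {w: tf[w] * math.log(n / df[w]) for w in tf}

scores = tf_idf_scores(censored, uncensored)
top = max(scores, key=scores.get)  # "protest" dominates in this toy data
```

Terms with high scores become candidate censored keywords; the downstream steps would then generate homophones for exactly those candidates.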

I liked Experiment 2 and the questions in the AMT task. The questions were unbiased and actually evaluated whether the person understood which words were mutated.

However, the paper also raised other research questions. Given the publication of this algorithm, how easy would it be to reverse-engineer the homophone generation and block posts that contain the homophones as well? The keyword-matching algorithm could be tweaked just a little to add homophones to the list and to check whether several of these homophones occur together or alongside other banned keywords.
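That censor-side counter-move might look something like this sketch. The variant table uses English stand-ins (real entries would be Chinese homophones), and the co-occurrence threshold of two is an invented illustration.

```python
# Hypothetical table mapping each banned keyword to known homophone
# variants. The English stand-ins below are invented placeholders.
HOMOPHONE_VARIANTS = {
    "protest": ["pro-test", "pr0test"],
    "rally":   ["ra11y"],
}

def flag(post: str, threshold: int = 2) -> bool:
    """Flag a post that contains a banned keyword outright, or one where
    at least `threshold` homophone variants co-occur."""
    hits = 0
    for word, variants in HOMOPHONE_VARIANTS.items():
        if word in post:
            return True
        hits += sum(v in post for v in variants)
    return hits >= threshold

flag("pr0test at the ra11y")  # True: two variants co-occur
flag("pr0test tomorrow")      # False: one variant alone is not enough
```

Requiring co-occurrence rather than banning each variant outright limits the false positives that make blanket homophone bans costly, which is exactly the trade-off the paper's 15-hours-per-keyword estimate quantifies.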

Finally, I am also interested in how free speech is defined and implemented across different countries. I am unable to draw the line between promoting free speech and respecting the sovereignty of a nation, and I am open to hearing perspectives from the class about these issues.
