Technical filtering mechanisms can be found at all levels of the Internet in China, from the home computer to PCs located in hotels and Internet cafés. Photo by Tom Christensen.
The screening of content over the Internet by the government is common and far-reaching, overseen by thousands of private and public monitors who filter content sent through websites, blogs, forums, and e-mail. A majority of Chinese Internet users have said in surveys that they prefer that the Internet be controlled in some way.
The Chinese government’s control over the Internet, including Internet content, is well documented and hardly a surprise to anyone. All media in the country are tightly controlled, which, to most people in the West, amounts to unacceptable censorship. The actual scope of Internet content filtering is difficult to determine because the monitors continually shift their attention among the sites and topics they deem undesirable.
Monitoring the Monitors
Reports of China’s Internet filtering come from the activities of bloggers, academics, nongovernmental organizations, and watchdog groups both within and outside the country that monitor China’s filtering system and compare and discuss its characteristics on e-mail lists, blogs, and other public forums. One such watchdog is the OpenNet Initiative (ONI), a joint project that monitors and reports on Internet filtering and surveillance practices by governments. Partners in the project are the Citizen Lab at the Munk Centre for International Studies at the University of Toronto; the Berkman Center for Internet & Society at Harvard Law School; the Oxford Internet Institute at the University of Oxford; and the Advanced Network Research Group at the University of Cambridge.
In a 2005 report, the ONI described China as operating “the most extensive, technologically sophisticated, and broad-reaching system of Internet filtering in the world.” China’s filtering program employs a combination of technical, legal, and social measures applied at a variety of access points and overseen by thousands of private and public monitors who filter content sent through a range of communication methods, such as websites, blogs, forums, and e-mail. Together, these measures create a matrix of soft and hard controls and induce a widespread climate of self-censorship.
Technical filtering mechanisms can be found at all levels of the Internet, from the basic architecture to PCs located in hotels and Internet cafés. Although Internet Service Providers (ISPs), Internet cafés, search engines, and other network services can and do operate their own filtering systems, all network traffic is subject to a uniform system of filtering at three major international gateways in Beijing, Shanghai, and Guangzhou known collectively as the Great China Firewall. Filtering is centralized and largely consistent across each of the international gateways. This gateway level of filtering is an unavoidable last line of defense. ONI research has uncovered three forms of filtering at these international gateways: DNS tampering, keyword filtering, and IP blocking.
DNS tampering works by interfering with the system that cross-references domain names with the numerical IP addresses associated with them. Users are directed to an invalid IP address, as if the site they requested did not exist. By contrast, IP blocking targets the numerical address itself. This type of blocking can cause major collateral filtering of unrelated content because different domain names can share the same host IP address.
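The difference between these two techniques, and the collateral damage that IP blocking causes, can be illustrated with a simplified sketch. This is not China's actual implementation; all domain names, addresses, and blocklists below are hypothetical examples.

```python
# Simplified sketch of the two gateway techniques described above.
# All names, addresses, and blocklists are hypothetical examples.

BOGUS_IP = "0.0.0.0"  # a tampered DNS answer points somewhere invalid


def tampered_dns_lookup(domain, real_records, blocked_domains):
    """DNS tampering: answer a blocked domain with an invalid address,
    so the site appears not to exist. Other domains are unaffected."""
    if domain in blocked_domains:
        return BOGUS_IP
    return real_records.get(domain)


def is_reachable(domain, real_records, blocked_ips):
    """IP blocking: drop traffic to a blocked numerical address. Every
    domain hosted at that address is blocked too (collateral filtering)."""
    ip = real_records.get(domain)
    return ip is not None and ip not in blocked_ips


records = {
    "banned.example": "203.0.113.5",
    "unrelated.example": "203.0.113.5",  # shares the same host IP
    "ok.example": "198.51.100.7",
}

# DNS tampering affects only the targeted domain...
assert tampered_dns_lookup("banned.example", records, {"banned.example"}) == BOGUS_IP
assert tampered_dns_lookup("ok.example", records, {"banned.example"}) == "198.51.100.7"

# ...but IP blocking takes out every domain sharing the address.
assert not is_reachable("banned.example", records, {"203.0.113.5"})
assert not is_reachable("unrelated.example", records, {"203.0.113.5"})  # collateral
assert is_reachable("ok.example", records, {"203.0.113.5"})
```

The last two assertions capture the collateral effect: blocking the single address 203.0.113.5 makes the unrelated site unreachable as well.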
Keyword filtering targets the URL path—and, increasingly, the webpages—searching for banned terms. When one is found, the routers send what are known as RST packets, which terminate the connection between sending and receiving computers, effectively preventing that computer from making requests to the same server for an indefinite period. Because the system works both ways (for requests exiting and entering China), it can be tested by searching for banned keywords like “Falun Gong” or “Tibet” on search engines hosted in China. In each case users requesting banned information receive an error message on their web browser, making it appear as if the information is not available or that there is something wrong with their Internet connection. Users trying to access banned content do not receive a block page informing them that the content is officially filtered, as is the case in some other countries that censor the Internet.
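The behavior described above — a reset on a banned term, followed by a temporary refusal of all further connections between the same pair of machines — can be sketched as follows. This is a minimal illustrative model, not the gateways' actual logic; the banned terms are the examples from the text, and the penalty duration is an assumption.

```python
# Simplified sketch of gateway keyword filtering as described above:
# a banned term in the URL path triggers simulated RST (reset) packets,
# and the client/server pair is then refused for a penalty window.
# The penalty duration is hypothetical; real durations are not published.

import time

BANNED_TERMS = {"falun gong", "tibet"}  # example terms from the text
PENALTY_SECONDS = 90                    # hypothetical penalty window


class KeywordFilter:
    def __init__(self):
        self._penalty_until = {}  # (client, server) -> penalty expiry time

    def inspect(self, client, server, url_path, now=None):
        """Return 'RST' to simulate reset packets, else 'PASS'."""
        now = time.time() if now is None else now
        pair = (client, server)
        # During the penalty window even innocuous requests are reset,
        # which looks to the user like a broken connection rather than
        # deliberate censorship.
        if self._penalty_until.get(pair, 0) > now:
            return "RST"
        if any(term in url_path.lower() for term in BANNED_TERMS):
            self._penalty_until[pair] = now + PENALTY_SECONDS
            return "RST"
        return "PASS"


f = KeywordFilter()
assert f.inspect("1.2.3.4", "search.example", "/search?q=weather", now=0) == "PASS"
assert f.inspect("1.2.3.4", "search.example", "/search?q=tibet", now=1) == "RST"
# Even an innocuous request is reset during the penalty window:
assert f.inspect("1.2.3.4", "search.example", "/search?q=weather", now=10) == "RST"
# After the window expires, requests pass again:
assert f.inspect("1.2.3.4", "search.example", "/search?q=weather", now=200) == "PASS"
```

Because the reset works symmetrically on traffic entering and leaving China, the same model applies whether the banned request originates inside or outside the gateways.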
During the Tibetan protests in 2008, another, more sophisticated form of blocking appeared: the use of distributed denial-of-service (DDoS) attacks. There have been persistent and increasing charges that DDoS attacks against servers in the United States, the United Kingdom, Canada, and elsewhere have their origins in mainland China. Such attacks were prominent during and following the demonstrations in Tibet, with the servers of many Tibetan and Chinese human rights organizations systematically targeted. It is difficult to pinpoint the source of these more aggressive methods of denying access to information, which actively target and disable the sources of information themselves rather than passively blocking requests for information, as the filtering systems do.
Technical means of filtering are complemented by an extensive set of social and legal or regulatory measures. Legal or regulatory measures tend to be vague and broad; they offer a wide scope for application and enforcement as well as uncertainty among users. Most have the effect of holding end users and services—such as café operators, ISPs, blog-hosting services, and media—responsible for censoring the content they post and host. Since enforcement can be arbitrary, users and operators of services tend to err on the side of caution, preferring to prevent or remove offending material rather than risk censure.
Social measures are even more general and thus harder to define, but they include operating norms, principles, and rules that are propagated through media and official channels and are combined with extensive techniques of surveillance, which together affect behavior in both formal and informal ways. These include self-discipline pacts signed by Chinese Internet service companies pledging to uphold public values and self-censorship by Internet users. The first thing many users see when logging on are two cute cartoon police officer characters, Jingjing and Chacha, that pop up and warn users not to visit banned sites or post harmful information.
Content targeted for blocking is wide ranging and covers social, cultural, security, and political topics considered a threat to Communist Party control and social or political stability. Among the topics most frequently blocked are Taiwanese and Tibetan independence, Falun Gong, the Dalai Lama, the Tiananmen Square incident, and opposition political parties.
ONI investigations revealed that China’s filtering tends to focus disproportionately on content in local Chinese languages. Users searching for the equivalent English language terms, for example, will often get a higher proportion of results than they would for the same terms searched for in Chinese.
During the 2003 SARS epidemic, the government tried to restrict news about the outbreaks in the early days of the crisis while officials and experts tried to work out a course of action. But groups inside and outside China used the Internet, along with mobile phones and satellite broadcasts, to ensure that information about the disease was widely distributed.
In January 2009 the Internet Affairs Bureau under the State Council Information Office launched a campaign against pornography and vulgar content online. The campaign led to the shutdown of 1,250 websites and the arrest of forty-one people. Similar actions have been taken against online gaming sites that may be too violent or too political. Websites and services that help people evade government censorship are also regularly filtered.
Although the filtering system appears consistent and relatively stable from day to day, the Chinese government has also demonstrated a propensity to use what ONI calls just-in-time blocking in response to special situations as they emerge. For example, during the demonstrations in Tibet in 2008, the government implemented blocks against YouTube.com and other video-streaming services that were circulating images of protests. The blocks were later lifted.
Official acknowledgment of censorship of Internet content has been inconsistent. Officials deny or do not discuss details of content-filtering practices. On the rare occasions when public officials raise the subject, they compare it to similar practices in the West and justify it as a way to protect public safety, core social values, and stability. When Google.cn was established and agreed to abide by government rules, there was an international outcry over censorship, yet searches on Google.cn for controversial topics brought up many sites that were critical of the government. The full scope of China’s censorship regime, however, is never spelled out in official circles or public government documents.
Generally speaking, Chinese Internet users accept government intervention much more readily than users in Western countries would. In surveys conducted in 2003, 2005, and 2007, more than 80 percent of respondents in China said that the Internet should be controlled and that the government should be the controlling agent, a role that the government intends to maintain.
Deibert, R. (2008, June 18). Testimony to the U.S.-China Economic & Security Review Commission. Retrieved February 6, 2009, from http://deibert.citizenlab.org/deibertcongresstestimony.pdf
Deibert, R., Palfrey, J., Rohozinski, R., & Zittrain, J. (Eds.). (2008). Access denied: The practice and policy of global Internet filtering. Cambridge, MA: MIT Press.
Source: Deibert, Ronald J. (2009). Internet Content Filtering. In Linsun Cheng, et al. (Eds.), Berkshire Encyclopedia of China, pp. 1172–1174. Great Barrington, MA: Berkshire Publishing.
Internet Content Filtering (Hùliánwǎng nèiróng guòlǜ 互联网内容过滤)