INTERNET CENSORSHIP DETECTION AND ANALYSIS TECHNOLOGIES Dossetov U.
Dossetov Ualikhan - Master's Degree, FACULTY OF INFORMATION SCIENCE AND TECHNOLOGY, NANJING UNIVERSITY OF SCIENCE AND TECHNOLOGY, NANJING, PEOPLE'S REPUBLIC OF CHINA
Abstract: with the use of the Internet to promote information, political discussion and activism, the struggle of governments to control this information channel has become evident. The change of Internet censorship over time has been analyzed in [Bambauer, 2013], where this worrisome evolution and its latest stage motivate a call for transparency and public oversight of the application of censorship.
This paper analyzes Internet censorship detection and analysis technologies.
Keywords: internet, censorship, technology, discussion.
With the use of the Internet to promote information, political discussion and activism, the struggle of governments to control this information channel has become evident: the change of Internet censorship over time has been analyzed in [Bambauer, 2013], where this worrisome evolution and its latest stage motivate a call for transparency and public oversight of the application of censorship. The author argues that, regardless of the different motivations (legitimate or rhetorical) and the specific technical means, the intentional limitation of access to information is always a form of censorship, and as such should be clearly named, highlighting the sensitive and dangerous nature of its enforcement and requiring strict watch from citizens [1].
A chronology of the evolution of Internet censorship, from a more technical viewpoint, has been presented in earlier work, in which the authors leverage the studies carried out in the OpenNet Initiative [ONI] research project to characterize the evolution of the normative and technical control of access to information on the Internet. Four phases of "Cyberspace Regulation" are identified, summarized as:
• Open Commons (1990s - 2000) - the Internet is considered an information-sharing medium; it is named "cyberspace", a world somehow separate and different from "real life", not subject to the same rules. Unbounded, cheap or free access to all available online information is the norm. Governments mostly ignore the Internet or regulate it lightly.
• Access Denied (2000 - 2005) - states and other entities start considering online activities as demanding management: filtering is enacted at the country or organization level, often to block specific content categories (pornography, content deemed harmful to minors); freedom of access to information is actively challenged.
• Access Controlled (2005 - 2010) - regulatory approaches are emphasized; a wider range of means and points of control (not necessarily technical in nature) are exploited by authorities to shape and control access to information. Active methods such as computer network attacks, espionage and public opinion forming (ad-hoc creation of content by fake "common citizens" or grassroots movements) are enacted. Surveillance and the self-censorship it implies are linked to national conditions and regulations, such as the obligation to register with an identification document when accessing the Internet. From the technical point of view, ad-hoc filtering is the new challenge for censorship assessment: time-limited blocking of specific online resources is hard to detect and can be mistaken for a failure, or for an attack whose instigator is hard to provably trace back.
• Access Contested (2010 - current) - the fight and the debate about regulation of access to online information have come to the foreground, with governments asserting their prerogative of controlling online information and companies providing filtering services and technologies on one side, and individuals, organizations and private companies contesting these prerogatives in order to defend their rights and interests on the other side.
Internet censorship is a phenomenon that crosses several fields of study, from computer networking and computer security to the social sciences; several definitions can be found in the academic literature and in technical reports and news, but they are often implicit, described by means of a specific detection or analysis technique. Most of the literature refers to censorship of web content, i.e. information accessible through web browsers (adopting an application-level view) and employing the HTTP and HTTPS protocols. We note that web browsing (and more generally applications adopting HTTP or HTTPS) is only one kind of network application using the Internet, other examples being Voice-over-IP, peer-to-peer file sharing, e-mail, multi-user video games and instant messaging; censorship has been found to be enforced on some of these applications as well.
From a network topology point of view, a coarse-grained classification can be proposed with regard to the components of the communication system employed for censorship: client-based and server-based, if censorship is applied at the ends of the communication path, and network-based, if it happens in between. While this Thesis work is focused on network-based censorship detection, an overview of client-based and server-based censorship techniques is given in the following, both for completeness and to clarify the scope of the work [2].
We consider client-based censorship as the blocking of access to online content by means of applications running on the same system as the network application client. It can be implemented by different means: an independent application, akin to a keylogger, that terminates applications whose keyboard input matches a blacklisted keyword - such, apparently, is the technology employed in Cuba; a network filter, like parental control or company policy enforcement filters, running as a personal firewall; or, finally, a modified version of the network application client itself, augmented with surveillance "features", as in the case of TOM-Skype in China [3].
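The keyword-matching mechanism described above can be sketched in a few lines. The blacklist entries and the matching rule below are hypothetical illustrations, not the actual filters deployed in any country:

```python
# Illustrative sketch of client-side keyword filtering. The blacklist
# entries are invented examples; real deployments intercept keystrokes
# or application traffic at the operating-system level.
BLACKLIST = {"forbidden-topic", "banned-keyword"}  # assumed example entries

def is_blocked(user_input: str) -> bool:
    """Return True if the input contains any blacklisted keyword."""
    text = user_input.lower()
    return any(keyword in text for keyword in BLACKLIST)

# A censoring client would terminate the application on a match;
# here we only report the decision.
print(is_blocked("searching for a banned-keyword article"))  # True
print(is_blocked("ordinary query"))                          # False
```

A real client-side censor additionally hides its own presence, which is precisely what makes this class of censorship hard to audit from outside.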
The final node of the communication path, the server, is the component where server-based censorship is enforced, with no disruption of the communication mechanics: the censor selectively removes, hides, or impairs access to specific content directly on the server, by means of the management facilities provided by the service itself. The censoring action can be enforced by ordering the server manager to comply with the request.
This form of censorship is specifically hard to analyze, as its mechanics are internal to the service and not exposed to users; a recent quantitative analysis of it reported several censoring evidences of different types (divided into proactive and retroactive mechanisms) and proposed hypotheses on how these mechanisms are actually enacted.
The existence of this kind of censorship is sometimes acknowledged by the Online Service Providers themselves. One such case is the Google Transparency Report - Removal Requests, by which Google discloses a summary of requests from governments or copyright owners to block access to specific content. While the actual removed targets are not disclosed, removal requests are categorized according to the reason and the type of requester, and the related statistics are provided. This kind of censorship and its consequences are analyzed under the term "intermediary censorship" in Deibert [2010, chap. 5].
Network-based censorship is enforced in between the host running the application client and the host running the respective server part.
With respect to client-based censorship, it provides the censor with much wider network coverage and greater efficiency, allowing the control of a high number of communications through the management of relatively few gateways or hubs (instead of one installation for each user system). Client-based censorship, on the other hand, implies the control of the user terminal or the compliance of the user herself, for each controlled user.
Similar considerations apply to server-based censorship, which in turn requires the control or compliance of server host managers. The relatively small number of popular services helps the censor that wants to control them, but such servers may be located abroad or otherwise outside the censor's influence, so that neither direct control nor compliance can be forced.
These comparisons highlight the pivotal importance of network-based censorship, which constitutes the central phenomenon considered in the present Thesis work and will be analyzed in more detail in the following section. Unless explicitly stated otherwise, hereafter "censorship" will mean "network-based Internet censorship", and similarly "detection" will mean "detection of network-based Internet censorship".
The techniques employed by censors can be characterized in terms of the trigger that initiates the censoring process (and thus, implicitly, the phase of communication in which the trigger is sent), the action itself, and the symptom experienced by the user. A general distinction is between stateless and stateful censoring techniques or systems: in the first kind, the censoring action is decided on a per-packet basis (presence of the trigger); in the latter, the decision depends both on information about past packets (state) and on the presence of the related trigger: what constitutes a trigger changes over time according to the sequence of packets seen by the surveillance device [5].
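The stateless/stateful distinction can be illustrated with a minimal sketch. The trigger string and the simplified handshake marker below are assumptions for illustration only; a real censoring device operates on raw packets and full TCP state:

```python
# Minimal sketch contrasting stateless and stateful censoring logic.
# The trigger and the "SYN" handshake marker are illustrative stand-ins.

TRIGGER = b"blocked-host.example"  # hypothetical blacklisted hostname

def stateless_censor(packet: bytes) -> bool:
    """Stateless: the decision is taken on each packet in isolation."""
    return TRIGGER in packet

class StatefulCensor:
    """Stateful: acts only if the trigger appears after a (simplified)
    handshake has been observed, i.e. the decision depends on per-flow
    state accumulated from past packets, not on the packet alone."""

    def __init__(self) -> None:
        self.handshake_seen = False

    def observe(self, packet: bytes) -> bool:
        if packet.startswith(b"SYN"):
            self.handshake_seen = True  # record flow state
            return False
        return self.handshake_seen and TRIGGER in packet
```

The same packet carrying the trigger is censored by the stateless device in any position of the flow, while the stateful device ignores it unless the recorded state (here, a seen handshake) makes it a valid trigger.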
A characterization of censorship techniques along these properties is presented in the following and summarized graphically in Figure 2.4, where the defining properties of one of the techniques (two-stage DNS hijacking and HTTP injection, Section 2.3.10) are highlighted.
In the following, the techniques are described, grouped according to the type of action (and the possible setups) adopted by the censor, also discussing the remaining elements [6].
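As a foretaste of the detection perspective adopted later, a common heuristic against DNS hijacking is to compare the answers obtained through the resolver under test with those from an uncensored control vantage point. The sketch below assumes pre-collected answer sets (the IP addresses are hypothetical) and flags disjoint sets as suspicious:

```python
# Hedged sketch of a DNS-tampering check over pre-collected answers.
# All resolver answers shown here are hypothetical examples.

def dns_inconsistent(test_answers: set, control_answers: set) -> bool:
    """Flag possible DNS hijacking when the answer sets share no IP.
    Disjoint sets are only a hint: CDNs legitimately return different
    addresses per vantage point, so real detection tools additionally
    cross-check AS numbers, TLS certificates, or page content."""
    return len(test_answers & control_answers) == 0

# Hypothetical measurements for one domain:
print(dns_inconsistent({"10.0.0.1"}, {"93.184.216.34"}))      # True: suspicious
print(dns_inconsistent({"93.184.216.34", "10.0.0.1"},
                       {"93.184.216.34"}))                    # False: overlap
```

This is intentionally a pure comparison over collected data: separating measurement from inference is the pattern followed by the detection platforms discussed later.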
Awareness of the political importance of the Internet as a mass medium has risen with examples such as the political campaign on Online Social Networks of president B.
Governments worldwide, having acknowledged the Internet as an important channel for information, public discussion and the organization of communities of interest, have exerted their control over it. This has led to an arms race between the adoption of surveillance and censorship tools on one side, and privacy-preserving and censorship-circumvention tools on the other. As a consequence, different censorship techniques have been adopted over time in different countries worldwide. The actual extent of this phenomenon is not advertised by the censors, who thus become hardly accountable for it; hence the necessity of an independent and provable assessment of Internet censorship through its detection and continuous monitoring. These motivations have led the research conducted by the candidate, described hereafter.
The available literature on Internet censorship has been found to be rooted mainly in network security (more focused on circumvention), while the techniques employed to enforce censorship also derive from traffic classification and traffic engineering. A selection of findings and studies has been analyzed from a network monitoring point of view, in order to extract the elements instrumental to the detection of network-based censorship.
Using web applications as a reference for client-server network applications on the Internet, a simplified model of the communication has been defined, comprising the protocols involved (at the network, transport and application layers of the TCP/IP stack), the network topology, and the intermediate devices found on the path between client and server. Using the defined model, the censorship techniques have been characterized according to different elements, namely: the location of the surveillance device; the trigger, i.e. the element of the communication that elicits the activation of the censoring action; the location of the censoring device, i.e. the component of the censoring system that applies the censoring action; and the censoring action itself, i.e. the blocking or impairing of access to the resource or service, or the mangling of the content of the communication. This constitutes an original contribution of the candidate, aimed at providing a unifying and comprehensive model of a complex phenomenon so far investigated in heterogeneous fields of study.
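The characterization just described can be rendered as a simple data structure. The example instance below (a DNS hijacking technique) uses illustrative field values, not entries taken from the actual survey:

```python
# Sketch of the censorship-technique characterization as a record type.
# Field values in the example are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class CensorshipTechnique:
    surveillance_location: str  # where the communication is observed
    trigger: str                # element eliciting the censoring action
    censoring_location: str     # component applying the action
    action: str                 # blocking, impairing, or content mangling

# Hypothetical instance for a DNS-based technique:
dns_hijacking = CensorshipTechnique(
    surveillance_location="on-path DNS resolver",
    trigger="DNS query for a blacklisted domain name",
    censoring_location="the resolver itself",
    action="forged DNS response pointing to a block page",
)
print(dns_hijacking.trigger)
```

Encoding each surveyed technique this way makes the four defining elements directly comparable across techniques, which is the purpose of the unifying model.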
With the same analysis approach and with reference to the same model, the techniques and tools available for the detection of Internet censorship have been characterized, based on their ability to purposely generate, or just receive, traffic eliciting censorship; the types of triggers that such traffic can contain; and the criteria used to infer censorship while ruling out involuntary outages. The availability of censorship monitoring platforms, aimed at providing a quasi-real-time running report of the state of application of Internet censorship on a global scale and over year-long time scales, has also been researched. The properties such a platform should provide have been analyzed and described, on the basis of the different approaches found in the literature and in field usage.
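A minimal sketch of such an inference criterion is given below: it distinguishes censorship from involuntary outages by repeating measurements from a test and a control vantage point. The thresholds and labels are assumptions for illustration, not those of any specific platform:

```python
# Sketch of a censorship-inference rule over repeated measurements.
# A resource is flagged as censored only if it consistently fails from
# the test vantage point while succeeding from a control vantage point;
# repetition reduces confusion with transient outages.

def infer_censorship(test_results: list, control_results: list,
                     min_trials: int = 3) -> str:
    """Each list holds True (reachable) / False (unreachable) trials."""
    if len(test_results) < min_trials:
        return "inconclusive"      # not enough repetitions yet
    if all(control_results) and not any(test_results):
        return "likely censored"   # control always succeeds, test never does
    if not any(control_results):
        return "outage"            # resource unreachable from everywhere
    return "no censorship"

print(infer_censorship([False, False, False], [True, True, True]))
```

Real platforms refine every branch of this rule (partial failure rates, per-protocol checks, confirmation over time), but the test-versus-control comparison is the core idea.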
References
1. Herdict. [Electronic resource]. URL: http://www.herdict.org/ (date of access: 08.02.2014).
2. Eastlake D. 3rd. Transport Layer Security (TLS) Extensions: Extension Definitions. RFC 6066, January 2011.
3. Aceto Giuseppe, Feamster Nick and Pescapè Antonio. User-side approach for censorship detection: home-router and client-based platforms. In Connaught Summer Institute on Monitoring Internet Openness and Rights. University of Toronto, 2013.
4. Anderson Collin. Dimming the internet: Detecting throttling as a mechanism of censorship in Iran, June 2013. [Electronic resource]. URL: http://arxiv.org/abs/1306.4361/ (date of access: 17.04.2017).
5. Aryan Simurgh, Aryan Homa and Halderman J. Alex. Internet censorship in Iran: A first look. In Presented as part of the 3rd USENIX Workshop on Free and Open Communications on the Internet. USENIX, 2013.
6. Atkins D. and Austein R. Threat Analysis of the Domain Name System (DNS). RFC 3833 (Informational), August, 2004. [Electronic resource]. URL: http://www.ietf.org/rfc/rfc3833.txt/ (date of access: 17.04.2017).
A METHOD FOR MEASURING THE PERCEIVED COMPLEXITY OF A GRAPHICAL USER INTERFACE Kudryavtsev M.A.
Kudryavtsev Mikhail Andreevich - postgraduate student, Department of Information Technologies and Computer Design, Kosygin Russian State University, Moscow
Abstract: the article analyzes modern methods for assessing interface complexity and presents a method for obtaining an objective assessment of the ergonomics and complexity of an interface at the design stage. Keywords: analysis, interface, ergonomics, design.
The intensive development of information technologies in the modern world has led to an increase in the number of software products released. This situation creates a highly competitive market environment, in which the most significant qualities of the released products are: effectiveness, speed of operation, price, and user convenience. That is why it is extremely important to take these parameters into account at the product design and development stage. However, predicting the effectiveness and convenience of a software product for the end user is extremely difficult before actual use [1]. Clearly, usability assessment is relevant not only at the development stage, but also in subsequent iterations of support and evolution of the software product.
Methods for assessing interface complexity have been proposed before, but most of them are based on expert evaluation of the interface. Such methods rely on a set of rules, criteria and recommendations describing the peculiarities of user perception of interfaces [2]. However, given the fact that