
Copyright © 2020 by Academic Publishing House Researcher s.r.o.

r " *

I

Published in the Slovak Republic

International Journal of Media and Information Literacy
Has been issued since 2016. E-ISSN: 2500-106X
2020, 5(2): 205-216

DOI: 10.13187/ijmil.2020.2.205
www.ejournal46.com

Protection of One's Honor, Dignity, and Business Reputation on Social Networks: Issues and Ways to Resolve Them

Anna S. Slavko a, Vladyslava M. Zavhorodnia a, *, Natal'ya A. Shevchenko b, c

a Sumy State University, Ukraine

b International Network Center for Fundamental and Applied Research, Washington, USA
c Volgograd State University, Volgograd, Russian Federation

Abstract

Over the last decade, social networks have become an indispensable part of societal life. Today, they are used not only for engaging in interpersonal communication but also for developing a personal image or business reputation, creating or promoting a brand, building professional or business relations, conducting commerce, or obtaining the latest information about what is going on around the world.

The findings from an analysis of relevant legislation, case law, and user agreements indicate the unique legal nature of social networks. Based on their analysis of social networks' key functions, the authors prove that in many areas social networks have acquired the role of mass media, as they are now capable of delivering the latest news or any other information to the user, factoring in their individual tastes and information needs. Given the significant role played by social networks in informing people in today's society, coupled with their key characteristics such as horizontal dissemination of information, lack of preliminary moderation of user comments, and availability of two-way communication, compromising people's honor, dignity, and business reputation on social networks can have quite serious implications for them in various spheres of social life. With that said, it appears to be quite difficult, for now, to counter this kind of attack, both technically and legally.

The paper provides an analysis of key issues that can arise as part of efforts to counter defamation on social networks (e.g., difficulty of establishing the identity of a respondent, difficulty of proving malice, or having to factor in the special nature of communication on social networks) and proposes legal solutions and ways to overcome such issues.

Keywords: social networks, honor, dignity, business reputation, defamation, protection of one's honor, dignity, and business reputation.

1. Introduction

Contemporary life can hardly be imagined without social networks, which are used today for expressing oneself, obtaining or disseminating information, or promoting various goods or services. Social networks have become a salient phenomenon that nowadays plays a significant role in politics, economics, and mass culture. The explosive growth in the popularity of social networks is attested by statistics. Specifically, in January of 2020 the world's five most popular social networks reached a combined audience of 8.5 billion (Most popular, 2020). To compare, in 2008 the figure was not even 500 million (Ortiz-Ospina, 2019).

* Corresponding author
E-mail addresses: [email protected] (V.M. Zavhorodnia)

That being said, the widespread use of social networks has also come with certain risks to government and society as a whole, as well as to the rights of individuals. Such risks include, above all, the possibility of intrusion into a person's privacy and disruption of the activity of administrative institutions by way of uncontrolled dissemination of false information. At the juncture of such threats are cases of defamation on social networks, a phenomenon that, unfortunately, is growing rapidly in scale.

Disseminating false offensive information about a person or a group of people can harm their honor, dignity, and business reputation and have a negative effect on their professional and personal contacts. Defamation can not only undermine the overall level of trust within society but also shatter respect for some of the key institutions that ensure the stable operation of the social system, such as public authorities, mass media, nongovernmental organizations, and business establishments. With that said, it is worth noting that risks that can arise in conjunction with the dissemination of fake news on social networks are of special concern, as communication on them tends to be perceived as something private and information obtained this way, accordingly, as something truthful and highly credible.

Thus, defamation on social networks has certain distinctive characteristics that make it difficult to counter today. This paper aims to identify some of these characteristics and bring forward a set of ways to counter defamation on social networks. The study's subject is the characteristics of defamation on social networks, and its objective is to develop a set of ways to counter such defamation using legal tools.

2. Materials and methods

Due to the relatively recent history of virtual communication, issues of the protection of honor, dignity, and business reputation on social networks have not yet received sufficient attention in scholarship, legal practice, or, indeed, legislation in most contemporary countries. This study's theoretical basis is a set of works analyzing the key aspects of the policies pursued by various countries and international organizations (above all, the European Union and Council of Europe) in relation to social networks (Burkell et al., 2014; Obar, Wildman, 2015; Sarikakis, Winter, 2017; Vicente, Novo, 2014).

An analysis of the terms of service and rules of conduct of several leading social networks helped gain an insight into the nature of the private-law dimension of the operation of social networks.

To achieve the study's objective, the authors explored relevant legislation and case law in the US and the EU, as well as case law from the European Court of Human Rights. The choice of this empirical material was governed by a number of factors. Firstly, the world's major social networks recognize the jurisdiction of the United States, the country where the companies that run them are mainly registered. Secondly, within the Anglo-American legal family, judges have the power to render a judgment based on "common sense" and a "general notion of fairness" when no direct provisions of law are available. Thus, it is American courts that are building most of today's case law in this area, which, due to the lack of an adequate legislative response, is becoming the guidepost for the legal qualification and evaluation of the behavior of people on social networks. The other guidepost, for countries signed up to the European Convention on Human Rights (Convention, 1950), is the practice of the European Court of Human Rights, which all member countries of the Council of Europe recognize as binding on their national judicial bodies.

3. Discussion

Social networking sites are application systems that offer users functionalities for identity management and enable them to keep in touch with other users (Richter, Koch, 2008). The key functions of social networks include the following:

1) Content-oriented functions:

- Information and knowledge management: creating, detecting, receiving, managing, and exchanging opinions, knowledge, and information, e.g. wikis, social bookmarking, tagging, RSS, blogospheres, or special interest sites.

- Entertainment or experiencing virtual worlds: exchanging content for the purpose of entertainment or of experiencing virtual (game) worlds, such as YouTube, certain interactive online games, etc.

2) Relationship-oriented functions:

- Relationship management: maintaining existing and establishing new relationships (e.g. on contact platforms), linking of people with similar interests, and exchanging information, e.g. special interest sites such as Myspace for musicians.

- Identity and reputation management: (selectively) presenting aspects of one's own person, e.g. on personal blogs, podcasts, etc. (Legal Basis, 2011).

In addition, social networks are increasingly being studied as a tool for electronic learning in the context of cultivating social skills (Gersamia, Toradze, 2017) and in terms of the value-based administration of law (Zavhorodnia et al., 2019).

Possessing features such as horizontal dissemination of information, availability of instant two-way communication, creation of information "niches" (a sort of information subspaces focused on a certain thematic or ideological area), and simplicity of communication, social networks also appear to serve as today's most efficient mass medium. This fact has been substantiated by research - when it comes both to events of a global scale and to local news associated with the close circle of contacts of a user of a social network (Myers, Hamilton, 2014).

Definitely, of interest is viewing social networks as a communication tool that can be utilized by public authorities (Tappendorf, 2013). On one hand, the economic efficiency of this way to communicate is quite high, as it enables government establishments to spend less time and money on getting necessary information to, as well as targeting it for, a certain audience. On the other hand, social networks enable the public to receive necessary information from government institutions directly, without having to visit them in person, phone them, or make inquiries with them by regular mail. For example, in 2019 Ukraine's National Agency for Ensuring the Quality of Higher Education set up on the social network Facebook a special group (1) designed to inform institutions of higher learning of the accreditation requirements and procedures for accreditation of curricula and (2) oriented toward experts in the area of accreditation selected from among higher education instructors and seekers. Communication within the setting of this group helps come up promptly with solutions to issues that arise during the process of accreditation of curricula, by reference both to the stance of the National Agency and to the experience of several thousand experts concerned with accreditation examination of various specialties and disciplines. It is clear that an online community of this kind offers great potential both for an exchange of ideas and for the study of the regulatory framework for accreditation of curricula. There also occur collisions between representatives of the institutions of higher learning and experts with a negative assessment of their activity, as well as between both of these groups and the staff of the National Agency. Collisions of this kind tend to be of quite a productive nature, as, in the end, they can lead to improvements in the quality of higher education offered by Ukrainian universities. However, as ascertained by research conducted by the authors of this work, there is also the possibility of statements of an untactful nature being made concerning both public interests (e.g., the openness of information about curricula and verification of the knowledge and competencies acquired by the seeker) and private interests (e.g., the honor, dignity, and business reputation of persons involved in the accreditation process). Consequently, issues of protection of the honor and dignity of individuals engaged in communication within the segments of social networks moderated by the authorities need to be regulated too.

It is also worth noting that in recent years professional journalists have increasingly used social networks as a source for creating news and other information materials (Umrani et al., 2019). For this reason, the quality of content offered on social networks and the credibility of information disseminated over them are key components of the personal security of any user.

Researchers have yet to arrive at a clear-cut concept of what principles the government's policy on social networks is to be built upon. Specifically, there has been discussion over whether social networks are to be considered public or personal space (Burkell et al., 2014), how the government could utilize social networks (Vicente, Novo, 2014), whether the government can (and should) regulate the activity of users of social networks (Obar, Wildman, 2015), and what the content and limits of law on privacy on social networks are to be (Sarikakis, Winter, 2017). Unless these issues are resolved at a conceptual level, it will hardly be possible to ensure the proper protection of the rights of citizens and protection from risks associated with the dissemination on social networks of false and destructive information.

4. Results

Social networks in the private-law dimension

From the standpoint of private law, social networks operate based on an adhesion agreement entered into between the user and the social network company. User agreement provisions are established and modified by the company unilaterally, and the company informs the user of any changes. With that said, there is normally a presumption that the user has familiarized themselves with and consented to such changes.

In essence, user agreements are service provision agreements, for the company enables the user to place on the site information about themselves, exchange posts and comments, express their attitude toward something using special symbols, etc. The user, in turn, agrees not to violate the rules of the community (e.g., refrain from disseminating fake information, provide their first and last names during registration, respect the copyright of other users, or refrain from hate speech). The list of obligations may vary depending on the policy of the different social networks.

For instance, in accordance with Facebook's user agreement, this social network's mission is to give people the power to build community and bring the world closer together. The services Facebook provides are subsumed under several categories. These include the following:

(1) providing a personalized experience for the user; (2) connecting the user with people and organizations that they care about; (3) empowering the user to express themselves and communicate about what matters to them; (4) helping the user discover content, products, and services that may interest them; (5) combating harmful conduct, and protecting and supporting the community; (6) using and developing advanced technologies to provide safe and functional services for everyone; (7) researching ways to make Facebook's services better; (8) providing consistent and seamless experiences across the Facebook company products; (9) enabling global access to Facebook's services (Facebook, 2019).

With that said, the user agrees to (1) provide accurate information about themselves; (2) refrain from doing anything that is unlawful, misleading, discriminatory, or fraudulent; (3) refrain from doing anything that infringes or violates someone else's rights, including their intellectual property rights; (4) refrain from uploading viruses or malicious code, or doing anything that could disable, overburden, or impair the proper working or appearance of Facebook's products; (5) refrain from accessing or collecting data from Facebook's products using automated means without the company's prior permission (Facebook, 2019).

Facebook cannot be used if (1) the person is under 13 years old (or the minimum legal age in their country to use Facebook's products); (2) the person is a convicted sex offender; (3) Facebook has previously disabled the person's account for violations of its terms or policies; (4) the person is prohibited from receiving Facebook's products, services, or software under applicable laws (Facebook, 2019). With that said, Facebook does not engage in preliminary moderation of the user's page and does not check the user for compliance with its requirements. However, the company has made provision for the possibility of complaining about offensive or inappropriate content or suspicious user profiles. Complaints of this kind are examined by the social network's local or regional offices. Subsequently, the company decides whether to take no action against the user (if they did not violate the rules of the community), ban the user temporarily (for one to three days or one month), or ban the user permanently. Facebook also has a special page verification function. Primarily, this is for brands or public persons, and is designed to show that the user has confirmed their identity or brand (Facebook, 2019).

The rules of the social network Twitter are relatively more liberal. For example, it does not require that you provide your real name. To use Twitter, you must be at least 13 years old, or in the case of certain services 16 years old. User names are not moderated. Moreover, protecting your anonymity is one of Twitter's top priorities. With that said, the social network assumes no responsibility for the unreliability and offensive nature of content and lets the user know upfront that any use or reliance on any content or materials posted on Twitter or obtained by the user through Twitter is at their own risk. Twitter reserves the right to remove content that violates the user agreement, including, for example, copyright or trademark violations or other intellectual property misappropriation, impersonation, unlawful conduct, or harassment (Twitter, 2020).

Twitter may suspend or terminate your account or cease providing you with all or part of the services if it reasonably believes that (1) you have violated the terms or the Twitter rules and policies; (2) you create risk or possible legal exposure for it; (3) your account should be removed due to unlawful conduct; (4) your account should be removed due to prolonged inactivity; (5) its provision of the services to you is no longer commercially viable (Twitter, 2020).

The social network LinkedIn positions itself as a platform for establishing work contacts and presenting yourself as a professional. Pursuant to LinkedIn's user agreement, to access its services you must be the "minimum age" (16 years old) or older, have one LinkedIn account, which must be in your real name, and not already be restricted by LinkedIn from using its services (LinkedIn, 2020).

LinkedIn does not offer preliminary moderation of content and will bear no responsibility if you violate the intellectual property rights of others or post offensive content. You are not allowed to create a false identity on LinkedIn, use malicious software, disclose confidential information, or violate the intellectual property rights of others. These actions may result in unilateral termination of the agreement (LinkedIn, 2020).

Another highly popular social network - Instagram - is a product of Facebook. For this reason, its terms of use are similar to Facebook's. For its part, Instagram offers personalized opportunities to create things and communicate and promotes fostering a positive, inclusive, and safe environment. The social network promises you consistent and seamless experiences across the company products (Instagram, 2018).

To use Instagram, you must be at least 13 years old, not be prohibited from receiving any aspect of its service under applicable laws, not have previously had your account disabled for violation of law or any of its policies, not be a convicted sex offender, and not be on an applicable denied party listing (Instagram, 2018).

You are discouraged from impersonating someone you are not or providing inaccurate information on Instagram. Nevertheless, you do not have to disclose your identity on Instagram. The social network does not want you to do anything fraudulent, interfere with the intended operation of the service, collect private information, or do anything that violates someone else's rights, including intellectual property rights. Instagram also offers an account verification function, which mainly is intended for celebrities and brands (Instagram, 2018).

In sum, user agreements are bilateral service provision agreements. What remains a key condition of user agreements is respect for the mutual rights and obligations of social networks and users. Social networks reserve the right to determine the circle of persons to whom to provide their services and the right to terminate cooperation with users who create risks of litigation for the company or violate the rules of the community.

Social networks in the public-law dimension

In public law, social networks are treated in a completely different way, in terms of both functionality and legal status. While the foundations of the public-law dimension of social networks' operation are still being laid, the findings from an analysis of documents from international organizations and relevant case law indicate an emerging common understanding of social networks as information platforms with two-way communication.

It was back in 2003 that the Committee of Ministers of the Council of Europe adopted the Declaration on Freedom of Communication on the Internet, which set out such important principles as absence of prior state control, anonymity, freedom to provide services via the Internet, and limited liability of service providers for Internet content (Declaration, 2003). That being said, over the following 10 years, the Council of Europe's stance underwent a number of changes. Specifically, Recommendation CM/Rec(2012)4 of the Committee of Ministers to Member States on the Protection of Human Rights with Regard to Social Networking Services not only highlights the benefits offered by social networks (e.g., social networks being human rights enablers and catalysts for democracy) but also warns of the risks associated with the use thereof. In particular, the document states that "the right to freedom of expression and information, as well as the right to private life and human dignity, may also be threatened on social networking services, which can also shelter discriminatory practices" (Recommendation, 2014).

The European Union prioritizes, above all, the protection of the personal data of users of social networks. In this regard, a number of documents have been adopted dealing with the protection of natural persons in relation to the processing of personal data, the free movement of such data (Regulation, 2016), and the protection of intellectual property rights (Directive, 2019).

In terms of working out policy on social networks, most nations of the world fall into one of the following two camps: (1) those willing to disregard or ignore the activity of social networks and (2) those willing to formalize the rules of communication on social networks and monitor all of the content published on them. That being said, a hybrid model of the relationship between the state and social networks implies cooperation and co-regulation, with the government as a whole and its particular members communicating with citizens via social networks and treating the behavior of users on the Internet as real actions. Thus, users of social networks will bear legal responsibility for personal data theft, fraud, offensive statements, infringement of intellectual property rights, and other wrongful acts committed over the Internet.

Furthermore, social networks are viewed as one of the most significant tools for communication. Of interest in this context is the Packingham v. North Carolina case, one of the first trials to help give a legal assessment of social networks' role, which was argued during the October 2016 term of the U.S. Supreme Court. The legislation of the state of North Carolina used to prohibit registered sex offenders from accessing social networking websites that permit minor children to become members. Based on data from the Court, the state had prosecuted more than 1,000 people for violating that law, including the petitioner, Lester Packingham, who was convicted for maintaining a Facebook page in violation of the statute. Invoking the First Amendment to the US Constitution, the Court held that "a fundamental principle of the First Amendment is that all persons have access to places where they can speak and listen, and then, after reflection, speak and listen once more." The Court described cyberspace as the most important forum for the exchange of ideas, with social networks offering "relatively unlimited, low-cost capacity for communication of all kinds" (Packingham v. North Carolina, 2017).

On July 11, 2017, the Knight Institute filed a lawsuit in the US District Court for the Southern District of New York against President Trump and his aides for blocking seven people from the @realDonaldTrump Twitter account based on their criticism of his presidency and policies. This lawsuit resulted in the case of Knight Institute v. Trump.

The lawsuit maintained that the @realDonaldTrump account is a "public forum" under the First Amendment, from which the government may not exclude people based simply on their views. According to the plaintiffs, President Trump established his account, with the handle @realDonaldTrump, in March of 2009. From that time and until January of 2017, when his inauguration took place, no one was in a position to dictate to the user how to run his private account. However, as soon as Donald Trump officially became President, his account turned into a public forum for First Amendment purposes. The plaintiffs insisted that the public presentation of the Account and the webpage associated with it bear all the trappings of an official, state-run account. On top of that, Trump uses the account to announce "matters related to official government business," including high-level White House and cabinet-level staff changes, as well as changes to major national policies. Finally, the plaintiffs noted that the National Archives, the government agency responsible for maintaining the government's records, had concluded the President's tweets to be official records (Knight Institute v. Trump, 2018).

The US District Court for the Southern District of New York found that the "interactive space" in the account is a public forum and that the exclusion from that space was unconstitutional viewpoint discrimination. The US Court of Appeals for the Second Circuit affirmed the district court's holding that President Trump's practice of blocking critics from his Twitter account violates the First Amendment (Knight Institute v. Trump, 2018). On August 23, 2019, the government filed a petition for rehearing en banc.

However, the public forum rule applies not only to the president and other top federal officials. For instance, in its decision in the case of Davison v. Randall, the court affirmed that public officials' social media accounts can be "public forums" under the First Amendment. The lawsuit was brought by Brian Davison, a Virginia resident who was temporarily blocked from the official Facebook page of Phyllis J. Randall, the chair of the Loudoun County Board of Supervisors. The lawsuit maintained that Randall's Facebook page is a "public forum" under the First Amendment, and that Randall may not exclude people from it based on their views. A panel of the Fourth Circuit unanimously held that the "interactive component" of a local government official's Facebook page constituted a public forum and that the official engaged in unconstitutional viewpoint discrimination by banning Davison from that forum. The court suggested that blocking critical comments about government officials is a substantial form of discrimination from the perspective of the First Amendment. The US Court of Appeals for the Fourth Circuit affirmed the decision, with the defendant's cross-claim failing (Davison v. Randall, 2019). Thus, from the perspective of US public law, a portion of accounts on social networks (i.e., those belonging to public officials) are public forums to which access cannot be restricted.

Note that, while the pages of users who are not public officials are not viewed as a "public forum", they possess the same functionality in terms of disseminating information. Accordingly, "regular" users can independently set the desired level of accessibility of their content in making it available to other users for viewing and commenting on. The possibility of easily disseminating any information among a large group of people makes social networks resemble an online mass medium. That being said, users of social networks are not bound by professional ethics requirements and often do not realize that they can be held responsible for what they say online.

Thus, when it comes to the pages (accounts) of users of social networks from the perspective of public law, they are to be regarded as either a "public forum", i.e. a platform for the exchange of ideas, including between the government and citizens, or a quasi-mass medium that disseminates information for an unlimited number of people.

Defamation

The term 'defamation' comes from the Middle English 'diffamacioun' ('disgrace', 'dishonor', or 'ill repute'), which is derived from the Latin 'diffamare' ('to spread abroad by ill report' or 'to make a scandal of'). While it is common in the US and certain countries of Western Europe, the concept is not very common in the legislation of most post-Soviet states (Shatiryan, 2014). These nations tend to prefer the cumbersome 'demean one's honor, dignity, and business reputation' to 'defame one'. For the most part, the lawmaker treats defamation as a tort, with a focus on protecting one's honor, dignity, and business reputation by way of court action.

The above is perfectly in line with the standards of the Council of Europe captured in Resolution No. 1577 (2007), 'Towards Decriminalisation of Defamation'. The document states that, while "freedom of expression is not unlimited", "statements or allegations which are made in the public interest, even if they prove to be inaccurate, should not be punishable provided that they were made without knowledge of their inaccuracy, without intention to cause harm, and their truthfulness was checked with proper diligence" (Resolution 1577). A similar conclusion has been reached by the United Nations Human Rights Committee, whose General Comment No. 34 on Article 19 ('Freedoms of Opinion and Expression') states that "consideration should be given to avoiding penalizing or otherwise rendering unlawful untrue statements that have been published in error but without malice" (General comment, 2011).

While different legal systems treat the structure of defamatory wrongdoing slightly differently, the common focus is on how to deal with the dissemination of false information about a person or a group of individuals and the offensive, degrading nature of such information.

For instance, in the US making truthful information (whose veracity can be proven in a court of law) public is not considered defamation. Defamation is construed as consciously injuring a person's reputation by "false and malicious statements" (Black's Law Dictionary). With that said, false information about a person can be disseminated either orally (e.g., during a public argument, a discussion between the victim and a third party, or a face-to-face or phone conversation) or in written form (e.g., via articles in mass media, messages on social networks, letters, or leaflets). In the current climate of the rapid development of digital technology and growth in the popularity of social networks, one is witnessing the second type of defamation becoming increasingly common.

The key components of adjudicating defamation claims include proving the falsity of the information, proving the respondent's knowledge of that falsity, and establishing the amount of moral damage inflicted on the petitioner. The amount of compensation the petitioner may be entitled to will depend on the degree to which the information is offensive and on how wide the circle of people who have had access to it is.

In some cases, laws on defamation, often in the form of provisions on "calumny", are used to compensate for moral damage without taking account of the fact that in a democratic society a person's notions of their own honor and dignity ought to be juxtaposed with the right of others to freely express their views. The national legislation of most European nations sets out parameters for "offensive actions and statements" and exposes to potential legal liability a person who has insulted a public official or a judge. However, practices of this kind are not always recognized as justified. For example, in the case of Lingens v. Austria the European Court of Human Rights suggested that, while the press may not overstep its boundaries, including the protection of the reputation of others, the limits of acceptable criticism by the press are wider as regards a politician as such than as regards a private individual. In addition, the Court suggested that a careful distinction is to be made between facts and value judgments/opinions (Lingens v. Austria, 1986).

In the case of De Haes and Gijsels v. Belgium, the European Court asserted that, while the comments made by the applicants (an editor and a journalist) were, without doubt, severely critical, they, nevertheless, appeared "proportionate to the stir and indignation caused by the matters alleged in their articles." As for the applicants' polemical and even aggressive tone, it was stated that it must be remembered that Article 10 of the European Convention on Human Rights "protects not only the substance of the ideas and information expressed but also the form in which they are conveyed" (De Haes and Gijsels v. Belgium, 1997). Thus, the European Court indirectly acknowledged that Belgium's legislation on defamation is overly punitive toward people who disseminate information.

Defamation on social networks

When information that discredits one's honor, dignity, and business reputation is disseminated over the Internet and social networks, the courts and petitioners may face a number of difficulties that make it highly complicated to protect the petitioner's rights.

Firstly, there is the issue of identifying the person who disseminated false information about the petitioner, i.e. the respondent. As already mentioned above, not all social networks require that you specify your real name when registering the account. Even social networks that do have this requirement (e.g., Facebook and LinkedIn) cannot always guarantee that a certain name, last name, or photo belongs to a real person. While moderators at social networks do have to react to user complaints regarding fake accounts from other users, they rarely conduct checks of their own accord. In the event of numerous complaints about an account, moderators may ask for electronic copies of documents that can confirm the person's identity. Social networks such as Twitter and Instagram do not require that the user's display name match their real name, and even consider ensuring user anonymity to be their priority.

There can be a little more confidence about the respondent's identity in a defamation case when the account is verified (e.g., on Facebook and Twitter this is indicated by a blue tick beside the username). However, in such cases the user is still in a position to deny their personal engagement in disseminating a piece of information. For instance, there have been many cases of users claiming that their account was hacked or stolen, for which reason they could not be held responsible for defamation.

Quite often, offensive information is disseminated from an anonymous account or within an anonymous group. For example, in the case of Layshock v. Hermitage School District a high school student, Justin Layshock, used his grandmother's computer to create a fake profile on the social network MySpace, using for that the name of the school's principal and a photo of the latter, which he copied from the school district's website. The vulgar parody linked the principal to drugs, alcohol, and prostitutes. Justin afforded access to the profile to other students in the District by listing them as "friends" on the MySpace website, thus allowing them to view the profile. Subsequently, a few more similar profiles of the principal were posted, each more vulgar and more offensive than Justin's. Their authors were never found. As a result, school district officials ordered disciplinary actions against the student. To challenge this, Justin's parents went to court. The court acknowledged that the District's punishment violated Justin's First Amendment rights of expression. The decision was affirmed by the court of appeals (Layshock v. Hermitage, 2010).

In investigating punishable offences, the police may try to identify a user by their IP address or confiscate their hard drive or telephone, but these measures are often inaccessible to the petitioner during litigation. What is also obvious is the fact that social networks possess more information about their users than what is provided on their page. For example, a social network knows the user's email address, phone number, IP address, geotagging data, and much more information that can be used for identification purposes. While this information can be requested by a pre-trial investigation authority in investigating an offence, it is almost never disclosed to private individuals.

The authors are of the view that possible solutions to this problem are to seize the means of accessing the user's profile by way of provisional remedy and to conduct a linguistic examination in order to determine who the author of the defamatory statements is. In addition, to confirm that it is the respondent's account, it may help to request information on the user's profile from the social network's administration. A significant factor that may hinder the efficient handling of cases on defamation on social networks is Section 230 of the US Communications Decency Act, which provides immunity from liability for providers and users of an interactive computer service who publish information provided by others (U.S. Code § 230). It may be argued that such immunity should not be of an absolute nature. It will be recalled that most social networks' user agreements entitle the user to complain about content that violates the rules of the social network. A person who has become the victim of defamation can make a complaint about the corresponding content to the moderator. In the event this content has not been removed although the user who posted it has been warned of a violation of the social network's terms of use, it may help to have the social network's administration joined as a co-respondent to the petition.

The second difficulty in prosecuting cases related to defamation on social networks is associated with proving the respondent's knowledge of the falsity of the information.

As already mentioned above, users of social networks who are not public officials are not bound by rigorous standards of conduct on social networks. In addition, unlike journalists, for instance, they are not subject to rules of professional ethics. Thus, it is acceptable for them to make value judgments, emotional statements, assumptions, etc., as long as this does not violate the required standards of conduct in a community.

Of interest in this context is the case of Gordon & Holmes v. Love. Specifically, celebrity Courtney Love, the widow of the late Nirvana frontman Kurt Cobain, accused her former attorney Rhonda Holmes of having been bribed. In a series of messages on Twitter, she claimed that her attorneys had been bribed and hinted at their ties to organized crime. Holmes filed a lawsuit in the Superior Court of Los Angeles County, seeking punitive damages for libel. According to the plaintiffs, Love's public comments could "cause irreparable damage" to their business, name, and reputation. The respondent's representatives argued that communication on social networks is very much like real-world communication, so users may sometimes make emotional statements. Since Love's comments did not contain any indication that she had checked the information communicated, it was not possible to apply to them the same standards as those applied to, say, what is published in the media. As a result, the court ruled in favor of Courtney Love because the petitioner was unable to prove that the respondent had known that her statements were false (Gordon & Holmes v. Love, 2016).

The courts tend to assess statements a little differently when a respondent's actions may be qualified as bullying or harassment on social networks. Of interest in this context is the case of Kowalski v. Berkeley County Schools. Specifically, a senior high school student created and posted a MySpace.com webpage called 'S.A.S.H.' ("students against sluts herpes"), dedicated to ridiculing a fellow student as someone with a sexually transmitted disease. For instance, the group uploaded a picture of a female fellow student with red dots drawn on her face to simulate herpes. School administrators received a complaint from several students about this. The victim and her parents filed a harassment complaint with the school's administration. After an investigation of the site and interviews with Ms. Kowalski, the administration concluded that she had created a "hate website" which was a direct violation of the school's policy against "harassment, bullying, and intimidation." The school then punished her with a five-day suspension from school and a 90-day "social suspension" prohibiting her from attending school events. Kowalski's parents appealed the decision in court, contending that it violated her free speech and due process rights under the First and Fourteenth Amendments. However, both the district court and the court of appeals supported the decision of the school's administration, even though the student's expression took place off campus and outside school hours. According to the district court, the school's administration was authorized to punish Kowalski because her webpage was "created for the purpose of inviting others to indulge in disruptive and hateful conduct," which caused an "in-school disruption" (Kowalski v. Berkeley, 2011).

Thus, proving the respondent's knowledge of the falsity of the information is an extremely complex process, just like proving any other component of the subjective side of a wrongdoing. The possible types of evidence include testimony from witnesses pointing to the respondent's intent to commit the wrongdoing, the respondent's past statements and behavior, and even the Web search history on their computer if it is pertinent to the case.

Finally, the third major difficulty is associated with assessing symbols, hashtags, emoji, and other elements of communication on social networks. Communication on social media abounds with all kinds of symbols. The most common of them are emoji, hashtags, and special abbreviations. In addition, communication on social media typically involves the use of metaphors, allegories, and sarcasm. Quite often, because of this, messages are perceived in diametrically different ways, which afterwards can be a problem for a court hearing a defamation case (Pelletier, 2016). Of interest in this context is the McAlpine v Bercow case. Specifically, a report broadcast on TV linked an unnamed "senior Conservative" politician to paedophilia claims. Shortly afterwards, the defendant, Sally Bercow, who herself had a high public and media profile at the time, wrote on her Twitter page 'Why is Lord McAlpine trending?', following this with the words 'innocent face'. After the allegations against McAlpine proved to be unfounded, the peer threatened Bercow with legal action for libel. As "innocent" as the defendant's tweet seemed on its face, the court took account of the context.

Specifically, the court stated the following: "People who are not familiar with Twitter may not understand the words 'trending' and 'innocent face'. But users of Twitter would understand. The Twitter website has a screen with a box headed 'Trends'. It lists names of individuals and other topics. Twitter explains that this list is generated by an algorithm which 'identifies topics that are immediately popular, rather than topics that have been popular for a while or on a daily basis, to help you discover the hottest emerging topics of discussion on Twitter that matter most to you. You can choose to see Trends that are tailored for you...'" (Lord McAlpine, 2013).

The court then noted that it is "common ground between the parties that the words 'innocent face' are to be read like a stage direction, or an emoticon (a type of symbol commonly used in a text message or e-mail). Readers are to imagine that they can see the Defendant's face as she asks the question in the tweet. The words direct the reader to imagine that the expression on her face is one of innocence, that is an expression which purports to indicate (sincerely, on the Defendant's case, but insincerely or ironically on the Claimant's case) that she does not know the answer to her question" (Lord McAlpine, 2013).

The court factored in the prevalence of sarcasm on social media and noted that the tweet meant, in its natural and ordinary defamatory meaning, that Lord McAlpine was "a paedophile who was guilty of sexually abusing boys living in care" and that in the alternative it would be found that the tweet bore an innuendo meaning to the same effect (Lord McAlpine, 2013).

Thus, the courts are faced with the task of assessing the context in which the communication took place as thoroughly as possible. In addition, they must take into account the use of technical capabilities such as hashtags, emoji, or reactions that can influence the user's perception of the content. This task appears to be the hardest to achieve in countries following the Romano-Germanic legal tradition, where judges do not enjoy such broad discretion and cannot invoke "common sense" as freely as in the US and the UK.

5. Conclusion

Social networks are systems that are used today for engaging in interpersonal communication, building relations, and disseminating information. They are also used widely nowadays for advertising purposes, selling goods and services, and creating a "personal brand". Today, increasingly more people are building their picture of the world based on social networks, with many residing in so-called "filter bubbles", niched information spaces that encompass certain information flows but are isolated from others. For this reason, defamation on social networks comes with multiple risks. The special nature of defamation on social networks makes it much harder to counter than slander or the dissemination of knowingly false information via traditional mass media. This paper has discussed issues in countering defamation on social networks such as the difficulty of establishing the identity of a respondent, the difficulty of proving malice, and the need to take into account the various additional means of communicating on social networks (e.g., emoji and hashtags). Existing statutes and case law, both within specific countries and internationally (e.g., case law from the European Court of Human Rights), provide only certain critically significant points of departure for ensuring the protection of the rights of people who use social networks. It is without question that there is currently a need for thorough research, in terms of both the scholarly and law-application aspects, into criteria for the legal qualification of the behavior of users of social networks in various contexts (e.g., suppression of defamation, protection of privacy, and protection of intellectual property rights).

References

Black's Law Dictionary - What is defamation? Black's Law Dictionary. [Electronic resource]. URL: https://thelawdictionary.org/defamation/

Burkell et al., 2014 - Burkell, J., Fortier, A., Wong, L. (Lola) Y.C., Simpson, J.L. (2014). Facebook: public space, or private space? Information, Communication & Society. 17: 974-985.

Convention, 1950 - Convention for the Protection of Human Rights and Fundamental Freedoms, Rome, 4.XI.1950. Strasbourg. [Electronic resource]. URL: https://www.echr.coe.int/Documents/Convention_ENG.pdf

Davison v. Randall, 2019 - Case of Davison v. Randall / Justia United States Law (2019). [Electronic resource]. URL: https://law.justia.com/cases/federal/appellate-courts/ca4/17-2002/17-2002-2019-01-07.html

De Haes and Gijsels v. Belgium, 1997 - Case of De Haes and Gijsels v. Belgium / European Court of Human Rights HUDOC (1997). [Electronic resource]. URL: http://hudoc.echr.coe.int/eng?i=001-58015

Declaration, 2003 - Declaration on freedom of communication on the Internet, adopted by the Committee of Ministers on 28 May 2003 at the 840th meeting of the Ministers' Deputies / Organization of Security and Co-operation in Europe [Electronic resource]. URL: https://www.osce.org/fom/31507?download=true

Directive, 2019 - Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC / EUR-Lex. Law [Electronic resource]. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1586949061989&uri=CELEX:32019L0790

Facebook, 2019 - Facebook Terms of Service (2019). [Electronic resource]. URL: https://www.facebook.com/legal/terms

General comment, 2011 - General comment no. 34, Article 19, Freedoms of opinion and expression / UN Human Rights Committee (HRC), 12 September 2011. CCPR/C/GC/34 [Electronic resource]. URL: https://www.refworld.org/docid/4ed34b562.html

Gersamia, Toradze, 2017 - Gersamia, M., Toradze, M. (2017) Communication Function of Social Networks in Media Education: The Case of Georgia. Athens Journal of Mass Media and Communications. 3: 195-206.

Gordon & Holmes v. Love, 2016 - Case of Gordon & Holmes v. Love / Casetext (2016). [Electronic resource]. URL: https://casetext.com/case/holmes-v-love

Instagram, 2018 - Terms of Use in Instagram (2018). [Electronic resource]. URL: https://www.facebook.com/help/instagram/581066165581870

Knight Institute v. Trump, 2018 - Case of Knight Institute v. Trump. Justia United States Law (2018). [Electronic resource]. URL: https://cases.justia.com/federal/appellate-courts/ca2/18-1691/18-1691-2019-07-09.pdf?ts=1562682609

Kowalski v. Berkeley, 2011 - Case of Kowalski v. Berkeley County Schools / Casetext (2011). [Electronic resource]. URL: https://casetext.com/case/kowalski-v-berkeley-county-sch

Layshock v. Hermitage, 2010 - Case of Layshock v. Hermitage School District / Casetext (2010). [Electronic resource]. URL: https://casetext.com/case/layshock-v-hermitage-school-dist-3

Legal Basis, 2011 - Legal Basis for Social Media: Report of the Federal Council in Fulfilment of the Amherd Postulate (2011). [Electronic resource]. URL: https://www.bakom.admin.ch/dam/bakom/en/dokumente/2013/10/rechtliche_basisfuersocialmediaberichtdesbundesrates.pdf.download.pdf/legal_basis_for_socialmediareportofthefederalcouncil.pdf

Lingens v. Austria, 1986 - Case of Lingens v. Austria / European Court of Human Rights HUDOC (1986). [Electronic resource]. URL: http://hudoc.echr.coe.int/eng?i=001-57523

LinkedIn, 2020 - User agreement of LinkedIn [Electronic resource]. URL: https://www.linkedin.com/legal/user-agreement?_l=ru_RU#dos

Lord McAlpine, 2013 - Case of the Lord McAlpine of West Green Claimant v. Sally Bercow / Courts and Tribunals Judiciary (2013). [Electronic resource]. URL: judiciary.uk/wp-content/uploads/JCO/Documents/Judgments/mcalpine-bercow-judgment-24052013.pdf

Most popular, 2020 - Most popular social networks worldwide as of January 2020, ranked by number of active users [Electronic resource]. URL: https://www.statista.com/statistics/272014/global-social-networks-ranked-by-number-of-users/

Myers, Hamilton, 2014 - Myers, M., Hamilton, J. (2014). Social Media as Primary Source. Media History. 20: 431-444.

Obar, Wildman, 2015 - Obar, J.A., Wildman, S. (2015). Social media definition and the governance challenge: An introduction to the special issue. Telecommunications Policy. 39(9): 745-750.

Ortiz-Ospina, 2019 - Ortiz-Ospina, E. (2019) The Rise of social media [Electronic resource]. URL: https://ourworldindata.org/rise-of-social-media

Packingham v. North Carolina, 2017 - Case of Packingham v. North Carolina / Supreme Court of the United States (2017). URL: https://www.supremecourt.gov/opinions/16pdf/15-1194_08l1.pdf

Pelletier, 2016 - Pelletier, N. (2016). The Emoji that Cost $20,000: Triggering Liability for Defamation on Social Media. Journal of Law & Policy. 52: 227-254

Recommendation, 2014 - Recommendation CM/Rec(2012)4 of the Committee of Ministers to member States on the protection of human rights with regard to social networking services (Adopted by the Committee of Ministers on 4 April 2012 at the 1139th meeting of the Ministers' Deputies). Council of Europe [Electronic resource]. URL: https://search.coe.int/cm/Pages/result_details.aspx?ObjectID=09000016805caa9b

Regulation, 2016 - Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). EUR-Lex. [Electronic resource]. URL: https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1586948907014&uri=CELEX:32016R0679

Resolution 1577 - Resolution 1577 (2007). Towards decriminalisation of defamation adopted by the Assembly on 4.10.2007. Council of Europe Parliamentary Assembly [Electronic resource]. URL: http://assembly.coe.int/Main.asp?link=/Documents/AdoptedText/ta07/ERES1577.htm

Richter, Koch, 2008 - Richter, A., Koch, M. (2008). Functions of Social Networking Services / COOP 2008: Proceedings of the 8th International Conference on Designing Cooperative Systems, May 20-23, 2008, Carry-le-Rouet, Provence, France. [Electronic resource]. URL: https://pdfs.semanticscholar.org/af45/bfbd61fcea36d04ba0b48e09d5432240569c.pdf

Sarikakis, Winter, 2017 - Sarikakis, K., Winter, L. (2017). Social Media Users' Legal Consciousness about Privacy. Social Media + Society. 3(1): 1-14. DOI: 10.1177/2056305117695325

Shatiryan, 2014 - Shatiryan, E. (2014). The Issue of Improvement of Some Structures of Legal Protection of Individual's Honor, Dignity and Business Reputation in the Republic of Armenia. Materials of conference devoted to 80th anniversary of the faculty of law of the Yerevan State University: 124-139. [Electronic resource]. URL: http://www.ysu.am/files/Edga_%20Shatiryan.pdf

Tappendorf, 2013 - Tappendorf, J.A. (2013). Social Media & Governments - Legal & Ethical Issues. Ancel Glink. [Electronic resource]. URL: https://www.in.gov/ig/files/Julie_Tappendorf.pdf

Twitter, 2020 - Twitter Terms of Service (2020). [Electronic resource]. URL: https://twitter.com/ru/tos

U.S. Code § 230 - U.S. Code § 230. Protection for private blocking and screening of offensive material. Legal Information Institute [Electronic resource]. URL: https://www.law.cornell.edu/uscode/text/47/230

Umrani et al., 2019 - Umrani, L., Memon, B., Ali Khuhro, R. (2019). Use of Facebook Information for News Production by Journalists in Pakistan. International Journal of Media and Information Literacy. 4(2): 66-76. DOI: 10.13187/ijmil.2019.2.66

Vicente, Novo, 2014 - Vicente, M.R., Novo, A. (2014). An empirical analysis of e-participation: The role of social networks and e-government over citizens' online engagement. Government Information Quarterly. 31: 379-387.

Zavhorodnia et al., 2019 - Zavhorodnia, V., Slavko, A., Degtyarev, S., Polyakova, L. (2019). Implementing a Value-Oriented Approach to Training Law Students. European Journal of Contemporary Education. 8(3): 677-691. DOI: 10.13187/ejced.2019.3.677
