
Taking Down Cyber Violence

Supreme Court’s Emerging Stance on Online Censorship and Intermediary Liability

Amrita Vasudevan (amrita@itforchange.net) is a research associate with IT for Change, a Bengaluru-based non-governmental organisation.

Through the proliferation of rape videos, morphed images, etc, the internet is witnessing increasing instances of violence against women. As websites sometimes claim to be intermediaries that cannot always be held responsible for the nature of content uploaded online, the issue of intermediary liability needs to be addressed urgently. The precedent set by the Supreme Court in this regard in certain cases merits critical examination, in order to pave the way forward for developing an alternate intermediary liability regime, one that can walk the tightrope between censorship and the protection of the dignity of women and children in cyberspace.

The author would like to thank Anita Gurumurthy, executive director of IT for Change, for her vital inputs and recommendations, and acknowledge the referee for comments and revisions that helped shape this article.

This paper revisits current debates on the liability of internet intermediaries in India, in the context of violence against women online. Internet intermediaries can be described as entities that facilitate access to the internet or to services on the internet (Association for Progressive Communications 2014). Common types of intermediaries are internet service providers, search engines, social media networks, etc. Unlike book publishers, internet intermediaries adopt a passive relationship with the content they host. Since they do not exercise editorial control, countries have been encouraged to legislate safe-harbour protections for internet intermediaries from strict or criminal liability (La Rue 2011). Digital corporations have time and again asserted their passivity in relation to the content on their services (Gillespie 2010), and civil society has supported their claim to immunity.1 The crux of the argument is that when the law treats intermediaries as gatekeepers, they will over-censor in a bid to reduce their liability (Center for Democracy and Technology 2010). Their particular legal treatment is thus argued to be an essential part of free speech online.

In 2008, India introduced significant amendments to the Information Technology Act, 2000 (IT Act), adopting a notice-and-takedown regime along the lines of the European Union (EU) E-Commerce Directive 2000/31/EC. An intermediary was thus liable to take down illegal content about which it had “actual knowledge,” acquired through proactive measures, user notification or a government order (Arun and Singh 2010). Such a notice-and-takedown system has been criticised for leaving the decision on the legality of content in the hands of a private entity that is bound to err on the side of censorship overreach, lest it be held liable for allowing illegal content to remain on its platform (Dara 2011). In 2015, via the landmark Shreya Singhal v Union of India (2015) judgment, the Supreme Court tweaked the intermediary liability regime by reading down “actual knowledge” to mean a court or executive order.

Consequent to the judgment, judicial or executive deliberation is required before content can be pulled down from the internet. In the same year, two other cases before the Supreme Court were gathering steam. One dealt with search engine liability for allowing advertisements related to prenatal sex determination to appear in search results, and the other with the liability of social media networks for the circulation of rape videos online. By the end of 2017, the Court had decided in both cases that a combination of measures would be required—pre-filtering of content by the intermediary as well as notice-and-takedown—as a result of which the government set up special nodal agencies to receive complaints and initiate takedowns (Financial Express 2017). The response to the Court’s stance has been critical, accusing it of ignoring the safeguards to free speech online established under the Shreya Singhal judgment (Gupta 2016).

This paper argues that these two decisions must be seen in relation to the type of content sought to be addressed and not as a general precedent, thus making a case for a versatile and differentiated intermediary liability regime. The Court’s response in both these cases is contextually stimulated and must be seen as an effort to promote women’s right to equality and dignity, by preventing sex selective abortion and addressing the stigma around rape. Online circulation of rape videos and advertisements for prenatal sex determination tests are patently illegal in the Indian context and warrant an immediate takedown, whereas other kinds of illegal content can afford to stay online, pending a court order. An intermediary liability regime must adapt to ideas of rights that are organic to its location.

This does not mean doing away with safeguards to free speech. On the contrary, it means legislatively and juridically balancing the rights of the complainant, internet users at large, and the intermediary, based on the situation at hand. Due process procedures, this paper suggests, can be the bulwark of free speech online and need to be explicitly read into the intermediary liability framework. This paper lays out its arguments through three parts: part one chronicles India’s intermediary liability journey, picking up key milestones to understand and analyse; part two takes a look at how other jurisdictions have handled the takedown of “ostensibly illegal content” like rape videos and non-consensual circulation of intimate images; and part three examines how due process should be read into the law.

Remit of Intermediary Liability

The intermediary liability regime in India has followed a pattern of trigger incident, judicial intervention and legislative response. As expected, the legislative turnaround has had the greatest lag. What is interesting, however, is that the judiciary seems to have come full circle. The first milestone for intermediary liability in India was the arrest of Avnish Bajaj, the managing director of eBay’s Indian subsidiary Baazee.com, for the sale on the website of a video clip of two teenagers engaged in sexual activity. In 2008, the Delhi High Court held that even though the buyer and seller agreement was bipartite, because Baazee.com charged a commission on every sale, it should have had in place adequate pre-filtering measures that would have blocked the sale of the video clip. For the failure to exercise due diligence, the court imposed strict liability on the company for violating Section 292 of the Indian Penal Code and Section 67 of the IT Act (publishing obscene content) (Avnish Bajaj v State [NCT] of Delhi 2005).

The decision laid bare the flimsy protection intermediaries received under the IT Act. The act protected only those intermediaries that were network service providers2 and limited safe harbour to the IT Act and the rules and regulations made thereunder, leaving intermediaries vulnerable to civil and criminal liability under other laws (Gupta 2007). The court’s decision was denounced as unreasonably clamping down on intermediaries and as out of step with the general legislative drift towards stronger safe-harbour provisions. Following the judgment, the IT Act was amended by Parliament in 2008, ushering in a more expansive safe-harbour regime and a new phase of intermediary liability in the country (Software Freedom Law Centre 2012). Strictly speaking, India has a vertical intermediary liability system,3 where liability depends on the kind of infringing content. The Copyright Act, 1957 and the Trade Marks Act, 1999 have provisions that determine intermediary liability where the infringement relates to either of these two areas (Advani 2013). In all other cases, it is the singular regime established by the IT Act and the accompanying Information Technology (Intermediaries Guidelines) Rules, 2011 (Intermediary Guidelines) and the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009 that applies.

Section 79 of the IT Act anchors safe-harbour protection for intermediaries against third-party or user-generated content. Immunity is conditioned on the premise that the intermediary only provides access to the communication system over which the content is available, or that the intermediary has not initiated the transmission, selected the receiver, or modified the content in any way. The intermediary is also required to follow the due diligence requirements prescribed by the IT Act and the Intermediary Guidelines. Despite the above, an intermediary can incur liability if, after receiving “actual knowledge” of infringing content or being notified of such content by the government or its agencies, it fails to disable access to such content within 36 hours. This sort of mechanism is referred to as “private notice and takedown.”

A broad list of content that an intermediary is liable to take down on notice can be found in the Intermediary Guidelines. The list includes, “information that is grossly harmful, harassing, blasphemous, defamatory, obscene, pornographic, paedophilic, libellous, invasive of another’s privacy, hateful, or racially, ethnically objectionable, disparaging, relating or encouraging money laundering or gambling, or otherwise unlawful in any manner whatever.”4 Many of these content types lack exact legal definition and require that the intermediary use its discretion while taking down content. Studies have shown that when it is left to intermediaries to decide, they disproportionately favour taking down disputed content (Urban and Quilter 2005).

The 2015 Supreme Court judgment in Shreya Singhal, which struck down Section 66A of the IT Act5 as unconstitutional, is the second milestone in regulating intermediary liability in India. The arrest of two women under Section 66A of the IT Act—for a Facebook post critical of the fact that Mumbai city came to a standstill during the funeral of a leading local politician (BBC 2015)—threw into relief the sweeping powers that the state wielded to censor and punish. This watershed decision for free speech modified the notice-and-takedown system by reading down Section 79(3)(b) of the IT Act and limiting “actual knowledge” to a court-ordered takedown of content. Such a court order must fall within the contours of the reasonable restrictions laid down by Article 19(2) of the Constitution. The rationale supplied was that “it would be very difficult for intermediaries like Google, Facebook etc. to act when millions of requests are made and the intermediary is then to judge as to which of such requests are legitimate and which not” (Shreya Singhal v Union of India 2015). When the illegality of a content type is not patently discernible and there is plenty of room for interpretation, a more nuanced deliberation, free from the fear of liability, can ensure that access is disabled only to content that is actually illegal. In Chile, for instance, to remove information from a website, a petition needs to be filed before the civil court for an injunction. A court may issue a preliminary injunction without hearing the person who uploaded the content, but only if the petitioner is able to post a bond. The court also hears counter notices challenging the takedown, through a “brief and summary procedure” (Center for Democracy and Technology 2012).

A Need for Proactive Measures

Two cases mark the third milestone (and the final one for the purposes of this article), in regard to intermediary liability in the country. The first case is Sabu Mathew George v Union of India (2008), which impleaded Yahoo, Google and Microsoft for violating the prohibition on advertisements relating to pre-conception and prenatal determination of sex.6 The prohibition is contained in Section 22 of the Pre-conception and Pre-natal Diagnostic Techniques (Prohibition of Sex Selection) Act, 1994 (PcpNDT Act). The second case, In Re: Prajwala (2015), is a suo-motu public interest litigation taken up by the apex court in 2015, in response to a letter from the women’s rights activist Sunitha Krishnan, on the rampant circulation of rape videos on social networks and social media platforms. Facebook, Google, Yahoo, Microsoft, and WhatsApp are the respondents in this case (Indian Express 2017).

Sabu Mathew George v Union of India: The petitioner filed the present case on finding advertisements related to foetal sex determination in the results of queries made through search engines run by the respondent companies. Insofar as the petition relates to sponsored advertisements, there is little dispute as to the respondents’ primary liability under Section 22 of the PcpNDT Act. Here, respondents like Google exert considerable editorial control and, hence, are not passive intermediaries (Reddy 2016). The respondent companies argued that the term “advertisement” in the act referred solely to commercial advertisements and not to general “organic search results,” over which they lacked control. The petitioner responded that this would contradict the spirit of the law, the preamble of which supported a broad interpretation of advertisements, in the light of India’s skewed sex ratio and increasing sex selective abortions (Sabu Mathew George v Union of India 2008).7

The Supreme Court recommended that the respondents auto-block content by prescribing a set of words and phrases related to prenatal sex determination. Google later submitted that it could comply with the directions on banning sponsored advertisements and on blocking content notified as illegal by a government agency from appearing in its search results. Auto-blocking was, however, not possible, as Google was not in a position to develop in-house mechanisms to proactively filter out violative content. The counsel for Google contended that, “You cannot have a preventive blockage. You can have curative blockage” (Deccan Chronicle 2017). It was also argued that such a measure would violate the Shreya Singhal judgment, which required a judicial or executive order before access to content can be disabled by an intermediary (Centre for Communication Governance 2017).
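To make the over-blocking concern concrete, a minimal sketch of keyword-based query blocking is given below. It is purely illustrative: the blocked-phrase list, the sample queries and the matching logic are assumptions made for the purpose of the example, not a description of any mechanism the respondents actually deployed.

```python
# Illustrative sketch only: naive keyword-based auto-blocking of search queries.
# The blocked-phrase list and the sample queries are hypothetical.

BLOCKED_PHRASES = [
    "prenatal sex determination",
    "gender test before birth",
    "know gender of baby",
]

def should_block(query: str) -> bool:
    """Return True if the query contains any prescribed phrase."""
    q = query.lower()
    return any(phrase in q for phrase in BLOCKED_PHRASES)

queries = [
    "prenatal sex determination kit price",               # intended target of the block
    "is prenatal sex determination legal in india",       # lawful informational query
    "news report on prenatal sex determination racket",   # journalism, also caught
]

for q in queries:
    print(f"{'BLOCK' if should_block(q) else 'allow'}: {q}")
```

All three hypothetical queries are blocked, including the two that seek lawful information, which is precisely the over-breadth that the objections to “preventive blockage” point to.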

The Court pushed back, arguing that advertisements for sex selection needed to be blocked even if they were not proactively brought to the notice of the respondents. It also ordered the government to establish a nodal agency, to which complaints can be made about anything in the nature of an advertisement for sex-selective procedures, kits, detection, etc. On receipt of a complaint, the nodal agency would inform the hosting service, which must disable access to the content within 36 hours of the complaint being lodged. The respondent companies were directed to set up an in-house expert body that “shall on its own understanding delete anything that violates the letter and spirit of language of Section 22 of the 1994 Act” (Sabu Mathew George v Union of India 2008). In case of any confusion, the in-house body may consult the nodal agency.8 At the same time, the Court conceded that, to preserve the freedom to access information, only content that defeats the purpose of Section 22 of the PcpNDT Act should be disabled. In its final decision dated December 2017, the Court directed the litigants to collaboratively decide on a solution to the matter (Sabu Mathew George v Union of India 2008). The result of these collaborative discussions is not known.

In Re: Prajwala: In response to Sunitha Krishnan’s letter to the Supreme Court on the circulation of rape videos online, the bench asked Google, one of the respondents, “Take for instance, nobody has reported (rape, gang rape or child abuse material), do you act on your own to decipher it? We are asking, is it possible for you or not?” (Indian Express 2017). The respondent company answered in the negative, stating that it only catalogued content, and that it would not be possible for it to discover such content if it was not reported. To assuage the Court, the government set up a specialised nodal agency that could flag rape, gang rape or child abuse content for takedown. The Court was insistent, however, that what it wanted was “prevention not cure” (Indian Express 2017). Over the course of the proceedings, the Supreme Court set up a committee consisting of representatives of the petitioner, the Ministry of Electronics and Information Technology, the Ministry of Home Affairs and the respondents, including Facebook, Google, Yahoo, Microsoft and WhatsApp, to come up with suitable solutions—technological and administrative—to the problem. The committee also received expert interventions and submissions. It came out with a list of recommendations and outlined, against each, the responsibilities of the respective stakeholders.

These are9:
(i) Carrying out keyword searches, including in vernacular languages, to pick up on child pornography, rape and gang rape content, and posting warnings and public interest messages. Contributors to the list of keywords will be the government, civil society and the respondent companies.
(ii) Creating a central reporting mechanism, an online portal and a hotline to report child pornography and rape and gang rape content.
(iii) Empowering specific entities that can respond to complaints of child pornography and rape and gang rape content within specified timelines, and initiate takedown of content, registration of the first information report, and prosecution of the crime.
(iv) Creating a list of non-governmental organisations tasked with searching for and informing the government body of child pornography and rape and gang rape content.
(v) Creating a common bank of child pornography and rape and gang rape content that the government maintains, and using the technology used to identify child pornography to also identify rape and gang rape videos (a brief illustrative sketch of such hash matching follows this list).
(vi) Proactively identifying “rogue sites” and blocking access to these sites.
(vii) Investing in research and development of artificial intelligence, machine learning and deep learning techniques to identify child pornography and rape and gang rape content when it is being uploaded, for real-time filtering (In Re: Prajwala 2015).10
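Recommendation (v) turns on matching uploads against a bank of content already identified as illegal. The sketch below is a simplified, hypothetical illustration of that idea using exact cryptographic hashes; deployed systems rely on perceptual hashing (PhotoDNA is the commonly cited example) that tolerates re-encoding and cropping, and none of their details are reproduced here.

```python
# Illustrative sketch only: checking uploads against a hash bank of known illegal content.
# SHA-256 is used purely for demonstration; it would miss even trivially altered copies,
# which is why real systems use perceptual hashes instead.

import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical central bank of hashes, maintained by a government agency in the
# committee's scheme.
HASH_BANK = {
    sha256_of(b"previously-identified-illegal-video-bytes"),
}

def check_upload(file_bytes: bytes) -> str:
    """Return a decision for an incoming upload based on the hash bank."""
    if sha256_of(file_bytes) in HASH_BANK:
        return "block and report"  # matches previously identified content
    return "allow, subject to notice-and-takedown"

print(check_upload(b"previously-identified-illegal-video-bytes"))  # block and report
print(check_upload(b"fresh-unseen-video-bytes"))                   # allow, subject to notice-and-takedown
```

The design point this illustrates is that hash matching can only catch the recirculation of content that has already been identified; newly created material still depends on reporting, which is what recommendations (i) to (iv) address.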

Unfortunately, the status report on the measures listed above reveals tardy implementation.11 In May 2018, citing the lack of progress on the part of the respondent companies in following through on the recommendations made by the committee, the Supreme Court fined each of them ₹ 1 lakh (Hindustan Times 2018). The latest record of the proceedings from the Supreme Court states that the central government has drawn up a standard operating procedure (SOP) for action to be taken by the security/law enforcement agencies under Section 79(3)(b) of the IT Act. Further, an SOP for the portal that handles complaints of child pornography and rape and gang rape content has been drawn up and was to be finalised by mid-November (In Re: Prajwala 2015).

The nature of the disputed content, in the two cases detailed above, has led the Supreme Court to find favour with expeditious takedowns. The Court establishes and recommends a variety of ways to actualise this, including setting up a nodal agency that could prompt takedowns, proactive blocking, and technological solutions. Free speech activists have expressed their anxiety over the Court’s zeal to curb access to unlawful content online. They have two main concerns. First, they fear that the Court may be going down the path of “progressively increasing censorship,” which will end up rendering “entire swathes of the Internet off-limits for everyone” (Bhatia 2017). The concern here is that the Court is reversing the gains of the Shreya Singhal judgment. Second, they are critical of the Court’s disregard of due process safeguards. It has been pointed out that the way in which the Court has defined the powers and functions of the nodal agency contravenes the Information Technology (Procedure and Safeguards for Blocking for Access of Information by Public) Rules, 2009. These rules ensure that an order for the blocking of content is reviewed by a committee, which will confirm if the order falls within the scope of the IT Act (Section 69A) (Internet Freedom Foundation 2017). The succeeding sections will weigh the merit of these two concerns, looking in particular at the need for a responsive redress mechanism in the case of online violence against women, and due process safeguards in the IT Act and rules.

Violence against Women Online

Vitriolic sexism online has become commonplace in the Indian internet landscape. The United Nations Broadband Commission reports that one in three women has faced some kind of cyber violence (Working Group on Broadband and Gender 2015). Trolling, rape and death threats, morphing of images and the non-consensual circulation of intimate images are some of the many ways in which violence against women is carried out online. There has been a discernible pattern of sexual violence, where the assault is videotaped and consequently used to blackmail the victim by threatening to upload it or send it to her family, friends or employer (Gurumurthy and Vasudevan 2018). In June 2018, the Human Rights Council adopted the resolution on “accelerating efforts to eliminate violence against women and girls: preventing and responding to violence against women and girls in digital contexts.” The resolution in particular condemns “the dissemination of content that promotes and reinforces violence against women and girls, which can result in the perpetual revictimization and retraumatization of women and girls, given that a permanent digital record is created by content shared in digital contexts” (Human Rights Council 2018). It also calls for active cooperation between law enforcement, judiciary and private actors in detecting and reporting these crimes, directing states to encourage digital corporations “to strengthen or adopt positive measures, including internal policies, to promote gender equality in the design, implementation and use of digital technologies” (Human Rights Council 2018).

The United Nations Special Rapporteur on Violence against Women notes that the qualities of online global searchability, persistence and scalability have not only resulted in the replication of offline violence, but also amplified, redefined and created new forms of gender-based discrimination online. Reaffirming that human rights must be protected online,12 the Special Rapporteur underlines the central role intermediaries occupy in facilitating our experience of the internet, and the human rights responsibilities that intermediaries must bear (Šimonović 2018). Waking up to the meteoric rise of online violence directed at women, the Ministry of Women and Child Development is developing a central reporting mechanism dedicated to complaints of cyber violence against women and children, which aims to resolve complaints within 24 hours (Indian Express 2018).

Despite the general celebration that followed the Shreya Singhal judgment, there were also those who felt that the delays of an adversarial system and court orders could lead to flagged illegal content staying online for extended periods before being pulled down. The Andhra Pradesh High Court, in Google India Private Limited v M/S Visaka Industries Limited (2009), observed that if sexually explicit or morphed images can only be taken down on court orders, they could end up remaining online for a long time. Receiving a takedown order years after the content is posted is a futile exercise. Unfortunately, at present, neither the IT Act nor its rules have provisions for immediate relief. In her report on online violence against women, presented before the Human Rights Council, the Special Rapporteur cites the Sabu Mathew George case to instantiate the obligation upon states to protect women from violence, emphasising swift judicial action and interventions by intermediaries in cases of online violence (Šimonović 2018).

A workaround to the problem of judicial latency that some countries have adopted is to carve out an exception to the general norm of a judicial order prior to takedown, in the case of content that promotes violence against women. The Canadian law that criminalises the publication of intimate images without consent—the Protecting Canadians from Online Crime Act, 2014—also punishes those who knowingly distribute, transmit or make available such content. So, if an intermediary is made aware of such content through a private notification, it must be immediately pulled down (Martin-Bariteau 2015). Turkey requires content that violates an individual’s privacy to be blocked within four hours of notification. Within 24 hours of this, the complainant must file a case before the Criminal Judgeships of Peace, which have to deliver a judgment within 48 hours. If the latter deadline is not adhered to, the content has to be reinstated (Swiss Institute of Comparative Law 2015). In R.M.B c/Google y ot. s/ Ds y Ps, the Supreme Court of Argentina held that while judicial review of a notice is required in most cases before access to content online can be disabled, the exception is “ostensibly infringing content,” for which notice alone would be enough. Ostensibly infringing content, according to the court, includes obviously morphed pictures, images for which there is a reasonable expectation of confidentiality, content which constitutes a serious invasion of privacy, content which incites discrimination, etc (Vargas 2014).

In Brazil, after two teenagers died by suicide when their intimate images were non-consensually circulated online by their boyfriends, an exception was introduced to the intermediary liability law, exempting notifications of non-consensually circulated intimate content from the rigmarole of judicial approval prior to takedown. In the case of other kinds of illegal content, such as defamatory material or copyright infringement, a court order is necessary. Findings from InternetLab’s research on this exception showed that, far from the provision being hijacked by the moral police, platforms have invested in responding swiftly when they receive notice of non-consensual intimate images (Valente 2018). In some cases, judicial latency and the cost of litigation have also prompted proposals for a “notice to notice” procedure to replace notice-and-takedown. In such a system, upon receiving the complainant’s notice, an intermediary must forward it to the alleged wrongdoer within a prescribed time period. If the alleged wrongdoer volunteers to take the content down, then this is communicated to the complainant. If a counter-notice is filed instead, then the intermediary must forward it to the complainant within a specified period. The complainant can then decide whether she would like to take the matter before a court of law or the appropriate adjudicatory body. The intermediary is merely a pencil pusher, whose liability is limited to forwarding notices (Article 19 2013). While this kind of system is most suited to civil violations like copyright infringement and defamation, some states have used it to address criminal liability as well.

New Zealand’s Harmful Digital Communications Act, 2015 aims to set up a quick and efficient means of redressing serious emotional distress caused by online communications. The act also lays down 10 guiding principles for digital communication, stating, for instance, that such communication should not contain a matter published in breach of confidence, or incite or encourage anyone to send a message to an individual for the purpose of causing harm to that individual. Adjudicatory bodies roped in by the law are required to be cognisant of these principles while deciding cases.13 The act saves online content hosts from liability if, upon receiving notice of harmful content, they forward it to the author within 48 hours. If a counter-notice is filed by the person who uploaded the content, then the content host must send it across to the complainant within 48 hours of receipt. Access must be disabled if the author, in the counter-notice, consents to the removal of the content.

However, if the counter-notice disputes the complainant’s charge, then the content must be retained online. Where the uploader of the content cannot be contacted, the content must be disabled within 48 hours of notice. An “approved agency” is established under the act, whose functions include receiving, assessing and investigating complaints of harm caused by digital communications. The agency tries to resolve these complaints through negotiation and mediation. The complainant also has the option of taking the complaint before the district court, after it has been assessed by the approved agency.14 The district court ensures that all actors are held accountable. The specialised quasi-judicial administrative body is thus established to avoid adding to the judicial burden. It also provides complainants an alternate redress mechanism that avoids the long turnaround of a court-ordered takedown. Article 19 (2013) opines that notice-to-notice procedures (explained above) may not be suitable for certain kinds of criminal content like child pornography, incitement to violence, etc, where immediate takedown on notice is necessary.
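The sequencing and deadlines described above can be summarised in a small sketch. The function below is a simplified model of a New Zealand-style content-host safe harbour: the 48-hour windows follow the description above, while the data structures, names and outcomes are illustrative assumptions rather than a restatement of the act.

```python
# Simplified model of a notice-to-notice safe harbour, loosely following the
# New Zealand description above. Deadlines and outcomes are illustrative assumptions.

from dataclasses import dataclass
from typing import Optional

FORWARD_DEADLINE_HOURS = 48   # host must forward the complainant's notice to the author
RESPONSE_DEADLINE_HOURS = 48  # author's window to file a counter-notice

@dataclass
class Notice:
    complainant: str
    content_id: str

@dataclass
class CounterNotice:
    consents_to_removal: bool

def host_obligation(notice: Notice,
                    author_reachable: bool,
                    counter: Optional[CounterNotice]) -> str:
    """What the content host must do to retain safe-harbour protection."""
    if not author_reachable:
        # Author cannot be contacted: disable access within the deadline.
        return f"disable access to {notice.content_id} within {FORWARD_DEADLINE_HOURS}h"
    if counter is None or counter.consents_to_removal:
        # No counter-notice within the window, or the author consents to removal.
        return f"disable access to {notice.content_id}"
    # Counter-notice disputes the complaint: retain content, pass the counter-notice on.
    return f"retain {notice.content_id}; forward counter-notice to {notice.complainant}"

n = Notice(complainant="complainant@example.org", content_id="post-123")
print(host_obligation(n, author_reachable=True, counter=CounterNotice(False)))
```

The point of the model is that the host never adjudicates the dispute; it only relays notices and applies the deadlines, while the approved agency and the district court deal with the merits.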

Differentiated Intermediary Liability Regime

Christina Angelopoulos and Stijn Smet (2016) critique the “one size fits all” approach of a horizontal intermediary liability regime, which treats all legal wrongs online, be they civil or criminal, in the same manner. This is currently the regime the EU follows under the E-Commerce Directive. Instead, a vertically calibrated regime is a much more equitable solution, one that balances the free speech concerns of internet users, the intermediary’s right to do business and the rights of the complainant (Angelopoulos and Smet 2016). This jurisprudence of “fair balance” has been underscored in the decisions of the Court of Justice of the European Union and the European Court of Human Rights, which have relied on the Charter of Fundamental Rights of the European Union and the European Convention on Human Rights in cases of intermediary liability. The outcome of such a philosophy is a “notice and action” system where the “action” depends on the harm caused by the illegal activity and, consequently, on how easily harmful content can be identified as such.

So, in the case of child pornography, where illegality is apparent in most instances, automatic takedown is justified. Intermediaries in some countries are expected to have some kind of pre-filtering and automatic removal of such content (Swiss Institute of Comparative Law 2015; Sutter 2011). In the case of copyright, the authors suggest a notice-to-notice system, where the intermediary does not have adjudicatory functions and merely facilitates the dialogue between disputing parties. For defamation, a notice and delayed takedown is suggested, which allows time for a counter-notice, after which the intermediary takes a decision on whether or not to take the content down. In the case of hate speech, where the wrong is serious but not always easy to ascertain as illegal, a traditional notice-and-takedown is suggested (Angelopoulos and Smet 2016).

How a country decides to define its intermediary liability for a particular crime is also influenced by historical forces. The EU is more amenable to placing restrictions on free speech and treating hate speech as “a manifestly illegal act,” because of its particular historical context. Therefore, in Delfi A S v Estonia (2015), the Grand Chamber of the European Court of Human Rights chose to place liability on the intermediary to proactively take down hate speech or speech that incites violence, expressly stating that this would not amount to private censorship (LSE Media Policy Project 2015). Murmurs of revising safe-harbour measures are also heard in the United States (US), which arguably has one of the laxest liability regimes for intermediaries. In the context of online sex trafficking, Daphne Keller (2017) suggests that if intermediaries know that certain content is illegal, that is, it is “obviously illegal to a reasonable person,” they must be liable to take it down. Such a takedown is warranted when the content poses a serious threat and when it is easy to identify (Keller 2017). In the case of India, we argue that the Sabu Mathew George and Prajwala cases are valid exceptions to a “judicial notice and take-down” system.15 The content types sought to be regulated are definite, well-defined and limited.

According to the Census of India 2011, the child sex ratio in the country was 918 girls per 1,000 boys, the lowest level recorded since independence. In some districts of the country, the sex ratio is as low as 774 (Dhar 2011). The PcpNDT Act seeks to address this deep patriarchal malaise by regulating prenatal diagnostic techniques in order to prevent sex selective abortions. In the Sabu Mathew George case, the Supreme Court found that regulating content on the internet is an important part of this policy vision. In the Prajwala case, the Supreme Court was able to pick up on a pattern of rape in which the victim is filmed during the crime, in order to blackmail her and continue exploiting her sexually. The Council of Europe has considered the non-consensual circulation of intimate images a form of gender-based hate speech. Reasoning that the freedom of expression must be read in conjunction with other rights, such as the right to equality, the council opines that,

Like freedom of expression, equality between women and men is an integral part of fundamental rights and of any true democracy. In this context, gender equality and freedom of expression should be seen as intertwined rather than opposing rights. This is why freedom of expression cannot be accepted as a way to silence women and girls. (Council of Europe Gender Equality Strategy 2016)

The content types sought to be regulated can be definite, well-defined and limited. Moreover, they pose an immense threat to the right to equality and dignity of women, which constitutional morality demands should be safeguarded (Joseph Shine v Union of India 2017). However, how the exception to the Shreya Singhal norm should be operationalised needs some rethinking. If pre-filtering leads to indiscriminate removal of content, perhaps a private notice-and-takedown would work better. In the case of rape and gang rape content, a pre-filtering system may be warranted if the incidence of false positives is low. In both cases, however, any takedown should be open to challenge. India already partially follows a vertical regime; thus, establishing differentiated regimes based on content type is already a part of the legislative precedent. Exceptions that are constitutionally sound and balance various stakeholder interests will only enhance the effectiveness of the regime.

One must not fail to notice that the respondents in these cases are transnational corporations that are well placed to bear the regulatory burden.16 Some states have become aware of the power that big digital platforms wield, and have sought to regulate them without burdening smaller digital platforms. Understandably, a start-up cannot be expected to have the same resources for pre-filtering as a multinational corporation.17 When Germany passed a law on illegal and harmful content on social media sites, it limited its application to only those platforms that have more than 20 lakh German users and that distribute and exchange non-specific content, like Facebook or Twitter (Berger 2016). This kind of calibration of intermediary liability is something that India may find useful to explore.

A Balancing Act

The concerns of over-censorship, considering India’s past skittishness over the right to free speech, are understandable. For instance, in India, intermediaries are allowed to terminate access for those users who violate the Intermediary Guidelines. This punishment is reminiscent of the much criticised three-strikes rule that France once followed, a graduated response to copyright infringement online that can ultimately lead to the cutting off of internet services to the accused (La Rue 2011; BBC 2013). Further, in India, neither are intermediaries obligated by law to notify the authors of disputed content when access to the content is sought to be disabled, nor is there any provision for the author to challenge allegations of illegality through a counter-notice (Arun and Singh 2010). In other words, due process procedures are found to be lacking and need to be infused into the framework of intermediary liability in the country.

The Court, in both the Sabu Mathew George and Prajwala cases, failed to adequately address how exceptions to the precedent can be balanced with procedural safeguards. It is important that a counter-notice mechanism is introduced for any kind of content takedown. For instance, where content is pre-filtered, as is now the case for child pornography and rape and gang rape videos, the author of the content should be notified and allowed to file a counter-notice within a reasonable time after access to the content is disabled. The nodal agencies must be empowered to decide at first instance, on the basis of the notice and the counter-notice, whether the content should continue to be kept offline or be restored, and must deliver this decision within a tight time frame. However, the option to file a case before a court should be retained. This is inspired by the New Zealand framework discussed above, which aims to deliver speedy justice to the victim. The intermediary’s liability here should be to carry out the pre-filtering as prescribed by the court and the government, to facilitate the notice and counter-notice process, and to take down or reinstate content within the time specified. Rule 3(11) of the Intermediary Guidelines already requires intermediaries to appoint a Grievance Officer, to whom complaints are to be notified. This officer can be in charge of facilitating the notice and counter-notice process (Joshi 2018).

The Digital Millennium Copyright Act, 1998 of the US has a counter-notice mechanism, under which, when a takedown is challenged, the intermediary “must” restore the contested content within 10–14 business days, unless the copyright complainant files an infringement suit (Urban et al 2016). Similar clauses with reasonable timelines can be inserted into the IT Act, taking into consideration the content sought to be taken down. Internet intermediaries like content hosts must also ensure transparency while moderating content on their services. The Electronic Frontier Foundation and partner organisations developed three important principles for the above, known as the Santa Clara Principles on Transparency and Content Moderation. They recommend, at the minimum: (i) companies should report the number of posts they have taken down and the accounts that have been temporarily or permanently disabled; (ii) companies must give notice to the user whose content has been taken down or whose account has been disabled; and (iii) companies must provide those affected by content takedown or account suspension a chance to appeal (Santa Clara Principles nd). These are important principles that India needs to work towards.

Conclusions

In its report on “Tackling Illegal Content Online—Towards an Enhanced Responsibility of Online Platforms,” the European Commission has recommended that the intermediary’s role should not only be reactive. The Commission encourages proactive measures to detect and remove illegal content. Given the enormous volume of content online, it advocates the use of technical means for the automatic detection and removal of content, as long as this remains within the framework of the law. However, it does not do away with human discretion, insisting that a human in the loop is important for any automated procedure that determines the legality of content (European Commission 2017). As regulation becomes increasingly automated, we must remember that we cannot do without human discretion.

“Many to many” communications, one of the defining features of a decentralised and democratic information society, are being corroded by the centripetal forces of powerful digital organisations like Google, Facebook, Amazon and Microsoft. These organisations are able to shape the user’s experience of the internet and sometimes behave like “communications bottlenecks.” Google, for instance, has been accused of manipulating search results to favour its own products. These powerful digital corporations use safe-harbour provisions to claim relief from regulatory burdens that ideally only smaller players should be able to claim. Google has, in the past, claimed the right to speech when it manipulated its search results to favour itself, but in other instances, when it works to its advantage, the company has claimed to be a passive conduit (Jia 2016). It is important for states to catch on to this doublespeak, and impose regulations within reasonable measure, whenever necessary. The intermediaries of today are not the intermediaries of the 1990s, when they might have been “mere conduits.” Today’s intermediaries make conscious decisions about their design to yield certain kinds of content; they surveil users and micro-target advertisements at them or sell user data to others; they even use the knowledge they gain about users to influence their behaviour. Would not such an intermediary then be a publisher (Sylvain 2018)?

However, enforcing a domestic policy against actors headquartered abroad has always been tricky. The most famous case of this, involving digital corporations and illegal content on the internet, has been the failed attempt by French courts to prevent access to the online auction site run by Yahoo that sold Nazi memorabilia. Due to its historical significance, the sale of these items is illegal in France, but not in California, where the site was hosted. The US District Court for the Northern District of California consequently refused to enforce the French order against Yahoo, stating that this would be a violation of the First Amendment of the US Constitution (Samson nd). More recently, the Canadian Supreme Court upheld a decision directing Google to de-index a website that sold trademark-infringing products not just in Canada, but globally.

It reasoned that de-indexing limited to Canada would not be useful, as people in Canada could still access the content from sites not hosted in the country. To Google’s concerns about possible free speech violations, the court put the burden on the company to identify the jurisdictions in which that would be the case and, depending on the result, seek a modification of the order (Joshi 2017). Google then challenged the global de-indexing order before the US District Court for the Northern District of California, which granted it a permanent injunction by relying on the safe-harbour provisions of Section 230 of the Communications Decency Act, 1996. The court’s sole reliance on the Communications Decency Act and its failure to evaluate the substantive implications for free speech have been criticised (Sookman 2018).

The legal treatment of intermediaries is entangled in the conflicting positions of the different actors in the fray, with the vital and pragmatic question of how best to govern what intermediaries carry caught up in rhetorical rather than real issues about the scope and reach of the law. There is no cookie-cutter approach to intermediary liability. Even though standing up to the might of digital corporations and obligating them to follow national policy will be a tough task, India must work towards solutions based on its own legal sensibilities, supported by cultural context and historical precedent.

Notes

1 The Manila Principles on Intermediary Liability were developed by civil society organisations from across the world, including the Electronic Frontier Foundation, the Centre for Internet and Society and Derechos Digitales, as a guiding framework for countries developing laws and policies on intermediary liability. The very first principle holds that “intermediaries should be shielded from liability for third-party content” (Manila Principles 2018).

2 A network service provider is defined, “with respect to any particular electronic message,” as “any person who on behalf of another person receives, stores or transmits that message or provides any service with respect to that message.”

3 There are broadly two kinds of intermediary liability systems. In a vertically differentiated system, the intermediary’s liability depends on the type of illegal content (which can be hate speech, copyright infringement, defamation, etc). In a horizontal system, the law is agnostic to the type of illegal content, and the intermediary liability regime is homogeneous.

4 Rule 3(2) of the Information Technology (Intermediaries Guidelines) Rules, 2011.

5 The section punished the sending of offensive messages through a communication service.

6 Order dated 4 December 2014 in the Sabu Mathew George case.

7 Order dated 16 November 2016 in the Sabu Mathew George case.

8 Order dated 16 February 2016 in the Sabu Mathew George case.

9 Order dated 23 October 2017 in the In Re: Prajwala case.

10 Order dated 23 October 2017 in the In Re: Prajwala case.

11 Order dated 11 December 2017 in the In Re: Prajwala case.

12 The United Nations Human Rights Council acknowledged “the need for human rights to underpin Internet governance and that rights that people have offline must also be protected online” (Human Rights Council 2016).

13 Harmful Digital Communications Act, 2015 of New Zealand.

14 Harmful Digital Communications Act, 2015 of New Zealand.

15 The other notable exception to judicial notice-and-takedown in India is copyright infringement. In MySpace Inc v Super Cassettes Industries Ltd (2016), a division bench of the Delhi High Court held that in the case of copyright infringement, a court order is not needed to constitute actual knowledge and that a notice delivered in the method prescribed by the intermediary is sufficient (Bhatia 2017a).

16 Facebook has a user base of about 2 billion, and its CEO Mark Zuckerberg is said to have admitted that, “In a lot of ways Facebook is more like a government than a traditional company” (Farrell et al 2018).

17 The Allowing States and Victims to Fight Online Sex Trafficking Act (intended to combat online sex trafficking and signed into law by US President Trump) requires platforms to screen content posted by users, but does not specify what kind of platforms are responsible for this. Evan Engstrom and Daphne Keller (2018) criticise the US Congress for designing the law “with only these massive platforms in mind” and failing to recognise the regulatory burden the law will place on smaller platforms.

References

Advani, Pritika Rai (2013): “Intermediary Liability in India,” Economic & Political Weekly, Vol 48, No 50, pp 120–28.

Angelopoulos, Christina and Stijn Smet (2016): “Notice-and-fair-balance: How to Reach a Compromise between Fundamental Rights in European Intermediary Liability,” Journal of Media Law, Vol 8, No 2, pp 266–301.

Article 19 (2013): “Internet Intermediaries: Dilemma of Liability,” https://www.article19.org/wp-content/uploads/2018/02/Intermediaries_ENGL....

Arun, Chinmayi and Sarvjeet Singh (2010): “NoC Online Intermediaries Case Studies Series: Online Intermediaries in India,” Global Network of Interdisciplinary Internet & Society Research Centers, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2566952.

Association for Progressive Communications (2014): “Frequently Asked Questions on Internet Intermediary Liability,” https://www.apc.org/en/pubs/apc%E2%80%99s-frequently-asked-questions-internet-intermed.

BBC (2013): “France Ends Three-strikes Internet Piracy Ban Policy,” British Broadcasting Corporation, 10 July, https://www.bbc.com/news/technology-23252515.

— (2015): “Section 66A: India Court Strikes Down ‘Facebook’ Arrest Law,” British Broadcasting Corporation, 24 March, https://www.bbc.com/news/world-asia-india-32029369.

Berger, Cathleen (2016): “Content and Platform Regulation: The German Case and What’s to Come in 2018,” Medium, https://medium.com/@_cberger_/will-germanys-approach-to-content-and-platform-regulation-prevail-in-2018-d7e6e2db5cb.

Bhatia, Gautam (2017): “Upsetting a Very Fine Balance,” Hindu, 20 February, http://www.thehindu.com/todays-paper/tp-opinion/upsetting-a-very-fine-balance/article17331540.ece.

— (2017a): “Online Speech and Intermediary Liability: The Delhi High Court’s MySpace Judgment,” Indian Constitutional Law and Philosophy, https://indconlawphil.wordpress.com/2017/01/16/online-speech-and-intermediary-liability-the-delhi-high-courts-myspace-judgment/.

Centre for Communication Governance (2017): “The Supreme Court Hears Sabu Mathew George v Union of India—Another Blow for Intermediary Liability,” National Law University Delhi, https://ccgnludelhi.wordpress.com/2017/02/16/the-supreme-court-hears-sab....

Center for Democracy and Technology (2010): “Intermediary Liability: Protecting Internet Platforms for Expression and Innovation,” CDT.org, https://cdt.org/files/pdfs/CDT-Intermediary%20Liability_(2010).pdf.

— (2012): “Chile’s Notice-and-takedown System for Copyright Protection: An Alternate Approach,” CDT.org, https://cdt.org/files/pdfs/Chile-notice-takedown.pdf.

Council of Europe Gender Equality Strategy (2016): “Combating Sexist Hate Speech,” Council of Europe, https://edoc.coe.int/en/gender-equality/6995-combating-sexist-hate-speec....

Dara, Rishabh (2011): “Intermediary Liability in India: Chilling Effects on Free Expression on the Internet,” Bengaluru: Centre for Internet & Society, https://cis-india.org/internet-governance/intermediary-liability-in-india.pdf.

Dhar, Aarti (2011): “At 914, Child Sex Ratio Is the Lowest since Independence,” Hindu, 1 April, http://www.thehindu.com/news/national/At-914-child-sex-ratio-is-the-lowe....

Deccan Chronicle (2017): “Follow Indian Law, Block Ads on Sex Determination: SC to Google, Microsoft,” 16 February, http://www.deccanchronicle.com/nation/current-affairs/160217/follow-indian-law-block-ads-on-sex-determination-sc-to-google-microsoft.html.

Engstrom, Evan and Daphne Keller (2018): “Only Giant Internet Firms May Be Able to Comply with One-size-fits-all Rules,” San Francisco Chronicle, 11 May, https://www.sfchronicle.com/opinion/openforum/article/Only-giant-interne....

European Commission (2017): “Tackling Illegal Content Online: Towards an Enhanced Responsibility of Online Platforms,” https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52017DC0555.

Farrell, Henry, Margaret Levi and Tim O’Reilly (2018): “Mark Zuckerberg Runs a Nation-state, and He’s the King,” Vox, 10 April, https://www.vox.com/the-big-idea/2018/4/9/17214752/zuckerberg-facebook-p....

Financial Express (2017): “Can Uploading of Obscene Videos Be Prevented, Supreme Court Asks Google,” 22 February, https://www.financialexpress.com/india-news/can-uploading-of-obscene-vid....

Gillespie, Tarleton (2010): “The Politics of ‘Platforms’,” New Media and Society, Vol 12, No 3, pp 347–64.

Gupta, Apar (2007): “Liability of Intermediaries in India: From Troubled Waters to Safe Harbours,” Computer and Telecommunications Law Review, Vol 13, No 2, pp 60.

— (2016): “The Supreme Court’s Slow March Towards Eroding Online Intermediary Liability,” Wire, 14 July, https://thewire.in/government/ignorance-is-not-an-excuse-in-law.

Gurumurthy, Anita and Amrita Vasudevan (2018): “Hidden Figures—A Look at Technology-mediated Violence against Women in India,” GenderIT.org, https://www.genderit.org/node/5104/.

Hindustan Times (2018): “SC Slaps Rs 1 Lakh Fine on Google, Facebook and Others Over Sex Abuse Videos Case,” 21 May, https://www.hindustantimes.com/india-news/sc-slaps-rs-1-lakh-fine-on-google-facebook-and-others-over-sex-abuse-videos-case/story-NraMDVq6O5rROJbmu8VAeJ.html.

Human Rights Council (2016): “Oral Revisions of 30 June: The Promotion, Protection and Enjoyment of Human Rights on the Internet,” New York: United Nations General Assembly, https://www.article19.org/data/files/Internet_Statement_Adopted.pdf.

— (2018): “Report of the Human Rights Council on Its Thirty-eighth Session,” A/HRC/38/2, Geneva: United Nations Human Rights Council.

Indian Express (2017): “Can Uploading of Obscene Videos Be Prevented, SC Asks Google,” 22 February, http://indianexpress.com/article/india/supreme-court-obscene-content-can-uploading-of-obscene-videos-be-prevented-sc-asks-google-4538456/.

— (2018): “Government Portal Dedicated to Cybercrime against Women, Children Soon: Maneka Gandhi,” 7 June, https://indianexpress.com/article/india/government-portal-dedicated-to-c....

Internet Freedom Foundation (2017): “Statement of Concern on the Sabu Mathew George Case: Don’t ‘auto-block’ Online Expression,” https://internetfreedom.in/statement-of-concern-on-the-sabu-mathew-george-case-dont-auto-block-online-expression/.

Jia, Kai (2016): “From Immunity to Regulation: Turning Point of Internet Intermediary Regulatory Agenda,” Journal of Law and Technology at Texas, http://jolttx.com/2016/10/08/immunity-regulation-turning-point-internet-intermediary-regulatory-agenda/.

Joshi, Divij (2017): “Canada Throws a Google-y at Judicial Comity on the Internet, Issues ‘Global’ Injunction,” SpicyIP, https://spicyip.com/2017/07/canada-throws-a-google-y-at-judicial-comity-....

— (2018): “Indian Intermediary Liability Regime Compliance with the Manila Principles on Intermediary Liability,” Bengaluru: Centre for Internet & Society, https://cis-india.org/internet-governance/files/indian-intermediary-liability-regime.

Keller, Daphne (2017): “SESTA and the Teachings of Intermediary Liability,” The Center for Internet and Society, http://cyberlaw.stanford.edu/files/publication/files/SESTA-and-IL-Keller....

La Rue, Frank (2011): “Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression, Frank La Rue,” United Nations Human Rights Council, http://www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf.

LSE Media Policy Project (2015): “The Delfi AS vs Estonia Judgement Explained,” http://blogs.lse.ac.uk/mediapolicyproject/2015/06/16/the-delfi-as-vs-est....

Manila Principles (2018): “Manila Principles on Intermediary Liability,” Manila Principles, https://www.manilaprinciples.org/.

Martin-Bariteau, Florian (2015): “Internet Intermediaries Liability—Perspectives from the United States and Canada for Brazil,” Understanding Brazil’s Internet Bill of Rights, C Souza, M Viola and R Lemos (eds), Rio: itsrio.org, p 56.

Reddy, Prashant (2016): “Do Online Advertising Platforms Qualify for ‘Intermediary Liability’ Protection?,” SpicyIP, https://spicyip.com/2016/08/do-online-advertising-platforms-qualify-for-intermediary-liability-protection.html.

Sayeeswari, R (2016): “Intermediary Liability for Online Copyright Infringement—A Recipe for Killing the Internet—A Comparative Analysis of the Law in US and India,” Social Science Research Network, http://dx.doi.org/10.2139/ssrn.2831181.

Samson, Martin (nd): “Yahoo, Inc v La Ligue Contre Le Racisme et L’Antisemitisme, et al,” Internet Library of Law and Court Decisions, http://www.internetlibrary.com/cases/lib_case17.cfm.

Santa Clara Principles (nd): “The Santa Clara Principles on Transparency & Content Moderation,” GlobalNetPolicy, http://globalnetpolicy.org/wp-content/uploads/2018/05/Santa-Clara-Principles_t.pdf.

Šimonović, Dubravka (2018): “Report of the Special Rapporteur on Violence against Women, Its Causes and Consequences on Online Violence against Women and Girls from a Human Rights Perspective,” United Nations Human Rights Council, https://www.ohchr.org/EN/HRBodies/HRC/RegularSessions/Session38/Documents/A_HRC_38_47_EN.docx.

Software Freedom Law Center (2012): “Intermediaries, Users and the Law—Analysing Intermediary Liability and the IT Rules,” https://sflc.in/sites/default/files/wp-content/uploads/2012/07/eBook-IT-Rules.pdf.

Sookman, Barry (2018): “US Court Thumbs Its Nose at Supreme Court of Canada: Google v Equustek,” Barry Sookman, http://webcache.googleusercontent.com/search?q=cache:8uT6DVpUB9wJ:www.ba....

Sutter, Gavin (2011): “Re-thinking Online Intermediary Liability: In Search of the Baby Bear Approach,” Indian Journal of Law and Technology, Vol 7, pp 33–90.

Swiss Institute of Comparative Law (2015): “Comparative Study on Blocking, Filtering and Take-down of Illegal Internet Content,” Council of Europe, https://rm.coe.int/CoERMPublicCommonSearchServices/DisplayDCTMContent?documentId=09000016806554bf.

Sylvain, Olivier (2018): “Intermediary Design Liability,” Connecticut Law Review, Vol 50, No 1, pp 204–74.

Urban, Jennifer M, Joe Karaganis and Brianna L Schofield (2016): “Notice and Takedown in Everyday Practice,” Berkeley: University of California, Berkeley Law, http://illusionofmore.com/wp-content/uploads/2016/04/Berkeley_Columbia-o....

Urban, Jennifer M and Laura Quilter (2005): “Efficient Process or Chilling Effects—Takedown Notices under Section 512 of the Digital Millennium Copyright Act,” Santa Clara Computer and High Technology Law Journal, Vol 22, p 621, https://scholarship.law.berkeley.edu/cgi/viewcontent.cgi?article=1500&co....

Valente, Mariana (2018): “Do We Need New Laws to Address Non-consensual Circulation of Intimate Images: The Case of Brazil,” GenderIt.org, https://www.genderit.org/articles/do-we-need-new-laws-address-non-consensual-circulation-intimate-images-case-brazil.

Vargas, Paula (2014): “Argentine Supreme Court Decides Landmark Intermediary Liability Case,” The Center for Internet and Society at Stanford Law School, https://wilmap.law.stanford.edu/news/argentine-supreme-court-decides-lan....

Working Group on Broadband and Gender (2015): “Cyber Violence against Women and Girls—A Worldwide Wake-up Call,” UN Broadband Commission for Digital Development, http://www.unesco.org/new/fileadmin/MULTIMEDIA/HQ/CI/CI/images/wsis/GenderReport2015FINAL.pdf.

Cases Cited 

Avnish Bajaj v State (NCT) of Delhi (2005): 116, DLT, 427.

Delfi A S v Estonia (2015): ECtHR, 64669/09, European Court of Human Rights. 

Google India Private Limited v M/S Visaka Industries Limited (2009): Andhra Pradesh High Court.

In Re: Prajwala (2015): SMW (Crl) No(s), 3/2015, Supreme Court of India, https://indiankanoon.org/doc/73685449/.

Joseph Shine v Union of India (2017): Writ Petition No 194, Supreme Court of India.

MySpace Inc v Super Cassettes Industries Ltd (2016): FAO(OS) 540/2011, High Court of Delhi, http://lobis.nic.in/ddir/dhc/SRB/judgement/24-12-016/SRB23122016FAOOS540....

Sabu Mathew George v Union of India and Others (2008): Writ Petition No 341, Supreme Court of India.

Shreya Singhal v Union of India (2015): AIR, SC, 1523.

