First published in LegalEra Magazine, June 2020. This article is co-authored by Robert Bond, Salman Waris, Partner, TechLegis Advocates and Solicitors, and Noriswadi Ismail, Managing Director, Ankura.
The recent executive order issued by Donald Trump targeting social media companies, days after Twitter labelled two of his tweets “potentially misleading”, has pushed the debate over the regulation of social media to the forefront internationally.
Practically, the real problem with the executive order is that it could change the way the Internet works and signal support for platform regulation globally, especially in countries like India, where the government has already been considering social media regulation and democratic institutions are not very strong.
This is especially concerning in the Indian context, as India is currently finalising the Information Technology Intermediary Guidelines (Amendment) Rules, 2018 and is in the process of introducing the Framework and Guidelines for Use of Social Media Regulations, 2020.
Background
On July 26, 2018, changes to the Intermediaries Guidelines Rules, 2011, were announced in the Rajya Sabha “to make intermediaries more liable towards the content that is published, transmitted, etc. on their platform”.
More recently, with the coronavirus (hereinafter referred to as “COVID-19”) outbreak having been declared a global health emergency, and with countries across the world trying their best to mitigate its spread, there has been a trend of circulating misinformation/false news and sharing anonymous data related to the coronavirus on various social media platforms, creating panic among the public.
Regulatory developments
In light of the above, the Ministry of Electronics and Information Technology (hereinafter referred to as “MeitY”), vide its recent Notification No. 16(1)/2020-CLES dated 20th March 2020 (hereinafter referred to as the “Advisory”), advised social media platforms to curb false news/misinformation with regard to the coronavirus.
The Ministry of Information and Broadcasting, vide its Notification No. 42015/2/2019-BCIII dated April 1st 2020 (hereinafter referred to as the “Notification”), has requested that the recent Supreme Court of India judgment in Writ Petition (Civil) No. 468/2020, concerning false news/misinformation on the coronavirus, be disseminated to all stakeholders for appropriate action.
Further, MeitY has also announced a revision of the intermediary guidelines before finalising the social media regulation rules, which are expected to be rolled out later in 2020.
Government’s dilemma
With the rise of platforms such as Facebook and TikTok, the Government observed that the “Right to Freedom of Speech and Expression” was being exercised recklessly and carelessly by citizens; its primary concerns are the rise in hate speech, fake news, defamation and so-called anti-national activities spread through social media platforms. Social media use has become more common in India because of “lower internet tariffs, availability of smart devices and last-mile connectivity”, and the Government views the Internet as a potent tool capable of causing “unimaginable disruption to the democratic polity”.
The government has proposed measures to regulate social media companies over harmful content, including “substantial” fines and the power to block services that do not comply with the rules.
In order to encourage and enable government agencies to make use of this dynamic medium of interaction, a Framework and Guidelines for use of Social Media in India has been formulated, which is to be included as a provision in the intermediary guidelines. The move follows the reckless and careless exercise of the Right to Freedom of Speech and Expression by citizens on social media. The government has urged Internet Service Providers (ISPs) to block access to child pornography websites. It has also requested all ISPs to educate their subscribers about the use of parental control filters on devices via messages, emails, invoices, websites and more.
One of the key problems with many regulatory measures is vague language, which state agencies can exploit to act repressively. Any regulation must be clear and concrete so that there is no scope for overreach.
The Supreme Court, in the WhatsApp traceability case, had earlier expressed the need to regulate social media to curb fake news, defamation and trolling. It also directed the Union Government to come up with guidelines to prevent misuse of social media while protecting users’ privacy within three weeks. The case came to light following an incident in which the devices of 1,400 people globally, including 121 Indians, were allegedly compromised after being infected with NSO’s Pegasus spyware, which only heightened the government’s concerns. The bench observed that the issue of traceability on social media and instant messaging apps needed to be dealt with keeping in mind the sovereignty of India, the privacy of the individual, and the prevention of illegal activities.
The regulations are intended to curb the misuse of social media and stop the spreading of fake news that sparked unrest and violence earlier this year, but internet companies and privacy advocates say the new measures are a threat to free speech.
The new rules proposed by MeitY dilute the legal provision which ensures that internet companies generally have no obligation to actively censor content. Under the new rules, all “intermediaries” are required to “proactively” purge their platforms of “unlawful” content or else potentially face criminal or civil liability.
Scope and applicability
The Information Technology Act, 2000 (hereinafter referred to as the “IT Act”) defines ‘intermediary’ under section 2(1)(w); social media platforms, as such, fall under the definition of intermediaries and are required to follow the ‘due diligence’ prescribed under the Information Technology (Intermediaries Guidelines) Rules, 2011 (hereinafter referred to as the “IT Rules”) notified under section 79 of the IT Act. They must inform their users not to host, display, upload, modify, publish, transmit, update or share any information that may affect public order or is unlawful in any way.
Therefore, as per the Advisory, social media platforms are directed to:
- Initiate awareness campaigns on their platforms advising users not to upload/circulate any false news/misinformation concerning the coronavirus that is likely to create panic among the public and disturb public order and social tranquillity;
- Take immediate action to disable/remove such content hosted on their platforms on a priority basis;
- Promote the dissemination of authentic information related to the coronavirus as far as possible.
The Supreme Court of India – Writ petitions
Pursuant to the Prime Minister’s declaration of a general countrywide lockdown, a large number of migrant workers tried to walk along highways to reach their home states, in fear because of fake news that the lockdown would last three months.
Petitions were filed seeking the Apex Court’s intervention and requesting direction to the Government of India with respect to providing shelter, food, clean drinking water and medicines to these migrant labourers.
Directions to media
In its judgment in Writ Petition (Civil) No. 468/2020, Alakh Alok Srivastava vs. Union of India, the Supreme Court of India issued the following directions:
- Media (print, electronic or social) to maintain a strong sense of responsibility and ensure that unverified news capable of causing panic is not disseminated.
- Section 54 of Disaster Management Act, 2005 provides for punishment to a person who makes or circulates a false alarm or warning as to a disaster or its severity or magnitude, leading to panic. Such person shall be punished with imprisonment which may extend to one year or with fine.
- A daily bulletin by the Government of India, through all media avenues including social media and forums, is to be made active to clear people’s doubts.
- Media is directed to refer to and publish the official version of developments.
The US and EU approach
The U.S. and the E.U. shaped their regulatory approaches 20 years ago with the aim of promoting internet growth, leading to the current status quo. These jurisdictions protect social media organisations and other platforms from liability for content that users post on their sites. At this juncture, both sides of the Atlantic are debating and revisiting these laws.
At the time of this publication, President Trump aims to strip such legal protection from social media organisations whose platforms engage in existing or potential political conduct or censorship.
On March 31st, 2020, Ursula von der Leyen, the European Commission President, remarked that disinformation can cost lives and called on social media organisations/platforms to share data with fact-checking communities so as to ensure accuracy. The EU’s chief goal is to oblige technology companies/service providers to cut back on hate speech and disinformation.
Whilst these rules are being reviewed, legislatures and relevant stakeholders should determine how social media organisations’ platforms should treat user-generated content containing hate speech and disinformation and, importantly, what key legal implications they will face.
Comparatively, the U.K. Information Commissioner, Elizabeth Denham, issued a forward-looking statement in response to the Department for Digital, Culture, Media & Sport consultation on the Online Harms White Paper[1]:
“I think the white paper proposals reflect people’s growing mistrust of social media and online services. People want to use these services, they appreciate the value of them, but they’re increasingly questioning how much control they have of what they see, and how their information is used. That relationship needs repairing, and regulation can help that. If we get this right, we can protect people online while embracing the opportunities of digital innovation.
“While this important debate unfolds, we will continue to take action. We have powers, provided under data protection law, to act decisively where people’s information is being misused online, and we have specific powers to ensure firms are accountable to the people whose data they use.
“We’ve already taken action against online services, we acted when people’s data was misused in relation to political campaigning, and we will be consulting shortly on a statutory code to protect children online. We see the current focus on online harms as complementary to our work, and look forward to participating in discussions regarding the White Paper.”
Self-regulation
The EU has had a self-regulatory approach over the past 10 years or more, expecting platform providers to take a responsible approach to their duty to manage appropriate behaviour. In the past year there has been a move towards greater legal regulation of social media unless platform owners react more quickly to remove or block hate speech, copyright-infringing material and other illegal content.
In response to Twitter’s recent fact-check labelling of President Trump’s tweets, the European Commission Vice-President is reported to have commented, “I want platforms to become more responsible, therefore I support Twitter’s action to implement a transparent and consistent moderation policy.”
Germany has the Network Enforcement Act (NetzDG), which for the past two years has required social network owners to combat fake news, and it is now planning a law aimed at hate speech and other criminal posts, demanding that social media platforms report such content to the police authorities.
The EU Digital Services Act is due to be put forward later this year and, amongst a general overhaul of e-commerce rules, is expected to oblige digital service providers to take greater responsibility for fair and equitable content and behaviour.
The challenge is that over-regulation can stifle expression and other freedoms. In 1998, the Report of the Ditchley Conference on the Regulation of Cyberspace, at which I was Rapporteur, concluded with my words: “When it comes to the regulation of Cyberspace, the business of Internet management should not hinder the management of Internet business.”
Conclusion
The US Communications Decency Act and the ‘safe harbor’ principles essentially protect free speech, and in today’s day and age, when so much of our speech is on the Internet, it is just as important to protect the carriers of our speech as speech itself. In India, Section 79 of the Information Technology Act similarly protects the platforms that enable our free speech. With fake news and deep fakes often having widespread community implications beyond the internet, in the real world, it is not only logical but also a legal obligation under the IT Act for service providers and social media platforms like Twitter to implement mechanisms to prevent the circulation of fake news and to offer users the option to fact-check. This may not sit well with certain politicians and leaders whose campaigns rely to a large extent on deception, misinformation and the circulation of fake news. Unfortunately, this is exactly what has happened in the present instance, where, in the run-up to the election, Trump has been caught red-handed, and it is frustrating him.
Perhaps it would be ideal for India, the U.S. and the E.U. to learn contextually from the U.K.’s approach and balance it within their respective cultural and local regulatory landscapes – of course, a herculean task for legislatures, social media organisations/platforms and other relevant stakeholders to collectively achieve consensus on.
——————
[1] See The Information Commissioner’s response to the Department for Digital, Culture, Media & Sport consultation on the Online Harms White Paper, retrieved June 6, 2020.