Privacy and Manipulation: How Social Media Has Affected Political Discourse

As our interactions with social media and messaging apps become more and more pervasive, does our data remain private or can it be used by algorithms to tailor and manipulate our online experience?

The social media messaging application WhatsApp recently updated its privacy policy. WhatsApp has tried to assure users that it cannot read or listen to personal conversations and has even delayed the implementation of the new policy. Yet, concerns have been raised about its sharing of user data with parent company Facebook. While the app uses end-to-end encryption for messages and will continue to do so under the new privacy update, it already shares user data, such as location, contacts, usage and identifiers, with Facebook. Therefore, experts have explained that the key difference in the new policy relates to how users engage in “business interactions.”

With the new WhatsApp privacy policy raising questions about data protection, the debate over privacy has resurfaced. In a previous reading list, the discourse on data protection and privacy was analysed in depth. In a panel discussion on privacy concerns in a data society, as part of the “Data Societies 2020” series, the privacy debate was examined in the context of the capitalist forces that sustain social media.

But the present debate also brings to focus the role of social media in a changing data ecosystem. How do our data interactions online affect the content that is distributed to us? And how does this play into third parties manipulating us for advertising and propaganda? Are end-to-end encrypted models free from such interference? In this reading list, we look broadly at questions related to social media and privacy. 

Social Media Algorithms and Manipulation

While elements of manipulation and propaganda can be discerned in traditional media forms as well, the nature of social media allows for manipulation at a more insidious level. Amba Uttara Kak (2018) wrote:

In offering a single persuasive opinion at the right time, targeted communications are uniquely placed to narrow our information choices (Benkler 2001).

She introduced the basic premise of information consumption on social media:

It has been an open secret that communications on the internet are concentrated on a handful of social media platforms, and with a smaller subset of companies that own them. The internet might have limitless content, but its consumers have finite attention. Platforms are built around addressing this attention scarcity, with the stated goal of bringing forth content that is relevant and tailored to each user. Social media companies do not generally produce content, and so their very public community guidelines apply to users, rather than constitute editorial guidelines for the platform. Through the operation of algorithms, however, they organise and rank what an individual user sees on the platform and when, as well as decide on the basis for classification of “trending topics.” Overall, these algorithms determine the content that is amplified and that which disappears without a trace. A part of this, of course, is sponsored content, which may be artificially boosted to target audiences in exchange for payment.
These algorithmic choices and business dynamics shape our exposure to opinion and fact, and the range of sources from which we get them. The manipulation of their preferences may not interfere directly with an individual’s options, but, as legal philosopher Joseph Raz (1986: 377) explains, “perverts the way that person reaches decisions, forms preferences, or adopts goals.”

Alok Prasanna Kumar (2018) explained how social networks essentially control what users see:

Whereas Facebook was the outlier in terms of moving away from a purely chronological feed (latest posts first) to an algorithmic one (where the user sees what the social network’s algorithm thinks may be most relevant), this seems to have become the standard across all social networks (Kiberd 2016). This, as research has shown (Hern 2017), is hardly benign. There is the serious potential of this algorithm being gamed to spread fake news and other unlawful content. More importantly, social networks cannot just be considered the internet’s version of bulletin boards. Rather, the use of algorithms now means that, in my view, they should be considered more akin to an editor of a newspaper.

The pervasiveness of algorithms in determining online social life is evident in India. As Padmini Ray Murray and Paul Anthony (2020) wrote:

Platform logic has completely shaped the Indian citizen’s relationship to data and their attitudes to it. Platforms mediate every aspect of digital life, friendships, personal relationships, consumption as well as the citizen’s relationship to the state. India is currently a “platform society” a term coined by Van Dijk et al that “emphasizes the inextricable relation between online platforms and societal structures. Platforms do not reflect the social: they produce the social structures we live in” (Van Dijck et al 2018).

Privacy, Consent and the Data That Feeds Algorithms

What complicates the picture of algorithmic manipulation on social media is the data used to feed the algorithms, which can often be obtained without informed consent. Amber Sinha (2018) noted:

… Today, data is collected continuously with every use of online services, making it humanly impossible to exercise meaningful consent. The quantity of data being generated is expanding at an exponential rate. With connected devices, smartphones, appliances transmitting data about our usage, and even the smart cities themselves, data now streams constantly from almost every sector and function of daily life, “creating countless new digital puddles, lakes, tributaries and oceans of information” (Bollier 2010).

“The infinitely complex nature of the data ecosystem” renders consent of little value even in cases where individuals may be able to read and comprehend privacy notices. Sinha added:

As the uses of data are so diverse, and often not limited by a purpose identified at the beginning, individuals cannot conceptualise how their data will be aggregated and possibly used or reused. 

Seemingly innocuous bits of data revealed at different stages could be combined to reveal sensitive information about the individual. 

The power asymmetry between users and platforms, as well as between users and the government, also informs the nature of consent for data use. Murray and Anthony explained:

To expect that in a country like India, citizens who are largely disenfranchised by poverty and illiteracy, should have to constantly look over their shoulders to protect their own data from their own government, demonstrates a gross desertion of care.
…even the most educated [users] are less than aware of what rights they are relinquishing when they tick the “I accept” box at the bottom of an end user license agreement. 

The many ways in which the potential interplay of data and privacy could play out in the face of evolving technology are hard to predict. Emphasising the need for reform in India’s data protection laws, Zubin Dash (2019) wrote:

Data is often termed the new oil. But data is not oil, oil will eventually run out. Data is an ever-replenishing source of ungodly amounts of revenue, waiting to be collected, processed, mined, analysed, scrutinised, and ultimately monetised. Through digital payments, artificial intelligence, big data, autonomous vehicles, drone operations, among many other developments, privacy will be impacted from new quarters – some which are not even foreseeable today. 

Facebook’s Track Record with Privacy—Cambridge Analytica

Globally, the Cambridge Analytica case has been a watershed in exposing how social media and data analytics companies manipulate public life (Howard 2018).

wrote Sahana Udupa (2019), referring to the allegations of widespread political manipulation by Cambridge Analytica using Facebook user data. 

Kak (2018) explained:

Cambridge Analytica used deceptive means (illegal in several countries) to gain access from Facebook to “granular” information about more than 50 million Americans and deployed it to tailor political messaging for Donald Trump’s (eventually successful) presidential campaign. 
Propaganda is not new and, when done right, it does involve manipulation of public opinion. What, then, was new about this incident? For one, information about people’s preferences and motivations had been obtained under the pretence of a cheerful personality quiz on Facebook (Cadwalladr and Graham-Harrison 2018). Users were outraged that their (and their Facebook friends’) information was used as fodder for a political campaign, without their knowledge or consent.

Sinha (2018) emphasised that the Cambridge Analytica scenario was not the product of a data breach or a lapse in data protection, but of the very design of the social media ecosystem. He wrote:

It is evident from such data-sharing practices, as demonstrated by the Cambridge Analytica–Facebook story, that platform architectures are designed with a clear view to collect as much data as possible. This is amply demonstrated by the provision of a “friends permission” feature by Facebook on its platform to allow individuals to share information not just about themselves, but also about their friends. For the principle of informed consent to be meaningfully implemented, it is necessary for users to have access to information about intended data practices, purposes and usage, so they consciously share data about themselves. 
In reality, however, privacy policies are more likely to serve as liability disclaimers for companies than any kind of guarantee of privacy for consumers. A case in point is Mark Zuckerberg’s facile claim that there was no “data-breach" in the Cambridge Analytica–Facebook incident. Instead of asking each of the 87 million users whether they wanted their data to be collected and shared further, Facebook designed a platform that required consent in any form only from 270,000 users. Not only were users denied the opportunity to give consent, their consent was assumed through a feature which was on by default. This is representative of how privacy trade-offs are conceived by current data-driven business models. Participation in a digital ecosystem is by itself deemed as users’ consent to relinquish control over how their data is collected, who may have access to it, and what purposes it may be used for.

Political Propaganda on Social Media—The Indian Scenario

Cambridge Analytica and the Trump election campaign are not the only instance of political propaganda on social media proving electorally beneficial.

Referring to Twitter and political messaging as “an integral duo in the electoral success of the Bharatiya Janata Party (BJP) in 2014,” Vignesh Karthik K R, Vihang Jumle and Jeyannathann Karunanithi (2020) noted:

… the platform [Twitter] became an effective tool to amplify certain talking points which are intended to elicit a reaction from the individuals as against a well-thought-out response. This in turn led to Twitter being dominated by groups that act in a concerted fashion, many of which are politically and culturally rooted.  

Political messaging has come to pervade not just Twitter but social media platforms in general. Kak (2018) explained:

Political advertising on social media comes in many forms and remains under­examined in India. Direct forms include political campaigns paying social media companies to promote their content. Increasingly, however, advertising is channelled through personal accounts of individuals with large networks and high levels of engagement, labelled as “social-media influencers” (Basu 2018). These agents float content that invariably does not disclose that these are paid for by political campaigns or their social media companies.
… personalisation allows political actors to tailor their messages right down to the individual level, at scale and in real time. It is possible to roughly identify where particular kinds of audiences gather (say a group on Facebook or a particular Twitter account) with relative ease. However, some social media companies offer this service to advertisers, at scale and with greater precision. Facebook, for example, defines audiences based on “demographics, location, interests, and behaviour” and can charge a fee to disseminate content to this group (“Politically moderate, practices yoga, lives in Uttar Pradesh,” for example). 

Social media campaigning cuts across platforms and parties. Udupa (2019) wrote:

The BJP’s first-mover advantage in social media campaigning was challenged by other political parties during the run-up to the elections. Stepping up its efforts, the Indian National Congress (INC) re-energised several of its party units, including a dedicated “research team” to prepare “counters” to the BJP and other parties. Full-fledged social media teams of the Congress and regional political parties got on to the same game of composing witty, satirical, and retaliatory messages. 

The social media engagement of parties goes beyond direct social media campaigns. It capitalises on the “vast complexity of content creation and distribution channels, together with the speed of circulation in the digital age.” Udupa added:

Alongside party-based efforts, individual politicians increasingly recruited social media campaigners for online promotions. It was common to witness social media strategists accompanying politicians during campaign visits for ward-level mobilisation. These strategists ranged from a single individual who would follow the leader with a camera to upload the video the very next minute on Twitter, YouTube, and Facebook to small- and mid-sized enterprises that had paid teams working on social media promotions. Media reports also exposed clandestine operations of proxy companies that created toxic digital campaign content aimed against religious minorities and opposition party leaders (Poonam and Bansal 2019). Even as Facebook, WhatsApp and Twitter came under the radar for election content volatilities, TikTok, ShareChat, Helo and other mid-range platforms started providing new means to share political content and peddle partisan positions.

The problem with such politicisation on social media is not merely the manufactured support for a particular party or ideology; it also has a bearing on the nature of democratic participation. An EPW editorial (February 2019) noted:

When anonymous private entities with high capital can pay for more space for their opinions, they are effectively buying a louder voice. Not every voice on the internet commands the same kind of audience. If political discourse in the digital sphere is a matter of out-shouting one’s opponent till an election is won, then the quality of politics suffers. Voices from the grass roots do not have the volume to compete with the kind of resources that larger political parties can employ for mobilising the vote bank.
We can scrutinise expenditure of political parties on social media, but can we scrutinise the money spent by individuals at the behest of political parties? These nebulous connections within the architecture of social media platforms have enabled political parties to meet the dual goals of profitability and popularity. The focus is restricted to the promotion of content that generates more user engagement, regardless of how inflammatory the content may be. What we tend to forget is that social media is not an ideology or an ideal or a moral institution, but a product built by companies to make profits. Masquerading as democratic, the operating principle of these platforms is not democratic, but commercial and is, in essence, what can be called a “marketplace of views.”

Is WhatsApp Different?

Kak (2018) wrote:

In India, reports suggest that WhatsApp (much more than Facebook or Twitter) is the primary tool for the dissemination of political communications (Dias 2017; Daniyal 2018; Calamur 2017).

With over 400 million users in India (as of July 2019), WhatsApp’s selling point as a political platform is its ubiquity. Chinmayi Arun (2019) explained:

The ubiquity of WhatsApp is apparent to any Indian resident who uses a smartphone. The history is similar to the manner in which the platform became ubiquitous in countries like Kenya and Israel: WhatsApp disrupted SMS by offering not just low-cost texting and multimedia messaging, but also enhancements such as group chats, quick forwarding to multiple people, integration of text with video, audio, emoji, and other multimedia content. The company has modified some of these features in India recently, in res­ponse to government and public pressure. The first-mover advantage, and the architecture of the platform resulted in WhatsApp accumulating “social influence” (Church and Oliveira 2013), ensured its continued ubiquity and increasing popularity even after other apps started offering similar services. We all have friends, family and professional networks on WhatsApp. This is why we join and why we stay.

Commenting on the ease of spread of rumours and misinformation on WhatsApp, Sohini Sengupta (2019) wrote:

Social media platforms like WhatsApp have been blamed for unleashing the “dark forces” of the Internet. WhatsApp itself has avoided accepting total responsibility and particularly resisted demands for creating “traceability of messages” by compromising its end-to-end encryption. A powerful, convenient and affordable medium for information exchange that is used by millions for exchanging legitimate information; school homework, government departmental functions, social events, business, gets transformed. As peers engage in WhatsApp communication, reasonably secure in the knowledge of each other, through online and offline interactions, they form a “filter bubble” (Pariser 2011) that receive, believe and lend legitimacy to misinformation and disinformation.

Does WhatsApp’s end-to-end encryption help prevent political manipulation? In fact, the encrypted nature of WhatsApp poses challenges when it comes to regulation of political propaganda on social media. Kak (2018) argued:

… while political advertising on traditional mass media could potentially be identified, and consequently audited, personalised advertisements on social media are relatively opaque to external audits (Dias 2017). This problem is heightened with WhatsApp, which functions as a messaging app and has end-to-end encryption, making it technically resilient to interception. The inability to monitor the scale means that electoral spending limits are hard to enforce and promises made to prospective voters unaccountable.

Moreover, sharing user data with the government for regulatory purposes may create its own problems. Arun (2019) wrote:

WhatsApp’s sharing of metadata with the Indian government might put many vulnerable Indians at risk. While the company has clarified that it uses end to end encryption for the content of user conversations, it can access the metadata easily. This includes geolocation, and contact lists. A user merely has to install WhatsApp—and not even actually use it—for the company to be able access this data. Once WhatsApp shares phone numbers—and who is in contact with whom—with the Indian government, it is easy enough for the government to match the phone number with the individual since all phone numbers are linked to government-issued identification in India.
Nothing is known about the threshold that WhatsApp requires the Indian government to meet with before sharing metadata with it. The procedural safeguards for informational surveillance in India lag far behind internationally accepted norms. If WhatsApp shares metadata in response to every executive order, it will very likely enable the violation of human rights: the Indian government might be able to track the members of WhatsApp groups dedicated to people organising to criticise the state for human rights violations.

Arun also emphasised another aspect of the WhatsApp encryption–privacy debate: Are encrypted messages private considering they can be decrypted? He explained:

… there is the possibility that the contents of WhatsApp conversations are not impossible to access, as the company implies. WhatsApp has only maintained that it uses end to end encryption (WhatsApp FAQ 2018). However, it has said nothing about accessing conversations once they are decrypted and readable on users’ phones. Security experts have suggested that WhatsApp has put arrangements in place that would permit it to access this content (Zanon 2018). Whether this is true, and how far this content is shared with governments, is a question to which only WhatsApp knows the answer.

With the blurring lines between governments and regimes, can it be said with certainty that decrypted personal data will not be used beyond its stated investigative use—perhaps for political purposes?
Read More

Twitter and the Rebranding of Narendra Modi | Joyojeet Pal, Priyank Chandra and V G Vinod Vydiswaran, 2016

The Feigned Lives of Facebook | Suyash Saxena, 2019

Is Hindutva Masculinity on Social Media Producing A Culture of Violence against Women and Muslims? | Sujatha Subramanian, 2019
