ISSN (Online) - 2349-8846

The Privacy Judgment and Financial Inclusion in India

Malavika Raghavan is a lawyer working on emerging issues in policy and regulation around financial inclusion and consumer protection in India. She currently leads the Future of Finance Initiative at Dvara Research (formerly the IFMR Finance Foundation).

Technology promises to overcome traditional barriers to financial inclusion, in particular by harnessing insights from consumers’ personal data. However, use of personal data creates new risks for consumers. Service providers must build consumers’ trust, a necessary precondition to increased participation in formal finance. K Puttaswamy v Union of India frames the use of personal data within the rubric of informational privacy, and in doing so provides guidance about how data practices and regulation in finance can evolve to align with citizens’ rights and reasonable expectations of informational privacy.

The author thanks Beni Chugh for valuable comments.

In recent years, efforts to expand access to formal finance have looked to technology and digital financial services to overcome traditional barriers, in particular by harnessing insights from consumers’ personal information. However, it is also being recognised that merely growing digital technology does not deliver digital dividends to everyone unless its “analog complements” are in place: appropriate regulations, accountable institutions, and users who are genuinely enabled to use these technologies (World Bank Group 2016). K Puttaswamy v Union of India (2017a) matters to those of us interested in ensuring complete, suitable access to financial services for every individual because it frames the use of personal information within the rubric of the right to informational privacy. In doing so, the Puttaswamy judgment provides early guidance on how to evolve data practices in finance to allow consumers to benefit from technology whilst avoiding the harms that arise from the use of personal data.

Financial Inclusion Conundrum

The recent Report of the Household Finance Committee (RBI 2017) highlighted three types of events that cause major financial losses to about 60% of Indian households: loss of crops or livestock due to bad weather, medical emergencies, and natural disasters causing damage to property, farm equipment or business. This figure is worrying given the liquidity constraints most Indians are already living with, as average monthly earnings are estimated to be below ₹10,000 for 77% of rural households and 45% of urban households in India (GoI 2016).

Access to formal finance could improve the lot of the majority of Indians living with poverty and uncertainty. The report found that Indian households could benefit greatly from reallocating their assets through formal financial markets, shifting to institutional debt from informal sources, and avoiding the steep interest burden of emergency credit (RBI 2017). There is evidence that financial deepening can contribute to poverty reduction, not only through the direct benefit of access to financial services but also through its indirect structural effects (Ayyagari et al 2013). Despite these theoretical benefits, the unfortunate reality is that the majority of Indian households do not rely on the formal financial system to weather the storms that they face. About 76% of households turn to friends and family, accumulated savings, and moneylenders to tide over crisis situations (RBI 2017: 38). Only 55% of households in the poorest quintile of the population hold any financial assets, whereas that number rises to 90% for the richest quintile (RBI 2017). Why is the formal financial system so poorly utilised by our people despite years of policy initiatives aimed at financial inclusion?

Some Traditional Barriers

Factors that have been identified as supply-side barriers to banking “the unbanked” include high operating costs for low-value services, and the information asymmetries that providers face when it comes to “thin-file” or “no-file” individuals (Costa et al 2015). To overcome these information asymmetries, formal institutions rely on extensive know your customer (KYC) processes and additional documentation for first-time users of formal finance, creating further barriers of time, money, and incidental costs, especially for lower income groups who are often unable to meet these documentary requirements (Mowl and Boudot 2014). No surprise, then, that these consumers have been put off by the large administrative burden and complicated paperwork required to engage with formal institutions, impeding their participation in formal financial markets (RBI 2017).

This impasse, however, signals the larger problem that operates between financial services providers and poorer consumers: low trust. Low trust and negative perceptions of formal providers can prevail among low-income groups who see access to financial products as the prerogative of elite groups in society (RBI 2017). The potential role of stigma in reinforcing the sense of exclusion for low-income groups interacting with formal providers has been observed to be a barrier to participation in formal finance (Mowl and Boudot 2014). Karlan et al (2013) have also highlighted information gaps, social constraints, and behavioural biases as hindrances to the adoption of formal savings and services by the poor across developing countries. Much of this resonates with the Indian experience. So the conundrum of financial inclusion continues in India whereby, despite known benefits, there is limited meaningful use of formal finance by most of the population. It is to bridge this gap that technology has stepped in.

Promise of Technology

Technology holds out the promise of overcoming many obstacles to formal sector participation that have traditionally confounded us. The almost ubiquitous ownership of mobile phones has introduced a new channel for providers to reach consumers directly. In India, the Aadhaar system has resulted in unique identification numbers being assigned to almost the entire population (UIDAI 2017). The e-KYC layer of IndiaStack allows people to remotely verify their identity and authenticate transactions through biometric verification or SMS to mobile numbers linked to their unique identification number. This offers the potential to overcome the first major barrier to engagement for formal finance: identification and verification of previously unbanked clients.

Once consumers have entered the system, the increasing use of technology and online services in society keeps creating more digital trails of individuals’ information, preferences, and behaviours. Innovative providers are using varied forms of non-traditional data—from mobile call data records and bill payments to internet browsing patterns and social media behaviour—to create new ways to assess consumer risk, determine the creditworthiness of previously “invisible” consumers, and consequently offer convenient, quicker, and often cheaper loans to the previously underserved (Costa et al 2015). The rising availability of such information about consumers from new sources, and the use of new techniques and algorithms to gain insight from it, are permitting providers to design new products that fit the actual needs and realities of consumers, based on their behaviour and demographic information (Kemp 2017). Technology can therefore help overcome many cost and information barriers that providers of finance usually face with consumers who have limited assets or credit profiles. However, these techniques also expose consumers to a range of new risks that need to be managed in order to overcome the most fraught barrier to participation of all: low trust.

Personal Information in Finance

The increased collection and aggregation of personal data, and the use of algorithms to gain insights from this data, is a trend across sectors; but finance is often its most prominent use case. It raises the potential for Indians to become “data rich” before becoming “economically rich,” as Nandan Nilekani has often stated (Jha 2017). Providers can not only collect more information directly from customers, but also track customers physically (using geolocation data from their mobile phones); track online browsing histories and purchases; and engage third parties to combine the provider’s detailed information on each customer with aggregated data from other sources about that customer, including their employment history, income, lifestyle, online and offline purchases, and social media activities (Kemp 2017). The extent of personally identifiable information that is being amassed about people flags new concerns around the use or misuse of this information.

In this context, the judgments of the nine-judge bench in the Puttaswamy case are remarkable because they engage with society as the Court finds it: an age of information, where knowledge is power and “technology has made life fundamentally interconnected.” The plurality opinion by Justice D Y Chandrachud acknowledges the complex issues for informational privacy that arise in this age of information (K Puttaswamy v Union of India 2017b). It takes note of the challenges that new data mining techniques and big data pose to privacy interests, many of which are already manifesting in real-world incidents.

A tangible consequence of the compromise of our personal data is data theft leading to impersonation, fraud, and financial loss. The recent theft of the Aadhaar data of about 300 people, which was then fraudulently used to withdraw sums of over ₹40 lakh deposited in their names under a government pension scheme, highlights the potential for such thefts (Indian Express 2017). While fraud has occurred in the past, new technologies create new ways in which it can be committed, and a larger scale at which it can occur.1 At an operational level, frequent server downtimes, interrupted transactions, and a lack of confirmation messages compound the underlying fears of fraud and reputational risk, and have added to consumer perceptions that funds held digitally are not safe (Wright and Pandey 2016).

Less tangible, but equally damaging harms can also be created by new data practices, even by well-intentioned providers. This can arise where decisions may be made to the detriment of entire groups or segments of people based on inferences drawn from big data, without the knowledge or consent of these groups (Kemp 2017). More granular segmentation of consumers based on demographic or behaviour-based factors can allow practices that go against principles of treating consumers fairly and not subjecting them to unfair or deceptive practices. For instance, behavioural data can help companies charge vastly different prices for the same product to customers in the same target group, exploiting factors like consumers’ willingness to pay, track record of inertia in switching or brand loyalty (ESMA 2016).

Some note that this is particularly problematic for low-income or less technologically savvy consumers, who could be subject to “weblining” (Newman 2014): the online equivalent of “redlining,” the offline practice whereby financial institutions deny services or raise prices for consumers in particular geographical areas where low-income or minority groups reside. This means that, although personal information is being used to overcome information asymmetry and expand services, there is a risk of creating the opposite effect, making it easier to avoid consumers who appear “low value” in the short term. While some might argue that the market will correct itself in the long run, the welfare losses for consumers before this happens could be devastating, especially for the poorest amongst us.

The opposite concern is that the information from vulnerable groups that face fund crunches could be used by predatory finance providers to target or mis-sell products to them. In recent months in China, a spate of suicides by students has turned the spotlight on online lenders who targeted them with loans at staggering interest rates and questionable collection practices (Zhang and Woo 2017). Such opportunistic targeting could have long-term implications of further excluding vulnerable groups, as defaults on even very small loans can have negative repercussions on creditworthiness data. Consider the 4,00,000 Kenyans who have been blacklisted by credit bureaus for non-repayment of mobile money loan balances of less than $2 (Business Daily Africa 2016).

The reputational harm and social shaming that can also result are high social costs that individuals have to live with for years. The design of digital interfaces is also sometimes used to exploit individuals’ behavioural biases. Once their personal information is available with providers, consumers can unwittingly be signed up for services through default auto-renewal or auto-subscribe settings. The recent alleged opening of payments bank accounts for scores of consumers, using the information submitted to complete Aadhaar-based mobile subscriber identification module (SIM) verification, highlights the risks of such use of consumers’ information without clear prior notice (Deccan Chronicle 2017).

These harms highlight the negative consequences for individuals when their information is inappropriately disclosed or used; in other words, the negative consequences of privacy violations. Calo’s (2011) seminal conceptualisation of privacy harms in two categories provides a useful framework to understand them: (i) subjective harm, that is, the perception of unwanted observation, and the mental states of anxiety or embarrassment that accompany the belief that one is being watched; and (ii) objective harm, that is, the unanticipated or coerced use of information concerning a person against that person. In the financial services context, subjective harm could be the distress caused by the social shaming that accompanies an unpaid debt, and objective harm could be the use of an individual’s personal or financial information to commit fraud or to claim benefits due to them. These harms can be created in new ways or amplified in the digital economy.

Reading Puttaswamy’s Tea Leaves

To make finance easy and safe to access, it is important to have an open conversation about how to address the new risks facing consumers in a digital economy, and how to build trust. The Puttaswamy decision is an important one not only because it recognises Indians’ fundamental right to privacy, but also because it addresses some of the discomfiture arising from today’s data practices. In the plurality opinion, Justice Chandrachud admits that it is impossible to conceptualise all possible uses of information and resulting harms, given the pace of technological change and the state of data practices. However, the judges go further to indicate some broad principles that can be used to ensure that data practice and regulation develop in a way that avoids privacy harm and protects individual autonomy.

First, an important contribution the Puttaswamy judgment makes is to cast the “reasonable expectation of privacy” that every person possesses as a touchstone to use when navigating the thorny questions of what constitutes the legitimate collection and use of personal data. Of particular relevance to the financial sector is the quoting of District Registrar and Collector, Hyderabad v Canara Bank (2005), which dealt with the privacy of customers’ transactional and banking information held by a bank. In that case, the Court upheld the informational privacy of customers with respect to their banking documentation against statutory provisions mandating disclosure, and deemed those provisions to be unreasonable encroachments on customers’ rights. The plurality opinion in the Puttaswamy case also highlights the subjective and objective elements of the reasonable expectation of privacy. It notes that our constitutional values can guide us to collectively agree on an objective zone of privacy where people and their information are allowed to be private. A future data protection regime must make this assessment of the contexts in which these objective expectations of privacy exist, for instance with respect to medical information.

Second, the Court specifically picks out the principle of non-discrimination as one that should operate during data collection, to ensure that data practices do not discriminate on the basis of racial or ethnic origin, political or religious beliefs, genetic or health status, or sexual orientation (K Puttaswamy v Union of India 2017b: para 178). This has direct implications for the use of big data and granular segmentation of consumers in finance: future regulation could take a cue from this and treat algorithms that distinguish between consumers along these characteristics as infringing consumers’ privacy.

Given the nature of harm that can arise from the misuse of financial information, I would argue that finance is a context to which such reasonable expectations of privacy attach. This means that those who handle personal information when providing finance and related services should act in line with consumers’ privacy expectations and make best efforts to avoid exposing consumers to the kinds of risks noted in the previous sections. Providers could even take the lead over regulation today, and consider what these objective reasonable expectations of consumers might be, so that data practices are tailored accordingly. For instance, one obvious reasonable expectation could be that personal information should not be used to harm consumers or to discriminate unfairly on racial, religious, or gender grounds, and algorithms could be built and tested accordingly. Another reasonable expectation could be that information should only be used for the purpose for which it was collected, and not retained in perpetuity.

The Puttaswamy judgment has also given backing to the view that privacy is not a value that can be traded away for some other sense of development. Especially in the financial inclusion context, the existence of the “Privacy Paradox”—whereby people state that they value their informational privacy but surrender it at the drop of a hat—sometimes leads to the argument that people’s need for finance is traded off against their concern about the disclosure of their personal information. Rather than take this route, the plurality opinion quotes Posner (2008) to see trust as a reason for the paradox, that is, that people reveal personal information because they trust entities like the state, have an expectation that it will not be used to harm them, and can benefit collectively from such disclosure.

Finally, the judgments also touch upon the role of non-state actors when it comes to the right to privacy. While they consider several tests for scenarios in which the state can restrict the right to privacy, they defer to the government when it comes to the regulations that will govern the private sector. The plurality opinion of Justice Chandrachud notes that the state has both negative obligations not to infringe privacy and positive obligations to adopt suitable measures to protect privacy, of which adequate and appropriate data protection regulations are a part. Justice S K Kaul notes in his opinion that enabling such claims against non-state actors may require legislative intervention by the state (K Puttaswamy v Union of India 2017c). Such regulation is perhaps inevitable after this judgment and the constitution of a committee by the Government of India with a mandate to deliberate on a data protection framework for India (GoI 2017).

Providers would do well to take the lead on thinking through the objective reasonable expectations of data use and avoidance of harms that consumers broadly expect, and build these into their practice to stay ahead of regulation and in step with consumers’ concerns. Consumers at the bottom of the pyramid have the most to lose because they have so little, and building services that win their trust will ultimately be the real “game changer” for India’s financial inclusion ambitions.


1 Common types of fraud in mobile money include (i) transactional fraud, committed by those misusing personal information and impersonating genuine consumers (through the unauthorised use of information including account details, PINs, passwords, or other identification details); (ii) channel fraud, which may be carried out by agents (through false commissions, registering false accounts, or overcharging); and (iii) internal fraud, where employees may collude to defraud financial institutions or use their access to undertake identity theft and exploit customer information without authorisation (Gilman and Joyce 2012).


Ayyagari, M, T Beck and M Hoseini (2013): “Finance and Poverty: Evidence from India,” CEPR Discussion Paper 9497,

Business Daily Africa (2016): “Pain of Kenyans Blacklisted for Amounts as Small as Sh100 in Mobile Loans, Bank Fees,” 9 September,

Calo, R (2011): “The Boundaries of Privacy Harm,” Indiana Law Journal, Vol 86, No 3, pp 1131–61.

Costa, A, A Deb and M Kubzansky (2015): “Big Data, Small Credit: The Digital Revolution and Its Impact on Emerging Market Consumers,” Omidyar Network,

Deccan Chronicle (2017): “UIDAI Slaps Notice on Airtel, its Payments Bank for Flouting Aadhaar Rules,” 21 September,

District Registrar and Collector, Hyderabad v Canara Bank (2005): SCC, SC, 1, p 49.

ESMA (2016): “Joint Committee Discussion Paper on the Use of Big Data by Financial Institutions,” Joint Committee of the European Supervisory Authorities (ESMA, EBA and EIOPA), JC 2016 86, European Securities and Markets Authority, 19 December,

Gilman, Laura and M Joyce (2012): “Managing the Risk of Fraud in Mobile Money,” GSMA: Mobile Money for the Unbanked Programme Publication,

GoI (2016): “Report on Fifth Annual Employment – Unemployment Survey (2015–16) Volume I,” Ministry of Labour and Employment (Labour Bureau), 15 September,

— (2017): “Office Memorandum: Constitution of a Committee of Experts to Deliberate on a Data Protection Framework for India,” Ministry of Electronics & Information Technology, Government of India, No.3(6)/2017-CLES, 31 July,

Indian Express (2017): “Hyderabad Police Arrest 3 for Stealing Aadhaar Data,” 29 October,

Jha, L K (2017): “Aadhaar Helped Indian Government Save $9 Billion, Says Nandan Nilekani,” Mint, 13 October, viewed on 5 November,

K Puttaswamy v Union of India (2017a): Writ Petition (Civil) No 494 of 2012, Supreme Court judgment dated 24 August.

— (2017b): Writ Petition (Civil) No 494 of 2012, Supreme Court judgment dated 24 August (plurality opinion).

— (2017c): Writ Petition (Civil) No 494 of 2012, Supreme Court judgment dated 24 August (Kaul, J, concurring).

Karlan, D, A L Ratan and J Zinman (2013): “Savings by and for the Poor: A Research Review and Agenda,” Center for Global Development Working Paper 346, November,

Kemp, K (2017): “Big Data, Financial Inclusion and Privacy for the Poor,” IFMR Blog, 22 August,

Mowl, A J and C Boudot (2014): “Barriers to Basic Banking: Results from an Audit Study in South India,” NSE Working Paper Series No WP-2014-1,

Newman, N (2014): “How Big Data Enables Economic Harm to Consumers, Especially to Low-Income and Other Vulnerable Sectors of the Population,”

Posner, R (2008): “Privacy, Surveillance, and Law,” University of Chicago Law Review, Vol 75, pp 245–60.

RBI (2017): “Indian Household Finance,” Report of the Household Finance Committee, Reserve Bank of India, 24 August,

UIDAI (2017): “State/UT-wise Aadhaar Saturation —31st October 2017,” Unique Identification Authority of India,

World Bank Group (2016): World Development Report 2016: Digital Dividends, Washington, DC: World Bank Publications, pp 94–98.

Wright, G and S H Pandey (2016): “Customer Vulnerability, Trust and Risk in Indian Digital Financial Services,” MicroSave blog, April,

Zhang, S and R Woo (2017): “After Spate of Suicides, China Targets Predatory Student Lending,” Reuters, 27 September,

[All the URLs were viewed on 9 November 2017.]

Updated On : 27th Dec, 2017

