Interrogating the AI Hype: A Situated Politics of Machine Learning in Indian Healthcare

Though they may appear to be so, AI technologies are not merely technical systems. Rather, they are constitutive, and indicative, of the sociopolitical contexts in which they are situated.


This is part of a six-article series on questions surrounding data, privacy, and artificial intelligence, among other themes. You can read the introduction here.


"If you have the temerity to insert your work into a political issue that ... doesn’t immediately affect your life, you should also be prepared to accept the consequences ... it’s never ‘just’ an algorithm. And there’s no such thing as 'just’ an engineer." (Hoffmann 2018)

The introduction of information technology in healthcare at various points in history has largely been hailed as a series of watershed moments for the healthcare industry. For instance, for women, the advent of new forms of technology (such as in reproductive health services, through commercial surrogacy or population control) has had the effect of shifting power dynamics and increasing vulnerabilities under the garb of “development” and “empowerment” (Pande 2010, Goslinga-Roy 2000, Petchesky 1995). Many experts posit that we are at another such watershed moment in the history of healthcare, more specifically, “a revolution in health care” (Economist 2018), through the introduction in the industry of machine learning (ML)-enabled technologies, a subset of artificial intelligence (AI) technologies (Asokan 2019, Dash 2018). These technologies are trained on historical data to uncover patterns and learn from examples, and thus learn to predict and classify generalised future outcomes for the purposes of decision-making. They are based on data sets (referred to as Big Data) so large that they are beyond the human brain’s ability to analyse at comparable speed and scale.
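The core loop just described can be illustrated with a deliberately minimal sketch: a decision rule is "learned" from labelled historical records and then applied to unseen cases. All data, names, and thresholds below are invented for illustration; real diagnostic models are vastly more complex, but the pattern of learning from past examples to classify future ones is the same.

```python
# Illustrative sketch: an ML system "learns" a decision rule from
# historical, labelled examples, then applies it to unseen cases.
# All data and thresholds here are hypothetical.

def train_threshold_classifier(records):
    """Pick the cutoff that best separates historical positive/negative labels.

    records: list of (measurement, label) pairs, where label is 0 or 1.
    Returns the candidate threshold with the fewest misclassifications.
    """
    candidates = sorted(m for m, _ in records)
    best_t, best_errors = None, len(records) + 1
    for t in candidates:
        # Count how many historical cases this cutoff gets wrong.
        errors = sum((m >= t) != bool(label) for m, label in records)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

def predict(threshold, measurement):
    """Classify a new, unseen case using the learned rule."""
    return 1 if measurement >= threshold else 0

# Invented "historical data": (clinical reading, past diagnosis) pairs.
history = [(90, 0), (100, 0), (110, 0), (130, 1), (150, 1), (170, 1)]
t = train_threshold_classifier(history)
print(t, predict(t, 95), predict(t, 160))  # → 130 0 1
```

The sketch makes one point concrete: everything the system "knows" comes from the historical records it is given, so any skew in how those records were collected is inherited directly by its future predictions.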

Such AI-enabled automated diagnosis systems, and the algorithms that automate them, are being developed at a rapid pace in India, with the stated rationale of improving healthcare access in underserved parts of the country that have an acute shortage of skilled doctors. They aim to assist doctors in making diagnostic decisions and, in the future, may supplement the doctor’s presence. However, because these interventions are happening exclusively within a predatory, unaffordable healthcare sector, the introduction of new technologies becomes a method of using the bodies and medical records of the sick and poor as data to train machine learning algorithms. This technologisation, then, becomes an opportunity to develop advanced technology and replace labour with capital, without benefiting those it is targeted towards, while further putting them at risk from untested technologies. Driven by capital flows from the global North, and supported by state policy positions in India within a weak and anti-poor regulatory regime, this process is aided by the diversity of India's large population, as well as the significantly low costs of performing medical trials. This gives a new sheen to the expropriation of, and experimentation on, the sick and poor in the country. Since the design of such technologies requires access to large data sets, they are being developed through collaborations between large healthcare providers and global technology companies, to enable the sharing of patient data (in the form of medical records from hospitals and health centres) and to facilitate easier access to the bodies of patients for the testing of these diagnostic tools.

The analysis offered in this essay is derived from an ethnographic study I conducted of AI-enabled medical diagnosis systems in southern India, in the cities of Bengaluru (Karnataka), Madurai (Tamil Nadu), and Thiruppuvanam (Tamil Nadu) during 2017–2019. Most of the systems I have researched are still in the stage of data collection, testing, and clinical validation of the AI algorithms. Such pre-deployment, early-stage research on AI-enabled automatic diagnostic models is crucial because once these digital systems are deployed and scaled up, they can be “remarkably hard to decommission” (Eubanks 2018).

AI technologies are not merely technical systems, but rather sociotechnical systems, and the impact of these applications is closely dependent on the sociopolitical contexts in which they are deployed. Thus, I begin this essay by laying out the historical and contemporary context of the healthcare ecosystem in India. In the subsequent section, using the perspective that technology has been closely linked to the ideas of “development” and “nation-building” in India, I analyse the state’s policy positions on AI, foregrounding them in the current political climate. Next, I examine how the narrative of “AI for social good” is being used by the state and corporate stakeholders to gain public acceptance for exploitative AI technologies.

Contextualising the Healthcare Ecosystem in India

The histories of the healthcare ecosystem in India offer crucial context for critically analysing the development of AI-enabled automated diagnostic tools. Given the understanding that AI-powered tools are not just technical, but rather sociotechnical systems, it is necessary to first understand the social realities in which such an AI system will be deployed, and its interactions with society at large. This section thus explores this sociological context in some detail.

With the liberalisation of the Indian economy initiated in 1991, there has been an incremental, policy-induced expansion and dominance of private service providers (inclusive of foreign investors) in the Indian healthcare ecosystem (National Sample Survey Office 2016).[1] With this came a strong push towards cutting government spending in various sectors, including healthcare. This has compelled Indians seeking medical services both to incur significant out-of-pocket expenses and to turn to private healthcare services, given the unavailability of public medical institutions or their low quality of service. While the privatisation of an industry is not a new phenomenon for the country, what is being observed today is not only an intensification of the manner in which the industry has been privatising, but also a consolidation of medical services, with large corporate giants swallowing up smaller private entities and centralising the delivery of healthcare through franchisee units in various private hospitals (Vasan and Vijayakumar 2016). This has culminated in the consolidation of ownership and control of large corporate institutions over “data” collection and analysis within privatised and centralised data infrastructures.

The erosion of the public healthcare system is further attributed by healthcare activists to the deliberate attempts of the private sector to keep it only partially functional, that is, functional only to the extent of using public healthcare units for referrals to private units (Akhila Vasan and Vijayakumar Seethappa, Karnataka Janaarogya Chaluvali). When a particular facility is missing in a public healthcare unit, or there is a shortage of doctors or medicines there, the case is referred to private healthcare suppliers; the treatment is carried out in the private unit while the patient is monetarily reimbursed for the expenses (since the treatment would have otherwise been subsidised or free at the public unit). For example, in Karnataka, a large fraction of the total state budget for the health and family welfare department goes to private hospitals; “this 1.3% of the GDP [for public expenditure on healthcare]; a lot of it goes to the private hospitals’ kitty. In Karnataka, of the total 4,000 crore state Health and Family Welfare department budget, 1,000 crore goes to the kitty of the private hospitals” (Akhila Vasan, Karnataka Janaarogya Chaluvali).[2] With the extra money coming in through referrals, private hospitals declare profits every quarter and are thus able to give shareholders a return on their investments. Moreover, the bodies of patients become commodified into a set of numbers for private hospitals to use as conversion ratios. The widespread and effective private-sector opposition to the Karnataka Private Medical Establishments (KPME) Act, which proposed measures for increased accountability of the private medical sector, highlights the impunity with which the private healthcare sector in India functions (Krishna 2017).

In light of this, public health activists have raised pertinent and urgent questions: How can a hospital have targets and shareholders? Why are they declaring profits on people’s misery? Why should public money go to a private facility? Why can the same money not be invested in a government hospital to equip facilities, systems, and human resources, and to fill vacancies in the public health system on a permanent basis? What stops them from giving workers social and job security, so that they can retain the people they recruit, instead of developing AI to fill in the vacancies? (Akhila Vasan and Vijayakumar Seethappa 2016)

The Indian state also has a large role to play in historically targeting the poor, especially women, through interventions under the garb of “development.” What started out as controlling women’s bodies and fertility to curb population growth from a Malthusian line of thinking (Dhanraj 2003), soon transformed into the state struggling with the most basic health issues like maternal deaths; “this [pregnancy] is not a disease condition that requires women to die, and is typically an indicator that tells us about not just the health system’s robustness, but also the political environment.” (Akhila Vasan and Vijayakumar Seethappa 2016). The state also seems to have abdicated the work of policymaking to commercialised vested interests (Auroshree 2018). Given the absence of accountability measures for the private healthcare space, this has left the sick and poor even more vulnerable to exploitation, and has eroded the democratic functioning of the state.

Scholars and public healthcare activists, thus, broadly locate Indian healthcare today between a “completely dysfunctional public health system” and a “very predatory private health sector preying on people’s distress,” aided by a state that has surrendered part of its responsibility to unaccountable non-state actors. All these stakeholders target the sick and poor; the state for “control and converting it into a certain idea of development” and the private sector for “a monetary idea of profit” (Rajan 2005). It is in this sociopolitical climate that we must analyse the introduction of AI-powered technologies in the domain of healthcare.

Medical Technology for Development?

“What is the point of having an AI-enabled scanning machine in a place where there is no electricity?” — Vijayakumar Seethappa, Karnataka Janaarogya Chaluvali

Technology is deeply embedded in India’s national imagination for “development” and “nation-building.” For instance, the Scientific Policy Resolution of 1958 states that “The key to national prosperity, apart from the spirit of the people, lies, in the modern age, in the effective combination of three factors, technology, raw materials and capital, of which the first is perhaps the most important” (Department of Science and Technology 1958). The understanding of nation-building and development in these nationalist imaginations was, thus, one that would involve large-scale industrial advancement with a supposed trickle-down effect (Achuthan 2011).

These visions of technology-led development are also observed starkly today under the policies of the present ruling party, the Bharatiya Janata Party (BJP), which has implemented various schemes towards the creation of a “Digital India.” Under this campaign, launched by Prime Minister Narendra Modi in 2015, a digital infrastructure of surveillance is being built through technologies such as biometrics and facial recognition. For instance, the Aadhaar digital biometric identity system and the Smart Cities Mission fall under this mandate. These are being developed at the cost of vulnerable and marginalised communities such as women (Chandrasekhar 2018), queer-trans communities (Firstpost 2019), economically marginalised communities (Dey and Roy 2018, Johari 2018), people with (dis)abilities (Zubeda 2017, Press Trust of India 2018), and immigrants (Prasad-Aleyamma 2017), among others, whose livelihoods are being tracked and linked to dysfunctional and dystopian digital identities.

The state has also actively endorsed the use of AI technologies in domains ranging from healthcare to education to crime prediction, through the National Strategy for Artificial Intelligence (NITI Aayog 2018) and the Report of the Artificial Intelligence Task Force (Artificial Intelligence Task Force 2018). Within the domain of healthcare, the task force states that “AI has the potential to transform delivery of health services in rural areas.” Both reports focus on leveraging AI for quantitative, outcome-based economic growth and social development, without much attention to the social and ethical implications of AI technologies, such as the qualitative processes involved in the design and deployment of these tools and their experiential impact on the lives of already vulnerable populations. This is despite much research showing that such automated decision-making is based on biased data, gathered through prejudiced data collection practices laden with problematic social and political assumptions that translate into the AI tools (O'Neil 2016).

It is therefore crucial that when we talk about the usage of technology, we centre the discussion on the appropriate use of technology, as opposed to the dumping of technology. For example, IBM’s Watson for Oncology, an AI-enabled diagnosis tool for cancer, has been actively rejected in Denmark and the Netherlands because “it is too focused on the preferences of a few American doctors” (Ross 2017). The tool is reported to be trained on data from the Memorial Sloan Kettering Cancer Center in New York, and is shown to be biased towards American patients and standards of care, without taking into account the “economic and social issues faced by patients in poorer countries” (Ross 2017). Yet, this technology is currently in use in Manipal hospitals[3] across India for cancer diagnosis; the doctors at these hospitals also declined my request for an interview about the same. What other countries across the globe are abandoning due to ethical concerns, the Indian state seems to be embracing in its race towards increased technologisation. This takes focus away from the root cause of the social problem being targeted, thereby hindering efforts towards sustainable solutions.

"You’re not able to get doctors to sit in the most peripheral centres … what are you offering technology for? Why are doctors not sitting in those peripheral centres? Nobody is asking that question. People are sidestepping that and saying let’s bring in technology, doesn’t matter if doctors don’t sit, doesn’t matter if the doctors don’t treat, let’s bring telemedicine. We will not subject ourselves to it, but we will subject other poorer people to it. So, what are people like us talking about technology for?" — Akhila Vasan and Vijayakumar Seethappa, Karnataka Janaarogya Chaluvali

Public health activists argue that most doctors in the country enter lucrative fields such as medical tourism or cosmetic surgery, at least partially because of the expensive medical education that they have to go through, and because of the dysfunctional state of the public healthcare system at large. When the state and technology companies therefore focus on the shortage of doctors in remote parts of the country, and use this as a justification to bring in AI-enabled diagnostics in these regions, what they are effectively doing is evading these harder questions.

"So, for me, that's the wrong question to be asked that there are not enough doctors—we have many doctors—but what are they really doing, that’s the question. You have followed policies that have endangered doctors [...preventing them] from joining the public health system in favor of the private system. Now they will bring technology in the public system in place of doctors—what does this say about what will happen in public healthcare?" — Akhila Vasan and Vijayakumar Seethappa, Karnataka Janaarogya Chaluvali

The advent of emerging technologies such as machine learning is also seen as a new trend, supposedly raising new, complex questions. However, I find that these medical technologies do not necessarily raise new questions, as much as they make us talk about the same old questions in new ways, at best with added layers of complexity, packaged differently in code. Historicising the role that science and technology have played in narratives around social development, Nandy (1988) writes that the illusion of “spectacular development” consists of “occasional dramatic demonstrations of technological capacity.” Under such a model of development, “highly visible short-term technological performance in small areas yields nationwide political dividends” (Nandy 1988). While Nandy analysed this in 1988 in the context of technologies such as large dams and space flights, I find this analysis particularly relevant today to the sociopolitical conditions under which AI is being developed in India, and sold and marketed to the Indian public, especially the sick and poor, with active endorsement of the state. The sudden increase in development and testing of complex, large-scale AI-based technologies in the country—now being termed as the “AI hype” (Hao 2019)—has had the effect of biasing policy and investment decisions that favour the use of data-driven decision-making as the ideal form of technological intervention that is the “need of the hour.”

Thus, the questions we must ask are not merely to do with machine learning, but with what machine learning is replacing when we sell its applications as products that are a panacea for social problems. Such technology has created conditions wherein the Indian public “expect[s] this technology to allow the country to tackle its basic political and social problems and thus ensure the continued political domination of an apolitical, technocratic, modern elite over the decision-making process, defying the democratic system” (Nandy 1988). The reason these pertinent sociological questions are invisibilised today is the active effort by the state, in collaboration with corporate actors, to market these technologies in a manner such that dissent is not only discouraged, but violently clamped down upon, as is observed through the state’s authoritarian actions in the current political ethos (Economist 2019).

AI for Social Good or Experimentation?

"If you insist on working with the poor…  then at least work among the poor who can tell you to go to hell …  [It] is profoundly damaging … when you define something that you want to do as 'good,' a 'sacrifice' and 'help.' …  Come to look, come to climb our mountains, to enjoy our flowers … But do not come to help." (Illich 1968)

The policy and development goals that the Indian state is evidently unable to achieve through its unpopular, divisive, and fascist policies (Economist 2019) are, at least in part, being supplanted by a narrative that legitimises the use of advanced technologies such as machine learning to solve social problems. In this section, I critically unpack the popular narrative of “AI for social good” that has been adopted by the Indian state (with active encouragement from many corporate stakeholders) in its AI policy positions. Specific to the domain of healthcare, I argue that the introduction of new technologies becomes a method of using the bodies and medical records of the sick and poor as data to train machine learning algorithms, and an opportunity to innovate by replacing labour with capital, without benefiting those it is targeted towards, and, in fact, actively burdening them with risks arising from the unforeseen results of “spectacular,” experimental technologies.

As discussed in the previous section, the state’s position is to establish India as a “garage” for the research and commercial development of AI applications in the context of emerging and developing economies (NITI Aayog 2018). Similarly, corporate stakeholders have publicly released their philosophies around AI, which have been focused on “AI for social good.” For example, Google’s first objective for its AI applications is to “be socially beneficial” (Pichai 2018), and the non-profit Wadhwani Institute for Artificial Intelligence (which receives funding from the Bill and Melinda Gates Foundation) states its guiding tagline as “Artificial Intelligence for Social Good” (Wadhwani AI). Here, I examine two related concerns: who designs this “AI for social good,” and who is targeted through it.

The AI industry in India (as in the rest of the world) is largely populated by male, cis, heterosexual, upper-class, upper-caste engineers, who are often far removed from the socio-economic contexts of the variously marginalised communities they design these algorithmic tools for (Sudhakar 2019). As a consequence, such applications reproduce world views and ideologies that are harmful to vulnerable and underserved communities, and that end up reinforcing existing prejudices about them (Hart 2017). Despite scholars and activists repeatedly pointing out that AI applications are sociotechnical systems, the teams developing these tools are almost never interdisciplinary in nature, having no sociologists or anthropologists amongst them, who would otherwise be better positioned to understand the impact of these technologies on those they are designed to benefit. For example, among those I interviewed, technology companies developing AI-enabled diagnostic tools had no clarity on even basic requirements, such as whether consent forms were being given to patients for the use of the automated tools in diagnosing them: "For the validation we're doing, we give the consent forms to the [healthcare] partners. A lot of the hospitals did not care about it. I don't know what exactly they told the patients ... The partners had their own processes ... I don't know how seriously they take it.... I never saw the practice in action" (Gaurav, AI data scientist).[4]

Moreover, the intended beneficiaries are almost always the sick and poor, whose disadvantaged socio-economic position marks them out for the collection and processing of medical data. Yet, I argue, their lives are not considered sources of knowledge production while designing AI applications. The sick and poor in the country end up being merely the beneficiaries of outreach programmes that lack any understanding of their lived realities. The demand for automated diagnosis in place of manual diagnosis is not coming from the bottom up, that is, from the sick and poor; it is being imposed top-down by technology companies based in the global North, in collaboration with private healthcare providers in India, and supported by a technocratic state. My ethnography also showed that no effort is being made to understand their experiences of using these automated tools; that is, their choice, consent, privacy, and preferences are neither asked for nor taken into account.

I now juxtapose the claims of “AI for social good” along the axis of the political, within which I place arguments from feminist scholarship to examine technology as a political institution, and the options available to negotiate with its power. Doing so necessarily means paying closer attention to the contexts and experiences of those whose lives are the targets of technological interventions. It means examining the epistemological, and forwarding a case “for situated knowledges, for experience as the situation of knowledge-making, and the possible movement from here to the articulation of a standpoint epistemology” (Achuthan 2011). Feminist standpoint epistemology (Harding 1992) claims that some ways of knowing the world are inherently better than others; the starting point for knowledge about society should thus come from the lives of marginalised persons, as this would provide a better understanding of the world. This is because some forms of knowledge are accessible only to marginalised communities, as a direct result of their experiences of oppression. Dominant communities are therefore epistemologically disadvantaged in producing this knowledge, which remains invisible to them owing to their privileged position in a society stratified by gender, sexuality, caste, class, and other such divisions. Building upon this theorisation, I argue that by ignoring the lived realities of the sick and poor in the design of AI systems, the dominant-group narrative of “AI for social good” produces limited knowledge, and is thus unable to effectively solve the social development challenges it has set out to address.
Further, as discussed in the following analysis, the present models and strategies of deploying machine learning in the Indian healthcare industry not only end up expropriating the lived experiences of the sick and poor, for whom they generate no value, but also actively burden them with risks arising from the unforeseen results of experimental technologies.

Based on findings from my ethnographic study, I analyse three main factors that make India an appealing option for “data” collection in the domain of healthcare, and for designing AI-powered diagnosis systems. These are: the diversity of Indian populations; the reduced costs of collecting and processing data in India; and an unregulated, predatory healthcare ecosystem, coupled with the absence of a regulatory framework for AI in the country. I elaborate on each of these points below.

First, the diversity of Indian populations makes it attractive for collecting "data" (medical records of patients) to train machine learning algorithms. During an ethnographic visit to a national hospital chain, I was informed by the head of a medical team for a leading AI project that "India is a hub for Diabetic Retinopathy. You don’t see these variations of Diabetic Retinopathy patients back in the US. They [technology collaborator] wanted to train the software with the numbers [that] we have [in] our hospital" (Naresh, Hospital Director).[5] India has historically been considered suitable for pharmaceutical drug trials due to the “legions of poor, illiterate test subjects that are willing to try out new drugs” (Al Jazeera 2011). Rajan (2005) has similarly analysed that “India’s cross-section of populations covers the spectrum of the world’s populations...The Indian state... acts (through the company it seeds) as a full-blown market agent in making Indian populations available to Western corporate interests, as experimental subjects."

Second, patient treatment in the hospitals I visited in southern India has been combined with experimental processes for the collection of “data” to train AI models for automated diagnosis. This reduces the cost of data collection for technology companies as well as healthcare providers, as they do not have to incur additional expenses for designing the AI algorithms. There is a clear conflict of interest between the technology company’s profit-driven, market-oriented self-interest and the best interests of the patient, both of which the medical professional must cater to. This raises serious ethical concerns, because the already marginalised sick and poor have a reduced ability to bargain, and thus to demand effective healthcare, within a predatory medical-industrial complex. The result is the favouring of market-driven private interests over patient interests.

For example, I observed that for the diagnosis of a disease, patients in smaller towns and villages in Tamil Nadu are being scanned using AI-powered devices when they come in for a medical check-up. This medical “data” from the scans is then fed into a database containing such data from thousands of other patients, upon which an AI model (developed by a leading global technology company) is made to run to diagnose whether the scanned images show anomalies corresponding to the presence of disease in the patient. Looking at the scans, a computer technician / technical image grader at the hospital then compares the automated diagnosis with her own manual diagnosis. If the diagnosis matches, the patient is given the final diagnosis, and if there is a mismatch, a doctor is consulted. In both cases, the feedback about the automated diagnosis is fed back into the AI system as part of the clinical validation, so it can “learn” to predict better outcomes next time. This is how the machine "learns" to make an automated diagnosis. At the same time, patients I spoke to (mostly coming from agricultural, labour-class contexts) are not properly informed about this process, and they are unaware of how the diagnosis is made. The resulting lack of opposition by the patients is considered to be consent to the procedure. Though patients are given a consent form, I observed that most of them could not read or write, and in distress, simply accepted any procedure being asked of them for the diagnosis.
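The screening loop just described can be summarised in a schematic sketch. The function and variable names below are my own invention and do not correspond to any real deployed system; the point is only the decision flow, and the fact that every case, matched or mismatched, is fed back to refine the model:

```python
# Schematic sketch of the screening workflow described above: an automated
# diagnosis is compared with a human grader's reading; a doctor is consulted
# only on a mismatch; and every outcome is logged back for "clinical
# validation". All names here are hypothetical.

def final_diagnosis(ai_reading, grader_reading, consult_doctor, validation_log):
    """Return the diagnosis given to the patient and record feedback."""
    if ai_reading == grader_reading:
        outcome = ai_reading           # agreement: patient receives this result
    else:
        outcome = consult_doctor()     # mismatch: the doctor decides
    # Either way, the comparison is fed back so the model can be refined.
    validation_log.append((ai_reading, grader_reading, outcome))
    return outcome

log = []
# Case 1: the AI and the grader agree that the scan is normal.
print(final_diagnosis("normal", "normal", lambda: "normal", log))
# Case 2: they disagree; a (hypothetical) doctor rules the scan abnormal.
print(final_diagnosis("abnormal", "normal", lambda: "abnormal", log))
```

What the sketch makes visible is that the patient's case contributes training feedback in every branch of the flow, whether or not the patient understands, or has meaningfully consented to, that use of their data.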

This also connects to the third factor for why these tools developed by global technology companies are often trained and tested in India: the weak regulatory landscape, which lends itself to what can be termed “low rights environments where there are few expectations of political accountability and transparency” (Eubanks 2018). There is currently no policy in the country for regulating the deployment of AI or AI-enabled automation systems, and any progress on implementing one is stalled by politically motivated differences over concerns such as which state agency would be responsible for implementing the plans (Agarwal and Sharma 2019). In the midst of this, we find the sick and poor of the country, on whose bodies these experimental technologies are already being tested in the absence of any accountability and regulatory frameworks. They have no real agency in opting in or out of experimental AI trials because their alternative is having no medical care at all, which also nullifies any informed consent.

Purendra Prasad has argued that, historically in the country, health has not been a rational choice, but rather an imposed preference for the sick and poor (Prasad 2007). This is a reality that all stakeholders in the AI-healthcare ecosystem are not only well aware of, but in fact benefit from, and they therefore actively work towards maintaining this status quo. For example, public healthcare activists I spoke to have been demanding that clinical trial facilities be physically separated from treatment facilities in healthcare centres (under the KPME [Amendment] Act), but these demands have not been taken on board by state committees due to strong pushback from private interests (Krishna 2017).

The state, thus, plays a foundational role in producing the conditions under which the country’s sick and poor emerge as an available experimental “data source.” For example, one can observe state support for industry-led AI-healthcare technologies in a recent collaboration of the Telangana government with Microsoft to implement AI-based eyecare screening as part of the state’s Rashtriya Bal Swasthya Karyakram programme under the National Health Mission (Nagpal 2017). Such collaborations are also being observed in other domains, such as AI-enabled education to predict school dropouts (Srivas 2016) and AI-enabled agriculture to predict crop yields (Press Information Bureau 2018). The sociolegal–technical conditions this creates not only fail to deliver the proposed benefits of these technologies to the sick and poor, but have an adverse, disparate impact on their living conditions. As has been analysed in this essay, this happens through two means: one, the use of the medical data of the sick and poor to train proprietary AI algorithms that generate no value for them, and two, the burdening of the sick and poor with risks potentially arising from the unforeseen results of experimental technologies.


"I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." (Maslow 1966) 

A cognitive bias commonly termed the “law of the hammer” describes an over-reliance on a familiar tool (Maslow 1966). I argue that the “AI hype” the Indian state has endorsed falls into a similar trap: all involved stakeholders view AI as a “tool,” and all socio-developmental problems as “nails” to be fixed with it. This view, or cognitive bias, is perhaps best captured in the corporate stakeholders’ slogan “AI for social good,” which indicates both the moral promise of AI (to deliver social good) and its value proposition (imagining the world of social good as a market for the AI being developed). As I have analysed in this essay, these technology-centred solutions do not end up solving the identified problems, but rather exacerbate conditions for marginalised communities, while reaping profits for private corporate stakeholders within a market-driven economy.

It is, thus, at this juncture that, as feminists and sociologists, we must pause to reframe the questions we are currently posing to AI technologies: instead of asking, “How can AI solve this problem?” it is more worthwhile to ask, “What problems can AI solve?” This has at least two advantages: one, it focuses us on using AI-based technologies only for those problems that are within the scope of technology to solve; and two, it incentivises us to find need-based solutions, as opposed to market-driven capitalist solutions, to identified problems.

Our methodology for such a reframing is also critical to consider. If we do “target” marginalised communities when building technology-based solutions, we must ensure that the approaches we use are participatory and interactive, taking into account the needs and experiences of those most impacted. We must centre the grassroots needs of underserved communities and begin knowledge-building from their experiences in a bottom-up, reflexive manner. When technology developers and designers draw upon the experiences of marginalised communities, they should ensure that their applications increase the agency of those communities. This can be done only when AI applications are built by diverse teams within inclusive institutions (including, but not limited to, women and social scientists), and when the prejudices that go into developing these tools are challenged through direct interaction with the impacted communities. Shining a light on the grassroots implications of automated decision-making, especially on the lives of “targeted” communities, will rob it of its power to naturalise our social conventions about technologies, and help us move towards a situated politics of machine learning in Indian healthcare.

The author would like to extend her sincere gratitude to the Advanced Centre for Women’s Studies, Tata Institute of Social Sciences (TISS), Mumbai, India, for providing her with institutional support to carry out the fieldwork for this research as part of her master’s thesis during her MA in women’s studies (2017–2019). A part of her work on this research was also undertaken at the Centre for Internet and Society (CIS), India, and was supported by the Big Data for Development network funded by International Development Research Centre (IDRC), Canada. The author is also indebted to Asha Achuthan, whose guidance and constructive criticism during her research have shaped this enquiry in a significant way. The author thanks Akhila Vasan and Vijayakumar Seethappa, public health activists with Karnataka Janaarogya Chaluvali, for sharing their valuable and deeply insightful perspectives with her in an interview dated 9 May 2018. Further, the author is also thankful to all her research participants for their cooperation, and grateful to Sumandro Chattapadhyay, Director, the Centre for Internet and Society (CIS), for reviewing this essay.
