
NITI Aayog’s Health Index

Missing the Rigour

Subir K Kole (skole@ipeglobal.com) is Advisor–Monitoring, Evaluation, and Learning at IPE Global, New Delhi, and a member of the Executive Board of the East–West Center Association, Honolulu, Hawaii.

 

Based on a critical review of the NITI Aayog’s recently published “Healthy States, Progressive India,” it is argued that the report provides only a superficial insight into overall health attainment. Much deeper and more careful analysis is required to unpack the complexity and varied contexts of Indian states, let alone to rank them on health attainment. The method of calculating the index compromises scientific rigour, and the inferences drawn are highly misleading.

 

Once again, the NITI Aayog has come up with its Health Index for Indian states—the first in the series was published a year ago (NITI Aayog 2018). The latest Health Index, released in late June and titled “Healthy States, Progressive India,” provides a snapshot of the current status of the health sector across Indian states and union territories (UTs) (NITI Aayog 2019). For the most part, the report is similar to the United Nations Development Programme’s (UNDP) Human Development Report for Indian States (UNDP 2011). The report will come in handy if one wants to understand how the states are doing in public health at any given point. However, to use it to assess whether a state is making progress or not (as this report does) would be erroneous, as no time-series data is available.

Certainly, the NITI Aayog’s attempt to construct a health index is not the first of its kind. As early as 2016, three professors at the Indian Institute of Management Ahmedabad made the first attempt at constructing a health index for Indian states (Sinha et al 2016). Two years later, neither the NITI Aayog’s current report nor its first one acknowledges, or even consults, those pioneers in the field. As all research builds on an existing knowledge base, the reasons for this oversight are anyone’s guess.

Since its publication, the report has received a lot of flak from the media. Critics have openly questioned the robustness of the data and the choice of indicators used for constructing the index. While I do not intend to repeat any of those criticisms here, my broader contentions centre on three important dimensions of the report.

On Methodology

The report uses the same methodology as the UNDP’s Human Development Index (HDI) for computing the health attainment of individual states. To estimate which rung of the development ladder a state stands on, every indicator is scaled by calculating the distance a state needs to travel, computed as (actual value - minimum) / (maximum - minimum); for negative indicators, this scaling is reversed. A composite index is then calculated as a simple weighted average of all scaled indicators for both years, 2016 and 2017. The change in the composite index score of each state between the two years shows its incremental progress.
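To make this concrete, the following is a minimal Python sketch of the scaling and weighting described above; the indicator names, bounds, and weights used here are illustrative assumptions, not the report’s actual values.

    # Minimal sketch of the HDI-style scaling and weighting described above.
    # Indicator names, bounds, and weights are illustrative, not the report's.

    def scale(actual, minimum, maximum, negative=False):
        # Scale one indicator to 0-1 as (actual - min) / (max - min);
        # reverse the scale for negative indicators.
        score = (actual - minimum) / (maximum - minimum)
        return 1 - score if negative else score

    def composite(scored):
        # Simple weighted average of (scaled value, weight) pairs.
        total_weight = sum(weight for _, weight in scored)
        return sum(value * weight for value, weight in scored) / total_weight

    # Hypothetical state with two indicators: neonatal mortality rate
    # (a negative indicator) and full immunisation coverage (a positive one).
    nmr = scale(actual=25, minimum=10, maximum=40, negative=True)  # 0.50
    imm = scale(actual=80, minimum=50, maximum=95)                 # 0.67
    index_2016 = composite([(nmr, 0.7), (imm, 0.3)])
    # Incremental progress for a state = index_2017 - index_2016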

A total of 23 indicators were used to construct the index, divided into three broad domains: (i) health outcomes (10 indicators); (ii) governance and information (three indicators); and (iii) key inputs/processes (10 indicators). For the larger states, nearly 70% of the weight is given to health outcomes, 12% to governance and information, and 19% to key inputs and processes (for smaller states and UTs, these percentages are even lower). Since health outcomes are themselves a function of a state’s socio-economic, political, cultural, and environmental processes, assigning lower weights to inputs and processes distorts our understanding of attainment.

Barring four of the 23 indicators, the data for constructing the index come from departmental reporting, the Health Management Information System (HMIS), and the state-run vertical disease control programme (the Revised National Tuberculosis Control Programme [RNTCP]). As many as five indicators with relatively higher weightages come from the HMIS, which is notorious for over-reporting, misreporting, and time lags. The report itself acknowledges that “There are huge disparities in the data integrity measures across states and UTs” (NITI Aayog 2019: 50). For example, compared to the National Family Health Survey (NFHS)–4 (2015–16) estimates, Jharkhand over-reported antenatal care (ANC) registrations within the first trimester by 53.5 percentage points, West Bengal by 42.4, Chhattisgarh by 25.9, Rajasthan by 18.4, and so on. This means that the HMIS figure for ANC within the first trimester in Rajasthan works out to 63% + 18.4% = 81.4%. Such over-reporting is similar across indicators and across states. Chhattisgarh, Madhya Pradesh (MP), Andhra Pradesh (AP), and Uttar Pradesh (UP) over-reported institutional deliveries by 22–36 percentage points (NITI Aayog 2019: 50–51). This puts institutional deliveries in AP at 115% (91.5% from the NFHS–4, plus 23.5% from the HMIS), and in UP at over 104% (67.8% from the NFHS–4, plus 36.6% from the HMIS).
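The arithmetic behind these figures can be checked in a few lines; the sketch below simply adds the over-reporting margin (in percentage points) to the NFHS–4 estimate and flags any implied coverage above 100%, using only the values quoted above.

    # Implied HMIS coverage = NFHS-4 estimate + over-reporting margin
    # (percentage points). Figures are those quoted in the text above.
    nfhs4 = {
        "Rajasthan, ANC in first trimester": 63.0,
        "AP, institutional deliveries": 91.5,
        "UP, institutional deliveries": 67.8,
    }
    over_reported_by = {
        "Rajasthan, ANC in first trimester": 18.4,
        "AP, institutional deliveries": 23.5,
        "UP, institutional deliveries": 36.6,
    }

    for indicator, baseline in nfhs4.items():
        implied = baseline + over_reported_by[indicator]
        flag = "  <-- exceeds 100%" if implied > 100 else ""
        print(f"{indicator}: {implied:.1f}%{flag}")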

Problems with Indicators

Crucial indicators of health outcomes are missing or were not considered for constructing the index. For example, indicators widely used as “health outcomes” in the World Health Statistics (WHO various years), such as life expectancy at birth, maternal mortality, disability-adjusted life years, and the incidence of malaria, were not considered. Meanwhile, redundant indicators such as the sex ratio at birth (SRB) are included in constructing the index. The SRB is an outcome indicator for gender disparity, but certainly not for health attainment. The natural SRB is 1.05 in any given population, because nature balances out the higher mortality among male babies by producing 105 male babies for every 100 female babies. Thus, even if a population suffers from poor health, this natural ratio will not alter without human intervention (say, sex-selective abortion, which happens when a population has a son preference rooted in patriarchy, gender power relations, and the status it accords to its women). The SRB, therefore, can never be an “outcome” of health.

Similarly, the health of a population cannot be judged based on its total fertility rate (TFR). The TFR of a population can be relatively low, or even below the replacement level of fertility (2.1), and yet that population may suffer from overall bad health because of socio-economic, cultural, or environmental factors. The report misses these simple points and assigns relatively higher weightages to these indicators while constructing the index. Meanwhile, the percentage of pregnant women with anaemia and the percentage of women with low body mass index (BMI), which are important health outcomes as well as determinants of low birthweight (LBW), were not considered.

There is considerable multicollinearity among the indicators. Tuberculosis (TB) and HIV are correlated, and there are two similar indicators for TB (notification rate and treatment success rate), when the prevalence of TB in a population alone would suffice as a health outcome indicator. ANC within the first trimester is taken as an input, but dropping out from ANC (full ANC, or 3-ANC), which ultimately determines LBW (Kandhasamy and Singh 2015; Metgud et al 2013), was not considered.

Similarly, the health input domain does not consider crucial indicators. For example, indicators widely used as “inputs” in the World Health Statistics, such as doctors per thousand population, hospital beds per thousand population, nursing and midwifery personnel per thousand population, people with potable drinking water supply, per capita government spending on health, percentage of people with health insurance, and percentage of population using clean fuel (which determines LBW), were not considered. The choice of indicators, therefore, is not one of feasibility, but a political one. Since government spending on health, clean fuel, insurance, water supply, etc, is politically sensitive and comes with a lot of accountability, none of them was considered.

The report uses the proportion of vacant healthcare provider positions and the proportion of staff with e-payslips as “inputs.” This is methodologically flawed. A state may have absolutely no vacant positions for auxiliary nurse midwives (ANMs) and staff nurses; Rajasthan, say, may have all 20,000 of its sanctioned ANM positions filled. But does either of these indicators tell us how many ANMs are needed to serve the entire population of Rajasthan in the first place (the population to ANM ratio, or the population to staff nurse ratio)? Certainly not. The current indicators capture neither the overcrowding in public health facilities nor access to healthcare. While other standard reports use these ratios as input indicators, this report does not.

Finally, how can one even think of health without economic or income components, especially when income determines access to healthcare? Out-of-pocket expenses and per capita health expenditure, which are widely used in constructing indices, were not considered in this report.

On Inferences Drawn

The time window in the report for measuring incremental change is extremely small: just one year, with 2015–16 as the base year and 2017–18 as the reference year. Most health outcomes do not show a direct intervention-to-response change within one year. Two-time-point data makes sense when the two points are separated by, say, more than five years. Sadly, the report derives conclusions based on a one-year change.

The conclusions drawn on the basis of a “single year” of incremental performance must be read with caution. One cannot simply deduce an “improvement” from a single year’s data (and conclude that a state has “most improved” or “least improved,” as this report does). For example, an upward move of 6 points for Rajasthan from 2016 to 2017 leads the report to conclude that Rajasthan has “most improved.” What we do not know is the margin of error in this estimate, or whether the 6-point increase is simply the result of data or sampling error. To conclude with some degree of confidence that a state is improving or declining, we need at least three consecutive data points. The inference, therefore, is grossly misleading.
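To illustrate the point (the report publishes no standard errors, so the margin of error assumed below is purely hypothetical), the uncertainty on a year-on-year difference is larger than that on either year’s score, which is why a 6-point move cannot, on its own, be read as improvement.

    # Illustrative only: a +/-4 point margin of error per year is an
    # assumption; the report publishes no standard errors.
    se_single_year = 4.0
    # Error of the difference (score_2017 - score_2016), assuming independent errors
    se_difference = (se_single_year ** 2 + se_single_year ** 2) ** 0.5
    observed_change = 6.0  # Rajasthan's reported gain in index points
    print(f"Observed change: {observed_change} points; "
          f"error on the change: {se_difference:.2f} points")
    # ~5.66 points of error: a 6-point gain is barely distinguishable from noise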

The decline in LBW is the most contentious issue (NITI Aayog 2019: 43). AP, Jharkhand, Telangana, Uttarakhand, Bihar, and Jammu and Kashmir (J&K) show an LBW prevalence similar to that of the most advanced industrialised countries, such as the United States (US), Japan, and the Western European and Nordic countries (OECD 2019; UNICEF 2015). In any population, owing to foetal and maternal genetic factors, an LBW prevalence of 4%–5% is the norm (Lunde et al 2007). Of the 192 countries for which data was available, only three reported an LBW prevalence below 4% (OECD 2019; UNICEF 2015). How do poor, impoverished Indian states drop below that threshold? Surprisingly, nowhere does the report acknowledge this fact or discuss the quality of birthweight reporting. Instead, it goes on to argue that there has been a 40% reduction in LBW in Rajasthan in the last one year. When nothing that determines the prevalence of LBW in a population has moved in the positive direction (level of anaemia in pregnancy, low BMI, 4-ANC check-ups, full ANC for the management of high risk, iron and folic acid and calcium uptake, inter-pregnancy interval, and so on), how is it that LBW has suddenly dropped by 40% in one year?

The report says,

Rajasthan and Haryana attributed this decline to measures such as early registration of pregnancies, early detection and management of high-risk pregnancies, regular monitoring of HMIS data. (NITI Aayog 2019: 43)

This can hardly be true. We have no way of verifying an improvement in early registration of pregnancies within one year. The report uses HMIS data for this indicator, which is overinflated because early registration is highly incentivised. Nor does this indicator capture whether, after registration, pregnant women drop out and do not complete their ANC. Therefore, “full ANC” or “4-ANC” should have been considered for constructing the index. But the states’ performance on these two indicators is too low: according to NFHS–4 data, only 9% of women in Rajasthan received full ANC and 38% received 4-ANC. This data is also available in the HMIS, but the NITI Aayog deliberately chose an overinflated indicator over better-suited ones. Ultimately, it is full ANC or 4-ANC that determines the management of high-risk cases, because of the repeated contacts with health workers it entails. If that has not improved in the first place, how can the report claim that early detection and management of high-risk pregnancies have improved, leading to a 40% reduction in LBW?

The Verdict

To understand where the states currently stand in terms of their health sector, and how they rank at a given point, the index comes in handy. But the findings must be read with caution. The methodology and indicators are not in tune with international standards (the WHO’s World Health Statistics), and there is confusion and overlap across domains (indicators reflecting governance and inputs overlap). Further, crucial health inputs were left out, whether for their political sensitivity or otherwise. The data source remains the notorious HMIS, known for misreporting, over-reporting, and time lags.

To assess the quality and integrity of HMIS data, the author undertook an independent analysis of 13,000 records of pregnant women obtained from the Government of Rajasthan’s Pregnancy, Child Tracking and Health Services Management System (PCTS) database, from which the LBW figures reported in the HMIS are sourced. The pregnancy records were obtained, using unique PCTS-IDs, for all women who became pregnant between 1 June 2018 and 31 December 2018 in six blocks of Udaipur district in Rajasthan. The analysis shows that the mean weight gain of these 13,000 women over the entire pregnancy was 3.8 kg, whereas the mean birthweight of their newborns was 2.6 kg. Since pregnancy weight gain is a strong predictor of birthweight (Tela et al 2019; Esimai and Ojofeitimi 2014), such a combination of outcomes is theoretically impossible. Prospective longitudinal cohort studies suggest that the ideal mean pregnancy weight gain for normal-BMI Indian women is 10.36 kg (Ismail et al 2016). If the minimum weight gain standard is not attained, as the PCTS data suggest, delivering a normal-birthweight baby defies nature’s logic.
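As a rough sketch of how such a consistency check can be run on an extract of these records, the code below assumes a CSV export with hypothetical column names weight_gain_kg and birthweight_kg; the actual PCTS field names will differ.

    # Consistency check on an extract of PCTS pregnancy records, assuming a
    # CSV export with hypothetical columns 'weight_gain_kg' and 'birthweight_kg'.
    import csv

    IDEAL_MEAN_GAIN_KG = 10.36  # standard for normal-BMI women (Ismail et al 2016)

    def mean(values):
        return sum(values) / len(values) if values else float("nan")

    with open("pcts_records.csv", newline="") as f:
        rows = [r for r in csv.DictReader(f)
                if r.get("weight_gain_kg") and r.get("birthweight_kg")]

    gains = [float(r["weight_gain_kg"]) for r in rows]
    births = [float(r["birthweight_kg"]) for r in rows]

    print(f"Records analysed:           {len(rows)}")
    print(f"Mean pregnancy weight gain: {mean(gains):.1f} kg "
          f"(cohort standard: {IDEAL_MEAN_GAIN_KG} kg)")
    print(f"Mean reported birthweight:  {mean(births):.1f} kg")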

These data and findings therefore demonstrate gross inconsistencies in both the collection and the reporting of birthweight data. They also imply that facility-based birthweight reporting has a propensity to record “normal” birthweights regardless of the mother’s weight gain. Since reducing LBW is now an important target set by the National Nutrition Mission, the government should explore digital solutions and the real-time capture of birthweight.

References

Esimai, O A and E Ojofeitimi (2014): “Pattern and Determinants of Gestational Weight Gain an Important Predictor of Infant Birth Weight in a Developing Country,” Global Journal of Health Science, Vol 6, No 4, pp 148–54, doi: 10.5539/gjhs.v6n4p148.

Ismail, L C, D C Bishop, R Pang, E O Ohuma, Gilberto Kac, Barbara Abrams et al (2016): “Gestational Weight Gain Standards Based on Women Enrolled in the Fetal Growth Longitudinal Study of the INTERGROWTH-21st Project: A Prospective Longitudinal Cohort Study,” BMJ, Vol 352, i555, doi: 10.1136/bmj.i555.

Kandhasamy, K and Z Singh (2015): “Determinants of Low Birth Weight in a Rural Area of Tamil Nadu, India: A Case–Control Study,” International Journal of Medical Sciences and Public Health, Vol 4, No 3, pp 376–80.

Lunde, Astrid, Kari Klungsoyr Melve, Hakon K Gjessing, Rolv Skjaerven and Lorentz M Irgens (2007): “Genetic and Environmental Influences on Birth Weight, Birth Length, Head Circumference, and Gestational Age by Use of Population-based Parent-Offspring Data,” American Journal of Epidemiology, Vol 165, No 7, pp 734–41.

Metgud, Chandra S, Vijaya A Naik and Maheshwar D Mal (2013): “Factors Affecting Birth Weight of a Newborn—A Community Based Study in Rural Karnataka, India,” PLoS ONE, Vol 7, No 7, e40040.

NITI Aayog (2018): “Healthy States, Progressive India: Report on the Ranks of States and Union Territories,” NITI Aayog and Ministry of Health and Family Welfare, Government of India, New Delhi.

— (2019): “Healthy States, Progressive India: Report on the Ranks of States and Union Territories,” NITI Aayog and Ministry of Health and Family Welfare, Government of India, New Delhi, viewed on 4 October 2019, http://social.niti.gov.in/uploads/sample/health_index_report.pdf.

OECD (2019): “Low Birth Weight,” OECD Family Database (updated July 2019), Organisation for Economic Co-operation and Development, viewed on 1 July 2019, https://www.oecd.org/els/family/CO_1_3_Low_birth_weight.pdf.

Sinha, P K, A Sahay and S Koul (2016): “Development of a Health Index of Indian States,” Indian Institute of Management Ahmedabad.

Tela, F G, A M Bezabih and A K Adhanu (2019): “Effect of Pregnancy Weight Gain on Infant Birth Weight among Mothers Attending Antenatal Care from Private Clinics in Mekelle City, Northern Ethiopia: A Facility Based Follow-up Study,” PLoS ONE, Vol 14, No 3, e0212424, doi: 10.1371/journal.pone.0212424.

UNDP (2011): “Inequity Adjusted Human Development Report for Indian States,” United Nations Development Programme, New Delhi.

UNICEF (2015): “A Good Start in Life Begins in the Womb: Low Birthweight Prevalence, by Country and Region,” United Nations Children’s Fund, viewed on 1 July 2019, https://data.unicef.org/topic/nutrition/low-birthweight/.

WHO (various years): World Health Statistics, World Health Organization, Geneva.

