Health sector regulation in Kenya increased the compliance of health facilities with a checklist of patient safety measures without any increase in prices or decrease in utilisation
Big Picture
Unsafe medical care causes an estimated 2.6 million deaths in low- and middle-income countries (LMICs) every year, prompting passionate pleas for greater government stewardship and regulation (National Academies of Sciences, Engineering, and Medicine 2018). But given the challenges in 'randomising the law', we have frighteningly little evidence on how such regulation causally affects performance. Even for widely advocated core regulatory practices such as minimum quality standards, a systematic review reports that there are no randomised controlled trials or high-quality studies to date (Flodgren et al. 2016).
Our research (Bedoya, Das and Dolinger 2023) addresses this gap. We worked with regulators in Kenya over five years to design and implement a new regulation that sought to improve patient safety by imposing minimum quality standards (MQS). We then received permission from the cabinet to experimentally implement this regulation for one year. During this year, all facilities in treated—but not control—areas were included in the new regime with ramped up inspections and sanctions. In control areas, the old regulatory regime remained in place, with inspections responding to any specific complaints (and in our data, the inspection rate was close to zero). Our experimental design of a market-level randomisation specifically allowed us to estimate the causal impacts of this regulation in the presence of spillovers and substantial facility closures resulting from the regulation itself.
We find that the regulation—realised through its de jure aspects of inspections, warnings, and sanctions—improved patient safety scores on the inspection checklist, without any increase in prices or decline in utilisation. Rather than reflecting the closure of poorly performing facilities (which was frequent), the increased compliance with the safety checklist reflects improvements in public facilities and in private facilities across the quality spectrum, with larger improvements among better performers. We provide suggestive evidence that the channel of improvement for private facilities was a reduction in market power rather than an increase in information.
Context and intervention
Healthcare in Kenya is delivered through both the public and the private sector. The public sector is tax financed, and facility-gate prices are nominal and administratively determined. At the time of our intervention, there was little formal health insurance, so private facilities were financed through prices paid by patients at the time of their visit. There is no gate-keeping, so patients are free to visit any facility as long as they can afford to do so. Finally, regulation is a federal responsibility, with different parts of a facility regulated (at the time of the intervention) by nine boards and councils: the medical board regulates the doctors, the pharmacy board the pharmacy, and so on.
The regulation: The new regulation that we evaluate introduced checklist-based inspections carried out by dedicated government inspectors, replacing a system of ad-hoc inspections, typically in response to a complaint. Based on the results of the inspections, facilities that did not have a license to operate were reported for closure, and facilities that were not compliant with the standards were given time to improve, with more time given to better performing facilities. Facilities did not receive any financial or in-kind support as part of the inspections, and the full checklist was made available to both treated and control facilities. We evaluate the one-year impact of the regulation, during which facilities were closed for lack of licenses, but not for lack of improvement. Lack of licensure is itself a massive regulatory issue in LMICs: in India, 70% of primary care private facilities are unlicensed (Das et al. 2022) and, in our sample, 60% of all private facilities lacked a valid license at the time of inspection.
The Checklist: The Joint Health Inspection Checklist (JHIC) used for inspections has 471 items that cover different parts of the facility (the full checklist would be administered only in highly specialised hospitals with facilities such as a mortuary or radiology). The items in the JHIC differ in how difficult/costly they are to remediate (from printing and pasting standard operating procedures to expensive investments in infrastructure, such as a sluice in the labour room) but the regulatory boards did not want to weight the items by difficulty or cost of remediation, preferring simplicity to a more complex scoring system.
Experimental Design: In three Kenyan counties we conducted a census, during which we counted 1348 health facilities. We clustered these facilities into 273 different 'markets', defining a market as a geographically contiguous area with each facility within 4km of the centroid. The randomisation was then carried out at the level of markets, with 87 markets assigned to control and 186 markets assigned to one of two treatments—regular inspections (90 markets) and regular inspections + information (96 markets), whereby a report card with inspection results was pasted on the door. This "market-level randomisation," developed by Andrabi, Das and Khwaja (2017), allows us to fully address the movement of patients across facilities, as well as facility exits and entries, when estimating the causal impact of the regulation at the market level.
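To make the market-level design concrete, here is a minimal Python sketch of the two steps described above: grouping facilities into markets with a 4km radius, and randomising whole markets across the three arms. The greedy clustering rule, the facility coordinates, and all function names are illustrative assumptions, not the study's actual procedure.

```python
import math
import random

EARTH_RADIUS_KM = 6371.0

def dist_km(a, b):
    """Equirectangular distance approximation between two (lat, lon)
    points in degrees; adequate at a ~4 km scale."""
    lat1, lon1 = map(math.radians, a)
    lat2, lon2 = map(math.radians, b)
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2)
    return EARTH_RADIUS_KM * math.hypot(x, lat2 - lat1)

def cluster_into_markets(facilities, radius_km=4.0):
    """Greedy clustering: a facility joins the first market whose seed
    facility is within radius_km; otherwise it starts a new market."""
    markets = []
    for fac in facilities:
        for m in markets:
            if dist_km(fac, m[0]) <= radius_km:
                m.append(fac)
                break
        else:
            markets.append([fac])
    return markets

def assign_arms(n_markets, seed=42):
    """Randomise whole markets across the three arms in the proportions
    reported in the study (87 / 90 / 96 out of 273 markets)."""
    arms = (["control"] * round(n_markets * 87 / 273)
            + ["inspections"] * round(n_markets * 90 / 273))
    arms += ["inspections+info"] * (n_markets - len(arms))
    random.Random(seed).shuffle(arms)
    return arms
```

Because the unit of randomisation is the market rather than the facility, facility-level exit, entry, and patient movement within a market do not contaminate the treatment-control comparison.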
Facts
Five important facts about the environment contextualise our results.
Fact #1 Considerable Competition: Most facilities are in markets with considerable competition. In our sample, 79% of all health facilities were located in markets with 4 or more healthcare providers, accounting for 70% of all patient visits (see Figure 1 for examples). In contrast, only 11% of patient visits were to facilities in markets with a single provider.
Figure 1: Most health facilities operate in markets with significant competition
- Markets with a single provider: 7% of all health facilities and 11% of all outpatient visits
- Small markets with 2-3 providers: 15% of all health facilities and 19% of all outpatient visits
- Large markets with 4 or more providers: 79% of all facilities and 70% of all outpatient visits
Fact #2 Lack of Licensure: Most (60%) private facilities in the treatment group did not have appropriate licenses to operate at the first inspection: 34% had no license at all, while another 26% had an expired license. At baseline, patient safety scores averaged 34% of the maximum JHIC score in private facilities and 41% in public facilities.
Fact #3 Exit and Entry: Health facilities exit and enter frequently. In control markets, 19% of all facilities exited and 15% entered in the year between the randomisation and the endline survey. This is substantially higher than the 8.2% reported by McKenzie and Paffhausen (2019) in their survey of establishments in LMICs.
Fact #4 Excess Capacity: There is massive excess capacity in all types of facilities. Private facilities see 11 patients a day on average (29% of all patients), while public facilities with multiple providers see 49 (71% of all patients). Substantial variation in caseload implies that the least busy 20% of private facilities see fewer than 2 patients a day, similar to other results from Sub-Saharan Africa and other LMICs (Daniels et al. 2023). It also implies that improvements for patients will require improvements in the facilities with larger patient loads.
Fact #5 Market Evolution and JHIC: Facilities with lower JHIC scores are less likely to be licensed, more likely to exit the market within one year, and have lower patient loads. In fact, exiting facilities in control areas account for only 3% of patients in the market. This is one indication that the JHIC score is linked to "real" outcomes in these contexts—and that patient demand must be correlated with some aspects of the score.
Results
The big takeaway is that the pooled intervention (inspections and inspections + information) increased JHIC scores by 5.2 percentage points, with larger increases in private (6.3 points) than in public facilities (2.8 points). These increases translate into improvements of 0.58 and 0.31 standard deviations (SDs) respectively (Figure 2). We find no evidence of an increase in fees in private facilities, no decline in the number of patients in treated markets, and a reallocation of patients away from private towards public facilities.
Figure 2: The regulation increased scores on the JHIC checklist for both public and private facilities
At first glance, these results seem consistent with survivorship bias—the worst facilities were closed down, patients shifted to public facilities, and JHIC scores increased, just as a standard regulatory story would suggest. Indeed, over the duration of the experiment there were 1,670 inspections, and 30% of all private facilities inspected in treated markets had to be closed due to lack of licensure, despite being given time by the inspection team to renew or obtain a license. Furthermore, lack of licensure and low baseline JHIC scores were both predictive of closure.
But in fact, the increase in JHIC scores cannot be attributed to survivorship bias. This is because unlicensed facilities and facilities with low JHIC scores were more likely to exit the market anyway in control markets and, by endline, the likelihood of facility exit was no different between treated and control markets. Further, most closed facilities re-opened without obtaining a license or were replaced with other unlicensed facilities. Using a decomposition approach, we formally show that 87% of the increase in the JHIC score is due to an increase in the scores of facilities that were open both at baseline and endline, with only 5% of the increase attributable to exits and 8% to patients relocating from lower- to higher-quality facilities.
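The logic of such a decomposition can be illustrated with a short sketch. The function below splits the change in the patient-share-weighted mean score into a within-survivor component, a reallocation component across survivors, and a residual attributed to exit and entry. It follows the spirit of standard decompositions of this kind, not necessarily the exact algebra used in the paper, and the facility data are made up.

```python
def decompose_score_change(baseline, endline):
    """Decompose the change in the patient-share-weighted mean score into
    (i) within-facility improvement among survivors, (ii) reallocation of
    patients across survivors, and (iii) a residual from exit and entry.
    `baseline` and `endline` map facility id -> (score, patient_share),
    with shares in each period summing to 1."""
    def weighted_mean(d):
        return sum(score * share for score, share in d.values())

    survivors = baseline.keys() & endline.keys()

    # Score gains of surviving facilities, at their baseline patient shares
    within = sum(baseline[f][1] * (endline[f][0] - baseline[f][0])
                 for f in survivors)
    # Patients shifting across survivors, valued at endline scores
    realloc = sum((endline[f][1] - baseline[f][1]) * endline[f][0]
                  for f in survivors)
    # Whatever remains is attributable to facilities that exited or entered
    exit_entry = (weighted_mean(endline) - weighted_mean(baseline)
                  - within - realloc)
    return {"within": within, "reallocation": realloc,
            "exit_entry": exit_entry}
```

By construction the three components sum to the total change in the market-level mean score, so each channel's share can be read off directly, as in the 87% / 5% / 8% split reported above.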
If we rule out survivorship bias, what explains the increase in JHIC scores? One obvious explanation is that the results could reflect better information for patients in the arm where we posted report cards on the clinics (and verified that they were not taken down). Strikingly, we find no difference in any of our main outcomes (JHIC scores, patient load, and prices) between the arm with report cards and the arm without. This does not imply that asymmetric information has no role to play in this market, but it does say that increased information was not a likely channel for our results.
Instead, one part of the reason for the increase in JHIC scores is an improvement in public facilities. The improvement we observe in the public sector could reflect the fact that health is now a devolved subject that has become the responsibility of each county, and counties now compete with each other. Counties already have quality teams, and the arrival of a federal regulator and a third-party report could have given facilities credible signals on where to focus.
The results also reflect improvements at the top of the JHIC distribution for private facilities. We show this using quantile treatment effects and by demonstrating that improvements in JHIC scores are larger for facilities with a lower likelihood of closure, which were at the top of the quality distribution at baseline. This could be because facilities at the top end of the market believed that they might be shut down if they did not improve, which is consistent with the framing of the sanctions, but is not consistent with rational expectations based on the empirical reality. It could also be due to a market-power channel, whereby facilities at the top of the distribution were forced to respond to competitive pressures as those below them improved. Such a market-power channel would also explain why we see improvements in private facilities without a commensurate increase in prices.
Can we say that this improved patient welfare?
Although we do not have data on health outcomes, we are optimistic about the intervention's usefulness. On the cost side, the regulatory intervention had larger impacts at a cost of $165 per inspection visit (~$500 per facility), compared to "thousands of dollars" per facility in other studies from Nigeria (Dunsch et al. 2022) and Tanzania (King et al. 2021) that used similar checklists to measure improvement. On the benefit side:
- We document large improvements in costly items like infrastructure and medical supplies, rather than in cheap changes (for instance, printing and pasting standard operating procedures) that could also have boosted JHIC scores substantially. There were real and lasting changes to the treated facilities, ranging from better waste disposal methods to the availability of personal protective equipment for staff.
- Next, we looked for studies that tied specific components of the checklist to demonstrated improvements in health outcomes and found sizeable increases in these components, ranging from 0.06 SD to 0.26 SD among facilities in treated markets (Figure 3).
- Finally, we benchmarked the improvements against willingness to pay for better performance on the JHIC. Using data from 11,000 patient exit surveys, we show that the gains in the JHIC score translate into implied gains in consumer welfare of $2.4 million annually, compared to costs of $242,000. The benefits outweigh the costs by an order of magnitude, suggesting that any bias in our willingness-to-pay estimate would have to be implausibly large to overturn the results.
Figure 3: Facilities invested in checklist components that have been shown to be associated with health outcomes
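The order-of-magnitude claim in the final bullet follows directly from the reported numbers; a minimal back-of-the-envelope check, using only the figures quoted in the text:

```python
# Figures as reported in the text above
annual_welfare_gain_usd = 2_400_000  # implied consumer-welfare gain per year
annual_cost_usd = 242_000            # programme cost

benefit_cost_ratio = annual_welfare_gain_usd / annual_cost_usd
print(round(benefit_cost_ratio, 1))  # ~9.9, i.e. roughly an order of magnitude
```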
Conclusions
Addressing low quality of care and poor patient safety is one of the most pressing problems in LMICs today. Despite frequent calls for better government regulation, there are currently no studies that allow us to actually determine whether and how better regulation can improve quality of care for patients. Using a market-level randomisation allowed us to estimate the causal impact of regulation in the face of facility closures and spillovers, both of which will be an integral part of most regulations.
Our results provide considerable grounds for optimism: MQS improved compliance with the inspection checklist, without any increase in price or decline in utilisation. These improvements reflected gains across the quality distribution, with larger increases among public facilities and better performing private facilities. That is good news, both because it is these facilities that cater to the bulk of patients and because it shows that even if regulators find it difficult to control the bottom of the market when entry costs are very low, MQS can still improve patient safety through knock-on effects that impact the full range of health facilities.
References
Andrabi, T, J Das, and A I Khwaja (2017), "Report cards: The impact of providing school and child test scores on educational markets." American Economic Review, 107(6): 1535-1563.
Bedoya, G, J Das, and A Dolinger (2023), "Randomized regulation: The impact of minimum quality standards on health markets." NBER Working Paper 31203.
Daniels, B, J Das, and R Gatti (2023), "Analysis of SDI Health Data (Version V0)."
Das, J, B Daniels, M Ashok, E-Y Shim, and K Muralidharan (2022), "Two Indias: The structure of primary health care markets in rural Indian villages with implications for policy." Social Science & Medicine 301: 112799.
Dunsch, F A, D K Evans, E Eze‐Ajoku, and M Macis (2022), “Management, supervision, and healthcare: A field experiment.” Journal of Economics & Management Strategy jems.12471.
Flodgren, G, D C Gonçalves-Bradley, and M P Pomey (2016), "External inspection of compliance with standards for improved healthcare outcomes." Cochrane Database of Systematic Reviews, Issue 12, Art. No.: CD008992.
King, J J C, T Powell-Jackson, C Makungu, N Spieker, P Risha, A Mkopi and C Goodman (2021), “Effect of a multifaceted intervention to improve clinical quality of care through stepwise certification (Safecare) in health-care facilities in Tanzania: A cluster-randomised controlled trial.” The Lancet Global Health, 9(9): e1262–e1272.
McKenzie, D and A L Paffhausen (2019), “Small firm death in developing countries.” The Review of Economics and Statistics, 101(4): 645–657.
National Academies of Sciences, Engineering, and Medicine (2018), “Crossing the global quality chasm: Improving health care worldwide.” Washington, DC: The National Academies Press.