Evaluation of new interventions is an important part of improvement efforts in healthcare, but it doesn’t always happen consistently. When done well, it can help to solve problems, guide decision-making, and create an evidence base for future improvement efforts. Building evaluation into a planned improvement intervention from the very beginning can help to establish whether the desired change is achieved, whether it is the result of that intervention – and, just as importantly, what the unintended consequences are. A good-quality evaluation makes it possible to step back and regroup when things go wrong. It also means maximum learning can be extracted, whether an intervention is successful or not, and the lessons can be applied to make future improvements.
There are many examples of well-intentioned interventions that went wrong, both large-scale and small-scale. The Liverpool Care Pathway for the Dying Patient is a well-known example. An integrated care pathway, it was recommended in England and Wales as part of an effort to improve end-of-life care in hospitals. But too-rapid implementation and uncritical faith that simply adopting it was enough led to widely publicised instances of poor care, and ultimately to the discontinuation of the pathway in 2014.
On a much smaller scale, a similar fate awaited an intervention involving “Do not interrupt” tabards worn by nurses when preparing and administering medications. The intention was clear: the tabards were meant to prevent nurses from being distracted during this painstaking work, reducing possible errors and harm to patients. However, the error reduction rate was very low (1.8%), and nurses found the tabards “time consuming, cumbersome and hot”.
With 1 in 4 hospital beds occupied by patients with dementia, their care is a field where improvement initiatives abound. They range from dementia-friendly design in hospital wards to visual identifiers for people with dementia. Identifiers come in different forms, all intended to offer a visual prompt to staff that a patient has some form of cognitive impairment and to encourage person-centred care that might require a different approach, or simply more time. Some identifiers are stickers placed on patients’ notes or above the hospital bed, some are magnets placed on whiteboards or doorframes, and some are different-coloured wristbands. Identifiers worn by the patient, such as wristbands, might have a particular role in enabling patients who walk away from a hospital ward to be identified, or in helping staff who do not know patients to recognise their needs more easily.
THIS Institute’s project, “Developing a visual identification method for people with cognitive impairment in institutional settings”, aims to collect information about the kinds of visual systems used in NHS hospitals across the UK to identify patients with cognitive impairment, so that care can be tailored to their needs. As part of the project, an analysis of the ethical and legal issues surrounding the use of visual identifiers for hospitalised people with dementia was led by the PHG Foundation. It noted that, alongside possible benefits, using identifiers also brings challenges. For example, identifiers can raise risks relating to disclosure of private information and to consent, among others.
Another component of the project was a survey of staff perceptions of using visual identifiers for patients with dementia. It suggested that inconsistent use within wards can also be problematic: for example, staff feel they have to double-check patients’ diagnoses – an additional burden that undermines one key purpose of the identifiers. Other problems included identifiers being left in place after the patient with dementia had been discharged, so that the next patient in the bed was wrongly identified as having dementia, and people with dementia being left puzzled as to why their hospital wristbands differed from other patients’.
Emerging findings from the project also suggest that what makes visual identifiers work well is the way they are used (not just the fact that they are used). In contexts where hospital policies are clear, where staff are well trained and where there is time to assess the patient’s needs and obtain their consent, many of the challenges can be tackled, and the advantages optimised. But with pressures on staff and beds, hospital realities can be messy. Understanding how visual identifiers work in practice – rather than how they are imagined – will require evaluation. Such work can reveal when they really improve care, and when they simply make it more complicated. Evaluation can inform better decisions about what to change, and better approaches to implementation.
Almost by definition, healthcare improvement interventions are well intentioned – and many not only work as designed but also make care better. But without evaluation running alongside an intervention, we can gain only anecdotal, fragmentary knowledge of whether it is working as intended. Time and money are precious and finite resources, and care is needed to ensure they are used as effectively and respectfully as possible. And most of all, interventions should help patients and avoid harm. Evaluation can help to achieve this, and to make improvement efforts more effective.