Assess

7. Beyond clinical audit – alternative ways to assess

While clinical audit remains the most widely used methodology in human health care, there are a number of other quality improvement tools that you can use to Assess.

Traditional clinical audit is a formal way to find out whether the care we provide is in line with recognised standards, but it is far from the only option (Hughes, 2008). Examples of some of these alternative methods are outlined below.

Plan-do-study-act (PDSA) methodology (Taylor et al., 2013) is a simple form of cyclical assessment of practice, which can be used to drive small steps of change at a local level. A PDSA cycle is small and rapid, designed to test a change, measure its impact and test again. It allows users to design the process so that it makes their life easier, while retaining the quality improvement effect; a brief code sketch of one way to keep a record of successive cycles follows the list below.

Plan: identify a change aimed at improving quality of care and develop a plan to test the change

Do: test the effect of this change

Study: observe, analyse and learn from the test to evaluate the success of the change, or identify anything that went wrong

Act: adopt the change if it was entirely successful, or identify any modifications required to inform a new PDSA cycle
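
Purely as an illustration (the fields and example entries below are hypothetical, not part of any published PDSA tool), a minimal Python sketch of how a practice might record successive cycles, so that each Act feeds the next Plan:

```python
# PDSA cycle log sketch: one record per cycle, so successive small tests
# of change stay traceable. Fields and entries are illustrative only.
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    plan: str                 # the change to test, and how to test it
    do: str                   # what was actually done
    study: str                # what the test showed
    act: str                  # adopt, adapt or abandon; feeds the next Plan
    measures: dict = field(default_factory=dict)

cycle1 = PDSACycle(
    plan="In-house CPR lecture and trolley induction for nursing staff",
    do="Lecture delivered; induction run; quiz taken by all nurses",
    study="Quiz scores high, but retention and intern coverage a concern",
    act="Extend training to all clinical staff; add practical skills",
    measures={"quiz_pass_rate": 1.0},
)
print(cycle1.act)
```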


Clinical Scenario

Small animal resuscitation

Bella, an 8-month-old lurcher, had fracture fixation surgery, but she did not regain normal limb function and follow-up radiographs showed non-union, so amputation of the limb was determined to be the best treatment option. During the amputation surgery, Bella went into cardiopulmonary arrest. Unfortunately, the locum nurse struggled to find the drugs required by the anaesthetist for advanced life support, and Bella died despite cardiopulmonary resuscitation (CPR).

Head nurse Lynne realises that implementing resuscitation training for all staff could help reduce the risk of this happening in the future (PLAN). Lynne reviews available guidelines (Fletcher et al., 2012) and, together with the anaesthetist, prepares a short lecture on CPR to provide in-house training for all nursing staff. She also organises an induction for all new staff, to ensure that they know the location of the resuscitation trolley in theatre and are familiar with its layout and contents (DO). Lynne informally assesses the benefit of her training and induction by setting a basic quiz for nursing staff to take afterwards. The training is well received and the nurses perform well in the quiz; however, since cardiopulmonary arrest thankfully happens very rarely in their practice, some of the nurses are concerned that they may not remember everything they have learned. They also highlight that for some surgeries, interns assist the anaesthetist more often than nurses do (STUDY). Lynne realises that all clinical staff in the practice (vets, nurses and veterinary care assistants) should receive the CPR training and resuscitation trolley induction, and that including some practical skills training should improve learning and knowledge retention. She also devises a more structured post-training assessment for staff, and creates an emergency drug list with a dosing chart, which is kept with the drugs in the resuscitation trolley (ACT).

Lynne plans to provide refresher training to all staff every six months, based on available guidelines (Fletcher et al., 2012). She aims to repeat the PDSA cycle following implementation of her new training methods, using staff feedback and post-training assessment to test what works well and to learn from what does not. This will also allow her to determine whether her training process is reliable and works for different staff teams.

Key points:

For a rare outcome like cardiopulmonary arrest, assessing the impact of CPR training via clinical audit would have required a very long audit time frame – the PDSA cycle allowed Lynne to assess her intervention in a much shorter period.

A run chart is a graph of data over time, and offers a simple and effective tool to help you determine whether the changes you are making are leading to improvement. Run charts are simple to construct – the horizontal (x) axis is usually a time scale (e.g. weeks; April, May, June, etc.; or quarter 1, quarter 2, etc.) and the vertical (y) axis is the aspect of health care being assessed (e.g. surgical site infection rate, daily magnetic resonance imaging (MRI) start time, compliance with completion of surgical safety checklists).

Once you have at least ten observations, you can plot your data and calculate the median value, which is then projected into the future on the chart. Annotate your run chart to indicate where any changes were implemented. There are simple rules to help you interpret your run chart and to determine if a change has resulted in an improvement (Perla et al., 2011). 
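
To make this concrete, here is a minimal sketch in Python (using matplotlib and the standard library's statistics module). The monthly infection-rate figures are invented for illustration, and the 'shift' rule of six or more consecutive points on the same side of the median is one of the commonly cited run chart rules (Perla et al., 2011).

```python
# Run chart sketch: monthly surgical site infection rate (invented data).
import statistics
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun",
          "Jul", "Aug", "Sep", "Oct", "Nov", "Dec"]
# Hypothetical infection rates (%); a change was introduced after June.
rates = [5.1, 4.8, 5.3, 4.9, 5.2, 5.0, 4.1, 3.9, 3.8, 4.0, 3.7, 3.9]

median = statistics.median(rates)

def shifts(values, centre, run_length=6):
    """Start indices of runs of >= run_length consecutive points on the
    same side of the centre line. Points exactly on the line are skipped
    (neither counted nor breaking the run), per the usual convention."""
    found, run, side = [], [], 0
    for i, v in enumerate(values):
        s = (v > centre) - (v < centre)   # +1 above, -1 below, 0 on line
        if s == 0:
            continue
        if s != side:
            run, side = [], s
        run.append(i)
        if len(run) == run_length:
            found.append(run[0])
    return found

fig, ax = plt.subplots()
ax.plot(months, rates, marker="o")
ax.axhline(median, linestyle="--", label=f"median = {median:.2f}%")
ax.annotate("change implemented", xy=(6, rates[6]))
ax.set_xlabel("Month")
ax.set_ylabel("Surgical site infection rate (%)")
ax.legend()
plt.show()

print("Shift detected starting at:", [months[i] for i in shifts(rates, median)])
```

With these invented figures the detector reports shifts starting in January (above the median, before the change) and July (below it, after the change), which is the kind of signal you would annotate and investigate.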

Run charts can be a helpful way to make progress visible to the practice team, providing a powerful display of the link between change and improved outcomes, and to check that improvement is being sustained over time.

Performance polygons offer a way to Assess multiple aspects of quality of care in a single visual representation. Each outcome measure is given its own axis, scaled from ‘lowest performance’ to ‘best performance’; performance data are plotted on each axis and the points joined to form a ‘performance polygon’ (Cook et al., 2012). Markers can be used to show benchmark data or the results of previous audits or assessments, to facilitate comparisons of overall performance. This could offer a simple method for you to track your own clinical or EBVM performance over time.
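
One way to draw such a chart is as a radar-style plot. The sketch below (Python with matplotlib) uses invented measure names, scores and benchmark values, and assumes each measure has already been rescaled so that 0 represents lowest and 1 best performance.

```python
# Performance polygon sketch: several quality measures on radial axes,
# each rescaled so 0 = lowest performance and 1 = best performance.
import numpy as np
import matplotlib.pyplot as plt

measures = ["SSI rate", "CPR training", "Checklist use",
            "MRI start time", "Client satisfaction"]
current = [0.7, 0.9, 0.6, 0.5, 0.8]     # hypothetical rescaled scores
benchmark = [0.8, 0.8, 0.8, 0.8, 0.8]   # hypothetical benchmark

# One axis per measure, spaced evenly around the circle; close each
# polygon by repeating the first value at the end.
angles = np.linspace(0, 2 * np.pi, len(measures), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"projection": "polar"})
for scores, label in [(current, "this audit"), (benchmark, "benchmark")]:
    closed = scores + scores[:1]
    ax.plot(angles, closed, marker="o", label=label)
ax.set_xticks(angles[:-1])
ax.set_xticklabels(measures)
ax.set_ylim(0, 1)
ax.legend(loc="lower right")
plt.show()
```

Overlaying the benchmark polygon makes it immediately visible which measures fall short of target and which exceed it.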

Healthcare failure mode and effects analysis (HFMEA) is a systematic, proactive quality improvement method for process evaluation. It is particularly useful for evaluating a new process prior to implementation, or for assessing the impact of a proposed change to an existing process.

A multidisciplinary team representing all areas of the process being evaluated identifies where and how a process might fail (failure modes), possible reasons for failing (failure causes) and assesses the relative impact of different failures (failure effects). Failure mode causes are prioritised by risk grading (hazard analysis) to identify elements of the process in most need of change (Marquet et al., 2012). Team members with appropriate expertise then work together to devise improvements to prevent those failures.
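
As an illustrative sketch only (not a method prescribed by Marquet et al., 2012): the Python example below grades invented failure modes using 1-4 severity and probability ratings multiplied together, one common hazard scoring convention, and ranks them so the elements most in need of change surface first.

```python
# HFMEA hazard analysis sketch: rank failure modes by hazard score.
# Scales and scores are illustrative; one common convention rates
# severity and probability each from 1 (low) to 4 (high) and
# multiplies them to give a hazard score.
from dataclasses import dataclass

@dataclass
class FailureMode:
    process_step: str
    failure_mode: str
    cause: str
    severity: int      # 1 (minor) .. 4 (catastrophic)
    probability: int   # 1 (remote) .. 4 (frequent)

    @property
    def hazard_score(self) -> int:
        return self.severity * self.probability

modes = [
    FailureMode("Drug preparation", "Wrong drug selected",
                "Look-alike packaging", severity=4, probability=2),
    FailureMode("Resuscitation trolley", "Drug not found in emergency",
                "Trolley layout unfamiliar to locum staff",
                severity=4, probability=3),
    FailureMode("Record keeping", "Dose chart out of date",
                "No review schedule", severity=2, probability=2),
]

# Highest hazard scores first: these are the steps most in need of change.
for fm in sorted(modes, key=lambda m: m.hazard_score, reverse=True):
    print(f"{fm.hazard_score:>2}  {fm.process_step}: {fm.failure_mode} "
          f"({fm.cause})")
```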