

Clinical Indicators – What, Why and How

YH Lee, KH Tan

ABSTRACT
Healthcare organisations are required to provide evidence of improving performance through data. Such data can be used as comparative or benchmarking information relating to clinical care across institutions. The aims of clinical indicator development, monitoring and reporting are to increase healthcare providers' awareness of and involvement in evaluating care outcomes and care processes, and to identify quality and/or process gaps in the care delivery system. Indicators serve as 'pointers' that direct healthcare providers' attention and resources to target areas for improvement in the process of delivering health care.

Keywords: clinical indicators, performance measurement, quality improvement


The Australian Council on Healthcare Standards (ACHS) defines a clinical indicator as an objective, quantitative measure of the management process or outcome of care.1 It provides a measurable dimension of the quality or appropriateness of patient care. Clinical indicators can be used as comparative or benchmarking information relating to clinical care, flagging possible problems and/or opportunities for improvement within the organisation. Examples of clinical indicators include inpatient mortality, perioperative mortality, unscheduled readmissions, unscheduled returns to the operating theatre, unscheduled returns to the Emergency Department, and reattendance at the Emergency Department for asthma. Such data can help to highlight problem areas in clinical performance, inform or drive quality improvement activities, prompt reflection on clinical practice, channel resources appropriately and identify important issues for further research. Valid and reliable data concerning desired and undesired results play an important role in a comprehensive monitoring and evaluation system.
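To illustrate how such an indicator is quantified, the sketch below computes an unscheduled 15-day readmission rate from discharge records. The record structure and figures are invented for illustration and are not KKH's actual data or schema.

```python
from datetime import date

# Hypothetical discharge records: (patient_id, discharge_date, readmitted_within_15_days).
# Field names and values are illustrative only.
discharges = [
    ("P001", date(2004, 1, 5), False),
    ("P002", date(2004, 1, 7), True),
    ("P003", date(2004, 1, 9), False),
    ("P004", date(2004, 1, 12), True),
    ("P005", date(2004, 1, 15), False),
]

# Numerator: unscheduled readmissions within 15 days for the same or a
# related condition. Denominator: all inpatient discharges in the period.
numerator = sum(1 for _, _, readmitted in discharges if readmitted)
denominator = len(discharges)
rate = 100.0 * numerator / denominator

print(f"Unscheduled 15-day readmission rate: {rate:.1f}%")  # 2/5 -> 40.0%
```

The same numerator/denominator pattern underlies all the rate-based indicators discussed below; only the inclusion and exclusion criteria change.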

Health care is becoming more complex, with an increasing range of approaches to the delivery of care. At the same time, care must be delivered in a context of cost constraints, rising patient expectations and a greater focus on accountability.2,3 The Harvard Medical Practice Study, which reviewed over 30,000 hospital records in New York State, found injuries from care itself ("adverse events") in 3.7% of hospital admissions; over half of these were preventable and 13.6% led to death.4 There is increasing pressure from the public, regulators and professionals to redesign healthcare processes and systems to become much safer in future.5,6 Against this background there has been an explosion in methods that aim to use routine data to compare performance between healthcare providers. Such data have a wide range of potential uses and are of interest to many stakeholders, including researchers, practitioners, managers, regulators, patients and carers.7

YH Lee, Manager, Medical Affairs, KK Women's and Children's Hospital
KH Tan, Director, Clinical Quality; Head, Perinatal Audit & Epidemiology; Senior Consultant, Department of Maternal Fetal Medicine, KK Women's and Children's Hospital
Correspondence to: Ms Lee Yean Hoon, Medical Affairs, KK Women's and Children's Hospital, 100 Bukit Timah Road, Singapore 229899. Email: [email protected]; Tel: 63942316; Fax: 62937933

Maryland Quality Indicator Project (MQIP)
The MQIP indicators provide a clinical, outcome-based approach to measuring and evaluating organisational performance. The MQIP is a comparative analysis research project initiated in Maryland, USA in 1985 by the Maryland Hospital Association (MHA). One key objective of Singapore's National Medical Audit Programme is to monitor and assess the clinical performance of hospital institutions through clinical outcome indicators, so as to facilitate continuous quality improvement and benchmarking. Since 1998, all hospitals have been required to submit data on clinical performance indicators as part of the comprehensive quality improvement activities required under the Private Hospitals and Medical Clinics Act (PHMCA). These indicators were locally developed based on the definitions and specifications of the MQIP indicators. On 1 April 2000, the Ministry of Health (MOH) required all acute care hospitals to participate officially in the MQIP. The aim was to enable hospitals to benchmark their performance against comparable and reputable participating hospitals in the US, Europe, Japan and Taiwan. Today, more than 1,000 acute care hospital institutions worldwide participate in the MQIP. The MQIP is intended to provide its participants with opportunities to compare their indicator rates with peer group rates over time, and thereby help them gain a greater understanding of their level of performance. It provides a global overview of the quality of care provided by hospitals. Table 1 lists the selected clinical indicators submitted by KKH to the MQIP.

Table 1: List of indicators submitted to MQIP

Measure  Description
3.1    Total inpatient mortality
5.1    Total perioperative mortality
5.2    Perioperative mortality for patients with ASA P1
5.3    Perioperative mortality for patients with ASA P2
5.4    Perioperative mortality for patients with ASA P3
5.5    Perioperative mortality for patients with ASA P4
5.6    Perioperative mortality for patients with ASA P5
6.3    Total C-sections
7.1    Total unscheduled acute care readmissions within 15 days for the same or a related condition
8.2    Unscheduled admissions following ambulatory digestive, respiratory, and urinary system diagnostic endoscopies
8.3    Unscheduled admissions following all other ambulatory operative procedures
9.1    Unscheduled returns to intensive care units
10.1   Unscheduled returns to the operation room
A1.1   Unscheduled returns to the emergency department within 24 hours
A1.2   Unscheduled returns to the emergency department within 48 hours
A1.3   Unscheduled returns to the emergency department within 72 hours
A1.1a  Unscheduled returns to the emergency department within 24 hours resulting in an inpatient admission
A1.2a  Unscheduled returns to the emergency department within 48 hours resulting in an inpatient admission
A1.3a  Unscheduled returns to the emergency department within 72 hours resulting in an inpatient admission
A5.2a  Cancellations by the facility of scheduled ambulatory diagnostic digestive system endoscopies on the day of the procedure
A5.3a  Cancellations by the facility of scheduled other ambulatory procedures on the day of the procedure
13.1   Documented falls in acute care

Specialty Specific Clinical Indicators (SSCI)
In 2001, specialty specific clinical indicators were introduced to monitor outcomes of specific clinical procedures or treatments instituted by hospitals. The Chapters of the Academy of Medicine, Singapore played a key role in selecting appropriate clinical indicators for implementation in local hospitals from the list of indicators developed by the Australian Council on Healthcare Standards (ACHS). The Academy also advises on indicator definitions, inclusion/exclusion criteria and important confounding variables, and provides input and recommendations based on the results of the indicators. The SSCI was introduced in three phases from July 2001. In early 2004, the MOH Clinical Quality Branch conducted a review of the usefulness of the SSCI. The results of the review were pending at the time of writing; it is hoped they will further enhance the programme's usefulness. Table 2 lists the indicators submitted by KKH at the time of writing. We expect the SSCI to evolve into a very useful aspect of our specialty care.

Table 2: List of indicators submitted to SSCI

Speciality and Description
Paediatric Medicine
· Re-attendance at A&E for asthma
Paediatric Neurosurgery
· Ventricular shunt infection
Intensive Care (Adult)
· Unscheduled returns to ICU
Ambulatory Care
· Cancellation of scheduled ambulatory digestive/urinary tract endoscopic procedures by facility
· Cancellation of scheduled other ambulatory procedures by facility
Paediatric Surgery
· Appendicectomy with normal histology
· Appendicectomy with normal histology but other intra-abdominal pathology
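The peer-group comparison described in the MQIP section above can be sketched numerically. The normal-approximation z-score below is one common way to compare a hospital's indicator proportion with a pooled peer rate; it is an illustrative method, not the MQIP's actual statistical methodology, and all figures are invented.

```python
import math

def peer_comparison(events: int, cases: int, peer_events: int, peer_cases: int) -> float:
    """Approximate z-score comparing a hospital's indicator proportion
    with the pooled peer-group proportion (normal approximation)."""
    p_hosp = events / cases
    p_peer = peer_events / peer_cases
    # Standard error of the hospital's proportion under the peer rate
    se = math.sqrt(p_peer * (1 - p_peer) / cases)
    return (p_hosp - p_peer) / se

# Illustrative figures only: 18 unscheduled readmissions in 900 discharges,
# against a pooled peer rate of 1.5% (1,500 events in 100,000 discharges).
z = peer_comparison(18, 900, 1500, 100000)
print(f"z = {z:.2f}")  # roughly 1.23: above the peer rate, but not strikingly so
```

A large positive z-score flags a hospital rate well above its peers; in practice such a flag raises questions for review rather than providing answers, as the attributes discussed later emphasise.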

Patient Safety Pilot Programme
In late 2003, MOH engaged Dr Vahe A. Kazandjian, President of The Center for Performance Science, Inc. (CPS), an outcomes research centre based in Maryland (which also oversees the IQIP), to develop a pilot programme on patient safety indicators. KKH is one of six hospitals selected to participate in the pilot programme, which aims to locally develop and define measurable patient safety indicators (with numerator and denominator data) and to test the comparability of the patient safety indicators across institutions. The development stage includes the completion of a survey tool known as the ISMP (Institute for Safe Medication Practices) Medication Safety Self-Assessment Tool, which enables comparison across institutions in areas such as organisational culture and readiness in terms of safety, medication-use processes, use of technology and communication. The survey results provide a quick overview of the current state of the institution and identify gaps, so that areas for safer medication-use practices can be targeted. Through deliberation and consensus building among the six participating institutions, twenty patient safety indicators, primarily focused on medication safety, were selected for feasibility testing. Table 3 lists the selected patient safety indicators.


Table 3: List of patient safety indicators (Patient Safety Pilot Programme)

Measure  Description
1.1a   Orders written for and administered to wrong patients
1.5    Wrong route orders intercepted by Pharmacy that resulted in call back to physician
1.7    Incomplete orders that required call back to physician
1.10   Medications ordered for patient with a documented allergy to those medications
2.2    Wrong medications dispensed resulting in harm
4.1    Medications administered to wrong patient
4.2    Wrong medication administered that did not result in harm
4.3    Wrong medication administered that did result in harm
4.5    Duplicate doses administered
4.5a   Duplicate doses administered that resulted in harm
4.6    Medications administered via wrong route
4.6a   Warfarin and i.v. heparin administered via wrong route
4.6b   Antibiotics administered via wrong route
4.8    Wrong injection site used on wrong patient
4.9    Wrong dosage forms administered
4.10   Medications incorrectly prepared by non-pharmacy personnel in unit
4.11   Medication administration devices malfunctioned
4.11a  Medication administration devices malfunctioned during medication administration
4.11b  Medication administration devices malfunctioned when being serviced
4.12   Medication administration devices incorrectly adjusted

Clinical Indicator Audit System
Internally, a clinical indicator audit system is in place to facilitate the monitoring and reporting of significant medical events. The clinical indicator audit form (CIAF) was developed with input and support from the three Medical Divisions (Obstetrics & Gynaecology, Paediatric Medicine and Paediatric Surgery). The CIAF system provides an infrastructure for reporting and reviewing significant clinical incidents; recommendations and changes made are tracked over time. It helps to promote a more transparent and learning environment among staff. The CIAF is also used to flag sentinel events, which are reportable to MOH. A review of the CIAF is currently underway, with the aims of further clarifying the scope and definitions of the indicators to enhance the accuracy of reporting, and of reviewing the structure of the form to make it more user-friendly and less time-consuming. Table 4 lists the indicators captured in the CIAF.

Table 4: List of KKH clinical indicators (CIAF System)

Speciality and Description
Obstetrics & Gynaecology
· Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
· Unplanned removal, injury or repair of organ during surgery
· Unplanned return to operation theatre for complications during the same admission
· Unplanned admission within 48 hours following ambulatory procedure
· Any serious or unexpected complication from surgery, pre-operatively or during post-operative recovery
· Cardiopulmonary arrest
· Eclampsia
· Peri-operative deep vein thrombosis / pulmonary embolism
· Death
Anaesthesia
· Trauma to organ, e.g. broken tooth, lip abrasion
· Awareness while under general anaesthesia
· Any procedure that caused transient or permanent neurological problems / deficits in patient
· Any problems arising from apparatus or equipment failure that have resulted, or may result, in hypoxaemia or physical injuries (especially neurological injuries) to patients
Radiology
· Any serious or unexpected complication from radiological procedure
Neonatology
· Birth trauma
· Apgar score <4 at 5 mins, HIE Sarnat II and above
· Term infant, >7 days length of stay in NICU
· Massive aspiration syndromes
· Missed congenital malformation
· Deaths excluding stillbirth
Paediatric Medicine
· Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
· ICU admission exceeding 14 days
· Paediatrics admission exceeding 30 days
· Serious complication, including collapse, from any procedure / medication
· Deaths
Children's Emergency
· Deaths
Paediatric Surgery
· Unplanned re-admission related to the previous hospitalisation within 15 days of inpatient discharge
· Unplanned removal, injury or repair of organ during surgery
· Unplanned returns to operating theatre for complications during the current admission
· Unplanned admission within 48 hrs following ambulatory procedure
· ICU admissions exceeding 14 days
· Wound complications
· Sepsis related to instrumentation, catheters & devices
· Deaths

Sentinel Event Reporting
In 2002, the Ministry of Health (MOH) implemented Sentinel Event Review (SER) in both public and private hospitals, replacing the old Committee on Inquiry (COI). The COI sought to determine whether deaths were "avoidable" and tended to focus on individual responsibility, whereas SER focuses primarily on organisational systems and processes rather than individual performance. The key objective of SER is to promote quality improvement, with the intention of improving patient care and reducing the probability of such an event recurring by making changes to the organisation's systems and processes. Hospitals are required to establish a Quality Assurance Committee (QAC) to review sentinel events using Root Cause Analysis methodology, in order to understand the causes that underlie the event. The report is strictly confidential and is protected under Section 11 (Quality assurance committees) of the PHMCA (refer to Table 5).8 SER also helps to increase general knowledge about sentinel events, their causes and strategies for prevention, and heightens vigilance in risk assessment. In February 2004, MOH's Clinical Quality Branch (CQ) rolled out the revised "Guidelines for Review of Sentinel Events by Hospital Quality Assurance Committees, 2004". The definition of a sentinel event is detailed in Table 6.

Table 5: Extracted from the Private Hospitals and Medical Clinics Act (PHMCA), Section 11 (Quality assurance committees), Revised Edition 1999.

(1) The licensee of a private hospital or healthcare establishment shall establish one or more quality assurance committees to:-
(a) monitor and evaluate the quality and appropriateness of the services provided and the practices and procedures carried out at the private hospital or healthcare establishment;
(b) identify and resolve problems that may have arisen in connection with any service provided or any practice or procedure carried out at the private hospital or healthcare establishment;
(c) make recommendations to improve the quality of the services provided and the practices and procedures carried out at the private hospital or healthcare establishment; and
(d) monitor the implementation of the recommendations made under paragraph (c).
(3) A person who is or was a member of a quality assurance committee is neither competent nor compellable:-
(a) to produce before any court, tribunal, board or person any document in his possession or under his control that was created by, at the request of or solely for the purpose of the quality assurance committee; or
(b) to disclose to any court, tribunal, board or person any information that has come to his knowledge as a member of the quality assurance committee.
(5) A finding or recommendation by a quality assurance committee as to the need for changes or improvements in relation to any service provided or any practice or procedure carried out at a private hospital or a healthcare establishment is not admissible in any proceedings as evidence that the service, practice or procedure is or was inappropriate or inadequate.
(6) Anything done by a quality assurance committee, a member of a quality assurance committee or any person acting under the direction of a quality assurance committee in good faith for the purposes of the exercise of the quality assurance committee's functions, does not subject such a member or person personally to any action, liability, claim or demand.
(7) Without limiting subsection (6), a member of a quality assurance committee has qualified privilege in proceedings for defamation in respect of:-
(a) any statement made orally or in writing in the exercise of the functions of a member; or
(b) the contents of any report or other information published by the quality assurance committee.

Table 6: Guidelines for Review of Sentinel Events by Hospital Quality Assurance Committee, 2004. Definitions of a Sentinel Event.

A sentinel event is defined as:
(a) an unexpected occurrence
i) involving death, or major permanent loss of function1, or major injury, AND
ii) that is associated with the treatment, lack of treatment or delay in treatment of the patient's illness or underlying condition.
For reporting purposes, an occurrence as defined in (a) shall also be categorized2 as follows:
· Blood transfusion
· Childbirth / pregnancy
· Inpatient suicide
· Medication usage
· Surgical / procedural complications3:
  - Ward-based (e.g. chest tube insertion, pleural biopsy, and haemodialysis)
  - Non ward-based (e.g. surgery done in operating theatre, angiography, and CT-guided biopsy)
· Others (e.g. restraint / fall / assault / choking / usage of medical equipment, etc.)
(b) Specifically, any of the following events:
· Retained instruments/material after procedure
· Wrong type of procedure/surgery
· Wrong site of procedure/surgery
· Wrong patient procedure/surgery


1 Major permanent loss of function occurs when a patient has sensory, motor, physiologic or intellectual impairment that was not present at the time of admission.
2 An occurrence may involve more than one category.
3 Includes iatrogenic complications.


To be useful, indicators should possess several attributes:9,10
· They should cover elements of practice that are acknowledged to be important, not simply record what is easy to measure.
· Ideally, they should be devised with the help of clinicians and should fairly reflect clinical practice.
· They should be value free.
· The data on which indicators are based should be valid and reliable.
· The results should be presented in a user-friendly way.
· It should be made clear that indicators are for guidance: they "cannot on their own, provide 'definitive' evidence of success or failure and should be used to raise questions, not provide answers".

The development and use of potential indicators should be accompanied by a process of rigorous evaluation.11 Indicators could also be incorporated into computerised record systems in general practice; this might reduce the time taken to record the indicators and address problems of reliability between reporting staff.
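Reliability between reporting staff can be quantified with a standard inter-rater agreement statistic such as Cohen's kappa. The sketch below, using invented ratings, shows the calculation for two reviewers classifying the same charts against an indicator definition.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
    # Chance agreement expected from each rater's marginal frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers classifying 10 charts as meeting the
# indicator definition (1) or not (0); data invented for illustration.
a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
b = [1, 1, 0, 0, 0, 0, 1, 1, 1, 1]
k = cohens_kappa(a, b)
print(f"kappa = {k:.2f}")  # 0.58: moderate agreement beyond chance
```

Field-testing an indicator with such a statistic before rollout gives early warning of ambiguous definitions that different reporters interpret differently.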


We need to understand how the ways in which data are collected may affect their interpretation. Indicator validity and reliability are two crucial aspects of a performance improvement programme. Indicator validity refers to the usefulness of the measures in performance assessment and improvement; reliability is demonstrated through field-testing. Poor validity and/or reliability of the measures can undermine the conclusions drawn. Furthermore, when people are aware that they will be judged on the data, other incentives may come into play, leading to concerns about "gaming" with data.12 Changes in reporting practices over time may also undermine the validity of indicators derived from routine data sources. In a study of emergency admissions in one health authority from 1989/90 to 1997/8, an apparent increase in emergency activity was not matched by an increase in the number of admissions or in the number of patients each year: what appeared to be a rise in emergency admissions turned out to be mainly due to increased reporting of internal transfers after admission.13 Process measures, by contrast, are relatively easy to interpret and provide a direct link to the remedial action required. They may be particularly useful in revealing quality problems that are not susceptible to outcome measurement - for example, "near misses", unwanted outcomes, or unnecessary resource use.14
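The internal-transfer artefact described above can be illustrated with a toy example: if each ward transfer generates a new activity record, counting raw records overstates admissions, whereas counting distinct admission episodes does not. The records and field names below are hypothetical.

```python
# Hypothetical activity records: one row per ward stay, so an internal
# transfer after admission generates an extra row for the same episode.
records = [
    {"episode": "E1", "ward": "A&E"},
    {"episode": "E1", "ward": "General Ward"},    # internal transfer
    {"episode": "E2", "ward": "General Ward"},
    {"episode": "E3", "ward": "ICU"},
    {"episode": "E3", "ward": "General Ward"},    # internal transfer
    {"episode": "E3", "ward": "Rehabilitation"},  # second transfer
]

raw_count = len(records)                          # inflated by transfers
admissions = len({r["episode"] for r in records}) # one per admission episode

print(raw_count, admissions)  # 6 vs 3
```

An indicator defined on raw activity counts would report a "rise" purely from more diligent recording of transfers; defining the denominator on distinct episodes removes that artefact.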





Acknowledgements
The work carried out to develop, monitor and report the clinical indicators described in this paper at KKH would not have been possible without the support and collaboration of the clinicians, nurses and pharmacists, Dr Yoong Siew Lee (former Director of Medical Affairs) and the Medical Affairs staff, namely Ms Cheah Li Li, Ms Junne How, Ms Lok Sun Sun, Ms Kamala Krishnan and Ms Angela Bek.

References
1. The Australian Council on Healthcare Standards (ACHS). Clinical Indicator Users' Manual 2002, 4.
2. Davies HT, Marshall MN. Public disclosure of performance data: does the public get what the public wants? Lancet 1999;353:1639-40.
3. Nuttley SM, Smith PC. League tables for performance improvement in health care. J Health Serv Res Policy 1998;3:50-7.
4. Brennan TA, Leape LL, Laird NM, Hebert L, Localio AR, Lawthers AG, et al. Incidence of adverse events and negligence in hospitalized patients: results of the Harvard Medical Practice Study I. N Engl J Med 1991;324:370-6.
5. Berwick DM, Leape LL. Reducing errors in medicine - it's time to take this more seriously. Quality in Health Care 1999;8:145-6.
6. Thomas R, Lally J. Clinical indicators: do we know what we're doing? Quality in Health Care 1998;7:122.
7. Powell AE, Davies HTO, Thomson RG. Using routine comparative data to assess the quality of health care: understanding and avoiding common pitfalls. Qual Saf Health Care 2003;12:122-8.
8. The Private Hospitals and Medical Clinics Act (PHMCA). Quality Assurance Committees, Section 11, Revised Edition 1999.
9. Likierman A. Performance indicators: 20 early lessons from managerial use. Public Money and Management 1993;13:15-22.
10. Avery AJ. Appropriate prescribing in general practice: development of the indicators. Quality in Health Care 1998;7:123.
11. Cantrill JA, Sibbald B, Buetow S. Indicators of the appropriateness of long term prescribing in general practice in the United Kingdom: consensus development, face and content validity, feasibility and reliability. Quality in Health Care 1998;7:130-5.
12. Smith P. On the unintended consequences of publishing performance data in the public sector. Int J Public Admin 1995;18:277-310.
13. Morgan K, Prothero D, Frankel S. The rise in emergency admissions - crisis or artefact? Temporal analysis of health services data. BMJ 1999;319:158-9.
14. Crombie IK, Davies HT. Beyond health outcomes: the advantages of measuring process. J Eval Clin Pract 1998;4:31-8.



