
Preventing medical errors in point-of-care testing: security, validation, performance, safeguards, and connectivity

Errare Humanum Est

--Morals, Plutarch (46-120 AD)

The Committee on Quality of Health Care in America of the Institute of Medicine (IOM) recently published guidelines intended to reduce patient deaths (estimated to be 44 000 to 98 000 per year) caused by medical errors. (1) Errors in laboratory testing are often preanalytic and can contribute to inappropriate care or modification of therapy. (2,3) Point-of-care testing is defined as testing at or near the site of patient care. (4,5) Although exact preanalytic, analytic, and postanalytic error rates are not known, bedside diagnostic testing outside the clinical laboratory may contribute to serious medical errors. (4-6) The goal of this article is to improve the performance, quality, and safety of point-of-care testing by presenting new systems for error prevention. These systems, based on expert specifications, extensive review, and consensus opinion, are intended for implementation on point-of-care testing instruments.

Point-of-care testing is expanding rapidly. The growth rate is 12% to 15% per annum, several times greater than that of the central laboratory. (7) As more testing shifts to the bedside (8,9) and is performed by increasing numbers of physicians, nurses, and nurse practitioners, there must be adequate safeguards to prevent medical errors and reduce risk. The hypothesis here is that such prevention can best be achieved through improved security, validation, and performance systems plus enterprise-wide connectivity (bidirectional communication between computerized information systems and remote devices) of point-of-care testing. (10-16)

Point-of-care testing may be prone to serious disadvantages, (5) such as (a) insufficient validation of trained and certified operators, (b) little or no security of patient test results and quality control data, and (c) limited connectivity (bidirectional communication) with the electronic medical record. Instruments lack algorithms to assess the competency of operators. (17) The use of unauthorized point-of-care test results in diagnosis and treatment does not comply with practice standards; accreditation requirements; or the security, confidentiality, authentication, integrity, and auditing requirements of the Health Insurance Portability and Accountability Act. (18) Therefore, the objectives of this study were (1) to assess the importance of validating certified operators; (2) to determine expert specifications (requirements) for security, validation, and performance; (3) to design systems that fulfill expert specifications and reflect consensus professional opinion for prevention of medical errors; and (4) to describe associated essential safeguard and connectivity features that increase efficiency and improve efficacy.

MATERIALS AND METHODS

Three-Step Consensus Process

Step 1: United States National Survey.--Forty-six point-of-care testing experts were surveyed in the United States. The experts were selected for geographic diversity from among clinicians who use point-of-care testing, authors of articles and book chapters on point-of-care testing, speakers at national meetings, directors of point-of-care testing programs, and health system staff who oversee point-of-care testing programs and manage performance. Several of the experts were directors or coordinators of point-of-care testing programs in hospitals and health maintenance organizations. The survey was conducted by e-mail, with telephones and facsimile transmissions used as necessary. Experts were asked (a) to state whether lockout of nonvalidated operators was needed; (b) to list the 5 most important objectives for security, validation, and performance; and (c) to describe unique aspects of error prevention used or needed in their local point-of-care testing programs. Survey questions were open-ended to encourage creative responses and original design specifications. The 37 responding experts (80.4% [37/46] response rate) were from academic health systems and health maintenance organizations distributed geographically in 19 states and from a broad range of disciplines, including pathology, laboratory medicine, internal medicine, critical care, surgery, anesthesiology, medical technology, management, administration, regulation, research, consulting, sales, and industry.

Step 2: Error Prevention Systems Design.--Raw data (survey responses) from the experts in the multicenter survey were collated and tabulated into 3 categories of specifications: (1) security, (2) validation, and (3) performance, in order of the priority with which the experts cited the requirements. The performance category captured quality control and proficiency testing requirements. An additional "safeguards" category was necessary to specify expert requirements for protecting critical information and patients. From these tabulated survey responses, 3 mutually compatible parallel systems for security, validation, and performance functions on point-of-care instruments were designed to fulfill the error-prevention objectives stated by the experts. An additional emergency system was added early in step 3 to meet expert and professional requirements for medically needed point-of-care testing during disasters and crises.

Step 3: Consensus Process.--The 4 error-prevention systems and safeguards were critiqued by groups of professional managers, specialists, clinicians, and researchers in point-of-care testing to whom the author presented the concepts in 4 venues over a 9-month period in 2000-2001. Presentation venues included venue I, the San Bernardino, Calif, meeting of the Clinical Laboratory Management Association (n = 16 critiques returned); venue II, the Clinical Laboratory Management Association national meeting in Anaheim, Calif (n = 49); venue III, the American Association for Clinical Chemistry national meeting in San Francisco, Calif (n = 66); and venue IV, the Los Angeles, Calif, point-of-care coordinators' forum (n = 44). Input (not quantitated) also was gathered from staff of the Point-of-Care Testing Center for Teaching and Research (POCT * CTR) at the School of Medicine, University of California, Davis.

The 175 professionals who participated in the 4 venues listed in step 3 submitted anonymous written critiques of the error-prevention systems and safeguards, as well as quantitative ratings of the systems on a scale of 1 (low) to 5 (high). This feedback was used to improve the systems designs and safeguards incrementally between presentations until stabilization in the form of the final consensus configurations presented in this article. Stabilization occurred after the fourth public presentation and feedback iteration. Very few changes were incorporated subsequently.

Parallel Investigations

This component was not part of the national survey or consensus process. An investigation of commercial devices determined security, validation, and performance features currently available on point-of-care testing instruments. A summary of user needs for connectivity of point-of-care testing was compiled from (a) proceedings and reports of the Connectivity Industry Consortium (CIC), including a Point-of-Care Coordinators Workshop held in September 2000; (b) information on the CIC web site (www.poccic.org); (c) the author's experience as a founding member of the CIC Provider Review Committee and a survey of this committee in August 2000; and (d) information found in references 10, 19, and 20.

RESULTS

Table 1 summarizes security features recommended by experts for (a) protecting instrument operations, matching clinical goals, and avoiding repudiation; (b) ensuring the integrity of test data and confidential information; (c) providing practical, intelligent, user-friendly, and user-definable software features; and (d) discouraging fraudulent use, tampering, and theft. The specifications (in order of priority for major topics, A through D) in Table 1 and in Tables 2 and 3 were used as criteria for the initial designs of the security, validation, performance, and emergency systems presented in final consensus form below.

Table 2 summarizes the following validation specifications: (a) locking out operators not approved to use instruments; (b) making exceptions during medical emergencies; (c) integrating competency requirements for accreditation; and (d) balancing validation requirements versus rapid response and user friendliness. One hundred percent of the responding experts recommended implementing nonvalidated operator lockout on hospital-based systems for point-of-care testing, regardless of the instrument format (handheld, portable, or transportable) or testing modality (in vitro, ex vivo, or in vivo (12)). Some experts advised that access be controlled even on home test devices to prevent inadvertent negligence or abuse (eg, by children).

Table 3 summarizes expert recommendations for features that improve performance. The most important concern was blocking patient testing when required quality control procedures are not performed. Other performance features included using problem-based interactive training and explicit "live" action commands to communicate with the point-of-care operator (rather than using alphanumerically coded information, which requires access to interpretive manuals); forcing download (transfer) of patient results, capturing performance monitoring data, and tracking; and defeating fraudulent manipulation of quality assurance processes. Some requirements for performance (Table 3) and validation (Table 2) were interrelated, such as preventing fraudulent use of quality control routines to perform patient testing.
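To make the interplay of these requirements concrete, the following Python sketch shows one way such a quality control and forced-download gate might be expressed in instrument software; the class name, intervals, and messages are hypothetical illustrations, not features of any actual device or of the systems specified by the experts.

    from datetime import datetime, timedelta

    class PerformanceGate:
        """Illustrative gate that blocks patient testing until QC and download rules are met."""

        def __init__(self, qc_interval_hours=8, download_interval_hours=24):
            self.qc_interval = timedelta(hours=qc_interval_hours)
            self.download_interval = timedelta(hours=download_interval_hours)
            self.last_qc_pass = None        # time of the last acceptable QC run
            self.last_download = None       # time of the last results transfer

        def record_qc(self, passed, when=None):
            if passed:
                self.last_qc_pass = when or datetime.now()

        def record_download(self, when=None):
            self.last_download = when or datetime.now()

        def may_test_patient(self, now=None):
            """Return (allowed, reason); testing is suppressed if QC is stale or downloads lapse."""
            now = now or datetime.now()
            if self.last_qc_pass is None or now - self.last_qc_pass > self.qc_interval:
                return False, "QC required before patient testing"
            if self.last_download is None or now - self.last_download > self.download_interval:
                return False, "Download patient results before further testing"
            return True, "OK"

    # Example: a freshly configured instrument refuses testing until QC passes and a download is logged.
    gate = PerformanceGate()
    print(gate.may_test_patient())      # (False, "QC required before patient testing")
    gate.record_qc(passed=True)
    gate.record_download()
    print(gate.may_test_patient())      # (True, "OK")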

Table 4 presents the error-prevention systems designed initially based on the expert requirements in Tables 1 through 3 and improved through several iterations to achieve high ratings by professional peers (Figure 1). Final ratings at venue IV were 4.34 (SD 0.67), 4.28 (0.72), 4.45 (0.50), and 4.05 (0.96) for the security, validation, performance, and emergency systems, respectively, and 4.37 (0.49) overall.

[FIGURE 1 OMITTED]

The first 3 systems (Table 4, A through C) yield 27 (3 x 3 x 3) flexible combinations of features that can be used to adjust risk, limit access, and improve performance according to individual institutional objectives. The security system (Table 4, A) prioritizes methods of instrument operational security versus risk trade-offs. The validation system (Table 4, B) for operator access has 3 levels. The standard level excludes actual testing, but allows the user to review and download quality control and patient test results. The intermediate level allows a point-of-care operator to perform testing. The master level grants access to all testing, performance, setup, and instrument functions, including test calibration, reportable ranges, linearity, critical limits, and user-defined functions. The performance system (Table 4, C) has 3 priority levels for quality control process interrupts. Experts recommended that the point-of-care testing coordinator be notified of critical problems and that instrument operation be released (ie, cleared after the problem is solved) only with a physical maintenance key held by the point-of-care coordinator or director. The emergency system (Table 4, D) enables the point-of-care coordinator or director to define the conditions for partially overriding the other 3 systems, but if done, demands tracking of the sequence of actions, recovery, and follow-up.
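As an illustration of how an institution might represent one of these 27 combinations in configuration software, the brief Python sketch below encodes a policy object; the names and example policies are hypothetical and serve only to show the combinatorial structure described above.

    from dataclasses import dataclass

    # Hypothetical enumerations mirroring Table 4; the labels are illustrative only.
    SECURITY_TIERS = ("low", "basic", "high")
    VALIDATION_LEVELS = ("standard", "intermediate", "master")
    QC_PRIORITIES = ("routine", "advanced", "critical")

    @dataclass(frozen=True)
    class InstrumentPolicy:
        """One of the 27 possible combinations an institution might select."""
        security_tier: str
        validation_level: str
        qc_priority: str
        emergency_override: bool = False   # Table 4, D: optional, conditional override

        def __post_init__(self):
            assert self.security_tier in SECURITY_TIERS
            assert self.validation_level in VALIDATION_LEVELS
            assert self.qc_priority in QC_PRIORITIES

    # Example: an intensive care analyzer versus a general ward glucose meter.
    icu_policy = InstrumentPolicy("high", "intermediate", "critical", emergency_override=True)
    ward_policy = InstrumentPolicy("basic", "intermediate", "advanced")

    all_combinations = [(s, v, q) for s in SECURITY_TIERS
                        for v in VALIDATION_LEVELS for q in QC_PRIORITIES]
    print(len(all_combinations))  # 27, as noted in the text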

Experts also recommended several important safeguards, which were refined (without ratings) during step 3 (see "Methods") of the consensus process to arrive at the final form shown in Table 5. These safeguards include (a) identifying the patient; (b) reporting and documenting critical results; (c) assuring the integrity of the specimen, formats, test results, and statistical process control; (d) preventing inappropriate use of tests, reagents, calibrators, code keys (device chip inserts with test strip lot and calibration), parameters, and settings; and (e) capturing legally and financially necessary information. Table 5 also addresses detection and tracking of preanalytic (eg, hemolysis), analytic (eg, measurement errors), and postanalytic (eg, process interrupts) problems.

COMMENT

Preventing Medical Errors

The IOM Quality of Health Care Committee guidelines (1) for the prevention of medical errors recommend that national leadership focus on raising expectations, improving safety standards, and implementing safer practices at the delivery level within health care systems, and additionally, that the Food and Drug Administration increase attention to the safe use of drugs and devices. Other IOM recommendations include (a) communicate and disseminate methods of preventing, identifying, and analyzing errors; (b) integrate competency certification and recertification into the delivery of professional services; (c) require error reporting systems in hospital and ambulatory care settings; and (d) enhance patient safety through standardizing and simplifying equipment, supplies, and processes. Identification of error modes, prediction of error expression, training and design for self-detection, and interdiction before transformation into harmful accidents are fundamental to protecting patients (21) and carrying out the IOM recommendations.

Legislation, such as the Medical Error Reduction Act and the Stop All Frequent Errors (SAFE) in Medicare and Medicaid Act, will fund research to identify sources of medical errors and mechanisms for reducing preventable injuries and deaths, including those attributable to diagnostic testing. The legislation also will require providers to measurably reduce errors and affirms the IOM goal of cutting the medical error rate by 50%. These initiatives reflect the high level of national concern about prevention and reduction of medical errors. Recognized methods of error reduction include (a) decreasing reliance on memory and vigilance, (b) error-proofing devices, (c) improving information access, (d) structuring critical tasks with constraints and "forcing functions" (required actions), (e) simplifying key steps, (f) standardizing processes, and (g) training by means of problem simulation to develop skills in failure mode analysis. (1,22)

Diagnostic testing continues to shift from the clinical laboratory to the point of care as devices become smaller, smarter, and faster. (8,9,11-16) It is the responsibility of industry to provide systems that are safe for the intended application, such as the operating room, intensive care unit, clinical specialty ward, outpatient care center, or patient's home. To fulfill IOM guidelines, Health Insurance Portability and Accountability Act regulations, (18) and other sound advice, (21-25) manufacturers should proactively incorporate into instruments the consensus security, validation, performance, and emergency systems presented here, so that users will have the tools necessary to improve the safety of point-of-care testing and to test objectively manufacturers' claims of accurate and precise performance. The Food and Drug Administration, (1,23) which reviews and approves manufacturers' claims, does not currently require nonvalidated operator lockout, noncertified operator lockout, error reduction options (eg, no quality control performed, no proficiency testing completed, or no data transfer done), or software-based safeguards (eg, no patient identification) when licensing new devices and tests for point-of-care testing, but it could. Availability of these options would allow systems managers and point-of-care testing coordinators to track and analyze error-to-injury translation sequences. (24)

The frequency of serious medical errors caused by point-of-care testing is not known, (25) and generally medical error rates may be overestimated or underestimated. (26,27) However, rapid growth of this modality of diagnostic testing in the United States and other countries warrants investment in error detection and prevention, even if only a fraction of the sentinel events or errors actually lead to significant harm, since testing volumes are large and will increase. A systems approach to error prevention like that presented here generally is recognized as productive (1,11,22) and is favored over approaches focusing on repeated discipline of personnel and punitive measures. Point-of-care instruments should be designed with respect for human limitations. Additionally, timely discovery and recognition of errors is essential to improve patient safety.

Enhancing Critical Human Factors

Human factor problems, especially when cumulative opportunities for errors exist, are more likely to occur with technologically complicated instruments and poorly trained operators. Instrument access is a critical control point. Consensus participants uniformly recommended that only certified and validated operators be allowed to perform point-of-care testing and that security standards be comparable to those used for access to other hospital systems, such as computerized medical records. To reduce risk, the basic (or higher) tier of protection features (see Table 4, A) should be used. Use of a personal identification number (PIN) plus an alphanumeric password is preferred over a PIN alone because of the potential for sharing a PIN. In systems that require only a PIN, it is often difficult to track the origin of test results, determine whether results were produced by competent users, and correct deficiencies. For flexibility, all 3 operator-access levels (Table 4, B) should be options on instruments. The master level should be limited to the director and/or coordinator in charge of the point-of-care testing program.
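A minimal sketch of what PIN-plus-password gating with an audit trail might look like in device middleware follows; the hashing scheme, field names, and credentials are illustrative assumptions only, not a prescription for production security.

    import hashlib
    from datetime import datetime

    class OperatorAccess:
        """Illustrative basic-tier check: PIN plus password, with every attempt audited."""

        def __init__(self):
            self._credentials = {}   # operator PIN -> hashed PIN/password pair (illustrative storage)
            self.audit_log = []      # who tried to use the instrument, when, and with what outcome

        @staticmethod
        def _hash(pin, password):
            return hashlib.sha256(f"{pin}:{password}".encode()).hexdigest()

        def enroll(self, pin, password):
            self._credentials[pin] = self._hash(pin, password)

        def authenticate(self, pin, password):
            ok = self._credentials.get(pin) == self._hash(pin, password)
            self.audit_log.append({"pin": pin, "time": datetime.now().isoformat(), "granted": ok})
            return ok

    access = OperatorAccess()
    access.enroll("1234", "RNwardB7")               # hypothetical operator credential
    print(access.authenticate("1234", "RNwardB7"))  # True; the attempt is recorded for tracking
    print(access.authenticate("1234", "guess"))     # False; failed attempts also are audited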

Competency is the ability to perform all phases of point-of-care testing correctly. Competency can be documented through certifications that assure individual operators meet training and experience standards. Competency was deemed crucial by consensus participants, a position similar to that of the Joint Commission on Accreditation of Healthcare Organizations (28,29) and other accreditation agencies. However, each health system must define its own goals for operator training, certification, and recertification, as well as its own criteria for operator validation. Training and certification alone probably will not prevent errors. Resolution through periodic human contact and communication is essential. For example, monthly e-mail notification of quality control compliance and within-control scores improves operator performance. (15) Real-time quality control interrupts (Table 4, C) should be carefully designed to match clinical needs, fulfill manufacturers' specifications, and integrate accreditation requirements (eg, no patient results if quality control is out of control range). Automated internal quality control ("wet" solutions and/or "dry" electronic) performed concurrently with patient testing is an efficient approach.
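A simple rule of the kind described here, locking out operators whose certification has lapsed or whose recent test volume is too low, might be sketched as follows; the quarterly threshold and messages are hypothetical, institution-defined values.

    from datetime import date, timedelta

    def operator_is_validated(cert_expiry, tests_last_quarter, min_tests=10, today=None):
        """Illustrative validation rule (see Table 2, C): lock out operators whose
        certification has expired or who performed too few tests in the interval.
        The threshold of 10 tests per quarter is a hypothetical value."""
        today = today or date.today()
        if cert_expiry < today:
            return False, "Certification expired; contact the point-of-care coordinator"
        if tests_last_quarter < min_tests:
            return False, "Too few tests performed; recertification required"
        return True, "Operator validated"

    print(operator_is_validated(date.today() + timedelta(days=90), tests_last_quarter=25))
    print(operator_is_validated(date.today() - timedelta(days=1), tests_last_quarter=25))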

Safety improvements made during the 4 stages of review (in step 3) ensure that users cannot bypass the security, validation, and performance systems, except in emergencies (see Table 4, D). Following emergencies, the same users are held accountable within 24 hours for their actions, which are tracked through their PIN- and/or password-gated entry to the instrument. All such operators should be trained on site-specific instruments made available for emergencies and disasters. Emergency override allows testing in critical situations when not testing could harm patients and expose the hospital to excess liability. Emergency override is problematic (especially for blood gases) because of (a) potential errors in clinical decisions based on erroneous test results, (b) management difficulties controlling emergency access, and (c) the resulting inconsistent testing practices. A viable option is to send emergency specimens to the core laboratory by courier or to call laboratory personnel to perform testing on site.

However, during a rapidly evolving disaster, communications may be interrupted or sites of emergency care may be physically inaccessible, necessitating testing by available clinical personnel. Therefore, the operation of the emergency override system is conditional to diminish the drawbacks of override through requirements for (a) formal planning of staff who will have access, (b) training and certifying of these individuals, (c) documenting written attestations that PINs and passwords will not be shared, and (d) performing quality control in advance as part of daily workflow so that emergency-designated instruments are perpetually ready. With this conservative approach, security features will not obstruct medically needed rapid-response testing during emergencies and disasters.
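One way the conditional override and 24-hour accountability described above could be modeled in software is sketched below; the class, the "break box" code, and the reconciliation window are illustrative assumptions consistent with Table 4, D, not an actual implementation.

    from datetime import datetime, timedelta

    class EmergencyOverride:
        """Illustrative conditional override: testing proceeds, but the episode is flagged
        and the operator must reconcile it within an accountability window."""

        def __init__(self, accountability_hours=24):
            self.window = timedelta(hours=accountability_hours)
            self.open_episodes = []

        def invoke(self, operator_pin, emergency_code, authorized_codes, when=None):
            if emergency_code not in authorized_codes:
                return False  # no override without a predefined emergency code
            self.open_episodes.append({
                "operator": operator_pin,
                "start": when or datetime.now(),
                "reconciled": False,
            })
            return True  # testing enabled; episode flagged for follow-up

        def overdue(self, now=None):
            now = now or datetime.now()
            return [e for e in self.open_episodes
                    if not e["reconciled"] and now - e["start"] > self.window]

    override = EmergencyOverride()
    override.invoke("1234", "BREAK-BOX-7", authorized_codes={"BREAK-BOX-7"})
    print(override.overdue())  # empty now; lists unreconciled episodes after 24 hours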

Using the Error Prevention Systems and Safeguards

Figure 2 illustrates how the security, validation, and performance systems can overlap, possibly causing increased workload, process delays, and higher costs. Thus, these key elements must be efficient and professionally managed by a collaborative team of pathologists and clinicians who establish and enforce the rules in conjunction with the point-of-care director and coordinator. For example, a transportable whole-blood analyzer with several biosensor-based tests could incorporate radiofrequency badges for definitive but fast operator identification; 3 levels of access control; and a physical maintenance key to release the instrument once fatal quality control failures are corrected. In contrast, a consumer meter for self-monitoring of blood glucose levels could use PIN-gated access and remind the user to perform a quality control check each new day of testing. Except for patient training in self-monitoring (eg, glucose or prothrombin time) prior to discharge, the use of open-access consumer meters, which anyone could operate in hospital settings, invites errors and is not recommended. Protection features, access levels, and quality control interrupts can be used in conjunction with tracking, monitoring, and auditing the rate of nonvalidated operator use and other compliance metrics to identify errors by function, correct them promptly, and reduce occurrence rates. On simple instruments, quality control results can be reported as "pass/fail" to prevent quality control routines from being misused for patient testing.

[FIGURE 2 OMITTED]
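The rate of nonvalidated operator use mentioned above is one compliance metric a data manager could compute from downloaded audit records; the following sketch assumes a hypothetical log format and is intended only to show how such tracking might be automated.

    def nonvalidated_use_rate(audit_log):
        """Illustrative compliance metric: fraction of instrument uses attributable to
        operators who were not validated at the time of testing (field names are hypothetical)."""
        if not audit_log:
            return 0.0
        invalid = sum(1 for entry in audit_log if not entry["operator_validated"])
        return invalid / len(audit_log)

    log = [
        {"operator": "1234", "operator_validated": True},
        {"operator": "5678", "operator_validated": False},
        {"operator": "1234", "operator_validated": True},
        {"operator": "9999", "operator_validated": False},
    ]
    print(f"{nonvalidated_use_rate(log):.0%}")  # 50% -- flag for corrective action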

Safeguards (Table 5) can be implemented through instrument system software. The most important safeguards were identifying the patient, integrating critical limits, (30-33) and assuring the integrity of specimens, formats, test results, and statistical process control. Some safeguards, such as verifying the sequence and timing of patient test results, quality control, and proficiency testing for Joint Commission on Accreditation of Healthcare Organizations accreditation inspections, require a combination of "smart" instruments, data archiving, and collaborative management of the hospital-based point-of-care testing program to complete the "quality loop." (28) Fulfillment of other requirements, such as alerting the operator to interference from confounding variables (34) and drugs, (35) requires bidirectional connectivity with hospital computerized databases. Root-cause analyses show that serious adverse events frequently are attributed to drug accidents. (1,21) If a drug regimen interferes with test methods, physicians making rapid bedside decisions should be informed of possible analytic errors with a viewable comment when analyte results are displayed.
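The drug-interference comment described above could be attached to displayed results roughly as follows; the interference table entries here are placeholders, and a real system would draw them from the method's package insert and the hospital pharmacy and laboratory databases.

    # Hypothetical interference table; entries are placeholders only.
    INTERFERENCES = {
        ("glucose", "icodextrin"): "Possible analytic interference with some meter chemistries; confirm in core laboratory",
        ("glucose", "ascorbic acid"): "Possible analytic interference with some meter chemistries; confirm in core laboratory",
    }

    def annotate_result(analyte, value, units, active_drugs):
        """Append a viewable interference comment when an active drug may affect the method."""
        comments = [msg for (a, drug), msg in INTERFERENCES.items()
                    if a == analyte and drug in active_drugs]
        return {"analyte": analyte, "value": value, "units": units, "comments": comments}

    print(annotate_result("glucose", 142, "mg/dL", active_drugs={"icodextrin", "heparin"}))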

Connecting Point-of-Care Testing

Connectivity is critical to future problem solving (Figure 3). Error prevention features cannot be implemented easily without bidirectional connectivity. Table 6 lists user connectivity specifications summarized from proceedings of the CIC, a nonprofit corporation launched October 20, 1999. Connectivity standards, which will be published by the National Committee for Clinical Laboratory Standards, will facilitate secure data transmissions and help organize patient information. For example, bidirectional communication is essential for synchronizing time and date functions, providing interpretive information with test results, and sending alerts to point-of-care operators (eg, hematocrit out of range during glucose meter testing). Important user requirements encompass confidentiality, security, legality, compatibility, interoperability, timeliness, and convenience of processes, data, communications, and software. Members of CIC believe that connectivity should not increase therapeutic turnaround time (the time from test order to patient treatment (4,5)) for point-of-care test results. Fast therapeutic turnaround is associated with improved patient care. (36) Connectivity will improve the efficiency and efficacy of the error-prevention systems, facilitate use of safeguards through shared instrument software, help track and store error episodes in clinical information systems for identification of serious accidents, and allow point-of-care informatics to merge with computerized hospital systems and clinical data repositories.

[FIGURE 3 OMITTED]
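As a rough illustration of the kind of bidirectional, time-stamped result message connectivity makes possible, the sketch below packages a point-of-care result for transfer; the JSON field names are hypothetical and do not represent the schema of the published connectivity standard.

    import json
    from datetime import datetime, timezone

    def package_result(device_id, operator_id, patient_id, analyte, value, units, kind="patient"):
        """Illustrative result message for bidirectional transfer to an information system.
        Field names are hypothetical; the actual schema is defined by the connectivity standard."""
        return json.dumps({
            "device_id": device_id,
            "operator_id": operator_id,
            "patient_id": patient_id,
            "observation": {"analyte": analyte, "value": value, "units": units},
            "kind": kind,                                         # patient test, QC, or proficiency testing
            "timestamp": datetime.now(timezone.utc).isoformat(),  # synchronized time and date
        })

    # Upstream: the result travels to the data manager; downstream: the information
    # system can return acknowledgments, operator lists, or alerts to the device.
    print(package_result("meter-07", "1234", "MRN000123", "glucose", 142, "mg/dL"))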

Conclusions: Improving Patient Outcomes

Existing error reduction features (Table 7), especially on portable and handheld instruments, are few or inconsistent, or are absent altogether, and do not adequately assure operator competency or the integrity of test results. Often these features can be bypassed by operating instruments in an open-access mode. Provision of adequate safety options will help decrease the costs of detecting and reducing medical errors. Health system providers, government agencies, accreditation organizations, and industry should collaborate to provide consensus security, validation, performance, emergency, safeguard, and connectivity solutions, (20) in order to identify and prevent errors, improve the performance of point-of-care testing, preserve fast therapeutic turnaround time, and help improve medical and economic outcomes. (37)

Point-of-care testing has become the standard of practice in critical emergencies because of the immediate availability of critical results and fast therapeutic turnaround time. (4,5,36) The goals of point-of-care testing are to provide rapid response and to improve patient outcomes, (4,5) not to increase the frequency of medical errors. One hundred percent of experts recommended the practice of validating trained and certified operators before they use hospital-based instruments for point-of-care testing. Leadership must assure at least this minimum level of safety and should proactively support use of the consensus error-prevention systems. Additionally, collaborative hospital teams (1) and a culture of safety (38,39) are necessary to make safety a primary concern and responsibility among hybrid staff (8,9) participating in a point-of-care testing program.
Table 1. Securing Point-of-Care Operations, Integrity, and Instruments

A. Protect instrument operations, match clinical goals, and avoid repudiation
 1. Establish a security level that matches clinical goals
 2. Correlate acceptable risk with the protection level
 3. Define clinical sites where validated operators can use instruments
 4. Avoid security repudiation

B. Ensure the integrity of test data and confidential information
 1. Protect patient results, monitor access, and track use
 2. Download test results and performance data to computerized systems
 3. Encrypt data as necessary for electronic, telephone, or wireless transmissions
 4. Prevent network security breaches and guard time and date functions

C. Provide practical, intelligent, user-friendly, and user-definable software features
 1. Expedite first clearance when the operator starts using the instrument
 2. Automate recognition of the operator or use short personal identification numbers (PINs) and/or passwords
 3. Streamline subsequent access but maintain user accountability
 4. Build in a date registry so that users cannot bypass time functions

D. Discourage fraudulent use, tampering, and theft
 1. Disable the instrument if the term of calibration or other protocols expires
 2. Lock down if a user repeatedly enters invalid access codes
 3. Shut down if tampering is detected and investigate violations
 4. Sound an alarm and/or lock out if the instrument is transported outside the approved clinical site
Table 2. Validating Certified Point-of-Care Testing Operators

A. Lock out operators not approved to use instruments
 1. Assign users appropriate levels based on competency and responsibility
 2. Time out access after instrument is idle for defined interval
 3. Include a special entry code for instrument set-up
 4. Monitor rate of invalid operator use for performance improvement

B. Make exceptions during medical emergencies
 1. Include an option to override operator lockout during emergencies
 2. Provide emergency access with a special code
 3. Assure testing does not shut down during emergencies
 4. Flag and track test results and operators for follow-up

C. Integrate competency requirements for accreditation
 1. Fulfill accreditation requirements for operator credentials, certification, and documentation
 2. Lock out user after expiration of certification or too few tests performed in time interval
 3. Prevent operators who are not retrained or not recertified from using instrument
 4. Notify operator of why locked out, corrective action, and contact person

D. Balance validation requirements versus rapid response and user friendliness
 1. Tailor validation requirement to instrument format (eg, handheld, portable, or transportable)
 2. Maintain appropriate therapeutic turnaround time for critical care and other clinical needs
 3. Focus on ease of use, flexibility, and cost-effectiveness through user-defined options
 4. Discourage inadvertent use outside hospital (eg, by children at home)
Table 3. Improving the Performance of Point-of-Care Testing

A. Block patient testing if required quality control (QC) procedures are not performed
 1. Match requirements for QC and QC timing to clinical priorities
 2. Decide when to implement routine, urgent, or critical QC interrupts
 3. Suppress patient test results if QC results are unacceptable
 4. Build in exceptions for emergencies

B. Use problem-based interactive training and explicit "live" action commands
 1. Simulate performance problems to train for certification
 2. Prioritize built-in, easy-to-understand comments to the point-of-care operator
 3. Avoid cryptic comment codes for the most important communications
 4. Display action commands directly on the instrument readout

C. Force download of patient results, capture performance monitoring data, and track
 1. Schedule periodic download of patient results (ideally real-time)
 2. Include QC and proficiency testing (PT) data
 3. Lock out instrument use if download is not done
 4. Compare point-of-care and core laboratory test results periodically
 5. Track with automated performance summaries over time

D. Defeat fraudulent manipulation of quality assurance processes
 1. Protect QC/PT processes and disallow patient testing using QC/PT modes
 2. Warn user if he or she attempts to bypass QC/PT protocols
 3. Lock out operator if he or she accumulates repeated QC run failures
 4. Disable instrument if user enters fictitious QC/PT data
Table 4. Systems to Prevent Medical Errors in Point-of-Care Testing

A. Security System for Point-of-Care Instruments

 Low tier. Protection feature: require key entry of a personal identification number (PIN) or password; becomes the basic level if both a PIN and matching password are required. Potential risk: high, because the PIN or password may be shared or entered incorrectly, and somewhat inconvenient, but the least expensive.

 Basic tier. Protection feature: use a badge barcode reader, magnetic button or strip scan, radiofrequency badge, or digital certificate/signature (with password). Potential risk: more secure, since users are less likely to swap barcodes, buttons, cards, badges, or certificates.

 High tier. Protection feature: identify the operator biometrically using optical fingerprint, digit angle, iris pattern, voice or face recognition, or retinal scan. Potential risk: least risk, but the most expensive and may be impractical for portable or small handheld devices.

B. Validation System for Access to Results, Testing, and Functions

 Standard level: review and/or download (transfer) test results.
 Intermediate level: above, plus operator enabled to perform patient testing, quality control (QC), and proficiency testing.
 Master level: access to all testing, performance, setup, and instrument functions.

C. Performance System for Quality Control

 Routine priority: QC required at timed intervals (eg, day of use, each shift, or defined period) or when use of the device triggers a QC check.
 Advanced priority: above, plus QC check demanded if range exceeded, result rejected, or error detected.
 Critical priority: all of the above, but instrument use released only with a physical maintenance key following correction of serious problems.

D. Emergency Override System

 1. Emergency personnel must be trained and certified periodically in point-of-care testing.
 2. Hospital defines conditions for emergency access to testing, which is flagged and tracked, following mandatory entry of the operator's PIN. Password (ie, basic security level) or special emergency code (eg, in a "break box") also may be required.
 3. During emergencies, the operator is warned, initiates testing, and is accountable for recovery actions.
 4. For instruments with override capability and those used during disasters, QC is performed in advance as part of daily workflow to assure that each instrument is prepared for critical patient testing.
Table 5. Safeguarding Critical Information and Patients

A. Identify the patient
 1. Use bar-coded wristband or radiofrequency badge (magnetic strip reader for outpatients)
 2. Verify patient identification against an electronically downloaded list (if available)
 3. Cross-check patient against bed location (eg, for critically ill patients in an intensive care unit)
 4. Lock out testing if patient identification is invalid or unavailable

B. Report and document critical results (critical, panic, or alert values)
 1. Build in a list of relevant critical limits and store easily accessible critical results
 2. Annotate, report, and flag test results that are critical immediately
 3. Record individual obtaining or clinician receiving critical results
 4. Request verification of critical test results (user-defined option)

C. Assure the integrity of specimens, formats, test results, and statistical process control
 1. Signal sample quantity not sufficient (QNS), volume inadequate, or wrong anticoagulant
 2. Alert hemolysis, out-of-range hematocrit (or PO2, pH), and interferences (eg, bilirubin, drugs, or lipids)
 3. Report qualitative "<" or ">" if the test result is out of the reportable range
 4. Format storage of patient and sample identification, test results, and quality control/proficiency testing (QC/PT) data
 5. Track and record measurement errors, interinstrument variations, and process interrupts

D. Prevent inappropriate use of tests, reagents, calibrators, code keys, parameters, and settings
 1. Lock out use of unnecessary tests and of invalid or unapproved reagent or calibrator lots
 2. Disallow use of expired reagents, test strips, or QC materials
 3. Stop use if test strips (if used) and code key do not match
 4. Lock down calibration, linearity, reportable range, time, date, and other parameters
 5. Embed preventative maintenance, downloading protocols, and operating rules

E. Capture legally and financially necessary information
 1. Record operator identification in prescribed data fields and append to test result record
 2. Automate entry of responsible clinician or ordering physician
 3. Archive time of specimen collection, results reporting, and system failures
 4. Prepare to verify (for inspectors) that test and matching QC/PT results are acceptable
 5. Integrate accounting, costing, charging, billing, and inventory information
Table 6. User Requirements and Priorities for Connectivity *

A. Transmission, confidentiality, and security
 1. Transmit bidirectionally between point-of-care devices, coordinators, and databases
 2. Use existing communication infrastructures (eg, LAN/WAN, intranet, Internet, and WWW)
 3. Conserve IP addresses and use existing ones where possible
 4. Guarantee confidentiality and security (encrypt as necessary)

B. Accreditation, legality, and regulation
 1. Satisfy regulatory agency guidelines (eg, JCAHO, CAP, COLA, and individual states)
 2. Facilitate test ordering and resulting at the point of care and via computerized system
 3. Enhance performance, quality compliance, and QC/QA training

C. Software and hardware compatibility and interoperability
 1. Standardize device connections ("plug and play") and minimize space needed
 2. Ensure compatibility and interoperability with commercial software
 3. Qualify and archive results with relevant descriptors (eg, patient test, QC, or critical value)

D. Timeliness, convenience, and information content
 1. Minimize therapeutic turnaround time (≤1-2 min preanalytic/postanalytic reception/transmission)
 2. Design an intuitive, functionally simple standard that is easy to use and operate
 3. Qualify and archive results with relevant descriptors (eg, patient test, QC, or critical value)

E. Downloading, uploading, access, flexibility, and integration
 1. Download point-of-care testing data automatically, seamlessly, and on demand
 2. Interface directly to LIS, HIS, CIS, and/or CDR, and avoid intermediates (eg, laptops)
 3. Allow user-defined, multi-level access to test menus/results, QC/QA setup, and order entry
 4. Let operators configure data model, optimize density/content, and customize uses
 5. Integrate informatics fully and automate information processes (eg, automated e-mail alerts)

F. Future requirements
 1. Real-time verification of operator/patient identifications and communication of test results
 2. Wireless operation with locator capability, site annotation, and regional communication
 3. Compatibility/communication with personal digital assistants and other devices
 4. Intelligent processing and interpretation of information using expert systems
 5. Efficient data retrieving, mining, sharing, analyzing, trending, and managing
 6. Information consolidation and condensation for outcomes optimization

* Top ten connectivity priorities are in italics. LAN indicates local area network; WAN, wide area network; WWW, World Wide Web; IP, Internet protocol; JCAHO, Joint Commission on Accreditation of Healthcare Organizations; CAP, College of American Pathologists; COLA, Commission on Office Laboratory Accreditation; QC, quality control; QA, quality assurance; LIS, laboratory information system; HIS, hospital information system; CIS, clinical information system; CDR, clinical data repository.
Table 7. Current Status of 6 Key Error Reduction Functions on Point-of-Care Testing Devices

Security: Several commonly used point-of-care instruments run only in open-access mode. Protection varies from none to personal identification numbers (PINs) and/or passwords on handhelds and portables. Even when protection features are available, an option for open access is typical.

Validation: Nonvalidated operator lockout is not common on portables and handhelds. Some devices offer basic and intermediate access levels via computer data stations. Downloading and uploading of valid operator lists typically are cumbersome.

Performance: Quality control (QC) lockout is available on a few portable and handheld point-of-care devices. Nurses and physicians are reluctant to implement QC lockout at the bedside. The most practical approach would be automated QC concurrent with patient testing.

Emergency: Methods of emergency override at the point of care are not proven. Override is dangerous if it allows potentially inaccurate testing. Experts feel that emergency operators should be trained, certified, and validated.

Safeguards: Lockout of expired QC materials and incorrect code keys is a common strength. Lack of positive patient identification is a weakness of portables and handhelds. Emerging algorithmic software offers some user-definable safeguards.

Connectivity: Transportables, as well as some portables and handhelds with computer data stations, have basic connectivity. Bidirectional communication generally is not available for portable and handheld point-of-care devices. Handheld devices use proprietary docking stations, which are not interchangeable and waste space at the bedside or nursing station.


The author sincerely thanks each of the 212 participants who contributed to this study.

References

(1.) Kohn LT, Corrigan JM, Donaldson MS, eds; Committee on Quality of Health Care in America, Institute of Medicine. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.

(2.) Plebani M, Carraro P. Mistakes in a stat laboratory: types and frequency. Clin Chem. 1997;43:1348-1351.

(3.) Witte DL, VanNess SA, Angstadt DS, Pennell BJ. Errors, mistakes, blunders, outliers, or unacceptable results: how many? Clin Chem. 1997;43:1352-1356.

(4.) Kost GJ. Guidelines for point-of-care testing: improving patient outcomes. Am J Clin Pathol. 1995;104:S111-S127.

(5.) Kost GJ, Ehrmeyer SS, Chernow B, et al. The laboratory-clinical interface: point-of-care testing. Chest. 1999;115:1140-1154.

(6.) Kilgore ML, Steindel SJ, Smith JA. Continuous quality improvement for point-of-care testing using background monitoring of duplicate specimens. Arch Pathol Lab Med. 1999;123:824-828.

(7.) Stephans EJ. Hospital point-of-care survey report (Enterprise Analysis Corporation). Presented at: Connectivity Industry Consortium Meeting; October 20, 1999; Redwood City, Calif.

(8.) Kost GJ. The hybrid laboratory: shifting the focus to the point of care. MLO Med Lab Obs. 1992;24(9S):17-28.

(9.) Kost GJ. Point-of-care testing → the hybrid laboratory → knowledge optimization. In: Kost GJ, ed. Handbook of Clinical Automation, Robotics, and Optimization. New York, NY: John Wiley & Sons; 1996:757-838.

(10.) Kost GJ. Connectivity: the millennium challenge for point-of-care testing [editorial]. Arch Pathol Lab Med. 2000;124:1108-1110.

(11.) Kost GJ. Optimizing point-of-care testing in clinical systems management. Clin Lab Manage Rev. 1998;12:353-363.

(12.) Kost GJ, Hague C. In vitro, ex vivo, and in vivo biosensor systems. In: Kost GJ, ed. Handbook of Clinical Automation, Robotics, and Optimization. New York, NY: John Wiley & Sons; 1996:648-753.

(13.) Kost GJ. Point-of-care testing in intensive care. In: Tobin MJ, ed. Principles and Practice of Intensive Care Monitoring. New York, NY: McGraw-Hill; 1998: 1267-1296.

(14.) Kost GJ. Planning and implementing point-of-care testing systems. In:Tobin MJ, ed. Principles and Practice of Intensive Care Monitoring. New York, NY: McGraw-Hill; 1998:1297-1328.

(15.) Louie RF, Tang Z, Shelby DG, Kost GJ. Point-of-care testing: millennium technology for critical care. Lab Med. 2000;31:402-408.

(16.) Kost GJ. Point-of-care testing. In: Myers RA, ed. Encyclopedia of Analytical Chemistry: Instrumentation and Applications. New York, NY: John Wiley & Sons; 2000:chap 540.

(17.) Jenny RW, Jackson-Tarentino KY. Causes of unsatisfactory performance in proficiency testing. Clin Chem. 2000;46:89-99.

(18.) Ballam H. HIPAA, security and electronic signature: a closer look. J AHIMA. 1999;70:26-30.

(19.) Nichols J, ed. User needs (prepared by the CIC Provider Review Committee). Version 2.0; and Provider Review Committee Survey. Available at: http://www.poccic.org. Accessed 2000.

(20.) Connectivity Industry Consortium. AACC Milestone Status, San Francisco, July 2000 (published by Agilent Technologies); and Standards Documents, 2001 (also published by National Committee for Clinical Laboratory Standards, Wayne, Pa; document AUTO 6-P). Available at: http://www.poccic.org.

(21.) Senders JW. Medical error and mental acts of God (MAOG). Available at: http://www.ergogero.com. Accessed 2000.

(22.) Leape LL. Error in medicine. JAMA. 1994;272:1851-1857.

(23.) Witte DL, Astion ML. Panel discussion: how to monitor and minimize variation and mistakes. Clin Chem. 1997;43:880-885.

(24.) Senders JW. On errors, incidents and accidents. Available at: http://www.ergogero.com. Accessed 2000.

(25.) Witte DL, Van Ness SA. Frequency of unacceptable results in point-of-care testing [editorial]. Arch Pathol Lab Med. 1999;123:761.

(26.) McDonald CJ, Weiner M, Hui SL. Deaths due to medical errors are exaggerated in Institute of Medicine report. JAMA. 2000;284:93-95.

(27.) Leape LL. Institute of Medicine medical error figures are not exaggerated. JAMA. 2000;284:95-97.

(28.) Quality Point of Care Testing: A Joint Commission Handbook. Oakbrook Terrace, Ill: Joint Commission on Accreditation of Healthcare Organizations; 1999:94.

(29.) How to Meet the Most Frequently Cited Laboratory Standards. Oakbrook Terrace, Ill: Joint Commission on Accreditation of Healthcare Organizations; 2001.

(30.) Kost GJ. Critical limits for urgent clinician notification at US medical centers. JAMA. 1990;263:704-707.

(31.) Kost GJ. Critical limits for emergency clinician notification at United States children's hospitals. Pediatrics. 1991;88:597-603.

(32.) Kost GJ. Using critical limits to improve patient outcome. MLO Med Lab Obs. 1993;25:22-27.

(33.) Kost GJ. Designing critical limit systems for knowledge optimization. Arch Pathol Lab Med. 1996;120:616-618.

(34.) Louie RF, Tang Z, Sutton DV, Lee JH, Kost GJ. Point-of-care testing: effects of critical care variables, influence of reference instruments, and a modular glucose meter design. Arch Pathol Lab Med. 2000;124:257-266.

(35.) Tang Z, Du X, Louie RF, Kost GJ. Effects of drugs on glucose measurements with handheld glucose meters and a portable glucose analyzer. Am J Clin Pathol. 2000;113:75-86.

(36.) Kilgore ML, Steindel SJ, Smith JA. Evaluating stat testing options in an academic health center: therapeutic turnaround time and staff satisfaction. Clin Chem. 1998;44:1597-1603.

(37.) Kern DA, Bennett ST. Quality improvement in the information age. MLO Med Lab Obs. 1999;31(12):24-28.

(38.) Sirota RL. The Institute of Medicine's report on medical error: implications for pathology. Arch Pathol Lab Med. 2000;124:1674-1678.

(39.) Richardson WL, Berwick DM, Bisgard JC, et al. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001. Also available at: http://www.nap.edu/catalog/10027.html.

Accepted for publication May 21, 2001.

From the Point-of-Care Testing Center for Teaching and Research; Department of Medical Pathology, School of Medicine; Clinical Chemistry, University of California, Davis, Health System; and Biomedical Engineering, University of California, Davis.

Reprints: Gerald J. Kost, MD, PhD, Department of Medical Pathology, 3453 Tupper Hall, School of Medicine, University of California, Davis, CA 95616 (e-mail: gjkost@ucdavis.edu).
