Keywords:

  • Laboratory error;
  • laboratory standards;
  • reference interval;
  • veterinary in-clinic analysis

Abstract

Background

While quality assurance plans for in-clinic analyzers, and the lack thereof, have been discussed at ASVCP meetings, few published data exist regarding in-clinic quality assurance and control practices.

Objectives

The purpose of this study was to identify the equipment commonly used for in-clinic hematologic, biochemical, urinalysis, and other testing, and to assess the quality control and assurance programs currently performed in-clinic.

Methods

All members of the Veterinary Information Network (VIN) were solicited to participate in an online survey between July and September 2007.

Results

In total, 452 complete or partial responses were received. Eighty-nine percent of respondents (361/404) said that veterinary technicians (unlicensed, licensed, and registered) performed the majority of analyses. Eighty-eight percent (366/417) of respondents performed some quality assurance on their laboratory equipment, most commonly on chemistry (91%, 324/357), and hematology (84%, 292/347) analyzers, and least commonly on fecal analyses (57%, 148/260) and ELISA assays (25%, 65/256). Ignorance of how to perform quality assurance was the most commonly stated reason (49%, 25/51) for lack of a quality assurance program. The majority of practices (316/374) utilized manufacturer-provided reference intervals without further adjustment or assessment. Roughly one-third of respondents (126/374) used reference intervals from textbooks, which is discouraged by ASVCP guidelines.

Conclusions

This study found that the majority of respondents were not in compliance with ASVCP guidelines, illustrating the need for improved education of technical staff, veterinary students, and veterinarians regarding limitations of in-clinic laboratory equipment and the importance of regular quality control, maintenance, training, and reference interval development.


Introduction

Medical professionals strive to provide optimal care to their patients. While difficult to quantify, one study in the human literature estimated that laboratory analyses provided 43% of the data used by clinicians to make clinical decisions in an intensive care setting.[1] While similar estimates are not available for veterinary clinicians, the frequent lack of patient history, vague descriptions of clinical signs, and inability to question patients about symptoms would all suggest that veterinarians also rely heavily on laboratory analyses for diagnosis and monitoring of treatment. In an effort to deliver more timely patient care as well as maximize profit, many clinics have made significant financial investments in the purchase of in-clinic laboratory equipment. In-clinic equipment (equipment at or near the site of patient care) is designed to decrease turnaround time, thereby theoretically improving patient outcomes and owner satisfaction. However, such goals are only achievable if the quality of laboratory data on which the clinician bases his/her decision-making is high; specifically, that the data are accurate, precise, and reproducible from day to day.

Quality laboratory medicine requires continual maintenance through a quality assurance program regardless of the size of the laboratory; from large commercial facilities to universities to in-clinic point-of-care laboratories, a culture that promotes and maintains that quality must be a goal. Development and implementation of more complete quality control programs were recently identified as areas for improvement by a group of European veterinary medical laboratories seeking ISO 15189 certification.[2] A recent study suggested that performance of in-clinic biochemical analyzers was significantly worse than that of reference laboratories, and that commonly used in-clinic analyzers periodically and unpredictably failed quality assurance.[3] Additionally, certain analytes tended to be problematic with most analyzers. In all cases, the clinicians participating in that study were unaware of the inaccuracy and imprecision of their own in-clinic analyzers, and did not routinely undertake quality control measures to examine and validate performance. Subsequently, recommendations for quality assurance in veterinary practice were developed; these have recently been published and will be updated regularly.[4-6] Therefore, we conducted a survey to assess the current state of in-clinic instrumentation, frequency of use, quality control and assurance, training, and continuing education programs. We sought to examine whether clinicians with in-clinic analyzers perform quality control procedures that would provide them with confidence in the results they generate.

Materials and Methods

An online survey was developed by a psychometrician, reviewed by a panel of veterinarians, and opened to all members of the Veterinary Information Network (VIN) from July to September 2007. For some questions, respondents were allowed to choose more than one option when a single choice could not supply a complete answer. Not all respondents provided answers for each question, and some questions allowed for multiple responses; therefore, sum totals may differ between questions. When questions allowed for multiple responses, percentages were not reported. The questionnaire included 28 questions (Appendix S1) with 3 main areas of interest. The first area of interest established each respondent's main type of practice, geographic location, level of education, access to continuing education, and his/her definition of quality assurance (QA). The second established the types of analyses performed in-clinic, the types of equipment utilized, and the level of education of those performing the majority of analyses. The final area of interest attempted to quantify the number of practices performing QA, as well as to establish the schedule of quality control (QC) products, instrument maintenance, and sources of reference intervals (RI). Survey data were collated and analyzed descriptively. No statistical analysis was performed.

Results

Demographics

Of the approximately 3200 VIN member groups to whom the survey was made available, 14% (452) provided responses sufficient for further analysis. Demographic details of respondents are provided in Table 1. Respondents who listed their practice type as “other” wrote in responses such as technician, emergency, university, or diagnostic laboratory. The majority of clinics certified by the American Animal Hospital Association (AAHA) were general small animal practices (78%, 95/122), although a minority were mixed animal, referral, university, or emergency facilities, or respondents did not specify. Other accrediting agencies listed by respondents were primarily state and provincial agencies; only 3 respondents listed American Association of Veterinary Laboratory Diagnosticians (AAVLD) certification.

Table 1. Demographics of the veterinary practice population surveyed for point-of-care instrumentation, analysis, and quality assurance. Values are number (%).

Practice Type (total = 452)
  Small animal general: 311 (69)
  Mixed general: 30 (7)
  Small animal emergency: 19 (4)
  Academia: 17 (4)
  Small animal referral: 13 (3)
  Large animal general: 2 (< 1)
  Other: 60 (13)

Accreditation (total = 450)
  None: 286 (64)
  AAHA: 122 (27)
  Other: 42 (9)

Geographic Region (total = 450)
  US: 380 (84)
  Canada: 31 (7)
  Europe: 13 (3)
  Asia: 12 (3)
  Australia/New Zealand: 8 (2)
  Other: 6 (1)

Advanced Training (total = 432)
  Graduate degree: 82 (19)
  Board certification: 64 (15)

Continuing Education (total = 444)
  Veterinarians > 10 hours/year: 301 (68)
  Technicians < 10 hours/year: 223 (51)

Instrumentation, usage, and maintenance

The vast majority of respondents had an in-clinic laboratory (92%; 417/452). Of those who responded affirmatively, 99% (413/417) performed clinical chemistry profiles, 94% (390/417) performed FeLV/FIV SNAP tests, 93% (388/417) performed urinalyses, 90% (375/417) performed hematology profiles, 86% (360/417) performed fecal analyses, 81% (337/417) performed heartworm ELISA tests, 31% (131/417) performed clotting assays, 26% (110/417) performed blood gas analyses, and 28% (118/417) performed some other in-clinic testing, such as hormone/enzyme assays, bacterial/fungal culture, cytology, electrolyte analyses, etc. The next question inquired which manufacturer(s) supplied equipment for the in-clinic laboratory; results are provided in Table 2. Reference laboratories commonly had systems not listed in the question.

Table 2. Manufacturer of in-clinic laboratory equipment used by respondents surveyed for point-of-care instrumentation, analysis, and quality assurance.
Manufacturer: Number of Clinics with at Least One Analyzer (412 total responses)
  Idexx: 292
  Abaxis: 113
  Heska: 111
  Siemens: 13
  QBC: 13
  Scil: 10
  Synbiotics: 7
  Diavant: 7
  Oxford Science: 7
  Hemagen: 4
  Drew Scientific: 3
  Other: 27

Manufacturer locations: Idexx, Westbrook, ME, USA; Abaxis, Union City, CA, USA; Heska, Loveland, CO, USA; Siemens Corp., Tarrytown, NY, USA; QBC Diagnostics, Inc., Port Matilda, PA, USA; Scil Animal Care Company, Gurnee, IL, USA; Synbiotics Corp., Kansas City, MO, USA; Diavant, Indianapolis, IN, USA; Oxford Science, Inc., Oxford, CT, USA; Hemagen Diagnostics, Inc., Columbia, MD, USA; Drew Scientific Group, Dallas, TX, USA.

Eighty-nine percent of respondents (361/404) said that veterinary technicians (unlicensed, licensed, and registered) performed the majority of diagnostic analyses. Respondents were allowed to choose more than one option, as the survey designers recognized that persons possessing a certain level of education may perform the majority of one type of analysis, but not another. Unlicensed technicians performed most of at least one type of testing in 249/404 responses, while 229/404 stated that licensed/registered technicians performed the majority of at least one type of analysis. Technicians or other support staff performed routine maintenance 76% (315/417) of the time, 10% (41/417) had the vendor provide maintenance service, 6% (25/417) said that the practice owner performed maintenance, and 3% (13/417) stated that they did not perform any routine maintenance at all. A third party provided maintenance services for instruments utilized by 1% (5/417) of respondents. Four percent of respondents (18/417) either did not know or gave an answer that did not fit into one of the listed categories, such as “our machine is basically maintenance free” or “does not need maintenance” (Figure 1).

Figure 1. Percentages of persons performing routine laboratory maintenance in veterinary practices based on a survey on point-of-care instrumentation, analysis, and quality assurance.

Quality Control and Assurance

The section regarding QA and QC began by asking respondents to write in definitions of these terms. Individual responses ranged from “no idea” to “something everyone has to be aware of” to “checking a ‘normal’ animal periodically to make sure machines are accurate.” Over 50% of respondents used some form of the word “accurate” in their definition.

Fourteen percent (60/417) of respondents did not maintain any form of Standard Operating Procedures (SOP) information, while 84% (348/417) provided a manual either at the machine, in a binder, or to each employee. Two percent (9/417) of respondents stated that they were unsure what an SOP was.

The majority of respondents (88%; 366/417) performed some type of QA on their laboratory equipment. When asked what type(s) of QA were performed, 303/365 had a formal schedule for running control materials and 183/365 regularly compared send-out and in-house results on duplicate samples. QA was performed erratically or on some other schedule by 32/365 respondents. Ninety-three percent (339/364) of respondents performed QA on clinical chemistry analyzers, 82% (299/364) performed QA on hematology analyzers, 55% (199/364) performed QA on electrolyte analyzers, 36% (132/364) performed QA on urine analyses, 20% (76/364) performed QA on fecal analysis, and 14% (49/364) performed QA on ELISA assays.

The majority of respondents claimed to use one or 2 control materials for biochemistry (71%; 231/327) and hematology (68%; 211/309) QC. The majority (60%, 156/263) used no QC materials for urine analyses. Of the 32 respondents who stated that 3 or more hematology control materials were used, 44% (14/32) worked in referral facilities. However, only 13% of respondents claiming to use 3 or more biochemistry control materials and 13% of respondents claiming to use 3 or more urine control materials (7/52 and 2/15, respectively) could be clearly identified as working in referral facilities.

When asked what method the respondent used to determine acceptability of QC performance, 56% (204/362) of respondents used a mean ± 2 standard deviations (SD) rule provided by the manufacturer, 16% (57/362) determined their own SD and coefficient of variation (CV) and used statistical QC to validate performance, 8% (27/362) used a mean ± 3 SD rule, and 5% (19/362) relied on instrument representatives to determine QC acceptability. The remaining respondents (15%; 55/362) did not run controls regularly, did not run controls at all, or wrote in an alternative answer, such as “not sure what clin path does” or “I don't know.”
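To illustrate how such a control rule can be applied in practice, the following Python sketch shows a basic mean ± 2 SD acceptance check and the calculation of a laboratory's own SD and CV from accumulated control results. The analyte, target values, and control results are illustrative assumptions for this sketch, not data from the survey or from any manufacturer's insert.

# Minimal sketch of a mean +/- 2 SD control-acceptance check; all values are
# illustrative assumptions, not survey data or manufacturer specifications.
from statistics import mean, stdev

def within_2sd(observed, target_mean, target_sd):
    """Return True if a control result falls within target_mean +/- 2 SD."""
    return abs(observed - target_mean) <= 2 * target_sd

# Example: an assumed glucose control target of 100 mg/dL with an SD of 4 mg/dL
print(within_2sd(109.0, 100.0, 4.0))  # False -> investigate before reporting patient results

# Laboratories that determine their own SD and CV accumulate repeated control
# measurements and derive the statistics themselves:
control_history = [98.5, 101.2, 99.8, 102.4, 97.9, 100.6, 103.1, 99.2]
own_sd = stdev(control_history)
own_cv = 100 * own_sd / mean(control_history)  # coefficient of variation, %
print(f"SD = {own_sd:.2f} mg/dL, CV = {own_cv:.1f}%")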

Of the 51 respondents who stated that they did not perform QA on their in-clinic laboratory equipment, 49% (25/51) stated that they were unsure how to perform QA, 27% (14/51) stated that company representatives had informed them that there was no need to perform QA, 14% (7/51) did not know that QA was required, 13% (6/51) felt it took too much time, 6% (3/51) felt it was cost-prohibitive, and 31% (16/51) answered “other.”

When asked about confirming automated hematology results, 33% (122/367) only performed a manual WBC differential count if the sample was flagged with a problem by the automated analyzer, 27% (98/367) performed a manual backup of WBC differential counts either all the time or most of the time, and 24% (89/367) relied on either a 3-part or 5-part automated WBC differential count without blood smear review. Sixteen percent (58/367) checked either “other” or made comments such as “occasionally perform blood smears if parameters are out of whack…” and “…estimated WBC can be off by 50%… so what is the point?” (Figure 2).

Figure 2. Frequency of manual review of automated CBC differential counts in veterinary practices based on a survey on point-of-care instrumentation, analysis, and quality assurance.

When asked about logging and analyzing ELISA serologic data to monitor changes in disease incidence, 217/371 did not keep track of ELISA results for any diseases, 75/371 stated that they kept track either daily and/or weekly, and 71/371 stated that they kept track monthly and/or annually; 34/371 checked “other” and/or added comments such as “by impression only” or “if we see they are too often positive—we replace the tests for new ones.”

When respondents were asked what resource(s) they used to set RIs for their in-clinic laboratory analyzers, 316/374 used the manufacturer's recommended range, 126/374 used RIs published in textbooks, 120/374 used online resources, and 99/374 established their own RIs for their analyzers. A small minority (16/374) answered “other” or provided comments including “don't know—think we ran a bunch of dogs” and “the results have reference ranges printed on the result form.”

Discussion

This survey found that > 90% of clinics owned laboratory equipment and that the vast majority of analyses were performed by technicians at varying levels of certification. There was large variation in the knowledge base regarding QA and QC, probably reflecting the broad response demographic, ranging from technologists to board-certified clinical pathologists. Respondents often used words like “accurate,” “consistent,” “precise,” and “reliable” when asked to define QA and QC, but there was little discernible distinction between the 2 terms when write-in definitions were requested. Overall, the findings of this survey stress the need for education of veterinary personnel at all levels of experience and education regarding the importance of regular QC, maintenance, and training for in-clinic laboratory analyses. While there are currently no enforceable regulations for QA programs in the veterinary field, the Quality Assurance and Laboratory Standards (QALS) committee of the American Society for Veterinary Clinical Pathology (ASVCP) has recently published reference guidelines to which all US in-clinic veterinary laboratories should adhere.[7] As these guidelines represent a minimum standard of QA and QC for veterinary clinical laboratories (in-clinic laboratories included), any deviation from these guidelines represents an area for improvement.

Additional areas for improvement include awareness of the importance of equipment maintenance, as 7% of respondents either did not perform maintenance, did not know whether maintenance was performed, or were unable to provide an answer that fit the standardized categories. Fewer than 100 respondents stated that they had developed or validated the RIs they were currently using, so the majority of respondents were not in compliance with current ASVCP recommendations.[8, 9]

Demographics

Regardless of the education level, training, and competency of the personnel performing in-clinic testing, it is important to ensure that each test yields accurate and precise results, no matter who performs the assay. Theoretically, persons of any level of technical ability should be able to follow a package insert to run point-of-care analyzers. However, many operators may not have the education or expertise to fully understand the many variables that affect results, identify early abnormal trends, or troubleshoot problems.[10, 11] Training options include manufacturer-provided educators, an in-clinic point-of-care trainer, self-study modules, VIN classes and rounds, and continuing education at regional or national veterinary meetings. Staff competency assessment should be performed at least annually and can include performing a test on an unknown specimen, periodic work observation by a superior, monitoring each user's QC performance, or other forms of assessment.[12]

Instrumentation, usage, and maintenance

No matter the instrument manufacturer, in-clinic laboratory equipment is marketed industry-wide as an investment to increase revenues for a practice. Whether this cost benefit is real or perceived, however, is difficult to prove. Total cost estimation should include capital investment in equipment, reagents, QC materials, and waste of expired reagents, as well as technical time involved in training, ordering, maintenance, and QA, etc. Higher volumes yield decreased cost per test, and claims of high profit margins for in-house testing often omit costs associated with maintenance, overhead, spoiled reagents, labor, etc.[13] In-clinic laboratory testing is also frequently implemented based on the theory that faster results will streamline patient care and improve outcome. Although few would dispute that bedside testing improves laboratory turnaround time (the interval between sample acquisition and delivery of result), impact on patient outcomes is less clear. In human medicine, it has been difficult to prove that point-of-care testing yields shorter patient stays in-hospital, or decreased admission rate or mortality.[14] Additionally, a survey examining this premise in human medicine found that fewer than 10% of hospitals actually monitored patients to determine if point-of-care testing actually delivered better outcomes.[15] Finally, clinicians often fail to consider that in-clinic testing is not interchangeable with reference laboratory methods. Point-of-care devices tend to be less accurate and precise, and in a recent study comparing veterinary biochemical testing quality in reference laboratories vs an in-clinic setting, reference laboratories were able to achieve desirable quality requirements more frequently than in-clinic laboratories.[3] This is not to say that in-clinic testing is without value; when utilized for critical patients or for time-sensitive testing such as urinalyses or clotting assays, which should ideally be evaluated within hours per ASVCP guidelines, it plays an undeniable role in improving diagnostics and treatment plans. Judicious use of in-clinic testing coupled with a comprehensive QA program should improve the quality of patient care and outcome.
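As a rough illustration of the volume effect described above, the short Python sketch below spreads assumed fixed costs (analyzer depreciation, maintenance, QC materials, technician time) over different annual test volumes; every dollar figure is a hypothetical assumption, not survey data or a vendor quote.

# Minimal sketch of how cost per test falls with volume; all figures here are
# illustrative assumptions, not survey data.
def cost_per_test(annual_fixed_costs, reagent_cost_per_test, tests_per_year):
    return annual_fixed_costs / tests_per_year + reagent_cost_per_test

fixed = 6000.0   # assumed yearly analyzer depreciation, maintenance, and QC materials
reagent = 8.0    # assumed consumable cost per chemistry panel
for volume in (200, 1000, 5000):
    print(volume, "tests/year ->", round(cost_per_test(fixed, reagent, volume), 2), "per test")
# 200 -> 38.0, 1000 -> 14.0, 5000 -> 9.2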

Quality control and assurance

Quality control is most commonly used to refer to procedures specifically designed to minimize analytical error. Quality assurance more commonly (and more broadly) refers to a comprehensive plan for minimizing preanalytical, analytical, and postanalytical error.[16] In human hospital central laboratories, error frequencies of 46–68% in the preanalytical phase, 7–13% in the analytical phase, and 18–47% in the postanalytical phase have been reported.[14] Manufacturers have invested significant time and money in the production of small, simple, rapid-output analytical systems designed to minimize potential sources of error. However, in an effort to keep things user-friendly and cost-effective, especially in the veterinary field, limited emphasis has been placed on the importance of routine QC.[10] While managing pre- and postanalytical error may seem simple in the lower volume setting of the typical general veterinary practice compared with a large reference laboratory, an internal and external QA program of appropriate frequency is required even in this simpler environment to ensure the quality results desired by practitioners.

Practitioners surveyed obviously desire quality results; however, there appears to be a lack of knowledge on how to implement the level of QC necessary to obtain them. This survey revealed that the majority of respondents claimed to use one or 2 materials for hematology and biochemistry QC. These findings are interesting, and perhaps reflect confusion on the part of the respondents, given that a recent study did not identify a single participant using externally provided QC materials at the start of the study.[3] The finding that most respondents claiming to use 3 or more control materials per instrument worked in general practice situations certainly increases the index of suspicion that respondents may have been unclear about how to answer this question. Approximately 50% of respondents sent duplicate samples to reference laboratories for comparison. While sending duplicate samples to reference laboratories can be helpful, direct comparison for accuracy is problematic, as reference laboratories commonly use large-scale analyzers with different methodologies and different RIs. If discrepant results are obtained between 2 different analyzers, one technique to determine whether the difference is significant involves calculating the in-clinic result ± total allowable error (TEa); the total error of the 2 systems on a single control material should be within the total allowable error.[6] Other techniques, such as annual comparison and statistical analysis of a minimum of 20 paired samples, have also been described.[17]
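As a concrete illustration of this comparison, the following Python sketch checks whether a reference laboratory result falls within the in-clinic result ± TEa; the analyte, results, and TEa value are hypothetical examples chosen for this sketch, not values taken from the survey or the guidelines.

# Minimal sketch of the duplicate-sample comparison using total allowable error;
# the creatinine results and the 20% TEa below are illustrative assumptions.
def results_agree(in_clinic, reference, tea_percent):
    """Return True if the reference result lies within in_clinic +/- TEa."""
    allowable = in_clinic * tea_percent / 100
    return (in_clinic - allowable) <= reference <= (in_clinic + allowable)

# Example: creatinine of 2.0 mg/dL in-clinic vs 2.5 mg/dL at the reference laboratory
print(results_agree(2.0, 2.5, 20))  # False -> the discrepancy exceeds TEa and warrants follow-up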

At the time of this submission, point-of-care guidelines are under review by the ASVCP membership, and the majority of clinics surveyed would not be in compliance with this document. These guidelines state that routine, periodic maintenance and assessment should be performed, not just repair and testing at random.[7] At a minimum, the manufacturer's daily, weekly, and monthly routine maintenance program should be followed, and all maintenance, calibration, and repair logged. For reference laboratories, the ASVCP recommends that controls should be run at least every 24 hours, or more frequently if the manufacturer recommends, and a minimum of 2 control levels, typically one normal and one abnormal concentration, should be used.[17]

The majority of respondents used a cutoff of 2–3 SD to determine acceptability of control data as recommended by the ASVCP quality control guidelines, and a few (a mix of academic laboratories and general practices) performed statistical analyses based on their own calculated SD and CV. However, over 20% either relied on instrument representatives, did not run controls regularly, did not run controls at all, or were unsure how QC was performed in their laboratory. Although newer instrumentation may possess internal QC functions that are built into the instrument or its unit devices (slides, rotors, etc), it is still essential to have a designated laboratory testing coordinator who is responsible for QC, proficiency testing, troubleshooting, in-service training, and operator competency evaluation.

While the majority of respondents did provide a procedures manual for employee reference, some did not, and a small minority were unsure what an SOP was. The previously mentioned ASVCP guidelines state that all procedures currently in use should be included in a procedures manual that is easily accessible by all personnel performing the assay, and that training logs be maintained for each procedure. While many in-clinic assays are fairly simple to perform, ensuring that all staff members understand the science behind the procedures and methods allows better prediction, prevention, and identification of problems. Additionally, training of personnel (both initially and continually) ensures that testing is performed correctly to yield accurate and precise results, skill levels are maintained, and that everyone performs the tests in the same way.

Remarkably, less than one-third of respondents performed a manual backup of WBC differential counts either all the time or most of the time. Examination of a blood smear is essential to look for nucleated erythrocytes, parasites, WBC and RBC morphology changes, and platelet abnormalities, for example. The ASVCP recommends that all automated differential counts should be verified by manual (microscopic) evaluation of the blood smear. The ASVCP also recommends that performance information be contained in internal monitoring logs for all assays performed, including ELISA testing.[4, 17] The majority of individuals surveyed did not keep track of ELISA results at all, or did so by “impression only.” Performance logs are critical, especially for assays on labile time-sensitive samples that are not conducive to being sent out for external comparative analysis.

Reference intervals

A recent review article on RIs describes several different methods for RI generation, including (1) de novo determination from measurements made in reference individuals, (2) RI transference when a method/instrument is changed, and (3) validation of a previously established or transferred RI.[8] While the guidelines specifically state that RIs from the literature should not be used, roughly one-third of respondents stated that RIs from textbooks were used on a regular basis. Manufacturers currently provide RIs that have been based on the same methodology, but different machines. As there is currently no cross-analyzer comparison, and significant bias has been found between both in-house and reference analyzers, use of manufacturer-provided RIs is likely to result in misdiagnosis and should be avoided unless the RIs are validated by the practice.[9] While development of RIs can be difficult, expensive, and time-consuming, it is an important endeavor to ensure the most accurate interpretation of laboratory data for the patient. Verification, using n = 20 healthy animals of a species, should be performed in each clinic prior to use of instruments and should be included in the start-up costs of the instrument.
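A simple way to carry out such a verification is sketched below in Python: results from healthy animals are counted against the candidate interval, and the interval is accepted only if very few fall outside it. The acceptance criterion used here (no more than 2 of 20 results outside the interval) reflects commonly cited transference-validation guidance and is an assumption for this sketch, as are the analyte and values shown.

# Minimal sketch of verifying a manufacturer-provided or transferred reference
# interval with ~20 healthy animals; the 2-of-20 criterion and the ALT values
# are illustrative assumptions, not requirements stated in this article.
def verify_reference_interval(results, low, high, max_outside=2):
    """Return True if the candidate RI [low, high] is acceptable for these healthy animals."""
    outside = sum(1 for x in results if x < low or x > high)
    return outside <= max_outside

# Example: a candidate canine ALT interval of 10-100 U/L checked against 20 healthy dogs
healthy_alt = [22, 35, 48, 51, 60, 44, 39, 72, 81, 95,
               30, 27, 55, 66, 41, 58, 49, 88, 102, 76]
print(verify_reference_interval(healthy_alt, 10, 100))  # True -> only one value outside, RI accepted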

Current guidelines and governance

There are few guidelines and no enforceable regulations for QA programs in the veterinary field at this time. Guidelines have recently been published by the ASVCP, but currently the only regulatory board with QA guidelines in place is the College of Veterinarians of Ontario (CVO). Analogous to a state regulatory board, the CVO[18] is responsible for licensing veterinarians and veterinary practices in the province of Ontario, Canada; it has also taken the step of providing QA recommendations to its members. While these recommendations are not law and are not strictly enforceable, failure to follow them can constitute at least partial grounds for license suspension if the complaint is accompanied by other substantial grounds.[19] Depending on perspective, veterinary medicine is either blessed or cursed that veterinary laboratories are not affected by requirements such as the Good Laboratory Practice regulations and the Clinical Laboratory Improvement Amendments of 1988, which are used as guidelines in human medical laboratories. Adherence to these regulations would make in-clinic laboratories more expensive for the average practitioner, as quality-oriented activities typically consume between 30% and 40% of laboratory budgets.[20] The lack of regulatory oversight, however, puts the onus on the practice to ensure that a quality assurance plan is in place. Any instrument can provide numbers; it is up to the practice to ensure that these numbers can be relied upon. In this VIN survey, a small, but significant, minority of respondents did not perform any type of QC, either because they did not know they needed to, they did not know how, the company representative said it was not necessary, or it was perceived as time-intensive or cost-prohibitive. Education, external QC products, and internal QA are necessary to ensure consistently reliable results, regardless of level of technology.[21]

Summary

Based on the survey of in-clinic practices and referral laboratories, we found that most clinics had an in-clinic laboratory, claimed to perform some type of maintenance, and claimed to have some type of QA program in place. However, many of these were partial programs covering only one of multiple instruments, or were based on clinician-generated quality concerns rather than a scheduled external QA program. A minority of respondents stated that they did not have a periodic QA program; the most commonly provided reason was lack of knowledge, but increased cost and time were also cited. The desire among practitioners is there, and voluntary compliance is already occurring; however, support resources and knowledge bases for the development and implementation of QA/QC programs are lacking, and the level of time and monetary input required is often unknown. This study found that the majority of respondents were not in compliance with ASVCP guidelines, illustrating the need for improved education of technical staff, veterinary students, and veterinarians in the United States regarding limitations of in-clinic laboratory equipment and the importance of regular QC, maintenance, training, and development of RIs.

While difficult to define in a single sentence, one definition proffered for laboratory quality is “the most appropriate laboratory test, correctly performed, reported, and utilized within a clinically optimal timeframe to produce the most proper patient diagnosis and optimal patient management result.”[20] Ultimately, it is up to each practitioner to ensure that in-clinic testing delivers accurate, precise results, and QA and QC programs are essential to assure their generation. Improved test turnaround time of accurate results with decreased transport artifact is quality; improved turnaround time of inaccurate results can actually be detrimental to patient care and outcome, and potentially have a negative impact on practitioner credibility, liability, and public health.

Disclosure: The authors have indicated that they have no affiliations or financial involvement with any organization or entity with a financial interest in, or in financial competition with, the subject matter or materials discussed in this article.

References


Supporting Information

Appendix S1. Questionnaire for quality control survey on point-of-care instrumentation, analysis, and quality assurance in veterinary practice (vcp12142-sup-0001-AppendixS1.docx, Word document, 24 KB).

Please note: Wiley Blackwell is not responsible for the content or functionality of any supporting information supplied by the authors. Any queries (other than missing content) should be directed to the corresponding author for the article.