American Society for Veterinary Clinical Pathology
2424 American Lane
Madison, WI 53704

Telephone: +1-608-443-2479
Fax: +1-608-443-2474

2. Analytical factors important in veterinary clinical pathology


2.1. General


2.1.1. Monitoring


a. Internal monitoring.  Internal monitoring of all equipment with regard to electronic safety, calibration, maintenance and performance is recommended. An Instrument Performance Log is recommended for each instrument, including information about any problems encountered and their investigation and resolution.  Use of quality control materials for the purpose of monitoring internal performance is covered in detail in section 2.1.5. Quality control.  Accumulated quality control results should be systematically reviewed on a regular schedule through use of Levey-Jennings plots, and appropriate actions taken when quality control results exceed the limits or demonstrate undesirable trends. (Westgard, 2006)
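The systematic review of accumulated QC results can be sketched in code. The example below is a minimal illustration of two common Westgard rules (1-3s and 2-2s) applied to a series of control results; the analyte, target mean and SD are assumed values for illustration only, not taken from this guideline.

```python
def westgard_flags(results, target_mean, target_sd):
    """Flag QC results that violate the 1-3s or 2-2s Westgard rule."""
    z = [(x - target_mean) / target_sd for x in results]
    flags = []
    for i, zi in enumerate(z):
        if abs(zi) > 3:                     # 1-3s: one result beyond 3 SD (reject)
            flags.append((i, "1-3s"))
        if i > 0 and abs(zi) > 2 and abs(z[i - 1]) > 2 and zi * z[i - 1] > 0:
            flags.append((i, "2-2s"))       # 2-2s: two consecutive results beyond 2 SD, same side
    return flags

# Hypothetical glucose control material: target mean 100 mg/dL, SD 2 mg/dL
qc_results = [99.5, 101.0, 100.4, 104.5, 105.1, 98.8]
print(westgard_flags(qc_results, target_mean=100.0, target_sd=2.0))
```

In this example, the fourth and fifth results both exceed the mean by more than 2 SD on the same side, so the run is flagged by the 2-2s rule; in practice the selection of rules follows QC validation as described in sections 2.1.2.i and 2.1.5.a.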


b. External monitoring (Proficiency testing).  External monitoring should include participation in an external proficiency program that is specific to veterinary diagnostic laboratories. A more complete description of proficiency testing can be found in Bellamy and Olexson. (Bellamy, 2000)

i. All participating laboratories should analyze the same materials.

ii. Results should be tabulated regularly (monthly, quarterly or annually) and distributed to participants with statistical summaries expressing the closeness of individual laboratory results to the group mean.

iii. Means should be calculated and analyzed based on identification of the method (same methods compared).

iv. Each laboratory should carefully assess the validity of their reported performance.  A marked deviation from the group mean should prompt an inquiry. 


2.1.2. Method Validation.

Prior to adopting a new test procedure or bringing a new instrument on-line, method or instrument validation should be performed to ensure the procedure performs according to the laboratory's standards and manufacturer's claims.  Method or instrument validation studies should assess linearity, precision, accuracy, analytical range, lower limit of detection (LLD)/biological limit of detection (BLD)/functional sensitivity (FS) of the method and examine the effects of interfering substances. Reference intervals and quality control procedures for the new method should be determined before patient testing is initiated. If there is limited data available for reference interval determination, this should be explained in an addendum to the test and the basis for the interpretation explained. (Linnet, 2006) 


Analytical quality requirements, such as total allowable error (TEa) or clinical decision limits should be established for each test prior to initiating method or instrument validation studies.(Westgard, 1974)  These requirements serve as a benchmark for test performance.  The total error inherent in the new method or instrument, as determined during validation studies, must fall within these requirements or the new method should be rejected. (Westgard, 2006b)                                          


Method or instrument validation procedures are listed in the order in which they are performed.  Numerous commercial software programs are available to facilitate the statistical analysis of results collected during method validation studies, and additional information and graphing tools for method validation are available online.


a. Linearity study: determination of the reportable range of the method.

i. Five levels of solutions are recommended and can be prepared as indicated.  Solutions with matrices that approximate real samples are preferable to water or saline dilution. (Westgard, 2008a)

          Level 1: close to the detection limit of the assay

          Level 2: 3 parts low pool plus 1 part high pool

          Level 3: 2 parts low pool and 2 parts high pool

          Level 4: 1 part low pool and 3 parts high pool

          Level 5: exceeding the expected upper limit of the assay

ii. Three to 4 replicate measurements on each specimen are recommended.(Westgard, 2008a)

iii. The mean value for each specimen is plotted on the y-axis and the expected value on the x-axis. (Westgard, 2008a)

iv. The plot is visually inspected for outliers, linearity, and 'best fit' line. (Westgard, 2008a)

v. If the assay is not linear within the manufacturer's recommended working range, the method should be rejected.  Alternatively, the working range can be changed to lie within the linear region.
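The inspection in steps iii-iv can be supplemented with a simple regression of mean measured values against expected values; a linear method yields a slope near 1 and an intercept near 0 across the reportable range. The concentrations below are illustrative assumptions, not values from this guideline.

```python
def least_squares(x, y):
    """Ordinary least-squares slope and intercept for the linearity plot."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# Expected concentrations of the 5 dilution levels (x) and the mean of
# 3-4 replicate measurements at each level (y); numbers are illustrative
expected = [5.0, 100.0, 200.0, 300.0, 395.0]
measured = [5.2, 99.0, 201.5, 298.0, 396.3]

slope, intercept = least_squares(expected, measured)
print(round(slope, 3), round(intercept, 3))
```

A slope or intercept that deviates markedly from 1 or 0, or curvature visible on the plot, indicates non-linearity within the claimed working range.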


b. Short-term replication study (repeatability or within-run): estimation of the random error (RE), or imprecision, of the method over a short time interval.  Samples are analyzed during a single 8-hour shift or within a single analytical run. (Westgard, 2008c)

i. Standard solutions, commercially available control materials or pooled fresh patient samples can be used.

ii. The level of analyte should approximate important clinical decision levels.  A minimum of two levels (normal and high) is recommended if the analyte is medically significant when increased.  At least three levels are recommended (low, normal and high) if the analyte is medically significant when decreased or increased.

iii. A minimum of 20 replicates is recommended during the time interval of interest.

iv. Gaussian distribution is determined by plotting data on a histogram or normal plot.  If Gaussian distribution is not present, data should be examined for outliers.  The cause of outliers should be determined and corrected if possible. If Gaussian distribution is not achievable following elimination of possible outliers, then transformation of the data may be required for additional statistical analyses.

v. Analysis of data includes calculation of the mean, SD and CV. 

vi. Compare the SD and CV, as measures of RE, to the laboratory standard (TEa or clinical decision limit).  If the SD or CV exceeds this standard, the method should be rejected.  For this initial assessment, bias is assumed to be zero. Additional analyses including bias (determined from the Comparison of Methods Study) should be conducted after this information is available. 
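The calculation in steps v-vi can be sketched as follows. All replicate values and the TEa are assumed for illustration; with bias taken as zero for this initial assessment, the random error term (here 3 × CV) is checked against the quality requirement.

```python
from statistics import mean, stdev

# 20 within-run replicates of one control level (illustrative values)
replicates = [99.1, 100.2, 98.7, 101.3, 100.0, 99.5, 100.8, 99.9, 100.4, 98.9,
              101.0, 100.1, 99.6, 100.7, 99.3, 100.5, 99.8, 100.3, 99.4, 100.6]

m = mean(replicates)
sd = stdev(replicates)          # sample SD (n - 1 denominator)
cv = 100 * sd / m               # coefficient of variation, %

# Initial acceptability check with bias assumed to be zero:
te_a = 10.0                     # assumed total allowable error, %
acceptable = 3 * cv < te_a
print(round(cv, 2), acceptable)
```

After the comparison-of-methods study, the measured bias is added to the random error term before re-checking against TEa (see section d.ix).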


c. Long-term replication study (reproducibility or between-run): estimation of the random error (RE), or imprecision, of the method over a longer time interval that approximates real working conditions.  A minimum of 20 samples is analyzed during different shifts (and runs) over a minimum of 20 days.  Sample selection and data analysis are the same as for the short-term replication experiment.


d. Comparison of Methods: estimation of bias, or systematic error (SE), of the test (new) method as compared to the comparison method, if one exists.

i. Choose the comparison (reference) method with consideration for known accuracy and quality.  The comparison method may be a definitive method, a reference method, or another field method as defined by Tietz. (Tietz, 1979)  Comparison to proficiency testing data may also be considered; however, careful attention to the known accuracy of such data is recommended.

ii. A minimum of 40 patient specimens tested by both methods is recommended.(Jensen, 2006; Westgard, 2008d)

iii. Specimens should represent the spectrum of results expected in clinical application of the method and span the entire working range, with adequate sample numbers at the ends of the range.(Jensen, 2006)

iv. Duplicate measurements by each method are desirable, but single measurements are acceptable.(Jensen, 2006) Results should be examined at the time they are performed.  If a significant difference is detected in values obtained by the two methods, immediate retesting should be performed to determine if the discrepancy is repeatable or if an error occurred. 

v. Specimens should be analyzed within two hours of each other (or sooner, depending upon analyte stability) by the test and comparative methods. Specimen handling should be defined to avoid extraneous variation between the methods. If samples are analyzed at different laboratories (>2 hour interval between testing), sample stability must be considered. 

vi. The study should be conducted over 5-20 days with a preference for the longer time period; e.g., 2-5 specimens per day for 20 days.

vii. Analysis of data:

1. A comparison plot is recommended for visual inspection with the test method and the comparative method plotted on the y-axis and the x-axis, respectively. Outliers should be re-analyzed if samples are fresh.  A 'best fit' line can be drawn based on visual assessment of the data. (Jensen, 2006)

2. The calculation of a correlation coefficient (r) is used to determine which statistical equation should be used to estimate SE (bias) but is not acceptable as a measure of agreement.  For analytes that vary over a wide range, regression statistics are typically used to determine SE (bias). (Jensen, 2006; Westgard, 2008d) For analytes that vary over a narrow range (electrolytes), t-test statistics are used to determine SE (bias).  (Westgard 2008d) 


- If r ≥ 0.99 for data with a broad range or ≥ 0.975 for data with a narrow range, standard linear regression statistics can be used to estimate the SE (bias) at medical decision concentrations. (Jensen, 2006; Westgard 2008d; Stockl, 1998)  The SE (bias) at a particular decision level (Xc) can be determined by calculating the corresponding y-value (Yc) from the regression line.

Yc = (a × Xc) + b, where a = slope and b = y-intercept

SE (bias) = Yc – Xc


- If r < 0.99 (or < 0.975), the data can be improved by collecting more data points or by decreasing variance through replicate measurements; otherwise, paired t-test statistics should be used to estimate the SE (bias) as the difference between the means of the results by the two methods.(Jensen, 2006; Westgard 2008d)  The paired t-test, however, is not applicable in the presence of proportional error.(Westgard 2008e)  Alternatively, Passing-Bablok or Deming regression analysis can be used.  Subdivision of results into groups (below, within, or above the reference interval) may be used to provide additional evaluation of means in ranges that are clinically significant.(Jensen, 2006)


viii. Creation of a difference plot (Bland-Altman) is also recommended.  The difference between the test and comparative method is plotted on the y-axis, and the mean of both methods is plotted on the x-axis. The line of difference identifies SE (bias).  For tests with no bias, results are scattered around the line of zero difference, with approximately ½ above and ½ below this line. (Bland, 1986; Jensen, 2006; Hyloft, 1997)

ix. Criteria for acceptable performance depend on the TEa for the test as determined by each laboratory. Calculated total error (TEcalc) includes SE (bias), as determined by the comparison experiment, and RE (S), as determined by the replication (long-term) experiment. TEcalc = Biasmeas + 3Smeas.  Performance is considered acceptable if TEcalc < TEa.  A Method Evaluation Decision Chart, which takes into account the TEa, SE and RE, also can be used to determine method acceptability.(Westgard, 2008b)
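The bias estimate from the regression line and the total-error check in step ix can be sketched together. The slope, intercept, decision level, replication SD and TEa below are all assumed values for illustration, not performance claims.

```python
def bias_at_decision_level(slope, intercept, xc):
    """SE (bias) at decision level Xc: Yc - Xc, where Yc = (slope * Xc) + intercept."""
    return slope * xc + intercept - xc

# Illustrative regression result from a 40-specimen comparison study
slope, intercept = 1.03, -0.8
bias = bias_at_decision_level(slope, intercept, xc=120.0)

s_meas = 1.5                        # SD from the long-term replication study
te_calc = abs(bias) + 3 * s_meas    # TEcalc = bias_meas + 3 * S_meas
te_a = 10.0                         # laboratory's total allowable error
print(round(te_calc, 2), te_calc < te_a)
```

Here the calculated total error (about 7.3 units) falls within the assumed TEa of 10 units, so performance at this decision level would be judged acceptable.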


e. Interference study: estimation of systematic error caused by substances within the specimen being analyzed. These errors are typically constant with respect to analyte concentration, with the size of the error proportional to the concentration of the interfering material.(Westgard, 2008f)  Common interfering substances include hemolysis, lipemia and bilirubin.(Bellamy, 2000)  Additional comparisons may be made between heparin plasma vs. serum and serum samples collected in gel tubes vs. plain tubes, or other possible interferents, as indicated by the test or instrument of interest.

i. Standard solutions, patient specimens or pooled patient samples can be used. The latter two are preferred because of their ready availability and complex matrix.(Westgard, 2008f)  Samples with varying levels of the analyte that at least span the clinical range should be chosen.(Westgard, 2006)

ii. Defined quantities of hemoglobin (from lysed RBC), lipid (commercially available solutions) and bilirubin (commercial standard solutions) are added to samples to reach an increased concentration that is anticipated to occur in patient samples.(Westgard, 2008f) 

iii. The volume of interferent added should be minimized to avoid changes in the sample matrix.(Westgard, 2008f) Duplicate measurements on all samples are recommended.  Small differences in the measured analyte caused by the interferent may be masked by random error inherent to the method.  Duplicate measurements will help obviate this problem.(Westgard, 2008f) 

iv. Measurements should be performed by both the new method and the comparative method, if one exists.  If both methods show similar SE (bias) caused by the interferent, presence of bias alone may not be sufficient to reject the new method. (Westgard, 2008f)

v. Calculation of bias due to the interferent: (Westgard, 2008f)

1. Determine the mean for the duplicates of the interferent-containing sample and the control.

2. Calculate the difference (bias) between the interferent-containing sample and its control.  Repeat for all pairs of samples.

3. Calculate the mean difference (bias) for all specimens with a given concentration of interferent. 

vi. A paired t-test is recommended for comparing the results from the interferent-containing sample and the unadulterated control. Regression statistics are not applicable. A t-test statistic of 2 is used as a standard cut-off. The t-test statistic estimates the number of standard deviations that the altered sample differs from the unaltered sample. (Westgard, 2008f)

vii. Criterion for acceptable performance is SEmeas < TEa.    If the SEmeas > TEa, the laboratory should decide whether specimens likely to contain interfering substances can be readily identified and whether specimens should be rejected if potential interferents are present or if their effect can be quantitated or semi-quantitated based on additional studies.
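The bias calculation in step v and the paired t-test in step vi can be sketched as follows. The duplicate means are illustrative assumptions; each pair is one specimen measured with and without the added interferent.

```python
from math import sqrt
from statistics import mean, stdev

# Mean of duplicates per specimen: interferent-containing vs. untreated control
spiked  = [2.9, 3.4, 3.1, 3.6, 3.2, 3.3]   # illustrative values
control = [2.5, 3.0, 2.8, 3.1, 2.9, 2.9]

diffs = [s - c for s, c in zip(spiked, control)]
bias = mean(diffs)                             # mean difference (bias) due to interferent
t = bias / (stdev(diffs) / sqrt(len(diffs)))   # paired t statistic
significant = abs(t) > 2                       # cut-off of 2, per step vi
print(round(bias, 3), significant)
```

A t statistic above 2 indicates that the interferent-induced bias exceeds what random error alone would explain; whether the method is rejected then depends on comparing the bias to TEa (step vii).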


f. Recovery Study: estimation of proportional systematic error (SE). Proportional SE occurs when a substance within the sample matrix reacts with the analyte and competes for analytical reagent.  The magnitude of SE increases as the concentration of the analyte increases.  Proportional SE is determined by calculating the percent recovery of an amount of standard analyte added to a patient specimen. (Westgard, 2008f)

i. Standard solutions of high concentration are often used since they can be added in small amounts in order to minimize specimen dilution but still achieve a recognizable, significant change in the analyte concentration.  Dilution of the original specimen should not exceed 10%.

ii. The amount of analyte added should result in a sample that reaches the next medical decision level for that analyte.  Similar to the interference experiment, small additions will be more affected by the inherent imprecision of the method than large additions.  

iii. Replicate measurements of both adulterated and control specimens are recommended. Recovery samples should be analyzed by both the test and comparison methods.  The number of patient specimens to be tested depends on the numbers and types of reactions anticipated to produce a systematic error.

iv. When a recovery study is being done as part of the evaluation of a new method, it should ideally be performed using both the new method and a comparison method if one exists.

v. Data calculation (for an example of the data calculations involved in a recovery study, see Westgard, 2008f)

1. Calculate amount of analyte added:

            Conc. stnd added × (ml stnd added / (ml stnd added + ml sample))

2. Calculate the mean of the replicate measurements for all samples.

3. Calculate the difference between the adulterated sample and the control.

4. Calculate the recovery by dividing the difference by the amount added.

5. Calculate the mean of the recoveries of all the pairs tested.

6. Calculate the proportional SE as 100% - recovery%.

vi. Criterion for acceptable performance is SEmeas < TEa.  Small amounts of proportional systematic error may be acceptable; however, the method should be rejected if large proportional systematic errors that are greater than the total allowable error are observed.
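The calculation steps 1-6 above can be sketched as follows. The standard concentration, volumes and replicate means are assumed values for illustration; a single pair is shown, whereas the study averages recoveries over all pairs tested.

```python
def amount_added(conc_std, ml_std, ml_sample):
    """Concentration added after dilution: conc_std * ml_std / (ml_std + ml_sample)."""
    return conc_std * ml_std / (ml_std + ml_sample)

# 0.1 mL of a 1000 mg/dL standard added to 1.9 mL of sample (assumed values)
added = amount_added(conc_std=1000.0, ml_std=0.1, ml_sample=1.9)   # 50 mg/dL added

baseline_mean = 102.0   # mean of replicate results, unaltered control
spiked_mean = 150.0     # mean of replicate results, spiked sample
recovery = 100 * (spiked_mean - baseline_mean) / added
proportional_se = 100 - recovery                  # proportional SE, %
print(recovery, proportional_se)
```

In this example the recovery is 96%, giving a proportional systematic error of 4%, which would then be compared against the laboratory's TEa.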


g. Reference interval for new method/instrument: Creation of a new reference interval or validation of an existing reference interval is necessary for clinical decision making.

See New ASVCP Guidelines for Reference Interval and Decision Threshold Generation and Maintenance.


h. Detection limit study: estimation of the lowest concentration of an analyte that can be measured.  Detection limit verification is recommended for all assays in which a low value may be of clinical significance, e.g., forensic tests, therapeutic drug levels, TSH, immunoassays and cancer markers.(Westgard, 2008g)

i. A 'blank' sample that does not contain the analyte of interest and a 'spiked' sample containing a low concentration of the analyte are used.  Several spiked samples, containing analyte at the detection concentration claimed by the manufacturer, may be required.

ii. 20 replicate measurements for each of the samples are recommended.

iii. The blank solution measurements can be performed 'within-run' or 'across-run' on the same day. However, the spiked sample should be analyzed over a longer period of time to take into account day-to-day or between-run variation. A minimum of 5 days is commonly used. (Westgard, 2008g)

iv. Quantitative estimations may be reported as:

1. Lower Limit of Detection (LLD)/Limit of Quantification (LoQ) is the mean of the blank + 2-3 x SD of the blank.

2. Biologic Limit of Detection is the mean of the blank + 2-3 SD of the spiked sample.

3. Functional Sensitivity is the mean of the spiked sample that has a CV of 20%. This represents the lowest limit at which quantitative information is reliable. Several spiked samples must be studied in order to determine the spiked sample with a 20% CV.
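The estimates in iv.1-3 can be sketched as follows. The blank and spiked replicate values are illustrative assumptions; a factor of 3 SD is used here, from the 2-3 SD range given above.

```python
from statistics import mean, stdev

# 20 replicates of a blank and of one low-concentration spiked sample (illustrative)
blank  = [0.1, 0.0, 0.2, 0.1, 0.0, 0.1, 0.2, 0.1, 0.0, 0.1,
          0.1, 0.2, 0.0, 0.1, 0.1, 0.0, 0.2, 0.1, 0.1, 0.0]
spiked = [1.0, 1.2, 0.9, 1.1, 1.0, 1.3, 0.8, 1.1, 1.0, 0.9,
          1.2, 1.0, 1.1, 0.9, 1.0, 1.2, 1.1, 0.9, 1.0, 1.1]

lld = mean(blank) + 3 * stdev(blank)     # lower limit of detection
bld = mean(blank) + 3 * stdev(spiked)    # biologic limit of detection
cv_spiked = 100 * stdev(spiked) / mean(spiked)
# Functional sensitivity: mean of the lowest spiked sample whose CV is <= 20%
fs = mean(spiked) if cv_spiked <= 20 else None
print(round(lld, 2), round(bld, 2), round(cv_spiked, 1))
```

In practice several spiked samples at decreasing concentrations are assayed, and the functional sensitivity is taken from the lowest one that still achieves a 20% CV.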


i. Selection of QC rules for the statistical monitoring of method performance (QC Validation)

i. QC validation can be done manually using normalized OpSpecs Charts, the EZRUNS calculator, or other quality assurance programs. (Friedrichs, 2005)

ii. QC validation utilizes the TEa requirement (or clinical decision interval) for the test, along with CV (RE) and bias (SE), determined from replication and comparison of methods experiments, to determine the possible control rules that can be applied for statistical QC.(Westgard, 2006)

iii. For most automated methods, a probability of error detection of 90% and probability of false rejection of <5% are sufficient. For extremely stable assays with few anticipated problems, a probability of error detection as low as 50% may be acceptable. (QP15 Frequently Asked Questions About Quality Planning)

iv. Different QC rules may be required for different levels of a single analyte (multilevel QC).  For example, more stringent multirule QC may be required to detect error at lower analyte levels than at higher analyte levels.

v. Adoption of a new method or calibration/maintenance of a method may require different (more stringent) QC rules than those applied during routine use of a method.  This is referred to as multistage QC. 



2.1.3. Instrumentation


a. Instrument performance: The instrumentation and methodologies used must be capable of providing test results within the laboratory's stated performance characteristics. (Linnet, 2006)   These include:

i. Analytical range including detection limit and linearity

ii. Precision

iii. Accuracy

iv. Analytical Specificity - Measurement of the target compound.  This should provide an estimate of, and clearly define, any interfering substances. Because interferences cannot always be avoided, consideration should be given to the development of interferographs that examine the effects of added lipid, bilirubin, and hemoglobin on assay results.  Interferences are species specific; ideally, interferographs should be created for each analyte and species tested.

v. Analytical Sensitivity

vi. Additional points to consider:

1. Instruments with adjustable settings for different substances and/or species should be carefully checked for compliance.

2. Laboratory and manufacturer defined performance characteristics should be compared and adjustments made as needed.

3. Make certain that species differences are accommodated; the instrument manufacturer's technical representatives generally assist in this portion of instrument qualification and setup.

b. Function checks

i. Appropriate function checks of critical operating characteristics (e.g., stray light, zeroing, electrical levels, optical alignment, background checks) should be made on all instruments.

ii. Prior to sample testing, laboratory personnel should perform QC and/or calibrate each instrument daily or once per shift. Instruments should be operated per manufacturer instructions.

c. Calibration

i. Instruments should be calibrated at least every 6 months.  More frequent calibration may be indicated: (Westgard, 2008a)

1. According to manufacturer's recommendation.

2. After major service.

3. When QC values are outside limits or troubleshooting indicates need.

4. When workload, equipment performance, or reagent stability indicate the need for more frequent calibration.

ii. After calibration, controls should be run according to SOP.


2.1.4. Personnel Knowledge.

Laboratory personnel should have a thorough working knowledge of the equipment and its use, including, but not limited to, the following topics.

a. Linearity differences in animal compared to human samples.

b. Effects of hemolysis, lipemia, icterus, carotenoid pigments (especially large animals), and different anticoagulants on each assay.

c. Reportable ranges.

d. Species-specific or strain-specific reportable ranges and reference intervals.

e. Expected physiologic ranges.  Repeat criteria may be established that trigger re-analysis of a sample.  Criteria for repeating a test should include any equipment-generated error messages or flags, as well as results that are grossly outside of the normal physiologic range.  For the latter, consider use of 'panic values' pre-programmed into the biochemistry analyzer operating system.  Retesting to confirm an abnormal result should be communicated to the client as part of the report.

f. Common problems encountered with veterinary samples and appropriate steps to take with various error messages or flags.

g. Regular instrument maintenance schedule (daily, weekly, monthly, and as needed).

h. Replacement of inadequate or faulty equipment.

i. Problem-solving procedures (troubleshooting).

j. Appropriate use of comments and species-specific criteria.  Comments and species-specific criteria may be determined to be of interpretive benefit to clients.  Direct communication with clients should be limited to those in the organization who are qualified to provide data interpretation in the context of clinical history and previous therapies.



2.1.5. Quality Control.

Calibrators and controls should be identified appropriately, and their use and frequency should be documented as part of the quality plan to ensure accuracy of results.(Westgard, 1998)  Documentation and generation of appropriate actions should follow rules and policies established for analysis of QC parameters.  These may include confirmation of results and appropriate use of charts, graphs, and data entry, as determined by the laboratory for each department and/or type of equipment.  There should be a reporting structure to inform management of QC issues, and problems requiring attention should be forwarded to appropriate locations within the organization. Controls on corrective actions should be in place to evaluate effectiveness.


a. Selection of QC rules for the statistical monitoring of method performance (QC validation)

i. QC validation can be done manually using normalized OpSpecs Charts, the EZRUNS calculator, or other quality assurance programs.

ii. QC validation utilizes TEa requirement (or clinical decision interval) for the test with CV (RE) and bias (SE), from replication experiment and comparison of methods experiment, to determine the possible control rules that can be applied for statistical QC.

iii. For most automated methods, a probability of error detection of 90% and probability of false rejection of <5% are sufficient. For extremely stable assays with few anticipated problems, a probability of error detection as low as 50% may be acceptable.

iv. Different QC rules may be required for different levels of a single analyte (multilevel QC).  For example, more stringent multirule QC may be required to detect error at lower analyte levels than at higher analyte levels.

v. Different QC rules may be desired during the adoption of a new method or after calibration and maintenance than those required during routine use of an established method (multistage QC).  The former QC rules are typically more stringent than the latter.

b. Reagents and materials used for the procedures should be labeled with date received and date opened and stored according to manufacturer's recommendations when applicable. Expiration dates should be observed. Expired reagents should be discarded appropriately.  Analyte concentrations in control materials often represent low and high results with respect to human pathologic abnormalities in addition to normal human concentrations.  If pathologic concentrations from animal species are significantly divergent from these levels, it may be necessary to include additional control materials with analyte levels similar to animal pathologic concentrations or activities.   

c. The selection of the number of controls will depend, in part, on the performance of the equipment and is part of the process of QC validation. Traditionally, 2-3 control materials are used, but additional QC data points may be needed in order to ensure a high probability of error detection and low probability of false rejection with some assays.

d. A maximum run length of 24 hours is recommended unless the instrument manufacturer recommends more frequent control runs. 

e. Verification of reagent stability over the "run-length" should be done during method validation by assaying control materials multiple times throughout an entire "run-length" and comparing the resulting mean and SD with results from "within-run" precision experiments.

f. Establish QC frequency with the following considerations:

i. Test frequency and throughput (number of tests performed during each run or each day).

ii. Degree to which method and quality requirements for the test depends on precise technical performance.

iii. Analyte or reagent stability.

iv. Frequency of QC failures.

v. Training and experience of personnel.

vi. Cost of QC (increasing frequency adds to cost-per-test).

g. Quality control parameters:

i. Laboratories should establish criteria or verify manufacturer's criteria for an acceptable range of performance for QC materials.  Mean, SD, and CV should be calculated from a minimum of 20 measurements.  It is recommended that control materials come from the same lot number.

ii. Controls should be assayed in the same manner as patient specimens.

iii. At least 1 level of control material should be run after a reagent lot is changed.

iv. A mechanism should be in place to determine whether testing personnel follow policies and procedures correctly. 

v. Use of Westgard multi-rule procedures or other rules based on QC validation is recommended.

vi. Accumulated quality control results should be systematically reviewed on a regular schedule, for example through use of Levey-Jennings plots, and appropriate actions taken when quality controls results exceed the limits or demonstrate undesirable trends. 

vii. Policies and procedures should be contained in a procedures manual.  Annual (or more frequent) review of policy and procedures by staff should be documented. 

viii. QC records should be reviewed frequently to ensure that suitable action is taken when QC results fail to meet the criteria for acceptability.  Corrective action should be outlined for laboratory personnel.

ix. Control products, preferably from the same lot number, should be purchased commercially.  If using calibrators as controls, use different lots for each function.  If pooled patient samples are used, establish the mean value of all analytes (minimum n = 10 to establish a mean).

x. Monitor results of clinical specimens for various sources of error by use of parameters such as anion gap, comparison of test results with previous submissions from the same patient (delta checks), and investigation of markedly abnormal results (limit checks).

xi. Manufacturer's instructions for routine maintenance (daily, weekly, monthly) and calibration should be followed unless laboratories have modified them for their own use and documented appropriate instructions.  A log of instrument maintenance, calibration, or repair should be maintained in the laboratory or by a metrology unit.
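The error-monitoring parameters in item x can be sketched as simple checks applied to each reported result. All patient values and the delta-check limit below are hypothetical, chosen only to illustrate the calculations.

```python
def anion_gap(na, k, cl, hco3):
    """Anion gap = (Na + K) - (Cl + HCO3), all concentrations in mEq/L."""
    return (na + k) - (cl + hco3)

def delta_check(current, previous, limit):
    """Flag a result that differs from the same patient's prior result by more than `limit`."""
    return abs(current - previous) > limit

# Hypothetical canine electrolyte panel
gap = anion_gap(na=145.0, k=4.0, cl=110.0, hco3=22.0)          # 17 mEq/L

# Hypothetical delta check on serum phosphorus (mg/dL) between submissions
flagged = delta_check(current=3.1, previous=9.8, limit=4.0)
print(gap, flagged)
```

An implausible anion gap or a flagged delta check does not by itself invalidate a result, but it should trigger investigation of possible specimen mix-up, analytical error, or interference before the result is released.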


2.1.6. Procedures Manual.

See Westgard and Klee for an outline of recommended procedure manual contents.(Westgard, 2006) Protocols may be organized as hard copies in manuals and/or stored in computers. All procedures currently in use should be included in a Procedures Manual that is easily accessible by all personnel performing the assay. Editing should be performed by an identified individual(s).  The organization of the manual(s) will vary with the size, needs and requirements of the facility. Certain accrediting organizations may have specific requirements, and specific Standard Operating Procedures (SOPs) are recommended.  Most laboratory procedures should be adequately covered by the categories listed here.  Upon completion of training of new personnel, a check-off list should be implemented to document competency in performing the assay and knowledge of related aspects of the assay.  When the procedures document is revised, a review with applicable personnel is recommended to ensure that all are familiar with the revised procedures.

a. Index

b. General Information

i. General Policies and Procedures.

ii. Quality Assurance information.

iii. Sample Storage Length and Disposal.

iv. Raw Data Storage and Disposal.

v. Routine Send-Outs.

1. Testing Facility information

2. Sample Requirements

3. Shipping guidelines

4. Turn-around time

c. Standard Operating Procedures (SOP) for each procedure.  The amount of information contained in an SOP may vary, but the following topics are recommended. (Westgard, 2006)

i. Title (include version date or number)

ii. Purpose and application of the procedure

iii. Principles of the assay

iv. Sample management

1. Patient preparation (e.g., species-specific information)

2. Specimen collection, processing and handling (e.g., minimum volume)

3. Criteria for rejection of samples

v. Operational precautions and limitations

1. Hazards

2. Interferences with the method in use

- Hemolysis, icterus, lipemia

- Anticoagulants

- Drugs, etc.

3. Reportable range

4. Sensitivity and specificity if applicable

vi. Reagents

1. Storage location and conditions

2. Preparation

3. Open shelf-life

4. Manufacturer (e.g., contents)

vii. Equipment and supplies

1. Equipment or tools necessary to complete the procedure

2. Location of supplies

3. Actions to take when system is down (refer to Send-Out section or provide additional information)

viii. Calibration and quality control procedures

1. Materials

2. Frequency

3. Interpretation (when to verify, rerun, troubleshoot, etc.)

ix. Procedure (step-by-step instructions)

x. Reference intervals for each appropriate species

xi. Interpretation and reporting: critical values (recommended actions)

1. Chain of communication

- In-house staff

- Technical representative information

2. Troubleshooting steps                                                                   

- QC check

- Rerun

- Dilutions (appropriate diluent)

xii. Literature references


xiii. Document control

1. Name, date and signature of generator (date of implementation if different from generation)

2. Name, date and signature of reviewer (if applicable)

3. Training log

xiv. Appendices

1. Logs or worksheets

- Observation and troubleshooting log

- Results log

2. Package inserts (Much of the above information may be obtained directly from the package insert.  In that case, the SOP may refer to applicable sections of the insert.)

3. "Cheat sheets"

- Quick reference guides

- Title and version of procedure


2.1.7. Comparison of Test Results.

If the laboratory performs the same test by more than 1 method, at more than 1 test site, or by a referral laboratory, comparisons should be run at least annually to define the relationships between methods and sites.  Current Guidelines for Methods Comparison are under development.  Please check the ASVCP website.

            The following steps should be included:

a. Compare a minimum of 20 samples that cover the analytical range.

i. Plot data on an x-y comparison plot

ii. Calculate slope and intercept by a least squares method

iii. Use of a clinical pathology statistical software program, such as EP Evaluator, allows method comparison using set analytical ranges and CLIA acceptable performance targets. 

b. Laboratory director or qualified personnel should define acceptable performance limits.

c. If individual test results performed on the same patient or material do not correlate with each other (e.g., urea nitrogen/creatinine, electrolyte balance), the cause should be investigated, the situation documented, and corrective action taken. 

d. Enzyme results should be compared across analyzers a minimum of every 6 months (biannually) and after any significant technical service or other problems.  Enzyme verification is completed by performing a linearity study and comparing results among analyzers by linear regression.