Combined Interpretations of the 2003, 2009, and 2016 Standards that apply to Volume 1 of the 2016 TNI Standard
MODULE 6: RADIOCHEMICAL TECHNICAL REQUIREMENTS
Section: 1.5.4 Measurement Uncertainty
Question: We are not sure exactly what this section is requiring of us. What does it mean by "the experimentally observed precision at each testing level"? We are assuming that the simplified version of this section would say that our calculated precision values from our duplicates cannot be greater than the uncertainty of either sample used in the calculation. Is that correct?
TNI Response: Section 1.5.4 states specifically that "the experimentally observed precision at each testing level [of the precision evaluation in Section 1.5.3] shall not be statistically greater than the maximum combined standard uncertainty of the measurement results at that level, although it may be somewhat less." Section 1.5.3 establishes different approaches for "reference methods" and for laboratory-developed (or modified) methods.

For "reference methods," Section 1.5.3 a) requires that the standard deviation of the results be calculated for at least four spiked samples as described in Section 1.6. The standard deviation of the replicate results is compared to the calculated combined standard uncertainty (CSU) for each of the four results. The CSU is acceptable as long as it is equal to or greater than the experimental standard deviation of the four results.

Section 1.5.3 b) addresses non-grandfathered, non-reference methods. It states that the laboratory shall use a documented procedure to evaluate precision and bias. An acceptable approach determines the standard deviation for at least three blank samples, and for at least three replicate samples at each of three known activities that span the range of activities expected from samples to be analyzed using the method. The calculated CSU for each sample result is compared to the standard deviation of results at that activity level and is considered acceptable as long as the observed standard deviation is statistically equivalent to, or somewhat lower than, the CSU.

Analysis of duplicate samples will not meet the minimum requirements of either of these two approaches. This section was extensively revised in the 2016 standard. The SIR is obsolete.
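The comparison described in Section 1.5.3 a) can be sketched in code. This is only an illustration: the function name, the data, and the CSU values are hypothetical, and the standard calls for a statistical comparison rather than prescribing a specific formula.

```python
import statistics

def precision_vs_csu(results, csus):
    """Illustrative check in the spirit of V1M6 1.5.3 a): the experimental
    standard deviation of at least four replicate spiked-sample results
    should not exceed the combined standard uncertainty (CSU) reported
    with each result. Hypothetical helper, not the standard's own method."""
    if len(results) < 4:
        raise ValueError("Section 1.5.3 a) requires at least four replicates")
    s = statistics.stdev(results)  # experimental standard deviation
    # Each CSU is acceptable when it is >= the experimental standard deviation
    return [(u, u >= s) for u in csus], s

# Example with made-up activities and CSUs (same units, e.g. pCi/L)
results = [10.2, 9.8, 10.5, 10.1]
checks, s = precision_vs_csu(results, [0.6, 0.55, 0.7, 0.65])
```

Here every CSU exceeds the experimental standard deviation (about 0.29), so the evaluation would pass; a CSU smaller than the observed standard deviation would flag that level for investigation.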
Section: 1.7.1 Instrument Set-up, Calibration, Performance Checks, and Background Measurements
Question: Is factory characterization of gamma detectors using Monte Carlo simulation software an accepted means of efficiency determination? If yes, which simulation software is accepted (e.g., GEANT4, MCNPX)?
TNI Response: No. Section 1.7.1 a) states that "Instrument calibration shall be performed with reference standards as defined in Section 1.7.1.1 c). The standards shall have the same general characteristics (i.e., geometry, homogeneity, density, etc.) as the associated samples." This section was extensively revised in the 2016 standard. The SIR is obsolete.
Question: This section does not specify count times for background measurements used for sample subtraction or for background measurements used for contamination checks. In addition, the background measurement frequency specified for proportional counters is weekly, and it was changed to daily in the proposed TNI standard V1M6 section 1.7.1c.iii. Typically, the count time used for background subtraction is as long as the longest sample count time, which for drinking water samples can be 48 hours. Under the current 2003 NELAC Standard, performing 48-hour background measurements on a weekly basis is impractical, and it would be impossible under the proposed TNI standard. What is your interpretation of the count time and frequency for background measurements used for subtraction, and of the count time and frequency for background measurements used for contamination checks? If the laboratory can provide background measurement data demonstrating consistent background readings over long periods of time, can it use this information to justify reducing the frequency and/or the count times of background measurements?

TNI Response: It was the intent of the authors that background measurements for gas-proportional counters be performed once a week and that the results be subtracted from the total measured activity in a sample. The counting time for a background measurement should be as long as the counting time of an average sample (although that is not stated in the NELAC Standard). Background check measurements were required for each day of use and served to check for contamination of the detector. The value obtained is not subtracted from the total measured activity in a sample; it is simply a quick check for detector contamination. The counting time for a background check measurement can be relatively short and certainly does not need to be as long as the counting time of a sample.
Again, a required counting time was not specified in the NELAC Standard. D.4.4 c) 3) requires that background measurements be performed on a weekly basis. Negative controls (such as method blanks) are discussed in D.4.1 a). Method blanks must be prepared with each preparation batch. As the Standard is written, there is no allowance for a longer period between background measurements. The 2009 standard specifies counting times, and this section was extensively revised in the 2016 standard. The SIR is obsolete.
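The background-subtraction step described above can be sketched as follows. This is a generic illustration of subtracting a weekly background count rate from a gross sample count rate, with Poisson counting uncertainty propagated in quadrature; the counts and times are invented, and the function is not taken from the standard.

```python
import math

def net_count_rate(gross_counts, t_sample, bkg_counts, t_bkg):
    """Illustrative background subtraction for a gas-proportional counter:
    the background count rate is subtracted from the gross sample count
    rate, and the counting (Poisson) uncertainties of both measurements
    are combined in quadrature. All values in the example are made up."""
    rate = gross_counts / t_sample - bkg_counts / t_bkg
    u = math.sqrt(gross_counts / t_sample**2 + bkg_counts / t_bkg**2)
    return rate, u

# 48-hour sample count against a 48-hour weekly background count
# (times in minutes: 48 h = 2880 min)
rate, u = net_count_rate(gross_counts=3100, t_sample=2880,
                         bkg_counts=1450, t_bkg=2880)
```

Matching the background counting time to the sample counting time, as the response recommends, keeps the background's contribution to the combined uncertainty from dominating the net result.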