Abstract
Some studies in the medical sciences and pharmaceutical research are designed to assess the agreement between different raters and/or different instruments. In practice, the same sample is often used to compare the agreement of two or more assessment methods, both for simplicity and to take advantage of the positive correlation among the ratings. The concordance correlation coefficient (CCC) is often used as a measure of agreement when the rating is a continuous variable. We present an approach for calculating the sample size required for testing the equality of two CCCs, H0: CCC1 = CCC2 vs. HA: CCC1 ≠ CCC2, where two assessment methods are applied to the same sample by two raters, resulting in correlated CCC estimates. Our approach is to simulate one large "exemplary" dataset based on the specification of the joint distribution of the pairwise ratings for the two methods. We then create two new random variables from the simulated data that have the same variance-covariance matrix as the two dependent CCC estimates, using the Taylor series linearization method. The method requires minimal computing time and can easily be extended to comparisons of more than two CCCs, or of Kappa statistics.
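The abstract's sample-size procedure itself is not reproduced here; the following is only an illustrative sketch of the two building blocks it names: computing Lin's CCC, and simulating one large "exemplary" dataset from a specified joint distribution of the pairwise ratings for two methods. The correlation values, the column layout, and the function name `ccc` are hypothetical, not taken from the paper.

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient for paired continuous ratings:
    2*s_xy / (s_x^2 + s_y^2 + (mean_x - mean_y)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # moments with 1/n divisor
    sxy = ((x - mx) * (y - my)).mean()   # covariance with 1/n divisor
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

# Hypothetical "exemplary" dataset: each subject is rated by two raters under
# each of two methods, so the four columns are jointly distributed and the two
# CCC estimates computed from them are correlated.
# Column order (an assumption): (method 1 / rater A, method 1 / rater B,
#                                method 2 / rater A, method 2 / rater B)
corr = np.array([
    [1.00, 0.90, 0.40, 0.45],
    [0.90, 1.00, 0.45, 0.40],
    [0.40, 0.45, 1.00, 0.90],
    [0.45, 0.40, 0.90, 1.00],
])
rng = np.random.default_rng(42)
ratings = rng.multivariate_normal(mean=np.zeros(4), cov=corr, size=100_000)

ccc1 = ccc(ratings[:, 0], ratings[:, 1])  # agreement under method 1
ccc2 = ccc(ratings[:, 2], ratings[:, 3])  # agreement under method 2
```

With standardized margins (equal means and variances), each within-method CCC reduces to the within-method correlation, so both estimates here should land near 0.9; the paper's contribution is then to obtain the joint variance-covariance matrix of such dependent estimates via Taylor series linearization and derive the sample size for testing their equality.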
Original language | English |
---|---|
Pages (from-to) | 1145-1160 |
Number of pages | 16 |
Journal | Journal of Biopharmaceutical Statistics |
Volume | 25 |
Issue number | 6 |
DOIs | |
State | Published - 2 Nov 2015 |
Keywords
- Agreement
- Concordance correlation coefficient
- Power
- Sample size
- Taylor series linearization