Calculating power for the comparison of dependent κ-coefficients

  • Hung Mo Lin
  • John M. Williamson
  • Stuart R. Lipsitz

Research output: Contribution to journal › Article › peer-review

12 Scopus citations

Abstract

In the psychosocial and medical sciences, some studies are designed to assess the agreement between different raters and/or different instruments. Often the same sample will be used to compare the agreement between two or more assessment methods for simplicity and to take advantage of the positive correlation of the ratings. Although sample size calculations have become an important element in the design of research projects, such methods for agreement studies are scarce. We adapt the generalized estimating equations approach for modelling dependent κ-statistics to estimate the sample size that is required for dependent agreement studies. We calculate the power based on a Wald test for the equality of two dependent κ-statistics. The Wald test statistic has a non-central χ2-distribution with non-centrality parameter that can be estimated with minimal assumptions. The method proposed is useful for agreement studies with two raters and two instruments, and is easily extendable to multiple raters and multiple instruments. Furthermore, the method proposed allows for rater bias. Power calculations for binary ratings under various scenarios are presented. Analyses of two biomedical studies are used for illustration.
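The power calculation described above reduces to evaluating a non-central χ²-distribution at the critical value of the central χ²-distribution. A minimal sketch of that step is given below, assuming the non-centrality parameter λ has already been estimated from pilot data or design assumptions as the paper describes; the function name `wald_power` and the sample-size rescaling are illustrative, not the authors' code.

```python
from scipy.stats import chi2, ncx2

def wald_power(lam, df=1, alpha=0.05):
    """Power of a Wald chi-square test with df degrees of freedom
    and non-centrality parameter lam, at significance level alpha."""
    crit = chi2.ppf(1 - alpha, df)   # critical value under H0 (central chi-square)
    return ncx2.sf(crit, df, lam)    # P(reject H0) under the alternative

# Because lam grows linearly with the sample size N, a pilot estimate
# lam_pilot at N_pilot can be rescaled to find the N giving target power:
def required_n(lam_pilot, n_pilot, target_power=0.8, df=1, alpha=0.05):
    lam = lam_pilot
    while wald_power(lam, df, alpha) < target_power:
        lam += lam_pilot / n_pilot   # add one subject's worth of non-centrality
    return round(lam / (lam_pilot / n_pilot))
```

With df = 1 and α = 0.05, a non-centrality parameter of about 7.85 yields roughly 80% power, consistent with standard χ² power tables.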

Original language: English
Pages (from-to): 391-404
Number of pages: 14
Journal: Journal of the Royal Statistical Society. Series C: Applied Statistics
Volume: 52
Issue number: 4
State: Published - 2003
Externally published: Yes

Keywords

  • Agreement
  • Generalized estimating equations
  • Power
  • Sample size
  • κ
