A central or site pathology reading of biopsy slides is used in liver transplant clinical trials to determine rejection. We evaluated interrater reliability of "rejection or not" readings using digitized slides from the Medication Adherence in Children who had a Liver Transplant (MALT) study. Four masked, experienced pathologists read the digitized slides and then reread them after a study-specific histologic endpoint development program. Agreement was expressed throughout as a kappa or Fleiss kappa statistic (κ); κ > 0.6 was predefined as desirable. Readings were correlated with immunosuppressant adherence (the Medication Level Variability Index [MLVI]) and with maximal liver enzyme levels during the study period. Interrater agreement between site and central review in MALT, and subsequently among the 4 pathologists, was low (κ = 0.44 and Fleiss κ = 0.41, respectively). Following the endpoint development program, agreement improved and became acceptable (κ = 0.71). The final reading was better aligned with maximal gamma-glutamyl transferase levels and with MLVI than the original central reading was. We found substantial disagreement between experienced pathologists reading the same slides. A unique study-specific procedure raised interrater reliability to an acceptable level. Such a procedure may be indicated to increase the reliability of histopathologic determinations in future research, and perhaps also in clinical practice.
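For readers unfamiliar with the agreement statistics cited above, the following is a minimal, self-contained sketch of how Cohen's κ (two readers, e.g. site vs. central) and Fleiss' κ (four readers) are computed on binary "rejection or not" ratings. This is illustrative only; the ratings below are invented and do not come from the MALT study.

```python
# Illustrative computation of the two agreement statistics named in the
# abstract. All data are hypothetical, not study data.
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items."""
    n = len(r1)
    # Observed proportion of agreement
    po = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance-expected agreement from each rater's marginal frequencies
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n ** 2
    return (po - pe) / (1 - pe)

def fleiss_kappa(ratings, categories=(0, 1)):
    """Fleiss' kappa; `ratings` is a list of per-item rating lists,
    one rating per rater (same number of raters for every item)."""
    n_items = len(ratings)
    n_raters = len(ratings[0])
    p_bar = 0.0
    cat_totals = Counter()
    for item in ratings:
        counts = Counter(item)
        cat_totals.update(counts)
        # Per-item pairwise agreement
        p_bar += sum(c * (c - 1) for c in counts.values()) / (
            n_raters * (n_raters - 1))
    p_bar /= n_items
    total = n_items * n_raters
    # Chance agreement from overall category proportions
    pe = sum((cat_totals[c] / total) ** 2 for c in categories)
    return (p_bar - pe) / (1 - pe)

# Hypothetical binary reads (1 = rejection, 0 = no rejection)
site = [1, 1, 0, 0, 1, 0, 1, 0]
central = [1, 0, 0, 0, 1, 1, 1, 0]
print(cohens_kappa(site, central))

four_readers = [[1, 1, 1, 1], [0, 0, 0, 0], [1, 1, 0, 0]]
print(fleiss_kappa(four_readers))
```

Both statistics correct raw percent agreement for the agreement expected by chance, which is why the predefined threshold of κ > 0.6 is a meaningfully stricter bar than simple concordance.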