Intra-Rater and Inter-Rater Reliability of a Medical Record Abstraction Study on Transition of Care after Childhood Cancer
Authors: Micòl E Gianinazzi, Corina S Rueegg, Karin Zimmerman, Claudia E Kuehni, Gisela Michel, and the Swiss Paediatric Oncology Group
Institution: 1. Department of Health Sciences and Health Policy, University of Lucerne, Lucerne, Switzerland; 2. Pediatric Hematology/Oncology, University Children's Hospital, Bern, Switzerland; 3. Swiss Childhood Cancer Registry, Institute of Social and Preventive Medicine, University of Bern, Bern, Switzerland
Abstract:

Background: The abstraction of data from medical records is a widespread practice in epidemiological research. However, studies using this means of data collection rarely report reliability. Within the Transition after Childhood Cancer Study (TaCC), which is based on a medical record abstraction, we conducted a second independent abstraction of data with the aim of assessing a) the intra-rater reliability of one rater at two time points; b) possible learning effects between these two time points, compared against a gold standard; and c) inter-rater reliability.

Method: Within the TaCC study we conducted a systematic medical record abstraction in the 9 Swiss clinics with pediatric oncology wards. In a second phase we selected a subsample of medical records in 3 clinics and conducted a second independent abstraction. We then assessed intra-rater reliability at two time points, the learning effect over time (comparing each rater at the two time points against a gold standard), and the inter-rater reliability of a selected number of variables. We calculated percentage agreement and Cohen's kappa.

Findings: For the assessment of intra-rater reliability we included 154 records (80 for rater 1; 74 for rater 2). For inter-rater reliability we included 70 records. Intra-rater reliability was substantial to excellent (Cohen's kappa 0.6-0.8), with an observed percentage agreement of 75%-95%. Learning effects were observed for all variables. Inter-rater reliability was substantial to excellent (Cohen's kappa 0.70-0.83), with high agreement ranging from 86% to 100%.

Conclusions: Our study showed that data abstracted from medical records are reliable. Investigating intra-rater and inter-rater reliability can give confidence in the conclusions drawn from the abstracted data and can increase data quality by minimizing systematic errors.
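For readers who want to reproduce the two reliability statistics used above, the sketch below shows one way to compute percentage agreement and Cohen's kappa for two raters in Python. The rater codes are hypothetical values invented for illustration; they are not data from the TaCC study.

from collections import Counter

def percent_agreement(a, b):
    # Share of records on which the two abstractions assign the same code.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    # Cohen's kappa: agreement corrected for the agreement expected by chance.
    n = len(a)
    p_o = percent_agreement(a, b)
    counts_a, counts_b = Counter(a), Counter(b)
    # Chance agreement if both raters coded independently at their marginal rates.
    p_e = sum(counts_a[k] * counts_b[k] for k in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical codes for one abstracted variable, two raters, 10 records.
rater1 = ["yes", "yes", "no", "no", "yes", "no", "yes", "yes", "no", "yes"]
rater2 = ["yes", "no",  "no", "no", "yes", "no", "yes", "yes", "yes", "yes"]

print(f"agreement: {percent_agreement(rater1, rater2):.0%}")  # 80%
print(f"kappa:     {cohens_kappa(rater1, rater2):.2f}")       # 0.58

Note that kappa comes out lower than raw agreement because it discounts the agreement two independent raters would reach by chance alone, which is why both measures are usually reported together.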