1.
Background
Traditionally, clinical research studies rely on collecting data with case report forms, which are subsequently entered into a database to create electronic records. Although well established, this method is time-consuming and error-prone. This study compares four electronic data capture (EDC) methods with the conventional approach with respect to duration of data capture and accuracy. It was performed in a West African setting, where clinical trials involve data collection from urban, rural and often remote locations.
Methodology/Principal Findings
Three types of commonly available EDC tools were assessed in face-to-face interviews: netbook, PDA, and tablet PC. EDC performance during telephone interviews via mobile phone was evaluated as a fourth method. The Graeco-Latin square study design allowed comparison of all four methods to standard paper-based recording followed by double data entry, while controlling simultaneously for possible confounding factors such as interview order, interviewer and interviewee. Over a study period of three weeks the error rates decreased considerably for all EDC methods. In the last week of the study, data accuracy for the netbook (5.1%, CI95%: 3.5–7.2%) and the tablet PC (5.2%, CI95%: 3.7–7.4%) was not significantly different from the accuracy of the conventional paper-based method (3.6%, CI95%: 2.2–5.5%), but error rates for the PDA (7.9%, CI95%: 6.0–10.5%) and telephone (6.3%, CI95%: 4.6–8.6%) remained significantly higher. While EDC interviews take slightly longer, data become readily available after download, making EDC more time-efficient overall. Free-text and date fields were associated with higher error rates than numerical, single-select and skip fields.
Conclusions
EDC solutions have the potential to produce data accuracy similar to that of paper-based methods. Given the considerable reduction in the time from data collection to database lock, EDC holds the promise of reducing research-associated costs. However, the successful implementation of EDC requires adjustment of work processes and reallocation of resources.
3.
In clinical trials, a biomarker (S) that is measured after randomization and is strongly associated with the true endpoint (T) can often provide information about T and hence about the effect of a treatment (Z) on T. A useful biomarker can be measured earlier than T and at lower cost. In this article, we consider the use of S as an auxiliary variable and examine the information recovery from using S to estimate the treatment effect on T, when S is completely observed and T is partially observed. In an ideal but often unrealistic setting, when S satisfies Prentice's definition of perfect surrogacy, there is the potential for substantial gain in precision by using data from S to estimate the treatment effect on T. When S is not close to a perfect surrogate, it can provide substantial information only under particular circumstances. We propose a targeted shrinkage regression approach that data-adaptively takes advantage of the potential efficiency gain yet avoids the need to make a strong surrogacy assumption. Simulations show that this approach strikes a balance between bias and efficiency gain. Compared with competing methods, it has better mean squared error properties and can achieve substantial efficiency gain, particularly in a common practical setting where S captures much but not all of the treatment effect and the sample size is relatively small. We apply the proposed method to a glaucoma data example.
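The abstract does not spell out the targeted shrinkage regression itself; a minimal sketch of the underlying idea, assuming we already have a direct estimate of the treatment effect on T and an auxiliary-variable estimate based on S (with roughly independent errors and known variances, both simplifying assumptions), is to weight the two so as to minimize an estimated mean squared error:

```python
def shrinkage_combine(theta_direct, var_direct, theta_aux, var_aux):
    """Combine a direct estimate of the treatment effect on T with an
    auxiliary estimate that borrows information from S.

    The weight on the auxiliary estimate minimizes an estimated mean
    squared error, so a possibly biased but more precise auxiliary
    estimate is used only to the extent that it appears to help.
    Illustrative only -- not the paper's targeted shrinkage regression.
    """
    # Crude estimate of the squared bias of theta_aux, truncated at zero
    # (assumes approximately independent estimation errors).
    bias_sq = max((theta_aux - theta_direct) ** 2 - var_direct - var_aux, 0.0)
    w = var_direct / (var_direct + var_aux + bias_sq)
    return w * theta_aux + (1 - w) * theta_direct, w
```

When the two estimates agree, the weight reduces to the usual inverse-variance form; when they disagree sharply, the estimated bias term shrinks the auxiliary weight toward zero, which is the bias/efficiency trade-off the abstract describes.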
Background
Lack of transparency in clinical trial conduct, publication bias and selective reporting bias are still important problems in medical research. Through clinical trials registration, it should be possible to take steps towards resolving some of these problems. However, previous evaluations of registered records of clinical trials have shown that registered information is often incomplete and non-meaningful. If these studies are accurate, this negates the possible benefits of registration of clinical trials.
Methods and Findings
A 5% sample of records of clinical trials that were registered between 17 June 2008 and 17 June 2009 was taken from the International Clinical Trials Registry Platform (ICTRP) database and assessed for the presence of contact information, the presence of intervention specifics in drug trials and the quality of primary and secondary outcome reporting. 731 records were included. More than half of the records were registered after recruitment of the first participant. The name of a contact person was available in 94.4% of records from non-industry funded trials and 53.7% of records from industry funded trials. Either an email address or a phone number was present in 76.5% of non-industry funded trial records and in 56.5% of industry funded trial records. Although a drug name or company serial number was almost always provided, other drug intervention specifics were often omitted from registration. Of 3643 reported outcomes, 34.9% were specific measures with a meaningful time frame.
Conclusions
Clinical trials registration has the potential to contribute substantially to improving clinical trial transparency and reducing publication bias and selective reporting. These potential benefits are currently undermined by deficiencies in the provision of information in key areas of registered records.
We develop a new Bayesian approach to sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
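The simulation-based algorithm is not given in the abstract; a minimal sketch of the general fitting/sampling-prior scheme, assuming a binary endpoint, a Beta(70, 30) sampling prior, a vague Beta(1, 1) fitting prior, and a 0.10 noninferiority margin (all illustrative choices, not the paper's), is:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulated_power(n_per_arm, margin=0.10, threshold=0.95,
                    n_trials=500, n_post=2000):
    """Simulation-based power for a binary-endpoint noninferiority trial.

    Wang-Gelfand-style scheme (illustrative, not the paper's full method):
    the truth is drawn from a 'sampling prior', the analysis uses a vague
    Beta(1, 1) 'fitting prior', and power is the fraction of simulated
    trials that declare noninferiority.
    """
    declared = 0
    for _ in range(n_trials):
        p_ctrl = rng.beta(70, 30)   # sampling prior centred near 0.7
        p_trt = p_ctrl              # arms truly equivalent
        x_c = rng.binomial(n_per_arm, p_ctrl)
        x_t = rng.binomial(n_per_arm, p_trt)
        # The Beta(1, 1) fitting prior gives conjugate Beta posteriors.
        post_c = rng.beta(1 + x_c, 1 + n_per_arm - x_c, n_post)
        post_t = rng.beta(1 + x_t, 1 + n_per_arm - x_t, n_post)
        if np.mean(post_t - post_c > -margin) >= threshold:
            declared += 1
    return declared / n_trials
```

SSD then amounts to increasing `n_per_arm` until the simulated power reaches the target, which is where the computational cost of the approach comes from.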
12.
Simple corrected density indices (CDIs) have been used to measure reductions in pest density in fields. In the contemporary pesticide registration system, few comprehensive statistical frameworks are available that can integrate multiple datasets to evaluate how pesticidal effects are influenced by product properties such as the mixing of multiple active ingredients and systemic ability. In this study, we provide a statistical framework for evaluating pesticide efficacy from multiple field trials and apply it to contemporary pesticides. In this framework, we extend the conventional CDI to a generalized linear mixed model (GLMM), which we applied to a dataset from the pesticide registration tests in Japan (n = 758). The estimated mortality of a single active ingredient in reducing pest density is 88.0%, indicating that the registered pesticide satisfies the "effective" criterion (roughly 70–95%) under the current pesticide registration system in Japan. Although systemic ability additionally reduced the pest population to 55.5% of the post-treatment densities, adding further active ingredients scarcely enhanced efficacy (reducing the population to 74.6%), suggesting that the pesticide design broadened the spectrum of target species rather than increasing toxicity.
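The GLMM extension is not reproduced here; as context, a conventional corrected density index of the kind the authors generalize can be sketched Henderson-Tilton style from pre- and post-treatment counts in treated and untreated plots (an assumed form, not taken from the paper):

```python
def corrected_efficacy(treated_before, treated_after,
                       control_before, control_after):
    """Corrected density index, Henderson-Tilton style (percent efficacy).

    The observed density reduction in the treated plot is corrected for
    the natural population change in the untreated control plot.
    Shown as context for the conventional CDI; an assumed form, not
    taken from the paper.
    """
    survival_ratio = ((treated_after / treated_before)
                      / (control_after / control_before))
    return 100.0 * (1.0 - survival_ratio)

# Treated plot falls 100 -> 20 while the control falls 100 -> 80:
# only the reduction beyond the natural decline counts as efficacy.
```

Replacing this single ratio with a GLMM, as the paper does, lets trial-level random effects and product properties (mixtures, systemic ability) enter as model terms across many trials instead of being averaged away.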
13.
Trends in Molecular Medicine, 2023, 29(9): 765-776
Electronic health records (EHRs) have become increasingly relied upon as a source for biomedical research. One important research application of EHRs is the identification of biomarkers associated with specific patient states, especially within complex conditions. However, using EHRs for biomarker identification can be challenging because the EHR was not designed with research as the primary focus. Despite this challenge, the EHR offers huge potential for biomarker discovery research to transform our understanding of disease etiology and treatment and to generate biological insights informing precision medicine initiatives. This review provides an in-depth analysis of how EHR data are currently used for phenotyping and identifying molecular biomarkers, the current challenges and limitations, and strategies to mitigate these challenges going forward.
14.
While measurement of quality of life is a vital part of assessing the effect of treatment in many clinical trials, a measure that is responsive to clinically important change is often unavailable. Investigators are therefore faced with the challenge of constructing an index for a specific condition or even for a single trial. There are several stages in the development and testing of a quality-of-life measure: selecting an initial item pool, choosing the "best" items from that pool, deciding on questionnaire format, pretesting the instrument, and demonstrating the responsiveness and validity of the instrument. At each stage the investigator must choose between a rigorous, time-consuming approach to questionnaire construction that will establish the clinical relevance, responsiveness and validity of the instrument and a more efficient, less costly strategy that leaves reproducibility, responsiveness and validity untested. This article describes these options and outlines a pragmatic approach that yields consistently satisfactory disease-specific measures of quality of life.
15.
Quantifying mortality of tropical rain forest trees using high-spatial-resolution satellite data
David B. Clark, Carlomagno Soto Castro, Luis Diego Alfaro Alvarado, Jane M. Read. Ecology Letters, 2004, 7(1): 52-59
Assessment of forest responses to climate change is severely hampered by the limited information on tree death on short temporal and broad spatial scales, particularly in tropical forests. We used 1-m resolution panchromatic IKONOS and 0.7-m resolution QuickBird satellite data, acquired in 2000 and 2002, respectively, to evaluate tree death rates at the La Selva Biological Station in old-growth Tropical Wet Forest in Costa Rica, Central America. Using a calibration factor derived from ground inspection of tree deaths predicted from the images, we calculated a landscape-scale annual exponential death rate of 2.8%. This corresponds closely to data for all canopy-level trees in 18 forest inventory plots, each of 0.5 ha, for a mostly-overlapping 2-year period (2.8% per year). This study shows that high-spatial-resolution satellite data can now be used to measure old-growth tropical rain forest tree death rates, suggesting many new avenues for tropical forest ecology and global change research.
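The annual exponential death rate quoted above follows the usual continuous mortality model, which can be sketched directly; the counts below are invented for illustration, not the study's data:

```python
import math

def annual_exponential_mortality(n_initial, n_surviving, years):
    """Annual exponential death rate m, solving
    n_surviving = n_initial * exp(-m * years) for m."""
    return math.log(n_initial / n_surviving) / years

# Hypothetical counts: 1000 canopy trees, 946 still alive two years later,
# which works out to roughly 0.028 (2.8%) per year.
m = annual_exponential_mortality(1000, 946, 2.0)
```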
16.
Berger VW. Biometrical Journal (Biometrische Zeitschrift), 2005, 47(2): 119-127; discussion 128-139
Selection bias is most common in observational studies, when patients select their own treatments or treatments are assigned based on patient characteristics such as disease severity. This first-order selection bias, as we call it, is eliminated by randomization, but residual selection bias may occur even in randomized trials when, subconsciously or otherwise, an investigator uses advance knowledge of upcoming treatment allocations as the basis for deciding whom to enroll. For example, patients more likely to respond may be preferentially enrolled when the active treatment is due to be allocated, and patients less likely to respond may be enrolled when the control group is due to be allocated. If the upcoming allocations can be observed in their entirety, then we call the resulting selection bias second-order selection bias. Allocation concealment minimizes the ability to observe upcoming allocations, yet upcoming allocations may still be predicted (imperfectly), or even determined with certainty, if at least some of the previous allocations are known and if restrictions (such as randomized blocks) were placed on the randomization. This mechanism, based on prediction rather than observation of upcoming allocations, is the third-order selection bias that is controlled by perfectly successful masking; without perfect masking it is not controlled even by the combination of advance randomization and allocation concealment. Our purpose is to quantify the magnitude of baseline imbalance that can result from third-order selection bias when the randomized block procedure is used. The smaller the block size, the more accurately one can predict future treatment assignments in the same block from known previous assignments, so this magnitude depends on the block size, as well as on the level of certainty about upcoming allocations required to bias the patient selection.
We find that a binary covariate can, on average, be up to 50% unbalanced by third-order selection bias.
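The certainty-based prediction driving third-order selection bias can be made concrete with a small sketch: in a 1:1 permuted-block list, count the allocation slots whose arm is fully determined by the preceding slots in the same block. With blocks of size 2, every second allocation is predictable with certainty:

```python
import random

def forced_fraction(block_size, n_blocks, rng):
    """Fraction of allocations in a 1:1 permuted-block list that are fully
    determined by the preceding allocations in the same block -- the
    certainty-based prediction behind third-order selection bias."""
    forced = 0
    for _ in range(n_blocks):
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        for i in range(block_size):
            if len(set(block[i:])) == 1:  # only one arm remains: all forced
                forced += block_size - i
                break
    return forced / (block_size * n_blocks)

# Blocks of size 2: the second allocation in every block is forced,
# so exactly half the list is predictable with certainty.
half = forced_fraction(2, 1000, random.Random(0))
```

Larger blocks reduce, but never eliminate, the forced fraction, which is the block-size dependence the abstract quantifies.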
17.
Simon Schneider, Heinz Schmidli, Tim Friede. Biometrical Journal (Biometrische Zeitschrift), 2013, 55(4): 617-633
Internal pilot studies are a popular design feature to address uncertainties in the sample size calculations caused by vague information on nuisance parameters. Despite their popularity, only very recently were blinded sample size reestimation procedures for trials with count data proposed and their properties systematically investigated. Although blinded procedures are favored by regulatory authorities, practical application is somewhat limited by fears that blinded procedures are prone to bias if the treatment effect was misspecified in the planning. Here, we compare unblinded and blinded procedures with respect to bias, error rates, and sample size distribution. We find that both procedures maintain the desired power and that the unblinded procedure is slightly liberal, whereas the actual significance level of the blinded procedure is close to the nominal level. Furthermore, we show that in situations where uncertainty about the assumed treatment effect exists, the blinded estimator of the control event rate is biased, in contrast to the unblinded estimator, which results in differences in mean sample sizes in favor of the unblinded procedure. However, these differences are rather small compared to the deviations of the mean sample sizes from the sample size required to detect the true, but unknown, effect. We demonstrate that the variation of the sample size resulting from the blinded procedure is, in many practically relevant situations, considerably smaller than that of the unblinded procedure. The methods are extended to overdispersed counts using a quasi-likelihood approach and are illustrated by trials in relapsing multiple sclerosis.
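The blinded reestimation idea can be sketched for Poisson counts, assuming 1:1 allocation, unit exposure per subject, and a Wald test of the log rate ratio (a simplified stand-in for the procedures studied in the paper): the pooled, treatment-blind event rate from the internal pilot is combined with the effect assumed at planning to back out arm-specific rates and recompute the sample size.

```python
import math
from statistics import NormalDist

def blinded_ssr_poisson(pooled_rate, assumed_rate_ratio,
                        alpha=0.05, power=0.80):
    """Blinded sample size reestimation for a two-arm Poisson trial (sketch).

    Assumes 1:1 allocation and unit exposure per subject. The blinded
    (pooled) event rate is split into arm-specific rates using the
    treatment effect assumed at the planning stage, and the per-arm
    sample size for the Wald test of the log rate ratio is recomputed.
    Simplified illustration, not the paper's exact procedure.
    """
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    lam_ctrl = 2 * pooled_rate / (1 + assumed_rate_ratio)
    lam_trt = assumed_rate_ratio * lam_ctrl
    # Variance of the log rate-ratio estimate: (1/lam_ctrl + 1/lam_trt) / n.
    var_unit = 1 / lam_ctrl + 1 / lam_trt
    n = (z_a + z_b) ** 2 * var_unit / math.log(assumed_rate_ratio) ** 2
    return math.ceil(n)
```

Note how the misspecification issue discussed above enters: the assumed rate ratio is used both to split the pooled rate and as the target effect, so an incorrect planning effect distorts the backed-out control rate.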
18.
Planned interim analysis of randomized clinical trials has been implemented for over a decade. While the initial proposal advocated analyzing after equal numbers of patients were evaluated, a later modification by Lan and DeMets (1983, Biometrika 70, 659-663) allowed for more flexible boundaries. Rather than fixing the times of analysis at equal numbers of patients, they fixed the rate at which the overall alpha is used up, according to a use function alpha*(t) defined for t in [0, 1] with alpha*(0) = 0 and alpha*(1) = alpha. Here we consider how flexible Lan and DeMets' procedure is. We show that the choice of alpha*(t) for a particular trial affects the permissible analysis times if other desirable properties of the sequence of nominal significance levels are to hold. To overcome the difficulties posed by patterns of late analysis, piecewise linear convex use functions are proposed.
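The use-function mechanics can be sketched directly: the type I error available at look k is the increment alpha*(t_k) - alpha*(t_{k-1}) of the spending function at the observed information times. Below is a minimal example with an illustrative piecewise linear convex use function (converting the spent alpha into nominal boundaries would additionally require the joint distribution of the sequential test statistics, which is not shown here):

```python
def alpha_increments(spend, times):
    """Incremental type I error available at each analysis, given a use
    function `spend` with spend(0) = 0 and spend(1) = alpha, and a list
    of increasing information times in (0, 1]."""
    spent = [spend(t) for t in times]
    return [spent[0]] + [spent[k] - spent[k - 1] for k in range(1, len(spent))]

# An illustrative piecewise linear convex use function with alpha = 0.05:
# spend at rate 0.02 up to t = 0.5, then at rate 0.08 for the remainder,
# so little alpha is used at early looks and most is saved for the end.
def convex_spend(t):
    return 0.02 * t if t <= 0.5 else 0.01 + 0.08 * (t - 0.5)

increments = alpha_increments(convex_spend, [0.25, 0.5, 0.75, 1.0])
```

The convex shape is what makes late, irregularly timed analyses tolerable: whenever a look happens, only the alpha accumulated since the previous look is at stake.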
19.
A random effects model for analyzing multivariate failure time data is proposed. The work is motivated by the need for assessing the mean treatment effect in a multicenter clinical trial study, assuming that the centers are a random sample from an underlying population. An estimating equation for the mean hazard ratio parameter is proposed. The proposed estimator is shown to be consistent and asymptotically normally distributed. A variance estimator, based on large sample theory, is proposed. Simulation results indicate that the proposed estimator performs well in finite samples. The proposed variance estimator effectively corrects the bias of the naive variance estimator, which assumes independence of individuals within a group. The methodology is illustrated with a clinical trial data set from the Studies of Left Ventricular Dysfunction. This shows that the variability of the treatment effect is higher than found by means of simpler models.