Found 20 similar documents (search time: 31 ms)
Background
The WHO estimates that 13% of maternal mortality is due to unsafe abortion, but challenges with measurement and data quality persist. To our knowledge, no systematic assessment of the validity of studies reporting estimates of abortion-related mortality exists.
Study Design
To be included in this study, articles had to meet the following criteria: (1) published between September 1, 2000 and December 1, 2011; (2) utilized data from a country where abortion is “considered unsafe”; (3) specified and enumerated causes of maternal death, including “abortion”; (4) enumerated ≥100 maternal deaths; (5) reported a quantitative research study; (6) published in a peer-reviewed journal.
Results
7,438 articles were initially identified. Thirty-six studies were ultimately included. Overall, studies rated “Very Good” found the highest estimates of abortion-related mortality (median 16%, range 1–27.4%). Studies rated “Very Poor” found the lowest overall proportion of abortion-related deaths (median 2%, range 1.3–9.4%).
Conclusions
Improvements in the quality of data collection would facilitate a better understanding of global abortion-related mortality. Until improved data exist, better reporting of study procedures and standardization of the definitions of abortion and abortion-related mortality should be encouraged.
Background
The biomarker discovery field is replete with molecular signatures that have not translated into the clinic despite ostensibly promising performance in predicting disease phenotypes. One widely cited reason is lack of classification consistency, largely due to failure to maintain performance from study to study. This failure is widely attributed to variability in data collected for the same phenotype among disparate studies, due to technical factors unrelated to phenotypes (e.g., laboratory settings resulting in “batch-effects”) and non-phenotype-associated biological variation in the underlying populations. These sources of variability persist in new data collection technologies.
Methods
Here we quantify the impact of these combined “study-effects” on a disease signature’s predictive performance by comparing two types of validation methods: ordinary randomized cross-validation (RCV), which extracts random subsets of samples for testing, and inter-study validation (ISV), which excludes an entire study for testing. Whereas RCV hardwires an assumption of training and testing on identically distributed data, this key property is lost in ISV, yielding systematic decreases in performance estimates relative to RCV. Measuring the RCV-ISV difference as a function of the number of studies quantifies the influence of study-effects on performance.
Results
As a case study, we gathered publicly available gene expression data from 1,470 microarray samples of 6 lung phenotypes from 26 independent experimental studies and 769 RNA-seq samples of 2 lung phenotypes from 4 independent studies. We find that the RCV-ISV performance discrepancy is greater in phenotypes with few studies, and that the ISV performance converges toward RCV performance as data from additional studies are incorporated into classification.
Conclusions
We show that by examining how fast ISV performance approaches RCV as the number of studies is increased, one can estimate when “sufficient” diversity has been achieved for learning a molecular signature likely to translate without significant loss of accuracy to new clinical settings.
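The RCV-versus-ISV contrast can be sketched in a few lines. The toy simulation below (invented data and a deliberately simple nearest-centroid classifier, not the study's actual pipeline) builds three "studies" with study-specific batch offsets and compares a random split against leave-one-study-out validation:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_study(n, shift):
    """Simulate one 'study': two phenotypes separated on feature 0,
    plus a study-wide batch offset (the 'study-effect')."""
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 5))
    X[:, 0] += 2.0 * y      # phenotype signal
    X += shift              # batch effect shared by every sample in the study
    return X, y

def nearest_centroid_acc(Xtr, ytr, Xte, yte):
    """Accuracy of a nearest-centroid classifier trained on (Xtr, ytr)."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    pred = (np.linalg.norm(Xte - c1, axis=1) <
            np.linalg.norm(Xte - c0, axis=1)).astype(int)
    return float((pred == yte).mean())

# Three studies, each with a different batch offset
studies = [make_study(60, shift) for shift in (0.0, 1.5, -1.5)]
X = np.vstack([s[0] for s in studies])
y = np.concatenate([s[1] for s in studies])
groups = np.repeat([0, 1, 2], 60)

# RCV: a random split that ignores study membership
idx = rng.permutation(len(y))
tr, te = idx[:90], idx[90:]
rcv = nearest_centroid_acc(X[tr], y[tr], X[te], y[te])

# ISV: hold out one entire study at a time
isv = float(np.mean([
    nearest_centroid_acc(X[groups != g], y[groups != g],
                         X[groups == g], y[groups == g])
    for g in range(3)
]))
print(f"RCV accuracy: {rcv:.2f}  ISV accuracy: {isv:.2f}")
```

On data like these, ISV typically tracks below RCV; that gap is exactly the "study-effect" penalty the paper measures.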
Background
Here we present convergent methodologies, using theoretical calculations, empirical assessment on in-house and publicly available datasets, and in silico simulations, that validate a panel of SNPs for a variety of necessary tasks in human genetics disease research before resources are committed to larger-scale genotyping studies on those samples. While large-scale, well-funded human genetic studies routinely have up to a million SNP genotypes, samples in a human genetics laboratory that are not yet part of such studies may be productively utilized in pilot projects or as part of targeted follow-up work. Such smaller-scale applications nevertheless require at least some genome-wide genotype data for quality-control purposes, such as DNA “barcoding” to detect swaps or contamination, determining familial relationships between samples, and correcting biases due to population effects such as population stratification in pilot studies.
Principal Findings
Empirical performance in classification of relative types for any two given DNA samples (e.g., full siblings, parental, etc.) indicated that for outbred populations the panel performs well enough to classify relationships in extended families, and therefore also in smaller structures such as trios and in twin zygosity testing. Additionally, familial relationships do not significantly diminish the (mean match) probability of sharing SNP genotypes in pedigrees, further indicating the uniqueness of the “barcode.” Simulation using these SNPs for an African American case-control disease association study demonstrated that population stratification, even in complex admixed samples, can be adequately corrected under a range of disease models using the SNP panel.
Conclusion
The panel has been validated for use in a variety of human disease genetics research tasks, including sample barcoding, relationship verification, and population substructure detection and statistical correction. Given the ease of genotyping with the specific assay contained herein, this panel represents a useful and economical tool for human geneticists.
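The "barcode" uniqueness claim can be made concrete. Under Hardy-Weinberg equilibrium, the probability that two unrelated individuals share a genotype at a biallelic SNP is the sum of squared genotype frequencies, and the panel-wide random-match probability is the product over unlinked SNPs. A sketch with invented allele frequencies (not the published panel):

```python
# Random-match probability of a SNP "barcode" panel under
# Hardy-Weinberg equilibrium. Allele frequencies are invented.

def genotype_match_prob(q):
    """P(two unrelated individuals share a genotype at a biallelic SNP
    with minor-allele frequency q), assuming HWE and independence."""
    p = 1.0 - q
    return (p * p) ** 2 + (2.0 * p * q) ** 2 + (q * q) ** 2

def panel_match_prob(freqs):
    """Product of per-SNP match probabilities across an unlinked panel."""
    prob = 1.0
    for q in freqs:
        prob *= genotype_match_prob(q)
    return prob

# A 40-SNP panel: common variants are the most discriminating
# (q = 0.5 gives a per-SNP match probability of 0.375)
panel = [0.5] * 30 + [0.3] * 10
print(f"Random-match probability: {panel_match_prob(panel):.2e}")
```

Even a modest panel drives the random-match probability far below one in a trillion, which is why a few dozen SNPs suffice for swap and contamination detection.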
Porjai Pattanittum, Malinee Laopaiboon, David Moher, Pisake Lumbiganon, Chetta Ngamjarus. PLoS ONE 2012, 7(11)
Background
Systematic reviews (SRs) can provide accurate and reliable evidence, typically about the effectiveness of health interventions. Evidence is dynamic, and if SRs are out-of-date this information may not be useful; it may even be harmful. This study aimed to compare five statistical methods to identify out-of-date SRs.
Methods
A retrospective cohort of SRs registered in the Cochrane Pregnancy and Childbirth Group (CPCG), published between 2008 and 2010, was considered for inclusion. For each eligible CPCG review, data were extracted and “3-years-previous” meta-analyses were assessed for the need to update, given the data from the most recent 3 years. Each of the five statistical methods was used, with random-effects analyses throughout the study.
Results
Eighty reviews were included in this study; most were in the area of induction of labour. The numbers of reviews identified as being out-of-date using the Ottawa, recursive cumulative meta-analysis (CMA), and Barrowman methods were 34, 7, and 7 respectively. No reviews were identified as being out-of-date using the simulation-based power method, or the CMA for sufficiency and stability method. The overall agreement among the three discriminating statistical methods was slight (Kappa = 0.14; 95% CI 0.05 to 0.23). The recursive cumulative meta-analysis, Ottawa, and Barrowman methods were practical according to the study criteria.
Conclusion
Our study shows that three practical statistical methods could be applied to examine the need to update SRs.
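The "slight" agreement above (Kappa = 0.14) is a chance-corrected agreement statistic. For two methods giving binary out-of-date verdicts, Cohen's kappa is a few lines of arithmetic; the verdict vectors below are invented, and the paper's three-method comparison would use a multi-rater generalization such as Fleiss' kappa:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two raters giving binary verdicts
    (here: 1 = review judged out-of-date, 0 = not)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical verdicts of two update-detection methods on 10 reviews
ottawa    = [1, 1, 1, 0, 0, 1, 0, 1, 0, 0]
barrowman = [1, 0, 0, 0, 0, 1, 0, 0, 1, 0]
print(f"kappa = {cohens_kappa(ottawa, barrowman):.2f}")
```

Kappa near 0 means the methods agree barely more often than chance would predict, even when raw percent agreement looks respectable.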
Background
There is no consensus as to what extent of “wrap” is required in a fundoplication for correction of gastroesophageal reflux disease (GERD).
Objective
To evaluate whether a complete (360-degree) or partial fundoplication gives better control of GERD.
Methods
A systematic search of MEDLINE and Scopus identified interventional and observational studies of fundoplication in children. Screening identified those comparing techniques. The primary outcome was recurrence of GERD following surgery. Dysphagia and complications were secondary outcomes of interest. Meta-analysis was performed when appropriate. Study quality was assessed using the Cochrane Risk of Bias Tool.
Results
2,289 abstracts were screened, yielding 2 randomized controlled trials (RCTs) and 12 retrospective cohort studies. The RCTs were pooled. There was no difference in surgical success between partial and complete fundoplication (OR 1.33 [0.67, 2.66]). In the 12 cohort studies, 3 (25%) used an objective assessment of the surgery, one of which showed improved outcomes with complete fundoplication. Twenty-five different complications were reported; the most common were dysphagia and gas-bloat syndrome. Overall study quality was poor.
Conclusions
The comparison of partial fundoplication with complete fundoplication warrants further study. The evidence does not demonstrate superiority of one technique. The lack of high-quality RCTs and the methodological heterogeneity of observational studies limit a powerful meta-analysis.
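Pooling two RCTs into a single odds ratio, as above, is typically done by inverse-variance weighting of log odds ratios. A minimal fixed-effect sketch with invented 2x2 tables (the review's actual counts and pooling model are not stated in the abstract):

```python
import math

def log_or_and_se(a, b, c, d):
    """Log odds ratio and its standard error from a 2x2 table
    (a, b = events / non-events in arm 1; c, d = the same in arm 2)."""
    return math.log((a * d) / (b * c)), math.sqrt(1/a + 1/b + 1/c + 1/d)

def pool_fixed_effect(tables, z=1.96):
    """Inverse-variance fixed-effect pooled OR with a 95% CI."""
    num = den = 0.0
    for table in tables:
        log_or, se = log_or_and_se(*table)
        w = 1.0 / se ** 2      # weight = inverse of the variance
        num += w * log_or
        den += w
    pooled, pooled_se = num / den, math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * pooled_se),
            math.exp(pooled + z * pooled_se))

# Two hypothetical RCTs: (failures, successes) per arm; counts invented
trials = [(8, 42, 6, 44), (5, 45, 4, 46)]
or_, lo, hi = pool_fixed_effect(trials)
print(f"Pooled OR {or_:.2f} [{lo:.2f}, {hi:.2f}]")
```

A confidence interval straddling 1, as in the review's pooled estimate, is exactly what "no difference in surgical success" means here.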
Flandre P. PLoS ONE 2011, 6(9): e22871
Background
In recent years the “noninferiority” trial has emerged as the new standard design for HIV drug development among antiretroviral patients, often with a primary endpoint based on the difference in success rates between the two treatment groups. Different statistical methods have been introduced to provide confidence intervals for that difference. The main objective is to investigate whether the choice of the statistical method changes the conclusion of the trials.
Methods
We present 11 trials published in 2010 using a difference in proportions as the primary endpoint. In these trials, 5 different statistical methods had been used to estimate such confidence intervals. The five methods are described and applied to data from the 11 trials. The noninferiority of the new treatment is not demonstrated if the confidence interval of the treatment difference includes the prespecified noninferiority margin.
Results
Results indicated that confidence intervals can be quite different according to the method used. In many situations, however, the conclusions of the trials are not altered, because point estimates of the treatment difference were too far from the prespecified noninferiority margins. Nevertheless, in a few trials the use of different statistical methods led to different conclusions. In particular, the use of “exact” methods can be very confusing.
Conclusion
Statistical methods used to estimate confidence intervals in noninferiority trials have a strong impact on the conclusion of such trials.
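How the interval method can flip a noninferiority verdict is easy to reproduce. The sketch below compares the simple Wald interval with Newcombe's Wilson-score-based hybrid for a difference in proportions; the trial counts and margin are invented and deliberately chosen so the two methods disagree at the margin:

```python
import math

Z = 1.96  # two-sided 95%

def wilson(x, n):
    """Wilson score interval for a single proportion x/n."""
    p = x / n
    denom = 1.0 + Z * Z / n
    center = (p + Z * Z / (2 * n)) / denom
    half = Z * math.sqrt(p * (1 - p) / n + Z * Z / (4 * n * n)) / denom
    return center - half, center + half

def wald_diff_ci(x1, n1, x2, n2):
    """Simple Wald interval for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return p1 - p2 - Z * se, p1 - p2 + Z * se

def newcombe_diff_ci(x1, n1, x2, n2):
    """Newcombe's hybrid Wilson-score interval for p1 - p2."""
    p1, p2 = x1 / n1, x2 / n2
    l1, u1 = wilson(x1, n1)
    l2, u2 = wilson(x2, n2)
    d = p1 - p2
    return (d - math.sqrt((p1 - l1) ** 2 + (u2 - p2) ** 2),
            d + math.sqrt((u1 - p1) ** 2 + (p2 - l2) ** 2))

# Invented trial: 84/100 successes (new drug) vs 86/100 (comparator),
# with a noninferiority margin of -12 percentage points
MARGIN = -0.12
for name, (lo, hi) in [("Wald", wald_diff_ci(84, 100, 86, 100)),
                       ("Newcombe", newcombe_diff_ci(84, 100, 86, 100))]:
    verdict = "noninferior" if lo > MARGIN else "not demonstrated"
    print(f"{name:9s} [{lo:+.3f}, {hi:+.3f}] -> {verdict}")
```

With these counts the Wald lower bound sits just inside the margin while the Newcombe bound sits just outside it, so the same data yield opposite verdicts.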
Dros J, Maarsingh OR, van der Windt DA, Oort FJ, ter Riet G, de Rooij SE, Schellevis FG, van der Horst HE, van Weert HC. PLoS ONE 2011, 6(1): e16481
Background
The diagnostic approach to dizzy older patients is not straightforward, as many organ systems can be involved and evidence for diagnostic strategies is lacking. A first differentiation into diagnostic subtypes or profiles may guide the diagnostic process for dizziness and can serve as a classification system in future research. This has been done in the literature, but based on pathophysiological reasoning only.
Objective
To establish a classification of diagnostic profiles of dizziness based on empirical data.
Design
Cross-sectional study.
Participants and Setting
417 consecutive patients aged 65 years and older presenting with dizziness to 45 primary care physicians in the Netherlands from July 2006 to January 2008.
Methods
We performed tests, including patient history and physical and additional examinations, previously selected by an international expert panel and based on an earlier systematic review. We used the results of these tests in a principal component analysis for exploration, data reduction and, finally, differentiation into diagnostic dizziness profiles.
Results
Demographic data and the results of the tests yielded 221 variables, of which 49 contributed to the classification of dizziness into six diagnostic profiles, which may be named as follows: “frailty”, “psychological”, “cardiovascular”, “presyncope”, “non-specific dizziness” and “ENT”. These explained 32% of the variance.
Conclusions
Empirically identified components classify dizziness into six profiles. This classification takes into account the heterogeneity and multicausality of dizziness, may serve as a starting point for research on diagnostic strategies, and can be a first step toward an evidence-based diagnostic approach to dizzy older patients.
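The data-reduction step (221 variables compressed into components explaining 32% of the variance) follows the standard principal component analysis recipe. A small synthetic sketch, with 12 invented variables standing in for the study's 221:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for a patient-by-test matrix (data invented):
# 100 "patients", 12 test results driven by 2 latent factors plus noise.
latent = rng.normal(size=(100, 2))
loadings = rng.normal(size=(2, 12))
X = latent @ loadings + 0.5 * rng.normal(size=(100, 12))

# PCA via SVD of the centered, standardized matrix
Xs = (X - X.mean(axis=0)) / X.std(axis=0)
_, s, _ = np.linalg.svd(Xs, full_matrices=False)
explained = s ** 2 / (s ** 2).sum()   # variance explained per component

print("Variance explained by the first two components:",
      f"{explained[:2].sum():.0%}")
```

In the study, the leading components were then interpreted and named ("frailty", "cardiovascular", and so on), which is the usual manual step after the numerical decomposition.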
Background
Despite its high prevalence and major public health ramifications, obstructive sleep apnea syndrome (OSAS) remains underdiagnosed. In many developed countries, because community pharmacists (CPs) are easily accessible, they have been developing additional clinical services that integrate with, and are delivered in collaboration with, other healthcare providers (general practitioners (GPs), nurses, etc.). Alternative strategies for primary care screening programs for OSAS involving the CP are discussed.
Objective
To estimate the quality of life, costs, and cost-effectiveness of three screening strategies among patients who are at risk of having moderate to severe OSAS in primary care.
Design
Markov decision model.
Data Sources
Published data.
Target Population
Hypothetical cohort of 50-year-old male patients with symptoms highly evocative of OSAS.
Time Horizon
The 5 years after initial evaluation for OSAS.
Perspective
Societal.
Interventions
Screening strategy with CP (CP-GP collaboration), screening strategy without CP (GP alone), and no screening.
Outcome measures
Quality of life, survival, and costs for each screening strategy.
Results of base-case analysis
Under almost all modeled conditions, the involvement of CPs in OSAS screening was cost-effective. The maximal incremental cost for the “screening strategy with CP” was about €455 per QALY gained.
Results of sensitivity analysis
Our results were robust, but primarily sensitive to the costs of treatment with continuous positive airway pressure and the costs of untreated OSAS. The probabilistic sensitivity analysis showed that the “screening strategy with CP” was dominant in 80% of cases. It was more effective and less costly in 47% of cases, and within the cost-effective range (maximum incremental cost-effectiveness ratio of €6,186.67/QALY) in 33% of cases.
Conclusions
CP involvement in OSAS screening is a cost-effective strategy. This proposal is consistent with the trend in Europe and the United States to extend the practices and responsibilities of the pharmacist in primary care.
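The modeling machinery behind such results can be sketched as a tiny Markov cohort model. Every probability, cost and utility below is invented for illustration; these are not the parameters of the published model:

```python
# Minimal Markov cohort sketch of a screening cost-utility comparison.
P_DIE = {"treated": 0.01, "untreated": 0.03}     # annual death probability
UTILITY = {"treated": 0.85, "untreated": 0.70}   # QALY weight per year alive
COST = {"treated": 1500.0, "untreated": 600.0}   # euros per year

def run_cohort(p_treated, years=5, disc=0.03):
    """Discounted QALYs and costs over a 5-year horizon."""
    state = {"treated": p_treated, "untreated": 1.0 - p_treated}
    qalys = costs = 0.0
    for t in range(years):
        d = 1.0 / (1.0 + disc) ** t            # discount factor for year t
        qalys += d * sum(state[s] * UTILITY[s] for s in state)
        costs += d * sum(state[s] * COST[s] for s in state)
        for s in state:                        # survivors stay in their state
            state[s] *= 1.0 - P_DIE[s]
    return qalys, costs

# Screening identifies (and treats) more of the cohort than usual care
q1, c1 = run_cohort(p_treated=0.70)   # with pharmacist-assisted screening
q0, c0 = run_cohort(p_treated=0.20)   # without screening
icer = (c1 - c0) / (q1 - q0)
print(f"Incremental cost per QALY gained: {icer:.0f} euros")
```

The incremental cost-effectiveness ratio (extra cost divided by extra QALYs) is the quantity the base-case and sensitivity analyses above report.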
Yali Liu, Rui Zhang, Jiao Huang, Xu Zhao, Danlu Liu, Wanting Sun, Yuefen Mai, Peng Zhang, Yajun Wang, Hua Cao, Ke-hu Yang. PLoS ONE 2014, 9(11)
Background
The QUOROM and PRISMA statements were published in 1999 and 2009, respectively, to improve the consistency of reporting systematic reviews (SRs)/meta-analyses (MAs) of clinical trials. However, not all SRs/MAs adhere completely to these important standards. In particular, it is not clear how well SRs/MAs of acupuncture studies adhere to reporting standards and which reporting criteria are generally ignored in these analyses.
Objectives
To evaluate reporting quality in SRs/MAs of acupuncture studies.
Methods
We performed a literature search for studies published prior to 2014 using the following public archives: PubMed, EMBASE, Web of Science, the Cochrane Database of Systematic Reviews (CDSR), the Chinese Biomedical Literature Database (CBM), the Traditional Chinese Medicine (TCM) database, the Chinese Journal Full-text Database (CJFD), the Chinese Scientific Journal Full-text Database (CSJD), and the Wanfang database. Data were extracted into pre-prepared Excel data-extraction forms. Reporting quality was assessed based on the PRISMA checklist (27 items).
Results
Of 476 appropriate SRs/MAs identified in our search, 203, 227, and 46 were published in Chinese journals, international journals, and the Cochrane Database, respectively. Of the 476 SRs/MAs, only 3 reported the information completely. By contrast, approximately 0.49% (1/203), 0.88% (2/227) and 0.00% (0/46) of SRs/MAs reported fewer than 10 items in Chinese journals, international journals and the CDSR, respectively. In general, the least frequently reported items (reported in ≤50% of SRs/MAs) were “protocol and registration”, “risk of bias across studies”, and “additional analyses”, in both the methods and results sections.
Conclusions
SRs/MAs of acupuncture studies have not comprehensively reported information recommended in the PRISMA statement. Our study underscores that, in addition to focusing on careful study design and performance, attention should be paid to comprehensive reporting standards in SRs/MAs on acupuncture studies.
S Brons, ME van Beusichem, EM Bronkhorst, J Draaisma, SJ Bergé, TJ Maal, AM Kuijpers-Jagtman. PLoS ONE 2012, 7(8): e41898
Context
Technological advancements have led craniofacial researchers and clinicians into the era of three-dimensional digital imaging for quantitative evaluation of craniofacial growth and treatment outcomes.
Objective
To give an overview of soft-tissue-based methods for quantitative longitudinal assessment of facial dimensions in children up to six years of age, and to assess the reliability of these methods in studies with good methodological quality.
Data Source
PubMed, EMBASE, Cochrane Library, Web of Science, Scopus and CINAHL were searched. A hand search was performed to check for additional relevant studies.
Study Selection
Primary publications on facial growth and treatment outcomes in children younger than six years of age were included.
Data Extraction
Independent data extraction by two observers. A quality assessment instrument was used to determine the methodological quality. Methods used in studies with good methodological quality were assessed for reliability, expressed as the magnitude of the measurement error and the correlation coefficient between repeated measurements.
Results
In total, 47 studies were included, describing 4 methods: 2D x-ray cephalometry; 2D photography; anthropometry; and 3D imaging techniques (surface laser scanning, stereophotogrammetry and cone beam computed tomography). In general the measurement error was below 1 mm and 1°, and correlation coefficients ranged from 0.65 to 1.0.
Conclusion
Various methods have been shown to be reliable. However, at present stereophotogrammetry seems to be the best 3D method for quantitative longitudinal assessment of facial dimensions in children up to six years of age, due to its millisecond-fast image capture, archival capabilities, high resolution and lack of exposure to ionizing radiation.
Florian Plaza Onate, Jean-Michel Batto, Catherine Juste, Jehane Fadlallah, Cyrielle Fougeroux, Doriane Gouas, Nicolas Pons, Sean Kennedy, Florence Levenez, Joel Dore, S Dusko Ehrlich, Guy Gorochov, Martin Larsen. BMC Genomics 2015, 16(1)
Background
The biological and clinical consequences of the tight interactions between host and microbiota are rapidly being unraveled by next generation sequencing technologies and sophisticated bioinformatics, also referred to as microbiota metagenomics. The recent success of metagenomics has created a demand to rapidly apply the technology to large case–control cohort studies and to studies of microbiota from various habitats, including habitats relatively poor in microbes. It is therefore of foremost importance to enable a robust and rapid quality assessment of metagenomic data from samples that challenge present technological limits (sample numbers and size). Here we demonstrate that the distribution of overlapping k-mers of metagenome sequence data predicts sequence quality as defined by gene distribution and efficiency of sequence mapping to a reference gene catalogue.
Results
We used serial dilutions of gut microbiota metagenomic datasets to generate well-defined high- to low-quality metagenomes. We also analyzed a collection of 52 microbiota-derived metagenomes. We demonstrate that k-mer distributions of metagenomic sequence data identify sequence contaminations, such as sequences derived from “empty” ligation products. Of note, k-mer distributions were also able to predict the frequency of sequences mapping to a reference gene catalogue, not only for the well-defined serial dilution datasets but also for the 52 human gut microbiota-derived metagenomic datasets.
Conclusions
We propose that k-mer analysis of raw metagenome sequence reads should be implemented as a first quality assessment prior to more extensive bioinformatics analysis, such as sequence filtering and gene mapping. With the rising demand for metagenomic analysis of microbiota, it is crucial to provide tools for rapid and efficient decision making. This will eventually lead to a faster turn-around time, improved analytical quality including sample quality metrics, and a significant cost reduction. Finally, improved quality assessment will have a major impact on the robustness of biological and clinical conclusions drawn from metagenomic studies.
Electronic supplementary material
The online version of this article (doi:10.1186/s12864-015-1406-7) contains supplementary material, which is available to authorized users.
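The core screening idea is compact enough to sketch: low-complexity contaminants (for example adapter dimers or "empty" ligation products) produce far fewer distinct overlapping k-mers than genuine sequence. A toy version with invented reads:

```python
from collections import Counter

def kmer_counts(seq, k):
    """Counts of all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# Two toy "reads": genuine-looking sequence vs a low-complexity contaminant
read_real = "ACGTGCATTGACCTGAATCG"
read_junk = "AAAAAAAAAAAAAAAAAAAA"

for name, read in [("real", read_real), ("junk", read_junk)]:
    counts = kmer_counts(read, k=4)
    print(f"{name}: {len(counts)}/{sum(counts.values())} distinct 4-mers")
```

A skewed k-mer distribution like the "junk" read's is the kind of signal the paper proposes using as a first-pass quality metric before mapping.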
Background
Vitamin D deficiency is more prevalent among SLE patients than in the general population. Over the past decade, many studies across the globe have been carried out to investigate the role of vitamin D in SLE from various clinical angles. Therefore, the aim of this systematic review is to summarise and evaluate the evidence from the published literature, focusing on the clinical significance of vitamin D in SLE.
Methods
The following databases were searched: MEDLINE, Scopus, Web of Knowledge and CINAHL, using the terms “lupus”, “systemic lupus erythematosus”, “SLE” and “vitamin D”. We included only adult human studies published in the English language between 2000 and 2012. The reference lists of included studies were thoroughly reviewed in search of other relevant studies.
Results
A total of 22 studies met the selection criteria. The majority of the studies were observational (95.5%) and cross-sectional (90.9%). Of the 15 studies which looked into the association between vitamin D and SLE disease activity, 10 studies (including the 3 largest studies in this series) revealed a statistically significant inverse relationship. For disease damage, on the other hand, 5 out of 6 studies failed to demonstrate any association with vitamin D levels. Cardiovascular risk factors such as insulin resistance, hypertension and hypercholesterolaemia were related to vitamin D deficiency, according to 3 of the studies.
Conclusion
There is convincing evidence to support the association between vitamin D levels and SLE disease activity. There is a paucity of data on other clinical aspects, which precludes firm conclusions.
Aaron Leong, Kaberi Dasgupta, Sasha Bernatsky, Diane Lacaille, Antonio Avina-Zubieta, Elham Rahme. PLoS ONE 2013, 8(10)
Objectives
Health administrative data are frequently used for diabetes surveillance. We aimed to determine the sensitivity and specificity of a commonly-used diabetes case definition (two physician claims or one hospital discharge abstract record within a two-year period) and their potential effect on prevalence estimation.
Methods
Following Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we searched Medline (from 1950) and Embase (from 1980) databases for validation studies through August 2012 (keywords: “diabetes mellitus”; “administrative databases”; “validation studies”). Reviewers abstracted data with standardized forms and assessed quality using Quality Assessment of Diagnostic Accuracy Studies (QUADAS) criteria. A generalized linear model approach to random-effects bivariate regression meta-analysis was used to pool sensitivity and specificity estimates. We applied correction factors derived from pooled sensitivity and specificity estimates to prevalence estimates from national surveillance reports and projected prevalence estimates over 10 years (to 2018).
Results
The search strategy identified 1,423 abstracts, among which 11 studies were deemed relevant and reviewed; 6 of these reported sensitivity and specificity, allowing pooling in a meta-analysis. Compared to surveys or medical records, sensitivity was 82.3% (95% CI 75.8, 87.4) and specificity was 97.9% (95% CI 96.5, 98.8). Given these pooled estimates, the diabetes case definition overestimated prevalence when true prevalence was ≤10.6% and underestimated it otherwise.
Conclusion
The diabetes case definition examined misses up to one-fifth of diabetes cases and wrongly identifies diabetes in approximately 2% of the population. This may be sufficiently sensitive and specific for surveillance purposes, in particular for monitoring prevalence trends. Applying correction factors to adjust prevalence estimates derived from this definition may help increase the accuracy of estimates.
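The pooled estimates above imply both the 10.6% crossover and a simple correction. A sketch using the Rogan-Gladen estimator (standard algebra for sensitivity/specificity adjustment, not necessarily the exact correction-factor method the authors applied):

```python
# Rogan-Gladen correction using the pooled estimates reported above.
SENS, SPEC = 0.823, 0.979

def observed(true_prev):
    """Apparent prevalence produced by an imperfect case definition."""
    return true_prev * SENS + (1.0 - true_prev) * (1.0 - SPEC)

def corrected(obs_prev):
    """Rogan-Gladen estimate of true prevalence from apparent prevalence."""
    return (obs_prev + SPEC - 1.0) / (SENS + SPEC - 1.0)

# Crossover: below this true prevalence the false positives outweigh the
# missed cases, so the definition over-counts; above it, it under-counts.
crossover = (1.0 - SPEC) / ((1.0 - SENS) + (1.0 - SPEC))
print(f"Crossover prevalence: {crossover:.1%}")   # ~10.6%
print(f"True prevalence behind an observed 7.0%: {corrected(0.070):.1%}")
```

The crossover formula reproduces the 10.6% figure directly from the pooled sensitivity and specificity, and the corrected estimator inverts the observation model exactly.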