Similar Documents
20 similar documents retrieved.
1.

Background

Type 2 diabetes (T2D) is strongly associated with cardiovascular risk and requires medications that improve glycemic control as well as other cardiovascular risk factors. The authors aimed to assess the relative effectiveness of combinations of pioglitazone (Pio), metformin (Met), and any sulfonylurea (SU) in non-insulin-treated T2D patients who were failing previous hypoglycemic therapy.

Methods

Two multicenter, open-label, controlled, 1-year, prospective observational studies evaluated patients with T2D (n = 4585) from routine clinical practice in Spain and Greece under the same protocol. Patients were eligible once they had failed previous therapy and had been prescribed Pio + SU, Pio + Met, or SU + Met (the last serving as the control cohort). Anthropometric measurements, lipid and glycemic profiles, blood pressure, and the proportions of patients at microvascular and macrovascular risk were assessed.

Results

All study treatment combinations produced progressive improvements in lipid, glycemic, and blood pressure parameters at 6 and 12 months. Pio combinations, especially Pio + Met, were associated with increases in HDL-cholesterol and decreases in triglycerides and in the atherogenic index of plasma. The proportion of patients at high risk decreased after 12 months in all study cohorts. Minor weight changes (gain or loss) occurred, and no treatment-related fractures were observed during the study. The safety profile was good and similar among treatments, except for more hypoglycemic episodes in patients receiving SU and for edema in patients using Pio combinations. Serious cardiovascular events were rarely reported.
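The abstract does not give the formula used for the atherogenic index of plasma; it is conventionally computed as the base-10 logarithm of the molar triglyceride-to-HDL-cholesterol ratio. A minimal sketch under that assumed definition:

```python
import math

def atherogenic_index(tg_mmol_l: float, hdl_mmol_l: float) -> float:
    """Atherogenic index of plasma: log10(TG / HDL-C), both in mmol/L.
    Assumed conventional definition; the study does not state its formula."""
    return math.log10(tg_mmol_l / hdl_mmol_l)

# Example values (illustrative, not study data): TG = 1.8, HDL-C = 1.1 mmol/L
print(round(atherogenic_index(1.8, 1.1), 2))  # ~0.21
```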

Conclusions

In patients with T2D failing prior hypoglycemic therapies, Pio combinations with SU or Met (especially Pio + Met) improved blood lipid and glycemic profiles and decreased the proportion of patients at high microvascular or macrovascular risk. The combination of Pio with SU or Met may therefore be recommended as second-line therapy for T2D in routine clinical practice, particularly in patients with dyslipidemia.

2.

Background

Next-generation sequencing (NGS) has yielded an unprecedented amount of data for genetics research. Processing the data from raw sequence reads to variant calls is a daunting task, and doing so manually can significantly delay downstream analysis and increase the possibility of human error. The research community has produced tools to properly prepare sequence data for analysis and has established guidelines on how to apply those tools for the best results. However, existing pipeline programs that automate the process in its entirety are either inaccessible to investigators or web-based and require a certain amount of administrative expertise to set up.

Findings

Advanced Sequence Automated Pipeline (ASAP) was developed to provide a framework for automating the translation of sequencing data into annotated variant calls, minimizing user involvement without the need for dedicated hardware or administrative rights. ASAP works both on computer clusters and on standalone machines, maintains high data integrity, and allows complete control over the configuration of its component programs. It offers an easy-to-use interface for submitting and tracking jobs as well as resuming failed jobs, and it provides tools for quality checking and for dividing jobs into pieces for maximum throughput.
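ASAP's own code is not reproduced here; the following is a minimal sketch of the checkpoint-and-resume pattern the abstract describes, with hypothetical step names and placeholder commands:

```python
import subprocess
from pathlib import Path

# Hypothetical stages; ASAP's actual component programs differ.
STEPS = [
    ("align", ["echo", "aligning reads"]),
    ("sort",  ["echo", "sorting alignments"]),
    ("call",  ["echo", "calling variants"]),
]

def run_pipeline(workdir: str) -> None:
    """Run each step once; '.done' markers let a failed run resume."""
    wd = Path(workdir)
    wd.mkdir(exist_ok=True)
    for name, cmd in STEPS:
        marker = wd / f"{name}.done"
        if marker.exists():
            continue  # completed in a previous run, skip on resume
        subprocess.run(cmd, check=True)  # on failure, no marker is written
        marker.touch()

run_pipeline("asap_run")
```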

Conclusions

ASAP provides an environment for building an automated pipeline for NGS data preprocessing, one that is flexible for both routine use and future development. It is freely available at http://biostat.mc.vanderbilt.edu/ASAP.

3.

Introduction

Administrative claims data have not commonly been used to study the clinical effectiveness of medications for rheumatoid arthritis (RA) because of the lack of a validated algorithm for this outcome. We created and tested a claims-based algorithm to serve as a proxy for the clinical effectiveness of RA medications.

Methods

We linked Veterans Health Administration (VHA) medical and pharmacy claims for RA patients participating in the longitudinal Department of Veterans Affairs (VA) RA registry (VARA). Among individuals initiating treatment with a new biologic agent or nonbiologic disease-modifying antirheumatic drug (DMARD) and with registry follow-up at 1 year, VARA and administrative data were used to create a gold standard for the claims-based effectiveness algorithm. The gold standard outcome was low disease activity (LDA) (Disease Activity Score using 28 joint counts (DAS28) ≤ 3.2) or improvement in DAS28 by > 1.2 units at 12 ± 2 months, with high adherence to therapy. The claims-based effectiveness algorithm incorporated biologic dose escalation or switching, addition of new disease-modifying agents, increase in oral glucocorticoid use and dose, and parenteral glucocorticoid injections.

Results

Among 1,397 patients, we identified 305 eligible biologic or DMARD treatment episodes in 269 unique individuals. The patients were primarily men (94%) with a mean (± SD) age of 62 ± 10 years. At 1 year, 27% of treatment episodes achieved the effectiveness gold standard. The performance characteristics of the effectiveness algorithm were as follows: positive predictive value, 76% (95% confidence interval (95% CI) = 71% to 81%); negative predictive value, 90% (95% CI = 88% to 92%); sensitivity, 72% (95% CI = 67% to 77%); and specificity, 91% (95% CI = 89% to 93%).
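For readers who want to reproduce such performance characteristics, here is a sketch computing the four metrics with normal-approximation confidence intervals from a 2x2 table (the counts are illustrative reconstructions, not the study's raw data, and the paper does not state which CI method it used):

```python
import math

def diagnostic_metrics(tp, fp, fn, tn, z=1.96):
    """Sensitivity, specificity, PPV and NPV with normal-approximation CIs."""
    def est(k, n):
        p = k / n
        hw = z * math.sqrt(p * (1 - p) / n)  # half-width of the CI
        return p, max(0.0, p - hw), min(1.0, p + hw)
    return {
        "sensitivity": est(tp, tp + fn),
        "specificity": est(tn, tn + fp),
        "PPV": est(tp, tp + fp),
        "NPV": est(tn, tn + fn),
    }

# Illustrative counts roughly consistent with 305 treatment episodes:
for name, (p, lo, hi) in diagnostic_metrics(tp=59, fp=19, fn=23, tn=204).items():
    print(f"{name}: {p:.0%} (95% CI {lo:.0%}-{hi:.0%})")
```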

Conclusions

Administrative claims data may be useful in evaluating the effectiveness of medications for RA. Further validation of this effectiveness algorithm will be useful in assessing its generalizability and performance in other populations.

4.

Background

Phylogenies are commonly used to analyse the differences between genes, genomes and species. Patristic distances, calculated from tree branch lengths, describe the amount of genetic change represented by a tree and are commonly compared with other measures of mutation to investigate substitutional processes or the goodness of fit of a tree to the raw data. Until now, no universal tool has been available for calculating patristic distances and correlating them with other genetic distance measures.

Results

PATRISTICv1.0 is a Java program that calculates patristic distances from large trees in a range of file formats and allows graphical and statistical interpretation of distance matrices calculated by other programs.
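PATRISTIC itself is a Java GUI; as an illustration of the underlying computation only (the patristic distance is the sum of branch lengths on the path between two tips), here is a sketch using Biopython rather than the authors' code:

```python
# pip install biopython
from io import StringIO
from Bio import Phylo

# Toy Newick tree with branch lengths (hypothetical data).
tree = Phylo.read(StringIO("((A:0.1,B:0.2):0.05,(C:0.3,D:0.4):0.1);"), "newick")

# Patristic distance between tips A and C: 0.1 + 0.05 + 0.1 + 0.3 = 0.55
print(tree.distance("A", "C"))
```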

Conclusion

The software overcomes some logistic barriers to analysing signals in sequences. In addition to calculating patristic distances, it provides plots for any combination of matrices, calculates commonly used statistics, allows data such as isolation dates to be entered, and reorders matrices with matching species or gene labels. It will be used to analyse rates of mutation and substitutional saturation and the evolution of viruses. It is available at http://biojanus.anu.edu.au/programs/ and requires the Java runtime environment.

5.

Background:

The frequency of polypectomy is an important indicator of quality assurance for population-based colorectal cancer screening programs. Although administrative databases of physician claims provide population-level data on the performance of polypectomy, the accuracy of the procedure codes has not been examined. We determined the level of agreement between physician claims for polypectomy and documentation of the procedure in endoscopy reports.

Methods:

We conducted a retrospective cohort study involving patients aged 50–80 years who underwent colonoscopy at seven study sites in Montréal, Que., between January and March 2007. We obtained data on physician claims for polypectomy from the Régie de l’Assurance Maladie du Québec (RAMQ) database. We evaluated the accuracy of the RAMQ data against information in the endoscopy reports.

Results:

We collected data on 689 patients who underwent colonoscopy during the study period. The sensitivity of physician claims for polypectomy in the administrative database was 84.7% (95% confidence interval [CI] 78.6%–89.4%), the specificity was 99.0% (95% CI 97.5%–99.6%), concordance was 95.1% (95% CI 93.1%–96.5%), and the kappa value was 0.87 (95% CI 0.83–0.91).
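The kappa statistic above can be recovered from the 2x2 agreement table; here is a sketch with illustrative counts chosen to match the reported sensitivity and specificity (the study's exact cell counts are not given in the abstract):

```python
def cohens_kappa(a: int, b: int, c: int, d: int) -> float:
    """Cohen's kappa for a 2x2 table: a = both sources positive,
    d = both negative, b and c = the two kinds of disagreement."""
    n = a + b + c + d
    po = (a + d) / n  # observed agreement
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2  # chance agreement
    return (po - pe) / (1 - pe)

# Illustrative counts consistent with n = 689 colonoscopies:
print(round(cohens_kappa(a=149, b=5, c=27, d=508), 2))  # ~0.87
```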

Interpretation:

Despite providing a reasonably accurate estimate of the frequency of polypectomy, physician claims underestimated the number of procedures performed by more than 15%. Such differences could affect conclusions regarding quality assurance if used to evaluate population-based screening programs for colorectal cancer. Even when a high level of accuracy is anticipated, validating physician claims data from administrative databases is recommended.

Population-based screening programs for colorectal cancer rely heavily on the performance of colonoscopy, either as the initial examination or as the follow-up to a positive screening by virtual colonography, double-contrast barium enema or fecal occult blood testing. Colonoscopy is the only screening examination accepted at 10-year intervals among people at average risk in whom no significant polyps are found. It allows direct visualization of the entire colon and rectum and permits removal of adenomatous polyps, precursors of colorectal cancer. The frequency of polypectomy is an important indicator of quality assurance for colorectal cancer screening programs.

In the province of Quebec, physicians are reimbursed for medical services by the Régie de l'Assurance Maladie du Québec (RAMQ), the government agency responsible for administering the provincial health insurance plan. Physicians receive additional remuneration for performing a polypectomy if they include the procedure code in their claim.

Data from physician claims databases are commonly used in health services research,1–7 even though the data are collected for administrative purposes and physician reimbursement. Procedure codes in physician claims databases are presumed to have a very high level of agreement with data in medical charts.8 A physician making a claim must submit the diagnostic code and, when applicable, the procedure code. Studies that rely on physician claims databases can be divided into those that examine the diagnostic codes entered and those that examine the procedure codes entered. Few studies have attempted to validate procedure codes, and often not as the primary study objective.9–14

We conducted a study to determine the level of agreement between physician claims for polypectomy and documentation of the procedure in endoscopy reports.

6.
7.

Background

Motor-evoked potentials (MEP) and somatosensory-evoked potentials (SSEP) are susceptible to the effects of intraoperative environmental factors.

Methods

Over a 5-year period, 250 patients with adolescent idiopathic scoliosis (AIS) who underwent corrective surgery with intraoperative monitoring (IOM) were retrospectively analyzed for MEP suppression (MEPS).

Results

Our results show that four distinct groups of MEPS were encountered over the study period. None of the 12 affected patients sustained neurological deficits postoperatively. However, comparison of groups 1 and 2 suggests that neither the duration of anesthesia nor the speed of surgical or anesthetic intervention was associated with recovery to a level beyond the criteria for MEPS. For group 3, spontaneous MEPS recovery despite the lack of surgical intervention suggests that anesthetic intervention may play a role in this process. However, spontaneous MEPS recovery was also seen in group 4, suggesting that in certain circumstances neither surgical nor anesthetic intervention was required. In addition, neither the time to the first surgical maneuver nor the time from surgical maneuver to MEPS was related to recovery of MEPS. None of the patients had suppression of SSEPs intraoperatively.

Conclusion

This study suggests that in susceptible individuals, MEPS may occur rarely and unpredictably, independent of surgical or anesthetic intervention. However, our findings favor anesthetic before surgical intervention as a proposed protocol. Early recognition of MEPS is important to prevent false positives in the course of IOM for spinal surgery.

8.
9.

Background

Childhood asthma prevalence is widely measured by parental proxy report of physician-diagnosed asthma in questionnaires. Our objective was to validate this measure in a North American population.

Methods

The 2884 study participants were a subsample of the 5619 school children aged 5 to 9 years from 231 schools participating in the Toronto Child Health Evaluation Questionnaire study in 2006. We assessed agreement between "questionnaire diagnosis" and a previously validated "health claims data diagnosis". Sensitivity, specificity and kappa were calculated for the questionnaire diagnosis using the health claims diagnosis as the reference standard.

Results

Prevalence of asthma was 15.7% by questionnaire and 21.4% by health claims data. Questionnaire diagnosis was insensitive (59.0%) but specific (95.9%) for asthma. When children with asthma-related symptoms were excluded, the sensitivity increased (83.6%), and specificity remained high (93.6%).

Conclusions

Our results show that parental report of asthma by questionnaire has low sensitivity but high specificity as an asthma prevalence measure. In addition, children with "asthma-related symptoms" may represent a large fraction of under-diagnosed asthma, and they should be excluded from the inception cohort for risk factor studies.

10.

Background

While the theory of enzyme kinetics is fundamental to analyzing and simulating biochemical systems, the derivation of rate equations for complex mechanisms of enzyme-catalyzed reactions is cumbersome and error prone. Therefore, a number of algorithms and related computer programs have been developed to assist in such derivations. Yet although a number of algorithms, programs, and software packages are reported in the literature, one or more significant limitations are associated with each of these tools. Furthermore, none is freely available for download and use by the community.

Results

We have implemented an algorithm based on the schematic method of King and Altman (KA), which employs the topological theory of linear graphs for systematic generation of valid reaction patterns, in a GUI-based stand-alone computer program called KAPattern. The underlying algorithm allows for the assumption of steady state, rapid equilibrium binding, and/or irreversibility for individual steps in catalytic mechanisms. The program can automatically generate MathML and MATLAB output files that users can easily incorporate into simulation programs.
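In the KA schematic method, each valid reaction pattern corresponds to a spanning tree of the enzyme-state graph. A brute-force sketch for a small mechanism, independent of the KAPattern implementation:

```python
from itertools import combinations

def spanning_trees(nodes, edges):
    """Enumerate spanning trees by testing every (n-1)-edge subset for
    acyclicity with union-find -- adequate for small mechanism graphs."""
    for subset in combinations(edges, len(nodes) - 1):
        parent = {v: v for v in nodes}
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        ok = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                ok = False  # adding this edge would close a cycle
                break
            parent[ru] = rv
        if ok:
            yield subset

# Three-state cyclic mechanism E <-> ES <-> EP <-> E:
states = ["E", "ES", "EP"]
steps = [("E", "ES"), ("ES", "EP"), ("EP", "E")]
for pattern in spanning_trees(states, steps):
    print(pattern)  # each 2-edge subset is one King-Altman pattern
```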

Conclusion

A computer program, called KAPattern, for generating rate equations for complex enzyme systems is freely available and can be accessed at http://www.biocoda.org.

11.
12.

Background

Gene regulatory networks play an essential role in every process of life. Genome-wide time series data are becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes.

Results

This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. To this end, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delayed relationships between genes, with any time span, from the time series datasets given as input. The proposed algorithm was applied to time series data for twenty yeast genes that are highly relevant to cell-cycle study, and the results were compared against several related approaches. GRNCOP2 outperformed the contrasted methods in terms of the proposed metrics, and the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series datasets was performed; these experiments demonstrated the soundness and scalability of the new method, which inferred highly related, statistically significant gene associations.
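GRNCOP2 itself optimizes gene-profile classifiers combinatorially; as a far simpler illustration of the time-delay idea, the sketch below scans lagged Pearson correlations between two expression time series (numpy, synthetic data):

```python
import numpy as np

def best_lag(x: np.ndarray, y: np.ndarray, max_lag: int):
    """Return the lag at which x best tracks y, by correlating
    x[t] with y[t + lag] for lag = 1..max_lag."""
    scores = {lag: np.corrcoef(x[:-lag], y[lag:])[0, 1]
              for lag in range(1, max_lag + 1)}
    return max(scores, key=lambda k: abs(scores[k])), scores

rng = np.random.default_rng(0)
x = rng.normal(size=50)                        # "regulator" expression profile
y = np.roll(x, 2) + 0.1 * rng.normal(size=50)  # "target" lags x by 2 points
lag, scores = best_lag(x, y, max_lag=5)
print(lag, round(scores[lag], 2))              # expect lag = 2
```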

Conclusions

A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation.

13.

Background

Advances in medical science have enabled many children with chronic diseases to survive to adulthood. The transition of adult patients with childhood-onset chronic diseases from pediatric to adult healthcare systems has received attention in Europe and the United States. We conducted a questionnaire survey among 41 pediatricians at pediatric hospitals and 24 nurses specializing in adolescent care to compare their perceptions of the transition of care from pediatric to adult healthcare services for such patients.

Findings

Three-fourths of the pediatricians and all of the nurses reported that transition programs were necessary. A higher proportion of the nurses recognized the necessity of transition and had already developed such programs. Both pediatricians and nurses reported that a network covering the transition from pediatric to adult healthcare services had not yet been established.

Conclusions

These findings suggest that promoting the importance of transition programs among pediatricians and developing a pediatric-adult healthcare network would contribute to the biopsychosocial well-being of adult patients with childhood-onset chronic disease.

14.

Background and aims

The relative proportions of phosphorus (P) forms present in manure determine the overall availability of manure P to plants; however, the link between the forms of P in manures and manure P availability is unclear. This study compares the bioavailability and P speciation of three manures of different stockpiling durations (less than 1 month, 6 months and 12 months) collected concurrently from a single poultry farm.

Methods

Bioavailability to wheat in a glasshouse trial was measured using an isotopic dilution method, with manure added at an application rate equivalent to 20 kg P ha−1. Phosphorus speciation was measured by 31P nuclear magnetic resonance (NMR) spectroscopic analysis of NaOH-EDTA extracts of the manures.

Results

The addition of all manures significantly increased shoot biomass and P concentration, with the fresh manure having the greatest effect. Addition of the fresh manure resulted in the largest labile P pool, highest manure P uptake and manure P recovery, while the manure stockpiled for 12 months resulted in the lowest manure P uptake and manure P recovery. NMR analysis indicated that there was more monoester organic P, especially phytate, in manure stockpiled for shorter periods, while the proportion of manure P that was orthophosphate increased with stockpiling time.

Conclusions

Together, these results imply that although the proportion of total P in the manures detected as orthophosphate was higher with longer stockpiling, only a fraction of this orthophosphate was plant-available. This suggests that the availability of P from orthophosphate in manures decreases with longer stockpiling time, in much the same way that P from orthophosphate in mineral fertilizer becomes less available in soil over time.

15.

Introduction

Health care utilisation ('claims') databases contain information about millions of patients and are an important source of information for a variety of study types. However, they typically do not contain information about disease severity. The goal of the present study was to develop a health care claims index for rheumatoid arthritis (RA) severity using a previously developed medical records-based index for RA severity (RA medical records-based index of severity [RARBIS]).

Methods

The study population consisted of 120 patients from the Veterans Affairs (VA) Health System. We previously demonstrated the construct validity of the RARBIS and established its convergent validity with the Disease Activity Score (DAS28). Potential claims-based indicators were entered into a linear regression model as independent variables, with the RARBIS as the dependent variable. The claims-based index for RA severity (CIRAS) was created using the coefficients from the models with the highest coefficient of determination (R2) values selected by automated modelling procedures. To compare our claims-based index with our medical records-based index, we examined the correlation between the CIRAS and the RARBIS using Spearman non-parametric tests.
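A minimal numpy sketch of forward selection maximizing R2, the kind of automated procedure described above (toy data; not the authors' actual modelling code):

```python
import numpy as np

def r_squared(X: np.ndarray, y: np.ndarray) -> float:
    """R^2 of an ordinary least-squares fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_select(X: np.ndarray, y: np.ndarray, k: int):
    """Greedily add, k times, the column that most improves R^2."""
    chosen = []
    for _ in range(k):
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        best = max(remaining, key=lambda j: r_squared(X[:, chosen + [j]], y))
        chosen.append(best)
    return chosen, r_squared(X[:, chosen], y)

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 8))                 # 8 candidate claims indicators
y = 2 * X[:, 3] - X[:, 5] + rng.normal(size=120)
print(forward_select(X, y, k=2))              # expect columns 3 and 5
```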

Results

The forward selection models yielded the highest model R2 for both the RARBIS with medications (R2 = 0.31) and the RARBIS without medications (R2 = 0.26). Components of the CIRAS included tests for inflammatory markers, number of chemistry panels and platelet counts ordered, rheumatoid factor, the number of rehabilitation and rheumatology visits, and Felty's syndrome diagnosis. The CIRAS demonstrated moderate correlations with the RARBIS with medication and the RARBIS without medication sub-scales.

Conclusion

We developed the CIRAS, which showed moderate correlations with a previously validated records-based index of severity. The CIRAS may serve as a potentially important tool in adjusting for RA severity in pharmacoepidemiology studies of RA treatment and complications using health care utilisation data.

16.

Background

Mortality prediction models generally require clinical data or are derived from information coded at discharge, limiting adjustment for presenting severity of illness in observational studies using administrative data.

Objectives

To develop and validate a mortality prediction model using administrative data available in the first 2 hospital days.

Research Design

After dividing the dataset into derivation and validation sets, we created a hierarchical generalized linear mortality model that included patient demographics, comorbidities, medications, therapies, and diagnostic tests administered in the first 2 hospital days. We then applied the model to the validation set.

Subjects

Patients aged ≥18 years admitted with pneumonia between July 2007 and June 2010 to 347 hospitals in Premier, Inc.’s Perspective database.

Measures

In-hospital mortality.

Results

The derivation cohort included 200,870 patients and the validation cohort included 50,037. Overall mortality was 7.2%. In the multivariable model, 3 demographic factors, 25 comorbidities, 41 medications, 7 diagnostic tests, and 9 treatments were associated with mortality. The factors most strongly associated with mortality included receipt of vasopressors, non-invasive ventilation, and bicarbonate. The model had a c-statistic of 0.85 in both cohorts. In the validation cohort, deciles of predicted risk ranged from 0.3% to 34.3%, with observed risk over the same deciles ranging from 0.1% to 33.7%.
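The two validation quantities reported above, discrimination and calibration, can be computed as follows; this sketch uses synthetic predictions, not the Premier data:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
p_hat = rng.beta(1, 12, size=5000)   # synthetic predicted mortality risks
died = rng.binomial(1, p_hat)        # outcomes drawn from those risks

# Discrimination: the c-statistic is the area under the ROC curve.
print("c-statistic:", round(roc_auc_score(died, p_hat), 2))

# Calibration: predicted vs observed risk within deciles of predicted risk.
cuts = np.quantile(p_hat, np.linspace(0, 1, 11))
bins = np.digitize(p_hat, cuts[1:-1])  # decile index 0..9
for d in range(10):
    in_bin = bins == d
    print(d + 1, round(p_hat[in_bin].mean(), 3), round(died[in_bin].mean(), 3))
```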

Conclusions

A mortality model based on detailed administrative data available in the first 2 hospital days had good discrimination and calibration. The model compares favorably to clinically based prediction models and may be useful in observational studies when clinical data are not available.

17.
18.

Background

With the development of sequencing technologies, more and more sequence variants are available for investigation. Different classes of variants in the human genome have been identified, including single nucleotide substitutions, insertions and deletions, and large structural variations such as duplications and deletions. Insertion and deletion (indel) variants comprise a major proportion of human genetic variation. However, little is known about their effects on humans, largely because of the lack of both biological data and computational resources.

Results

This paper presents HMMvar, a new method for predicting the functional effects of indels based on hidden Markov model (HMM) profiles, which capture the conservation information in sequences. The results demonstrate that a scoring strategy based on HMM profiles can achieve good performance in identifying deleterious or neutral variants across different data sets, and can predict the protein functional effects of both single and multiple mutations.
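HMMvar's exact scoring is not reproduced here; as a simplified, hedged illustration of profile-based scoring, the sketch below uses a toy position-specific frequency table (a real profile HMM also models insertions and deletions):

```python
import math

# Toy per-position amino-acid frequencies from a hypothetical alignment.
profile = [
    {"A": 0.7, "G": 0.2, "S": 0.1},
    {"L": 0.6, "I": 0.3, "V": 0.1},
    {"K": 0.8, "R": 0.2},
]
BACKGROUND = 0.05  # flat background frequency for any residue

def log_odds(seq: str) -> float:
    """Sum of per-position log-odds of seq against the profile."""
    return sum(math.log(profile[i].get(aa, 1e-4) / BACKGROUND)
               for i, aa in enumerate(seq))

wild_type, mutant = "ALK", "AVK"  # hypothetical L->V substitution
delta = log_odds(wild_type) - log_odds(mutant)
print(round(delta, 2))  # a large drop in score suggests a deleterious change
```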

Conclusions

This paper proposed a quantitative prediction method, HMMvar, to predict the effect of genetic variation using hidden Markov models. The HMM-based pipeline program implementing HMMvar is freely available at https://bioinformatics.cs.vt.edu/zhanglab/hmm.

19.

Background

Many common diseases arise from an interaction between environmental and genetic factors. Our knowledge regarding environment-gene interactions is growing, but frameworks to build an association between gene-environment interactions and disease using preexisting, publicly available data have been lacking. Integrating freely available environment-gene interaction and disease phenotype data would allow hypothesis generation for potential environmental associations with disease.

Methods

We integrated publicly available disease-specific gene expression microarray data and curated chemical-gene interaction data to systematically predict environmental chemicals associated with disease. We derived chemical-gene signatures for 1,338 environmental chemicals from the Comparative Toxicogenomics Database (CTD). We associated these chemical-gene signatures with differentially expressed genes from datasets found in the Gene Expression Omnibus (GEO) through an enrichment test.
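One common way to implement such an enrichment test is Fisher's exact test on the overlap between a chemical's gene signature and a disease's differentially expressed genes (DEGs); the paper's exact statistic may differ, so treat this as an assumption:

```python
from scipy.stats import fisher_exact

def enrichment(signature: set, degs: set, universe: set):
    """Fisher's exact test for signature/DEG overlap within a gene universe."""
    table = [
        [len(signature & degs), len(signature - degs)],
        [len(degs - signature), len(universe - signature - degs)],
    ]
    return fisher_exact(table, alternative="greater")

# Toy gene sets with hypothetical identifiers:
universe = {f"g{i}" for i in range(1000)}
signature = {f"g{i}" for i in range(60)}                 # chemical signature
degs = {f"g{i}" for i in range(30)} | {f"g{i}" for i in range(500, 530)}
odds, p = enrichment(signature, degs, universe)
print(f"odds ratio = {odds:.1f}, p = {p:.2e}")
```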

Results

We were able to verify our analytic method by accurately identifying chemicals applied to samples and cell lines. Furthermore, we were able to predict known and novel environmental associations with prostate, lung, and breast cancers, such as estradiol and bisphenol A.

Conclusions

We have developed a scalable statistical method to identify possible environmental associations with disease using publicly available data, and we have validated some of the associations in the literature.

20.

Background

Coalescent simulations have proven very useful in many population genetics studies. In order to arrive at meaningful conclusions, it is important that these simulations resemble the process of molecular evolution as closely as possible. To date, no single coalescent program has been able to simulate codon sequences sampled from populations with recombination, migration and growth.

Results

We introduce a new coalescent program, called Recodon, which is able to simulate samples of coding DNA sequences under complex scenarios in which several evolutionary forces can interact simultaneously (namely, recombination, migration and demography). The basic codon model implemented is an extension of the general time-reversible model of nucleotide substitution with a proportion of invariable sites and among-site rate variation. In addition, the program implements non-reversible processes and mixtures of different codon models.
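Recodon implements much more (recombination, migration, growth, codon models), but the core coalescent step, drawing exponential waiting times between coalescent events, reduces to the following sketch for a neutral, constant-size population:

```python
import numpy as np

def coalescent_waits(sample_size: int, pop_size: float, rng) -> list:
    """Waiting times between coalescent events for a neutral, constant-size
    diploid population: while k lineages remain, the wait is
    Exp(rate = k*(k-1) / (4N)), in generations."""
    waits = []
    for k in range(sample_size, 1, -1):
        rate = k * (k - 1) / (4 * pop_size)
        waits.append(rng.exponential(1 / rate))
    return waits

waits = coalescent_waits(10, 10_000, np.random.default_rng(3))
print(f"TMRCA ~ {sum(waits):,.0f} generations")  # expectation ~ 4N(1 - 1/n)
```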

Conclusion

Recodon is a flexible tool for the simulation of coding DNA sequences under realistic evolutionary models. These simulations can be used to build parameter distributions for testing evolutionary hypotheses using experimental data. Recodon is written in C, can run in parallel, and is freely available from http://darwin.uvigo.es/.
