Similar documents
20 similar documents found (search time: 31 ms)
1.

Background

Recent studies have questioned our previous understanding of the effect of nitrous oxide on muscle relaxants, since nitrous oxide has been shown to potentiate the action of bolus doses of mivacurium, rocuronium and vecuronium. This study aimed to investigate the possible effect of nitrous oxide on the infusion requirements of cisatracurium.

Methods

Seventy ASA physical status I–III patients aged 18–75 years were enrolled in this randomized trial. The patients were undergoing elective surgery requiring general anesthesia with a duration of at least 90 minutes. Patients were randomized to receive propofol and remifentanil by target-controlled infusion in combination with either a mixture of oxygen and nitrous oxide (Nitrous oxide/TIVA group) or oxygen in air (Air/TIVA group). A 0.1 mg/kg initial bolus of cisatracurium was administered before tracheal intubation, followed by a closed-loop, computer-controlled infusion of cisatracurium to produce and maintain a 90% neuromuscular block. Cumulative dose requirements of cisatracurium during the 90-min study period after bolus administration were measured, and the asymptotic steady-state rate of infusion needed to maintain a constant 90% block was determined by nonlinear curve fitting of the cumulative dose-requirement data.
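The steady-state rate described above is the asymptotic slope of the cumulative-dose curve. A minimal sketch of that idea, using a hypothetical dose model and a simple tail-slope fit in place of the paper's full nonlinear regression (all numbers here are illustrative, not the study's data):

```python
import numpy as np

def steady_state_rate(t, cumulative_dose, tail_fraction=0.5):
    """Estimate the asymptotic steady-state infusion rate as the slope of
    the final portion of the cumulative-dose curve, once the initial
    transient has died out. A simplified stand-in for nonlinear curve
    fitting of the full cumulative-dose model."""
    n = len(t)
    start = int(n * (1.0 - tail_fraction))
    slope, _intercept = np.polyfit(t[start:], cumulative_dose[start:], 1)
    return slope

# Synthetic cumulative dose: approaches a line of slope r_ss after a transient.
t = np.linspace(0, 90, 181)          # minutes
r_ss = 0.07 / 60                     # mg/kg per minute (hypothetical)
transient = 0.02 * (1 - np.exp(-t / 10.0))
dose = r_ss * t + transient
print(round(steady_state_rate(t, dose) * 60, 3))  # ≈ 0.07 mg/kg/h
```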

Results

Controller performance (i.e., the ability of the controller to maintain the neuromuscular block constant at the setpoint) and patient characteristics were similar in both groups. The administration of nitrous oxide did not affect cisatracurium infusion requirements. The mean steady-state rates of infusion were 0.072 ± 0.018 and 0.066 ± 0.017 mg·kg⁻¹·h⁻¹ in the Air/TIVA and Nitrous oxide/TIVA groups, respectively.

Conclusions

Nitrous oxide does not affect the infusion requirements of cisatracurium.

Trial registration

ClinicalTrials.gov NCT01152905; European Clinical Trials Database at http://eudract.emea.eu.int/2006-006037-41.

2.

Background

Gene regulatory networks have an essential role in every process of life. In this regard, the amount of genome-wide time series data is becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes.

Results

This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. To this end, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delay relationships, with arbitrary spans of time between genes, from the various time series datasets given as input. The proposed algorithm was applied to time series data for twenty yeast genes that are highly relevant to cell-cycle study, and the results were compared against several related approaches. The outcomes show that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series datasets was performed. In this case, the experiments demonstrated the soundness and scalability of the new method, which inferred highly related, statistically significant gene associations.
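The core task here is detecting time-delayed relationships between gene expression series. As a toy illustration of that idea only (GRNCOP2 itself uses classifier-based combinatorial optimization, not correlation), one can scan candidate lags and score each one; all names and data below are hypothetical:

```python
import numpy as np

def best_lag(x, y, max_lag):
    """Return the lag (in time steps) at which regulator series x best
    correlates with target series y, i.e. y[t] ~ x[t - lag]. A model-free
    stand-in for the classifier-based scoring used by GRNCOP2."""
    scores = {}
    for lag in range(0, max_lag + 1):
        xs, ys = (x[:-lag], y[lag:]) if lag else (x, y)
        scores[lag] = abs(np.corrcoef(xs, ys)[0, 1])
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
x = rng.normal(size=200)                        # simulated regulator profile
y = np.roll(x, 3) + 0.1 * rng.normal(size=200)  # target follows regulator by 3 steps
print(best_lag(x, y, max_lag=6))
```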

Conclusions

A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results have demonstrated that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time-trends of gene regulation.

3.

Background

Systolic blood flow has been simulated in the abdominal aorta and the superior mesenteric artery. The simulations were carried out using two different computational hemodynamic methods: the finite element method to solve the Navier-Stokes equations, and the lattice Boltzmann method.
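The lattice Boltzmann method evolves particle distribution functions through alternating collision and streaming steps. A minimal one-dimensional D1Q3 sketch of that stream-and-collide cycle (a toy diffusion problem, not the paper's three-dimensional blood-flow solver; all parameters are illustrative):

```python
import numpy as np

# Minimal D1Q3 lattice Boltzmann scheme for 1-D diffusion: three discrete
# velocities (rest, +1, -1) with BGK single-relaxation-time collisions.
w = np.array([2/3, 1/6, 1/6])      # lattice weights
nx, tau, steps = 64, 1.0, 200

rho = np.ones(nx)
rho[nx // 2] += 1.0                # density pulse in the middle
f = w[:, None] * rho               # initialise populations at equilibrium

for _ in range(steps):
    rho = f.sum(axis=0)
    feq = w[:, None] * rho         # equilibrium distributions
    f += (feq - f) / tau           # BGK collision step
    f[1] = np.roll(f[1], 1)        # stream the right-moving population
    f[2] = np.roll(f[2], -1)       # stream the left-moving population

rho = f.sum(axis=0)
print(round(rho.sum(), 6))         # total mass is conserved
```

The pulse spreads diffusively while the total density stays constant, which is the conservation property that makes the scheme attractive for hemodynamics.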

Results

We have validated the lattice Boltzmann method for systolic flows by comparing the velocity and pressure profiles of simulated blood flow between the two methods. We have also analyzed flow-specific characteristics such as the formation of a vortex at curvatures and traces of flow.

Conclusion

The lattice Boltzmann method is as accurate as a Navier-Stokes solver for computing complex blood flows. As such, it is a good alternative for computational hemodynamics, particularly in situations where coupling to other models is required.

4.

Background

Noninvasive recording of movements caused by the heartbeat and the blood circulation is known as ballistocardiography. Several studies have shown the capability of a force plate to detect cardiac activity in the human body. The aim of this paper is to present a new method based on differential geometry of curves to handle multivariate time series obtained by ballistocardiographic force plate measurements.
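The differential-geometry approach treats the multivariate force-plate recording as a curve in d-dimensional space and studies quantities such as its curvature. A small sketch of computing discrete curvature with the Gram-determinant form of κ = |r′ × r″| / |r′|³ (the circle data below are illustrative, not ballistocardiographic recordings):

```python
import numpy as np

def curvature(points):
    """Discrete curvature of a curve sampled as an (n, d) array, using
    kappa = |r' x r''| / |r'|^3 generalised to d dimensions via the
    Gram determinant sqrt(|r'|^2 |r''|^2 - (r'.r'')^2)."""
    d1 = np.gradient(points, axis=0)          # first derivative estimate
    d2 = np.gradient(d1, axis=0)              # second derivative estimate
    num = np.sqrt((d1**2).sum(1) * (d2**2).sum(1) - ((d1 * d2).sum(1))**2)
    return num / ((d1**2).sum(1))**1.5

t = np.linspace(0, 2 * np.pi, 400)
circle = np.stack([2 * np.cos(t), 2 * np.sin(t)], axis=1)  # radius-2 circle
k = curvature(circle)
print(round(float(np.median(k)), 3))  # ≈ 0.5, i.e. 1/radius
```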

Results

We show that the recoils of the body caused by cardiac motion and blood circulation provide a noninvasive method of displaying the motions of the heart muscle and the propagation of the pulse wave along the aorta and its branches. The results are compared with data obtained invasively during cardiac catheterization. We show that the described noninvasive method is able to determine the moment of a particular heart movement or the time when the pulse wave reaches a certain morphological structure.

Conclusions

Monitoring of heart movements and pulse wave propagation may be used, for example, to estimate aortic pulse wave velocity, a widely accepted index of aortic stiffness used to predict the risk of heart disease in individuals. Further analysis of the method is, however, needed to assess its possible clinical applications.

5.

Background

Strain Rate Imaging shows the filling phases of the left ventricle to consist of a wave of myocardial stretching, propagating from base to apex. The propagation velocity of the strain rate wave is reduced in delayed relaxation. This study examined the relation between the propagation velocity of strain rate in the myocardium and the propagation velocity of flow during early filling.

Methods

Twelve normal subjects and 13 patients with treated hypertension and normal systolic function were studied. Patients and controls differed significantly in early diastolic mitral flow measurements, peak early diastolic tissue velocity and peak early diastolic strain rate, showing delayed relaxation in the patient group. There were no significant differences in EF or diastolic diameter.

Results

Strain rate propagation velocity was reduced in the patient group, while flow propagation velocity was increased. Strain rate propagation velocity correlated negatively with the deceleration time of the mitral flow E-wave (R = -0.57) and with flow propagation velocity (R = -0.51), and the ratio of peak mitral flow velocity to strain rate propagation velocity correlated positively with flow propagation velocity (R = 0.67).

Conclusion

The present study shows strain rate propagation to be a measure of filling time, but flow propagation to be a function of both flow velocity and strain rate propagation. Thus, flow propagation is not a simple index of diastolic function in delayed relaxation.

6.

Background

Although Monte Carlo simulations of light propagation in fully segmented, three-dimensional MRI-based anatomical models of the human head have been reported in many articles, to our knowledge there is no patient-oriented simulation for individualized calibration of NIRS measurements. We therefore offer an approach to brain modeling, based on segmentation of in vivo three-dimensional T1-weighted MRI images, to investigate individualized calibration for NIRS measurements using Monte Carlo simulation.

Methods

In this study, an individualized brain was modeled as a five-layer structure based on an in vivo 3D MRI image. The behavior of photon migration was studied for this individualized brain model using a three-dimensional time-resolved Monte Carlo algorithm. During the Monte Carlo iterations, all photon paths were traced for various source-detector separations to characterize the brain structure and provide helpful information for the individualized design of NIRS systems.
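The Monte Carlo photon-migration idea can be sketched in one dimension: photons take exponentially distributed steps, lose weight to absorption along each step, and scatter into new directions. This toy model (a homogeneous slab with made-up optical coefficients, not the paper's five-layer 3-D head model) shows the mechanics only:

```python
import math
import random

def simulate_photon(mu_a=0.1, mu_s=10.0, depth_limit=5.0):
    """Trace one photon through a 1-D homogeneous slab: exponentially
    distributed step lengths (mean free path 1/mu_s), Beer-Lambert weight
    loss from absorption along each step, and isotropic direction flips
    at scattering events. Illustrative coefficients only."""
    z, direction, weight = 0.0, 1, 1.0
    while 0.0 <= z <= depth_limit and weight > 1e-3:
        step = random.expovariate(mu_s)       # distance to next scattering event
        z += direction * step
        weight *= math.exp(-mu_a * step)      # absorption along the step
        direction = random.choice((-1, 1))    # isotropic redirection (1-D)
    return z, weight

random.seed(42)
n = 2000
escaped = sum(1 for _ in range(n) if simulate_photon()[0] < 0.0)
print(f"fraction re-emitted at the surface: {escaped / n:.2f}")
```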

Results

Our results indicate that patient-oriented simulation can guide the optimal choice of source-detector separation, which in this case was within 3.3 cm for the individualized design. Significant distortions were observed around the cerebral cortex folding. The spatial sensitivity profile penetrated deeper into the brain in the case of expanded CSF. This finding suggests that the optical method may provide not only functional signals from brain activation but also structural information on brain atrophy with an expanded CSF layer. The proposed modeling method also supports multi-wavelength NIRS simulation to approximate practical NIRS measurement.

Conclusions

In this study, the three-dimensional time-resolved brain modeling method approximates the realistic human brain and provides useful information for NIRS system design and calibration in individualized cases with prior MRI data.

7.

Background

Endothelial function in hypercholesterolemic rabbits is usually evaluated ex vivo on isolated aortic rings. In vivo evaluation requires invasive imaging procedures that cannot be repeated serially.

Aim

We evaluated a non-invasive ultrasound technique to assess early endothelial function in rabbits and compared the data with ex vivo measurements.

Methods

Twenty-four rabbits (fed with a cholesterol diet (0.5%) for 2 to 8 weeks) were given progressive infusions of acetylcholine (0.05–0.5 μg/kg/min) and their endothelial function was assessed in vivo by transcutaneous vascular ultrasound of the abdominal aorta. Ex vivo endothelial function was evaluated on isolated aortic rings and compared to in vivo data.

Results

Significant endothelial dysfunction was demonstrated in hypercholesterolemic animals as early as 2 weeks after beginning the cholesterol diet (aortic cross-sectional area variation: -2.9% vs. +4% for controls, p < 0.05). Unexpectedly, the response to acetylcholine at 8 weeks was more variable: endothelial function improved in five rabbits, while two rabbits regained normal endothelial function. These data correlated well with ex vivo results.

Conclusion

Endothelial function can be evaluated non-invasively in vivo by transcutaneous vascular ultrasound of the abdominal aorta in the rabbit and results correlate well with ex vivo data.

8.

Background

The estimation of individual ancestry from genetic data has become essential to applied population genetics and genetic epidemiology. Software programs for calculating ancestry estimates have become essential tools in the geneticist's analytic arsenal.

Results

Here we describe four enhancements to ADMIXTURE, a high-performance tool for estimating individual ancestries and population allele frequencies from SNP (single nucleotide polymorphism) data. First, ADMIXTURE can be used to estimate the number of underlying populations through cross-validation. Second, individuals of known ancestry can be exploited in supervised learning to yield more precise ancestry estimates. Third, by penalizing small admixture coefficients for each individual, one can encourage model parsimony, often yielding more interpretable results for small datasets or datasets with large numbers of ancestral populations. Finally, by exploiting multiple processors, large datasets can be analyzed even more rapidly.
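The supervised-analysis setting above (individuals of known ancestry sharpening the estimates) rests on the classic admixture likelihood: each SNP genotype is binomial in the individual's mixture of population allele frequencies. A sketch of the textbook EM update for one individual's ancestry proportions, under that model; this is not ADMIXTURE's optimized block-relaxation algorithm, and all data below are simulated:

```python
import numpy as np

def admixture_em(g, F, iters=200):
    """EM estimate of one individual's ancestry proportions q from genotypes
    g (reference-allele counts, 0/1/2, length M) and known population allele
    frequencies F (K x M). Classic FRAPPE-style EM updates."""
    K, M = F.shape
    q = np.full(K, 1.0 / K)
    for _ in range(iters):
        p = q @ F                              # expected allele freq per SNP
        a = (q[:, None] * F) / p               # responsibilities, reference allele
        b = (q[:, None] * (1 - F)) / (1 - p)   # responsibilities, alternate allele
        q = (a @ g + b @ (2 - g)) / (2 * M)    # re-estimate mixing proportions
    return q

rng = np.random.default_rng(1)
F = np.vstack([rng.uniform(0.05, 0.3, 500),   # population 1 allele frequencies
               rng.uniform(0.7, 0.95, 500)])  # population 2 allele frequencies
q_true = np.array([0.3, 0.7])
g = rng.binomial(2, q_true @ F)               # simulated genotypes
q_hat = admixture_em(g.astype(float), F)
print(q_hat.round(2))
```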

Conclusions

The enhancements we have described make ADMIXTURE a more accurate, efficient, and versatile tool for ancestry estimation.

9.

Background

Nephropathies are among the most common diseases in dogs. Regular examination of kidney function plays an important role in establishing an adequate treatment scheme. The determination of the glomerular filtration rate (GFR) is regarded as the gold standard for assessing kidney status. Most tests have the disadvantage that only the combined glomerular filtration rate of both kidneys can be assessed, not the single-kidney glomerular filtration rate. Imaging techniques such as dynamic contrast-enhanced magnetic resonance imaging have the potential to evaluate the single-kidney GFR. Studies in human medicine describe the determination of the single-kidney GFR using this technique; to our knowledge, there are no such studies for dogs.

Results

An exponential fit was found to describe the functional relationship between signal intensity and contrast medium concentration. The changes in contrast medium concentration during bolus propagation were calculated. The extreme values of contrast medium concentration in the kidneys were reached at nearly the same time in every individual dog (first maximum in the aorta at 8.5 s; first maximum in both kidneys after about 14.5 s; maximum concentrations varied between 17 and 125 µmol/mL in the aorta and between 4 and 15 µmol/mL in the kidneys). The glomerular filtration rate was calculated from the changes in contrast medium concentration using a modified Rutland-Patlak plot technique. The GFR was 12.7 ± 2.9 mL/min/m² BS for the left kidney and 12.0 ± 2.2 mL/min/m² BS for the right kidney. The coefficients of determination of the regression lines averaged 0.91 ± 0.08.
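The Rutland-Patlak technique linearizes irreversible tracer uptake: plotting C_kidney/C_aorta against ∫C_aorta dt / C_aorta gives a line whose slope is proportional to the filtration constant. A sketch with synthetic curves built to satisfy that model exactly (the input function, K and V0 values are invented for illustration):

```python
import numpy as np

# Rutland-Patlak plot sketch: for irreversible uptake,
#   C_kid(t) / C_ao(t) = K * Int(C_ao) / C_ao(t) + V0,
# so the slope K of the plot is proportional to the single-kidney GFR.
t = np.linspace(1, 60, 120)                      # seconds
c_ao = 100 * np.exp(-t / 20)                     # toy aortic input curve
K_true, v0 = 0.05, 0.2
integral = np.cumsum(c_ao) * (t[1] - t[0])       # running integral of the input
c_kid = K_true * integral + v0 * c_ao            # kidney curve obeying the model

y = c_kid / c_ao                                 # Patlak ordinate
x = integral / c_ao                              # Patlak abscissa ("stretched time")
K_est, v0_est = np.polyfit(x, y, 1)              # regression line of the plot
print(round(K_est, 3), round(v0_est, 2))         # recovers 0.05 and 0.2
```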

Conclusions

The propagation of the contrast medium bolus could be depicted well. The contrast medium proceeded in a similar manner in every individual dog. Additionally, the evaluation of single-kidney function in individual dogs is possible with this method. A standardized examination procedure would be recommended in order to minimize influencing parameters.

10.
Xia Fei, Dou Yong, Lei Guoqing, Tan Yusong. BMC Bioinformatics, 2011, 12(1):1-9

Background

Orthology analysis is an important part of data analysis in many areas of bioinformatics such as comparative genomics and molecular phylogenetics. The ever-increasing flood of sequence data, and hence the rapidly increasing number of genomes that can be compared simultaneously, calls for efficient software tools, as brute-force approaches with quadratic memory requirements become infeasible in practice. The rapid pace at which new data become available, furthermore, makes it desirable to compute genome-wide orthology relations for a given dataset rather than relying on relations listed in databases.

Results

The program Proteinortho described here is a stand-alone tool that is geared towards large datasets and makes use of distributed computing techniques when run on multi-core hardware. It implements an extended version of the reciprocal best alignment heuristic. We apply Proteinortho to compute orthologous proteins in the complete set of all 717 eubacterial genomes available at NCBI at the beginning of 2009. We identified thirty proteins present in 99% of all bacterial proteomes.
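The reciprocal best alignment heuristic calls two proteins orthologs when each is the other's best hit in a cross-genome search. A minimal sketch of that criterion over toy score tables (Proteinortho extends this with alignment thresholds and multi-genome graph clustering; the protein names and scores here are invented):

```python
def reciprocal_best_hits(hits_ab, hits_ba):
    """Return ortholog pairs (a, b) where a's best hit in genome B is b
    AND b's best hit in genome A is a. hits_ab maps each protein of A to
    a dict of alignment scores against proteins of B, and vice versa."""
    best_ab = {a: max(scores, key=scores.get) for a, scores in hits_ab.items()}
    best_ba = {b: max(scores, key=scores.get) for b, scores in hits_ba.items()}
    return {(a, b) for a, b in best_ab.items() if best_ba.get(b) == a}

# Toy score tables (e.g. bit scores from an all-vs-all search).
hits_ab = {"a1": {"b1": 90, "b2": 30}, "a2": {"b2": 80}}
hits_ba = {"b1": {"a1": 88, "a2": 20}, "b2": {"a1": 35, "a2": 85}}
print(sorted(reciprocal_best_hits(hits_ab, hits_ba)))
# [('a1', 'b1'), ('a2', 'b2')]
```

Because only the best hit per protein is retained, memory grows roughly linearly in the number of proteins rather than quadratically in all pairwise scores, which is the property the abstract emphasizes.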

Conclusions

Proteinortho significantly reduces the required amount of memory for orthology analysis compared to existing tools, allowing such computations to be performed on off-the-shelf hardware.

11.

Background

Numerous studies of heartbeat classification algorithms have been conducted over the past several decades. Because biosignals vary widely among individuals, many of these algorithms aim for robust performance. Various methods have been proposed to reduce the differences arising from personal characteristics, but these can amplify the differences caused by arrhythmia.

Methods

In this paper, an arrhythmia classification algorithm using a dedicated wavelet adapted to individual subjects is proposed. We reduced performance variation by using wavelets dedicated to the ECG morphology of each subject. The proposed algorithm utilizes morphological filtering and a continuous wavelet transform with the dedicated wavelet. Principal component analysis and linear discriminant analysis were utilized to compress the morphological data transformed by the dedicated wavelets. An extreme learning machine was used as the classifier in the proposed algorithm.
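A continuous wavelet transform at one scale is a convolution of the signal with a scaled wavelet. A generic sketch using the Ricker (Mexican-hat) wavelet on an idealised spike (the paper's subject-specific dedicated wavelets are not public, so this only illustrates the transform itself):

```python
import numpy as np

def ricker(width, n):
    """Mexican-hat (Ricker) wavelet sampled at n points over +/- 4 widths."""
    t = np.linspace(-4 * width, 4 * width, n)
    return (1 - (t / width)**2) * np.exp(-t**2 / (2 * width**2))

def cwt_row(signal, width):
    """One scale of a continuous wavelet transform, computed as a
    same-length convolution of the signal with the scaled wavelet."""
    w = ricker(width, min(10 * width + 1, len(signal)))  # odd length, centred peak
    return np.convolve(signal, w, mode="same")

sig = np.zeros(300)
sig[150] = 1.0                     # idealised QRS-like spike
response = cwt_row(sig, width=8)
print(int(np.argmax(response)))    # peak response at the spike: 150
```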

Results

A performance evaluation was conducted with the MIT-BIH arrhythmia database. The results showed a high sensitivity of 97.51%, specificity of 85.07%, accuracy of 97.94%, and a positive predictive value of 97.26%.

Conclusions

The proposed algorithm achieves better accuracy than other state-of-the-art algorithms, with no intrasubject overlap between the training and evaluation datasets, and it significantly reduces the amount of intervention needed from physicians.

12.

Key message

Simple sequence repeat motifs were mined from the genome and EST sequences of Morus notabilis and archived in MulSatDB. Bioinformatics tools were integrated with the database for the analysis of genomic datasets.

Abstract

Mulberry is a crop of economic importance in sericulture, which shapes the lives of millions of rural people in various Eurasian and Latin American countries. Limited availability of genomic resources has constrained molecular breeding efforts in mulberry, a poorly studied crop. Microsatellites, or simple sequence repeats (SSRs), have revolutionized plant breeding and are used in linkage mapping, association studies, and diversity and parentage analysis. The recent availability of a mulberry whole-genome assembly provided an opportunity for the development of mulberry-specific DNA markers. In this study, we mined a total of 217,312 microsatellites from whole-genome sequences and 961 microsatellites from EST sequences of Morus notabilis. Mono-repeats were predominant in both the whole-genome and EST sequences. The SSR-containing EST sequences were functionally annotated, and the SSRs mined from the whole genome were mapped onto the chromosomes of the phylogenetically related species Fragaria vesca, to aid the selection of markers based on function and location. All the mined markers were archived in the mulberry microsatellite database (MulSatDB), and markers can be retrieved based on different criteria such as marker location, repeat kind, and motif type and size. The Primer3plus and CMap tools are integrated with the database to design primers for PCR amplification and to visualize markers on F. vesca chromosomes, respectively. A BLAST tool is also integrated to collate new markers with the database. MulSatDB is the first comprehensive destination for mulberry researchers to browse SSR markers, design primers, and locate markers on strawberry chromosomes. MulSatDB is freely accessible at http://btismysore.in/mulsatdb.

13.

Background

Bioinformatics applications are now routinely used to analyze large amounts of data. Application development often requires many cycles of optimization, compiling, and testing. Repeatedly loading large datasets can significantly slow down the development process. We have incorporated HotSwap functionality into the protein workbench STRAP, allowing developers to create plugins using the Java HotSwap technique.

Results

Users can load multiple protein sequences or structures into the main STRAP user interface, and simultaneously develop plugins using an editor of their choice such as Emacs. Saving changes to the Java file causes STRAP to recompile the plugin and automatically update its user interface without requiring recompilation of STRAP or reloading of protein data. This article presents a tutorial on how to develop HotSwap plugins. STRAP is available at http://strapjava.de and http://www.charite.de/bioinf/strap.

Conclusion

HotSwap is a useful and time-saving technique for bioinformatics developers. HotSwap can be used to efficiently develop bioinformatics applications that require loading large amounts of data into memory.

14.

Background

The recent emergence of high-throughput automated image acquisition technologies has forever changed how cell biologists collect and analyze data. Historically, the interpretation of cellular phenotypes in different experimental conditions has depended upon the expert opinions of well-trained biologists. Such qualitative analysis is particularly effective in detecting subtle, but important, deviations in phenotypes. However, while the rapid and continuing development of automated microscope-based technologies now facilitates the acquisition of trillions of cells in thousands of diverse experimental conditions, such as in the context of RNA interference (RNAi) or small-molecule screens, the massive size of these datasets precludes human analysis. Thus, the development of automated methods that aim to identify novel and biologically relevant phenotypes online is one of the major challenges in high-throughput image-based screening. Ideally, phenotype discovery methods should be designed to utilize prior/existing information and to tackle three challenging tasks: restoring pre-defined, biologically meaningful phenotypes; differentiating novel phenotypes from known ones; and distinguishing novel phenotypes from one another. Arbitrarily extracted information causes biased analysis, while combining the complete existing datasets with each new image is intractable in high-throughput screens.

Results

Here we present the design and implementation of a novel and robust online phenotype discovery method with broad applicability that can be used in diverse experimental contexts, especially high-throughput RNAi screens. The method features phenotype modelling and iterative cluster merging using improved gap statistics. A Gaussian mixture model (GMM) is employed to estimate the distribution of each existing phenotype and is then used as the reference distribution in the gap statistics. The method is broadly applicable to many types of image-based datasets derived from a wide spectrum of experimental conditions and is suitable for adaptively processing new images that are continuously added to existing datasets. Validations were carried out on different datasets, including a published RNAi screen using Drosophila embryos [Additional files 1, 2], a dataset for cell-cycle phase identification using HeLa cells [Additional files 1, 3, 4], and a synthetic dataset using polygons; our method tackled the three aforementioned tasks effectively, with an accuracy range of 85%–90%. When our method was applied in the context of a Drosophila genome-scale, RNAi image-based screen of cultured cells aimed at identifying the contribution of individual genes to the regulation of cell shape, it efficiently discovered meaningful new phenotypes and provided novel biological insight. We also propose a two-step procedure to modify a novelty detection method based on a one-class SVM so that it can be used for online phenotype discovery. Under different conditions, we compared the SVM-based method with our method using various datasets; our method consistently outperformed the SVM-based method in at least two of the three tasks by 2% to 5%. These results demonstrate that our method can better identify novel phenotypes in image-based datasets from a wide range of conditions and organisms.
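The GMM-based modelling step implies a simple novelty test: a new sample whose likelihood under the mixture of known phenotypes is low is a candidate novel phenotype. A one-dimensional sketch of that idea (the mixture parameters and threshold below are invented; the paper works with multivariate image features and gap statistics on top of this):

```python
import numpy as np

def gmm_loglik(x, weights, means, stds):
    """Log-likelihood of a scalar observation x under a 1-D Gaussian
    mixture model representing the known phenotype clusters."""
    dens = weights * np.exp(-0.5 * ((x - means) / stds)**2) \
           / (stds * np.sqrt(2 * np.pi))
    return float(np.log(dens.sum()))

weights = np.array([0.6, 0.4])     # two existing phenotype clusters (toy values)
means = np.array([0.0, 5.0])
stds = np.array([1.0, 1.0])
threshold = -8.0                   # hypothetical novelty cut-off

for x in (0.3, 5.2, 20.0):
    novel = gmm_loglik(x, weights, means, stds) < threshold
    print(x, "novel" if novel else "known")
```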

Conclusion

We demonstrate that our method can detect various novel phenotypes effectively in complex datasets. The experimental results also confirm that our method performs consistently under different orders of image input, variations in starting conditions (including the number and composition of existing phenotypes), and datasets from different screens. Based on these findings, the proposed method is suitable for online phenotype discovery in diverse high-throughput image-based genetic and chemical screens.

15.
16.

Background

The digitization of biodiversity data is leading to the widespread application of taxon names that are superfluous, ambiguous or incorrect, resulting in mismatched records and inflated species numbers. The ultimate consequences of misspelled names and bad taxonomy are erroneous scientific conclusions and faulty policy decisions. The lack of tools for correcting this ‘names problem’ has become a fundamental obstacle to integrating disparate data sources and advancing the progress of biodiversity science.

Results

The TNRS, or Taxonomic Name Resolution Service, is an online application for automated and user-supervised standardization of plant scientific names. The TNRS builds upon and extends existing open-source applications for name parsing and fuzzy matching. Names are standardized against multiple reference taxonomies, including the Missouri Botanical Garden's Tropicos database. Capable of processing thousands of names in a single operation, the TNRS parses and corrects misspelled names and authorities, standardizes variant spellings, and converts nomenclatural synonyms to accepted names. Family names can be included to increase match accuracy and resolve many types of homonyms. Partial matching of higher taxa combined with extraction of annotations, accession numbers and morphospecies allows the TNRS to standardize taxonomy across a broad range of active and legacy datasets.
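Fuzzy matching of misspelled names rests on edit distance. A small sketch of Levenshtein distance plus a nearest-name lookup (the TNRS layers taxonomy-aware parsing and authority handling on top of this primitive; the species list and distance budget below are illustrative):

```python
def edit_distance(a, b):
    """Levenshtein distance via dynamic programming over two rolling rows."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]

def best_match(query, reference_names, max_dist=3):
    """Return the closest reference name within the distance budget, or None."""
    scored = [(edit_distance(query.lower(), r.lower()), r) for r in reference_names]
    dist, name = min(scored)
    return name if dist <= max_dist else None

names = ["Quercus alba", "Quercus rubra", "Fagus grandifolia"]
print(best_match("Quercas albba", names))  # Quercus alba
```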

Conclusions

We show how the TNRS can resolve many forms of taxonomic semantic heterogeneity, correct spelling errors and eliminate spurious names. As a result, the TNRS can aid the integration of disparate biological datasets. Although the TNRS was developed to aid in standardizing plant names, its underlying algorithms and design can be extended to all organisms and nomenclatural codes. The TNRS is accessible via a web interface at http://tnrs.iplantcollaborative.org/ and as a RESTful web service and application programming interface. Source code is available at https://github.com/iPlantCollaborativeOpenSource/TNRS/.

17.

Background

An adequate and expressive ontological representation of biological organisms and their parts requires formal reasoning mechanisms for their relations of physical aggregation and containment.

Results

We demonstrate that the proposed formalism allows one to deal consistently with "role propagation along non-taxonomic hierarchies", a problem which has repeatedly been identified as an intricate reasoning problem in biomedical ontologies.

Conclusion

The proposed approach seems suitable for the redesign of compositional hierarchies in (bio)medical terminology systems that are embedded in the framework of the OBO (Open Biological Ontologies) Relation Ontology and use knowledge representation languages developed by the Semantic Web community.

18.

Background

Atherosclerosis is the leading cause of death in western societies, and cigarette smoke is among the factors that strongly contribute to the development of this disease. The early events in atherogenesis are stimulated on the one hand by cytokines that chemoattract leukocytes and on the other by a decrease in circulating molecules that protect endothelial cells (ECs) from injury. Here we focus our studies on the effects of "second-hand" smoke on atherogenesis.

Methods

To perform these studies, a smoking system that closely simulates exposure of humans to second-hand smoke was developed and a mouse model system transgenic for human apoB100 was used. These mice have moderate lipid levels that closely mimic human conditions that lead to atherosclerotic plaque formation.

Results

"Second-hand" cigarette smoke decreases plasma high density lipoprotein levels in the blood and also decreases the ratios between high density lipoprotein and low density lipoprotein, high density lipoprotein and triglyceride, and high density lipoprotein and total cholesterol. This change in lipid profiles causes not only more lipid accumulation in the aorta but also lipid deposition in many of the smaller vessels of the heart and in hepatocytes. In addition, mice exposed to smoke have increased levels of Monocyte Chemoattractant Protein–1 in circulation and in the heart/aorta tissue, have increased macrophages in the arterial walls, and have decreased levels of adiponectin, an EC-protective protein. Also, cytokine arrays revealed that mice exposed to smoke do not undergo the switch from the pro-inflammatory cytokine profile (that develops when the mice are initially exposed to second-hand smoke) to the adaptive response. Furthermore, triglyceride levels increase significantly in the liver of smoke-exposed mice.

Conclusion

Long-term exposure to "second-hand" smoke creates a state of permanent inflammation and an imbalance in the lipid profile that leads to lipid accumulation in the liver and in the blood vessels of the heart and aorta. The former can potentially lead to non-alcoholic fatty liver disease, and the latter to heart attacks.

19.
20.

Background

Large-scale protein structure alignment, an indispensable tool to structural bioinformatics, poses a tremendous challenge on computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons.

Findings

We present ppsAlign, a parallel protein structure alignment framework designed and optimized to exploit the parallelism of graphics processing units (GPUs). As a general-purpose GPU platform, ppsAlign can incorporate many existing methods, such as TM-align and Fr-TM-align, into its parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH.

Conclusions

ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity of protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massively parallel computing power of GPUs.
