20 similar documents found.
1.
Daniele Perrone Ben Jeurissen Jan Aelterman Timo Roine Jan Sijbers Aleksandra Pizurica Alexander Leemans Wilfried Philips 《PloS one》2016,11(3)
Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce WM bundle characteristics in real-world conditions, such as the presence of noise, partial volume effects, and limited spatial and angular resolution. The difficulty lies in the lack of a realistic brain phantom on the one hand, and a sufficiently accurate way of modeling the acquisition-related degradation on the other. This paper proposes a software phantom that approximates a human brain to a high degree of realism and that can incorporate complex brain-like structural features. We refer to it as a Diffusion BRAIN (D-BRAIN) phantom. We also propose an accurate model of a (DW-)MRI acquisition protocol to allow for validation of methods in realistic conditions with data imperfections. The phantom model simulates anatomical and diffusion properties for multiple brain tissue components, and can serve as a ground truth for evaluating FT algorithms, among other methods. The simulation of the acquisition process allows one to include noise, partial volume effects, and limited spatial and angular resolution in the images. In this way, the effect of image artifacts on, for instance, fiber tractography can be investigated in great detail. The proposed framework enables reliable and quantitative evaluation of DW-MR image processing and FT algorithms at the level of large-scale WM structures. The effect of noise levels and other data characteristics on cortico-cortical connectivity and tractography-based grey matter parcellation can be investigated as well.
2.
A Rivero-Juárez J Morgaz A Camacho P Muñoz-Rascón JM Dominguez R Sánchez-Céspedes J Torre-Cisneros A Rivero 《PloS one》2012,7(7):e41557
Background
The objectives of this study were to evaluate the best position and best exploration probe for determining liver stiffness (LS) in dogs using transient liver elastography (TE). Thirteen dogs were used in the study.
Methodology/Principal Findings
Morphometric measurements taken were thoracic circumference, weight and height. Elastographic measurements were taken in 4 anatomical positions using two different probes: medium (M) and small (S). The exploration was considered correct when the success rate was above 60% and the interquartile range of the measurements did not exceed 30%. The best measurements were obtained in the middle of the 6th–9th intercostal spaces, with the dog in the left lateral position, using probe M preferentially in adults; probe S was mandatory for animals <2 years old. The correlation between probes was 99%. Intra-observer variability showed an intra-class correlation of 97.6%.
Conclusions/Significance
TE is a reproducible technique in dogs.
3.
Jerod M. Rasmussen Sonja Entringer Annie Nguyen Theo G. M. van Erp Ana Guijarro Fariba Oveisi James M. Swanson Daniele Piomelli Pathik D. Wadhwa Claudia Buss Steven G. Potkin 《PloS one》2013,8(10)
There is a major resurgence of interest in brown adipose tissue (BAT) biology, particularly regarding its determinants and consequences in newborns and infants. Reliable methods for non-invasive BAT measurement in human infants have yet to be demonstrated. The current study first validates methods for quantitative BAT imaging of rodents post mortem, followed by BAT excision and re-imaging of the excised tissues. Identical methods are then employed in a cohort of infants in vivo to establish the reliability of these measures and to provide normative statistics for BAT depot volume and fat fraction. Using multi-echo water-fat MRI, fat- and water-based images of rodents and neonates were acquired, and ratios of fat to the combined signal from fat and water (fat signal fraction) were calculated. Neonatal scans (n = 22) were acquired during natural sleep to quantify BAT and white adipose tissue (WAT) deposits for depot volume and fat fraction. Acquisition repeatability was assessed based on multiple scans from the same neonate. Intra- and inter-rater reliability of regional BAT depot volume and fat fraction quantification was determined based on multiple segmentations by two raters. Rodent BAT was characterized as having significantly higher water content than WAT in both in situ and ex vivo imaging assessments. In human neonates, deposits indicative of bilateral BAT were observed in spinal, supraclavicular and axillary regions. Pairwise, the WAT fat fraction was significantly greater than the BAT fat fraction throughout the sample (ΔWAT−BAT = 38%, p < 10⁻⁴). Repeated scans demonstrated a high voxelwise correlation for fat fraction (R_all = 0.99). BAT depot volume and fat fraction measurements showed high intra-rater (ICC_BAT,VOL = 0.93, ICC_BAT,FF = 0.93) and inter-rater reliability (ICC_BAT,VOL = 0.86, ICC_BAT,FF = 0.93). This study demonstrates the reliability of multi-echo water-fat MRI for quantifying BAT depot volume and fat fraction throughout the torso in human neonates.
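The fat signal fraction used in this study is a simple voxelwise ratio. The sketch below (NumPy, with hypothetical array values rather than the authors' code) illustrates the computation, assuming co-registered fat and water magnitude images.

```python
import numpy as np

def fat_signal_fraction(fat, water, eps=1e-12):
    """Voxelwise fat fraction: fat / (fat + water).

    `fat` and `water` are co-registered magnitude images from a
    multi-echo water-fat separation; `eps` avoids division by zero.
    """
    fat = np.asarray(fat, dtype=float)
    water = np.asarray(water, dtype=float)
    return fat / (fat + water + eps)

# Toy example: BAT-like voxels have an intermediate fat fraction,
# WAT-like voxels a high one.
fat = np.array([[30.0, 80.0], [25.0, 90.0]])
water = np.array([[70.0, 10.0], [75.0, 8.0]])
print(fat_signal_fraction(fat, water))
```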
4.
Methods
PubMed/Medline, Embase, the Cochrane Library and Ovid were searched for all studies assessing spleen stiffness (SS) and liver stiffness (LS) simultaneously in the diagnosis of esophageal varices (EV). A total of 16 studies including 1892 patients were included in this meta-analysis, and the pooled statistical parameters were calculated using bivariate mixed-effects models.
Results
For detection of any EV, the summary sensitivity of LS measurement was 0.83 (95% confidence interval [CI]: 0.78–0.87) and the specificity was 0.66 (95% CI: 0.60–0.72), whereas the pooled sensitivity and specificity of SS measurement were 0.88 (95% CI: 0.83–0.92) and 0.78 (95% CI: 0.73–0.83). The summary receiver operating characteristic (SROC) curve values for LS and SS were 0.81 (95% CI: 0.77–0.84) and 0.88 (95% CI: 0.85–0.91), respectively, and the difference was statistically significant (P<0.01). The diagnostic odds ratio (DOR) of SS (25.73) was significantly higher than that of LS (9.54), with a relative DOR of 2.48 (95% CI: 1.10–5.60), P<0.05.
Conclusions
Under current techniques, SS is significantly superior to LS for identifying the presence of EV in patients with chronic liver disease (CLD). SS measurement may help to select patients for endoscopic screening.
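To illustrate how the pooled accuracy figures relate to the reported diagnostic odds ratios, the sketch below recomputes approximate DORs directly from the summary sensitivity and specificity; the paper's bivariate mixed-effects pooling yields slightly different point estimates.

```python
def diagnostic_odds_ratio(sensitivity, specificity):
    """DOR = positive likelihood ratio / negative likelihood ratio."""
    positive_lr = sensitivity / (1.0 - specificity)
    negative_lr = (1.0 - sensitivity) / specificity
    return positive_lr / negative_lr

dor_ls = diagnostic_odds_ratio(0.83, 0.66)  # liver stiffness
dor_ss = diagnostic_odds_ratio(0.88, 0.78)  # spleen stiffness
print(f"LS DOR ~ {dor_ls:.1f}, SS DOR ~ {dor_ss:.1f}, ratio ~ {dor_ss / dor_ls:.2f}")
# ~9.5 and ~26, close to the reported 9.54 and 25.73.
```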
5.
6.
Peter Klimek Alexandra Kautzky-Willer Anna Chmiel Irmgard Schiller-Frühwirth Stefan Thurner 《PLoS computational biology》2015,11(4)
Despite substantial progress in the study of diabetes, important questions remain about its comorbidities and clinical heterogeneity. To explore these issues, we develop a framework that allows, for the first time, quantification of nation-wide risks and their age and sex dependence for each diabetic comorbidity, and assessment of whether an association may be consequential or causal, in a sample of almost two million patients. This study is equivalent to nearly 40,000 single clinical measurements. We confirm the highly controversial relation of increased risk of Parkinson's disease in diabetics, using a cohort 10 times larger than those of previous studies on this relation. Detection of type 1 diabetes precedes detection of depression, whereas there is a strong comorbidity relation between type 2 diabetes and schizophrenia, suggesting similar pathogenic or medication-related mechanisms. We find significant sex differences in the progression of, for instance, sleep disorders and congestive heart failure in diabetic patients. Hypertension is a highly sex-sensitive comorbidity, with females at lower risk during fertile age but at higher risk otherwise. These results may be useful for improving screening practices in the general population. Clinical management of diabetes must address the age and sex dependence of multiple comorbid conditions.
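A minimal sketch of the kind of stratified risk estimate such a framework builds on (illustrative counts only, not data from the study): the sex-specific relative risk of a comorbidity in diabetic versus non-diabetic patients.

```python
import numpy as np

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk ratio of a comorbidity in exposed (diabetic) vs. unexposed patients."""
    risk_exposed = exposed_cases / exposed_total
    risk_unexposed = unexposed_cases / unexposed_total
    rr = risk_exposed / risk_unexposed
    # Approximate 95% CI on the log scale (Katz method).
    se_log = np.sqrt(1/exposed_cases - 1/exposed_total
                     + 1/unexposed_cases - 1/unexposed_total)
    ci = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
    return rr, ci

# Hypothetical counts (cases, total) for diabetic and non-diabetic groups, by sex:
for sex, args in {"female": (300, 2000, 900, 20000), "male": (250, 2000, 1100, 20000)}.items():
    rr, ci = relative_risk(*args)
    print(f"{sex}: RR = {rr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```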
7.
Sahan C.B. Herath Du Yue Shi Hui Min-Cheol Kim Dong-an Wang Qingguo Wang Krystyn J. Van Vliet Harry Asada Peter C.Y. Chen 《Biophysical journal》2014,106(1):332-341
The stiffness of the extracellular matrix (ECM) is known to influence cell behavior. The ability to manipulate the stiffness of the ECM has important implications for understanding how cells interact mechanically with their microenvironment. This article describes an approach to manipulating the stiffness of the ECM, whereby magnetic beads are embedded in the ECM through bioconjugation between the streptavidin-coated beads and the collagen fibers, and are then manipulated by an external magnetic field. It also reports both analytical results (obtained by formal modeling and numerical simulation) and statistically meaningful experimental results (obtained by atomic force microscopy) that demonstrate the effectiveness of this approach. These results clearly suggest the possibility of creating desired stiffness gradients in ECM in vitro to influence cell behavior.
8.
9.
Background
Effective and accurate diagnosis of attention-deficit/hyperactivity disorder (ADHD) is currently of significant interest. ADHD has been associated with multiple cortical features from structural MRI data. However, most existing learning algorithms for ADHD identification have notable drawbacks, such as time-consuming training and parameter selection. The aims of this study were (1) to propose an ADHD classification model using the extreme learning machine (ELM) algorithm for automatic, efficient and objective clinical ADHD diagnosis, and (2) to assess the computational efficiency and the effect of sample size on both ELM and support vector machine (SVM) methods, and to analyze which brain segments are involved in ADHD.
Methods
High-resolution three-dimensional MR images were acquired from 55 ADHD subjects and 55 healthy controls. Multiple brain measures (cortical thickness, etc.) were calculated using a fully automated procedure in the FreeSurfer software package. In total, 340 cortical features (5 basic cortical features for each of 68 brain segments) were automatically extracted. F-score and SFS methods were adopted to select the optimal features for ADHD classification. Both ELM and SVM were evaluated for classification accuracy using leave-one-out cross-validation.
Results
We achieved ADHD prediction accuracies of 90.18% for ELM using eleven combined features, 84.73% for SVM-Linear and 86.55% for SVM-RBF. Our results show that ELM has better computational efficiency and is more robust to changes in sample size than SVM for ADHD classification. The most pronounced differences between ADHD and healthy subjects were observed in the frontal lobe, temporal lobe, occipital lobe and insula.
Conclusion
Our ELM-based algorithm for ADHD diagnosis performs considerably better than the traditional SVM algorithm. This result suggests that ELM may be used for the clinical diagnosis of ADHD and the investigation of different brain diseases.
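For readers unfamiliar with the extreme learning machine, the sketch below shows its core idea on synthetic data: a random, fixed hidden layer followed by a least-squares output layer. It is a generic illustration, not the authors' pipeline or feature set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data standing in for the cortical feature vectors.
n, d = 110, 11
X = rng.normal(size=(n, d))
y = (X[:, :3].sum(axis=1) + 0.3 * rng.normal(size=n) > 0).astype(float)

# Extreme learning machine: random input weights, sigmoid hidden layer,
# output weights solved in closed form by least squares.
n_hidden = 50
W = rng.normal(size=(d, n_hidden))
b = rng.normal(size=n_hidden)

def hidden(X):
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

H = hidden(X)
beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights

pred = (hidden(X) @ beta > 0.5).astype(float)
print("training accuracy:", (pred == y).mean())
```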
10.
11.
Background
Real-time quantitative PCR (qPCR) is still the gold-standard technique for gene-expression quantification. Recent technological advances of this method allow for high-throughput gene-expression analysis without the usual limitations on the amount of sample and reagent used. However, non-commercial, user-friendly software for the management and analysis of these data is not available.
Results
The recently developed commercial microarrays allow for the drawing of standard curves of multiple assays using the same n-fold diluted samples. Data Analysis Gene (DAG) Expression software has been developed to perform high-throughput gene-expression data analysis using standard curves for relative quantification and one or multiple reference genes for sample normalization. We discuss the application of DAG Expression in the analysis of data from an experiment performed with Fluidigm technology, in which 48 genes and 115 samples were measured. Furthermore, the quality of our analysis was tested and compared with other available methods.
Conclusions
DAG Expression is freely available software that permits the automated analysis and visualization of high-throughput qPCR data. A detailed manual and a demo experiment are provided with the DAG Expression software at http://www.dagexpression.com/dage.zip.
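A minimal sketch of the standard-curve relative quantification that such software automates (hypothetical Cq values; the actual DAG Expression algorithms are not reproduced here): fit Cq against log10 quantity on a dilution series, interpolate unknowns, and normalize to a reference gene.

```python
import numpy as np

# Standard curve: Cq values measured on a 10-fold dilution series (hypothetical).
log10_quantity = np.array([3, 2, 1, 0, -1], dtype=float)   # relative input amounts
cq_standard = np.array([18.1, 21.4, 24.8, 28.2, 31.5])

# Linear fit Cq = slope * log10(quantity) + intercept.
slope, intercept = np.polyfit(log10_quantity, cq_standard, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency from the slope
print(f"slope = {slope:.2f}, efficiency ~ {efficiency * 100:.0f}%")

def quantity_from_cq(cq):
    """Interpolate the relative quantity of an unknown sample from its Cq."""
    return 10 ** ((cq - intercept) / slope)

# Normalize a target gene to a reference gene in two samples (hypothetical Cqs).
target_cq = {"control": 26.0, "treated": 24.5}
reference_cq = {"control": 20.0, "treated": 20.1}
norm = {s: quantity_from_cq(target_cq[s]) / quantity_from_cq(reference_cq[s])
        for s in target_cq}
print("fold change (treated / control):", norm["treated"] / norm["control"])
```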
12.
A thermal convection loop is an annular chamber filled with water, heated on its bottom half and cooled on its top half. With sufficiently large heat forcing, the direction of fluid flow in the loop oscillates chaotically, with dynamics analogous to the Earth's weather. As is the case for state-of-the-art weather models, we only observe the statistics over a small region of state space, making prediction difficult. To overcome this challenge, data assimilation (DA) methods, and specifically ensemble methods, use the computational model itself to estimate its uncertainty and to optimally combine these observations into an initial condition for predicting the future state. Here, we build and verify four distinct DA methods, and then perform a twin-model experiment with the computational fluid dynamics simulation of the loop, using the Ensemble Transform Kalman Filter (ETKF) to assimilate observations and predict flow reversals. We show that using adaptively shaped localized covariance outperforms static localized covariance with the ETKF, and allows the use of fewer observations in predicting flow reversals. We also show that a Dynamic Mode Decomposition (DMD) of the temperature and velocity fields recovers the low-dimensional system underlying reversals, finding specific modes which together are predictive of reversal direction.
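The sketch below shows a standard exact-DMD computation on synthetic snapshots; it is the generic algorithm, not the authors' code, and the toy data stand in for the temperature and velocity fields analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy snapshot matrix: two oscillatory spatial modes plus noise,
# standing in for temperature/velocity fields sampled in time.
n_space, n_time, dt = 64, 200, 0.05
x = np.linspace(0, 2 * np.pi, n_space)
t = np.arange(n_time) * dt
data = (np.outer(np.sin(x), np.cos(2.3 * t))
        + np.outer(np.cos(3 * x), np.sin(1.1 * t))
        + 0.01 * rng.normal(size=(n_space, n_time)))

# Exact DMD: pair consecutive snapshots X -> X'.
X, Xp = data[:, :-1], data[:, 1:]
U, s, Vh = np.linalg.svd(X, full_matrices=False)
r = 4                                        # truncation rank
Ur, sr, Vr = U[:, :r], s[:r], Vh[:r].conj().T
Atilde = Ur.conj().T @ Xp @ Vr / sr          # reduced linear operator
eigvals, W = np.linalg.eig(Atilde)
modes = Xp @ Vr / sr @ W                     # exact DMD modes
freqs = np.log(eigvals).imag / (2 * np.pi * dt)
print("dominant DMD frequencies (Hz):", np.round(np.sort(np.abs(freqs)), 3))
```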
13.
Purpose
Absolute concentrations of high-energy phosphorus (31P) metabolites in the liver provide more insight into the physiologic status of liver disease than resonance integral ratios do. A simple method for measuring absolute concentrations of 31P metabolites in the human liver is described. The approach uses a surface-spoiling inhomogeneous magnetic field gradient to select signal from liver tissue. The technique avoids issues caused by respiratory motion, chemical shift dispersion associated with linear magnetic field gradients, and increased tissue heat deposition due to radiofrequency absorption, especially at high field strength.
Methods
A method to localize signal from the liver was demonstrated using superficial and highly non-uniform magnetic field gradients, which eliminate signals from surface tissues located between the liver and the RF coil. A double-standard method was implemented to determine absolute 31P metabolite concentrations in vivo. Eight healthy individuals were examined in a 3 T MR scanner.
Results
Concentrations of metabolites measured in eight healthy individuals are: γ-adenosine triphosphate (ATP) = 2.44 ± 0.21 (mean ± sd) mmol/l of wet tissue volume, α-ATP = 3.2 ± 0.63 mmol/l, β-ATP = 2.98 ± 0.45 mmol/l, inorganic phosphates (Pi) = 1.87 ± 0.25 mmol/l, phosphodiesters (PDE) = 10.62 ± 2.20 mmol/l and phosphomonoesters (PME) = 2.12 ± 0.51 mmol/l. All are in good agreement with literature values.
Conclusions
The technique offers a robust and fast means to localize signal from liver tissue, allows absolute metabolite concentration determination, and avoids problems associated with constant field gradient (linear field variation) localization methods.
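As a generic illustration of reference-based absolute quantification (not the paper's double-standard protocol, whose calibration details are not given in the abstract), a metabolite concentration can be estimated from the ratio of its signal to that of a standard of known concentration:

```python
def absolute_concentration(signal_metabolite, signal_reference,
                           reference_concentration_mmol_l,
                           correction_factor=1.0):
    """Estimate a metabolite concentration from the signal ratio to a
    reference standard of known concentration.

    `correction_factor` lumps together acquisition-dependent corrections
    (coil loading, relaxation, flip angle); it is set to 1.0 here purely
    for illustration.
    """
    ratio = signal_metabolite / signal_reference
    return ratio * reference_concentration_mmol_l * correction_factor

# Hypothetical numbers: a gamma-ATP peak integral 0.12 times that of a
# 20 mmol/l phosphate standard corresponds to ~2.4 mmol/l.
print(absolute_concentration(0.12, 1.0, 20.0))
```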
14.
15.
The stiffness of fracture fixation devices, together with musculoskeletal loading, defines the mechanical environment within a long bone fracture and can be quantified by the interfragmentary movement. In vivo results suggest that this movement can have acceleratory or inhibitory influences on healing, depending on its direction and magnitude, indicating that some complications in fracture treatment could be avoided by optimizing the fixation stiffness. However, general statements are difficult to make due to the limited number of experimental findings. The aim of this study was therefore to numerically investigate healing outcomes under various combinations of shear and axial fixation stiffness, and to detect the optimal configuration. A calibrated and established numerical model was used to predict fracture healing for numerous combinations of axial and shear fixation stiffness under physiological, superimposed, axial compressive and translational shear loading in sheep. Characteristic maps of healing outcome versus fixation stiffness (axial and shear) were created. The results suggest that delayed healing of 3 mm transverse fracture gaps will occur for highly flexible or very rigid axial fixation, which was corroborated by in vivo findings. The optimal fixation stiffness for ovine long bone fractures was predicted to be 1000–2500 N/mm in the axial and >300 N/mm in the shear direction. In summary, an optimized, moderate axial stiffness together with sufficient shear stiffness enhances the fracture healing process. The negative influence of an improper stiffness in one direction can be compensated for by adjusting the stiffness in the other direction.
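Since interfragmentary movement is, to first order, the applied load divided by the fixation stiffness, a back-of-the-envelope check of whether a construct falls in the predicted optimal window looks like the sketch below (the load and stiffness values are hypothetical placeholders, not taken from the study).

```python
def interfragmentary_movement_mm(load_n, stiffness_n_per_mm):
    """Approximate interfragmentary movement as load / fixation stiffness."""
    return load_n / stiffness_n_per_mm

def in_predicted_optimal_window(axial_stiffness, shear_stiffness):
    """Optimal window reported for ovine long-bone fractures:
    axial 1000-2500 N/mm, shear > 300 N/mm."""
    return 1000 <= axial_stiffness <= 2500 and shear_stiffness > 300

# Hypothetical fixator: 1500 N/mm axial, 400 N/mm shear, 500 N axial load.
axial_k, shear_k, axial_load = 1500.0, 400.0, 500.0
print("axial IFM (mm):", interfragmentary_movement_mm(axial_load, axial_k))
print("in optimal window:", in_predicted_optimal_window(axial_k, shear_k))
```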
16.
Summary
In this article, we describe a Bayesian approach to the calibration of a stochastic computer model of chemical kinetics. As with many applications in the biological sciences, the data available to calibrate the model come from different sources. Furthermore, these data appear to provide somewhat conflicting information about the model parameters. We describe a modeling framework that allows us to synthesize this conflicting information and arrive at a consensus inference. In particular, we show how random effects can be incorporated into the model to account for between-individual heterogeneity that may be the source of the apparent conflict.
17.
To permit linkage of computerized patient data obtained from different sources, a universal and efficient method of patient identification is necessary. A coding system of 16 characters with a high degree of discrimination is proposed. The first five characters code the individual's family name, the next four his given name; the next six digits are his date of birth expressed in day, month and year; and the last character codes his sex. This system, using readily available patient information, is simple to manipulate and generates codes that are also medically informative. When this method of identification was tested on a list of 18,000 persons, no identical codes were found.
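A sketch of how such a 16-character identifier could be assembled (an illustration of the scheme described; the padding and formatting choices for short names are assumptions the abstract does not specify):

```python
from datetime import date

def patient_code(family_name: str, given_name: str, birth: date, sex: str) -> str:
    """Build a 16-character identifier: 5 letters of the family name,
    4 letters of the given name, date of birth as DDMMYY, one sex character.

    Padding short names with 'X' is an assumption; the original scheme does
    not specify how short names are handled.
    """
    fam = (family_name.upper().replace(" ", "") + "XXXXX")[:5]
    giv = (given_name.upper().replace(" ", "") + "XXXX")[:4]
    dob = birth.strftime("%d%m%y")
    return fam + giv + dob + sex.upper()[:1]

code = patient_code("Martineau", "Anne", date(1942, 3, 7), "F")
print(code, len(code))   # e.g. MARTIANNE070342F 16
```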
18.
This study describes novel algorithms for searching for most parsimonious trees. These algorithms are implemented as a parsimony computer program, PARSIGAL, which performs well even with difficult data sets. For high-level search, PARSIGAL uses an evolutionary optimization algorithm, which feeds good tree candidates to a branch-swapping local search procedure. This study also describes an extremely fast method of recomputing state sets for binary characters (additive or nonadditive characters with two states), based on packing 32 characters into a single memory word and recomputing the tree simultaneously for all 32 characters using fast bitwise logical operations. The operational principles of PARSIGAL are quite different from those previously published for other parsimony computer programs. Hence it is conceivable that PARSIGAL may be able to locate islands of trees that are different from those that are easily located with existing parsimony computer programs.
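A sketch of the kind of bit-parallel state-set update the abstract describes, for binary characters packed 32 to a machine word, using the usual two-bitmask Fitch representation; PARSIGAL's actual implementation may differ in detail.

```python
MASK32 = 0xFFFFFFFF

def fitch_combine(a0, a1, b0, b1):
    """Fitch preliminary pass for 32 binary characters at once.

    Each child's state sets are held in two 32-bit masks: bit i of x0 is
    set if state 0 is possible for character i, bit i of x1 if state 1 is
    possible. Returns the parent's masks and the number of characters that
    required a union (i.e., implied changes).
    """
    inter0, inter1 = a0 & b0, a1 & b1
    empty = ~(inter0 | inter1) & MASK32        # characters with empty intersection
    out0 = inter0 | (empty & (a0 | b0))
    out1 = inter1 | (empty & (a1 | b1))
    changes = bin(empty).count("1")
    return out0, out1, changes

# Two leaves: leaf A has state 0 for all 32 characters; leaf B has state 1
# for the low 16 characters and state 0 for the rest.
a0, a1 = MASK32, 0
b0, b1 = 0xFFFF0000, 0x0000FFFF
p0, p1, changes = fitch_combine(a0, a1, b0, b1)
print(hex(p0), hex(p1), changes)   # the low 16 characters each add one change
```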
19.
Fa-Hsuan Lin Panu T. Vesanen Yi-Cheng Hsu Jaakko O. Nieminen Koos C. J. Zevenhoven Juhani Dabek Lauri T. Parkkonen Juha Simola Antti I. Ahonen Risto J. Ilmoniemi 《PloS one》2013,8(4)
Ultra-low-field (ULF) MRI (B0 = 10–100 µT) typically suffers from a low signal-to-noise ratio (SNR). While SNR can be improved by pre-polarization and by signal detection using highly sensitive superconducting quantum interference device (SQUID) sensors, we propose to use the inter-dependency of the k-space data from highly parallel detection, with up to tens of sensors readily available in ULF MRI, in order to suppress the noise. Furthermore, the prior information that an image can be sparsely represented can be integrated with this data-consistency constraint to further improve the SNR. Simulations and experimental data using 47 SQUID sensors demonstrate the effectiveness of this data-consistency constraint and sparsity prior in ULF-MRI reconstruction.
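A heavily simplified sketch of combining a data-consistency term over multiple sensors with a sparsity prior, solved by iterative soft-thresholding on a toy 1D signal; this is a generic stand-in, not the authors' ULF-MRI reconstruction, and the scalar sensor gains are a crude proxy for real k-space sensitivities.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: a sparse 1D "image" observed by several noisy sensors,
# each with its own gain.
n, n_sensors = 128, 8
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.normal(3, 1, 5)
gains = rng.uniform(0.5, 1.5, size=(n_sensors, 1))
y = gains * x_true + 0.3 * rng.normal(size=(n_sensors, n))

# Data consistency + sparsity: minimize sum_k ||g_k x - y_k||^2 + lam * ||x||_1
# by iterative soft-thresholding (ISTA).
lam, step = 0.5, 1.0 / (gains ** 2).sum()
x = np.zeros(n)
for _ in range(200):
    grad = (gains * (gains * x - y)).sum(axis=0)   # gradient of the data term
    z = x - step * grad
    x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

print("relative reconstruction error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```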
20.
Sean C. Taylor Thomas Berkelman Geetha Yadav Matt Hammond 《Molecular biotechnology》2013,55(3):217-226
Chemiluminescent western blotting has been in common practice for over three decades, but its use as a quantitative method for measuring the relative expression of target proteins is still debatable. This is mainly due to the variety of steps, techniques, reagents, and detection methods used to obtain the associated data. In order to have confidence in densitometric data from western blots, researchers should be able to demonstrate statistically significant fold differences in protein expression. This entails a necessary evolution of the procedures, controls, and analysis methods. We describe a methodology for obtaining reliable quantitative data from chemiluminescent western blots using standardization procedures coupled with updated reagents and detection methods.
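A minimal sketch of the kind of densitometric analysis the article argues for (hypothetical band intensities): normalize each target band to its loading control, then test whether the fold change between conditions is statistically significant.

```python
import numpy as np
from scipy import stats

# Hypothetical band intensities from three biological replicates per condition.
target_control = np.array([1.00e6, 1.12e6, 0.95e6])
loading_control_ctrl = np.array([2.01e6, 2.10e6, 1.95e6])
target_treated = np.array([2.40e6, 2.10e6, 2.65e6])
loading_control_trt = np.array([2.05e6, 1.98e6, 2.12e6])

# Normalize the target signal to the loading control, lane by lane.
norm_ctrl = target_control / loading_control_ctrl
norm_trt = target_treated / loading_control_trt

fold_change = norm_trt.mean() / norm_ctrl.mean()
t_stat, p_value = stats.ttest_ind(norm_trt, norm_ctrl)
print(f"fold change = {fold_change:.2f}, p = {p_value:.3f}")
```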