Similar Articles
Found 20 similar articles (search time: 203 ms).
1.
Real-time three-dimensional (RT3D) ultrasound is a relatively new imaging modality that uses a special ultrasound transducer consisting of a matrix array of elements. The array electronically steers an ultrasound beam to interrogate a 3D volume of tissue. The real-time nature of RT3D ultrasound differentiates it from reconstructed 3D ultrasound, in which a conventional ultrasound transducer is moved mechanically through the third dimension. RT3D ultrasound is considerably faster than reconstructed 3D ultrasound, making it suitable for capturing continuous rapid motion such as that of the beating heart. Although RT3D ultrasound has not yet found widespread clinical use, these scanners are presently employed in more than 20 locations worldwide, primarily for cardiac research. The author helped develop the RT3D ultrasound technology as well as specialized analysis and visualization methods for the resulting data. In developing such methods, it has been necessary to consider the physical and mathematical processes by which the ultrasound data are collected. Difficulties arise because of high noise, variation in contrast and intensity between scans, ultrasound's nonrectilinear coordinate system, and the anisotropic nature of the echoes themselves. This article reviews these specific difficulties and provides solutions that are applicable to generalized analysis and visualization of RT3D ultrasound data. Some of the methods presented can also be applied to other imaging modalities with nonrectilinear coordinates.
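The nonrectilinear coordinate system mentioned above means each beam sample must be scan-converted before it can be displayed on a Cartesian grid. A minimal sketch of such a conversion is below; the angle conventions and the function name are illustrative assumptions, since real matrix-array scanners define their steering geometry in scanner-specific ways.

```python
import math

def scan_convert(r, azimuth, elevation):
    """Convert one ultrasound sample from beam coordinates
    (range r, azimuth and elevation steering angles in radians)
    to Cartesian (x, y, z) with the transducer at the origin.

    Hypothetical simplified geometry for illustration only; actual
    RT3D scanners use their own angle conventions.
    """
    x = r * math.sin(azimuth) * math.cos(elevation)
    y = r * math.sin(elevation)
    z = r * math.cos(azimuth) * math.cos(elevation)
    return x, y, z
```

A sample fired straight ahead (both angles zero) lands on the z-axis at its range, which is the sanity check for any such mapping.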

2.
Flow stagnation and residence time (RT) are important features of diseased arterial flows that influence biochemical transport processes and thrombosis. RT calculation methods are classified into Eulerian and Lagrangian approaches where several measures have been proposed to quantify RT. Each of these methods has a different definition of RT, and it is not clear how they are related. In this study, image-based computational models of blood flow in an abdominal aortic aneurysm and a cerebral aneurysm were considered and RT was calculated using different methods. In the Lagrangian methods, discrete particle tracking of massless tracers was used to calculate particle residence time and mean exposure time. In the Eulerian methods, continuum transport models were used to quantify RT using Eulerian RT and virtual ink approaches. Point-wise RT and Eulerian indicator RT were also computed based on measures derived from velocity. A comparison of these methods is presented and the implications of each method are discussed. Our results highlight that most RT methods have a conceptually distinct definition of RT and therefore should be utilized depending on the specific application of interest.
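The Lagrangian particle residence time idea described in this abstract can be sketched in a few lines: advect a massless tracer through a velocity field and accumulate the time it spends inside a region of interest. This is a toy 2-D sketch with a forward-Euler integrator; real hemodynamics codes use 3-D image-based fields and higher-order schemes, and all names here are assumptions.

```python
def particle_residence_time(x0, y0, velocity, in_region, dt=0.01, t_max=10.0):
    """Track a massless tracer from (x0, y0) through a steady 2-D velocity
    field with forward-Euler steps, accumulating the time the particle
    spends inside a region of interest."""
    x, y, t, rt = x0, y0, 0.0, 0.0
    while t < t_max:
        if in_region(x, y):
            rt += dt
        u, v = velocity(x, y)          # sample the (steady) velocity field
        x, y, t = x + u * dt, y + v * dt, t + dt
    return rt

# Uniform unit-speed rightward flow: a tracer released at x = 0 should
# spend about 1 s inside the unit-length region 0 <= x <= 1.
rt = particle_residence_time(0.0, 0.0,
                             velocity=lambda x, y: (1.0, 0.0),
                             in_region=lambda x, y: 0.0 <= x <= 1.0)
```

In a recirculating or stagnant zone the same loop yields a much larger `rt`, which is what makes the measure useful for thrombosis-prone flows.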

3.
Background
Although a substantial number of studies focus on the teaching and application of medical statistics in China, few comprehensively evaluate the cognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic.

Objectives
This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China.

Methods
We performed a comprehensive database search for studies on the cognition of and demand for medical statistics published from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analyses for undergraduates, graduates, and medical staff.

Results
There are substantial differences in the cognition of medical statistical theory among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for statistical theory is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, cognition of basic statistical methods is higher than of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of SPSS and SAS software are only 55% and 15%, respectively.

Conclusion
The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to apply statistical knowledge in practice is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.

4.
We propose a general statistical framework for meta-analysis of gene- or region-based multimarker rare variant association tests in sequencing association studies. In genome-wide association studies, single-marker meta-analysis has been widely used to increase statistical power by combining results via regression coefficients and standard errors from different studies. In analysis of rare variants in sequencing studies, region-based multimarker tests are often used to increase power. We propose meta-analysis methods for commonly used gene- or region-based rare variant tests, such as burden tests and variance component tests. Because estimation of regression coefficients of individual rare variants is often unstable or not feasible, the proposed method avoids this difficulty by calculating score statistics instead, which only require fitting the null model for each study and then aggregating these score statistics across studies. Our proposed meta-analysis rare variant association tests are conducted based on study-specific summary statistics, specifically score statistics for each variant and between-variant covariance-type (linkage disequilibrium) relationship statistics for each gene or region. The proposed methods are able to incorporate different levels of heterogeneity of genetic effects across studies and are applicable to meta-analysis of multiple ancestry groups. We show that the proposed methods are essentially as powerful as joint analysis by directly pooling individual-level genotype data. We conduct extensive simulations to evaluate the performance of our methods by varying levels of heterogeneity across studies, and we apply the proposed methods to meta-analysis of rare variant effects in a multicohort study of the genetics of blood lipid levels.
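The core of the score-based approach described here, in its simplest fixed-effects form, is that each study only contributes a score statistic and its variance, which are then summed. A minimal sketch of a meta-analysis burden Z statistic under that homogeneous-effects assumption follows; it is not the paper's full framework (no heterogeneity modelling, no between-variant covariance), and the function name is an assumption.

```python
import math

def burden_meta_z(scores, variances):
    """Combine per-study burden score statistics U_k and their variances
    V_k into one meta-analysis Z statistic without individual-level
    genotypes: Z = sum(U_k) / sqrt(sum(V_k)).

    Fixed-effects (homogeneous genetic effect) case only."""
    u = sum(scores)          # aggregated score across studies
    v = sum(variances)       # aggregated variance across studies
    return u / math.sqrt(v)
```

Each study fits only its null model to produce U_k and V_k, which is exactly why unstable per-variant coefficient estimates are never needed.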

5.
Robbins LG. Genetics. 2000;154(1):13-26.
Graduate school programs in genetics have become so full that courses in statistics have often been eliminated. In addition, typical introductory statistics courses for the "statistics user" rather than the nascent statistician are laden with methods for analysis of measured variables while genetic data are most often discrete numbers. These courses are often seen by students and genetics professors alike as largely irrelevant cookbook courses. The powerful methods of likelihood analysis, although commonly employed in human genetics, are much less often used in other areas of genetics, even though current computational tools make this approach readily accessible. This article introduces the MLIKELY.PAS computer program and the logic of do-it-yourself maximum-likelihood statistics. The program itself, course materials, and expanded discussions of some examples that are only summarized here are available at http://www.unisi.it/ricerca/dip/bio_evol/sitomlikely/mlikely.html.
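The "do-it-yourself maximum-likelihood" logic for discrete genetic counts can be illustrated with a likelihood-ratio (G) test of a binomial proportion, for example observed progeny counts against a Mendelian 3:1 expectation. The article's MLIKELY.PAS is a Pascal program; this Python sketch only mirrors the underlying calculation and is not taken from the program itself.

```python
import math

def g_statistic(k, n, p0):
    """Likelihood-ratio (G) test of a binomial proportion: k successes
    out of n trials against null proportion p0.
    G = 2 * (lnL at the MLE p_hat - lnL at p0); under the null G is
    approximately chi-squared with 1 degree of freedom."""
    def loglik(p):
        return k * math.log(p) + (n - k) * math.log(1 - p)
    p_hat = k / n                      # the maximum-likelihood estimate
    return 2.0 * (loglik(p_hat) - loglik(p0))
```

For example, 75 dominant-phenotype offspring out of 100 fit a 3:1 ratio exactly (G = 0), while 60 out of 100 yield a positive G that can be referred to the chi-squared distribution.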

6.

Background

Adjuvant radiotherapy (RT) after surgical removal of tumors has proved beneficial for long-term tumor control and treatment planning. For many years it has been widely accepted that the radiosensitivity of a tumor under radiotherapy decreases with tumor size, and RT models based on Poisson statistics have been used extensively to validate clinical data.

Results

We found that the Poisson statistics used in RT modeling were originally derived from bacterial cells, despite many validations against clinical data. Cancerous cells, however, exhibit abnormal cellular communication: they use chemical messengers to signal both surrounding normal and cancerous cells to develop new blood vessels, to invade, to metastasize, and in general to overcome intercellular spatial confinement. We therefore investigated the cell-killing effects of adjuvant RT and found that radiosensitivity is not, as previously believed, a monotonic function of volume. We present a detailed analysis and explanation to justify this statement. Based on the equivalent uniform dose (EUD), we present an equivalent radiosensitivity model.
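The classical Poisson tumor-control model that the authors critique can be written in a few lines: the tumor is controlled if no clonogenic cell survives, with linear-quadratic cell survival. The alpha and beta values below are illustrative placeholders, not fitted clinical parameters.

```python
import math

def poisson_tcp(n_clonogens, dose, alpha=0.3, beta=0.03):
    """Classical Poisson tumor-control probability (TCP): probability
    that zero clonogens survive a uniform dose (Gy), with
    linear-quadratic survival S(D) = exp(-alpha*D - beta*D^2) and
    TCP = exp(-N * S(D)). alpha, beta are illustrative values."""
    surviving = n_clonogens * math.exp(-alpha * dose - beta * dose ** 2)
    return math.exp(-surviving)
```

Note that volume enters only through the clonogen number N, so TCP falls monotonically with tumor size for a fixed dose; that built-in monotonicity, inherited from bacterial survival curves, is exactly the assumption the abstract argues breaks down once intercellular signalling matters.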

Conclusion

We conclude that radiosensitivity is a complex function of tumor volume, since tumor response to radiotherapy also depends on cellular communication.

7.
Currently, mapping genes for complex human traits relies on two complementary approaches, linkage and association analyses. Both suffer from several methodological and theoretical limitations, which can considerably increase the type I error rate and reduce the power to map human quantitative trait loci (QTL). This review focuses on linkage methods for QTL mapping. It summarizes the most common linkage statistics used, namely Haseman-Elston-based methods, variance components, and statistics that condition on trait values. Methods developed more recently that accommodate the X chromosome, parental imprinting, and allelic association in linkage analysis are also summarized. The type I error rate and power of these methods are discussed. Finally, rough guidelines are provided to guide the choice of linkage statistics.
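The original Haseman-Elston idea named above is simple enough to sketch: regress the squared trait difference of each sib pair on the pair's proportion of alleles shared identical by descent (IBD) at a marker; linkage predicts a negative slope. This plain least-squares sketch omits the test statistic and the later refinements the review covers.

```python
def haseman_elston_slope(ibd, sq_diff):
    """Least-squares slope of squared sib-pair trait differences on
    marker IBD sharing proportions: a significantly negative slope
    suggests linkage between marker and QTL (original Haseman-Elston)."""
    n = len(ibd)
    mx = sum(ibd) / n
    my = sum(sq_diff) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(ibd, sq_diff))
    sxx = sum((x - mx) ** 2 for x in ibd)
    return sxy / sxx
```

With toy data in which pairs sharing more alleles are more alike (smaller squared difference), the slope comes out negative, as linkage predicts.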

8.
9.
MOTIVATION: Phylogenetic profiling methods can achieve good accuracy in predicting protein-protein interactions, especially in prokaryotes. Recent studies have shown that the choice of reference taxa (RT) is critical for accurate prediction, but with more than 2500 fully sequenced taxa publicly available, identifying the most-informative RT is becoming increasingly difficult. Previous studies on the selection of RT have provided guidelines for manual taxon selection, and for eliminating closely related taxa. However, no general strategy for automatic selection of RT is currently available. RESULTS: We present three novel methods for automating the selection of RT, using machine learning based on known protein-protein interaction networks. One of these methods in particular, Tree-Based Search, yields greatly improved prediction accuracies. We further show that different methods for constituting phylogenetic profiles often require very different RT sets to support high prediction accuracy.
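Phylogenetic profiling in its simplest form scores two proteins as candidate interaction partners when their presence/absence patterns across the chosen reference taxa agree. A minimal Jaccard-similarity sketch over 0/1 profiles is below; the paper's contribution is learning which taxa to include, which this sketch does not attempt.

```python
def profile_similarity(profile_a, profile_b):
    """Jaccard similarity of two binary phylogenetic profiles
    (1 = protein present in that reference taxon, 0 = absent)."""
    both = sum(1 for a, b in zip(profile_a, profile_b) if a == 1 and b == 1)
    either = sum(1 for a, b in zip(profile_a, profile_b) if a == 1 or b == 1)
    return both / either if either else 0.0
```

The sensitivity of this score to which taxa appear in the profiles is precisely why reference-taxon selection matters so much for prediction accuracy.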

10.
11.
Phylogenetically enhanced statistical tools for RNA structure prediction
MOTIVATION: Methods that predict the structure of molecules by looking for statistical correlation have been quite effective. Unfortunately, these methods often disregard phylogenetic information in the sequences they analyze. Here, we present a number of statistics for RNA molecular-structure prediction. Besides common pair-wise comparisons, we consider a few reasonable statistics for base-triple predictions, and present an elaborate analysis of these methods. All these statistics incorporate phylogenetic relationships of the sequences in the analysis to varying degrees, and the different nature of these tests gives a wide choice of statistical tools for RNA structure prediction. RESULTS: Starting from statistics that incorporate phylogenetic information only as independent sequence evolution models for each position of a multiple alignment, and extending this idea to a joint evolution model of two positions, we enhance the usual purely statistical methods (e.g. methods based on the Mutual Information statistic) with the use of phylogenetic information available in the sequences. In particular, we present a joint model based on the HKY evolution model, and consequently a χ2 test of independence for two positions. A significant part of this work is devoted to some mathematical analysis of these methods. We tested these statistics on regions of 16S and 23S rRNA, and tRNA.
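The Mutual Information statistic that these phylogeny-aware tests improve upon can be computed directly from two alignment columns. This baseline sketch treats sequences as independent observations, which is exactly the simplification the paper's joint evolution model corrects.

```python
import math
from collections import Counter

def mutual_information(col_i, col_j):
    """Mutual information between two alignment columns (lists of bases,
    one entry per sequence), in natural-log units. High MI between two
    columns is the classical signal of a covarying base pair."""
    n = len(col_i)
    pi = Counter(col_i)                 # marginal base counts, column i
    pj = Counter(col_j)                 # marginal base counts, column j
    pij = Counter(zip(col_i, col_j))    # joint base-pair counts
    mi = 0.0
    for (a, b), c in pij.items():
        p_ab = c / n
        mi += p_ab * math.log(p_ab / ((pi[a] / n) * (pj[b] / n)))
    return mi
```

Two perfectly covarying columns (A·U in half the sequences, G·C in the other half) give MI = ln 2, while statistically independent columns give MI = 0; shared phylogeny can inflate the former without any structural constraint, motivating the paper's tests.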

12.
13.
14.
Estimating p-values in small microarray experiments
MOTIVATION: Microarray data typically have small numbers of observations per gene, which can result in low power for statistical tests. Test statistics that borrow information from data across all of the genes can improve power, but these statistics have non-standard distributions, and their significance must be assessed using permutation analysis. When sample sizes are small, the number of distinct permutations can be severely limited, and pooling the permutation-derived test statistics across all genes has been proposed. However, the null distribution of the test statistics under permutation is not the same for equally and differentially expressed genes. This can have a negative impact on both p-value estimation and the power of information-borrowing statistics. RESULTS: We investigate permutation-based methods for estimating p-values. One of these methods, which pools permutation statistics from a selected subset of the data, is shown to have the correct type I error rate and to provide accurate estimates of the false discovery rate (FDR). We provide guidelines for selecting an appropriate subset. We also demonstrate that information-borrowing statistics have substantially increased power compared to the t-test in small experiments.
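Once a pooled set of permutation-derived null statistics is in hand, the p-value estimate itself is a one-liner with the standard +1 correction. This sketch shows only that final step; the paper's contribution is how to choose the subset of genes whose null statistics are safe to pool.

```python
def permutation_p(observed, null_stats):
    """Permutation p-value with the +1 correction: the fraction of null
    (permutation-derived) statistics at least as extreme as the
    observed statistic, counting the observed value itself once.
    `null_stats` may be the pooled null statistics from many genes."""
    b = len(null_stats)
    exceed = sum(1 for s in null_stats if s >= observed)
    return (exceed + 1) / (b + 1)
```

Pooling enlarges `null_stats` far beyond the handful of distinct permutations a tiny experiment allows per gene, which is what makes small p-values estimable at all.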

15.
Rings, circles, and null-models for point pattern analysis in ecology
A large number of methods for the analysis of point pattern data have been developed in a wide range of scientific fields. First-order statistics describe large-scale variation in the intensity of points in a study region, whereas second-order characteristics are summary statistics of all point-to-point distances in a mapped area and offer the potential for detecting both different types and scales of patterns. Second-order analysis based on Ripley's K-function is increasingly used in ecology to characterize spatial patterns and to develop hypotheses on underlying processes; however, the full range of available methods has seldom been applied by ecologists. The aim of this paper is to provide guidance to ecologists with limited experience in second-order analysis, to help in the choice of appropriate methods, and to point to practical difficulties and pitfalls. We review (1) methods for analytical and numerical implementation of two complementary second-order statistics, Ripley's K and the O-ring statistic, (2) methods for edge correction, (3) methods to account for first-order effects (i.e. heterogeneity) of univariate patterns, and (4) a variety of useful standard and non-standard null models for univariate and bivariate patterns. For illustrative purposes, we analyze examples that deal with non-homogeneous univariate point patterns. We demonstrate that large-scale heterogeneity of a point pattern biases Ripley's K-function at smaller scales. This bias is difficult to detect without explicitly testing for homogeneity, but we show that it can be removed by applying methods that account for first-order effects. We synthesize our review in a number of step-by-step recommendations that guide the reader through the selection of appropriate methods, and we provide a software program that implements most of the methods reviewed and developed here.
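A naive estimator of Ripley's K helps make the paper's pitfalls concrete: K(r) scales the count of point pairs closer than r by the study area. This sketch deliberately omits edge correction, so it is biased near the plot boundary, one of the practical difficulties the review addresses.

```python
import math

def ripley_k(points, r, area):
    """Naive (uncorrected) estimator of Ripley's K-function:
    K(r) = (area / n^2) * number of ordered point pairs at distance <= r.
    No edge correction, so estimates near the boundary are biased."""
    n = len(points)
    pairs = 0
    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i != j and math.hypot(xi - xj, yi - yj) <= r:
                pairs += 1
    return area * pairs / (n * n)
```

For complete spatial randomness K(r) is expected to equal πr²; values above that indicate clustering at scale r, but, as the paper shows, large-scale intensity heterogeneity inflates K at small scales in the same direction.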

16.
The traditional variance components approach for quantitative trait locus (QTL) linkage analysis is sensitive to violations of normality and fails for selected sampling schemes. Recently, a number of new methods have been developed for QTL mapping in humans. Most of the new methods are based on score statistics or regression-based statistics and are expected to be relatively robust to non-normality of the trait distribution and also to selected sampling, at least in terms of type I error. Whereas the theoretical development of these statistics is more or less complete, some practical issues concerning their implementation still need to be addressed. Here we study some of these issues such as the choice of denominator variance estimates, weighting of pedigrees, effect of parameter misspecification, effect of non-normality of the trait distribution, and effect of incorporating dominance. We present a comprehensive discussion of the theoretical properties of various denominator variance estimates and of the weighting issue and then perform simulation studies for nuclear families to compare the methods in terms of power and robustness. Based on our analytical and simulation results, we provide general guidelines regarding the choice of appropriate QTL mapping statistics in practical situations.

17.
18.
19.
For pathway analysis of genomic data, the most common methods involve combining p-values from individual statistical tests. However, there are several multivariate statistical methods that can be used to test whether a pathway has changed. Because of the large number of variables and pathway sizes in genomics data, some of these statistics cannot be computed. However, in metabolomics data, the number of variables and pathway sizes are typically much smaller, making such computations feasible. Of particular interest is being able to detect changes in pathways that may not be detected for the individual variables. We compare the performance of both the p-value methods and multivariate statistics for self-contained tests with an extensive simulation study and a human metabolomics study. Permutation tests, rather than asymptotic results, are used to assess the statistical significance of the pathways. Furthermore, both one-sided and two-sided alternative hypotheses are examined. In the human metabolomics study, many pathways were statistically significant although the majority of their individual variables were not. Overall, the p-value methods perform at least as well as the multivariate statistics for these scenarios.
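Fisher's method is a classic example of the p-value-combination approach this study compares. A minimal sketch follows; note the study assesses significance by permutation rather than by the asymptotic chi-squared reference returned here, and the function name is an assumption.

```python
import math

def fisher_combined(p_values):
    """Fisher's method for combining k independent p-values:
    X = -2 * sum(ln p_i), which under the global null is chi-squared
    distributed with 2k degrees of freedom. Returns (statistic, df)."""
    stat = -2.0 * sum(math.log(p) for p in p_values)
    df = 2 * len(p_values)
    return stat, df
```

A pathway can reach significance through many individually modest p-values, which is how pathway-level tests detect changes that no single variable shows, the phenomenon reported in the metabolomics study above.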

20.

Background

Over recent years there has been a strong movement towards the improvement of vital statistics and other types of health data that inform evidence-based policies. Collecting such data is not cost-free. To date there is no systematic framework to guide investment decisions on methods of data collection for vital statistics or health information in general. We developed a framework to systematically assess the comparative costs and outcomes/benefits of the various data collection methods for vital statistics.

Methodology

The proposed framework is four-pronged and utilises two major economic approaches to systematically assess the available data collection methods: cost-effectiveness analysis and efficiency analysis. We built a stylised example of a hypothetical low-income country to perform a simulation exercise in order to illustrate an application of the framework.

Findings

Using simulated data, the results from the stylised example show that the rankings of the data collection methods are not affected by the use of either cost-effectiveness or efficiency analysis. However, the rankings are affected by how quantities are measured.

Conclusion

There have been several calls for global improvements in collecting useable data, including vital statistics, from health information systems to inform public health policies. Ours is the first study to propose a systematic framework to assist countries in undertaking an economic evaluation of data collection methods (DCMs). Despite numerous challenges, we demonstrate that a systematic assessment of the outputs and costs of DCMs is not only necessary, but also feasible. The proposed framework is general enough to be easily extended to other areas of health information.
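The cost-effectiveness arm of the framework reduces, at its simplest, to ranking candidate data collection methods by cost per unit of outcome. The sketch below illustrates only that ranking step; the method names and all figures are hypothetical, not taken from the study's stylised low-income-country example.

```python
def rank_by_cost_effectiveness(methods):
    """Rank data collection methods (DCMs) by average cost-effectiveness
    ratio (cost per unit of outcome, e.g. per vital event registered),
    cheapest per outcome unit first. `methods` maps a method name to a
    (total_cost, outcome_units) pair; all values here are illustrative."""
    ratios = {name: cost / outcome for name, (cost, outcome) in methods.items()}
    return sorted(ratios, key=ratios.get)

# Hypothetical inputs: three DCMs with made-up costs and outputs.
ranking = rank_by_cost_effectiveness({
    "civil_registration":  (900_000, 45_000),   # ratio 20
    "sample_registration": (120_000, 8_000),    # ratio 15
    "household_survey":    (300_000, 10_000),   # ratio 30
})
```

As the study's findings note, the ranking depends heavily on how outcome quantities are measured, so the denominator's definition matters as much as the arithmetic.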


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) · ICP license 京ICP备09084417号