Similar Articles
20 similar articles found (search time: 31 ms)
1.
The technology supporting the analysis of human motion has advanced dramatically. Past decades of locomotion research have provided us with significant knowledge about the accuracy of tests performed, the understanding of the process of human locomotion, and how clinical testing can be used to evaluate medical disorders and affect their treatment. Gait analysis is now recognized as clinically useful and financially reimbursable for some medical conditions. Yet, the routine clinical use of gait analysis has seen very limited growth. The issue of its clinical value is related to many factors, including the applicability of existing technology to addressing clinical problems; the limited use of such tests to address a wide variety of medical disorders; the manner in which gait laboratories are organized, tests are performed, and reports generated; and the clinical understanding and expectations of laboratory results. Clinical use is most hampered by the length of time and costs required for performing a study and interpreting it. A “gait” report is lengthy, its data are not well understood, and it includes a clinical interpretation, all of which do not occur with other clinical tests. Current biotechnology research is seeking to address these problems by creating techniques to capture data rapidly, accurately, and efficiently, and to interpret such data by an assortment of modeling, statistical, wave interpretation, and artificial intelligence methodologies. The success of such efforts rests on both our technical abilities and communication between engineers and clinicians.

2.
Sorensen D. Genetica (2009) 136(2): 319-332
A remarkable research impetus has taken place in statistical genetics since the last World Conference. This has been stimulated by breakthroughs in molecular genetics, automated data-recording devices and computer-intensive statistical methods. The latter were revolutionized by the bootstrap and by Markov chain Monte Carlo (McMC). In this overview a number of specific areas are chosen to illustrate the enormous flexibility that McMC has provided for fitting models and exploring features of data that were previously inaccessible. The selected areas are inferences of the trajectories over time of genetic means and variances, models for the analysis of categorical and count data, the statistical genetics of a model postulating that environmental variance is partly under genetic control, and a short discussion of models that incorporate massive genetic marker information. We provide an overview of the application of McMC to study model fit, and finally, a discussion is presented on the development of efficient McMC updating schemes for non-standard models.
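The flexibility McMC provides can be illustrated with a minimal random-walk Metropolis sampler. The sketch below is an illustration in Python, not the author's implementation; the function name and the standard-normal target are chosen for the example only:

```python
import math
import random

def metropolis(log_target, x0, n_samples, step=1.0, burn_in=1000, seed=42):
    """Random-walk Metropolis: draws from a density known only up to a constant."""
    rng = random.Random(seed)
    x, logp = x0, log_target(x0)
    samples = []
    for i in range(n_samples + burn_in):
        proposal = x + rng.gauss(0.0, step)
        logp_new = log_target(proposal)
        # Accept with probability min(1, target(proposal) / target(x))
        if math.log(rng.random()) < logp_new - logp:
            x, logp = proposal, logp_new
        if i >= burn_in:
            samples.append(x)
    return samples

# Standard normal target: log-density is -x^2/2 up to an additive constant
draws = metropolis(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
posterior_mean = sum(draws) / len(draws)
```

The same accept/reject kernel applies unchanged to non-standard models; only `log_target` changes, which is the source of the flexibility the abstract describes.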

3.
Summary The interpretation of animal carcinogenicity tests traditionally relies almost exclusively upon a comparison of specific tumor rates in treated vs. matched and, perhaps, historical control animals. Yet, carcinogenicity tests yield much more biological and pathological data than simply final tumor rates. These additional data should also be considered as part of the total weight of evidence, particularly when analyzing a marginal or equivocal test result. If there are no positive findings among the data discussed here and listed in Table 1, it is unlikely that a marginal or equivocal increase in tumor incidence is actually treatment-related, irrespective of statistical analysis.

4.
A methodology was developed for fully automated measurements of nuclear features in Feulgen-stained tissue sections by means of videomicroscopy and image analysis. Segmentation is performed within one minute on 512 × 512 optical density (OD) images covering about 75 nuclei, resulting in a graphic contour overlay. The corresponding image subset is scanned by an object data extraction program, producing the raw figures for statistical interpretation. The segmentation software was evaluated by three tests, involving comparison with manual delineation and assessment of the influence of OD. Two case studies (ACTH-stimulated adrenal cortex and pancreatic carcinoma) illustrate the biologic accuracy and medical significance of the described methodology.
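Automated segmentation of this kind must first separate nuclei from background in the OD image. A standard way to pick that cut-off automatically is Otsu's threshold; the sketch below illustrates the general idea on a synthetic histogram and is not the paper's actual algorithm:

```python
def otsu_threshold(pixels, n_bins=256):
    """Otsu's method: pick the grey level that maximises between-class variance,
    a common first step for separating nuclei from background."""
    hist = [0] * n_bins
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_bg = sum_bg = 0
    best_t, best_var = 0, -1.0
    for t in range(n_bins):
        w_bg += hist[t]          # pixels at or below t fall in the "background" class
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        m_bg = sum_bg / w_bg
        m_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (m_bg - m_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Synthetic bimodal "image": dark background near 40, bright nuclei near 200
pixels = [40] * 500 + [45] * 300 + [195] * 100 + [205] * 100
t = otsu_threshold(pixels)
```

Any grey level between the two modes then yields the contour overlay's binary mask; real pipelines add smoothing and object filtering on top.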

5.
Alzheimer's disease is a progressive neurodegenerative disorder that involves multiple molecular mechanisms. Intense research in recent years has accumulated a large body of data, and the search for sensitive and specific biomarkers has undergone a rapid evolution. However, the diagnosis remains problematic, and current tests do not accurately detect the process leading to neurodegeneration. Biomarker discovery and validation are considered the key aspects to support clinical diagnosis and to provide discriminatory power between different stages of the disorder. A considerable challenge is to integrate different types of data from new, potent approaches to reach a common interpretation and to replicate the findings across studies and populations. Furthermore, long-term clinical follow-up and combined analysis of several biomarkers are among the most promising perspectives to diagnose and manage the disease. The present review focuses on recently published data, providing an updated overview of the main achievements in the genetic and biochemical research on Alzheimer's disease. We also discuss the latest and most significant results that will help to define a specific disease signature whose validity might be clinically relevant for future AD diagnosis.

6.

Background  

Modern biology has shifted from "one gene" approaches to methods for genomic-scale analysis like microarray technology, which allow simultaneous measurement of thousands of genes. This has created a need for tools facilitating interpretation of biological data in "batch" mode. However, such tools often leave the investigator with large volumes of apparently unorganized information. To meet this interpretation challenge, gene-set (or cluster) testing has become a popular analytical tool. Many gene-set testing methods and software packages are now available, most of which use a variety of statistical tests to assess the genes in a set for biological information. However, the field is still evolving, and there is a great need for "integrated" solutions.
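In their simplest form, most gene-set tests reduce to an over-representation question: how surprising is the overlap between the significant genes and the set? A minimal sketch using the hypergeometric upper tail (an illustration, not any specific package's method; the gene counts are invented):

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k): the chance of seeing at
    least k set members among n significant genes, when K of the N genes in the
    universe belong to the set."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Universe of 1000 genes, 50 in the set; 30 of 100 significant genes hit the set
# (expected overlap under the null is only 100 * 50 / 1000 = 5).
p = hypergeom_enrichment_p(N=1000, K=50, n=100, k=30)
```

Running one such test per gene set in "batch" mode is exactly what creates the multiple-testing burden the later entries in this list address.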

7.
Keith P. Lewis. Oikos (2004) 104(2): 305-315
Ecologists rely heavily upon statistics to make inferences concerning ecological phenomena and to make management recommendations. It is therefore important to use the statistical tests that are most appropriate for a given data-set. However, inappropriate statistical tests are often used in the analysis of studies with categorical data (i.e. count data or binary data). Since many types of statistical tests have been used in artificial nest studies, a review and comparison of these tests provides an opportunity to demonstrate the importance of choosing the most appropriate statistical approach, for conceptual reasons as well as for type I and type II errors.
Artificial nests have routinely been used to study the influences of habitat fragmentation and habitat edges on nest predation. I review the variety of statistical tests used to analyze artificial nest data within the framework of the generalized linear model and argue that logistic regression is the most appropriate and flexible statistical test for analyzing binary data-sets. Using artificial nest data from my own studies and an independent data set from the medical literature as examples, I tested equivalent data using a variety of statistical methods. I then compared the p-values and the statistical power of these tests. Results vary greatly among statistical methods. Methods inappropriate for analyzing binary data often fail to yield significant results even when differences between study groups appear large, while logistic regression finds these differences statistically significant. Statistical power is 2–3 times higher for logistic regression than for the other tests. I recommend that logistic regression be used to analyze artificial nest data and other data-sets with binary data.
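The recommended analysis can be sketched as a single-predictor logistic regression fitted by Newton-Raphson. The nest-predation counts below are invented for illustration, not taken from the paper's data:

```python
import math

def logistic_fit(x, y, n_iter=25):
    """Fit logit P(y=1) = b0 + b1*x by Newton-Raphson (binary y, one predictor)."""
    b0, b1 = 0.0, 0.0
    for _ in range(n_iter):
        # Gradient and Hessian of the log-likelihood
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical two-habitat design: 10/40 nests depredated at interior sites (x=0),
# 25/40 at edge sites (x=1); y is the binary depredation outcome.
x = [0] * 40 + [1] * 40
y = [1] * 10 + [0] * 30 + [1] * 25 + [0] * 15
b0, b1 = logistic_fit(x, y)
odds_ratio = math.exp(b1)
```

For a two-group design the fitted model is saturated, so `odds_ratio` recovers the empirical odds ratio exactly: (25/15) / (10/30) = 5.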

8.
The evaluation of data obtained during behaviour tests regularly leads to the problem of multiple correlations, very often with non-linear dependencies on the target variable. All mathematical and statistical procedures used so far are based on assuming an equation for the desired correlation, for which parameters and related statistical equivalents are eventually determined. The MODAK system applied here (MODAK = modelling algorithms for the calculation of multi-dimensional non-linear mathematical models) breaks a complex correlation down into individual dependencies by mathematical and statistical means, independently selects a suitable equation for each of them, and determines the corresponding parameters. The numerical example evaluates data from behaviour tests on rats. First results on the correlations among various behaviour tests indicate both the possibility of selecting suitable, mutually independent tests and a better interpretation of the observed patterns of behaviour that takes the interrelations between the tests into account. In addition, MODAK applies generally to all cases that call for the reduction and analysis of data arising in process and system analysis and in the evaluation of test results requiring statistical modelling. So far, MODAK applications range from the engineering sciences to medicine.

9.
The paper presents effective and mathematically exact procedures for the selection of variables that are applicable in cases of very high dimension, as arise, for example, in gene expression analysis. Choosing sets of variables is an important way to increase the power of the statistical conclusions and to facilitate the biological interpretation. For the construction of sets, each single variable is considered as the centre of potential sets of variables. Testing for significance is carried out by means of the Westfall-Young principle based on resampling, or by the parametric method of spherical tests. The particular requirements for statistical stability are taken into account, and every kind of overfitting is avoided. Thus, high power is attained and the familywise type I error rate can be controlled in spite of the large dimension. To obtain graphical representations by heat maps and curves, a specific data compression technique is applied. Gene expression data from B-cell lymphoma patients serve to demonstrate the procedures.
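The Westfall-Young idea can be sketched as a single-step max-statistic permutation scheme: each variable's adjusted p-value compares its observed statistic against the permutation distribution of the maximum over all variables, which is what controls the familywise error rate. This simplified illustration uses absolute mean differences rather than t-statistics, synthetic data, and hypothetical names; it is not the paper's procedure:

```python
import random

def westfall_young_maxt(data, labels, n_perm=2000, seed=1):
    """Single-step Westfall-Young adjustment (sketch): for each variable, the
    adjusted p-value is the fraction of label permutations in which the maximum
    statistic over *all* variables exceeds that variable's observed statistic."""
    def abs_mean_diffs(labs):
        a = [i for i, l in enumerate(labs) if l == 0]
        b = [i for i, l in enumerate(labs) if l == 1]
        return [abs(sum(row[i] for i in a) / len(a) -
                    sum(row[i] for i in b) / len(b)) for row in data]

    rng = random.Random(seed)
    observed = abs_mean_diffs(labels)
    exceed = [0] * len(data)
    labs = list(labels)
    for _ in range(n_perm):
        rng.shuffle(labs)                   # permute group labels
        max_t = max(abs_mean_diffs(labs))   # max statistic over all variables
        for j, t in enumerate(observed):
            if max_t >= t:
                exceed[j] += 1
    return [(c + 1) / (n_perm + 1) for c in exceed]

# Three "genes" over 10 samples: one strong signal, two flat (synthetic data)
data = [
    [0, 0, 0, 0, 0, 10, 10, 10, 10, 10],   # differential
    [1, 2, 1, 2, 1, 2, 1, 2, 1, 2],        # noise
    [5, 5, 6, 6, 5, 5, 6, 6, 5, 5],        # noise
]
labels = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
adj_p = westfall_young_maxt(data, labels)
```

Because the permutations preserve the correlation structure among variables, this adjustment is less conservative than Bonferroni for dependent data.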

10.
The diagnostic interpretation of medical images is a complex task aimed at detecting potential abnormalities. One of the most used features in this process is texture, a key component in the human understanding of images. Many studies have been conducted to develop algorithms for texture quantification. The relevance of fractal geometry to medical image analysis is justified by the proven self-similarity of anatomical objects when imaged at finite resolution. In recent years, fractal geometry has been applied extensively in many medical signal analysis applications. Its use relies heavily on estimation of fractal features, and various methods have been proposed to estimate the fractal dimension or multifractal spectrum of a signal. This article presents an overview of these algorithms: how they work, their benefits and limits, and their application in the field of medical signal analysis.

11.
Survival analysis has established itself as a major statistical technique in medical research. Applications in hospital epidemiology, however, are only beginning to emerge. One reason for this delay is that usually complete follow-up of patients in hospital is feasible. This overview discusses where survival techniques provide additional insight into hospital epidemiology, and where they are, in fact, needed even in the absence of right-censoring.
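Where right-censoring does occur, the basic survival tool is the Kaplan-Meier estimator, which multiplies conditional survival probabilities at each event time. A small sketch with invented follow-up times (not data from this overview):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival curve: returns (time, S(t)) at each event time.
    events[i] is 1 for an observed event, 0 for a right-censored observation."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            n += 1
            d += data[i][1]
            i += 1
        if d:                                      # events drop the curve
            s *= 1.0 - d / n_at_risk
            curve.append((t, s))
        n_at_risk -= n                             # censored cases just leave the risk set
    return curve

# Five patients: events on days 2, 5 and 8; censored on days 3 and 9.
curve = kaplan_meier([2, 3, 5, 8, 9], [1, 0, 1, 1, 0])
```

With complete follow-up (no zeros in `events`) the curve reduces to one minus the empirical distribution function, which is why survival machinery is optional in that setting.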

12.
The analysis of microarray data often involves performing a large number of statistical tests, usually at least one test per queried gene. Each test has a certain probability of reaching an incorrect inference; therefore, it is crucial to estimate or control error rates that measure the occurrence of erroneous conclusions in reporting and interpreting the results of a microarray study. In recent years, many innovative statistical methods have been developed to estimate or control various error rates for microarray studies, and researchers need guidance in choosing the appropriate statistical methods for analysing these types of data sets. This review describes a family of methods that use a set of P-values to estimate or control the false discovery rate and similar error rates. Finally, these methods are classified in a manner that suggests the appropriate method for specific applications, and diagnostic procedures that can identify problems in the analysis are described.
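The canonical member of this P-value-based family is the Benjamini-Hochberg step-up procedure for controlling the false discovery rate. A sketch with made-up p-values (an illustration of the standard procedure, not code from the review):

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns the indices of tests
    rejected while controlling the false discovery rate at level alpha."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])   # ascending p-values
    k_max = 0
    for rank, i in enumerate(order, start=1):
        # Find the largest rank k with p_(k) <= k * alpha / m ...
        if pvalues[i] <= rank * alpha / m:
            k_max = rank
    # ... and reject the k_max smallest p-values.
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
rejected = benjamini_hochberg(pvals)
```

Note the step-up character: a p-value can be rejected even if it fails its own threshold, provided some larger p-value passes its.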

13.
14.
Background: High resolution melting (HRM) is an emerging method for interrogating and characterizing DNA samples. An important aspect of this technology is data analysis. Traditional HRM curves can be difficult to interpret, and the method has been criticized for lack of statistical interrogation and arbitrary interpretation of results. Methods: Here we report the basic principles and first applications of a new statistical approach to HRM analysis addressing these concerns. Our method allows automated genotyping of unknown samples, coupled with formal statistical information on the likelihood that an unknown sample is of a known genotype (by discriminant analysis, or “supervised learning”). It can also determine the assortment of alleles present (by cluster analysis, or “unsupervised learning”) without a priori knowledge of the genotypes present. Conclusion: The new algorithms provide highly sensitive and specific auto-calling of genotypes from HRM data in both supervised and unsupervised analysis modes. The method is based on pure statistical interrogation of the data set with a high degree of standardization. The hypothesis-free unsupervised mode offers various possibilities for de novo HRM applications such as mutation discovery.
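The unsupervised mode can be pictured as clustering samples by a melting-curve summary such as melting temperature. The two-genotype 1-D 2-means toy below illustrates that idea only; it is not the authors' algorithm, and the temperatures are invented:

```python
def cluster_melting_temps(tms, n_iter=20):
    """Unsupervised genotype grouping (sketch): 1-D 2-means clustering of
    melting temperatures; returns one cluster label per sample."""
    c = [min(tms), max(tms)]                 # initialise centres at the extremes
    labels = [0] * len(tms)
    for _ in range(n_iter):
        # Assign each sample to the nearer centre, then recompute the centres
        labels = [0 if abs(t - c[0]) <= abs(t - c[1]) else 1 for t in tms]
        for j in (0, 1):
            members = [t for t, l in zip(tms, labels) if l == j]
            if members:
                c[j] = sum(members) / len(members)
    return labels

# Two genotypes with melting temperatures near 79 °C and 81.5 °C (synthetic)
tms = [78.9, 79.1, 79.0, 81.4, 81.6, 81.5]
labels = cluster_melting_temps(tms)
```

The supervised mode differs only in that the cluster centres come from labelled reference genotypes, allowing a formal statement of how likely an unknown sample is to belong to each.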

15.
SwS: a solvation web service for nucleic acids (cited by 2: 0 self-citations, 2 by others)
SwS, based on a statistical analysis of crystallographic structures deposited in the NDB, is designed to provide an exhaustive overview of the solvation of nucleic acid structural elements through the generation of 3D solvent density maps. A first version (v1.0) of this web service focuses on the interaction of DNA, RNA and hybrid base pairs linked by two or three hydrogen bonds with water, cations and/or anions. Data provided by SwS are updated on a weekly basis and can be used by: (i) those involved in molecular dynamics simulation studies, for validation purposes; (ii) crystallographers, for help in the interpretation of solvent density maps; and all those involved in (iii) drug design and, more generally, in (iv) nucleic acid structural studies. SwS also provides statistical data on the frequency of occurrence of different types of base pairs in crystallographic structures and on the conformation of the nucleotides involved. This web service has been designed to allow maximum flexibility in terms of queries and has also been developed with didactic considerations in mind. AVAILABILITY: http://www-ibmc.u-strasbg.fr/arn/sws.html

16.
This is the second article in a series intended as a tutorial, providing the interested reader with an overview of the concepts not covered in part I, such as: the principles of ion-activation methods, the ability of mass-spectrometric methods to interface with various proteomic strategies, analysis techniques, bioinformatics, and data interpretation and annotation. Although these are different topics, it is important that a reader has a basic and collective understanding of all of them for an overall appreciation of how to carry out and analyze a proteomic experiment. Different ion-activation methods for MS/MS, such as collision-induced dissociation (including postsource decay) and surface-induced dissociation, electron capture and electron-transfer dissociation, and infrared multiphoton and blackbody infrared radiative dissociation, are discussed because they are used in proteomic research. The high dimensionality of data generated from proteomic studies requires an understanding of the underlying analytical procedures used to obtain these data, as well as the development of improved bioinformatics tools and data-mining approaches for efficient and accurate statistical analyses of biological samples from healthy and diseased individuals, in addition to determining the utility of the interpreted data. Currently available strategies for the analysis of the proteome by mass spectrometry, such as those employed for the analysis of substantially purified proteins and complex peptide mixtures, as well as hypothesis-driven strategies, are elaborated upon. Processing steps prior to the analysis of mass spectrometry data, statistics, and the several informatics steps currently used in the analysis of shotgun proteomic experiments, as well as proteomics ontology, are also discussed.

17.
The phylogenetic mixed model is an application of the quantitative-genetic mixed model to interspecific data. Although this statistical framework provides a potentially unifying approach to quantitative-genetic and phylogenetic analysis, the model has been applied infrequently because of technical difficulties with parameter estimation. We recommend a reparameterization of the model that eliminates some of these difficulties, and we develop a new estimation algorithm for both the original maximum likelihood and new restricted maximum likelihood estimators. The phylogenetic mixed model is particularly rich in terms of the evolutionary insight that might be drawn from model parameters, so we also illustrate and discuss the interpretation of the model parameters in a specific comparative analysis.

18.
This paper describes a general database management package designed for medical applications. CHRONOS is a user-oriented system intended to let physicians obtain periodic reports and researchers prepare statistical treatments. The basic principles of the database and program organization are described: many possibilities are offered for data acquisition, and specific efforts have been made to allow easy analysis of the evolution of patients. Several medical applications are now operational with CHRONOS in fields as different as psychiatry and nephrology.

19.
Background: Although a substantial number of studies focus on the teaching and application of medical statistics in China, few studies comprehensively evaluate the recognition of and demand for medical statistics. In addition, the results of these various studies differ and are insufficiently comprehensive and systematic. Objectives: This investigation aimed to evaluate the general cognition of and demand for medical statistics by undergraduates, graduates, and medical staff in China. Methods: We performed a comprehensive database search related to the cognition of and demand for medical statistics from January 2007 to July 2014 and conducted a meta-analysis of non-controlled studies with sub-group analysis for undergraduates, graduates, and medical staff. Results: There are substantial differences with respect to the cognition of theory in medical statistics among undergraduates (73.5%), graduates (60.7%), and medical staff (39.6%). The demand for theory in medical statistics is high among graduates (94.6%), undergraduates (86.1%), and medical staff (88.3%). Regarding specific statistical methods, the cognition of basic statistical methods is higher than that of advanced statistical methods. The demand for certain advanced statistical methods, including (but not limited to) multiple analysis of variance (ANOVA), multiple linear regression, and logistic regression, is higher than that for basic statistical methods. The use rates of the Statistical Package for the Social Sciences (SPSS) software and statistical analysis software (SAS) are only 55% and 15%, respectively. Conclusion: The overall statistical competence of undergraduates, graduates, and medical staff is insufficient, and their ability to practically apply their statistical knowledge is limited, which constitutes an unsatisfactory state of affairs for medical statistics education. Because the demand for skills in this area is increasing, the need to reform medical statistics education in China has become urgent.
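Pooled percentages of the kind reported above come from a meta-analysis of proportions; the simplest pooling scheme is fixed-effect inverse-variance weighting. The sketch below uses hypothetical study counts, not the paper's data, and omits the heterogeneity handling a real meta-analysis would need:

```python
import math

def pool_proportions(counts):
    """Fixed-effect inverse-variance pooling of proportions (sketch).
    counts is a list of (events, sample_size) pairs from individual studies;
    proportions must be strictly between 0 and 1 for the variance to exist."""
    num = den = 0.0
    for x, n in counts:
        p = x / n
        var = p * (1 - p) / n        # binomial variance of the proportion
        w = 1.0 / var                # weight = inverse variance
        num += w * p
        den += w
    pooled = num / den
    se = math.sqrt(1.0 / den)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Three hypothetical surveys of statistics-theory awareness among graduates
pooled, ci = pool_proportions([(55, 90), (120, 200), (61, 100)])
```

When the sub-group estimates differ as sharply as in the abstract (for example, 73.5% versus 39.6%), a random-effects model would normally replace this fixed-effect sketch.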

20.
Uniformly repeated DNA sequences in genomes, known as tandem repeats, are one of the most interesting features of many organisms analyzed so far. Among tandem repeats, microsatellites have attracted many researchers because of their association with several human diseases. The discovery of tandem repeats in expressed sequence tags (ESTs) or in cDNA libraries has contributed new ideas and tools for evolutionary studies. With the advent of new biotechnological tools, the number of ESTs deposited in databases is rapidly increasing. Therefore, new informative bioinformatics tools are needed to assist the analysis and interpretation of these tandem repeats in ESTs and in other types of DNA. In the present study we report two new utility tools: Organism Miner and Keyword Finder. The Organism Miner utility collects, sorts, and splices DNA data files and provides a statistical overview of them. Keyword Finder analyses all the sequences in the input folder, extracts and collects keywords for each specific organism or for all organisms with DNA sequences, and generates a statistical overview. We are currently generating cotton and pepper cDNA libraries and often use GenBank DNA sequences; therefore, in this study we used cDNAs and ESTs of cotton and pepper to demonstrate the use of these two tools. With the help of these two utilities we observed that most ESTs are useful for downstream applications such as mining microsatellites specific to an organ, tissue or developmental stage. The analyses of ESTs indicated not only that tandem repeats exist in ESTs but also that tandem repeats are differentially represented in different organ- or tissue-specific ESTs within and between the species. The utilities and the sample data sets are self-extracting files and are freely available from or can be obtained upon request from the corresponding author.
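Mining microsatellites from ESTs amounts to scanning for maximal runs of short repeated units, which a backreference regex expresses compactly. The function below is a hypothetical sketch of that one step, not Organism Miner itself, and the toy EST is invented:

```python
import re

def find_microsatellites(seq, min_unit=2, max_unit=6, min_repeats=4):
    """Report tandem repeats as (start, unit, copies) tuples: each maximal,
    non-overlapping run of a short unit repeated at least min_repeats times."""
    hits = []
    for size in range(min_unit, max_unit + 1):
        # ([ACGT]{size}) captures the unit; \1{min_repeats-1,} greedily matches
        # the remaining copies, so each match is a maximal run.
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (size, min_repeats - 1))
        for m in pattern.finditer(seq):
            hits.append((m.start(), m.group(1), len(m.group(0)) // size))
    return hits

# A toy EST with a (CA)6 dinucleotide and an (AAG)5 trinucleotide repeat
est = "GCGC" + "CA" * 6 + "TTAGGC" + "AAG" * 5 + "T"
hits = find_microsatellites(est, min_unit=2, max_unit=3)
```

Tagging each hit with the EST's library of origin is then all that is needed to compare repeat content across organ- or tissue-specific libraries, as the abstract describes.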


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司) | 京ICP备09084417号