Similar documents
20 similar documents found.
1.
Throwing is a complex motion that involves the entire body and often puts an inordinate amount of stress on the shoulder and the arm. Warm-up prepares the body for work and can enhance performance. Sling-based exercise (SE) has been theorized to activate muscles, particularly the stabilizers, in a manner beneficial for preactivity warm-up, yet this hypothesis has not been tested. Our purpose was to determine whether a warm-up using SE would increase throwing velocity and accuracy compared with a traditional Thrower's 10 warm-up program. Division I baseball players (nonpitchers) (16 men, age: 19.6 ± 1.3 years, height: 184.2 ± 6.2 cm, mass: 76.9 ± 19.2 kg) volunteered to participate in this crossover study. All subjects underwent both a warm-up routine using a traditional method (Thrower's 10 exercises) and a warm-up routine using closed kinetic chain SE methods (RedCord) on different days separated by 72 hours. Ball velocity and accuracy measures were obtained on 10 throws after each warm-up regimen. Velocity was recorded using a standard Juggs radar gun (JUGS; Tualatin, OR, USA). Accuracy was recorded using a custom accuracy target. An analysis of covariance was performed, with the number of throws recorded before testing as a covariate; significance was set a priori at p < 0.05. There were no statistical differences between the SE warm-up and the Thrower's 10 warm-up for throwing velocity (SE: 74.7 ± 7.5 mph, Thrower's 10: 74.6 ± 7.3 mph, p = 0.874) or accuracy (SE: 115.6 ± 53.7 cm, Thrower's 10: 91.8 ± 55 cm, p = 0.136). Warming up with SE produced throwing velocity and accuracy equivalent to the Thrower's 10 warm-up method. Thus, SE provides an alternative to traditional warm-up.
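The covariate-adjusted comparison described above can be sketched as a simple linear-model ANCOVA. The data below are simulated stand-ins (the abstract does not include raw measurements), and all variable names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # players in the crossover design (data below are simulated)

# Covariate: number of throws recorded before testing (hypothetical counts)
pre_throws = rng.integers(10, 40, size=n).astype(float)

# Simulated velocities (mph) under each warm-up; the true group effect is 0
velocity_se = 70 + 0.1 * pre_throws + rng.normal(0, 2, n)
velocity_t10 = 70 + 0.1 * pre_throws + rng.normal(0, 2, n)

# Long format: outcome, group indicator (1 = SE warm-up), covariate
y = np.concatenate([velocity_se, velocity_t10])
g = np.concatenate([np.ones(n), np.zeros(n)])
x = np.concatenate([pre_throws, pre_throws])

# ANCOVA as a linear model: y = b0 + b1*group + b2*covariate
X = np.column_stack([np.ones(2 * n), g, x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
adj_diff = beta[1]  # covariate-adjusted SE vs Thrower's 10 difference
print(round(adj_diff, 2))
```

The coefficient on the group indicator is the warm-up effect adjusted for pre-test throw count, which is the quantity the study's p-values refer to.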

2.
Summary The statistical properties of three molecular tree construction methods—the unweighted pair-group arithmetic average clustering (UPG), Farris, and modified Farris methods—are examined under the neutral mutation model of evolution. The methods are compared for accuracy in construction of the topology and estimation of the branch lengths, using statistics of these two aspects. The distribution of the statistic concerning topological construction is shown to be as important as its mean and variance for the comparison. Of the three methods, the UPG method constructs the tree topology with the least variation. The modified Farris method, however, gives the best performance when the two aspects are considered simultaneously. It is also shown that a topology based on two genes is much more accurate than one based on a single gene. There is a tendency to accept published molecular trees, but uncritical acceptance may lead one to spurious conclusions. It should always be kept in mind that a tree is a statistical result that is affected strongly by the stochastic error of nucleotide substitution and by the error intrinsic to the tree construction method itself.
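The UPG (UPGMA) clustering examined above can be sketched directly. This is a minimal illustration on a toy distance matrix, not the authors' simulation code:

```python
import numpy as np

def upgma(D, labels):
    """Minimal UPGMA: repeatedly merge the closest pair of clusters,
    averaging distances weighted by cluster sizes."""
    D = D.astype(float).copy()
    clusters = [(lab, 1) for lab in labels]  # (name, size)
    merges = []
    while len(clusters) > 1:
        n = len(clusters)
        # find the closest pair of clusters (i < j)
        i, j = min(((a, b) for a in range(n) for b in range(a + 1, n)),
                   key=lambda ab: D[ab[0], ab[1]])
        height = D[i, j] / 2  # ultrametric branch height
        (na, sa), (nb, sb) = clusters[i], clusters[j]
        merges.append((na, nb, height))
        # size-weighted average of distances to the new merged cluster
        row = (sa * D[i] + sb * D[j]) / (sa + sb)
        D = np.vstack([D, row])
        D = np.column_stack([D, np.append(row, 0.0)])
        keep = [k for k in range(n + 1) if k not in (i, j)]
        D = D[np.ix_(keep, keep)]
        clusters = [clusters[k] for k in keep[:-1]] + [(f"({na},{nb})", sa + sb)]
    return merges

# Toy pairwise distances for four sequences (hypothetical values)
D = np.array([[0, 2, 6, 6],
              [2, 0, 6, 6],
              [6, 6, 0, 4],
              [6, 6, 4, 0]], dtype=float)
result = upgma(D, ["A", "B", "C", "D"])
print(result)
```

Each merge records the pair joined and the branch height (half the cluster distance), which is the quantity whose stochastic variation the abstract analyzes.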

3.
An appropriate structural superposition identifies similarities and differences between homologous proteins that are not evident from sequence alignments alone. We have coupled our Gaussian-weighted RMSD (wRMSD) tool with a sequence aligner and seed extension (SE) algorithm to create a robust technique for overlaying structures and aligning sequences of homologous proteins (HwRMSD). HwRMSD overcomes errors in the initial sequence alignment that would normally propagate into a standard RMSD overlay. SE can generate a corrected sequence alignment from the improved structural superposition obtained by wRMSD. HwRMSD's robust performance and its superiority over standard RMSD are demonstrated over a range of homologous proteins. Its better overlay results in corrected sequence alignments with good agreement to HOMSTRAD. Finally, HwRMSD is compared to established structural alignment methods: FATCAT, secondary-structure matching, combinatorial extension, and DaliLite. Most methods are comparable at placing residue pairs within 2 Å, but HwRMSD places many more residue pairs within 1 Å, providing a clear advantage. Such high accuracy is essential in drug design, where small distances can have a large impact on computational predictions. This level of accuracy is also needed to correct sequence alignments in an automated fashion, especially for omics-scale analysis. HwRMSD can align homologs with low sequence identity and large conformational differences, cases where both sequence-based and structure-based methods may fail. The HwRMSD pipeline overcomes the dependency of structural overlays on initial sequence pairing and removes the need to determine the best sequence-alignment method, substitution matrix, and gap parameters for each unique pair of homologs. Proteins 2012. © 2012 Wiley Periodicals, Inc.

4.
Improved method for predicting beta-turn using support vector machine
MOTIVATION: Numerous methods for predicting beta-turns in proteins have been developed based on various computational schemes. Here, we introduce a new method of beta-turn prediction that uses the support vector machine (SVM) algorithm together with predicted secondary structure information. Various parameters of the SVM have been adjusted to achieve optimal prediction performance. RESULTS: The SVM method achieved excellent performance as measured by the Matthews correlation coefficient (MCC = 0.45) using 7-fold cross-validation on a database of 426 non-homologous protein chains. To the best of our knowledge, this MCC value is the highest yet achieved for beta-turn prediction. The overall prediction accuracy Qtotal was 77.3%, the best among existing prediction methods. Among its attractive features, the present SVM method avoids overtraining, compresses information, and provides a predicted reliability index.
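The Matthews correlation coefficient used as the headline metric above is easy to compute from confusion-matrix counts. The counts below are invented for illustration and are not from the 426-chain benchmark:

```python
def matthews_cc(tp, fp, tn, fn):
    """Matthews correlation coefficient from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return num / den if den else 0.0

# Toy counts for a turn / non-turn classifier (illustrative only)
print(round(matthews_cc(tp=120, fp=80, tn=700, fn=100), 3))
```

Unlike raw accuracy, MCC stays informative when the classes are imbalanced, as turn vs. non-turn residues are; a perfect classifier scores 1.0 and a random one scores near 0.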

5.
Exhaled air carries information on human health status. Ion mobility spectrometry combined with a multi-capillary column (MCC/IMS) is a well-known technology for detecting volatile organic compounds (VOCs) in human breath. The technique is relatively inexpensive, robust, and easy to use in everyday practice. However, its potential depends on the successful application of computational approaches for finding relevant VOCs and for classifying patients into disease-specific profile groups based on the detected VOCs. We developed an integrated state-of-the-art system using sophisticated statistical learning techniques for VOC-based feature selection and supervised classification into patient groups. We analyzed breath data from 84 volunteers, each suffering either from chronic obstructive pulmonary disease (COPD) or from both COPD and bronchial carcinoma (COPD + BC), as well as from 35 healthy volunteers comprising a control group (CG). We standardized and integrated several statistical learning methods to provide a broad overview of their potential for distinguishing the patient groups. We found strong potential for separating MCC/IMS chromatograms of healthy controls and COPD patients (best accuracy COPD vs CG: 94%). However, further examination of the impact of bronchial carcinoma on COPD/no-COPD classification performance is necessary (best accuracy CG vs COPD vs COPD + BC: 79%). We also extracted 20 high-scoring VOCs that allowed differentiating COPD patients from healthy controls. We conclude that these statistical learning methods have generally high accuracy when applied to well-structured medical MCC/IMS data.
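A minimal sketch of the pipeline described above — feature selection followed by supervised classification, with selection kept inside the cross-validation loop to avoid optimistic bias — on synthetic stand-in data (no MCC/IMS measurements are reproduced here, and the scoring rule and classifier are simplified choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "breath profiles": 40 samples x 50 VOC intensities;
# classes 0/1 differ only in the first 5 features (illustrative data)
n, p, informative = 40, 50, 5
y = np.repeat([0, 1], n // 2)
X = rng.normal(0, 1, (n, p))
X[y == 1, :informative] += 2.0

def select_top_k(Xtr, ytr, k):
    """Rank features by a two-sample t-like score and keep the top k."""
    m0, m1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    s = Xtr.std(0) + 1e-9
    return np.argsort(-np.abs(m1 - m0) / s)[:k]

def nearest_centroid(Xtr, ytr, xte):
    """Classify a test sample by the closer class centroid."""
    c0, c1 = Xtr[ytr == 0].mean(0), Xtr[ytr == 1].mean(0)
    return int(np.linalg.norm(xte - c1) < np.linalg.norm(xte - c0))

# Leave-one-out CV with feature selection *inside* the loop
hits = 0
for i in range(n):
    tr = np.arange(n) != i
    feats = select_top_k(X[tr], y[tr], k=5)
    hits += nearest_centroid(X[tr][:, feats], y[tr], X[i, feats]) == y[i]

accuracy = hits / n
print(round(accuracy, 2))
```

Selecting features on the full dataset before cross-validation would leak test information into the selection step; re-selecting within each fold, as here, is what makes the reported accuracies honest.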

6.
Among numerous artificial intelligence approaches, k-nearest neighbor algorithms, genetic algorithms, and artificial neural networks are considered the most common and effective methods for classification problems in numerous studies. In the present study, we present the results of implementing a novel hybrid feature selection-classification model that uses these methods. The purpose is to benefit from the synergies obtained by combining these technologies in the development of classification models. Such a combination creates an opportunity to exploit the strength of each algorithm and to compensate for their deficiencies. To develop the proposed model, with the aim of obtaining the best array of features, feature ranking techniques such as Fisher's discriminant ratio and class separability criteria were first used to prioritize features. Second, the resulting arrays of top-ranked features were used as the initial population of a genetic algorithm to produce optimum arrays of features. Third, using a modified k-nearest neighbor method as well as an improved method of backpropagation neural networks, the classification process was carried out on the optimum feature arrays selected by the genetic algorithm. The performance of the proposed model was compared with thirteen well-known classification models on seven datasets, and statistical analysis was performed using the Friedman test followed by post-hoc tests. The experimental findings indicated that the proposed hybrid model yielded significantly better classification performance than all 13 classification methods. Finally, the performance of the proposed model was benchmarked against the best results reported for state-of-the-art classifiers, in terms of classification accuracy, on the same datasets. This comprehensive comparison revealed that the classification accuracy of the proposed model is desirable, promising, and competitive with existing state-of-the-art classification models.
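Fisher's discriminant ratio, the first-stage ranking criterion mentioned above, can be sketched as follows. The data are synthetic and the function is a simplified two-class form:

```python
import numpy as np

def fisher_ratio(X, y):
    """Fisher's discriminant ratio per feature for a two-class problem:
    (mu0 - mu1)^2 / (var0 + var1)."""
    X0, X1 = X[y == 0], X[y == 1]
    return (X0.mean(0) - X1.mean(0)) ** 2 / (X0.var(0) + X1.var(0) + 1e-12)

rng = np.random.default_rng(0)
y = np.repeat([0, 1], 50)
X = rng.normal(0, 1, (100, 3))
X[y == 1, 0] += 3.0  # only feature 0 separates the classes

ranks = np.argsort(-fisher_ratio(X, y))
print(ranks[0])
```

Features with large between-class mean separation relative to within-class spread rank first; in the hybrid model these top-ranked arrays seed the genetic algorithm's initial population.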

7.
In this study, we address the problem of local quality assessment in homology models. As a prerequisite for evaluating methods that predict local model quality, we first examine the problem of measuring local structural similarities between a model and the corresponding native structure. Several local geometric similarity measures are evaluated, and two methods based on structural superposition are found to best reproduce local model quality assessments by human experts. We then examine the performance of state-of-the-art statistical potentials in predicting local model quality on three qualitatively distinct data sets. The best statistical potential, DFIRE, is shown to perform on par with the best current structure-based method in the literature, ProQres. A combination of different statistical potentials and structural features using support vector machines is shown to provide somewhat improved performance over published methods.

8.
Pyörälä S 《Theriogenology》1989,31(5):1067-1073
The relative accuracy of two pregnancy testing methods for swine was compared in a field study. The procedures used were manual palpation and amplitude-depth ultrasonic scanning. A total of 369 sows were examined by both methods; seven additional gilts were examined by ultrasound only and 46 sows by rectal palpation only. The numbers of correct positive and negative diagnoses made by each method were calculated, and accuracy was determined and the tests compared on this basis. The relative accuracy was 97.6% for the manual method and 96.8% for the ultrasound method. Both tests had a high sensitivity, 99.2% and 98.9%, respectively. The ability of the tests to detect non-pregnant animals was not as high, which is reflected in a lower specificity. No significant differences were noted between the two methods. Ultrasound scanning nevertheless gave a lower specificity and a lower negative predictive value than manual palpation. Both procedures were considered quick and convenient to perform. It was concluded that, in spite of the new pregnancy testing methods introduced in the swine industry, manual palpation remains the most practical in terms of its accuracy, ease, and minimal equipment requirements. In gilts, palpation is unsuitable, and ultrasonography currently remains the best choice for the diagnosis of pregnancy.
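The accuracy, sensitivity, specificity, and predictive values discussed above all derive from a 2x2 table of test outcome against true pregnancy status. A minimal sketch follows; the counts are illustrative, not the study's tables:

```python
def diagnostic_stats(tp, fp, tn, fn):
    """Accuracy, sensitivity, specificity, and predictive values
    from a 2x2 table of test result vs true status."""
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # detects pregnant animals
        "specificity": tn / (tn + fp),   # detects non-pregnant animals
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Illustrative counts only (369 sows, mostly pregnant)
stats = diagnostic_stats(tp=300, fp=10, tn=55, fn=4)
print({k: round(v, 3) for k, v in stats.items()})
```

With most animals pregnant, sensitivity can be high while specificity lags, which is exactly the pattern the study reports for both methods.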

9.
Microarrays are tools to study the expression profile of an entire genome. Technology, statistical tools, and biological knowledge have evolved over the past ten years, and it is now possible to improve the analysis of previous datasets. We have developed a web interface called PHOENIX that automates the analysis of microarray data from preprocessing to the evaluation of significance, through manual or automated parameterization. At each analytical step, several methods are available for (re)analysis of the data. PHOENIX evaluates a consensus score over several methods and thus determines the performance level of the best methods (even if the best-performing method is not known). With an estimate of the true gene list, PHOENIX can evaluate the performance of methods or compare the results with other experiments. Each method used for differential expression analysis and performance evaluation has been implemented in the PEGASE back-end package, along with additional tools to further improve PHOENIX. Future developments will involve the addition of steps (CDF selection, gene-set analysis, meta-analysis), methods (PLIER, ANOVA, Limma), benchmarks (spike-in and simulated datasets), and illustration of the results (automatically generated reports).

10.
ABSTRACT

We compared performance in deriving sleep variables by both the Fitbit Charge 2™, which couples body movement (accelerometry) and heart rate variability (HRV) with its proprietary interpretative algorithm (IA), and standard actigraphy (Motionlogger® Micro Watch Actigraph: MMWA), which relies solely on accelerometry in combination with its best-performing 'Sadeh' IA, against electroencephalography (EEG: Zmachine® Insight+ and its proprietary IA) used as the reference. We conducted home sleep studies on 35 healthy adults, 33 of whom provided complete datasets for the three simultaneously assessed technologies. Relative to the Zmachine EEG method, Fitbit showed an overall Kappa agreement of 54% in distinguishing wake/sleep epochs, with a sensitivity of 95% and a specificity of 57% in detecting sleep epochs. Fitbit, relative to EEG, underestimated sleep onset latency (SOL) by ~11 min and overestimated sleep efficiency (SE) by ~4%. There was no statistically significant difference between the Fitbit and EEG methods in measuring wake after sleep onset (WASO) and total sleep time (TST). Fitbit showed substantial agreement with EEG in detecting rapid eye movement and deep sleep, but only moderate agreement in detecting light sleep. The MMWA method showed 51% overall Kappa agreement with the EEG method in detecting wake/sleep epochs, with a sensitivity of 94% and a specificity of 53% in detecting sleep epochs. MMWA, relative to EEG, underestimated SOL by ~10 min. There was no significant difference between the Fitbit and MMWA methods in the amount of bias in estimating SOL, WASO, TST, and SE; however, the minimum detectable change (MDC) per sleep variable with Fitbit was better (smaller) than with MMWA by ~10 min, ~16 min, ~22 min, and ~8%, respectively. Overall, the performance of Fitbit accelerometry and HRV technology, in conjunction with its proprietary IA, in detecting sleep vs. wake episodes is slightly better than wrist actigraphy that relies solely on accelerometry and the best-performing Sadeh IA. Moreover, the smaller MDC of the Fitbit technology in deriving sleep parameters makes it a suitable option for assessing changes in sleep quality over time, longitudinally, and/or in response to interventions.
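The epoch-by-epoch Kappa agreement reported above is Cohen's kappa, observed agreement corrected for chance. A minimal sketch on toy sleep/wake scorings (not real device output):

```python
import numpy as np

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two scorers corrected for chance."""
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    pe = sum(np.mean(a == c) * np.mean(b == c)  # chance agreement
             for c in np.union1d(a, b))
    return (po - pe) / (1 - pe)

# Toy 30-s epoch scorings: 1 = sleep, 0 = wake (invented for illustration)
eeg = [1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
device = [1, 1, 1, 1, 0, 1, 0, 0, 1, 1]
print(round(cohens_kappa(eeg, device), 3))
```

Because most epochs in a night are sleep, raw percent agreement is inflated by chance; kappa is the standard correction, which is why the 54% and 51% figures are lower than the sensitivity values.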

11.
Zhao H  Yang Y  Zhou Y 《Nucleic acids research》2011,39(8):3017-3025
Mechanistic understanding of many key cellular processes often involves identifying RNA-binding proteins (RBPs) and RNA-binding sites in two separate steps. Here, they are predicted simultaneously by structural alignment to known protein-RNA complex structures, followed by binding assessment with a DFIRE-based statistical energy function. This method achieves 98% accuracy and 91% precision for predicting RBPs, and 93% accuracy and 78% precision for predicting RNA-binding amino-acid residues, on a large benchmark of 212 RNA-binding and 6761 non-RNA-binding domains (leave-one-out cross-validation). Additional tests revealed that the method makes no false positive predictions on 311 DNA-binding domains but correctly detects six domains that bind both DNA and RNA. In addition, it correctly identified 31 of 75 unbound RNA-binding domains, with 92% accuracy and 65% precision for predicted binding residues, and achieved an 86% success rate in its application to the SCOP (Structural Classification of Proteins) RNA-binding domain superfamily. It further predicted 25 of 2076 structural genomics targets as RBPs: 20 of the 25 (80%) are putatively RNA binding. The superior performance over existing methods indicates the importance of dividing structures into domains, of using a Z-score to measure relative structural similarity, and of using a statistical energy function to measure protein-RNA binding affinity.

12.
Question: How does one best choose native vegetation types and site them in the reclamation of disturbed sites ranging from cropland to strip mines? Application: Worldwide, demonstrated in SE Montana. Methods: We assumed that pre-disturbance native communities are the best targets for revegetation and that the environmental facet each occupies naturally provides its optimal habitat. Given this assumption, we used pre-strip-mine data (800 points from an 88 km² site) to demonstrate statistical methods for identifying native communities, describing them, and determining their environments. Results and conclusions: Classification and pruning analysis provided an objective method for choosing the number of target community types to be used in reclamation. The composition of eight target types, identified with these analyses, was described with a relevé table to provide a species list and target cover levels and to support the choice of species to be seeded. As a basis for siting communities, we modeled community presence as a function of topography, slope/aspect, and substrate. Logistic GLMs identified the optimal environment for each community. Classification and Regression Tree (CART) analysis identified the most probable community in each environmental facet. Topography and slope were generally the best predictors in these models. Because our analyses relate native vegetation to undisturbed environments, our results may apply best to sites with minimal substrate disturbance (i.e., better to abandoned cropland than to strip-mined sites).

13.
Pattern recognition has been employed in a myriad of industrial, commercial, and academic applications, and many techniques have been devised to tackle such a diversity of applications. Despite the long tradition of pattern recognition research, no single technique yields the best classification in all scenarios; therefore, as many techniques as possible should be considered in high-accuracy applications. Typical related works either focus on the performance of a given algorithm or compare various classification methods. On many occasions, however, researchers who are not experts in machine learning must deal with practical classification tasks without in-depth knowledge of the underlying parameters. The adequate choice of classifiers and parameters in such practical circumstances constitutes a long-standing problem and is one of the subjects of the current paper. We carried out a performance study of nine well-known classifiers implemented in the Weka framework and compared the influence of parameter configurations on accuracy. The default parameter configuration in Weka was found to provide near-optimal performance for most cases, excluding methods such as the support vector machine (SVM). In addition, the k-nearest neighbor method frequently achieved the best accuracy. In certain conditions, it was possible to improve the quality of SVM by more than 20% with respect to its default parameter configuration.
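The effect of parameter tuning that the study quantifies can be sketched with a simple grid search. Here a hand-rolled k-nearest neighbor classifier (not Weka) is tuned over k on synthetic data:

```python
import numpy as np

def knn_predict(Xtr, ytr, x, k):
    """Majority vote among the k nearest training points (Euclidean)."""
    idx = np.argsort(np.linalg.norm(Xtr - x, axis=1))[:k]
    return int(np.bincount(ytr[idx]).argmax())

def loo_accuracy(X, y, k):
    """Leave-one-out accuracy of kNN for a given k."""
    n = len(y)
    hits = sum(
        knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], k) == y[i]
        for i in range(n)
    )
    return hits / n

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 30)
X = rng.normal(0, 1.0, (60, 2))
X[y == 1] += 1.5  # two overlapping Gaussian classes

# Grid search over k, as a stand-in for tuning a classifier's parameters
scores = {k: loo_accuracy(X, y, k) for k in (1, 3, 5, 7, 9)}
best_k = max(scores, key=scores.get)
print(best_k, round(scores[best_k], 2))
```

Comparing the best score against the k=1 default mirrors the paper's question of how much accuracy a non-expert leaves on the table by not tuning.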

14.
《IRBM》2022,43(5):434-446
Objective: The initial principal task of Brain-Computer Interfacing (BCI) research is to extract the best feature set from a raw EEG (electroencephalogram) signal so that it can be used to classify two or more different events. The main goal of this paper is to develop a comparative analysis of different feature extraction techniques and classification algorithms. Materials and methods: In the present investigation, four different methodologies were adopted to classify recorded motor imagery (MI) EEG signals, and their comparative study is reported. Haar Wavelet Energy (HWE), band power, cross-correlation, and Spectral Entropy (SE)-based cross-correlation feature extraction techniques were considered to obtain the necessary feature sets from the raw EEG signals. Four machine learning algorithms, viz. LDA (Linear Discriminant Analysis), QDA (Quadratic Discriminant Analysis), Naïve Bayes, and Decision Tree, were used to classify the features. Results: The best average classification accuracies with the four methods were 92.50%, 93.12%, 72.26%, and 98.71%. These results were compared with several recent existing methods. Conclusion: The comparative results indicate a significant performance improvement of the proposed methods with respect to existing ones. Hence, the work presented here can guide the selection of the best feature extraction method and classifier algorithm for MI-based EEG signals.
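Two of the feature extraction techniques named above, band power and spectral entropy, can be sketched in a few lines. The signal is a synthetic 10 Hz sinusoid and the sampling rate is an assumption, not the paper's recording parameters:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power of x within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def spectral_entropy(x):
    """Shannon entropy (bits) of the normalized power spectrum of x."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

fs = 250                             # Hz; an assumed sampling rate
t = np.arange(0, 2, 1 / fs)
alpha = np.sin(2 * np.pi * 10 * t)   # synthetic 10 Hz rhythm, 20 full cycles

# A pure rhythm concentrates power: high in-band power, low entropy
print(band_power(alpha, fs, 8, 12) > band_power(alpha, fs, 20, 30))
```

A concentrated spectrum yields high band power and low spectral entropy, while broadband noise yields the opposite; vectors of such values per epoch are what the LDA/QDA/Naïve Bayes/Decision Tree classifiers consume.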

15.
Haury AC  Gestraud P  Vert JP 《PloS one》2011,6(12):e28210
Biomarker discovery from high-dimensional data is a crucial problem with enormous applications in biology and medicine. It is also extremely challenging from a statistical viewpoint, but surprisingly few studies have investigated the relative strengths and weaknesses of the plethora of existing feature selection methods. In this study we compare 32 feature selection methods on 4 public gene expression datasets for breast cancer prognosis, in terms of the predictive performance, stability, and functional interpretability of the signatures they produce. We observe that the feature selection method has a significant influence on the accuracy, stability, and interpretability of signatures. Surprisingly, complex wrapper and embedded methods generally do not outperform simple univariate feature selection methods, and ensemble feature selection has generally no positive effect. Overall, a simple Student's t-test seems to provide the best results.
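A univariate t-statistic ranking of the kind the study finds hard to beat, together with a simple stability check (overlap of signatures selected on two random half-samples), can be sketched as follows on simulated expression data; the t-like score below is a simplified Welch form:

```python
import numpy as np

def t_scores(X, y):
    """Two-sample t-like score per gene (Welch form)."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b))
    return (a.mean(0) - b.mean(0)) / (se + 1e-12)

rng = np.random.default_rng(3)
n, p = 100, 200
y = np.repeat([0, 1], n // 2)
X = rng.normal(0, 1, (n, p))
X[y == 1, :10] += 1.0  # 10 simulated "prognostic genes"

def top10(idx):
    """Top-10 signature selected on the subsample given by idx."""
    return set(np.argsort(-np.abs(t_scores(X[idx], y[idx])))[:10])

# Stability: overlap of signatures from two disjoint random half-samples
perm = rng.permutation(n)
overlap = len(top10(perm[: n // 2]) & top10(perm[n // 2:])) / 10
print(overlap)
```

Even a good selector rarely yields identical signatures on resampled data, which is why the study evaluates stability alongside predictive performance.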

16.
Comparative accuracy of methods for protein sequence similarity search
MOTIVATION: Searching a protein sequence database for homologs is a powerful tool for discovering the structure and function of a sequence. Two new methods for searching sequence databases have recently been described: Probabilistic Smith-Waterman (PSW), which is based on hidden Markov models for a single sequence using a standard scoring matrix, and a new version of BLAST (WU-BLAST2), which uses Sum statistics for gapped alignments. RESULTS: This paper compares and contrasts the effectiveness of these methods with three older methods (Smith-Waterman: SSEARCH, FASTA, and BLASTP). The analysis indicates that the new methods are useful and often offer improved accuracy. These tools are compared using a curated (by Bill Pearson) version of the annotated portion of PIR 39. Three different statistical criteria are utilized: equivalence number, minimum errors, and the receiver operating characteristic. For complete-length protein query sequences from large families, PSW's accuracy is superior to that of the other methods, but its accuracy is poor when used with partial-length query sequences. False negatives are twice as common as false positives irrespective of the search method if a family-specific threshold score that minimizes the total number of errors (i.e., the most favorable threshold score possible) is used. Thus, sensitivity, not selectivity, is the major problem. Among the analyzed methods using default parameters, the best accuracy was obtained from SSEARCH and PSW for complete-length proteins, and from the two BLAST programs plus SSEARCH for partial-length proteins.

17.
A comparison of four methods for the determination of total proteins is presented from the following points of view: sensitivity; specificity; and the amount of work, chemicals, time, and equipment needed to perform the determination. The following tests were examined: Tombs' (absorbance at 210 nm); Waddell's (difference in absorbance between 215 and 225 nm); Warburg's (absorbance at 280 nm); and Lowry's (absorbance at 500 nm after reaction with phenol reagent). The authors recommend Tombs' method, for its outstanding sensitivity, specificity, and simplicity, as the best of the four.

18.
Chen GB  Xu Y  Xu HM  Li MD  Zhu J  Lou XY 《PloS one》2011,6(2):e16981
Detection of interacting risk factors for complex traits is challenging. The choice of an appropriate method, sample size, and allocation of cases and controls are serious concerns. To provide empirical guidelines for planning such studies and data analyses, we investigated the performance of the multifactor dimensionality reduction (MDR) and generalized MDR (GMDR) methods under various experimental scenarios. We developed the mathematical expectation of accuracy and used it as an indicator parameter to perform a gene-gene interaction study. We then examined the statistical power of GMDR and MDR within the plausible range of accuracy (0.50~0.65) reported in the literature. The GMDR with covariate adjustment had a power of >80% in a case-control design with a sample size of ≥2000, with theoretical accuracy ranging from 0.56 to 0.62. However, when the accuracy was <0.56, a sample size of ≥4000 was required to have sufficient power. In our simulations, the GMDR outperformed the MDR under all models with accuracy ranging from 0.56 to 0.62 for a sample size of 1000-2000. However, the two methods performed similarly when the accuracy was outside this range or the sample was significantly larger. We conclude that, with adjustment for a covariate, GMDR performs better than MDR, and a sample size of 1000~2000 is reasonably large for detecting gene-gene interactions in the range of effect sizes reported in the current literature, whereas a larger sample size is required for more subtle interactions with accuracy <0.56.

19.
Selective breeding is a common and effective approach for the genetic improvement of aquaculture stocks, with parental selection as the key factor. Genomic selection (GS) has been proposed as a promising tool to facilitate selective breeding. Here, we evaluated the predictability of four GS methods in the Zhikong scallop (Chlamys farreri) through real-dataset analyses of four economic traits (shell length, shell height, shell width, and whole weight). Our analysis revealed that the GS models varied in prediction accuracy depending on genetic and statistical factors, but non-parametric methods, including reproducing kernel Hilbert spaces regression (RKHS) and sparse neural networks (SNN), generally outperformed parametric linear methods such as genomic best linear unbiased prediction (GBLUP) and BayesB. Furthermore, we demonstrated that predictability relied mainly on heritability regardless of the GS method. The size of the training population and the marker density also had considerable effects on predictive performance. In practice, increasing the training population size improved genomic prediction more than raising the marker density. This study is the first to apply non-linear models and neural networks for GS in scallop and should help develop strategies for aquaculture breeding programs.
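GBLUP, the parametric baseline named above, can be sketched via its equivalent SNP-BLUP ridge-regression form. Everything below is simulated, and the shrinkage parameter is known only because the variance components are chosen by the simulation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 300, 500                               # animals x SNP markers (simulated)
M = rng.integers(0, 3, (n, m)).astype(float)  # genotypes coded 0/1/2
u = rng.normal(0, 0.05, m)                    # true marker effects
y = M @ u + rng.normal(0, 0.3, n)             # phenotype, e.g. shell length

# SNP-BLUP (equivalent to GBLUP): u_hat = (M'M + lambda*I)^{-1} M'y,
# with lambda = sigma_e^2 / sigma_u^2 (known here from the simulation)
lam = 0.3 ** 2 / 0.05 ** 2
train, test = np.arange(250), np.arange(250, n)
Mt, yt = M[train], y[train]
u_hat = np.linalg.solve(Mt.T @ Mt + lam * np.eye(m), Mt.T @ yt)

# Predictive ability: correlation of predicted breeding values (GEBV)
# with phenotypes in the held-out animals
gebv = M[test] @ u_hat
r = float(np.corrcoef(gebv, y[test])[0, 1])
print(round(r, 2))
```

The correlation between GEBV and held-out phenotype is the predictability criterion the study uses, and it grows with heritability and training population size, consistent with the conclusions above.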
