Similar Literature
 20 similar records retrieved (search time: 17 ms)
1.

Background

In the past, prosthetic sockets were fabricated by a traditional handmade process that was time-consuming and required a specialized assembly approach. Recently, reverse engineering (RE) and rapid prototyping (RP) technologies have developed rapidly and now offer an alternative way to fabricate prosthetic sockets.

Methods

Using 3D computer-aided design and manufacturing (CAD/CAE) tools, the surface shape of the stump is digitized so that the data can be easily modified and reused. Gait parameters of the prosthetic socket and the interface stress between stump and socket were investigated under different processing conditions. In addition, a questionnaire was used to survey the satisfaction rating scale (comfort level) of subjects using this kind of artificial limb.

Results

The main outcomes of the current research, including gait parameters, interface stress, and satisfaction rating scale, provide an informative reference for further studies on the design, manufacture, and clinical application of prosthetic sockets.

Conclusions

This study found that, regardless of the fabrication method, most stress was concentrated in the pressure-relief area at the tibia end, causing discomfort in that area for the participant wearing the prosthesis. This discomfort was most evident when the prosthetic socket was fabricated using RE and RP.

2.

Background

One of the recent challenges of computational biology is development of new algorithms, tools and software to facilitate predictive modeling of big data generated by high-throughput technologies in biomedical research.

Results

To meet these demands we developed PROPER - a package for visual evaluation of ranking classifiers for biological big data mining studies in the MATLAB environment.

Conclusion

PROPER is an efficient tool for optimization and comparison of ranking classifiers, providing over 20 different two- and three-dimensional performance curves.
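PROPER's core task, evaluating a ranking classifier, can be illustrated with the most familiar of these performance curves, the ROC curve. The sketch below is in Python rather than PROPER's MATLAB, with made-up scores, and assumes untied scores; it is not PROPER's own code.

```python
def roc_points(scores, labels):
    """ROC curve points (FPR, TPR) for a ranking classifier,
    obtained by sweeping the decision threshold from high to low scores."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for _, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under a curve given as (x, y) points."""
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))

# Made-up classifier scores for four samples (label 1 = positive class)
pts = roc_points([0.9, 0.8, 0.7, 0.6], [0, 1, 0, 1])
print(auc(pts))  # 0.25: this ranking is worse than random
```

Two- and three-dimensional variants of such curves (precision-recall, cost curves, and so on) are built from the same threshold sweep.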

3.

Background

New data mining algorithms are continually being proposed as related disciplines develop, and these algorithms differ in applicable scope and performance. Hence, finding a suitable algorithm for a given dataset is an important concern for biomedical researchers who want to solve practical problems promptly.

Methods

In this paper, seven widely used algorithms, namely C4.5, support vector machine, AdaBoost, k-nearest neighbor, naïve Bayes, random forest, and logistic regression, were selected as the research objects. The seven algorithms were applied to 12 frequently downloaded UCI public datasets for classification tasks, and their performances were compared through induction and analysis. To characterize the 12 datasets, we calculated the sample size, number of attributes, number of missing values, sample size of each class, correlation coefficients between variables, class entropy of the task variable, and the ratio of the sample size of the largest class to that of the smallest class.
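Two of these dataset descriptors, the class entropy of the task variable and the largest-to-smallest class size ratio, are simple to compute. A minimal plain-Python sketch (the paper does not publish code, so the function names here are illustrative):

```python
from collections import Counter
import math

def class_entropy(labels):
    """Shannon entropy (bits) of the class label distribution;
    higher values mean the classes are more evenly represented."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def imbalance_ratio(labels):
    """Ratio of the largest class size to the smallest class size;
    1.0 means a perfectly balanced dataset."""
    counts = Counter(labels).values()
    return max(counts) / min(counts)

# A toy task variable with a 50/30/20 class split
labels = ["a"] * 50 + ["b"] * 30 + ["c"] * 20
print(round(class_entropy(labels), 3))  # entropy of the 50/30/20 split
print(imbalance_ratio(labels))          # 50 / 20 = 2.5
```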

Results

The two ensemble algorithms reach high classification accuracy on most datasets. Moreover, random forest performs better than AdaBoost on unbalanced multi-class datasets. Simple algorithms, such as naïve Bayes and logistic regression, are suitable for small datasets with high correlation between the task variable and the other, non-task attribute variables. The k-nearest neighbor and C4.5 decision tree algorithms perform well on both binary- and multi-class datasets. Support vector machine is better suited to balanced small datasets with binary-class tasks.

Conclusions

No algorithm maintains the best performance across all datasets. The applicability of the seven data mining algorithms to datasets with different characteristics was summarized to provide a reference for biomedical researchers and beginners in different fields.

4.

Introduction

Collecting feces is easy and gives direct access to endogenous and microbial metabolites.

Objectives

Given the lack of consensus on fecal sample preparation, especially in animal species, we developed a robust protocol for untargeted LC-HRMS fingerprinting.

Methods

The conditions of extraction (quantity, preparation, solvents, dilutions) were investigated in bovine feces.

Results

A rapid and simple protocol was developed, involving feces extraction with methanol (1/3, m/v) followed by centrifugation and a filtration step (10 kDa cut-off).

Conclusion

The workflow generated repeatable and informative fingerprints for robust metabolome characterization.

5.

Introduction

Swine dysentery caused by Brachyspira hyodysenteriae is a production limiting disease in pig farming. Currently antimicrobial therapy is the only treatment and control method available.

Objective

The aim of this study was to characterize the metabolic response of porcine colon explants to infection by B. hyodysenteriae.

Methods

Porcine colon explants exposed to B. hyodysenteriae were analyzed for histopathological, metabolic and pro-inflammatory gene expression changes.

Results

Significant epithelial necrosis, increased levels of l-citrulline and IL-1α were observed on explants infected with B. hyodysenteriae.

Conclusions

The spirochete induces necrosis in vitro likely through an inflammatory process mediated by IL-1α and NO.

6.

Introduction

Data sharing is being increasingly required by journals and has been heralded as a solution to the ‘replication crisis’.

Objectives

(i) Review the data sharing policies of the journals publishing the most metabolomics papers associated with open data, and (ii) compare these policies with those of the journals that publish the most metabolomics papers overall.

Methods

A PubMed search was used to identify metabolomics papers. Metabolomics data repositories were manually searched for linked publications.

Results

Journals that support data sharing are not necessarily those with the most papers associated with open metabolomics data.

Conclusion

Further efforts are required to improve data sharing in metabolomics.

7.

Background

When designing studies, researchers commonly propose to measure several independent variables in a regression model, a subset of which are identified as the main variables of interest while the rest are retained in the model as covariates or confounders. Power for linear regression in this setting can be calculated using SAS PROC POWER, but no comparable tool exists for estimating power for logistic regression models in the same setting.

Methods

Currently, a commonly used approach calculates power for only one variable of interest in the presence of other covariates for logistic regression, and it works well for this special case. In this paper we propose three related algorithms, along with corresponding SAS macros, that extend power estimation to one or more primary variables of interest in the presence of additional confounders.

Results

The three proposed empirical algorithms employ the likelihood ratio test to provide the user with a power estimate for a given sample size, a quick sample size estimate for a given power, or an approximate power curve for a range of sample sizes. The user can specify odds ratios for a combination of binary, uniform, and standard normal independent variables of interest and/or remaining covariates/confounders in the model, along with a correlation between variables.
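The simulation idea behind such power estimates can be sketched outside SAS. The Python sketch below (illustrative only, not the authors' macros) estimates power for one binary variable of interest, adjusted for one standard-normal covariate kept in both models, by repeatedly fitting full and null logistic regressions and applying the 1-df likelihood ratio test; the intercept and covariate effect are arbitrary example values.

```python
import numpy as np

def logistic_loglik(X, y, iters=25):
    """Fit a logistic regression by Newton's method and return the
    maximized log-likelihood (a small ridge keeps the Hessian invertible)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (y - p)
        hess = (X * (p * (1 - p))[:, None]).T @ X
        beta += np.linalg.solve(hess + 1e-9 * np.eye(X.shape[1]), grad)
    p = np.clip(1.0 / (1.0 + np.exp(-X @ beta)), 1e-12, 1 - 1e-12)
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def lrt_power(n, beta_x, beta_z=0.5, n_sims=200, seed=7):
    """Monte Carlo power of the 1-df likelihood ratio test for a binary
    predictor x, with covariate z retained in both full and null models."""
    rng = np.random.default_rng(seed)
    crit = 3.841  # chi-square(1) critical value at alpha = 0.05
    rejections = 0
    for _ in range(n_sims):
        x = rng.integers(0, 2, n).astype(float)  # binary predictor of interest
        z = rng.standard_normal(n)               # confounder
        prob = 1.0 / (1.0 + np.exp(-(-0.5 + beta_x * x + beta_z * z)))
        y = (rng.random(n) < prob).astype(float)
        full = np.column_stack([np.ones(n), x, z])
        null = np.column_stack([np.ones(n), z])
        lr = 2 * (logistic_loglik(full, y) - logistic_loglik(null, y))
        rejections += lr > crit
    return rejections / n_sims

print(lrt_power(n=200, beta_x=np.log(2)))  # power to detect an odds ratio of 2
print(lrt_power(n=200, beta_x=0.0))        # type-I error, should be near 0.05
```

Repeating `lrt_power` over a grid of `n` values yields the approximate power curve the abstract describes.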

Conclusions

These user-friendly algorithms and macro tools are a promising solution that fills the void in power estimation for logistic regression when multiple independent variables are of interest in the presence of additional covariates in the model.

8.

Background

High-throughput technologies, such as DNA microarray, have significantly advanced biological and biomedical research by enabling researchers to carry out genome-wide screens. One critical task in analyzing genome-wide datasets is to control the false discovery rate (FDR) so that the proportion of false positives among the features called significant is restrained. Recently, a number of FDR control methods have been proposed and widely practiced, such as the Benjamini-Hochberg approach, the Storey approach, and Significance Analysis of Microarrays (SAM).
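The Benjamini-Hochberg step-up procedure mentioned above is compact enough to state in code. A plain-Python sketch of textbook BH (not the miFDR method this paper proposes), with made-up p-values:

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject all hypotheses whose
    sorted rank k satisfies p_(k) <= (k / m) * alpha, for the largest such k.
    Returns a boolean list aligned with the input p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # find the largest rank whose p-value clears its step-up threshold
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))  # only the two smallest p-values are rejected
```

miFDR differs in that it fixes the number of called features and minimizes the FDR, rather than fixing the FDR threshold as BH does.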

Methods

This paper presents a straightforward yet powerful FDR control method termed miFDR, which aims to minimize the FDR when calling a fixed number of significant features. We theoretically proved that the strategy used by miFDR finds the optimal number of significant features when the desired FDR is fixed.

Results

We compared miFDR with the BH approach, the Storey approach and SAM on both simulated datasets and public DNA microarray datasets. The results demonstrated that miFDR outperforms others by identifying more significant features under the same FDR cut-offs. Literature search showed that many genes called only by miFDR are indeed relevant to the underlying biology of interest.

Conclusions

FDR control has been widely applied to the analysis of high-throughput datasets and has enabled rapid discoveries. Under the same FDR threshold, miFDR is capable of identifying more significant features than its competitors at a comparable level of complexity. Therefore, it can potentially have a great impact on biological and biomedical research.

Availability

miFDR is available from the authors on request.

9.

Background

Maximum parsimony phylogenetic tree reconciliation is an important technique for reconstructing the evolutionary histories of hosts and parasites, genes and species, and other interdependent pairs. Since the problem of finding temporally feasible maximum parsimony reconciliations is NP-complete, current methods use either exact algorithms with exponential worst-case running time or heuristics that do not guarantee optimal solutions.

Results

We offer an efficient new approach that begins with a potentially infeasible maximum parsimony reconciliation and iteratively “repairs” it until it becomes temporally feasible.

Conclusions

In a non-trivial number of cases, this approach finds solutions that are better than those found by the widely-used Jane heuristic.

10.

Background

In recent years, the visualization of biomagnetic measurement data by so-called pseudo current density maps, or Hosaka-Cohen (HC) transformations, has become popular.

Methods

The physical basis of these intuitive maps is clarified by means of analytically solvable problems.
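The HC transformation is commonly written as the rotated gradient of the measured normal field component, c = (∂Bz/∂y, −∂Bz/∂x), so the arrows of the map point along the underlying currents. A finite-difference sketch on a regular grid (Python/NumPy; the grid values are illustrative, not a solved example from the paper):

```python
import numpy as np

def pseudo_current_density(Bz, dx=1.0, dy=1.0):
    """Hosaka-Cohen pseudo current density from a measured Bz(x, y) grid
    (rows indexed by y, columns by x):
        c_x =  dBz/dy,   c_y = -dBz/dx
    computed with central finite differences."""
    dBz_dy, dBz_dx = np.gradient(Bz, dy, dx)  # gradients along y, then x
    return dBz_dy, -dBz_dx

# A field growing linearly in x should give a uniform c_y = -slope, c_x = 0
y, x = np.mgrid[0:5, 0:5]
cx, cy = pseudo_current_density(2.0 * x.astype(float))
print(cx[2, 2], cy[2, 2])  # 0.0 -2.0
```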

Results

Examples in magnetocardiography, magnetoencephalography and magnetoneurography demonstrate the usefulness of this method.

Conclusion

Hardware realizations of the HC transformation and some similar transformations are discussed, which could advantageously support cross-platform comparability of biomagnetic measurements.

11.

Background

Dramatic progress has recently been made in cryo-electron microscopy technologies, which now make possible the reconstruction of a growing number of biomolecular structures to near-atomic resolution. However, the need persists for fitting and refinement approaches that address those cases that require modeling assistance.

Methods

In this paper, we describe algorithms to optimize the performance of such medium-resolution refinement methods. These algorithms aim to automatically optimize the parameters that define the density shape of the flexibly fitted model, as well as the time-dependent damper cutoff distance. Atomic distance constraints can be prescribed for cases where extra containment of parts of the structure is helpful, such as in regions where the density map is poorly defined. Also, we propose a simple stopping criterion that estimates the probable onset of overfitting during the simulation.

Results

The new set of algorithms produces more accurate fitting and refinement results and yields a faster rate of convergence of the trajectory toward the fitted conformation. The latter is also more reliable thanks to the overfitting warning provided to the user.

Conclusions

The algorithms described here were implemented in the new Damped-Dynamics Flexible Fitting simulation tool “DDforge” in the Situs package.

12.
13.

Introduction

Untargeted metabolomics is a powerful tool for biological discoveries. To analyze the complex raw data, significant advances in computational approaches have been made, yet it is not clear how exhaustive and reliable the data analysis results are.

Objectives

Assessment of the quality of raw data processing in untargeted metabolomics.

Methods

Five published untargeted metabolomics studies were reanalyzed.

Results

Omissions of at least 50 relevant compounds from the original results, as well as examples of representative mistakes, were reported for each study.

Conclusion

Incomplete raw data processing leaves unexplored potential in current and legacy data.

14.

Background

DNA sequence can be viewed as an unknown language with words as its functional units. Given that most sequence alignment algorithms such as the motif discovery algorithms depend on the quality of background information about sequences, it is necessary to develop an ab initio algorithm for extracting the “words” based only on the DNA sequences.

Methods

We considered non-uniform distribution and integrity to be two important features of a word, and on this basis developed an ab initio algorithm to extract "DNA words" with potential functional meaning. A Kolmogorov-Smirnov test was used to compare the positional distribution of candidate words against the uniform distribution, and integrity was judged by sequence and position alignment. Two random base sequences were adopted as negative controls, and an English book was used as a positive control to verify our algorithm. We applied the algorithm to the genomes of Saccharomyces cerevisiae and 10 strains of Escherichia coli to show the utility of the method.
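The uniformity check at the heart of the method can be illustrated with the one-sample KS statistic, comparing a word's occurrence positions with the uniform distribution. A plain-Python sketch with made-up positions (the paper's actual test thresholds are not reproduced here):

```python
def ks_uniform_stat(positions, seq_length):
    """One-sample Kolmogorov-Smirnov statistic D comparing word occurrence
    positions against Uniform(0, seq_length): the maximum gap between the
    empirical CDF of the positions and the uniform CDF."""
    xs = sorted(p / seq_length for p in positions)
    n = len(xs)
    d = 0.0
    for i, x in enumerate(xs):
        # compare the uniform CDF x against the empirical CDF on both sides
        d = max(d, (i + 1) / n - x, x - i / n)
    return d

# A word spread evenly along the sequence vs. one clustered near the start
even = ks_uniform_stat([10, 20, 30, 40, 50, 60, 70, 80, 90, 100], 100)
clustered = ks_uniform_stat([2, 4, 6, 8, 10], 100)
print(even, clustered)  # the clustered word deviates far more from uniform
```

A large D for a candidate word signals the non-uniform positional distribution the authors treat as evidence of function.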

Results

The results provide strong evidence that the algorithm is a promising tool for building a DNA dictionary ab initio.

Conclusions

Our method provides a fast way to screen important DNA elements at large scale and offers potential insights into the understanding of a genome.

15.

Introduction

Intrahepatic cholestasis of pregnancy (ICP) is a common maternal liver disease whose development can result in devastating consequences, including sudden fetal death and stillbirth. Currently, ICP is recognized only after the onset of clinical symptoms.

Objective

To investigate the maternal hair metabolome for predictive biomarkers of ICP.

Methods

The maternal hair metabolome (gestational age of sampling between 17 and 41 weeks) of 38 Chinese women with ICP and 46 pregnant controls was analysed using gas chromatography–mass spectrometry.

Results

Of 105 metabolites detected in hair, none were significantly associated with ICP.

Conclusion

Hair samples represent accumulative environmental exposure over time. Samples collected at the onset of ICP did not reveal any metabolic shifts, suggesting rapid development of the disease.

16.

Introduction

Quantification of tetrahydrofolates (THFs), important metabolites in the Wood–Ljungdahl pathway (WLP) of acetogens, is challenging given their sensitivity to oxygen.

Objective

To develop a simple anaerobic protocol enabling reliable THF quantification from bioreactors.

Methods

Anaerobic cultures were mixed with anaerobic acetonitrile for extraction. Targeted LC–MS/MS was used for quantification.

Results

Tetrahydrofolates can only be quantified if sampled anaerobically. THF levels showed a strong correlation to acetyl-CoA, the end product of the WLP.

Conclusion

Our method is useful for relative quantification of THFs across different growth conditions. Absolute quantification of THFs requires the use of labelled standards.

17.

Background

Adverse events from Melody valve implantation may be catastrophic. To date, a role for three-dimensional rotational angiography of the aortic root (3DRAA) during Melody valve implantation has not been established.

Objectives

To describe the role of 3DRAA in the assessment of Melody valve candidacy and to demonstrate that it may improve outcomes.

Methods

All patients who underwent cardiac catheterisation for Melody valve implantation and 3DRAA between August 2013 and February 2015 were reviewed.

Results

31 patients had 3DRAA with balloon sizing. Ten were deemed not to be Melody candidates (5 had coronary compression, 2 had aortic root distortion with cusp flattening, 2 had an RVOT that was too large, and 1 had complex branch stenosis and a short landing zone). Of the 21 patients who were Melody candidates, 12 had conduits, 6 had prosthetic valves, and 3 had native RVOTs. In patients with conduits, the conduit was stented prior to dilation after measuring the distance between the conduit and the coronary arteries on 3DRAA. In the Melody patients, we had 100% procedural success and no serious adverse events (coronary compression, tears, stent fracture, or endocarditis).

Conclusion

As a tool for case selection, 3DRAA may facilitate higher procedural success and decreased risk of serious adverse events. Furthermore, 3D rotational angiography allows stenting of the conduit prior to dilation, which may prevent tears and possibly endocarditis.

18.

Introduction

Data processing is one of the biggest problems in metabolomics, given the high number of samples analyzed and the need for multiple software packages at each step of the processing workflow.

Objectives

To merge the steps required for metabolomics data processing into a single platform.

Methods

KniMet is a workflow for the processing of mass spectrometry metabolomics data based on the KNIME Analytics Platform.

Results

The approach includes key steps to follow in metabolomics data processing: feature filtering, missing value imputation, normalization, batch correction and annotation.
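The first three of these steps can be illustrated on a toy feature matrix. The pure-Python sketch below is illustrative only, not KniMet itself (which is a KNIME workflow): it filters features with too many missing values, imputes the rest with half the feature minimum, and total-sum normalizes each sample; the specific filter threshold and imputation rule are assumptions.

```python
def process(matrix, max_missing_frac=0.5):
    """Toy metabolomics matrix processing. Rows are samples, columns are
    features, None marks a missing value. Steps: feature filtering,
    half-minimum imputation, total-sum normalization per sample."""
    n_samples = len(matrix)
    n_feats = len(matrix[0])
    # 1. feature filtering: drop features missing in too many samples
    keep = [j for j in range(n_feats)
            if sum(row[j] is None for row in matrix) / n_samples
               <= max_missing_frac]
    filtered = [[row[j] for j in keep] for row in matrix]
    # 2. imputation: replace missing values with half the feature minimum
    for j in range(len(keep)):
        observed = [row[j] for row in filtered if row[j] is not None]
        fill = min(observed) / 2
        for row in filtered:
            if row[j] is None:
                row[j] = fill
    # 3. total-sum normalization: each sample's features sum to 1
    return [[v / sum(row) for v in row] for row in filtered]

matrix = [[10.0, None, 30.0],
          [20.0, None, 40.0],
          [None,  5.0, 50.0]]
print(process(matrix))  # feature 2 is dropped (missing in 2 of 3 samples)
```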

Conclusion

KniMet provides the user with a local, modular and customizable workflow for the processing of both GC–MS and LC–MS open profiling data.

19.

Introduction

It is difficult to elucidate the metabolic and regulatory factors causing lipidome perturbations.

Objectives

This work simplifies this process.

Methods

A method has been developed to query an online holistic lipid metabolic network (of 7923 metabolites) to extract the pathways that connect the input list of lipids.
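Extracting the pathway connecting two input lipids is, at its core, a shortest-path query on the metabolic network. A toy Python sketch using breadth-first search (the lipid names and edges below are invented for illustration; the actual network holds 7923 metabolites):

```python
from collections import deque

def connecting_path(graph, start, goal):
    """Breadth-first search for a shortest reaction path between two
    metabolites in an undirected network given as an adjacency dict.
    Returns the node list of the path, or None if none exists."""
    prev = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:       # walk back through predecessors
                path.append(node)
                node = prev[node]
            return path[::-1]
        for neighbor in graph.get(node, []):
            if neighbor not in prev:
                prev[neighbor] = node
                queue.append(neighbor)
    return None

# Hypothetical mini-network of lipid species
net = {
    "PC(34:1)":  ["DAG(34:1)"],
    "DAG(34:1)": ["PC(34:1)", "PA(34:1)", "TAG(52:2)"],
    "PA(34:1)":  ["DAG(34:1)", "LPA(18:1)"],
    "TAG(52:2)": ["DAG(34:1)"],
    "LPA(18:1)": ["PA(34:1)"],
}
print(connecting_path(net, "PC(34:1)", "LPA(18:1)"))
```

The enzymes annotated on the edges of such a path are the candidates then queried against other databases for potential regulators.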

Results

The output enables pathway visualisation and the querying of other databases to identify potential regulators. When used to study a plasma lipidome dataset of polycystic ovary syndrome, 14 enzymes were identified, of which 3 are linked to ELAVL1, an mRNA stabiliser.

Conclusion

This method provides a simplified approach to identifying potential regulators causing lipid-profile perturbations.

20.

Introduction

Aqueous–methanol mixtures have successfully been applied to extract a broad range of metabolites from plant tissue. However, a certain amount of material remains insoluble.

Objectives

To enlarge the metabolic compendium, two ionic liquids were selected to extract the methanol-insoluble part of trunk from Betula pendula.

Methods

The extracted compounds were analyzed by LC/MS and GC/MS.

Results

The results show that 1-butyl-3-methylimidazolium acetate (IL-Ac) predominantly extracted fatty acids, whereas 1-ethyl-3-methylimidazolium tosylate (IL-Tos) mostly yielded phenolic structures. Interestingly, bark yielded more ionic-liquid-soluble metabolites than interior wood.

Conclusion

From this, one can conclude that the application of ionic liquids may expand the metabolic snapshot.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号