Similar Articles
1.
In the field of orthopaedics, extremity deformities can be treated by means of external fixators. However, controlling such a biomedical system is difficult, and several mathematical models have been developed to improve the quality of this treatment. Most of the parameters used in these models are obtained from two orthogonal X-ray images: one from the anteroposterior (AP) direction and the other from the lateral (L) direction. The quality of the model's results depends on the accuracy of these input parameters, yet measuring them manually is time-consuming and yields low accuracy. To improve the measurement, the reference points should be chosen from the edges of the biomedical system, so it is important to find these edges without noise. To this end, thresholding, inversion, Sobel edge detection and binary large object (blob) analysis are applied as image processing steps. The results are compared with previously obtained manual measurements and show that semi-automatic measurement of the parameters is both more accurate and faster, improving the efficiency of the fixator method.
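A minimal sketch of the described image processing chain (thresholding, inversion, Sobel edge detection, blob analysis), assuming scikit-image; the file name and the use of Otsu's threshold are illustrative placeholders, not details from the study:

```python
# Sketch of the preprocessing chain: threshold, invert, blob analysis, Sobel.
import numpy as np
from skimage import io, filters, measure

xray = io.imread("ap_view.png", as_gray=True)   # hypothetical AP radiograph

# Threshold and invert so the fixator/bone appears as foreground.
binary = xray < filters.threshold_otsu(xray)

# Blob analysis: keep only the largest connected component to suppress noise.
labels = measure.label(binary)
largest = max(measure.regionprops(labels), key=lambda r: r.area)
mask = labels == largest.label

# Sobel gradient on the cleaned mask yields edges for reference-point selection.
edges = filters.sobel(mask.astype(float)) > 0
rows, cols = np.nonzero(edges)
print(f"{rows.size} candidate edge pixels for reference-point selection")
```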

2.
This paper uses various input patterns to probe the temporal properties of a generalized Gabor function serving as a model of information processing in the visual system. When different spatial parts of the model are stimulated with a stationary flickering light spot, sustained and transient responses can be elicited that resemble typical dynamic responses of real receptive fields. When moving patterns (edges, narrow bars, wide bars) are fed into the model, the computed response curves agree qualitatively with experimental results. For certain parameter values, the model shows marked selectivity for the direction of stimulus motion. Finally, the similarities and differences between this model and earlier models, as well as possible neurobiological mechanisms, are discussed.
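A minimal sketch of probing such a receptive-field model with a moving bar, assuming a 1-D spatial Gabor profile and a purely linear response; all parameter values are illustrative, not taken from the paper:

```python
# Sweep a bar stimulus across a 1-D Gabor receptive field and record
# the linear response over time.
import numpy as np

x = np.linspace(-5, 5, 512)                      # space (deg)
sigma, freq, phase = 1.0, 1.5, 0.0               # hypothetical RF parameters
gabor = np.exp(-x**2 / (2 * sigma**2)) * np.cos(2 * np.pi * freq * x + phase)

def bar(center, width):
    """Luminance profile of a bright bar on a dark background."""
    return ((x > center - width / 2) & (x < center + width / 2)).astype(float)

speed, dt = 2.0, 0.01                            # deg/s, s
times = np.arange(0, 5, dt)
response = np.array([np.dot(gabor, bar(-5 + speed * t, 0.5)) for t in times])
print("peak response at t =", times[response.argmax()], "s")
```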

3.
Quantification of knee motion under dynamic, in vivo loaded conditions is necessary to understand how knee kinematics influence joint injury, disease, and rehabilitation. Though recent studies have measured three-dimensional knee kinematics by matching geometric bone models to single-plane fluoroscopic images, factors limiting the accuracy of this approach have not been thoroughly investigated. This study used a three-step computational approach to evaluate theoretical accuracy limitations due to the shape matching process alone. First, cortical bone models of the femur, tibia/fibula, and patella were created from CT data. Next, synthetic (i.e., computer generated) fluoroscopic images were created by ray tracing the bone models in known poses. Finally, an automated matching algorithm utilizing edge detection methods was developed to align flat-shaded bone models to the synthetic images. Accuracy of the recovered pose parameters was assessed in terms of measurement bias and precision. Under these ideal conditions, where other sources of error were eliminated, tibiofemoral poses were within 2 mm for sagittal plane translations and 1.5 deg for all rotations, while patellofemoral poses were within 2 mm and 3 deg. However, statistically significant bias was found in most relative pose parameters. Bias disappeared and precision improved by a factor of two when the synthetic images were regenerated using flat shading (i.e., sharp bone edges) instead of ray tracing (i.e., attenuated bone edges). Analysis of absolute pose parameter errors revealed that the automated matching algorithm systematically pushed the flat-shaded bone models too far into the image plane to match the attenuated edges of the synthetic ray-traced images. These results suggest that biased edge detection is the primary factor limiting the theoretical accuracy of this single-plane shape matching procedure.
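A minimal sketch of the edge-based pose scoring at the core of such shape matching, with a toy circle silhouette standing in for the study's flat-shaded bone projection; the geometry and candidate poses are illustrative only:

```python
# Score candidate poses by the mean distance from projected-model edges
# to the nearest edge detected in the (here synthetic) fluoroscopic image.
import numpy as np
from scipy.ndimage import distance_transform_edt
from skimage import feature

def edge_mismatch(model_edges, image_edges):
    """Mean distance from model edge pixels to the nearest image edge."""
    dist_to_image_edge = distance_transform_edt(~image_edges)
    return dist_to_image_edge[model_edges].mean()

yy, xx = np.mgrid[0:128, 0:128]
def render(cx, cy, r):                    # hypothetical silhouette renderer
    return feature.canny((np.hypot(xx - cx, yy - cy) < r).astype(float))

image_edges = render(64, 64, 30)          # "synthetic fluoroscopic" edges
for dx in (0, 1, 2):                      # candidate in-plane translations
    print(dx, edge_mismatch(render(64 + dx, 64, 30), image_edges))
```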

4.
A variety of water quality indices have been used to assess the state of waterbodies all over the world. In calculating a Water Quality Index (WQI), traditional methods require the evaluation of many water quality parameters, making them costly and time-consuming. In recent years, machine learning (ML) algorithms have emerged as an effective tool for solving many environmental problems, including water quality management. In this study, we investigate the performance of ML-based methods in calculating the WQI. We apply several feature selection techniques to select the key parameters fed to the ML models. Experiments are carried out to evaluate the WQI on a dataset collected from 2007 to 2020 from the An Kim Hai system, one of the most important irrigation systems in the north of Vietnam. The results show that applying selection methods significantly reduces the number of water quality parameters fed to the ML models without losing accuracy. In particular, using the embedded method, we identify four important parameters, Coliform, DO, Turbidity, and TSS, that have the greatest impact on water quality. Based on these parameters, the Random Forest model provides the best accuracy in predicting WQI values for the An Kim Hai system, with a Similarity of 0.94. The combination of feature selection and ML methods is therefore an effective alternative for calculating the WQI, delivering the desired performance with fewer input parameters and making water quality monitoring less costly and less demanding in effort and time.
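A minimal sketch of the embedded feature-selection plus Random Forest workflow using scikit-learn; the CSV file name and its WQI column layout are hypothetical stand-ins for the study's dataset:

```python
# Embedded feature selection (Random Forest importances) followed by
# Random Forest regression of the WQI on the selected parameters.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split

df = pd.read_csv("an_kim_hai_2007_2020.csv")     # hypothetical file
X, y = df.drop(columns="WQI"), df["WQI"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Embedded selection: keep features whose importance exceeds the mean.
selector = SelectFromModel(
    RandomForestRegressor(n_estimators=200, random_state=0)).fit(X_tr, y_tr)
kept = X.columns[selector.get_support()]
print("selected parameters:", list(kept))        # e.g. Coliform, DO, Turbidity, TSS

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_tr[kept], y_tr)
print("R^2 on held-out data:", model.score(X_te[kept], y_te))
```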

5.

Background  

The rapid proliferation of biomedical text makes it increasingly difficult for researchers to identify, synthesize, and utilize developed knowledge in their fields of interest. Automated information extraction procedures can assist in the acquisition and management of this knowledge. Previous efforts in biomedical text mining have focused primarily upon named entity recognition of well-defined molecular objects such as genes, but less work has been performed to identify disease-related objects and concepts. Furthermore, promise has been tempered by an inability to efficiently scale approaches in ways that minimize manual efforts and still perform with high accuracy. Here, we have applied a machine-learning approach previously successful for identifying molecular entities to a disease concept to determine if the underlying probabilistic model effectively generalizes to unrelated concepts with minimal manual intervention for model retraining.

6.
MOTIVATION: The analysis of metabolic processes is becoming increasingly important to our understanding of complex biological systems and disease states. Nuclear magnetic resonance spectroscopy (NMR) is a particularly relevant technology in this respect, since the NMR signals provide a quantitative measure of metabolite concentrations. However, due to the complexity of the spectra typical of biological samples, the demands of clinical and high-throughput analysis will only be fully met by a system capable of reliable, automatic processing of the spectra. An initial step in this direction has been taken by Targeted Profiling (TP), which fits a set of known and predicted metabolite signatures against the signal. However, an accurate fitting procedure for ¹H NMR data is complicated by shift uncertainties in the peak systems caused by measurement imperfections. These uncertainties have a large impact on the accuracy of identification and quantification and currently require compensation by very time-consuming manual interaction. Here, we present an approach, termed Extended Targeted Profiling (ETP), that estimates shift uncertainties based on a genetic algorithm (GA) combined with a least squares optimization (LSQO). The estimated shifts are used to correct the known metabolite signatures, leading to significantly improved identification and quantification. In this way, the automated system significantly reduces the effort normally associated with manual processing and paves the way for reliable, high-throughput analysis of complex NMR spectra. RESULTS: The results indicate that simultaneous shift uncertainty correction and least squares fitting significantly improve the identification and quantification results for ¹H NMR data in comparison to the standard targeted profiling approach, and compare favorably with the results obtained by manual expert analysis. Preservation of the functional structure of the NMR spectra makes this approach more realistic than simple binning strategies.
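A minimal sketch of the shift-correction idea, using SciPy's differential evolution as a stand-in for the paper's genetic algorithm, with an inner linear least-squares fit of concentrations at each candidate shift; peak positions, line shapes, and shift bounds are synthetic:

```python
# Optimize per-metabolite ppm shifts by evolutionary search, fitting
# concentrations by least squares at each candidate shift vector.
import numpy as np
from scipy.optimize import differential_evolution

ppm = np.linspace(0, 4, 2000)
def lorentzian(center, width=0.01):
    return width**2 / ((ppm - center)**2 + width**2)

signatures = [1.33, 3.03]                 # hypothetical metabolite peak centers
observed = 0.8 * lorentzian(1.33 + 0.004) + 0.5 * lorentzian(3.03 - 0.006)

def residual_norm(shifts):
    basis = np.column_stack([lorentzian(c + s)
                             for c, s in zip(signatures, shifts)])
    conc, *_ = np.linalg.lstsq(basis, observed, rcond=None)  # inner LSQ fit
    return np.linalg.norm(basis @ conc - observed)

result = differential_evolution(residual_norm,
                                bounds=[(-0.01, 0.01)] * len(signatures),
                                seed=0)
print("estimated shifts (ppm):", result.x)
```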

7.
Taking the natural Populus euphratica forest in the lower reaches of the Tarim River as the study object, a Riegl VZ-1000 terrestrial laser scanner (TLS) was used to acquire 3D point clouds of 513 P. euphratica trees in eight plots at different distances from the river channel. Individual tree counts and structural parameters were extracted by building a canopy height model and applying the Hough transform, among other methods, and the tree-measurement accuracy of the lidar approach was verified against conventional per-tree field inventory data and low-altitude unmanned aerial vehicle (UAV) imagery. Correlations among the TLS-derived tree-form parameters were analyzed and relationship models established; the influence of different water-stress conditions (distance from the river channel, groundwater depth) on individual tree structure was examined; finally, tree age was classified by diameter class to obtain the proportion of each age class. The results show that: (1) TLS can extract individual tree counts and structural parameters with high accuracy in stands of different density and vigor, with single-tree segmentation rates of 94%-100%, outperforming UAV low-altitude imagery; (2) TLS-derived tree height (TH), diameter at breast height (DBH), crown diameter (CD) and crown area (CA) fit the conventional field measurements well, with R² of 0.95, 0.97, 0.77 and 0.84 respectively, indicating no significant difference between field and TLS data; (3) CD and CA are significantly positively correlated with TH (correlation coefficients 0.73 and 0.67), on which basis a TH-CD relationship model was built: TH = 2.6274 × CD^0.706, with R² = 0.64; (4) classifying age by diameter class shows that near-mature trees with DBH of 15-30 cm form the largest fraction, 47% of all monitored trees in the eight plots, indicating that the age structure of the P. euphratica population is relatively stable and in good overall condition, and that ecological water conveyance has clearly promoted the recovery of the P. euphratica population in the lower Tarim River. Overall, lidar can objectively capture tree structural parameters and can replace labor-, cost- and time-intensive conventional field measurement, providing high-accuracy information for tracking the growth and dynamics of P. euphratica forests and for multi-scale, multi-temporal studies of ecological water consumption, and offering a scientific basis for the effective protection and sustainable management of desert riparian forests in arid regions.
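A minimal sketch of single-tree detection via a canopy height model (CHM) from a TLS point cloud: rasterize the highest return per grid cell, then treat local maxima as tree tops. The input file, cell size, and 2 m height floor are illustrative; the study's stem detection by Hough transform and DBH fitting are not reproduced here:

```python
# Build a CHM from an x/y/z point cloud and pick local maxima as tree tops.
import numpy as np
from scipy.ndimage import maximum_filter

points = np.loadtxt("plot1_xyz.txt")              # hypothetical N x 3 (x, y, z)
cell = 0.5                                        # CHM resolution in metres

ix = ((points[:, 0] - points[:, 0].min()) / cell).astype(int)
iy = ((points[:, 1] - points[:, 1].min()) / cell).astype(int)
chm = np.zeros((ix.max() + 1, iy.max() + 1))
np.maximum.at(chm, (ix, iy), points[:, 2] - points[:, 2].min())

# Tree tops: cells equal to the local maximum within a 5 m window that
# exceed a 2 m floor, to exclude shrubs and ground returns.
local_max = maximum_filter(chm, size=int(5 / cell))
tops = np.argwhere((chm == local_max) & (chm > 2.0))
print(f"{len(tops)} candidate trees; heights:", chm[tops[:, 0], tops[:, 1]])
```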

8.
In vivo magnetic resonance spectroscopy (MRS) and magnetic resonance imaging (MRI) provide a unique means of obtaining neurochemical, physiological, anatomical, and functional information noninvasively. These techniques have been increasingly applied to biomedical research and clinical use in the diagnosis and prognosis of diseases. The ability of MRS to detect early yet subtle changes of neurochemicals in vivo permits the use of this technology for the study of cerebral metabolism in physiological and pathological conditions. Recent advances in MR technology have further extended its use to assess the etiology and progression of neurodegeneration. This review focuses on current technical advances and the applications of MRS and MRI in the study of animal models of neurodegenerative disease, including amyotrophic lateral sclerosis and Alzheimer's, Huntington's, and Parkinson's diseases. MR-measurable neurochemical parameters in vivo are described with regard to their importance in neurodegenerative disorders and their use in investigating the metabolic alterations accompanying the pathogenesis of neurodegeneration.

9.

Aims

The 3D geometries of individual vascular smooth muscle cells (VSMCs), which are essential for understanding the mechanical function of blood vessels, are currently not available. This paper introduces a new 3D segmentation algorithm to determine VSMC morphology and orientation.

Methods and Results

A total of 112 VSMCs from six porcine coronary arteries were used in the analysis. A 3D semi-automatic segmentation method was developed to reconstruct individual VSMCs from cell clumps and to extract the 3D geometry of VSMCs. A new edge-blocking model was introduced to recognize cell boundaries, while an edge-growing method was developed for optimal interpolation and edge verification. The proposed methods operate on a user-selected Region of Interest (ROI) and interactive responses for a limited number of key edges. Enhanced cell boundary features were used to construct the cell's initial boundary for further edge growing. A unified framework of morphological parameters (dimensions and orientations) was proposed for the 3D volume data. A virtual phantom was designed to validate the tilt angle measurements, while other parameters extracted from the 3D segmentations were compared with manual measurements to assess the accuracy of the algorithm. The length, width and thickness of VSMCs were 62.9±14.9 μm, 4.6±0.6 μm and 6.2±1.8 μm (mean±SD). In the longitudinal-circumferential plane of the blood vessel, VSMCs align off the circumferential direction with two mean angles of -19.4±9.3° and 10.9±4.7°, while the out-of-plane angle (i.e., radial tilt angle) was found to be 8±7.6° with a median of 5.7°.

Conclusions

A 3D segmentation algorithm was developed to reconstruct individual VSMCs of blood vessel walls from optical image stacks. The results were validated by a virtual phantom and manual measurement. The obtained 3D geometries can be utilized in mathematical models and lead to a better understanding of vascular mechanical properties and function.
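A minimal sketch of how dimensions and orientation can be read off one segmented cell, using PCA of the voxel coordinates of a 3D binary mask. This illustrates only the morphological-parameter step, not the edge-blocking segmentation itself, and the voxel size is an assumed placeholder:

```python
# Extract sorted extents and an out-of-plane tilt angle from a 3-D cell mask
# via principal component analysis of its voxel coordinates.
import numpy as np

def cell_morphology(mask, voxel_size=(0.2, 0.2, 0.5)):   # µm, illustrative
    coords = np.argwhere(mask) * np.asarray(voxel_size)
    centred = coords - coords.mean(axis=0)
    # Principal axes: eigenvectors of the coordinate covariance matrix.
    evals, evecs = np.linalg.eigh(np.cov(centred.T))
    order = evals.argsort()[::-1]                 # longest axis first
    extents = [np.ptp(centred @ evecs[:, i]) for i in order]
    long_axis = evecs[:, order[0]]
    tilt = np.degrees(np.arcsin(abs(long_axis[2])))
    return extents, tilt          # sorted extents (µm), out-of-plane angle (deg)

mask = np.zeros((64, 64, 32), bool)
mask[10:50, 30:34, 14:18] = True                  # toy elongated "cell"
print(cell_morphology(mask))
```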

10.
The production of large numbers of highly purified proteins for X-ray crystallography is a significant bottleneck in structural genomics. At the Joint Center for Structural Genomics (JCSG; http://www.jcsg.org), specific automated protein expression, purification, and analytical methods are being utilized to study the proteome of Thermotoga maritima. Anion exchange and size exclusion chromatography (SEC), intended for the production of highly purified proteins, have been automated and the procedures are described here in detail. Analytical SEC has been included as a standard quality control test. A biological unit (BU) is the macromolecule that has been proven or is presumed to be functional. Correct assignment of BUs from protein structures can be difficult. BU predictions obtained via the Protein Quaternary Structure file server (PQS; http://pqs.ebi.ac.uk/) were compared to SEC data for 16 representative T. maritima proteins whose structures were solved at the JCSG, revealing an inconsistency in five cases. Herein, we report that SEC can be used to validate or disprove PQS-derived oligomeric models. A substantial amount of associated SEC and structural data should enable us to use certain PQS parameters to gauge the accuracy of these computational models and to generally improve their predictions.

11.
MOTIVATION: Protein annotation is a task that describes protein X in terms of topic Y, usually constructed using information from the biomedical literature. Until now, most literature-based protein annotation has been done manually by human annotators. However, as the number of biomedical papers grows ever more rapidly, manual annotation becomes more difficult, and there is increasing need to automate the process. Recently, information extraction (IE) has been used to address this problem. Typically, IE requires pre-defined relations and hand-crafted IE rules or annotated corpora, and these requirements are difficult to satisfy in real-world scenarios such as the biomedical domain. In this article, we describe an IE system that requires only sentences labelled by domain experts as relevant or not to a given topic. RESULTS: We applied our system to meet the annotation needs of a well-known protein family database; the results show that our IE system can annotate proteins with a set of extracted relations by learning relations and IE rules for disease, function and structure from only relevant and irrelevant sentences.
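A minimal sketch of the first ingredient such a system needs: a classifier separating topic-relevant from irrelevant sentences, trained only on expert labels. The sentences below are invented, and the paper's rule-learning stage on top of this is not shown:

```python
# Train a relevance classifier on labelled sentences, then score new text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

sentences = [
    "BRCA1 interacts with RAD51 during DNA repair.",        # relevant
    "Samples were stored at -80 degrees until use.",        # irrelevant
    "The p53 protein regulates apoptosis in tumour cells.", # relevant
    "All patients gave written informed consent.",          # irrelevant
]
labels = [1, 0, 1, 0]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(sentences, labels)
print(clf.predict(["HSP70 binds misfolded proteins in the cytosol."]))
```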

12.
Measuring the spatial profile of root elongation requires determining matching points between time-lapse images and calculating their displacement. In the past, these data were obtained by laborious manual methods. Some computer-based programs have been developed to improve the measurement, but they require many time-series digital images or sprinkling graphite particles on the root prior to image capture. Here, we have developed GrowthTracer, a new image-analysis program for the kinematic study of root elongation. GrowthTracer employs a multiresolution image matching method with a nonlinear filter called the critical point filter (CPF), which extracts critical points from images at each resolution and can determine precise matching points from only two intact images, without pre-marking by graphite particles. The program calculates the displacement of each matching point and determines the displacement velocity profile along the medial axis of the root. In addition, manual input of distinct matching points increases the matching accuracy. We show a successful application of this novel program for the kinematic analysis of root growth in Arabidopsis thaliana.
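A minimal sketch of recovering a displacement profile from just two time-lapse frames by matching patches along the medial axis, using cross-correlation as a stand-in for the program's critical point filter; file names and axis coordinates are placeholders:

```python
# Match small patches between two frames and convert shifts to velocities.
import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation

img_t0 = io.imread("root_t0.png", as_gray=True)   # hypothetical frames
img_t1 = io.imread("root_t1.png", as_gray=True)

def patch(img, y, x, half=16):
    return img[y - half:y + half, x - half:x + half]

# Points sampled along the medial axis of the root (placeholder values).
axis_points = [(100, 50), (150, 52), (200, 55), (250, 60)]
dt_hours = 1.0

for y, x in axis_points:
    shift, _, _ = phase_cross_correlation(patch(img_t0, y, x),
                                          patch(img_t1, y, x))
    velocity = np.hypot(*shift) / dt_hours        # px per hour
    print(f"point ({y},{x}): shift {shift}, velocity {velocity:.2f} px/h")
```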

13.
Predicting the secondary structures of RNA molecules is one of the fundamental problems of computational structural biology and thus a challenging task. Over the past decades, mainly two different approaches have been considered for computing RNA secondary structure predictions from a single sequence: the first relies on physics-based RNA models and the other on probabilistic ones. The minimum free energy (MFE) approach is usually considered the most popular and successful method. Moreover, based on the paradigm-shifting work by McCaskill, which proposes the computation of partition functions (PFs) and base pair probabilities based on thermodynamics, several extended partition function algorithms, statistical sampling methods and clustering techniques have been invented over recent years. However, the accuracy of the corresponding algorithms is limited by the quality of the underlying physics-based models, which include a vast number of thermodynamic parameters and are still incomplete. The competing probabilistic approach is based on stochastic context-free grammars (SCFGs) or corresponding generalizations, such as conditional log-linear models (CLLMs). These methods abstract from free energies and instead try to learn the structural behavior of the molecules by estimating a manageable number of probabilistic parameters from trusted RNA structure databases. In this work, we introduce and evaluate a sophisticated SCFG design that mirrors state-of-the-art physics-based RNA structure prediction procedures by distinguishing between all features of RNA that imply different energy rules. This SCFG serves as the foundation for a statistical sampling algorithm for RNA secondary structures of a single sequence, a probabilistic counterpart to the sampling extension of the PF approach. Furthermore, some new ways to derive meaningful structure predictions from generated sample sets are presented. They are used to compare the predictive accuracy of our model to that of other probabilistic and energy-based prediction methods. In particular, comparisons to lightweight SCFGs and corresponding CLLMs for RNA structure prediction indicate that more complex SCFG designs might yield higher accuracy but eventually require more comprehensive and pure training sets. Investigations of both the accuracy of predicted foldings and the overall quality of generated sample sets (especially at an abstraction level, called abstract shapes of generated structures, that is relevant for biologists) lead to the conclusion that the Boltzmann distribution of the PF sampling approach is more centered than the ensemble distribution induced by the sophisticated SCFG model, which implies a greater structural diversity within generated samples. In general, neither of the two distinct ensemble distributions is more adequate than the other, and the corresponding results obtained by statistical sampling can be expected to bear fundamental differences, such that the method to be preferred for a particular input sequence strongly depends on the RNA type considered.
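A minimal sketch of the sampling principle behind SCFG-based approaches: recursively expand a toy grammar S → '(' S ')' S | '.' S | ε, choosing rules by probability, to draw dot-bracket structures. The rule probabilities here are invented, and this grammar is far simpler than the feature-rich design described above; only the recursive rule-choice mechanism carries over:

```python
# Draw balanced dot-bracket structures from a toy stochastic grammar.
import random

P_PAIR, P_UNPAIRED = 0.3, 0.5          # hypothetical rule probabilities
                                       # (remaining 0.2: terminate with ε)
def sample_structure(max_len=40):
    r = random.random()
    if r < P_PAIR and max_len >= 2:
        inner = sample_structure(max_len - 2)
        return "(" + inner + ")" + sample_structure(max_len - 2 - len(inner))
    if r < P_PAIR + P_UNPAIRED and max_len >= 1:
        return "." + sample_structure(max_len - 1)
    return ""                          # ε: stop this branch

random.seed(1)
for s in (sample_structure() for _ in range(5)):
    print(s or "(empty)")
```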

14.
The population dynamics of decapod crustaceans depends essentially on the anthropogenic load on inland waterbodies. This process most substantially affects the crayfish species Astacus astacus L., which is aboriginal to the North-West of Russia. The sensitivity of these crayfish to changes in water quality has been noted by many authors [1, 11, 15]. For ecophysiological monitoring of waterbodies, as well as for nature-protection and aquaculture activities, methods and instruments are needed for quantifying the responses of the animals' life-supporting systems, for example, the cardiovascular system. In the present work, a new non-invasive technique is proposed for recording parameters of crayfish heart activity. The method is based on measuring near-infrared laser emission backscattered from the animal. This paper describes the instrumental part of the complex and presents some preliminary data obtained with this method of recording crayfish heart activity. The method is shown to permit not only recording the crayfish heart rate under conditions of free behavior, without any harm to the animals, but also tracing changes in the shape and amplitude parameters of the response that characterize the animal's state.
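A minimal sketch of the signal-processing side of such a recording: band-limit the backscattered-light signal, detect beats, and convert inter-beat intervals to a rate. The synthetic signal and the assumed 0.5-3 Hz pass band stand in for real recordings; the sensor hardware is not modelled:

```python
# Estimate heart rate from a noisy periodic signal by band-pass filtering
# and peak detection.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 100.0                                        # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)
raw = (np.sin(2 * np.pi * 1.2 * t)
       + 0.4 * np.random.default_rng(0).normal(size=t.size))

# Band-pass around plausible crayfish heart rates (assumed 0.5-3 Hz).
b, a = butter(3, [0.5, 3.0], btype="band", fs=fs)
clean = filtfilt(b, a, raw)

peaks, _ = find_peaks(clean, distance=fs / 3)     # >= 1/3 s between beats
rate_bpm = 60.0 / np.diff(t[peaks]).mean()
print(f"estimated heart rate: {rate_bpm:.1f} beats/min")
```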

15.

Background

Infections with HIV still represent a major human health problem worldwide, and a vaccine is the only long-term option for fighting this virus efficiently. Standardized assessments of HIV-specific immune responses in vaccine trials are essential for prioritizing vaccine candidates in preclinical and clinical stages of development. With respect to neutralizing antibodies, assays with HIV-1 Env-pseudotyped viruses are a high priority. To meet the increasing demand for HIV pseudoviruses, a complete cell culture and transfection automation system has been developed.

Methodology/Principal Findings

The automation system for HIV pseudovirus production comprises a modified Tecan-based Cellerity system. It covers an area of 5×3 meters and includes a robot platform, a cell counting machine, a CO2 incubator for cell cultivation and a media refrigerator. The processes for cell handling, transfection and pseudovirus production have been implemented according to manual standard operating procedures and are controlled and scheduled autonomously by the system. The system is housed in a biosafety level II cabinet that guarantees protection of personnel, environment and product. HIV pseudovirus stocks at scales from 140 ml to 1000 ml have been produced on the automated system. Parallel manual production of HIV pseudoviruses and comparisons (bridging assays) confirmed that the automatically produced pseudoviruses were equivalent in quality to those produced manually. In addition, the automated method was fully validated according to Good Clinical Laboratory Practice (GCLP) guidelines, including the validation parameters accuracy, precision, robustness and specificity.

Conclusions

An automated HIV pseudovirus production system has been successfully established. It allows high-quality production of HIV pseudoviruses under GCLP conditions. In its present form, the installed module enables the production of 1000 ml of virus-containing cell culture supernatant per week. This novel automation thus facilitates standardized large-scale production of HIV pseudoviruses for ongoing and upcoming HIV vaccine trials.

16.
Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers Biotechnol. Prog., 31:1004–1013, 2015
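A minimal sketch of a multivariate Raman calibration, using partial least squares, a common chemometric choice (the abstract does not state the exact regression method), to map spectra to an offline-measured process parameter; the data files and component count are placeholders:

```python
# Calibrate a PLS model from Raman spectra to offline reference values,
# then predict the parameter for a new in-line scan.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

spectra = np.load("raman_spectra.npy")        # hypothetical (samples x wavenumbers)
glucose = np.load("offline_glucose.npy")      # matching offline reference values

pls = PLSRegression(n_components=8)           # component count would be tuned
print("cross-validated R^2:", cross_val_score(pls, spectra, glucose, cv=5).mean())

pls.fit(spectra, glucose)
new_scan = spectra[-1:]                       # latest in-line spectrum
print("predicted glucose:", pls.predict(new_scan).ravel())
```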

17.
Ailanthus altissima has a long history of invasion in urban areas and is currently spreading into suburban and rural areas in the eastern U.S. The objectives of our study were to (1) determine whether A. altissima seed dispersal distance differed between populations on the edges of open fields and intact deciduous forest, and (2) determine whether dispersal differed for north and south winds. We also assessed the relationship between seed characteristics and distance from source populations in fields and forests, and whether seeds disperse at different rates throughout the dispersal season. Using two fields, two intact forest stands, and one partially harvested stand, we sampled the seed rain at 10 m intervals out to 100 m into each site from October 2002 to April 2003. We compared seed density in fields and intact forests using a three-way ANOVA with distance from source, wind direction, and environmental structure as independent variables. To assess the accuracy of common empirical dispersal models, mean seed density data at each site were fitted with alternative regression models. We found that mean seed dispersal distance depended on environmental structure and wind direction, a result driven in large part by dispersal at a single site where seed density did not decline with distance. The two alternative regression models fit each site's dispersal curve equally well. More seeds were dispersed early in the season than in mid- or late season. Large, heavy seeds traveled as far as small, light seeds. Turbulent winds appear to be necessary for seed release, as indicated by a wind tunnel experiment. A. altissima is able to disperse long distances into fields and mature forests, and can reach canopy gaps and other suitable habitats at least 100 m from the forest edge. It is an effective disperser and can spread rapidly in fragmented landscapes where edges and other high-light environments occur. These conditions are increasingly common throughout the eastern U.S. and in other temperate regions worldwide.
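A minimal sketch of fitting one common empirical dispersal model, the negative exponential D(x) = a * exp(-b * x), to mean seed density by distance; the numbers below are invented, not the study's data:

```python
# Fit a negative exponential dispersal kernel to seed density vs distance.
import numpy as np
from scipy.optimize import curve_fit

distance = np.arange(10, 110, 10)                           # m from source
density = np.array([120, 80, 55, 34, 22, 15, 9, 7, 4, 3])   # seeds/m^2 (toy)

def neg_exp(x, a, b):
    return a * np.exp(-b * x)

(a, b), _ = curve_fit(neg_exp, distance, density, p0=(100, 0.05))
residuals = density - neg_exp(distance, a, b)
r2 = 1 - residuals.var() / density.var()
print(f"a = {a:.1f}, b = {b:.3f} per m, R^2 = {r2:.2f}")
```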

18.
Herein we describe the program FAST-Modelfree for the fully automated, high-throughput analysis of NMR spin-relaxation data. The program interfaces with the program Modelfree 4.1 and provides an intuitive graphical user interface for configuration as well as complete standalone operation during the model selection and rotational diffusion parameter optimization processes. FAST-Modelfree is also capable of iteratively assigning models to each spin and optimizing the parameters that describe the diffusion tensor. Tests with the protein Ribonuclease A indicate that with this iterative approach even poor initial estimates of the diffusion tensor parameters converge to the optimal values within a few iterations. In addition to improving the quality of the final fit, this represents a substantial time saving compared to manual data analysis and minimizes the chance of human error. It is anticipated that this program will be particularly useful for the analysis and comparison of data collected under different conditions, such as multiple temperatures or the presence and absence of ligands. Further, this program is intended to establish a more uniform protocol for NMR spin-relaxation data analysis, facilitating the comparison of results both between and within research laboratories. Results obtained with FAST-Modelfree are compared with previous literature results for the proteins Ribonuclease H, E. coli glutaredoxin-1 and the Ca2+-binding protein S100B. These proteins represent datasets collected at both single and multiple static magnetic fields, requiring analysis with both isotropic and axially symmetric rotational diffusion tensors. In all cases results obtained with FAST-Modelfree compared favorably with the original literature results.
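A toy sketch mirroring only the control flow described above: alternate per-spin model selection (here by AIC over two simple intercept-free models) with re-optimization of a shared global parameter (a common baseline standing in for the diffusion tensor), iterating to convergence. The real program fits Lipari-Szabo spectral densities, which are not modelled here:

```python
# Alternate per-spin model selection with re-estimation of a shared offset.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.1, 1, 20)
true_offset = 0.5
spins = [true_offset + 1.0 * x + rng.normal(0, 0.02, x.size),
         true_offset + 2.0 * x**2 + rng.normal(0, 0.02, x.size)]

def select_model(y, offset):
    """Pick the per-spin model with the lowest AIC at the current offset."""
    best = None
    for basis in ([x], [x, x**2]):
        A = np.column_stack(basis)
        coef, *_ = np.linalg.lstsq(A, y - offset, rcond=None)
        rss = ((y - offset - A @ coef) ** 2).sum()
        aic = 2 * len(basis) + y.size * np.log(rss / y.size)
        if best is None or aic < best[0]:
            best = (aic, A, coef)
    return best

offset = 0.0                                   # deliberately poor start
for _ in range(200):
    fits = [select_model(y, offset) for y in spins]
    # Global step: re-estimate the shared offset from all fit residuals.
    new = np.mean([(y - A @ c).mean() for y, (_, A, c) in zip(spins, fits)])
    if abs(new - offset) < 1e-6:
        break
    offset = new
print(f"recovered offset: {offset:.3f} (true {true_offset})")
print("selected model sizes:", [f[1].shape[1] for f in fits])
```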

19.
This paper investigates the performance of objective speech and audio quality measures for predicting the perceived quality of frequency-compressed speech in hearing aids. A number of existing quality measures were applied to speech signals processed by a hearing aid that compresses speech spectra along frequency in order to make information contained in higher frequencies audible for listeners with severe high-frequency hearing loss. Quality measures were compared with subjective ratings obtained from normal-hearing and hearing-impaired children and adults in an earlier study. High correlations were achieved with quality measures computed by quality models based on the auditory model of Dau et al., namely the measure PSM, computed by the quality model PEMO-Q; the measure qc, computed by the quality model proposed by Hansen and Kollmeier; and the linear subcomponent of the HASQI. For the prediction of quality ratings by hearing-impaired listeners, extensions of some models incorporating hearing loss were implemented and shown to achieve improved prediction accuracy. The results indicate that these objective quality measures can potentially serve as tools for assisting in the initial setting of frequency-compression parameters.
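A minimal sketch of the evaluation step: correlate each objective quality measure with mean subjective ratings across processing conditions; the values below are placeholders, not the study's data:

```python
# Pearson correlation between objective measures and subjective ratings.
import numpy as np
from scipy.stats import pearsonr

subjective = np.array([4.2, 3.8, 3.1, 2.5, 1.9])      # mean opinion scores (toy)
objective = {
    "PSM": np.array([0.97, 0.93, 0.88, 0.80, 0.71]),  # PEMO-Q output (toy)
    "qc":  np.array([0.90, 0.88, 0.79, 0.70, 0.66]),
}
for name, scores in objective.items():
    r, p = pearsonr(scores, subjective)
    print(f"{name}: r = {r:.3f} (p = {p:.3f})")
```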

20.
BACKGROUND: Human diversity, namely single nucleotide polymorphisms (SNPs), is becoming a focus of biomedical research. Despite the binary nature of SNP determination, the majority of genotyping assay data need critical evaluation for genotype calling. We applied statistical models to improve the automated analysis of 2-dimensional SNP data. METHODS: We derived several quantities in the framework of Gaussian mixture models that provide figures of merit to objectively measure data quality. The accuracy of an individual observation is scored as the probability of its belonging to a certain genotype cluster, while assay quality is measured by the overlap between the genotype clusters. RESULTS: The approach was extensively tested on a dataset of 438 nonredundant SNP assays comprising >150,000 datapoints. The performance of our automatic scoring method was compared with manual assignments. The agreement on overall assay quality is remarkably good, and individual observations were scored differently by man and machine in 2.6% of cases when applying stringent probability thresholds. CONCLUSION: Our definition of bounds on the accuracy of complete assays in terms of misclassification probabilities goes beyond other proposed analysis methods. We expect the scoring method to minimise human intervention and provide a more objective error estimate in genotype calling.
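A minimal sketch of the mixture-model idea: fit a three-component 2-D Gaussian mixture (one component per genotype cluster) and score each observation by its posterior probability; the simulated data and the 0.99 threshold are illustrative, not the paper's settings:

```python
# Fit a 3-component Gaussian mixture to 2-D allele signals and score calls
# by posterior cluster-membership probability.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
clusters = [rng.normal(loc, 0.08, (60, 2))
            for loc in ([0.2, 0.9], [0.5, 0.5], [0.9, 0.2])]
signals = np.vstack(clusters)                     # 2-D signal intensities

gmm = GaussianMixture(n_components=3, random_state=0).fit(signals)
posterior = gmm.predict_proba(signals)

calls = posterior.argmax(axis=1)                  # genotype assignments
confident = posterior.max(axis=1) > 0.99          # stringent threshold
print(f"{confident.mean():.1%} of datapoints called at p > 0.99")
```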

