Similar Literature
20 similar documents found (search time: 31 ms)
1.
Finite element (FE) analysis is a cornerstone of orthopaedic biomechanics research. Three-dimensional medical imaging provides sufficient resolution for subject-specific FE models to be generated from these data-sets. FE model development requires discretisation of a three-dimensional domain, which can be the most time-consuming component of an FE study. Hexahedral meshing tools based on the multiblock method currently rely on the manual placement of building blocks for mesh generation. We hypothesise that angular analysis of the geometric centreline of a three-dimensional surface can be used to automatically generate building-block structures for multiblock hexahedral mesh generation. Our algorithm uses a set of user-defined points and parameters to automatically generate a multiblock structure based on a surface's geometric centreline, significantly reducing the time required for model development. We applied this algorithm to 47 bones of varying geometries and successfully generated an FE mesh in every case. This work represents a significant advance in automatically generating multiblock structures for a wide range of geometries.
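The abstract does not specify how the centreline angles are analysed; as a minimal sketch under assumed details, candidate block boundaries could be placed wherever the turning angle of a polyline centreline exceeds a threshold. The function name, the 30-degree default, and the polyline representation are all illustrative, not taken from the paper:

```python
import numpy as np

def block_boundaries(centreline, angle_deg=30.0):
    """Indices along a polyline centreline where the turning angle
    exceeds a threshold -- candidate locations at which to split the
    surface into building blocks (illustrative heuristic only)."""
    pts = np.asarray(centreline, dtype=float)
    v = np.diff(pts, axis=0)                      # segment vectors
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    cos_a = np.clip((v[:-1] * v[1:]).sum(axis=1), -1.0, 1.0)
    turn = np.degrees(np.arccos(cos_a))           # turning angle at each interior vertex
    return [i + 1 for i, a in enumerate(turn) if a > angle_deg]
```

A straight centreline yields no boundaries, while a right-angle bend is flagged at the bend vertex.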


3.
Li B, Turuvekere S, Agrawal M, La D, Ramani K, Kihara D. Proteins 2008, 71(2):670-683
Experimentally determined protein tertiary structures are accumulating rapidly in databases, partly owing to structural genomics projects. These include proteins of unknown function, whose function has not been investigated experimentally and could not be predicted by conventional sequence-based searches. Such uncharacterized protein structures highlight the urgent need for computational methods that annotate proteins from their tertiary structures, including function annotation methods based on characterizing protein local surfaces. Toward structure-based protein annotation, we have developed the VisGrid algorithm, which uses a visibility criterion to characterize local geometric features of protein surfaces. Unlike existing methods, which are concerned only with identifying pockets that could be potential ligand-binding sites, VisGrid also aims to identify large protrusions, hollows, and flat regions, which characterize the geometric features of a protein structure. The visibility used in VisGrid is defined as the fraction of visible directions from a target position on the protein surface. A pocket or a hollow is recognized as a cluster of positions with low visibility, and a large protrusion is recognized as a pocket in the negative image of the structure. VisGrid correctly identified 95.0% of ligand-binding sites as one of the three largest pockets in 5,616 benchmark proteins. To examine how the natural flexibility of proteins affects pocket identification, VisGrid was also tested on structures distorted by molecular dynamics simulation. Sensitivity decreased by approximately 20% for structures with a root mean square deviation of 2.0 Å from the original crystal structure, but specificity was largely unaffected. Because of its intuitiveness and simplicity, the visibility criterion lays a foundation for characterizing and functionally annotating the local shape of proteins.
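The visibility criterion described above can be sketched on a voxelized shape: cast rays in sampled directions from a point and count the fraction that escape without hitting occupied voxels. This is a simplified illustration, not the VisGrid implementation; the ray count, step size, and demo geometry are assumptions:

```python
import numpy as np

def visibility(grid, pos, n_dirs=64, max_steps=60, seed=0):
    """Fraction of sampled ray directions from `pos` that escape an
    occupied-voxel grid without hitting a filled voxel (a simplified
    stand-in for the visibility criterion described above)."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_dirs, 3))            # ~uniform directions on the sphere
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    visible = 0
    for d in dirs:
        p = np.asarray(pos, dtype=float)
        hit = False
        for _ in range(max_steps):
            p = p + 0.5 * d                        # march half a voxel per step
            i, j, k = np.round(p).astype(int)
            if not (0 <= i < grid.shape[0]
                    and 0 <= j < grid.shape[1]
                    and 0 <= k < grid.shape[2]):
                break                              # ray escaped the volume
            if grid[i, j, k]:
                hit = True
                break
        visible += (not hit)
    return visible / n_dirs

# demo: a solid slab with a square pocket carved into its top surface
demo = np.zeros((20, 20, 20), dtype=bool)
demo[:, :, :8] = True
demo[8:12, 8:12, 4:8] = False
pocket_vis = visibility(demo, (10, 10, 6))   # point inside the pocket
flat_vis = visibility(demo, (3, 3, 9))       # point above the flat surface
```

As expected, the point inside the pocket sees far fewer open directions than a point above the flat surface, which is exactly the signal used to cluster pocket positions.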

4.
This paper addresses an important issue for the clinical relevance of computer-assisted surgical applications, namely the methodology used to automatically build patient-specific finite element (FE) models of anatomical structures. A method is proposed, based on a technique called the mesh-matching method, followed by a process that corrects mesh irregularities. The mesh-matching algorithm generates patient-specific volume meshes from an existing generic model. The mesh regularization process is based on the Jacobian matrix transform relating the FE reference element to the current element. This method for generating patient-specific FE models is first applied to computer-assisted maxillofacial surgery and, more precisely, to FE elastic modelling of patient facial soft tissues. For each patient, the planned bone osteotomies (mandible, maxilla, chin) are used as boundary conditions to deform the FE face model, in order to predict the aesthetic outcome of the surgery. Seven patient-specific FE models were successfully generated by our method. For one patient, the prediction of the FE model is qualitatively compared with the patient's post-operative appearance, measured from a computed tomography scan. The methodology is then applied to computer-assisted orbital surgery and evaluated on the generation of 11 patient-specific FE poroelastic models of the orbital soft tissues. These models are used to predict the consequences of surgical decompression of the orbit. More precisely, an average law is extrapolated from the simulations carried out for each patient model; this law links the size of the osteotomy (the surgical gesture) to the backward displacement of the eyeball (the consequence of the surgical gesture).
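The Jacobian-based regularity check underlying such mesh-correction steps can be illustrated in two dimensions: the determinant of the isoparametric Jacobian at an element's centre flags inverted or degenerate elements when it is non-positive. This sketch uses a 4-node quadrilateral rather than the paper's volume elements, purely for brevity:

```python
import numpy as np

def quad_det_jacobian(nodes):
    """det J at the centre (xi = eta = 0) of a 4-node quadrilateral
    element with nodes listed counter-clockwise. Negative or zero
    values flag inverted/degenerate elements."""
    x = np.asarray(nodes, dtype=float)          # shape (4, 2)
    dN_dxi = np.array([-1.0, 1.0, 1.0, -1.0]) / 4.0
    dN_deta = np.array([-1.0, -1.0, 1.0, 1.0]) / 4.0
    J = np.array([dN_dxi @ x, dN_deta @ x])     # 2x2 Jacobian matrix
    return float(np.linalg.det(J))
```

A unit square gives det J = 0.25; listing the same nodes clockwise inverts the element and flips the sign.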

5.
The finite element (FE) method is a proven, powerful and efficient tool for studying the biomechanics of the human lumbar spine. However, because of the large inter-subject variability in geometry and material properties of human lumbar spines, concerns exist about the accuracy and predictive power of a single deterministic FE model built from one set of spinal geometry and material properties. It has been confirmed that the combined predictions (median or mean value) of several distinct FE models provide an improved prediction of the behaviour of the human lumbar spine under identical loading and boundary conditions. In light of this, five FE models (L1-L5 spinal levels) of the human lumbar spine were developed from five healthy living subjects using an identical modelling method. The five models were extensively validated against experimental and computational results in the literature, and mesh convergence and material sensitivity analyses were conducted. We show that the results from the five FE models developed in this paper are consistent with experimental data and simulation results from the existing literature. The validated modelling method introduced in this study can be used to model dysfunctional lumbar spines, such as disc degeneration and scoliosis, in future work.

6.
Traditional finite element (FE) analysis is computationally demanding. The computational time becomes prohibitively long when multiple loading and boundary conditions must be considered, as in musculoskeletal movement simulations involving multiple joints and muscles. Presented in this study is an innovative approach that takes advantage of the computational efficiency of both the dynamic multibody (MB) method and neural network (NN) analysis. An NN model that captures the behavior of musculoskeletal tissue under known loading situations is built, trained, and validated using both MB and FE simulation data. Nonlinear, dynamic NNs are found to yield better predictions than their linear, static counterparts. The resulting NN model can predict stress values at regions of interest within the musculoskeletal system in a fraction of the time required by FE simulation.
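The surrogate idea, learning an input-to-stress mapping from simulation data so that later queries avoid FE entirely, can be sketched with a tiny regression network. Everything here is illustrative: the sinusoidal "response", the network size, and the training settings are assumptions, not details from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic training data standing in for (load condition -> tissue stress)
X = rng.uniform(0.0, 1.0, (200, 1))
y = 0.5 * np.sin(2.0 * np.pi * X) + 0.5   # placeholder nonlinear response

# one-hidden-layer regression network trained by plain gradient descent
W1 = rng.normal(0.0, 1.0, (1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 1.0, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)              # hidden activations
    err = h @ W2 + b2 - y                 # prediction residual
    losses.append(float((err ** 2).mean()))
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)    # backprop through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2
```

Once trained, evaluating the network is a handful of matrix products, which is the source of the speed-up over re-running an FE solve for every loading case.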

7.
Protein structure alignment algorithms play an important role in studies of protein structure and function. In this paper, a novel approach to structure alignment is presented. Specifically, core regions in two protein structures are first aligned by identifying connected components in a network of neighboring, geometrically compatible aligned fragment pairs. The initial alignments are then refined through a multi-objective optimization method. The algorithm can produce both sequential and non-sequential alignments. We show the superior performance of the proposed algorithm through computational experiments on several benchmark datasets and comparisons with well-known structure alignment algorithms such as DALI, CE and MATT. The proposed method obtains accurate and biologically significant alignments in cases with internal repeats or indels, identifies circular permutations, and reveals conserved functional sites. A ranking criterion of our algorithm for fold similarity is presented and found to be comparable or superior to the Z-score of CE in most cases in our numerical experiments. The software and supplementary data of the computational results are available at .

8.
Numerical studies of fluid-structure interaction have primarily relied on decoupling the solid and fluid sub-domains, with the interactions treated as external boundary conditions on the individual sub-domains. Finite element approaches to fluid-structure interaction can be divided into iterative and sequential algorithms. In this paper, a new computational methodology for the analysis of tissue-fluid interaction problems is presented. The whole computational domain is treated as a single biphasic continuum, and the same space and time discretisation is used for the sub-domains via a penalty-based finite element model. This procedure does not require the explicit modelling of additional boundary conditions or interface elements. The developed biphasic interface finite element model is used to analyse blood flow through normal and stenotic arteries. The increase in fluid velocity through a stenosed artery, and the associated pressure drop in that region, are captured by this method.

9.
Longitudinal trials involving surgical interventions commonly have subject-specific intervention times, owing to constraints on the availability of surgeons and operating theatres. Moreover, the intervention often effects a discontinuous change in the mean response. We propose a nonparametric estimator for the mean response profile of longitudinal data with staggered intervention times and a discontinuity at the times of intervention, as an exploratory tool to assist the formulation of a suitable parametric model. We use an adaptation of the standard generalized additive model algorithm for estimation, with smoothing constants chosen by a cross-validation criterion. We illustrate the method using longitudinal data from a trial assessing the effect of lung resection surgery in the treatment of emphysema patients.

10.
Claeskens G, Consentino F. Biometrics 2008, 64(4):1062-1069
SUMMARY: Application of classical model selection methods such as Akaike's information criterion (AIC) becomes problematic when observations are missing. In this article we propose some variations on the AIC that are applicable to missing-covariate problems. The method is based directly on the expectation-maximization (EM) algorithm and is readily available for EM-based estimation methods without much additional computational effort. The missing-data AIC criteria are formally derived and shown to work in a simulation study and in an application to data on diabetic retinopathy.
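The classical criterion that the article builds on is AIC = 2k - 2 log L, which trades off fit against the number of parameters k. A minimal sketch (the log-likelihood values below are made up for illustration; the article's missing-data variants replace log L with an EM-based quantity):

```python
def aic(log_likelihood, k):
    """Akaike's information criterion: AIC = 2k - 2 log L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# toy comparison: a 3-parameter model versus a 5-parameter model whose
# slightly better fit does not justify the two extra parameters
aic_small = aic(-102.0, 3)   # 2*3 - 2*(-102.0) = 210.0
aic_large = aic(-101.5, 5)   # 2*5 - 2*(-101.5) = 213.0
```

The smaller model wins here despite the marginally worse fit, which is exactly the penalization behaviour the criterion is designed to give.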

11.

Background

The finite volume solver Fluent (Lebanon, NH, USA) is a computational fluid dynamics package employed to analyse biological mass transport in the vasculature. A principal consideration in computational modelling of blood-side mass transport is the selection of the convection-diffusion discretisation scheme. Because numerous discretisation schemes are available when developing a mass-transport numerical model, the results obtained should be validated against either benchmark theoretical solutions or experimentally obtained results.

Methods

An idealised aneurysm model was selected for the experimental and computational mass-transport analysis of species concentration because of its well-defined recirculation region within the aneurysmal sac, which allows species concentration to vary slowly with time. The experimental results were obtained from fluid samples extracted from a glass aneurysm model, using the direct spectrophotometric concentration measurement technique. The computational analysis was conducted using the four convection-diffusion discretisation schemes available to the Fluent user: the First-Order Upwind, the Power Law, the Second-Order Upwind and the Quadratic Upstream Interpolation for Convective Kinetics (QUICK) schemes. The fluid has a diffusivity of 3.125 × 10^-10 m^2/s in water, resulting in a Peclet number of 2,560,000 and indicating strongly convection-dominated flow.
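The reported Peclet number follows from Pe = U·L/D. The diffusivity is from the abstract, but the velocity and length scale below are assumed values chosen only so that the product reproduces the quoted Pe; the study's actual scales are not stated here:

```python
# Illustrative check of Pe = U * L / D for the figures quoted above.
D = 3.125e-10   # species diffusivity in water, m^2/s (from the abstract)
U = 0.08        # assumed characteristic velocity, m/s
L = 0.01        # assumed characteristic length scale, m
Pe = U * L / D  # ratio of convective to diffusive transport
```

Any (U, L) pair with U·L = 8 × 10^-4 m^2/s yields the same Pe; the point is simply that Pe ≫ 1 marks the flow as convection-dominated, which is why scheme choice matters so much below.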

Results

The discretisation scheme applied to the solution of the convection-diffusion equation for blood-side mass transport within the vasculature has a significant influence on the resultant species concentration field. The First-Order Upwind and Power Law schemes produce similar results. The Second-Order Upwind and QUICK schemes also correlate well with each other but differ considerably from the concentration contour plots of the First-Order Upwind and Power Law schemes. The computational results were then compared with the experimental findings. Average errors of 140% and 116% were found between the experimental results and those obtained from the First-Order Upwind and Power Law schemes, respectively. In contrast, both the Second-Order Upwind and QUICK schemes accurately predict species concentration under high-Peclet-number, convection-dominated flow conditions.

Conclusion

Convection-diffusion discretisation scheme selection has a strong influence on the species concentration fields determined by CFD. Either the Second-Order Upwind or the QUICK discretisation scheme should be implemented when numerically modelling convection-dominated mass-transport conditions. Finally, care should be taken not to use computationally inexpensive discretisation schemes at the cost of accuracy in the resultant species concentration.

12.
MOTIVATION: Haplotype information has become increasingly important in analyzing fine-scale molecular genetics data, for applications such as disease gene mapping and drug design. Parsimony haplotyping is one of the haplotyping problems in the NP-hard class. RESULTS: In this paper, we develop a novel algorithm for the haplotype inference problem under the parsimony criterion, based on a parsimonious tree-grow method (PTG). PTG is a heuristic algorithm that finds a minimum number of distinct haplotypes under the criterion of keeping all genotypes resolved during the tree-grow process. In addition, a block-partitioning method is proposed to improve computational efficiency. We show that the proposed approach is not only effective, with high accuracy, but also very efficient, with computational complexity of order O(m^2 n) for n single nucleotide polymorphism sites in m individual genotypes. AVAILABILITY: The software is available upon request from the authors, or from http://zhangroup.aporc.org/bioinfo/ptg/ CONTACT: chen@elec.osaka-sandai.ac.jp SUPPLEMENTARY INFORMATION: Supporting materials are available from http://zhangroup.aporc.org/bioinfo/ptg/bti572supplementary.pdf
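Why parsimony haplotyping is hard can be seen from the combinatorics of genotype resolution: a genotype with h heterozygous sites is consistent with 2^(h-1) distinct unordered haplotype pairs. A small enumeration sketch (the 0/1/2 genotype encoding is a common convention, assumed here; PTG itself avoids this exhaustive enumeration):

```python
from itertools import product

def compatible_pairs(genotype):
    """All unordered haplotype pairs consistent with a genotype vector,
    where 0/1 denote homozygous sites and 2 denotes a heterozygous site."""
    het = [i for i, g in enumerate(genotype) if g == 2]
    pairs = set()
    for bits in product((0, 1), repeat=len(het)):
        h1, h2 = list(genotype), list(genotype)
        for pos, b in zip(het, bits):
            h1[pos], h2[pos] = b, 1 - b          # assign complementary alleles
        pairs.add(tuple(sorted((tuple(h1), tuple(h2)))))
    return pairs
```

The ambiguity grows exponentially in h, which is why heuristics like PTG, which share haplotypes across genotypes to keep the total count small, are needed.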

13.
This communication extends the recently reported cell-specific finite element (FE) method of Slomka and Gefen (2010), in which geometrically realistic FE cell models are created from confocal microscopy scans for large-deformation analyses. The cell-specific FE method is extended here in the following respects: (i) we demonstrate that cell-specific FE is versatile enough to deal with cells of substantially different geometrical shapes, using the examples of an "elongated" pre-adipocyte and a "round" mature adipocyte; (ii) we demonstrate that cell-specific FE can be used to analyze the mechanical behavior of cells that incorporate complex intracellular structures and are subjected to large deformations, again through the example of an adipocyte containing a multitude of lipid droplets, each with a different size and shape. By demonstrating the feasibility of including such inhomogeneities in the cytoplasm, the present work paves the way for modeling cellular organelles such as Golgi bodies, lysosomes and mitochondria in mechanically loaded cells using cell-specific FE.

14.
Computational modeling of the mechanics of cells and multicellular constructs with standard numerical discretization techniques such as the finite element (FE) method is complicated by the complex geometry, material properties and boundary conditions associated with such systems. The objectives of this research were to apply the material point method (MPM), a meshless method, to the modeling of vascularized constructs, adapting the algorithm to handle quasi-static, large-deformation mechanics accurately, and to apply the modified MPM algorithm to large-scale simulations using a discretization obtained directly from volumetric confocal image data. The standard implicit time-integration algorithm for MPM was modified to allow the background computational grid to remain fixed with respect to the spatial distribution of material points during the analysis. This algorithm was used to simulate the 3D mechanics of a vascularized scaffold under tension, consisting of growing microvascular fragments embedded in a collagen gel, with the construct discretized into over 13.6 million material points. Baseline 3D simulations demonstrated that the modified MPM algorithm was both more accurate and more robust than the standard MPM algorithm. Scaling studies demonstrated the ability of the parallel code to scale to 200 processors. Optimal discretization for the simulations of vascularized scaffold mechanics was established by examining stress distributions and reaction forces. Sensitivity studies showed that the reaction force during simulated extension was highly sensitive to the modulus of the microvessels, despite the fact that they comprised only 10.4% of the total sample volume; in contrast, the reaction force was relatively insensitive to the effective Poisson's ratio of the entire sample.
These results suggest that the MPM simulations could form the basis for estimating the modulus of the embedded microvessels through a parameter estimation scheme. Because of the generality and robustness of the modified MPM algorithm, the relative ease of generating spatial discretizations from volumetric image data, and the ability of the parallel computational implementation to scale to large processor counts, it is anticipated that this modeling approach may be extended to many other applications, including the analysis of other multicellular constructs and investigations of cell mechanics.
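At the heart of MPM is the transfer of particle quantities to a fixed background grid via shape functions. A minimal 1D sketch with linear (tent) shape functions, far from the study's implicit 3D algorithm but showing the core interpolation step (grid spacing and particle layout are illustrative):

```python
import numpy as np

def particles_to_grid(xp, mp, n_nodes, dx=1.0):
    """Transfer particle masses to a fixed 1D background grid using
    linear (tent) shape functions -- the basic MPM interpolation step.
    Particles must lie strictly to the left of the last node."""
    grid = np.zeros(n_nodes)
    for x, m in zip(xp, mp):
        i = int(x // dx)              # cell containing the particle
        w = x / dx - i                # fractional position within the cell
        grid[i] += (1.0 - w) * m      # share to the left node
        grid[i + 1] += w * m          # share to the right node
    return grid
```

Mass is conserved exactly by construction, since the two shape-function weights for each particle sum to one; the same kernel transfers momentum and internal forces in a full MPM step.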

15.
Surgeries such as the implantation of deep brain stimulation devices require accurate placement of devices within the brain. Because placement affects performance, image guidance and robotic assistance techniques have been widely adopted. These methods require accurate prediction of brain deformation during and following implantation. In this study, a magnetic resonance (MR) image-based finite element (FE) model was proposed using a coupled Eulerian-Lagrangian method. Anatomical accuracy was achieved by mapping image voxels directly to the volumetric mesh space. The potential utility was demonstrated by evaluating the effect of different surgical approaches on the deformation of the corpus callosum (CC) region. The results showed that the maximum displacement of the CC increases with the interventional angle with respect to the midline. The maximum displacement of the CC for different interventional locations was also predicted and is related to the brain curvature and the distance between the interventional area and the CC. The estimated displacement magnitudes of the CC region were consistent with clinical observations. The proposed method provides an automatic pipeline for generating realistic computational models for interventional surgery, and the results demonstrate the potential of constructing patient-specific models for image-guided, robotic neurological surgery.

16.
The alveolated structure of the pulmonary acinus plays a vital role in gas exchange function. Three-dimensional (3D) analysis of the parenchymal region is fundamental to understanding this structure-function relationship, but only a limited number of attempts have been made in the past because of technical limitations. In this study, we developed a new image processing methodology based on finite element (FE) analysis for accurate 3D structural reconstruction of the gas exchange regions of the lung. Stereologically well characterized rat lung samples (Pediatr Res 53: 72-80, 2003) were imaged using high-resolution synchrotron-radiation-based X-ray tomographic microscopy. A stack of 1,024 images (each slice 1,024 × 1,024 pixels) was generated at a resolution of 1.4 μm^3 per voxel. For development of the FE algorithm, regions of interest (ROI) containing approximately 7.5 million voxels were extracted as working subunits. 3D FE meshes were created over the voxel map using a grid-based hexahedral algorithm. A proper threshold value for segmentation was determined iteratively by matching the calculated volume density of tissue to the stereologically determined value (Pediatr Res 53: 72-80, 2003). The resulting 3D FE meshes are ready to be used for 3D structural analysis as well as for subsequent FE computational analyses such as fluid dynamics and skeletonization.
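The iterative threshold search described above can be sketched as a bisection on intensity: raise the threshold when the segmented volume fraction exceeds the stereological target, lower it otherwise. The function name and iteration count are illustrative, not from the paper:

```python
import numpy as np

def threshold_for_density(image, target_density, iters=60):
    """Bisect an intensity threshold so that the fraction of voxels
    segmented as tissue matches a target (e.g. stereologically
    measured) volume density."""
    lo, hi = float(image.min()), float(image.max())
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (image >= mid).mean() > target_density:
            lo = mid   # too much tissue segmented: raise the threshold
        else:
            hi = mid   # too little tissue segmented: lower the threshold
    return 0.5 * (lo + hi)
```

Because the segmented fraction is monotone non-increasing in the threshold, the bisection converges to the threshold that best matches the target density.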

17.
Large-sample theory of semiparametric models based on maximum likelihood estimation (MLE) with a shape constraint on the nonparametric component is well studied. Relatively less attention has been paid to the computational aspects of semiparametric MLE. Computation of semiparametric MLE based on existing approaches such as the expectation-maximization (EM) algorithm can be prohibitive when the missing rate is high. In this paper, we propose a computational framework for semiparametric MLE based on an inexact block coordinate ascent (BCA) algorithm, and we show theoretically that the proposed algorithm converges. This computational framework can be applied to data with a wide range of structures, such as panel count data, interval-censored data, and degradation data, among others. Simulation studies demonstrate favorable performance compared with existing algorithms in terms of accuracy and speed. Two data sets are used to illustrate the proposed computational method, which we implement in the R package BCA1SG, available on CRAN.
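Block coordinate ascent alternates exact (or inexact) maximizations over blocks of variables while holding the others fixed. A toy stand-in for the paper's parametric/nonparametric blocks, maximizing a concave quadratic with two scalar blocks (the objective is invented for illustration):

```python
def block_coordinate_ascent(n_sweeps=30):
    """Maximize f(x, y) = -(x - 1)^2 - (y - 2)^2 - x*y by alternating
    exact coordinate-wise maximizations (a two-block toy example)."""
    x, y = 0.0, 0.0
    for _ in range(n_sweeps):
        x = 1.0 - y / 2.0   # argmax over x with y fixed (df/dx = 0)
        y = 2.0 - x / 2.0   # argmax over y with x fixed (df/dy = 0)
    return x, y
```

Each sweep contracts the error toward the maximizer (0, 2) by a factor of 4, so a few dozen sweeps suffice here; the paper's contribution is allowing the block updates themselves to be inexact while preserving convergence.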

18.
In this study, we evaluated the computational efficiency of finite element (FE) simulations when a numerical approximation method is used to obtain the tangent moduli. A fiber-reinforced hyperelastic material model for nearly incompressible soft tissues was implemented for 3D solid elements using both the approximation method and the closed-form analytical method, and was validated by comparing the components of the tangent modulus tensor (also referred to as the material Jacobian) between the two methods. The computational efficiency of the approximation method was evaluated with different perturbation parameters and approximation schemes, and quantified by the number of iteration steps and the CPU time required to complete the simulations. The results show that the overall accuracy of the approximation method is improved by adopting the central difference approximation scheme rather than the forward Euler scheme. For small-scale simulations with about 10,000 DOFs, the approximation schemes reduce CPU time substantially compared with the closed-form solution, because fewer calculation steps are needed at each integration point. For a large-scale simulation with about 300,000 DOFs, however, the advantage of the approximation schemes diminishes because factorization of the stiffness matrix dominates the solution time. Overall, because it is independent of the material model, the approximation method simplifies the FE implementation of a complex constitutive model, with accuracy and computational efficiency comparable to the closed-form solution, making it attractive for FE simulations with complex material models.
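The accuracy gap between the two perturbation schemes is a standard numerical-analysis fact: forward differencing is first-order accurate in the perturbation size h, central differencing second-order. A scalar sketch (the study applies the same idea component-wise to the tangent modulus tensor):

```python
import math

def forward_diff(f, x, h=1e-3):
    """O(h) forward Euler approximation of f'(x)."""
    return (f(x + h) - f(x)) / h

def central_diff(f, x, h=1e-3):
    """O(h^2) central difference approximation of f'(x); costs one
    extra function evaluation per perturbed component."""
    return (f(x + h) - f(x - h)) / (2.0 * h)
```

For f = exp at x = 1 with h = 10^-3, the forward error is about e·h/2 ≈ 1.4 × 10^-3 while the central error is about e·h^2/6 ≈ 4.5 × 10^-7, mirroring the accuracy ordering reported above.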

19.
Bone is a complex material exhibiting several hierarchical levels of structural organization. At the submicron scale, local tissue porosity gives rise to discontinuities in the bone matrix which have been shown to influence damage behavior. Computational tools for modeling the damage behavior of bone at different length scales are mostly based on finite element (FE) analysis, with a range of algorithms developed for this purpose. Although the local mechanical behavior of bone tissue is influenced by microstructural features such as bone canals and osteocyte lacunae, these are often omitted from FE damage models because of the high computational cost of simulating across several length scales, i.e., from the loads applied at the organ level down to the stresses and strains around bone canals and osteocyte lacunae. Hence, the aim of the current study was twofold. First, a multilevel FE framework was developed to compute, starting from the loads applied at the whole-bone scale, the local mechanical forces acting at the micrometer and submicrometer levels. Second, three simple microdamage simulation procedures based on element removal were developed and applied to bone samples at the submicrometer scale, where cortical microporosity is included. The present microdamage algorithm produced behavior qualitatively analogous to previous experimental tests based on stepwise mechanical compression combined with in situ synchrotron radiation computed tomography. Our results demonstrate the feasibility of simulating microdamage at a physiologically relevant scale using an image-based meshing technique and multilevel FE analysis, allowing microdamage behavior to be related to intracortical bone microstructure.

20.
MOTIVATION: Protein expression profiling for differences indicative of early cancer holds promise for improving diagnostics. Because of their high dimensionality, statistical analysis of proteomic data from mass spectrometers is challenging in many respects, including dimension reduction, feature subset selection and the construction of classification rules. The search for an optimal feature subset, commonly known as the feature subset selection (FSS) problem, is an important step towards disease classification and diagnostics with biomarkers. METHODS: We develop a parsimonious threshold-independent feature selection (PTIFS) method based on the area under the curve (AUC) of the receiver operating characteristic (ROC). To reduce computational complexity to a manageable level, we use a sigmoid approximation to the empirical AUC as the criterion function. Starting from an anchor feature, the PTIFS method selects a feature subset through an iterative updating algorithm; highly correlated features with similar discriminating power are precluded from being selected simultaneously. The classification rule is then determined from the resulting feature subset. RESULTS: The performance of the proposed approach is investigated by extensive simulation studies and by applying the method to two mass spectrometry data sets, on prostate cancer and on liver cancer. We compare the new approach with the threshold gradient descent regularization (TGDR) method. The results show that our method achieves classification performance comparable to that of TGDR, but with fewer features selected. AVAILABILITY: Supplementary material and the PTIFS implementations are available at http://staff.ustc.edu.cn/~ynyang/PTIFS. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
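The sigmoid approximation to the empirical AUC can be sketched directly: the non-differentiable indicator 1[positive score > negative score] over all pairs is replaced by a sigmoid of the score difference, giving a smooth criterion. The steepness parameter beta below is illustrative, not the paper's choice:

```python
import numpy as np

def empirical_auc(pos, neg):
    """Fraction of (positive, negative) score pairs ranked correctly,
    with ties counting one half -- the exact, non-smooth AUC."""
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())

def sigmoid_auc(pos, neg, beta=10.0):
    """Smooth surrogate for the empirical AUC: the pairwise indicator
    is replaced by a sigmoid of the score difference, so the criterion
    becomes differentiable in the underlying feature weights."""
    diff = pos[:, None] - neg[None, :]
    return float((1.0 / (1.0 + np.exp(-beta * diff))).mean())
```

As beta grows, the sigmoid surrogate approaches the exact empirical AUC, while remaining differentiable, which is what makes iterative, gradient-style feature updates tractable.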


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号