Similar Documents
20 similar documents retrieved.
1.
Single-particle three-dimensional (3D) reconstruction by cryo-electron microscopy is a standard method for solving the 3D structures of biological macromolecules. However, the solvent-flattening step in current single-particle 3D reconstruction workflows has a shortcoming: none of the mainstream single-particle reconstruction programs can automatically find a mask 3D density map, so the reconstruction is inevitably disturbed by bias in the statistical noise model. To address this problem, this study borrows the solvent-flattening approach widely used for phase improvement in X-ray crystallography: the reconstructed 3D density map is processed with Gaussian filtering, Canny edge detection, and minimum-error thresholding to optimize the solvent-flattening operation and to automatically find the mask 3D density map during single-particle reconstruction. The method was evaluated with Fourier shell correlation (FSC) curves of the 3D density maps and with scatter plots of angular-assignment errors on simulated particle data. The results show that the automatic mask-finding method locates a mask density map that covers the molecular-signal region well and clearly improves the resolution of the reconstructed density map.
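A minimal sketch of the automatic mask-finding idea, assuming the density map is a NumPy array: the map is smoothed with a Gaussian filter, thresholded, and slightly dilated. Otsu thresholding stands in here for the minimum-error thresholding named in the abstract, the Canny edge-detection stage is omitted, and the sigma and dilation radius are illustrative choices, not the authors' values.

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def auto_mask(density: np.ndarray, sigma: float = 3.0, dilate: int = 5) -> np.ndarray:
    """Return a binary mask covering the molecular-signal region of a 3D map."""
    smoothed = ndimage.gaussian_filter(density, sigma)    # suppress solvent noise
    binary = smoothed > threshold_otsu(smoothed)          # signal vs. solvent split
    labels, n = ndimage.label(binary)                     # connected components
    if n > 1:                                             # keep the largest blob
        sizes = ndimage.sum(binary, labels, range(1, n + 1))
        binary = labels == (np.argmax(sizes) + 1)
    return ndimage.binary_dilation(binary, iterations=dilate)  # pad the edges
```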

2.
We present a multimodal technique for measuring the integral refractive index and the thickness of biological cells and their organelles by integrating interferometric phase microscopy (IPM) and rapid confocal fluorescence microscopy. First, the actual thickness maps of the cellular compartments are reconstructed from the confocal fluorescence sections, and then the optical path difference (OPD) map of the same cell is reconstructed using IPM. Based on the co-registered data, the integral refractive index maps of the cell and its organelles are calculated. This technique enables rapid refractive-index measurement of live, dynamic cells: IPM provides quantitative imaging capabilities, while confocal fluorescence microscopy provides molecular specificity for the cell organelles. We imaged human colorectal adenocarcinoma cells and show that the integral refractive index values are similar for the whole cell, the cytoplasm, and the nucleus at the population level, but differ significantly at the single-cell level.
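The core calculation reduces to one relation per pixel: the integral refractive index is the medium index plus the OPD divided by the local thickness. A minimal sketch, assuming co-registered NumPy maps `opd` and `thickness` in the same length units; the function name and the water-like default n_medium = 1.33 are illustrative, not the authors' API.

```python
import numpy as np

def integral_refractive_index(opd, thickness, n_medium=1.33, eps=1e-9):
    """n(x, y) = n_medium + OPD(x, y) / h(x, y), masked where h is ~0."""
    n = np.full_like(opd, np.nan, dtype=float)
    inside = thickness > eps                     # restrict to the cell area
    n[inside] = n_medium + opd[inside] / thickness[inside]
    return n
```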

3.

Background  

We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). To surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods cannot solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods that can reduce computation time by an order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without losing robustness.
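A minimal sketch of the hybrid stochastic-deterministic idea under discussion (not the specific algorithms of Rodriguez-Fernandez and coworkers): a stochastic global search over the parameter box, followed by a deterministic local polish from the best point found. The toy sum-of-squares objective stands in for a real dynamic-model residual.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def calibrate(sse, bounds, seed=0):
    """sse: parameter vector -> sum of squared residuals of the model fit."""
    # Stage 1: stochastic global search (robust to multi-modality).
    coarse = differential_evolution(sse, bounds, seed=seed, maxiter=200)
    # Stage 2: deterministic local refinement from the best point found.
    fine = minimize(sse, coarse.x, method="L-BFGS-B", bounds=bounds)
    return fine.x, fine.fun

# Toy objective: true parameters (1.5, 0.3) recovered from a quadratic bowl.
target = np.array([1.5, 0.3])
best, cost = calibrate(lambda p: float(np.sum((p - target) ** 2)),
                       bounds=[(0.0, 5.0), (0.0, 5.0)])
```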

4.
Photoacoustic/optoacoustic tomography aims to reconstruct maps of the initial pressure rise induced by the absorption of light pulses in tissue. This reconstruction is an ill-conditioned and under-determined problem when the data acquisition protocol involves limited detection positions. The aim of this work is to develop an inversion method that integrates a denoising procedure within iterative model-based reconstruction to improve the quantitative performance of optoacoustic imaging. Among model-based schemes, total-variation (TV) constrained reconstruction is a popular approach. Here, a two-step approach is proposed that improves TV-constrained optoacoustic inversion by adding a non-local-means filtering step within each TV iteration. Compared to TV-based reconstruction, including this non-local-means step improved the signal-to-noise ratio of the reconstructed optoacoustic images by 2.5 dB.
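A minimal 2-D sketch of alternating a non-local-means step with each TV iteration, in the spirit of the two-step scheme above. The real method works on the optoacoustic forward model inside a model-based inversion; here that is replaced by plain alternation of the two denoisers on an image estimate, with illustrative weights.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means

def tv_nlm_iterations(noisy, n_iter=5, tv_weight=0.1, nlm_h=0.05):
    """Alternate a TV update and a non-local-means step on an image estimate."""
    x = noisy.astype(float)
    for _ in range(n_iter):
        x = denoise_tv_chambolle(x, weight=tv_weight)        # TV update
        x = denoise_nl_means(x, h=nlm_h, fast_mode=True)     # NLM step per iteration
    return x
```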

5.
Methods designed for inferring phylogenetic trees have been widely applied to reconstruct biogeographic history. Because traditional phylogenetic methods used in biogeographic reconstruction are based on trees rather than networks, they carry the strict assumption that dispersal among geographical units has occurred along single dispersal routes across regions, and they are therefore incapable of modelling multiple alternative dispersal scenarios. The goal of this study is to describe a new method for retracing species dispersal by means of directed phylogenetic networks obtained using a horizontal gene transfer (HGT) detection method, and to draw parallels between the processes of HGT and biogeographic reconstruction. In our case study, we reconstructed the biogeographic history of the postglacial dispersal of freshwater fishes in the Ontario province of Canada. This case study demonstrated the utility and robustness of the new method, indicating that the most important events were south-to-north dispersal patterns, as one would expect, with secondary faunal interchange among regions. Finally, we showed how our method can be used to explore additional questions regarding commonalities in dispersal history patterns and phylogenetic similarities among species.

6.
High-throughput transgene copy number estimation by competitive PCR
Transgene copy number affects the level and stability of gene expression. Therefore, it is important to determine the copy number of each transgenic line. Polymerase chain reaction (PCR) is widely employed to quantify amounts of target sequences. Although PCR is not inherently quantitative, various means of overcoming this limitation have been devised. Recent real-time PCR methods are rapid; however, they typically lack a suitable internal standard, limit the size of the target sequence, and require expensive specialized equipment. Competitive PCR techniques avoid these problems, but traditional competitive methods are time consuming. Here we apply mathematical modeling to create a rapid, simple, and inexpensive copy number determination method that retains the robustness of competitive PCR.
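A minimal sketch of the basic competitive-PCR ratio idea, not the authors' specific mathematical model: when target and competitor amplify with equal efficiency, the ratio of end-point product intensities approximates the ratio of initial copy numbers, so transgene copies per genome can be read off against an internal single-copy reference. All names and values are illustrative.

```python
def copy_number(target_intensity, competitor_intensity,
                competitor_copies_per_genome=1.0):
    """Estimate transgene copies per genome from end-point band intensities."""
    ratio = target_intensity / competitor_intensity
    return ratio * competitor_copies_per_genome

# A target band twice as intense as a single-copy competitor band suggests
# roughly two transgene copies per genome.
print(copy_number(2.0, 1.0))  # -> 2.0
```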

7.
Polarized light scattering spectroscopy (PLSS) is a promising optical technique for cancer detection that extracts singly scattered light to infer morphological information about epithelial cells. However, traditional PLSS uses either a rotatable polarizer or two orthogonal polarizers to isolate the single-scattering signal, which makes it complicated and challenging to build a PLSS endoscope. Here we propose, for the first time, a snapshot PLSS with a single optical path that directly obtains the single-scattering signal. The singly scattered light is encoded using spectrally modulated polarimetry and decoded using the continuous slide iterative method. Both polystyrene microsphere solutions and ex vivo gastric cancer samples were used to verify the method. The experimental results of the snapshot PLSS agree well with those of traditional PLSS. The proposed method shows potential for building snapshot PLSS endoscope systems in the future.

8.
Measuring the quality of three-dimensional (3D) reconstructed biological macromolecules by transmission electron microscopy is still an open problem. In this article, we extend the applicability of the spectral signal-to-noise ratio (SSNR) to the evaluation of 3D volumes reconstructed with any reconstruction algorithm. The basis of the method is to measure the consistency between the data and a corresponding set of reprojections computed for the reconstructed 3D map. The idiosyncrasies of the reconstruction algorithm are taken explicitly into account by performing a noise-only reconstruction. This results in the definition of a 3D SSNR which provides an objective indicator of the quality of the 3D reconstruction. Furthermore, the information to build the SSNR can be used to produce a volumetric SSNR (VSSNR). Our method overcomes the need to divide the data set in two. It also provides a direct measure of the performance of the reconstruction algorithm itself; this latter information is typically not available with the standard resolution methods which are primarily focused on reproducibility alone.
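A minimal sketch of a shell-wise SSNR computed from the consistency between experimental projections and reprojections of the reconstructed map. This is illustrative only: it treats reprojection power as signal and the residual as noise, and omits the noise-only reconstruction that the article uses to account for the algorithm's own behavior.

```python
import numpy as np

def spectral_ssnr(data, reproj, n_shells=32):
    """data, reproj: stacks of square 2-D projections, shape (N, s, s)."""
    s = data.shape[-1]
    fy, fx = np.meshgrid(np.fft.fftfreq(s), np.fft.fftfreq(s), indexing="ij")
    radius = np.sqrt(fx**2 + fy**2)
    shells = np.minimum((radius / 0.5 * n_shells).astype(int), n_shells - 1)
    D = np.fft.fft2(data)
    R = np.fft.fft2(reproj)
    signal = np.abs(R) ** 2            # model (reprojection) power
    noise = np.abs(D - R) ** 2         # residual power per Fourier voxel
    ssnr = np.empty(n_shells)
    for k in range(n_shells):          # accumulate per resolution shell
        m = shells == k
        ssnr[k] = signal[:, m].sum() / max(noise[:, m].sum(), 1e-12)
    return ssnr
```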

9.
Although a large body of work exists on tests of correlated evolution of two continuous characters, hypotheses such as character displacement are really tests of whether substantial evolutionary change has occurred on a particular branch or branches of the phylogenetic tree. In this study, we present a methodology for testing such a hypothesis using ancestral character state reconstruction and simulation. Furthermore, we suggest how to investigate the robustness of the hypothesis test by varying the reconstruction methods or simulation parameters. As a case study, we tested a hypothesis of character displacement in body size of Caribbean Anolis lizards. We compared squared-change, weighted squared-change, and linear parsimony reconstruction methods; gradual Brownian motion and speciational models of evolution; and several resolution methods for linear parsimony. We used ancestor reconstruction methods to infer the amount of body size evolution, and tested whether evolutionary change in body size was greater on branches of the phylogenetic tree on which a transition from occupying a single-species island to a two-species island occurred. Simulations were used to generate null distributions of reconstructed body size change. The hypothesis of character displacement was tested using Wilcoxon rank-sum tests. When tested against simulated null distributions, all of the reconstruction methods yielded more significant P-values than when standard statistical tables were used. These results confirm that P-values for tests using ancestor reconstruction methods should be assessed via simulation rather than from standard statistical tables. Linear parsimony can produce an infinite number of most parsimonious reconstructions for continuous characters. We present an example of assessing the robustness of our statistical test by exploring the sample space of possible resolutions, comparing ACCTRAN and DELTRAN resolutions of ambiguous character reconstructions in linear parsimony to the most and least conservative resolutions for our particular hypothesis.
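A minimal sketch of the simulation logic: generate a null distribution of Brownian-motion change on the focal branches and compare the observed reconstructed changes against it with a rank-sum test. The tree handling is reduced to a list of branch lengths, and all numbers are illustrative; a real analysis would reconstruct ancestral states on an actual phylogeny.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)

def simulate_branch_changes(branch_lengths, sigma2=1.0, n_rep=1000):
    """Absolute BM change per branch: |dx| with dx ~ Normal(0, sigma2 * t)."""
    t = np.asarray(branch_lengths)
    return np.abs(rng.normal(0.0, np.sqrt(sigma2 * t), size=(n_rep, t.size)))

observed_focal = np.array([0.9, 1.1, 0.8])   # reconstructed |change| on focal branches
null = simulate_branch_changes([0.2, 0.25, 0.2]).ravel()
stat, p = ranksums(observed_focal, null)
print(f"rank-sum P = {p:.4f}")               # assessed against the simulated null
```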

10.
MR fingerprinting (MRF) is an innovative approach to quantitative MRI. A typical disadvantage of dictionary-based MRF is the explosive growth of the dictionary as a function of the number of reconstructed parameters (an instance of the curse of dimensionality), which drives an explosion of resource requirements. In this work, we describe a deep learning approach for MRF parameter map reconstruction using a fully connected architecture. Using simulations, we investigated how the performance of the neural network (NN) approach scales with the number of parameters to be retrieved, compared to the standard dictionary approach. We also studied optimal training procedures by comparing different strategies for noise addition and parameter space sampling, to achieve better accuracy and robustness to noise. Four MRF sequences were considered: IR-FISP, bSSFP, IR-FISP-B1, and IR-bSSFP-B1. NN and dictionary approaches were compared in reconstructing parameter maps, as a function of the number of parameters to be retrieved, using a numerical brain phantom. Results demonstrated that training with random sampling and different levels of noise variance yielded the best performance. NN performance was at least as good as the dictionary-based approach in reconstructing parameter maps with Gaussian noise as the source of artifacts; the difference in performance increased with the number of estimated parameters because the dictionary method suffers from the coarse resolution of the parameter space sampling. The NN proved more efficient in memory usage and computational burden, and has great potential for solving large-scale MRF problems.
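A minimal sketch of the fully connected mapping from a fingerprint signal to tissue parameters, trained on dictionary-style simulated pairs with added Gaussian noise, following the training strategy described above. scikit-learn's MLPRegressor stands in for the authors' network, and the exponential "fingerprints" are a toy signal model, not IR-FISP or bSSFP simulations.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train, sig_len = 5000, 100

params = rng.uniform([300, 20], [3000, 300], size=(n_train, 2))   # (T1, T2) in ms
t = np.arange(1, sig_len + 1)[None, :] * 10.0                     # toy readout times
signals = np.exp(-t / params[:, 1:2]) * (1 - np.exp(-t / params[:, 0:1]))
signals += rng.normal(0, 0.01, signals.shape)                     # noise augmentation

net = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=300, random_state=0)
net.fit(signals, params)                      # fingerprint -> (T1, T2) regression
print(net.predict(signals[:3]))               # estimated maps, pixel by pixel
```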

11.
Vector reconstruction from firing rates
In a number of systems, including wind detection in the cricket, visual motion perception and coding of arm movement direction in the monkey, and place cell responses to position in the rat hippocampus, firing rates in a population of tuned neurons are correlated with a vector quantity. We examine and compare several methods that allow the coded vector to be reconstructed from measured firing rates. In cases where the neuronal tuning curves resemble cosines, linear reconstruction methods work as well as more complex statistical methods that require more detailed information about the responses of the coding neurons. We present a new linear method, the optimal linear estimator (OLE), that on average provides the best possible linear reconstruction. This method is compared with the more familiar vector method and shown to produce more accurate reconstructions using far fewer recorded neurons.
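A minimal sketch of the OLE idea: choose decoding vectors that minimize the mean squared reconstruction error over a set of trials, which reduces to an ordinary least-squares problem R @ D ≈ V. The cosine-tuned population and noise level below are an illustrative setup, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials = 30, 500

pref = rng.uniform(0, 2 * np.pi, n_neurons)              # preferred directions
theta = rng.uniform(0, 2 * np.pi, n_trials)              # encoded directions
V = np.stack([np.cos(theta), np.sin(theta)], axis=1)     # unit target vectors
rates = np.maximum(np.cos(theta[:, None] - pref[None, :]), 0)  # cosine tuning
rates += rng.normal(0, 0.1, rates.shape)                 # firing-rate noise

D, *_ = np.linalg.lstsq(rates, V, rcond=None)            # OLE decoding vectors
V_hat = rates @ D                                        # reconstructed vectors
print(f"mean error: {np.mean(np.linalg.norm(V_hat - V, axis=1)):.3f}")
```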

12.
Background: Distance-based phylogenetic reconstruction methods use evolutionary distances between species to reconstruct the phylogenetic tree spanning them. There are many different methods for estimating distances from sequence data; these methods assume different substitution models and have different statistical properties. Since the true substitution model is typically unknown, it is important to consider the effect of model misspecification on the performance of a distance estimation method. Results: This paper continues the line of research that attempts to adjust, for each given set of input sequences, a distance function that maximizes the expected topological accuracy of the reconstructed tree. We focus here on the effect of systematic error caused by assuming an inadequate model, but also consider the stochastic error caused by using short sequences. We introduce a theoretical framework for analyzing both sources of error based on the notion of deviation from additivity, which quantifies the contribution of model misspecification to the estimation error. We demonstrate this framework by studying the behavior of the Jukes-Cantor distance function when applied to data generated under Kimura's two-parameter model with a transition-transversion bias, providing both a theoretical derivation for this case and a detailed simulation study on quartet trees. Conclusions: We demonstrate both analytically and experimentally that by deliberately assuming an oversimplified evolutionary model, it is possible to increase the topological accuracy of reconstruction. Our theoretical framework provides new insights into the mechanisms that enable statistically inconsistent reconstruction methods to outperform consistent ones.
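For concreteness, the two distance functions discussed above are short closed-form expressions. A minimal sketch: distances estimated either by the matching K2P formula or by the deliberately oversimplified Jukes-Cantor formula, from observed transition (P) and transversion (Q) proportions; the example values are illustrative.

```python
import numpy as np

def jc_distance(p):
    """JC69: d = -(3/4) ln(1 - 4p/3), with p the total proportion of differing sites."""
    return -0.75 * np.log(1 - 4.0 * p / 3.0)

def k2p_distance(P, Q):
    """K2P: d = -(1/2) ln((1 - 2P - Q) * sqrt(1 - 2Q)); P transitions, Q transversions."""
    return -0.5 * np.log((1 - 2 * P - Q) * np.sqrt(1 - 2 * Q))

P, Q = 0.12, 0.04            # observed transition / transversion proportions
print(jc_distance(P + Q))    # misspecified (JC) estimate
print(k2p_distance(P, Q))    # matched-model (K2P) estimate
```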

13.
A computer-assisted three-dimensional (3D) system, 3D-DIASemb, has been developed that allows reconstruction and motion analysis of cells and nuclei in a developing embryo. In the system, 75 optical sections through a live embryo are collected in the z axis by using differential interference contrast microscopy. Optical sections for one reconstruction are collected in a 2.5-s period, and this process is repeated every 5 s. The outer perimeter and nuclear perimeter of each cell in the embryo are outlined in each optical section, converted into beta-spline models, and then used to construct 3D faceted images of the surface and nucleus of every cell in the developing embryo. Because all individual components of the embryo (i.e., each cell surface and each nuclear surface) are individually reconstructed, 3D-DIASemb allows isolation and analysis of (1) all or select nuclei in the absence of cell surfaces, (2) any single cell lineage, and (3) any single nuclear lineage through embryogenesis. Because all reconstructions represent mathematical models, 3D-DIASemb computes over 100 motility and dynamic morphology parameters for every cell, nucleus, or group of cells in the developing embryo at time intervals as short as 5 s. Finally, 3D-DIASemb reconstructs and motion analyzes cytoplasmic flow through the generation and analysis of "vector flow plots." To demonstrate the unique capabilities of this new technology, a Caenorhabditis elegans embryo is reconstructed and motion analyzed through the 28-cell stage. Although 3D-DIASemb was developed by using the C. elegans embryo as the experimental model, it can be applied to other embryonic systems. 3D-DIASemb therefore provides a new method for reconstructing and motion analyzing in 4D every cell and nucleus in a live, developing embryo, and should provide a powerful tool for assessing the effects of drugs, environmental perturbations, and mutations on the cellular and nuclear dynamics accompanying embryogenesis.

14.
Fourier ptychographic microscopy (FPM) is a promising super-resolution computational imaging technique. It stitches a series of low-resolution (LR) images together in the Fourier domain by an iterative method, thereby obtaining quantitative phase images with a large field of view and high resolution. Owing to its capability for high space-bandwidth-product imaging, FPM is widely used in the reconstruction of conventional static samples; however, its imaging mechanism limits its application to high-speed dynamic imaging. To solve this problem, an adaptive-illumination FPM scheme using regional energy estimation is proposed. Starting from the captured real LR images, the energy distribution of all LR images is estimated, and the most informative measurement images are selected for FPM reconstruction. Simulation and experimental results show that the method delivers efficient imaging performance and reduces the required data volume by more than 65% while ensuring the quality of the FPM reconstruction.
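A minimal sketch of selecting the most informative low-resolution frames by image energy before FPM reconstruction. The plain sum-of-squares energy measure and the kept fraction are illustrative stand-ins for the paper's regional energy-estimation rule.

```python
import numpy as np

def select_informative(lr_stack, keep_fraction=0.35):
    """lr_stack: (n_images, h, w); keep the highest-energy frames."""
    energy = np.sum(lr_stack.astype(float) ** 2, axis=(1, 2))
    n_keep = max(1, int(round(keep_fraction * len(lr_stack))))
    order = np.argsort(energy)[::-1][:n_keep]     # indices of retained frames
    return np.sort(order)                         # preserve acquisition order
```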

15.
We present a three-dimensional (3D) spatial reconstruction of coronary arteries based on the fusion of intravascular optical coherence tomography (IVOCT) and digital subtraction angiography (DSA). The vessel centerline in DSA images is extracted by multi-scale filtering, adaptive segmentation, morphological thinning, and Dijkstra's shortest path algorithm. We apply cross-correlation between the lumen shapes of IVOCT and DSA images and match their stenosis positions to achieve co-registration. By matching the location and tangent direction of the vessel centerline from the DSA images with the segmented lumen coordinates of IVOCT along the pullback path, 3D spatial models of the vessel lumen are reconstructed. Using 1121 distinct positions selected from eight vessels, the correlation coefficient between the 3D IVOCT model and the DSA image in measuring lumen radius is 0.94, and 97.7% of the positions fall within the limits of agreement by Bland–Altman analysis, indicating that the 3D reconstructed IVOCT models and the DSA images match closely.
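A minimal sketch of the centerline step: treat an inverted vesselness map as a cost image and extract the minimum-cost path between two chosen endpoints. skimage's route_through_array plays the role of Dijkstra's shortest path here, and a Frangi vesselness filter stands in for the paper's multi-scale filtering and adaptive segmentation stages; endpoints are assumed given.

```python
import numpy as np
from skimage.filters import frangi
from skimage.graph import route_through_array

def vessel_centerline(dsa_image, start, end):
    """Return (row, col) centerline points between two endpoint pixels."""
    vesselness = frangi(dsa_image.astype(float))    # multi-scale tube filter
    cost = 1.0 / (vesselness + 1e-6)                # cheap to travel along vessels
    path, _ = route_through_array(cost, start, end, fully_connected=True)
    return np.array(path)
```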

16.
In the event of a biothreat agent release, hundreds of samples would need to be rapidly processed to characterize the extent of contamination and determine the efficacy of remediation activities. Current biological agent identification and viability determination methods are both labor- and time-intensive such that turnaround time for confirmed results is typically several days. In order to alleviate this issue, automated, high-throughput sample processing methods were developed in which real-time PCR analysis is conducted on samples before and after incubation. The method, referred to as rapid-viability (RV)-PCR, uses the change in cycle threshold after incubation to detect the presence of live organisms. In this article, we report a novel RV-PCR method for detection of live, virulent Bacillus anthracis, in which the incubation time was reduced from 14 h to 9 h, bringing the total turnaround time for results below 15 h. The method incorporates a magnetic bead-based DNA extraction and purification step prior to PCR analysis, as well as specific real-time PCR assays for the B. anthracis chromosome and pXO1 and pXO2 plasmids. A single laboratory verification of the optimized method applied to the detection of virulent B. anthracis in environmental samples was conducted and showed a detection level of 10 to 99 CFU/sample with both manual and automated RV-PCR methods in the presence of various challenges. Experiments exploring the relationship between the incubation time and the limit of detection suggest that the method could be further shortened by an additional 2 to 3 h for relatively clean samples.
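A minimal sketch of the RV-PCR decision rule: growth of live organisms during incubation lowers the cycle threshold (Ct), so a sufficiently large Ct decrease flags a viable sample. The cutoff of 6 cycles below is an illustrative assumption, not the validated assay criterion.

```python
def rv_pcr_viable(ct_before, ct_after, delta_ct_cutoff=6.0):
    """Return True if the Ct drop after incubation indicates live organisms."""
    return (ct_before - ct_after) >= delta_ct_cutoff

print(rv_pcr_viable(ct_before=35.2, ct_after=24.8))  # True: strong growth signal
```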

17.
Optical projection tomography (OPT) provides a non-invasive 3-D imaging modality that can be applied to longitudinal studies of live disease models, including in zebrafish. Current limitations include the requirement of a minimum number of angular projections for reconstruction of reasonable OPT images using filtered back projection (FBP), which is typically several hundred, leading to acquisition times of several minutes. It is highly desirable to decrease the number of required angular projections to decrease both the total acquisition time and the light dose to the sample. This is particularly important to enable longitudinal studies, which involve measurements of the same fish at different time points. In this work, we demonstrate that the use of an iterative algorithm to reconstruct sparsely sampled OPT data sets can provide useful 3-D images with 50 or fewer projections, thereby significantly decreasing the minimum acquisition time and light dose while maintaining image quality. A transgenic zebrafish embryo with fluorescent labelling of the vasculature was imaged to acquire densely sampled (800 projections) and under-sampled data sets of transmitted and fluorescence projection images. The under-sampled OPT data sets were reconstructed using an iterative total variation-based image reconstruction algorithm and compared against FBP reconstructions of the densely sampled data sets. To illustrate the potential for quantitative analysis following rapid OPT data acquisition, a Hessian-based method was applied to automatically segment the reconstructed images to select the vasculature network. Results showed that 3-D images of the zebrafish embryo and its vasculature of sufficient visual quality for quantitative analysis can be reconstructed using the iterative algorithm from only 32 projections—achieving up to 28 times improvement in imaging speed and leading to total acquisition times of a few seconds.
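A minimal sketch of sparse-view tomographic reconstruction with a total-variation step between algebraic (SART) iterations, using only 32 projection angles on a standard phantom. This is an illustrative stand-in for the paper's iterative total-variation algorithm, with arbitrary iteration counts and weights.

```python
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon_sart, resize
from skimage.restoration import denoise_tv_chambolle

image = resize(shepp_logan_phantom(), (128, 128))
theta = np.linspace(0.0, 180.0, 32, endpoint=False)     # only 32 projections
sino = radon(image, theta=theta)

recon = None
for _ in range(10):
    recon = iradon_sart(sino, theta=theta, image=recon)  # algebraic update
    recon = denoise_tv_chambolle(recon, weight=0.05)     # TV regularization step
```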

18.
We report a reconstruction method to achieve high spatial resolution for hyperspectral imaging of chromophore features in skin in vivo. The method uses an established structure-adaptive normalized convolution algorithm to reconstruct high-spatial-resolution hyperspectral images from low-resolution hyperspectral image sequences captured by a snapshot spectral camera. The reconstructed images at chromophore-sensitive wavebands are used to map the skin features of interest. We demonstrate the method experimentally by mapping blood perfusion and melanin features (moles) on facial skin. The method relaxes the constraint of relatively low spatial resolution in snapshot hyperspectral cameras, making them more usable in imaging applications.
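A minimal sketch of plain normalized convolution for in-filling sparse samples: known pixels carry certainty 1, missing pixels certainty 0, and both signal and certainty are convolved with an applicability kernel before dividing. The structure-adaptive variant in the cited algorithm shapes the kernel per pixel, which is not reproduced here; the fixed Gaussian is an illustrative choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def normalized_convolution(sparse, certainty, sigma=2.0, eps=1e-8):
    """Interpolate `sparse` where `certainty` == 0 using a Gaussian applicability."""
    num = gaussian_filter(sparse * certainty, sigma)       # weighted signal
    den = gaussian_filter(certainty.astype(float), sigma)  # local sample density
    return num / np.maximum(den, eps)
```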

19.
The rapidly evolving field of photoacoustic tomography uses endogenous chromophores to extract both functional and structural information from deep within tissues. It is this power to perform precise quantitative measurements in vivo, with endogenous or exogenous contrast, that makes photoacoustic tomography highly promising for clinical translation in functional brain imaging, early cancer detection, real-time surgical guidance, and the visualization of dynamic drug responses. Since photoacoustic tomography has benefited from numerous engineering innovations, it is no surprise that many of its current cutting-edge developments incorporate advances from the equally novel field of artificial intelligence. More specifically, alongside the recent growth in graphics processing unit capabilities has emerged an offshoot of artificial intelligence known as deep learning. Rooted in the solid foundation of signal processing, deep learning typically uses a method of optimization known as gradient descent to minimize a loss function and update model parameters. There are already a number of innovative efforts in photoacoustic tomography applying deep learning techniques for a variety of purposes, including resolution enhancement, reconstruction artifact removal, undersampling correction, and improved quantification. Most of these efforts have proven highly promising in addressing long-standing technical obstacles where traditional solutions either fail completely or make only incremental progress. This concise review traces the history of applied artificial intelligence in photoacoustic tomography, presents recent advances at this multifaceted intersection of fields, and outlines the most exciting advances that will likely propagate into promising future innovations.
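A minimal sketch of the optimization loop the review describes: gradient descent repeatedly updates parameters against the gradient of a loss. A one-parameter least-squares problem stands in for a deep network; the learning rate and step count are illustrative.

```python
import numpy as np

def gradient_descent(grad, theta0, lr=0.1, n_steps=100):
    """Iteratively update parameters against the loss gradient."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta - lr * grad(theta)   # parameter update
    return theta

# Loss L(w) = (w - 3)^2 has gradient dL/dw = 2 (w - 3).
w = gradient_descent(lambda w: 2 * (w - 3.0), theta0=[0.0])
print(w)   # converges toward 3.0
```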

20.
Cloning technology would allow targeted genetic alterations in the rat, a species that remains inaccessible to such studies because germline-competent embryonic stem cells are lacking. The present study examined the developmental ability of reconstructed rat embryos after transfer of nuclei from early preimplantation stages. We observed that single blastomeres from two-cell embryos, and zygotes reconstructed by pronuclear exchange, can develop in vitro to the morula/blastocyst stage. When karyoplasts from blastomeres were used for the reconstruction of embryos, the highest in vitro cleavage rates were obtained with nuclei in an early phase of the cell cycle transferred into enucleated preactivated oocytes or zygotes. However, further in vitro development of reconstructed embryos produced from blastomere nuclei was arrested at early cleavage stages under all conditions tested in this study. In contrast, immediate transfer to foster mothers of embryos reconstructed with nuclei from two-cell embryos at an early stage of the cell cycle in preactivated enucleated oocytes resulted in live newborn rats, with an overall efficiency of 0.4%-2.2%. The genetic origin of the cloned offspring was verified by using donor nuclei from embryos of Black Hooded Wistar rats and from transgenic rats carrying a ubiquitously expressed green fluorescent protein transgene. Thus, we report for the first time the production of live cloned rats using nuclei from two-cell embryos.
