Similar literature
20 similar articles retrieved.
1.
Real-time functional magnetic resonance imaging (rtfMRI) is a recently emerged technique that demands fast data processing within a single repetition time (TR), such as a TR of 2 seconds. Data preprocessing in rtfMRI has rarely involved spatial normalization, which cannot be accomplished in a short time period. However, spatial normalization may be critical for accurate functional localization in a stereotactic space and is an essential procedure for some emerging applications of rtfMRI. In this study, we introduced an online spatial normalization method that adopts a novel affine registration (AFR) procedure based on principal axes registration (PA) and Gauss-Newton optimization (GN) with a self-adaptive β parameter, termed PA-GN(β) AFR, together with nonlinear registration (NLR) based on the discrete cosine transform (DCT). In AFR, PA provides an appropriate initial estimate for GN to induce its rapid convergence. In addition, the β parameter, which relies on the change rate of the cost function, is employed to self-adaptively adjust the iteration step of GN. The accuracy and performance of PA-GN(β) AFR were confirmed using both simulated and real data and compared with traditional AFR. The appropriate cutoff frequency of the DCT basis functions in NLR was determined to balance the accuracy and computational load of the online spatial normalization. Finally, the validity of the online spatial normalization method was further demonstrated by brain activation in rtfMRI data.
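A minimal Python sketch of the principal-axes step that can supply the initial estimate for a Gauss-Newton refinement is shown below. It is illustrative only (the function names and the rigid-only composition are assumptions, not the paper's PA-GN(β) implementation), and it ignores the sign ambiguity of eigenvectors, which a practical implementation must resolve.

```python
import numpy as np

def principal_axes(volume):
    """Intensity-weighted centroid and principal axes of a 3D image volume."""
    idx = np.argwhere(volume > 0).astype(float)           # voxel coordinates
    w = volume[volume > 0].astype(float)                   # intensity weights
    centroid = np.average(idx, axis=0, weights=w)
    centered = idx - centroid
    cov = (centered * w[:, None]).T @ centered / w.sum()   # weighted covariance
    evals, evecs = np.linalg.eigh(cov)                      # ascending eigenvalues
    return centroid, evecs[:, ::-1]                         # axes, largest first

def initial_affine(moving, fixed):
    """Rigid initial guess aligning centroids and principal axes (4x4 matrix)."""
    c_m, A_m = principal_axes(moving)
    c_f, A_f = principal_axes(fixed)
    R = A_f @ A_m.T                                         # rotate moving axes onto fixed axes
    if np.linalg.det(R) < 0:                                # keep a proper rotation
        A_f[:, -1] *= -1
        R = A_f @ A_m.T
    t = c_f - R @ c_m
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T                                                # starting point for Gauss-Newton
```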

2.
Functional magnetic resonance imaging (fMRI) is a recently developed imaging modality used for mapping the hemodynamics of neuronal and motor event-related tissue blood oxygen level dependence (BOLD) in terms of brain activation. Image processing is performed by segmentation and registration methods. Segmentation algorithms provide brain surface-based analysis and automated anatomical labeling of cortical fields in magnetic resonance data sets based on oxygen metabolic state. Registration algorithms provide geometric features using two or more imaging modalities to assure clinically useful neuronal and motor information on brain activation. This review article summarizes the physiological basis of the fMRI signal, its origin, contrast enhancement, physical factors, anatomical labeling by segmentation, and registration approaches, with examples of visual and motor activity in the brain. The latest developments are reviewed for clinical applications of fMRI along with other neurophysiological and imaging modalities.

3.
Localization precision is a crucial parameter for single-molecule localization microscopy (SMLM) and directly influences the achievable spatial resolution. It primarily depends on the experimental imaging conditions and the registration potency of the algorithm used. We propose a new and simple routine to estimate the average experimental localization precision in SMLM, based on nearest neighbor analysis. By exploring different experimental and simulated targets, we show that this approach can be applied generally to any 2D or 3D SMLM data and that reliable values for the localization precision σ_SMLM are obtained. Knowing σ_SMLM is a prerequisite for consistent visualization or any quantitative structural analysis, e.g., cluster analysis or colocalization studies.
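A minimal sketch of the nearest-neighbour distance computation that such a precision estimate builds on (illustrative only; array names are placeholders, and the final fit of the distance histogram to the expected model is not shown):

```python
import numpy as np
from scipy.spatial import cKDTree

def nn_distances_consecutive_frames(xy, frame):
    """Nearest-neighbour distances between localizations in consecutive frames.

    xy    : (N, 2) array of localization coordinates (nm)
    frame : (N,) array of integer frame indices
    Repeated detections of the same emitter in adjacent frames dominate the
    short-distance part of this distribution, which reflects the precision.
    """
    dists = []
    for f in np.unique(frame)[:-1]:
        cur, nxt = xy[frame == f], xy[frame == f + 1]
        if len(cur) == 0 or len(nxt) == 0:
            continue
        d, _ = cKDTree(nxt).query(cur, k=1)   # nearest neighbour in the next frame
        dists.append(d)
    return np.concatenate(dists) if dists else np.array([])

# The average precision is then obtained by fitting the histogram of these
# distances with the expected nearest-neighbour distance model (not shown here).
```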

4.
Peng Bo, Li Lei 《Cognitive Neurodynamics》2015, 9(2): 249-256
Wireless sensor networks (WSNs) are widely used in many applications. A WSN is a decentralized wireless network comprised of nodes that autonomously set up a network. Node localization, that is, determining the position of a node in the network, is an essential part of many sensor network operations and applications. Existing localization algorithms can be classified into two categories: range-based and range-free. Range-based localization algorithms place requirements on the hardware and are thus expensive to implement in practice. Range-free localization algorithms reduce the hardware cost. Because of the hardware limitations of WSN devices, range-free localization solutions are being pursued as a cost-effective alternative to more expensive range-based approaches. However, these techniques usually have higher localization error than range-based algorithms. DV-Hop is a typical range-free localization algorithm that relies on hop-distance estimation. In this paper, we propose an improved DV-Hop algorithm based on a genetic algorithm. Simulation results show that the proposed algorithm improves localization accuracy compared with previous algorithms.
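For orientation, here is a compact sketch of the classical DV-Hop estimate that such work improves on (hop counts → average hop distance → linearised least-squares multilateration). It is a simplification: a single network-wide average hop distance replaces the per-anchor values of the original algorithm, at least three non-collinear anchors and a connected network are assumed, and the genetic-algorithm refinement itself is not shown.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def dv_hop_estimate(positions_anchor, adjacency, anchor_idx, unknown_idx):
    """Classical DV-Hop position estimate for one unknown node.

    positions_anchor : (A, 2) known anchor coordinates
    adjacency        : (N, N) 0/1 connectivity matrix of the whole network
    anchor_idx       : indices of the A anchor nodes in the adjacency matrix
    unknown_idx      : index of the node to localize
    """
    # Minimum hop counts between every pair of nodes (unweighted graph).
    hops = shortest_path(adjacency, unweighted=True)

    # Simplified global average hop distance: anchor-anchor distances / hops.
    a_hops = hops[np.ix_(anchor_idx, anchor_idx)]
    a_dist = np.linalg.norm(positions_anchor[:, None] - positions_anchor[None], axis=2)
    mask = ~np.eye(len(anchor_idx), dtype=bool)
    hop_size = a_dist[mask].sum() / a_hops[mask].sum()

    # Estimated ranges from the unknown node to each anchor.
    d = hop_size * hops[unknown_idx, anchor_idx]

    # Linearised least-squares multilateration (subtract the last anchor's equation).
    x, y = positions_anchor[:, 0], positions_anchor[:, 1]
    M = 2 * np.column_stack([x[:-1] - x[-1], y[:-1] - y[-1]])
    b = x[:-1]**2 - x[-1]**2 + y[:-1]**2 - y[-1]**2 + d[-1]**2 - d[:-1]**2
    est, *_ = np.linalg.lstsq(M, b, rcond=None)
    return est   # (x, y) estimate; a GA can refine this by minimising range residuals
```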

5.
Red-shifts and red herrings in geographical ecology
Jack J. Lennon 《Ecography》2000, 23(1): 101-113
I draw attention to the need for ecologists to take spatial structure into account more seriously in hypothesis testing. If spatial autocorrelation is ignored, as it usually is, then analyses of ecological patterns in terms of environmental factors can produce very misleading results. This is demonstrated using synthetic but realistic spatial patterns with known spatial properties, which are subjected to classical correlation and multiple regression analyses. Correlation between an autocorrelated response variable and each of a set of explanatory variables is strongly biased in favour of those explanatory variables that are highly autocorrelated: the expected magnitude of the correlation coefficient increases with autocorrelation even if the spatial patterns are completely independent. Similarly, multiple regression analysis finds highly autocorrelated explanatory variables "significant" much more frequently than it should. The chance of mistakenly identifying a "significant" slope across an autocorrelated pattern is very high if classical regression is used. Consequently, under these circumstances strongly autocorrelated environmental factors reported in the literature as associated with ecological patterns may not actually be significant. It is likely that these factors wrongly described as important constitute a red-shifted subset of the set of potential explanations, and that more spatially discontinuous factors (those with bluer spectra) are actually relatively more important than their present status suggests. There is much that ecologists can do to improve on this situation. I discuss various approaches to the problem of spatial autocorrelation from the literature and present a randomisation test for the association of two spatial patterns which has advantages over currently available methods.
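A small simulation in the spirit of this argument (illustrative only, not the author's synthetic patterns): two mutually independent fields are given spatial autocorrelation by Gaussian smoothing of white noise, and the naive Pearson test is applied repeatedly. The nominal 5% Type I error rate is typically far exceeded.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.stats import pearsonr

rng = np.random.default_rng(1)

def autocorrelated_field(shape, sigma, rng):
    """Independent random field whose spatial autocorrelation is set by `sigma`."""
    return gaussian_filter(rng.standard_normal(shape), sigma)

n_rep, shape, alpha = 1000, (50, 50), 0.05
false_pos = 0
for _ in range(n_rep):
    a = autocorrelated_field(shape, sigma=5, rng=rng).ravel()
    b = autocorrelated_field(shape, sigma=5, rng=rng).ravel()
    _, p = pearsonr(a, b)          # classical test assumes independent samples
    false_pos += p < alpha

# With independent, non-autocorrelated data this should be ~5%;
# with strong autocorrelation it is typically far higher.
print(f"naive Type I error rate: {false_pos / n_rep:.2f}")
```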

6.
An accurate spatial relationship between 3D in-vivo carotid plaque and lumen imaging and histological cross sections is required to study the relationship between biomechanical parameters and atherosclerotic plaque components. We present and evaluate a fully three-dimensional approach to this registration problem, which accounts for deformations that occur during the processing of the specimens. By using additional imaging steps during tissue processing and semi-automated non-linear registration techniques, a 3D-reconstruction of the histology is obtained. The methodology was evaluated on five specimens obtained from patients operated on for severe atherosclerosis in the carotid bifurcation. In more than 80% of the histology slices, the quality of the semi-automated registration with computed tomography angiography (CTA) was equal to or better than the manual registration. The inter-observer variability was between one and two in-vivo CT voxels and was equal to the manual inter-observer variability. Our technique showed that the angles between the normals of the registered histology slices and the in-vivo CTA scan direction ranged from 6° to 56°, indicating that proper 3D-registration is crucial for establishing a correct spatial relation with in-vivo imaging modalities. This new 3D-reconstruction technique for atherosclerotic plaque tissue opens new avenues in the field of biomechanics as well as in the field of image processing, where it can be used for validation of segmentation algorithms.

7.
Microcalcifications are an early mammographic sign of breast cancer and a target for stereotactic breast needle biopsy. Here, we develop and compare different approaches to building Raman classification algorithms that diagnose invasive and in situ breast cancer, fibrocystic change and fibroadenoma, lesions that can be associated with microcalcifications. In this study, Raman spectra were acquired from tissue cores obtained from fresh breast biopsies and analyzed using a constituent-based breast model. Diagnostic algorithms based on the breast model fit coefficients were devised using logistic regression, C4.5 decision tree classification, k-nearest neighbor (k-NN) and support vector machine (SVM) analysis, and subjected to leave-one-out cross validation. The best performing algorithm was based on SVM analysis (with a radial basis function), which yielded a positive predictive value of 100% and a negative predictive value of 96% for cancer diagnosis. Importantly, these results demonstrate that Raman spectroscopy provides adequate diagnostic information for lesion discrimination even in the presence of microcalcifications, which to the best of our knowledge has not been previously reported.
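A minimal sketch of the validation pipeline described (RBF-kernel SVM with leave-one-out cross-validation), assuming placeholder fit-coefficient features and labels rather than the study's data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: per-sample model fit coefficients; y: lesion label. Both are placeholders.
rng = np.random.default_rng(0)
X = rng.random((60, 5))
y = rng.integers(0, 2, 60)            # 0 = benign, 1 = cancer (illustrative)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
y_pred = cross_val_predict(clf, X, y, cv=LeaveOneOut())

tp = np.sum((y_pred == 1) & (y == 1))
fp = np.sum((y_pred == 1) & (y == 0))
fn = np.sum((y_pred == 0) & (y == 1))
tn = np.sum((y_pred == 0) & (y == 0))
ppv = tp / (tp + fp) if tp + fp else float("nan")
npv = tn / (tn + fn) if tn + fn else float("nan")
print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")
```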

8.
Species Distribution Models (SDMs) are a powerful tool to derive habitat suitability predictions by relating species occurrence data with habitat features. Two of the most frequently applied algorithms for modelling species-habitat relationships are Generalised Linear Models (GLM) and Random Forest (RF). The former is a parametric regression model providing functional models with direct interpretability. The latter is a non-parametric machine learning algorithm, more tolerant than other approaches in its assumptions, which has often been shown to outperform parametric algorithms. Other approaches have been developed to produce robust SDMs, such as training data bootstrapping and spatial scale optimisation. Using felid presence-absence data from three study regions in Southeast Asia (mainland, Borneo and Sumatra), we tested the performance of SDMs by implementing four modelling frameworks: GLM and RF with bootstrapped and non-bootstrapped training data. With Mantel and ANOVA tests we explored how the four combinations of algorithm and bootstrapping influenced SDMs and their predictive performance. Additionally, we tested how scale optimisation responded to species' size, taxonomic associations (species and genus), study area and algorithm. We found that the choice of algorithm had a strong effect in determining the differences between SDMs' spatial predictions, while bootstrapping had no effect. Additionally, algorithm, followed by study area and species, was the main factor driving differences in the spatial scales identified. SDMs trained with GLM showed higher predictive performance; however, ANOVA tests revealed that algorithm had a significant effect only in explaining the variance observed in sensitivity and specificity and, when interacting with bootstrapping, in Percent Correctly Classified (PCC). Bootstrapping significantly explained the variance in specificity, PCC and the True Skill Statistic (TSS). Our results suggest that there are systematic differences in the scales identified and in the predictions produced by GLM vs. RF, but that neither approach was consistently better than the other. The divergent predictions and inconsistent predictive abilities suggest that analysts should not assume machine learning is inherently superior and should test multiple methods. Our results have strong implications for SDM development, revealing the inconsistencies introduced by the choice of algorithm on scale optimisation, with GLM selecting broader scales than RF.
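A minimal sketch of the four modelling frameworks compared (GLM vs. RF, with and without bootstrapped training data) and of the TSS metric, using placeholder covariates and labels rather than the felid data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample
from sklearn.metrics import confusion_matrix

def tss(y_true, y_pred):
    """True Skill Statistic = sensitivity + specificity - 1."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    return tp / (tp + fn) + tn / (tn + fp) - 1

rng = np.random.default_rng(0)
X = rng.random((300, 4))                      # placeholder habitat covariates
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.3, 300) > 0.9).astype(int)
X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

models = {"GLM": LogisticRegression(max_iter=1000),
          "RF": RandomForestClassifier(n_estimators=500, random_state=0)}

for name, model in models.items():
    for boot in (False, True):
        Xb, yb = resample(X_tr, y_tr, random_state=1) if boot else (X_tr, y_tr)
        model.fit(Xb, yb)
        print(name, "bootstrap" if boot else "original",
              f"TSS = {tss(y_te, model.predict(X_te)):.2f}")
```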

9.
Biplane 2D-3D registration approaches have been used for measuring 3D, in vivo glenohumeral (GH) joint kinematics. Computed tomography (CT) has become the gold standard for reconstructing 3D bone models, as it provides high geometric accuracy and tissue contrast similar to video-radiography. Alternatively, magnetic resonance imaging (MRI) would not expose subjects to radiation and provides the ability to add cartilage and other soft tissues to the models. However, the accuracy of MRI-based 2D-3D registration for quantifying glenohumeral kinematics is unknown. We developed an automatic 2D-3D registration program that works with both CT- and MRI-based image volumes for quantifying joint motion. The purpose of this study was to use the proposed 2D-3D auto-registration algorithm to describe the humerus and scapula tracking accuracy of CT- and MRI-based registration relative to radiostereometric analysis (RSA) during dynamic biplanar video-radiography. The GH kinematic accuracy (RMS error) was 0.6–1.0 mm and 0.6–2.2° for CT-based registration and 1.4–2.2 mm and 1.2–2.6° for MRI-based registration. The higher kinematic accuracy of CT-based registration was expected, as MRI provides lower spatial resolution and bone contrast than CT and suffers from spatial distortions. However, MRI-based registration is within an acceptable accuracy for many clinical research questions.

10.
The initial consideration in the use of a radiopharmaceutical in therapy is specificity of localization. A variety of biological principles, such as active transport and binding to cellular components, have been utilized to achieve this localization. The next concern is to maximize radiation to the lesion while minimizing that to the remainder of the body. This means that there is a major role to be played by emissions with a short path length (such as α particles, weak β particles and Auger electrons). To achieve maximal irradiation of the lesion, dissociation of the radiolabel from the tissue should be minimized; potential approaches for achieving this are reviewed. Finally, “synergistic effects” between radiation and chemical agents are discussed.

11.
Image registration, the process of optimally aligning homologous structures in multiple images, has recently been demonstrated to support automated pixel-level analysis of pedobarographic images and, subsequently, to extract unique and biomechanically relevant information from plantar pressure data. Recent registration methods have focused on robustness, with slow but globally powerful algorithms. In this paper, we present an alternative registration approach that affords both speed and accuracy, with the goal of making pedobarographic image registration more practical for near-real-time laboratory and clinical applications. The current algorithm first extracts centroid-based curvature trajectories from pressure image contours, and then optimally matches these curvature profiles using optimization based on dynamic programming. Special cases of disconnected images (which occur in high-arched subjects, for example) are dealt with by introducing an artificial spatially linear bridge between adjacent image clusters. Two registration algorithms were developed: a ‘geometric’ algorithm, which exclusively matched geometry, and a ‘hybrid’ algorithm, which performed subsequent pseudo-optimization. After testing the two algorithms on 30 control image pairs considered in a previous study, we found that, when compared with previously published results, the hybrid algorithm improved the overlap ratio (p = 0.010), but both current algorithms had slightly higher mean-squared error, presumably because they did not consider pixel intensity. Nonetheless, both algorithms greatly improved computational efficiency (25±8 and 53±9 ms per image pair for geometric and hybrid registration, respectively). These results imply that registration-based pixel-level pressure image analyses can, eventually, be implemented for practical clinical purposes.
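A minimal sketch of matching two 1D curvature profiles with a classical dynamic-programming alignment (a DTW-style recursion); this illustrates the general technique, not the authors' implementation:

```python
import numpy as np

def dp_align(profile_a, profile_b):
    """Optimal monotone alignment of two 1D curvature profiles by dynamic programming.

    Returns the alignment cost and the matched index pairs.
    """
    n, m = len(profile_a), len(profile_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = (profile_a[i - 1] - profile_b[j - 1]) ** 2
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])

    # Backtrack to recover the matched sample pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        i, j = (i - 1, j - 1) if step == 0 else (i - 1, j) if step == 1 else (i, j - 1)
    return cost[n, m], path[::-1]
```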

12.
With the advent of single-molecule localization microscopy (SMLM) techniques, intracellular proteins can be imaged at unprecedented resolution with high specificity and contrast. These techniques can lead to a better understanding of cell functioning, as they allow, among other applications, counting the number of molecules of a protein species in a single cell, studying the heterogeneity in protein spatial organization, and probing the spatial interactions between different protein species. However, the use of these techniques for accurate quantitative measurements requires corrections for multiple inherent sources of error, including: overcounting due to multiple localizations of a single fluorophore (i.e., photoblinking), undercounting caused by incomplete photoconversion, uncertainty in the localization of single molecules, sample drift during the long imaging time, and inaccurate image registration in the case of dual-color imaging. In this paper, we review recent efforts that address some of these sources of error in quantitative SMLM and give examples in the context of photoactivated localization microscopy (PALM).

13.
Ensemble forecasting is advocated as a way of reducing uncertainty in species distribution modeling (SDM), because it is expected to balance the accuracy and robustness of SDM models. However, few data are available regarding the spatial similarity of the combined distribution maps generated by different consensus approaches. Here, using eight niche-based models, nine split-sample calibration bouts (i.e., nine random model-training subsets), and nine climate change scenarios, the distributions of 32 forest tree species in China were simulated under current and future climate conditions. The forecasting ensembles were combined to determine final consensual prediction maps for target species using three simple consensus approaches (average, frequency, and median (PCA)). Species' geographic ranges changed (in area and shifting distance) in response to climate change; the three consensual projections did not differ significantly in how much or in which direction ranges changed, but they did differ in the spatial similarity of the consensual predictions. Incongruent areas were observed primarily at the edges of species' ranges. Multiple stepwise regression models showed three factors (niche marginality, niche specialization, and niche model accuracy) to be related to the observed variations in consensual prediction maps among consensus approaches. Spatial correspondence among prediction maps was highest when niche model accuracy was high and marginality and specialization were low. The differences in spatial predictions suggest that more attention should be paid to the range of spatial uncertainty before any decisions regarding specialist species are made based on map outputs. The niche properties and single-model predictive performance provide promising insights that may further the understanding of uncertainties in SDM.
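A minimal sketch of the three simple consensus rules named (average, frequency, median) applied to a stack of ensemble suitability maps; the array and threshold are placeholders, and the PCA-based variant of the median rule used in the paper is not reproduced:

```python
import numpy as np

def consensus_maps(suitability_stack, presence_threshold=0.5):
    """Combine an ensemble of habitat-suitability maps into consensus predictions.

    suitability_stack : (n_models, ny, nx) array of per-model suitability maps
    Returns the average, frequency (proportion of models predicting presence
    after thresholding) and median consensus maps.
    """
    average   = suitability_stack.mean(axis=0)
    frequency = (suitability_stack >= presence_threshold).mean(axis=0)
    median    = np.median(suitability_stack, axis=0)
    return average, frequency, median

# Example with a toy ensemble of 8 models on a 10 x 10 grid.
rng = np.random.default_rng(0)
avg, freq, med = consensus_maps(rng.random((8, 10, 10)))
```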

14.
Characterizing the spatial patterns of genetic diversity in human populations has a wide range of applications, from detecting genetic mutations associated with disease to inferring human history. Current approaches, including the widely used principal-component analysis, are not suited for the analysis of linked markers, and local and long-range linkage disequilibrium (LD) can dramatically reduce the accuracy of spatial localization when unaccounted for. To overcome this, we have introduced an approach that performs spatial localization of individuals on the basis of their genetic data and explicitly models LD among markers by using a multivariate normal distribution. By leveraging external reference panels, we derive closed-form solutions to the optimization procedure to achieve a computationally efficient method that can handle large data sets. We validate the method on empirical data from a large sample of European individuals from the POPRES data set, as well as on a large sample of individuals of Spanish ancestry. First, we show that by modeling LD, we achieve accuracy superior to that of existing methods. Importantly, whereas other methods show decreased performance when dense marker panels are used in the inference, our approach improves in accuracy as more markers become available. Second, we show that accurate localization of genetic data can be achieved with only a part of the genome, and this could potentially enable the spatial localization of admixed samples that have a fraction of their genome originating from a given continent. Finally, we demonstrate that our approach is resistant to distortions resulting from long-range LD regions; such distortions can dramatically bias the results when unaccounted for.

15.
Several methods for segmenting lesion uptake in 18F-FDG PET imaging have been proposed in the literature. Their principles are presented along with their clinical results. The main approach proposed in the literature is the thresholding method. The most commonly used is a constant threshold around 40% of the maximum uptake within the lesion. This simple approach is not valid for small lesions (< 4 or 5 mL), poorly contrasted positive tissue (SUV < 2) or lesions in movement. To limit these problems, more complex thresholding algorithms have been proposed to define the optimal threshold value to be applied to segment the lesion. The principle is to adapt the threshold following a fitting model according to one or two characteristic image parameters. Algorithms based on iterative approaches to find the optimal threshold value are preferred, as they take patient data into account. The main drawback is the need for a calibration step that depends on the PET device, the acquisition conditions and the algorithm used for image reconstruction. To avoid this problem, more sophisticated segmentation methods have been proposed in the literature: derivative methods, watershed and pattern recognition algorithms. The delineation of positive tissue on FDG-PET images remains a complex problem under active investigation.
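A minimal sketch of the fixed 40%-of-maximum threshold together with a generic iterative adaptive scheme (threshold recomputed from the mean uptake of the current segmentation plus background). This is a simplified stand-in for the calibrated algorithms discussed; the 0.40 fraction and the background handling are illustrative assumptions:

```python
import numpy as np

def fixed_threshold_mask(suv, fraction=0.40):
    """Segment lesion voxels above a fixed fraction of the maximum uptake."""
    return suv >= fraction * suv.max()

def iterative_threshold_mask(suv, background, fraction=0.40, n_iter=20):
    """Generic adaptive scheme: the threshold is recomputed from the mean uptake
    of the current segmentation and the background level until it stabilises."""
    thr = fraction * suv.max()
    for _ in range(n_iter):
        mask = suv >= thr
        if not mask.any():
            break
        new_thr = fraction * (suv[mask].mean() - background) + background
        if abs(new_thr - thr) < 1e-3:
            break
        thr = new_thr
    return suv >= thr
```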

16.
High-resolution anatomical image data in preclinical brain PET and SPECT studies are often not available, and inter-modality spatial normalization to an MRI brain template is frequently performed. However, this procedure can be challenging for tracers where substantial anatomical structures present limited tracer uptake. Therefore, we constructed and validated strain- and tracer-specific rat brain templates in Paxinos space to allow intra-modal registration. PET [18F]FDG, [11C]flumazenil, [11C]MeDAS, [11C]PK11195 and [11C]raclopride, and SPECT [99mTc]HMPAO brain scans were acquired from healthy male rats. Tracer-specific templates were constructed by averaging the scans and by spatial normalization to a widely used MRI-based template. The added value of tracer-specific templates was evaluated by quantifying the residual error between original and realigned voxels after random misalignments of the data set. Additionally, the impact of strain differences, disease uptake patterns (focal and diffuse lesions), and the effect of image and template size on the registration errors were explored. Mean registration errors were 0.70±0.32 mm for [18F]FDG (n = 25), 0.23±0.10 mm for [11C]flumazenil (n = 13), 0.88±0.20 mm for [11C]MeDAS (n = 15), 0.64±0.28 mm for [11C]PK11195 (n = 19), 0.34±0.15 mm for [11C]raclopride (n = 6), and 0.40±0.13 mm for [99mTc]HMPAO (n = 15). These values were smallest with tracer-specific templates, compared to the use of [18F]FDG as reference template (p < 0.001). Additionally, registration errors were smallest with strain-specific templates (p < 0.05), and when images and templates had the same size (p ≤ 0.001). Moreover, the highest registration errors were found for the focal lesion group (p < 0.005) and the diffuse lesion group (p = n.s.). In the voxel-based analysis, the reported coordinates of the focal lesion model are consistent with the stereotaxic injection procedure. The use of PET/SPECT strain- and tracer-specific templates allows accurate registration of functional rat brain data, independent of disease-specific uptake patterns and with registration errors below the spatial resolution of the cameras. The templates and the SAMIT package will be freely available to the research community.

17.
Most evolutionary processes occur in a spatial context, and several spatial analysis techniques have been employed in an exploratory context. However, the existence of autocorrelation can also perturb significance tests when genetic data are modeled as a function of explanatory variables using standard correlation and regression techniques. In this case, more complex models incorporating the effects of autocorrelation must be used. Here we review those models and compare their relative performance in a simple simulation, in which spatial patterns in allele frequencies were generated by a balance between random variation within populations and spatially structured gene flow. Notwithstanding the somewhat idiosyncratic behavior of the techniques evaluated, it is clear that spatial autocorrelation affects Type I errors and that standard linear regression does not provide minimum-variance estimators. Due to their flexibility, principal coordinates of neighbor matrices (PCNM) and related eigenvector mapping techniques seem to be the best approaches to spatial regression. In general, we hope that our review of commonly used spatial regression techniques in biology and ecology may help population geneticists provide better explanations of population structure when dealing with more complex regression problems across geographic space.
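A minimal sketch of the core PCNM / Moran's eigenvector construction mentioned here: truncate the inter-site distance matrix, run a principal coordinate analysis on it, and keep the eigenvectors with positive eigenvalues as spatial covariates. The truncation rule and default threshold are common conventions, not the exact implementations reviewed:

```python
import numpy as np

def pcnm_eigenvectors(coords, truncation=None):
    """Spatial eigenvectors in the spirit of PCNM / Moran's eigenvector maps.

    coords : (n, 2) site coordinates. Distances beyond the truncation threshold
    are replaced by 4 * threshold before principal coordinate analysis;
    eigenvectors with positive eigenvalues serve as spatial predictors.
    """
    d = np.linalg.norm(coords[:, None] - coords[None], axis=2)
    if truncation is None:
        # A common default: the largest nearest-neighbour distance.
        truncation = np.max(np.min(np.where(d == 0, np.inf, d), axis=1))
    dt = np.where(d > truncation, 4 * truncation, d)

    n = len(coords)
    a = -0.5 * dt ** 2
    h = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = h @ a @ h                                # Gower-centred matrix
    evals, evecs = np.linalg.eigh(b)
    keep = evals > 1e-8                          # positive eigenvalues only
    return evecs[:, keep] * np.sqrt(evals[keep])

# These eigenvectors can then be added as covariates alongside the
# environmental predictors in an ordinary least-squares regression.
```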

18.

Background  

Protein subcellular localization is an important determinant of protein function and hence reliable methods for prediction of localization are needed. A number of prediction algorithms have been developed based on amino acid composition or on the N-terminal characteristics (signal peptides) of proteins. However, such approaches lead to a loss of contextual information. Moreover, where information about the physicochemical properties of amino acids has been used, the methods employed to exploit that information are less than optimal and could use it more effectively.
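A minimal sketch of the amino-acid composition representation referred to above, which composition-based predictors typically feed to a classifier; the example sequence is arbitrary:

```python
from collections import Counter

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def aa_composition(sequence):
    """Fraction of each of the 20 standard amino acids in a protein sequence."""
    seq = sequence.upper()
    counts = Counter(c for c in seq if c in AMINO_ACIDS)
    total = sum(counts.values()) or 1
    return [counts[a] / total for a in AMINO_ACIDS]

# A 20-dimensional feature vector such as this (optionally combined with
# physicochemical properties or N-terminal signal-peptide features) is what
# composition-based localization predictors pass to their classifier.
features = aa_composition("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
```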

19.
Fluorescence microscopy has revolutionized in vivo cellular biology. Through the specific labeling of a protein of interest with a fluorescent protein, one is able to study movement and colocalization, and even count individual proteins in a live cell. Different algorithms exist to quantify the total intensity and position of a fluorescent focus. Although these algorithms have been rigorously studied for in vitro conditions, which differ greatly from the inhomogeneous and variable cellular environment, their exact limits and applicability in the context of a live cell have not been thoroughly and systematically evaluated. In this study, we quantitatively characterize the influence of different background subtraction algorithms on several focus analysis algorithms. We use, to our knowledge, a novel approach to assess the sensitivity of the focus analysis algorithms to background removal, in which simulated and experimental data are combined to maintain full control over the sensitivity of a focus within a realistic background of cellular fluorescence. We demonstrate that the choice of algorithm and the corresponding error depend on both the brightness of the focus and the cellular context. As expected, focus intensity estimation and localization accuracy suffer in all algorithms at low focus-to-background ratios, with the bacteroidal background subtraction combined with the median excess algorithm, and the region-of-interest background subtraction combined with a two-dimensional Gaussian fit algorithm, performing best. We furthermore show that the choice of background subtraction algorithm depends on the expression level of the protein under investigation, and that the localization error depends on the distance of a focus from the bacterial edge and pole. Our results establish a set of guidelines for which signals can be analyzed to give a targeted spatial and intensity accuracy within a bacterial cell.
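A minimal sketch of two of the building blocks named here, a region-of-interest background estimate (border median) followed by a symmetric 2D Gaussian fit of the focus; the initial guesses and the background rule are illustrative assumptions, not the study's exact algorithms:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sigma, offset):
    """Symmetric 2D Gaussian on a flat offset, returned as a flat array."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + offset).ravel()

def fit_focus(roi):
    """Fit a 2D Gaussian to a background-subtracted region of interest.

    The ROI border median serves as a simple local background estimate;
    the fit returns the focus position and its integrated intensity.
    """
    border = np.concatenate([roi[0], roi[-1], roi[:, 0], roi[:, -1]])
    bg = np.median(border)
    data = roi - bg
    y, x = np.indices(roi.shape)
    p0 = (data.max(), roi.shape[1] / 2, roi.shape[0] / 2, 2.0, 0.0)
    popt, _ = curve_fit(gauss2d, (x, y), data.ravel(), p0=p0)
    amp, x0, y0, sigma, offset = popt
    intensity = 2 * np.pi * amp * sigma ** 2      # integrated Gaussian volume
    return (x0, y0), intensity
```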

20.
Some jaw tracking methods may be limited in terms of their accuracy or clinical applicability. This article introduces a sphere-based registration method to minimize the fiducial (reference landmark) localization error (FLE) in tracking and coregistration of physical and virtual dental models, to enable an effective clinical analysis of the patient’s masticatory functions. In this method, spheres (registration fiducials) are placed on corresponding polygonal concavities of the physical and virtual dental models, based on the geometrical principle that establishes a unique spatial position for a sphere inside an infinite trihedron. The experiments in this study were implemented using an optical system that tracked active markers attached to the upper and lower dental casts. The accuracy of the tracking workflow was confirmed in vitro by comparing virtually calculated interocclusal regions of close proximity against physical interocclusal impressions. The target registration error of the tracking was estimated, based on the leave-one-sphere-out method, to be the sum of the errors of the sensors, i.e., the FLE was negligible. Moreover, based on a user study, the FLE of the proposed method was confirmed to be 5 and 10 times smaller than the FLE of conventional fiducial selection on the physical and virtual models, respectively. The proposed tracking method is non-invasive and appears to be sufficiently accurate. To conclude, the proposed registration and tracking principles can be extended to track any biomedical and non-biomedical geometries that contain polygonal concavities.
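A minimal sketch of the point-based rigid registration step that maps fiducial (e.g., sphere-centre) coordinates from one model onto the other, using the standard SVD/Kabsch least-squares solution and reporting the fiducial registration error; this is generic, not the authors' software:

```python
import numpy as np

def rigid_fiducial_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping source fiducials onto
    destination fiducials (Kabsch/SVD solution), with the fiducial
    registration error (FRE) as a quality measure."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre
```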

