Similar Articles (20 results)
1.
BACKGROUND: A major challenge in the post-genomic era is to map and decipher the functional molecular networks of proteins directly in a cell or a tissue. This task requires technologies for the colocalization of large numbers of different molecular components (e.g. proteins) in one sample in one experiment. METHODS: Multi-epitope-ligand-"kartographie" (MELK) was developed as a microscopic imaging technology running cycles of iterative fluorescence tagging, imaging, and bleaching to colocalize a large number of proteins in one sample (morphologically intact, routinely fixed cells or tissue). RESULTS: In the present study, 18 different cell surface proteins were colocalized by MELK in cells and tissue sections from different compartments of the human immune system. From the resulting sets of multidimensional binary vectors, the most prominent groups of protein-epitope arrangements were extracted and imaged as protein "toponome" maps, providing direct insight into the higher-order topological organization of immune compartments and uncovering new tissue domains. The data sets suggest that protein networks, topologically organized in proteomes in situ, obey a unique protein colocation and anticolocation code describable by three symbols. CONCLUSION: The technology has the potential to colocalize hundreds of proteins and other molecular components in one sample and may offer many applications in biology and medicine.

2.
Looking and listening to light: the evolution of whole-body photonic imaging
Optical imaging of live animals has grown into an important tool in biomedical research as advances in photonic technology and reporter strategies have led to widespread exploration of biological processes in vivo. Although much attention has been paid to microscopy, macroscopic imaging has allowed small-animal imaging with larger fields of view (from several millimeters to several centimeters depending on implementation). Photographic methods have been the mainstay for fluorescence and bioluminescence macroscopy in whole animals, but emphasis is shifting to photonic methods that use tomographic principles to noninvasively image optical contrast at depths of several millimeters to centimeters with high sensitivity and sub-millimeter to millimeter resolution. Recent theoretical and instrumentation advances allow the use of large data sets and multiple projections and offer practical systems for quantitative, three-dimensional whole-body images. For photonic imaging to fully realize its potential, however, further progress will be needed in refining optical inversion methods and data acquisition techniques.

3.
Terminal restriction fragment length polymorphism (T-RFLP) is increasingly being used to examine microbial community structure, and accordingly a range of approaches have been used to analyze data sets. A number of published reports have included data and results that were statistically flawed or lacked rigorous statistical testing. A range of simple yet powerful techniques are available to examine community data; however, their use is seldom, if ever, discussed in the microbiological literature. We describe an approach that overcomes some of the problems associated with analyzing community data sets and makes data interpretation simple and effective. The Bray-Curtis coefficient is suggested as an ideal coefficient for the construction of similarity matrices; its strengths include its ability to deal with data sets containing multiple blocks of zeros in a meaningful manner. Non-metric multidimensional scaling is described as a powerful yet easily interpreted method to examine community patterns based on T-RFLP data. Importantly, we describe the use of significance testing of data sets to allow quantitative assessment of similarity, removing subjectivity from comparisons of complex data sets. Finally, we introduce a quantitative measure of sample dispersion and suggest its usefulness in describing site heterogeneity. This revised version was published online in June 2006 with corrections to the cover date.
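The Bray-Curtis coefficient recommended in this abstract is simple to compute. A minimal Python sketch (the T-RFLP profiles and the function name are illustrative, not from the paper):

```python
def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance profiles.

    Returns 0.0 for identical profiles and 1.0 for profiles sharing no
    taxa. Jointly absent taxa (zeros in both profiles) contribute
    nothing, which is why the coefficient handles sparse community
    data with blocks of zeros in a meaningful way.
    """
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den if den else 0.0

# Hypothetical T-RFLP profiles: abundance per terminal fragment size.
site_a = [10, 0, 5, 0, 3]
site_b = [8, 2, 5, 0, 0]
site_c = [0, 9, 0, 7, 0]

print(bray_curtis(site_a, site_b))  # similar communities -> small value
print(bray_curtis(site_a, site_c))  # no shared fragments -> 1.0
```

Pairwise dissimilarities computed this way fill the matrix that non-metric multidimensional scaling then ordinates.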

4.
This paper presents a method of performing model-free, LOD-score-based linkage analysis on quantitative traits, implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets, we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.

5.
Repeat imaging studies performed on patients with cancer are becoming publicly available as data sets. The potential utility of these data sets for addressing important questions in imaging biomarker development is vast. In particular, they may be useful for characterizing the variability of quantitative parameters derived from imaging. This article reviews statistical analyses that may be performed on repeat-imaging results to 1) calculate the level of change in parameter value that must be seen in an individual patient to confidently characterize that patient as showing true parameter change, 2) calculate the level of change in parameter value below which an individual patient can confidently be categorized as showing true lack of parameter change, 3) determine whether different imaging devices are interchangeable from the standpoint of repeatability, and 4) estimate the number of patients needed to calculate repeatability precisely. In addition, we recommend a set of statistical parameters that should be reported when the repeatability of continuous parameters is studied.
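A commonly used statistic for the first of these points is the repeatability coefficient, RC = 1.96·√2·wSD, where wSD is the within-subject standard deviation estimated from paired test-retest measurements; an observed change larger than RC supports true parameter change at roughly the 95% level. A minimal sketch, assuming simple paired repeat scans (the SUV values are hypothetical; the abstract does not prescribe this exact estimator):

```python
import math

def repeatability_coefficient(first, second):
    """Repeatability coefficient (RC) from paired test-retest values.

    wSD is estimated from the paired differences d_i as
    sqrt(mean(d_i^2) / 2); RC = 1.96 * sqrt(2) * wSD is the smallest
    change exceeding expected measurement noise at ~95% confidence.
    """
    diffs = [a - b for a, b in zip(first, second)]
    wsd = math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))
    return 1.96 * math.sqrt(2) * wsd

# Hypothetical repeat SUV measurements on five patients.
scan1 = [2.0, 3.1, 4.5, 2.8, 3.9]
scan2 = [2.2, 3.0, 4.1, 3.0, 3.8]
print(repeatability_coefficient(scan1, scan2))
```

Changes smaller than the printed RC in a future scan of one of these patients would be indistinguishable from test-retest noise.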

6.
Morphological and molecular data sets favor robustly supported, contradictory interpretations of crocodylian phylogeny. A longstanding perception in the field of systematics is that such significantly conflicting data sets should be analyzed separately. Here we utilize a combined approach, simultaneous analyses of all relevant character data, to summarize common support and to reconcile discrepancies among data sets. By conjoining rather than separating incongruent classes of data, secondary phylogenetic signals emerge from both molecular and morphological character sets and provide solid evidence for a unified hypothesis of crocodylian phylogeny. Simultaneous analyses of four gene sequences and paleontological data suggest that putative adaptive convergences in the jaws of gavialines (gavials) and tomistomines (false gavials) offer character support for a grouping of these taxa, making Gavialinae an atavistic taxon. Simple new methods for measuring the influence of extinct taxa on topological support indicate that in this vertebrate order fossils generally stabilize relationships and accentuate hidden phylogenetic signals. Remaining inconsistencies in minimum length trees, including concentrated hierarchical patterns of homoplasy and extensive gaps in the fossil record, indicate where future work in crocodylian systematics should be directed.

7.
Fluorescent confocal laser scanning microscopy allows improved imaging of microscopic objects in three dimensions. However, the resolution along the axial direction is three times worse than the resolution in the lateral directions. A method to overcome this axial limitation is tilting the object under the microscope, so that the optical axis points in different directions relative to the sample. A new technique for simultaneous reconstruction from a number of such axial tomographic confocal data sets was developed and used for high-resolution reconstruction of 3D data from both experimental and virtual microscopic data sets. The reconstructed images have a highly improved 3D resolution, which is comparable to the lateral resolution of a single deconvolved data set. Axial tomographic imaging in combination with simultaneous data reconstruction also opens the possibility of more precise quantification of 3D data. The color images of this publication can be accessed from http://www.esacp.org/acp/2000/20-1/heintzmann.htm. At this web address an interactive 3D viewer is additionally provided for browsing the 3D data; this Java applet displays three orthogonal slices of the data set, which are dynamically updated by user mouse clicks or keystrokes.

8.
Identifying neurobiological mechanisms mediating the emergence of individual differences in behavior is critical for advancing our understanding of relative risk for psychopathology. Neuroreceptor positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) can be used to assay in vivo regional brain chemistry and function, respectively. Typically, these neuroimaging modalities are implemented independently, despite the capacity of integrated data sets to offer unique insight into molecular mechanisms associated with brain function. Through examples from the serotonin and dopamine systems and their effects on threat- and reward-related brain function, we review evidence for how such a multimodal neuroimaging strategy can be successfully implemented. Furthermore, we discuss how multimodal PET-fMRI can be integrated with techniques such as imaging genetics, pharmacological challenge paradigms, and gene-environment interaction models to more completely map the biological pathways mediating individual differences in behavior and related risk for psychopathology, and to inform the development of novel therapeutic targets.

9.
Functional imaging with MRI contrast agents is an emerging experimental approach that can combine the specificity of cellular neural recording techniques with noninvasive whole-brain coverage. A variety of contrast agents sensitive to aspects of brain activity have recently been introduced. These include new probes for calcium and other metal ions that offer high sensitivity and membrane permeability, as well as imaging agents for high-resolution pH and metabolic mapping in living animals. Genetically encoded MRI contrast agents have also been described. Several of the new probes have been validated in the brain; in vivo use of other agents remains a challenge. This review outlines advantages and disadvantages of specific molecular imaging approaches and discusses current or potential applications in neurobiology.

10.
Yang X, Belin TR, Boscardin WJ. Biometrics, 2005, 61(2): 498-506
Across multiply imputed data sets, variable selection methods such as stepwise regression and other criterion-based strategies that include or exclude particular variables typically result in models with different selected predictors, presenting a problem for combining the results from separate complete-data analyses. Here, drawing on a Bayesian framework, we propose two alternative strategies for choosing among linear regression models when there are missing covariates. One approach, which we call "impute, then select" (ITS), involves initially performing multiple imputation and then applying Bayesian variable selection to the multiply imputed data sets. A second strategy is to conduct Bayesian variable selection and missing-data imputation simultaneously within one Gibbs sampling process, which we call "simultaneously impute and select" (SIAS). The methods are implemented and evaluated using the Bayesian procedure known as stochastic search variable selection for multivariate normal data sets, but both strategies offer general frameworks within which different Bayesian variable selection algorithms could be used for other types of data sets. A study of mental health services utilization among children in foster care programs is used to illustrate the techniques. Simulation studies show that both ITS and SIAS outperform complete-case analysis with stepwise variable selection, and that SIAS slightly outperforms ITS.

11.

Background  

The visual combination of different modalities is essential for many medical imaging applications in the field of computer-assisted diagnosis (CAD) to enhance the clinical information content. Incontinence is a diagnosis with high prevalence and morbidity. The search for a method to identify at-risk patients and to monitor the success of operations remains a challenging task. As we show in this paper with corresponding data sets of the female anal canal, the conjunction of magnetic resonance (MR) and 3D ultrasound (US) image data sets could lead to a new clinical visual representation of the morphology.

12.
The connectivity architecture of neuronal circuits is essential to understanding how brains work, yet our knowledge of neuronal wiring diagrams remains limited and partial. Technical breakthroughs in labeling and imaging methods, starting more than a century ago, have advanced knowledge in the field. However, the volume of data associated with imaging a whole brain, or a significant fraction thereof, with electron or light microscopy has only recently become amenable to digital storage and analysis. A mouse brain imaged at light-microscopic resolution is about a terabyte of data, and 1 mm³ of brain at EM resolution is about half a petabyte. This has given rise to a new field of research, computational analysis of large-scale neuroanatomical data sets, with goals that include reconstructions of the morphology of individual neurons as well as of entire circuits. The problems encountered include large-scale data management, segmentation and 3D reconstruction, computational geometry, and workflow management allowing for hybrid approaches that combine manual and algorithmic processing. Here we review this growing field of neuronal data analysis, with emphasis on reconstructing neurons from EM data cubes.

13.
In this contribution we investigate the applicability of different methods from the field of independent component analysis (ICA) to the examination of dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) data from breast cancer research. DCE-MRI has evolved in recent years into a powerful complement to X-ray-based mammography for breast cancer diagnosis and monitoring. In DCE-MRI, the time course of the signal intensity after administration of a contrast agent can provide valuable information about tissue states and characteristics. To this end, techniques related to ICA offer promising options for data integration and feature extraction at the voxel level. In order to evaluate the applicability of ICA, topographic ICA, and tree-dependent component analysis (TCA), these methods are applied to twelve clinical cases from breast cancer research with a histopathologically confirmed diagnosis. For ICA, these experiments are complemented by a reliability analysis of the estimated components. The outcome of all algorithms is quantitatively evaluated by means of receiver operating characteristic (ROC) statistics, and the results for specific data sets are discussed as examples in terms of reification, score plots, and score images.

14.
Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in the dimension and acquisition rate of biological data is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets to find hidden structure within them and to make accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background on what deep learning is and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists on when and how to make the best use of this new technology.

15.
Colocalization aims at characterizing spatial associations between two fluorescently tagged biomolecules by quantifying the co-occurrence and correlation between the two channels acquired in fluorescence microscopy. Colocalization is presented either as the degree of overlap between the two channels or as overlays of the red and green images, with areas of yellow indicating colocalization of the molecules. This problem remains an open issue in diffraction-limited microscopy and raises new challenges with the emergence of superresolution imaging, a microscopy technique recognized by the 2014 Nobel Prize in Chemistry. We propose GcoPS, for Geo-coPositioning System, an original method that exploits the random-set structure of the tagged molecules to provide an explicit testing procedure. Our simulation study shows that GcoPS unequivocally outperforms the best competing methods in adverse situations (noise, irregularly shaped fluorescent patterns, and different optical resolutions). GcoPS is also much faster, a decisive advantage for handling the huge amounts of data in superresolution imaging. We demonstrate the performance of GcoPS on two real biological data sets, obtained by a conventional diffraction-limited microscopy technique and by a superresolution technique, respectively.
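GcoPS itself is not reproduced here, but the general idea of an explicit statistical test for colocalization, as opposed to eyeballing a yellow overlay, can be illustrated with a standard correlation-plus-permutation scheme on paired pixel intensities. A generic sketch, not the authors' method; the function names and data are ours:

```python
import random

def pearson(x, y):
    """Pearson correlation between two equal-length intensity lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

def coloc_pvalue(ch1, ch2, n_perm=999, seed=0):
    """One-sided permutation p-value for the null of no channel correlation.

    Shuffling one channel destroys any spatial pairing, giving a null
    distribution of correlations against which the observed value is ranked.
    """
    rng = random.Random(seed)
    observed = pearson(ch1, ch2)
    shuffled = list(ch2)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)
        if pearson(ch1, shuffled) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Hypothetical pixel intensities: channel 2 tracks channel 1 closely.
ch1 = list(range(50))
ch2 = [2 * v + 1 for v in ch1]
print(coloc_pvalue(ch1, ch2))  # small p-value -> evidence of colocalization
```

A small p-value rejects the null of independent channels; unlike a raw overlap score, it comes with an explicit error rate.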

16.
17.
18.
This report, compiled by experts on the treatment of mobile targets with advanced radiotherapy, summarizes the main conclusions and innovations achieved during the 4D treatment planning workshop 2013. This annual workshop focuses on research aiming to advance 4D radiotherapy treatments, including all critical aspects of time-resolved delivery, such as in-room imaging, motion detection, motion management, beam application, and quality assurance techniques. The report aims to review achievements in the field and to discuss remaining challenges and potential solutions. The main achievements identified were advances in the development of a standardized 4D phantom and in the area of 4D treatment-plan optimization. Furthermore, it was noted that MR imaging is gaining importance, and high interest in sequential 4D-CT/MR data sets was expressed, which represents a general trend of the field towards data covering a longer time period of motion. A new point of attention was work related to dose reconstruction, which may play a major role in the verification of 4D treatment deliveries. The experimental validation of results achieved by 4D treatment planning and the systematic evaluation of different deformable image registration methods, especially for inter-modality fusions, were identified as major remaining challenges. A challenge also suggested as a focus for future 4D workshops is the adaptation of image guidance approaches from conventional radiotherapy to particle therapy. Besides summarizing the last workshop, the authors also want to point out newly evolving demands and give an outlook on the focus of the next workshop.

19.
Multimodal molecular imaging can offer a synergistic improvement in diagnostic ability over a single imaging modality. Recent development of hybrid imaging systems has profoundly impacted the pool of available multimodal imaging probes. In particular, much interest has focused on biocompatible, inorganic nanoparticle-based multimodal probes. Inorganic nanoparticles offer exceptional advantages to the field of multimodal imaging owing to their unique characteristics, such as nanometer dimensions, tunable imaging properties, and multifunctionality. Nanoparticles based mainly on iron oxide, quantum dots, gold, and silica have been applied to various imaging modalities to characterize and image specific biological processes at the molecular level. Combinations of nanoparticles with other materials, such as biomolecules, polymers, and radiometals, continue to increase their functionality as in vivo multimodal imaging and therapeutic agents. In this review, we discuss the unique concepts, characteristics, and applications of the various multimodal imaging probes based on inorganic nanoparticles.

20.
With the advent of high-throughput measurement techniques, scientists and engineers are starting to grapple with massive data sets and encountering challenges with how to organize and process them and how to extract information into meaningful structures. Multidimensional spatio-temporal biological data sets, such as time-series gene expression under various perturbations across different cell lines, or neural spike trains across many experimental trials, can yield insight into the dynamic behavior of the system. For this potential to be realized, we need a suitable representation with which to understand the data. A general question is how to organize the observed data into meaningful structures and how to find an appropriate similarity measure. A natural way of viewing these complex, high-dimensional data sets is to examine and analyze the large-scale features first and then to focus on the interesting details. Since the wide range of experiments and the unknown complexity of the underlying system contribute to the heterogeneity of biological data, we develop a new method that extends Robust Principal Component Analysis (RPCA), modeling common variation across multiple experiments as the low-rank component and anomalies across these experiments as the sparse component. We show that the proposed method is able to find distinct subtypes and classify data sets in a robust way, without any prior knowledge, by separating these common and abnormal responses. The proposed method thus provides a new representation of these data sets that can help users acquire new insight from their data.
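The RPCA decomposition M ≈ L + S that this method extends is commonly computed by principal component pursuit: alternate singular-value thresholding for the low-rank part L with entrywise soft thresholding for the sparse part S. A minimal NumPy sketch of that standard baseline, under common default choices for the weights, and not the authors' extension:

```python
import numpy as np

def rpca(M, max_iter=500, tol=1e-7):
    """Principal component pursuit via an inexact augmented Lagrangian:
    decompose M ≈ L + S with L low-rank and S sparse."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n))               # standard sparsity weight
    norm_M = np.linalg.norm(M)
    Y = np.zeros_like(M)                          # dual variable
    S = np.zeros_like(M)
    mu = 1.25 / (np.linalg.norm(M, 2) + 1e-12)    # initial penalty
    mu_max = mu * 1e7
    for _ in range(max_iter):
        # Low-rank step: singular-value thresholding.
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # Sparse step: entrywise soft thresholding.
        R = M - L + Y / mu
        S = np.sign(R) * np.maximum(np.abs(R) - lam / mu, 0.0)
        # Dual update and penalty growth.
        Y = Y + mu * (M - L - S)
        mu = min(mu * 1.5, mu_max)
        if np.linalg.norm(M - L - S) / (norm_M + 1e-12) < tol:
            break
    return L, S
```

The abstract's method replaces the single-matrix M with observations stacked across multiple experiments, so that the low-rank component captures responses common to all experiments and the sparse component isolates experiment-specific anomalies.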

