Similar Articles
20 similar articles found.
1.
Why is Real-World Visual Object Recognition Hard?
Progress in understanding the brain mechanisms underlying vision requires the construction of computational models that not only emulate the brain's anatomy and physiology, but ultimately match its performance on visual tasks. In recent years, “natural” images have become popular in the study of vision and have been used to show apparently impressive progress in building such models. Here, we challenge the use of uncontrolled “natural” images in guiding that progress. In particular, we show that a simple V1-like model—a neuroscientist's “null” model, which should perform poorly at real-world visual object recognition tasks—outperforms state-of-the-art object recognition systems (biologically inspired and otherwise) on a standard, ostensibly natural image recognition test. As a counterpoint, we designed a “simpler” recognition test to better span the real-world variation in object pose, position, and scale, and we show that this test correctly exposes the inadequacy of the V1-like model. Taken together, these results demonstrate that tests based on uncontrolled natural images can be seriously misleading, potentially guiding progress in the wrong direction. Instead, we reexamine what it means for images to be natural and argue for a renewed focus on the core problem of object recognition—real-world image variation.
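As a rough illustration of what such a V1-like "null" model involves, the sketch below builds a small Gabor filter bank, pools the rectified responses, and feeds them to a linear classifier. It is a minimal stand-in under assumed filter parameters and pooling choices, not the exact model evaluated in the paper.

```python
# Minimal sketch of a V1-like baseline: a bank of Gabor filters followed by a
# linear readout. Illustrative only; filter parameters and the pooling step
# are assumptions chosen for brevity, not the published model.
import numpy as np
from scipy.signal import fftconvolve
from sklearn.svm import LinearSVC

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5):
    """Return one odd-phase Gabor kernel (size x size)."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2))
    return env * np.sin(2 * np.pi * xr / wavelength)

def v1_features(img, n_orient=8, pool=8):
    """Rectified Gabor responses, average-pooled into a feature vector."""
    feats = []
    for k in range(n_orient):
        resp = np.abs(fftconvolve(img, gabor(theta=k * np.pi / n_orient), mode="same"))
        h, w = resp.shape
        pooled = resp[: h - h % pool, : w - w % pool]
        pooled = pooled.reshape(h // pool, pool, w // pool, pool).mean(axis=(1, 3))
        feats.append(pooled.ravel())
    return np.concatenate(feats)

# Hypothetical usage on grayscale images of equal size:
# X = np.stack([v1_features(im) for im in images])
# clf = LinearSVC().fit(X, labels)   # linear readout on V1-like features
```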

2.
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called “crowding”. Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, “compulsory averaging”, and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.  相似文献   
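The sketch below illustrates the kind of population-coding integration the abstract describes: orientation-tuned units respond to a target and its flankers, the responses are pooled, and a population-vector readout produces a "compulsory averaging" style estimate. Tuning width, gains, and uniform pooling are illustrative assumptions, not the fitted model.

```python
# Minimal sketch of orientation population coding with spatial integration.
# Target and flanker orientations each evoke a population response over
# orientation-tuned units; summing the populations and decoding with a
# population vector yields an averaged percept, as in "compulsory averaging".
import numpy as np

prefs = np.linspace(0.0, np.pi, 64, endpoint=False)   # preferred orientations

def population_response(theta, gain=1.0, kappa=4.0):
    """Von Mises tuning over orientation (period pi)."""
    return gain * np.exp(kappa * (np.cos(2 * (prefs - theta)) - 1.0))

def decode(resp):
    """Population-vector decoding on the doubled-angle circle."""
    z = np.sum(resp * np.exp(2j * prefs))
    return (np.angle(z) / 2.0) % np.pi

target = np.deg2rad(20.0)
flankers = [np.deg2rad(70.0), np.deg2rad(60.0)]
pooled = population_response(target) + sum(population_response(f) for f in flankers)

print(np.rad2deg(decode(population_response(target))))  # ~20 deg for the isolated target
print(np.rad2deg(decode(pooled)))                       # pulled toward the flanker average
```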

3.
We propose a conceptual framework for artificial object recognition systems based on findings from neurophysiological and neuropsychological research on the visual system in primate cortex. We identify some essential questions that have to be addressed in the course of designing object recognition systems. As answers, we review some major aspects of biological object recognition, which are then translated into the technical field of computer vision. The key suggestions are the use of incremental, view-based approaches together with online feature selection and the interconnection of object views to form an overall object representation. The effectiveness of the computational approach is estimated by testing a possible realization in various tasks and conditions explicitly designed to allow for a direct comparison with the biological counterpart. The results exhibit excellent performance with regard to recognition accuracy, the creation of sparse models, and the selection of appropriate features.

4.
The primate visual system achieves remarkable visual object recognition performance even in brief presentations, and under changes to object exemplar, geometric transformations, and background variation (a.k.a. core visual object recognition). This remarkable performance is mediated by the representation formed in inferior temporal (IT) cortex. In parallel, recent advances in machine learning have led to ever higher performing models of object recognition using artificial deep neural networks (DNNs). It remains unclear, however, whether the representational performance of DNNs rivals that of the brain. To accurately produce such a comparison, a major difficulty has been a unifying metric that accounts for experimental limitations, such as the amount of noise, the number of neural recording sites, and the number of trials, and computational limitations, such as the complexity of the decoding classifier and the number of classifier training examples. In this work, we perform a direct comparison that corrects for these experimental limitations and computational considerations. As part of our methodology, we propose an extension of “kernel analysis” that measures the generalization accuracy as a function of representational complexity. Our evaluations show that, unlike previous bio-inspired models, the latest DNNs rival the representational performance of IT cortex on this visual object recognition task. Furthermore, we show that models that perform well on measures of representational performance also perform well on measures of representational similarity to IT, and on measures of predicting individual IT multi-unit responses. Whether these DNNs rely on computational mechanisms similar to the primate visual system is yet to be determined, but, unlike all previous bio-inspired models, that possibility cannot be ruled out merely on representational performance grounds.
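A minimal sketch of the kernel-analysis idea, generalization accuracy measured as a function of representational complexity, is given below. Complexity is proxied here by the number of kernel-PCA components made available to a linear readout, which is a simplification and not the exact metric the authors propose.

```python
# Sketch of a "kernel analysis"-style evaluation: accuracy vs. representational
# complexity. Here complexity is proxied by the number of kernel-PCA components
# given to a linear readout; this only illustrates the accuracy-vs-complexity
# curve, not the paper's exact kernel or complexity definition.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def kernel_analysis_curve(X, y, complexities=(1, 2, 4, 8, 16, 32, 64)):
    """Return (complexity, cross-validated accuracy) pairs for representation X."""
    curve = []
    for k in complexities:                      # k must not exceed the sample count
        Z = KernelPCA(n_components=k, kernel="rbf").fit_transform(X)
        acc = cross_val_score(LogisticRegression(max_iter=1000), Z, y, cv=5).mean()
        curve.append((k, acc))
    return curve

# Hypothetical comparison: X_it = neural features (e.g., IT site responses),
# X_dnn = model features for the same images, y = object labels.
# curve_it = kernel_analysis_curve(X_it, y)
# curve_dnn = kernel_analysis_curve(X_dnn, y)
```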

5.
Perfusion technology has been successfully used for the commercial production of biotherapeutics, in particular unstable recombinant proteins, for more than a decade. However, there has been a general lack of high-throughput cell culture tools specifically for perfusion-based cell culture processes. Here, we have developed a high-throughput cell retention operation for use with the ambr® 15 bioreactor system. Experiments were run in both the 24- and 48-reactor configurations to compare perfusion mimic models and to support media development and clone screening. Employing offline centrifugation for cell retention and a variable-volume model developed in MATLAB, the established screening model demonstrated that cell culture performance, productivity, and product quality were comparable to those of bench-scale bioreactors. The automated, single-use, high-throughput perfusion mimic is a powerful tool that enables rapid and efficient development of perfusion-based cell culture processes.
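A toy Python re-sketch of a variable-volume perfusion mimic is shown below (the authors' model was built in MATLAB): each simulated day the culture is spun down, a medium volume matching the target perfusion rate is exchanged, and the cells are retained. All parameter values are invented for illustration.

```python
# Toy re-sketch (Python, not the authors' MATLAB code) of a variable-volume
# perfusion mimic: once per simulated day the culture is "spun down", a medium
# volume matching the target perfusion rate is replaced with fresh medium, and
# the cells are retained. Growth, consumption, and capacity values are invented.
import numpy as np

def perfusion_mimic(days=10, mu=0.6, rate_vvd=1.0, x0=0.5, s0=6.0, q=0.05, xmax=60.0):
    """x: viable cells (1e6/mL), s: substrate (g/L), rate_vvd: vessel volumes/day."""
    x, s, history = x0, s0, []
    for day in range(1, days + 1):
        x = min(x * np.exp(mu), xmax)        # one day of growth; cells are retained
        s = max(s - q * x, 0.0)              # lumped daily substrate consumption
        f = min(rate_vvd, 1.0)               # fraction of medium exchanged today
        s = (1.0 - f) * s + f * s0           # spin down, swap medium, keep the cells
        history.append((day, x, s))
    return history

for day, x, s in perfusion_mimic():
    print(f"day {day:2d}: {x:5.1f}e6 cells/mL, substrate {s:4.2f} g/L")
```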

6.
A major challenge in computational biology is constraining free parameters in mathematical models. Adjusting a parameter to make a given model output more realistic sometimes has unexpected and undesirable effects on other model behaviors. Here, we extend a regression-based method for parameter sensitivity analysis and show that a straightforward procedure can uniquely define most ionic conductances in a well-known model of the human ventricular myocyte. The model's parameter sensitivity was analyzed by randomizing ionic conductances, running repeated simulations to measure physiological outputs, then collecting the randomized parameters and simulation results as “input” and “output” matrices, respectively. Multivariable regression derived a matrix whose elements indicate how changes in conductances influence model outputs. We show here that if the number of linearly independent outputs equals the number of inputs, the regression matrix can be inverted. This is significant, because it implies that the inverted matrix can specify the ionic conductances that are required to generate a particular combination of model outputs. Applying this idea to the myocyte model tested, we found that most ionic conductances could be specified with precision (R2 > 0.77 for 12 out of 16 parameters). We also applied this method to a test case of changes in electrophysiology caused by heart failure and found that changes in most parameters could be well predicted. We complemented our findings using a Bayesian approach to demonstrate that model parameters cannot be specified using limited outputs, but they can be successfully constrained if multiple outputs are considered. Our results place on a solid mathematical footing the intuition-based procedure of simultaneously matching a model's output to several data sets. More generally, this method shows promise as a tool to define model parameters, in electrophysiology and in other biological fields.
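A minimal sketch of the regression workflow described here is shown below, with a toy stand-in for the myocyte model: randomized parameter scalings form the input matrix, simulated outputs form the output matrix, multivariable regression yields the sensitivity matrix, and inverting that matrix recovers the parameter changes needed to produce a target combination of outputs.

```python
# Minimal sketch of regression-based sensitivity analysis and its inversion.
# A toy linear "model" stands in for the ventricular myocyte model: inputs are
# log-scaled conductance multipliers, outputs are simulated measurements. With
# as many linearly independent outputs as inputs, the regression matrix B can
# be inverted to ask which conductance changes yield a target output change.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_outputs, n_trials = 6, 6, 400

TRUE = rng.normal(size=(n_params, n_outputs))            # hidden ground-truth sensitivities

def run_model(log_scale):
    """Stand-in for simulating the myocyte model with scaled conductances."""
    return log_scale @ TRUE + 0.05 * rng.normal(size=n_outputs)

X = rng.normal(scale=0.2, size=(n_trials, n_params))      # randomized log conductance scalings
Y = np.array([run_model(x) for x in X])                   # corresponding model outputs

B, *_ = np.linalg.lstsq(X, Y, rcond=None)                  # outputs ~= X @ B  (sensitivity matrix)
B_inv = np.linalg.inv(B)                                   # possible when outputs are independent

target_change = np.array([0.3, -0.1, 0.0, 0.2, 0.0, -0.2]) # desired output shifts (log units)
needed_params = target_change @ B_inv                      # conductance changes that produce them
print(np.round(needed_params, 3))
```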

7.
8.
Semantics-based model composition is an approach for generating complex biosimulation models from existing components that relies on capturing the biological meaning of model elements in a machine-readable fashion. This approach allows the user to work at the biological rather than computational level of abstraction and helps minimize the amount of manual effort required for model composition. To support this compositional approach, we have developed the SemGen software, and here report on SemGen’s semantics-based merging capabilities using real-world modeling use cases. We successfully reproduced a large, manually-encoded, multi-model merge: the “Pandit-Hinch-Niederer” (PHN) cardiomyocyte excitation-contraction model, previously developed using CellML. We describe our approach for annotating the three component models used in the PHN composition and for merging them at the biological level of abstraction within SemGen. We demonstrate that we were able to reproduce the original PHN model results in a semi-automated, semantics-based fashion and also rapidly generate a second, novel cardiomyocyte model composed using an alternative, independently-developed tension generation component. We discuss the time-saving features of our compositional approach in the context of these merging exercises, the limitations we encountered, and potential solutions for enhancing the approach.
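The toy example below illustrates the general idea of semantics-based merging (it is not SemGen itself): variables from two component models are matched through shared biological annotations rather than through their code-level names, and matched pairs are resolved into a single variable in the merged model. The annotation URIs and variable names are placeholders.

```python
# Toy illustration of semantics-based merging (not SemGen). Each model maps a
# variable name to an annotation URI describing its biological meaning; merging
# couples variables that share an annotation, regardless of their names.
electrophys = {
    "V_m":  "opb:membrane_potential",
    "Ca_i": "chebi:29108#cytosolic_calcium",
}
tension = {
    "Cai":   "chebi:29108#cytosolic_calcium",   # same biology, different name
    "T_act": "opb:active_tension",
}

def merge_by_annotation(model_a, model_b):
    """Return merged variable set plus the (a, b) pairs resolved into one variable."""
    by_uri_a = {uri: name for name, uri in model_a.items()}
    merged, resolved = dict(model_a), []
    for name_b, uri in model_b.items():
        if uri in by_uri_a:
            resolved.append((by_uri_a[uri], name_b, uri))   # couple the two models here
        else:
            merged[name_b] = uri
    return merged, resolved

merged, resolved = merge_by_annotation(electrophys, tension)
print(resolved)   # [('Ca_i', 'Cai', 'chebi:29108#cytosolic_calcium')]
```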

9.
Advanced statistical methods used to analyze high-throughput data such as gene-expression assays result in long lists of “significant genes.” One way to gain insight into the significance of altered expression levels is to determine whether Gene Ontology (GO) terms associated with a particular biological process, molecular function, or cellular component are over- or under-represented in the set of genes deemed significant. This process, referred to as enrichment analysis, profiles a gene set and is widely used to make sense of the results of high-throughput experiments. The canonical example of enrichment analysis is when the output dataset is a list of genes differentially expressed in some condition. To determine the biological relevance of a lengthy gene list, the usual solution is to perform enrichment analysis with the GO. We can aggregate the annotating GO concepts for each gene in this list, and arrive at a profile of the biological processes or mechanisms affected by the condition under study. While GO has been the principal target for enrichment analysis, the methods of enrichment analysis are generalizable. We can conduct the same sort of profiling along other ontologies of interest. Just as scientists can ask “Which biological process is over-represented in my set of interesting genes or proteins?” we can also ask “Which disease (or class of diseases) is over-represented in my set of interesting genes or proteins?” For example, by annotating known protein mutations with disease terms from the ontologies in BioPortal, Mort et al. recently identified a class of diseases—blood coagulation disorders—that were associated with a 14-fold depletion in substitutions at O-linked glycosylation sites. With the availability of tools for automatic annotation of datasets with terms from disease ontologies, there is no reason to restrict enrichment analyses to the GO. In this chapter, we will discuss methods to perform enrichment analysis using any ontology available in the biomedical domain. We will review the general methodology of enrichment analysis, the associated challenges, and discuss the novel translational analyses enabled by the existence of public, national computational infrastructure and by the use of disease ontologies in such analyses. (A minimal sketch of the underlying enrichment test follows the chapter summary below.)

What to Learn in This Chapter

  • Review the commonly used approach of Gene Ontology based enrichment analysis
  • Understand the pitfalls associated with current approaches
  • Understand the national infrastructure available for using alternative ontologies for enrichment analysis
  • Learn about a generalized enrichment analysis workflow and its application using disease ontologies
This article is part of the “Translational Bioinformatics” collection for PLOS Computational Biology.
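The sketch below shows the core enrichment test referred to above; it applies unchanged whether the annotating ontology is the GO or a disease ontology. The term-to-gene mapping names are hypothetical, and multiple-testing correction is noted in a comment but not implemented.

```python
# Minimal sketch of term enrichment analysis. For each term, compare its
# frequency in the "interesting" gene set against the background with a
# one-sided Fisher's exact (hypergeometric) test. The same test works for GO
# terms or disease-ontology terms. Apply multiple-testing correction (e.g.,
# Benjamini-Hochberg) to the resulting p-values; omitted here for brevity.
# Assumes the interesting set is a subset of the background.
from scipy.stats import fisher_exact

def enrichment(interesting, background, term2genes):
    """Return {term: (odds_ratio, p_value)} for over-representation."""
    interesting, background = set(interesting), set(background)
    results = {}
    for term, genes in term2genes.items():
        annotated = set(genes) & background
        a = len(annotated & interesting)                 # interesting and annotated
        b = len(interesting) - a                         # interesting, not annotated
        c = len(annotated) - a                           # annotated, not interesting
        d = len(background) - a - b - c                  # neither
        odds, p = fisher_exact([[a, b], [c, d]], alternative="greater")
        results[term] = (odds, p)
    return results

# Hypothetical usage with any ontology's term -> gene annotations:
# hits = enrichment(diff_expressed_genes, all_measured_genes, disease_term_annotations)
```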

10.
The identification of surface areas able to participate in protein–protein complexation, ligand binding, and ion ligation is critical for recognizing the biological function of particular proteins. A technique based on the analysis of irregularities in the hydrophobicity distribution is used as the criterion for recognizing interaction regions. In particular, the exposure of hydrophobic residues on the protein surface, as well as the localization of hydrophilic residues in the hydrophobic core, is treated as a potential area ready to interact with external molecules. The model is based on the “fuzzy oil drop” approach, which treats the protein molecule as a drop of hydrophobicity concentrated in the central part of the structure, with hydrophobicity close to zero on the surface, according to a three-dimensional Gauss function. Comparison with the hydrophobicity observed in a particular protein reveals some irregularities. These irregularities appear to represent aim-oriented localization.
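A toy numerical version of the comparison is sketched below: an idealized hydrophobicity profile from a three-dimensional Gaussian centered on the molecule is compared with an observed profile aggregated from nearby residues, and the largest discrepancies flag candidate interaction sites. The hydrophobicity scale, sigmas, and distance cutoff are assumptions, not the published parameterization.

```python
# Toy illustration of the "fuzzy oil drop" comparison. Theoretical (idealized)
# hydrophobicity follows a 3D Gaussian centered on the molecule; observed
# hydrophobicity aggregates residue hydrophobicities over nearby residues.
# Residues where the two profiles disagree most are candidate interaction
# sites. Assumes a non-negative hydrophobicity scale; cutoff and sigmas are
# illustrative, not the published values.
import numpy as np

def fuzzy_oil_drop_profile(coords, hydrophobicity, cutoff=8.0):
    """coords: (N,3) residue positions (A); hydrophobicity: (N,) per-residue values."""
    center = coords.mean(axis=0)
    rel = coords - center
    sigma = rel.std(axis=0) + 1e-9                     # one sigma per axis from the structure

    theoretical = np.exp(-0.5 * ((rel / sigma) ** 2).sum(axis=1))   # 3D Gauss "drop"
    dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    observed = ((dist < cutoff) * hydrophobicity[None, :]).sum(axis=1)

    # Normalize both profiles to sum to 1 so they are directly comparable.
    theoretical /= theoretical.sum()
    observed /= observed.sum()
    return observed - theoretical                      # positive: local hydrophobic excess

# Hypothetical usage: residues with the largest |difference| are candidate
# ligand-binding / protein-protein interaction regions.
# delta = fuzzy_oil_drop_profile(ca_coords, per_residue_hydrophobicity)
```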

11.
The use of computational modeling and simulation has increased in many biological fields, but despite their potential these techniques are only marginally applied in nutritional sciences. Nevertheless, recent applications of modeling have been instrumental in answering important nutritional questions from the cellular up to the physiological levels. Capturing the complexity of today's important nutritional research questions poses a challenge for modeling to become truly integrative in the consideration and interpretation of experimental data at widely differing scales of space and time. In this review, we discuss a selection of available modeling approaches and applications relevant for nutrition. We then put these models into perspective by categorizing them according to their space and time domain. Through this categorization process, we identified a dearth of models that consider processes occurring between the microscopic and macroscopic scale. We propose a “middle-out” strategy to develop the required full-scale, multilevel computational models. Exhaustive and accurate phenotyping, the use of the virtual patient concept, and the development of biomarkers from “-omics” signatures are identified as key elements of a successful systems biology modeling approach in nutrition research—one that integrates physiological mechanisms and data at multiple space and time scales.

12.
While collective intelligence (CI) is a powerful approach to increase decision accuracy, few attempts have been made to unlock its potential in medical decision-making. Here we investigated the performance of three well-known collective intelligence rules (“majority”, “quorum”, and “weighted quorum”) when applied to mammography screening. For any particular mammogram, these rules aggregate the independent assessments of multiple radiologists into a single decision (recall the patient for additional workup or not). We found that, compared to single radiologists, any of these CI-rules both increases true positives (i.e., recalls of patients with cancer) and decreases false positives (i.e., recalls of patients without cancer), thereby overcoming one of the fundamental limitations to decision accuracy that individual radiologists face. Importantly, we find that all CI-rules systematically outperform even the best-performing individual radiologist in the respective group. Our findings demonstrate that CI can be employed to improve mammography screening; similarly, CI may have the potential to improve medical decision-making in a much wider range of contexts, including many areas of diagnostic imaging and, more generally, diagnostic decisions that are based on the subjective interpretation of evidence.
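The three aggregation rules are simple to state in code; the sketch below gives a minimal version for a single mammogram, where the per-reader weights and the thresholds are illustrative placeholders rather than the values used in the study.

```python
# Minimal sketch of the three collective-intelligence rules applied to one
# mammogram: each radiologist independently votes recall (1) or no recall (0).
# The per-reader weights (here: a made-up accuracy score) and the thresholds
# are illustrative choices, not those of the study.
import numpy as np

def majority(votes):
    """Recall if more than half of the readers vote recall."""
    return int(np.sum(votes) > len(votes) / 2)

def quorum(votes, q=2):
    """Recall if at least q readers vote recall."""
    return int(np.sum(votes) >= q)

def weighted_quorum(votes, weights, q=0.4):
    """Recall if the weight-normalized vote mass for recall reaches q."""
    weights = np.asarray(weights, dtype=float)
    return int(np.dot(votes, weights) / weights.sum() >= q)

votes = np.array([1, 0, 1, 0, 0])                 # five independent radiologists
weights = [0.9, 0.6, 0.8, 0.7, 0.5]               # e.g., historical reader accuracy
print(majority(votes), quorum(votes, q=2), weighted_quorum(votes, weights))
```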

13.
Virtual high-throughput screening of molecular databases and in particular high-throughput protein–ligand docking are both common methodologies that identify and enrich hits in the early stages of the drug design process. Current protein–ligand docking algorithms often implement a program-specific model for protein–ligand interaction geometries. However, in order to create a platform for arbitrary queries in molecular databases, a new program is desirable that allows more manual control of the modeling of molecular interactions. For that reason, ProPose, an advanced incremental construction docking engine, is presented here that implements a fast and fully configurable molecular interaction and scoring model. This program uses user-defined, discrete, pharmacophore-like representations of molecular interactions that are transformed on-the-fly into a continuous potential energy surface, allowing for the incorporation of target-specific interaction mechanisms into docking protocols in a straightforward manner. A torsion angle library, based on semi-empirical quantum chemistry calculations, is used to provide minimum-energy torsion angles for the incremental construction algorithm. Docking results for a diverse set of protein–ligand complexes from the Protein Data Bank demonstrate the feasibility of this new approach. As a result, the seamless integration of pharmacophore-like interaction types into the docking and scoring scheme implemented in ProPose opens new opportunities for efficient, receptor-specific screening protocols.
Graphical abstract: ProPose, a fully configurable protein–ligand docking program, transforms pharmacophores into a smooth potential energy surface.
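The sketch below is not ProPose's actual scoring function, but it illustrates the underlying idea of turning discrete, pharmacophore-like interaction points into a continuous potential: each point contributes a Gaussian well, so any candidate ligand pose can be scored smoothly. Positions, weights, and widths are invented for the example.

```python
# Toy illustration (not ProPose's scoring function) of converting discrete,
# pharmacophore-like interaction points into a continuous potential: each point
# contributes a Gaussian well whose depth reflects its interaction weight, so a
# ligand pose can be scored smoothly anywhere in space.
import numpy as np

# Hypothetical receptor interaction points: position (A), weight, width (A).
interaction_points = [
    (np.array([1.0, 0.0, 0.0]), 2.5, 1.2),   # e.g., H-bond acceptor site
    (np.array([4.0, 1.5, 0.5]), 1.0, 1.8),   # e.g., hydrophobic contact site
]

def smooth_potential(pos):
    """Continuous interaction energy at position pos (more negative = better)."""
    e = 0.0
    for center, weight, sigma in interaction_points:
        d2 = np.sum((pos - center) ** 2)
        e -= weight * np.exp(-d2 / (2.0 * sigma ** 2))
    return e

def score_pose(ligand_atoms):
    """Sum the smoothed potential over ligand atom positions."""
    return sum(smooth_potential(a) for a in ligand_atoms)

print(score_pose([np.array([1.1, 0.2, 0.0]), np.array([3.5, 1.0, 0.3])]))
```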

14.
People have limited computational resources, yet they make complex strategic decisions over enormous spaces of possibilities. How do people efficiently search spaces with combinatorially branching paths? Here, we study players’ search strategies for a winning move in a “k-in-a-row” game. We find that players use scoring strategies to prune the search space and augment this pruning by a “shutter” heuristic that focuses the search on the paths emanating from their previous move. This strong pruning has its costs—both computational simulations and behavioral data indicate that the shutter size is correlated with players’ blindness to their opponent’s winning moves. However, simulations of the search while varying the shutter size, complexity levels, noise levels, branching factor, and computational limitations indicate that despite its costs, a narrow shutter strategy is the dominant strategy for most of the parameter space. Finally, we show that in the presence of computational limitations, the shutter heuristic enhances the performance of deep learning networks in these end-game scenarios. Together, our findings suggest a novel adaptive heuristic that benefits search in a vast space of possibilities of a strategic game.
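A minimal sketch of a shutter-style pruning step is given below: rather than scoring every empty square, only squares within a window around the player's previous move are considered, and a deliberately trivial scoring function ranks the survivors. It illustrates the heuristic's structure, not the model fitted to players in the study.

```python
# Minimal sketch of a "shutter"-style pruning heuristic for a k-in-a-row game.
# Candidate moves are restricted to a window (the shutter) around the player's
# previous move; a scoring function then ranks the surviving candidates. The
# scoring function here is a trivial placeholder.
import numpy as np

def shutter_candidates(board, last_move, shutter=1):
    """Empty squares within `shutter` rows/cols of the player's last move."""
    r0, c0 = last_move
    n = board.shape[0]
    cands = []
    for r in range(max(0, r0 - shutter), min(n, r0 + shutter + 1)):
        for c in range(max(0, c0 - shutter), min(n, c0 + shutter + 1)):
            if board[r, c] == 0:
                cands.append((r, c))
    return cands

def toy_score(board, move, player):
    """Placeholder score: how many own stones already neighbor the move."""
    r, c = move
    block = board[max(0, r - 1):r + 2, max(0, c - 1):c + 2]
    return int(np.sum(block == player))

board = np.zeros((9, 9), dtype=int)
board[4, 4] = board[4, 5] = 1                     # player 1's stones; last move at (4, 5)
cands = shutter_candidates(board, last_move=(4, 5), shutter=1)
best = max(cands, key=lambda m: toy_score(board, m, player=1))
print(len(cands), best)                           # few candidates survive the shutter
```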

15.

Background

Fully differentiated adipocytes are considered to be refractory to introduction of siRNA via lipid-based transfection. However, large scale siRNA-based loss-of-function screening of adipocytes using either electroporation or virally-mediated transfection approaches can be prohibitively complex and expensive.

Methodology/Principal Findings

We present a method for introducing small interfering RNA (siRNA) into differentiated 3T3-L1 adipocytes and primary human adipocytes using an approach based on forming the siRNA/cell complex with the adipocytes in suspension rather than as an adherent monolayer, a variation of “reverse transfection”.

Conclusions/Significance

Transfection of adipocytes with siRNA by this method is economical, highly efficient, has a simple workflow, and allows standardization of the ratio of siRNA/cell number, making this approach well-suited for high-throughput screening of fully differentiated adipocytes.

16.
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme that adaptively identifies the most significant expansion coefficients. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is “non-intrusive” and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.

17.

Background

Genome-wide saturation mutagenesis and subsequent phenotype-driven screening has been central to a comprehensive understanding of complex biological processes in classical model organisms such as flies, nematodes, and plants. The degree of “saturation” (i.e., the fraction of possible target genes identified) has been shown to be a critical parameter in determining all relevant genes involved in a biological function, without prior knowledge of their products. In mammalian model systems, however, the relatively large scale and labor intensity of experiments have hampered the achievement of actual saturation mutagenesis, especially for recessive traits that require biallelic mutations to manifest detectable phenotypes.

Results

By exploiting the recently established haploid mouse embryonic stem cells (ESCs), we present an implementation of almost complete saturation mutagenesis in a mammalian system. The haploid ESCs were mutagenized with the chemical mutagen N-ethyl-N-nitrosourea (ENU) and processed for the screening of mutants defective in various steps of the glycosylphosphatidylinositol-anchor biosynthetic pathway. The resulting 114 independent mutant clones were characterized by a functional complementation assay and were shown to be defective in 20 of the 22 known genes essential for this well-characterized pathway. Ten mutants were further validated by whole-exome sequencing. The predominant generation of single-nucleotide substitutions by ENU resulted in a gene mutation rate proportional to the length of the coding sequence, which facilitated the experimental design of saturation mutagenesis screening with the aid of computational simulation.

Conclusions

Our study enables mammalian saturation mutagenesis to become a realistic proposition. Computational simulation, combined with a pilot mutagenesis experiment, could serve as a tool for the estimation of the number of genes essential for biological processes such as drug target pathways when a positive selection of mutants is available.
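In that spirit, the sketch below simulates how many independent mutant clones are needed before every pathway gene is hit at least once, assuming the per-gene hit probability is proportional to coding-sequence length (as expected for ENU-induced single-nucleotide substitutions). The gene count matches the 22-gene pathway, but the CDS lengths are random placeholders.

```python
# Simple simulation in the spirit of the design step described above: each
# recovered mutant clone carries a causal hit in one pathway gene, with a hit
# probability proportional to coding-sequence length. The simulation estimates
# how many independent clones are needed to see every gene at least once.
# CDS lengths are random placeholders, not the real GPI-pathway genes.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 22
cds_lengths = rng.integers(600, 4000, size=n_genes)     # placeholder CDS lengths (bp)
p_hit = cds_lengths / cds_lengths.sum()                  # hit probability per clone

def clones_to_saturation(p_hit, n_sim=2000):
    """Clones needed until every gene has >= 1 hit (coupon-collector style)."""
    needed = []
    for _ in range(n_sim):
        seen, clones = set(), 0
        while len(seen) < len(p_hit):
            seen.add(rng.choice(len(p_hit), p=p_hit))
            clones += 1
        needed.append(clones)
    return np.median(needed), np.percentile(needed, 95)

median, p95 = clones_to_saturation(p_hit)
print(f"median clones for saturation: {median:.0f}; 95th percentile: {p95:.0f}")
```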

Electronic supplementary material

The online version of this article (doi:10.1186/1471-2164-15-1016) contains supplementary material, which is available to authorized users.

18.
Intracellular biochemical parameters, such as the expression level of gene products, are considered to be optimized so that a biological system, including the parameters, works effectively. Those parameters should have some permissible range so that the systems have robustness against perturbations, such as noise in gene expression. However, little is known about the permissible range in real cells because there has been no experimental technique to test it. In this study, we developed a genetic screening method, named “genetic tug-of-war” (gTOW), that evaluates the upper-limit copy numbers of genes in the model eukaryote Saccharomyces cerevisiae, and we applied it to 30 cell-cycle-related genes (CDC genes). The experiment provided unique quantitative data that can be used to assess system-level properties of the cell cycle, such as robustness and fragility. The data were used to evaluate the current computational model, and refinements to the model were suggested.

19.
Oligonucleotide microarrays are commonly adopted for detecting and quantifying the abundance of molecules in biological samples. Analysis of microarray data starts with recording and interpreting hybridization signals from CEL images. However, many CEL images may be blemished by noise from various sources, observed as “bright spots”, “dark clouds”, “shadowy circles”, etc. It is crucial that these image defects are correctly identified and properly processed. Existing approaches mainly focus on detecting defect areas and removing affected intensities. In this article, we propose to use a mixed effect model for imputing the affected intensities. The proposed imputation procedure is a single-array-based approach that does not require any biological replicate or between-array normalization. We further examine its performance using Affymetrix high-density SNP arrays. The results show that this imputation procedure significantly reduces genotyping error rates. We also discuss the necessary adjustments for its potential extension to other oligonucleotide microarrays, such as gene expression profiling. The R source code implementing the approach is freely available upon request.
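The sketch below is a simplified, pure-NumPy stand-in for the mixed-effect idea (it is not the authors' R implementation): per-probe effects are treated as fixed effects, local blemish-region effects as shrunken random-effect-style terms estimated from unaffected probes on the same array, and blemished intensities are imputed from their sum.

```python
# Simplified stand-in for mixed-effect-style imputation of blemished probe
# intensities on a single array: probe effects are fixed effects, region
# effects are shrunk toward zero when data are scarce (random-effect style).
# Illustration of the idea only, not the authors' model or code.
import numpy as np

def impute_blemished(log_intensity, probe_id, region_id, blemished, shrink=5.0):
    """log_intensity, probe_id, region_id: 1D arrays; blemished: boolean mask."""
    ok = ~blemished
    grand = log_intensity[ok].mean()

    # Fixed probe effects from clean measurements (deviation from the grand mean).
    probe_eff = {}
    for p in np.unique(probe_id):
        vals = log_intensity[ok & (probe_id == p)]
        probe_eff[p] = vals.mean() - grand if vals.size else 0.0

    # Shrunken region effects: pulled toward 0 when few clean probes remain.
    region_eff = {}
    for r in np.unique(region_id):
        mask = ok & (region_id == r)
        vals = log_intensity[mask]
        resid = vals - grand - np.array([probe_eff[p] for p in probe_id[mask]])
        region_eff[r] = resid.sum() / (resid.size + shrink) if resid.size else 0.0

    imputed = log_intensity.copy()
    for i in np.where(blemished)[0]:
        imputed[i] = grand + probe_eff.get(probe_id[i], 0.0) + region_eff.get(region_id[i], 0.0)
    return imputed
```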

20.
Despite intense interest and considerable effort via high-throughput screening, there are few examples of small molecules that directly inhibit protein-protein interactions. This suggests that many protein interaction surfaces may not be intrinsically “druggable” by small molecules, and elevates in importance the few successful examples as model systems for improving our fundamental understanding of druggability. Here we describe an approach for exploring protein fluctuations enriched in conformations containing surface pockets suitable for small molecule binding. Starting from a set of seven unbound protein structures, we find that the presence of low-energy pocket-containing conformations is indeed a signature of druggable protein interaction sites and that analogous surface pockets are not formed elsewhere on the protein. We further find that ensembles of conformations generated with this biased approach structurally resemble known inhibitor-bound structures more closely than equivalent ensembles of unbiased conformations. Collectively these results suggest that “druggability” is a property encoded on a protein surface through its propensity to form pockets, and inspire a model in which the crude features of the predisposed pocket(s) restrict the range of complementary ligands; additional smaller conformational changes then respond to details of a particular ligand. We anticipate that the insights described here will prove useful in selecting protein targets for therapeutic intervention.
