Similar Articles
20 similar articles found (search time: 0 ms)
2.
STORM is a software package that allows users to test a variety of hypotheses regarding patterns of relatedness and patterns of mate choice and/or mate compatibility within a population. These functions are based on four main calculations that can be conducted either independently or in the hypothesis-testing framework: internal relatedness; homozygosity by loci; pairwise relatedness; and a new metric called allele inheritance, which calculates the proportion of loci at which an offspring inherits a paternal allele different from that inherited from its mother. STORM allows users to test four hypotheses based on these calculations and Monte Carlo simulations: (i) are individuals within observed associations or groupings more/less related than expected; (ii) do observed offspring have more/less genetic variability (based on internal relatedness or homozygosity by loci) than expected from the gene pool; (iii) are observed mating pairs more/less related than expected if mating is random with respect to relatedness; and (iv) do observed offspring inherit paternal alleles different from those inherited from the mother more/less often than expected based on Mendelian inheritance.
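The allele-inheritance metric described above can be sketched in a few lines. This is an illustrative reimplementation, not STORM's actual code; the genotype representation (a dict mapping locus name to an allele pair) and all names are hypothetical.

```python
# Sketch of the allele-inheritance metric: the proportion of loci at
# which an offspring inherits a paternal allele different from the
# allele inherited from its mother. Loci where parental origin cannot
# be assigned are skipped. Data structures here are hypothetical.

def allele_inheritance(offspring, mother, father):
    informative = 0   # loci where parental origin can be assigned
    different = 0     # loci where paternal allele != maternal allele
    for locus, (a1, a2) in offspring.items():
        mat = set(mother.get(locus, ()))
        pat = set(father.get(locus, ()))
        # try both orderings to assign one allele to each parent
        for m_allele, p_allele in ((a1, a2), (a2, a1)):
            if m_allele in mat and p_allele in pat:
                informative += 1
                if p_allele != m_allele:
                    different += 1
                break
    return different / informative if informative else float("nan")
```

A Monte Carlo test of hypothesis (iv) would then compare the observed value of this proportion against values from offspring genotypes simulated under Mendelian inheritance.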

3.
Cardiac experimental electrophysiology is in need of a well-defined Minimum Information Standard for recording, annotating, and reporting experimental data. As a step towards establishing this, we present a draft standard, called Minimum Information about a Cardiac Electrophysiology Experiment (MICEE). The ultimate goal is to develop a useful tool for cardiac electrophysiologists which facilitates and improves dissemination of the minimum information necessary for reproduction of cardiac electrophysiology research, allowing for easier comparison and utilisation of findings by others. It is hoped that this will enhance the integration of individual results into experimental, computational, and conceptual models. In its present form, this draft is intended for assessment and development by the research community. We invite the reader to join this effort, and, if deemed productive, implement the Minimum Information about a Cardiac Electrophysiology Experiment standard in their own work.

4.
With continued efforts towards a single MSI data format, data conversion routines must be made universally available. The benefits of a common imaging format, imzML, are slowly becoming more widely appreciated, but the format remains in use by only a small proportion of imaging groups. Increased awareness amongst researchers and continued support from major MS vendors in providing tools for converting proprietary formats into imzML are likely to result in a rapidly increasing uptake of the format. It is important that this does not lead to the exclusion of researchers using older or unsupported instruments. We describe an open source converter, imzMLConverter, to guard against this. We propose that proprietary formats should first be converted to mzML using one of the widely available converters, such as msconvert, and then converted from mzML to imzML using imzMLConverter. This will allow a wider audience to benefit from the imzML format immediately.
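The proposed two-step route (proprietary format → mzML via msconvert, then mzML → imzML via imzMLConverter) could be scripted along these lines. This is a sketch only: the command-line options shown are placeholders, since the exact invocation of both tools varies by version and platform.

```python
# Sketch of the two-step conversion route: vendor format -> mzML
# (msconvert) -> imzML (imzMLConverter). Command-line flags below are
# illustrative placeholders, not documented options.

import subprocess
from pathlib import Path

def convert_to_imzml(raw_file, workdir=".", run=subprocess.run):
    """Build and execute the two conversion steps; returns the commands."""
    raw = Path(raw_file)
    mzml = Path(workdir) / (raw.stem + ".mzML")
    steps = [
        ["msconvert", str(raw), "-o", str(workdir)],  # vendor -> mzML
        ["imzMLConverter", str(mzml)],                # mzML -> imzML
    ]
    for cmd in steps:
        run(cmd, check=True)
    return steps
```

Passing a stub for `run` allows the pipeline to be dry-run without either tool installed.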

5.
MOTIVATION: Reliable, automated communication of biological information requires methods to declare the information's semantics. In this paper I describe an approach to semantic declaration intended to permit independent, distributed databases, algorithms, and servers to exchange and process requests for information and computations without requiring coordination or agreement among them on universe of discourse, data model, schema, or implementation. RESULTS: This approach uses Glossa, a formal language defining the semantics of biological ideas, information, and algorithms, to executably define the semantics of complex ideas and computations by constructs of semiotes, terms which axiomatically define very simple notions. A database or algorithm wishing to exchange information or computations maintains a set of mappings between its particular notions and semiotes, and a parser to translate between its indigenous ideas and implementation and the semiotes. Requests from other databases or algorithms are issued as semiotic messages, locally interpreted and processed, and the results returned as semiotes to the requesting entity. Thus, semiotes serve as a shared, abstract layer of definitions which can be computably combined by each database or algorithm according to its own needs and ideas. By combining the explicit declaration of semantics with the computation of the semantics of complex ideas, Glossa and its semiotes permit independent computational entities to lightly federate their capabilities as desired while maintaining their unique perspectives on both scientific and technical questions.
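The mapping idea described above — each database keeps a table from its indigenous terms to shared semiotes and translates messages in both directions — can be sketched minimally. This is purely illustrative: Glossa's real syntax, semiote vocabulary, and parser are not shown here, and all names below are invented.

```python
# Purely illustrative sketch of semiote mapping: a local vocabulary is
# mapped to shared semiote terms so requests can be exchanged without
# agreement on schema. The semiote names below are hypothetical.

LOCAL_TO_SEMIOTE = {"gene_symbol": "sem:GeneName", "organism": "sem:Taxon"}
SEMIOTE_TO_LOCAL = {v: k for k, v in LOCAL_TO_SEMIOTE.items()}

def to_semiotic_message(local_query):
    """Rewrite a local field->value query in shared semiote terms."""
    return {LOCAL_TO_SEMIOTE[k]: v for k, v in local_query.items()}

def from_semiotic_message(message):
    """Interpret an incoming semiotic message in local terms."""
    return {SEMIOTE_TO_LOCAL[k]: v for k, v in message.items()}
```

Two databases with different internal schemas can each maintain such a pair of translations and interoperate through the shared semiote layer alone.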

6.
Purpose: Spectral Computed Tomography (SCT) systems equipped with photon counting detectors (PCD) are clinically desired, since such systems provide not only additional diagnostic information but also radiation dose reductions by a factor of two or more. The current unavailability of clinical PCDs makes a simulation of such systems necessary.
Methods: In this paper, we present a Monte Carlo-based simulation of an SCT equipped with a PCD. The aim of this development is to facilitate research on potential clinical applications. Our MC simulator takes into account scattering interactions within the scanned object and has the ability to simulate scans with and without scatter and a wide variety of imaging parameters. To demonstrate the usefulness of such an MC simulator for development of SCT applications, a phantom with contrast targets covering a wide range of clinically significant iodine concentrations is simulated. With those simulations, the impact of scatter and exposure on image quality and material decomposition results is investigated.
Results: Our results illustrate that scatter radiation plays a significant role in visual as well as quantitative results. Scatter radiation can reduce the accuracy of contrast agent concentration by up to 15%.
Conclusions: We present a reliable and robust software bench for simulation of SCTs equipped with PCDs.
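The mechanism by which unrejected scatter degrades quantitative accuracy can be illustrated numerically. The following is a toy calculation, not the paper's simulator: all attenuation values and the scatter fraction are arbitrary placeholders, chosen only to show that scattered counts added to the detected signal bias a Beer-Lambert concentration estimate downward.

```python
# Toy model: scattered photons add to the detected counts, so inverting
# Beer-Lambert underestimates the contrast agent concentration. The
# numbers (mu, thickness, scatter fraction) are arbitrary placeholders.

import math

def estimated_concentration(primary, incident, mu, thickness, scatter=0.0):
    """Invert Beer-Lambert; scatter adds to the detected signal."""
    detected = primary + scatter
    return -math.log(detected / incident) / (mu * thickness)

incident = 1.0e6
mu, thickness, true_conc = 0.02, 10.0, 1.0   # placeholder units
primary = incident * math.exp(-mu * thickness * true_conc)

no_scatter = estimated_concentration(primary, incident, mu, thickness)
with_scatter = estimated_concentration(primary, incident, mu, thickness,
                                       scatter=0.05 * primary)
bias = (true_conc - with_scatter) / true_conc  # fractional underestimate
```

Even a 5% scatter-to-primary ratio in this toy setup produces a substantial fractional error, consistent in direction with the scatter-induced accuracy loss reported above.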

7.
Much of the debate about reciprocity in humans and other primates hinges on proximate mechanisms, or more precisely, the contingency of one service on another. While there is good evidence for long-term statistical contingencies of services given and received in primates, results for short-term behavioral contingencies are mixed. Indeed, as we show here, controlled experiments using artificial tasks and explicit turn-taking were unlikely to find short-term effects. We therefore used more naturalistic experiments to test for short-term contingencies of grooming on food sharing and vice versa in one group of chimpanzees and two groups of bonobos. Overall, we found significant effects of grooming on food sharing and vice versa; however, in the chimpanzees these effects disappeared when controlling for long-term characteristics of the dyad, including services exchanged over the whole study period. In the bonobos, short-term contingencies remained significant, which was likely a consequence of considerable tension surrounding monopolizable food, resulting in higher rates of grooming and other affiliative behaviors around sharing sessions. These results are consistent with the fact that previous evidence for short-term contingency often involved grooming and that long-term contingency is more commonly observed in primates. We propose that long-term contingency is proximately regulated by a ‘relationship score’ computed through a tally of past interactions which tends to outweigh recent single events. We therefore suggest that future research into the proximate mechanisms of reciprocity should trace the development of such a score by focusing on newly formed dyads with no history of interactions.
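The proposed 'relationship score' — a running tally of past interactions in which accumulated history outweighs any single recent event — can be sketched as a weighted sum. The exponential weighting below is an assumption for illustration, not a scheme taken from the study.

```python
# Hypothetical sketch of a 'relationship score': a running tally over a
# dyad's interaction history. With decay close to 1, the accumulated
# history dominates any single recent event. The weighting scheme is an
# illustrative assumption, not the study's model.

def relationship_score(interactions, decay=0.99):
    """Exponentially weighted tally over chronological interaction values
    (e.g. +1 for a grooming bout received)."""
    score = 0.0
    for value in interactions:
        score = decay * score + value
    return score
```

Under this scheme, one defection after a long positive history barely moves the score, whereas for a newly formed dyad each interaction shifts it substantially — which is why the authors suggest studying new dyads.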

8.
Databases have become integral parts of data management, dissemination, and mining in biology. At the Second Annual Conference on Electron Tomography, held in Amsterdam in 2001, we proposed that electron tomography data should be shared in a manner analogous to structural data at the protein and sequence scales. At that time, we outlined our progress in creating a database to bring together cell level imaging data across scales, The Cell Centered Database (CCDB). The CCDB was formally launched in 2002 as an on-line repository of high-resolution 3D light and electron microscopic reconstructions of cells and subcellular structures. It contains 2D, 3D, and 4D structural and protein distribution information from confocal, multiphoton, and electron microscopy, including correlated light and electron microscopy. Many of the data sets are derived from electron tomography of cells and tissues. In the 5 years since its debut, we have moved the CCDB from a prototype to a stable resource and expanded the scope of the project to include data management and knowledge engineering. Here, we provide an update on the CCDB and how it is used by the scientific community. We also describe our work in developing additional knowledge tools, e.g., ontologies, for annotation and query of electron microscopic data.

12.
SUMMARY: GeneContent is a software system to infer the genome phylogeny based on an additive genome distance that can be estimated from the extended gene content data, which contains the genome-wide information (absence of a gene family, presence as a single copy, or presence as duplicates) across multiple species. GeneContent can also be used to explore the genome-wide evolutionary pattern of gene loss and proliferation. AVAILABILITY: Distribution packages of GeneContent for both Microsoft Windows and Linux operating systems are available at http://xgu.zool.iastate.edu. CONTACT: xgu@iastate.edu.
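The extended gene content data described above can be represented as one state vector per genome, with a state per gene family (0 = absent, 1 = single copy, 2 = duplicated). The plain mismatch fraction below is only a placeholder distance for illustration; GeneContent's additive distance is a model-based estimator, not this simple count.

```python
# Simplified sketch of comparing extended gene content vectors
# (0 = family absent, 1 = single copy, 2 = duplicated). The mismatch
# fraction is an illustrative placeholder, not GeneContent's additive
# distance estimator.

def gene_content_distance(genome_a, genome_b):
    """Fraction of gene families whose content state differs."""
    if len(genome_a) != len(genome_b):
        raise ValueError("genomes must cover the same gene families")
    mismatches = sum(a != b for a, b in zip(genome_a, genome_b))
    return mismatches / len(genome_a)
```

A matrix of such pairwise distances across multiple species is the input a distance-based phylogeny method would then operate on.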

13.
MOTIVATION: MethylCoder is a software program that generates per-base methylation data given a set of bisulfite-treated reads. It provides the option to use either of two existing short-read aligners, each with different strengths. It accounts for soft-masked alignments and overlapping paired-end reads. MethylCoder outputs data in text and binary formats in addition to the final alignment in SAM format, so that common high-throughput sequencing tools can be used on the resulting output. It is more flexible than existing software and competitive in terms of speed and memory use. AVAILABILITY: MethylCoder requires only a Python interpreter and a C compiler to run. Extensive documentation and the full source code are available under the MIT license at: https://github.com/brentp/methylcode. CONTACT: bpederse@gmail.com.
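The per-base summary such a bisulfite pipeline reports rests on a simple counting principle: at each reference cytosine, unconverted (methylated) cytosines read as 'C' while bisulfite-converted ones read as 'T'. The sketch below illustrates that calculation only; alignment is assumed already done, and the `pileup` structure (position → list of read bases) is hypothetical, not MethylCoder's internal representation.

```python
# Sketch of per-base methylation levels from bisulfite data: at each
# reference 'C', reads showing 'C' are methylated evidence and reads
# showing 'T' are converted (unmethylated) evidence. 'pileup' maps
# reference position -> list of aligned read bases (hypothetical).

def methylation_levels(reference, pileup):
    levels = {}
    for pos, base in enumerate(reference):
        if base != "C":
            continue
        bases = pileup.get(pos, [])
        c = bases.count("C")   # methylated evidence
        t = bases.count("T")   # bisulfite-converted evidence
        if c + t:
            levels[pos] = c / (c + t)
    return levels
```

A real tool additionally handles the reverse strand (G/A), sequence context (CpG vs. CHH), and quality filtering, all omitted here.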

14.

Background  

Interpreting and controlling bioelectromagnetic phenomena require realistic physiological models and accurate numerical solvers. A semi-realistic model often used in practice is the piecewise constant conductivity model, for which only the interfaces have to be meshed. This simplified model makes it possible to use Boundary Element Methods. Unfortunately, most Boundary Element solutions are confronted with accuracy issues when the conductivity ratio between neighboring tissues is high, as for instance the scalp/skull conductivity ratio in electro-encephalography. To overcome this difficulty, we proposed a new method called the symmetric BEM, which is implemented in the OpenMEEG software. The aim of this paper is to present OpenMEEG, both from the theoretical and the practical point of view, and to compare its performance with other competing software packages.

16.
The integration of software into special-purpose systems (e.g. for gene sequence analysis) can be a difficult task. We describe a general-purpose software integration tool, the BCE™ program, that facilitates assembly of VAX-based software into application systems and provides an easy-to-use, intuitive user interface. We describe the use of BCE to integrate a heterogeneous collection of sequence analysis tools. Many BCE design features are generally applicable and can be implemented in other language or hardware environments. Received on May 13, 1987; accepted on October 2, 1987

17.
Pise is interface construction software for bioinformatics applications that run by command-line operations. It creates common, easy-to-use interfaces to these applications for the Web, or other uses. It is adaptable to new bioinformatics tools, and offers program chaining, Unix system batch and other controls, making it an attractive method for building and using your own bioinformatics web services.

18.

Background  

Despite the recent success of genome-wide association studies in identifying novel loci contributing effects to complex human traits, such as type 2 diabetes and obesity, much of the genetic component of variation in these phenotypes remains unexplained. One way to improve power to detect further novel loci is through meta-analysis of studies from the same population, increasing the sample size over any individual study. Although statistical analysis software packages incorporate routines for meta-analysis, they are ill-equipped to meet the challenges of the scale and complexity of data generated in genome-wide association studies.
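The core calculation such GWAS meta-analysis tools repeat at every variant is the standard fixed-effects inverse-variance-weighted combination of per-study effect estimates. The sketch below is the textbook formula, not code from any specific package; the challenge described above is running it robustly across millions of SNPs and heterogeneous study files.

```python
# Standard fixed-effects inverse-variance meta-analysis for one variant:
# combine per-study effect sizes (betas) and standard errors (ses).
# Generic textbook calculation, not any specific package's code.

import math

def ivw_meta(betas, ses):
    weights = [1.0 / se ** 2 for se in ses]          # w_i = 1 / se_i^2
    beta = sum(w * b for w, b in zip(weights, betas)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))               # pooled SE
    z = beta / se
    p = math.erfc(abs(z) / math.sqrt(2.0))           # two-sided p-value
    return beta, se, p
```

In a genome-wide setting this per-variant step is wrapped in allele harmonisation, strand checks, and genomic-control correction, which is where generic statistical packages fall short.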

20.

Background  

Systems biologists work with many kinds of data, from many different sources, using a variety of software tools. Each of these tools typically excels at one type of analysis, such as the analysis of microarrays, of metabolic networks, or of predicted protein structure. A crucial challenge is to combine the capabilities of these (and other forthcoming) data resources and tools to create a data exploration and analysis environment that does justice to the variety and complexity of systems biology data sets. A solution to this problem should recognize that data types, formats and software in this high-throughput age of biology are constantly changing.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号