Similar Documents
A total of 20 similar documents were found (search time: 46 ms).
1.
The Graphical Query Language (GQL) is a set of tools for the analysis of gene expression time-courses. They allow a user to pre-process the data, to query it for interesting patterns, to perform model-based clustering or mixture estimation, to include subsequent refinements of clusters and, finally, to use other biological resources to evaluate the results. Analyses are carried out in a graphical and interactive environment, allowing expert intervention in all stages of the data analysis. AVAILABILITY: The GQL package is freely available under the GNU general public license (GPL) at http://www.ghmm.org/gql
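
As a generic, hypothetical illustration of the model-based clustering step described in this abstract (not the GQL implementation itself, whose API is not shown here), a Gaussian mixture model from scikit-learn can cluster expression time-courses; the file name and the number of clusters are assumed values.

    # Illustrative sketch only: model-based clustering of gene expression
    # time-courses with a Gaussian mixture (scikit-learn), not the GQL tool.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical matrix: one row per gene, one column per time point.
    expression = np.loadtxt("timecourses.txt")   # assumed file name

    gmm = GaussianMixture(n_components=5, covariance_type="diag", random_state=0)
    labels = gmm.fit_predict(expression)         # cluster assignment per gene

    for k in range(gmm.n_components):
        print(f"cluster {k}: {np.sum(labels == k)} genes")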

2.
This essay addresses three questions: (1) What happens to information that cannot be recorded on film or videotape? (2) Is the “visual image” conceived to be an autonomous, universalized object of study, as if it exists prior to and independent of conventional human languages? (3) Who are the much-talked-about viewers of ethnographic films, and what is their relationship to visual anthropologists' investigations? The essay's focus is on Grimshaw's statement about visual anthropologists who “seek legitimation by turning away from the mainstream textual tradition” [Grimshaw 2001: 172]. I have attempted to submit evidence that such a move weakens visual anthropologists' capacity to provide “mainstream anthropology” with a vital, growing and legitimate contribution.

3.
Objectives. The cost of a genetic linkage or association study is largely determined by the number of individuals to be recruited, phenotyped, and genotyped. Efficiency can be increased by using a sequential procedure that reduces time and cost on average. Two strategies for sequential designs in genetic epidemiological studies can be distinguished. One approach is to increase the sample size sequentially and to conduct multiple significance tests on the accumulating data: if significance or futility can be assumed with a certain probability, the study is stopped; otherwise, it is carried on to the next stage. The second approach is to conduct early linkage analyses on a coarse marker grid and to increase marker density in later stages; interim analyses are performed to select interesting genomic regions for follow-up. The aim of this article is to review sequential procedures in the context of genetic linkage and association studies. Methods. A systematic literature search was performed in the Medline and Linkage Bibliography databases. Articles were considered relevant if a sequential design was proposed or applied in a genetic linkage or association study. Results. The majority of the proposed study designs were developed to meet the demands of specific studies and lack a theoretical foundation. A second group of procedures is based on simulation results and is essentially restricted to the specific simulated situations. Finally, some theoretically founded procedures have been proposed; these are discussed in detail. Conclusions. Although interesting and promising procedures have been suggested, they still lack implementations suitable for practical use. In addition, further developments are required to adapt sequential strategies for optimal use in genetic epidemiological studies.
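
As a hedged, simplified sketch of the first strategy described in this abstract (increasing the sample size sequentially with interim tests), the function below implements a two-stage stop/continue decision; the stopping boundaries and z-statistics are arbitrary placeholders, not the calibrated bounds of any published group-sequential design.

    # Simplified two-stage sequential test (illustration only; the stopping
    # boundaries below are placeholders, not calibrated group-sequential bounds).
    def two_stage_test(z_stage1, z_stage2=None,
                       efficacy_bound=2.80, futility_bound=0.50, final_bound=1.98):
        """Return a decision after stage 1, or after stage 2 if the study continued."""
        if z_stage1 >= efficacy_bound:
            return "stop: significant at interim"
        if z_stage1 <= futility_bound:
            return "stop: futility at interim"
        if z_stage2 is None:
            return "continue to stage 2"
        return "significant" if z_stage2 >= final_bound else "not significant"

    # Example: interim z-statistic from the first batch of recruited individuals.
    print(two_stage_test(z_stage1=1.4))                  # -> continue to stage 2
    print(two_stage_test(z_stage1=1.4, z_stage2=2.1))    # -> significant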

4.
The construction of complex simulation models and the application of new computer hardware to ecological problems have resulted in many ecologists relying on computer programmers to develop their modelling software. However, this can lead to a lack of flexibility and understanding in model implementation, and to resource problems for researchers. This paper presents a new programming language, Viola, based on a simple organisational concept, which most researchers can use to develop complex simulations far more easily than with standard programming languages such as C++. The language is object oriented and implemented through a visual interface. It is specifically designed to handle complicated individual-based behavioural simulations and comes with embedded concurrency-handling abilities.

5.
6.
7.
Current demand for understanding the behavior of groups of related genes, combined with the greater availability of data, has led to an increased focus on statistical methods in gene set analysis. In this paper, we aim to perform a critical appraisal of the methodology based on graphical models developed in Massa et al. (2010), which uses pathway signaling networks as a starting point to develop statistically sound procedures for gene set analysis. We pay attention to the potential of the methodology with respect to the organizational aspects of dealing with such complex but highly informative starting structures, that is, pathways. We focus on three themes: the translation of a biological pathway into a graph suitable for modeling; the role of shrinkage when there are more genes than samples; and the evaluation of how well the statistical models respond to biological expectations. To study the impact of shrinkage, two simulation studies are run. To evaluate the biological expectations, we use data from a network with known behavior, which offers the possibility of a realistic check of the model's response to changes in the experimental conditions.
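
As a generic illustration of why shrinkage matters when there are more genes than samples (the second theme above), the sketch below compares the sample covariance with a Ledoit-Wolf shrinkage estimate; this is a stand-in for the idea only, not a reproduction of the shrinkage estimator used by Massa et al. (2010).

    # Illustration: covariance estimation with more genes than samples.
    # Ledoit-Wolf shrinkage is used as a generic example, not the specific
    # estimator of the paper under review.
    import numpy as np
    from sklearn.covariance import LedoitWolf

    rng = np.random.default_rng(0)
    n_samples, n_genes = 20, 100              # fewer samples than genes
    X = rng.normal(size=(n_samples, n_genes))

    sample_cov = np.cov(X, rowvar=False)      # singular when n_samples < n_genes
    lw = LedoitWolf().fit(X)                  # shrunken, well-conditioned estimate

    print("rank of sample covariance:", np.linalg.matrix_rank(sample_cov))
    print("shrinkage intensity:", round(lw.shrinkage_, 3))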

8.

Background

Case-only designs have been used since the late 1980s. In these designs, as opposed to case-control or cohort studies, only cases are required and each case serves as its own control, eliminating selection bias and confounding related to control subjects and to time-invariant characteristics. The objectives of this systematic review were to analyze how the two main case-only designs – case-crossover (CC) and self-controlled case series (SCCS) – have been applied and reported in the pharmacoepidemiology literature, in terms of the applicability assumptions and specificities of these designs.

Methodology/Principal Findings

We systematically selected all reports in this field involving case-only designs from MEDLINE and EMBASE up to September 15, 2010. Data were extracted using a standardized form. The analysis included 93 reports: 50 (54%) used the CC design and 45 (48%) the SCCS design, with 2 reports combining both designs. All applicable validity assumptions of the designs were fulfilled in 12 (24%) CC and 18 (40%) SCCS articles. Fifty (54%) articles (15 CC (30%) and 35 SCCS (78%)) adequately addressed the specificities of the case-only analyses in the way they reported results.

Conclusions/Significance

Our systematic review underlines that the implementation of CC and SCCS designs needs to be more rigorous with regard to validity assumptions, and that the reporting of results needs to be improved.

9.
Electrostatic forces are one of the primary determinants of molecular interactions. They help guide the folding of proteins, increase the binding of one protein to another and facilitate protein-DNA and protein-ligand binding. A popular method for computing the electrostatic properties of biological systems is to numerically solve the Poisson-Boltzmann (PB) equation, and there are several easy-to-use software packages available that solve the PB equation for soluble proteins. Here we present a freely available program, called APBSmem, for carrying out these calculations in the presence of a membrane. The Adaptive Poisson-Boltzmann Solver (APBS) is used as a back-end for solving the PB equation, and a Java-based graphical user interface (GUI) coordinates a set of routines that introduce the influence of the membrane, determine its placement relative to the protein, and set the membrane potential. The software Jmol is embedded in the GUI to visualize the protein inserted in the membrane before the calculation and the electrostatic potential after completing the computation. We expect that the ease with which the GUI allows one to carry out these calculations will make this software a useful resource for experimenters and computational researchers alike. Three examples of membrane protein electrostatic calculations are carried out to illustrate how to use APBSmem and to highlight the different quantities of interest that can be calculated.

10.
Modern applications of Sanger DNA sequencing often require converting a large number of chromatogram trace files into high-quality DNA sequences for downstream analyses. Relatively few nonproprietary software tools are available to assist with this process. SeqTrace is a new, free, and open-source software application that is designed to automate the entire workflow by facilitating easy batch processing of large numbers of trace files. SeqTrace can identify, align, and compute consensus sequences from matching forward and reverse traces, filter low-quality base calls, and end-trim finished sequences. The software features a graphical interface that includes a full-featured chromatogram viewer and sequence editor. SeqTrace runs on most popular operating systems and is freely available, along with supporting documentation, at http://seqtrace.googlecode.com/.
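
As a minimal, hypothetical sketch of the end-trimming idea mentioned above (not SeqTrace's own algorithm, which is not documented here), the function below trims a sequence using a sliding window over Phred quality scores; the window size and quality threshold are assumed values.

    # Illustrative end-trimming by sliding-window Phred quality
    # (assumed parameters; not the algorithm used internally by SeqTrace).
    def quality_trim(seq, quals, window=10, min_mean_q=20):
        """Trim both ends back to the first window whose mean quality reaches min_mean_q."""
        def first_good(indices):
            for i in indices:
                win = quals[i:i + window]
                if len(win) == window and sum(win) / window >= min_mean_q:
                    return i
            return None

        start = first_good(range(len(seq)))
        if start is None:
            return ""
        end = first_good(range(len(seq) - window, -1, -1))
        return seq[start:end + window]

    # Example: low-quality ends are removed, the high-quality core is kept.
    print(quality_trim("ACGTACGTACGTACGT", [5]*3 + [30]*10 + [8]*3, window=4))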

11.
One of the most important steps in biomedical longitudinal studies is choosing a good experimental design that provides high accuracy in the analysis of results with a minimum sample size. Several methods for constructing efficient longitudinal designs have been developed based on power analysis and the statistical model used for analyzing the final results. However, this methodology has not been made available to practitioners through user-friendly software. In this paper we introduce LADES (Longitudinal Analysis and Design of Experiments Software) as an alternative and easy-to-use tool for conducting longitudinal analysis and constructing efficient longitudinal designs. LADES incorporates methods for creating cost-efficient longitudinal designs, unequal longitudinal designs, and simple longitudinal designs. In addition, LADES includes different methods for analyzing longitudinal data, such as linear mixed models and generalized estimating equations, among others. A study of European eels is reanalyzed to show the capabilities of LADES. Three treatments, each applied to an aquarium containing five eels, were analyzed. Data were collected from week 0 up to the 12th week post treatment for all the eels (complete design). The response under evaluation is sperm volume. A linear mixed model was fitted to the results using LADES. The complete design had a power of 88.7% using 15 eels. With LADES we propose the use of an unequal design with only 14 eels and 89.5% efficiency. LADES was developed as a powerful and simple tool to promote the use of statistical methods for analyzing and creating longitudinal experiments in biomedical research.
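
A linear mixed model of the kind fitted to the eel data could be expressed generically with statsmodels, as in the sketch below; the data file and column names are hypothetical, and this is an illustration of the model class, not the LADES analysis itself.

    # Hypothetical re-analysis sketch with statsmodels (not LADES itself);
    # the data frame and its column names are assumed.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("eels.csv")   # assumed columns: eel_id, week, treatment, sperm_volume

    # Random intercept per eel; fixed effects for week, treatment and their interaction.
    model = smf.mixedlm("sperm_volume ~ week * treatment", data=df, groups=df["eel_id"])
    result = model.fit()
    print(result.summary())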

12.
The paper proposes a new method for the analysis of unreplicated factorial designs. The new method does not use an estimate of the error variance and can identify up to m – 1 active contrasts, where m is the number of contrasts in the study. It can be shown that the proposed test statistic, called MaxUr, is a function of the generalized likelihood ratio test statistic under normality, which was also used by Al-Shiha and Yang (1999). Our strategy for identifying active contrasts using MaxUr, however, differs from the multistage procedure proposed by Al-Shiha and Yang (1999), and our simulation study suggests that the new method is superior to theirs. To test the performance of the new method, we carried out an extensive simulation study based on 10,000 samples, comparing the new method with 12 other methods from the literature. To compare these methods fairly, some of the 12 existing methods had to be slightly modified so that the probability of falsely rejecting the global null hypothesis of no active factors was 0.05 for all methods. Two evaluation standards were used: the empirical power and the loss of decision. The results show that the new method performs very well, especially for a large number of active contrasts (say, more than 3 out of 15). A second purpose of the simulation study was to compare the performance of two well-known estimates of the variance: the PSE introduced by Lenth (1989) and the ASE introduced by Dong (1993). The simulation study confirmed the approximations of Kunert (1997), which indicate that Dong's (1993) estimate should perform better for small numbers of active factors.
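
For reference, Lenth's (1989) pseudo standard error (PSE), one of the two variance estimates compared in the simulation study above, can be computed from the m contrast estimates as follows; this sketch covers only the PSE, not the proposed MaxUr statistic, and the example effects are invented.

    # Lenth's (1989) pseudo standard error for unreplicated factorial contrasts.
    import numpy as np

    def lenth_pse(contrasts):
        c = np.abs(np.asarray(contrasts, dtype=float))
        s0 = 1.5 * np.median(c)                  # initial robust scale estimate
        trimmed = c[c < 2.5 * s0]                # drop apparently active contrasts
        return 1.5 * np.median(trimmed)

    # Invented example with 15 contrasts, three of which look active.
    effects = [0.2, -0.4, 0.1, 8.0, -0.3, 0.5, -6.5, 0.2,
               0.0, 0.4, 7.1, -0.1, 0.3, -0.2, 0.1]
    print(lenth_pse(effects))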

13.
14.
A new class of phase-confounded designs is suggested. The class possesses the property that if, in a given year, one of the rotations carries the test crop, then all other rotations do as well. It also leads to equal block sizes when the rotations are structured in specified ways. Conditions under which designs of this class exist, as well as methods for constructing them, are discussed.

15.
Several experimental studies in the literature have shown that even when performing purely kinesthetic tasks, such as reaching for a kinesthetically felt target with a hidden hand, the brain reconstructs a visual representation of the movement. In our previous studies, however, we did not observe any role of a visual representation of the movement in a purely kinesthetic task. This apparent contradiction could be related to a fundamental difference between the studied tasks. In our study subjects used the same hand to both feel the target and to perform the movement, whereas in most other studies, pointing to a kinesthetic target consisted of pointing with one hand to the finger of the other, or to some other body part. We hypothesize, therefore, that it is the necessity of performing inter-limb transformations that induces a visual representation of purely kinesthetic tasks. To test this hypothesis we asked subjects to perform the same purely kinesthetic task in two conditions: INTRA and INTER. In the former they used the right hand to both perceive the target and to reproduce its orientation. In the latter, subjects perceived the target with the left hand and responded with the right. To quantify the use of a visual representation of the movement we measured deviations induced by an imperceptible conflict that was generated between visual and kinesthetic reference frames. Our hypothesis was confirmed by the observed deviations of responses due to the conflict in the INTER, but not in the INTRA, condition. To reconcile these observations with recent theories of sensori-motor integration based on maximum likelihood estimation, we propose here a new model formulation that explicitly considers the effects of covariance between sensory signals that are directly available and internal representations that are ‘reconstructed’ from those inputs through sensori-motor transformations.
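
For context, the standard maximum-likelihood rule for combining two independent sensory estimates weights each by its reliability (inverse variance); the model proposed in the abstract extends this by allowing covariance between the signals, which the simple sketch below, with invented example numbers, does not include.

    # Standard MLE cue combination for two independent estimates
    # (background only; the paper's model additionally handles covariance).
    def combine_cues(x1, var1, x2, var2):
        w1, w2 = 1.0 / var1, 1.0 / var2            # reliabilities
        estimate = (w1 * x1 + w2 * x2) / (w1 + w2)
        variance = 1.0 / (w1 + w2)
        return estimate, variance

    # Invented example: visual estimate 10 deg (variance 4), kinesthetic 16 deg (variance 1).
    print(combine_cues(10.0, 4.0, 16.0, 1.0))      # -> (14.8, 0.8)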

16.
In a typical comparative clinical trial the randomization scheme is fixed at the beginning of the study, and maintained throughout the course of the trial. A number of researchers have championed a randomized trial design referred to as ‘outcome‐adaptive randomization.’ In this type of trial, the likelihood of a patient being enrolled to a particular arm of the study increases or decreases as preliminary information becomes available suggesting that treatment may be superior or inferior. While the design merits of outcome‐adaptive trials have been debated, little attention has been paid to significant ethical concerns that arise in the conduct of such studies. These include loss of equipoise, lack of processes for adequate informed consent, and inequalities inherent in the research design which could lead to perceptions of injustice that may have negative implications for patients and the research enterprise. This article examines the ethical difficulties inherent in outcome‐adaptive trials.

17.
Doing large-scale genomics experiments can be expensive, and so experimenters want to get the most information out of each experiment. To this end the Maximally Informative Next Experiment (MINE) criterion for experimental design was developed. Here we explore this idea in a simplified context, the linear model. Four variations of the MINE method for the linear model were created: MINE-like, MINE, MINE with random orthonormal basis, and MINE with random rotation. Each method varies in how it maximizes the MINE criterion. Theorem 1 establishes sufficient conditions for the maximization of the MINE criterion under the linear model. Theorem 2 establishes when the MINE criterion is equivalent to the classic design criterion of D-optimality. By simulation under the linear model, we establish that the MINE with random orthonormal basis and MINE with random rotation are faster to discover the true linear relation with regression coefficients and observations when . We also establish in simulations with , , and 1000 replicates that these two variations of MINE also display a lower false positive rate than the MINE-like method and additionally, for a majority of the experiments, for the MINE method.
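
Theorem 2 in the abstract relates the MINE criterion to classical D-optimality; as a generic, hedged illustration of D-optimal selection under the linear model (not the MINE code used in the paper), the sketch below scores candidate next experiments by the log-determinant of the resulting information matrix, using invented random data.

    # Generic D-optimality scoring for choosing the next experiment under a
    # linear model (illustration of the classical criterion, not the MINE code).
    import numpy as np

    def next_experiment_d_optimal(X_current, candidates):
        """Pick the candidate row that maximizes log det(X'X) after inclusion."""
        best_idx, best_logdet = None, -np.inf
        for i, x_new in enumerate(candidates):
            X = np.vstack([X_current, x_new])
            sign, logdet = np.linalg.slogdet(X.T @ X)
            # sign <= 0 means the information matrix is still singular; skip it.
            if sign > 0 and logdet > best_logdet:
                best_idx, best_logdet = i, logdet
        return best_idx

    rng = np.random.default_rng(1)
    X_current = rng.normal(size=(6, 3))     # 6 experiments already run, 3 coefficients
    candidates = rng.normal(size=(10, 3))   # 10 possible next experiments
    print("choose candidate", next_experiment_d_optimal(X_current, candidates))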

18.
Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, the sequence of events and the importance of the modules associated with memory encoding have not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data-driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls aged 19–59 performed a visual scene encoding task. fMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA), with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions), and post-fMRI recall testing revealed correct memory encoding at 86.33±5.83%. ICA identified the involvement of components of five different networks in the process of memory encoding, and the GCA allowed the directionality of the information flow to be assessed: from visual cortex via the ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular networks. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.
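
As a hedged, generic illustration of the directionality analysis described above (not the GIFT/GCA pipeline used in the study), statsmodels provides a pairwise Granger causality test between two component time courses; the time series below are synthetic.

    # Pairwise Granger causality between two synthetic time courses
    # (generic illustration; not the pipeline used in the study).
    import numpy as np
    from statsmodels.tsa.stattools import grangercausalitytests

    rng = np.random.default_rng(0)
    n = 200
    driver = rng.normal(size=n)
    follower = np.roll(driver, 2) + 0.5 * rng.normal(size=n)   # lags the driver by 2 samples

    # Column order matters: the test asks whether column 2 Granger-causes column 1.
    data = np.column_stack([follower, driver])
    results = grangercausalitytests(data, maxlag=4)            # dict keyed by lag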

19.
Perception is generally thought to occur centrally in the nervous system as a result of information which flows unidirectionally through a hierarchy of sensory processors. Such a view is in conflict with recent experimental evidence for a centrifugal control capable of enhancing particular features of the sensory input. Certain phenomena in human perception, resembling order-disorder transitions in physics, also suggest the existence of a positive feedback mechanism in the sensory pathway. A mechanism of perception is proposed in which unstructured feedback can accomplish the desired feature-specific enhancement of the input. The principle used here — the Alopex principle — is one that was devised in this laboratory for the experimental determination of visual receptive fields. The biological requirements for the operation of the principle are discussed, and a possible site in the thalamic relay nuclei is suggested.

20.
The Rosetta Molecular Modeling suite is a command-line-only collection of applications that enable high-resolution modeling and design of proteins and other molecules. Although extremely useful, Rosetta can be difficult to learn for scientists with little computational or programming experience. To that end, we have created a Graphical User Interface (GUI) for Rosetta, called the PyRosetta Toolkit, for creating and running protocols in Rosetta for common molecular modeling and protein design tasks and for analyzing the results of Rosetta calculations. The program is highly extensible so that developers can add new protocols and analysis tools to the PyRosetta Toolkit GUI.
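
As a minimal, hedged sketch of what the underlying PyRosetta layer looks like from Python (the Toolkit GUI itself is not scripted here), loading and scoring a structure might be done as follows; the function names assume a recent PyRosetta 4 distribution, and the PDB file name is a placeholder.

    # Minimal PyRosetta usage sketch (requires a licensed PyRosetta install;
    # the PDB file name is a placeholder, and this is not the Toolkit GUI API).
    from pyrosetta import init, pose_from_pdb, get_fa_scorefxn

    init()                                   # start Rosetta with default options
    pose = pose_from_pdb("my_protein.pdb")   # load a structure into a Pose
    scorefxn = get_fa_scorefxn()             # standard full-atom score function
    print("total score:", scorefxn(pose))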

