Similar documents: 20 results
1.
High throughput mutation screening in an automated environment generates large data sets that must be organized and stored reliably. Complex multistep workflows require strict process management and careful data tracking. We have developed a Laboratory Information Management System (LIMS) tailored to high throughput candidate gene mutation scanning and resequencing that meets these requirements. Designed with a client/server architecture, our system is platform independent and based on open-source tools, from the database to the web application development strategy. Flexible, expandable and secure, the LIMS communicates with most of the laboratory instruments and robots, tracking samples and laboratory information and capturing data at every step of our automated mutation screening workflow. An important feature of our LIMS is that it tracks information through a laboratory workflow in which the process at one step is contingent on the results of a previous step. AVAILABILITY: The script for MySQL database table creation and the source code of the whole JSP application are freely available on our website: http://www-gcs.iarc.fr/lims/. SUPPLEMENTARY INFORMATION: The system server configuration, database structure and additional details on the LIMS and the mutation screening workflow are available on our website: http://www-gcs.iarc.fr/lims/
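A minimal sketch of the result-contingent sample tracking described above, under loud assumptions: the schema, table and column names are invented for illustration (they are not taken from the authors' distributed MySQL scripts), and SQLite stands in for the MySQL database actually used.

```python
import sqlite3

# Hypothetical, much-simplified schema for a mutation-screening LIMS.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sample (
    sample_id INTEGER PRIMARY KEY,
    label     TEXT NOT NULL
);
CREATE TABLE workflow_step (
    step_id   INTEGER PRIMARY KEY,
    sample_id INTEGER NOT NULL REFERENCES sample(sample_id),
    step_name TEXT NOT NULL,
    result    TEXT  -- e.g. 'pass'/'fail'; the next step depends on this
);
""")

conn.execute("INSERT INTO sample (label) VALUES ('S001')")
sid = conn.execute("SELECT sample_id FROM sample WHERE label='S001'").fetchone()[0]
conn.execute(
    "INSERT INTO workflow_step (sample_id, step_name, result) VALUES (?, 'PCR', 'pass')",
    (sid,))

# A step is only scheduled if the previous step passed (result-contingent flow).
prev = conn.execute(
    "SELECT result FROM workflow_step WHERE sample_id=? ORDER BY step_id DESC LIMIT 1",
    (sid,)).fetchone()[0]
if prev == "pass":
    conn.execute(
        "INSERT INTO workflow_step (sample_id, step_name) VALUES (?, 'sequencing')",
        (sid,))

steps = [r[0] for r in conn.execute(
    "SELECT step_name FROM workflow_step WHERE sample_id=? ORDER BY step_id", (sid,))]
```

The point of the sketch is the conditional insert: each workflow stage consults the recorded result of the previous stage before scheduling the next one.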

2.
To detect hybrid cells formed by fusion of mouse myeloma cells with splenocytes from immunized mice, an autoradiographic method was used, involving 3H-hypoxanthine as a labelled precursor of nucleic acids and a highly sensitive UK emulsion for rapid preparation of autoradiographs. The optimal hybridization conditions, providing the maximum yield of hybrid cells and hybrid clones in our experiments, were: polyethylene glycol ("Loba", mol. weight 4000) at 50% (w/v) concentration, and NS-O or X-653 myeloma cells at a parental cell ratio of 1:5 (myeloma cell to splenocyte).

3.
The use of array comparative genomic hybridization (array CGH) as a diagnostic tool in molecular genetics has facilitated the identification of many new microdeletion/microduplication syndromes (MMSs). Furthermore, this method has allowed for the identification of copy number variations (CNVs) whose pathogenic role has yet to be uncovered. Here, we report on our application of array CGH for the identification of pathogenic CNVs in 79 Russian children with intellectual disability (ID). Twenty-six pathogenic or likely pathogenic changes in copy number were detected in 22 patients (28%): 8 CNVs corresponded to known MMSs, and 17 were not associated with previously described syndromes. In this report, we describe our findings and comment on genes potentially associated with ID that are located within the CNV regions.

4.
We present an approach to self-assessment for Autonomic Computing, based on the synthesis of utility functions, at the level of an autonomic application, or even a single task or feature performed by that application. This paper describes the fundamental steps of our approach: instrumentation of the application; collection of exhaustive samples of runtime data about relevant quality attributes of the application, as well as characteristics of its runtime environment; synthesis of a utility function through statistical correlation over the collected data points; and embedding of code corresponding to the equation of the synthesized utility function within the application, which enables the computation of utility values at run time.  We describe a number of case studies, with their results and implications, to motivate and discuss the significance of application-level utility, illustrate our statistical synthesis method, and present our framework for instrumentation, monitoring, and utility function embedding/evaluation.
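The synthesize-then-embed step can be caricatured in a few lines: fit a utility model over collected (attribute, utility) samples, then embed the fitted equation as a callable. The linear model form and the sample data below are assumptions made purely for illustration; the paper's statistical machinery is richer.

```python
# Toy sketch: least-squares fit of utility = a*x + b over runtime samples,
# returning the fitted equation as an embeddable function.
def fit_linear_utility(samples):
    """Closed-form simple linear regression over (x, utility) pairs."""
    n = len(samples)
    sx = sum(x for x, _ in samples)
    sy = sum(u for _, u in samples)
    sxx = sum(x * x for x, _ in samples)
    sxy = sum(x * u for x, u in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return lambda x: a * x + b   # the "embedded" utility function

# Invented samples: utility degrades as response time (ms) grows.
observed = [(100, 0.9), (200, 0.7), (300, 0.5), (400, 0.3)]
utility = fit_linear_utility(observed)
value = utility(250)   # utility predicted at run time for a fresh observation
```

At run time the application would call `utility(...)` on freshly measured attribute values instead of re-running the statistical analysis.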

5.

Background

To achieve more realistic simulations, meteorologists develop and use models with ever-increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing volume and multifaceted character of the data. Various data sources, numerous variables and multiple simulations lead to a complex database. Although a variety of software suited to the visualization of meteorological data exists, none of it fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of context (e.g., topography) and other static data, support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work.

Methods and Results

Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-game engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and differences between simulation runs. In an iterative development process, our easy-to-use application was developed in close cooperation with meteorologists and visualization experts. The usability of the application has been validated with user tests. We report on how this application helps users confirm or refute existing hypotheses and discover new insights. In addition, the application has been used at public events to communicate research results.

6.
SNPper: retrieval and analysis of human SNPs
MOTIVATION: Single Nucleotide Polymorphisms (SNPs) are an increasingly important tool for the study of the human genome. SNPs can be used as markers to create high-density genetic maps, as causal candidates for diseases, or to reconstruct the history of our genome. SNP-based studies rely on the availability of large numbers of validated, high-frequency SNPs whose position on the chromosomes is known with precision. Although large collections of SNPs exist in public databases, researchers need tools to effectively retrieve and manipulate them. RESULTS: We describe the implementation and usage of SNPper, a web-based application to automate the tasks of extracting SNPs from public databases, analyzing them and exporting them in formats suitable for subsequent use. Our application is oriented toward the needs of candidate-gene, whole-genome and fine-mapping studies, and provides several flexible ways to present and export the data. The application has been publicly available for over a year, and has received positive user feedback and high usage levels.

7.
We present a substantial improvement of S-flexfit, our recently proposed method for flexible fitting in three-dimensional electron microscopy (3D-EM) at a resolution range of 8-12 Å, together with a comparison of the method's capabilities with Normal Mode Analysis (NMA), application examples and a user's guide. S-flexfit uses the evolutionary information contained in protein domain databases such as CATH, by means of the structural alignment of the elements of a protein superfamily. The added development is based on a recent extension of the Singular Value Decomposition (SVD) algorithm specifically designed for situations with missing data: Incremental Singular Value Decomposition (ISVD). ISVD can manage gaps and allows more amino acids to be considered in the structural alignment of a superfamily, extending the range of application and producing better models for the fitting step of our methodology. Our previous SVD-based flexible fitting approach can only take into account positions with no gaps in the alignment, which is appropriate when the superfamily members are relatively similar and there are few gaps. However, with new data coming from structural proteomics work, the latter situation is becoming less likely, making ISVD the technique of choice for future work. We present the results of using ISVD in the process of flexible fitting with both simulated and experimental 3D-EM maps (GroEL and the Poliovirus 135S cell entry intermediate).
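Why a gap-tolerant factorization matters can be shown with a toy stand-in. The code below is alternating least squares on a rank-1 matrix with a missing entry, not the authors' ISVD, and the data are invented; it only illustrates that a factorization can be recovered without discarding every position containing a gap, which is what a plain SVD over gap-free positions would force.

```python
# Rank-1 alternating least squares that simply skips missing (None) entries.
def rank1_als(M, iters=200):
    m, n = len(M), len(M[0])
    u = [1.0] * m
    v = [1.0] * n
    for _ in range(iters):
        for j in range(n):  # update right factor, ignoring gaps
            num = sum(u[i] * M[i][j] for i in range(m) if M[i][j] is not None)
            den = sum(u[i] ** 2 for i in range(m) if M[i][j] is not None)
            v[j] = num / den
        for i in range(m):  # update left factor, ignoring gaps
            num = sum(v[j] * M[i][j] for j in range(n) if M[i][j] is not None)
            den = sum(v[j] ** 2 for j in range(n) if M[i][j] is not None)
            u[i] = num / den
    return u, v

# Exactly rank-1 data M[i][j] = a[i]*b[j], with one entry knocked out as a "gap".
a, b = [1.0, 2.0, 3.0], [2.0, 4.0]
M = [[a[i] * b[j] for j in range(2)] for i in range(3)]
M[1][1] = None
u, v = rank1_als(M)
recovered = u[1] * v[1]   # imputed value for the gap; true value is 2*4 = 8
```

The remaining observed entries pin down the factors, so the gap is imputed correctly; the same principle, in a far more sophisticated form, underlies gap handling in incremental SVD.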

8.
We describe several analytical techniques for use in developing genetic models of oncogenesis, including: methods for the selection of important genetic events, construction of graph models (including distance-based trees, branching trees, contingency trees and directed acyclic graph models) from these events, and methods for interpretation of the resulting models. The models can be used to make predictions about which genetic events tend to occur early, which events tend to occur together, and the likely order of events. Unlike simple path models of oncogenesis, our models allow dependencies to exist between specific genetic changes and allow for multiple, divergent paths in tumor progression. A variety of genetic events can be used with the graph models, including chromosome breaks, losses or gains of large DNA regions, small mutations and changes in methylation. As an application of the techniques, we use a recently published cytogenetic analysis of 206 melanoma cases [Nelson et al. (2000), Cancer Genet. Cytogenet. 122, 101-109] to derive graph models for chromosome breaks in melanoma. Among our predictions are: (1) breaks in 6q1 and 1q1 are early events, with 6q1 preferentially occurring first and increasing the probability of a break in 1q1, and (2) breaks in the two sets [1p1, 1p2, 9q1] and [1q1, 7p2, 9p2] tend to occur together. This study illustrates that the application of graph models to genetic data from tumor sets provides new information on the interrelationships among genetic changes during tumor progression.
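The kind of event statistics such graph models are built from can be sketched as follows. The tumor data and event labels below are invented, and the paper's actual tree-construction algorithms are not reproduced; the sketch only shows the two raw ingredients: marginal event frequencies (frequent events are candidate early events) and pairwise conditional probabilities (evidence that one break raises the chance of another).

```python
# Invented binary tumors-by-events data: each tumor is a set of observed breaks.
tumors = [
    {"6q1", "1q1"},
    {"6q1", "1q1", "9p2"},
    {"6q1"},
    {"1q1", "9p2"},
    {"6q1", "1q1", "7p2"},
]
events = sorted(set().union(*tumors))
n = len(tumors)

# Marginal frequency of each event across tumors.
freq = {e: sum(e in t for t in tumors) / n for e in events}

def cond_prob(b, a):
    """Estimated P(event b present | event a present)."""
    with_a = [t for t in tumors if a in t]
    return sum(b in t for t in with_a) / len(with_a)
```

From statistics like these, tree and DAG models then encode which dependencies are strong enough to draw as edges.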

9.
SUMMARY: Protein name extraction is an important step in mining biological literature. We describe two new methods for this task: semiCRFs and dictionary HMMs. SemiCRFs are a recently proposed extension to conditional random fields (CRFs) that enables more effective use of dictionary information as features. Dictionary HMMs are a technique in which a dictionary is converted to a large HMM that recognizes phrases from the dictionary, as well as variations of these phrases. Standard training methods for HMMs can be used to learn which variants should be recognized. We compared the performance of our new approaches with that of Maximum Entropy (MaxEnt) and normal CRFs on three datasets, and improvement over the best published results was obtained for all four methods on two of the datasets. CRFs and semiCRFs achieved the highest overall performance according to the widely used F-measure, while the dictionary HMMs performed best at finding entities that actually appear in the dictionary, the measure of most interest in our intended application. AVAILABILITY: Dictionary HMMs were implemented in Java. Algorithms are available through the information extraction package MINORTHIRD at http://minorthird.sourceforge.net
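A drastically simplified, hypothetical stand-in for the dictionary-based idea: a longest-match tagger with a single hard-coded case-folding "variant" rule. The real dictionary HMM learns from training data which variants to accept; the dictionary entries and sentence here are invented.

```python
# Toy protein-name dictionary (lower-cased multi-token phrases).
dictionary = {"p53", "cyclin d1", "tnf alpha"}

def tag_protein_names(tokens):
    """Return (start, end) token spans whose case-folded text is in the dictionary."""
    spans, i = [], 0
    while i < len(tokens):
        matched = None
        for j in range(len(tokens), i, -1):      # try longest match first
            phrase = " ".join(tokens[i:j]).lower()
            if phrase in dictionary:
                matched = (i, j)
                break
        if matched:
            spans.append(matched)
            i = matched[1]                        # skip past the matched span
        else:
            i += 1
    return spans

tokens = "Cyclin D1 interacts with p53 in vivo".split()
spans = tag_protein_names(tokens)
```

A dictionary HMM generalizes this by scoring token-level edits, insertions and reorderings probabilistically instead of requiring exact case-folded matches.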

10.
SUMMARY: MACiE (mechanism, annotation and classification in enzymes) is a publicly available web-based database, held in CMLReact (an XML application), that aims to help our understanding of the evolution of enzyme catalytic mechanisms and also to create a classification system which reflects the actual chemical mechanism (catalytic steps) of an enzyme reaction, not only the overall reaction. AVAILABILITY: http://www-mitchell.ch.cam.ac.uk/macie/.

11.
INTRODUCTION: In recent years the use of computer systems has allowed numerical analysis of medical images to be introduced and has speeded up the conversion of numerical data into clinically valuable information. It has thus become feasible to create a software application that almost automatically calculates the volume of anatomical structures imaged by MRI. The aim of our study was to determine the clinical usefulness of a numerical segmentation image (NSI) software application in estimating the volume of the extraocular muscles. MATERIAL AND METHODS: The study group comprised 45 patients (90 orbits). All the patients underwent MRI examination of the orbits on a 1.5 T scanner using a head coil. The degree of exophthalmos was determined clinically and radiologically in relation to the interzygomatic line. The quantitative assessment of all eye muscles was carried out using the NSI application, a new software program introduced by the authors. RESULTS: A close correlation between muscle volume and the degree of exophthalmos was revealed and confirmed by statistical analysis (r = 0.543, p = 3.13396E-08), in agreement with other papers. CONCLUSIONS: The NSI software program offers a reliable and precise estimation of eye muscle volume. It is therefore useful in the diagnosis of the pathological processes leading to exophthalmos. It has special clinical value for monitoring discrete volume changes of the muscles during treatment.
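The reported association (r = 0.543) is a plain correlation coefficient between muscle volume and degree of exophthalmos. For concreteness, a Pearson r in pure Python over invented toy measurements (the values below are not the study's data):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired measurements, for illustration only.
volume = [1.1, 1.4, 1.9, 2.3, 2.8]   # eye muscle volume (cm^3)
proptosis = [15, 16, 18, 19, 22]     # exophthalmometry reading (mm)
r = pearson_r(volume, proptosis)
```

With strongly co-varying toy data like this, r comes out close to 1; the study's weaker but highly significant r = 0.543 reflects the same statistic over 90 real orbits.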

12.
OBJECTIVE: To investigate the accuracy of time-resolved fluoroimmunoassay (DELFIA) in screening newborns for congenital hypothyroidism (CH). METHODS: The clinical data of 20,000 newborns admitted to our hospital between January 2008 and May 2013 were retrospectively analyzed. Levels of triiodothyronine (T3), thyroxine (T4) and thyroid-stimulating hormone (TSH) in newborn heel-prick blood were measured by both enzyme-linked immunosorbent assay (ELISA) and DELFIA, and the accuracy of the two methods was compared. RESULTS: The accuracy of DELFIA in initial CH screening was significantly higher than that of ELISA. On retesting TSH in recalled newborns, DELFIA showed the highest accuracy for TSH >= 20 mU/L, and the difference was statistically significant (P < 0.05). CONCLUSION: DELFIA is of high value in screening for congenital hypothyroidism and can serve as the method of first choice for clinical screening.

13.
MOTIVATION: To explore gene expression data in a metabolic context, an application is required that integrates these data and represents them in a (genome-wide) metabolic map. The layout of this metabolic map must be highly flexible to enable discoveries of biological phenomena. Moreover, it must allow the simultaneous representation of additional information about genes and enzymes. Since the layout and properties of existing maps did not fulfill our requirements, we developed a new way of representing gene expression data in metabolic charts. RESULTS: ViMAc generates user-specified (genome-wide) metabolic maps to explore gene expression data. To enhance the interpretation of these maps, information such as sub-cellular localization is included. ViMAc can be used to analyse human or yeast expression data obtained with DNA microarrays or SAGE. We introduce our metabolic map method and demonstrate how it can be applied to explore DNA microarray data for yeast. AVAILABILITY: ViMAc is freely available to academic institutions on request from the authors.
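The core mapping step, joining expression ratios onto the enzymes of a metabolic chart and binning them into display colors, might be sketched as follows. This is not ViMAc code; the gene names, reactions and thresholds are all invented for illustration.

```python
# Hypothetical map: gene -> catalyzed reaction step.
pathway = {"HXK1": "glucose -> G6P", "PFK1": "F6P -> F1,6BP"}
# Hypothetical expression fold changes from a microarray experiment.
expression = {"HXK1": 2.5, "PFK1": 0.4, "ADH1": 1.0}

def color(fold):
    """Bin a fold change into a display color (thresholds are arbitrary)."""
    if fold >= 2.0:
        return "red"      # induced
    if fold <= 0.5:
        return "green"    # repressed
    return "grey"         # unchanged

# Annotate only the genes that appear on the map and have expression data.
annotated = {g: (step, color(expression[g]))
             for g, step in pathway.items() if g in expression}
```

The real application layers further annotations (e.g. sub-cellular localization) onto each map element in the same join-then-render fashion.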

14.
Ecological Monographs, 2011, 82(1): 129-147
Ecological theory often fails applied ecologists in three ways: (1) Theory has little predictive value but is nevertheless applied in conservation with a risk of perverse outcomes, (2) individual theories have limited heuristic value for planning and framing research because they are narrowly focused, and (3) theory can lead to poor communication among scientists and hinder scientific progress through inconsistent use of terms and widespread redundancy. New approaches are therefore needed that improve the distillation, communication, and application of ecological theory. We advocate three approaches to resolve these problems: (1) improve prediction by reviewing theory across case studies to develop contingent theory where possible, (2) plan new research using a checklist of phenomena to avoid the narrow heuristic value of individual theories, and (3) improve communication among scientists by rationalizing theory associated with particular phenomena to purge redundancy and by developing definitions for key terms. We explored the extent to which these problems and solutions have been featured in two case studies of long-term ecological research programs in forests and plantations of southeastern Australia. We found that our main contentions were supported regarding the prediction, planning, and communication limitations of ecological theory. We illustrate how inappropriate application of theory can be overcome or avoided by investment in boundary-spanning actions. The case studies also demonstrate how some of our proposed solutions could work, particularly the use of theory in secondary case studies after developing primary case studies without theory. When properly coordinated and implemented through a widely agreed upon and broadly respected international collaboration, the framework that we present will help to speed the progress of ecological research and lead to better conservation decisions.

15.
16.
MOTIVATION: We have established a novel data mining procedure for the identification of genes associated with pre-defined phenotypes and/or molecular pathways. Based on the observation that these genes are frequently expressed in the same place, or in close proximity, at about the same time, we have devised an approach termed the Common Denominator Procedure. One unusual feature of this approach is that the specificity and the probability of identifying genes linked to the desired phenotype/pathway increase with greater diversity of the input data. RESULTS: To show the feasibility of our approach, Cancer Genome Anatomy Project expression data combined with a defined set of angiogenic factors were used to identify additional, novel angiogenesis-associated genes. Many of these additional genes were already known to be associated with angiogenesis according to published data, validating our approach. For some of the remaining candidate genes, application of a high-throughput functional genomics platform (XantoScreen) provided further experimental evidence for an association with angiogenesis.
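A set-based caricature of the common-denominator idea, with invented library contents and gene names (the published procedure is statistical, not a strict set intersection): genes expressed in every library where the known seed genes appear become candidates.

```python
# Hypothetical expression libraries: library name -> set of expressed genes.
libraries = {
    "lib1": {"VEGFA", "geneX", "geneY"},
    "lib2": {"VEGFA", "FGF2", "geneX"},
    "lib3": {"FGF2", "geneX", "geneZ"},
}
# Seed set of known angiogenic factors.
seeds = {"VEGFA", "FGF2"}

# Libraries in which at least one seed gene is expressed.
seed_libs = [genes for genes in libraries.values() if genes & seeds]

# Candidates: genes present in every seed library, excluding the seeds themselves.
candidates = set.intersection(*seed_libs) - seeds
```

The abstract's point about diversity follows directly: the more varied the seed libraries, the smaller and more specific this common denominator becomes.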

17.
Variable selection is an essential part of any statistical analysis and yet has been somewhat neglected in the context of longitudinal data analysis. In this article, we propose a generalized version of Mallows's C(p) (GC(p)) suitable for use with both parametric and nonparametric models. GC(p) provides an estimate of a measure of a model's adequacy for prediction. We examine its performance with popular marginal longitudinal models (fitted using GEE) and contrast the results with what is typically done in practice: variable selection based on Wald-type or score-type tests. An application to real data further demonstrates the merits of our approach, while at the same time emphasizing some important robust features inherent to GC(p).
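The abstract does not reproduce the criterion itself. For orientation only, the classical Mallows statistic that GC(p) generalizes has the standard textbook form (this is not the paper's generalized definition):

```latex
C_p = \frac{\mathrm{RSS}_p}{\hat{\sigma}^2} - n + 2p
```

where RSS_p is the residual sum of squares of the candidate model with p parameters, \hat{\sigma}^2 is the error variance estimated from the full model, and n is the number of observations; a well-specified model yields C_p close to p.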

18.
A balance needs to be struck between facilitating compassionate access to innovative treatments for those in desperate need and the duty to protect such vulnerable individuals from the harms of untested/unlicensed treatments. We introduced a principle-based framework (2009) to evaluate such requests and describe its application in the context of recently evolved UK, US and European regulatory processes. 24 referrals (20 individual; four group) were received by our quaternary children's hospital Clinical Ethics Committee (CEC) over a 5-year period (2011-16). The CEC rapid response group evaluated individual cases within 48 hours, the main referrers being haematology/oncology, immunology or transplant services (14). Most requests were for drug/vaccine/pre-trial access (13) or biological/cellular therapies (8). The majority of individual requests were approved (19/20); neutral or negative opinions were given in 5, including 3 group requests. Recently evolved regulatory processes share common criteria and conditions with our framework, including: demonstration of clinical need; a sound scientific basis with lack of a viable alternative; risk-benefit/best-interests evaluation; arrangements for fully informed consent; no compromise of arrangements to test the treatment for licensing purposes; and consideration of resource implications. There are differences between individual processes and with our framework with respect to procedures, scope, application format, costs and the obligation to make all outcome data available. Our experience has emphasized the need for an independent, principled, consistent, fair and transparent response to the increasing demand for innovative treatment on a compassionate basis. We believe that there is a need for harmonization of the recent proliferation of regulation and legislation in this area.

19.
MOTIVATION: High-throughput screening (HTS) plays a central role in modern drug discovery, allowing for testing of >100,000 compounds per screen. The aim of our work was to develop and implement methods for minimizing the impact of systematic error in the analysis of HTS data. To the best of our knowledge, two new data correction methods included in HTS-Corrector are not available in any existing commercial software or freeware. RESULTS: This paper describes HTS-Corrector, a software application for the analysis of HTS data, detection and visualization of systematic error, and corresponding correction of HTS signals. Three new methods for the statistical analysis and correction of raw HTS data are included in HTS-Corrector: background evaluation, well correction and hit-sigma distribution procedures intended to minimize the impact of systematic errors. We discuss the main features of HTS-Corrector and demonstrate the benefits of the algorithms.
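The background-evaluation idea can be sketched in miniature: estimate the systematic, position-dependent error by averaging each well position over all plates of the assay, then remove it while preserving the overall signal level. The data are invented and HTS-Corrector's actual procedures are considerably richer than this sketch.

```python
# Invented assay: rows are plates, columns are well positions.
plates = [
    [100, 110, 120],
    [102, 112, 118],
    [ 98, 108, 122],
]

n = len(plates)
# Per-well background: the average of each well position across all plates.
background = [sum(p[j] for p in plates) / n for j in range(len(plates[0]))]
# Grand mean, added back so corrected values keep the original scale.
mean_bg = sum(background) / len(background)

corrected = [[p[j] - background[j] + mean_bg for j in range(len(p))]
             for p in plates]
```

After correction, the systematic column-to-column trend (here, a steady rise from well 0 to well 2 on every plate) is flattened out, leaving only plate-specific deviations as candidate hits.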

20.
Summary. The International Biometric Society (IBS) brings together members from a diversity of cultural backgrounds, organized into geographically based Regions and National Groups, and covering a diverse range of interests, in terms of both methodological topics and application areas. We briefly reflect on how the historical development of our science, society, and international conferences reflects this diversity, with a focus on the history of the British and Irish Region of the IBS. Then, by considering the cultural/geographical diversity of the society, and the scientific diversity of the society and biometricians, we identify both some strengths of the society (diverse topics for meetings arranged across the world, application of biometrical methods to diverse application areas, management of the society by members from a diversity of backgrounds) and also some current challenges (electronic delivery of journals and other information, the diversity of application areas addressed by members of the society, improving links with the scientific societies of those who motivate our research). Finally, we illustrate the diversity of scientific problems that each of us face in our roles as biometricians.
