Similar Articles
 20 similar articles found (search time: 406 ms)
1.
Because of the features provided by an abundance of specialized experimental software packages, personal computers have become prominent and powerful tools in cognitive research. Most of these programs have mechanisms to control the precision and accuracy with which visual stimuli are presented, as well as the response times. However, external factors, often related to the technology used to display the visual information, can have a noticeable impact on actual performance and may be easily overlooked by researchers. The aim of this study is to measure the precision and accuracy of the timing mechanisms of some of the most popular software packages in a typical laboratory scenario, in order to assess whether the presentation times configured by researchers differ from measured times by no more than what the hardware limitations would predict. Despite the apparent precision and accuracy of the results, important issues related to timing setups in the presentation of visual stimuli were found, and researchers should take these into account in their experiments.
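A minimal sketch of this kind of timing measurement (illustrative only: the function name and the 60 Hz frame duration are assumptions, and software timestamps like these cannot replace external photodiode measurement of the actual display):

```python
import time

def measure_interval_error(requested_ms, trials=50):
    """Measure how far actual timed intervals deviate from the requested one."""
    errors = []
    for _ in range(trials):
        start = time.perf_counter()
        time.sleep(requested_ms / 1000.0)
        elapsed_ms = (time.perf_counter() - start) * 1000.0
        errors.append(elapsed_ms - requested_ms)
    mean_error = sum(errors) / len(errors)   # accuracy: systematic offset
    jitter = max(errors) - min(errors)       # precision: spread across trials
    return mean_error, jitter

# Roughly one frame at a 60 Hz refresh rate:
mean_error, jitter = measure_interval_error(16.7)
print(f"mean error: {mean_error:.2f} ms, jitter: {jitter:.2f} ms")
```

On a non-real-time operating system, both numbers are typically nonzero, which is exactly the gap between configured and measured presentation times that the study examines.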

2.
Macintosh sequence analysis software   Cited by: 3 (self-citations: 0, other citations: 3)
The analysis of information in nucleotide and amino acid sequence data from an investigator’s own laboratory, or from the ever-growing worldwide databases, is critically dependent on well-planned and well-written software. Although the most powerful packages have previously been confined to workstations, there has been a dramatic increase over the last few years in the sophistication of the programs available for personal computers, as their speed and power have increased. A wide choice of software is available for the Macintosh, including the LaserGene suite of programs from DNAStar. This review assesses the strengths and weaknesses of LaserGene and concludes that it provides a useful and comprehensive range of sequence analysis tools.

3.
Personal computers of ever-increasing speed have motivated programmers of multivariate software to adapt their programs to run on Microsoft Windows and Macintosh platforms. Updated versions of these multivariate programs appear more and more frequently and are marketed intensively. In this review we provide a comparative analysis of the most recent versions of three analytical software packages: Canoco for Windows 4.5, PC-ORD version 4 and SYN-TAX 2000. The three packages share two characteristics. First, the most recent versions are now compatible with the most recent Windows platforms and should therefore be accessible for use by virtually all vegetation scientists. Second, they have capabilities for numerous multivariate techniques, although each package has some unique techniques. Thus, any one of the packages will have much to offer the user.

4.
A microcomputer-based system for the storage and retrieval of information on strains in a culture collection is described. The system was designed around commercially available software packages written for microcomputers. Two additional programs were written using the BASIC language to allow a catalogue of the culture collection to be printed in a specific format. The details of each strain in the collection were stored on a floppy disc. Details of new strains were added to this database and information relating to existing cultures was modified or, where necessary, deleted from the collection. The database can be searched to supply details of a particular culture or to identify those cultures which possess certain attributes. The records for the whole collection were sorted alphabetically by species name, and numerically by accession number, and a word-processing package was used to print a catalogue of the culture collection.
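The catalogue ordering described above (alphabetical by species name, then numerical by accession number) can be sketched as follows; the record field names are hypothetical, not taken from the original system:

```python
def catalogue_order(records):
    """Sort culture-collection records alphabetically by species name,
    then numerically by accession number (field names are illustrative)."""
    return sorted(records, key=lambda r: (r["species"], r["accession"]))

records = [
    {"species": "Bacillus subtilis", "accession": 12},
    {"species": "Aspergillus niger", "accession": 7},
    {"species": "Aspergillus niger", "accession": 3},
]
for r in catalogue_order(records):
    print(r["accession"], r["species"])
# 3 Aspergillus niger / 7 Aspergillus niger / 12 Bacillus subtilis
```

Sorting on a tuple key gives the two-level order in one pass, which is the same effect the original system achieved with separate sort steps.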

5.

6.
The use of computers in a fermentation pilot plant is described from a practical point of view. The aim is not the application of a computer to a single special process but the general use of computers to prepare and present data for subsequent analysis of fermentation processes. The hardware is normally bought from computer manufacturers, but some additional installations are useful. Application software has to be developed by the biotechnologist; the software structure is therefore the most important part of the computer application. Data storage is divided into three parts: short-term memory, long-term storage, and a medium-term store used mainly in process analysis and process control. Four types of programs are used: main schedule tasks, low-priority on-line tasks, a sense-switch routine, and various off-line programs. A table of all programs is presented, and the main schedule task is described in detail to illustrate the software structure.

7.
M J Kelly. Génome, 1989, 31(2): 1027-1033
Mapping and sequencing the human genome will generate large amounts of data, which must be sorted, analyzed, and stored for rapid retrieval if this enormous task is to be completed. Computers and their software programs are among the most important tools available to the molecular biologist today. This paper discusses current capabilities and future needs in computer hardware and software for the human genome project. The use of computer programs to generate restriction maps, manage clone libraries, manage sequencing projects, and generate consensus sequences is presented. The use of computers to communicate useful information rapidly to scientific colleagues is also mentioned. The roles of both GenBank and BIONET are central to the dissemination and analysis of sequence information. Worldwide electronic communication capabilities to assist this project are available on the BIONET National Computer Resource, using existing networks.

8.
VOSTORG is a new, versatile package of programs for the inference and presentation of phylogenetic trees, as well as an efficient tool for nucleotide (nt) and amino acid (aa) sequence analysis (sequence input, verification, alignment, construction of consensus sequences, etc.). On appropriately equipped systems, these data can be displayed on a video monitor or printed as required. The programs run on IBM PC/XT/AT/PS/2 or compatible computers, and hardware graphics support is recommended. The package is designed to be easily handled by occasional computer users, yet it is powerful enough for experienced professionals.

9.
Cloud computing and cluster computing are user-centric computing services. Shared software and hardware resources and information can be provided to computers and other equipment according to the demands of users. A majority of services are deployed through outsourcing. Outsourcing computation allows resource-constrained clients to outsource their complex computation workloads to a powerful server that is rich in computation resources. Modular exponentiation is one of the most complex computations in public-key-based cryptographic schemes, so it is useful to reduce the clients' computation cost through outsourcing. In this paper, we propose a novel outsourcing algorithm for modular exponentiation, based on a new mathematical division, under the setting of two non-colluding cloud servers. Both the base and the power of the outsourced data can be kept private, and efficiency is improved compared with previous work.
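The general idea of splitting a modular exponentiation between two non-colluding servers can be illustrated with an additive split of the exponent. This is a simplified sketch, not the paper's actual algorithm (which also keeps the base private); it assumes a prime modulus so that exponents can be reduced modulo p - 1:

```python
import random

def outsource_modexp(g, a, p):
    """Illustrative split of g^a mod p across two non-colluding servers.

    The exponent is split additively so that neither server learns `a`;
    the client recombines the two partial results with one multiplication.
    Assumes p is prime and gcd(g, p) = 1, so exponents work modulo p - 1.
    """
    a1 = random.randrange(1, p - 1)
    a2 = (a - a1) % (p - 1)   # a = a1 + a2 (mod p - 1)
    r1 = pow(g, a1, p)        # computed by server 1
    r2 = pow(g, a2, p)        # computed by server 2
    return (r1 * r2) % p      # cheap recombination on the client

p = 101                       # a small prime modulus for illustration
g, a = 7, 45
assert outsource_modexp(g, a, p) == pow(g, a, p)
```

In a real protocol the client's work must be strictly cheaper than computing the exponentiation itself; here the sketch only demonstrates the privacy-by-splitting idea.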

10.
We present eight computer programs written in the C programming language that are designed to analyze genotypic data and to support existing software used to construct genetic linkage maps. Although each program has a unique purpose, they all share the common goals of affording a greater understanding of genetic linkage data and of automating tasks to make computers more effective tools for map building. The PIC/HET and FAMINFO programs automate calculation of relevant quantities such as heterozygosity, PIC, allele frequencies, and informativeness of markers and pedigrees. PREINPUT simplifies data submissions to the Centre d'Etude du Polymorphisme Humain (CEPH) database by creating a file with genotype assignments that CEPH's INPUT program would otherwise require to be input manually. INHERIT is a program written specifically for mapping the X chromosome: by assigning a dummy allele to males, in the nonpseudoautosomal region, it eliminates falsely perceived noninheritances in the data set. The remaining four programs complement the previously published genetic linkage mapping software CRI-MAP and LINKAGE. TWOTABLE produces a more readable format for the output of CRI-MAP two-point calculations; UNMERGE is the converse to CRI-MAP's merge option; and GENLINK and LINKGEN automatically convert between the genotypic data file formats required by these packages. All eight applications read input from the same types of data files that are used by CRI-MAP and LINKAGE. Their use has simplified the management of data, has increased knowledge of the content of information in pedigrees, and has reduced the amount of time needed to construct genetic linkage maps of chromosomes.
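The abstract does not give the formulas PIC/HET uses, but the standard definitions of expected heterozygosity and polymorphism information content (PIC, in the usual Botstein et al. form) from allele frequencies can be sketched as:

```python
def heterozygosity(freqs):
    """Expected heterozygosity: 1 - sum(p_i^2) over allele frequencies."""
    return 1.0 - sum(p * p for p in freqs)

def pic(freqs):
    """Polymorphism information content:
    PIC = 1 - sum(p_i^2) - sum_{i<j} 2 * p_i^2 * p_j^2
    """
    s = sum(p * p for p in freqs)
    cross = sum(2 * (freqs[i] ** 2) * (freqs[j] ** 2)
                for i in range(len(freqs))
                for j in range(i + 1, len(freqs)))
    return 1.0 - s - cross

# A biallelic marker with equal allele frequencies:
print(heterozygosity([0.5, 0.5]))  # 0.5
print(pic([0.5, 0.5]))             # 0.375
```

PIC is always at most the heterozygosity, since it additionally discounts matings in which the offspring genotype is uninformative for linkage.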

11.
12.
A Reeves. Génome, 2001, 44(3): 439-443
The ability to identify individual chromosomes in cytological preparations is an essential component of many investigations. While several computer software applications have been used to facilitate such quantitative karyotype analysis, most of these programs are limited by design for specific types of analyses, or can be used only with specific hardware configurations. MicroMeasure is a new image analysis application that may be used to collect data for a wide variety of chromosomal parameters from electronically captured or scanned images. Unlike similar applications, MicroMeasure may be individually configured by the end user to suit a wide variety of research needs. This program can be used with most common personal computers, and requires no unusual or specific hardware. MicroMeasure is made available to the research community without cost by the Department of Biology at Colorado State University via the World Wide Web at http://www.biology.colostate.edu/MicroMeasure.

13.

Background  

Two-dimensional colour grids are an effective medium for representing three-dimensional data in two dimensions. Such "color-grid" representations have found increasing use in the biological sciences (e.g. microarray 'heat maps' and bioactivity data), as they are particularly suited to complex data sets and offer an alternative to the graphical representations included in traditional statistical software packages. The effectiveness of color-grids lies in their graphical design, which introduces a standard for customizable data representation. Currently, software applications capable of generating even limited color-grid representations are found only in advanced statistical packages or custom programs (e.g. microarray analysis tools), often associated with steep learning curves and requiring expert knowledge.
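The core of any color-grid is a mapping from each matrix value to a color along a gradient. A minimal sketch, with illustrative function names and a simple blue-to-red linear gradient (not the cited tool's actual color scheme):

```python
def value_to_rgb(v, vmin, vmax):
    """Map a value onto a blue-to-red gradient (low = blue, high = red)."""
    t = 0.0 if vmax == vmin else (v - vmin) / (vmax - vmin)
    return (int(255 * t), 0, int(255 * (1 - t)))

def color_grid(matrix):
    """Convert a 2-D matrix of numbers into a grid of RGB triples."""
    flat = [v for row in matrix for v in row]
    vmin, vmax = min(flat), max(flat)
    return [[value_to_rgb(v, vmin, vmax) for v in row] for row in matrix]

grid = color_grid([[0.0, 0.5], [1.0, 0.25]])
print(grid[0][0])  # lowest value -> pure blue: (0, 0, 255)
print(grid[1][0])  # highest value -> pure red: (255, 0, 0)
```

This is the third dimension of the data (the values) encoded as color over the two spatial dimensions of the grid.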

14.
M Gulotta. Biophysical Journal, 1995, 69(5): 2168-2173
LabVIEW is a graphical, object-oriented computer language developed to facilitate hardware/software communication. LabVIEW is a complete computer language that can be used like Basic, FORTRAN, or C. In LabVIEW one creates virtual instruments (VIs) that look like real instruments but are controlled by sophisticated computer programs. There are several levels of data-acquisition VIs that make it easy to control data flow, and many signal-processing and analysis algorithms come with the software as premade VIs. In the classroom, the similarity between virtual and real instruments helps students understand how information is passed between the computer and attached instruments. The software may be used in the absence of hardware, so that students can work at home as well as in the classroom. This article demonstrates how LabVIEW can be used to control data flow between computers and instruments, points out important features for signal processing and analysis, and shows how virtual instruments may be used in place of physical instrumentation. Applications of LabVIEW to the teaching laboratory are also discussed, and a plausible course outline is given.

15.
A cross-platform public domain PC image-analysis program for the comet assay   Cited by: 47 (self-citations: 0, other citations: 0)
Single-cell gel electrophoresis, also known as the comet assay, has gained widespread popularity as a simple and reliable method to measure the genotoxic and cytotoxic effects of physical and chemical agents, as well as the kinetics of DNA repair. Cells are generally stained with fluorescent dyes. The analysis of comets (damaged cells that form a typical comet-shaped pattern) is greatly facilitated by the use of a computer image-analysis program. Although several image-analysis programs are available commercially, they are expensive and their source codes are not provided. For Macintosh computers, a cost-free public domain macro is available on the Internet; no ready-to-use, cost-free program exists for the PC platform. We have therefore developed such a public domain program, under the GNU license, for PC computers. The program is called CASP and can be run on a variety of hardware and software platforms. Its practical merit was tested on human lymphocytes exposed to gamma-rays and found to yield reproducible results. The binaries for Windows 95 and Linux, together with the source code, can be obtained from: http://www.casp.of.pl.
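The abstract does not specify which measurements CASP reports, but comet-assay image analysis typically reduces a comet to summary statistics such as tail DNA % and a tail moment. An illustrative calculation on a hypothetical 1-D intensity profile (columns of the comet image summed over rows), with the head/tail boundary assumed known:

```python
def tail_metrics(profile, head_end):
    """Compute tail DNA % and a simple tail moment from a 1-D comet
    intensity profile. `head_end` is the index separating head from tail
    (assumed given here; real programs locate it automatically)."""
    total = sum(profile)
    tail = profile[head_end:]
    tail_dna = sum(tail) / total       # fraction of DNA in the tail
    tail_length = len(tail)            # in pixels
    return 100.0 * tail_dna, tail_dna * tail_length

percent, moment = tail_metrics([10, 40, 40, 5, 3, 2], head_end=3)
print(round(percent, 1))  # 10.0 -> 10% of the DNA lies in the tail
```

Higher DNA damage shifts intensity into the tail, raising both metrics; reproducibility of such numbers across operators is what the gamma-ray validation above assesses.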

16.
Bioinformatics is the use of informatics tools and techniques in the study of molecular biological, genetic, or clinical data. The field of bioinformatics has expanded tremendously to cope with the large expansion of information generated by the mouse and human genome projects, as newer generations of computers that are much more powerful have emerged in the commercial market. It is now possible to employ the computing hardware and software at hand to generate novel methodologies in order to link data across the different databanks generated by these international projects and derive clinical and biological relevance from all of the information gathered. The ultimate goal would be to develop a computer program that can provide information correlating genes, their single nucleotide polymorphisms (SNPs), and the possible structural and functional effects on the encoded proteins with relation to known information on complex diseases with great ease and speed. Here, the recent developments of available software methods to analyze SNPs in relation to complex diseases are reviewed with emphasis on the type of predictions on protein structure and functions that can be made. The need for further development of comprehensive bioinformatics tools that can cope with information generated by the genomics communities is emphasized.

17.
Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
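The likelihood calculation that BEAGLE accelerates can be illustrated in its smallest case. This is not BEAGLE's C API; it is a sketch of the per-site likelihood on a two-leaf tree under the Jukes-Cantor substitution model, summing over the unobserved root state (Felsenstein pruning reduced to one node):

```python
import math

def jc_prob(i, j, t):
    """Jukes-Cantor transition probability between nucleotides i and j
    for branch length t (expected substitutions per site)."""
    e = math.exp(-4.0 * t / 3.0)
    return 0.25 + 0.75 * e if i == j else 0.25 - 0.25 * e

def site_likelihood(leaf1, leaf2, t1, t2):
    """Likelihood of one alignment site on a two-leaf tree, marginalizing
    over the root state with uniform base frequencies (0..3 = A, C, G, T)."""
    return sum(0.25 * jc_prob(root, leaf1, t1) * jc_prob(root, leaf2, t2)
               for root in range(4))

# Identical bases on short branches are more likely than differing bases:
same = site_likelihood(0, 0, 0.1, 0.1)
diff = site_likelihood(0, 1, 0.1, 0.1)
print(same > diff)  # True
```

Real data sets multiply such per-site terms over thousands of sites and recurse over many internal nodes; that arithmetic is embarrassingly parallel across sites, which is why GPUs help so much.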

18.
Server scalability is more important than ever in today's client/server-dominated network environments. Recently, researchers have begun to consider cluster-based computers using commodity hardware as an alternative to expensive specialized hardware for building scalable Web servers. In this paper, we present performance results comparing two cluster-based Web servers based on different server architectures: OSI layer-two dispatching (LSMAC) and OSI layer-three dispatching (LSNAT). Both cluster-based server systems were implemented as application-space programs running on commodity hardware, in contrast to other, similar solutions that require specialized hardware/software. We point out the advantages and disadvantages of both systems. We also identify when servers should be clustered and when clustering will not improve performance.
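Both architectures above forward each incoming connection to one of the back-end nodes; they differ in whether frames (layer 2) or packets (layer 3) are rewritten. The scheduling decision itself can be sketched as a simple round-robin dispatcher; the class and server names are hypothetical, and the actual frame/packet rewriting is omitted:

```python
from itertools import cycle

class RoundRobinDispatcher:
    """Assign each incoming connection to the next back-end server in turn."""
    def __init__(self, servers):
        self._next = cycle(servers).__next__

    def dispatch(self, connection_id):
        server = self._next()   # rotate through the cluster
        return connection_id, server

dispatcher = RoundRobinDispatcher(["node-a", "node-b", "node-c"])
for conn in range(5):
    print(dispatcher.dispatch(conn))
# connections 0..4 land on node-a, node-b, node-c, node-a, node-b
```

Round-robin is only one policy; the dispatcher is also the natural place for least-connections or load-aware scheduling, which matters when clustering is expected to improve performance at all.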

19.
20.
Despite the widespread use and obvious strengths of model-based methods for phylogeographic study, a persistent concern for such analyses relates to the definition of the model itself. The study by Peter et al. (2010) in this issue of Molecular Ecology demonstrates an approach for overcoming such hurdles. The authors were motivated by a deceptively simple goal: they sought to infer whether a population has remained at a low and stable size or has undergone a decline, and certainly there is no shortage of software packages for such a task (e.g., see the list of programs in Excoffier & Heckel 2006). However, each of these software packages makes basic assumptions about the underlying population (e.g., whether the population is subdivided or panmictic); these assumptions are explicit to any model-based approach but can bias parameter estimates and produce misleading inferences if the model does not approximate the actual demographic history in a reasonable manner. Rather than guessing which model might be best for analyzing the data (microsatellite data from samples of chimpanzees), Peter et al. (2010) quantify the relative fit of competing models for estimating the population genetic parameters of interest. Complemented by a revealing simulation study, the authors highlight the peril inherent in model-based inferences that lack a statistical evaluation of the fit of a model to the data, while also demonstrating an approach to model selection with broad applicability to phylogeographic analysis.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号