Article Search
Fee-based full text: 381 articles
Free access: 17 articles
Free domestic access: 9 articles
Articles by publication year: 2024 (1), 2023 (4), 2022 (4), 2021 (7), 2020 (9), 2019 (14), 2018 (12), 2017 (13), 2016 (9), 2015 (14), 2014 (21), 2013 (29), 2012 (13), 2011 (16), 2010 (9), 2009 (26), 2008 (34), 2007 (27), 2006 (35), 2005 (9), 2004 (24), 2003 (15), 2002 (17), 2001 (5), 2000 (3), 1999 (1), 1998 (3), 1997 (2), 1996 (5), 1995 (2), 1994 (2), 1993 (2), 1992 (3), 1991 (3), 1990 (2), 1989 (2), 1988 (2), 1987 (1), 1986 (1), 1984 (1), 1982 (1), 1980 (2), 1978 (1), 1977 (1).
407 results in total (search time: 31 ms).
101.
Methods and milliliter-scale devices for high-throughput bioprocess design   (Total citations: 1; self-citations: 1; citations by others: 0)
Based on electromagnetic simulations as well as computational fluid dynamics simulations, gas-inducing impellers and their magnetic inductive drive were optimized for stirred-tank reactors at the 10 ml scale, arranged in a bioreaction block of 48 bioreactors. High impeller speeds of up to 4,000 rpm were achieved at very small electrical power inputs (63 W for 48 bioreactors). The maxima of local energy dissipation in the reaction medium were estimated at up to 50 W L−1 at 2,800 rpm. Total power input and local energy dissipation are thus comparable to standard stirred-tank bioreactors. A prototype fluorescence reader for 8 bioreactors with immobilized fluorometric sensor spots was applied for online measurement of dissolved oxygen concentration using the phase-detection method. Self-optimizing scheduling software was developed for parallel control of the 48 bioreactors with a liquid-handling system that automates titration and sampling. Simple parallel batch cultivations of Escherichia coli with different media compositions showed that high cell densities of up to 16.5 g L−1 dry cell mass can be achieved without pH control within 5 h, with high parallel reproducibility (standard deviation < 3.5%, n = 48), owing to the high oxygen-transfer capability of the gas-inducing stirred-tank bioreactors.
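The phase-detection readout mentioned above is commonly based on two standard relations for luminescence-lifetime oxygen sensing: the phase shift φ between excitation and emission gives the luminescence lifetime through tan(φ) = ωτ, and the Stern–Volmer equation ties that lifetime to the oxygen concentration. A minimal sketch of the conversion, with illustrative sensor constants rather than the actual reader calibration:

```python
# Convert a measured fluorescence phase shift to dissolved oxygen (DO).
# The sensor constants below are illustrative placeholders, not the
# calibration values of the prototype reader described above.
import math

F_MOD = 45_000.0   # excitation modulation frequency in Hz (assumed)
TAU0 = 20e-6       # unquenched luminescence lifetime in s (assumed)
K_SV = 0.25        # Stern-Volmer constant in L/mg (assumed)

def phase_to_do(phi_deg: float) -> float:
    """Return dissolved oxygen in mg/L from a phase shift in degrees."""
    omega = 2 * math.pi * F_MOD
    tau = math.tan(math.radians(phi_deg)) / omega  # tan(phi) = omega * tau
    return (TAU0 / tau - 1) / K_SV                 # tau0/tau = 1 + K_SV * [O2]

print(f"DO = {phase_to_do(60.0):.2f} mg/L")        # ~9 mg/L for these constants
```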
102.
This paper describes the design and implementation of a parallel programming environment called Distributed Shared Array (DSA), which provides a shared global array abstraction across machines connected by a network. In DSA, users can define and use global arrays that can be accessed uniformly from any machine in the network. Explicit management of array allocation, replication, and migration is achieved through explicit calls for array manipulation: defining array regions, reading and writing array regions, synchronization, and control of replication and migration. DSA is integrated with Grid (Globus) services. This paper also describes the use of our model for gene cluster analysis, multiple alignment, and molecular dynamics simulation. In these applications, global arrays store the distance matrix, the alignment matrix, and atom coordinates, respectively. Large arrays that cannot fit in the memory of an individual machine are made available by DSA. DSA obtained scalable performance compared with conventional parallel programs written in MPI.
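The entry lists the DSA operations (defining array regions, reading and writing regions, synchronization) without reproducing the API, so the following single-process Python sketch only illustrates the call pattern. The class and method names are invented for illustration; the real DSA distributes regions across machines via Globus services:

```python
# Hypothetical sketch of the DSA-style call pattern described above: declare
# a global array once, then read/write rectangular regions under explicit
# synchronization. All names are invented; this toy runs in one process.
import threading
import numpy as np

class GlobalArray:
    """Toy stand-in for a distributed shared array (single process only)."""

    def __init__(self, shape, dtype=np.float64):
        self._data = np.zeros(shape, dtype=dtype)
        self._lock = threading.Lock()  # stands in for network-wide sync

    def write_region(self, region, values):
        with self._lock:
            self._data[region] = values

    def read_region(self, region):
        with self._lock:
            return self._data[region].copy()

# Usage: a distance matrix would be declared once and then accessed
# uniformly, one tile at a time, from any worker.
dist = GlobalArray((1_000, 1_000))
tile = (slice(0, 100), slice(0, 100))
dist.write_region(tile, np.random.rand(100, 100))
block = dist.read_region(tile)
print(block.shape)
```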
103.
Nowadays, biomedicine is characterised by a growing need to process large amounts of data in real time. This creates new requirements for information and communication technologies (ICT). Cloud computing addresses these requirements and provides many advantages, such as cost savings and the elasticity and scalability of ICT use. The aim of this paper is to explore the concept of cloud computing and its use in biomedicine. The authors offer a comprehensive analysis of the implementation of the cloud computing approach in biomedical research, decomposed into the infrastructure, platform, and service layers, together with a recommendation for processing large amounts of data in biomedicine. Firstly, the paper describes the appropriate forms and technological solutions of cloud computing. Secondly, the high-end computing aspects of the cloud computing paradigm are analysed. Finally, the potential and current use of this technology in biomedical scientific research is discussed.
104.
dadi is a popular but computationally intensive program for inferring models of demographic history and natural selection from population genetic data. I show that running dadi on a Graphics Processing Unit can dramatically speed up computation compared with the CPU implementation, with minimal user burden. Motivated by this speed increase, I also extended dadi to four- and five-population models. This functionality is available in dadi version 2.1.0, https://bitbucket.org/gutenkunstlab/dadi/.
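A minimal sketch of the GPU usage this entry describes, assuming dadi ≥ 2.1.0 on a CUDA-capable device and the `cuda_enabled` switch introduced in that release; the model choice, parameter values, and grid sizes are illustrative only:

```python
# Minimal sketch: compute an expected site-frequency spectrum with dadi,
# with GPU computation switched on. Assumes dadi >= 2.1.0 and a working
# CUDA setup; parameters and grid sizes below are illustrative.
import dadi

# Enable GPU computation (the CPU path is used if no device is available).
dadi.cuda_enabled(True)

# Standard two-population split-with-migration model shipped with dadi.
func_ex = dadi.Numerics.make_extrap_log_func(dadi.Demographics2D.split_mig)

params = (1.0, 2.0, 0.5, 1.0)  # nu1, nu2, T, m (illustrative values)
ns = (20, 20)                  # haploid sample sizes per population
pts_l = [40, 50, 60]           # grid sizes used for extrapolation

model_sfs = func_ex(params, ns, pts_l)
print(model_sfs.shape)
```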
105.
Background: Next-Generation Sequencing (NGS) has emerged as a widely used tool in molecular biology. While the time and cost of the sequencing itself are decreasing, the analysis of the massive amounts of data remains challenging. Since multiple algorithmic approaches for the basic data analysis have been developed, there is now an increasing need to use these tools efficiently to obtain results in reasonable time.
Results: We have developed QuickNGS, a new workflow system for laboratories that need to analyze data from multiple NGS projects at a time. QuickNGS takes advantage of parallel computing resources, a comprehensive back-end database, and a careful selection of previously published algorithmic approaches to build fully automated data-analysis workflows. We demonstrate the efficiency of the new software with a comprehensive analysis of 10 RNA-Seq samples, finished in only a few minutes of hands-on time. The approach is suitable for processing much larger numbers of samples and multiple projects at a time.
Conclusion: Our approach considerably reduces the barriers that still limit the usability of the powerful NGS technology and decreases the time spent before proceeding to further downstream analysis and interpretation of the data.
Electronic supplementary material: The online version of this article (doi:10.1186/s12864-015-1695-x) contains supplementary material, which is available to authorized users.
106.
Storing big data is a challenge in the post-genome era, so there is a need for high-performance computing solutions for managing large genomic data. It is therefore of interest to describe a parallel computing approach that uses a message-passing library to distribute the different compression stages across a cluster. Genomic compression reduces the on-disk "footprint" of large volumes of sequence data, supporting the computational infrastructure for more efficient archiving. In this report, the approach is shown to be useful on 21 eukaryotic genomes using stratified sampling. The method achieves an average 6-fold disk-space reduction with three times better compression time than COMRAD.
Availability: The source code is written in C using message-passing libraries and is available at https://sourceforge.net/projects/comradmpi/files/COMRADMPI/
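The COMRADMPI source itself is in C, but the division of labour described here (spreading compression work over cluster nodes with message passing) can be sketched with mpi4py; the chunking scheme and the use of zlib below are stand-ins, not the actual COMRAD compression stages:

```python
# Sketch: distribute compression of a sequence file across MPI ranks.
# Run with e.g. `mpiexec -n 4 python compress_mpi.py`. The file name,
# chunking, and zlib codec are illustrative stand-ins for COMRAD's stages.
from mpi4py import MPI
import zlib

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    # Root splits the input into one chunk per rank.
    with open("genome.fa", "rb") as f:
        data = f.read()
    step = max(1, len(data) // size)
    chunks = [data[i * step: (i + 1) * step if i < size - 1 else len(data)]
              for i in range(size)]
else:
    chunks = None

# Scatter chunks, compress locally on each rank, gather blocks at root.
chunk = comm.scatter(chunks, root=0)
compressed = zlib.compress(chunk, 9)
blocks = comm.gather(compressed, root=0)

if rank == 0:
    with open("genome.fa.z", "wb") as out:
        for b in blocks:
            out.write(len(b).to_bytes(8, "little"))  # length-prefix each block
            out.write(b)
```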
107.
High-performance next-generation sequencing (NGS) technologies are advancing genomics and molecular biological research. However, the immense amount of sequence data demands computational skills and hardware resources that are a challenge for molecular biologists. The DNA Data Bank of Japan (DDBJ) of the National Institute of Genetics (NIG) has initiated a cloud computing-based analytical pipeline, the DDBJ Read Annotation Pipeline (DDBJ Pipeline), for high-throughput annotation of NGS reads. The DDBJ Pipeline offers a user-friendly graphical web interface and processes massive NGS datasets via decentralized processing on NIG supercomputers, currently free of charge. The pipeline consists of two analysis components: basic analysis for reference-genome mapping and de novo assembly, and subsequent high-level analysis for structural and functional annotation. Users may smoothly switch between the two components, facilitating web-based operation of a supercomputer for high-throughput data analysis. Moreover, public NGS reads from the DDBJ Sequence Read Archive, located on the same supercomputer, can be imported into the pipeline by entering only an accession number. The pipeline facilitates research through unified analytical workflows applied to NGS data. The DDBJ Pipeline is accessible at http://p.ddbj.nig.ac.jp/.
108.
In this paper, we propose a novel patient-specific method for modelling pulmonary airflow using graphics processing unit (GPU) computation that can be applied in medical practice. To overcome the barriers that computation speed, installation price, and footprint impose on the application of computational fluid dynamics, we focused on GPU computation and the lattice Boltzmann method (LBM). GPU computation and the LBM are well matched, because the LBM's local, regular memory-access pattern suits the GPU architecture. As the optimisation of data access is essential for GPU performance, we developed an adaptive meshing method in which an airway model is covered by isotropic subdomains consisting of a uniform Cartesian mesh. We found that subdomains of size 4³ gave the best performance. The code was also tested on a small GPU cluster to confirm its performance and applicability, as the price and footprint are reasonable for medical applications.
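For readers unfamiliar with the LBM kernel being offloaded here, a pure-NumPy sketch of one D2Q9 BGK time step on a uniform Cartesian mesh follows. It is a two-dimensional periodic toy, not the paper's three-dimensional airway solver, and all names and constants are illustrative:

```python
# One lattice Boltzmann (D2Q9, BGK) time step on a uniform Cartesian mesh,
# the kind of kernel a GPU implementation parallelises over subdomains.
# Periodic boundaries; geometry handling is omitted for brevity.
import numpy as np

# D2Q9 lattice: discrete velocities and their weights.
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
tau = 0.6                                       # relaxation time (sets viscosity)

nx, ny = 64, 64                                 # one uniform subdomain
f = np.ones((9, nx, ny)) * w[:, None, None]     # populations at rest equilibrium

def lbm_step(f):
    # Streaming: shift each population along its lattice velocity.
    for i, (cx, cy) in enumerate(c):
        f[i] = np.roll(np.roll(f[i], cx, axis=0), cy, axis=1)
    # Macroscopic moments: density and velocity.
    rho = f.sum(axis=0)
    ux = (f * c[:, 0, None, None]).sum(axis=0) / rho
    uy = (f * c[:, 1, None, None]).sum(axis=0) / rho
    # BGK collision: relax toward the local equilibrium distribution.
    cu = 3 * (c[:, 0, None, None] * ux + c[:, 1, None, None] * uy)
    usq = 1.5 * (ux**2 + uy**2)
    feq = w[:, None, None] * rho * (1 + cu + 0.5 * cu**2 - usq)
    return f - (f - feq) / tau

for _ in range(100):
    f = lbm_step(f)
print(f.sum())  # total mass is conserved
```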
109.
Several open-source and commercially available software platforms are widely used to develop dynamic simulations of movement. While computational approaches are conceptually similar across platforms, technical differences in implementation may influence output. We present a new upper-limb dynamic model as a tool to evaluate potential differences in predictive behavior between platforms. We evaluated the extent to which differences in technical implementation in popular simulation software environments produce differences in kinematic predictions for single- and multi-joint movements, using EMG-based and optimization-based approaches for deriving control signals. We illustrate the benchmarking comparison using the SIMM–Dynamics Pipeline–SD/Fast and OpenSim platforms. The most substantial divergence results from differences in the muscle models and actuator paths. The model is a valuable resource and is available for download by other researchers. The model, data, and simulation results presented here can be used by future researchers to benchmark other software platforms and software upgrades for these two platforms.
110.
Rosen classified sciences into two categories: formalizable and unformalizable. Whereas formalizable sciences expressed in terms of mathematical theories were highly valued by Rutherford, Hutchins pointed out that the unformalizable parts of the soft sciences are of genuine interest and importance. Attempts to build mathematical theories for biology in the past century were met with modest and sporadic successes, and only in simple systems. In this article, a qualitative model of humans' high creativity is presented as a starting point for considering whether the gap between soft and hard sciences is bridgeable. Simonton's chance-configuration theory, which mimics the process of evolution, was modified and improved. By treating problem solving as a process of pattern recognition, the known dichotomy of visual thinking vs. verbal thinking can be recast in terms of analog pattern recognition (a non-algorithmic process) and digital pattern recognition (an algorithmic process), respectively. Additional concepts commonly encountered in computer science, operations research, and artificial intelligence were also invoked: heuristic search, and parallel and sequential processing. The refurbished chance-configuration model can now explain several long-standing puzzles in human cognition: a) why novel discoveries often came without prior warning, b) why some creators had no idea of the source of their inspiration even after the fact, c) why some creators were consistently luckier than others, and, last but not least, d) why it has been so difficult to explain what intuition, inspiration, insight, hunch, serendipity, etc. are all about. The predictive power of the present model was tested by resolving Zeno's paradox of Achilles and the Tortoise after deliberately invoking visual thinking. Additional evidence of its predictive power must await future large-scale field studies. The analysis was further generalized to the construction of scientific theories in general. This approach is in line with Campbell's evolutionary epistemology. Instead of treating science as immutable Natural Laws that already existed and were merely waiting to be discovered, scientific theories are regarded as human mental constructs that must be invented to reconcile with observed natural phenomena. In this way, the pursuit of science shifts from diligent and systematic (or random) searching for existing Natural Laws to firing up human imagination to comprehend Nature's behavioral patterns. The insights gained in understanding human creativity indicate that new mathematics capable of handling parallel processing and human subjectivity effectively is sorely needed. The earlier classification into formalizable and unformalizable was made in reference to contemporary mathematics; Rosen's conclusion did not preclude the future invention of new biology-friendly mathematics.