Similar Articles
20 similar articles found (search time: 62 ms)
1.
A number of important data analysis problems in neuroscience can be solved using state-space models. In this article, we describe fast methods for computing the exact maximum a posteriori (MAP) path of the hidden state variable in these models, given spike train observations. If the state transition density is log-concave and the observation model satisfies certain standard assumptions, then the optimization problem is strictly concave and can be solved rapidly with Newton–Raphson methods, because the Hessian of the log-likelihood is block tridiagonal. We can further exploit this block-tridiagonal structure to develop efficient parameter estimation methods for these models. We describe applications of this approach to neural decoding problems, with a focus on the classic integrate-and-fire model as a key example.
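The structural point of this abstract, that a (block-)tridiagonal Hessian makes each Newton–Raphson step linear in the number of time bins, can be illustrated in the scalar-state case, where the Newton direction comes from a tridiagonal solve (Thomas algorithm). A minimal sketch; the function name and the scalar-state simplification are illustrative, not from the paper:

```python
import numpy as np

def thomas_solve(lower, diag, upper, rhs):
    """Solve a tridiagonal system in O(T) time (Thomas algorithm).

    In the scalar-state MAP setting, the Hessian of the log-posterior is
    tridiagonal, so the Newton direction H^{-1} g costs O(T) per
    iteration instead of the O(T^3) of a dense solve.

    lower: sub-diagonal (length T-1), diag: main diagonal (length T),
    upper: super-diagonal (length T-1), rhs: gradient (length T).
    """
    n = len(diag)
    c = np.zeros(n)  # modified super-diagonal
    d = np.zeros(n)  # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = rhs[0] / diag[0]
    for i in range(1, n):                       # forward elimination
        denom = diag[i] - lower[i - 1] * c[i - 1]
        if i < n - 1:
            c[i] = upper[i] / denom
        d[i] = (rhs[i] - lower[i - 1] * d[i - 1]) / denom
    x = np.zeros(n)
    x[-1] = d[-1]
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = d[i] - c[i] * x[i + 1]
    return x
```

A full Newton iteration would re-evaluate the gradient and tridiagonal Hessian of the log-posterior at the current path and call this solver until convergence.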

2.
In 2003, Gusfield introduced the haplotype inference by pure parsimony (HIPP) problem and presented an integer program (IP) that quickly solved many simulated instances of the problem. Although it performed well on small instances, Gusfield's IP can be of exponential size in the worst case. Several authors have presented polynomial-sized IPs for the problem. In this paper, we further the work on IP approaches to HIPP. We extend the existing polynomial-sized IPs by introducing several classes of valid cuts for the IP. We also present a new polynomial-sized IP formulation that is a hybrid between two existing IP formulations and inherits many of the strengths of both. Many problems that are too complex for the exponential-sized formulations can still be solved in our new formulation in a reasonable amount of time. We provide a detailed empirical comparison of these IP formulations on both simulated and real genotype sequences. Our formulation can also be extended in a variety of ways to allow errors in the input or to model the structure of the population under consideration.

3.
Constraint-based approaches recently brought new insight into our understanding of metabolism. By making very simple assumptions, such as that the system is at steady state and that some reactions are irreversible, and without requiring kinetic parameters, general properties of the system can be derived. A central concept in this methodology is the notion of an elementary mode (EM for short), which represents a minimal functional subsystem. The computation of EMs still forms a limiting step in metabolic studies, and several algorithms have been proposed to address this problem, leading to increasingly faster methods. However, although a theoretical upper bound on the number of elementary modes that a network may possess has been established, surprisingly, the complexity of this problem has never been systematically studied. In this paper, we give a systematic overview of the complexity of optimisation problems related to modes. We first establish results regarding network consistency. Most consistency problems are easy, i.e., they can be solved in polynomial time. We then establish the complexity of finding and counting elementary modes. We show in particular that finding one elementary mode is easy but that this task becomes hard when a specific EM (i.e. an EM containing some specified reactions) is sought. We then show that counting the number of elementary modes is #P-complete. We emphasize that the easy problems can be solved using currently existing software packages. We then analyse the complexity of a closely related task, the computation of so-called minimum reaction cut sets, and we show that this problem is hard. We then present two positive results which both make it possible to avoid computing EMs prior to computing reaction cuts. The first is a polynomial approximation algorithm for finding a minimum reaction cut set. The second is a test for verifying whether a set of reactions constitutes a reaction cut; this test can be readily included in existing algorithms to improve their performance. Finally, we discuss the complexity of other cut-related problems.

4.
Hidden Markov models (HMMs) have been successfully applied to a variety of problems in molecular biology, ranging from alignment problems to gene finding and annotation. Alignment problems can be solved with pair HMMs, while gene finding programs rely on generalized HMMs in order to model exon lengths. In this paper, we introduce the generalized pair HMM (GPHMM), which is an extension of both pair and generalized HMMs. We show how GPHMMs, in conjunction with approximate alignments, can be used for cross-species gene finding and describe applications to DNA-cDNA and DNA-protein alignment. GPHMMs provide a unifying and probabilistically sound theory for modeling these problems.

5.
One line of DNA computing research focuses on parallel search algorithms, which can be used to solve many optimization problems. DNA in solution can provide an enormous molecular library, which can be searched by molecular biological techniques. We have implemented such a parallel search for solutions to knapsack problems, which ask for the best way to pack a knapsack of limited volume. Several instances of knapsack problems were solved using DNA. We demonstrate how the computations can be extended by in vivo translation of the DNA library into protein. This combination of DNA and protein allows for multi-criterion optimization. The knapsack computations performed can then be seen as protein optimizations, one of the most complex computations performed by natural systems.
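The "parallel search" described here can be mimicked in silico: the DNA library encodes every candidate packing at once, and selection keeps the best feasible one. A toy single-criterion sketch (the item volumes and the exhaustive enumeration are illustrative; the actual computation in the paper is carried out with molecular biology, not code):

```python
from itertools import combinations

def knapsack_parallel_search(volumes, capacity):
    """Exhaustive-search analogue of the DNA library: every subset of
    items exists 'in parallel'; selection keeps the feasible subset
    with the largest packed volume (a hypothetical single-criterion
    variant of the knapsack instances solved in the paper)."""
    items = range(len(volumes))
    best, best_vol = (), 0
    for r in range(len(volumes) + 1):
        for subset in combinations(items, r):
            vol = sum(volumes[i] for i in subset)
            if capacity >= vol > best_vol:
                best, best_vol = subset, vol
    return best, best_vol
```

In the wet-lab setting, every subset is represented by a distinct DNA strand, and gel electrophoresis selects strands whose length (packed volume) is closest to the capacity.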

6.
Objective: Through summary and analysis, to identify personality tests suitable for establishing psychological archives for new cadets entering a military academy. Methods: The Cattell Sixteen Personality Factor Questionnaire (16PF), the Minnesota Multiphasic Personality Inventory (MMPI), and the Symptom Checklist-90 (SCL-90) were administered to 98 first-year cadets entering a military academy this year; after excluding invalid responses, 94 valid cases were retained, and the results were used to screen personnel and run correlation analyses. Results: (1) The 16PF flagged 3 individuals with problems, the MMPI flagged 18, and the SCL-90 flagged 8. (2) The MMPI and SCL-90 were relatively highly correlated (r < 0.42, P < 0.05), so the MMPI can fully substitute for the SCL-90. (3) Some MMPI clinical scales were significantly correlated with several 16PF factors, so the relevant factors could be incorporated into the 16PF evaluation criteria. Conclusion: When establishing psychological archives, using the 16PF as the initial screening test and the MMPI as the re-examination test allows relatively complete psychological archives to be established.

7.
A genetic algorithm for the cladistic classification problem
The cladistic classification problem can be reduced to a clustering problem. Conventional cladistic classification methods generally guarantee only a locally optimal solution. This paper first presents a clustering method, the synchronous insertion method, then recasts it as an optimization problem over a discrete space and applies a genetic algorithm in the hope of obtaining a globally optimal solution. Experimental results show that the method is correct and feasible.
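The reduction described in this abstract, clustering as a discrete optimization problem attacked with a genetic algorithm, can be sketched generically. The objective below (minimize the sum of within-group pairwise distances) is a stand-in, since the abstract does not specify the synchronous-insertion objective; all parameter values are illustrative:

```python
import random

def ga_cluster(dist, k, pop=30, gens=60, seed=0):
    """Toy genetic algorithm partitioning n taxa into k groups so as to
    minimise the sum of within-group pairwise distances. A generic
    sketch of the GA-for-clustering idea, not the paper's method."""
    rng = random.Random(seed)
    n = len(dist)

    def cost(assign):
        return sum(dist[i][j] for i in range(n) for j in range(i + 1, n)
                   if assign[i] == assign[j])

    popu = [[rng.randrange(k) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=cost)                      # fitness = low cost
        elite = popu[: pop // 2]                 # truncation selection
        children = []
        while len(elite) + len(children) < pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n)            # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:               # point mutation
                child[rng.randrange(n)] = rng.randrange(k)
            children.append(child)
        popu = elite + children
    return min(popu, key=cost)
```

Because the search space is discrete (label vectors), crossover and mutation operate directly on group assignments, which is the transformation to a discrete optimization problem that the abstract alludes to.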

8.
The polymerase chain reaction (PCR) has been used to amplify DNA fragments by using eukaryotic genomic DNA as a template. We show that bacterial genomic DNA can be used as a template for PCR amplification. We demonstrate that DNA fragments at least as large as 4,400 base pairs can be amplified with fidelity and that the amplified DNA can be used as a substrate for most operations involving DNA. We discuss problems inherent in the direct sequencing of the amplified product, one of the important exploitations of this methodology. We have solved these problems by developing an "asymmetric amplification" method in which one of the oligonucleotide primers is used in limiting amounts, thus allowing the accumulation of single-stranded copies of only one of the DNA strands. As an illustration of the use of PCR in bacteria, we have amplified, sequenced, and subcloned several DNA fragments carrying mutations in genes of the histidine permease operon. These mutations are part of a preliminary approach to studying protein-protein interactions in transport, and their nature is discussed.

9.
Reprogramming somatic cells using exogenous gene expression represents a groundbreaking step in regenerative medicine. Induced pluripotent stem cells (iPSCs) are expected to yield novel therapies with the potential to solve many issues involving incurable diseases. In particular, applying iPSCs clinically holds the promise of addressing the problems of immune rejection and ethics that have hampered the clinical applications of embryonic stem cells. However, as iPSC research has progressed, new problems have emerged that need to be solved before the routine clinical application of iPSCs can become established. In this review, we discuss the current technologies and future problems of human iPSC generation methods for clinical use.

10.
Recently, the semiautomated tetrazolium-based MTT colorimetric assay has been used to measure chemosensitivity. We have also used this assay on 4 ovarian clear cell carcinoma cell lines to investigate the chemosensitivity of this tumor. In this study, several problems were encountered that needed to be solved. In this paper, we point out these problems and indicate solutions.

11.
The dead-end elimination (DEE) theorems are powerful tools for the combinatorial optimization of protein side-chain placement in protein design and homology modeling. In order to reach their full potential, the theorems must be extended to handle very hard problems. We present a suite of new algorithms within the DEE paradigm that significantly extend its range of convergence and reduce run time. As a demonstration, we show that a total protein design problem of 10^115 combinations, a hydrophobic core design problem of 10^244 combinations, and a side-chain placement problem of 10^1044 combinations are solved in less than two weeks, a day and a half, and an hour of CPU time, respectively. This extends the range of the method by approximately 53, 144 and 851 log-units, respectively, using modest computational resources. Small to average-sized protein domains can now be designed automatically, and side-chain placement calculations can be solved for nearly all sizes of proteins and protein complexes in the growing field of structural genomics.
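The core idea of dead-end elimination can be sketched with the original Desmet criterion: a rotamer is discarded when even its best-case energy exceeds a competitor's worst-case energy, which is why the astronomical combination counts above never have to be enumerated. This is a one-pass sketch of the basic criterion, not the extended algorithms of the paper, and the data layout is hypothetical:

```python
def dee_pass(self_e, pair_e):
    """One pass of the classic Desmet dead-end elimination criterion.

    self_e[i][r]        : self-energy of rotamer r at position i.
    pair_e[(i, j)][r][s]: pairwise energy between rotamers r at i and
                          s at j, stored for i < j.
    Rotamer r at position i is dead-ending if, for some competitor t,
    its best achievable energy still exceeds t's worst:
        E(i_r) + sum_j min_s E(i_r, j_s) > E(i_t) + sum_j max_s E(i_t, j_s).
    Returns the sets of surviving rotamer indices per position.
    """
    n = len(self_e)

    def pair(i, j, r, s):
        return pair_e[(i, j)][r][s] if i < j else pair_e[(j, i)][s][r]

    alive = [set(range(len(self_e[i]))) for i in range(n)]
    for i in range(n):
        doomed = set()
        for r in alive[i]:
            for t in alive[i]:
                if t == r:
                    continue
                lo_r = self_e[i][r] + sum(       # best case for i_r
                    min(pair(i, j, r, s) for s in alive[j])
                    for j in range(n) if j != i)
                hi_t = self_e[i][t] + sum(       # worst case for i_t
                    max(pair(i, j, t, s) for s in alive[j])
                    for j in range(n) if j != i)
                if lo_r > hi_t:
                    doomed.add(r)
                    break
        alive[i] -= doomed
    return alive
```

In practice the pass is iterated to a fixed point, since each elimination can enable further eliminations at other positions.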

12.
We describe a set of computational problems motivated by certain analysis tasks in genome resequencing. These are assembly problems for which multiple distinct sequences must be assembled, but where the relative positions of reads to be assembled are already known. This information is obtained from a common reference genome and is characteristic of resequencing experiments. The simplest variant of the problem aims at determining a minimum set of superstrings such that each sequenced read matches at least one superstring. We give an algorithm with time complexity O(N), where N is the sum of the lengths of reads, substantially improving on previous algorithms for solving the same problem. We also examine the problem of finding the smallest number of reads to remove such that the remaining reads are consistent with k superstrings. By exploiting a surprising relationship with the minimum cost flow problem, we show that this problem can be solved in polynomial time when nested reads are excluded. If nested reads are permitted, this problem of removing the minimum number of reads becomes NP-hard. We show that permitting mismatches between reads and their nearest superstrings generally renders these problems NP-hard.
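The simplest problem variant above, covering position-annotated reads with a set of superstrings, can be illustrated with a greedy merge of consistent, overlapping reads. This heuristic sketch is for intuition only; it is not the O(N) algorithm of the paper and is not guaranteed to be minimal:

```python
def superstring_cover(reads):
    """Greedy sketch: reads are (start, seq) pairs whose positions are
    known from the reference. Each read, in order of position, is
    appended to the first growing superstring it is consistent with
    (agrees on the overlap and leaves no gap); otherwise it opens a
    new superstring. A first-fit heuristic for illustration only."""
    supers = []  # list of [start, chars]
    for start, seq in sorted(reads):
        for sup in supers:
            s0, chars = sup
            end = s0 + len(chars)
            if start > end:
                continue                 # merging would leave a gap
            off = start - s0
            overlap = chars[off:]
            if all(a == b for a, b in zip(overlap, seq)):
                # consistent: extend the superstring if the read
                # reaches past its current end
                sup[1] = (chars[:off] + seq) if len(seq) > len(overlap) else chars
                break
        else:
            supers.append([start, seq])
    return [(s, c) for s, c in supers]
```

Reads that disagree where they overlap (e.g. the two alleles of a heterozygous site) are forced into distinct superstrings, which is exactly why more than one superstring can be needed.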

13.
RNA pseudoknot prediction in energy-based models.
RNA molecules are sequences of nucleotides that serve as more than mere intermediaries between DNA and proteins, e.g., as catalytic molecules. Computational prediction of RNA secondary structure is among the few structure prediction problems that can be solved satisfactorily in polynomial time. Most work has been done to predict structures that do not contain pseudoknots. Allowing pseudoknots introduces modeling and computational problems. In this paper we consider the problem of predicting RNA secondary structures with pseudoknots based on free energy minimization. We first give a brief comparison of energy-based methods for predicting RNA secondary structures with pseudoknots. We then prove that the general problem of predicting RNA secondary structures containing pseudoknots is NP-complete for a large class of reasonable models of pseudoknots.
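The contrast this abstract draws, polynomial prediction without pseudoknots versus NP-completeness with them, rests on the fact that nested structures admit dynamic programming. A standard Nussinov-style base-pair-maximization sketch (a simpler objective than the free-energy models the paper analyzes):

```python
def nussinov(seq, min_loop=3):
    """Maximum number of base pairs in a pseudoknot-free (nested)
    secondary structure, via the classic O(n^3) Nussinov dynamic
    program. Pseudoknots break the nesting that makes this
    decomposition valid, which is why allowing them is hard."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = max(dp[i + 1][j], dp[i][j - 1])      # i or j unpaired
            if (seq[i], seq[j]) in pairs:
                best = max(best, dp[i + 1][j - 1] + 1)  # i pairs with j
            for k in range(i + 1, j):                   # bifurcation
                best = max(best, dp[i][k] + dp[k + 1][j])
            dp[i][j] = best
    return dp[0][n - 1] if n else 0
```

Every case of the recurrence splits the interval [i, j] into independent subintervals; a pseudoknot pairs bases across such a split, so no interval decomposition of this form covers it.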

14.
Cell transplantation therapy has certain limitations, including immune rejection and limited cell viability, which seriously hinder the translation of stem cell-based tissue regeneration into clinical practice. Extracellular vesicles (EVs) not only possess the advantages of the cells they derive from, but can also avoid the risks of cell transplantation. EVs are intelligent and controllable biomaterials that can participate in a variety of physiological and pathological activities, as well as tissue repair and regeneration, by transmitting a variety of biological signals, showing great potential in cell-free tissue regeneration. In this review, we summarize the origins and characteristics of EVs, introduce the pivotal role of EVs in the regeneration of diverse tissues, and discuss the underlying mechanisms, prospects, and challenges of EVs. We also point out the problems that remain to be solved, the application directions, and the prospects of EVs, and shed new light on cell-free strategies for using EVs in the field of regenerative medicine.

15.
We analyze a disturbed form of the general Lotka-Volterra model of an ecosystem with m interacting species. The disturbances act on the intrinsic growth rates of the species and are assumed to be bounded but otherwise unknown. We employ a Lyapunov technique and the concept of "reachable set" from control theory to estimate the set of all possible population densities that are attainable as a result of the disturbances. To calculate estimates for this reachable set, a number of numerical methods that entail the solution to one or more global optimization problems are developed. Specific examples involving two, three, and four species are solved. We also derive an explicit analytical expression that represents an estimate for the reachable set in the m-dimensional case. The estimate is conservative but can be evaluated without carrying out any optimization procedure. We show that methods developed in this paper can be applied to certain other types of nonlinear ecosystem models.
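A crude computational counterpart of the reachable-set question: simulate the disturbed Lotka-Volterra system under many random bounded disturbance sequences and collect the final states. The resulting Monte Carlo cloud is only a sampled inner picture of the reachable set, unlike the paper's guaranteed Lyapunov-based outer estimates; all parameter values below are illustrative:

```python
import random

def simulate(x0, r, A, w_seq, dt):
    """Euler-integrate the disturbed Lotka-Volterra equations
    dx_i/dt = x_i * (r_i + w_i(t) + sum_j A[i][j] * x_j),
    where w_i(t) is the (bounded) disturbance on the growth rate."""
    x = list(x0)
    for w in w_seq:
        x = [xi + dt * xi * (ri + wi + sum(aij * xj for aij, xj in zip(Ai, x)))
             for xi, ri, wi, Ai in zip(x, r, w, A)]
    return x

def sample_reachable(x0, r, A, w_bound, steps, dt, n_samples, seed=0):
    """Monte Carlo under-estimate of the reachable set: each trajectory
    uses an independent disturbance sequence with |w_i| <= w_bound, and
    the final states are collected. Every sampled point lies inside the
    true reachable set, but the sample does not bound it."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_samples):
        w_seq = [[rng.uniform(-w_bound, w_bound) for _ in x0]
                 for _ in range(steps)]
        finals.append(simulate(x0, r, A, w_seq, dt))
    return finals
```

Comparing such a sampled cloud against an analytical outer estimate is a quick sanity check that the estimate is indeed conservative.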

16.
Parameterized complexity analysis in computational biology
Many computational problems in biology involve parameters for which a small range of values cover important applications. We argue that for many problems in this setting, parameterized computational complexity, rather than NP-completeness, is the appropriate tool for studying apparent intractability. At issue in the theory of parameterized complexity is whether a problem can be solved in time O(n^α) for each fixed parameter value, where α is a constant independent of the parameter. In addition to surveying this complexity framework, we describe a new result for the Longest Common Subsequence problem. In particular, we show that the problem is hard for W[t] for all t when parameterized by the number of strings and the size of the alphabet. Lower bounds on the complexity of this basic combinatorial problem imply lower bounds on more general sequence alignment and consensus discovery problems. We also describe a number of open problems pertaining to the parameterized complexity of problems in computational biology where small parameter values are important.

17.
Philosophical reflections on the problem of environmental capacity for human development
A Philosophical Thought on Environmental Capacity for Human Development. Chen Yi'an (Beijing Communications Management Cadre Institute, 101601) ...

18.
A method for scanning electron microscope observation of paraffin sections of plant tissue
Scanning electron microscope (SEM) observation of paraffin sections has unique advantages: it combines the strengths of light microscopy and SEM in a single workflow. On the basis of light-microscope screening of large numbers of paraffin sections, sections showing promising leads are selected and transferred to the SEM for high-resolution study. This allows both a survey of the whole section and the acquisition of three-dimensional images of submicroscopic structures within it, which greatly aids accurate identification of structures and also facilitates observation of serial sections. This paper briefly introduces this experimental technique.

19.
Biofilms are a specialized mode of bacterial survival. The high drug resistance, recurrence, and persistence of biofilm infections are urgent clinical problems. Probiotics, as part of the body's commensal microbiota, can combat pathogens in multiple ways. This article summarizes some of the urgent problems in clinical biofilm treatment and reviews mechanisms by which probiotic biofilms antagonize pathogenic biofilms. It also reviews strategies for developing biofilm-state probiotics from the perspectives of improving research methods for probiotic biofilms, enhancing the stability of probiotic biofilms, and developing novel biofilm-state probiotics.

20.
In Japan, there are some problems with fine needle aspiration (FNA) cytology of the breast, such as insufficient smeared cells, air-drying artefact and excessive erythrocytes. Liquid-based cytology has been found to solve these problems. Equipment for such preparations has been developed, but can be expensive to purchase and operate. We developed Auto Cyto Fix 1000 (ACF), which is inexpensive and automatically smears and fixes cells. The purpose of this study was to compare the various cytological features of conventional and ACF specimens. We evaluated whether the ACF method would be able to replace the conventional method. Forty-eight FNA specimens of breast were studied. All specimens were prepared by the direct smeared (DS) and ACF methods and evaluated for unsatisfactory cell collection, air-drying artefacts, background findings and epithelial cell findings. Although ACF specimens were prepared using the cells remaining in the needle and syringe after preparing DS specimens, the cellularity of two of the ACF specimens was better than that of the corresponding DS specimens. ACF specimens never showed air-drying artefact. Unlike DS specimens, which have many erythrocytes in the background, erythrocytes were filtered out and the background of ACF specimens was clean. We believe that many problems attributable to conventional FNA specimen preparation have been solved in this study. Preparation using the ACF apparatus can reduce running costs and can be used to prepare FNA specimens of the breast for cytological examination as an alternative to the conventional method.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号