Similar Documents
20 similar documents found.
1.
Two new PRP conjugate gradient algorithms are proposed in this paper, based on two modified PRP conjugate gradient methods: the first is for solving unconstrained optimization problems, and the second for solving nonlinear equations. The first method exploits two kinds of information: function values and gradient values. Both methods possess several good properties: (1) β_k ≥ 0; (2) the search direction has the trust-region property without the use of any line search; (3) the search direction has the sufficient descent property, again without any line search. Under suitable conditions, we establish the global convergence of the two algorithms. We conduct numerical experiments to evaluate them. The results indicate that the first algorithm is effective and competitive for solving unconstrained optimization problems and that the second is effective for solving large-scale nonlinear equations.
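The abstract does not spell out the modified β_k formulas, so as a minimal sketch of property (1) here is the classical PRP+ update in Python; all names are hypothetical, and the paper's modifications, which also build in the trust-region and sufficient-descent properties, differ in detail.

```python
import numpy as np

def prp_plus_direction(g_new, g_old, d_old):
    """One PRP+ conjugate gradient direction update.

    The max(0, .) clamp is what guarantees beta_k >= 0; the paper's
    modified beta_k additionally forces sufficient descent and the
    trust-region property without any line search.
    """
    beta = max(0.0, g_new @ (g_new - g_old) / (g_old @ g_old))
    return -g_new + beta * d_old
```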

2.
The properties (or labels) of nodes in networks can often be predicted based on their proximity and their connections to other labeled nodes. So-called “label propagation algorithms” predict the labels of unlabeled nodes by propagating information about local label density iteratively through the network. These algorithms are fast, simple and scale to large networks but nonetheless regularly perform better than slower and much more complex algorithms on benchmark problems. We show here, however, that these algorithms have an intrinsic limitation that prevents them from adapting to some common patterns of network node labeling; we introduce a new algorithm, 3Prop, that retains all their advantages but is much more adaptive. As we show, 3Prop performs very well on node labeling problems ill-suited to label propagation, including predicting gene function in protein and genetic interaction networks and gender in friendship networks, and also performs slightly better on problems already well-suited to label propagation such as labeling blogs and patents based on their citation networks. 3Prop gains its adaptability by assigning separate weights to label information from different steps of the propagation. Surprisingly, we found that for many networks, the third iteration of label propagation receives a negative weight.

Availability

The code is available from the authors upon request.
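As a rough sketch of 3Prop's core idea (not the authors' code, which is available on request), one can keep each of the first three propagation iterates as a separate feature and learn one weight per step; the matrix and label conventions below are assumptions.

```python
import numpy as np

def propagation_features(W, y, n_steps=3):
    """Collect the first n_steps label-propagation iterates as features.

    W: row-normalized (n x n) adjacency matrix; y: labels in {-1, 0, +1},
    with 0 marking unlabeled nodes.
    """
    f, feats = y.astype(float), []
    for _ in range(n_steps):
        f = W @ f
        feats.append(f.copy())
    return np.column_stack(feats)

# 3Prop-style weighting: fit one coefficient per propagation step on the
# labeled nodes, e.g. by least squares; a negative third coefficient then
# corresponds to the surprising negative weight reported above.
# weights, *_ = np.linalg.lstsq(F[labeled], y[labeled], rcond=None)
```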

3.
X-ray computed tomography (CT) iterative image reconstruction from sparse-view projection data has been an important research topic for radiation dose reduction in the clinic. In this paper, to relieve the misalignment-reduction requirement of the prior image constrained compressed sensing (PICCS) approach introduced by Chen et al., we present an iterative image reconstruction approach for sparse-view CT using a normal-dose image induced total variation (ndiTV) prior. The associated objective function is constructed under the penalized weighted least-squares (PWLS) criterion and contains two terms, the weighted least-squares (WLS) fidelity and the ndiTV prior; the approach is therefore referred to as "PWLS-ndiTV". Specifically, the WLS fidelity term is built on an accurate relationship between the variance and mean of projection data in the presence of electronic background noise. The ndiTV prior term is designed to reduce the influence of misalignment between the desired and prior images by using a normal-dose image induced non-local means (ndiNLM) filter. A modified steepest descent algorithm is then adopted to minimize the objective function. Experimental results on two digital phantoms and an anthropomorphic torso phantom show that the present PWLS-ndiTV approach for sparse-view CT image reconstruction achieves noticeable gains over similar existing approaches in terms of noise reduction, resolution-noise tradeoff, and low-contrast object detection.
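A minimal sketch of one steepest-descent step on a PWLS objective under stated assumptions: the variance-mean noise model is packed into `sigma2` and the ndiTV prior gradient into a caller-supplied `grad_prior`, both hypothetical stand-ins for the paper's constructions.

```python
import numpy as np

def pwls_step(x, A, y, sigma2, beta, grad_prior, step):
    """One steepest-descent step on
        F(x) = (y - Ax)' diag(1/sigma2) (y - Ax) / 2 + beta * R(x),
    where R is the (here unspecified) ndiTV prior."""
    residual = A @ x - y
    grad = A.T @ (residual / sigma2) + beta * grad_prior(x)
    return x - step * grad
```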

4.
Total generalized variation (TGV)-based computed tomography (CT) image reconstruction, which utilizes high-order image derivatives, is superior to total variation-based methods in preserving edge information and suppressing unfavorable staircase effects. However, conventional TGV regularization employs an l1-based form, which is not the most direct way to promote sparsity. In this study, we propose a total generalized p-variation (TGpV) regularization model to improve the sparsity exploitation of TGV and offer efficient solutions to few-view CT image reconstruction problems. To solve the nonconvex TGpV minimization problem, we present an efficient iterative algorithm based on alternating minimization of an augmented Lagrangian function. All of the subproblems decoupled by variable splitting admit explicit solutions via alternating minimization and a generalized p-shrinkage mapping. In addition, approximate solutions that are easy to implement and quick to compute via the fast Fourier transform are derived using the proximal point method to reduce the cost of the inner subproblems. Results on simulated and real data are evaluated qualitatively and quantitatively to validate the accuracy, efficiency, and feasibility of the proposed method. Overall, the proposed method exhibits reasonable performance and outperforms the original TGV-based method when applied to few-view problems.
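The generalized p-shrinkage mapping mentioned above has a well-known closed form (Chartrand-style p-shrinkage); a sketch follows, with a small epsilon added only to avoid zero-to-a-negative-power warnings.

```python
import numpy as np

def p_shrink(t, lam, p, eps=1e-12):
    """Generalized p-shrinkage: sign(t) * max(|t| - lam^(2-p) |t|^(p-1), 0).

    Reduces to ordinary soft-thresholding when p = 1; for p < 1 it
    penalizes large coefficients less, promoting stronger sparsity.
    """
    t = np.asarray(t, dtype=float)
    mag = np.maximum(np.abs(t) - lam ** (2 - p) * (np.abs(t) + eps) ** (p - 1), 0)
    return np.sign(t) * mag
```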

5.
Compressed sensing has been shown to be promising for accelerating magnetic resonance imaging. In this technology, magnetic resonance images are usually reconstructed by enforcing their sparsity under sparse image reconstruction models, including both synthesis and analysis models. The synthesis model assumes that an image is a sparse combination of atom signals, while the analysis model assumes that an image is sparse after application of an analysis operator. The balanced model is a newer sparse model that bridges the analysis and synthesis models by introducing a penalty term on the distance of frame coefficients to the range of the analysis operator. In this paper, we study the performance of the balanced model in tight-frame-based compressed sensing magnetic resonance imaging and propose a new efficient numerical algorithm to solve the optimization problem. By tuning the balancing parameter, the new model can achieve the solutions of all three models. We find that the balanced model performs comparably to the analysis model, and both achieve better results than the synthesis model regardless of the value of the balancing parameter. Experiments show that our proposed numerical algorithm, the constrained split augmented Lagrangian shrinkage algorithm for the balanced model (C-SALSA-B), converges faster than the previously proposed accelerated proximal gradient (APG) algorithm and the alternating direction method of multipliers for the balanced model (ADMM-B).
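In symbols, with tight-frame analysis operator W and measurement operator A, the balanced model is commonly written as minimizing over frame coefficients α: (1/2)||A W*α − y||² + λ||α||₁ + (γ/2)||(I − P)α||², where P projects onto range(W); γ = 0 recovers the synthesis model and γ → ∞ the analysis model. A sketch of this objective with hypothetical operator callables:

```python
import numpy as np

def balanced_objective(alpha, forward, synth, proj_range, y, lam, gamma):
    """Balanced-model objective value.

    forward: measurement operator A; synth: frame synthesis W*;
    proj_range: projector P onto range(W). All callables are assumed
    stand-ins for the MRI sampling and tight-frame operators.
    """
    fidelity = 0.5 * np.linalg.norm(forward(synth(alpha)) - y) ** 2
    sparsity = lam * np.abs(alpha).sum()
    distance = 0.5 * gamma * np.linalg.norm(alpha - proj_range(alpha)) ** 2
    return fidelity + sparsity + distance
```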

6.
The reconstruction and synthesis of ancestral RNAs is a feasible goal for paleogenetics. It will require new bioinformatics methods, including a robust statistical framework for reconstructing histories of substitutions, indels, and structural changes. We describe a "transducer composition" algorithm for extending pairwise probabilistic models of RNA structural evolution to models of multiple sequences related by a phylogenetic tree. The algorithm draws on formal models from computational linguistics as well as David Sankoff's 1985 protosequence algorithm. Its output is a multiple-sequence stochastic context-free grammar (SCFG). We describe dynamic programming algorithms, robust to null cycles and empty bifurcations, for parsing this grammar. Example applications include structural alignment of non-coding RNAs, propagation of structural information from an experimentally characterized sequence to its homologs, and inference of the ancestral structure of a set of diverged RNAs. We implemented the above algorithms for a simple model of pairwise RNA structural evolution, in particular the algorithms for maximum likelihood (ML) alignment of three known RNA structures given a known phylogeny and for inference of the common ancestral structure. We compared this ML algorithm to a variety of related but simpler techniques, including ML alignment algorithms for models that omit various aspects of the full model, and a posterior-decoding alignment algorithm for one of the simpler models. In our tests, incorporation of basepair structure was the most important factor for accurate alignment inference; appropriate use of posterior decoding was next; and fine details of the model mattered least. Posterior-decoding heuristics can be substantially faster than exact phylogenetic inference, which motivates the use of (possibly approximate) sum-over-pairs heuristics where possible. For more exact probabilistic inference, we discuss the use of transducer composition for ML (or MCMC) inference on phylogenies, including possible ways to make the core operations tractable.
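As a heavily simplified illustration of the inside-style dynamic programming used to parse such grammars (ignoring the basepair emissions, null cycles, and bifurcation handling discussed above), here is the textbook inside algorithm for an SCFG in Chomsky normal form; the data structures are assumptions, not the paper's implementation.

```python
def inside_probability(seq, binary_rules, lexical_rules, start):
    """Inside algorithm for an SCFG in Chomsky normal form.

    binary_rules: {A: [(B, C, prob), ...]}; lexical_rules: {A: {sym: prob}}.
    Returns P(seq | grammar rooted at `start`).
    """
    n = len(seq)
    nonterms = set(binary_rules) | set(lexical_rules)
    P = {}
    for i in range(n):                       # spans of length 1
        for A in nonterms:
            P[i, i, A] = lexical_rules.get(A, {}).get(seq[i], 0.0)
    for span in range(2, n + 1):             # longer spans, shortest first
        for i in range(n - span + 1):
            j = i + span - 1
            for A in nonterms:
                P[i, j, A] = sum(
                    p * P[i, k, B] * P[k + 1, j, C]
                    for B, C, p in binary_rules.get(A, [])
                    for k in range(i, j))
    return P[0, n - 1, start]
```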

7.
Hemispherical photography is a well-established method for optically assessing ecological parameters related to plant canopies, e.g. ground-level light regimes and the distribution of foliage within the crown space. Interpreting hemispherical photographs involves classifying pixels as either sky or vegetation, and a wide range of automatic thresholding (binarization) algorithms exists for this task. This variety in methodology hampers the ability to compare results across studies. To identify an optimal threshold selection method, this study assessed the accuracy of seven binarization methods implemented in software currently available for processing hemispherical photographs. Binarizations obtained by the algorithms were compared with reference data generated through manual binarization of a stratified random selection of pixels, an approach adopted from the accuracy assessment of map classifications in remote sensing studies. Percentage correct (PC) and kappa statistics (κ) were calculated. The accuracy of the algorithms was assessed for photographs taken with automatic exposure settings (auto-exposure) and for photographs taken with settings that avoid overexposure (histogram-exposure). In addition, gap fraction values derived from the hemispherical photographs were compared with estimates derived from the manually classified reference pixels. All tested algorithms were sensitive to overexposure. Three of the algorithms were accurate enough to be recommended for processing histogram-exposed hemispherical photographs: "Minimum" (PC 98.8%; κ 0.952), "Edge Detection" (PC 98.1%; κ 0.950), and "Minimum Histogram" (PC 98.1%; κ 0.947). The Minimum algorithm overestimated gap fraction least of all (11%); the overestimation by Edge Detection (63%) and Minimum Histogram (67%) was considerably larger. The remaining four algorithms (IsoData, Maximum Entropy, MinError, and Otsu) proved incompatible with photographs containing overexposed pixels; applied to histogram-exposed photographs, they overestimated the gap fraction by at least 180%.
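Percentage correct and Cohen's kappa follow directly from the agreement between algorithmic and manual pixel labels; a minimal sketch with assumed variable names:

```python
import numpy as np

def percent_correct_and_kappa(pred, ref):
    """pred, ref: boolean arrays (True = sky) from the binarization
    algorithm and the manually classified reference pixels."""
    po = np.mean(pred == ref)                   # observed agreement
    pe = (np.mean(pred) * np.mean(ref)          # chance agreement
          + np.mean(~pred) * np.mean(~ref))
    kappa = (po - pe) / (1 - pe)
    return 100 * po, kappa
```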

8.
Sirt1, the closest mammalian homolog of the Sir2 yeast longevity protein, has been extensively investigated in the last few years as an avenue to understand the connection linking nutrients and energy metabolism with aging and related diseases. From this research effort the picture has emerged of an enzyme at the hub of a complex array of molecular interactions whereby nutrient-triggered signals are translated into several levels of adaptive cell responses, the failure of which underlies diseases as diverse as diabetes, neurodegeneration and cancer. Sirt1 thus connects moderate calorie intake to "healthspan," and a decline of Sirt-centered protective circuits over time may explain the "catastrophic" nature of aging.

9.
This paper presents a stable and fast algorithm for independent component analysis with reference (ICA-R), a technique that incorporates available reference signals into the ICA contrast function to form an augmented Lagrangian function under the framework of constrained ICA (cICA). The previous ICA-R algorithm solved the optimization problem with a Newton-like learning rule; unfortunately, slow convergence and potential misconvergence limit its capability. This paper first investigates the flaws of the previous algorithm and then introduces a new stable algorithm with faster convergence. There are two other highlights: first, new approaches, including a reference-deflation technique and a direct way of obtaining references, are introduced to facilitate the application of ICA-R; second, a new scheme is proposed in which the new ICA-R recovers the complete set of underlying sources, with advantages over other classical ICA methods. Finally, experiments on both synthetic and real-world data verify the better performance of the new algorithm over both the previous ICA-R and other well-known methods.
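As a loose sketch of the cICA idea, and not the paper's augmented-Lagrangian Newton scheme, here is a one-unit gradient step trading a negentropy-style contrast against closeness to the reference; all names and the simple additive penalty are assumptions.

```python
import numpy as np

def ica_r_step(w, X, ref, mu, lr=0.1):
    """One gradient step for one-unit ICA with a reference.

    X: (channels x samples) whitened data; ref: reference signal;
    mu: weight of the closeness term.
    """
    y = w @ X
    grad_contrast = (X * np.tanh(y)).mean(axis=1)   # d/dw of E[log cosh(w'X)]
    grad_close = (X * ref).mean(axis=1)             # pushes w'X toward ref
    w = w + lr * (grad_contrast + mu * grad_close)
    return w / np.linalg.norm(w)
```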

10.
Classical Marr-Albus theories of cerebellar learning employ only cortical sites of plasticity. However, tests of these theories using adaptive calibration of the vestibulo-ocular reflex (VOR) have indicated plasticity in both the cerebellar cortex and the brainstem. To resolve this long-standing conflict, we attempted to identify the computational role of the brainstem site by using an adaptive filter version of the cerebellar microcircuit to model VOR calibration for changes in the oculomotor plant. With only cortical plasticity, introducing a realistic 100-ms delay in the retinal-slip error signal prevented learning at frequencies higher than 2.5 Hz, although the VOR itself is accurate up to at least 25 Hz. However, introducing an additional brainstem site of plasticity, driven by the correlation between cerebellar and vestibular inputs, overcame the 2.5 Hz limitation and allowed accurate high-frequency gains to be learned. This "cortex-first" learning mechanism is consistent with a wide variety of evidence concerning the role of the flocculus in VOR calibration, and complements rather than replaces the previously proposed "brainstem-first" mechanism that operates when ocular tracking mechanisms are effective. These results (i) describe a process whereby information originally learnt in one area of the brain (cerebellar cortex) can be transferred to and expressed in another (brainstem), and (ii) indicate for the first time why a brainstem site of plasticity is actually required by Marr-Albus type models when high-frequency gains must be learned in the presence of error delay.
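Schematically, and with all names assumed, the two plasticity rules can be caricatured as a delayed-error LMS update for the cortex and a correlation-driven update for the brainstem; this sketches the model class, not the paper's exact equations.

```python
def cortical_update(w, parallel_fiber, retinal_slip_delayed, beta):
    # Covariance-style LMS rule: the error arrives ~100 ms late, which is
    # what limits purely cortical learning above ~2.5 Hz.
    return w - beta * retinal_slip_delayed * parallel_fiber

def brainstem_update(b, vestibular, cerebellar_output, gamma):
    # Driven by the correlation between cerebellar and vestibular inputs,
    # so no delayed error signal is needed ("cortex-first" transfer).
    return b + gamma * cerebellar_output * vestibular
```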

11.
BOLD fMRI is sensitive to blood-oxygenation changes correlated with brain function; however, it is limited by relatively weak signal and significant noise confounds. Many preprocessing algorithms have been developed to control noise and improve signal detection in fMRI. Although the chosen set of preprocessing and analysis steps (the "pipeline") significantly affects signal detection, pipelines are rarely quantitatively validated in the neuroimaging literature, owing to complex preprocessing interactions. This paper outlines and validates an adaptive resampling framework for evaluating and optimizing preprocessing choices by optimizing data-driven metrics of task prediction and spatial reproducibility. Compared with standard "fixed" preprocessing pipelines, this optimization approach significantly improves independent validation measures of within-subject test-retest reliability, between-subject activation overlap, and behavioural prediction accuracy. We demonstrate that preprocessing choices function as implicit model regularizers and that improvements due to pipeline optimization generalize across a range of simple to complex experimental tasks and analysis models. Results are shown for brief scanning sessions (<3 minutes each), demonstrating that with pipeline optimization it is possible to obtain reliable results and brain-behaviour correlations in relatively small datasets.
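A sketch of the selection criterion: score each candidate pipeline by its distance from the ideal point of perfect prediction and perfect reproducibility, then keep the closest (an NPAIRS-style combination; names assumed).

```python
import numpy as np

def pipeline_distance(prediction, reproducibility):
    """Distance from the ideal (P, R) = (1, 1); smaller is better."""
    return np.hypot(1 - prediction, 1 - reproducibility)

def best_pipeline(scores):
    """scores: {pipeline_name: (prediction, reproducibility)}."""
    return min(scores, key=lambda name: pipeline_distance(*scores[name]))
```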

12.
Models for predicting allogeneic hematopoietic stem cell transplantation (HSCT)-related mortality only partially account for transplant risk. Improving predictive accuracy requires understanding of prediction-limiting factors, such as the statistical methodology used, the number and quality of features collected, or simply the population size. Using an in silico approach (i.e., iterative computerized simulations) based on machine learning (ML) algorithms, we set out to analyze these factors. A cohort of 25,923 adult acute leukemia patients from the European Society for Blood and Marrow Transplantation (EBMT) registry was analyzed. The prediction target was non-relapse mortality (NRM) 100 days following HSCT. Thousands of prediction models were developed under varying conditions: increasing sample size, specific subpopulations, and an increasing number of variables, which were selected and ranked by separate feature selection algorithms. Depending on the algorithm, predictive performance plateaued at a population size of 6,611–8,814 patients, reaching a maximal area under the receiver operating characteristic curve (AUC) of 0.67. AUCs of models developed on specific subpopulations ranged from 0.59 to 0.67, for patients in second complete remission and those receiving reduced-intensity conditioning, respectively. Only 3–5 variables were necessary to achieve near-maximal AUCs. The top 3 ranking variables, shared by all algorithms, were disease stage, donor type, and conditioning regimen. Our findings empirically demonstrate that, with regard to NRM prediction, few variables "carry the weight" and traditional HSCT data has been "worn out". "Breaking through" the predictive boundaries will likely require additional types of inputs.
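The plateau analysis can be reproduced in outline with a scikit-learn learning curve; `X` and `y` are hypothetical stand-ins for the (unavailable) registry features and 100-day NRM labels, and logistic regression stands in for the paper's ML algorithms.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import learning_curve

def auc_learning_curve(X, y):
    """Validation AUC vs. training-set size; a flattening curve mirrors
    the plateau reported above (~0.67 AUC by ~6,611-8,814 patients)."""
    sizes, _, val_scores = learning_curve(
        LogisticRegression(max_iter=1000), X, y,
        train_sizes=np.linspace(0.1, 1.0, 8),
        scoring="roc_auc", cv=5)
    return sizes, val_scores.mean(axis=1)
```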

13.
The advent of high-throughput metagenomic sequencing has prompted the development of efficient taxonomic profiling methods that measure the presence, abundance, and phylogeny of organisms in a wide range of environmental samples. Multivariate sequence-derived abundance data further has the potential to enable inference of ecological associations between microbial populations, but several technical issues need to be accounted for: the compositional nature of the data, its extreme sparsity and overdispersion, and the frequent need to operate in under-determined regimes. The ecological network reconstruction problem is frequently cast in the paradigm of Gaussian graphical models (GGMs), for which efficient structure inference algorithms are available, such as the graphical lasso and neighborhood selection. Unfortunately, GGMs and variants thereof cannot properly account for the extremely sparse patterns occurring in real-world metagenomic taxonomic profiles. In particular, structural zeros (as opposed to sampling zeros), corresponding to true absences of biological signal, fail to be properly handled by most statistical methods. We present here a zero-inflated log-normal graphical model (available at https://github.com/vincentprost/Zi-LN) specifically aimed at handling such "biological" zeros, and demonstrate significant performance gains over state-of-the-art statistical methods for the inference of microbial association networks, with the most notable gains obtained when analyzing taxonomic profiles displaying sparsity levels on par with real-world metagenomic datasets.
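Conceptually, the pipeline maps counts to latent Gaussian scores and then applies standard GGM structure learning. Below is a crude rank-based stand-in for the Zi-LN transform; the actual model at the GitHub link above treats structural zeros explicitly, which this simplification does not.

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.covariance import GraphicalLassoCV

def latent_gaussian_scores(counts):
    """Rank-based Gaussianization per taxon (samples x taxa) -- a
    simplification that does not separate structural from sampling zeros."""
    n = counts.shape[0]
    ranks = np.apply_along_axis(rankdata, 0, counts)
    return norm.ppf(ranks / (n + 1))

# z = latent_gaussian_scores(count_matrix)
# network = GraphicalLassoCV().fit(z).precision_   # sparse inverse covariance
```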

14.
Weigand MR, Sundin GW. Genetics. 2009;181(1):199-208.
Mutagenic DNA repair (MDR) employs low-fidelity DNA polymerases capable of replicating past DNA lesions resulting from exposure to high-energy ultraviolet radiation (UVR). MDR confers UVR tolerance, and its activation initiates a transient mutator phenotype that may provide opportunities for adaptation. To investigate the potential role of MDR in adaptation, we propagated parallel lineages of the highly mutable epiphytic plant pathogen Pseudomonas cichorii 302959 with daily UVR activation (UVR lineages) for ~500 generations. Here we examine those lineages through measurement of relative fitness and observation of the distinct colony morphotypes that emerged. Isolates and population samples from UVR lineages displayed gains in fitness relative to the ancestor despite increased rates of inducible mutation to rifampicin resistance. Regular activation of MDR resulted in the maintenance of genetic diversity within UVR lineages, including the reproducible diversification and coexistence of "round" and "fuzzy" colony morphotypes. These results suggest that inducible mutability may be a reasonable strategy for adaptive evolution in stressful environments, contributing to gains in relative fitness and to diversification.

15.
We examined the course of repetitive behavior and restricted interests (RBRI) in children with and without Down syndrome (DS) over a two-year period. Forty-two typically developing children and 43 persons with DS represented two mental age (MA) levels: "younger" (2–4 years) and "older" (5–11 years). For typically developing younger children, some aspects of RBRI increased from Time 1 to Time 2; in older children, these aspects remained stable or decreased over the two-year period. For participants with DS, RBRI remained stable or increased over time. Time 1 RBRI predicted Time 2 adaptive behavior (measured by the Vineland Scales) in typically developing children, whereas for participants with DS, Time 1 RBRI predicted poor adaptive outcome (Child Behavior Checklist) at Time 2. The results add to the body of literature examining the adaptive and maladaptive nature of repetitive behavior.

16.
The Locomotion of Mouse Fibroblasts in Tissue Culture
Time-lapse cinematography was used to investigate the motion of mouse fibroblasts in tissue culture. Observations over successive short time intervals revealed a tendency for the cells to persist in their direction of motion from one 2.5-hr time interval to the next. Over 5.0-hr time intervals, however, the direction of motion appeared random. This fact suggested that D, the diffusion constant of a random walk model, might serve to characterize cellular motility if suitably long observation times were used. We therefore investigated the effect of "persistence" on the pure random walk model, and we found theoretically and confirmed experimentally that the motility of a persisting cell could indeed be characterized by an augmented diffusion constant, D*. A method for determining confidence limits on D* was also developed. Thus a random walk model, modified to comprehend the persistence effect, was found to describe the motion of fibroblasts in tissue culture and to provide a numerical measure of cellular motility.
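The persistence-augmented model described above is commonly summarized by the Fürth mean-squared-displacement formula, from which D* can be fit; a sketch with assumed names (the 1970 paper's exact parameterization may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def furth_msd(t, d_star, persistence):
    """2-D persistent random walk: <d^2> = 4 D* (t - P (1 - exp(-t/P)))."""
    return 4 * d_star * (t - persistence * (1 - np.exp(-t / persistence)))

# d_star, persistence = curve_fit(furth_msd, times, msd_observed)[0]
# At times much longer than the persistence time, the MSD grows as 4 D* t,
# so D* plays the role of the augmented diffusion constant.
```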

17.
Reconstructions of cellular metabolism are publicly available for a variety of different microorganisms and some mammalian genomes. To date, these reconstructions are "genome-scale" and strive to include all reactions implied by the genome annotation, as well as those with direct experimental evidence. Clearly, many of the reactions in a genome-scale reconstruction will not be active under particular conditions or in a particular cell type. Methods to tailor these comprehensive genome-scale reconstructions into context-specific networks will aid predictive in silico modeling for a particular situation. We present a method called Gene Inactivity Moderated by Metabolism and Expression (GIMME) to achieve this goal. The GIMME algorithm uses quantitative gene expression data and one or more presupposed metabolic objectives to produce the context-specific reconstruction that is most consistent with the available data. Furthermore, the algorithm provides a quantitative inconsistency score indicating how consistent a set of gene expression data is with a particular metabolic objective. We show that this algorithm produces results consistent with biological experiments and intuition for adaptive evolution of bacteria, rational design of metabolic engineering strains, and human skeletal muscle cells. This work represents progress towards producing constraint-based models of metabolism that are specific to the conditions where the expression profiling data is available.
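In outline, GIMME solves a linear program that minimizes expression-weighted flux through below-threshold reactions while holding the presupposed objective near its optimum. A toy sketch with SciPy, all argument names assumed (the real method runs on genome-scale models):

```python
import numpy as np
from scipy.optimize import linprog

def gimme_lp(S, lb, ub, expression, threshold, obj_idx, frac, obj_max):
    """Minimize sum_i max(0, threshold - expr_i) * |v_i|
    s.t. S v = 0, lb <= v <= ub, v[obj_idx] >= frac * obj_max.
    |v| is linearized by splitting v = vp - vn with vp, vn >= 0."""
    m, n = S.shape
    w = np.maximum(0.0, threshold - expression)
    c = np.concatenate([w, w])                      # cost on vp and vn
    A_eq, b_eq = np.hstack([S, -S]), np.zeros(m)
    bounds = [(0, max(u, 0)) for u in ub] + [(0, max(-l, 0)) for l in lb]
    A_ub = np.zeros((1, 2 * n))                     # -(vp-vn) <= -frac*obj_max
    A_ub[0, obj_idx], A_ub[0, n + obj_idx] = -1.0, 1.0
    res = linprog(c, A_ub=A_ub, b_ub=[-frac * obj_max],
                  A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:n] - res.x[n:]
```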

18.

Background

Since both the number of SNPs (single nucleotide polymorphisms) used in genomic prediction and the number of individuals used in training datasets are rapidly increasing, there is an increasing need to improve the efficiency of genomic prediction models in terms of computing time and memory (RAM) required.

Methods

In this paper, two alternative algorithms for genomic prediction are presented that replace the originally suggested residual updating algorithm, without affecting the estimates. The first alternative algorithm continues to use residual updating, but takes advantage of the characteristic that the predictor variables in the model (i.e. the SNP genotypes) take only three different values, and is therefore termed “improved residual updating”. The second alternative algorithm, here termed “right-hand-side updating” (RHS-updating), extends the idea of improved residual updating across multiple SNPs. The alternative algorithms can be implemented for a range of different genomic predictions models, including random regression BLUP (best linear unbiased prediction) and most Bayesian genomic prediction models. To test the required computing time and RAM, both alternative algorithms were implemented in a Bayesian stochastic search variable selection model.
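The three-valued-genotype trick is easy to show in a few lines: the dot product x'e needed for a SNP's right-hand side collapses to two masked sums, with no per-individual multiplications. A sketch with assumed names (the RHS-updating algorithm of the paper additionally batches this across many SNPs):

```python
import numpy as np

def snp_rhs(genotypes, residuals):
    """x'e for one SNP, exploiting genotypes coded {0, 1, 2}."""
    s1 = residuals[genotypes == 1].sum()
    s2 = residuals[genotypes == 2].sum()
    return s1 + 2.0 * s2
```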

Results

Compared to the original algorithm, the improved residual updating algorithm reduced CPU time by 35.3 to 43.3%, without changing memory requirements. The RHS-updating algorithm reduced CPU time by 74.5 to 93.0% and memory requirements by 13.1 to 66.4% compared to the original algorithm.

Conclusions

The presented RHS-updating algorithm provides an interesting alternative to reduce both computing time and memory requirements for a range of genomic prediction models.

19.
Computational protein design has found great success in engineering proteins for thermodynamic stability, binding specificity, or enzymatic activity in a 'single state' design (SSD) paradigm. Multi-specificity design (MSD), on the other hand, involves considering the stability of multiple protein states simultaneously. We have developed a novel MSD algorithm, which we refer to as REstrained CONvergence in multi-specificity design (RECON). The algorithm allows each state to adopt its own sequence throughout the design process rather than enforcing a single sequence on all states; convergence to a single sequence is encouraged through an incrementally increasing convergence restraint on corresponding positions. Compared with MSD algorithms that constrain all states to an identical sequence, the energy landscape is simplified, which accelerates the search drastically. As a result, RECON can readily be used in simulations with a flexible protein backbone. We have benchmarked RECON on two design tasks. First, we designed antibodies derived from a common germline gene against their diverse targets, to assess recovery of the germline, polyspecific sequence. Second, we designed "promiscuous", polyspecific proteins against all binding partners and measured recovery of the native sequence. We show that RECON efficiently recovers native-like, biologically relevant sequences in this diverse set of protein complexes.
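The incrementally increasing convergence restraint can be caricatured as a pairwise mismatch penalty whose weight is ramped up across design rounds; a sketch with assumed names (the actual restraint is implemented within a full protein-design energy function, not as a standalone term like this).

```python
from itertools import combinations

def convergence_restraint(sequences, weight):
    """Penalty on disagreement between states at corresponding positions.

    sequences: one designed sequence per state, all the same length.
    `weight` starts near zero and grows each round, so states explore
    freely at first and are pushed toward a single sequence at the end.
    """
    mismatches = sum(a != b
                     for s1, s2 in combinations(sequences, 2)
                     for a, b in zip(s1, s2))
    return weight * mismatches

# for design_round, weight in enumerate([0.5, 1.0, 2.0, 4.0]):
#     redesign_each_state(lambda seqs: physical_energy(seqs)
#                         + convergence_restraint(seqs, weight))
```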

20.
Sleep benefits veridical memories, resulting in superior recall relative to off-line intervals spent awake. Sleep also increases false memory recall in the Deese-Roediger-McDermott (DRM) paradigm. Given the suggestion that emotional veridical memories are prioritized for consolidation over sleep, here we examined whether emotion modulates sleep's effect on false memory formation. Participants listened to semantically related word lists, each lacking a critical lure representing the list's "gist." Free recall was tested after a 12-hour interval containing sleep or wakefulness. The Sleep group recalled more studied words than the Wake group, but only for emotionally neutral lists. False memories of both negative and neutral critical lures were greater following sleep than wake. Morning and Evening control groups (20-minute delay) did not differ, ruling out circadian accounts for these differences. These results support the adaptive function of sleep both in promoting the consolidation of veridical declarative memories and in extracting unifying aspects from memory details.
