Similar Literature
Found 20 similar articles (search time: 31 ms)
1.

Background

The basic RNA secondary structure prediction problem, or single-sequence folding problem (SSF), was solved 35 years ago by a now well-known \(O(n^3)\)-time dynamic programming method. Recently, three methodologies—Valiant, Four-Russians, and Sparsification—have been applied to speed up RNA secondary structure prediction. The sparsification method exploits two properties of the input: the number Z of subsequences whose endpoints belong to the optimal folding set, and the maximum number L of base pairs. These sparsity properties satisfy \(0 \le L \le n / 2\) and \(n \le Z \le n^2 / 2\), and the method reduces the running time to O(LZ). The Four-Russians method, in contrast, gains its speedup by tabulating partial results.
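In its simplest base-pair-maximization form, that classic \(O(n^3)\) dynamic program is the Nussinov-style recurrence. The sketch below illustrates only the cubic baseline (function name ours; production predictors use thermodynamic scoring rather than pair counting):

```python
def nussinov(seq):
    """Maximum number of base pairs in an RNA sequence, O(n^3) time / O(n^2) space."""
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}
    n = len(seq)
    if n == 0:
        return 0
    M = [[0] * n for _ in range(n)]          # M[i][j] = best score on seq[i..j]
    for span in range(1, n):                 # fill shorter subsequences first
        for i in range(n - span):
            j = i + span
            best = M[i + 1][j]               # case 1: position i stays unpaired
            if (seq[i], seq[j]) in pairs:    # case 2: i pairs with j
                inner = M[i + 1][j - 1] if i + 1 <= j - 1 else 0
                best = max(best, inner + 1)
            for k in range(i + 1, j):        # case 3: split into two subproblems
                best = max(best, M[i][k] + M[k + 1][j])
            M[i][j] = best
    return M[0][n - 1]
```

The triple loop is the \(O(n^3)\) cost that Valiant, Four-Russians, and Sparsification each attack from a different angle.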

Results

In this paper, we explore three different algorithmic speedups. We first expand and reformulate the single-sequence folding Four-Russians \(\Theta \left(\frac{n^3}{\log ^2 n}\right)\)-time algorithm to utilize an on-demand lookup table. Second, we create a framework that combines the fastest Sparsification method with the new fastest on-demand Four-Russians method. This combined method has a worst-case running time of \(O(\tilde{L}\tilde{Z})\), where \(\frac{L}{\log n} \le \tilde{L}\le \min \left(L,\frac{n}{\log n}\right)\) and \(\frac{Z}{\log n}\le \tilde{Z} \le \min \left(Z,\frac{n^2}{\log n}\right)\). Third, we update the Four-Russians formulation to achieve an on-demand \(O(n^2/\log ^2 n)\)-time parallel algorithm. This leads to an asymptotic speedup of \(O(\tilde{L}\tilde{Z_j})\), where \(\frac{Z_j}{\log n}\le \tilde{Z_j} \le \min \left(Z_j,\frac{n}{\log n}\right)\) and \(Z_j\) is the number of subsequences with endpoint j belonging to the optimal folding set.

Conclusions

The on-demand formulation not only removes all extraneous computation and allows us to incorporate more realistic scoring schemes, but also lets us take advantage of the sparsity properties. Through asymptotic analysis and empirical testing on the base-pair maximization variant and a more biologically informative scoring scheme, we show that this Sparse Four-Russians framework achieves a speedup on every problem instance that is asymptotically never worse, and empirically better, than the minimum of the two methods alone.

2.

Background

Suffix arrays, augmented by additional data structures, allow solving efficiently many string processing problems. The external memory construction of the generalized suffix array for a string collection is a fundamental task when the size of the input collection or the data structure exceeds the available internal memory.

Results

In this article we present and analyze \(\mathsf {eGSA}\) [introduced in: External memory generalized suffix and \(\mathsf {LCP}\) arrays construction. In: Proceedings of CPM, pp 201–210, 2013], the first external memory algorithm to construct generalized suffix arrays augmented with the longest common prefix array for a string collection. Our algorithm relies on a combination of buffers, induced sorting and a heap to avoid direct string comparisons. We performed experiments that covered different aspects of our algorithm, including running time, efficiency, external memory accesses, internal phases and the influence of different optimization strategies. On real datasets of size up to 24 GB and using 2 GB of internal memory, \(\mathsf {eGSA}\) showed competitive performance when compared to \(\mathsf {eSAIS}\) and \(\mathsf {SAscan}\), which are efficient algorithms for a single string according to the related literature. We also show the effect of disk caching managed by the operating system on our algorithm.

Conclusions

The proposed algorithm was validated through performance tests using real datasets from different domains, in various combinations, and showed competitive performance. Our algorithm can also construct the generalized Burrows-Wheeler transform of a string collection at no additional cost except for the output time.

3.

Background

Cancer is an evolutionary process characterized by the accumulation of somatic mutations in a population of cells that form a tumor. One frequent type of mutations is copy number aberrations, which alter the number of copies of genomic regions. The number of copies of each position along a chromosome constitutes the chromosome’s copy-number profile. Understanding how such profiles evolve in cancer can assist in both diagnosis and prognosis.

Results

We model the evolution of a tumor by segmental deletions and amplifications, and gauge distance from profile \(\mathbf {a}\) to \(\mathbf {b}\) by the minimum number of events needed to transform \(\mathbf {a}\) into \(\mathbf {b}\). Given two profiles, our first problem aims to find a parental profile that minimizes the sum of distances to its children. Given k profiles, the second, more general problem, seeks a phylogenetic tree, whose k leaves are labeled by the k given profiles and whose internal vertices are labeled by ancestral profiles such that the sum of edge distances is minimum.

Conclusions

For the former problem we give a pseudo-polynomial dynamic programming algorithm that is linear in the profile length, and an integer linear program formulation. For the latter problem we show it is NP-hard and give an integer linear program formulation that scales to practical problem instance sizes. We assess the efficiency and quality of our algorithms on simulated instances.

4.

Background

In this work, we present a new coarse-grained representation of RNA dynamics. It is based on adjacency matrices and their interaction patterns obtained from molecular dynamics simulations. RNA molecules are well suited to this representation because their composition is mainly modular and assessable from the secondary structure alone. These interactions can be represented as adjacency matrices of k nucleotides. Based on those, we define transitions between states as changes in the adjacency matrices, which form Markovian dynamics. The intense computational demand of deriving the transition probability matrices prompted us to develop StreAM-\(T_g\), a stream-based algorithm for generating such Markov models of k-vertex adjacency matrices representing the RNA.

Results

We benchmark StreAM-\(T_g\) (a) on random and RNA unit-sphere dynamic graphs and (b) for the robustness of our method against different parameters. Moreover, we address a riboswitch design problem by applying StreAM-\(T_g\) to six long-term (500 ns) molecular dynamics simulations of a synthetic tetracycline-dependent riboswitch in combination with five different antibiotics.

Conclusions

The proposed algorithm performs well on large simulated as well as real-world dynamic graphs. Additionally, StreAM-\(T_g\) provides insights into nucleotide-based RNA dynamics in comparison to conventional metrics like the root-mean-square fluctuation. In light of experimental data, our results show important design opportunities for the riboswitch.

5.

Background

Mathematical modeling is a powerful tool to analyze, and ultimately design, biochemical networks. However, the estimation of the parameters that appear in biochemical models is a significant challenge. Parameter estimation typically involves expensive function evaluations and noisy data, making it difficult to quickly obtain optimal solutions. Further, biochemical models often have many local extrema, which further complicates parameter estimation. To address these challenges, we developed Dynamic Optimization with Particle Swarms (DOPS), a novel hybrid meta-heuristic that combines multi-swarm particle swarm optimization with dynamically dimensioned search (DDS). DOPS uses a multi-swarm particle swarm optimization technique to generate candidate solution vectors, the best of which is then greedily updated using dynamically dimensioned search.
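The two-layer idea can be sketched schematically as follows: swarms propose candidates, and the global best is refined by a DDS step that perturbs a shrinking random subset of dimensions. All hyperparameters here are our own illustrative choices, not the authors' implementation:

```python
import random

def dds_step(best_x, f, bounds, frac, r=0.2):
    """DDS: perturb a random subset of dimensions; keep the change only if it improves f."""
    d = len(best_x)
    dims = [i for i in range(d) if random.random() < frac] or [random.randrange(d)]
    cand = best_x[:]
    for i in dims:
        lo, hi = bounds[i]
        cand[i] = min(hi, max(lo, cand[i] + r * (hi - lo) * random.gauss(0, 1)))
    return cand if f(cand) < f(best_x) else best_x

def dops(f, bounds, n_swarms=3, swarm_size=5, iters=200):
    """Hybrid sketch: multi-swarm PSO proposes candidates; the global best gets DDS refinement."""
    dim = len(bounds)
    rand_x = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    swarms = [[rand_x() for _ in range(swarm_size)] for _ in range(n_swarms)]
    vels = [[[0.0] * dim for _ in range(swarm_size)] for _ in range(n_swarms)]
    pbest = [[x[:] for x in s] for s in swarms]
    gbest = min((x[:] for s in swarms for x in s), key=f)   # copy so later moves don't alias it
    for t in range(iters):
        for s in range(n_swarms):
            for p in range(swarm_size):
                for i in range(dim):
                    vels[s][p][i] = (0.7 * vels[s][p][i]
                                     + 1.5 * random.random() * (pbest[s][p][i] - swarms[s][p][i])
                                     + 1.5 * random.random() * (gbest[i] - swarms[s][p][i]))
                    swarms[s][p][i] = min(bounds[i][1],
                                          max(bounds[i][0], swarms[s][p][i] + vels[s][p][i]))
                if f(swarms[s][p]) < f(pbest[s][p]):
                    pbest[s][p] = swarms[s][p][:]
                if f(pbest[s][p]) < f(gbest):
                    gbest = pbest[s][p][:]
        gbest = dds_step(gbest, f, bounds, frac=1 - t / iters)  # greedy DDS refinement
    return gbest
```

On a 2-D sphere function this sketch converges toward the origin; the DDS fraction shrinks over iterations so late refinement perturbs fewer dimensions, mirroring the greedy-update role DDS plays in the hybrid.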

Results

We tested DOPS using classic optimization test functions, biochemical benchmark problems and real-world biochemical models. We performed \(\mathcal {T}\) = 25 trials with \(\mathcal {N}\) = 4000 function evaluations per trial, and compared the performance of DOPS with other commonly used meta-heuristics such as differential evolution (DE), simulated annealing (SA) and dynamically dimensioned search (DDS). On average, DOPS outperformed other common meta-heuristics on the optimization test functions, benchmark problems and a real-world model of the human coagulation cascade.

Conclusions

DOPS is a promising meta-heuristic approach for estimating biochemical model parameters in relatively few function evaluations. DOPS source code is available for download under an MIT license at http://www.varnerlab.org.

6.

Introduction

In systems biology, where a main goal is acquiring knowledge of biological systems, one of the challenges is inferring biochemical interactions from different molecular entities such as metabolites. In this area, the metabolome possesses a unique place for reflecting “true exposure” by being sensitive to variation coming from genetics, time, and environmental stimuli. While influenced by many different reactions, often the research interest needs to be focused on variation coming from a certain source, i.e. a certain covariable \(\mathbf {X}_m\).

Objective

Here, we use network analysis methods to recover a set of metabolite relationships, by finding metabolites sharing a similar relation to \(\mathbf {X}_m\). Metabolite values are based on information coming from individuals’ \(\mathbf {X}_m\) status which might interact with other covariables.

Methods

As an alternative to using the original metabolite values, the total information is decomposed using a linear regression model, and the part relevant to \(\mathbf {X}_m\) is used in further analysis. For two datasets, two different network estimation methods are considered. The first is weighted gene co-expression network analysis, based on correlation coefficients. The second is graphical LASSO, based on partial correlations.

Results

We observed that when using the parts related to the specific covariable of interest, the resulting estimated networks display higher interconnectedness. Additionally, several groups of biologically associated metabolites (very low-density lipoproteins, other lipoproteins, etc.) were identified in the human data example.

Conclusions

This work demonstrates how information on the study design can be incorporated to estimate metabolite networks. As a result, sets of interconnected metabolites can be clustered together with respect to their relation to a covariable of interest.

7.

Introduction

To aid the development of better algorithms for \(^1\)H NMR data analysis, such as alignment or peak-fitting, it is important to characterise and model chemical shift changes caused by variation in pH. The number of protonation sites, a key parameter in the theoretical relationship between pH and chemical shift, is traditionally estimated from the molecular structure, which is often unknown in untargeted metabolomics applications.

Objective

We aim to use observed NMR chemical shift titration data to estimate the number of protonation sites for a range of urinary metabolites.

Methods

A pool of urine from healthy subjects was titrated in the range pH 2–12, standard \(^1\)H NMR spectra were acquired and positions of 51 peaks (corresponding to 32 identified metabolites) were recorded. A theoretical model of chemical shift was fit to the data using a Bayesian statistical framework, using model selection procedures in a Markov Chain Monte Carlo algorithm to estimate the number of protonation sites for each molecule.
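For well-separated sites, the theoretical pH-to-shift relationship is commonly written as a sum of Henderson-Hasselbalch terms, one per protonation site. A sketch of that forward model (parameter names ours; the paper estimates the number of terms by Bayesian model selection within MCMC rather than evaluating the model directly):

```python
def chemical_shift(pH, delta_base, transitions):
    """Predicted chemical shift (ppm) at a given pH.

    delta_base:  shift of the fully protonated form.
    transitions: one (pKa, delta_change) pair per protonation site.
    """
    shift = delta_base
    for pKa, change in transitions:
        frac_deprotonated = 1.0 / (1.0 + 10.0 ** (pKa - pH))  # Henderson-Hasselbalch
        shift += change * frac_deprotonated
    return shift
```

At pH = pKa a site is half titrated, so the predicted shift sits midway through that site's transition; model selection amounts to asking how many such sigmoidal terms the titration curve supports.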

Results

The estimated number of protonation sites was found to be correct for 41 out of 51 peaks. In some cases, the number of sites was incorrectly estimated, due to very close pKa values or a limited amount of data in the required pH range.

Conclusions

Given appropriate data, it is possible to estimate the number of protonation sites for many metabolites typically observed in \(^1\)H NMR metabolomics without knowledge of the molecular structure. This approach may be a valuable resource for the development of future automated metabolite alignment, annotation and peak fitting algorithms.

8.

Background

In the absence of horizontal gene transfer, it is possible to reconstruct the history of gene families from empirically determined orthology relations, which are equivalent to event-labeled gene trees. Knowledge of the event labels considerably simplifies the problem of reconciling a gene tree T with a species tree S, relative to the reconciliation problem without prior knowledge of the event types. It is well known that optimal reconciliations in the unlabeled case may violate time-consistency and thus are not biologically feasible. Here we investigate the mathematical structure of the event-labeled reconciliation problem with horizontal transfer.

Results

We investigate the issue of time-consistency for the event-labeled version of the reconciliation problem, provide a convenient axiomatic framework, and derive a complete characterization of time-consistent reconciliations. This characterization depends on certain weak conditions on the event-labeled gene trees that reflect conditions under which evolutionary events are observable at least in principle. We give an \(\mathcal {O}(|V(T)|\log (|V(S)|))\)-time algorithm to decide whether a time-consistent reconciliation map exists. It does not require the construction of explicit timing maps, but relies entirely on the comparably easy task of checking whether a small auxiliary graph is acyclic. The algorithms are implemented in C++ using the boost graph library and are freely available at https://github.com/Nojgaard/tc-recon.
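The final step described above, checking whether a small auxiliary graph is acyclic, is the standard topological-sort test; a generic sketch using Kahn's algorithm (the construction of the paper-specific auxiliary graph is omitted here):

```python
from collections import deque

def is_acyclic(vertices, edges):
    """Kahn's algorithm: a directed graph has a topological order iff it is acyclic."""
    indeg = {v: 0 for v in vertices}
    adj = {v: [] for v in vertices}
    for u, v in edges:
        adj[u].append(v)
        indeg[v] += 1
    queue = deque(v for v, d in indeg.items() if d == 0)  # start from sources
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for w in adj[u]:
            indeg[w] -= 1
            if indeg[w] == 0:
                queue.append(w)
    return seen == len(indeg)   # all vertices ordered => no cycle
```

This runs in time linear in the size of the auxiliary graph, which is why the overall decision procedure avoids constructing explicit timing maps.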

Significance

The combinatorial characterization of time consistency and thus biologically feasible reconciliation is an important step towards the inference of gene family histories with horizontal transfer from orthology data, i.e., without presupposed gene and species trees. The fast algorithm to decide time consistency is useful in a broader context because it constitutes an attractive component for all tools that address tree reconciliation problems.

9.

Background

Isometric gene tree reconciliation is a gene tree/species tree reconciliation problem where both the gene tree and the species tree include branch lengths, and these branch lengths must be respected by the reconciliation. The problem was introduced by Ma et al. in 2008 in the context of reconstructing evolutionary histories of genomes in the infinite sites model.

Results

In this paper, we show that the original algorithm by Ma et al. is incorrect, and we propose a modified algorithm that addresses the problems that we discovered. We have also improved the running time from \(O(N^2)\) to \(O(N\log N)\), where N is the total number of nodes in the two input trees. Finally, we examine two new variants of the problem: reconciliation of two unrooted trees and scaling of branch lengths of the gene tree during reconciliation of two rooted trees.

Conclusions

We provide several new algorithms for isometric reconciliation of trees. Some questions in this area remain open; most importantly, extensions of the problem that allow for imprecise estimates of branch lengths.

10.

Main conclusion

Starch granule size distributions in plant tissues, when determined in high resolution and specified properly as a frequency function, could provide useful information on granule formation and growth.

Abstract

To better understand genetic control of physical properties of starch granules, we attempted a new approach to analyze developmental and genotypic effects on morphology and size distributions of starch granules in sweetpotato storage roots. Starch granules in sweetpotatoes exhibited low sphericity, many shapes that appeared to be independent of genotypes or developmental stages, and non-randomly distributed sizes. Granule size distributions of sweetpotato starches were determined in high resolution as differential volume-percentage distributions of volume-equivalent spherical diameters, rigorously curve-fitted to be lognormal, and specified using their geometric means \(\bar{x}^{*}\) and multiplicative standard deviations \(s^{*}\) in a \(\bar{x}^{*} \times / s^{*}\) (multiply/divide) form. The scale (\(\bar{x}^{*}\)) and shape (\(s^{*}\)) of these distributions were independently variable, ranging from 14.02 to 19.36 μm and 1.403 to 1.567, respectively, among 22 cultivars/clones. The shape (\(s^{*}\)) of granule lognormal volume-size distributions of sweetpotato starch was found to be highly significantly and inversely correlated with their apparent amylose contents. More importantly, granule lognormal volume-size distributions of starches in developing sweetpotatoes displayed the same self-preserving kinetics, i.e., preserving the shape but shifting upward the scale, as those of particles undergoing agglomeration, which strongly indicated involvement of agglomeration in the formation and growth of starch granules. Furthermore, QTL analysis of a segregating null allele at one of three homoeologous starch synthase II loci in a reciprocal-cross population, which was identified through profiling starch granule-bound proteins in sweetpotatoes of diverse genotypes, showed that the locus is a QTL modulating the scale of granule volume-size distributions of starch in sweetpotatoes.
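The \(\bar{x}^{*}\) and \(s^{*}\) specification follows directly from log-transformed diameters; a small sketch (assuming lognormality has already been established by curve fitting, as in the study):

```python
import math
from statistics import mean, stdev

def lognormal_spec(diameters):
    """Scale and shape of a lognormal size distribution:
    geometric mean x* = exp(mean of logs) and
    multiplicative standard deviation s* = exp(stdev of logs).
    About 68% of granules fall within [x*/s*, x* times s*]."""
    logs = [math.log(d) for d in diameters]
    return math.exp(mean(logs)), math.exp(stdev(logs))
```

Because \(s^{*}\) is dimensionless, a self-preserving distribution keeps \(s^{*}\) constant while \(\bar{x}^{*}\) grows, which is exactly the kinetic signature the study reports.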

11.
We prove almost sure exponential stability for the disease-free equilibrium of a stochastic differential equations model of an SIR epidemic with vaccination. The model allows for vertical transmission. The stochastic perturbation is associated with the force of infection and is such that the total population size remains constant in time. We prove almost sure positivity of solutions. The main result concerns especially the smaller values of the diffusion parameter, and describes the stability in terms of an analogue \(\mathcal{R}_\sigma\) of the basic reproduction number \(\mathcal{R}_0\) of the underlying deterministic model, with \(\mathcal{R}_\sigma \le \mathcal{R}_0\). We prove that the disease-free equilibrium is almost surely exponentially stable if \(\mathcal{R}_\sigma <1\).

12.

Background

Patterns with wildcards in specified positions, namely spaced seeds, are increasingly used instead of k-mers in many bioinformatics applications that require indexing, querying and rapid similarity search, as they can provide better sensitivity. Many of these applications require computing the hash of each position in the input sequences with respect to the given spaced seed, or to multiple spaced seeds. While the hashing of k-mers can be computed rapidly by exploiting the large overlap between consecutive k-mers, spaced-seed hashing is usually computed from scratch for each position in the input sequence, resulting in slower processing.

Results

The method proposed in this paper, fast spaced-seed hashing (FSH), exploits the similarity of the hash values of spaced seeds computed at adjacent positions in the input sequence. In our experiments we compute the hash for each position of metagenomic reads from several datasets, with respect to different spaced seeds. We also propose a generalized version of the algorithm for the simultaneous computation of multiple spaced-seed hashes. In the experiments, our algorithm computes the hash values of spaced seeds with a speedup over the traditional approach of between 1.6\(\times\) and 5.3\(\times\), depending on the structure of the spaced seed.
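The baseline that FSH improves upon computes each hash from scratch; a sketch of that naive scheme with a 2-bit nucleotide encoding (encoding and names are ours, not the paper's code). FSH's gain comes from reusing the bits that adjacent positions share instead of rebuilding `h` at every position:

```python
ENC = {"A": 0, "C": 1, "G": 2, "T": 3}   # 2-bit encoding per nucleotide

def naive_spaced_hashes(read, seed):
    """Hash every position of `read` against spaced seed `seed` from scratch.

    seed is a string over {1,0}: 1 = care position, 0 = wildcard.
    Each hash packs the encoded bases at the care positions, 2 bits per base.
    """
    care = [i for i, c in enumerate(seed) if c == "1"]
    hashes = []
    for p in range(len(read) - len(seed) + 1):
        h = 0
        for k, i in enumerate(care):             # O(seed weight) work per position
            h |= ENC[read[p + i]] << (2 * k)
        hashes.append(h)
    return hashes
```

For the contiguous seed "1111…" this inner loop is exactly the redundancy a rolling k-mer hash removes; for seeds with wildcards the overlap between adjacent positions is irregular, which is the case FSH targets.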

Conclusions

Spaced seed hashing is a routine task for several bioinformatics applications. FSH performs this task efficiently and raises the question of whether other hashing computations can be exploited to further improve the speedup. This has the potential for major impact in the field, making spaced-seed applications not only accurate, but also faster and more efficient.

Availability

The software FSH is freely available for academic use at: https://bitbucket.org/samu661/fsh/overview.

13.
Despite major strides in the treatment of cancer, the development of drug resistance remains a major hurdle. One strategy which has been proposed to address this is the sequential application of drug therapies where resistance to one drug induces sensitivity to another drug, a concept called collateral sensitivity. The optimal timing of drug switching in these situations, however, remains unknown. To study this, we developed a dynamical model of sequential therapy on heterogeneous tumors comprised of resistant and sensitive cells. A pair of drugs (DrugA, DrugB) are utilized and are periodically switched during therapy. Assuming resistant cells to one drug are collaterally sensitive to the opposing drug, we classified cancer cells into two groups, \(A_\mathrm{R}\) and \(B_\mathrm{R}\), each of which is a subpopulation of cells resistant to the indicated drug and concurrently sensitive to the other, and we subsequently explored the resulting population dynamics. Specifically, based on a system of ordinary differential equations for \(A_\mathrm{R}\) and \(B_\mathrm{R}\), we determined that the optimal treatment strategy consists of two stages: an initial stage in which a chosen effective drug is utilized until a specific time point, T, and a second stage in which drugs are switched repeatedly, during which each drug is used for a relative duration (i.e., \(f \Delta t\)-long for DrugA and \((1-f) \Delta t\)-long for DrugB with \(0 \le f \le 1\) and \(\Delta t \ge 0\)). We prove that the optimal duration of the initial stage, in which the first drug is administered, T, is shorter than the period in which it remains effective in decreasing the total population, contrary to current clinical intuition. We further analyzed the relationship between population makeup, \(\mathcal {A/B} = A_\mathrm{R}/B_\mathrm{R}\), and the effect of each drug. We determine a critical ratio, which we term \(\mathcal {(A/B)}^{*}\), at which the two drugs are equally effective. 
As the first stage of the optimal strategy is applied, \(\mathcal {A/B}\) changes monotonically to \(\mathcal {(A/B)}^{*}\) and then, during the second stage, remains at \(\mathcal {(A/B)}^{*}\) thereafter. Beyond our analytic results, we explored an individual-based stochastic model and presented the distribution of extinction times for the classes of solutions found. Taken together, our results suggest opportunities to improve therapy scheduling in clinical oncology.
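The qualitative behaviour of the two-stage schedule can be mimicked with a toy exponential growth/kill model integrated by forward Euler; the rates below are illustrative assumptions, not the paper's fitted parameters:

```python
import math

def simulate(A0, B0, T, f, dt_cycle, total_time, step=0.001):
    """A_R, B_R: cells resistant to DrugA / DrugB and sensitive to the other.

    Stage 1: DrugA alone until time T.
    Stage 2: alternate DrugA for f*dt_cycle and DrugB for (1-f)*dt_cycle.
    Returns the total population A_R + B_R at total_time.
    """
    gA, kA = 0.3, 0.8   # under DrugA: A_R grows at gA, B_R is killed at kA (assumed)
    gB, kB = 0.3, 0.8   # under DrugB: symmetric roles (assumed)
    A, B, t = A0, B0, 0.0
    while t < total_time:
        if t < T:
            drug = "A"
        else:
            phase = math.fmod(t - T, dt_cycle)
            drug = "A" if phase < f * dt_cycle else "B"
        if drug == "A":        # A_R resistant (grows), B_R collaterally sensitive (dies)
            A += gA * A * step
            B -= kA * B * step
        else:                  # mirror image under DrugB
            A -= kB * A * step
            B += gB * B * step
        t += step
    return A + B
```

With symmetric rates, even alternation (f = 0.5) gives each subpopulation a negative net rate per cycle, so the total shrinks, whereas monotherapy lets the resistant clone take over, the regime in which switching strategies matter.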

14.
A general mathematical model of anthrax (caused by Bacillus anthracis) transmission is formulated that includes live animals, infected carcasses and spores in the environment. The basic reproduction number \(\mathcal {R}_0\) is calculated, and existence of a unique endemic equilibrium is established for \(\mathcal {R}_0\) above the threshold value 1. Using data from the literature, elasticity indices for \(\mathcal {R}_0\) and type reproduction numbers are computed to quantify anthrax control measures. Including only herbivorous animals, anthrax is eradicated if \(\mathcal {R}_0 < 1\). For these animals, oscillatory solutions arising from Hopf bifurcations are numerically shown to exist for certain parameter values with \(\mathcal {R}_0>1\) and to have periodicity as observed from anthrax data. Including carnivores and assuming no disease-related death, anthrax again goes extinct below the threshold. Local stability of the endemic equilibrium is established above the threshold; thus, periodic solutions are not possible for these populations. It is shown numerically that oscillations in spore growth may drive oscillations in animal populations; however, the total number of infected animals remains about the same as with constant spore growth.

15.

Background

Reconstructing the genome of a species from short fragments is one of the oldest bioinformatics problems. Metagenomic assembly is a variant of the problem asking to reconstruct the circular genomes of all bacterial species present in a sequencing sample. This problem can be naturally formulated as finding a collection of circular walks of a directed graph G that together cover all nodes, or edges, of G.

Approach

We address this problem with the “safe and complete” framework of Tomescu and Medvedev (Research in Computational Molecular Biology—20th Annual Conference, RECOMB 9649:152–163, 2016). An algorithm is called safe if it returns only those walks (also called safe) that appear as a subwalk in all metagenomic assembly solutions for G. A safe algorithm is called complete if it returns all safe walks of G.

Results

We give graph-theoretic characterizations of the safe walks of G, and a safe and complete algorithm finding all safe walks of G. In the node-covering case, our algorithm runs in time \(O(m^2 + n^3)\), and in the edge-covering case it runs in time \(O(m^2n)\); n and m denote the number of nodes and edges, respectively, of G. This algorithm constitutes the first theoretical tight upper bound on what can be safely assembled from metagenomic reads using this problem formulation.

16.
Phylogenetic networks generalise phylogenetic (evolutionary) trees by allowing for the representation of reticulation (non-treelike) events. The structure of such networks is often viewed by the phylogenetic trees they embed. In this paper, we determine when a phylogenetic network \({\mathcal {N}}\) has two phylogenetic tree embeddings which collectively contain all of the edges of \({\mathcal {N}}\). This determination leads to a polynomial-time algorithm for recognising such networks and an unexpected characterisation of the class of reticulation-visible networks.

17.
Zeng Chao, Hamada Michiaki. BMC Genomics, 2018, 19(10): 906-49

Background

With the increasing number of annotated long noncoding RNAs (lncRNAs) from the genome, researchers are continually updating their understanding of lncRNAs. Recently, thousands of lncRNAs have been reported to be associated with ribosomes in mammals. However, their biological functions or mechanisms are still unclear.

Results

In this study, we investigated the sequence features involved in the ribosomal association of lncRNAs. We extracted ninety-nine sequence features corresponding to different biological mechanisms (i.e., RNA splicing, putative ORFs, k-mer frequencies, RNA modifications, RNA secondary structure, and repeat elements). An \(\mathcal {L}1\)-regularized logistic regression model was applied to screen these features. Finally, we obtained fifteen and nine important features for the ribosomal association of human and mouse lncRNAs, respectively.

Conclusion

To our knowledge, this is the first study to characterize ribosome-associated lncRNAs and ribosome-free lncRNAs from the perspective of sequence features. These sequence features that were identified in this study may shed light on the biological mechanism of the ribosomal association and provide important clues for functional analysis of lncRNAs.

18.
We developed a dynamic model of a rat proximal convoluted tubule cell in order to investigate cell volume regulation mechanisms in this nephron segment. We examined whether regulatory volume decrease (RVD), which follows exposure to a hyposmotic peritubular solution, can be achieved solely via stimulation of basolateral K\(^+\) and \(\hbox {Cl}^-\) channels and \(\hbox {Na}^+\)-\(\hbox {HCO}_3^-\) cotransporters. We also determined whether regulatory volume increase (RVI), which follows exposure to a hyperosmotic peritubular solution under certain conditions, may be accomplished by activating basolateral \(\hbox {Na}^+\)/H\(^+\) exchangers. Model predictions were in good agreement with experimental observations in mouse proximal tubule cells assuming that a 10% increase in cell volume induces a fourfold increase in the expression of basolateral K\(^+\) and \(\hbox {Cl}^-\) channels and \(\hbox {Na}^+\)-\(\hbox {HCO}_3^-\) cotransporters. Our results also suggest that in response to a hyposmotic challenge and subsequent cell swelling, \(\hbox {Na}^+\)-\(\hbox {HCO}_3^-\) cotransporters are more efficient than basolateral K\(^+\) and \(\hbox {Cl}^-\) channels at lowering intracellular osmolality and reducing cell volume. Moreover, both RVD and RVI are predicted to stabilize net transcellular \(\hbox {Na}^+\) reabsorption, that is, to limit the net \(\hbox {Na}^+\) flux decrease during a hyposmotic challenge or the net \(\hbox {Na}^+\) flux increase during a hyperosmotic challenge.

19.

Key message

Heuristic genomic inbreeding controls reduce inbreeding in genomic breeding schemes without reducing genetic gain.

Abstract

Genomic selection is increasingly being implemented in plant breeding programs to accelerate genetic gain of economically important traits. However, it may cause significant loss of genetic diversity compared with traditional schemes using phenotypic selection. We propose heuristic strategies to control the rate of inbreeding in outbred plants, which can be categorised into three types: controls during mate allocation, during selection, and during simultaneous selection and mate allocation. The proposed mate allocation measure GminF allocates two or more parents for mating in groups that minimise coancestry using a genomic relationship matrix. Two types of relationship-adjusted genomic breeding values for parent selection candidates (\({{\widetilde{\text{GEBV}}}_{\text{P}}}\)) and potential offspring (\({{\widetilde{\text{GEBV}}}_{\text{O}}}\)) are devised to control inbreeding during selection and even enable simultaneous selection and mate allocation. These strategies were tested in a case study using a simulated perennial ryegrass breeding scheme. Compared to the genomic selection scheme without controls, all proposed strategies significantly decreased inbreeding while achieving comparable genetic gain. In particular, the scenario using \({{\widetilde{\text{GEBV}}}_{\text{O}}}\) in simultaneous selection and mate allocation reduced inbreeding to one-third of that of the original genomic selection scheme. The proposed strategies are readily applicable in any outbred plant breeding program.
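The genomic relationship matrix underlying such coancestry controls can be computed with the standard VanRaden formulation; the greedy least-related pairing below is our simplified stand-in for GminF's group-based coancestry minimisation, not the paper's procedure:

```python
def grm(genotypes):
    """VanRaden genomic relationship matrix from 0/1/2 allele counts
    (rows = individuals, columns = markers): G = Z Z' / (2 * sum p(1-p)),
    where Z = M - 2p centers genotypes by marker allele frequencies p."""
    n = len(genotypes)
    m = len(genotypes[0])
    p = [sum(ind[j] for ind in genotypes) / (2 * n) for j in range(m)]
    Z = [[ind[j] - 2 * p[j] for j in range(m)] for ind in genotypes]
    denom = 2 * sum(pj * (1 - pj) for pj in p)
    return [[sum(Z[a][j] * Z[b][j] for j in range(m)) / denom
             for b in range(n)] for a in range(n)]

def least_related_pair(G):
    """Greedy mate allocation sketch: pick the pair with the smallest relationship."""
    n = len(G)
    return min(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda ab: G[ab[0]][ab[1]])
```

Off-diagonal entries of G estimate twice the coancestry between candidates, so mating the least-related pair is the one-pair special case of minimising expected progeny inbreeding.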

