20 similar records found; search took 15 ms
1.
Xiaodong Chen Vikram Sadineni Mita Maity Yong Quan Matthew Enterline Rao V. Mantri 《AAPS PharmSciTech》2015,16(6):1317-1326
Lyophilization is an approach commonly undertaken to formulate drugs that are too unstable to be commercialized as ready-to-use (RTU) solutions. One of the important aspects of commercializing a lyophilized product is to transfer the process parameters developed in a lab-scale lyophilizer to commercial scale without a loss in product quality. This transfer is often accomplished by costly engineering runs or through an iterative process at the commercial scale. Here, we highlight a combined computational and experimental approach to predict commercial process parameters for the primary drying phase of lyophilization. Heat and mass transfer coefficients are determined experimentally, either by manometric temperature measurement (MTM) or by sublimation tests, and used as inputs for the finite element model (FEM)-based software PASSAGE, which computes various primary drying parameters such as primary drying time and product temperature. Because the heat and mass transfer coefficients vary with lyophilizer scale, we present an approach for applying appropriate scaling factors when moving from lab scale to commercial scale; as a result, one can predict commercial-scale primary drying time from these parameters. Additionally, the model-based approach presented in this study provides a way to monitor pharmaceutical product robustness and accidental process deviations during lyophilization to support commercial supply-chain continuity. The approach presented here provides a robust lyophilization scale-up strategy, and because it is simple and minimalistic, it is also a less capital-intensive path that requires minimal use of expensive drug substance/active material.
2.
The potential of random forest and neural networks for biomass and recombinant protein modeling in Escherichia coli fed‐batch fermentations
Michael Melcher Theresa Scharl Bernhard Spangl Markus Luchner Monika Cserjan Karl Bayer Friedrich Leisch Gerald Striedner 《Biotechnology journal》2015,10(11):1770-1782
Product quality assurance strategies in production of biopharmaceuticals currently undergo a transformation from empirical “quality by testing” to rational, knowledge-based “quality by design” approaches. The major challenges in this context are the fragmentary understanding of bioprocesses and the severely limited real-time access to process variables related to product quality and quantity. Data-driven modeling of process variables in combination with model predictive process control concepts represents a potential solution to these problems. The selection of the statistical techniques best qualified for bioprocess data analysis and modeling is a key criterion. In this work, a series of recombinant Escherichia coli fed-batch production processes with varying cultivation conditions, employing a comprehensive on- and offline process monitoring platform, was conducted. The applicability of two machine learning methods, random forest and neural networks, for the prediction of cell dry mass and recombinant protein based on online available process parameters and two-dimensional multi-wavelength fluorescence spectroscopy is investigated. Models solely based on routinely measured process variables give a satisfying prediction accuracy of about ±4% for the cell dry mass, while additional spectroscopic information allows for an estimation of the protein concentration within ±12%. The results clearly argue for a combined approach: neural networks as the modeling technique and random forest as the variable selection tool.
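The modeling recipe this abstract settles on (a forest for structure discovery, a regressor for prediction) can be sketched on synthetic data. Everything below is illustrative: the five "online variables", the toy cell-dry-mass relation, and the stump-based forest (a deliberately minimal stand-in for a full random forest) are assumptions, not the authors' pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y, feats):
    """Best depth-1 regression tree over the candidate features."""
    best = None
    for j in feats:
        for split in np.quantile(X[:, j], [0.25, 0.5, 0.75]):
            left = X[:, j] <= split
            if left.all() or not left.any():
                continue
            sse = y[left].var() * left.sum() + y[~left].var() * (~left).sum()
            if best is None or sse < best[0]:
                best = (sse, j, split, y[left].mean(), y[~left].mean())
    return best[1:]

def fit_forest(X, y, n_trees=300, n_feats=2):
    """Bagged stumps with random feature subsets (a minimal forest)."""
    n, p = X.shape
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, n)                         # bootstrap sample
        feats = rng.choice(p, size=n_feats, replace=False)  # random subset
        trees.append(fit_stump(X[idx], y[idx], feats))
    return trees

def predict(trees, X):
    out = np.zeros(len(X))
    for j, s, lo, hi in trees:
        out += np.where(X[:, j] <= s, lo, hi)
    return out / len(trees)

# Synthetic "process data": 5 online variables, only the first drives biomass.
X = rng.uniform(size=(200, 5))
cdm = 2.0 * X[:, 0] + rng.normal(0.0, 0.1, 200)   # toy cell dry mass, g/L
trees = fit_forest(X[:150], cdm[:150])
pred = predict(trees, X[150:])
```

In this toy setting the forest predictions on held-out batches track the true cell dry mass, and the split variables chosen by the stumps double as a crude variable-selection signal.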
3.
Bengt Autzen 《Biology & philosophy》2011,26(4):567-581
Although Bayesian methods are widely used in phylogenetic systematics today, the foundations of this methodology are still
debated among both biologists and philosophers. The Bayesian approach to phylogenetic inference requires the assignment of
prior probabilities to phylogenetic trees. As in other applications of Bayesian epistemology, the question of whether there
is an objective way to assign these prior probabilities is a contested issue. This paper discusses the strategy of constraining
the prior probabilities of phylogenetic trees by means of the Principal Principle. In particular, I discuss a proposal due
to Velasco (Biol Philos 23:455–473, 2008) of assigning prior probabilities to tree topologies based on the Yule process. By invoking the Principal Principle, I argue that prior probabilities of tree topologies should instead be assigned via a weighted mixture of probability distributions based on Pinelis’ (P Roy Soc Lond B Bio 270:1425–1431, 2003) multi-rate branching process, which includes both the Yule distribution and the uniform distribution. However, I argue that this solves the problem of the priors of phylogenetic trees only in a weak form.
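The disagreement between the two candidate priors is easy to exhibit by simulation. A minimal sketch, assuming nothing beyond the standard pure-birth (Yule) process on four tips: under Yule the balanced four-tip tree shape has probability 1/3, while a uniform prior over the 15 labeled four-tip topologies gives it only 3/15 = 0.2.

```python
import random

random.seed(1)

def yule_shape_balanced(n_tips=4):
    """Grow a Yule tree: start from the root's two lineages, then
    repeatedly split a uniformly chosen tip until n_tips is reached.
    Returns True if the two root subtrees hold equal tip counts."""
    sizes = [1, 1]                      # tips descending from each root child
    while sum(sizes) < n_tips:
        # every current tip is equally likely to be the next to split
        side = 0 if random.random() < sizes[0] / sum(sizes) else 1
        sizes[side] += 1
    return sizes[0] == sizes[1]

n_sim = 30000
p_balanced_yule = sum(yule_shape_balanced() for _ in range(n_sim)) / n_sim

# Uniform prior over the 15 labeled 4-tip topologies: the balanced shape
# covers 3 of them, i.e. probability 0.2; the Yule process gives 1/3.
p_balanced_uniform = 3 / 15
```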
4.
Marginal regression via generalized estimating equations is widely used in biostatistics to model longitudinal data from subjects
whose outcomes and covariates are observed at several time points. In this paper we consider two issues that have been raised
in the literature concerning the marginal regression approach. The first is that even though the past history may be predictive
of outcome, the marginal approach does not use this history. Although marginal regression has the flexibility of allowing
between-subject variations in the observation times, it may lose substantial prediction power in comparison with the transitional
modeling approach that relates the responses to the covariate and outcome histories. We address this issue by using the concept
of “information sets” for prediction to generalize the “partly conditional mean” approach of Pepe and Couper (J. Am. Stat.
Assoc. 92:991–998, 1997). This modeling approach strikes a balance between the flexibility of the marginal approach and the predictive power of transitional
modeling. Another issue is the problem of excess zeros in the outcomes over what the underlying model for marginal regression
implies. We show how our predictive modeling approach based on information sets can be readily modified to handle the excess
zeros in the longitudinal time series. By synthesizing the marginal, transitional, and mixed effects modeling approaches in
a predictive framework, we also discuss how their respective advantages can be retained while their limitations can be circumvented
for modeling longitudinal data.
5.
With a large number of DNA and protein sequences already known, the crucial question is to find out how the biological function
of these macromolecules is "written" in the sequence of nucleotides or amino acids. Biological processes in any living organism
are based on selective interactions between particular bio-molecules, mostly proteins. The rules governing the coding of a
protein's biological function, i.e. its ability to selectively interact with other molecules, are still not elucidated. In
addition, with the rapid accumulation of databases of protein primary structures, there is an urgent need for theoretical
approaches that are capable of analysing protein structure-function relationships. The Resonant Recognition Model (RRM) [1, 2] is one attempt to identify the selectivity of protein interactions within the amino acid sequence. The RRM is a physico-mathematical approach that interprets protein sequence linear information using digital signal processing methods.
In the RRM the protein primary structure is represented as a numerical series by assigning to each amino acid in the sequence
a physical parameter value relevant to the protein's biological activity. The RRM concept is based on the finding that there
is a significant correlation between spectra of the numerical representation of amino acids and their biological activity. Once
the characteristic frequency for a particular protein function/interaction is identified, it is possible then to utilize the
RRM approach to predict the amino acids in the protein sequence, which predominantly contribute to this frequency and thus,
to the observed function, as well as to design de novo peptides having the desired periodicities. As shown in our previous studies of fibroblast growth factor (FGF) peptidic antagonists [2, 3] and human immunodeficiency virus (HIV) envelope agonists [2, 4], such de novo designed peptides exhibit the desired biological function. This study utilises the RRM computational approach to the analysis
of oncogene and proto-oncogene proteins. The results obtained have shown that the RRM is capable of identifying the differences
between the oncogenic and proto-oncogenic proteins with the possibility of identifying the "cancer-causing" features within
their protein primary structure. In addition, the rational design of bioactive peptide analogues displaying oncogenic or proto-oncogenic-like
activity is presented here.
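The core RRM computation, mapping residues to numbers and looking for a shared spectral peak, can be sketched as below. The sequences are toy constructions, and the residue values should be read as EIIP-like illustrative numbers rather than an authoritative table:

```python
import numpy as np

# EIIP-style values (eV) for the residues used below; treat the exact
# numbers as illustrative rather than authoritative.
EIIP = {'D': 0.1263, 'Q': 0.0761, 'L': 0.0000, 'R': 0.0959}

def numeric_series(seq):
    x = np.array([EIIP[a] for a in seq])
    return x - x.mean()                # remove the DC component

def cross_spectrum(seq1, seq2):
    """Magnitude of the cross-spectrum of two numeric protein series."""
    f1 = np.fft.rfft(numeric_series(seq1))
    f2 = np.fft.rfft(numeric_series(seq2))
    return np.abs(f1 * np.conj(f2))

# Two toy "proteins" sharing a period-4 motif: the common characteristic
# frequency should show up at bin k = 64 / 4 = 16.
seq_a = "DQLQ" * 16
seq_b = "RQLQ" * 16
spec = cross_spectrum(seq_a, seq_b)
char_freq = int(np.argmax(spec[1:]) + 1)   # skip the k = 0 bin
```

A peak common to a whole functional group of sequences, rather than to just two, is what the RRM treats as the characteristic frequency of that function.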
6.
Over the last few decades, much effort has been devoted to developing approaches for identifying good predictions of RNA secondary structure. This is because most computational prediction methods based on free energy minimization compute a number of suboptimal foldings, and the native folding must be identified among all these possible secondary structures. Using
the abstract shapes approach as introduced by Giegerich et al. (Nucleic Acids Res 32(16):4843–4851, 2004), each class of similar secondary structures is represented by one shape and the native structures can be found among the
top shape representatives. In this article, we derive some interesting results answering enumeration problems for abstract
shapes and secondary structures of RNA. We compute precise asymptotics for the number of different shape representations of
size n and for the number of different shapes showing up when abstracting from secondary structures of size n under a combinatorial point of view. A more realistic model taking primary structures into account remains an open challenge.
We give some arguments why the present techniques cannot be applied in this case.
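A minimal sketch of the shape abstraction itself (not Giegerich et al.'s implementation): drop unpaired positions and collapse each run of stacked base pairs into one bracket pair; helix-interrupting loops start a new bracket, as in the more detailed shape levels.

```python
def db_shape(db):
    """Abstract shape of a dot-bracket secondary structure: unpaired
    positions are dropped and each run of stacked base pairs becomes a
    single [] pair."""
    stack, close_of, open_of = [], {}, {}
    for i, c in enumerate(db):
        if c == '(':
            stack.append(i)
        elif c == ')':
            j = stack.pop()
            close_of[j], open_of[i] = i, j
    out = []
    for i, c in enumerate(db):
        # emit a bracket only when the pair is not stacked directly on
        # its outer neighbor, i.e. it opens a new helix
        if c == '(' and close_of.get(i - 1) != close_of[i] + 1:
            out.append('[')
        elif c == ')' and close_of.get(open_of[i] - 1) != i + 1:
            out.append(']')
    return ''.join(out)

# A hairpin inside a bulged outer helix, and a two-helix multiloop:
examples = {
    "...((...))...": "[]",
    "((..((...))..))": "[[]]",
    "(()(()))": "[[][]]",
}
```

Grouping predicted foldings by such shape strings is what lets the native structure be sought among a handful of shape representatives instead of thousands of suboptimal foldings.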
7.
In pharmaceutical tablet manufacturing processes, a major source of disturbance affecting drug product quality is the (lot-to-lot)
variability of the incoming raw materials. A novel modeling and process optimization strategy that compensates for raw material
variability is presented. The approach involves building partial least squares models that combine raw material attributes
and tablet process parameters and relate these to final tablet attributes. The resulting models are used in an optimization
framework to then find optimal process parameters which can satisfy all the desired requirements for the final tablet attributes,
subject to the incoming raw material lots. To de-risk the effect of potential (lot-to-lot) raw material variability on drug product quality, the effect of raw material lot variability on the final tablet attributes was investigated using
a raw material database containing a large number of lots. In this way, the raw material variability, optimal process parameter
space and tablet attributes are correlated with each other and offer the opportunity of simulating a variety of changes in silico without actually performing experiments. The connectivity obtained between the three sources of variability (materials, parameters,
attributes) can be considered a design space consistent with Quality by Design principles, as defined by the ICH Q8 guidance (FDA 2006). The effectiveness of the methodology is illustrated through a common industrial tablet manufacturing case study.
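The modeling half of this strategy can be sketched with a bare-bones PLS1 (NIPALS) fit; the data, the split into "raw material attributes" and "process parameters", and the linear ground truth are all synthetic assumptions:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal PLS1 (NIPALS with deflation); returns the regression
    coefficients for mean-centered data."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Xd, yd = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd                  # weight vector for this component
        w /= np.linalg.norm(w)
        t = Xd @ w                     # scores
        tt = t @ t
        p = Xd.T @ t / tt              # X loadings
        c = yd @ t / tt                # y loading
        Xd = Xd - np.outer(t, p)       # deflate
        yd = yd - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(2)
# Columns 0-2: raw material attributes; 3-4: process parameters (toy).
X = rng.normal(size=(80, 5))
beta_true = np.array([1.0, -0.5, 0.0, 2.0, 0.3])
y = X @ beta_true + rng.normal(0.0, 0.01, 80)   # a final tablet attribute
beta = pls1_fit(X, y, n_comp=5)
```

In the optimization step described above, the raw-material columns of such a model are fixed at each incoming lot's measured values and only the process-parameter columns are searched.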
8.
Various approaches have been applied to optimize biological product fermentation processes and define the design space. In this article, we present a stepwise approach to optimize a Saccharomyces cerevisiae fermentation process through risk assessment analysis, statistical design of experiments (DoE), and a multivariate Bayesian predictive approach. The critical process parameters (CPPs) were first identified through a risk assessment. The response surface for each attribute was modeled using the results from the DoE study, with consideration given to interactions between CPPs. A multivariate Bayesian predictive approach was then used to identify the region of process operating conditions where all attributes met their specifications simultaneously. The model prediction was verified by twelve consistency runs, in which all batches achieved a broth titer above 1.53 g/L and quality attributes within the expected ranges. The calculated probability was used to define the reliable operating region. To our knowledge, this is the first case study to implement the multivariate Bayesian predictive approach for process optimization in an industrial application, with verification at two different production scales. This approach can be extended to the optimization of other fermentation processes and the quantitation of reliable operating regions. © 2012 American Institute of Chemical Engineers Biotechnol. Prog., 28: 1095–1105, 2012
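The multivariate Bayesian predictive step can be sketched by Monte Carlo: given posterior draws for each attribute model, estimate the joint probability that all attributes meet specification at a candidate setpoint. All numbers below (the models, the purity spec, the single CPP) are invented for illustration; only the titer limit of 1.53 g/L echoes the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Posterior draws for two attribute models (toy numbers): each attribute
# is linear in one CPP ("induction temperature"), with parameter
# uncertainty represented by the spread of the draws.
n_draw = 4000
titer_slope  = rng.normal(0.10, 0.01, n_draw)   # g/L per degC
titer_icept  = rng.normal(0.20, 0.05, n_draw)
purity_slope = rng.normal(-0.8, 0.1, n_draw)    # % per degC
purity_icept = rng.normal(118.0, 1.0, n_draw)

def p_all_in_spec(temp):
    """Posterior predictive probability that every attribute meets spec."""
    titer = titer_icept + titer_slope * temp + rng.normal(0, 0.05, n_draw)
    purity = purity_icept + purity_slope * temp + rng.normal(0, 1.0, n_draw)
    ok = (titer >= 1.53) & (purity >= 95.0)
    return ok.mean()

# Scan candidate setpoints; the reliable operating region is wherever the
# joint probability stays above a chosen threshold (e.g. 0.9).
temps = np.arange(20.0, 31.0)
probs = {t: p_all_in_spec(t) for t in temps}
```

This joint probability is exactly what an overlapping contour plot cannot give: the overlap region ignores correlation and parameter uncertainty, while the predictive probability accounts for both.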
9.
Development of a scale down cell culture model using multivariate analysis as a qualification tool
Valerie Liu Tsang Angela X. Wang Helena Yusuf‐Makagiansar Thomas Ryll 《Biotechnology progress》2014,30(1):152-160
In characterizing a cell culture process to support regulatory activities such as process validation and Quality by Design, developing a representative scale-down model for design space definition is of great importance. The manufacturing bioreactor should ideally reproduce bench-scale performance with respect to all measurable parameters. However, due to intrinsic geometric differences between scales, process performance at manufacturing scale often varies from bench-scale performance, typically exhibiting differences in parameters such as cell growth, protein productivity, and/or dissolved carbon dioxide concentration. Here, we describe a case study in which a bench-scale cell culture process model is developed to mimic historical manufacturing-scale performance for a late-stage CHO-based monoclonal antibody program. Using multivariate analysis (MVA) as the primary data analysis tool, in addition to traditional univariate analysis techniques, to identify gaps between scales, process adjustments were implemented at bench scale, resulting in an improved scale-down cell culture process model. Finally, we propose an approach for small-scale model qualification comprising three main aspects: MVA, comparison of key physiological rates, and comparison of product quality attributes. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 30:152–160, 2014
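One common MVA qualification device is to build a principal component model on the bench-scale batches and check where a manufacturing batch falls in that score space. A numpy sketch with synthetic batch summaries; the variable set, the numbers, and the T²-style statistic are assumptions, not the authors' analysis:

```python
import numpy as np

rng = np.random.default_rng(4)

# Bench-scale batch summaries: rows = batches, columns = process outputs
# (e.g. peak VCD, final titer, pCO2); all numbers are synthetic.
bench = rng.normal([10.0, 5.0, 60.0], [0.5, 0.3, 4.0], size=(30, 3))

mu, sd = bench.mean(axis=0), bench.std(axis=0)
Z = (bench - mu) / sd
# principal components via SVD of the autoscaled bench data
_, s, Vt = np.linalg.svd(Z, full_matrices=False)
scores_var = (s ** 2) / (len(Z) - 1)      # variance of each score

def t2(batch):
    """Hotelling-style T2 of a new batch in the bench-scale model."""
    t = ((batch - mu) / sd) @ Vt.T
    return float(np.sum(t ** 2 / scores_var))

matched = np.array([10.1, 5.1, 61.0])   # manufacturing batch close to bench
shifted = np.array([10.1, 5.1, 85.0])   # elevated pCO2, a typical scale gap
```

A manufacturing batch with a large T² pinpoints, via its score contributions, which variables the bench model fails to reproduce and therefore where process adjustments are needed.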
10.
Sarah J. Converse J. Andrew Royle Peter H. Adler Richard P. Urbanek Jeb A. Barzen 《Ecology and evolution》2013,3(13):4439-4447
Nest success is a critical determinant of the dynamics of avian populations, and nest survival modeling has played a key role in advancing avian ecology and management. Beginning with the development of daily nest survival models, and proceeding through subsequent extensions, the capacity for modeling the effects of hypothesized factors on nest survival has expanded greatly. We extend nest survival models further by introducing an approach to deal with incompletely observed, temporally varying covariates using a hierarchical model. Hierarchical modeling offers a way to separate process and observational components of demographic models to obtain estimates of the parameters of primary interest, and to evaluate structural effects of ecological and management interest. We built a hierarchical model for daily nest survival to analyze nest data from reintroduced whooping cranes (Grus americana) in the Eastern Migratory Population. This reintroduction effort has been beset by poor reproduction, apparently due primarily to nest abandonment by breeding birds. We used the model to assess support for the hypothesis that nest abandonment is caused by harassment from biting insects. We obtained indices of blood‐feeding insect populations based on the spatially interpolated counts of insects captured in carbon dioxide traps. However, insect trapping was not conducted daily, and so we had incomplete information on a temporally variable covariate of interest. We therefore supplemented our nest survival model with a parallel model for estimating the values of the missing insect covariates. We used Bayesian model selection to identify the best predictors of daily nest survival. Our results suggest that the black fly Simulium annulus may be negatively affecting nest survival of reintroduced whooping cranes, with decreasing nest survival as abundance of S. annulus increases. 
The modeling framework we have developed will be applied in the future to a larger data set to evaluate the biting‐insect hypothesis and other hypotheses for nesting failure in this reintroduced population; resulting inferences will support ongoing efforts to manage this population via an adaptive management approach. Wider application of our approach offers promise for modeling the effects of other temporally varying, but imperfectly observed covariates on nest survival, including the possibility of modeling temporally varying covariates collected from incubating adults.
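The structure of the model, a daily survival probability driven by an insect covariate that is only observed on trap days, can be sketched as follows. The logistic form, the coefficients, and the linear interpolation (standing in for the parallel imputation model) are illustrative assumptions:

```python
import math

def daily_survival(insects, b0=4.0, b1=-0.4):
    """Daily nest survival on the logit scale; b1 < 0 encodes the
    biting-insect hypothesis (more insects, lower survival). Toy values."""
    eta = b0 + b1 * insects
    return 1.0 / (1.0 + math.exp(-eta))

def interval_survival(insect_series):
    """Probability a nest survives the whole interval: the product of
    daily survival probabilities, one per day of exposure."""
    p = 1.0
    for x in insect_series:
        p *= daily_survival(x)
    return p

# Insect index observed on trap days only; linear interpolation stands in
# for the hierarchical model's imputation of the missing days.
obs = {0: 1.0, 4: 5.0, 8: 2.0}
series = []
for d in range(9):
    lo = max(k for k in obs if k <= d)
    hi = min(k for k in obs if k >= d)
    w = 0.0 if hi == lo else (d - lo) / (hi - lo)
    series.append(obs[lo] * (1 - w) + obs[hi] * w)

p_calm  = interval_survival([1.0] * 9)    # constant low insect pressure
p_bloom = interval_survival(series)       # interval spanning an insect peak
```

The hierarchical version replaces the interpolation with a proper submodel for the covariate, so the uncertainty of the missing insect values propagates into the survival estimates instead of being ignored.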
11.
Shelly A. Pizarro Rachel Dinges Rachel Adams Ailen Sanchez Charles Winter 《Biotechnology and bioengineering》2009,104(2):340-351
Process analytical technology (PAT) is an initiative from the US FDA combining analytical and statistical tools to improve manufacturing operations and ensure regulatory compliance. This work describes the use of a continuous monitoring system for a protein refolding reaction to provide consistency in product quality and process performance across batches. A small-scale bioreactor (3 L) is used to understand the impact of aeration on refolding recombinant human vascular endothelial growth factor (rhVEGF) in a reducing environment. A reverse-phase HPLC assay is used to assess product quality. The goal in understanding the oxygen needs of the reaction and their impact on quality is to make a product that is efficiently refolded to its native and active form with minimum oxidative degradation from batch to batch. Because this refolding process is heavily dependent on oxygen, the % dissolved oxygen (DO) profile is explored as a PAT tool to regulate process performance at commercial manufacturing scale. A dynamic gassing-out approach using constant mass transfer (kLa) is used for scale-up of the aeration parameters to manufacturing-scale tanks (2,000 L, 15,000 L). The resulting DO profiles of the refolding reaction show similar trends across scales, and these are analyzed using rpHPLC. The desired product quality attributes are then achieved through alternating air and nitrogen sparging triggered by changes in the monitored DO profile. This approach mitigates the impact of differences in equipment or feedstock components between runs, and is directly in line with the key goal of PAT to “actively manage process variability using a knowledge-based approach.” Biotechnol. Bioeng. 2009; 104: 340–351 © 2009 Wiley Periodicals, Inc.
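The dynamic gassing-out method mentioned above rests on a simple first-order model: after a nitrogen purge, dC/dt = kLa·(C* − C), so the log oxygen deficit decays linearly with slope −kLa. A sketch of recovering kLa from a synthetic re-oxygenation curve (all numbers assumed):

```python
import numpy as np

# Synthetic DO re-oxygenation trace after a nitrogen purge.
kla_true, csat = 18.0, 100.0          # 1/h, % air saturation (toy values)
t = np.linspace(0.0, 0.25, 40)        # h
do = csat * (1.0 - np.exp(-kla_true * t))
do_noisy = do + np.random.default_rng(5).normal(0.0, 0.3, t.size)

# Fit the log-linearized deficit over the early part of the curve,
# where csat - C is still well above the sensor noise floor.
mask = do_noisy < 0.9 * csat
slope, _ = np.polyfit(t[mask], np.log(csat - do_noisy[mask]), 1)
kla_est = -slope
```

Matching kLa across scales (rather than sparge rate or tip speed) is what makes the DO profile comparable between the 3 L and the 2,000-15,000 L vessels.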
12.
Vincent Brunner Manuel Siegl Dominik Geier Thomas Becker 《Biotechnology and bioengineering》2020,117(9):2749-2759
A common control strategy for the production of recombinant proteins in Pichia pastoris using the alcohol oxidase 1 (AOX1) promoter is to separate the bioprocess into two main phases: biomass generation on glycerol and protein production via methanol induction. This study reports the establishment of a soft sensor for the prediction of biomass concentration that adapts automatically to these distinct phases. A hybrid approach combining mechanistic (carbon balance) and data-driven modeling (multiple linear regression) is used for this purpose. The model parameters are dynamically adapted according to the current process phase using a multilevel phase detection algorithm. This algorithm is based on the online data of CO2 in the off-gas (absolute value and first derivative) and the cumulative base feed. The evaluation of the model resulted in a mean relative prediction error of 5.52% and an R² of 0.96 for the entire process. The resulting model was implemented as a soft sensor for the online monitoring of the P. pastoris bioprocess. The soft sensor can be used for quality control and as input to process control systems, for example, for methanol control.
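A stripped-down version of the idea (detect the phase switch from the off-gas CO2 signal, then apply a phase-specific data-driven model) might look as follows; the signals, the single-level detector, and the linear biomass relations are toy assumptions, and the mechanistic carbon-balance half of the hybrid is omitted:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic online data: off-gas CO2 rises during the glycerol batch phase
# and drops sharply when glycerol runs out, before methanol induction.
n = 200
co2 = np.concatenate([np.linspace(0.5, 3.0, 120),     # glycerol growth
                      np.linspace(1.0, 2.0, 80)])     # methanol induction
co2 += rng.normal(0.0, 0.02, n)

phase_true = np.arange(n) >= 120
biomass = np.where(phase_true, 30.0 + 5.0 * co2, 8.0 * co2)
biomass = biomass + rng.normal(0.0, 0.1, n)

# Phase detection (one level of the multilevel scheme): the induction
# phase starts at the large negative step in the CO2 first derivative.
d_co2 = np.diff(co2)
switch = int(np.argmin(d_co2)) + 1

# Phase-specific linear models for biomass, the data-driven half of the
# hybrid soft sensor.
models = {}
for name, sel in (("glycerol", ~phase_true), ("methanol", phase_true)):
    slope, icept = np.polyfit(co2[sel], biomass[sel], 1)
    models[name] = (slope, icept)
```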
13.
Freeze-drying is a relatively expensive process requiring long processing times, and hence one of the key objectives during freeze-drying process development is to minimize the primary drying time, which is the longest of the three steps in freeze-drying.
However, increasing the shelf temperature into secondary drying before all of the ice is removed from the product will likely
cause collapse or eutectic melt. Thus, from both a product quality and a process economics standpoint, it is critical to detect the end of primary drying. Experiments were conducted with 5% mannitol and 5% sucrose as model systems. The apparent
end point of primary drying was determined by comparative pressure measurement (i.e., Pirani vs. MKS Baratron), dew point, Lyotrack (gas plasma spectroscopy), water concentration from tunable diode laser absorption spectroscopy,
condenser pressure, pressure rise test (manometric temperature measurement or variations of this method), and product thermocouples.
Vials were pulled out from the drying chamber using a sample thief during late primary and early secondary drying to determine
percent residual moisture either gravimetrically or by Karl Fischer, and the cake structure was determined visually for melt-back,
collapse, and retention of cake structure at the apparent end point of primary drying (i.e., onset, midpoint, and offset).
By far, the Pirani gauge is the best choice of the methods tested for evaluating the end point of primary drying. It is also a batch technique that is cheap, steam sterilizable, and easy to install without requiring any modification to the existing dryer.
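The comparative pressure measurement works because a Pirani gauge, calibrated for nitrogen, overreads in water vapor (by roughly 1.6×) and converges to the capacitance manometer reading as sublimation ends. A sketch of reading onset/midpoint/offset from a synthetic trace; the curve shape and the threshold values are assumptions:

```python
import numpy as np

# Synthetic primary-drying trace: the Pirani gauge overreads the
# capacitance manometer while water vapor dominates the chamber gas,
# then converges to it once sublimation is complete.
t = np.linspace(0.0, 30.0, 301)              # hours
baratron = np.full_like(t, 100.0)            # mTorr, chamber pressure control
pirani = 100.0 * (1.0 + 0.6 / (1.0 + np.exp((t - 20.0) / 0.8)))

ratio = pirani / baratron
# Onset / midpoint / offset of the Pirani drop, by threshold crossing
# (np.argmax returns the first True index):
onset    = t[np.argmax(ratio < 1.55)]
midpoint = t[np.argmax(ratio < 1.30)]
offset   = t[np.argmax(ratio < 1.05)]
```

Pulling vials at these three points, as described above, is what ties the gauge-based end point to gravimetric or Karl Fischer residual moisture.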
14.
Saziye Bayram Tracy L. Stepien E. Bruce Pitman 《Bulletin of mathematical biology》2009,71(6):1482-1506
This paper presents a mathematical model of a system of many coupled nephrons branching from a common cortical radial artery,
and accompanying analysis of that system. This modeling effort is a first step in understanding how coupling magnifies the
tendency of nephrons to oscillate owing to tubuloglomerular feedback. Central to the present work is the single nephron integral
model (as in Pitman et al., The IMA Volumes in Mathematics and Its Applications, vol. 129, pp. 345–364, 2002 and in Zaritski, Ph.D. Dissertation, 1999) which is a simplification of the single nephron PDE model of Layton et al. (Am. J. Physiol. 261, F904–F919, 1991). A second principal idea used in the present model is a coupling of model nephrons, generalizing the work of Pitman et al.
(Bull. Math. Biol. 66, 1463–1492, 2004) who proposed a model of two coupled nephrons. In this study, we couple nephrons through a nearest neighbor interaction.
Speaking generally, our results suggest that a series of similar nephrons coupled to their nearest neighbors are more prone
to be found in an oscillatory mode, relative to a single nephron with the same properties. More specifically, we show analytically
that, for N coupled identical nephrons, the region supporting oscillatory solutions in the time delay–gain parameter plane increases
with N. Numerical simulations suggest that, if N nephrons have gains and time delays that do not differ by much, the system is, again, more prone to oscillate, relative to
a single nephron, and the oscillations tend to be approximately synchronous and in-phase. We examine the effect of parameters
on bifurcation. We also examine alternative models of coupling; this analysis allows us to conclude that the increased propensity
of coupled nephrons to oscillate is a robust finding, true for several models of nephron interaction.
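The qualitative finding (delayed feedback plus nearest-neighbor coupling promotes oscillation) can be reproduced with a toy delay equation, not the paper's integral model: dx_i/dt = −x_i + tanh(−g·u_i(t−τ)), where u_i mixes a nephron's delayed state with its neighbors'. The gains, delay, and coupling strength below are invented:

```python
import numpy as np

def simulate(gain, n_neph=5, tau=2.0, coup=0.2, dt=0.01, t_end=120.0):
    """Euler integration of a toy delayed-feedback nephron chain with
    nearest-neighbor coupling through the delayed signal."""
    steps = int(t_end / dt)
    lag = int(tau / dt)
    x = np.zeros((steps, n_neph))
    x[:lag + 1] = 0.5                         # constant history
    for k in range(lag, steps - 1):
        u = x[k - lag].copy()
        u[1:] += coup * x[k - lag, :-1]       # left neighbor
        u[:-1] += coup * x[k - lag, 1:]       # right neighbor
        x[k + 1] = x[k] + dt * (-x[k] + np.tanh(-gain * u))
    return x

quiet = simulate(gain=0.5)   # below the oscillation threshold
osc = simulate(gain=5.0)     # well above it
# Sustained oscillation shows up as nonvanishing late-time variability.
amp_quiet = quiet[-2000:].std()
amp_osc = osc[-2000:].std()
```

Coupling raises the effective loop gain each nephron sees, which is the intuition behind the enlarged oscillatory region in the delay-gain plane reported above.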
15.
Summary: We develop a new Bayesian approach of sample size determination (SSD) for the design of noninferiority clinical trials. We extend the fitting and sampling priors of Wang and Gelfand (2002, Statistical Science 17, 193–208) to Bayesian SSD with a focus on controlling the type I error and power. Historical data are incorporated via a hierarchical modeling approach as well as the power prior approach of Ibrahim and Chen (2000, Statistical Science 15, 46–60). Various properties of the proposed Bayesian SSD methodology are examined and a simulation-based computational algorithm is developed. The proposed methodology is applied to the design of a noninferiority medical device clinical trial with historical data from previous trials.
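The simulation-based skeleton of such an SSD procedure can be sketched as below, here with a vague normal fitting prior and a point-mass sampling prior instead of the paper's hierarchical and power priors; the margin, variance, and thresholds are toy numbers:

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def bayesian_power(n_per_arm, margin=-0.2, sigma=1.0, n_sim=2000,
                   true_diff=0.0, prior_var=100.0, thresh=0.975):
    """Sampling prior: trials simulated at true_diff. Fitting prior: a
    vague N(0, prior_var) prior on the treatment difference. 'Success'
    means the posterior probability of noninferiority exceeds thresh."""
    wins = 0
    lik_var = 2.0 * sigma ** 2 / n_per_arm      # var of the mean difference
    for _ in range(n_sim):
        diff_hat = rng.normal(true_diff, math.sqrt(lik_var))
        post_var = 1.0 / (1.0 / prior_var + 1.0 / lik_var)
        post_mean = post_var * (diff_hat / lik_var)   # prior mean 0
        # P(theta > margin | data) under the normal posterior
        z = (post_mean - margin) / math.sqrt(post_var)
        if 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))) > thresh:
            wins += 1
    return wins / n_sim

p100 = bayesian_power(100)
p400 = bayesian_power(400)   # larger trials clear the threshold more often
```

The SSD step then searches for the smallest n_per_arm whose estimated power exceeds the design target (e.g. 0.8), with the type I error checked by rerunning the same simulation at true_diff equal to the margin.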
16.
Characterization of a Saccharomyces cerevisiae fermentation process for production of a therapeutic recombinant protein using a multivariate Bayesian approach
Zhibiao Fu Daniel Baker Aili Cheng Julie Leighton Edward Appelbaum Juan Aon 《Biotechnology progress》2016,32(3):799-812
The principle of quality by design (QbD) has been widely applied to biopharmaceutical manufacturing processes. Process characterization is an essential step in implementing the QbD concept to establish the design space and to define the proven acceptable ranges (PAR) for critical process parameters (CPPs). In this study, we present characterization of a Saccharomyces cerevisiae fermentation process using risk assessment analysis, statistical design of experiments (DoE), and the multivariate Bayesian predictive approach. The critical quality attributes (CQAs) and CPPs were identified with a risk assessment. The statistical model for each attribute was established using the results from the DoE study, with consideration given to interactions between CPPs. Both the conventional overlapping contour plot and the multivariate Bayesian predictive approaches were used to establish the region of process operating conditions where all attributes met their specifications simultaneously. The quantitative Bayesian predictive approach was chosen to define the PARs for the CPPs, which apply to the manufacturing control strategy. Experience from process validation at the 10,000 L manufacturing scale, including 64 continued process verification batches, indicates that the CPPs remain in a state of control and within the established PARs. The end product quality attributes were within their drug substance specifications. The probability generated with the Bayesian approach was also used as a tool to assess CPP deviations. This approach can be extended to the characterization of other production processes and the quantitation of reliable operating regions. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 32:799–812, 2016
17.
Elvira M. Erhardt Moreno Ursino Jeike Biewenga Tom Jacobs Mauro Gasparini 《Biometrical journal. Biometrische Zeitschrift》2019,61(5):1104-1119
The primary goal of “in vitro–in vivo correlation” (IVIVC) is the reliable prediction of the in vivo serum concentration-time course, based on the in vitro drug dissolution or release profiles. IVIVC methods are particularly appropriate for formulations that are released over an extended period of time or with a lag in absorption, and may support approving a change in formulation of a drug without additional bioequivalence trials in human subjects. Most current IVIVC models are assessed using frequentist methods, such as linear regression, are based on averaged data, and entail complex and potentially unstable mathematical deconvolution. The proposed IVIVC approach includes (a) a nonlinear mixed-effects model for the in vitro release data; (b) a population pharmacokinetic (PK) compartment model for the in vivo immediate release (IR) data; and (c) a system of ordinary differential equations (ODEs), containing the submodels (a) and (b), which approximates and predicts the in vivo controlled release (CR) data. The innovation in this paper consists of splitting the parameter space between submodels (a) and (b) versus (c). Subsequently, the uncertainty in these parameters is accounted for using a Bayesian framework; that is, estimates from the first two submodels serve as priors for the Bayesian hierarchical third submodel. As such, the Bayesian method ensures a natural integration and transfer of knowledge between various sources of information, balancing possible differences in sample size and parameter uncertainty of in vitro and in vivo studies. Consequently, it is a very flexible approach yielding results for a broad range of data situations. The application of the method is demonstrated for a transdermal patch (TD).
18.
Jamie Powers Yan Zhao Shuo Lin Edward R. B. McCabe 《Development genes and evolution》2009,219(8):419-425
Zebrafish teeth develop on pharyngeal jaws in the 5th branchial arch, but early tooth development is remarkably similar to
mammals (Borday-Birraux et al., Evol Dev 8:130, 2006). Recently, eve1 has been shown to be associated with the primary tooth (4V1) and early ameloblast development, the enamel organ precursor (Laurenti et al., Dev Dyn 230:727, 2004). dax1 is initially expressed in the 5th branchial arch in zebrafish at approximately 26 h postfertilization (hpf) and colocalizes
with eve1 expression at ~48 hpf. Embryos injected with dax1 morpholino show downregulation of eve1 expression. Based on the zebrafish observations, we demonstrated novel DAX1 expression in normal human dental, benign ameloblastoma, and malignant ameloblastoma tissues. The association of NR0B1 and its protein product DAX1 with primary tooth development and ameloblastoma tumorigenesis has not previously been described.
19.
Template-based modeling that employs various meta-threading techniques is currently the most accurate, and consequently the most commonly used, approach for protein structure prediction. Despite the evident progress in this field, accurate structure models cannot be constructed for a significant fraction of gene products, and thus the development of new algorithms is required. Here, we describe the development, optimization and large-scale benchmarking of eThread, a highly accurate meta-threading procedure for the identification of structural templates and the construction of corresponding target-to-template alignments. eThread integrates ten state-of-the-art threading/fold recognition algorithms in a local environment and extensively uses various machine learning techniques to carry out fully automated template-based protein structure modeling. Tertiary structure prediction employs two protocols based on widely used modeling algorithms: Modeller and TASSER-Lite. As part of eThread, we also developed eContact, a Bayesian classifier for the prediction of inter-residue contacts, and eRank, which effectively ranks the generated protein models and provides reliable confidence estimates as structure quality assessment. Excluding closely related templates from the modeling process, eThread generates models that are correct at the fold level for >80% of the targets; 40–50% of the constructed models are of very high quality and would be considered accurate at the family level. Furthermore, in large-scale benchmarking, we compare the performance of eThread to several alternative methods commonly used in protein structure prediction. Finally, we estimate the upper bound for this type of approach and discuss directions for further improvement.
20.
The taxonomy and phylogeny of the cyprinid genus Opsariichthys Bleeker (Teleostei: Cyprinidae) from Taiwan, with description of a new species (total citations: 1; self-citations: 0; citations by others: 1)
Morphological and mitochondrial genetic differentiation in the cyprinid genus Opsariichthys Bleeker (Nederlandsch Tijdschrift voor de Dierkunde 1:187–218, 1863) has been surveyed in Taiwan. Three valid species can be recognized there: Opsariichthys pachycephalus Günther (1868), distributed in northern and western Taiwan; Opsariichthys evolans (Jordan and Evermann Proc US Nat Mus 25:315–368, 1902), in northern Taiwan; and an unnamed species from southern Taiwan, described herein as Opsariichthys kaopingensis Chen and Wu, new species, which can be well distinguished from the related O. pachycephalus by body proportions, scale counts, and specific coloration patterns. We used complete mitochondrial D-loop sequence data to infer phylogenetic relationships within a subset of related opsariichthine genera, to examine evidence for genetic differentiation between the two sibling species formerly assigned to “Zacco” pachycephalus, and to assess their genetic relationships with congeneric species from nearby regions. The clade of O. pachycephalus and O. kaopingensis was recovered as genetically more closely related to the Opsariichthys uncirostris (Temminck and Schlegel 1846) species complex, which includes both O. uncirostris and O. bidens Günther (1868) from Japan and mainland China, than to typical Zacco from Japan. This molecular phylogenetic insight strongly supports assigning both so-called “Zacco” pachycephalus and the new species described herein to a monophyletic Opsariichthys, with the type species of Zacco, Zacco platypus (Temminck and Schlegel 1846) from Japan, forming the sister clade to all species groups in Opsariichthys. Opsariichthys pachycephalus and O. kaopingensis were strongly differentiated, with large mitogenetic distances and phylogenetic support from distance, discrete-method, and Bayesian inference based on complete mtDNA D-loop sequences; their average mitogenetic divergence of 3.3% may suggest that the two species separated well before the last glacial period. Opsariichthys evolans appears to be closely related to O. acutipinnis (Bleeker Nederlandsch Tijdschrift voor de Dierkunde 1:187–218, 1863) from the Yangtze River basin.