Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A multitask production system is one in which different subsets of the components can be used to perform distinct functions or tasks. For such a system, some of the components are used intermittently and some may be used continuously. This type of operational protocol is often applied to flexible manufacturing systems. In this paper, we develop models of the operational reliability and availability of multitask systems. We construct models both for the case in which preventive maintenance is used and for the case in which it is not. The models presented here extend existing models to allow the study of the behavior of modern manufacturing equipment.
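The availability idea in the abstract above can be sketched numerically. The sketch below is a minimal illustration, not the paper's model: the component names, availability values, and task subsets are invented, and component failures are assumed independent (each component's availability would come from its MTBF / (MTBF + MTTR)).

```python
from math import prod  # Python 3.8+

# Hypothetical steady-state availabilities of individual components.
component_availability = {"spindle": 0.98, "loader": 0.95, "probe": 0.99}

# Each task uses a different subset of the components (the "multitask" idea).
task_components = {
    "milling": ["spindle", "loader"],
    "inspection": ["probe", "loader"],
}

def task_availability(task):
    """A task is available only if every component it uses is up; with
    independence, that is the product of the component availabilities."""
    return prod(component_availability[c] for c in task_components[task])

print(task_availability("milling"))     # 0.98 * 0.95
print(task_availability("inspection"))  # 0.99 * 0.95
```

Under this independence assumption, tasks sharing a component (here, the loader) have correlated outages even though their availabilities are computed separately.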

2.
Appropriate selection of parents for the development of mapping populations is pivotal to maximizing the power of quantitative trait loci detection. Trait genotypic variation within a family is indicative of the family's informativeness for genetic studies. Accurate prediction of the most useful parental combinations within a species would help guide quantitative genetics studies. We tested the reliability of genotypic and phenotypic distance estimators between pairs of maize inbred lines to predict genotypic variation for quantitative traits within families derived from biparental crosses. We developed 25 families composed of ~200 random recombinant inbred lines each from crosses between a common reference parent inbred, B73, and 25 diverse maize inbreds. Parents and families were evaluated for 19 quantitative traits across up to 11 environments. Genetic distances (GDs) among parents were estimated with 44 simple sequence repeat and 2303 single-nucleotide polymorphism markers. GDs among parents had no predictive value for progeny variation, which is most likely due to the choice of neutral markers. In contrast, we observed for about half of the traits measured a positive correlation between phenotypic parental distances and within-family genetic variance estimates. Consequently, the choice of promising segregating populations can be based on selecting phenotypically diverse parents. These results are congruent with models of genetic architecture that posit numerous genes affecting quantitative traits, each segregating for allelic series, with dispersal of allelic effects across diverse genetic material. This architecture, common to many quantitative traits in maize, limits the predictive value of parental genotypic or phenotypic values on progeny variance.
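As a toy illustration of the two kinds of distance estimator compared above, the sketch below computes a marker-based genetic distance as the proportion of mismatching SNP alleles between two inbreds, plus a Pearson correlation helper of the sort used to relate parental distance to within-family variance. The marker strings and values are hypothetical, not from the study.

```python
def genetic_distance(markers_a, markers_b):
    """Proportion of SNP loci at which two inbred lines carry different alleles."""
    diffs = sum(a != b for a, b in zip(markers_a, markers_b))
    return diffs / len(markers_a)

def pearson(xs, ys):
    """Pearson correlation between paired observations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

b73_profile = "AACGTTAC"   # hypothetical 8-locus marker profile
other_line = "ATCGTAAC"
print(genetic_distance(b73_profile, other_line))  # 2 mismatches / 8 loci = 0.25
```

In the study's setting, one would correlate such distances (or phenotypic parental distances) against within-family variance estimates across the 25 families.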

3.
A major goal of biophysics is to understand the physical mechanisms of biological molecules and systems. Mechanistic models are evaluated based on their ability to explain carefully controlled experiments. By fitting models to data, biophysical parameters that cannot be measured directly can be estimated from experiments. However, many different combinations of model parameters can sometimes explain the observations equally well. In these cases, the model parameters are not identifiable: the experiments have not provided sufficient constraining power to enable unique estimation of their true values. We demonstrate that this pitfall is present even in simple biophysical models. We investigate the underlying causes of parameter non-identifiability and discuss straightforward methods for determining when parameters of simple models can be inferred accurately. However, for models of even modest complexity, more general tools are required to diagnose parameter non-identifiability. We present a method based on Bayesian inference that can be used to establish the reliability of parameter estimates, as well as to yield accurate quantification of parameter confidence.
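The non-identifiability pitfall described above is easy to reproduce with a deliberately contrived toy model (not one from the paper): in y(t) = a·b·e^(-kt), only the product a·b is constrained by the data, so very different (a, b) pairs fit identically.

```python
import math

def model(t, a, b, k):
    # Toy model with a structural non-identifiability: a and b enter only
    # through their product, so data can never separate them.
    return a * b * math.exp(-k * t)

ts = [0.0, 0.5, 1.0, 1.5, 2.0]
data = [model(t, a=2.0, b=3.0, k=1.0) for t in ts]  # noise-free "observations"

def sse(a, b, k=1.0):
    """Sum of squared errors of the model against the synthetic data."""
    return sum((model(t, a, b, k) - y) ** 2 for t, y in zip(ts, data))

# Two very different parameter combinations fit exactly equally well:
print(sse(2.0, 3.0), sse(1.0, 6.0))  # both 0.0: a and b are not identifiable
```

A Bayesian posterior over (a, b) for this model would show a ridge along a·b = const, which is the kind of diagnostic signature the paper's approach is designed to reveal.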

4.
Postnatal growth in birds is traditionally modelled by fitting three-parameter models, namely the logistic, the Gompertz, or the von Bertalanffy model. The purpose of this paper is to address the utility of the Unified-Richards (U-Richards) model. We draw attention to two forms of the U-Richards model and lay down a set of recommendations for the analysis of bird growth, in order to make this model and the associated methods more accessible. We examine the behaviour of the four parameters in each model form and the four derived measurements, and we show that all are easy to interpret and that each parameter controls a single curve characteristic. The two parameters that control the inflection point enable us to compare its placement in two dimensions, 1) inflection value (mass or length at inflection) and 2) inflection time (time since hatching), between data sets (e.g. between biometrics or between species). We also show how the parameter controlling growth rate directly gives the relative growth rate at inflection, and we demonstrate how one can compare growth rates across data sets. The three traditional models, in which the inflection value is fixed to a specific percentage of the upper asymptote, provide incompatible growth-rate coefficients. One of the two forms of the U-Richards model makes it possible to fix not only the upper asymptote (adult value) but also the intersection with the y-axis (hatching value). Fitting the new model forms to data validates the usefulness of interpreting the inflection placement in addition to the growth rate. It also illustrates the advantages and limitations of constraining the upper asymptote (adult value) and the y-axis intersection (hatching value) to fixed values. We show that the U-Richards model can successfully replace some of the commonly used growth models, and we advocate replacing these with the U-Richards model when modelling bird growth.
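The family relationship described above can be sketched with one common parameterization of the Richards curve (the U-Richards paper uses its own re-parameterized forms; this is an illustrative stand-in): W(t) = A·(1 + (d−1)·e^(−k(t−Ti)))^(1/(1−d)) with d ≠ 1. Here A is the upper asymptote, Ti the inflection time, k a rate constant, and d the shape parameter that fixes the inflection value at A·d^(1/(1−d)), which is exactly why the three traditional models imply different, incompatible inflection placements.

```python
import math

def richards(t, A, k, Ti, d):
    """Richards growth curve; d is the shape parameter (d != 1)."""
    return A * (1.0 + (d - 1.0) * math.exp(-k * (t - Ti))) ** (1.0 / (1.0 - d))

A, k, Ti = 100.0, 0.3, 10.0

# d = 2 recovers the logistic: inflection value A/2.
print(richards(Ti, A, k, Ti, d=2.0))      # 50.0
# d = 2/3 recovers von Bertalanffy: inflection value (8/27)*A.
print(richards(Ti, A, k, Ti, d=2 / 3))    # ~29.63
```

The Gompertz case is the limit d → 1, with inflection value A/e; fixing d (as the traditional models implicitly do) is what pins the inflection to a preset fraction of the asymptote.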

5.
Increasingly, applications need to be able to self-reconfigure in response to changing requirements and environmental conditions. Autonomic computing has been proposed as a means for automating software maintenance tasks. As the complexity of adaptive and autonomic systems grows, designing and managing the set of reconfiguration rules becomes increasingly challenging and may produce inconsistencies. This paper proposes an approach that leverages genetic algorithms in the decision-making process of an autonomic system. This approach enables a system to dynamically evolve target reconfigurations at run time that balance tradeoffs between functional and non-functional requirements in response to changing requirements and environmental conditions. A key feature of this approach is the incorporation of system and environmental monitoring information into the genetic algorithm, such that specific changes in the environment automatically drive the evolutionary process towards new viable solutions. We have applied this genetic-algorithm-based approach to the dynamic reconfiguration of a collection of remote data mirrors, demonstrating an effective decision-making method for diffusing data and minimizing operational costs while maximizing data reliability and network performance, even in the presence of link failures.
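A genetic algorithm for evolving reconfigurations can be sketched in a few lines. The sketch below is illustrative only (link counts, costs, and fitness weights are invented, and the fitness is a crude proxy, not the paper's model): each genome is a bit-string stating which candidate network links a data mirror keeps active, and fitness trades reliability against operational cost.

```python
import random

random.seed(1)
N_LINKS = 8
COST = [3, 1, 4, 1, 5, 9, 2, 6]  # hypothetical per-link operational costs

def fitness(genome):
    reliability = sum(genome) / N_LINKS                      # more links, more redundancy
    cost = sum(c for g, c in zip(genome, COST) if g) / sum(COST)
    return 0.7 * reliability - 0.3 * cost                    # assumed trade-off weights

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in range(N_LINKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                       # truncation selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_LINKS)
            child = a[:cut] + b[cut:]                        # one-point crossover
            i = random.randrange(N_LINKS)
            child[i] ^= random.random() < 0.1                # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

In the paper's setting, monitoring data would feed into the fitness function (for example, penalizing failed links), so environmental change automatically redirects the search.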

6.
In large-scale heterogeneous cluster computing systems, processor and network failures are inevitable and can have an adverse effect on applications executing on such systems. One way of taking failures into account is to employ a reliable scheduling algorithm. However, most existing scheduling algorithms for precedence-constrained tasks in heterogeneous systems consider only schedule length and do not adequately satisfy the reliability requirements of tasks. In recognition of this problem, we build an application reliability analysis model based on the Weibull distribution, which can dynamically measure the reliability of tasks executing on a heterogeneous cluster with arbitrary network architectures. We then propose a reliability-driven earliest finish time with duplication scheduling algorithm (REFTD), which incorporates task reliability overhead into scheduling. Furthermore, to improve system reliability, it duplicates a task if the task's hazard rate exceeds a threshold \(\theta \) . The comparison study, based on both randomly generated graphs and the graphs of some real applications, shows that our scheduling algorithm can shorten schedule length and improve system reliability significantly.
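The Weibull-based reliability test can be sketched directly from the standard formulas (the shape, scale, and threshold values below are illustrative, not from the paper): a task running for time t on a processor with Weibull-distributed failures survives with probability R(t) = exp(−(t/scale)^shape), and the duplication rule compares the hazard rate h(t) = (shape/scale)·(t/scale)^(shape−1) against a threshold θ.

```python
import math

def reliability(t, shape, scale):
    """Weibull survival function: probability the processor survives time t."""
    return math.exp(-((t / scale) ** shape))

def hazard(t, shape, scale):
    """Weibull hazard (instantaneous failure) rate at time t."""
    return (shape / scale) * (t / scale) ** (shape - 1)

def should_duplicate(t, shape, scale, theta):
    """Duplicate the task when the hazard rate exceeds the threshold theta."""
    return hazard(t, shape, scale) > theta

shape, scale = 1.5, 100.0  # shape > 1 models wear-out (increasing hazard)
print(reliability(50.0, shape, scale))
print(should_duplicate(50.0, shape, scale, theta=0.02))
```

With shape > 1 the hazard grows with execution time, so longer tasks are the first to cross θ and get duplicated.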

7.
High-performance and distributed computing systems such as peta-scale, grid and cloud infrastructure are increasingly used for running scientific models and business services. These systems experience large availability variations through hardware and software failures. Resource providers need to account for these variations while providing the required quality of service (QoS) at appropriate costs in dynamic resource and application environments. Although the performance and reliability of these systems have been studied separately, there has been little analysis of the lost QoS experienced at varying availability levels. In this paper, we present a resource performability model to estimate lost performance and the corresponding cost considerations at varying availability levels. We use the resulting model in a multi-phase planning approach for scheduling a set of deadline-sensitive meteorological workflows atop grid and cloud resources to trade off performance, reliability and cost. We use simulation results driven by failure data collected over the lifetime of high-performance systems to demonstrate how the proposed scheme better accounts for resource availability.

8.
The Category Fluency Test (CFT) provides a sensitive measurement of cognitive capabilities in humans related to retrieval from semantic memory. In particular, it is widely used to assess the progress of cognitive impairment in patients with dementia. Previous research shows that, to a first approximation, the intensity of tested individuals' responses within a standard 60-s test period decays exponentially with time, with faster decay rates for more cognitively impaired patients. Such a decay rate can then be viewed as a global (macro) diagnostic parameter of each test. In the present paper we focus on the statistical properties of the properly de-trended time intervals between consecutive responses (inter-call times) in the Category Fluency Test. In a sense, these properties reflect the local (micro) structure of the response generation process. We find that a good approximation for the distribution of the de-trended inter-call times is provided by the Weibull distribution, a probability distribution that appears naturally in this context as the distribution of a minimum of independent random quantities and is a standard tool in industrial reliability theory. This insight leads us to a new interpretation of the concept of "navigating a semantic space" via patient responses.
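The "minimum of independent random quantities" property mentioned above can be checked numerically: the minimum of n independent Weibull(shape k, scale λ) variables is again Weibull with the same shape and scale λ·n^(−1/k). The sketch below (parameter values are illustrative) compares the empirical and theoretical survival functions at one point.

```python
import math
import random

random.seed(0)
k, lam, n = 1.4, 1.0, 5  # shape, scale, and number of competing "retrievals"

# Sample minima of n independent Weibull draws.
# Note: random.weibullvariate(alpha, beta) takes (scale, shape) in that order.
samples = [min(random.weibullvariate(lam, k) for _ in range(n))
           for _ in range(20000)]

t = 0.3
empirical_survival = sum(s > t for s in samples) / len(samples)
# The minimum is Weibull(shape k, scale lam * n**(-1/k)):
theoretical_survival = math.exp(-((t / (lam * n ** (-1 / k))) ** k))
print(empirical_survival, theoretical_survival)  # should be close
```

This closure-under-minima property is why the Weibull is natural whenever an observed time is the first of several competing latent completion times, as in the semantic-retrieval interpretation above.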

9.
J. V. Greenman & T. G. Benton, Oikos 2005, 110(2): 369–389
Much research effort has been devoted to the study of the interaction between environmental noise and discrete-time nonlinear dynamical systems. A large part of this effort has involved numerical simulation of simple unstructured models for particular ranges of parameter values. While such research is important in encouraging discussion of important ecological issues, it is often unclear how general the conclusions are. However, by restricting attention to weak noise it is possible to obtain analytical results that hold for essentially all discrete-time models and still provide considerable insight into the properties of the noise-dynamics interface. We follow this approach, focusing on the autocorrelation properties of the population fluctuations, using the power (frequency) spectrum matrix as the analytic framework. We study the relationship between the spectral peak structure and the dynamical behaviour of the system, and the modulation of this relationship by its internal structure, acting as an "intrinsic" filter, and by colour in the noise, acting as an "extrinsic" filter. These filters redistribute "power" between frequency components in the spectrum. The analysis emphasises the importance of eigenvalues in the identification of resonance, both in the system itself and in its subsystems, and the importance of noise configuration in defining which paths are followed on the network. The analysis highlights the complexity of the inverse problem (in finding, for example, the source of long-term fluctuations) and the role of factors other than colour in the persistence of populations.
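The weak-noise spectral framework can be illustrated with the simplest linearized case (a scalar stand-in for the paper's matrix formulation, with invented parameter values): for x_{t+1} = a·x_t + ξ_t with white noise of variance s², the power spectrum is S(ω) = s² / |1 − a·e^{−iω}|², red-shifted towards low frequencies when a > 0. The sketch checks this against an averaged periodogram.

```python
import cmath
import math
import random

random.seed(7)
a, s2, T, reps = 0.7, 1.0, 256, 400  # illustrative parameters

def theory(w):
    """Power spectrum of the AR(1) process x_{t+1} = a*x_t + noise."""
    return s2 / abs(1 - a * cmath.exp(-1j * w)) ** 2

def periodogram(x, w):
    """Single-realization spectral estimate |DFT|^2 / T at frequency w."""
    m = len(x)
    dft = sum(x[t] * cmath.exp(-1j * w * t) for t in range(m))
    return abs(dft) ** 2 / m

w = 2 * math.pi * 8 / T  # a Fourier frequency of the sample
est = 0.0
for _ in range(reps):
    x, xs = 0.0, []
    for _ in range(T):
        x = a * x + random.gauss(0.0, math.sqrt(s2))
        xs.append(x)
    est += periodogram(xs, w) / reps
print(est, theory(w))  # averaged periodogram vs analytic spectrum
```

In the paper's multivariate setting the scalar a is replaced by the Jacobian at equilibrium, and spectral peaks appear near frequencies set by its eigenvalues.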

10.
Aims: Fits of species-abundance distributions to empirical data are increasingly used to evaluate models of diversity maintenance and community structure and to infer properties of communities, such as species richness. Two distributions predicted by several models are the Poisson lognormal (PLN) and the negative binomial (NB) distribution; however, at least three different ways to parameterize the PLN have been proposed, which differ in whether unobserved species contribute to the likelihood and in whether the likelihood is conditional upon the total number of individuals in the sample. Each of these has an analogue for the NB. Here, we propose a new formulation of the PLN and NB that includes the number of unobserved species as one of the estimated parameters. We investigate the performance of parameter estimates obtained from this reformulation, as well as the existing alternatives, for drawing inferences about the shape of species abundance distributions and estimation of species richness.
Methods: We simulate the random sampling of a fixed number of individuals from lognormal and gamma community relative abundance distributions, using a previously developed 'individual-based' bootstrap algorithm. We use a range of sample sizes, community species richness levels and shape parameters for the species abundance distributions that span much of the realistic range for empirical data, generating 1,000 simulated data sets for each parameter combination. We then fit each of the alternative likelihoods to each of the simulated data sets, and we assess the bias, sampling variance and estimation error for each method.
Important findings: Parameter estimates behave reasonably well for most parameter values, exhibiting modest levels of median error. However, for the NB, median error becomes extremely large as the NB approaches either of two limiting cases. For both the NB and PLN, >90% of the variation in the error in model parameters across parameter sets is explained by three quantities that correspond to the proportion of species not observed in the sample, the expected number of species observed in the sample and the discrepancy between the true NB or PLN distribution and a Poisson distribution with the same mean. There are relatively few systematic differences between the four alternative likelihoods. In particular, failing to condition the likelihood on the total sample sizes does not appear to systematically increase the bias in parameter estimates. Indeed, overall, the classical likelihood performs slightly better than the alternatives. However, our reparameterized likelihood, for which species richness is a fitted parameter, has important advantages over existing approaches for estimating species richness from fitted species-abundance models.
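The core accounting behind treating unobserved species as a parameter can be sketched for the NB case (values are illustrative, and this moment-style inversion is a simplification of the paper's likelihood-based fitting): a species with mean abundance μ and NB shape r goes unobserved with probability p0 = (r/(r+μ))^r, so expected observed richness is S·(1 − p0), which can be inverted for total richness S.

```python
def nb_p0(mu, r):
    """Probability an NB(mean=mu, shape=r) species yields zero individuals."""
    return (r / (r + mu)) ** r

def estimate_richness(observed_species, mu, r):
    """Invert E[S_obs] = S * (1 - p0) to estimate total richness S."""
    return observed_species / (1.0 - nb_p0(mu, r))

mu, r = 2.0, 0.5  # hypothetical mean abundance and dispersion
print(nb_p0(mu, r))                    # chance a species is missed entirely
print(estimate_richness(80, mu, r))    # richness estimate from 80 observed species
```

This also shows why estimates degrade near the NB's limiting cases: as p0 → 1 the divisor (1 − p0) vanishes and richness estimates become unstable.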

11.
In previous work, we developed an 8-state nonlinear dynamic model of the acute inflammatory response, including activated phagocytic cells, pro- and anti-inflammatory cytokines, and tissue damage, and calibrated it to data on cytokines from endotoxemic rats. In the interest of parsimony, the present work employed parametric sensitivity and local identifiability analysis to establish a core set of parameters predominantly responsible for variability in model solutions. Parameter optimization, facilitated by varying only those parameters belonging to this core set, was used to identify an ensemble of parameter vectors, each representing an acceptable local optimum in terms of fit to experimental data. Individual models within this ensemble, characterized by their different parameter values, showed similar cytokine but diverse tissue damage behavior. A cluster analysis of the ensemble of models showed the existence of a continuum of acceptable models, characterized by compensatory mechanisms and parameter changes. We calculated the direct correlations between the core set of model parameters and identified three mechanisms responsible for the conversion of the diverse damage time courses to similar cytokine behavior in these models. Given that tissue damage level could be an indicator of the likelihood of mortality, our findings suggest that similar cytokine dynamics could be associated with very different mortality outcomes, depending on the balance of certain inflammatory elements.

12.
The heterogeneity in mammalian cell signaling responses is largely a result of pre-existing cell-to-cell variability. It is unknown whether cell-to-cell variability arises from stochastic biochemical fluctuations or from distinct cellular states. Here, we utilize the calcium response to adenosine triphosphate (ATP) as a model for investigating the structure of heterogeneity within a population of cells and analyze whether distinct cellular response states coexist. We use a functional definition of cellular state that is based on a mechanistic dynamical systems model of calcium signaling. Using Bayesian parameter inference, we obtain high-confidence parameter value distributions for several hundred cells, each fitted individually. Clustering the inferred parameter distributions revealed three major distinct cellular states within the population. The existence of distinct cellular states raises the possibility that the observed variability in response is a result of structured heterogeneity between cells. The inferred parameter distributions predict, and experiments confirm, that variability in the IP3R response explains the majority of calcium heterogeneity. Our work shows how mechanistic models and single-cell parameter fitting can uncover hidden population structure, and it demonstrates the need for parameter inference at the single-cell level.

13.
14.
We introduce a method for systematically reducing the dimension of biophysically realistic neuron models with stochastic ion channels by exploiting time-scale separation. Based on a combination of singular perturbation methods for kinetic Markov schemes with recent mathematical developments of the averaging method, the techniques are general and applicable to a large class of models. As an example, we derive and analyze reductions of different stochastic versions of the Hodgkin–Huxley (HH) model, leading to distinct reduced models. The bifurcation analysis of one of the reduced models, with the number of channels as a parameter, provides new insights into some features of noisy discharge patterns, such as the bimodality of the interspike interval distribution. Our analysis of the stochastic HH model shows that, besides being a method to reduce the number of variables of neuronal models, our reduction scheme is a powerful method for understanding the impact of fluctuations due to finite-size effects on the dynamics of slow–fast systems. Our analysis of the reduced model reveals that decreasing the number of sodium channels in the HH model leads to a transition in the dynamics reminiscent of a Hopf bifurcation, and that this transition accounts for changes in characteristics of the spike train generated by the model. Finally, we also examine the impact of these results on neuronal coding, notably the reliability of discharge times and spike latency, showing that reducing the number of channels can enhance discharge time reliability in response to weak inputs and that this phenomenon can be accounted for through the analysis of the reduced model.

15.
The paper is devoted to the study of discrete-time and continuous-space models with nonlocal resource competition and periodic boundary conditions. We consider generalizations of the logistic and Ricker equations as intraspecific resource competition models with symmetric nonlocal dispersal and interaction terms. Both interaction and dispersal are modeled using convolution integrals, each of which has a parameter describing the range of nonlocality. By performing a linear stability analysis, we show that the spatially homogeneous equilibrium of these models becomes unstable for some kernel functions and parameter values. To further analyze the behavior of solutions near the stability boundary, we employ weakly nonlinear analysis, a well-known method for continuous-time systems. We obtain Stuart–Landau type equations and give their parameters in terms of the Fourier transforms of the kernels. This analysis allows us to study the change in the amplitudes of solutions with respect to the ranges of nonlocality of the two symmetric kernel functions. Our calculations indicate that supercritical bifurcations occur near the stability boundary for uniform kernel functions. We also verify these results numerically for both models.
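The instability for uniform kernels can be seen in a minimal numerical sketch (an illustrative discretization, not the paper's scheme): a nonlocal Ricker update N_{t+1}(x) = N_t(x)·exp(r·(1 − (K∗N_t)(x))) on a periodic 1-D grid, where K∗N is a circular convolution with a normalized top-hat kernel. A small perturbation of the homogeneous state N* = 1 grows into a spatial pattern even though r is well below the non-spatial Ricker's instability threshold.

```python
import math

n, r, m = 50, 0.9, 4  # grid size, growth rate, kernel half-width (illustrative)

def convolve_uniform(N, m):
    """Circular convolution with a normalized uniform kernel of width 2m+1."""
    width = 2 * m + 1
    return [sum(N[(i + j) % n] for j in range(-m, m + 1)) / width
            for i in range(n)]

def step(N):
    """One nonlocal Ricker update on the periodic domain."""
    comp = convolve_uniform(N, m)
    return [Ni * math.exp(r * (1.0 - ci)) for Ni, ci in zip(N, comp)]

# Start near the homogeneous equilibrium N* = 1 with a tiny localized bump.
N = [1.0 + (0.01 if i == 0 else 0.0) for i in range(n)]
for _ in range(200):
    N = step(N)
print(min(N), max(N))  # a spatial pattern has developed
```

The mechanism matches the linear analysis above: the top-hat kernel's Fourier transform takes negative values at intermediate wavenumbers, making the growth factor of those modes exceed one.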

16.
We present in this paper various links between individual and population cell growth. Deterministic models of the lag and subsequent growth of a bacterial population, and their connection with stochastic models for the lag and subsequent generation times of individual cells, are analysed. We derive the individual lag time distribution inherent in population growth models, which shows that the Baranyi model allows a wide range of shapes for the individual lag time distribution. We demonstrate that individual cell lag time distributions cannot be retrieved from population growth data. We also investigate the effect of the mean and variance of the individual lag time and the initial cell number on the mean and variance of the population lag time. These relationships are analysed theoretically, and their consequences for predictive microbiology research are discussed.
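One standard individual-to-population link in this setting can be sketched numerically (the lag distribution and rates below are invented for illustration): if cell i begins exponential growth at rate μ after an individual lag τ_i, then for large t the population behaves as if it had a single lag λ_pop = −(1/μ)·ln(mean_i e^{−μτ_i}), which by Jensen's inequality never exceeds the mean individual lag.

```python
import math
import random

random.seed(42)
mu, n0 = 0.8, 500  # growth rate and initial cell count (illustrative)

# Hypothetical individual lag times drawn from a gamma distribution.
taus = [random.gammavariate(4.0, 0.5) for _ in range(n0)]

mean_individual_lag = sum(taus) / n0
# Effective population lag from N(t) ~ sum_i exp(mu * (t - tau_i)):
lag_pop = -(1.0 / mu) * math.log(sum(math.exp(-mu * t) for t in taus) / n0)
print(mean_individual_lag, lag_pop)  # population lag <= mean individual lag
```

This asymmetry (early-starting cells dominate the sum) is one reason the individual lag distribution cannot be read back off population growth curves, as the abstract notes.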

17.
In this paper we investigate several schemes to approximate the stationary distribution of the stochastic SIS system with import. We begin by presenting the model and analytically computing its stationary distribution. We then approximate this distribution using the Kramers–Moyal approximation, van Kampen's system size expansion, and a semiclassical scheme, also called the WKB or eikonal approximation depending on its application in physics. For the semiclassical scheme, carried out in the context of the Hamilton–Jacobi formalism, two approaches are taken. In the first approach we assume a semiclassical ansatz for the generating function, while in the second the solution of the master equation is approximated directly. The different schemes are compared, and the semiclassical approximation, which performs best, is then used to analyse the time-dependent solution of stochastic systems for which no analytical expression is known. Stochastic epidemiological models are studied in order to investigate how far such semiclassical approximations can be used for parameter estimation.
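The exact stationary distribution mentioned above follows from detailed balance of a birth-death chain. The sketch below uses one common parameterization of SIS with import (the paper's exact rates may differ, and the parameter values are illustrative): infection/import rate b(I) = (βI/N + ε)(N − I) and recovery rate d(I) = γI, with π_I ∝ ∏_{k=1..I} b(k−1)/d(k).

```python
N, beta, gamma, eps = 100, 1.5, 1.0, 0.01  # illustrative parameters

def birth(I):
    """Rate of I -> I+1: contact infection plus import of infection."""
    return (beta * I / N + eps) * (N - I)

def death(I):
    """Rate of I -> I-1: recovery."""
    return gamma * I

# Detailed balance: pi[I] proportional to prod_{k=1..I} birth(k-1)/death(k).
weights = [1.0]
for I in range(1, N + 1):
    weights.append(weights[-1] * birth(I - 1) / death(I))
total = sum(weights)
pi = [w / total for w in weights]

mode = max(range(N + 1), key=lambda I: pi[I])
print(sum(pi), mode)  # distribution sums to 1; mode near the endemic level
```

With R0 = β/γ = 1.5, the mode sits near the deterministic endemic equilibrium N(1 − 1/R0) ≈ 33, which is the benchmark the approximation schemes are judged against.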

18.
I. Dewan & S. Kulathinal, PLoS ONE 2007, 2(12): e1255
The hypothesis of independence between the failure time and the cause of failure is studied by using the conditional probabilities of failure due to a specific cause, given that there is no failure up to a certain fixed time. In practice, there are situations when the failure times are available for all units but the causes of failure might be missing for some units. We propose tests based on U-statistics to test for independence of the failure time and the cause of failure in the competing risks model when all the causes of failure cannot be observed. The asymptotic distribution is normal in each case. Simulation studies examine power comparisons for the proposed tests for two families of distributions. The one-sided and two-sided tests based on a Kendall-type statistic perform exceedingly well in detecting departures from independence.

19.
In this paper we are concerned with the live verification of the consistency of a replicated system, an issue that has not been addressed by the research community so far. We consider the problem of how to enable the system to detect automatically, and in production, whether the invariants defining the correctness of object replication are violated. This feature could greatly improve the dependability of distributed applications and is necessary for constructing self-managing and self-healing replicated systems. We focus on systems that enforce strongly consistent replication: all replicas of each object must be kept "continuously" in sync. This replication strategy is appropriate for application domains where correctness guarantees in spite of failures are more important than performance and scalability. We present the design and implementation of a replicated web service capable of self-checking whether all replicas are indeed kept in sync. This check occurs online, transparently to clients. We also discuss the performance cost of self-checking in our prototype. Alberto Bartoli is Associate Professor of Computer Engineering at the University of Trieste, Italy. He took a degree in Electrical Engineering in 1989 and a doctorate in Computer Engineering in 1994, both at the University of Pisa, Italy. His research interests are in the area of reliability and fault tolerance in distributed systems. Giovanni Masarin took a degree in Electronic Engineering in 2004 at the University of Trieste, Italy. He is currently involved in product development at RadioTrevisan, a company specialized in the production of lawful interception equipment.

20.
Methods developed by the metrological community and principles used by the research community were integrated to provide a basis for a periodic maintenance interval analysis system. Engineering endpoints are used as measurement attributes on which to base two primary quality indicators: accuracy and reliability. Also key to establishing appropriate maintenance intervals is the ability to recognize two primary failure modes: random failure and time-related failure. The primary objective of the maintenance program is to avert predictable and preventable device failure, and understanding time-related failures enables service personnel to set intervals accordingly.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号