Similar Literature
20 similar records retrieved.
1.
In this paper, an autonomic performance management approach is introduced that can be applied to a general class of web services deployed in large-scale distributed environments. The proposed approach applies traditional large-scale, control-based algorithms, using the interaction balance approach, to web service environments in order to manage response time and system-level power consumption. The approach is developed in a generic fashion that makes it suitable for web service deployments in which performance can be adjusted through a finite set of control inputs. It maintains service level agreements, maximizes revenue, and minimizes infrastructure operating cost. Additionally, the proposed approach is fault-tolerant with respect to failures of computing nodes inside the distributed deployment. Moreover, its computational overhead can be managed by choosing appropriate values for its configuration parameters at deployment time.
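The core decision described here, choosing one of a finite set of control inputs to trade off response-time SLAs against power cost, can be illustrated with a minimal sketch. Everything below (the cost model, the toy predictor, the numbers) is a hypothetical stand-in, not the paper's controller.

```python
# Hypothetical sketch: pick, from a finite set of control inputs, the one with the
# best revenue-minus-power utility subject to a response-time SLA.
def choose_control_input(candidates, predict, sla_ms, revenue_per_req, power_price):
    """candidates: finite set of control inputs (e.g., replica counts).
    predict: callable returning (expected_response_ms, expected_power_w) for an input."""
    best, best_utility = None, float("-inf")
    for u in candidates:
        resp_ms, power_w = predict(u)
        penalty = 0.0 if resp_ms <= sla_ms else revenue_per_req   # lost revenue on SLA miss
        utility = revenue_per_req - penalty - power_price * power_w
        if utility > best_utility:
            best, best_utility = u, utility
    return best

# Toy predictor: more replicas -> faster responses but more power drawn.
toy = lambda u: (300.0 / u, 40.0 * u)
print(choose_control_input([1, 2, 3, 4], toy, sla_ms=120, revenue_per_req=1.0, power_price=0.005))
```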

2.
Ye, Lin, and Taylor (2008, Biometrics 64, 1238–1246) proposed a joint model for longitudinal measurements and time-to-event data in which the longitudinal measurements are modeled with a semiparametric mixed model to allow for the complex patterns in longitudinal biomarker data. They proposed a two-stage regression calibration approach that is simpler to implement than a joint modeling approach. In the first stage of their approach, the mixed model is fit without regard to the time-to-event data. In the second stage, the posterior expectations of an individual's random effects from the mixed model are included as covariates in a Cox model. Although Ye et al. (2008) acknowledged that their regression calibration approach may cause a bias due to the problem of informative dropout and measurement error, they argued that the bias is small relative to alternative methods. In this article, we show that this bias may be substantial. We show how to alleviate much of this bias with an alternative regression calibration approach that can be applied for both discrete and continuous time-to-event data. Through simulations, the proposed approach is shown to have substantially less bias than the regression calibration approach proposed by Ye et al. (2008). In agreement with the methodology proposed by Ye et al. (2008), an advantage of our proposed approach over joint modeling is that it can be implemented with standard statistical software and does not require complex estimation techniques.
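The basic two-stage regression calibration idea described above (stage 1: mixed model for the biomarker; stage 2: subject-level random-effect estimates as Cox covariates) can be sketched with standard software. The data below are simulated for illustration, and the library choices (statsmodels, lifelines) are mine, not the article's; this is the plain two-stage scheme, not the bias-reduced variant the article proposes.

```python
# Minimal two-stage regression calibration sketch on simulated toy data.
import numpy as np, pandas as pd
import statsmodels.formula.api as smf
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
b0 = rng.normal(0, 1, n)                # true subject-specific random intercepts
b1 = rng.normal(0, 0.3, n)              # true subject-specific random slopes

# Longitudinal biomarker: five noisy visits per subject.
long_df = pd.DataFrame([
    {"id": i, "time": t, "biomarker": 1.0 + b0[i] + (0.5 + b1[i]) * t + rng.normal(0, 0.5)}
    for i in range(n) for t in range(5)])

# Survival times depend on the true random effects (toy data-generating model).
hazard = 0.05 * np.exp(0.8 * b0 + 1.0 * b1)
followup = rng.exponential(1 / hazard)
censor = rng.exponential(20, n)
surv_df = pd.DataFrame({"id": range(n),
                        "followup": np.minimum(followup, censor),
                        "event": (followup <= censor).astype(int)})

# Stage 1: semiparametric part simplified to a linear mixed model with random
# intercept and slope; Stage 2: Cox model using the estimated (BLUP) random effects.
mm_fit = smf.mixedlm("biomarker ~ time", long_df, groups=long_df["id"],
                     re_formula="~time").fit()
re = pd.DataFrame(mm_fit.random_effects).T
re.columns = ["re_intercept", "re_slope"]
cox_df = surv_df.merge(re, left_on="id", right_index=True)
CoxPHFitter().fit(cox_df[["followup", "event", "re_intercept", "re_slope"]],
                  duration_col="followup", event_col="event").print_summary()
```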

3.
A general approach to family-based examinations of association between marker alleles and traits is proposed. The approach is based on computing p values by comparing test statistics for association to their conditional distributions given the minimal sufficient statistic under the null hypothesis for the genetic model, sampling plan and population admixture. The approach can be applied with any test statistic, so any kind of phenotype and multi-allelic markers may be examined, and covariates may be included in analyses. By virtue of the conditioning, the approach results in correct type I error probabilities regardless of population admixture, the true genetic model and the sampling strategy. An algorithm for computing the conditional distributions is described, and the results of the algorithm for configurations of nuclear families are presented. The algorithm is applicable with all pedigree structures and all patterns of missing marker allele information.

4.
In follow-up studies, the disease event time can be subject to left truncation and right censoring. Furthermore, medical advancements have made it possible for patients to be cured of certain types of diseases. In this article, we consider a semiparametric mixture cure model for the regression analysis of left-truncated and right-censored data. The model combines a logistic regression for the probability of event occurrence with the class of transformation models for the time of occurrence. We investigate two techniques for estimating model parameters. The first approach is based on martingale estimating equations (EEs). The second approach is based on the conditional likelihood function given truncation variables. The asymptotic properties of both proposed estimators are established. Simulation studies indicate that the conditional maximum-likelihood estimator (cMLE) performs well while the estimator based on EEs is very unstable even though it is shown to be consistent. This is a special and intriguing phenomenon for the EE approach under the cure model. We provide insights into this issue and find that the EE approach can be improved significantly by assigning appropriate weights to the censored observations in the EEs. This finding is useful in overcoming the instability of the EE approach in some more complicated situations, where the likelihood approach is not feasible. We illustrate the proposed estimation procedures by analyzing the age at onset of the occiput-wall distance event for patients with ankylosing spondylitis.

5.
In QTL analysis of non-normally distributed phenotypes, non-parametric approaches have been proposed as an alternative to the use of parametric tests on mathematically transformed data. The non-parametric interval mapping test uses random ranking to deal with ties. Another approach is to assign to each tied individual the average of the tied ranks (midranks). This approach is implemented and compared to the random ranking approach in terms of statistical power and accuracy of the QTL position. Non-normal phenotypes such as bacteria counts showing high numbers of zeros are simulated (0-80% zeros). We show that, for low proportions of zeros, the power estimates are similar but, for high proportions of zeros, the midrank approach is superior to the random ranking approach. For example, with a QTL accounting for 8% of the total phenotypic variance, a gain in power from 8% to 11% can be obtained. Furthermore, the accuracy of the estimated QTL location is increased when using midranks. Therefore, if non-parametric interval mapping is chosen, the midrank approach should be preferred. This test might be especially relevant for the analysis of disease resistance phenotypes such as those observed when mapping QTLs for resistance to infectious diseases.
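The two tie-handling schemes being compared are easy to contrast directly on a zero-inflated phenotype. The sketch below uses simulated data and scipy's rank functions; it only illustrates the ranking step, not the interval-mapping test itself.

```python
# Random ranking breaks ties arbitrarily; midranks give every tied value the
# average of the ranks it spans. Data are simulated with a high proportion of zeros.
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(1)
pheno = np.where(rng.random(20) < 0.6, 0.0, rng.poisson(50, 20))   # ~60% zeros

# Random ranking: shuffle, rank with ordinal (first-come) tie-breaking, map back.
perm = rng.permutation(len(pheno))
random_ranks = np.empty(len(pheno))
random_ranks[perm] = rankdata(pheno[perm], method="ordinal")

# Midranks: ties share the average of their ranks.
midranks = rankdata(pheno, method="average")

print(random_ranks[pheno == 0][:5])   # arbitrary distinct ranks among the zeros
print(midranks[pheno == 0][:5])       # identical midrank for every zero
```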

6.
When a new treatment is compared to an established one in a randomized clinical trial, it is standard practice to statistically test for non-inferiority rather than for superiority. When the endpoint is binary, one usually compares two treatments using either an odds-ratio or a difference of proportions. In this paper, we propose a mixed approach which uses both concepts. One first defines the non-inferiority margin using an odds-ratio and one ultimately proves non-inferiority statistically using a difference of proportions. The mixed approach is shown to be more powerful than the conventional odds-ratio approach when the efficacy of the established treatment is known (with good precision) and high (e.g., a success rate above 56%). The gain of power achieved may lead in turn to a substantial reduction in the sample size needed to prove non-inferiority. The mixed approach can be generalized to ordinal endpoints.
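The "mixed" idea can be made concrete in a few lines: translate an odds-ratio margin into a difference-of-proportions margin at the assumed control success rate, then test non-inferiority on the difference scale. The numbers and the Wald-type statistic below are generic illustrative choices, not necessarily the test used in the paper.

```python
# Sketch: margin defined on the odds-ratio scale, non-inferiority tested on the
# difference-of-proportions scale. All numbers are hypothetical.
from math import sqrt
from scipy.stats import norm

p_control_assumed = 0.80     # assumed, well-established success rate of the control
or_margin = 0.60             # largest acceptable odds ratio (new vs established)

# Convert the OR margin into an absolute margin delta at the assumed control rate.
odds_c = p_control_assumed / (1 - p_control_assumed)
p_margin = or_margin * odds_c / (1 + or_margin * odds_c)
delta = p_control_assumed - p_margin            # margin on the proportion scale

# Observed trial data (hypothetical).
x_new, n_new = 152, 200
x_est, n_est = 160, 200
p_new, p_est = x_new / n_new, x_est / n_est

# One-sided Wald test of H0: p_est - p_new >= delta vs H1: p_est - p_new < delta.
se = sqrt(p_new * (1 - p_new) / n_new + p_est * (1 - p_est) / n_est)
z = (p_new - p_est + delta) / se
print(f"delta = {delta:.3f}, z = {z:.2f}, one-sided p = {norm.sf(z):.4f}")
```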

7.
Parameter estimation is a critical problem in modeling biological pathways. It is difficult because of the large number of parameters to be estimated and the limited experimental data available. In this paper, we propose a decompositional approach to parameter estimation. It exploits the structure of a large pathway model to break it into smaller components, whose parameters can then be estimated independently. This leads to significant improvements in computational efficiency. We present our approach in the context of Hybrid Functional Petri Net modeling and evolutionary search for parameter value estimation. However, the approach can be easily extended to other modeling frameworks and is independent of the search method used. We have tested our approach on a detailed model of the Akt and MAPK pathways with two known and one hypothesized crosstalk mechanisms. The entire model contains 84 unknown parameters. Our simulation results exhibit good correlation with experimental data, and they yield positive evidence in support of the hypothesized crosstalk between the two pathways.  
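The decompositional idea, fit each component against its own data rather than searching the full parameter space at once, can be sketched with a toy model. The Michaelis-Menten components, the data, and the use of scipy's differential evolution as the evolutionary search are all illustrative assumptions, not the Hybrid Functional Petri Net setup of the paper.

```python
# Toy decompositional parameter estimation: each component is fitted independently
# with an evolutionary search against its own (invented) rate measurements.
import numpy as np
from scipy.optimize import differential_evolution

def mm_rate(s, vmax, km):
    return vmax * s / (km + s)

components = {
    "componentA": (np.array([0.1, 0.5, 1, 2, 5, 10]),
                   np.array([0.9, 3.4, 5.1, 6.8, 8.3, 9.0])),
    "componentB": (np.array([0.1, 0.5, 1, 2, 5, 10]),
                   np.array([0.3, 1.2, 2.0, 2.9, 3.8, 4.2])),
}

estimates = {}
for name, (s, v_obs) in components.items():
    loss = lambda p: np.sum((mm_rate(s, *p) - v_obs) ** 2)   # component-level fit
    res = differential_evolution(loss, bounds=[(0.1, 20.0), (0.01, 10.0)], seed=0)
    estimates[name] = res.x

for name, (vmax, km) in estimates.items():
    print(f"{name}: Vmax ~ {vmax:.2f}, Km ~ {km:.2f}")
```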

8.
This paper continues our work on the theory of nonequilibrium voltage noise generated by electric transport processes in membranes. Introducing the membrane voltage as a further variable, a system of kinetic equations linearized in voltage is derived by which generally the time-dependent behaviour of charge-transport processes under varying voltage can be discussed. Using these equations, the treatment of voltage noise can be based on the usual master equation approach to steady-state fluctuations of scalar quantities. Thus, a general theoretical approach to nonequilibrium voltage noise is presented, completing our approach to current fluctuations which had been developed some years ago. It is explicitly shown that at equilibrium the approach yields agreement with the Nyquist relation, while at nonequilibrium this relation is not valid. A further general property of voltage noise is the reduction of low-frequency noise with increasing number of transport units as a consequence of the interactions via the electric field. In a second paper, the approach will be applied to a number of special transport mechanisms, such as ionic channels, carriers or electrogenic pumps.

9.
The generalized nonlinear Klein-Gordon equation plays an important role in quantum mechanics. In this paper, a new three-time level implicit approach based on cubic trigonometric B-splines is presented for the approximate solution of this equation with Dirichlet boundary conditions. The usual finite difference approach is used to discretize the time derivative while the cubic trigonometric B-spline is applied as an interpolating function in the space dimension. Several examples are discussed to exhibit the feasibility and capability of the approach. The absolute errors and error norms are also computed at different times to assess the performance of the proposed approach, and the results were found to be in good agreement with known solutions and with existing schemes in the literature.
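For orientation, a nonlinear Klein-Gordon problem of this type can be stepped forward with a plain second-order finite-difference scheme. The sketch below is deliberately a swapped-in baseline, not the paper's cubic trigonometric B-spline method, and the specific equation form u_tt = u_xx − u − u³, initial data, and step sizes are assumptions chosen only for illustration.

```python
# Plain finite-difference baseline for u_tt = u_xx - u - u**3 on [0, 1] with
# homogeneous Dirichlet boundaries and u_t(x, 0) = 0 (all assumed for illustration).
import numpy as np

nx, nt = 201, 4000
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.0005                                   # dt < dx keeps the explicit scheme stable

u_prev = np.sin(np.pi * x)                    # u(x, 0)
lap = np.zeros(nx)
lap[1:-1] = (u_prev[2:] - 2 * u_prev[1:-1] + u_prev[:-2]) / dx**2
u_curr = u_prev + 0.5 * dt**2 * (lap - u_prev - u_prev**3)   # Taylor start-up step
u_curr[0] = u_curr[-1] = 0.0

for _ in range(nt):                           # leapfrog (three-time-level) stepping
    lap[1:-1] = (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]) / dx**2
    u_next = 2 * u_curr - u_prev + dt**2 * (lap - u_curr - u_curr**3)
    u_next[0] = u_next[-1] = 0.0
    u_prev, u_curr = u_curr, u_next

print("max |u| at t = %.4f: %.4f" % ((nt + 1) * dt, np.abs(u_curr).max()))
```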

10.
Cryptography with DNA binary strands
Biotechnological methods can be used for cryptography. Here two different cryptographic approaches based on DNA binary strands are shown. The first approach shows how DNA binary strands can be used for steganography, a technique of encryption by information hiding, to provide rapid encryption and decryption. It is shown that DNA steganography based on DNA binary strands is secure under the assumption that an interceptor has the same technological capabilities as sender and receiver of encrypted messages. The second approach shown here is based on steganography and a method of graphical subtraction of binary gel-images. It can be used to constitute a molecular checksum and can be combined with the first approach to support encryption. DNA cryptography might become of practical relevance in the context of labelling organic and inorganic materials with DNA 'barcodes'.
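The notion of a "DNA binary strand" can be illustrated purely in software: message bits are XOR-ed with a one-time pad and written as a strand of short "0"/"1" oligo words between primer sites. The codewords, primer sequences, and word length below are invented, and nothing here models the wet-lab steganography or gel-image checksum steps.

```python
# Toy in-silico illustration of the DNA binary-strand encoding idea (sequences invented).
import secrets

WORD0, WORD1 = "ACCT", "AGGT"            # hypothetical 4-nt codewords for bits 0 and 1
PRIMER_L, PRIMER_R = "GCGCGC", "TATATA"  # hypothetical flanking primer sites

def to_bits(msg: str) -> list[int]:
    return [(byte >> i) & 1 for byte in msg.encode() for i in range(7, -1, -1)]

def encode(msg: str, pad: list[int]) -> str:
    cipher_bits = [b ^ p for b, p in zip(to_bits(msg), pad)]     # one-time pad
    body = "".join(WORD1 if b else WORD0 for b in cipher_bits)
    return PRIMER_L + body + PRIMER_R

def decode(strand: str, pad: list[int]) -> str:
    body = strand[len(PRIMER_L):-len(PRIMER_R)]
    bits = [1 if body[i:i + 4] == WORD1 else 0 for i in range(0, len(body), 4)]
    plain = [b ^ p for b, p in zip(bits, pad)]
    data = bytes(int("".join(map(str, plain[i:i + 8])), 2) for i in range(0, len(plain), 8))
    return data.decode()

pad = [secrets.randbelow(2) for _ in range(8 * len("MEET AT DAWN"))]
strand = encode("MEET AT DAWN", pad)
print(strand[:40], "...")
print(decode(strand, pad))
```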

11.
12.
Förster resonance energy transfer (FRET) has become an important tool to study the submicrometer distribution of proteins and lipids in membranes. Although resolving the two-dimensional distribution of fluorophores from FRET is generally underdetermined, a forward approach can be used to determine characteristic FRET "signatures" for interesting classes of microdomain organizations. As a first step toward this goal, we use a stochastic Monte Carlo approach to characterize FRET in the case of molecules randomly distributed within disk-shaped domains. We find that when donors and acceptors are confined within domains, FRET depends very generally on the density of acceptors within domains. An implication of this result is that two domain populations with the same acceptor density cannot be distinguished by this FRET approach even if the domains have different diameters or different numbers of molecules. In contrast, both the domain diameter and molecule number can be resolved by combining this approach with a segregation approach that measures FRET between donors confined in domains and acceptors localized outside domains. These findings delimit where the inverse problem is tractable for this class of distributions and reframe ways FRET can be used to characterize the structure of microdomains such as lipid rafts.
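A minimal forward Monte Carlo of this kind is short to write: donors and acceptors are placed uniformly at random in a disk, and each donor's efficiency follows the standard multi-acceptor expression E = S/(1+S) with S = Σ_j (R0/r_j)^6. The Förster radius, molecule counts, and domain sizes below are hypothetical; the two calls at the end use equal acceptor density to echo the abstract's observation.

```python
# Minimal Monte Carlo of FRET inside a disk-shaped domain (parameters invented).
import numpy as np

def random_points_in_disk(n, radius, rng):
    r = radius * np.sqrt(rng.random(n))            # sqrt gives uniform area density
    theta = 2 * np.pi * rng.random(n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

def mean_fret(n_donors, n_acceptors, radius_nm, r0_nm=5.0, n_trials=200, seed=0):
    rng = np.random.default_rng(seed)
    effs = []
    for _ in range(n_trials):
        donors = random_points_in_disk(n_donors, radius_nm, rng)
        acceptors = random_points_in_disk(n_acceptors, radius_nm, rng)
        d = np.linalg.norm(donors[:, None, :] - acceptors[None, :, :], axis=-1)
        d = np.maximum(d, 0.1)                     # avoid divide-by-zero at contact
        s = np.sum((r0_nm / d) ** 6, axis=1)
        effs.append(np.mean(s / (1 + s)))          # per-donor efficiency, averaged
    return float(np.mean(effs))

# Same acceptor density, different domain diameters -> similar mean FRET.
print(mean_fret(20, 10, radius_nm=25.0))
print(mean_fret(80, 40, radius_nm=50.0))
```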

13.
A novel approach to construct kinetic models of metabolic pathways, to be used in metabolic engineering, is presented: the tendency modeling approach. This approach greatly facilitates the construction of these models and can easily be applied to complex metabolic networks. The resulting models contain a minimal number of parameters; identification of their values is straightforward. The use of information obtained in vitro in the identification of the kinetic equations is minimized. The tendency modeling approach has been used to derive a dynamic model of primary metabolism for aerobic growth of Saccharomyces cerevisiae on glucose, in which compartmentation is included. Simulation results obtained with the derived model are satisfying for most of the carbon metabolites that have been measured. Compared to a more detailed model, the simulations of our model are less accurate, but taking into account the much smaller number of kinetic parameters (35 instead of 84), the tendency modeling approach is considered promising.

14.
Allocation results for a multi-output process in a life cycle assessment study depend on the definition of the unit process, which can vary with the depth of a study. The unit process may be a manufacturing site, a sub-process, or an operational unit (e.g. distillation column or reactor). There are three different approaches to define a unit process: the macroscopic approach, the quasi-microscopic approach, and the microscopic approach. In the macroscopic approach, the unit process is the manufacturing site, while in the quasi-microscopic approach it is a sub-process of the manufacturing site. An operational unit becomes the unit process in the microscopic approach. In the quasi-microscopic and the microscopic approaches, a process can be subdivided into a joint process, a physically separated process, which is physically apart from other processes, and a fully separated process. Each type can be a unit process. Therefore, the multi-output process in the quasi-microscopic and the microscopic approaches can be subdivided among two or more unit processes depending on the actual operations. Allocation in the fully separated process can be avoided because this process fulfills one function. In the joint process and the physically separated process, which deliver two or more functions, allocation is still required. Ammonia manufacturing, where carbon dioxide is formed as a byproduct, is given to show a specific detailed example of the allocation procedure by subdivision in ISO 14041. It is shown that the quasi-microscopic and the microscopic approaches can reduce the multi-output allocation of a given chemical product. Furthermore, the quasi-microscopic and the microscopic approaches are very useful in identifying key pollution prevention issues related to one product or function.
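The effect of subdivision on allocation is simple arithmetic, as in the hypothetical worked example below: all emission figures, sub-process names, and the mass-based allocation factor are invented, and only serve to show that subdividing leaves less of the burden subject to multi-output allocation.

```python
# Hypothetical emissions (t CO2-eq) to be allocated between ammonia and the CO2 by-product.
site_emissions = {"syngas_joint": 1200.0,       # joint sub-process: delivers H2 and CO2
                  "ammonia_synthesis": 800.0,   # fully separated: ammonia only
                  "co2_purification": 150.0}    # fully separated: CO2 only

mass_share_ammonia = 0.55                       # illustrative mass-based allocation factor

# Macroscopic approach: the whole site is one unit process, so everything is allocated.
macro_to_ammonia = mass_share_ammonia * sum(site_emissions.values())

# Quasi-microscopic approach: only the joint sub-process still needs allocation;
# fully separated sub-processes are assigned directly to their own function.
quasi_to_ammonia = (mass_share_ammonia * site_emissions["syngas_joint"]
                    + site_emissions["ammonia_synthesis"])

print(f"macroscopic:       {macro_to_ammonia:.0f} t attributed to ammonia")
print(f"quasi-microscopic: {quasi_to_ammonia:.0f} t attributed to ammonia "
      f"(only {site_emissions['syngas_joint']:.0f} t still allocated)")
```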

15.
We propose a novel, closed-loop approach to tuning deep brain stimulation (DBS) for Parkinson’s disease (PD). The approach, termed Phasic Burst Stimulation (PhaBS), applies a burst of stimulus pulses over a range of phases predicted to disrupt pathological oscillations seen in PD. Stimulation parameters are optimized based on phase response curves (PRCs), which would be measured from each patient. This approach is tested in a computational model of PD with an emergent population oscillation. We show that the stimulus phase can be optimized using the PRC, and that PhaBS is more effective at suppressing the pathological oscillation than a single phasic stimulus pulse. PhaBS provides a closed-loop approach to DBS that can be optimized for each patient.
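One simple way a measured PRC can guide phase selection is sketched below. Under a first-order phase model, a stimulus at phase φ turns a small phase difference δ into roughly δ·(1 + PRC′(φ)), so phases where the PRC slope is large and positive amplify dispersion of a synchronized population. This is only an illustrative heuristic with a made-up PRC, not the optimization used in the paper.

```python
# Illustrative heuristic: pick a burst phase where the PRC slope is most positive.
import numpy as np

phases = np.linspace(0, 2 * np.pi, 256, endpoint=False)
prc = 0.12 * np.sin(phases) + 0.05 * np.sin(2 * phases + 0.4)   # made-up "measured" PRC

slope = np.gradient(prc, phases)              # numerical PRC derivative
best_idx = int(np.argmax(slope))
burst_center = phases[best_idx]

# Schedule a short burst of pulses centred on that phase (intra-burst spacing assumed).
n_pulses, spacing = 4, 0.12                   # pulses and spacing in radians (assumption)
burst_phases = (burst_center + spacing * (np.arange(n_pulses) - (n_pulses - 1) / 2)) % (2 * np.pi)

print(f"burst centred at phase {burst_center:.2f} rad, growth factor ~ {1 + slope[best_idx]:.2f}")
print("pulse phases:", np.round(burst_phases, 2))
```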

16.
A parametric approach fits particular classes of parametric models to the data, uses the model parameter estimates as summaries and tests for differences between groups by comparing fits with and without the assumption of common parameter values across groups. The paper discusses how a parametric approach can be implemented in the specific context of a single-factor replicated spatial experiment and uses simulations to show when the parametric approach can be efficient or potentially misleading. An analysis of the spatial distribution of pyramidal neurons in human patients is also shown.
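The comparison of fits with and without common parameter values is the classic likelihood-ratio construction, sketched generically below. The Gamma model and the simulated "spatial summary" data are illustrative stand-ins, not the specific spatial model of the paper.

```python
# Generic parametric test: fit the same model per group and pooled, then compare
# with a likelihood ratio test. Model and data are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.gamma(shape=2.0, scale=1.0, size=120)   # e.g., spatial summaries, patients
group_b = rng.gamma(shape=2.6, scale=0.9, size=120)   # e.g., spatial summaries, controls

def gamma_loglik(data):
    shape, loc, scale = stats.gamma.fit(data, floc=0)
    return np.sum(stats.gamma.logpdf(data, shape, loc=loc, scale=scale))

ll_separate = gamma_loglik(group_a) + gamma_loglik(group_b)   # group-specific parameters
ll_pooled = gamma_loglik(np.concatenate([group_a, group_b]))  # common parameters

lr = 2 * (ll_separate - ll_pooled)
df = 2                                                        # two extra free parameters
print(f"LR = {lr:.2f}, p = {stats.chi2.sf(lr, df):.4f}")
```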

17.
The analysis of family-study data sometimes focuses on whether a dichotomous trait tends to cluster in families. For traits with variable age-at-onset, it may be of interest to investigate whether age-at-onset itself also exhibits familial clustering. A complication in such investigations is that censoring by age-at-ascertainment can induce artifactual familial correlation in the age-at-onset of affected members. A further complication can be that sample inclusion criteria involve the affection status of family members. The purpose here is to present an approach to testing for correlation that is not confounded by censoring by age-at-ascertainment and may be applied with a broad range of inclusion criteria. The approach involves regression statistics in which subjects' covariate terms are chosen to reflect age-at-onset information from the subject's affected family members. The results of analyses of data from a family study of panic disorder illustrate the approach.

18.
An approach is described for identifying and quantifying oxidant-sensitive protein thiols using a cysteine-specific, acid-cleavable isotope-coded affinity tag (ICAT) reagent (Applied Biosystems, Foster City, CA). The approach is based on the fact that only free cysteine thiols are susceptible to labeling by the iodoacetamide-based ICAT reagent, and that mass spectrometry can be used to quantitate the relative labeling of free thiols. To validate our approach, creatine kinase with four cysteine residues, one of which is oxidant-sensitive, was chosen as an experimental model. ICAT-labeled peptides derived from creatine kinase were used to evaluate the relative abundance of the free thiols in samples subjected (or not) to treatment with hydrogen peroxide. As predicted, hydrogen peroxide decreased the relative abundance of the unmodified oxidant-sensitive thiol residue of cysteine-283 in creatine kinase, providing proof of principle that an ICAT-based quantitative mass spectrometry approach can be used to identify and quantify oxidation of cysteine thiols. This approach opens an avenue for proteomics studies of the redox state of protein thiols.
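The quantification step reduces to a simple ratio per cysteine-containing peptide: ICAT-label intensity in the oxidant-treated sample divided by that in the control, with ratios well below 1 flagging oxidant-sensitive thiols. The intensities, peptide names, and cut-off below are invented; only the Cys-283 label echoes the residue named in the abstract.

```python
# Toy sketch of the ICAT ratio calculation (intensities and threshold are invented).
label_intensity = {
    # peptide (illustrative names): (control intensity, H2O2-treated intensity)
    "CK_Cys_pep1":   (1.00e6, 0.97e6),
    "CK_Cys_pep2":   (8.5e5, 8.2e5),
    "CK_Cys_pep3":   (7.9e5, 7.7e5),
    "CK_Cys283_pep": (9.2e5, 2.1e5),   # the oxidant-sensitive cysteine in this toy example
}

THRESHOLD = 0.5   # arbitrary cut-off for calling a thiol oxidant-sensitive

for peptide, (ctrl, treated) in label_intensity.items():
    ratio = treated / ctrl
    flag = "oxidant-sensitive" if ratio < THRESHOLD else "unchanged"
    print(f"{peptide}: treated/control = {ratio:.2f} -> {flag}")
```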

19.
It is widely acknowledged that the analysis of comparative data from related species should be performed taking into account their phylogenetic relationships. We introduce a new method, based on the use of generalized estimating equations (GEE), for the analysis of comparative data. The principle is to incorporate, in the modelling process, a correlation matrix that specifies the dependence among observations. This matrix is obtained from the phylogenetic tree of the studied species. Using this approach, a variety of distributions (discrete or continuous) can be analysed using a generalized linear modelling framework, phylogenies with multichotomies can be analysed, and there is no need to estimate ancestral character states. A simulation study showed that the proposed approach has good statistical properties, with a type-I error rate close to the nominal 5%, and statistical power to detect correlated evolution between two characters which increases with the strength of the correlation. The proposed approach performs well for the analysis of discrete characters. We illustrate our approach with some data on macro-ecological correlates in birds. Some extensions of the use of GEE are discussed.
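For a continuous trait with an identity link, the GEE estimating equations with a phylogeny-derived working correlation reduce to generalized least squares, β = (X'V⁻¹X)⁻¹X'V⁻¹y, which makes the role of the tree-derived matrix easy to see. The 4-species covariance matrix and trait values below are tiny hand-written stand-ins for quantities computed from a real tree.

```python
# GEE/GLS sketch for a continuous trait with a phylogeny-derived working covariance.
import numpy as np

# Shared branch lengths of a hypothetical 4-species tree ((A,B),(C,D)).
V = np.array([[1.0, 0.6, 0.0, 0.0],
              [0.6, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.7],
              [0.0, 0.0, 0.7, 1.0]])

body_mass = np.array([2.1, 2.4, 5.0, 5.3])       # log body mass (made up)
range_size = np.array([1.0, 1.3, 2.8, 3.1])      # log range size (made up)

X = np.column_stack([np.ones(4), body_mass])
Vinv = np.linalg.inv(V)
beta = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ range_size)   # estimating equations
resid = range_size - X @ beta
sigma2 = resid @ Vinv @ resid / (4 - 2)
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ Vinv @ X)))
print("slope = %.3f +/- %.3f" % (beta[1], se[1]))
```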

20.
Fish screens can help prevent the entrainment or injury of fish at irrigation diversions, but only when designed appropriately. Design criteria cannot simply be transferred between sites or pump systems and need to be developed using an evidence-based approach with the needs of local species in mind. Laboratory testing is typically used to quantify fish responses at intake screens, but often limits the number of species that can be studied and creates artificial conditions not directly applicable to screens in the wild. In this study a field-based approach was used to assess the appropriateness of different screen design attributes for the protection of a lowland river fish assemblage at an experimental irrigation pump. Direct netting of entrained fish was used along with sonar technology to quantify the probability of screen contact for a Murray-Darling Basin (Australia) fish species. Two approach velocities (0.1 and 0.5 m.sec−1) and different sizes of woven mesh (5, 10 and 20 mm) were evaluated. Smaller fish (<150 mm) in the assemblage were significantly more susceptible to entrainment and screen contact, especially at higher approach velocities. Mesh size appeared to have little impact on screen contact and entrainment, suggesting that approach velocity rather than mesh size is likely to be the primary consideration when developing screens. Until the effects of screen contacts on injury and survival of these species are better understood, it is recommended that approach velocities not exceed 0.1 m.sec−1 when the desire is to protect the largest range of species and size classes for lowland river fish assemblages in the Murray-Darling Basin. The field method tested proved to be a useful approach that could complement laboratory studies to refine fish screen design and facilitate field validation.
