Similar Articles
20 similar articles found.
1.
In this paper, we present algorithms to find near-optimal sets of epidemic spreaders in complex networks. We extend the notion of local centrality, a centrality measure previously shown to correspond with a node's ability to spread an epidemic, to sets of nodes by introducing combinatorial local centrality. Though we prove that finding a set of nodes that maximizes this new measure is NP-hard, good approximations are available. We show that a strictly greedy approach obtains the best approximation ratio unless P = NP, and then formulate a modified version of this approach that leverages qualities of the network to achieve a faster runtime while maintaining this theoretical guarantee. We perform an experimental evaluation on samples from several different network structures, which demonstrates that our algorithm maximizes combinatorial local centrality and consistently chooses the most effective set of nodes to spread infection under the SIR model, relative to selecting the top nodes using many common centrality measures. We also demonstrate that the optimized algorithm we develop scales effectively.
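The greedy strategy the abstract describes can be sketched as below. This is only a stand-in sketch: the set-level score used here (size of the joint 2-hop neighbourhood of the chosen set) is a hypothetical surrogate for combinatorial local centrality, whose exact definition is given in the paper itself.

```python
def neighbors_within(adj, v, depth=2):
    # nodes reachable from v within `depth` hops, excluding v itself
    seen, frontier = {v}, {v}
    for _ in range(depth):
        frontier = {u for w in frontier for u in adj[w]} - seen
        seen |= frontier
    return seen - {v}

def set_local_centrality(adj, S):
    # hypothetical set-level score: size of the joint 2-hop neighbourhood of S
    covered = set()
    for v in S:
        covered |= neighbors_within(adj, v)
    return len(covered - set(S))

def greedy_spreaders(adj, k):
    # strictly greedy selection: at each step add the node that most
    # increases the set-level centrality score
    S = set()
    for _ in range(k):
        best = max((v for v in adj if v not in S),
                   key=lambda v: set_local_centrality(adj, S | {v}))
        S.add(best)
    return S
```

For monotone submodular set functions, this greedy scheme carries the classical (1 - 1/e) approximation guarantee, which is the kind of guarantee the abstract refers to.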

2.
A significance measure essentially due to Liebermeister and dating back to 1877 may be preferable to Fisher's Exact Test, a conservative but commonly applied test when the sample sizes are small. We show that Liebermeister's measure is less conservative than Fisher's P‐value and just as easy to calculate, while retaining the important features of a significance measure. We also compare Liebermeister's measure with Lancaster's mid‐P, which has gained increasing acceptance as a replacement for Fisher's P‐value. Application is made to a recent striking medical study on appendicitis symptoms for which the Fisher test does not give significance.
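Both quantities are easy to compute from the hypergeometric tail. The sketch below uses the commonly noted identity that Liebermeister's measure equals Fisher's one-sided P-value computed on the 2×2 table with the two cells supporting the alternative each incremented by one; treat that identity and the cell layout as assumptions of this illustration.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    # P(X >= a) under the hypergeometric null for the table [[a, b], [c, d]]
    n, r1, c1 = a + b + c + d, a + b, a + c
    denom = comb(n, c1)
    kmax = min(r1, c1)
    return sum(comb(r1, k) * comb(n - r1, c1 - k)
               for k in range(a, kmax + 1)) / denom

def liebermeister(a, b, c, d):
    # Liebermeister's measure: Fisher's one-sided P on the table with the
    # two "diagonal" cells each incremented by one (assumed identity)
    return fisher_one_sided(a + 1, b, c, d + 1)
```

On a small table such as [[7, 1], [2, 6]], Liebermeister's measure is noticeably smaller than Fisher's P, illustrating the reduced conservatism the abstract describes.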

3.
The first objective of this paper is to define a new measure of fidelity of a species to a vegetation unit, called u. The value of u is derived from the approximation of the binomial or the hypergeometric distribution by the normal distribution. It is shown that the properties of u meet the requirements for a fidelity measure in vegetation science, i.e. (1) to reflect differences of a species' relative frequency inside a certain vegetation unit and its relative frequency in the remainder of the data set; and (2) to increase with increasing size of the data set. Additionally, (3) u has the property of being dependent on the proportion of the vegetation unit's size to the size of the whole data set. The second objective is to present a method of how to use the value of u for finding species groups in large databases and for defining vegetation units. A species group is defined by possession of species that show the highest value of u among all species in the data set with regard to the vegetation unit defined by this species group. The vegetation unit is defined as comprising all relevés that include a minimum number of the species in the species group. This minimum number is derived statistically in such a way that fewer relevés always belong to a species group than would be expected if the differential species were distributed randomly among the relevés. An iterative algorithm is described for detecting species groups in databases. Starting with an initial species group, the species composition of this group and the vegetation unit defined by this group are mutually optimized. With this algorithm, species groups are formed in a data set independently of each other. Subsequently, these species groups can be combined in such a way that they are suited to define commonly known syntaxa a posteriori.
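A fidelity value built from the normal approximation described above can be sketched as a standardised deviation of the observed co-occurrence count from its expectation; the exact definition of u is in the paper, so treat the formula below as an illustrative assumption rather than the authors' measure.

```python
from math import sqrt

def fidelity_u(N, Np, n, f, hypergeometric=True):
    # N: total relevés; Np: relevés in the vegetation unit;
    # n: relevés containing the species; f: of those, inside the unit.
    # u is the standardised deviation of f from its random expectation.
    p = Np / N
    mu = n * p
    var = n * p * (1 - p)
    if hypergeometric:
        var *= (N - n) / (N - 1)  # finite-population correction
    return (f - mu) / sqrt(var)
```

Doubling the data set while holding all proportions fixed increases u, matching requirement (2) in the abstract.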

4.
Muller's ratchet is a paradigmatic model for the accumulation of deleterious mutations in a population of finite size. A click of the ratchet occurs when all individuals with the least number of deleterious mutations are lost irreversibly due to a stochastic fluctuation. In spite of the simplicity of the model, a quantitative understanding of the process remains an open challenge. In contrast to previous works, we here study a Moran model of the ratchet with overlapping generations. Employing an approximation which describes the fittest individuals as one class and the rest as a second class, we obtain closed analytical expressions for the ratchet rate in the rare clicking regime. As a click in this regime is caused by a rare, large fluctuation from a metastable state, we do not resort to a diffusion approximation but apply an approximation scheme which is especially well suited to describe extinction events from metastable states. This method also allows for a derivation of expressions for the quasi-stationary distribution of the fittest class. Additionally, we confirm numerically that the formulation with overlapping generations leads to the same results as the diffusion approximation and the corresponding Wright-Fisher model with non-overlapping generations.
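The two-class approximation can be caricatured in a few lines of simulation. This is a minimal sketch under assumed dynamics (fitness-weighted Moran birth, uniform death, one-way mutation out of the fittest class), not the authors' analytical treatment.

```python
import random

def time_to_click(N, n0, s, u, max_steps=1_000_000, seed=1):
    # Two-class Moran caricature of Muller's ratchet:
    # n = size of the fittest class (fitness 1); others have fitness 1 - s.
    # Each step: one fitness-weighted birth (offspring mutates out of the
    # fittest class with prob. u), then one uniform death.
    # Returns the step of the first click (n -> 0), or None.
    rng = random.Random(seed)
    n = n0
    for t in range(1, max_steps + 1):
        w = n + (N - n) * (1 - s)                    # total fitness
        birth_fit = rng.random() < n / w and rng.random() >= u
        death_fit = rng.random() < n / N
        n += birth_fit - death_fit
        if n == 0:
            return t
    return None
```

With u = 1 every offspring leaves the fittest class, so the click is certain; smaller u pushes the system into the rare-clicking regime the abstract analyses.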

5.
Toxicological study is of practical importance in modern drug development. Proper statistical methodologies for the toxicological evaluation of newly developed drugs are undoubtedly necessary. In toxicological studies, it is practically desirable for a method not to declare the safety of a developed drug at a higher dosage prior to the declaration of safety at lower dosages. Hsu and Berger's stepwise confidence interval method was recently proposed for this purpose. Unfortunately, their procedure necessitates the homogeneity of variances among dosages, which is seldom satisfied in practice. In this article, via the application of Stein's two-stage sampling method, we propose a stepwise confidence interval procedure for the same task without the homoscedasticity restriction. In addition, our procedure is shown to control its family-wise type I error rate at the pre-chosen nominal level. A simulation study is conducted to compare our method, Hsu and Berger's stepwise confidence interval method, and a single-stage stepwise testing procedure based on Welch's approximation. Our procedure is empirically shown to outperform Hsu and Berger's procedure under heteroscedasticity and to perform similarly to Welch's procedure. An example is used to illustrate our method.

6.
Noether (1987) proposed a method of sample size determination for the Wilcoxon-Mann-Whitney test. To obtain a sample size formula, he restricted himself to alternatives that differ only slightly from the null hypothesis, so that the unknown variance σ2 of the Mann-Whitney statistic can be approximated by the known variance under the null hypothesis, which depends only on n. This fact is frequently forgotten in statistical practice. In this paper, we compare Noether's large sample solution against an alternative approach based on upper bounds of σ2 which is valid for any alternative. This comparison shows that Noether's approximation is sufficiently reliable with small and large deviations from the null hypothesis.
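Noether's large-sample formula is short enough to sketch directly: the total sample size N = (z_α + z_β)² / (12 c(1−c)(p′ − 1/2)²), where p′ = P(X < Y) under the alternative and c is the fraction allocated to the first group. The parameter names here are illustrative choices, not Noether's notation.

```python
from math import ceil
from statistics import NormalDist

def noether_n(p_prime, alpha=0.05, beta=0.20, c=0.5, one_sided=True):
    # Noether (1987) total sample size for the Wilcoxon-Mann-Whitney test.
    # p_prime = P(X < Y) under the alternative; c = fraction in group 1.
    z = NormalDist().inv_cdf
    za = z(1 - alpha) if one_sided else z(1 - alpha / 2)
    zb = z(1 - beta)
    return ceil((za + zb) ** 2 / (12 * c * (1 - c) * (p_prime - 0.5) ** 2))
```

Note how quickly the required N grows as p′ approaches 1/2, i.e. as the alternative approaches the null, which is exactly the regime in which the null-variance approximation underlying the formula is valid.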

7.
In this paper we investigate several schemes to approximate the stationary distribution of the stochastic SIS system with import. We begin by presenting the model and analytically computing its stationary distribution. We then approximate this distribution using Kramers–Moyal approximation, van Kampen's system size expansion, and a semiclassical scheme, also called WKB or eikonal approximation depending on its different applications in physics. For the semiclassical scheme, done in the context of the Hamilton–Jacobi formalism, two approaches are taken. In the first approach we assume a semiclassical ansatz for the generating function, while in the second the solution of the master equation is approximated directly. The different schemes are compared and the semiclassical approximation, which performs better, is then used to analyse the time dependent solution of stochastic systems for which no analytical expression is known. Stochastic epidemiological models are studied in order to investigate how far such semiclassical approximations can be used for parameter estimation.
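Because the SIS system with import is a one-dimensional birth-death chain, its exact stationary distribution follows from detailed balance, which is the analytical baseline the approximations are compared against. The specific rate functions below (mass-action infection plus a constant import term, linear recovery) are an assumed parameterisation for illustration.

```python
def bd_stationary(birth, death, nmax):
    # Stationary distribution of a birth-death chain via detailed balance:
    # pi_{n+1} = pi_n * birth(n) / death(n+1), then normalise.
    pis = [1.0]
    for n in range(nmax):
        pis.append(pis[-1] * birth(n) / death(n + 1))
    Z = sum(pis)
    return [p / Z for p in pis]

# assumed SIS-with-import rates on n infectives out of N individuals
N, beta, gamma, eta = 20, 1.0, 1.0, 0.05
birth = lambda n: beta * n * (N - n) / N + eta * (N - n)  # infection + import
death = lambda n: gamma * n                               # recovery
pi = bd_stationary(birth, death, N)
```

The import term eta keeps birth(0) > 0, so the chain has no absorbing state and a genuine stationary distribution exists, which is what makes the comparison of approximation schemes well posed.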

8.
It is not currently possible to measure the real-world thought process that a child has while observing an actual school lesson. However, if it could be done, children's neural processes would presumably be predictive of what they know. Such neural measures would shed new light on children's real-world thought. Toward that goal, this study examines neural processes that are evoked naturalistically, during educational television viewing. Children and adults all watched the same Sesame Street video during functional magnetic resonance imaging (fMRI). Whole-brain intersubject correlations between the neural timeseries from each child and a group of adults were used to derive maps of "neural maturity" for children. Neural maturity in the intraparietal sulcus (IPS), a region with a known role in basic numerical cognition, predicted children's formal mathematics abilities. In contrast, neural maturity in Broca's area correlated with children's verbal abilities, consistent with prior language research. Our data show that children's neural responses while watching complex real-world stimuli predict their cognitive abilities in a content-specific manner. This more ecologically natural paradigm, combined with the novel measure of "neural maturity," provides a new method for studying real-world mathematics development in the brain.
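At its core, the "neural maturity" measure is a child-to-adult intersubject correlation. A minimal sketch, assuming neural maturity is operationalised as the Pearson correlation between a child's regional timeseries and the adult group's mean timeseries (the paper's actual pipeline is voxel-wise and whole-brain):

```python
from statistics import mean

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

def neural_maturity(child_ts, adult_group_ts):
    # correlate one child's timeseries with the adult group's mean timeseries
    mean_adult = [mean(col) for col in zip(*adult_group_ts)]
    return pearson(child_ts, mean_adult)
```

A child whose response rises and falls with the adult average scores near 1, which is the intuition behind calling the measure "maturity".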

9.
In this paper we consider a modification of Bailey's stochastic model for the spread of an epidemic when there are seasonal variations in the infection rate. The resulting nonlinear model is analyzed by employing the diffusion approximation technique. We have shown that for a large population the process, on suitable scaling and normalization, converges to a non-stationary Ornstein-Uhlenbeck process. Consequently, the number of infectives has a Gaussian distribution in the steady state.
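The limiting Ornstein-Uhlenbeck process can be simulated with a simple Euler-Maruyama step; dX = θ(μ − X)dt + σ dW. The parameters and the constant-coefficient form below are illustrative assumptions — in the seasonal model the coefficients would be time-dependent.

```python
import random

def simulate_ou(theta, mu, sigma, x0, dt, steps, seed=0):
    # Euler-Maruyama integration of dX = theta*(mu - X)*dt + sigma*dW
    rng = random.Random(seed)
    x = x0
    for _ in range(steps):
        x += theta * (mu - x) * dt + sigma * (dt ** 0.5) * rng.gauss(0, 1)
    return x
```

With sigma = 0 the scheme reduces to deterministic exponential relaxation toward mu, which is a convenient sanity check on the drift term.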

10.
The rubber hand illusion (RHI) is a popular experimental paradigm. Participants view touch on an artificial rubber hand while the participants' own hidden hand is touched. If the viewed and felt touches are given at the same time, then this is sufficient to induce the compelling experience that the rubber hand is one's own hand. The RHI can be used to investigate exactly how the brain constructs distinct body representations for one's own body. Such representations are crucial for successful interactions with the external world. To obtain a subjective measure of the RHI, researchers typically ask participants to rate statements such as "I felt as if the rubber hand were my hand". Here we demonstrate how the crossmodal congruency task can be used to obtain an objective behavioral measure within this paradigm.

The variant of the crossmodal congruency task we employ involves the presentation of tactile targets and visual distractors. Targets and distractors are spatially congruent (i.e. same finger) on some trials and incongruent (i.e. different finger) on others. The difference in performance between incongruent and congruent trials - the crossmodal congruency effect (CCE) - indexes multisensory interactions. Importantly, the CCE is modulated both by viewing a hand as well as by the synchrony of viewed and felt touch, which are both crucial factors for the RHI.

The use of the crossmodal congruency task within the RHI paradigm has several advantages. It is a simple behavioral measure which can be repeated many times and which can be obtained during the illusion while participants view the artificial hand. Furthermore, this measure is not susceptible to observer and experimenter biases. The combination of the RHI paradigm with the crossmodal congruency task allows in particular for the investigation of multisensory processes which are critical for modulations of body representations as in the RHI.
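The CCE itself is just the difference between mean performance on incongruent and congruent trials. A minimal sketch using reaction times (performance could equally be error rates; the units here are an assumption):

```python
from statistics import mean

def cce(rt_congruent, rt_incongruent):
    # crossmodal congruency effect: mean incongruent RT minus mean congruent RT
    return mean(rt_incongruent) - mean(rt_congruent)
```

A larger CCE in the synchronous-stroking condition than in the asynchronous one would be the objective signature of the illusion described above.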

11.
This paper discusses tests for homogeneity of row-variances in a two-way classification. Parametric tests for this problem are shown to be highly sensitive to departures from the normality assumption. Levene's robust tests tend to be rather liberal when the number of rows exceeds the number of columns. A modification is suggested which improves the approximation for this situation, while retaining the desirable robustness property.
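Levene's test, the baseline being modified here, is a one-way ANOVA F statistic computed on absolute deviations from each group mean. A minimal sketch of that classical statistic (not the paper's modification):

```python
from statistics import mean

def levene_W(groups):
    # Levene's statistic: one-way ANOVA F on z_ij = |x_ij - group mean|
    z = [[abs(x - mean(g)) for x in g] for g in groups]
    k = len(z)
    N = sum(len(g) for g in z)
    zbar = mean([x for g in z for x in g])
    zi = [mean(g) for g in z]
    ssb = sum(len(g) * (m - zbar) ** 2 for g, m in zip(z, zi))  # between groups
    ssw = sum((x - m) ** 2 for g, m in zip(z, zi) for x in g)   # within groups
    return ((N - k) / (k - 1)) * (ssb / ssw)
```

Identical groups give W = 0, and W grows with spread differences between groups; the liberal behaviour the abstract mentions concerns the F reference distribution, not the statistic itself.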

12.
13.
Bees which are held in a fixed position so that only head movements can be made respond to a moving stripe system in their visual field by a characteristic motion of the antennae. This reflex can be used to measure the bee's state of photic adaptation. A curve describing the course of dark adaptation is obtained, which shows that the sensitivity of the light-adapted bee's eye increases rapidly during the first few minutes in darkness, then more slowly until it reaches a maximum level after 25 to 30 minutes. The total increase in sensitivity is about 1000-fold. The adaptive range of the human eye is about 10 times greater than for the bee's eye. The range covered by the bee's eye corresponds closely to the adapting range which is covered by the rods of the human eye.

14.
The most studied ecogeographic rule is Bergmann's rule, but aspects of the original paper are often presented incorrectly even though Bergmann (1847) is explicitly cited. The goals of this paper are 1) to summarize the contents of Bergmann's paper, supported by direct translations, and 2) to discuss the main issues surrounding Bergmann's rule based on Bergmann's intentions and early definitions of the rule. Although Bergmann himself never formulated an explicit rule, based on his intentions and early definitions, Bergmann's rule is: "Within species and amongst closely related species of homeothermic animals a larger size is often achieved in colder climates than in warmer ones, which is linked to the temperature budget of these animals." Bergmann (1847) assumed that the surface area of an animal is a measure of its heat dissipation and an animal's volume a measure of its heat production. As body size increases, an animal's surface area increases less than its volume; however, modifications in morphology and behaviour will also influence the temperature budget. Bergmann hypothesized that when everything but size is equal, the smaller animals should live in warmer areas. This was supported by empirical data on more than 300 bird species belonging to 86 genera. Recommendations for use of the term Bergmann's rule include 1) inclusion of a thermoregulatory mechanism, 2) application only to homeothermic animals, 3) but to any taxonomic group, 4) that tests of the rule should test the assumption that larger animals have to produce less heat to increase body temperatures, and 5) that future authors should either go back to the original publication (Bergmann 1847) when referring to it or simply not cite it at all.
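Bergmann's geometric argument is easy to make concrete: for a body idealised as a sphere, the surface-to-volume ratio falls as 3/r, so larger animals dissipate relatively less heat per unit of heat-producing volume. The sphere is of course an illustrative simplification, as the abstract's caveat about morphology and behaviour makes clear.

```python
from math import pi

def surface_to_volume(r):
    # sphere of radius r: S = 4*pi*r^2, V = (4/3)*pi*r^3, so S/V = 3/r
    return (4 * pi * r ** 2) / ((4 / 3) * pi * r ** 3)
```

Doubling the radius halves the surface-to-volume ratio, which is the quantitative heart of the thermoregulatory mechanism.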

15.
Consider testing the hypothesis of no treatment effects against a postulated ranking of the m treatments, given data from n complete blocks. A suitable test statistic is the weighted average rank correlation w = Σi bQiCi, where Ci is the correlation between the postulated ranking and the ranking observed within the ith block, Qi is the rank of the ith block with respect to credibility, and the bi are weights such that 0 ≦ b1 ≦ … ≦ bn. In this paper we introduce some simple statistics: the first extends the signed-rank statistic to m ≥ 3, the second uses a simple measure of correlation based on the antiranks, and the third is a statistic based on Spearman's footrule. Tables of critical values are provided and the normal approximation is investigated.
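Two of the ingredients above are straightforward to compute. The sketch below implements Spearman's footrule and my reading of the weighted statistic, w = Σi b_{Qi} Ci (each block's correlation weighted by the weight attached to its credibility rank); treat that indexing as an assumption about the paper's notation.

```python
def footrule(r1, r2):
    # Spearman's footrule: sum of absolute rank differences
    return sum(abs(a - b) for a, b in zip(r1, r2))

def weighted_rank_corr(C, Q, b):
    # w = sum_i b_{Q_i} * C_i : block i's correlation C_i weighted by the
    # weight b attached to its credibility rank Q_i (1-based)
    return sum(b[q - 1] * c for q, c in zip(Q, C))
```

With nondecreasing weights b, more credible blocks (higher Qi) contribute more to w, which is the intended effect of the weighting.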

16.
Speciation is central to evolutionary biology, and to elucidate it, we need to catch the early genetic changes that set nascent taxa on their path to species status (Via 2009). That challenge is difficult, of course, for two chief reasons: (i) serendipity is required to catch speciation in the act; and (ii) after a short time span with lingering gene flow, differentiation may be low and/or embodied only in rare alleles that are difficult to sample. In this issue of Molecular Ecology Resources, Smouse et al. (2015) have noted that optimal assessment of differentiation within and between nascent species should be robust to these challenges, and they identified a measure based on Shannon's information theory that has many advantages for this and numerous other tasks. The Shannon measure exhibits complete additivity of information at different levels of subdivision. Of all the family of diversity measures ('0' or allele counts, '1' or Shannon, '2' or heterozygosity, FST and related metrics), Shannon's measure comes closest to weighting alleles by their frequencies. For the Shannon measure, rare alleles that represent early signals of nascent speciation are neither down-weighted to the point of irrelevance, as for level 2 measures, nor up-weighted to overpowering importance, as for level 0 measures (Chao et al. 2010, 2015). Shannon measures have a long history in population genetics, dating back to Shannon's PhD thesis in 1940 (Crow 2001), but have received only sporadic attention, until a resurgence of interest in the last ten years, as reviewed briefly by Smouse et al. (2015).
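The Shannon ("level 1") diversity measure discussed above is simply the entropy of the allele-frequency distribution:

```python
from math import log

def shannon_diversity(counts):
    # Shannon diversity of an allele-count vector: H = -sum p_i * ln(p_i)
    n = sum(counts)
    ps = [c / n for c in counts if c > 0]
    return -sum(p * log(p) for p in ps)
```

A rare allele (e.g. frequency 0.01) still contributes appreciably to H, unlike its near-zero contribution to heterozygosity-type (level 2) measures, which is the intermediate weighting the abstract highlights.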

17.
The problem of variable selection in generalized linear mixed models (GLMMs) is pervasive in statistical practice. For the purpose of variable selection, many methodologies for determining the best subset of explanatory variables currently exist, according to the model complexity and differences between applications. In this paper, we develop a "higher posterior probability model with bootstrap" (HPMB) approach to select explanatory variables without fitting all possible GLMMs involving a small or moderate number of explanatory variables. Furthermore, to save computational load, we propose an efficient approximation approach with Laplace's method and Taylor's expansion to approximate intractable integrals in GLMMs. Simulation studies and an application to HapMap data provide evidence that this selection approach is computationally feasible and reliable for exploring true candidate genes and gene–gene associations, after adjusting for complex structures among clusters.
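Laplace's method, the workhorse mentioned above, approximates ∫ e^{g(x)} dx by e^{g(x0)} √(2π / |g''(x0)|) around the mode x0. A classic self-contained illustration (not the paper's GLMM integrals) is recovering Stirling's approximation from n! = ∫ x^n e^{-x} dx, where g(x) = n ln x − x peaks at x0 = n with g''(x0) = −1/n:

```python
from math import exp, log, pi, sqrt

def laplace_factorial(n):
    # Laplace's method on n! = integral of exp(n*ln x - x):
    # mode at x0 = n, curvature g''(x0) = -1/n, giving Stirling's formula
    return exp(n * log(n) - n) * sqrt(2 * pi * n)
```

Already at n = 10 the approximation is within about 1% of the exact factorial, which is why the same idea gives usable approximations to the intractable likelihood integrals in GLMMs.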

18.
This paper is an attempt to develop a generic modeling framework that addresses tactical planning problems of flexible manufacturing systems in a coherent manner. We propose a generic 0-1 mixed integer programming formulation that integrates batching, loading, and routing problems with their critical aspects related to a system's performance. For this purpose, a thorough analysis is made to determine and relate system components, their attributes, and alternatives, together with performance measures specific to tactical planning. This analysis provides the justification to support our argument about the generality of the model. A linear programming formulation is provided to approximate the proposed mixed integer formulation so as to overcome the problem's combinatorial complexity. The potential capability of the proposed linear approximation is also demonstrated via a small set of test problems.

19.
Variability between raters' ordinal scores is commonly observed in imaging tests, leading to uncertainty in the diagnostic process. In breast cancer screening, a radiologist visually interprets mammograms and MRIs, while skin diseases, Alzheimer's disease, and psychiatric conditions are graded based on clinical judgment. Consequently, studies are often conducted in clinical settings to investigate whether a new training tool can improve the interpretive performance of raters. In such studies, a large group of experts each classify a set of patients' test results on two separate occasions, before and after some form of training with the goal of assessing the impact of training on experts' paired ratings. However, due to the correlated nature of the ordinal ratings, few statistical approaches are available to measure association between raters' paired scores. Existing measures are restricted to assessing association at just one time point for a single screening test. We propose here a novel paired kappa to provide a summary measure of association between many raters' paired ordinal assessments of patients' test results before versus after rater training. Intrarater association also provides valuable insight into the consistency of ratings when raters view a patient's test results on two occasions with no intervention undertaken between viewings. In contrast to existing correlated measures, the proposed kappa is a measure that provides an overall evaluation of the association among multiple raters' scores from two time points and is robust to the underlying disease prevalence. We implement our proposed approach in two recent breast-imaging studies and conduct extensive simulation studies to evaluate properties and performance of our summary measure of association.
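For orientation, the single-time-point building block that the proposed paired kappa generalises is the classical chance-corrected agreement statistic, κ = (p_o − p_e)/(1 − p_e). The sketch below is Cohen's kappa for one pair of rating vectors, not the authors' paired multi-rater measure:

```python
def cohen_kappa(r1, r2):
    # Cohen's kappa for two rating vectors over the same subjects:
    # observed agreement p_o corrected by chance agreement p_e
    n = len(r1)
    cats = sorted(set(r1) | set(r2))
    po = sum(a == b for a, b in zip(r1, r2)) / n
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (po - pe) / (1 - pe)
```

Perfect agreement yields κ = 1 and agreement at chance level yields κ = 0; the abstract's point is that this classical form cannot summarise many raters' before/after scores jointly.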

20.
Flexible Assembly Systems (FASs), which form an important subset of modern manufacturing systems, are finding increasing use in today's industry. In the planning and design phase of these systems, it is useful to have tools that predict system performance for various operating conditions. In this article, we present such a performance analysis tool based on queueing approximation for a class of FASs, namely, closed-loop flexible assembly systems (CL-FASs). For CL-FASs, we describe iterative algorithms for computing steady-state performance measures, including production rate and station utilizations. These algorithms are computationally simple and have a fast convergence rate. We derive a new approximation to correct the mean delay at each queue. This improves the accuracy of performance prediction, especially in the case of small CL-FASs. Comparisons with simulation results indicate that the approximation technique is reasonably accurate for a broad range of parameter values and system sizes. This makes possible efficient (fast and computationally inexpensive) analysis of CL-FASs under various conditions.
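The kind of iterative steady-state computation described above can be illustrated with exact Mean Value Analysis (MVA) for a closed product-form queueing network — a standard textbook baseline, not the paper's corrected approximation:

```python
def mva(service, visits, N):
    # Exact Mean Value Analysis for a closed product-form network:
    # service[i] = mean service time at station i, visits[i] = visit ratio,
    # N = number of circulating jobs (pallets/fixtures in a CL-FAS).
    K = len(service)
    L = [0.0] * K                     # mean queue lengths
    for n in range(1, N + 1):
        R = [service[i] * (1 + L[i]) for i in range(K)]   # residence times
        Rtot = sum(visits[i] * R[i] for i in range(K))
        X = n / Rtot                                      # system throughput
        L = [X * visits[i] * R[i] for i in range(K)]
    return X, L
```

Throughput X corresponds to the production rate, and X * visits[i] * service[i] gives the utilization of station i, the two performance measures the abstract names.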
