Similar Documents
20 similar documents found (search time: 31 ms)
1.
Consider the problem of making inference about the initial relative infection rate of a stochastic epidemic model. A relatively complete analysis of infectious disease data is possible when it is assumed that the latent and infectious periods are non-random. Here two related martingale-based techniques are used to derive estimates and associated standard errors for the initial relative infection rate. The first technique requires complete information on the epidemic, the second only the total number of people who were infected and the population size. Explicit expressions for the estimates are obtained. The estimates of the parameter and its associated standard error are easily computed and compare well with results of other methods in an application to smallpox data. Asymptotic efficiency differences between the two martingale techniques are considered.

2.
Axons of the Ti1 and Fe2 pioneer neurons in the legs of insect embryos possess separate and highly stereotyped proximal projections towards the CNS. However, quantitative analyses of deviations from the standard paths during the period of axon growth indicate that transient errors occur unexpectedly often. The distribution of legs with axons following deviant paths among the embryos analyzed is used to determine whether these errors are caused by random developmental noise or by non-random genetic or environmental factors. During the formation of the Ti1 pathway all the errors are characterized by defasciculation of the 2 axons, occur with an average incidence of 7% and are statistically shown to be randomly caused. In comparison, during the formation of the Fe2 pathway the errors are characterized by both defasciculation and elongation in an inappropriate distal direction, occur with an incidence of 16%, and, as revealed by statistical analyses, are caused by a non-random factor. Therefore, during pathfinding by these 2 pairs of axons there is a need for error-correcting mechanisms to ensure the stereotypy of the final projections. These error-correcting mechanisms are suggested to have properties similar to those producing canalization as proposed by Waddington.
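The randomness test described here amounts to a dispersion test: if deviant legs arise from independent random noise, counts of deviant legs per embryo should follow a binomial distribution. A minimal sketch of such a test (Python; the counts are hypothetical, and six scored legs per embryo is an assumption):

```python
# Do deviant legs cluster in particular embryos (non-random factor), or are
# they spread as binomial noise? Counts below are hypothetical.
import numpy as np
from scipy import stats

legs_per_embryo = 6                                   # assumption: 6 scored legs/embryo
deviants = np.array([0] * 60 + [1] * 12 + [2] * 3)    # deviant legs per embryo

n_embryos = len(deviants)
p_hat = deviants.sum() / (n_embryos * legs_per_embryo)

# Expected embryo counts per class under the binomial (random-noise) null,
# pooling k >= 2 so expected counts stay reasonable for a chi-square test.
k = np.arange(legs_per_embryo + 1)
expected_full = n_embryos * stats.binom.pmf(k, legs_per_embryo, p_hat)
obs = np.array([(deviants == 0).sum(), (deviants == 1).sum(), (deviants >= 2).sum()])
exp = np.array([expected_full[0], expected_full[1], expected_full[2:].sum()])

chi2 = ((obs - exp) ** 2 / exp).sum()
p_value = stats.chi2.sf(chi2, df=len(obs) - 2)        # 1 df lost to estimating p_hat
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")        # small p -> non-random factor
```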

3.
Summary A theoretical investigation was made to ascertain the effects of random and non-random deviations, called errors, of phenotypic from genotypic values on population means and on the response to phenotypic recurrent selection. The study was motivated by a selection experiment for disease resistance where there was either variability in the inoculation or environment (the random errors) or where the inoculation was above or below the optimum rate at which genetic differences in resistance are maximized (the non-random errors). The study was limited to the genetics at a diallelic locus (alleles B and b) in an autotetraploid population in random mating equilibrium. The response to selection was measured as the covariance of selection and compared to the exact covariance, which was the covariance of selection without errors in phenotype. The random errors were modeled by assuming that a given percentage (ε) of the population was uniformly distributed among the five possible genotype classes independent of their true genotypes. This model was analyzed numerically for a theoretical population with the frequency of the B allele (p) ranging from 0.0 to 1.0 and assumed errors of ε = 0.1 and 0.5 for the following six types of genic action of the B allele: additive, monoplex dominance, partial monoplex dominance, duplex dominance, partial duplex dominance, and recessive. The effect of random error was to consistently reduce the response to selection by a percentage independent of the type of genic action at the locus. The effect on the population mean was an upward bias when p was low and a downward bias when p approached unity. In the non-random error model, below-optimum inoculations altered the phenotypes by systematically including a percentage of susceptible genotypes in one or more other genotype classes with more genetic resistance (a positive shift). With above-optimum inoculations, some resistant genotypes are classed with the non-resistant genotypes (a negative shift). The effects on the covariance of selection were found by numerical analysis for the same types of genic action and ε's as investigated for random error. With a negative shift and a low p, the covariance of selection was always reduced, but with increasing p the covariance approached and exceeded the exact covariance for all types of genic action except additive. With a positive shift and a low p, response to selection was greatly improved for three types of genic action: duplex dominance, partial duplex dominance, and recessive. The effect of a non-random error on population means was to greatly bias the means upwards for a low p and a positive shift, but with increasing p the bias decreased. A relatively slight decrease in the mean occurred with a negative shift. This study indicated that check varieties commonly used to monitor selection pressures in screening programs are very responsive to positive non-random shifts, but are relatively unresponsive to negative shifts. The interaction of selection pressure, types of genic action, and genotypes in the class shift models was suggested as a partial explanation for the lack of response to increasing selection pressures observed in some breeding programs. Cooperative investigations of the Alfalfa Production Research Unit, United States Department of Agriculture, Agricultural Research Service, and the Nevada Agricultural Experiment Station, Reno, Nevada. Paper No. 404, Scientific Journal Series, Nevada Agricultural Experiment Station.
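The random-error model is easy to re-create in simulation. The sketch below (Python; illustrative parameters, with equilibrium class frequencies approximated as Binomial(4, p)) mis-scores a fraction ε of plants uniformly over the five dosage classes and shows the response to selection shrinking by roughly the same factor for different types of genic action, as stated above:

```python
# Toy version of the random-error model: a fraction eps of plants is scored
# into one of the five autotetraploid classes (bbbb..BBBB) uniformly at
# random, independent of true genotype; truncation selection then acts on
# the observed class. Class frequencies Binomial(4, p) are a simplification.
import numpy as np

rng = np.random.default_rng(1)

def selection_response(genic_value, p=0.2, eps=0.1, n=200_000):
    true_class = rng.binomial(4, p, size=n)              # dosage of the B allele
    g = genic_value(true_class)                          # true genotypic value
    observed = true_class.copy()
    err = rng.random(n) < eps
    observed[err] = rng.integers(0, 5, size=err.sum())   # uniform mis-scoring
    selected = observed >= 2                             # truncation on observed class
    return g[selected].mean() - g.mean()                 # realized response

additive = lambda c: c.astype(float)
duplex_dominance = lambda c: (c >= 2).astype(float)      # resistant if >= 2 B alleles

for name, gv in [("additive", additive), ("duplex dominance", duplex_dominance)]:
    ratio = selection_response(gv, eps=0.1) / selection_response(gv, eps=0.0)
    print(f"{name:17s} response ratio (eps = 0.1 vs 0): {ratio:.3f}")
```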

4.
The possibility that any non-random conformation in reduced bovine pancreatic trypsin inhibitor (BPTI) and ribonuclease A might be significant for folding has been considered, using the experimental data available on forming the first disulphide bond in each. It is a thermodynamic necessity that whatever conformation stabilises a particular disulphide bond be stabilised to the same extent by the presence of the disulphide. The stabilising effects of disulphides are known approximately, so the stability of any non-random conformation found in a one-disulphide intermediate can be estimated in the absence of the disulphide bond. The non-random conformation in the BPTI intermediates is sufficiently labile to indicate that it would be expected to be present in no more than 5% of the reduced BPTI molecules. There is much less non-random conformation apparent in ribonuclease A. Whatever conformations are represented in the bulk of these two reduced proteins cannot favour disulphide formation and further productive folding.
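The thermodynamic necessity invoked above is a linked-equilibrium (thermodynamic cycle) argument; the notation below is ours, not the paper's. The conformational equilibrium measured with and without the disulphide, and the disulphide-formation equilibrium measured with and without the conformation, must shift by exactly the same factor:

```latex
% Linked thermodynamic cycle: a conformation that stabilises a disulphide
% bond must itself be stabilised by that disulphide by the same free energy.
\[
\frac{K_{\mathrm{conf}}^{\mathrm{ox}}}{K_{\mathrm{conf}}^{\mathrm{red}}}
  \;=\;
\frac{K_{\mathrm{SS}}^{\mathrm{conf}}}{K_{\mathrm{SS}}^{\mathrm{random}}}
\qquad\Longleftrightarrow\qquad
\Delta G_{\mathrm{conf}}^{\mathrm{ox}} - \Delta G_{\mathrm{conf}}^{\mathrm{red}}
  \;=\;
\Delta G_{\mathrm{SS}}^{\mathrm{conf}} - \Delta G_{\mathrm{SS}}^{\mathrm{random}}
\]
```

This is why the measured lability of the one-disulphide intermediates bounds how much of the non-random conformation can pre-exist in the reduced protein.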

5.
Statistical tests for non-random associations with components of habitat or different kinds of prey require information about the availability of sub-habitats or types of prey. The data are obtained from sampling (Stage 1 samples). Tests are then constructed using this information to predict what the occupancy of habitats or composition of diet will be under the null hypothesis of random association. Estimates of actual occupancy of habitats or composition of diet are then obtained from Stage 2 sampling, and tests are done to compare the observed data from Stage 2 with what was predicted from Stage 1. Estimates from each stage of sampling are subject to sampling error, particularly where small samples are involved. The errors involved in Stage 1 sampling are often ignored, resulting in biases in tests and excessive rejection of null hypotheses (i.e. non-random patterns are claimed when they are not present). Here, accurate tests are developed which take into account both types of error. For animals in patchy habitats, with two or more types of patch, the data from Stages 1 and 2 are used to derive maximum likelihood estimators for the proportions of area occupied by the sub-habitats and the proportions of animals in each sub-habitat. These are then used in χ² tests. For composition of diets, data are more complex, because the consumption of food of each type (on its own) must be estimated in separate experiments or sampling. So, Stage 1 sampling is more difficult and the maximum likelihood estimators described here are more complex. The accurate tests described here give much more realistic answers in that they properly control rates of Type I error, particularly with small samples. The effects of errors in Stage 1 sampling are, however, shown to be important, even for quite large samples. The tests can and should be used in any analyses of non-random association or preference among sub-habitats or types of prey.
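The excessive rejection caused by ignoring Stage 1 error is simple to reproduce by Monte Carlo. A minimal sketch (Python; two sub-habitats and deliberately small, hypothetical sample sizes) applies the naive χ² test, which treats the Stage 1 proportions as exact, to data simulated under the null:

```python
# Under the null (animals random w.r.t. habitat), the naive chi-square test
# that ignores Stage 1 sampling error rejects far more than 5% of the time.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p_true = np.array([0.3, 0.7])        # true sub-habitat availabilities
n1, n2 = 30, 40                      # Stage 1 / Stage 2 sample sizes (small)
crit = stats.chi2.ppf(0.95, df=1)    # nominal 5% critical value, 2 categories

reps, rejections, valid = 20_000, 0, 0
for _ in range(reps):
    s1 = rng.multinomial(n1, p_true)          # Stage 1: availability sample
    s2 = rng.multinomial(n2, p_true)          # Stage 2: animal locations (null true)
    if s1.min() == 0:                         # skip degenerate Stage 1 draws
        continue
    expected = n2 * s1 / n1                   # naive: Stage 1 proportions "known"
    stat = ((s2 - expected) ** 2 / expected).sum()
    valid += 1
    rejections += stat > crit

print(f"naive test Type I error at nominal 5%: {rejections / valid:.3f}")
```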

6.
  1. We have studied the development of the refractive state in young barn owls (Tyto alba pratincola). Strikingly, the eyes had severe refractive errors shortly after lid opening (which occurred around day 14 after hatching; average from 6 owls: 13.83 ± 1.47 days). Refractive errors vanished in the subsequent one or two weeks (Fig. 1, Fig. 2).
  2. Refractive errors did not differ by more than 1 diopter (D) between the two eyes of an individual (Fig. 2). Thus, non-visual control of eye growth was sufficient to produce non-random refractions. However, visual input was finally required to adjust the optical system to emmetropia.
  3. Using in-vivo A-scan ultrasonography of ocular dimensions (Fig. 4A), photokeratometric measurements of corneal radius of curvature (Fig. 4B), and frozen sections of excised eyes (Fig. 3), we developed paraxial schematic eye models which described age-dependent changes in ocular parameters and were applicable through the ages from lid opening to fledging (Table 1). A schematic eye for the adult barn owl (European subspecies: Tyto alba alba) is also provided. Eye sizes in an adult owl of the American (Tyto alba pratincola) and the European subspecies (T. alba alba) were similar despite different body weights (500 g and 350 g, respectively).
  4. The schematic eyes were used to test which ocular parameters might have caused the recovery from refractive errors. However, none of the ocular dimensions measured underwent obvious changes in their growth curves as visual input became available. Apparently, coordinated growth of several ocular components produced emmetropia.
  5. From the schematic eye model, the developmental changes in image brightness and image magnification were calculated (Fig. 5). In barn owl eyes, image size was not quite as extreme as in the tawny owl or the great horned owl. However, the image was larger and the f-number was lower than in diurnal birds of comparable weight (pigeon, chicken). The observation supports the conclusion that image size is maximised in owls to permit a higher degree of photoreceptor convergence for higher light sensitivity at dusk while spatial acuity remains comparable to diurnal birds with smaller eyes.

7.
Global positioning system (GPS) technologies collect unprecedented volumes of animal location data, providing ever greater insight into animal behaviour. Despite a certain degree of inherent imprecision and bias in GPS locations, little synthesis exists regarding the predominant causes of these errors, their implications for ecological analysis, or possible solutions. Terrestrial deployments report non-random data loss of up to 37 per cent and average location precision of 30 m or better, with canopy closure having the predominant effect, and animal behaviour interacting with local habitat conditions to affect errors in unpredictable ways. Home-range estimates appear generally robust to contemporary levels of location imprecision and bias, whereas movement paths and inferences of habitat selection may readily become misleading. There is a critical need for greater understanding of the additive or compounding effects of location imprecision, fix-rate bias and, in the case of resource selection, map error on ecological insights. Technological advances will help, but at present analysts have a suite of ad hoc statistical corrections and modelling approaches available: tools that vary greatly in analytical complexity and utility. The success of these solutions depends critically on understanding the error-inducing mechanisms, and the biggest gap in our current understanding involves species-specific behavioural effects on GPS performance.

8.
The non-random distribution of DNA breakage in pulsed-field gel electrophoresis (PFGE) experiments poses the problem of properly subtracting the background damage to obtain a fragment-size distribution due to radiation only. As has been pointed out by various authors, a naive bin-to-bin subtraction of the background signal will not result in the right DNA mass distribution histogram, and may even result in negative values. Previous, more systematic subtraction methods have been based mainly on random breakage, appropriate for low-LET radiation but problematic for high LET. Moreover, an investigation is needed into whether the background breakage itself is random or non-random. A new generalized formalism based on stochastic processes for the subtraction of the background damage in PFGE experiments, valid for any LET and any background, was previously proposed and is now applied to a set of PFGE data for Fe ions. We developed a Monte Carlo algorithm to compare the naive subtraction procedure on artificial data sets with the result produced by the new formalism. The simulated data corresponded to various cases, involving non-random (high-LET) or random radiation breakage and random or non-random background breakage. The formalism systematically gives better results than naive bin-by-bin subtraction in all these artificial data sets.
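Why bin-to-bin subtraction fails is visible even in a toy Monte Carlo with purely random breakage (Python; all parameters illustrative). Background and radiation breaks act on the same molecules, so the irradiated histogram is not the background histogram plus a radiation histogram, and the naive difference both distorts the distribution and goes negative:

```python
import numpy as np

rng = np.random.default_rng(2)

def fragment_sizes(genome_length, mean_breaks, n_molecules):
    """Random (Poisson) breakage of each molecule; returns all fragment sizes."""
    sizes = []
    for _ in range(n_molecules):
        cuts = np.sort(rng.uniform(0, genome_length, size=rng.poisson(mean_breaks)))
        edges = np.concatenate(([0.0], cuts, [genome_length]))
        sizes.extend(np.diff(edges))
    return np.array(sizes)

G, bins = 100.0, np.linspace(0, 100.0, 21)
bg_only = fragment_sizes(G, 0.5, 2000)            # background breakage alone
bg_plus_rad = fragment_sizes(G, 0.5 + 2.0, 2000)  # background + radiation together
rad_only = fragment_sizes(G, 2.0, 2000)           # ground truth: radiation alone

h_bg, _ = np.histogram(bg_only, bins=bins, weights=bg_only)          # DNA mass/bin
h_all, _ = np.histogram(bg_plus_rad, bins=bins, weights=bg_plus_rad)
h_rad, _ = np.histogram(rad_only, bins=bins, weights=rad_only)

naive = h_all - h_bg
print("negative bins after naive subtraction:", int((naive < 0).sum()))
print("max deviation from true radiation histogram:", float(np.abs(naive - h_rad).max()))
```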

9.
Aim
We sought to improve error detection ability during volume modulated arc therapy (VMAT) by dividing and evaluating the treatment plan.
Background
VMAT moves a beam source delivering radiation to tumor tissue through an arc, which significantly decreases treatment time. Treatment planning for VMAT involves many parameters, and quality assurance before treatment is a major focus of research.
Materials and methods
We used an established VMAT prostate treatment plan and divided it into 12 × 30° sections. For each section, image data in which an error was introduced into only that segment, together with the integrally acquired image data, were evaluated by gamma analysis. This was done with five different patient plans.
Results
For the integrated image data, the gamma analysis pass rate was 100% (tolerance 0.5 mm/0.5%) regardless of which section contained the error. Dividing the treatment plans shifted the mean gamma pass rates for errors in the cranial, left, and ventral directions to 94.59%, 98.83%, and 96.58%, respectively, and the discrimination ability improved.
Conclusion
Error discrimination ability was improved by dividing and verifying the portal imaging.
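Gamma analysis scores each reference point by its best combined dose-difference/distance-to-agreement match anywhere in the evaluated image. A minimal 1D sketch (Python; toy dose profiles, using the same 0.5 mm/0.5% tolerance quoted above; clinical tools apply the identical index to 2D portal images):

```python
import numpy as np

def gamma_index(x, dose_ref, dose_eval, dta=0.5, dd=0.005):
    """gamma(x_i) = min_j sqrt((|x_j-x_i|/DTA)^2 + ((D_eval_j - D_ref_i)/dd)^2)."""
    dd_abs = dd * dose_ref.max()                 # global dose-difference criterion
    gammas = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist2 = ((x - xi) / dta) ** 2
        dose2 = ((dose_eval - di) / dd_abs) ** 2
        gammas[i] = np.sqrt((dist2 + dose2).min())
    return gammas

x = np.linspace(-20, 20, 401)                     # position, mm
planned = np.exp(-x ** 2 / 150)                   # toy dose profile
measured = 1.002 * np.exp(-(x - 0.2) ** 2 / 150)  # small shift + output error

g = gamma_index(x, planned, measured)             # 0.5 mm / 0.5% tolerance
print(f"gamma pass rate: {100 * (g <= 1).mean():.2f}%")   # points with gamma <= 1
```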

10.
The dynamics of adhesion and growth of bacterial cells on biomaterial surfaces play an important role in the formation of biofilms. The surface properties of biomaterials have a major impact on cell adhesion processes, e.g. the random/non-cooperative adhesion of bacteria. In the present study, the spatial arrangement of Escherichia coli on different biomaterials is investigated in a time series during the first hours after exposure. The micrographs are analyzed via an image processing routine and the resulting point patterns are evaluated using second-order statistics. Two main adhesion mechanisms can be identified: random adhesion and non-random processes. Comparison with an appropriate null model quantifies the transition between the two processes with statistical significance. The fastest transition to non-random processes was found to occur after adhesion on PTFE for 2–3 h. Additionally, determination of cell and cluster parameters via image processing gives insight into surface-influenced differences in bacterial micro-colony formation.
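A standard second-order statistic for such point patterns is Ripley's K, which equals πr² under complete spatial randomness (CSR), so positive deviations of L(r) − r flag clustering. A minimal sketch (Python; synthetic coordinates, and no edge correction, so estimates are biased near the field border):

```python
import numpy as np

rng = np.random.default_rng(3)

def ripley_k(points, r_values, area):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                       # exclude self-pairs
    lam = len(points) / area                          # point intensity
    return np.array([(d < r).sum() / (len(points) * lam) for r in r_values])

side, n_cells = 100.0, 300                            # field of view (um), cell count
csr = rng.uniform(0, side, size=(n_cells, 2))         # random adhesion
centers = rng.uniform(0, side, size=(30, 2))          # clustered adhesion around seeds
clustered = centers[rng.integers(0, 30, n_cells)] + rng.normal(0, 2.0, (n_cells, 2))

r = np.linspace(1, 15, 15)
for name, pts in [("CSR", csr), ("clustered", clustered)]:
    k = ripley_k(pts, r, side * side)
    print(name, np.round(np.sqrt(k / np.pi) - r, 2))  # L(r)-r: ~0 under CSR, >0 if clustered
```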

11.

Background  

High content screening (HCS) is a powerful method for the exploration of cellular signalling and morphology that is rapidly being adopted in cancer research. HCS uses automated microscopy to collect images of cultured cells. The images are subjected to segmentation algorithms to identify cellular structures and quantitate their morphology, for hundreds to millions of individual cells. However, image analysis may be imperfect, especially for "HCS-unfriendly" cell lines whose morphology is not well handled by current image segmentation algorithms. We asked if segmentation errors were common for a clinically relevant cell line, if such errors had measurable effects on the data, and if HCS data could be improved by automated identification of well-segmented cells.

12.
Fluorescence lifetime imaging (FLIM) is widely applied to obtain quantitative information from fluorescence signals, particularly using Förster resonance energy transfer (FRET) measurements to map, for example, protein-protein interactions. Extracting FRET efficiencies or population fractions typically entails fitting data to complex fluorescence decay models but such experiments are frequently photon constrained, particularly for live cell or in vivo imaging, and this leads to unacceptable errors when analysing data on a pixel-wise basis. Lifetimes and population fractions may, however, be more robustly extracted using global analysis to simultaneously fit the fluorescence decay data of all pixels in an image or dataset to a multi-exponential model under the assumption that the lifetime components are invariant across the image (dataset). This approach is often considered to be prohibitively slow and/or computationally expensive but we present here a computationally efficient global analysis algorithm for the analysis of time-correlated single photon counting (TCSPC) or time-gated FLIM data based on variable projection. It makes efficient use of both computer processor and memory resources, requiring less than a minute to analyse time series and multiwell plate datasets with hundreds of FLIM images on standard personal computers. This lifetime analysis takes account of repetitive excitation, including fluorescence photons excited by earlier pulses contributing to the fit, and is able to accommodate time-varying backgrounds and instrument response functions. We demonstrate that this global approach allows us to readily fit time-resolved fluorescence data to complex models including a four-exponential model of a FRET system, for which the FRET efficiencies of the two species of a bi-exponential donor are linked, and polarisation-resolved lifetime data, where a fluorescence intensity and bi-exponential anisotropy decay model is applied to the analysis of live cell homo-FRET data. A software package implementing this algorithm, FLIMfit, is available under an open source licence through the Open Microscopy Environment.
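The heart of variable projection is that, for a fixed trial set of lifetimes, the amplitudes enter the model linearly and are solved exactly by least squares, so the nonlinear search runs only over the lifetimes, which are shared globally across pixels. A minimal sketch (Python; simulated decays for three "pixels", omitting the IRF convolution, repetitive-excitation correction and background handling that a full analysis such as FLIMfit also performs):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
t = np.linspace(0.05, 10, 256)                           # time, ns
fracs = np.array([0.2, 0.5, 0.8])                        # per-pixel fast fraction
clean = np.stack([f * np.exp(-t / 0.6) + (1 - f) * np.exp(-t / 3.2)
                  for f in fracs], axis=1)               # shape (time, pixel)
y = rng.poisson(500 * clean) / 500.0                     # photon-noise-like data

def projected_residual(log_tau):
    taus = np.exp(log_tau)                               # keep lifetimes positive
    basis = np.exp(-t[:, None] / taus[None, :])          # shared lifetime basis
    amps, *_ = np.linalg.lstsq(basis, y, rcond=None)     # per-pixel linear solve
    return 0.5 * ((y - basis @ amps) ** 2).sum()

res = minimize(projected_residual, x0=np.log([0.3, 5.0]), method="Nelder-Mead")
print("shared lifetimes (ns):", np.round(np.sort(np.exp(res.x)), 3))  # ~[0.6, 3.2]
```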

13.
Species dispersal studies provide valuable information in biological research. Restricted dispersal may give rise to a non-random distribution of genotypes in space. Detection of spatial genetic structure may therefore provide valuable insight into dispersal. Spatial structure has been treated via autocorrelation analysis with several univariate statistics, for which results can depend on the sampling design. New geostatistical approaches (variogram-based analysis) have been proposed to overcome this problem. However, modelling parametric variograms can be difficult in practice. We introduce a non-parametric variogram-based method for autocorrelation analysis between DNA samples that have been genotyped by means of multilocus-multiallele molecular markers. The method addresses two important aspects of fine-scale spatial genetic analyses: the identification of a non-random distribution of genotypes in space, and the estimation of the magnitude of any non-random structure. The method uses a plot of the squared Euclidean genetic distances vs. spatial distances between pairs of DNA samples as an empirical variogram. The underlying spatial trend in the plot is fitted by non-parametric smoothing (LOESS, local regression). Finally, the predicted LOESS values are explained by segmented regressions (SR) to obtain classical spatial values such as the extent of autocorrelation. For illustration we use multivariate and single-locus genetic distances calculated from a microsatellite data set for which autocorrelation was previously reported. The LOESS/SR method produced a good fit, giving values similar to the published autocorrelation for these data. The fit by LOESS/SR was simpler to obtain than the parametric analysis since initial parameter values are not required during the trend estimation process. The LOESS/SR method offers a new alternative for spatial analysis.
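The LOESS/SR pipeline is straightforward to prototype. A minimal sketch (Python; simulated coordinates and a toy single-variable "genotype" with a spatial cline rather than the cited microsatellite data, and a crude two-line breakpoint search standing in for the segmented regression):

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(5)
n = 60
xy = rng.uniform(0, 100, size=(n, 2))              # sample coordinates
g = xy[:, 0] / 100 + rng.normal(0, 0.3, size=n)    # toy genotype with spatial cline

i, j = np.triu_indices(n, k=1)
spatial = np.linalg.norm(xy[i] - xy[j], axis=1)    # pairwise spatial distances
genetic2 = (g[i] - g[j]) ** 2                      # squared genetic distances

trend = lowess(genetic2, spatial, frac=0.4)        # non-parametric variogram smooth
d, v = trend[:, 0], trend[:, 1]

def two_line_sse(b):
    """SSE of two straight lines fitted either side of breakpoint b."""
    total = 0.0
    for m in (d <= b, d > b):
        if m.sum() < 3:
            return np.inf
        coef = np.polyfit(d[m], v[m], 1)
        total += np.sum((v[m] - np.polyval(coef, d[m])) ** 2)
    return total

breakpoints = np.quantile(d, np.linspace(0.1, 0.9, 50))
best = min(breakpoints, key=two_line_sse)          # segmented-regression stand-in
print(f"estimated extent of spatial autocorrelation ~ {best:.1f} distance units")
```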

14.
Summary We introduce a nearly automatic procedure to locate and count the quantum dots in images of kinesin motor assays. Our procedure employs an approximate likelihood estimator based on a two-component mixture model for the image data; the first component has a normal distribution, and the other component is distributed as a normal random variable plus an exponential random variable. The normal component has an unknown variance, which we model as a function of the mean. We use B-splines to estimate the variance function during a training run on a suitable image, and the estimate is used to process subsequent images. Parameter estimates are generated for each image along with estimates of standard errors, and the number of dots in the image is determined using an information criterion and likelihood ratio tests. Realistic simulations show that our procedure is robust and that it leads to accurate estimates, both of parameters and of standard errors.
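The normal-plus-exponential component is an exponentially modified Gaussian, available in SciPy as exponnorm with shape K = 1/(σλ). A minimal maximum-likelihood fit of the two-component mixture (Python; simulated intensities with a constant variance, omitting the B-spline variance function and the information-criterion model selection):

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

rng = np.random.default_rng(6)
mu0, sigma, lam = 100.0, 5.0, 0.02                     # ground truth (arbitrary units)
n_bg, n_dot = 4500, 500
bg = rng.normal(mu0, sigma, n_bg)                      # background pixels
dots = rng.normal(mu0, sigma, n_dot) + rng.exponential(1 / lam, n_dot)
x = np.concatenate([bg, dots])

def nll(theta):
    m, log_s, log_l, logit_w = theta
    s, l = np.exp(log_s), np.exp(log_l)                # enforce positivity
    wgt = 1 / (1 + np.exp(-logit_w))                   # mixing weight in (0, 1)
    f0 = stats.norm.pdf(x, m, s)
    f1 = stats.exponnorm.pdf(x, K=1 / (s * l), loc=m, scale=s)
    return -np.sum(np.log((1 - wgt) * f0 + wgt * f1 + 1e-300))

res = minimize(nll, x0=[np.median(x), np.log(3.0), np.log(0.05), 0.0],
               method="Nelder-Mead", options={"maxiter": 5000})
m, s = res.x[0], np.exp(res.x[1])
l, w_hat = np.exp(res.x[2]), 1 / (1 + np.exp(-res.x[3]))
print(f"mu = {m:.1f}, sigma = {s:.2f}, lambda = {l:.3f}, weight = {w_hat:.3f}")
```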

15.

Background  

The standard genetic code is redundant and has a highly non-random structure. Codons for the same amino acids typically differ only by the nucleotide in the third position, whereas similar amino acids are encoded, mostly, by codon series that differ by a single base substitution in the third or the first position. As a result, the code is highly albeit not optimally robust to errors of translation, a property that has been interpreted either as a product of selection directed at the minimization of errors or as a non-adaptive by-product of evolution of the code driven by other forces.
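One way to make the robustness claim concrete is to count how often a single-nucleotide substitution is synonymous and compare the standard code against random codes. A minimal sketch (Python; the 64-letter assignment string is NCBI translation table 1, stop codons are treated as a 21st symbol, and the random codes simply shuffle assignments among codons, preserving degeneracy counts but not block structure):

```python
import random

BASES = "TCAG"
AAS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
codons = [a + b + c for a in BASES for b in BASES for c in BASES]
standard = dict(zip(codons, AAS))

def synonymous_fraction(code):
    """Fraction of the 576 single-base substitutions leaving the amino acid unchanged."""
    same = total = 0
    for codon, aa in code.items():
        for pos in range(3):
            for b in BASES:
                if b != codon[pos]:
                    same += code[codon[:pos] + b + codon[pos + 1:]] == aa
                    total += 1
    return same / total

random.seed(7)
scores = []
for _ in range(200):
    aas = list(AAS)
    random.shuffle(aas)                    # random code with the same degeneracies
    scores.append(synonymous_fraction(dict(zip(codons, aas))))

print(f"standard code: {synonymous_fraction(standard):.3f}")
print(f"random codes:  {sum(scores) / len(scores):.3f} (mean of 200)")
```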

16.
Abstract When mating is non-random among several compatible donors, the fitness of pollen donors, maternal plants, and offspring may be affected. Although this process may be important, it is much less studied than other forms of non-random mating such as incompatibility and avoidance of inbreeding. Therefore, the amount and consequences of non-random mating were investigated in greenhouse studies with wild radish, Raphanus sativus. Six compatible donors differed in the number, position, and weight of seeds sired, so mating was non-random at the level of mate identity. Mate number also affected mating patterns; fruits with more fathers were allocated more resources. This keeps mate number per fruit high. In contrast, other processes appear to keep mate number below the maximum, so that mate number per fruit is regulated at an intermediate level. Mate identity had clear consequences, as offspring with different fathers were of different sizes after 11 weeks. The effects of mate number on offspring success were less clear. These and other data suggest that non-random mating among compatible donors is a relatively common process in wild radish. It may occur through mechanisms controlled by the pollen tubes, the maternal plants or the embryos. While this non-random mating is the raw material for sexual selection in plants, whether sexual selection actually occurs and how important it may be is still unclear.

17.
The non-random distribution of DNA breakage in PFGE (pulsed-field gel electrophoresis) experiments poses the problem of properly subtracting the background DNA damage to obtain a fragment-size distribution due to radiation only. A naive bin-to-bin subtraction of the background signal will not result in the right DNA mass distribution histogram. This problem can become more pronounced for high-LET (linear energy transfer) radiation, because the fragment-size distribution manifests a higher frequency of smaller fragments. Previous systematic subtraction methods have been based on random breakage, appropriate for low-LET radiation. Moreover, an investigation is needed to determine whether the background breakage is itself random or non-random. We consider two limiting cases: (1) the background damage is present in all cells, and (2) it is present in only a small subset of cells, while the other cells do not contribute to the background DNA fragmentation. We give a generalized formalism based on stochastic processes for the subtraction of the background damage in PFGE experiments for any LET and apply it to two sets of PFGE data for iron ions.

18.
Summary The sources of errors which may occur when cytophotometric analysis is performed with video microscopy using a charge-coupled device (CCD) camera and image analysis are reviewed. The importance of these errors in practice has been tested, and ways of minimizing or avoiding them are described. Many of these sources of error are known from scanning and integrating cytophotometry; they include the use of white instead of monochromatic light, the distribution error, glare, diffraction, shading distortion, and inadequate depth of field. Sources of error specifically linked with video microscopy or image analysis are highlighted as well; these include blooming, limited dynamic range of grey levels, non-linear responses of the camera, contrast transfer, photon noise, dark current, read-out noise, fixed scene noise and spatial calibration. Glare, contrast transfer, fixed scene noise, depth of field and spatial calibration seem to be the most serious sources of error when measurements are not carried out correctly. We include a table summarizing all the errors discussed in this review and procedures for avoiding them. It can be concluded that, if accurate calibration steps are performed and proper guidelines followed, image cytometry can be applied safely for quantifying amounts of chromophore per cell or per unit volume of tissue in sections, even when relatively simple and inexpensive instrumentation is used.

19.
Proposed standard for image cytometry data files
P Dean, L Mascio, D Ow, D Sudar, J Mullikin. Cytometry 1990, 11(5): 561-569
A number of different types of computers running a variety of operating systems are presently used for the collection and analysis of image cytometry data. In order to facilitate the development of sharable data analysis programs, to allow for the transport of image cytometry data from one installation to another, and to provide a uniform and controlled means for including textual information in data files, this document describes a data storage format that is proposed as a standard for use in image cytometry. In this standard, data from an image measurement are stored in a minimum of two files. One file is written in ASCII to include information about the way the image data are written and optionally, information about the sample, experiment, equipment, etc. The image data are written separately into a binary file. This standard is proposed with the intention that it will be used internationally for the storage and handling of biomedical image cytometry data. The method of data storage described in this paper is similar to those methods published in American Association of Physicists in Medicine (AAPM) Report Number 10 and in ACR-NEMA Standards Publication Number 300-1985.
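The two-file layout is easy to illustrate. A minimal sketch (Python; the keyword names below are hypothetical placeholders, not the keywords defined by the actual proposal):

```python
import numpy as np

image = (np.random.default_rng(8).random((256, 256)) * 4095).astype(np.uint16)

header = {                                   # hypothetical keyword names
    "FILE_TYPE": "IMAGE_CYTOMETRY",
    "DATA_FILE": "cells_001.img",
    "WIDTH": image.shape[1],
    "HEIGHT": image.shape[0],
    "BITS_PER_PIXEL": 16,
    "BYTE_ORDER": "little_endian",
    "SAMPLE": "rat liver, Feulgen stain",
}

with open("cells_001.hdr", "w", encoding="ascii") as f:   # ASCII header file
    for key, value in header.items():
        f.write(f"{key}={value}\n")

image.astype("<u2").tofile(header["DATA_FILE"])           # separate binary image file

# Any reader can now reconstruct the image from the header alone:
meta = dict(line.strip().split("=", 1) for line in open("cells_001.hdr"))
restored = np.fromfile(meta["DATA_FILE"], dtype="<u2").reshape(
    int(meta["HEIGHT"]), int(meta["WIDTH"]))
assert (restored == image).all()
```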

20.
A theory is developed for determining the motion of an observer given the motion field over a full 360 degree image sphere. The method is based on the fact that for an observer translating without rotation, the projected circular motion field about any equator can be divided into disjoint semicircles of clockwise and counterclockwise flow, and on the observation that the effects of rotation decouple around the three equators defining the three principal axes of rotation. Since the effect of rotation is geometrical, the three rotational parameters can be determined independently by searching, in each case, for a rotational value for which the derotated equatorial motion field can be partitioned into 180 degree arcs of clockwise and counterclockwise flow. The direction of translation is also obtained from this analysis. This search is two-dimensional in the motion parameters, and can be performed relatively efficiently. Because information is correlated over large distances, the method can be considered a pattern recognition rather than a numerical algorithm. The algorithm is shown to be robust and relatively insensitive to noise and to missing data. Both theoretical and empirical studies of the error sensitivity are presented. The theoretical analysis shows that for white noise of bounded magnitude M, the expected error is at worst linearly proportional to M. Empirical tests demonstrate negligible error for perturbations of up to 20% in the input, and errors of less than 20% for perturbations of up to 200%.
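The single-equator core of the method can be sketched directly: tangential flow from pure translation is proportional to sin(θ − θ_t), giving one positive and one negative semicircle, while rotation about the equator's axis adds a constant. A toy search over derotation values and template azimuths (Python; one equator with synthetic flow, whereas the full method repeats this around the three principal axes):

```python
import numpy as np

rng = np.random.default_rng(9)
n = 360
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
theta_t, omega_true = 1.2, 0.35                # translation azimuth, rotation rate
flow = np.sin(theta - theta_t) + omega_true + rng.normal(0, 0.05, n)

# Sign templates of a pure-translation field for every candidate azimuth:
templates = np.sign(np.sin(theta[None, :] - theta[:, None]))   # (azimuth, point)

w_grid = np.linspace(-1, 1, 201)                               # candidate derotations
signs = np.sign(flow[None, :] - w_grid[:, None])               # (derotation, point)
agreement = signs @ templates.T                                # pattern-match score

iw, ia = np.unravel_index(np.argmax(agreement), agreement.shape)
print(f"rotation: {w_grid[iw]:.2f} (true {omega_true})")
print(f"translation azimuth: {theta[ia]:.2f} rad (true {theta_t})")
```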
