Similar Articles
Found 20 similar articles.
1.
Bosco G  Campbell P  Leiva-Neto JT  Markow TA 《Genetics》2007,177(3):1277-1290
The size of eukaryotic genomes can vary by several orders of magnitude, yet genome size does not correlate with the number of genes nor with the size or complexity of the organism. Although "whole"-genome sequences, such as those now available for 12 Drosophila species, provide information about euchromatic DNA content, they cannot give an accurate estimate of genome sizes that include heterochromatin or repetitive DNA content. Moreover, genome sequences typically represent only one strain or isolate of a single species that does not reflect intraspecies variation. To more accurately estimate whole-genome DNA content and compare these estimates to newly assembled genomes, we used flow cytometry to measure the 2C genome values, relative to Drosophila melanogaster. We estimated genome sizes for the 12 sequenced Drosophila species as well as 91 different strains of 38 species of Drosophilidae. Significant differences in intra- and interspecific 2C genome values exist within the Drosophilidae. Furthermore, by measuring polyploid 16C ovarian follicle cell underreplication we estimated the amount of satellite DNA in each of these species. We found a strong correlation between genome size and amount of satellite underreplication. Addition and loss of heterochromatin satellite repeat elements appear to have made major contributions to the large differences in genome size observed in the Drosophilidae.
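As an aside on the method: co-stained flow-cytometry estimates of this kind rest on simple proportionality between DNA-bound dye fluorescence and DNA content. A minimal sketch, where the function name and the reference 2C value are illustrative assumptions rather than figures from the study:

```python
# Hypothetical sketch: a sample's 2C genome size is estimated from its mean
# fluorescence relative to a co-run D. melanogaster reference. The reference
# value below is illustrative, not the study's calibration.
DMEL_2C_MBP = 350.0  # assumed reference 2C value in Mbp

def genome_size_2c(sample_fluor, reference_fluor, reference_2c_mbp=DMEL_2C_MBP):
    """Fluorescence of stoichiometrically DNA-bound dye scales with DNA content."""
    return reference_2c_mbp * sample_fluor / reference_fluor

# Nuclei fluorescing 1.4x the reference imply a genome ~1.4x the reference size.
print(round(genome_size_2c(1.4, 1.0), 1))  # → 490.0 Mbp
```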

2.
E A Liberman 《Biofizika》1975,20(3):432-436
Living organisms measure many parameters in order to orient themselves in the outer medium. Biophysics therefore cannot use the ordinary laws of physics alone: it must take into account the influence on the phenomena under study not only of measurement but also of the calculation process in a real physical or biophysical device predicting the future. A science that takes the effects of the calculating process into account, realistic or informational (RI) physics, has different laws for different times, distances and numbers of measured and predicted parameters. RI physics deals with unreproducible events and considers only those time intervals and distances for which a prediction can be made on the basis of earlier measurements and calculations, according to laws of optimal difficulty. It is suggested that the living cell uses laws close to these optimal (limiting) laws of RI physics. Ordinary physics and quantum mechanics can be considered a limiting case of RI physics in which distances and times are large enough, and the number of simultaneously measured independent parameters is such, that the heat effect of the calculating device becomes negligible. The molecular cell computer (MCC) [1] cannot calculate the interaction of a great quantity of different molecules using the equations of quantum mechanics, because the expense (the "price of action") would be very large and both the MCC and the surrounding world would change.

3.
Features such as mutations or structural characteristics can be non-randomly or non-uniformly distributed within a genome. Until now, computer simulations were required for statistical inference on the distribution of sequence motifs. Here, we show that these analyses are possible using an analytical, mathematical approach. For the assessment of non-randomness, our calculations require only the genome size, the number of (sampled) sequence motifs and distance parameters. We have developed computer programs evaluating our analytical formulas for the real-time determination of expected values and p-values. This approach permits a flexible cluster definition that can be applied to identify non-random or non-uniform sequence motif distributions most effectively. As an example, we show the effectiveness and reliability of our mathematical approach on the distribution of retroviral vector integration sites in clinical data.
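To make the uniform-null reasoning concrete, here is a hedged sketch, not the authors' actual formulas: under uniform placement, the motif count in a fixed window is binomial, and a genome-wide expectation follows by scanning window start positions. All names and the window model itself are illustrative.

```python
# Illustrative only -- not the paper's analytical formulas. Under a uniform
# null model, each of n motifs lands in a fixed window of length w (genome
# length g) independently with probability w/g, so the window count is binomial.
from math import comb

def window_tail_prob(n, g, w, k):
    """P(at least k of n uniformly placed motifs fall in one fixed window)."""
    p = w / g
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def expected_clusters(n, g, w, k):
    """Expected number of (overlapping) window start positions with >= k motifs."""
    return (g - w + 1) * window_tail_prob(n, g, w, k)
```

Comparing such an expectation against an observed cluster count is what turns "the motifs look clumped" into a p-value.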

4.
Humans use various cues to understand the structure of the world from images. One such cue is the contours of an object formed by occlusion or by surface discontinuities. Contours in the image of an object are known to provide varying amounts of information about the shape of the object in view, depending on the assumptions the observer makes. Another powerful cue is motion. The ability of the human visual system to discern structure from a motion stimulus is well known and has a solid theoretical and experimental foundation. However, when humans interpret a visual scene they use various cues to understand what they observe, and the interpretation comes from combining the information acquired from the various modules devoted to specific cues. In such an integration of modules, each cue appears to carry a different weight and importance. We performed several experiments in which we ensured that the only cues available to the observer were contour and motion. It turns out that when humans combine information from contour and motion to reconstruct the shape of an object in view, and the results of the two modules (shape from contour and structure from motion) are inconsistent, they experience a percept that combines the two modules, with the influence of the contour dominating; this gives rise to the illusion. We describe examples of such illusions and identify the conditions under which they occur. Finally, we introduce a computational theory for combining contour and motion using the theory of regularization. The theory explains such illusions and predicts many more.

5.
Transposable element contributions to plant gene and genome evolution
Transposable elements were first discovered in plants because they can have tremendous effects on genome structure and gene function. Although only a few or no elements may be active within a genome at any time in any individual, the genomic alterations they cause can have major outcomes for a species. All major element types appear to be present in all plant species, but their quantitative and qualitative contributions are enormously variable even between closely related lineages. In some large-genome plants, mobile DNAs make up the majority of the nuclear genome. They can rearrange genomes and alter individual gene structure and regulation through any of the activities they promote: transposition, insertion, excision, chromosome breakage, and ectopic recombination. Many genes may have been assembled or amplified through the action of transposable elements, and it is likely that most plant genes contain legacies of multiple transposable element insertions into promoters. Because chromosomal rearrangements can lead to speciating infertility in heterozygous progeny, transposable elements may be responsible for the rate at which such incompatibility is generated in separated populations. For these reasons, understanding plant gene and genome evolution is only possible if we comprehend the contributions of transposable elements.

6.
On the detection of salient contours
Braun J 《Spatial Vision》1999,12(2):211-225
When visual space is densely populated by elements of random orientation, elements which happen to be aligned may form a perceptually salient 'contour' (Field et al., 1993; Kovacs and Julesz, 1993). Here we further characterize human performance for detecting this type of Gestalt grouping. We find that detectability of salient contours reaches a plateau when they comprise at least 10 elements and are presented for at least 200 ms. It has been suggested that the detection of salient contours is mediated by the intrinsic connectivity of striate cortex and several computational models have been formulated on this basis. The present data provide a benchmark against which such models can be evaluated.

7.
S. P. Otto  V. Walbot 《Genetics》1990,124(2):429-437
We present a model for the kinetics of methylation and demethylation of eukaryotic DNA; the model incorporates values for de novo methylation and the error rate of maintenance methylation. From the equations, an equilibrium is reached such that the proportion of sites which are newly methylated equals the proportion of sites which become demethylated in a cell generation. This equilibrium is empirically determined as the level of maintenance methylation. We then chose reasonable values for the parameters using maize and mice as model species. In general, if the genome is either hypermethylated or hypomethylated it will approach the equilibrium level of maintenance methylation asymptotically over time; events occurring just once per life cycle to suppress methylation can maintain a relatively hypomethylated state. Although the equations developed are used here as a framework for evaluating events in the whole genome, they can also be used to evaluate the rates of methylation and demethylation in specific sites over time.
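The approach-to-equilibrium behaviour can be seen in a minimal discrete-generation sketch of this kind of model. The rates below are illustrative, not the maize or mouse estimates, and the recurrence is a deliberately simplified two-parameter version:

```python
# Simplified sketch (illustrative rates): unmethylated sites gain methylation
# de novo at rate mu per generation; methylated sites lose it through
# maintenance failure at rate eps. The methylated fraction m then obeys
#   m' = m * (1 - eps) + (1 - m) * mu,
# which approaches the equilibrium mu / (mu + eps) from either direction.

def iterate_methylation(m0, mu, eps, generations):
    m = m0
    for _ in range(generations):
        m = m * (1 - eps) + (1 - m) * mu
    return m

mu, eps = 0.05, 0.01
equilibrium = mu / (mu + eps)
hyper = iterate_methylation(0.99, mu, eps, 200)  # hypermethylated start
hypo = iterate_methylation(0.01, mu, eps, 200)   # hypomethylated start
print(equilibrium, hyper, hypo)  # both trajectories converge toward ~0.833
```

The convergence is geometric with ratio (1 - mu - eps) per generation, which is why the abstract describes the approach as asymptotic.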

8.
Steady-state levels of HIV-1 viraemia in the plasma vary more than 1,000-fold between HIV-positive patients and are thought to be influenced by several different host and viral factors such as host target cell availability, host anti-HIV immune response and the virulence of the virus. Previous mathematical models have taken the form of classical ecological food-chain models and are unable to account for this multifactorial nature of the disease. These models suggest that the steady-state viral load (i.e. the set-point) is determined by immune response parameters only. We have devised a generalized consensus model in which the conventional parameters are replaced by so-called 'process functions'. This very general approach yields results that are insensitive to the precise form of the mathematical model. Here we applied the approach to HIV-1 infections by estimating the steady-state values of several process functions from published patient data. Importantly, these estimates are generic because they are independent of the precise form of the underlying processes. We recorded the variation in the estimated steady-state values of the process functions in a group of HIV-1 patients. We developed a novel model by providing explicit expressions for the process functions having the highest patient-to-patient variation in their estimated values. Small variations from patient to patient for several parameters of the new model collectively accounted for the large variations observed in the steady-state viral burden. The novel model remains in full agreement with previous models and data.
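For context, the "classical ecological food-chain" form that the consensus model generalizes is the standard target-cell/infected-cell/virus system. A rough Euler sketch with purely illustrative parameters (not the paper's process-function estimates from patient data):

```python
# Standard three-compartment viral dynamics model (illustrative parameters):
# target cells T, infected cells I, free virus V, crude Euler integration.
#   dT/dt = s - d*T - beta*T*V
#   dI/dt = beta*T*V - delta*I
#   dV/dt = p*I - c*V
def run(days=300.0, dt=0.01):
    s, d = 10.0, 0.01          # target-cell production and death
    beta = 5e-5                # infection rate
    delta = 0.5                # infected-cell death
    p, c = 100.0, 5.0          # virion production and clearance
    T, I, V = 1000.0, 0.0, 1e-3
    vmax, t = V, 0.0
    while t < days:
        dT = s - d * T - beta * T * V
        dI = beta * T * V - delta * I
        dV = p * I - c * V
        T += dt * dT
        I += dt * dI
        V += dt * dV
        vmax = max(vmax, V)
        t += dt
    return T, I, V, vmax

# With these numbers the basic reproductive number is
# R0 = beta * p * (s/d) / (delta * c) = 2, so the infection takes off.
T, I, V, vmax = run()
```

Varying the parameters slightly between runs shifts the set point, which echoes the abstract's point that modest per-patient variation in several processes can combine into large variation in steady-state viral burden.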

9.
On the Organization of Higher Chromosomes
Ohta and Kimura [1] have argued that only about 6% of the sequences in mammalian DNA can be under the intense selection that has characterized the evolutionary history of the cytochromes c, the globin chains and the histones. From the calculated mutation rate of fibrinopeptides A and B they show that, if all genes are subjected to the same mutation rate, 8.3 mutations would accumulate per genome per generation. Because 0.5 deleterious mutations per genome per generation is the maximum allowable in an equilibrium population [2], they conclude that the amount of DNA that codes for informational sequences such as the cytochromes, globins and histones must be no more than 0.5/8.3, or 6%. We are therefore left with the interesting observation that 94% of mammalian nuclear DNA serves a function not under strong selection. These authors make several assumptions, one of which is that the spontaneous mutation rate characteristic of a species is constant over all nucleotide sequences. I suggest here that this assumption is incorrect, for a variety of reasons, and that by assuming that spontaneous mutation rates vary sequence by sequence, one can arrive at a plausible organizing principle for the structure of higher chromosomes.
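The 6% figure quoted above is direct arithmetic, reproduced here as a check:

```python
# The arithmetic behind the 6% figure: 8.3 new mutations per genome per
# generation, of which at most 0.5 deleterious ones are tolerable at
# equilibrium, bounds the fraction of the genome under strong selection.
mutations_per_genome = 8.3
max_deleterious = 0.5
fraction_under_selection = max_deleterious / mutations_per_genome
print(round(fraction_under_selection * 100, 1))  # → 6.0 (percent)
```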

10.
We attempted to answer the following question: What evolutionary conditions are required to generate novel genetic modules? Our broad formulation of the problem allows us to simultaneously consider such issues as the relationship between the stage of "genetic search" and the rate of adaptive evolution; the theoretical limits to the generative capacities of spontaneous mutagenesis; and the correlation between genome organization and evolvability. We show that adaptive evolution is feasible only when the mutation rate is fine-tuned to a specific range of values and the structures of the genome and genes are optimized in a certain way. Our quantitative analysis has demonstrated that the rate of evolution of novelty depends on several parameters, such as genome size, the length of a module, the size of the adjacent nonfunctional DNA spacers, and the mutation rate at various genomic scales. We evaluated the efficiency of some mechanisms that increase evolvability: bias in the spectrum of mutation rates towards small mutations, and the availability and size of nonfunctional DNA spacers. We show that the probability of successful duplication and insertion of a copy of a functional module increases by several orders of magnitude depending on the length of the spacers flanking the module. We infer that the adaptive evolution of multicellular organisms has become feasible because of the abundance of nonfunctional DNA spacers, particularly introns, in the genome. We also discuss possible reasons underlying evolutionary retention of the mechanisms that increase evolvability.
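One effect described above can be illustrated with back-of-the-envelope arithmetic; the numbers are purely illustrative, not the paper's quantitative analysis:

```python
# Back-of-the-envelope sketch (illustrative numbers): if a duplicated copy
# inserts uniformly at random, the chance it lands harmlessly in nonfunctional
# spacer DNA -- rather than disrupting a functional module -- grows with the
# spacer fraction of the genome.
def p_harmless_insertion(spacer_bp, functional_bp):
    return spacer_bp / (spacer_bp + functional_bp)

# A genome that is 90% spacer tolerates most random insertions.
print(p_harmless_insertion(9_000_000, 1_000_000))  # → 0.9
```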

11.
We construct several score functions for use in locating unusually conserved regions in a genomewide search of aligned DNA from two species. We test these functions on regions of the human genome aligned to the mouse genome. These score functions are derived from properties of neutrally evolving sites on the mouse and human genome and can be adjusted to the local background rate of conservation. The aim of these functions is to try to identify regions of the human genome that are conserved by evolutionary selection because they have an important function, rather than by chance. We use them to get a very rough estimate of the amount of DNA in the human genome that is under selection.
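A hedged sketch of the general shape such score functions can take. This is a generic log-odds score adjusted to a local background identity rate, not the paper's actual functions, and the fixed "conserved" identity rate is an assumption:

```python
# Generic log-odds conservation score (illustrative, not the paper's scores):
# compare the observed identity count in an aligned window against the local
# neutral background rate, so unusually conserved windows score high.
from math import log

def window_score(matches, length, p_neutral):
    """Log-likelihood ratio of a 'conserved' state (crudely fixed at 0.95
    identity here) versus the local neutral identity rate p_neutral."""
    p_cons = 0.95  # assumed identity rate under selection (illustrative)
    mismatches = length - matches
    return (matches * log(p_cons / p_neutral)
            + mismatches * log((1 - p_cons) / (1 - p_neutral)))
```

Because p_neutral enters the score directly, the same window is judged against a stricter standard in regions where the background conservation is already high, which is the "adjusted to the local background rate" idea.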

12.
The genome is packed into the cell nucleus in the form of chromatin. Biochemical approaches have revealed that chromatin is packed within domains, which group into larger domains, and so forth. Such hierarchical packing is equally visible in super-resolution microscopy images of large-scale chromatin organization. While previous work has suggested that chromatin is partitioned into distinct domains via microphase separation, it is unclear how these domains organize into this hierarchical packing. A particular challenge is to find an image analysis approach that fully incorporates such hierarchical packing, so that hypothetical governing mechanisms of euchromatin packing can be compared against the results of such an analysis. Here, we obtain 3D STED super-resolution images from pluripotent zebrafish embryos labeled with improved DNA fluorescence stains, and demonstrate how the hierarchical packing of euchromatin in these images can be described as multiplicative cascades. Multiplicative cascades are an established theoretical concept to describe the placement of ever-smaller structures within bigger structures. Importantly, these cascades can generate artificial image data by applying a single rule again and again, and can be fully specified using only four parameters. Here, we show how the typical patterns of euchromatin organization are reflected in the values of these four parameters. Specifically, we can pinpoint the values required to mimic a microphase-separated state of euchromatin. We suggest that the concept of multiplicative cascades can also be applied to images of other types of chromatin. Here, cascade parameters could serve as test quantities to assess whether microphase separation or other theoretical models accurately reproduce the hierarchical packing of chromatin.
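The cascade idea can be illustrated in one dimension. The paper's cascades are three-dimensional and use four parameters; this sketch has a single split fraction p and is purely illustrative:

```python
# Illustrative 1-D multiplicative cascade: at each level, every interval's
# mass is split unevenly between its two halves, producing hierarchical
# structure-within-structure while conserving total mass.
def cascade(mass, p, levels):
    if levels == 0:
        return [mass]
    return (cascade(mass * p, p, levels - 1)
            + cascade(mass * (1 - p), p, levels - 1))

bins = cascade(1.0, p=0.7, levels=8)
print(len(bins), round(sum(bins), 6))  # 256 bins; total mass conserved → 1.0
```

Applying the single split rule again and again is what makes the model so compact: the entire hierarchy is generated from a handful of parameters, which can then be fitted to images and compared between hypotheses.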

13.
Nishihara H  Kuno S  Nikaido M  Okada N 《Gene》2007,400(1-2):98-103
Recent rapid generation of genomic sequence data has allowed many researchers to perform comparative analyses in various mammalian species. However, characterization of transposable elements, such as short interspersed repetitive elements (SINEs), has not been reported for several mammalian groups. Because SINEs occupy a large portion of the mammalian genome, they are believed to have contributed to the constitution and diversification of the host genomes during evolution. In the present study, we characterized a novel SINE family in the anteater genomes and designated it the MyrSINE family. Typical SINEs consist of a tRNA-related, a tRNA-unrelated and an AT-rich (or poly-A) region. MyrSINEs have only tRNA-related and poly-A regions; they are included in a group called t-SINE. The tRNA-related regions of the MyrSINEs were found to be derived from tRNAGly. We demonstrate that the MyrSINE family can be classified into three subfamilies. Two of the MyrSINE subfamilies are distributed in the genomes of both giant anteater and tamandua, while the other is present only in the giant anteater. We discuss the evolutionary history of MyrSINEs and their relationship to the evolution of anteaters. We also speculate that the simple structure of t-SINEs may be a potential evolutionary source for the generation of the typical SINE structure.

14.
15.
I J Wilson  D J Balding 《Genetics》1998,150(1):499-510
Ease and accuracy of typing, together with high levels of polymorphism and widespread distribution in the genome, make microsatellite (or short tandem repeat) loci an attractive potential source of information about both population histories and evolutionary processes. However, microsatellite data are difficult to interpret, in particular because of the frequency of back-mutations. Stochastic models for the underlying genetic processes can be specified, but in the past they have been too complicated for direct analysis. Recent developments in stochastic simulation methodology now allow direct inference about both historical events, such as genealogical coalescence times, and evolutionary parameters, such as mutation rates. A feature of the Markov chain Monte Carlo (MCMC) algorithm that we propose here is that the likelihood computations are simplified by treating the (unknown) ancestral allelic states as auxiliary parameters. We illustrate the algorithm by analyzing microsatellite samples simulated under the model. Our results suggest that a single microsatellite usually does not provide enough information for useful inferences, but that several completely linked microsatellites can be informative about some aspects of genealogical history and evolutionary processes. We also reanalyze data from a previously published human Y chromosome microsatellite study, finding evidence for an effective population size for human Y chromosomes in the low thousands and a recent time since their most recent common ancestor: the 95% interval runs from approximately 15,000 to 130,000 years, with most likely values around 30,000 years.
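The stepwise mutation process that makes microsatellites hard to interpret (a back-mutation returns an allele to a previous state, erasing history) can be sketched as a toy simulation. Rates and counts are illustrative, and the paper's MCMC inference machinery is omitted entirely:

```python
# Toy stepwise mutation model for a single microsatellite locus (illustrative
# rates): each generation the allele gains or loses one repeat unit with
# probability mu. Back-mutations mean different lineages can converge on the
# same repeat count, which is why allele identity does not imply shared history.
import random

def evolve_allele(repeat_count, mu, generations, rng):
    for _ in range(generations):
        if rng.random() < mu:
            repeat_count += rng.choice((-1, 1))
    return repeat_count

rng = random.Random(0)  # fixed seed for reproducibility
descendants = [evolve_allele(20, mu=1e-3, generations=1000, rng=rng)
               for _ in range(5)]
print(descendants)  # alleles drift a few repeats around the ancestral 20
```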

16.
Recently, several two-dimensional spiking neuron models have been introduced, with the aim of reproducing the diversity of electrophysiological features displayed by real neurons while keeping a simple model, for simulation and analysis purposes. Among these models, the adaptive integrate-and-fire model is physiologically relevant in that its parameters can be easily related to physiological quantities. The interaction of the differential equations with the reset results in a rich and complex dynamical structure. We relate the subthreshold features of the model to the dynamical properties of the differential system and the spike patterns to the properties of a Poincaré map defined by the sequence of spikes. We find a complex bifurcation structure which has a direct interpretation in terms of spike trains. For some parameter values, spike patterns are chaotic.
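A minimal Euler sketch of a two-variable adaptive integrate-and-fire neuron of the kind analyzed above. All parameter values are illustrative assumptions and the subthreshold dynamics are deliberately simplified; the point is the structure: two coupled variables plus a reset map, whose iterates form the Poincaré map mentioned in the abstract:

```python
# Simplified adaptive integrate-and-fire neuron (dimensionless, illustrative
# parameters): membrane variable v, slow adaptation variable w, hard spike
# threshold, and the reset map v -> v_reset, w -> w + d applied at each spike.
def simulate(I, T=500.0, dt=0.1):
    a, b, d = 0.02, 0.2, 2.0      # adaptation parameters (illustrative)
    v_th, v_reset = 1.0, 0.0      # threshold and reset value
    v, w = 0.0, 0.0
    spikes = []
    t = 0.0
    while t < T:
        v += dt * (-v + I - w)    # leaky integration opposed by adaptation
        w += dt * a * (b * v - w) # adaptation tracks v slowly
        if v >= v_th:             # spike-and-reset
            spikes.append(t)
            v, w = v_reset, w + d
        t += dt
    return spikes

print(len(simulate(1.5)), len(simulate(0.5)))  # stronger drive spikes; weak drive stays subthreshold
```

Mapping one spike time (and the state at reset) to the next is exactly the kind of Poincaré map whose properties the paper analyzes; richer parameter regimes of such maps can produce the bursting and chaotic patterns the abstract describes.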

17.
Sokal RR  Wartenberg DE 《Genetics》1983,105(1):219-237
Using the isolation-by-distance model as an example, we have examined several assumptions of spatial autocorrelation analysis applied to gene frequency surfaces. Gene frequency surfaces generated by a simulation of Wright's isolation-by-distance model were shown to exhibit spatial autocorrelation, except in the panmictic case. Identical stochastic generating processes result in surfaces with characteristics that are functions of the process parameters, such as parental vagility and neighborhood size. Differences in these parameters are detectable as differences in spatial autocorrelations after only a few generations of the simulations. Separate realizations of processes with identical parameters yield similar spatial correlograms. We have examined the inferences about population structure that could have been made from these observations if they had been real, rather than simulated, populations. From such inferences, we could have drawn conclusions about the presence of selection, migration and drift in given natural systems.
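Spatial autocorrelation of a surface is commonly summarized with Moran's I. A toy one-dimensional sketch (the choice of statistic and the lattice are illustrative; the paper's gene-frequency surfaces are two-dimensional, but the computation generalizes directly):

```python
# Moran's I on a toy 1-D lattice with unit nearest-neighbour weights.
# Positive values indicate that neighbouring sites have similar values,
# as on a smoothly varying gene-frequency surface.
def morans_i(values, neighbours):
    """neighbours: list of (i, j) index pairs, each with weight 1."""
    n = len(values)
    mean = sum(values) / n
    dev = [v - mean for v in values]
    num = sum(dev[i] * dev[j] for i, j in neighbours)
    den = sum(e * e for e in dev)
    w_sum = len(neighbours)
    return (n / w_sum) * (num / den)

smooth = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]           # smooth cline
pairs = ([(i, i + 1) for i in range(5)]
         + [(i + 1, i) for i in range(5)])        # symmetric neighbour pairs
print(round(morans_i(smooth, pairs), 2))  # → 0.6 (positive autocorrelation)
```

Computing such statistics at increasing lag distances yields the spatial correlograms whose shapes, per the abstract, reflect process parameters like vagility and neighborhood size.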

18.
19.
20.
How does a developing embryo become "informed" about its form? This problem remains obscure and controversial. We argue that the "information about a form" is distributed across three main components: the dynamic laws, the parameters and the initial/boundary conditions. In the absence of a dynamic law the two other components are "blind", that is, they do not contain any unambiguous information. We present a version of a dynamic law of morphogenesis, based upon the presumption of a feedback between passive and active mechanical stresses. We explore several models of shape formation based upon this law and show that, depending upon the parameter values, they generate a large set of realistic shapes. The genetic and epigenetic basis of the model parameters is discussed.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号