Similar Documents
1.
The Bible, without which Western civilization is inexplicable, has powerful ecological teachings that support an ecological worldview. While these teachings are not widely practised in our time, continuing degradation of ecological systems by humanity requires their re-examination by ecologists and the church. Such re-examination can help develop the mutual understanding necessary for making ethical ecological judgements and putting these teachings into practice in an appropriate manner. Among these teachings are the expectation that people will serve and keep the Creation (earthkeeping principle), that creatures and ecosystems not be relentlessly pressed (sabbath principle), that provisions must be made for the flourishing of the biosphere (fruitfulness principle), that the Earth be filled with biologically diverse and abundant life (fulfilment principle), that pressing the biosphere's absolute limits must be avoided (buffer principle), that people should seek contentment and not selfish gain (contentment principle), that people should seek biospheric integrity rather than self-interest (priority principle) and that people should not fail to act on what they know is right (praxis principle). Ecologists need to recognize and respect these and other biblical ecological teachings and be ready to assist churches in their care and keeping of Creation. And churches must join ecologists in the work of assuring the continued integrity of the biosphere.

2.
The primary structure of a polypeptide can be predicted by translating its mRNA sequence according to the ‘universal’ genetic code. Yet, recent evidence has shown that a number of nonstandard translational events may occur in cells, generating microheterogeneity in the translation product at the amino acid level. Such events can be programmed by sequences within the mRNA, or may just represent nonprogrammed errors that occur during translation as a result of depletion of specific aminoacyl-tRNAs. The potential occurrence of such errors must be considered and steps taken both to identify and eliminate them when expression strategies are being developed for producing recombinant proteins for human therapeutic use.

3.
As many data as possible must be included in any scientific analysis, provided that they follow the logical principles on which this analysis is based. Phylogenetic analysis is based on the basic principle of evolution, i.e., descent with modification. Consequently, ecological characters or any other nontraditional characters must be included in phylogenetic analyses, provided that they can plausibly be postulated to be heritable. The claim of Zrzavý (1997, Oikos 80, 186–192) and of Luckow and Bruneau (1997, Cladistics 13, 145–151) that any character of interest should be included in the analysis is thus inaccurate. Many characters, broadly defined or extrinsic (such as distribution areas), cannot be considered as actually heritable. It is argued that we should attend to the precise definition and properties of the characters of interest rather than decide a priori to include them in the analysis in any case. The symmetrical claim of de Queiroz (1996, Am. Nat. 148, 700–708) that some characters of interest are better excluded from analyses in order to reconstruct their history is similarly inaccurate. If they match the logical principles of phylogenetic analysis, there is no acceptable reason to exclude them. The different statistical testing strategies of Zrzavý (1997) and de Queiroz (1996), aimed at justifying inclusion versus exclusion of characters, are ill-conceived, leading respectively to Type II and Type I errors. It is argued that phylogenetic analyses should not be constrained by testing strategies that are downstream of the logical principles of phylogenetics. Excluding characters and mapping them on an independent phylogeny produces a particular and suboptimal kind of secondary homology, the use of which can be justified only for preliminary studies dealing with broadly defined characters.

4.
The present paper will focus on the relation between the structure of the genetic code table and the evolution of primitive organisms: it will be shown that the organization of the code table according to an optimization principle based on the notion of resistance to errors can provide a criterion for selection. The ordered aspect of the genetic code table makes this result a plausible starting point for studies of the origin and evolution of the genetic code: these could include, besides a more refined optimization principle at the logical level, some effects more directly related to the physico-chemical context, and the construction of realistic models incorporating both aspects.

5.
We consider the design and evaluation of short barcodes, with lengths between six and eight nucleotides, used for parallel sequencing on platforms where substitution errors dominate. Such codes should not only have good error-correction properties; the code words should also fulfil certain biological constraints (experimental parameters). We compare published barcodes with codes obtained by two new construction methods, one based on the currently best known linear codes and the other a simple randomized construction method. The evaluation covers error-correction capabilities, barcode size, experimental parameters, and fundamental bounds on code size and distance properties. We provide a list of codes for lengths between six and eight nucleotides, where for length eight, two substitution errors can be corrected. In fact, no code with larger minimum distance can exist.
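The error-correction claim rests on the minimum Hamming distance of the barcode set: a code with minimum distance d corrects up to ⌊(d−1)/2⌋ substitutions. A minimal sketch of that check, using a hypothetical barcode set rather than one of the paper's published codes:

```python
from itertools import combinations

def hamming(a: str, b: str) -> int:
    """Number of positions at which two equal-length barcodes differ."""
    return sum(x != y for x, y in zip(a, b))

def min_distance(barcodes: list[str]) -> int:
    """Minimum pairwise Hamming distance of a barcode set."""
    return min(hamming(a, b) for a, b in combinations(barcodes, 2))

# Hypothetical 8-nt barcodes; a set with minimum distance d corrects
# floor((d - 1) / 2) substitution errors per read.
codes = ["ACGTACGT", "TGCATGCA", "CATGGTAC", "GTACCATG"]
d = min_distance(codes)
print(f"minimum distance = {d}, corrects {(d - 1) // 2} substitutions")
```

A real design would additionally filter candidates by the biological constraints the abstract mentions (GC content, homopolymer runs and the like) before checking distance.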

6.
We consider a model of the origin of genetic code organization incorporating the biosynthetic relationships between amino acids and their physicochemical properties. We study the behavior of the genetic code in the set of codes subject both to biosynthetic constraints and to the constraint that the biosynthetic classes of amino acids must occupy only their own codon domain, as observed in the genetic code. Therefore, this set contains the smallest number of elements ever analyzed in similar studies. Under these conditions, if, as predicted by physicochemical postulates, the amino acid properties played a fundamental role in genetic code organization, the code would be expected to display an extremely high level of optimization. This prediction is not supported by our analysis, which indicates, for instance, a minimization percentage of only 80%. These observations can therefore be more easily explained by the coevolution theory of genetic code origin, which postulates a role that is important but not fundamental for the amino acid properties in the structuring of the code. We have also investigated the shape of the optimization landscape that might have arisen during genetic code origin. Here, too, the results seem to favor the coevolution theory: only a few amino acid exchanges would have sufficed to transform the genetic code (which is not a local minimum) into a much better optimized code, and the fact that such exchanges did not actually take place suggests that the reduction of translation errors was not the main adaptive theme structuring the genetic code.

7.
Learning to read takes time and requires explicit instruction. Three decades of research have taught us a good deal about how children learn about the links between orthography and phonology during word reading development. However, we have learned less about the links that children build between orthographic form and meaning. This is surprising given that the goal of reading development must be for children to develop an orthographic system that allows meanings to be accessed quickly, reliably and efficiently from orthography. This review considers whether meaning-related information is used when children read words aloud, and asks what we know about how and when children make connections between form and meaning during the course of reading development.

8.
Studies on the origin of the genetic code compare measures of the degree of error minimization of the standard code with measures produced by random variant codes but do not take into account codon usage, which was probably highly biased during the origin of the code. Codon usage bias could play an important role in the minimization of the chemical distances between amino acids because the importance of errors depends also on the frequency of the different codons. Here I show that when codon usage is taken into account, the degree of error minimization of the standard code may be dramatically reduced, and shifting to alternative codes often increases the degree of error minimization. This is especially true with a high GC content, which was probably the case during the origin of the code. I also show that the frequency of codes that perform better than the standard code, in terms of relative efficiency, is much higher in the neighborhood of the standard code itself, even when not considering codon usage bias; therefore alternative codes that differ only slightly from the standard code are more likely to evolve than some previous analyses suggested. My conclusions are that the standard genetic code is far from being an optimum with respect to error minimization and must have arisen for reasons other than error minimization.

9.
Statistical and biochemical studies of the genetic code have found evidence of nonrandom patterns in the distribution of codon assignments. It has, for example, been shown that the code minimizes the effects of point mutation or mistranslation: erroneous codons are either synonymous or code for an amino acid with chemical properties very similar to those of the one that would have been present had the error not occurred. This work has suggested that the second base of codons is less efficient in this respect, by about three orders of magnitude, than the first and third bases. These results are based on the assumption that all forms of error at all bases are equally likely. We extend this work to investigate (1) the effect of weighting transition errors differently from transversion errors and (2) the effect of weighting each base differently, depending on reported mistranslation biases. We find that if the bias affects all codon positions equally, as might be expected were the code adapted to a mutational environment with transition/transversion bias, then any reasonable transition/transversion bias increases the relative efficiency of the second base by an order of magnitude. In addition, if we employ weightings to allow for biases in translation, then only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects biases in these errors, as might be expected were the code the product of selection.
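The procedure these studies share — score a code by the weighted mean squared change in an amino acid property over all single-base errors, then compare against randomly shuffled codes — is compact enough to sketch. The sketch below uses Kyte–Doolittle hydropathy as the property (the papers use polar requirement, among other measures), a hypothetical transition weight of 2, and an optional codon-frequency weighting corresponding to the previous abstract's variant; it illustrates the statistic, not either paper's exact analysis:

```python
import random

BASES = "TCAG"
# Standard code laid out in TCAG order; '*' marks stop codons.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
STANDARD = dict(zip(CODONS, AA))

# Kyte-Doolittle hydropathy stands in for the amino acid property.
PROP = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
        "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
        "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
        "Y": -1.3, "V": 4.2}

TRANSITIONS = {("A", "G"), ("G", "A"), ("C", "T"), ("T", "C")}

def error_cost(code, ti_weight=1.0, codon_freq=None):
    """Weighted mean squared property change over all single-base errors.
    ti_weight > 1 makes transitions count more than transversions;
    codon_freq (codon -> frequency) adds codon-usage weighting."""
    total = weight_sum = 0.0
    for codon in CODONS:
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant = codon[:pos] + base + codon[pos + 1:]
                a, b = code[codon], code[mutant]
                if a == "*" or b == "*":
                    continue  # skip errors to or from stop codons
                w = ti_weight if (codon[pos], base) in TRANSITIONS else 1.0
                if codon_freq is not None:
                    w *= codon_freq[codon]
                total += w * (PROP[a] - PROP[b]) ** 2
                weight_sum += w
    return total / weight_sum

def random_code():
    """Shuffle the 20 amino acids among the synonymous codon blocks,
    keeping the block structure and stop codons fixed."""
    aas = sorted(set(AA) - {"*"})
    perm = dict(zip(aas, random.sample(aas, len(aas))))
    return {c: ("*" if a == "*" else perm[a]) for c, a in STANDARD.items()}

natural = error_cost(STANDARD, ti_weight=2.0)
beats = sum(error_cost(random_code(), ti_weight=2.0) < natural
            for _ in range(1000))
print(f"standard-code cost {natural:.2f}; {beats}/1000 random codes do better")
```

The fraction of random codes that beat the standard code under a given weighting is exactly the quantity these analyses use to judge whether the code looks adapted.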

10.
In this article, I ask whether a principle analogous to the principle of clinical equipoise should govern the design and conduct of RCTs evaluating the effectiveness of policy interventions. I answer this question affirmatively, and introduce and defend the principle of policy equipoise. According to this principle, all arms of a policy RCT must be, at minimum, in a state of equipoise with the best proven policy that is also morally and practically attainable and sustainable. For all arms of a policy RCT, policy experts must either (1) reasonably disagree about whether the trial arms are more effective than this policy, or (2) know that they are.

11.
The paper deplores the increasing practice whereby individuals and groups write Igbo with orthographic conventions that deviate from those of the official Igbo (Ọnwụ) Orthography. It warns that these divergent acts are steadily dragging Igbo orthography into a state of anarchy whose consequences could be more disastrous than those of the earlier orthography controversy of 1929–1961. The paper briefly traces the history of Igbo orthography from the earliest mention of Igbo in the sixteenth-century writings of European travelers to the present times. Among its recommendations for the restoration of sanity in Igbo orthography are: respect for the present official orthography until new conventions are officially agreed to and sanctioned; the revival of the Igbo Standardization Committee, which formerly regulated and supervised developments in the language; the convening of an international workshop on Igbo orthography; and the production of an enlarged pan-Igbo orthography for writing in dialects while the present official (Ọnwụ) orthography serves for Standard Igbo.

12.
During the RNA World, organisms experienced high rates of genetic errors, which implies that there was strong evolutionary pressure to reduce the errors’ phenotypical impact by suitably structuring the still-evolving genetic code. Therefore, the relative rates of the various types of genetic errors should have left characteristic imprints in the structure of the genetic code. Here, we show that it is consequently possible, to some extent, to reconstruct those error rates, as well as the nucleotide frequencies, for the time when the code was fixed. We find evidence indicating that the frequencies of G and C in the genome were not elevated. Since, for thermodynamic reasons, RNA in thermophiles tends to possess elevated G+C content, this result indicates either that the fixation of the genetic code occurred in organisms which were not thermophiles, or that the code’s fixation occurred after the rise of DNA. Supplementary materials: original data and programs are available at the author’s web site.

13.
An information-based methodology for determining the quality of an alignment of two code sequences is presented. The assumptions involved in the procedure are as follows. (i) The information required to effect the alignment is separable into three categories: location, type and operation detail. The information basis of all three categories must be the same so that the information values obtained may be added together to produce a meaningful total for the entire alignment. (ii) All possible alignments may be expressed as composites of four mutation operations, UR, S, In and D. Two mutations are constrained from occurring at the same site to avoid ambiguity and to render the set of alignments finite. (iii) The character statistics and corresponding estimates of the probabilities of occurrence for mutations are available or at least estimable. In application, one needs to obtain estimates of the distribution of (a) the spacing between mutations, (b) the frequency of the four mutation operations, and (c) the inserted character frequencies and deletion lengths. Some of the constraints on these estimates are described, and means for obtaining reasonable values are suggested in each case. These requirements are all extremely fundamental in nature and can, in principle, be satisfied biochemically. The greatest potential value of the method is that these physical quantities may be related in a non-arbitrary way to the complex problem of alignment. The method requires no arbitrary penalty factors and should help to guide geneticists in gathering the necessary data.
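The core idea — that an alignment's quality is the number of bits needed to specify it, with each event charged −log2 of its estimated probability — can be sketched directly. The probabilities and the edit script below are hypothetical placeholders for the distributions the paper says must be measured, and the location (mutation-spacing) term is omitted for brevity:

```python
import math

def bits(p: float) -> float:
    """Information content, in bits, of an event with probability p."""
    return -math.log2(p)

# Hypothetical estimates of the required distributions (in practice these
# would be measured from sequence data, as the paper requires):
OP_PROB = {"UR": 0.90, "S": 0.06, "In": 0.02, "D": 0.02}  # the four operations
CHAR_PROB = {"A": 0.3, "C": 0.2, "G": 0.2, "T": 0.3}      # inserted/new characters

def alignment_information(ops):
    """Total bits needed to specify an alignment given as a list of
    (operation, detail) pairs, e.g. ("S", "G") for a substitution to G."""
    total = 0.0
    for op, detail in ops:
        total += bits(OP_PROB[op])            # operation type
        if op in ("S", "In"):
            total += bits(CHAR_PROB[detail])  # operation detail: the character
    return total

# Example: score one candidate edit script; of two alignments of the same
# sequences, the one needing fewer bits is the better explanation.
script = [("UR", None), ("S", "G"), ("UR", None), ("In", "A"), ("UR", None)]
print(f"{alignment_information(script):.2f} bits")
```

Under this scoring the best alignment is simply the one requiring the fewest bits, with no ad hoc gap or mismatch penalties — which is the non-arbitrariness the abstract emphasizes.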

14.

Background

The standard genetic code (SGC) is a unique set of rules which assign amino acids to codons. Similar amino acids tend to have similar codons indicating that the code evolved to minimize the costs of amino acid replacements in proteins, caused by mutations or translational errors. However, if such optimization in fact occurred, many different properties of amino acids must have been taken into account during the code evolution. Therefore, this problem can be reformulated as a multi-objective optimization task, in which the selection constraints are represented by measures based on various amino acid properties.

Results

To study the optimality of the SGC we applied a multi-objective evolutionary algorithm and used the representatives of eight clusters, which grouped over 500 indices describing various physicochemical properties of amino acids. This allowed us to avoid an arbitrary choice of amino acid features as optimization criteria and, as a consequence, to conduct a more general study of the properties of the SGC than those presented so far in other papers on this topic. We considered two models of the genetic code, one preserving the characteristic codon block structure of the SGC and the other without this restriction. The results revealed that the SGC could be significantly improved in terms of error minimization and hence is not fully optimized. Its structure differs significantly from the structure of the codes optimized to minimize the costs of amino acid replacements. On the other hand, using newly defined quality measures that placed the SGC in the global space of theoretical genetic codes, we showed that the SGC is definitely closer to the codes that minimize the costs of amino acid replacements than to those maximizing them.

Conclusions

The standard genetic code most likely represents an only partially optimized system, which emerged under the influence of many different factors. Our findings can be useful to researchers involved in modifying the genetic code of living organisms and designing artificial ones.
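The multi-objective framing means candidate codes are compared by Pareto dominance rather than by a single cost. A minimal sketch of that comparison, with hypothetical two-objective cost vectors standing in for the eight cluster representatives:

```python
def dominates(a, b):
    """True if cost vector a Pareto-dominates b: no worse on every
    objective and strictly better on at least one (lower is better)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of cost vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (objective-1 cost, objective-2 cost) pairs for candidate codes:
candidates = [(3.1, 4.2), (2.8, 4.9), (3.5, 3.9), (2.8, 4.1), (3.0, 4.5)]
print(pareto_front(candidates))  # -> [(3.5, 3.9), (2.8, 4.1)]
```

A multi-objective evolutionary algorithm repeatedly mutates candidate codes and keeps the non-dominated ones, approximating the front against which the SGC can then be placed.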

15.
The ribosome is a molecular machine that converts genetic information in the form of RNA, into protein. Recent structural studies reveal a complex set of interactions between the ribosome and its ligands, mRNA and tRNA, that indicate ways in which the ribosome could avoid costly translational errors. Ribosomes must decode each successive codon accurately, and structural data provide a clear indication of how ribosomes limit recruitment of the wrong tRNA (sense errors). In a triplet-based genetic code there are three potential forward reading frames, only one of which encodes the correct protein. Errors in which the ribosome reads a codon out of the normal reading frame (frameshift errors) occur less frequently than sense errors, although it is not clear from structural data how these errors are avoided. Some mRNA sequences, termed programmed-frameshift sites, cause the ribosome to change reading frame. Based on recent work on these sites, this article proposes that the ribosome uses the structure of the codon-anticodon complex formed by the peptidyl-tRNA, especially its wobble interaction, to constrain the incoming aminoacyl-tRNA to the correct reading frame.

16.
Early fixation of an optimal genetic code
The evolutionary forces that produced the canonical genetic code before the last universal ancestor remain obscure. One hypothesis is that the arrangement of amino acid/codon assignments results from selection to minimize the effects of errors (e.g., mistranslation and mutation) on resulting proteins. If amino acid similarity is measured as polarity, the canonical code does indeed outperform most theoretical alternatives. However, this finding does not hold for other amino acid properties, ignores plausible restrictions on possible code structure, and does not address the naturally occurring nonstandard genetic codes. Finally, other analyses have shown that significantly better code structures are possible. Here, we show that if theoretically possible code structures are limited to reflect plausible biological constraints, and amino acid similarity is quantified using empirical data of substitution frequencies, the canonical code is at or very close to a global optimum for error minimization across plausible parameter space. This result is robust to variation in the methods and assumptions of the analysis. Although significantly better codes do exist under some assumptions, they are extremely rare and thus consistent with reports of an adaptive code: previous analyses which suggest otherwise derive from a misleading metric. However, all extant, naturally occurring, secondarily derived, nonstandard genetic codes do appear less adaptive. The arrangement of amino acid assignments to the codons of the standard genetic code appears to be a direct product of natural selection for a system that minimizes the phenotypic impact of genetic error. Potential criticisms of previous analyses appear to be without substance. That known variants of the standard genetic code appear less adaptive suggests that different evolutionary factors predominated before and after fixation of the canonical code. While the evidence for an adaptive code is clear, the process by which the code achieved this optimization requires further attention.

17.
Theories of neural coding seek to explain how states of the world are mapped onto states of the brain. Here, we compare how an animal''s location in space can be encoded by two different kinds of brain states: population vectors stored by patterns of neural firing rates, versus synchronization vectors stored by patterns of synchrony among neural oscillators. It has previously been shown that a population code stored by spatially tuned ‘grid cells’ can exhibit desirable properties such as high storage capacity and strong fault tolerance; here it is shown that similar properties are attainable with a synchronization code stored by rhythmically bursting ‘theta cells’ that lack spatial tuning. Simulations of a ring attractor network composed from theta cells suggest how a synchronization code might be implemented using fewer neurons and synapses than a population code with similar storage capacity. It is conjectured that reciprocal connections between grid and theta cells might control phase noise to correct two kinds of errors that can arise in the code: path integration and teleportation errors. Based upon these analyses, it is proposed that a primary function of spatially tuned neurons might be to couple the phases of neural oscillators in a manner that allows them to encode spatial locations as patterns of neural synchrony.  相似文献   
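To make the contrast concrete, here is a toy phase code (not the paper's ring-attractor model): a 1-D position is stored purely as a pattern of oscillator phases, one per hypothetical spatial scale, and read out by finding the position whose predicted phases best match the observed ones despite phase noise:

```python
import numpy as np

SCALES = np.array([30.0, 42.0, 55.0])  # hypothetical spatial periods (cm)

def encode(x):
    """Phase pattern (one phase per scale) representing position x."""
    return (2 * np.pi * x / SCALES) % (2 * np.pi)

def decode(phases, xs=np.linspace(0, 200, 2001)):
    """Template-matching readout: the candidate position whose predicted
    phase pattern is closest to the observed one (circular distance)."""
    preds = (2 * np.pi * xs[:, None] / SCALES) % (2 * np.pi)
    err = np.angle(np.exp(1j * (preds - phases)))  # wrap to [-pi, pi]
    return xs[np.argmin((err ** 2).sum(axis=1))]

x = 87.3
noisy = encode(x) + np.random.normal(0, 0.1, SCALES.size)  # phase noise
print(f"true {x} cm, decoded {decode(noisy):.1f} cm")
```

The multiple incommensurate scales give the large representational range the abstract attributes to such codes, while uncontrolled phase noise produces exactly the sudden "teleportation" decoding errors it describes.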

18.
19.
This paper proposes a modified nonlinear viscoelastic Bilston model (Bilston et al., 2001, Biorheol., 38, pp. 335-345) for modeling brain tissue constitutive properties. The modified model can be readily implemented in a commercial explicit finite element (FE) code, PamCrash. Critical parameters of the model have been determined through a series of rheological tests on porcine brain tissue samples, and the time-temperature superposition (TTS) principle has been used to extend the frequency range upward. Simulations using PamCrash are compared with the test results. Through the use of the TTS principle, the mechanical and rheological behavior at frequencies up to 10^4 rad/s may be obtained. This is important because the properties of brain tissue at high frequencies and impact rates are especially relevant to studies of traumatic head injury. The averaged dynamic modulus ranges from 130 Pa to 1500 Pa and the loss modulus ranges from 35 Pa to 800 Pa in the frequency regime studied (0.01 rad/s to 3700 rad/s). The errors between theoretical predictions and averaged relaxation test results are within 20% for strains up to 20%. The FEM simulation results are in good agreement with experimental results. The proposed model will be especially useful for FE analysis of the head under impact loads. More realistic analysis of head injury can be carried out by incorporating the nonlinear viscoelastic constitutive law for brain tissue into a commercial FE code.
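Time-temperature superposition rests on a horizontal shift factor a_T, for which the WLF equation is the usual form. In the sketch below the constants are the classic "universal" WLF values rather than constants fitted to brain tissue (the paper fits its own parameters), so it only illustrates how a sweep measured at one temperature is mapped to reduced frequencies at a reference temperature:

```python
import numpy as np

def wlf_shift(T, T_ref, c1=17.44, c2=51.6):
    """Williams-Landel-Ferry shift factor a_T: data measured at temperature
    T are replotted at reduced frequency omega * a_T relative to T_ref.
    c1, c2 here are the textbook 'universal' constants, not tissue fits."""
    return 10.0 ** (-c1 * (T - T_ref) / (c2 + (T - T_ref)))

# Measuring below the reference temperature gives a_T > 1, shifting the
# sweep to higher reduced frequencies -- how TTS extends the range upward.
omega = np.logspace(-2, 1, 4)            # measured frequencies, rad/s
a_T = wlf_shift(T=288.0, T_ref=295.0)    # hypothetical temperatures, K
print(f"a_T = {a_T:.1f}")
print("reduced frequencies (rad/s):", omega * a_T)
```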

20.
Objectives

Because of the large amount of medical imaging data, the transmission process becomes complicated in telemedicine applications. Thus, to adapt the data bit streams to bandwidth constraints, reducing the size of the data by image compression is essential. Despite improvements in the field of compression, the transmission itself can also introduce errors. For this reason, it is important to develop a strategy that reduces this volume of data without introducing distortion, while resisting the errors introduced by channel noise during transmission. In this paper, we therefore propose an ROI-based coding strategy with unequal bit stream protection to meet this dual constraint.

Material and methods

The proposed ROI-based compression strategy with unequal bit stream protection is composed of three parts: the first allows the extraction of the ROI region, the second consists of ROI-based coding and the third provides unequal protection of the ROI bit stream. First, the regions of interest (ROI) are extracted by hierarchical segmentation, using a marker-based watershed technique combined with level-set active contours. The resulting regions are selectively encoded by a 3D coder based on a shape-adaptive discrete wavelet transform, 3D-BISK, where the compression ratio of each region depends on its relevance for diagnosis. The ROI bit streams are then protected with Reed-Solomon error-correcting codes whose code rate varies with the relevance of the region, following an unequal error protection (UEP) strategy.

Results

The performance of the proposed compression scheme is evaluated in several ways. First, tests are performed to study the effect of varying the compression rate on the different bit streams. Second, Reed-Solomon error-correcting codes of different code rates are tested at different compression rates on a BSC channel. Finally, the performance of this coding strategy is compared with that of 3D SPIHT in the case of transmission on a BSC channel.

Conclusion

The obtained results show that the proposed method is quite efficient in reducing transmission time. Our proposed scheme therefore reduces the volume of data without introducing distortion and resists the errors introduced by channel noise, as required in telemedicine.
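The UEP idea — more parity symbols for the diagnostically important bytes, fewer for the background — is easy to sketch with Reed-Solomon codes. This sketch assumes the third-party Python package reedsolo (pip install reedsolo); the parity levels and byte streams are hypothetical stand-ins, not the paper's parameters:

```python
from reedsolo import RSCodec

rs_roi = RSCodec(32)        # 32 parity bytes: corrects up to 16 byte errors
rs_background = RSCodec(8)  # 8 parity bytes: corrects up to 4 byte errors

roi_bits = bytes(range(100))        # stand-in for the ROI bit stream
bg_bits = bytes(range(100, 200))    # stand-in for the background stream

protected_roi = rs_roi.encode(roi_bits)
protected_bg = rs_background.encode(bg_bits)

# Simulate channel noise: corrupt a few bytes of the ROI stream.
corrupted = bytearray(protected_roi)
for i in (3, 40, 77):
    corrupted[i] ^= 0xFF

# reedsolo's decode returns (message, message+ecc, errata positions).
recovered = rs_roi.decode(bytes(corrupted))[0]
assert bytes(recovered) == roi_bits
print("ROI stream recovered despite channel errors")
```

The background stream, carrying less diagnostic weight, tolerates occasional uncorrectable blocks, which is what lets the overall rate stay low without risking the ROI.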
