Found 20 similar documents (search time: 484 ms)
1.
Andrew Travers 《Origins of life and evolution of the biosphere》2006,36(5-6):549-555
The evolution of the genetic code in terms of the adoption of new codons has previously been related to the relative thermostability
of codon–anticodon interactions such that the most stable interactions have been hypothesised to represent the most ancient
coding capacity. This derivation is critically dependent on the accuracy of the experimentally determined stability parameters.
A new set of parameters recently determined for B-DNA reveals that the codon–anticodon pairs for the codes in non-plant mitochondria
on the one hand and prokaryotic and eukaryotic organisms on the other can be unequivocally divided into two classes – the
most stable base steps define a common code specified by the first two bases in a codon while the less stable base steps correlate
with divergent usage and the adoption of a 3-letter code. This pattern suggests that the fixation of codons for A, G, P, V,
S, T, D/E, R may have preceded the divergence of the non-plant mitochondrial line from other organisms. Other variations in
the code correlate with the least stable codon–anticodon pairs.
Presented at: National Workshop on Astrobiology: Search for Life in the Solar System, Capri, Italy, 26 to 28 October, 2005.
2.
Pierrel J 《Journal of the history of biology》2012,45(1):109-138
The importance of viruses as model organisms is well-established in molecular biology and Max Delbrück’s phage group set standards
in the DNA phage field. In this paper, I argue that RNA phages, discovered in the 1960s, were also instrumental in the making
of molecular biology. As part of experimental systems, RNA phages stood for messenger RNA (mRNA), genes and genome. RNA was
thought to mediate information transfers between DNA and proteins. Furthermore, RNA was more manageable at the bench than
DNA due to the availability of specific RNases, enzymes used as chemical tools to analyse RNA. Finally, RNA phages provided
scientists with a pure source of mRNA to investigate the genetic code, genes and even a genome sequence. This paper focuses
on Walter Fiers’ laboratory at Ghent University (Belgium) and their work on the RNA phage MS2. When setting up his Laboratory
of Molecular Biology, Fiers planned a comprehensive study of the virus with a strong emphasis on the issue of structure. In
his lab, RNA sequencing, now a little-known technique, evolved gradually from a means to solve the genetic code, to a tool
for completing the first genome sequence. Thus, I follow the research pathway of Fiers and his ‘RNA phage lab’ with their
evolving experimental system from 1960 to the late 1970s. This study illuminates two decisive shifts in post-war biology:
the emergence of molecular biology as a discipline in the 1960s in Europe and of genomics in the 1990s.
3.
David H. Ardell 《Journal of molecular evolution》1998,47(1):1-13
Distances between amino acids were derived from the polar requirement measure of amino acid polarity and Benner and co-workers'
(1994) 74-100 PAM matrix. These distances were used to examine the average effects of amino acid substitutions due to single-base
errors in the standard genetic code and equally degenerate randomized variants of the standard code. Second-position transitions
conserved all distances on average, an order of magnitude more than did second-position transversions. In contrast, first-position
transitions and transversions were about equally conservative. In comparison with randomized codes, second-position transitions
in the standard code significantly conserved mean square differences in polar requirement and mean Benner matrix-based distances,
but mean absolute value differences in polar requirement were not significantly conserved. The discrepancy suggests that these
commonly used distance measures may be insufficient for strict hypothesis testing without more information. The translational
consequences of single-base errors were then examined in different codon contexts, and similarities between these contexts
explored with a hierarchical cluster analysis. In one cluster of codon contexts corresponding to the RNY and GNR codons, second-position
transversions between C and G and transitions between C and U were most conservative of both polar requirement and the matrix-based
distance. In another cluster of codon contexts, second-position transitions between A and G were most conservative. Despite
the claims of previous authors to the contrary, it is shown theoretically that the standard code may have been shaped by position-invariant
forces such as mutation and base content. These forces may have left heterogeneous signatures in the code because of differences
in translational fidelity by codon position.
A scenario for the origin of the code is presented wherein selection for error minimization could have occurred multiple times
in disjoint parts of the code through a phyletic process of competition between lineages. This process permits error minimization
without the disruption of previously useful messages, and does not predict that the code is optimally error-minimizing with
respect to modern error. Instead, the code may be a record of genetic process and patterns of mutation before the radiation
of modern organisms and organelles.
Received: 28 July 1997 / Accepted: 23 January 1998
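The position-by-position averaging described above can be sketched in Python. This is a hypothetical illustration, not the paper's actual code: the `CODE` table is the standard genetic code, the `PR` values are approximate polar-requirement figures (Woese), and `ms_error` is a name introduced here for the mean squared difference over single-base substitutions.

```python
# Sketch (not the paper's code): mean squared change in polar requirement
# over all single-base substitutions in the standard genetic code, by position.

BASES = "TCAG"
# Amino acids for codons b1 b2 b3, each base iterated in TCAG order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}

# Approximate polar-requirement values (Woese).
PR = {"F": 5.0, "L": 4.9, "I": 4.9, "M": 5.3, "V": 5.6, "S": 7.5,
      "P": 6.6, "T": 6.6, "A": 7.0, "Y": 5.4, "H": 8.4, "Q": 8.6,
      "N": 10.0, "K": 10.1, "D": 13.0, "E": 12.5, "C": 4.8, "W": 5.2,
      "R": 9.1, "G": 7.9}

def ms_error(position):
    """Mean squared polar-requirement change over all single-base
    substitutions at codon position 0, 1 or 2 (stop codons excluded)."""
    diffs = []
    for codon, aa in CODE.items():
        if aa == "*":
            continue
        for base in BASES:
            if base == codon[position]:
                continue
            mutant_aa = CODE[codon[:position] + base + codon[position + 1:]]
            if mutant_aa == "*":
                continue
            diffs.append((PR[aa] - PR[mutant_aa]) ** 2)
    return sum(diffs) / len(diffs)

for pos in range(3):
    print(f"codon position {pos + 1}: mean squared error = {ms_error(pos):.2f}")
```

In this sketch, third-position errors come out most conservative on average and second-position errors least, consistent with the asymmetries the abstract analyzes further by transition/transversion class.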
4.
Naokazu Inoue, Toshiyuki Saito, Riako Masuda, Yoshio Suzuki, Michiko Ohtomi, H. Sakiyama 《Human genetics》1998,103(4):415-418
The complement system plays an important role in defense mechanisms by promoting the adherence of microorganisms to phagocytic
cells and lysis of foreign organisms. Deficiencies of the first complement components, C1r/C1s, often cause systemic lupus
erythematosus-like syndromes and severe pyogenic infections. Until now, no genetic analysis of the C1r/C1s deficiencies has
been carried out. In the present work, we report the first genetic analysis of selective C1s deficiency, the patient having
a normal amount of C1r. C1s RNA of normal size was detected in the patient’s subcutaneous fibroblasts (YKF) by RNA blot analysis
and RT-PCR. The amount of C1s RNA was approximately one-tenth of the RNA from the human chondrosarcoma cell line, HCS2/8.
In contrast, the levels of C1r and β-actin RNA of YKF were similar to that of HCS2/8. Sequence analysis of C1s cDNA revealed
a deletion at nucleotides 1087–1090 (TTTG), creating a stop codon (TGA) at position 94 downstream of the mutation site. Direct
sequencing of the gene between the primers designed on intron 9 and exon 10 indicated the presence of the deletion on exon
10 of the gene. Quantitative Southern blot hybridization suggested the mutation was homozygous. The 4-bp deletion on exon
10 was also found in the patient’s heterozygous mother who had normal hemolytic activity.
Received: 6 July 1998 / Accepted: 1 August 1998
5.
Ashutosh Vishwa Bandhu, Neha Aggarwal, Supratim Sengupta 《Origins of life and evolution of the biosphere》2013,43(6):465-489
The origin of the genetic code marked a major transition from a plausible RNA world to the world of DNA and proteins and is an important milestone in our understanding of the origin of life. We examine the efficacy of the physico-chemical hypothesis of code origin by carrying out simulations of code-sequence coevolution in finite populations in stages, leading first to the emergence of ten amino acid code(s) and subsequently to 14 amino acid code(s). We explore two different scenarios of primordial code evolution. In one scenario, competition occurs between populations of equilibrated code-sequence sets, while in the other scenario new codes compete with existing codes as they are gradually introduced into the population with a finite probability. In either case, we find that natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. The code whose structure is most consistent with the standard genetic code is often not among the codes that have a high fixation probability. However, we find that the composition of the code population affects the code fixation probability. A physico-chemically optimized code gets fixed with a significantly higher probability if it competes against a set of randomly generated codes. Our results suggest that physico-chemical optimization may not be the sole driving force in ensuring the emergence of the standard genetic code.
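The idea of a code's fixation probability in a finite population can be illustrated with a minimal Moran-process sketch. This is a toy model under simplifying assumptions (two competing variants, a single constant fitness advantage), not the authors' code-sequence coevolution simulation; `moran_fixation` and `moran_exact` are names introduced here.

```python
import random

def moran_fixation(N, r, trials, rng):
    """Estimate the fixation probability of a single mutant of relative
    fitness r in a Moran process with constant population size N."""
    fixed = 0
    for _ in range(trials):
        i = 1  # current number of mutants
        while 0 < i < N:
            # The reproducer is a mutant with probability proportional to fitness.
            if rng.random() < r * i / (r * i + (N - i)):
                if rng.random() < (N - i) / N:  # a wild-type individual dies
                    i += 1
            else:
                if rng.random() < i / N:  # a mutant dies
                    i -= 1
        fixed += (i == N)
    return fixed / trials

def moran_exact(N, r):
    """Closed-form fixation probability for the same process (r != 1)."""
    return (1 - 1 / r) / (1 - r ** -N)

rng = random.Random(42)
print(f"simulated: {moran_fixation(20, 1.2, 2000, rng):.3f}")
print(f"analytic:  {moran_exact(20, 1.2):.3f}")
```

Even a 20% fitness advantage fixes well under half the time in a small population, which is why, as the abstract reports, the most optimized code need not be the one that reaches fixation.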
6.
Explaining the apparent non-random codon distribution and the nature and number of amino acids in the ‘standard’ genetic code
remains a challenge, despite the various hypotheses so far proposed. In this paper we propose a simple new hypothesis for
code evolution involving a progression from singlet to doublet to triplet codons with a reading mechanism that moves three
bases each step. We suggest that triplet codons gradually evolved from two types of ambiguous doublet codons, those in which
the first two bases of each three-base window were read (‘prefix’ codons) and those in which the last two bases of each window
were read (‘suffix’ codons). This hypothesis explains multiple features of the genetic code such as the origin of the pattern
of four-fold degenerate and two-fold degenerate triplet codons, the origin of its error minimising properties, and why there
are only 20 amino acids.
Reviewing Editor: Dr. Laura Landweber
An erratum to this article can be found at .
7.
8.
The role of biodiversity in ecosystem function receives substantial attention, yet despite the diversity and functional relevance
of microorganisms, relationships between microbial community structure and ecosystem processes remain largely unknown. We
used tropical rain forest fertilization plots to directly compare the relative abundance, composition and diversity of free-living
nitrogen (N)-fixer communities to in situ leaf litter N fixation rates. N fixation rates varied greatly within the landscape,
and ‘hotspots’ of high N fixation activity were observed in both control and phosphorus (P)-fertilized plots. Compared with
zones of average activity, the N fixation ‘hotspots’ in unfertilized plots were characterized by marked differences in N-fixer
community composition and had substantially higher overall diversity. P additions increased the efficiency of N-fixer communities,
resulting in elevated rates of fixation per nifH gene. Furthermore, P fertilization increased N fixation rates and N-fixer abundance, eliminated a highly novel group of N-fixers,
and increased N-fixer diversity. Yet the relationships between diversity and function were not simple, and coupling rate measurements
to indicators of community structure revealed a biological dynamism not apparent from process measurements alone. Taken together,
these data suggest that the rain forest litter layer maintains high N fixation rates and unique N-fixing organisms and that,
as observed in plant community ecology, structural shifts in N-fixing communities may partially explain significant differences
in system-scale N fixation rates.
9.
Doogab Yi 《Journal of the history of biology》2008,41(4):589-636
The existing literature on the development of recombinant DNA technology and genetic engineering tends to focus on Stanley
Cohen and Herbert Boyer’s recombinant DNA cloning technology and its commercialization starting in the mid-1970s. Historians
of science, however, have pointedly noted that experimental procedures for making recombinant DNA molecules were initially
developed by Stanford biochemist Paul Berg and his colleagues, Peter Lobban and A. Dale Kaiser in the early 1970s. This paper,
recognizing the uneasy disjuncture between scientific authorship and legal invention in the history of recombinant DNA technology,
investigates the development of recombinant DNA technology in its full scientific context. I do so by focusing on Stanford
biochemist Berg’s research on the genetic regulation of higher organisms. As I hope to demonstrate, Berg’s new venture reflected
a mass migration of biomedical researchers as they shifted from studying prokaryotic organisms like bacteria to studying eukaryotic
organisms like mammalian and human cells. It was out of this boundary crossing from prokaryotic to eukaryotic systems through
virus model systems that recombinant DNA technology and other significant new research techniques and agendas emerged. Indeed,
in their attempt to reconstitute ‘life’ as a research technology, Stanford biochemists’ recombinant DNA research recast genes
as a sequence that could be rewritten through biochemical operations. The last part of this paper shifts focus from recombinant
DNA technology’s academic origins to its transformation into a genetic engineering technology by examining the wide range
of experimental hybridizations which occurred as techniques and knowledge circulated between Stanford biochemists and the
Bay Area’s experimentalists. Situating their interchange in a dense research network based at Stanford’s biochemistry department,
this paper helps to revise the canonized history of genetic engineering’s origins that emerged during the patenting of Cohen–Boyer’s
recombinant DNA cloning procedures.
10.
We have previously proposed an SNS hypothesis on the origin of the genetic code (Ikehara and Yoshida 1998). The hypothesis
predicts that the universal genetic code originated from the SNS code, composed of 16 codons and 10 amino acids (where S
denotes G or C, and N denotes any of the four bases). However, it would have been very difficult to create the SNS code at one stroke
in the beginning. Therefore, we searched for a simpler code than the SNS code, which could still encode water-soluble globular
proteins with appropriate three-dimensional structures at a high probability using four conditions for globular protein formation
(hydropathy, α-helix, β-sheet, and β-turn formations). Four amino acids (Gly [G], Ala [A], Asp [D], and Val [V]) encoded by
the GNC code satisfied the four structural conditions well, but other codes in rows and columns in the universal genetic code
table did not, except for the GNG code, a slightly modified form of the GNC code. Three three-amino-acid systems ([D], Leu
and Tyr; [D], Tyr and Met; Glu, Pro and Ile) also satisfied the above four conditions. However, some amino acids in the three
systems are far more complex than those encoded by the GNC code. In addition, the amino acids in the three-amino acid systems
are scattered in the universal genetic code table. Thus, we concluded that the universal genetic code originated not from
a three-amino acid system but from a four-amino acid system, the GNC code encoding [GADV]-proteins, as the most primitive
genetic code.
Received: 11 June 2001 / Accepted: 11 October 2001
11.
Jayaraman R 《Journal of genetics》2011,90(2):383-391
Hypermutability is a phenotype characterized by a moderate to high elevation of spontaneous mutation rates and could result
from DNA replication errors, defects in error correction mechanisms and many other causes. The elevated mutation rates are
helpful to organisms in adapting to sudden and unforeseen threats to survival. At the same time, hypermutability also leads to
the generation of many deleterious mutations, which offset its adaptive value and make it disadvantageous. Nevertheless,
it is very common in nature, especially among clinical isolates of pathogens. Hypermutability is inherited by indirect (second
order) selection along with the beneficial mutations generated. At large population sizes and high mutation rates many cells
in the population could concurrently acquire beneficial mutations of varying adaptive (fitness) values. These lineages compete
with the ancestral cells and also among themselves for fixation. The one with the ‘fittest’ mutation gets fixed ultimately
while the others are lost. This has been called ‘clonal interference’ which puts a speed limit on adaptation. The original
clonal interference hypothesis has been modified recently. Nonheritable (transient) hypermutability conferring significant
adaptive benefits also occurs during the stress response, although its molecular basis remains controversial. The adaptive benefits
of heritable hypermutability are discussed with emphasis on host–pathogen interactions.
12.
The genetic code appears to be optimized in its robustness to missense errors and frameshift errors. In addition, the genetic code is near-optimal in terms of its ability to carry information in addition to the sequences of encoded proteins. As evolution has no foresight, optimality of the modern genetic code suggests that it evolved from less optimal code variants. The length of codons in the genetic code is also optimal, as three is the minimal nucleotide combination that can encode the twenty standard amino acids. The apparent impossibility of transitions between codon sizes in a discontinuous manner during evolution has resulted in an unbending view that the genetic code was always triplet. Yet, recent experimental evidence on quadruplet decoding, as well as the discovery of organisms with ambiguous and dual decoding, suggest that the possibility of the evolution of triplet decoding from living systems with non-triplet decoding merits reconsideration and further exploration. To explore this possibility we designed a mathematical model of the evolution of primitive digital coding systems which can decode nucleotide sequences into protein sequences. These coding systems can evolve their nucleotide sequences via genetic events of Darwinian evolution, such as point-mutations. The replication rates of such coding systems depend on the accuracy of the generated protein sequences. Computer simulations based on our model show that decoding systems with codons of length greater than three spontaneously evolve into predominantly triplet decoding systems. Our findings suggest a plausible scenario for the evolution of the triplet genetic code in a continuous manner. This scenario suggests an explanation of how protein synthesis could be accomplished by means of long RNA-RNA interactions prior to the emergence of the complex decoding machinery, such as the ribosome, that is required for stabilization and discrimination of otherwise weak triplet codon-anticodon interactions. 
13.
The question of the potential importance for speciation of large/small population sizes remains open. We compare speciation
rates in twelve major taxonomic groups that differ by twenty orders of magnitude in characteristic species abundance (global
population number). It is observed that the twenty orders of magnitude’s difference in species abundances scales to less than
two orders of magnitude’s difference in speciation rates. As far as species abundance largely determines the rate of generation
of intraspecific endogenous genetic variation, the result obtained suggests that the latter rate is not a limiting factor
for speciation. Furthermore, the observed approximate constancy of speciation rates in different taxa cannot be accounted
for by assuming a neutral or nearly neutral molecular clock in subdivided populations. Neutral fixation is only relevant in
sufficiently small populations with 4N_e v < 1, which appears to be an unrealistic condition for many taxa of the smaller
organisms. Further research is clearly needed to
reveal the mechanisms that could equate the evolutionary pace in taxa with dramatically different population sizes.
14.
Do universal codon-usage patterns minimize the effects of mutation and translation error?
Background
Do species use codons that reduce the impact of errors in translation or replication? The genetic code is arranged in a way that minimizes errors, defined as the sum of the differences in amino-acid properties caused by single-base changes from each codon to each other codon. However, the extent to which organisms optimize the genetic messages written in this code has been far less studied. We tested whether codon and amino-acid usages from 457 bacteria, 264 eukaryotes, and 33 archaea minimize errors compared to random usages, and whether changes in genome G+C content influence these error values.
Results
We tested the hypotheses that organisms choose their codon usage to minimize errors, and that the large observed variation in G+C content in coding sequences, but the low variation in G+U or G+A content, is due to differences in the effects of variation along these axes on the error value. Surprisingly, the biological distribution of error values has far lower variance than randomized error values, but error values of actual codon and amino-acid usages are actually greater than would be expected by chance.
Conclusion
These unexpected findings suggest that selection against translation error has not produced codon or amino-acid usages that minimize the effects of errors, and that even messages with very different nucleotide compositions somehow maintain a relatively constant error value. They raise the question: why do all known organisms use highly error-minimizing genetic codes, but fail to minimize the errors in the mRNA messages they encode?
15.
It is widely agreed that the standard genetic code must have been preceded by a simpler code that encoded fewer amino acids. How this simpler code could have expanded into the standard genetic code is not well understood because most changes to the code are costly. Taking inspiration from the recently synthesized six-letter code, we propose a novel hypothesis: the initial genetic code consisted of only two letters, G and C, and then expanded the number of available codons via the introduction of an additional pair of letters, A and U. Various lines of evidence, including the relative prebiotic abundance of the earliest assigned amino acids, the balance of their hydrophobicity, and the higher GC content in genome coding regions, indicate that the original two nucleotides were indeed G and C. This process of code expansion probably started with the third base, continued with the second base, and ended up as the standard genetic code when the second pair of letters was introduced into the first base. The proposed process is consistent with the available empirical evidence, and it uniquely avoids the problem of costly code changes by positing instead that the code expanded its capacity via the creation of new codons with extra letters.
16.
Robert W. Griffith 《Origins of life and evolution of the biosphere》2009,39(6):517-531
Among various scenarios that attempt to explain how life arose, the RNA world is currently the most widely accepted scientific
hypothesis among biologists. However, the RNA world is logistically implausible and doesn’t explain how translation arose
and DNA became incorporated into living systems. Here I propose an alternative hypothesis for life’s origin based on cooperation
between simple nucleic acids, peptides and lipids. Organic matter that accumulated on the prebiotic Earth segregated into
phases in the ocean based on density and solubility. Synthesis of complex organic monomers and polymerization reactions occurred
within a surface hydrophilic layer and at its aqueous and atmospheric interfaces. Replication of nucleic acids and translation
of peptides began at the emulsified interface between hydrophobic and aqueous layers. At the core of the protobiont was a
family of short nucleic acids bearing arginine’s codon and anticodon that added this amino acid to pre-formed peptides. In
turn, the survival and replication of nucleic acid was aided by the peptides. The arginine-enriched peptides served to sequester
and transfer phosphate bond energy and acted as cohesive agents, aggregating nucleic acids and keeping them at the interface.
17.
Since the early days of the discovery of the genetic code nonrandom patterns have been searched for in the code in the hope of providing information about its origin and early evolution. Here we present a new classification scheme of the genetic code that is based on a binary representation of the purines and pyrimidines. This scheme reveals known patterns more clearly than the common one, for instance, the classification of strong, mixed, and weak codons as well as the ordering of codon families. Furthermore, new patterns have been found that have not been described before: Nearly all quantitative amino acid properties, such as Woese’s polarity and the specific volume, show a perfect correlation to Lagerkvist’s codon–anticodon binding strength. Our new scheme leads to new ideas about the evolution of the genetic code. It is hypothesized that it started with a binary doublet code and developed via a quaternary doublet code into the contemporary triplet code. Furthermore, arguments are presented against suggestions that a simpler code, where only the midbase was informational, was at the origin of the genetic code.
18.
Thermophilic organisms are being increasingly investigated and applied in metabolic engineering and biotechnology. The distinct metabolic and physiological characteristics of thermophiles, including broad substrate range and high uptake rates, coupled with recent advances in genetic tool development, present unique opportunities for strain engineering. However, poor understanding of the cellular physiology and metabolism of thermophiles has limited the application of systems biology and metabolic engineering tools to these organisms. To address this concern, we applied high-resolution 13C metabolic flux analysis to quantify fluxes for three divergent extremely thermophilic bacteria from separate phyla: Geobacillus sp. LC300, Thermus thermophilus HB8, and Rhodothermus marinus DSM 4252. We performed 18 parallel labeling experiments, using all singly labeled glucose tracers for each strain, reconstructed and validated metabolic network models, measured biomass composition, and quantified precise metabolic fluxes for each organism. In the process, we resolved many uncertainties regarding gaps in pathway reconstructions and elucidated how these organisms maintain redox balance and generate energy. Overall, we found that the metabolisms of the three thermophiles were highly distinct, suggesting that adaptation to growth at high temperatures did not favor any particular set of metabolic pathways. All three strains relied heavily on glycolysis and the TCA cycle to generate key cellular precursors and cofactors. None of the investigated organisms utilized the Entner-Doudoroff pathway and only one strain had an active oxidative pentose phosphate pathway. Taken together, the results from this study provide a solid foundation for future model building and engineering efforts with these and related thermophiles.
19.
Statistical and biochemical studies of the genetic code have found evidence of nonrandom patterns in the distribution of
codon assignments. It has, for example, been shown that the code minimizes the effects of point mutation or mistranslation:
erroneous codons are either synonymous or code for an amino acid with chemical properties very similar to those of the one
that would have been present had the error not occurred. This work has suggested that the second base of codons is less efficient
in this respect, by about three orders of magnitude, than the first and third bases. These results are based on the assumption
that all forms of error at all bases are equally likely. We extend this work to investigate (1) the effect of weighting transition
errors differently from transversion errors and (2) the effect of weighting each base differently, depending on reported mistranslation
biases. We find that if the bias affects all codon positions equally, as might be expected were the code adapted to a mutational
environment with transition/transversion bias, then any reasonable transition/transversion bias increases the relative efficiency
of the second base by an order of magnitude. In addition, if we employ weightings to allow for biases in translation, then
only 1 in every million random alternative codes generated is more efficient than the natural code. We thus conclude not only
that the natural genetic code is extremely efficient at minimizing the effects of errors, but also that its structure reflects
biases in these errors, as might be expected were the code the product of selection.
Received: 25 July 1997 / Accepted: 9 January 1998
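The transition/transversion weighting described above can be sketched as follows. This is a hypothetical illustration, not the paper's actual code: `weighted_error` is a name introduced here, the polar-requirement values are approximate, and the weighting is a simple weighted mean over all single-base errors rather than the paper's full scheme.

```python
# Sketch (not the paper's code): weighted mean squared change in polar
# requirement over all single-base errors, with transitions weighted
# ti_weight times as heavily as transversions.

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}
PR = {"F": 5.0, "L": 4.9, "I": 4.9, "M": 5.3, "V": 5.6, "S": 7.5,
      "P": 6.6, "T": 6.6, "A": 7.0, "Y": 5.4, "H": 8.4, "Q": 8.6,
      "N": 10.0, "K": 10.1, "D": 13.0, "E": 12.5, "C": 4.8, "W": 5.2,
      "R": 9.1, "G": 7.9}
TRANSITIONS = ({"A", "G"}, {"C", "T"})  # purine-purine, pyrimidine-pyrimidine

def weighted_error(ti_weight):
    """Weighted mean squared polar-requirement change over all single-base
    errors; transitions get weight ti_weight, transversions weight 1."""
    num = den = 0.0
    for codon, aa in CODE.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant_aa = CODE[codon[:pos] + base + codon[pos + 1:]]
                if mutant_aa == "*":
                    continue
                w = ti_weight if {base, codon[pos]} in TRANSITIONS else 1.0
                num += w * (PR[aa] - PR[mutant_aa]) ** 2
                den += w
    return num / den

print(f"unweighted error:          {weighted_error(1.0):.2f}")
print(f"transition weight of five: {weighted_error(5.0):.2f}")
```

Because transitions are on average more conservative than transversions in the standard code, increasing the transition weight lowers the overall error, in line with the abstract's point that the code's efficiency looks greater under a realistic transition/transversion bias.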
20.
The canonical genetic code has been reported both to be error minimizing and to show stereochemical associations between coding
triplets and binding sites. In order to test whether these two properties are unexpectedly overlapping, we generated 200,000
randomized genetic codes using each of five randomization schemes, with and without randomization of stop codons. Comparison
of the code error (difference in polar requirement for single-nucleotide codon interchanges) with the coding triplet concentrations
in RNA binding sites for eight amino acids shows that these properties are independent and uncorrelated. Thus, one is not
the result of the other, and error minimization and triplet associations probably arose independently during the history of
the genetic code. We explicitly show that prior fixation of a stereochemical core is consistent with an effective later minimization
of error.
[Reviewing Editor: Dr. Stephen Freeland]
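One of the randomization schemes described above can be sketched as follows. This is a hypothetical illustration, not the authors' code: it permutes amino-acid identities among the synonym blocks of the standard code while keeping stop codons fixed, and `code_error` and `randomized_code` are names introduced here; the polar-requirement values are approximate.

```python
import random

# Sketch (not the authors' code): compare the standard code's error
# (mean squared polar-requirement change over single-nucleotide codon
# interchanges) against codes randomized by shuffling amino acids among
# synonym blocks, with stop codons held fixed.

BASES = "TCAG"
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {b1 + b2 + b3: AA[16 * i + 4 * j + k]
        for i, b1 in enumerate(BASES)
        for j, b2 in enumerate(BASES)
        for k, b3 in enumerate(BASES)}
PR = {"F": 5.0, "L": 4.9, "I": 4.9, "M": 5.3, "V": 5.6, "S": 7.5,
      "P": 6.6, "T": 6.6, "A": 7.0, "Y": 5.4, "H": 8.4, "Q": 8.6,
      "N": 10.0, "K": 10.1, "D": 13.0, "E": 12.5, "C": 4.8, "W": 5.2,
      "R": 9.1, "G": 7.9}

def code_error(code):
    """Mean squared polar-requirement change over all single-base errors."""
    diffs = []
    for codon, aa in code.items():
        if aa == "*":
            continue
        for pos in range(3):
            for base in BASES:
                if base == codon[pos]:
                    continue
                mutant_aa = code[codon[:pos] + base + codon[pos + 1:]]
                if mutant_aa == "*":
                    continue
                diffs.append((PR[aa] - PR[mutant_aa]) ** 2)
    return sum(diffs) / len(diffs)

def randomized_code(rng):
    """Permute amino-acid identities among synonym blocks (stops fixed)."""
    aas = sorted(set(CODE.values()) - {"*"})
    shuffled = aas[:]
    rng.shuffle(shuffled)
    mapping = dict(zip(aas, shuffled))
    mapping["*"] = "*"
    return {codon: mapping[aa] for codon, aa in CODE.items()}

rng = random.Random(0)
standard = code_error(CODE)
samples = [code_error(randomized_code(rng)) for _ in range(200)]
better = sum(err < standard for err in samples)
print(f"standard code error: {standard:.2f}")
print(f"random codes beating the standard code: {better} / 200")
```

Under this scheme, very few randomized codes beat the standard code on error alone; the paper's point is that such error minimization is statistically independent of the stereochemical triplet associations.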