Similar documents
20 similar documents retrieved (search time: 0 ms)
1.
Kinetochore reorientation is the critical process ensuring normal chromosome distribution. Reorientation has been studied in living grasshopper spermatocytes, in which bivalents with both chromosomes oriented to the same pole (unipolar orientation) occur but are unstable: sooner or later one chromosome reorients, the stable, bipolar orientation results, and normal anaphase segregation to opposite poles follows. One possible source of stability in bipolar orientations is the normal spindle forces toward opposite poles, which slightly stretch the bivalent. This tension is lacking in unipolar orientations because all the chromosomal spindle fibers and spindle forces are directed toward one pole. The possible role of tension has been tested directly by micromanipulation of bivalents in unipolar orientation to artificially create the missing tension. Without exception, such bivalents never reorient before the tension is released; a total time "under tension" of over 5 hr has been accumulated in experiments on eight bivalents in eight cells. In control experiments these same bivalents reoriented from a unipolar orientation within 16 min, on the average, in the absence of tension. Controlled reorientation and chromosome segregation can be explained from the results of these and related experiments.
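To make the strength of this result concrete, here is a minimal back-of-the-envelope sketch (our illustration, not from the paper): if reorientation were a memoryless process unaffected by tension, with the observed 16 min mean waiting time, the probability of seeing no reorientation at all during more than 5 hr of pooled observation under tension would be vanishingly small.

```python
import math

# Hedged illustration, not from the paper: treat reorientation of a
# unipolar bivalent as a memoryless (exponential) waiting process.
mean_wait_min = 16.0               # observed mean time to reorient, no tension
rate_per_min = 1.0 / mean_wait_min

# Pooled time under tension across the eight bivalents (>5 hr total).
pooled_tension_min = 5 * 60        # conservative lower bound, in minutes

# Probability of observing zero reorientations during the pooled time
# if tension had no effect on the reorientation rate.
p_none = math.exp(-rate_per_min * pooled_tension_min)
print(f"P(no reorientation in {pooled_tension_min} min) = {p_none:.1e}")
# ~7e-9: the observed stability under tension is wildly improbable
# unless tension itself suppresses reorientation.
```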

2.
3.
A new technique has been devised for staining the mitotic spindle in mammalian cells while preserving spindle structure and chromosome number. The cells are trypsinized and fixed with a 3:1 methanol:acetic acid solution containing 4 mM MgCl2 and 1.5 mM CaCl2 at room temperature. The cells are then placed on slides and treated with 5% perchloric acid before staining with a 10% acetic acid solution containing safranin O and brilliant blue R. The preserved spindles appear dark blue against a light cytoplasmic background, with chromosomes stained bright red. Individual chromosomes and chromatids are clearly visible. The positioning of the chromosomes relative to the spindle apparatus is readily ascertained, allowing easy study of mitotic spindle and chromosome behavior.
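As a bench convenience, the fixative recipe reduces to simple C1V1 = C2V2 dilution arithmetic. A minimal sketch follows; the 1 M salt stock concentrations are our assumption, not from the paper.

```python
def fixative_volumes(total_ml: float,
                     mgcl2_stock_mM: float = 1000.0,
                     cacl2_stock_mM: float = 1000.0) -> dict:
    """Volumes for a 3:1 methanol:acetic acid fixative containing
    4 mM MgCl2 and 1.5 mM CaCl2 (stock concentrations are assumed)."""
    mg_ml = total_ml * 4.0 / mgcl2_stock_mM    # C1*V1 = C2*V2
    ca_ml = total_ml * 1.5 / cacl2_stock_mM
    remainder = total_ml - mg_ml - ca_ml
    return {
        "methanol_ml": remainder * 3 / 4,      # 3 parts methanol
        "acetic_acid_ml": remainder * 1 / 4,   # 1 part acetic acid
        "MgCl2_stock_ml": mg_ml,
        "CaCl2_stock_ml": ca_ml,
    }

print(fixative_volumes(100.0))
# For 100 ml total: ~74.6 ml methanol, ~24.9 ml acetic acid,
# 0.4 ml of 1 M MgCl2 and 0.15 ml of 1 M CaCl2.
```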

4.
5.
6.
During normal metaphase in Saccharomyces cerevisiae, chromosomes are captured at the kinetochores by microtubules emanating from the spindle pole bodies at opposite poles of the dividing cell. The balance of forces between the cohesins holding the replicated chromosomes together and the pulling force from the microtubules at the kinetochores results in the biorientation of the sister chromatids before chromosome segregation. The absence of kinetochore–microtubule interactions or loss of cohesion between the sister chromatids triggers the spindle checkpoint, which arrests cells in metaphase. We report here that a mitotic exit network (MEN) mutant, cdc15-2, though competent in activating the spindle assembly checkpoint when exposed to nocodazole (Noc), mis-segregated chromosomes during recovery from spindle checkpoint activation. cdc15-2 cells arrested in Noc, although their Pds1p levels did not accumulate as well as in wild-type cells. Genetic analysis indicated that Pds1p levels are lower in mad2Δ cdc15-2 and bub2Δ cdc15-2 double mutants compared with the single mutants. Chromosome mis-segregation in the mutant was due to premature spindle elongation in the presence of unattached chromosomes, likely through loss of proper control over the spindle midzone protein Slk19p and the kinesin Cin8p. Our data indicate that a slower rate of transition through the cell division cycle can result in an inadequate level of Pds1p accumulation that can compromise recovery from spindle assembly checkpoint activation.

7.
8.
9.
10.
Taming data
A challenge in systems-level investigations of the immune response is the principled integration of disparate data sets for constructing predictive models. InnateDB (Lynn et al., 2008; http://www.innatedb.ca), a publicly available, manually curated database of experimentally verified molecular interactions and pathways involved in innate immunity, is a powerful new resource that facilitates such integrative systems-level analyses.
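As a hedged illustration of the integrative analyses such a resource enables, the sketch below loads a bulk interaction export into pandas and ranks genes by interaction degree. The file name and column names are hypothetical placeholders, not InnateDB's documented export schema.

```python
import pandas as pd

# Hypothetical: assumes a tab-separated bulk export of curated
# interactions downloaded from http://www.innatedb.ca. The file name
# and column names below are placeholders, not the actual schema.
interactions = pd.read_csv("innatedb_interactions.tsv", sep="\t")

# Rank genes by number of curated interaction partners
# (a crude first look at network hubs in innate immunity).
degree = interactions["gene_symbol_a"].value_counts()
print(degree.head(10))
```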

11.
12.
Taming plastids for a green future
Plant genetic engineering will probably contribute to the required continued increase in agricultural productivity during the coming decades, and moreover, plants can potentially provide inexpensive production platforms for pharmaceuticals and nutraceuticals. With the advent of technologies for altering the genetic information inside chloroplasts, a new attractive target for genetic engineering has become available to biotechnologists. Potential advantages over conventional nuclear transformation include high transgene expression levels and increased biosafety because of maternal organelle inheritance in most crops. This review summarizes the state of the art in chloroplast genetic engineering and describes how reverse genetics approaches enhance our understanding of photosynthesis and other important chloroplast functions. Furthermore, promising strategies by which chloroplast genetic engineering might contribute to the successful modification of plant metabolism are discussed.

13.
14.
15.
Specific recombinant DNA sequences (5S rRNA, B1, albumin) were assigned to flow-sorted chromosomes of the Chinese hamster cell line CHV79. For this purpose, a rapid protocol was developed using filter-bound chromosomal DNA probed with various nucleic acids, which allows sequence identification in chromosomes. A flow histogram and a flow karyogram of the CHV79 cell line were established by flow analysis in order to calculate the amount of DNA per CHV79 cell and per chromosome. Subsequently, metaphase chromosomes or chromosomal groups were fractionated by electronic sorting, and a defined number of chromosomes was bound directly to nitrocellulose filters for sequence homology analysis by a dot blot hybridization procedure. This procedure not only allows specific DNA sequences to be assigned to particular chromosomes; it is also applicable to studies of changes in karyotypes, for example translocations of given sequences. Some of the results shown constituted a Diploma Thesis by G.L., submitted to and accepted by the Department of Biology, University of Kaiserslautern.
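The DNA quantification step described above is proportional arithmetic: each peak's share of the total flow-karyogram fluorescence, multiplied by the DNA content per cell. A minimal sketch with invented numbers (the peak areas and per-cell DNA content below are illustrative, not values from the study):

```python
# Illustrative only: in a flow karyogram, integrated peak fluorescence
# is roughly proportional to DNA content, so each peak's fractional
# share of the total signal times DNA per cell gives pg DNA per peak.
peak_areas = {"peak_1": 12.0, "peak_2": 9.5, "peak_3": 6.0}  # invented
dna_per_cell_pg = 6.5                                        # assumed

total_area = sum(peak_areas.values())
for peak, area in peak_areas.items():
    pg = dna_per_cell_pg * area / total_area
    print(f"{peak}: {pg:.2f} pg DNA")
```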

16.
17.
18.
19.
There are many complex biological models that fit the data perfectly and yet do not reflect the cellular reality. The process of validating a large model should therefore be viewed as an ongoing mission that refines underlying assumptions by improving low-confidence areas or gaps in the model's construction.

At its most basic, science is about models. Natural phenomena that were perplexing to ancient humans have been systematically illuminated as scientific models have revealed the mathematical order underlying the natural world. But what happens when the models themselves become complex enough that they too must be interpreted to be understood?

In 2012, Jonathan Karr, Markus Covert and colleagues at Stanford University (USA) produced a bold new biological model that attempts to simulate an entire cell: iMg [1]. iMg merges 28 sub-modules of processes within Mycoplasma genitalium, one of the simplest organisms known. As a systems biology big-data model, iMg is unique in its scope and is an undeniable paragon of good craft. Because it is probable that this landmark paper will soon be followed by other whole-cell models, we feel it is timely to examine this important endeavour, its challenges and potential pitfalls.

Building a model requires making many decisions, such as which processes to gloss over and which to reconstruct in detail, how many and what kinds of connections to forge between the model's constituents, and how to determine values for the model's parameters. The standard practice has been to tune a model's parameters and its structure to a best fit with the available data. But this approach breaks down when building a large whole-cell model, because the number of decisions inflates with the model's size and the amount of data required for these decisions to be unequivocal becomes huge. This problem is fundamental, not merely technical, and is rooted in the principle of frugality that underlies all science: Occam's razor.

The problem posed by Occam's razor is that there are vastly more potential large models that can successfully predict and explain any given body of data than there are small ones. As we can tweak increasingly complex models in an increasing number of ways, we can produce many large models that fit the data perfectly and yet do not reflect the cellular reality. Even if a model fits all the data well, the chance of it happening to be the 'correct' model (that is, the one that correctly reflects the underlying cellular architecture and relevant enzymatic parameters) is inversely related to its complexity. A sophisticated large model such as iMg, which has been fitted to many available datasets, will certainly recapture many behaviours of the real system. But it could also recapture many other potentially wrong ones.

How do we test a model's correctness in this sense? The intuitive way is to make and test predictions about previously uncharted phenomena. But validating a large biological model is an inherently different challenge from the common practice of "predict, test and validate" customary with smaller ones. Validation using phenotypic 'emerging' predictions would require such large amounts of data that it would be highly inefficient and costly at this scale, especially as many of these predictions will turn out to be false leads, with negative results yielding little insight.

Rather, the correctness of a whole-cell model is perhaps best validated by using a complementary paradigm: direct testing of the basic decisions that went into the model's construction. For example, enzymatic rate constants that were fitted in order to make the model behave properly could be experimentally scrutinized for later versions. Performing extensive sensitivity analyses, incorporating known confidence levels of modelling decisions, and harnessing more advanced methods such as 'active learning' should all be used in conjunction to determine which parameters to focus on in the future. The process of validating a large model should thus be viewed as an ongoing mission that aims to produce more refined and accurate drafts by improving low-confidence areas or gaps in the model's construction. Step by step, this paradigm should increase a model's reliability and its ability to make valid new predictions.

An open discussion of the potential pitfalls and benefits of building complex biological models could not be timelier, as the EU and the US have just committed a combined total of more than 1.4 billion dollars to explicitly model the human brain. Massive data collection and big-data analysis are the new norm in most fields, and big models are following closely behind. Their cost, usefulness and application remain open for discussion, but we certainly laud the spirit of the effort. For what is certain is this: only by building these models will we know what usefulness we can attribute to them. Paraphrasing Paul Cézanne, these efforts might indeed be justified and worthy, so long as one is "more or less master of his model".
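To make the proposed validation paradigm concrete, here is a minimal sketch of a one-at-a-time sensitivity analysis, run on a toy two-parameter model rather than on iMg itself; the model, its parameters and the 10% perturbation size are all illustrative choices.

```python
import numpy as np

def toy_model(params: dict) -> float:
    """Toy stand-in for a whole-cell model: predicted doubling time
    from two invented rate constants (illustrative only)."""
    k_synth, k_decay = params["k_synth"], params["k_decay"]
    return np.log(2) / (k_synth - k_decay)

baseline = {"k_synth": 0.8, "k_decay": 0.3}
t0 = toy_model(baseline)

# One-at-a-time sensitivity: perturb each parameter by +10% and
# record the relative change in the model's output.
for name in baseline:
    perturbed = dict(baseline, **{name: baseline[name] * 1.1})
    rel_change = (toy_model(perturbed) - t0) / t0
    print(f"{name}: {rel_change:+.1%} change in doubling time")
```

Parameters whose perturbation moves the output most are the fitted decisions most worth measuring directly in later drafts of a large model.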

20.
Protein folding is an important problem in structural biology with significant medical implications, particularly for misfolding disorders like Alzheimer's disease. Solving the folding problem will ultimately require a combination of theory and experiment, with theoretical models providing a comprehensive view of folding and experiments grounding these models in reality. Here we review progress towards this goal over the past decade, with an emphasis on recent theoretical advances that are empowering chemically detailed models of folding and the new results these technologies are providing. In particular, we discuss new insights made possible by Markov state models (MSMs), including the role of non-native contacts and the hub-like character of protein folded states.
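For readers new to MSMs, the core construction is mechanical: discretize trajectories into states, count transitions at a chosen lag time, row-normalize into a transition matrix, and read equilibrium populations and kinetics off its eigenvectors and eigenvalues. A minimal self-contained sketch on synthetic data (the three-state trajectory is invented for illustration, standing in for clustered MD snapshots):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic discrete-state trajectory (stand-in for clustered MD frames).
true_T = np.array([[0.90, 0.08, 0.02],
                   [0.05, 0.90, 0.05],
                   [0.02, 0.08, 0.90]])
traj = [0]
for _ in range(50_000):
    traj.append(rng.choice(3, p=true_T[traj[-1]]))

# Build the MSM: count transitions at lag tau, then row-normalize.
tau = 1
counts = np.zeros((3, 3))
for i, j in zip(traj[:-tau], traj[tau:]):
    counts[i, j] += 1
T = counts / counts.sum(axis=1, keepdims=True)

# Stationary distribution: left eigenvector of T for eigenvalue 1.
# Implied timescales follow from the remaining eigenvalues.
evals, evecs = np.linalg.eig(T.T)
order = np.argsort(-evals.real)
pi = np.abs(evecs[:, order[0]].real)
pi /= pi.sum()
timescales = -tau / np.log(evals.real[order[1:]])
print("stationary distribution:", np.round(pi, 3))
print("implied timescales (in lag units):", np.round(timescales, 1))
```

In practice one would sweep the lag time and check that the implied timescales level off before trusting the resulting model.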
