Similar Articles
20 similar articles found.
1.
2.
3.
4.
5.
6.
Here I consider the question of when to formulate a likelihood over the whole data set, as opposed to conditioning the likelihood on subsets of the data (i.e., joint vs. conditional likelihoods). I show that when certain conditions are met, these two likelihoods are guaranteed to be equivalent, and thus that it is generally preferable to condition on subsets, since the conditional likelihood is mathematically and computationally simpler. However, when these conditions are not met, conditioning on subsets of the data is equivalent to introducing additional degrees of freedom (df) into the genetic model, df that we may not have been aware of. I discuss the implications of these facts for ascertainment corrections and other genetic problems.
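A minimal sketch of the equivalence the abstract points to, in notation of my own rather than the author's: let D denote the full data set and D_s the subset conditioned on. The joint likelihood then factors as

```latex
% Illustrative factorization (notation is mine, not the article's)
L_{\mathrm{joint}}(\theta) \;=\; P(D \mid \theta)
  \;=\; P(D \mid D_s, \theta)\, P(D_s \mid \theta)
  \;=\; L_{\mathrm{cond}}(\theta)\, P(D_s \mid \theta).
```

The two likelihoods yield the same inference about the parameters whenever the dropped factor P(D_s | θ) does not depend on θ; when it does depend on θ, discarding it effectively frees the model to absorb that dependence, which is the source of the extra df the abstract warns about.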

7.
8.
Protein concentration data are required for understanding protein interactions and are a prerequisite for the determination of affinity and kinetic properties. They are also vital for judging protein quality and for monitoring the effect of therapeutic agents. Protein concentration values are typically obtained by comparison to a standard and derived from a standard curve. The use of a protein standard is convenient, but may not give reliable results if samples and standards behave differently. In other cases, a standard preparation may not be available and has to be established and validated. Surface plasmon resonance (SPR) biosensors make an alternative concentration method possible. This method, called calibration-free concentration analysis (CFCA), generates active concentration data directly and without the use of a standard. The active concentration of a protein is defined through its interaction with its binding partner; this concentration can differ from the total protein concentration if some fraction of the protein is incapable of binding. If a protein has several different binding sites, active concentration data can be established for each binding site using site-specific interaction partners. This review focuses on CFCA. It reiterates the theory of CFCA and describes how CFCA has been applied in different research areas. The major part of the review, however, sets expectations for CFCA and discusses how it can be further developed for absolute and relative concentration measurements.
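As a rough sketch of the principle the review reiterates (the proportionality below is the commonly cited mass-transport-limited form; the symbols and the omission of instrument constants are my simplifications, not the review's exact presentation):

```latex
% Illustrative CFCA relation under complete mass-transport limitation
\left.\frac{dR}{dt}\right|_{t \to 0} \;\propto\; k_m(f)\, C_{\mathrm{active}},
\qquad k_m(f) \;\propto\; D^{2/3} f^{1/3}
```

where D is the analyte diffusion coefficient and f the flow rate. Measuring the initial binding rate at two or more flow rates, with D known, allows the active concentration to be estimated without a calibration standard.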

9.
Analysis of ancient bone DNA: techniques and applications.
The analysis of DNA from ancient bone has numerous applications in archaeology and molecular evolution. Significant amounts of genetic information can be recovered from ancient bone: mitochondrial DNA sequences of 800 base pairs have been amplified from a 750-year-old human femur by using the polymerase chain reaction. DNA recovery varies considerably between bone samples and is not dependent on the age of the specimen. We present the results of a study on a small number of bones from a mediaeval and a 17th-century cemetery in Abingdon showing the relation between gross preservation, microscopic preservation and DNA recovery.

10.
11.
12.
An essentially new method to relate a number of taxa on the basis of a predefined set of dichotomous properties (i.e., either present or not present) is described. The basic step of the analysis is the derivation of a sophisticated distance measure that describes the pairwise dissimilarities quantitatively on the basis of the individual properties. Presenting the dissimilarity matrix as a tree-like structure is an obvious step implied by the distance measure and is related to the widely used method of successively joining nearest neighbors with respect to the distances. The distance measure makes no use of stochastic or other mathematical models of evolutionary processes and can best be interpreted in terms of discrete information theory.
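The abstract does not give the exact information-theoretic distance, so the sketch below uses a simple mismatch-based measure as a stand-in, and single-linkage clustering as a stand-in for the described successive joining of nearest neighbors; the trait matrix is invented for illustration.

```python
# Illustrative sketch only: a placeholder dissimilarity on dichotomous
# properties, followed by joining of the currently nearest clusters.
import numpy as np
from scipy.cluster.hierarchy import linkage

# Rows = taxa, columns = dichotomous properties (1 = present, 0 = absent).
traits = np.array([
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [0, 1, 0, 1, 1],
    [0, 1, 1, 1, 1],
])

def dissimilarity(a, b):
    """Placeholder pairwise dissimilarity: fraction of mismatched properties."""
    return float(np.mean(a != b))

n = len(traits)
# Condensed distance vector in the pair order scipy expects (i < j).
dists = [dissimilarity(traits[i], traits[j])
         for i in range(n) for j in range(i + 1, n)]

# Each linkage row records one join of the two currently nearest clusters.
tree = linkage(dists, method="single")
print(tree)
```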

13.
Grasslands used to be vital landscape elements throughout Europe. Nowadays, the area of grasslands is dramatically reduced, especially in industrialized countries. Grassland restoration is widely applied to increase the naturalness of the landscape and preserve biodiversity. We reviewed the most frequently used restoration techniques (spontaneous succession, sowing seed mixtures, transfer of plant material, topsoil removal and transfer) and techniques used to improve species richness (planting, grazing and mowing) to recover natural-like grasslands from ex-arable lands. We focus on the usefulness of the methods in restoring biodiversity, their practical feasibility and their costs. We conclude that the success of each technique depends on the site conditions, history, availability of propagules and/or donor sites, and on the budget and time available for restoration. Spontaneous succession can be an option when no rapid result is expected, and is likely to reach the target in areas with high propagule availability. Sowing low-diversity seed mixtures is recommended when the aim is to create basic grassland vegetation over large areas and/or in a short time. Compiling high-diversity seed mixtures for large sites is difficult and expensive; thus, this approach is better suited to smaller areas. We recommend combining the two sowing approaches, using low-diversity mixtures over a large area and high-diversity mixtures in small blocks to create species-rich source patches for the spontaneous colonization of nearby areas. When suitable local hay sources are available, plant material transfer can be a fast and effective restoration method.

14.
Distribution data for 111 mainly sclerophyll forest tree species in southern and eastern Australia were analysed to detect possible phytogeographic provinces. The suitability of fourteen methods of numerical classification was assessed. Overall, the Kulczynski (1927) similarity coefficient together with the flexible clustering strategy (β = 0.25) (Lance & Williams 1967) and the asymmetric information statistic (Dale, Lance & Albrecht 1971) were considered to produce the most satisfactory results.
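As an illustrative sketch only (the toy region-by-species data, the choice of Kulczynski's second form, and the β value passed to the update are my assumptions, not the paper's actual inputs), the Kulczynski coefficient and the Lance-Williams "flexible" distance update might look like this:

```python
# Illustrative sketch: Kulczynski similarity for presence/absence data and
# the Lance-Williams "flexible" update used in flexible-beta clustering.
import numpy as np

# Rows = regions (candidate provinces), columns = tree species (1 = recorded).
X = np.array([
    [1, 1, 0, 1, 0, 1],
    [1, 1, 1, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
])

def kulczynski_similarity(a, b):
    """Mean of the two conditional proportions of shared species (Kulczynski 2)."""
    shared = np.sum((a == 1) & (b == 1))
    return 0.5 * (shared / a.sum() + shared / b.sum())

def flexible_update(d_ik, d_jk, d_ij, beta):
    """Lance-Williams flexible strategy: alpha_i = alpha_j = (1 - beta) / 2, gamma = 0."""
    alpha = (1.0 - beta) / 2.0
    return alpha * d_ik + alpha * d_jk + beta * d_ij

# Pairwise dissimilarities (1 - similarity) between the three regions.
n = len(X)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = 1.0 - kulczynski_similarity(X[i], X[j])

# Merge the closest pair, then recompute its distance to the remaining region.
i, j = np.unravel_index(np.argmin(D + np.eye(n)), D.shape)
k = next(m for m in range(n) if m not in (i, j))
print(D)
print("merged distance to remaining region:",
      flexible_update(D[i, k], D[j, k], D[i, j], beta=0.25))
```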

15.
Alan Taylor, Twin Research, 2004, 7(5): 485-504
Nonresponse occurs when individuals either have no chance of being included in a study (noncoverage), refuse to take part (unit nonresponse), or fail to give complete information (item nonresponse). The purpose of this article is to test the possible biasing effects of nonresponse on the results of behavioral-genetic studies. Simulations and a real-data 'natural experiment' were used to determine the impact of nonresponse on estimates of additive genetic and environmental effects. The simulations used realistic twin-pair correlations and models of nonresponse derived from prior research. The real-data 'natural experiment' used data from a nationally representative birth-cohort twin study (the E-Risk Study) and compared model results from families who had responded to a mail survey with those from all study cases. Results showed that the primary influence of nonresponse was to attenuate the effect of the shared environment and to inflate estimates of nonshared environmental and additive genetic effects. At high levels of nonresponse, a spurious nonadditive genetic effect (suggesting genetic dominance) was also found. Study nonresponse was shown to have the potential to bias the findings of behavioral-genetic research. Design and analysis methods that can be used to alleviate this potentially important biasing effect in behavioral-genetic studies are discussed in light of these findings.
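A minimal simulation in the spirit of the study is sketched below: twin pairs are generated under a simple ACE model, families are dropped with a probability that depends on their shared-environment score, and Falconer-style estimates are compared before and after nonresponse. All parameter values, the logistic response model, and the sample sizes are my assumptions, not the article's.

```python
# Illustrative sketch: how shared-environment-related nonresponse can bias
# Falconer estimates (a2 = 2(rMZ - rDZ), c2 = 2rDZ - rMZ, e2 = 1 - rMZ).
import numpy as np

rng = np.random.default_rng(0)

def simulate_pairs(n, a2, c2, mz=True):
    """Return n twin pairs of phenotypes under a simple ACE model, plus C scores."""
    e2 = 1.0 - a2 - c2
    c = rng.normal(size=n)                       # shared environment, one per family
    a = rng.normal(size=(n, 2))
    if mz:
        a[:, 1] = a[:, 0]                        # MZ twins share all additive genes
    else:
        common = rng.normal(size=n)
        a = np.sqrt(0.5) * common[:, None] + np.sqrt(0.5) * a  # DZ share half
    e = rng.normal(size=(n, 2))
    y = np.sqrt(a2) * a + np.sqrt(c2) * c[:, None] + np.sqrt(e2) * e
    return y, c

def falconer(r_mz, r_dz):
    return 2 * (r_mz - r_dz), 2 * r_dz - r_mz, 1 - r_mz   # a2, c2, e2

def corr(pairs):
    return np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]

n = 20_000
mz, c_mz = simulate_pairs(n, a2=0.4, c2=0.3, mz=True)
dz, c_dz = simulate_pairs(n, a2=0.4, c2=0.3, mz=False)
print("full sample (a2, c2, e2):", falconer(corr(mz), corr(dz)))

# Nonresponse: families with low shared-environment scores respond less often.
keep_mz = rng.random(n) < 1 / (1 + np.exp(-2 * c_mz))
keep_dz = rng.random(n) < 1 / (1 + np.exp(-2 * c_dz))
print("after nonresponse (a2, c2, e2):", falconer(corr(mz[keep_mz]), corr(dz[keep_dz])))
```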

16.
17.
18.
In a previous paper (Bartholomay, 1971), a general mathematical model of the medical diagnostic process was described. The present paper amounts to a realization of that process in terms of conventional 12-lead electrocardiographic diagnosis as enunciated by Dr. Harold D. Levine (1966) in the course of a collaborative study by Dr. Levine and the present author at the Peter Bent Brigham Hospital of the Harvard Medical School between 1963 and 1966. The cognitive component of that model is described in detail here. The model has been programmed onto a computer system consisting of an analog-to-digital converter and a general-purpose digital computer, and amounts to a simulation of Dr. Levine's electrocardiographic analysis procedure.

19.
20.
Dr. Robert Smith?, CMAJ, 2009, 181(12): E297-E300
An outbreak of zombification wreaked havoc recently in Canada and the rest of the world. Mathematical models were created to establish the speed of zombie infection and evaluate potential scenarios for intervention, mainly because mathematicians don't have anything better to do with their time. We review the development of these models and their effect on the undead.

In August 2009, a new disease emerged in North America and quickly made its way around the world.1 Media reports suggest the outbreak began in Ottawa2 but rapidly spread across Canada3,4 and was transported thereafter to the United States5 and the United Kingdom.6,7

The infection resulted in a new species of human, classified as nonmortuus contagio, but known in the popular press as "zombie", from the Congolese nzambi, meaning "spirit of a dead person." As the name implies, the outbreak resulted in a resurgence of the previously deceased. Clinical signs included discoloration around the eyes, open wounds and rotting flesh, with organs and bodily functions operating at minimal levels.

Initial studies reported that the zombies did not feel pain, but these findings could not be verified because of the zombification of the researchers in question. When asked for comment, the lead author of one such study said, "Grrrnn, aaarghhh, huuuuuungry!" When questioned in more detail, he replied, "Braaaaiiiinnnnsss!" No further information is available from the interview.

The cause of the virus remains unknown. Anecdotal evidence suggests that zombies can be defeated by guns, the army, eventual starvation or Dire Straits records.

New diseases are generally investigated using experiments on infected people, clinical trials or medical observation. Unfortunately, because of the rapid zombification of scientists, epidemiologists and doctors, society was left with only one group of technocrats who remained uninfected: mathematicians. Fortunately, during the time at which the outbreak occurred, members of this group had not been invited to parties and thus remained entirely uninfected.

A mathematical model for zombie infection8 was quickly designed (Figure 1). As shown by the model, humans could be infected by contact with a zombie, whereas zombies could be created either through the conversion of humans or reanimation of the dead. The model assumed that zombies could be killed in encounters with humans, as often happened when humans ran over zombies in their cars. Initially, such deaths were assumed by authorities to be part of a concerted effort at eradication of zombies, but were later revealed to be simply a result of rush hour. Some drivers were surprised when these recently deceased zombies returned to life and attacked them. Other drivers simply handed the zombies some spare change and waited for the reanimated creatures to clean their windshields.

Figure 1. Schematic diagram of the basic mathematical model (black arrows). Humans (friendly Canadians, in this example) can either die naturally or be converted into zombies, which is not terribly pleasant, but does come with that nifty jacket and tie. Zombies can reanimate the dead or be killed by humans (although it must be said that the latter doesn't bother them too much). Possible interventions include quarantine of the zombies (green arrow), a potential cure (blue arrow) or impulsive attacks (red arrow).

The model showed two equilibria: the disease-free equilibrium (with no zombies) and the doomsday equilibrium (where everyone is a zombie). The application of a linear stability analysis showed that, in the absence of further interventions, the disease-free equilibrium was unstable and the doomsday equilibrium was stable. This finding was not promising.

Simulations based on a city of roughly 500 000 people demonstrated that an entire such city would be replaced by zombies after about four days (Figure 2A). Were this mass replacement of a population to occur in a city such as Ottawa, it may be unlikely anyone would notice. Nevertheless, nearby cities such as Montreal would no longer be in the bagel-supplying business in such a scenario, and that result would be serious.

Figure 2. Projected population dynamics (based on type of intervention) in the context of a zombie outbreak. In the basic model (A), zombies eradicate humans after 4 days, leaving nobody to host daytime variety television shows or stop you from entering nightclubs (i.e., no loss there). (B): Quarantine delays the inevitable. Slightly. (C): A cure allows zombies to live in harmony with humans, which would be more fun for zombies than humans. (D): Only episodes of blind, aggressive, unfeeling violence are effective. And that's just on the part of humans.

Given this model, even a small outbreak would lead to a collapse of society as we know it. Explaining to the mathematicians that this outcome might be a bad thing took time because, initially, they were not able to see the downside. However, they were quickly mobilized after realizing their supply of caffeine and science fiction DVDs would dry up.

Three interventions were proposed. The first was quarantine (Figure 1, green arrow), whereby a small proportion of the zombies would be kept in isolation. But given that the infection was so virulent, even leaving a few zombies in the wild would result in a restart of the outbreak. Including quarantine thus made no difference to the stability of the doomsday equilibrium (Figure 2B). That was a bit of a downer, to be honest.

The second intervention was a theoretical cure that would convert zombies back into humans (Figure 1, blue arrow). Although the mathematicians were reminded that such a cure was entirely theoretical and likely could not be developed within four days, they were quite taken with the idea of proving results based on things that couldn't possibly exist. This response was annoying, because they should have been concentrating on zombies instead. With a cure, humans and zombies could coexist. But unless the cure were 100% effective, humans would survive only in small numbers (Figure 2C), most likely in shopping malls, abandoned farmhouses or the Winchester pub.

Finally, the idea of impulsive attacks was considered (Figure 1, red arrow). This intervention would involve an escalating series of discrete attacks on the zombies, using an advanced mathematical theory called impulsive differential equations. These equations are similar to ordinary differential equations, except that sometimes they jump up onto tables, paint themselves purple and start singing show tunes for no reason whatsoever.

The projected outcome of this intervention was more promising. At regular intervals, humans would mobilize their resources and attack the zombies. Each attack would be carried out with more force than the last one. The humans would keep fighting with increasing intensity until either the zombies were destroyed or the humans were torn limb from limb and their flesh consumed by the ghoulish undead. Still, you've got to laugh, haven't you? If humans could manage these impulsive attacks, the zombies could be destroyed after 10 days (Figure 2D).

The overall model had limitations, of course. The numerical contributions of natural births and deaths had been ignored because of the brief timescale involved and the unlikelihood that mathematicians would be engaged in breeding. Inclusion of population demographics in the model suggests that a limitless supply of new bodies would be available for zombies to infect, resurrect and convert. We must therefore act quickly and decisively to eradicate zombies before they eradicate us.
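The basic model described above is easy to sketch numerically. The following is a minimal, illustrative human-zombie-removed simulation; the flows (infection by contact, reanimation of the removed, destruction of zombies by humans) follow the article's Figure 1, but every parameter value is an assumption of mine, not a number from the original paper.

```python
# Minimal sketch of the basic human-zombie-removed model described above.
# All rate constants are illustrative guesses, not the article's values.
from scipy.integrate import solve_ivp

N = 500_000        # city population, as in the article's simulated scenario
beta = 4.0 / N     # zombification rate per human-zombie encounter, per day
alpha = 2.0 / N    # rate at which humans destroy zombies per encounter, per day
zeta = 0.05        # daily reanimation rate of the removed (destroyed/dead) class

def szr(t, y):
    s, z, r = y
    ds = -beta * s * z                            # humans lost to zombification
    dz = beta * s * z + zeta * r - alpha * s * z  # conversions + reanimation - destroyed
    dr = alpha * s * z - zeta * r                 # destroyed zombies await reanimation
    return [ds, dz, dr]

# Start with one zombie in an otherwise human city and run for ten days.
sol = solve_ivp(szr, (0.0, 10.0), [N - 1, 1.0, 0.0], max_step=0.01)
s_end, z_end, r_end = sol.y[:, -1]
print(f"day 10: humans={s_end:.0f}, zombies={z_end:.0f}, removed={r_end:.0f}")
```

With any parameter set in which zombies out-convert their losses, the trajectory collapses toward the doomsday equilibrium; quarantine, a cure or impulsive attacks could be layered onto this sketch by adding a quarantined class, a zombie-to-human conversion term, or discrete culls of the zombie class at fixed times.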
