Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
2.
To close the gap between research and development, a number of funding organizations focus their efforts on large translational research projects rather than small research teams and individual scientists. Yet, as Paul van Helden argues, if the support for small, investigator-driven research decreases, there will soon be a dearth of novel discoveries for large research groups to explore.

What is medical science all about? Surely it is about the value chain, which begins with basic research and ends—if there is an end—with a useful product. There is a widespread perception that scientists do a lot of basic research but neglect the application of their findings. To remedy this, a number of organizations and philanthropists have become dedicated advocates of applied or translational research and preferentially fund large consortia rather than small teams or individual scientists. Yet, this is only the latest round in the never-ending debate about how to optimize research. The question remains whether large teams, small groups or individuals are better at making ‘discoveries’.

To some extent, a scientific breakthrough depends on the nature of the research. Einstein worked largely alone, and the development of E = mc² is a case in point. He put together insights from many researchers to produce his breakthrough, which has subsequently required teams of scientists to apply. Similarly, drug development may require only an individual or a small team to make the initial discovery, but it needs many individuals to develop a candidate compound and large teams to conduct clinical trials. On the other hand, Darwin could be seen to have worked the other way around: he had an initial ‘team’ of ‘field assistants’—including the crew of HMS Beagle—but he produced his seminal work essentially alone.

Consortium funding is of course attractive for researchers because of the time-scale and the amount of money involved. Clinical trials or large research units may get financial support for 10 years or even longer, in the range of millions of dollars. However, organizations that provide funding on such a large scale require extensive and detailed planning from researchers. The work is subject to frequent reporting and review and often carries a large administrative burden. It has come to the point where this oversight threatens academic freedom: principal investigators who try to conduct experiments outside the original plan, even when these make sense, risk losing their funding. Under such conditions, administrative officials are often not there to serve, but to govern.

There is a widespread perception that small teams are more productive in terms of published papers. But large-scale science often generates outcomes and product value that a small team cannot. We therefore need both. The problem is the low level of funding for individual scientists and small teams and the resulting cut-throat competition for limited resources. This draws too many researchers to large consortia, which, if successful, can become comfort zones or, if they crash and burn, can cause serious damage.

Other factors should also inform our deliberations about the size of research teams and consortia. Which is the better environment in which to train the next generation of scientists? By definition, research should question scientific dogmas and foster innovative thinking. Will a large consortium be able to achieve or even tolerate this?

Perhaps these trends can be ascribed to generational differences. Neil Howe described people born between 1943 and 1980 as obsessed with values, individually strong and individualistic, whereas those born after 1981 place more trust in strong institutions that are seen to be moving society somewhere. If this is true, we can predict that the consortium approach is here to stay, at least for some time. Perhaps the emergence of large-scale science is driven by strong—maybe dictatorial—older individuals and arranged to accommodate the younger generation. If so, it is a win–win situation: we know the value of networking and interacting with others, which comes naturally in the ‘online age’.

A downside of large groups is the loss of individual career development. The number of authors per paper has increased steadily. Who does the work and who gets the honour? There is often little recognition for the contribution of most people to publications that arise from large consortia, and it is difficult for peer reviewers to assess individual contributions. We must take care to measure what we value, not value what we measure.

While it is clear that both large and small groups are essential, good management and balance are required. An alarming trend, in my opinion, is the inclination to fund new sites for clinical trials to the detriment of existing facilities. This seems neither reasonable nor the best use of scarce resources.

In the long-term interest of science, we need to consider how the balance between major breakthroughs and incremental science correlates with the size of the research group. This is hard to measure, but we must not forget that basic research produces the first leads that are then developed into products. If the funding for basic science decreases, there will soon be a dearth of topics for ‘big science’.

Is there a way out of this dilemma? I would like to suggest that organizations currently funding large consortia allow investigators to set aside a percentage of the money to support basic, curiosity-driven research within these consortia. If they do not rethink their funding strategy, these organizations may find with time that there are few novel discoveries for large groups to explore.

3.
4.
5.
A key hypothesis in population ecology is that synchronous and intermittent seed production, known as mast seeding, is driven by the alternating allocation of carbohydrates and mineral nutrients between growth and reproduction in different years, i.e. ‘resource switching’. Such behaviour may ultimately generate bimodal distributions of long-term flower and seed production, and evidence of these patterns has been taken to support the resource-switching hypothesis. Here, we show how a widely used statistical test of bimodality applied by many studies in different ecological contexts may fail to reject the null hypothesis that focal probability distributions are unimodal. Using data from five tussock grass species in South Island, New Zealand, we find clear evidence of bimodality only when flowering patterns are analyzed with probabilistic mixture models. Mixture models provide a theory-oriented framework for testing hypotheses about mast-seeding patterns, enabling the different responses underlying medium- and high- versus non- and low-flowering years to be modelled more realistically by associating these with distinct probability distributions. Coupling theoretical expectations with more rigorous statistical approaches will empower ecologists to reject null hypotheses more often.
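A minimal sketch of the mixture-model comparison the abstract advocates, using simulated flowering data (not the tussock-grass measurements) and scikit-learn's GaussianMixture, with the number of components chosen by BIC:

    # Fit one- and two-component Gaussian mixtures to annual flowering
    # intensities and compare them by BIC; the data are simulated stand-ins.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    low = rng.normal(loc=0.5, scale=0.3, size=70)    # non- and low-flowering years
    mast = rng.normal(loc=3.0, scale=0.5, size=30)   # medium- and high-flowering years
    x = np.concatenate([low, mast]).reshape(-1, 1)

    for k in (1, 2):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        print(f"{k} component(s): BIC = {gm.bic(x):.1f}")
    # A markedly lower BIC for k = 2 supports bimodality, i.e. distinct
    # low-flowering and mast-year distributions.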

6.
The standard approach in a biological two-player game is to assume both players choose their actions independently of one another, having no information about their opponent's action (simultaneous game). However, this approach is not realistic in some circumstances. In many cases, one player chooses his action first and then the second player chooses her action with information about his action (Stackelberg game). We compare these two games, which can be mathematically analyzed into two types, depending on the direction of the best response function (BRF) at the evolutionarily stable strategy in the simultaneous game (ESSsim). We subcategorize each type of game into two cases, depending on the change in payoff to one player, when both players are at the ESSsim, and the other player increases his action. Our results show that in cases where the BRF is decreasing at the ESSsim, the first player in the Stackelberg game receives the highest payoff, followed by both players in the simultaneous game, followed by the second player in the Stackelberg game. In these cases, it is best to be the first Stackelberg player. In cases where the BRF is increasing at the ESSsim, both Stackelberg players receive a higher payoff than players in a simultaneous game. In these cases, it is better for both players to play a Stackelberg game rather than a simultaneous game. However, in some cases the first Stackelberg player receives a higher payoff than the second Stackelberg player, and in some cases the opposite is true.
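A minimal numerical sketch of the comparison (the quadratic payoffs are a toy example, not the paper's model) for the case where the best response function is decreasing at the equilibrium; it reproduces the predicted ordering, with the first Stackelberg player earning the most and the second the least:

    # Simultaneous versus Stackelberg play with illustrative Cournot-style payoffs.
    from scipy.optimize import minimize_scalar

    def u1(a, b):  # payoff to player 1 (hypothetical quadratic form)
        return a * (10 - a - b)

    def u2(a, b):  # payoff to player 2
        return b * (10 - a - b)

    def br1(b):    # player 1's best response to b
        return minimize_scalar(lambda a: -u1(a, b), bounds=(0, 10), method="bounded").x

    def br2(a):    # player 2's best response to a
        return minimize_scalar(lambda b: -u2(a, b), bounds=(0, 10), method="bounded").x

    # Simultaneous game: iterate the best responses to their fixed point.
    a = b = 1.0
    for _ in range(100):
        a, b = br1(b), br2(a)
    print(f"simultaneous: u1 = {u1(a, b):.2f}, u2 = {u2(a, b):.2f}")  # ~11.11 each

    # Stackelberg game: the first player commits, anticipating the response br2(a).
    lead = minimize_scalar(lambda a: -u1(a, br2(a)), bounds=(0, 10), method="bounded").x
    print(f"stackelberg:  u1 = {u1(lead, br2(lead)):.2f}, "
          f"u2 = {u2(lead, br2(lead)):.2f}")                          # 12.50, 6.25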

7.
Summary  In the present investigation, the ‘Chillum’ jar assembly was found to provide more favourable environmental conditions than the usual Leonard jar assembly for rhizobia to nodulate leguminous plants, particularly under summer conditions. When thirty pigeon pea rhizobia isolates were tested for their nodulation efficiency in both Leonard jars and ‘Chillum’ jars, there was no nodulation by any of the isolates in Leonard jars, whereas all isolates nodulated well under ‘Chillum’ jar conditions. This was probably due to the lower temperature in the ‘Chillum’ jar, caused by rapid evaporation from the outer surface of the assembly. The maximum temperature recorded in the ‘Chillum’ jar was 34°C, whereas in Leonard jars it was 46.5°C.

8.
Is it better to combine predictions?
We have compared the accuracy of the individual protein secondary structure prediction methods PHD, DSC, NNSSP and Predator against the accuracy obtained by combining the predictions of the methods. A range of ways of combining predictions was tested: voting, biased voting, linear discrimination, neural networks and decision trees. The combined methods that involve 'learning' (the non-voting methods) were trained using a set of 496 non-homologous domains; this dataset was biased, as some of the secondary structure prediction methods had used it for training. We used two independent test sets to compare predictions: the first consisted of 17 non-homologous domains from CASP3 (Third Community Wide Experiment on the Critical Assessment of Techniques for Protein Structure Prediction); the second consisted of 405 domains that were selected in the same way as the training set and were non-homologous to each other and to the training set. On both test datasets the most accurate individual method was NNSSP, then PHD, then DSC, with Predator the least accurate; however, it was not possible to show a conclusive, significant difference between the individual methods. Comparing the accuracy of the single methods with that obtained by combining predictions, we found that it was better to use a combination of predictions: on both test datasets an improvement in accuracy of approximately 3% was obtained by combining predictions. In most cases the combined methods were statistically significantly better (at P = 0.05 on the CASP3 test set and P = 0.01 on the EBI test set). On the CASP3 test dataset there was no significant difference in accuracy between any of the combined methods of prediction; on the EBI test dataset, linear discrimination and neural networks significantly outperformed the voting techniques. We conclude that it is better to combine predictions.
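As a concrete illustration, here is a minimal sketch of the simplest combiner tested, per-residue majority voting over three-state (H/E/C) predictions; the toy prediction strings are placeholders, not the study's data:

    # Combine aligned per-residue predictions from several methods by majority vote.
    from collections import Counter

    predictions = {
        "PHD":      "HHHHCCEEECC",
        "DSC":      "HHHCCCEEECC",
        "NNSSP":    "HHHHCCEECCC",
        "Predator": "HHCCCCEEECC",
    }

    def combine_by_voting(preds):
        """Majority state at each position; ties go to the first state encountered."""
        return "".join(Counter(col).most_common(1)[0][0] for col in zip(*preds.values()))

    print(combine_by_voting(predictions))   # -> 'HHHHCCEEECC'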

9.
A scientific ontology is a formal representation of knowledge within a domain, typically including central concepts, their properties, and relations. With the rise of computers and high-throughput data collection, ontologies have become essential to data mining and sharing across communities in the biomedical sciences. Powerful approaches exist for testing the internal consistency of an ontology, but not for assessing the fidelity of its domain representation. We introduce a family of metrics that describe the breadth and depth with which an ontology represents its knowledge domain. We then test these metrics using (1) four of the most common medical ontologies with respect to a corpus of medical documents and (2) seven of the most popular English thesauri with respect to three corpora that sample language from medicine, news, and novels. Here we show that our approach captures the quality of ontological representation and guides efforts to narrow the breach between ontology and collective discourse within a domain. Our results also demonstrate key features of medical ontologies, English thesauri, and discourse from different domains. Medical ontologies have a small intersection, as do English thesauri. Moreover, dialects characteristic of distinct domains vary strikingly as many of the same words are used quite differently in medicine, news, and novels. As ontologies are intended to mirror the state of knowledge, our methods to tighten the fit between ontology and domain will increase their relevance for new areas of biomedical science and improve the accuracy and power of inferences computed across them.
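As a rough sketch of the idea (a deliberate simplification, not the paper's actual metrics), breadth can be approximated as the share of a corpus's vocabulary that matches an ontology's concept labels:

    # Toy breadth measure: fraction of distinct corpus words covered by the ontology.
    def breadth(ontology_terms, corpus_tokens):
        vocab = set(corpus_tokens)
        return len(vocab & set(ontology_terms)) / len(vocab)

    ontology = {"fever", "cough", "infection", "aspirin"}   # hypothetical concept labels
    corpus = "patient with fever and persistent cough given aspirin".split()
    print(f"breadth = {breadth(ontology, corpus):.2f}")     # 0.38 (3 of 8 distinct words)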

10.
Molecular dynamics simulations of biological membranes have come of age. Simulations of pure lipid bilayers are extending our understanding of both optimal simulation procedures and the detailed structural dynamics of lipids in these systems. Simulation methods established using simple bilayer-embedded peptides are being extended to a wide range of membrane proteins and membrane protein models, and are beginning to reveal some of the complexities of membrane protein structural dynamics and their relationship to biological function.

11.
Tadin D, Blake R. Neuron 2005, 45(3): 325-327
Older people can discriminate visual motion of large, high-contrast stimuli better than young adults can. This surprising result, reported by Betts et al. in this issue of Neuron, suggests weaker center-surround antagonism in senescence, perhaps attributable to an age-related reduction in GABA-mediated inhibition.

12.
Water consumption affects milk production in dairy cows. In a previous study, we found that dairy cows preferred to drink from larger rather than smaller troughs and that intake was higher when water was offered in the larger, preferred troughs. In this study, we investigated some of the trough characteristics that may underlie this preference. The volume of water consumed, the time spent drinking and the number of sips taken by cows (n = 18) were compared when water was offered in two troughs differing in surface area (1.13 m² or 0.28 m²; experiment 1), height (30 cm or 60 cm; experiment 2) or depth (30 cm or 60 cm; experiment 3). In each experiment, each cow was tested individually for six consecutive days, with the troughs randomly assigned to each side. In experiment 1, cows took more sips (P < 0.01), spent more time drinking (P < 0.01) and drank more water (P < 0.01) from the trough with the larger surface area. In experiment 2, cows took more sips from the higher than from the lower trough (P < 0.02) and tended to consume more water (P = 0.08) and to spend more time drinking (P = 0.08) from the higher trough. Trough depth did not influence any of the variables recorded.

13.
Depletion of major blood proteins is one of the most promising approaches for accessing low-abundance biomarkers by proteomics. The immunocapture columns often used for this purpose exist in different formats depending on the number of major proteins removed. In this article, we compared the relative merits of depleting either one (albumin), six (albumin, IgG, IgA, transferrin, α1-antitrypsin, and haptoglobin), twelve (the previous six plus apo A-I and A-II, orosomucoid, α2-macroglobulin, fibrinogen, and IgM) or twenty blood proteins (the previous twelve plus IgD, ceruloplasmin, apo B, complement C1q, C3, C4, plasminogen, and prealbumin). Such a study raises interesting issues related to the reproducibility, practicability and specificity of the immunocapture, and to the impact of removing not only the selected molecules but also associated peptides and proteins. Depleted sera were analysed using different proteomic approaches, including two-dimensional electrophoresis and SELDI-TOF. Altogether, our results clearly confirmed the value of depleting major blood proteins for the proteomic detection of low-abundance components. However, we observed that increasing the number of depleted proteins from twelve to twenty had a limited beneficial impact and might increase the drawbacks of removing associated peptides and proteins. This conclusion is, however, tied to the technologies we used; we believe the immunocapture must be adapted to the analytical method employed and to the ratio of wanted to unwanted proteins removed.

14.
Statins: lower lipids and better bones?

15.
16.
Ma H. Current Biology 2000, 10(10): R365-R368
A new study has found that, in Arabidopsis, the paternal copies of many genes are delayed in expression during early seed development. The distribution of the genes and the nature of their products suggest that this delayed expression of paternal alleles may be a global phenomenon.

17.
Is more better? Polyploidy and parasite resistance
Ploidy-level variation is common and can drastically affect organismal fitness. We focus on the potential consequences of this variation for parasite resistance. First, we elucidate connections between ploidy variation and key factors determining resistance, including allelic diversity, gene expression and physiological condition. We then argue that systems featuring both natural and artificially manipulated ploidy variation should be used to evaluate whether ploidy level influences host-parasite interactions.

18.
19.
20.