Similar Articles (20 results)
1.
Over the last couple of decades, a call has begun to resound in a number of distinct fields of inquiry for a reattachment of form to matter, for an understanding of 'information' as inherently embodied, or, as Jean-Marie Lehn calls it, for a "science of informed matter." We hear this call most clearly in chemistry, in cognitive science, in molecular computation, and in robotics, all fields looking to biological processes to ground a new epistemology. The departure from the values of a more traditional epistemological culture can be seen most clearly in changing representations of biological development. Where for many years biological discourse has accepted a sharp distinction (borrowed directly from classical computer science) between information and matter, software and hardware, data and program, encoding and enactment, a new discourse has now begun to emerge in which these distinctions have little meaning. Perhaps ironically, much of this shift depends on drawing inspiration from just those biological processes that the discourse of disembodied information was intended to describe.

2.
Alcino J. Silva, Journal of Physiology, 2007, 101(4-6): 203-213
Studies of cognitive function include a wide spectrum of disciplines, with very diverse theoretical and practical frameworks. For example, in Behavioral Neuroscience cognitive mechanisms are mostly inferred from loss of function (lesion) experiments while in Cognitive Neuroscience these mechanisms are commonly deduced from brain activation patterns. Although neuroscientists acknowledge the limitations of deriving conclusions using a limited scope of approaches, there are no systematically studied, objective and explicit criteria for what is required to test a given hypothesis of cognitive function. This problem plagues every discipline in science: scientific research lacks objective, systematic studies that validate the principles underlying even its most elemental practices. For example, scientists decide what experiments are best suited to test key ideas in their field, which hypotheses have sufficient supporting evidence and which require further investigation, which studies are important and which are not, based on intuitions derived from experience, implicit principles learned from mentors and colleagues, traditions in their fields, etc. Philosophers have made numerous attempts to articulate and frame the principles that guide research and innovation, but these speculative ideas have remained untested and have had a minimal impact on the work of scientists. Here, I propose the development of methods for systematically and objectively studying and improving the modus operandi of research and development. This effort (the science of scientific research or S2) will benefit all aspects of science, from education of young scientists to research, publishing and funding, since it will provide explicit and systematically tested frameworks for practices in science. To illustrate its goals, I will introduce a hypothesis (the Convergent Four) derived from experimental practices common in molecular and cellular biology. 
This S2 hypothesis proposes that there are at least four fundamentally distinct strategies that scientists can use to test the connection between two phenomena of interest (A and B), and that establishing a compelling connection between A and B requires independently confirmed lines of convergent evidence in each of these four categories. The four categories are negative alteration (decrease the probability of A, p(A), and determine p(B)), positive alteration (increase p(A) and determine p(B)), non-intervention (examine whether A precedes B), and integration (develop ideas about how to get from A to B and integrate those ideas with other available information about A and B). I will discuss both strategies to test this hypothesis and its implications for studies of cognitive function.
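As a toy illustration, the Convergent Four criterion reduces to a set-coverage check over lines of evidence. The encoding below is my own sketch, not Silva's formalism; the category labels and data layout are assumptions:

```python
# the four evidence categories named in the Convergent Four hypothesis
CATEGORIES = {
    "negative_alteration",  # decrease p(A), then determine p(B)
    "positive_alteration",  # increase p(A), then determine p(B)
    "non_intervention",     # examine whether A precedes B
    "integration",          # connect A to B with other available knowledge
}

def convergent_four_satisfied(evidence):
    """evidence: iterable of (category, independently_confirmed) pairs.
    True only when every category holds at least one independently
    confirmed line of evidence."""
    confirmed = {cat for cat, ok in evidence if ok and cat in CATEGORIES}
    return confirmed == CATEGORIES
```

On this reading, three strong categories with a missing fourth still fail the criterion, which is the hypothesis's central claim.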

3.
The transition from bench science to science policy is not always a smooth one, and my journey stretched from the unemployment line to the hallowed halls of the U.S. Capitol. While earning my doctorate in microbiology, I found myself more interested in my political activities than my experiments. Thus, my science policy career aspirations were born from merging my love of science with my interest in policy and politics. After receiving my doctorate, I accepted the Henry Luce Scholarship, which allowed me to live in South Korea for 1 year and delve into the field of science policy research. This introduction to science policy took place at a South Korean think tank, the Science and Technology Policy Institute (STEPI). During that year, I used textbooks, colleagues, and hands-on research projects as my introduction to the social science of science and technology decision-making. However, upon returning to the United States during one of the worst job markets in nearly 80 years, securing a position in science policy proved to be very difficult, and I was unemployed for five months. Ultimately, it took more than a year from the end of the Luce Scholarship to obtain my next science policy position, with the American Society for Microbiology Congressional Fellowship. This fellowship gave me the opportunity to work as the science and public health advisor to U.S. Senator Harry Reid. While there were significant challenges during my transition from the laboratory to science policy, those challenges made me tougher, more appreciative, and more prepared to move from working at the bench to working in the field of science policy.

4.
Proponents of Evidence-based medicine (EBM) do not provide a clear role for basic science in therapeutic decision making. What they do say about basic science is mostly negative. Basic science resides on the lower tiers of EBM’s hierarchy of evidence. Therapeutic decisions, according to proponents of EBM, should be informed by evidence from randomised studies (and systematic reviews of randomised studies) rather than basic science. A framework of models explicates the links between the mechanisms of basic science, experimental inquiry, and observed data. Relying on this framework of models, I show that basic science often plays a role not only in specifying experiments but also in analysing and interpreting the data they provide. Further, and contradicting what is implied by EBM’s hierarchy of evidence, appeals to basic science are often required to apply clinical research to therapeutic questions.

5.
Khrennikov A., Bio Systems, 2006, 84(3): 225-241
We present a contextualist statistical realistic model for quantum-like representations in physics, cognitive science, and psychology, and apply it to describe cognitive experiments that check for quantum-like structures in mental processes. The crucial role is played by interference of probabilities for mental observables. Recently, one such experiment, based on recognition of images, was performed; it confirmed our prediction of quantum-like behavior of the mind. In our approach, "quantumness of mind" has no direct relation to the fact that the brain (like any physical body) is composed of quantum particles. We introduce the term "quantum-like (QL) mind." Cognitive QL behavior is characterized by a nonzero interference coefficient λ, which can be found from statistical data. The model predicts not only trigonometric (cos θ) interference of probabilities but also hyperbolic (cosh θ) interference. The latter has never been observed for physical systems, but we cannot exclude the possibility for cognitive systems. We also propose a model of brain functioning as a QL computer and discuss the difference between quantum and QL computers.
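The interference coefficient λ can be estimated directly from experimental frequencies. The sketch below is a minimal illustration of the perturbed law of total probability for a dichotomous context variable; the specific numbers are invented, not taken from the cited experiment:

```python
import math

def interference_coefficient(p_b, p_a, p_b_given_a):
    """lambda from the perturbed formula of total probability:
    p(b) = sum_a p(a) p(b|a) + 2*lambda*sqrt(prod_a p(a) p(b|a)).
    |lambda| <= 1 -> trigonometric (cos theta) interference;
    |lambda| > 1  -> hyperbolic (cosh theta) interference."""
    classical = sum(pa * pba for pa, pba in zip(p_a, p_b_given_a))
    cross = 2.0 * math.sqrt(math.prod(pa * pba for pa, pba in zip(p_a, p_b_given_a)))
    return (p_b - classical) / cross

# invented frequencies: p(A=a1) = p(A=a2) = 0.5, p(b|a) = 0.4 for both contexts
lam = interference_coefficient(0.6, [0.5, 0.5], [0.4, 0.4])
kind = "trigonometric" if abs(lam) <= 1 else "hyperbolic"
```

A classical (non-interfering) data set gives λ = 0; here the observed p(b) exceeds the classical sum, yielding λ = 0.5, i.e. trigonometric interference.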

6.
This paper re-examines the repeatedly offered hypothesis (Fialkowski 1978, 1986, 1987, 1988) that hominid brain expansion was largely a side effect of evolutionary response to increased heat stress under conditions of primitive hunting, and resulted in a preadaptation to enhanced cognitive abilities. Fialkowski's hypothesis, previously shown to be based on data that are seriously inaccurate, continues to be presented in a manner that precludes testing. Consequently, however interesting these ideas may be, they are beyond the conventional domain of anthropology as a legitimate subdiscipline of modern science.

7.
Limnology, the science of lakes, is a young and relatively self-contained field of study whose existence is owed to several hundred scientists. The International Society of Limnologists has held its meetings since 1922. We used the materials of these meetings to identify the main stages in the development of the science, which included both rapid and relatively calm periods. Based on an analysis of these data, we constructed a model of the development of the science, using the same data for tuning and verification. We suggest that the main regularities in the development of limnology can be extrapolated to other sciences. The main "acting person" in the model is a population of scientists. Each scientist can, with some probability, propose new ideas as well as draw in his work on some portion of the already accumulated knowledge and ideas. The model also takes into consideration how scientific information spreads, along with individual characteristics of the model scientists such as age, experience, and communicability. After the model parameters had been chosen so that the model adequately described the development of limnology, we performed a series of experiments by changing some of the characteristics and obtained rather unexpected results, published in preliminary form in a short paper (Levchenko V. F. and Menshutkin V. V., Int. J. Comp. Anticip. Syst., 2008, vol. 22, p. 63-75) and discussed here in greater detail. We found that the development of a science proceeds irregularly and decelerates sharply when the level of communication among scientists is low and scientific schools are absent, and that a scientist's period of "scientific youth" usually begins only after the age of 40.
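The qualitative role of communication in such a model can be reproduced in a toy agent-based sketch. The update rules and parameter values below are my own illustrative assumptions, not the authors' actual model (which also tracks scientific schools, age, and experience in more detail):

```python
import random

def simulate(n_scientists=100, steps=200, p_new=0.01, communication=0.5, seed=1):
    """Each step, every scientist may propose a new idea (more likely the
    more ideas they already know) and, with probability `communication`,
    pass one known idea to a random colleague. Returns total ideas produced."""
    rng = random.Random(seed)
    known = [set() for _ in range(n_scientists)]
    ideas = 0
    for _ in range(steps):
        for i in range(n_scientists):
            # proposal probability grows with accumulated knowledge
            if rng.random() < p_new * (1 + len(known[i])):
                ideas += 1
                known[i].add(ideas)
            # spread one idea to a random colleague
            if known[i] and rng.random() < communication:
                j = rng.randrange(n_scientists)
                known[j].add(rng.choice(sorted(known[i])))
    return ideas

# compare a well-connected community with an isolated one
ideas_connected = simulate(communication=0.8)
ideas_isolated = simulate(communication=0.0)
```

Under these assumed rules, knowledge accumulates through both proposal and spread, so varying `communication` while holding the seed fixed lets one probe the deceleration effect the authors describe.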

8.
I. D. Bross, Biometrics, 1990, 46(4): 1213-1225
The two steps necessary for the clinical expression of a mutagenic disease, genetic damage and viability, are countervailing forces, and therefore the dose-response curve for mutagens must have a maximum. To illustrate that science is common sense reduced to calculation, a new mathematical derivation of this result and supporting data are given. This example also shows that the term "context-free" is a snare and a delusion. When statistical methods are used in a scientific context where their assumptions are known to fail, and where there is a reasonable presumption of intent to deceive, they are fraudulent. Estimation of low-level mutagenic risks by linear extrapolation from high-dose data is one example of such a method that is widely used by Executive Branch agencies. Other examples are given of fraudulent statistical methods currently used in biomedical research done by or for U.S. government agencies. In the long run, it is argued, the surest way to eradicate such fraud is for biostatisticians to do their own science.
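The claimed maximum follows from multiplying an increasing damage term by a decreasing viability term. A minimal numerical sketch, in which the functional forms and rate constants are illustrative assumptions rather than Bross's actual derivation:

```python
import math

def expression(d, a=1.0, b=0.5):
    """Toy clinical-expression curve: an increasing probability of genetic
    damage, (1 - exp(-a*d)), times a decreasing probability of viability,
    exp(-b*d). The product rises, peaks, then falls with dose d."""
    return (1.0 - math.exp(-a * d)) * math.exp(-b * d)

a, b = 1.0, 0.5
# setting the derivative to zero gives the analytic maximum of this toy form
d_star = math.log(1.0 + a / b) / a
# confirm numerically that the curve is not monotone in dose
grid = [i * 0.01 for i in range(1, 1001)]
d_max = max(grid, key=expression)
```

A linear low-dose extrapolation from high-dose data on such a curve would grossly misstate low-level risk, which is the abstract's point about failed assumptions.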

9.
Guan Ying, Zhou Zhenyu, Acta Anthropologica Sinica, 2022, 41(1): 169-179
Since the birth of archaeology, the interpretation and analysis of abstract data have accompanied the discipline. In Paleolithic archaeology, artifacts are the principal carriers of information about prehistoric material culture, and the scientific interpretation of data extracted from artifacts is a key step in reconstructing ancient human history. The application of data science in Paleolithic archaeology rests on three main elements: mathematical statistics; computer applications; and the basic data, core scientific questions, and theoretical knowledge of Paleolithic archaeology itself. That is, one or more logical approaches are used to perform computer-based statistical analysis of Paleolithic archaeological data, with computer languages enabling rapid computation over very large data sets, helping us to explain and reconstruct prehistoric human society. In current Paleolithic archaeology, researchers are no longer satisfied with basic descriptive statistics of specimens; the demand for scientific processing and systematic interpretation of data is stronger than ever. This demand continually drives the development of the discipline, has deepened our understanding of prehistoric society, and has even opened new research areas, greatly advancing Paleolithic archaeology. This paper introduces in detail the concept and technical workflow of data science and the history and prospects of its application in Paleolithic archaeology, in the hope that a systematic review will familiarize more readers with the relevant research methods and techniques and encourage more archaeologists to apply data science in their own research projects.

10.

Background

Applications in biomedical science and life science produce large data sets using increasingly powerful imaging devices and computer simulations. It is becoming increasingly difficult for scientists to explore and analyze these data using traditional tools. Interactive data processing and visualization tools can support scientists to overcome these limitations.

Results

We show that new data processing tools and visualization systems can be used successfully in biomedical and life science applications. We present an adaptive high-resolution display system suitable for biomedical image data, algorithms for analyzing and visualizing protein surfaces and retinal optical coherence tomography data, and visualization tools for 3D gene expression data.

Conclusion

We demonstrated that interactive processing and visualization methods and systems can support scientists in a variety of biomedical and life science application areas concerned with massive data analysis.

11.
The debate about the statistics of DNA profiling in forensic science casework has been carried out mainly from the perspective generally known as "match/binning." This approach has an initial appeal because of its apparent conceptual simplicity. However, the simplicity is illusory, because it encourages misconceptions that obscure the essential forensic issues. This is exemplified in a recent report of the National Research Council, which places great emphasis on the need for conservative estimation of relative frequencies while missing the point that the power of RFLP technology cannot be realized if the matching stage is inefficient. Our approach to the problem is a one-stage rather than a two-stage process, carried out by means of a single function, the likelihood ratio, which determines the evidential strength. This paper describes experiments carried out to assess the power of the method in forensic science and compares it with match/binning methodology. Tests for gauging the effects of between-probe dependence are included, with the results complementing those of Risch and Devlin.
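The contrast between the two approaches can be made concrete with a toy continuous-measurement model. The normal distributions, parameter names, and values below are illustrative assumptions, not the paper's actual formulation:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def likelihood_ratio(y_crime, y_suspect, sigma_e, pop_sd):
    """One-stage evidential strength for a single continuous fragment-length
    measurement. Numerator: the two measurements share a true value, so their
    difference is pure measurement error. Denominator: the fragments come from
    unrelated individuals drawn independently from the population."""
    diff = y_crime - y_suspect
    num = normal_pdf(diff, 0.0, math.sqrt(2.0) * sigma_e)
    den = normal_pdf(diff, 0.0, math.sqrt(2.0 * (pop_sd**2 + sigma_e**2)))
    return num / den
```

With measurement error far smaller than the population spread, near-identical band weights give a large LR; a match/binning rule would instead threshold the difference and then look up a bin frequency, discarding how close the match actually was.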

12.
13.
I. R. Noble, Plant Ecology, 1987, 69(1-3): 115-121
An area of artificial intelligence known as expert systems (or knowledge-based systems) is being applied in many areas of science, technology and commerce. It is likely that the techniques will have an impact on vegetation science and ecology in general. This paper discusses some of those impacts and concludes that the main effects will be in areas of applied ecology, especially where ecological expertise is needed either quickly (e.g. disaster management) or across a wide range of ecological disciplines (e.g. land management decisions). Expert systems will provide ecologists with valuable tools for managing data and interacting with other fields of expertise. The impact of expert systems on ecological theory will depend on the degree to which deep knowledge (i.e. knowledge based on first principles rather than on more empirical rules) is used in formulating knowledge bases.

14.
Citizen science and community-based monitoring programs are increasing in number and breadth, generating volumes of scientific data. Many programs are ill-equipped to effectively manage these data. We examined the art and science of multi-scale citizen science support, focusing on issues of integration and flexibility that arise for data management when programs span multiple spatial, temporal, and social scales across many domains. Our objectives were to: (1) briefly review existing citizen science approaches and data management needs; (2) propose a framework for multi-scale citizen science support; (3) develop a cyber-infrastructure to support citizen science program needs; and (4) describe lessons learned. We find that approaches differ in scope, scale, and activities and that the proposed framework situates programs while guiding cyber-infrastructure system development. We built a cyber-infrastructure support system for citizen science programs (www.citsci.org) and show that carefully designed systems can be adept enough to support programs at multiple spatial and temporal scales across many domains when built with a flexible architecture. The advantage of a flexible, yet controlled, cyber-infrastructure system lies in the ability of users with different levels of permission to easily customize the features themselves, while adhering to controlled vocabularies necessary for cross-discipline comparisons and meta-analyses. Program evaluation tied to this framework and integrated into cyber-infrastructure support systems will improve our ability to track effectiveness. We compare existing systems and discuss the importance of standards for interoperability and the challenges associated with system maintenance and long-term support. We conclude by offering a vision of the future of citizen science data management and cyber-infrastructure support.

15.
16.
The development of robust science policy depends on use of the best available data, rigorous analysis, and inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze training time distribution by new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores on subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.
Good scientists almost invariably insist on obtaining the best data potentially available and fostering open and direct communication and criticism to address scientific problems. Remarkably, this same approach is only sometimes used in the context of the development of science policy. In my opinion, several factors underlie the reluctance to apply scientific methods rigorously to inform science policy questions. First, obtaining the relevant data can be challenging and time-consuming. Tools relatively unfamiliar to many scientists may be required, and the data collected may have inherent limitations that make their use challenging. Second, reliance on data may require the abandonment of preconceived notions and a willingness to face potentially unwanted political consequences, depending on where the data analysis leads.
One of my first experiences witnessing the application of a rigorous approach to a policy question involved previous American Society for Cell Biology Public Service awardee Tom Pollard when he and I were both at Johns Hopkins School of Medicine. Tom was leading an effort to reorganize the first-year medical school curriculum, trying to move toward an integrated plan and away from an entrenched departmentally based system (DeAngelis, 2000). He insisted that every lecture in the old curriculum be on the table for discussion, requiring frank discussions and defusing one of the most powerful arguments in academia: “But, we've always done it that way.” As the curriculum was being implemented, he recruited a set of a dozen or so students who were tasked with filling out questionnaires immediately after every lecture; this enabled evaluation and refinement of the curriculum and yielded a data set that changed the character of future discussions.
After 13 years as a department director at Johns Hopkins (including a number of years as course director for the Molecules and Cells course in the first-year medical school curriculum), I had the opportunity to become director of the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health (NIH). NIH supports large data systems, as these are essential for NIH staff to perform their work in receiving, reviewing, funding, and monitoring research grants. While these rich data sources were available, the resources for analysis were not as sophisticated as they could have been. This became apparent when we tried to understand how long successful young scientists spent at various early-career stages (in graduate school, doing postdoctoral fellowships, and in faculty positions before funding). This was a relatively simple question to formulate, but it took considerable effort to collect the data because the relevant data were in free-text form. An intrepid staff member took on the challenge and went through three years’ worth of biosketches by hand to find 360 individuals who had received their first R01 awards from NIGMS and then compiled data on the years those individuals had graduated from college, completed graduate school, started their faculty positions, and received their R01 awards.
Analysis of these data revealed that the median time from BS/BA to R01 award was ∼15 years, including a median of 3.6 years between starting a faculty position and receiving the grant. These results were presented to the NIGMS Advisory Council but were not shared more widely, because of the absence of a good medium at the time for reporting such results. I did provide them subsequently through a blog in the context of a discussion of similar issues (DrugMonkey, 2012). To address the communications need, we had developed the NIGMS Feedback Loop, first as an electronic newsletter (NIGMS, 2005) and subsequently as a blog (NIGMS, 2009). This vehicle has been of great utility for bidirectional communication, particularly under unusual circumstances. For example, during the period prior to the implementation of the American Recovery and Reinvestment Act, that is, the “stimulus bill,” I shared our thoughts and solicited input from the community. I subsequently received and answered hundreds of emails that offered reactions and suggestions. Having these admittedly nonscientific survey data in hand was useful in subsequent NIH-wide policy-development discussions.
At this point, staff members at several NIH institutes, including NIGMS, were developing tools for data analysis, including the ability to link results from different data systems. Many of the questions I was most eager to address involved the relationship between scientific productivity and other parameters, including the level of grant support and the results of peer review that led to funding in the first place. With an initial system that was capable of linking NIH-funded investigators to publications, I performed an analysis of the number of publications from 2007 to mid-2010 attributed to NIH funding as a function of the total amount of annual NIH direct-cost support for 2938 NIGMS-funded investigators from fiscal year 2006 (Berg, 2010).
The results revealed that the number of publications did not increase monotonically but rather reached a plateau at an annual funding level near $700,000. This observation received considerable attention (Wadman, 2010) and provided support for a long-standing NIGMS policy of imposing an extra level of oversight for well-funded investigators. It is important to note that, not surprisingly, there was considerable variation in the number of publications at all funding levels and, in my opinion, this observation is as important as the plateau in moving policies away from automatic caps and toward case-by-case analysis by staff armed with the data.
This analysis provoked considerable discussion on the Feedback Loop blog and elsewhere regarding whether the number of publications was an appropriate measure of productivity. With better tools, it was possible to extend such analyses to other measures, including the number of citations, the number of citations relative to other publications, and many other factors. This extended set of metrics was applied to an analysis of the ability of peer-review scores to predict subsequent productivity (Berg, 2012a, b). Three conclusions were supported by this analysis. First, the various metrics were sufficiently correlated with one another that the choice of metric did not affect any major conclusions (although metrics such as number of citations performed slightly better than number of publications). Second, peer-review scores could predict subsequent productivity to some extent (compared with randomly assigned scores), but the level of prediction was modest. Importantly, this provided some of the first direct evidence that peer review is capable of identifying applications that are more likely to be productive.
Finally, the results revealed no noticeable drop-off in productivity, even near the 20th percentile, supporting the view that a substantial amount of productive science is being left unfunded with pay lines below the 20th percentile, let alone the 10th percentile.
In 2011, I moved to the University of Pittsburgh and also became president-elect of the American Society for Biochemistry and Molecular Biology (ASBMB). In my new positions, I have been able to gain a more direct perspective on the current state of the academic biomedical research enterprise. It is exciting to be back in the trenches again. On the other hand, my observations support a conclusion I had drawn while I was at NIH: the biomedical research enterprise is not sustainable in its present form due not only to the level of federal support, but also to the duration of training periods, the number of individuals being trained to support the research effort, the lack of appropriate pathways for individuals interested in careers as bench scientists, challenges in the interactions between the academic and private sectors, and other factors. Working with the Public Affairs Advisory Committee at ASBMB, we have produced a white paper (ASBMB, 2013) that we hope will help initiate conversations about imagining and then moving toward more sustainable models for biomedical research. We can expect to arrive at effective policy changes and initiatives only through data-driven and thorough self-examination and candid discussions between different stakeholders. We look forward to working with leaders and members from other scientific societies as we tackle this crucial set of issues.
Jeremy M. Berg

17.
We live in an increasingly data-driven world, where high-throughput sequencing and mass spectrometry platforms are transforming biology into an information science. This has shifted major challenges in biological research from data generation and processing to interpretation and knowledge translation. However, postsecondary training in bioinformatics, or more generally data science for life scientists, lags behind current demand. In particular, development of accessible, undergraduate data science curricula has the potential to improve research and learning outcomes as well as better prepare students in the life sciences to thrive in public and private sector careers. Here, we describe the Experiential Data science for Undergraduate Cross-Disciplinary Education (EDUCE) initiative, which aims to progressively build data science competency across several years of integrated practice. Through EDUCE, students complete data science modules integrated into required and elective courses augmented with coordinated cocurricular activities. The EDUCE initiative draws on a community of practice consisting of teaching assistants (TAs), postdocs, instructors, and research faculty from multiple disciplines to overcome several reported barriers to data science for life scientists, including instructor capacity, student prior knowledge, and relevance to discipline-specific problems. Preliminary survey results indicate that even a single module improves student self-reported interest and/or experience in bioinformatics and computer science. Thus, EDUCE provides a flexible and extensible active learning framework for integration of data science curriculum into undergraduate courses and programs across the life sciences.

18.
In 1988, David Hull presented an evolutionary account of science. This was a direct analogy to evolutionary accounts of biological adaptation, and part of a generalized view of Darwinian selection accounts that he based upon the Universal Darwinism of Richard Dawkins. Criticisms of this view were made by, among others, Kim Sterelny, which led to it gaining only limited acceptance. Some of these criticisms are, I will argue, no longer valid in the light of developments in the formal modeling of evolution, in particular Sergey Gavrilets’ work on adaptive landscapes. If we can usefully recast the Hullian view of science as being driven by selection in terms of Gavrilets’ and Kauffman’s view that there are “giant components” of high-fitness networks through any realistic adaptive landscape, we may now find it useful to ask what the adaptive pressures on science are, and to extend the metaphor into a full analogy. This is in effect to reconcile the Fisherianism of the Dawkins–Hull approach to selection and replicators with a Wrightean drift account of social constructionist views of science, preserving, it is to be hoped, the valuable aspects of both.

19.

Aim

To improve the accuracy of inferences on habitat associations and distribution patterns of rare species by combining machine‐learning, spatial filtering and resampling to address class imbalance and spatial bias of large volumes of citizen science data.

Innovation

Modelling rare species’ distributions is a pressing challenge for conservation and applied research. Often, a large number of surveys are required before enough detections occur to model distributions of rare species accurately, resulting in a data set with a high proportion of non‐detections (i.e. class imbalance). Citizen science data can provide a cost‐effective source of surveys but likely suffer from class imbalance. Citizen science data also suffer from spatial bias, likely from preferential sampling. To correct for class imbalance and spatial bias, we used spatial filtering to under‐sample the majority class (non‐detection) while maintaining all of the limited information from the minority class (detection). We investigated the use of spatial under‐sampling with randomForest models and compared it to common approaches used for imbalanced data, the synthetic minority oversampling technique (SMOTE), weighted random forest and balanced random forest models. Model accuracy was assessed using kappa, Brier score and AUC. We demonstrate the method by evaluating habitat associations and seasonal distribution patterns using citizen science data for a rare species, the tricoloured blackbird (Agelaius tricolor).

Main Conclusions

Spatial under‐sampling increased the accuracy of each model and outperformed the approach typically used to direct under‐sampling in the SMOTE algorithm. Our approach is the first to characterize winter distribution and movement of tricoloured blackbirds. Our results show that tricoloured blackbirds are positively associated with grassland, pasture and wetland habitats, and negatively associated with high elevations or evergreen forests during both winter and breeding seasons. The seasonal differences in distribution indicate that individuals move to the coast during the winter, as suggested by historical accounts.

20.
The search for a model of balanced science has not so far been based on a detailed analysis of the nature and effects of the science curriculum. The present paper attempts to begin to fill the gap, from the perspective of biology, using data from the Assessment of Performance Unit (APU). It is shown that, in England, the uptake of the subject as an option outstrips that of the other sciences, with pupils quoting its intrinsic interest as their motivation. Biology serves to attract to science pupils of all abilities, and girls in particular. The other sciences are not so successful in this sphere. However, performance data show that the study of biology is not as effective in enhancing science performance as are the physical sciences.
