Scientific approaches to science policy
Authors:Jeremy M. Berg
Institution:Department of Computational and Systems Biology, University of Pittsburgh School of Medicine, Pittsburgh, PA 15213
Abstract:The development of robust science policy depends on the use of the best available data, rigorous analysis, and the inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze the distribution of training times for new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores for subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.

Good scientists almost invariably insist on obtaining the best data available and on fostering open, direct communication and criticism to address scientific problems. Remarkably, this same approach is only sometimes used in the development of science policy. In my opinion, several factors underlie the reluctance to apply scientific methods rigorously to science policy questions. First, obtaining the relevant data can be challenging and time-consuming. Tools relatively unfamiliar to many scientists may be required, and the data collected may have inherent limitations that make their use challenging. Second, reliance on data may require the abandonment of preconceived notions and a willingness to face potentially unwanted political consequences, depending on where the data analysis leads.

One of my first experiences witnessing the application of a rigorous approach to a policy question involved previous American Society for Cell Biology Public Service awardee Tom Pollard, when he and I were both at Johns Hopkins School of Medicine. Tom was leading an effort to reorganize the first-year medical school curriculum, trying to move toward an integrated plan and away from an entrenched departmentally based system (DeAngelis, 2000). He insisted that every lecture in the old curriculum be on the table for discussion, requiring frank discussions and defusing one of the most powerful arguments in academia: “But, we’ve always done it that way.” As the curriculum was being implemented, he recruited a dozen or so students who were tasked with filling out questionnaires immediately after every lecture; this enabled evaluation and refinement of the curriculum and yielded a data set that changed the character of future discussions.

After 13 years as a department director at Johns Hopkins (including a number of years as course director for the Molecules and Cells course in the first-year medical school curriculum), I had the opportunity to become director of the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health (NIH). NIH supports large data systems, as these are essential for NIH staff to perform their work in receiving, reviewing, funding, and monitoring research grants. While these rich data sources were available, the resources for analysis were not as sophisticated as they could have been. This became apparent when we tried to understand how long successful young scientists spent at various early-career stages (in graduate school, doing postdoctoral fellowships, and in faculty positions before funding). The question was simple to formulate, but collecting the data took considerable effort because the relevant information was in free-text form.

An intrepid staff member took on the challenge, went through three years’ worth of biosketches by hand to find 360 individuals who had received their first R01 awards from NIGMS, and compiled the years in which those individuals had graduated from college, completed graduate school, started their faculty positions, and received their R01 awards. Analysis of these data revealed that the median time from BS/BA to R01 award was ∼15 years, including a median of 3.6 years between starting a faculty position and receiving the grant.
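Once the milestone years are in structured form, the arithmetic itself is simple. As a minimal sketch only (the column names and the three example rows below are hypothetical illustrations, not the actual NIGMS data set), the interval medians could be computed along these lines in Python:

```python
import pandas as pd

# Hypothetical stand-in for the hand-compiled table: one row per new
# NIGMS R01 recipient, one column per career-milestone year.
grantees = pd.DataFrame({
    "ba_year":      [1988, 1990, 1991],
    "phd_year":     [1994, 1997, 1998],
    "faculty_year": [2000, 2002, 2003],
    "r01_year":     [2003, 2006, 2007],
})

# Elapsed time (in years) for the two intervals discussed in the text.
grantees["bs_to_r01"] = grantees["r01_year"] - grantees["ba_year"]
grantees["faculty_to_r01"] = grantees["r01_year"] - grantees["faculty_year"]

# Median durations across all grantees; the reported values were
# ~15 years (BS/BA to R01) and 3.6 years (faculty start to R01).
print(grantees[["bs_to_r01", "faculty_to_r01"]].median())
```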
These results were presented to the NIGMS Advisory Council but were not shared more widely, because at the time there was no good medium for reporting them. I did provide them subsequently through a blog in the context of a discussion of similar issues (DrugMonkey, 2012).

To address the communications need, we had developed the NIGMS Feedback Loop, first as an electronic newsletter (NIGMS, 2005) and subsequently as a blog (NIGMS, 2009). This vehicle has been of great utility for bidirectional communication, particularly under unusual circumstances. For example, during the period before the implementation of the American Recovery and Reinvestment Act, that is, the “stimulus bill,” I shared our thoughts and solicited input from the community. I subsequently received and answered hundreds of emails offering reactions and suggestions. Having these admittedly nonscientific survey data in hand was useful in subsequent NIH-wide policy-development discussions.

At this point, staff members at several NIH institutes, including NIGMS, were developing tools for data analysis, including the ability to link results from different data systems. Many of the questions I was most eager to address involved the relationship between scientific productivity and other parameters, including the level of grant support and the results of the peer review that led to funding in the first place. With an initial system capable of linking NIH-funded investigators to publications, I analyzed the number of publications from 2007 to mid-2010 attributed to NIH funding as a function of total annual NIH direct-cost support for 2938 NIGMS-funded investigators from fiscal year 2006 (Berg, 2010). The results revealed that the number of publications did not increase monotonically but instead reached a plateau at an annual funding level near $700,000. This observation received considerable attention (Wadman, 2010) and provided support for a long-standing NIGMS policy of imposing an extra level of oversight for well-funded investigators. It is important to note that, not surprisingly, there was considerable variation in the number of publications at all funding levels; in my opinion, this observation is as important as the plateau in moving policies away from automatic caps and toward case-by-case analysis by staff armed with the data.

This analysis provoked considerable discussion on the Feedback Loop blog and elsewhere about whether the number of publications is an appropriate measure of productivity. With better tools, it became possible to extend such analyses to other measures, including the number of citations, the number of citations relative to other publications, and many other factors. This extended set of metrics was applied to an analysis of the ability of peer-review scores to predict subsequent productivity (Berg, 2012a, b).
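The core of such an analysis is to ask whether actual percentile scores track later productivity better than chance. As an illustrative sketch only (the data, the score range, and the toy score-productivity relationship below are invented and are not the published analysis), the comparison against randomly assigned scores might look like this:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical data: peer-review percentile scores for funded grants
# (lower = better) and a later productivity metric such as citations.
n = 500
scores = rng.uniform(1, 20, n)
productivity = rng.poisson(lam=40 - scores)  # toy relationship

# Observed rank correlation between review score and productivity.
observed, _ = spearmanr(scores, productivity)

# Baseline: correlations when scores are randomly reassigned to grants;
# this is what "no predictive value" would look like.
null = [spearmanr(rng.permutation(scores), productivity)[0]
        for _ in range(1000)]

print(f"observed rho = {observed:.2f}")
print(f"null rho 95% range = {np.percentile(null, [2.5, 97.5])}")
```

In this toy setup the observed correlation is negative because lower percentile scores are better; the permutation baseline makes “compared with randomly assigned scores” concrete.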
Three conclusions were supported by the peer-review analysis. First, the various metrics were sufficiently correlated with one another that the choice of metric did not affect any major conclusions (although metrics such as the number of citations performed slightly better than the number of publications). Second, peer-review scores could predict subsequent productivity to some extent (compared with randomly assigned scores), but the level of prediction was modest. Importantly, this provided some of the first direct evidence that peer review is capable of identifying applications that are more likely to be productive. Finally, the results revealed no noticeable drop-off in productivity even near the 20th percentile, supporting the view that a substantial amount of productive science is left unfunded with pay lines below the 20th percentile, let alone the 10th percentile.

In 2011, I moved to the University of Pittsburgh and also became president-elect of the American Society for Biochemistry and Molecular Biology (ASBMB). In my new positions, I have been able to gain a more direct perspective on the current state of the academic biomedical research enterprise. It is exciting to be back in the trenches again. On the other hand, my observations support a conclusion I had drawn while at NIH: the biomedical research enterprise is not sustainable in its present form, owing not only to the level of federal support, but also to the duration of training periods, the number of individuals being trained to support the research effort, the lack of appropriate career pathways for individuals interested in careers as bench scientists, challenges in the interactions between the academic and private sectors, and other factors. Working with the Public Affairs Advisory Committee at ASBMB, we have produced a white paper (ASBMB, 2013) that we hope will help initiate conversations about imagining, and then moving toward, more sustainable models for biomedical research. We can expect to arrive at effective policy changes and initiatives only through data-driven and thorough self-examination and candid discussions among the different stakeholders. We look forward to working with leaders and members of other scientific societies as we tackle this crucial set of issues.