Similar Documents
A total of 20 similar documents were retrieved.
1.
There is a paucity of data in the literature concerning the validation of the grant application peer review process, which is used to help direct billions of dollars in research funds. Ultimately, this validation will hinge upon empirical data relating the output of funded projects to the predictions implicit in the overall scientific merit scores from the peer review of submitted applications. In an effort to address this need, the American Institute of Biological Sciences (AIBS) conducted a retrospective analysis of peer review data of 2,063 applications submitted to a particular research program and the bibliometric output of the resultant 227 funded projects over an 8-year period. Peer review scores associated with applications were found to be moderately correlated with the total time-adjusted citation output of funded projects, although a high degree of variability existed in the data. Analysis over time revealed that as average annual scores of all applications (both funded and unfunded) submitted to this program improved with time, the average annual citation output per application increased. Citation impact did not correlate with the amount of funds awarded per application or with the total annual programmatic budget. However, the number of funded applications per year was found to correlate well with total annual citation impact, suggesting that improving funding success rates by reducing the size of awards may be an efficient strategy to optimize the scientific impact of research program portfolios. This strategy must be weighed against the need for a balanced research portfolio and the inherent high costs of some areas of research. The relationship observed between peer review scores and bibliometric output lays the groundwork for establishing a model system for future prospective testing of the validity of peer review formats and procedures.
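A minimal sketch of the kind of score-versus-citations correlation analysis described above, using synthetic data (the variable names, score scale, and noise model are illustrative assumptions, not the AIBS data or code):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical funded-project records: an overall scientific merit score per application
# (lower = better, NIH-style) and a citation count adjusted for time since publication.
rng = np.random.default_rng(0)
scores = rng.uniform(1.0, 5.0, size=227)                      # peer review merit scores
citations_per_year = np.maximum(
    0, 30 - 5 * scores + rng.normal(0, 8, size=227))          # noisy, moderately related

rho, p = spearmanr(scores, citations_per_year)
print(f"Spearman rho = {rho:.2f} (p = {p:.3g})")
```

A rank correlation is used here because citation counts are heavily skewed; a moderate rho with wide scatter is consistent with the variability the abstract reports.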

2.
Teleconferencing as a setting for scientific peer review is an attractive option for funding agencies, given the substantial environmental and cost savings. Despite this, there is a paucity of published data validating teleconference-based peer review compared to the face-to-face process. Our aim was to conduct a retrospective analysis of scientific peer review data to investigate whether review setting has an effect on review process and outcome measures. We analyzed reviewer scoring data from a research program that had recently modified the review setting from face-to-face to a teleconference format with minimal changes to the overall review procedures. This analysis included approximately 1600 applications over a 4-year period: two years of face-to-face panel meetings compared to two years of teleconference meetings. The average overall scientific merit scores, score distribution, standard deviations and reviewer inter-rater reliability statistics were measured, as well as reviewer demographics and length of time discussing applications. The data indicate that few differences are evident between face-to-face and teleconference settings with regard to average overall scientific merit score, scoring distribution, standard deviation, reviewer demographics or inter-rater reliability. However, some difference was found in the discussion time. These findings suggest that most review outcome measures are unaffected by review setting, which would support the trend of using teleconference reviews rather than face-to-face meetings. However, further studies are needed to assess any correlations among discussion time, application funding and the productivity of funded research projects.
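A minimal sketch, on synthetic scores, of the per-setting summary statistics named above (mean, standard deviation, and a simple one-way intraclass correlation as the inter-rater reliability measure); the matrix shapes, score scale, and noise model are assumptions, not the program's data:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for a (targets x raters) score matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    ms_between = k * ((ratings.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((ratings - ratings.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

rng = np.random.default_rng(1)

def simulate(n_apps=400, n_reviewers=3):
    quality = rng.normal(3.0, 0.6, size=(n_apps, 1))                 # latent application quality
    return quality + rng.normal(0, 0.5, size=(n_apps, n_reviewers))  # reviewer-level noise

face_to_face = simulate()
teleconference = simulate()

for name, scores in [("face-to-face", face_to_face), ("teleconference", teleconference)]:
    print(f"{name:>14}: mean={scores.mean():.2f}  sd={scores.std():.2f}  "
          f"ICC={icc_oneway(scores):.2f}")
```

Similar means, spreads, and ICC values across the two settings would correspond to the "few differences" finding reported in the abstract.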

3.
4.

Background

The peer review system has traditionally been challenged because of its many limitations, especially for allocating funding. Bibliometric indicators may well serve as a complement.

Objective

We analyze the relationship between peers’ ratings and bibliometric indicators for Spanish researchers in the 2007 National R&D Plan, across 23 research fields.

Methods and Materials

We analyzed peers’ ratings for 2333 applications. We also gathered the principal investigators’ research output and impact and studied the differences between accepted and rejected applications. We used the Web of Science database and focused on the 2002-2006 period. First, we analyzed the distribution of granted and rejected proposals against a given set of bibliometric indicators to test whether there are significant differences. Then, we applied a multiple logistic regression analysis to determine whether bibliometric indicators can, by themselves, explain the award of grant proposals (a sketch of this regression step follows the abstract).

Results

63.4% of the applications were funded. Bibliometric indicators for accepted proposals showed better prior performance than those for rejected proposals; however, the correlation between peer review and bibliometric indicators is highly heterogeneous across most areas. The logistic regression analysis showed that the main bibliometric indicators explaining the award of research proposals are, in most cases, output (number of published articles) and the number of papers published in journals in the first quartile of the Journal Citation Reports ranking.

Discussion

Bibliometric indicators predict the award of grant proposals at least as well as peer ratings. Social Sciences and Education are the only areas where no relation was found, although this may be due to the limitations of the Web of Science’s coverage. These findings encourage the use of bibliometric indicators as a complement to peer review in most of the analyzed areas.
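A minimal sketch of the multiple logistic regression step described in the Methods, on simulated application records (indicator names, sample sizes, and effect sizes are assumptions for illustration; the study used the actual 2333 applications and Web of Science indicators):

```python
import numpy as np
import statsmodels.api as sm

# Simulated PI-level indicators: total output (articles, 2002-2006) and number of
# articles in first-quartile (Q1) JCR journals, plus a funded / not-funded outcome.
rng = np.random.default_rng(2)
n = 2333
output = rng.poisson(10, n)                       # total articles per PI
q1_papers = rng.binomial(output, 0.4)             # articles in Q1 journals
p_fund = 1 / (1 + np.exp(-(-0.5 + 0.04 * output + 0.10 * q1_papers)))
funded = rng.binomial(1, p_fund)

X = sm.add_constant(np.column_stack([output, q1_papers]).astype(float))
fit = sm.Logit(funded, X).fit(disp=0)
print(fit.summary(xname=["const", "output", "q1_papers"]))
```

Positive, significant coefficients on the output and Q1 terms would correspond to the paper's finding that these two indicators best explain the award decision.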

5.
6.
The first call for applications to the NHS research and development programme on the interface between primary and secondary care was advertised in February 1994. A total of 674 outline proposals were submitted and 54 (8%) secured funding. Projects have been commissioned in 16 of the 21 priority areas and around £6m has been committed. Analysis shows that multidisciplinary applications are more likely to be funded and that the odds for a successful application are on average nearly doubled for each discipline represented, up to five. A survey of applicants and peer reviewers found satisfaction with much of the commissioning process, but peer review and feedback were subject to criticism, particularly by unsuccessful applicants. The programme shows that it is possible to commission a large number of projects in an innovative area of research and development and has identified refinements that will further increase the efficiency and acceptability of the process.
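Read as a compounding multiplier (an illustrative reading of the "nearly doubled for each discipline represented, up to five" figure, not a reported estimate), a five-discipline application has roughly

\[
\frac{\text{odds}(5\ \text{disciplines})}{\text{odds}(1\ \text{discipline})} \approx 2^{\,5-1} = 16
\]

times the odds of success of an otherwise comparable single-discipline application.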

7.
Crystallization of proteins is a nontrivial task, and despite the substantial efforts in robotic automation, crystallization screening is still largely based on trial-and-error sampling of a limited subset of suitable reagents and experimental parameters. Funding of high throughput crystallography pilot projects through the NIH Protein Structure Initiative provides the opportunity to collect crystallization data in a comprehensive and statistically valid form. Data mining and machine learning algorithms thus have the potential to deliver predictive models for protein crystallization. However, the underlying complex physical reality of crystallization, combined with a generally ill-defined and sparsely populated sampling space, and inconsistent scoring and annotation make the development of predictive models non-trivial. We discuss the conceptual problems, and review strengths and limitations of current approaches towards crystallization prediction, emphasizing the importance of comprehensive and valid sampling protocols. In view of limited overlap in techniques and sampling parameters between the publicly funded high throughput crystallography initiatives, exchange of information and standardization should be encouraged, aiming to effectively integrate data mining and machine learning efforts into a comprehensive predictive framework for protein crystallization. Similar experimental design and knowledge discovery strategies should be applied to valid analysis and prediction of protein expression, solubilization, and purification, as well as crystal handling and cryo-protection.
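As an illustration of the kind of data-mining approach the review discusses, a small sketch trains a classifier on simulated screening conditions (the feature set, ranges, and outcome model are assumptions; real screens are sparser and far less consistently annotated, which is the review's central caveat):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Simulated screening trials: pH, precipitant molarity, protein concentration,
# incubation temperature, and a binary "crystals observed" outcome.
rng = np.random.default_rng(3)
n = 5000
X = np.column_stack([
    rng.uniform(4.0, 9.0, n),       # pH
    rng.uniform(0.5, 3.0, n),       # precipitant (M)
    rng.uniform(2.0, 40.0, n),      # protein (mg/mL)
    rng.choice([4.0, 20.0], n),     # temperature (deg C)
])
hit_prob = 0.03 + 0.12 * (np.abs(X[:, 0] - 6.5) < 1.0)   # hits cluster near pH 6.5
y = (rng.random(n) < hit_prob).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```

With outcomes this imbalanced, accuracy alone overstates performance; inconsistent scoring and sparse sampling would degrade any such model, as the review emphasizes.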

8.
The National Institutes of Health (NIH) Policy for Data Management and Sharing (DMS Policy) recognizes the NIH’s role as a key steward of United States biomedical research and information and seeks to enhance that stewardship through systematic recommendations for the preservation and sharing of research data generated by funded projects. The policy is effective as of January 2023. The recommendations include a requirement for the submission of a Data Management and Sharing Plan (DMSP) with funding applications, and while no strict template was provided, the NIH has released supplemental draft guidance on elements to consider when developing a plan. This article provides 10 key recommendations for creating a DMSP that is both maximally compliant and effective.

9.
10.
The recent US law (H.R.2764) affecting NIH policy and the recent unanimous vote by the Arts and Science faculty of Harvard University in favour of a mandatory deposit of researchers' publications in a suitable repository have brought the Open Access movement into public light. After reviewing the historical background of Open Access, its evolution and extension in the United States, Great Britain, France and Canada are examined. Policies aiming at strengthening Open Access to scientific research are viewed as the direct consequence of treating scientific publishing as an integral part of the research cycle. It should, therefore, be wrapped into the financing of research. As the greater part of research is funded by public money, it appears legitimate to make its results as widely available as is possible. Open Access journals and repositories with strong deposit mandates form the backbone of the strategies to achieve the objective of Open Access. Despite the claims of some publishers, Open Access does not weaken or threaten the peer review process, and it does not conflict with copyright laws.

11.
Agencies that fund scientific research must choose: is it more effective to give large grants to a few elite researchers, or small grants to many researchers? Large grants would be more effective only if scientific impact increases as an accelerating function of grant size. Here, we examine the scientific impact of individual university-based researchers in three disciplines funded by the Natural Sciences and Engineering Research Council of Canada (NSERC). We considered four indices of scientific impact: numbers of articles published, numbers of citations to those articles, the most cited article, and the number of highly cited articles, each measured over a four-year period. We related these to the amount of NSERC funding received. Impact is positively, but only weakly, related to funding. Researchers who received additional funds from a second federal granting council, the Canadian Institutes for Health Research, were not more productive than those who received only NSERC funding. Impact was generally a decelerating function of funding. Impact per dollar was therefore lower for large grant-holders. This is inconsistent with the hypothesis that larger grants lead to larger discoveries. Further, the impact of researchers who received increases in funding did not predictably increase. We conclude that scientific impact (as reflected by publications) is only weakly limited by funding. We suggest that funding strategies that target diversity, rather than “excellence”, are likely to prove to be more productive.
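A minimal sketch of the decelerating-returns pattern described above, on simulated grants (the grant-size range, exponent, and noise are assumptions; the study used NSERC records and publication/citation indices):

```python
import numpy as np

# Simulated annual grant sizes (CAD) and 4-year citation counts generated from a
# power law with exponent < 1, i.e., a decelerating function of funding.
rng = np.random.default_rng(4)
funding = rng.uniform(2e4, 4e5, 500)
citations = 5 * funding ** 0.4 * rng.lognormal(0, 0.5, 500)

# Fit log(citations) = log(a) + b*log(funding); b < 1 means impact per dollar
# falls as grants get larger.
b, log_a = np.polyfit(np.log(funding), np.log(citations), 1)
small = funding < 1e5
large = funding > 3e5
print(f"estimated exponent b = {b:.2f}")
print(f"citations per dollar, small grants: {(citations[small] / funding[small]).mean():.4f}")
print(f"citations per dollar, large grants: {(citations[large] / funding[large]).mean():.4f}")
```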

12.
The development of robust science policy depends on the use of the best available data, rigorous analysis, and inclusion of a wide range of input. While director of the National Institute of General Medical Sciences (NIGMS), I took advantage of available data and emerging tools to analyze the distribution of training time for new NIGMS grantees, the distribution of the number of publications as a function of total annual National Institutes of Health support per investigator, and the predictive value of peer-review scores on subsequent scientific productivity. Rigorous data analysis should be used to develop new reforms and initiatives that will help build a more sustainable American biomedical research enterprise.

Good scientists almost invariably insist on obtaining the best data potentially available and fostering open and direct communication and criticism to address scientific problems. Remarkably, this same approach is only sometimes used in the context of the development of science policy. In my opinion, several factors underlie the reluctance to apply scientific methods rigorously to inform science policy questions. First, obtaining the relevant data can be challenging and time-consuming. Tools relatively unfamiliar to many scientists may be required, and the data collected may have inherent limitations that make their use challenging. Second, reliance on data may require the abandonment of preconceived notions and a willingness to face potentially unwanted political consequences, depending on where the data analysis leads.

One of my first experiences witnessing the application of a rigorous approach to a policy question involved previous American Society for Cell Biology Public Service awardee Tom Pollard when he and I were both at Johns Hopkins School of Medicine. Tom was leading an effort to reorganize the first-year medical school curriculum, trying to move toward an integrated plan and away from an entrenched departmentally based system (DeAngelis, 2000). He insisted that every lecture in the old curriculum be on the table for discussion, requiring frank discussions and defusing one of the most powerful arguments in academia: “But, we've always done it that way.” As the curriculum was being implemented, he recruited a set of a dozen or so students who were tasked with filling out questionnaires immediately after every lecture; this enabled evaluation and refinement of the curriculum and yielded a data set that changed the character of future discussions.

After 13 years as a department director at Johns Hopkins (including a number of years as course director for the Molecules and Cells course in the first-year medical school curriculum), I had the opportunity to become director of the National Institute of General Medical Sciences (NIGMS) at the National Institutes of Health (NIH). NIH supports large data systems, as these are essential for NIH staff to perform their work in receiving, reviewing, funding, and monitoring research grants. While these rich data sources were available, the resources for analysis were not as sophisticated as they could have been. This became apparent when we tried to understand how long successful young scientists spent at various early-career stages (in graduate school, doing postdoctoral fellowships, and in faculty positions before funding). This was a relatively simple question to formulate, but it took considerable effort to collect the data because the relevant data were in free-text form.
An intrepid staff member took on the challenge, and went through three years’ worth of biosketches by hand to find 360 individuals who had received their first R01 awards from NIGMS and then compiled data on the years those individuals had graduated from college, completed graduate school, started their faculty positions, and received their R01 awards. Analysis of these data revealed that the median time from BS/BA to R01 award was ∼15 years, including a median of 3.6 years between starting a faculty position and receiving the grant. These results were presented to the NIGMS Advisory Council but were not shared more widely, because of the absence of a good medium at the time for reporting such results. I did provide them subsequently through a blog in the context of a discussion of similar issues (DrugMonkey, 2012). To address the communications need, we had developed the NIGMS Feedback Loop, first as an electronic newsletter (NIGMS, 2005) and subsequently as a blog (NIGMS, 2009). This vehicle has been of great utility for bidirectional communication, particularly under unusual circumstances. For example, during the period prior to the implementation of the American Recovery and Reinvestment Act, that is, the “stimulus bill,” I shared our thoughts and solicited input from the community. I subsequently received and answered hundreds of emails that offered reactions and suggestions. Having these admittedly nonscientific survey data in hand was useful in subsequent NIH-wide policy-development discussions.

At this point, staff members at several NIH institutes, including NIGMS, were developing tools for data analysis, including the ability to link results from different data systems. Many of the questions I was most eager to address involved the relationship between scientific productivity and other parameters, including the level of grant support and the results of peer review that led to funding in the first place. With an initial system that was capable of linking NIH-funded investigators to publications, I performed an analysis of the number of publications from 2007 to mid-2010 attributed to NIH funding as a function of the total amount of annual NIH direct-cost support for 2938 NIGMS-funded investigators from fiscal year 2006 (Berg, 2010). The results revealed that the number of publications did not increase monotonically but rather reached a plateau at an annual funding level of about $700,000. This observation received considerable attention (Wadman, 2010) and provided support for a long-standing NIGMS policy of imposing an extra level of oversight for well-funded investigators. It is important to note that, not surprisingly, there was considerable variation in the number of publications at all funding levels and, in my opinion, this observation is as important as the plateau in moving policies away from automatic caps and toward case-by-case analysis by staff armed with the data.

This analysis provoked considerable discussion on the Feedback Loop blog and elsewhere regarding whether the number of publications was an appropriate measure of productivity. With better tools, it was possible to extend such analyses to other measures, including the number of citations, the number of citations relative to other publications, and many other factors. This extended set of metrics was applied to an analysis of the ability of peer-review scores to predict subsequent productivity (Berg, 2012a, b). Three conclusions were supported by this analysis.
First, the various metrics were sufficiently correlated with one another that the choice of metric did not affect any major conclusions (although metrics such as number of citations performed slightly better than number of publications). Second, peer-review scores could predict subsequent productivity to some extent (compared with randomly assigned scores), but the level of prediction was modest. Importantly, this provided some of the first direct evidence that peer review is capable of identifying applications that are more likely to be productive. Finally, the results revealed no noticeable drop-off in productivity, even near the 20th percentile, supporting the view that a substantial amount of productive science is being left unfunded with pay lines below the 20th percentile, let alone the 10th percentile.

In 2011, I moved to the University of Pittsburgh and also became president-elect of the American Society for Biochemistry and Molecular Biology (ASBMB). In my new positions, I have been able to gain a more direct perspective on the current state of the academic biomedical research enterprise. It is exciting to be back in the trenches again. On the other hand, my observations support a conclusion I had drawn while I was at NIH: the biomedical research enterprise is not sustainable in its present form due not only to the level of federal support, but also to the duration of training periods, the number of individuals being trained to support the research effort, the lack of appropriate pathways for individuals interested in careers as bench scientists, challenges in the interactions between the academic and private sectors, and other factors. Working with the Public Affairs Advisory Committee at ASBMB, we have produced a white paper (ASBMB, 2013) that we hope will help initiate conversations about imagining and then moving toward more sustainable models for biomedical research. We can expect to arrive at effective policy changes and initiatives only through data-driven and thorough self-examination and candid discussions between different stakeholders. We look forward to working with leaders and members from other scientific societies as we tackle this crucial set of issues.

Jeremy M. Berg
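A small sketch of the funding-versus-publications binning behind the plateau result described above, on simulated investigators (the saturating functional form and bin widths are assumptions; the original analysis linked actual NIGMS award and publication records):

```python
import numpy as np

# Simulated investigators: annual NIH direct costs and publication counts that rise
# and then flatten at higher funding levels.
rng = np.random.default_rng(5)
funding = rng.uniform(5e4, 1.5e6, 2938)
pubs = rng.poisson(12 * (1 - np.exp(-funding / 4e5)))

bins = np.arange(0, 1.61e6, 2e5)
which = np.digitize(funding, bins)
for i in range(1, len(bins)):
    sel = which == i
    if sel.any():
        lo, hi = bins[i - 1] / 1e3, bins[i] / 1e3
        print(f"${lo:4.0f}K-${hi:4.0f}K: median pubs = {np.median(pubs[sel]):.0f}, n = {sel.sum()}")
```

The per-bin spread matters as much as the medians, which is the point made above about case-by-case review rather than automatic caps.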

13.
In its submission to the government in advance of the white paper on science policy in the United Kingdom, the Medical Research Council (MRC) commends its own approach to managing directly funded research. But a series of semi-structured interviews with the directors of some of the MRC's units suggests a gap between the MRC's model of managed research and the reality. Although such units are theoretically managed from MRC head office (and units are charged an overhead for this), in practice each unit runs its own affairs. Between major reviews, average contact time with the head office contact person is seven hours a year. The first paper argues that a purchaser-provider split would recognise the benefits of decentralisation and allow units to bid for research funds from several sources, the successful ones guaranteeing their survival through a rolling series of research programmes. The second paper criticises the MRC's cumbersome peer review system. Reliance on outside experts atrophies the scientific skills of head office staff and builds delays into decision making. A purchaser-provider model would allow the head office scientific staff to act like commercial research and development managers, commissioning research, and using the outcome, rather than peer review, as a criterion for continued funding.

14.
Grzywacz NM. IEEE Pulse. 2012;3(4):22-26
The Department of Biomedical Engineering (BME) of the University of Southern California (BME@USC) has a longstanding tradition of advancing biomedicine through the development and application of novel engineering ideas. More than 80 primary and affiliated faculty members conduct cutting-edge research in a wide variety of areas, such as neuroengineering, biosystems and biosignal analysis, medical devices (including biomicroelectromechanical systems [bioMEMS] and bionanotechnology), biomechanics, bioimaging, and imaging informatics. Currently, the department hosts six internationally recognized research centers: the Biomimetic MicroElectronic Systems Engineering Research Center (funded by the National Science Foundation), the Biomedical Simulations Resource [funded by the National Institutes of Health (NIH)], the Medical Ultrasonic Transducer Center (funded by NIH), the Center for Neural Engineering, the Center for Vision Science and Technology (funded by an NIH Bioengineering Research Partnership Grant), and the Center for Genomic and Phenomic Studies in Autism (funded by NIH). BME@USC ranks in the top tier of all U.S. BME departments in terms of research funding per faculty.

15.
We describe an assessment of the collective impact of 35 grants that the Howard Hughes Medical Institute (HHMI) made to biomedical research institutions in 1999 to support precollege science education outreach programs. Data collected from funded institutions were compared with data from a control group of institutions that had advanced to the last stage of review but had not been funded. The survey instrument and the results reveal outcomes and impacts that HHMI considers relevant for these programs. The following attributes are considered: ability to secure additional, non-HHMI funding; institution buy-in as measured by gains in dedicated space and staff; enhancement of the program director's career; number and adoption of educational products developed; number of related publications and awards; percentage of programs for which teachers received course credit; increase in science content knowledge; and increase in student motivation to study science.

16.

Background

The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review.

Methods

We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications.

Results

We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had a higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials.

Interpretation

Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.

Selective reporting of results from randomized trials can occur either at the level of end points within published studies (outcome reporting bias) [1] or at the level of entire trials that are selectively published (study publication bias) [2]. Outcome reporting bias has previously been demonstrated in a broad cohort of published trials approved by a regional ethics committee [1]. The Canadian Institutes of Health Research (CIHR), the primary federal funding agency (known before 2000 as the Medical Research Council of Canada, MRC), recognized the need to address this issue and conducted an internal review process in 2002 to evaluate the reporting of results from its funded trials. The primary objectives were to determine (a) the prevalence of incomplete outcome reporting in journal publications of randomized trials; (b) the degree of association between adequate outcome reporting and statistical significance; and (c) the consistency between primary outcomes specified in trial protocols and those specified in subsequent journal publications.
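The trial-stratified odds ratio reported in the Results can be pooled with the Mantel-Haenszel estimator; a small sketch with made-up per-trial counts (the counts are illustrative, not the study's data):

```python
def mantel_haenszel_or(tables):
    """Pooled odds ratio across 2x2 tables [[a, b], [c, d]], one table per trial."""
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    return num / den

# Rows: statistically significant / nonsignificant efficacy outcomes.
# Columns: fully reported / incompletely reported.
tables = [
    [[8, 2], [5, 6]],
    [[10, 3], [4, 7]],
    [[6, 1], [3, 5]],
]
print(f"Mantel-Haenszel odds ratio = {mantel_haenszel_or(tables):.2f}")
```

A pooled odds ratio well above 1, as in the reported 2.7, indicates that statistically significant outcomes are more likely to be fully reported.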

17.
The most highly cited ecologists and environmental scientists provide both a benchmark and unique opportunity to consider the importance of research funding. Here, we use citation data and self‐reported funding levels to assess the relative importance of various factors in shaping productivity and potential impact. The elite were senior Americans, well funded, with large labs. In contrast to Canadian NSERC grant holders (not in the top 1%), citations per paper did not increase with higher levels of funding within the ecological elite. We propose that this is good news for several reasons. It suggests that the publications generated by the top ecologists and environmental scientists are subject to limitations, that higher volume of publications is always important, and that increased funding to ecologists in general can shift our discipline to wider research networks. As expected, collaboration was identified as an important factor for the elite, and hopefully, this serves as a positive incentive to funding agencies since it increases the visibility of their research.

18.
Use of the Open Source Software (OSS) development model has been crucial in a number of recent technological areas, including operating systems, applications and bioinformatics. The rationale for why OSS is often a better development model than proprietary development and some of the results of this model in the field of Gene Expression are reviewed. The paper concludes with a discussion of why funding agencies should endorse OSS and require funded software projects to be released Open Source.

19.
The Working Group on Peer Review of the Advisory Committee to the Director of NIH has recommended that at least 4 reviewers should be used to assess each grant application. A sample size analysis of the number of reviewers needed to evaluate grant applications reveals that a substantially larger number of evaluators are required to provide the level of precision that is currently mandated. NIH should adjust their peer review system to account for the number of reviewers needed to provide adequate precision in their evaluations.
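A back-of-the-envelope version of the precision argument: treating reviewer scores for an application as independent with spread sigma, the standard error of the mean score falls only as the square root of the panel size, so tightening precision quickly demands many more reviewers (sigma and the target below are assumed values for illustration):

```python
import math

sigma = 1.0                     # assumed SD of individual reviewer scores (score units)
for n in (2, 4, 8, 16):
    print(f"{n:>2} reviewers: SE of mean score = {sigma / math.sqrt(n):.2f}")

target_se = 0.25                # assumed precision target
n_needed = math.ceil((sigma / target_se) ** 2)
print(f"reviewers needed for SE <= {target_se}: {n_needed}")
```

Halving the standard error requires quadrupling the number of reviewers, which is why a panel of four can fall well short of a mandated precision level.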

20.

Background

Studying de-implementation—defined herein as reducing or stopping the use of a health service or practice provided to patients by healthcare practitioners and systems—has gained traction in recent years. De-implementing ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices is important for mitigating patient harm, improving processes of care, and reducing healthcare costs. A better understanding of the state-of-the-science is needed to guide future objectives and funding initiatives. To this end, we characterized de-implementation research grants funded by the United States (US) National Institutes of Health (NIH) and the Agency for Healthcare Research and Quality (AHRQ).

Methods

We used systematic methods to search, identify, and describe de-implementation research grants funded across all 27 NIH Institutes and Centers (ICs) and AHRQ from fiscal year 2000 through 2017. Eleven key terms and three funding opportunity announcements were used to search for research grants in the NIH Query, View and Report (QVR) system. Two coders identified eligible grants based on inclusion/exclusion criteria. A codebook was developed, pilot tested, and revised before coding the full grant applications of the final sample.

Results

A total of 1277 grants were identified through the QVR system; 542 remained after removing duplicates. After the multistep eligibility assessment and review process, 20 grant applications were coded. Many grants were funded by NIH (n = 15), with fewer funded by AHRQ, and a majority were funded between fiscal years 2015 and 2016 (n = 11). Grant proposals focused on de-implementing a range of health services and practices (e.g., medications, therapies, screening tests) across various health areas (e.g., cancer, cardiovascular disease) and delivery settings (e.g., hospitals, nursing homes, schools). Grants proposed to use a variety of study designs and research methods (e.g., experimental, observational, mixed methods) to accomplish study aims.

Conclusions

Based on the systematic portfolio analysis of NIH- and AHRQ-funded research grants over the past 17 years, relatively few grants have focused on studying the de-implementation of ineffective, unproven, harmful, overused, inappropriate, and/or low-value health services and practices provided to patients by healthcare practitioners and systems. Strategies for raising the profile and growing the field of research on de-implementation are discussed.
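A toy sketch of the keyword-search and de-duplication step described in the Methods (the grant records and key terms here are invented; the actual portfolio analysis queried the NIH QVR system and applied manual eligibility coding):

```python
# Hypothetical key terms and grant records, for illustration only.
KEY_TERMS = ["de-implement", "de-adopt", "disinvest", "low-value", "overuse"]

grants = [
    {"id": "R01-001", "title": "De-implementing low-value cancer screening"},
    {"id": "R21-002", "title": "Reducing antibiotic overuse in nursing homes"},
    {"id": "R01-001", "title": "De-implementing low-value cancer screening"},  # duplicate
]

hits = [g for g in grants if any(term in g["title"].lower() for term in KEY_TERMS)]
unique = {g["id"]: g for g in hits}.values()       # de-duplicate on grant ID
print(f"{len(hits)} keyword hits, {len(unique)} after de-duplication")
```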
