Similar Articles

20 similar articles retrieved.
1.

Background

High-risk prescribing of non-steroidal anti-inflammatory drugs (NSAIDs) and antiplatelet agents accounts for a significant proportion of hospital admissions due to preventable adverse drug events. The recently completed PINCER trial has demonstrated that a one-off pharmacist-led information technology (IT)-based intervention can significantly reduce high-risk prescribing in primary care, but there is evidence that effects decrease over time and employing additional pharmacists to facilitate change may not be sustainable.

Methods/design

We will conduct a cluster randomised controlled trial with a stepped wedge design in 40 volunteer general practices in two Scottish health boards. Eligible practices are those that use the INPS Vision clinical IT system and have agreed to have relevant medication-related data automatically extracted from their electronic medical records. All practices (clusters) that agree to take part will receive the data-driven quality improvement in primary care (DQIP) intervention, but will be randomised to one of 10 start dates. The DQIP intervention has three components: a web-based informatics tool that provides weekly updated feedback of targeted prescribing at practice level, prompts the review of individual patients affected, and summarises each patient's relevant risk factors and prescribing; an outreach visit providing education on targeted prescribing and training in the use of the informatics tool; and a fixed payment of 350 GBP (560 USD; 403 EUR) up front plus a small payment of 15 GBP (24 USD; 17 EUR) for each patient reviewed during the 12 months of the intervention. We hypothesise that the DQIP intervention will reduce a composite of nine previously validated measures of high-risk prescribing. Due to the nature of the intervention, it is not possible to blind practices, the core research team, or the data analyst. However, outcome assessment is entirely objective and automated. There will additionally be a process and economic evaluation alongside the main trial.
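
For readers unfamiliar with stepped wedge allocation, the sketch below illustrates the kind of schedule described above: every cluster (practice) eventually receives the intervention, and only the start date is randomised. The practice identifiers, the equal allocation of four practices per step, and the fixed seed are assumptions made for this example; the trial's actual randomisation procedure is specified in its protocol and may differ.

```python
import random

def stepped_wedge_schedule(practices, n_steps=10, seed=2024):
    """Randomly assign clusters (practices) to intervention start dates.

    Every cluster eventually receives the intervention; only the step
    (start date) is randomised, as in a stepped wedge design.
    """
    rng = random.Random(seed)
    shuffled = list(practices)
    rng.shuffle(shuffled)
    per_step = len(shuffled) // n_steps  # e.g. 40 practices / 10 steps = 4
    schedule = {}
    for step in range(n_steps):
        for practice in shuffled[step * per_step:(step + 1) * per_step]:
            schedule[practice] = step  # step index = randomised start date
    return schedule

# Hypothetical practice identifiers, for illustration only.
practices = [f"practice_{i:02d}" for i in range(1, 41)]
print(stepped_wedge_schedule(practices))
```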

Discussion

The DQIP intervention is an example of a potentially sustainable safety improvement intervention that builds on the existing National Health Service IT infrastructure to facilitate systematic management of high-risk prescribing by existing practice staff. Although the focus in this trial is on non-steroidal anti-inflammatory drugs and antiplatelet agents, we anticipate that the tested intervention would be generalisable to other types of prescribing if shown to be effective.

Trial registration

ClinicalTrials.gov identifier: NCT01425502

2.

Background

Patient-reported outcomes (PROs), such as health-related quality of life (HRQL) are increasingly used to evaluate treatment effectiveness in clinical trials, are valued by patients, and may inform important decisions in the clinical setting. It is of concern, therefore, that preliminary evidence, gained from group discussions at UK-wide Medical Research Council (MRC) quality of life training days, suggests there are inconsistent standards of HRQL data collection in trials and appropriate training and education is often lacking. Our objective was to investigate these reports, to determine if they represented isolated experiences, or were indicative of a potentially wider problem.

Methods and Findings

We undertook a qualitative study, conducting 26 semi-structured interviews with research nurses, data managers, trial coordinators and research facilitators involved in the collection and entry of HRQL data in clinical trials, across one primary care NHS trust, two secondary care NHS trusts and two clinical trials units in the UK. We used conventional content analysis to analyze and interpret our data. Our study participants reported (1) inconsistent standards in HRQL measurement, both between and within trials, which appeared to risk the introduction of bias; (2) difficulties in dealing with HRQL data that raised concern for the well-being of the trial participant, which in some instances led to the delivery of non-protocol-driven co-interventions; (3) a frequent lack of HRQL protocol content and appropriate training and education of trial staff; and (4) that HRQL data collection could be associated with emotional and/or ethical burden.

Conclusions

Our findings suggest there are inconsistencies in the standards of HRQL data collection in some trials, resulting from a general lack of HRQL-specific protocol content, training and education. These inconsistencies could lead to biased HRQL trial results. Future research should aim to develop HRQL guidelines and training programmes that support researchers in carrying out high-quality data collection.

3.

Background

The Nuremberg code defines the general ethical framework of medical research with participant consent as its cornerstone. In cluster randomized trials (CRT), obtaining participant informed consent raises logistic and methodologic concerns. First, with randomization of large clusters such as geographical areas, obtaining individual informed consent may be impossible. Second, participants in randomized clusters cannot avoid certain interventions, which implies that participant informed consent refers only to data collection, not administration of an intervention. Third, complete participant information may be a source of selection bias, which then raises methodological concerns. We assessed whether participant informed consent was required in such trials, which type of consent was required, and whether the trial was at risk of selection bias because of the very nature of participant information.

Methods and Findings

We systematically reviewed all reports of CRTs published in MEDLINE in 2008 and surveyed corresponding authors regarding the nature of the informed consent and the process of participant inclusion. We identified 173 reports and obtained an answer from 113 authors (65.3%). In total, 23.7% of the reports lacked information on ethics committee approval or participant consent; 53.1% of authors declared that participant consent was for data collection only, and 58.5% that the group allocation was not specified for participants. The process of recruitment (the chronology of participant recruitment with regard to cluster randomization) was rarely reported, and we estimated that only 56.6% of the trials were free of potential selection bias.

Conclusions

For CRTs, the reporting of ethics committee approval and participant informed consent is less than optimal. Reports should describe whether participants consented for administration of an intervention and/or data collection. Finally, the process of participant recruitment should be fully described (namely, whether participants were informed of the allocation group before being recruited) for a better appraisal of the risk of selection bias.

4.
5.
6.
7.
8.

Background

A number of single case reports have suggested that the context within which intervention studies take place may challenge the assumptions that underpin RCTs. However, the diverse ways in which context may challenge the central tenets of the RCT, and the degree to which this information is known to researchers and/or subsequently reported, have received much less attention. In this paper we explore these issues by focussing on seven RCTs of interventions ranging in type and degree of complexity, and across diverse contexts.

Methods

This in-depth multiple case study, using interviews, focus groups and documentary analysis, was conducted in two phases. In phase one, an RCT of a nurse-led intervention provided a single exploratory case and informed the design, sampling and data collection of the main study. Phase two consisted of a multiple explanatory case study covering a range of trials of different types of complex intervention. A total of 84 data sources across the seven trials were accessed.

Results

We present consistent empirical evidence across all trials to indicate that four key elements of context (personal, organisational, trial and problem context) are crucial to understanding how a complex intervention works and to enabling assessments of both internal validity and likely generalizability to other settings. The ways in which context challenged trial operation were often complex, idiosyncratic and subtle, and often fell outside current trial reporting formats. However, information on such issues appeared to be available via first-hand "insider accounts" of each trial, suggesting that improved reporting on the role of context is possible.

Conclusions

Sufficient detail about context needs to be understood and reported in RCTs of complex interventions in order for the transferability of complex interventions to be assessed. Improved reporting formats are needed that require and encourage clarification of both general and project-specific threats to internal and external validity. In addition, a cultural change is required in which the open and honest reporting of such issues is seen as an indicator of study strength and researcher integrity, rather than as a sign of a poor-quality study or poor investigator ability.

9.

Background

The reporting of outcomes within published randomized trials has previously been shown to be incomplete, biased and inconsistent with study protocols. We sought to determine whether outcome reporting bias would be present in a cohort of government-funded trials subjected to rigorous peer review.

Methods

We compared protocols for randomized trials approved for funding by the Canadian Institutes of Health Research (formerly the Medical Research Council of Canada) from 1990 to 1998 with subsequent reports of the trials identified in journal publications. Characteristics of reported and unreported outcomes were recorded from the protocols and publications. Incompletely reported outcomes were defined as those with insufficient data provided in publications for inclusion in meta-analyses. An overall odds ratio measuring the association between completeness of reporting and statistical significance was calculated stratified by trial. Finally, primary outcomes specified in trial protocols were compared with those reported in publications.
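
One standard way to obtain an overall odds ratio "stratified by trial", as described above, is the Mantel-Haenszel estimator, which pools a 2x2 table (fully vs incompletely reported, statistically significant vs non-significant) from each trial. The sketch below is a generic illustration with made-up counts; it is not the authors' analysis code, and the paper may have pooled the trial-level tables differently.

```python
def mantel_haenszel_or(tables):
    """Pooled odds ratio across strata (here, trials).

    Each table is (a, b, c, d):
      a = significant outcomes fully reported
      b = significant outcomes incompletely reported
      c = non-significant outcomes fully reported
      d = non-significant outcomes incompletely reported
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

# Hypothetical per-trial counts, for illustration only.
tables = [(12, 3, 8, 9), (20, 5, 10, 12), (7, 4, 6, 10)]
print(f"Mantel-Haenszel OR = {mantel_haenszel_or(tables):.2f}")
```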

Results

We identified 48 trials with 68 publications and 1402 outcomes. The median number of participants per trial was 299, and 44% of the trials were published in general medical journals. A median of 31% (10th–90th percentile range 5%–67%) of outcomes measured to assess the efficacy of an intervention (efficacy outcomes) and 59% (0%–100%) of those measured to assess the harm of an intervention (harm outcomes) per trial were incompletely reported. Statistically significant efficacy outcomes had a higher odds than nonsignificant efficacy outcomes of being fully reported (odds ratio 2.7; 95% confidence interval 1.5–5.0). Primary outcomes differed between protocols and publications for 40% of the trials.

Interpretation

Selective reporting of outcomes frequently occurs in publications of high-quality government-funded trials.

Selective reporting of results from randomized trials can occur either at the level of end points within published studies (outcome reporting bias)1 or at the level of entire trials that are selectively published (study publication bias).2 Outcome reporting bias has previously been demonstrated in a broad cohort of published trials approved by a regional ethics committee.1 The Canadian Institutes of Health Research (CIHR) — the primary federal funding agency, known before 2000 as the Medical Research Council of Canada (MRC) — recognized the need to address this issue and conducted an internal review process in 2002 to evaluate the reporting of results from its funded trials. The primary objectives were to determine (a) the prevalence of incomplete outcome reporting in journal publications of randomized trials; (b) the degree of association between adequate outcome reporting and statistical significance; and (c) the consistency between primary outcomes specified in trial protocols and those specified in subsequent journal publications.

10.
Objective: To examine whether the association of inadequate or unclear allocation concealment and lack of blinding with biased estimates of intervention effects varies with the nature of the intervention or outcome.

Design: Combined analysis of data from three meta-epidemiological studies based on collections of meta-analyses.

Data sources: 146 meta-analyses including 1346 trials examining a wide range of interventions and outcomes.

Main outcome measures: Ratios of odds ratios quantifying the degree of bias associated with inadequate or unclear allocation concealment, and lack of blinding, for trials with different types of intervention and outcome. A ratio of odds ratios <1 implies that inadequately concealed or non-blinded trials exaggerate intervention effect estimates.

Results: In trials with subjective outcomes, effect estimates were exaggerated when there was inadequate or unclear allocation concealment (ratio of odds ratios 0.69, 95% CI 0.59 to 0.82) or lack of blinding (0.75, 0.61 to 0.93). In contrast, there was little evidence of bias in trials with objective outcomes: ratios of odds ratios 0.91 (0.80 to 1.03) for inadequate or unclear allocation concealment and 1.01 (0.92 to 1.10) for lack of blinding. There was little evidence of a difference between trials of drug and non-drug interventions. Except for trials with all-cause mortality as the outcome, the magnitude of bias varied between meta-analyses.

Conclusions: The average bias associated with defects in the conduct of randomised trials varies with the type of outcome. Systematic reviewers should routinely assess the risk of bias in the results of trials, and should report meta-analyses restricted to trials at low risk of bias either as the primary analysis or in conjunction with less restrictive analyses.
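
To make the "ratio of odds ratios" summary concrete, the sketch below shows the usual log-scale calculation with a normal-approximation 95% confidence interval. The input estimates and standard errors are invented for illustration; this is a simplified two-group comparison, not the hierarchical meta-epidemiological model fitted in the study.

```python
import math

def ratio_of_odds_ratios(or_flawed, se_log_flawed, or_adequate, se_log_adequate):
    """Ratio of odds ratios with a 95% CI computed on the log scale.

    ROR < 1 means trials with the flaw (e.g. inadequate allocation
    concealment) yield more favourable intervention effect estimates.
    """
    log_ror = math.log(or_flawed) - math.log(or_adequate)
    se = math.sqrt(se_log_flawed ** 2 + se_log_adequate ** 2)
    ror = math.exp(log_ror)
    ci = (math.exp(log_ror - 1.96 * se), math.exp(log_ror + 1.96 * se))
    return ror, ci

# Hypothetical summary estimates, for illustration only.
ror, (lo, hi) = ratio_of_odds_ratios(0.62, 0.08, 0.90, 0.06)
print(f"ROR = {ror:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```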

11.
McCambridge J, Kypri K. PLoS ONE. 2011;6(10):e23748

Background

Participant reports of their own behaviour are critical for the provision and evaluation of behavioural interventions. Recent developments in brief alcohol intervention trials provide an opportunity to evaluate longstanding concerns that answering questions on behaviour as part of research assessments may inadvertently influence it and produce bias. The study objective was to evaluate the size and nature of effects observed in randomized manipulations of the effects of answering questions on drinking behaviour in brief intervention trials.

Methodology/Principal Findings

Multiple methods were used to identify primary studies. Between-group differences in total weekly alcohol consumption, quantity per drinking day and AUDIT scores were evaluated in random-effects meta-analyses. Ten trials were included in this review, two of which did not provide findings suitable for the quantitative analysis, in which three outcomes were evaluated. Between-group differences were of the magnitude of 13.7 (−0.17 to 27.6) grams of alcohol per week (approximately 1.5 U.K. units or 1 standard U.S. drink) and 1 point (0.1 to 1.9) in AUDIT score. There was no difference in quantity per drinking day.
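
For readers unfamiliar with the pooling step, the sketch below implements a standard DerSimonian-Laird random-effects meta-analysis of between-group mean differences. The trial-level estimates and variances are invented for illustration; they are not the data analysed in this review, and the review's analysis may have used different software and settings.

```python
import math

def dersimonian_laird(estimates, variances):
    """Random-effects pooled estimate (DerSimonian-Laird method).

    estimates: per-trial between-group differences (e.g. grams/week)
    variances: corresponding sampling variances
    """
    w = [1 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)     # between-trial variance
    w_re = [1 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical trial-level differences in weekly consumption (g/week).
est = [10.0, 18.5, 12.0, 15.0]
var = [25.0, 40.0, 30.0, 36.0]
pooled, ci = dersimonian_laird(est, var)
print(f"Pooled difference = {pooled:.1f} g/week (95% CI {ci[0]:.1f} to {ci[1]:.1f})")
```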

Conclusions/Significance

Answering questions on drinking in brief intervention trials appears to alter subsequent self-reported behaviour. This potentially generates bias by exposing non-intervention control groups to an integral component of the intervention. The effects of brief alcohol interventions may thus have been consistently under-estimated. These findings are relevant to evaluations of any interventions to alter behaviours which involve participant self-report.

12.
Summary: Cluster randomized trials in health care may involve three levels instead of two, for instance in trials where different interventions to improve quality of care are compared. In such trials, the intervention is implemented in health care units (“clusters”) and aims at changing the behavior of health care professionals working in those units (“subjects”), while the effects are measured at the patient level (“evaluations”). Within the generalized estimating equations approach, we derive a sample size formula that accounts for two levels of clustering: that of subjects within clusters and that of evaluations within subjects. The formula reveals that the sample size is inflated, relative to a design with completely independent evaluations, by a multiplicative term that can be expressed as a product of two variance inflation factors: one that quantifies the impact of within-subject correlation of evaluations on the variance of subject-level means, and another that quantifies the impact of the correlation between subject-level means on the variance of the cluster means. Power levels as predicted by the sample size formula agreed well with the simulated power for more than 10 clusters in total, when data were analyzed using bias-corrected estimating equations for the correlation parameters in combination with the model-based covariance estimator or the sandwich estimator with a finite-sample correction.
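
As a rough illustration of the multiplicative inflation described above, the sketch below computes a design effect as the product of two variance inflation factors and scales a sample size accordingly. The specific parameterisation (equal numbers of evaluations per subject and subjects per cluster, with correlation parameters rho_eval and rho_subject) is a simplifying assumption for this example; the exact GEE-based formula derived in the paper may differ.

```python
import math

def three_level_sample_size(n_independent, evals_per_subject,
                            subjects_per_cluster, rho_eval, rho_subject):
    """Inflate a sample size for two levels of clustering.

    rho_eval:    correlation of evaluations within the same subject
    rho_subject: correlation between subject-level means within a cluster
    Returns (total evaluations required, number of clusters), assuming
    equal numbers of evaluations per subject and subjects per cluster.
    """
    vif_within_subject = 1 + (evals_per_subject - 1) * rho_eval
    vif_within_cluster = 1 + (subjects_per_cluster - 1) * rho_subject
    design_effect = vif_within_subject * vif_within_cluster
    n_total = n_independent * design_effect
    n_clusters = math.ceil(n_total / (evals_per_subject * subjects_per_cluster))
    return math.ceil(n_total), n_clusters

# Hypothetical inputs, for illustration only.
print(three_level_sample_size(n_independent=400, evals_per_subject=20,
                              subjects_per_cluster=5, rho_eval=0.05,
                              rho_subject=0.10))
```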

13.
Common mental disorders, such as depression and anxiety, pose a major public health burden in developing countries. Although these disorders are thought to be best managed in primary care settings, there is a dearth of evidence about how this can be achieved in low-resource settings. The MANAS project is an attempt to integrate an evidence-based package of treatments into routine public and private primary care settings in Goa, India. Before initiating the trial, we carried out extensive preparatory work over a period of 15 months to examine the feasibility and acceptability of the planned intervention. This paper describes the systematic development and evaluation of the intervention through this preparatory phase. The preparatory stage, which was implemented in three phases, utilized quantitative and qualitative methods to inform our understanding of the potential problems and possible solutions in implementing the trial, and led to critical modifications of the original intervention plan. Investing in systematic formative work prior to conducting expensive trials of the effectiveness of complex interventions is a useful exercise that potentially improves the likelihood of a positive result in such trials.

14.
Objectives: To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.

Design: Systematic review of randomised controlled trials.

Data sources: Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists.

Study selection: Studies showing the effects of using a parachute during free fall.

Main outcome measure: Death or major trauma, defined as an injury severity score >15.

Results: We were unable to identify any randomised controlled trials of parachute intervention.

Conclusions: As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.

15.
Objective: To evaluate a palliative care home support team based on an inpatient unit.

Design: Randomised controlled trial with waiting list. Patients in the study group received the service immediately; those in the control group received it after one month. The main comparison point was at one month.

Setting: A city of 300,000 people with a publicly funded home care service and about 200 general practitioners, most of whom provide home care.

Main outcome measures: Pain and nausea levels were measured at entry to the trial and at one month, as were patients' quality of life and caregivers' health.

Results: Because of early deaths, problems with recruitment, and a low compliance rate for completion of questionnaires, the required sample size was not attained.

Conclusion: In designing evaluations of palliative care services, investigators should be prepared to deal with the following issues: attrition due to early death, opposition to randomisation by patients and referral sources, ethical problems raised by randomisation of dying patients, the appropriate timing of comparison points, and difficulties of collecting data from sick or exhausted patients and caregivers. Investigators may choose to evaluate a service from various perspectives using different methods: controlled trials, qualitative studies, surveys, and audits. Randomised trials may prove to be impracticable for evaluation of palliative care.

16.

Background

The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias has been recognised as a potential threat to the validity of meta-analysis and can make the readily available evidence unreliable for decision making. Until recently, outcome reporting bias has received less attention.

Methodology/Principal Findings

We review and summarise the evidence from a series of cohort studies that have assessed study publication bias and outcome reporting bias in randomised controlled trials. Sixteen studies were eligible, of which only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Eleven of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies.

Conclusions

Recent work provides direct empirical evidence for the existence of study publication bias and outcome reporting bias. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.

17.
Viergever RF, Ghersi D. PLoS ONE. 2011;6(2):e14701

Background

Lack of transparency in clinical trial conduct, publication bias and selective reporting bias are still important problems in medical research. Through clinical trials registration, it should be possible to take steps towards resolving some of these problems. However, previous evaluations of registered records of clinical trials have shown that registered information is often incomplete and non-meaningful. If these studies are accurate, this negates the possible benefits of registration of clinical trials.

Methods and Findings

A 5% sample of records of clinical trials that were registered between 17 June 2008 and 17 June 2009 was taken from the International Clinical Trials Registry Platform (ICTRP) database and assessed for the presence of contact information, the presence of intervention specifics in drug trials and the quality of primary and secondary outcome reporting. 731 records were included. More than half of the records were registered after recruitment of the first participant. The name of a contact person was available in 94.4% of records from non-industry funded trials and 53.7% of records from industry funded trials. Either an email address or a phone number was present in 76.5% of non-industry funded trial records and in 56.5% of industry funded trial records. Although a drug name or company serial number was almost always provided, other drug intervention specifics were often omitted from registration. Of 3643 reported outcomes, 34.9% were specific measures with a meaningful time frame.

Conclusions

Clinical trials registration has the potential to contribute substantially to improving clinical trial transparency and reducing publication bias and selective reporting. These potential benefits are currently undermined by deficiencies in the provision of information in key areas of registered records.

18.

Background

The Health Directors of the US Affiliated Pacific Islands (USAPI) declared a State of Emergency in 2010 due to epidemic proportions of lifestyle diseases: cancer, obesity and other non-communicable diseases (NCDs). This paper describes the development, implementation, and evaluation of a USAPI policy, system and environment (PSE) approach to address lifestyle behaviors associated with cancer and other NCDs.

Methods

Each USAPI jurisdiction applied the PSE approach to tobacco and nutrition interventions in a local institutional, faith-based, or community setting. A participatory community engagement process was utilized to: identify relevant deleterious health behaviors in the population; develop PSE interventions to modify the context in which the behavior occurs in a particular setting; implement the PSE intervention through five specified activities; and evaluate the activities and behavior change associated with the intervention.

Results

PSE interventions have been implemented in all USAPI jurisdictions. Current human and financial resources have been adequate to support the interventions. Process and behavior-change evaluations have not been completed and are ongoing. Personnel turnover and maintaining the intervention strategy in response to shifting community demands have been the biggest challenges in one site.

Conclusion

From 2014 through 2016 the PSE approach has been used to implement PSE interventions in all USAPI jurisdictions. The intervention evaluations have not been completed. The PSE intervention is novel and has the potential to be a scalable methodology for preventing cancer and modifying NCD risk in the USAPI and small states.

19.

Background

Diabetes is highly prevalent and contributes to significant morbidity and mortality worldwide. Behaviour change interventions that target health and lifestyle factors associated with the onset of diabetes can delay progression to diabetes, but many approaches rely on intensive one-to-one contact by specialists. Health coaching is an approach based on motivational interviewing that can potentially deliver behaviour change interventions by non-specialists at a larger scale. This trial protocol describes a randomized controlled trial (CATFISH) that tests whether a web-enhanced telephone health coaching intervention (IGR3) is more acceptable and efficient than a telephone-only health coaching intervention (IGR2) for people with prediabetes (impaired glucose regulation).

Methods

CATFISH is a two-arm, parallel-group, single-centre, individually randomized controlled trial. Eligible participants are patients aged ≥18 years with impaired glucose regulation (HbA1c concentration between 42 and 47 mmol/mol) who have access to a telephone and home internet and have been referred to an existing telephone health coaching service at Salford Royal NHS Foundation Trust, Salford, UK. Participants who give written informed consent will be randomized remotely (via a clinical trials unit) to either the existing pathway (IGR2) or the new web-enhanced pathway (IGR3) for 9 months. The primary outcome measure is patient acceptability at 9 months, determined using the Client Satisfaction Questionnaire. Secondary outcome measures at 9 months are: cost of delivery of IGR2 and IGR3, mental health, quality of life, patient activation, self-management, weight (kg), HbA1c concentration, and body mass index. All outcome measures will be analyzed on an intention-to-treat basis. A qualitative process evaluation will explore the experiences of participants and providers, with a focus on understanding the usability of the interventions, mechanisms of behaviour change, and the impact of context on delivery and user acceptability. Qualitative data will be analyzed using the Framework approach.

Discussion

The CATFISH trial will provide a pragmatic assessment of whether a web-based information technology platform can enhance the acceptability of a telephone health coaching intervention for people with prediabetes. The data will prove critical in understanding the role of web applications in improving engagement with evidence-based approaches to preventing diabetes.
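
As an aside on the eligibility window quoted above, HbA1c reported in IFCC units (mmol/mol) can be converted to NGSP units (%) with the published linear master equation, so 42-47 mmol/mol corresponds to roughly 6.0-6.5%. The sketch below uses rounded constants from that relationship and is for orientation only; it is not part of the trial protocol.

```python
def ifcc_to_ngsp(hba1c_mmol_per_mol):
    """Convert HbA1c from IFCC units (mmol/mol) to NGSP units (%).

    Uses the NGSP-IFCC master equation, NGSP(%) ~= 0.0915 * IFCC + 2.15
    (rounded constants, so results are approximate).
    """
    return 0.0915 * hba1c_mmol_per_mol + 2.15

for value in (42, 47):  # the CATFISH eligibility window
    print(f"{value} mmol/mol is approximately {ifcc_to_ngsp(value):.1f}%")
```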

Trial registration

ISRCTN16534814. Registered on 7 February 2016.

Electronic supplementary material

The online version of this article (doi:10.1186/s13063-016-1519-6) contains supplementary material, which is available to authorized users.

20.

Background

The increased use of meta-analysis in systematic reviews of healthcare interventions has highlighted several types of bias that can arise during the completion of a randomised controlled trial. Study publication bias and outcome reporting bias have been recognised as potential threats to the validity of meta-analysis and can make the readily available evidence unreliable for decision making.

Methodology/Principal Findings

In this update, we review and summarise the evidence from cohort studies that have assessed study publication bias or outcome reporting bias in randomised controlled trials. Twenty studies were eligible, of which four were newly identified in this update. Only two followed the cohort all the way through from protocol approval to information regarding publication of outcomes. Fifteen of the studies investigated study publication bias and five investigated outcome reporting bias. Three studies found that statistically significant outcomes had higher odds of being fully reported than non-significant outcomes (range of odds ratios: 2.2 to 4.7). In comparing trial publications to protocols, we found that 40–62% of studies had at least one primary outcome that was changed, introduced, or omitted. We decided not to undertake meta-analysis due to the differences between studies.

Conclusions

This update does not change the conclusions of the review in which 16 studies were included. Direct empirical evidence for the existence of study publication bias and outcome reporting bias is shown. There is strong evidence of an association between significant results and publication; studies that report positive or significant results are more likely to be published and outcomes that are statistically significant have higher odds of being fully reported. Publications have been found to be inconsistent with their protocols. Researchers need to be aware of the problems of both types of bias and efforts should be concentrated on improving the reporting of trials.
