Similar articles
 20 similar articles found.
1.
Objective: To quantify the effects of smoke-free workplaces on smoking in employees and to compare these effects with those achieved through tax increases.
Design: Systematic review with a random effects meta-analysis.
Setting: Workplaces in the United States, Australia, Canada, and Germany.
Participants: Employees in unrestricted and totally smoke-free workplaces.
Results: Totally smoke-free workplaces are associated with reductions in prevalence of smoking of 3.8% (95% confidence interval 2.8% to 4.7%) and 3.1 (2.4 to 3.8) fewer cigarettes smoked per day per continuing smoker. Combining the effects of reduced prevalence and lower consumption per continuing smoker yields a mean reduction of 1.3 cigarettes per day per employee, which corresponds to a relative reduction of 29%. To achieve similar reductions, the tax on a pack of cigarettes would have to increase from $0.76 to $3.05 (€0.78 to €3.14) in the United States and from £3.44 to £6.59 (€5.32 to €10.20) in the United Kingdom. If all workplaces became smoke-free, consumption per capita in the entire population would drop by 4.5% in the United States and 7.6% in the United Kingdom, costing the tobacco industry $1.7 billion and £310 million annually in lost sales. To achieve similar reductions, tax per pack would have to increase to $1.11 and £4.26.
Conclusions: Smoke-free workplaces not only protect non-smokers from the dangers of passive smoking; they also encourage smokers to quit or to reduce consumption.

What is already known on this topic

Smoke-free workplaces are associated with lower cigarette consumption per continuing smoker

What this study adds

  • Smoke-free workplaces reduce prevalence of smoking as well as consumption
  • The combined effects of people stopping smoking and reducing consumption reduce total cigarette consumption by 29%
  • To achieve similar results through taxation would require cigarette taxes per pack to increase from $0.76 to $3.05 in the United States and from £3.44 to £6.59 in the United Kingdom
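The combined per-employee figure can be reproduced with back-of-envelope arithmetic. The baseline smoking prevalence (25%) and baseline consumption (18 cigarettes per day per smoker) below are illustrative assumptions, not figures from the study, so the result only approximates the reported numbers:

```python
baseline_prevalence = 0.25    # assumed fraction of employees who smoke
baseline_consumption = 18.0   # assumed cigarettes/day per continuing smoker

prevalence_drop = 0.038       # 3.8 percentage points, from the abstract
consumption_drop = 3.1        # cigarettes/day per continuing smoker, from the abstract

# Per-employee consumption before and after the workplace goes smoke-free.
before = baseline_prevalence * baseline_consumption
after = (baseline_prevalence - prevalence_drop) * (baseline_consumption - consumption_drop)

reduction_per_employee = before - after
relative_reduction = reduction_per_employee / before

print(round(reduction_per_employee, 1))       # close to the reported 1.3 cigarettes/day
print(round(relative_reduction * 100, 1))     # near the reported 29% relative reduction
```

The exact values depend on the assumed baselines; the study combined actual prevalence and consumption distributions.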

2.
Objective: To report the career choices and career destinations in 1995 of doctors who qualified in the United Kingdom in 1988.
Design: Postal questionnaire.
Setting: United Kingdom.
Subjects: All doctors who qualified in the United Kingdom in 1988.
Results: Of the 3724 doctors who were sent questionnaires, eight had died and three declined to participate. Of the remaining 3713 doctors, 2885 (77.7%) replied. 16.9% (608/3593; 95% confidence interval 16.1% to 17.8%) of all 1988 qualifiers from medical schools in Great Britain were not working in the NHS in Great Britain in 1995, compared with 17.0% (624/3674; 16.1% to 17.9%) of the 1983 cohort in 1990. The proportion of doctors working in general practice was lower than in previous cohorts. The percentage of women in general practice (44.3% (528/1192)) substantially exceeded that of men (33.1% (443/1340)). 53% (276/522) of the women in general practice and 20% (98/490) of the women in hospital specialties worked part time.
Conclusions: Concerns about recruitment difficulties in general practice are justified. Women are now entering general practice in greater numbers than men. There is no evidence of a greater exodus from the NHS among the 1988 qualifiers than among earlier cohorts.

Key messages

  • This study reports the career progress to September 1995 of doctors who qualified in 1988
  • Loss from the British NHS, at 16.9% (95% confidence interval, 16.1% to 17.8%), was no greater than among earlier qualifiers at the same time after qualification
  • The proportion of doctors working in general practice (38%) was lower than in earlier cohorts studied
  • In this generation of doctors, women in general practice now outnumber men
  • Fifty three per cent of the women in general practice and 20% of the women in hospital specialties were working on a part time or flexible basis

3.
Objectives: To identify the number and current location of children, aged 0 to 16 years, requiring long term ventilation in the United Kingdom, and to establish their underlying diagnoses and ventilatory needs.
Design: Postal questionnaires sent to consultant respiratory paediatricians and all lead clinicians of intensive care and special care baby units in the United Kingdom.
Subjects: All children in the United Kingdom who, when medically stable, continue to need a mechanical aid for breathing.
Results: 141 children requiring long term ventilation were identified from the initial questionnaire. Detailed information was then obtained on 136 children from 30 units. Thirty three children (24%) required continuous positive pressure ventilation by tracheostomy over 24 hours, and 103 received ventilation when asleep by a non-invasive mask (n=62; 46%), tracheostomy (n=32; 24%), or negative pressure ventilation (n=9; 7%). Underlying conditions included neuromuscular disease (n=62; 46%), congenital central hypoventilation syndrome (n=18; 13%), spinal injury (n=16; 12%), craniofacial syndromes (n=9; 7%), bronchopulmonary dysplasia (n=6; 4%), and others (n=25; 18%). 93 children were cared for at home. 43 children remained in hospital because of home circumstances, inadequate funding, or lack of provision of home carers. 96 children were of school age and 43 were attending mainstream school.
Conclusions: A significant increase in the number of children requiring long term ventilation in the United Kingdom has occurred over the past decade. Contributing factors include improved technology, developments in paediatric non-invasive ventilatory support, and a change in attitude towards home care. Successful discharge home and return to school is occurring even for severely disabled patients. Funding and home carers are common obstacles to discharge.

Key messages

  • The number of children requiring long term ventilatory support has increased substantially in the past 8 years
  • Ventilatory support at home is the best option for meeting the child’s psychological needs and enhancing quality of life
  • The majority of children dependent on long term ventilation live at home and attend mainstream schools
  • A shift of care has occurred from intensive care units to less acute areas

4.
Objective: To determine the career destinations, by 1995, of doctors who qualified in the United Kingdom in 1977; the relation between their destinations and early career choice; and their intentions regarding retirement age.
Design: Postal questionnaire.
Setting: United Kingdom.
Subjects: All (n=3135) medical qualifiers of 1977.
Results: After about 12 years the distribution of respondents by type of employment and, for women, the percentage of doctors in part time rather than full time medical work had stabilised. Of all 2997 qualifiers from medical schools in Great Britain, 2399 (80.0%; 95% confidence interval 79.5% to 80.6%) were working in medicine in the NHS in Great Britain 18 years after qualifying. Almost half the women (318/656) worked in the NHS part time. Of 1714 doctors in the NHS, 1125 intended to work in the NHS until normal retirement age, 392 did not, and 197 were undecided. Of the 1548 doctors for whom we had sufficient information, career destinations at 18 years matched the choices made at 1, 3, and 5 years in 58.9% (912), 78.2% (1211), and 86.6% (1341) of cases respectively.
Conclusions: Planning for the medical workforce needs to be supported by information about doctors’ career plans, destinations, and whole time equivalent years of work. Postgraduate training needs to take account of doctors’ eventual choice of specialty (and the timing of this choice).

Key messages

  • A large scale national study in the United Kingdom followed doctors from qualification to mid-career and beyond
  • Most doctors had made their choice of eventual career—at least in terms of broadly defined specialty—within 5 years of qualifying
  • Eighteen years on, 80% of the doctors were working in the NHS and nearly half of women doctors were working part time
  • Almost a quarter of NHS doctors planned to retire early

5.

Background

Evidence from developed countries showed that medication errors are common and harmful. Little is known about medication errors in resource-restricted settings, including Vietnam.

Objectives

To determine the prevalence and potential clinical outcome of medication preparation and administration errors, and to identify factors associated with errors.

Methods

This was a prospective study conducted on six wards in two urban public hospitals in Vietnam. Data on preparation and administration errors of oral and intravenous medications were collected by direct observation, 12 hours per day on 7 consecutive days, on each ward. Multivariable logistic regression was applied to identify factors contributing to errors.

Results

In total, 2060 out of 5271 doses had at least one error. The error rate was 39.1% (95% confidence interval 37.8% to 40.4%). Experts judged potential clinical outcomes as minor, moderate, and severe in 72 (1.4%), 1806 (34.2%), and 182 (3.5%) doses respectively. Factors associated with errors were drug characteristics (administration route, complexity of preparation, drug class; all p values < 0.001) and administration time (drug round, p = 0.023; day of the week, p = 0.024). Several interactions between these factors were also significant. Nurse experience was not significant. Higher error rates were observed for intravenous medications involving complex preparation procedures and for anti-infective drugs. Slightly lower medication error rates were observed during afternoon rounds compared with other rounds.

Conclusions

Potentially clinically relevant errors occurred in more than a third of all medication doses in this large study conducted in a resource-restricted setting. Educational interventions focusing on intravenous medications with complex preparation procedures, particularly antibiotics, are likely to improve patient safety.
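The reported error rate and its confidence interval can be checked against the raw counts. This sketch assumes a simple normal-approximation (Wald) binomial interval, which reproduces the published figures:

```python
import math

errors, doses = 2060, 5271          # counts from the abstract

p = errors / doses                  # observed error rate
se = math.sqrt(p * (1 - p) / doses) # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se

print(f"{p:.1%} (95% CI {lo:.1%} to {hi:.1%})")  # 39.1% (95% CI 37.8% to 40.4%)
```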

6.
Objective: To compare the effectiveness of lipid lowering drugs in lowering serum cholesterol concentrations.
Design: Cross sectional study.
Setting: 17 practices within 17 primary care groups in Trent region, United Kingdom.
Participants: Patients aged 35 years or over taking lipid lowering drugs and with at least two serum cholesterol concentrations recorded on computer.
Results: 1353 of 2469 (54.8%) patients receiving lipid lowering treatment had a last recorded serum cholesterol concentration of ⩽5 mmol/l. Significantly more patients taking statins achieved the target value for serum cholesterol (5 mmol/l) than those taking fibrates (1307 (57%) v 46 (26%); P<0.0001). Atorvastatin and simvastatin were the most effective drugs in achieving the target. Significant differences were found between lipid lowering drugs for the pretreatment serum cholesterol concentration, the most recent cholesterol concentration, and the associated percentage reduction. Atorvastatin and simvastatin achieved the greatest percentage reductions in serum cholesterol concentration (30.1% (95% confidence interval 28.8% to 31.4%) and 28.0% (26.7% to 29.3%) respectively). Although the mean serum cholesterol concentrations in this unselected population tended to be higher than those in clinical trials, the percentage reduction was consistent with the trials.
Conclusion: The ability of individual statins to lower serum cholesterol concentration varied, with atorvastatin and simvastatin being the most effective. The percentage reductions agreed with those of randomised controlled trials, indicating likely benefits in unselected patients in primary care. As the initial serum cholesterol concentrations were higher than those in randomised controlled trials, target serum cholesterol values of ⩽5 mmol/l may be unrealistic even for patients taking the most efficacious drugs. Also, the higher initial levels could mean that the absolute reduction in cardiovascular risk in primary care patients is greater than thought.

What is already known on this topic

  • Statins in patients with coronary heart disease help reduce further cardiovascular events and improve survival
  • This seems to be a class effect of statins, although there may be important differences in effectiveness between them
  • Less than half of patients in the community who take lipid lowering drugs achieve target serum cholesterol values

What this study adds

  • Statins vary in their ability to lower serum cholesterol concentration, with atorvastatin and simvastatin achieving the best results
  • The percentage reductions agreed with those found in randomised controlled trials
  • Since the initial serum cholesterol concentrations were higher than in trials, absolute risk reductions in primary care patients may be greater than thought
  • Target values of ⩽5 mmol/l may be unrealistic even for patients on the most efficacious drugs, because the initial mean cholesterol values of primary care patients are higher than those of patients in trials
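The statin versus fibrate comparison (57% v 26%, P<0.0001) can be sanity checked with a two-proportion z test. The group sizes below are back-calculated from the reported counts and percentages (1307/0.57 ≈ 2293 and 46/0.26 ≈ 177, which together match the stated total of 2469), so they are inferred approximations, not denominators stated in the abstract:

```python
import math

# Patients achieving the <=5 mmol/l target, from the abstract.
statin_hits, fibrate_hits = 1307, 46
# Group sizes inferred from the reported 57% and 26% (assumption, not stated).
statin_n, fibrate_n = 2293, 177

p1 = statin_hits / statin_n
p2 = fibrate_hits / fibrate_n
pooled = (statin_hits + fibrate_hits) / (statin_n + fibrate_n)
se = math.sqrt(pooled * (1 - pooled) * (1 / statin_n + 1 / fibrate_n))
z = (p1 - p2) / se

# A z score near 8 corresponds to a two-sided P value far below 0.0001,
# consistent with the abstract.
print(round(z, 1))
```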

7.
Objective: To investigate delays in the presentation to hospital and evaluation of patients with suspected stroke.
Design: Multicentre prospective observational study.
Setting: 22 hospitals in the United Kingdom and Dublin.
Participants: 739 patients with suspected stroke presenting to hospital.
Results: The median age of patients was 75 years, and 400 were women. The median delay between onset of symptoms and arrival at hospital was 6 hours (interquartile range 1 hour 48 minutes to 19 hours 12 minutes); 37% of patients arrived within 3 hours and 50% within 6 hours. The median delay for patients using the emergency service was 2 hours 3 minutes (47 minutes to 7 hours 12 minutes) compared with 7 hours 12 minutes (2 hours 5 minutes to 20 hours 37 minutes) for referrals from general practitioners (P<0.0001). Use of emergency services reduced delays to hospital (odds ratio 0.45, 95% confidence interval 0.23 to 0.61). The median time to evaluation by a senior doctor was 1 hour 9 minutes (interquartile range 33 minutes to 1 hour 50 minutes), but evaluation was undertaken within 3 hours of arrival in only 477 (65%) patients. This was not influenced by age, sex, time of presentation, mode of referral, hospital type, or the presence of a stroke unit. Computed tomography was requested within 3 hours of arrival in 166 (22%) patients but undertaken in only 60 (8%).
Conclusion: Delays in arrival at hospital of patients with suspected stroke can be reduced by increased use of emergency services. Over a third of patients arrive at hospital within three hours of stroke; their management can be improved by expediting medical evaluation and performing computed tomography early.

What is already known on this topic

  • Delay in presentation and assessment of patients with suspected stroke prevents the possible benefits from thrombolysis being achieved
  • Little is known about the presentation and early management of patients with acute stroke in the United Kingdom

What this study adds

  • Most patients with suspected stroke in the United Kingdom arrive at hospital within six hours of the onset of symptoms
  • Not all patients are evaluated by a senior doctor within three hours of arrival at hospital, and most do not undergo computed tomography
  • The potential for thrombolysis in patients with acute stroke can be improved significantly by greater use of emergency services and by expediting evaluation and investigations by doctors

8.
9.
Objective: To assess the cost effectiveness of universal antenatal HIV screening compared with selective screening in the United Kingdom.
Design: Incremental cost effectiveness analysis relating additional costs of screening to life years gained. Maternal and paediatric costs and life years were combined.
Setting: United Kingdom.
Results: On base case assumptions, a new diagnosis of a pregnant woman with HIV results in a gain of 6.392 life years and additional expenditure of £14 833. If decision makers are prepared to pay up to £10 000 for an additional life year, this implies a net benefit of £49 090 (range £12 300-£59 000), which would be available to detect each additional infected woman in an antenatal screening programme. In London, universal antenatal screening would be cost effective compared with selective screening under any reasonable assumptions about screening costs. Outside London, universal screening with uptake above 90% would be cost effective with a £0.60 HIV antibody test cost and up to 3.5 minutes for pretest discussion. Cost effectiveness of universal testing is lower if selective testing can achieve high uptake among those at higher risk. A universal strategy with only 50% uptake may not be more cost effective in low prevalence districts, and may cost more and be less effective, than a well run selective strategy.
Conclusions: Universal screening with pretest discussion should be adopted throughout the United Kingdom as part of routine antenatal care, as long as test costs can be kept low and uptake high.

Key messages

  • In 1997 only 13% of undiagnosed HIV infection in pregnant women was picked up on antenatal testing, resulting in many preventable paediatric infections
  • Assuming NHS willingness to pay £10 000 per life year gained, universal testing would be much more cost effective than selective testing throughout London on any reasonable assumptions on costs, prevalence, and uptake of testing
  • Outside London, universal testing would also be cost effective, even allowing 2-4 minutes for pretest discussion, provided that test costs were no more than £0.60 and uptake exceeded 90%
  • Low cost tests could be achieved by pooling antenatal sera or centralisation of testing
  • Universal testing with uptake of 50% may be less cost effective than a well run selective programme
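The net benefit figure in the abstract above follows from simple arithmetic on the reported numbers: willingness to pay per life year times life years gained, minus the additional cost of a new diagnosis. The small discrepancy from the reported £49 090 presumably reflects rounding of the life years estimate:

```python
# Net monetary benefit per newly diagnosed woman, using figures
# stated in the abstract.
willingness_to_pay = 10_000   # £ per life year gained
life_years_gained = 6.392
extra_cost = 14_833           # £ additional expenditure per new diagnosis

net_benefit = willingness_to_pay * life_years_gained - extra_cost
print(round(net_benefit))     # 49087, within rounding of the reported £49 090
```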

10.
Objectives: To prospectively compare compliance with treatment in patients with hypertension responsive to treatment versus patients with treatment resistant hypertension.
Design: Prospective case-control study.
Setting: Outpatient department in a large city hospital in Switzerland, providing primary, secondary, and tertiary care.
Participants: 110 consecutive medical outpatients with hypertension taking stable treatment with at least two antihypertensive drugs for at least four weeks.
Results: Complete data were available for 103 patients, of whom 86 took ⩾80% of their prescribed doses (“compliant”) and 17 took <80% (“non-compliant”). Of the 49 patients with treatment resistant hypertension, 40 (82%) were compliant, while 46 (85%) of the 54 patients responsive to treatment were compliant.
Conclusion: Non-compliance with treatment was not more prevalent in patients with treatment resistant hypertension than in treatment responsive patients.

What is already known on this topic

  • For many patients with arterial hypertension, blood pressure cannot be adequately controlled despite treatment with antihypertensive drugs
  • Patients' poor compliance with treatment is often suggested as the reason for lack of response to antihypertensive drugs

What this study adds

  • When treatment compliance was monitored in hypertensive patients following stable treatment regimens, no difference in compliance was found between those with treatment resistant hypertension and those responsive to treatment
  • Factors other than patients' compliance with treatment regimens should be examined to explain lack of response to antihypertensive drugs
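The finding of no significant difference in compliance can be illustrated with a chi-squared test on the 2×2 table implied by the results (implemented by hand to stay dependency-free). The abstract does not state which test the authors used, so this is a sketch rather than a reproduction of their analysis:

```python
# 2x2 table from the abstract: compliance (>=80% of prescribed doses)
# by treatment resistant versus treatment responsive hypertension.
observed = [[40, 9],   # resistant: compliant, non-compliant
            [46, 8]]   # responsive: compliant, non-compliant

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
n = sum(row_totals)

# Pearson chi-squared statistic: sum of (O - E)^2 / E over all cells.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / n) ** 2
    / (row_totals[i] * col_totals[j] / n)
    for i in range(2) for j in range(2)
)

# The statistic is far below 3.84, the 5% critical value on 1 degree of
# freedom, so the difference (82% v 85%) is nowhere near significant.
print(round(chi2, 2))
```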

11.
Objective: To examine referral pathways from primary care for patients with epithelial ovarian cancer and to identify factors related to survival at 18 months.
Design: Retrospective review of patient notes.
Setting: General practices and receiving hospitals within the Mersey region.
Subjects: 135 patients with epithelial ovarian cancer identified from an audit in the Mersey area between 1992 and 1994.
Results: 105 (78%) women first presented to their general practitioner within four weeks of the onset of symptoms. 99 (73%) women were referred to hospital by their general practitioners within four weeks of presentation, and 95 (70%) were seen in hospital within two weeks of referral. Multivariate analysis with survival as the dependent variable identified age (odds ratio 0.96, 95% confidence interval 0.93 to 0.99), cancer stage III or more (0.15, 0.05 to 0.43), and non-specific symptoms (0.36, 0.14 to 0.89) as significant variables.
Conclusion: Most patients attended their general practitioner within four weeks of the onset of symptoms and were referred within four weeks of presentation. No evidence was found that delays in referral or diagnosis adversely affected survival at 18 months. Stage of disease at surgery was the most important adverse factor. An effective screening programme is the most likely method to improve survival.

What is already known on this topic

  • Epithelial ovarian cancer is the most common gynaecological cancer in the United Kingdom
  • 75% of patients present with advanced incurable disease, and five year survival is 30%
  • The Department of Health recommends that everyone suspected of having ovarian cancer should be seen within two weeks of referral by their general practitioner

What this study adds

  • 78% of patients have had symptoms for less than four weeks when they present to general practice and are referred to hospital within four weeks of presentation
  • 70% of patients are seen in hospital within two weeks of referral
  • Delay by patients and general practitioners does not affect survival beyond 18 months

12.

Background & Objectives

Intravenous iron supplementation is widespread in the hemodialysis population, but there is uncertainty about the safest dosing strategy. We compared the safety of different intravenous iron dosing practices on the risk of adverse cardiovascular outcomes in a large population of hemodialysis patients.

Design, setting, participants, & measurements

A retrospective cohort was created from the clinical database of a large dialysis provider (years 2004-2008) merged with administrative data from the United States Renal Data System. Dosing comparisons were (1) bolus (consecutive doses ≥ 100 mg exceeding 600 mg during one month) versus maintenance (all other iron doses during the month); and (2) high (> 200 mg over 1 month) versus low dose (≤ 200 mg over 1 month). We established a 6-month baseline period (to identify potential confounders and effect modifiers), a one-month iron exposure period, and a three-month follow-up period. Outcomes were myocardial infarction, stroke, and death from cardiovascular disease.

Results

117,050 patients contributed 776,203 unique iron exposure/follow-up periods. After adjustment, we found no significant association of bolus versus maintenance dosing (hazard ratio for the composite outcome 1.03, 95% confidence interval 0.99 to 1.07) or of high versus low dose intravenous iron (hazard ratio for the composite outcome 0.99, 0.96 to 1.03). There were no consistent associations of either high or bolus dosing versus low or maintenance dosing, respectively, among prespecified subgroups.

Conclusions

Strategies favoring large doses of intravenous iron were not associated with increased short-term cardiovascular morbidity and mortality. Investigation of the long-term safety of the various intravenous iron supplementation strategies may still be warranted.

13.
Objectives: To assess the effectiveness of β blockers in short term treatment for acute myocardial infarction and in longer term secondary prevention; to examine predictive factors that may influence outcome and therefore choice of drug; and to examine the clinical importance of the results in the light of current treatment.
Design: Systematic review of randomised controlled trials.
Setting: Randomised controlled trials.
Subjects: Patients with acute or past myocardial infarction.
Intervention: β blockers compared with control.
Main outcome measures: All cause mortality and non-fatal reinfarction.
Results: Overall, 5477 of 54 234 patients (10.1%) randomised to β blockers or control died. We identified a 23% reduction in the odds of death in long term trials (95% confidence interval 15% to 31%), but only a 4% reduction in the odds of death in short term trials (−8% to 15%). Meta-regression in long term trials did not identify a significant reduction in effectiveness for drugs with cardioselectivity but did identify a near significant trend towards decreased benefit for drugs with intrinsic sympathomimetic activity. Most evidence is available for propranolol, timolol, and metoprolol. In long term trials, the number needed to treat for 2 years to avoid a death is 42, which compares favourably with other treatments for patients with acute or past myocardial infarction.
Conclusions: β blockers are effective in long term secondary prevention after myocardial infarction, but they are underused in such cases, leading to avoidable mortality and morbidity.

Key messages

  • The first randomised trials of β blockade in secondary prevention after myocardial infarction were published in the 1960s
  • β blockers were once heralded as a major advance, but their use for secondary prevention has declined in recent years
  • Firm evidence shows that long term β blockade remains an effective and well tolerated treatment that reduces mortality and morbidity in unselected patients after myocardial infarction
  • The benefits from β blockade compare favourably with other drug treatments for this patient group
  • Most evidence is for propranolol, timolol, and metoprolol, whereas atenolol, which is commonly used, is inadequately evaluated for long term use
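The number needed to treat of 42 can be recovered from the 23% reduction in the odds of death once a control group mortality is assumed. The 11.5% two-year control risk below is an illustrative assumption broadly consistent with the reported overall mortality, not a figure stated in the review:

```python
# Convert a 23% reduction in the odds of death into an absolute risk
# reduction and a number needed to treat (NNT).
control_risk = 0.115          # assumed 2-year control mortality (illustrative)
odds_ratio = 0.77             # 23% reduction in the odds of death, from the abstract

control_odds = control_risk / (1 - control_risk)
treated_odds = control_odds * odds_ratio
treated_risk = treated_odds / (1 + treated_odds)

arr = control_risk - treated_risk   # absolute risk reduction
nnt = 1 / arr

print(round(nnt))   # 42, matching the reported NNT under this assumed baseline
```

Note that the NNT is sensitive to the assumed baseline risk, which is why the review's figure applies specifically to long term trials over 2 years.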

14.
Objective: To collect population based information on transfusion of red blood cells.
Design: Prospective observational study over 28 days.
Setting: Hospital blood banks in the north of England (population 2.9 million).
Participants: All patients who received a red cell transfusion during the study period. Data were completed by hospital blood bank staff.
Results: The destination of 9848 units was recorded (97% of expected blood use). In total 9774 units were transfused: 5047 (51.6%) units were given to medical patients, 3982 (40.7%) to surgical patients, and 612 (6.3%) to obstetric and gynaecology patients. Nearly half (49.3%) of all blood was given to female recipients, and the mean age of recipients of individual units was 62.7 years. The most common surgical indications for transfusion were total hip replacement (4.6% of all blood transfused) and coronary artery bypass grafting (4.1%). Haematological disorders accounted for 15.5% of use. Overall use was 4274 units per 100 000 population per year.
Conclusion: In the north of England more than half of red cell units are transfused for medical indications. Demand for red cell transfusion increases with age. With anticipated changes in the age structure of the population, the demand for blood will increase by 4.9% by 2008.

What is already known on this topic

  • There have been no systematic population based surveys on use of red cells in the United Kingdom
  • Studies in France and the United States have shown that more than half of transfused red cells go to surgical patients

What this study adds

  • In the north of England over half of red cells are given for medical indications
  • Rates of red cell transfusion rise steeply with advancing age
  • Small increases in the number of elderly people will have large effects on demand

15.
Objective: To test two methods of providing low cost information on the later health status of survivors of neonatal intensive care.
Design: Cluster randomised comparison.
Setting: Nine hospitals distributed across two UK health regions. Each hospital was randomised to use one of two methods of follow up.
Participants: All infants born at ⩽32 weeks' gestation during 1997 in the study hospitals.
Method: Families were recruited at the time of discharge. In one method of follow up, families were asked to complete a questionnaire about their child's health at the age of 2 years (corrected for gestation). In the other method, the children's progress was followed by clerks in the local community child health department using sources of routine information.
Results: 236 infants were recruited to each method of follow up. Questionnaires were returned by 214 parents (91%; 95% confidence interval 84% to 97%) and 223 clerks (95%; 86% to 100%). Completed questionnaires were returned by 201 parents (85%; 76% to 94%) and 158 clerks (67%; 43% to 91%). Most parents found the forms easy to complete, but some had trouble understanding the concept of “corrected age” and hence when to return the form. Community clerks often had to rely on information that was out of date and difficult to interpret.
Conclusion: Neither questionnaires from parents nor routinely collected health data are adequate methods of providing complete follow up data on children who were born preterm and required neonatal intensive care, though both methods show potential.

What is already known on this topic

  • Outcome of neonatal intensive care should include later health status, not just early mortality
  • Although these data are commonly sought, for various reasons no existing routine system currently delivers the information for ⩾95% of the population (95% representing the minimum acceptable standard)
  • Running one-off studies to gain later follow up data is difficult and costly

What this study adds

  • Potentially these data could come from parents, but to reach 95% ascertainment perhaps 5-10% of parents would require help and support to provide information
  • Existing data flows may be able to provide the required information if the timing of routine reviews and methods of data recording were harmonised across the United Kingdom
  • The costs attached to introducing such a system seem to be low

16.
Objectives: To assess the quality and completeness of a database of clinical outcomes after cardiac surgery and to determine whether a process of validation, monitoring, and feedback could improve the quality of the database.
Design: Stratified sampling of retrospective data followed by prospective re-sampling of the database after an intervention of monitoring, validation, and feedback.
Setting: Ten tertiary care cardiac surgery centres in the United Kingdom.
Intervention: Validation of data derived from a stratified sample of case notes (recording of deaths cross checked with mortuary records), monitoring of completeness and accuracy of data entry, and feedback to local data managers and lead surgeons.
Results: The database was incomplete, with a mean (SE) of 24.96% (0.09%) of essential data elements missing, whereas only 1.18% (0.06%) were missing in the patient records (P<0.0001). Intervention was associated with (a) significantly less missing data (9.33% (0.08%); P<0.0001); (b) marginal improvement in reliability of data and mean (SE) overall centre reliability score (0.53 (0.15) v 0.44 (0.17)); and (c) improved accuracy of assigned Parsonnet risk scores (κ 0.84 v 0.70). Mortality scores (actual minus risk adjusted mortality) for all participating centres fell within two standard deviations of the mean score.
Conclusion: A short period of independent validation, monitoring, and feedback improved the quality of an outcomes database and improved the process of risk adjustment, but with substantial room for further improvement. Wider application of this approach should increase the credibility of similar databases before their public release.

What is already known on this topic

  • Release of healthcare outcomes into the public domain has altered referral patterns and has led to improvement in some centres and elimination of others
  • The tacit assumption is that such outcomes data are accurate and can be relied on by the public and by healthcare providers to guide improvements

What this study adds

  • Sampling of a published national cardiac surgery database in England revealed it to be both incomplete and unreliable in its ability to yield accurate, risk adjusted outcomes data
  • An independent short process of monitoring, validation, and feedback improved the quality of the database
  • Such databases probably require an ongoing process of monitoring in order to allow data of adequate quality to be generated for the purpose of improving healthcare outcomes

17.
Objective: To estimate the association of driver air bag presence with driver fatality in road traffic crashes. Design: Matched pair cohort study. Setting: All passenger vehicle crashes in the United States during 1990-2000 inclusive. Subjects: 51 031 driver-passenger pairs in the same vehicle. Results: Drivers with an air bag were less likely to die than drivers without an air bag (adjusted relative risk 0.92, 95% confidence interval 0.88 to 0.96). This estimate was nearly the same whether drivers wore a seat belt (adjusted relative risk 0.93) or not (0.91). Air bags were associated with more protection for women (0.88 (0.82 to 0.93)) than for men (0.94 (0.90 to 0.99)). Drivers wearing a seat belt were less likely to die than unbelted drivers (0.35 (0.33 to 0.36)). Belted drivers with an air bag were less likely to die than unbelted drivers without an air bag (0.32 (0.30 to 0.34)). Conclusions: If the associations are causal, the average risk of driver death was reduced 8% (95% confidence interval 4% to 12%) by an air bag. Benefit was similar for belted and unbelted drivers and was slightly greater for women. However, seat belts offered much more protection than air bags.

What is already known on this topic

Studies have estimated that driver air bags reduce the risk of death in a road vehicle crash by 10-14%. These studies disagree as to whether benefit is greater for drivers wearing a seat belt or for unbelted drivers.

What this study adds

Having an air bag was associated with an 8% reduction in the risk of death, whether the driver was belted or not. The reduction in risk was greater for women (12%) than for men (6%). Seat belts provided much greater protection, with seat belt use reducing the risk of death by 65% (or by 68% in combination with an air bag).
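A relative risk of 0.92 corresponds to the 8% risk reduction quoted above. As a hedged illustration only (the counts below are hypothetical, and the study's estimates come from an adjusted matched-pair analysis rather than this crude calculation), an unadjusted relative risk with a 95% confidence interval can be computed as:

```python
import math

def relative_risk_ci(a_deaths, a_total, b_deaths, b_total):
    """Crude relative risk of group A versus group B,
    with a 95% CI from the log-normal approximation."""
    rr = (a_deaths / a_total) / (b_deaths / b_total)
    se_log = math.sqrt(1 / a_deaths - 1 / a_total +
                       1 / b_deaths - 1 / b_total)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, lo, hi

# Hypothetical counts: deaths among drivers with vs without an air bag
rr, lo, hi = relative_risk_ci(920, 10000, 1000, 10000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

A relative risk below 1 favours the exposure; 1 minus the relative risk gives the percentage risk reduction reported in the abstract.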

18.
Objective: To evaluate the cost effectiveness of four disease modifying treatments (interferon betas and glatiramer acetate) for relapsing remitting and secondary progressive multiple sclerosis in the United Kingdom. Design: Modelling cost effectiveness. Setting: UK NHS. Participants: Patients with relapsing remitting multiple sclerosis and secondary progressive multiple sclerosis. Results: The base case cost per quality adjusted life year gained by using any of the four treatments ranged from £42 000 ($66 469; €61 630) to £98 000, based on efficacy information in the public domain. Uncertainty analysis suggests that the probability of any of these treatments having a cost effectiveness better than £20 000 at 20 years is below 20%. The key determinants of cost effectiveness were the time horizon, the progression of patients after stopping treatment, differential discount rates, and the price of the treatments. Conclusions: Cost effectiveness varied markedly between the interventions. Uncertainty around point estimates was substantial. This uncertainty could be reduced by conducting research on the true magnitude of the effect of these drugs, the progression of patients after stopping treatment, the costs of care, and the quality of life of the patients. Price was the key modifiable determinant of the cost effectiveness of these treatments.

What is already known on this topic

Interferon beta and glatiramer acetate are the only disease modifying therapies used to treat multiple sclerosis. Economic evaluations of these drugs have had flaws in the specification of the course of the disease, efficacy, duration of treatment, mortality, and the analysis of uncertainty. None of the existing estimates of cost effectiveness can be viewed as robust.

What this study adds

The cost per quality adjusted life year gained is unlikely to be less than £40 000 for interferon beta or glatiramer acetate. Experience after stopping treatment is a key determinant of the cost effectiveness of these therapies. Key factors affecting point estimates of cost effectiveness are the cost of interferon beta and glatiramer acetate, the effect of these therapies on disease progression, and the time horizon evaluated.
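The cost-per-QALY figures above are incremental cost effectiveness ratios: the extra cost of a treatment divided by the extra quality adjusted life years it yields relative to the comparator. As a minimal, hypothetical sketch (the figures below are illustrative only; the study's estimates come from a long-horizon disease-progression model, not a single division):

```python
def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Incremental cost effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qaly_new - qaly_old)

# Hypothetical: treatment adds £21 000 in lifetime cost and 0.5 QALYs
# versus usual care
per_qaly = icer(cost_new=60_000, qaly_new=10.5,
                cost_old=39_000, qaly_old=10.0)
print(f"£{per_qaly:,.0f} per QALY gained")  # → £42,000 per QALY gained
```

Comparing the ratio against a willingness-to-pay threshold (such as the £20 000 per QALY figure in the abstract) is what drives the "probability of being cost effective" statements in the uncertainty analysis.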

19.

Background

Previous studies of drug trials submitted to regulatory authorities have documented selective reporting of both entire trials and favorable results. The objective of this study is to determine the publication rate of efficacy trials submitted to the Food and Drug Administration (FDA) in approved New Drug Applications (NDAs) and to compare the trial characteristics as reported by the FDA with those reported in publications.

Methods and Findings

This is an observational study of all efficacy trials found in approved NDAs for New Molecular Entities (NMEs) from 2001 to 2002 inclusive and all published clinical trials corresponding to the trials within the NDAs. For each trial included in the NDA, we assessed its publication status, primary outcome(s) reported and their statistical significance, and conclusions. Seventy-eight percent (128/164) of efficacy trials contained in FDA reviews of NDAs were published. In a multivariate model, trials with favorable primary outcomes (OR = 4.7, 95% confidence interval [CI] 1.33–17.1, p = 0.018) and active controls (OR = 3.4, 95% CI 1.02–11.2, p = 0.047) were more likely to be published. Forty-one primary outcomes from the NDAs were omitted from the papers. Papers included 155 outcomes that were in the NDAs, 15 additional outcomes that favored the test drug, and two other neutral or unknown additional outcomes. Excluding outcomes with unknown significance, there were 43 outcomes in the NDAs that did not favor the NDA drug. Of these, 20 (47%) were not included in the papers. The statistical significance of five of the remaining 23 outcomes (22%) changed between the NDA and the paper, with four changing to favor the test drug in the paper (p = 0.38). Excluding unknowns, 99 conclusions were provided in both NDAs and papers, nine conclusions (9%) changed from the FDA review of the NDA to the paper, and all nine did so to favor the test drug (100%, 95% CI 72%–100%, p = 0.0039).

Conclusions

Many trials were still not published 5 years after FDA approval. Discrepancies between the trial information reviewed by the FDA and information found in published trials tended to lead to more favorable presentations of the NDA drugs in the publications. Thus, the information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.
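The 78% publication rate and the odds ratios above can be illustrated with a hedged sketch. The 128/164 proportion is from the abstract, but the 2×2 counts below are hypothetical, and the study's odds ratios come from a multivariate model rather than this crude calculation:

```python
def odds_ratio(a, b, c, d):
    """Crude odds ratio for a 2x2 table:
    a = favorable outcome & published,   b = favorable & unpublished,
    c = unfavorable outcome & published, d = unfavorable & unpublished."""
    return (a * d) / (b * c)

published, total = 128, 164
print(f"Publication rate: {published / total:.0%}")  # → 78%

# Hypothetical counts for illustration only
print(f"Crude OR for publication given a favorable outcome: "
      f"{odds_ratio(90, 10, 38, 26):.1f}")
```

An odds ratio well above 1, as in the study's adjusted estimate of 4.7, indicates that trials with favorable primary outcomes were substantially more likely to reach publication.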

20.

Background

India is an increasingly influential player in the global pharmaceutical market. Key parts of the drug regulatory system are controlled by the states, each of which applies its own standards for enforcement, not always consistent with the others. A pilot study was conducted in two major cities in India, Delhi and Chennai, to explore the extent of substandard and counterfeit drugs available in the market and to discuss how the Indian state and federal governments could improve drug regulation and, more importantly, regulatory enforcement to combat these drugs.

Methodology/Principal Findings

Random samples of antimalarial, antibiotic, and antimycobacterial drugs were collected from pharmacies in urban and peri-urban areas of Delhi and Chennai, India. Semi-quantitative thin-layer chromatography and disintegration testing were used to measure the concentration of active ingredients against internationally acceptable standards. In Delhi, 12% of all samples tested failed one or both tests and were substandard; in Chennai, 5% of all samples tested failed one or both tests and were substandard. Spatial heterogeneity between pharmacies was observed, with some having more substandard drugs than others (30% v 0%), as was product heterogeneity, with some drugs more frequently substandard than others (12% v 7%).

Conclusions/Significance

In a study using basic field-deployable techniques of lesser sensitivity, rather than the most advanced laboratory-based techniques, the prevalence of substandard drugs in Delhi and Chennai is confirmed to be roughly in accordance with the Indian government's current estimates. However, important spatial and product heterogeneity exists, which suggests that India's substandard drug problem is not ubiquitous but is driven by a subset of manufacturers and pharmacies that thrive in an inadequately regulated environment. The drug regulatory system in India probably needs to be improved, both for domestic consumption and because India is an increasingly important exporter of drugs to both developed and developing countries. Some poor countries with high burdens of disease have weak drug regulatory systems and import many HIV/AIDS, tuberculosis, and malaria drugs from India.
