Similar Articles
20 similar articles found.
1.

Objectives

Mortality in patients starting antiretroviral therapy (ART) is higher in Malawi and Zambia than in South Africa. We examined whether different monitoring of ART (viral load [VL] in South Africa and CD4 count in Malawi and Zambia) could explain this mortality difference.

Design

Mathematical modelling study based on data from ART programmes.

Methods

We used a stochastic simulation model to study the effect of VL monitoring on mortality over 5 years. In baseline scenario A all parameters were identical between strategies except for more timely and complete detection of treatment failure with VL monitoring. Additional scenarios introduced delays in switching to second-line ART (scenario B) or higher virologic failure rates (due to worse adherence) when monitoring was based on CD4 counts only (scenario C). Results are presented as relative risks (RR) with 95% prediction intervals and percent of observed mortality difference explained.

Results

RRs comparing VL with CD4 cell count monitoring were 0.94 (0.74–1.03) in scenario A, 0.94 (0.77–1.02) with delayed switching (scenario B) and 0.80 (0.44–1.07) when assuming a 3-times higher rate of failure (scenario C). The observed mortality at 3 years was 10.9% in Malawi and Zambia and 8.6% in South Africa (absolute difference 2.3%). The percentage of the mortality difference explained by VL monitoring ranged from 4% (scenario A) to 32% (scenarios B and C combined, assuming a 3-times higher failure rate). Eleven percent was explained by non-HIV related mortality.

Conclusions

VL monitoring reduces mortality moderately when assuming improved adherence and decreased failure rates.

2.

Background

We studied how well first-year medical students understand and apply the concept of substituted judgment, following a course on clinical ethics.

Method

Students submitted essays on one of three ethically controversial scenarios presented in class. One scenario involved a patient who had lost decisional capacity. Through an iterative process of textual analysis, the essays were studied and coded for patterns in the ways students misunderstood or misapplied the principle of substituted judgment.

Results

Students correctly articulated course principles regarding patient autonomy, substituted judgment, and non-imposition of physician values. However, students showed misunderstanding by giving doctors the responsibility of balancing the interests of the patient against the interests of the family, by stating doctors and surrogates should be guided primarily by a best-interest standard, and by suggesting that patient autonomy becomes the guiding principle only when patients can no longer express their wishes.

Conclusion

Students did not appear to internalize or correctly apply the substituted judgment standard, even though they could describe it accurately. This suggests the substituted judgment standard may run counter to students' moral intuitions, making it harder to apply in clinical practice.

3.

Background

This cluster-randomised controlled trial determined the effectiveness of an evidence-based, pamphlet intervention in improving low back pain (LBP)-related beliefs among pharmacy consumers.

Methods

Thirty-five community pharmacies were randomised to three groups: pamphlet + education intervention [n = 11]; pamphlet-only intervention [n = 11]; control: usual care [n = 13]. Eligibility requirements for clusters were community-based pharmacies and proprietor consent to participate. Pharmacy consumers (N = 317) aged 18–65 years currently experiencing LBP participated. Intervention group allocation depended on the pharmacy attended. Individual-level outcomes were measured at pre-intervention (T0) and at two (T1) and eight (T2) weeks post-intervention, and included beliefs about LBP [Back Pain Beliefs Questionnaire (BBQ); Fear Avoidance Beliefs Questionnaire (FABQ)]. Secondary outcomes included pain severity, activity impairment and the perceived usefulness of the pamphlet. Primary investigators, outcome assessors and the statistician were blinded to group allocation; pharmacy staff and consumers were unblinded.

Results

Of the 35 pharmacies recruited (317 consumers), no clusters were lost to follow-up. Individual follow-up was available for n = 24 at 2 weeks only, n = 38 at 8 weeks only, and n = 148 at both time points, giving 210 consumers analysed (148 + 24 + 38); 107 were excluded owing to no follow-up. After adjusting for baseline scores, there were no significant differences in beliefs (at 2 or 8 weeks) between the pamphlet groups (with or without education) and control, or between the pamphlet-with-education and pamphlet-without-education groups. Work-related fear (FABQ) was significantly lower in consumers receiving the pamphlet (with or without education) than in controls (difference −2.3, 95% CI: −4.4 to −0.2), with no significant difference between the with- and without-education groups. Consumers receiving the pamphlet with education reported greater perceived usefulness than those receiving the pamphlet alone (difference 0.9, 95% CI: 0.0 to 1.8).

Conclusion

Community pharmacies provided a feasible primary care portal for implementing evidence-based information. The associated improvement in work-related LBP beliefs among consumers receiving the pamphlet suggests this simple intervention may be a useful component of care.

Trial Registration

ACTR.org.au ACTRN12611000053921

4.

Introduction

The National Institute for Health and Clinical Excellence guidelines recommend acupuncture as a clinically effective treatment for chronic back pain. However, there is insufficient knowledge of what factors contribute to patients’ positive and negative experiences of acupuncture, and how those factors interact in terms of the acceptability of treatment. This study used patient interviews following acupuncture treatment for back pain to identify, understand and describe the elements that contribute to or detract from the acceptability of treatment.

Methods

The study used semi-structured interviews. Twelve patients were interviewed using an interview schedule as a sub-study nested within a randomised controlled trial of acupuncture for chronic back pain. The interviews were analysed using thematic analysis.

Results and Discussion

Three over-arching themes emerged from the analysis. The first, facilitators of acceptability, contained five subthemes: experience of pain relief, improvements in physical activity, relaxation, psychological benefit, and reduced reliance on medication. The second over-arching theme identified barriers to acceptability, which included needle-related discomfort and temporary worsening of symptoms, pressure to continue treatment, and financial cost. The third over-arching theme comprised mediators of acceptability, which included pre-treatment mediators such as expectation and previous experience, and treatment-related mediators of time, therapeutic alliance, lifestyle advice and the patient’s active involvement in recovery. These themes inform our understanding of the acceptability of acupuncture to patients with low back pain.

Conclusion

The acceptability of acupuncture treatment for low back pain is complex and multifaceted. The therapeutic relationship between the practitioner and patient emerged as a strong driver of acceptability, and as a useful vehicle for developing patients' self-efficacy in pain management in the longer term. Unpleasant treatment-related effects do not necessarily detract from patients' overall perception of acceptability.

5.
6.

Introduction

Routine provider-initiated HIV testing and counselling (PITC) may increase HIV testing rates, but whether PITC is acceptable to health facility (HF) attendees is unclear. In the course of a PITC intervention study in Rwanda, we assessed the acceptability of PITC, reasons for being or not being tested and factors associated with HIV testing.

Methods

Attendees were systematically interviewed in March 2009 as they left the HF, regarding knowledge and acceptability of PITC, history of testing and reasons for being tested or not. Subsequently, PITC was introduced in 6 of the 8 HFs and a second round of interviews was conducted. Independent factors associated with testing were analysed using logistic regression. Randomly selected health care workers (HCWs) were also interviewed.

Results

A total of 1772 attendees were interviewed. Over 95% agreed with the PITC policy, both before and after its implementation. The most common reasons for testing were the desire to know one’s HIV status and having been offered an HIV test by an HCW. The most frequent reasons for not being tested were already knowing one’s HIV status and not being offered a test. In multivariable analysis, PITC, age ≥15 years, and not having been previously tested were significantly associated with testing. Although PITC increased workload, HIV testing rates increased and HCWs overwhelmingly supported the policy.
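As a small illustration of the multivariable analysis described above, the following is a minimal Python sketch (using statsmodels) of a logistic regression for a binary "tested" outcome against the three factors reported; the simulated data frame and variable names are hypothetical stand-ins, not the study's dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch: multivariable logistic regression for HIV testing (yes/no).
# All data below are simulated; column names are hypothetical, not the study's.
rng = np.random.default_rng(0)
n = 500
pitc = rng.integers(0, 2, n)                 # facility offered provider-initiated testing
age_15plus = rng.integers(0, 2, n)           # attendee aged >= 15 years
previously_tested = rng.integers(0, 2, n)    # tested for HIV before

# Simulated log-odds: testing more likely with PITC, age >= 15, and no prior test.
logit_p = -0.5 + 1.0 * pitc + 0.6 * age_15plus - 0.8 * previously_tested
tested = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

df = pd.DataFrame({"tested": tested, "pitc": pitc,
                   "age_15plus": age_15plus, "previously_tested": previously_tested})
fit = smf.logit("tested ~ pitc + age_15plus + previously_tested", data=df).fit(disp=False)
print(np.exp(fit.params))      # adjusted odds ratios for each factor
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the odds-ratio scale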

Conclusion

Among attendees and HCWs in Rwandan clinics, the acceptability of PITC was very high. PITC appeared to increase testing rates and may be helpful in prevention and early access to treatment.

7.

Background

The mechanism of veisalgia cephalgia or hangover headache is unknown. Despite a lack of mechanistic studies, there are a number of theories positing congeners, dehydration, or the ethanol metabolite acetaldehyde as causes of hangover headache.

Methods

We used a chronic headache model to examine how pure ethanol produces increased sensitivity for nociceptive behaviors in normally hydrated rats.

Results

Ethanol initially decreased sensitivity to mechanical stimuli on the face (analgesia), followed 4 to 6 hours later by inflammatory pain. Inhibiting alcohol dehydrogenase extended the analgesia, whereas inhibiting aldehyde dehydrogenase decreased it; neither treatment had nociceptive effects. Direct administration of acetate increased nociceptive behaviors, suggesting that accumulation of acetate, not acetaldehyde, results in hangover-like hypersensitivity in our model. Since acetate formation leads to adenosine accumulation, we administered an adenosine antagonist, which blocked the hypersensitivity.

Discussion

Our study shows that acetate contributes to hangover headache. These findings provide insight into the mechanisms of hangover headache and of headache induction.

8.

Background

Many journals now require that authors share their data with other investigators, either by depositing the data in a public repository or by making it freely available upon request. These policies are explicit but remain largely untested. We sought to determine how well authors comply with such policies by requesting data from authors who had published in one of two journals with clear data sharing policies.

Methods and Findings

We requested data from ten investigators who had published in either PLoS Medicine or PLoS Clinical Trials. All responses were carefully documented. In the event that we were refused data, we reminded authors of the journal's data sharing guidelines. If we did not receive a response to our initial request, a second request was made. Following the ten requests for raw data, three investigators did not respond, four authors responded and refused to share their data, two email addresses were no longer valid, and one author requested further details. A reminder of PLoS's explicit requirement that authors share data did not change the reply from the four authors who initially refused. Only one author sent an original data set.

Conclusions

We received only one of ten raw data sets requested. This suggests that journal policies requiring data sharing do not lead to authors making their data sets available to independent investigators.

9.

Background

The healthcare sector is a significant contributor to global carbon emissions, in part due to extensive travelling by patients and health workers.

Objectives

To evaluate the potential of telemedicine services based on videoconferencing technology to reduce travelling and thus carbon emissions in the healthcare sector.

Methods

A life cycle inventory was performed to evaluate the carbon reduction potential of telemedicine activities beyond a reduction in travel related emissions. The study included two rehabilitation units at Umeå University Hospital in Sweden. Carbon emissions generated during telemedicine appointments were compared with care-as-usual scenarios. Upper and lower bound emissions scenarios were created based on different teleconferencing solutions and thresholds for when telemedicine becomes favorable were estimated. Sensitivity analyses were performed to pinpoint the most important contributors to emissions for different set-ups and use cases.

Results

Replacing physical visits with telemedicine appointments resulted in a 40- to 70-fold decrease in carbon emissions. Factors such as meeting duration, bandwidth and use rates influence emissions to varying extents. According to the lower bound scenario, telemedicine becomes the greener choice at a distance of a few kilometers when the alternative is transport by car.
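A minimal back-of-the-envelope sketch of the kind of break-even comparison behind the distance threshold mentioned above; every parameter value is an illustrative assumption, not a figure from the study's life cycle inventory.

# Hedged sketch: one-way travel distance at which a telemedicine appointment emits
# less CO2e than driving to the clinic. All parameter values are assumptions.

GRID_INTENSITY = 0.05        # kg CO2e per kWh, assumed electricity mix
CAR_FACTOR = 0.18            # kg CO2e per km, assumed passenger-car emission factor
EQUIPMENT_PER_MEETING = 0.5  # kg CO2e, assumed allocated share of embodied equipment emissions
ENDPOINT_POWER_KW = 0.3      # kW, assumed draw of one videoconferencing endpoint

def telemedicine_emissions(duration_h: float) -> float:
    """CO2e of one appointment: two endpoints running plus an allocated equipment share."""
    return 2 * ENDPOINT_POWER_KW * duration_h * GRID_INTENSITY + EQUIPMENT_PER_MEETING

def car_round_trip(distance_km_one_way: float) -> float:
    """CO2e of a round trip by car to the clinic."""
    return 2 * distance_km_one_way * CAR_FACTOR

def break_even_km(duration_h: float) -> float:
    """One-way distance beyond which telemedicine is the lower-carbon option."""
    return telemedicine_emissions(duration_h) / (2 * CAR_FACTOR)

print(f"1-hour appointment: break-even at ~{break_even_km(1.0):.1f} km one way")
print(f"10 km each way by car: {car_round_trip(10):.1f} kg CO2e vs "
      f"{telemedicine_emissions(1.0):.2f} kg CO2e for the video appointment")
# ~1.5 km with these assumptions, i.e. the same order of magnitude as the
# 'few kilometers' lower-bound threshold reported above.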

Conclusions

Telemedicine is a potent carbon reduction strategy in the health sector. To contribute significantly to climate change mitigation, however, a paradigm shift may be required in which telemedicine is regarded as an essential component of ordinary health care activities, and not only as a service to the few who lack access to care because of geography, isolation or other constraints.

10.

Background

Evaluating the net exchange of greenhouse gas (GHG) emissions in conjunction with soil carbon sequestration may give comprehensive insight into the role of agricultural production in global warming.

Materials and Methods

Measured methane (CH4) and nitrous oxide (N2O) emission data were used to test the applicability of the Denitrification and Decomposition (DNDC) model to a winter wheat–single rice rotation system in southern China. Six alternative scenarios were simulated against the baseline scenario to evaluate their long-term (45-year) impacts on net global warming potential (GWP) and greenhouse gas intensity (GHGI).

Principal Results

The simulated cumulative CH4 and N2O emissions fell within the statistical deviation ranges of the field data, with the exception of N2O emissions during the rice-growing season and of both gases in the control treatment. Sensitivity tests showed that both CH4 and N2O emissions were significantly affected by changes in environmental factors and in management practices. Compared with the baseline scenario, the long-term simulation had the following results: (1) the high straw return and manure amendment scenarios greatly increased CH4 emissions, while the other scenarios had similar CH4 emissions; (2) high inorganic N fertilizer increased N2O emissions, while the manure amendment and reduced inorganic N fertilizer scenarios decreased N2O emissions; (3) the mean annual soil organic carbon sequestration rates (SOCSR) under the manure amendment, high straw return, and no-tillage scenarios averaged 0.20 t C ha−1 yr−1, greater than under the other scenarios; and (4) the reduced inorganic N fertilizer scenario produced the least N loss from the system, while all scenarios produced comparable grain yields.
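A minimal sketch of how net GWP and GHGI are typically aggregated from simulated fluxes. The 100-year GWP conversion factors below are commonly used IPCC (AR4) values, and the flux numbers are placeholders rather than DNDC outputs.

# Hedged sketch: net GWP and GHGI aggregation for one scenario-year.
# GWP100 factors (25 for CH4, 298 for N2O) are IPCC AR4 values; fluxes are placeholders.

GWP_CH4 = 25.0           # kg CO2e per kg CH4
GWP_N2O = 298.0          # kg CO2e per kg N2O
C_TO_CO2 = 44.0 / 12.0   # convert sequestered carbon to CO2

def net_gwp(ch4_kg_ha: float, n2o_kg_ha: float, soc_seq_t_c_ha: float) -> float:
    """Net GWP in kg CO2e per hectare: gas emissions minus soil carbon sequestration."""
    return ch4_kg_ha * GWP_CH4 + n2o_kg_ha * GWP_N2O - soc_seq_t_c_ha * 1000 * C_TO_CO2

def ghgi(net_gwp_kg_ha: float, grain_yield_t_ha: float) -> float:
    """Greenhouse gas intensity: kg CO2e per tonne of grain."""
    return net_gwp_kg_ha / grain_yield_t_ha

gwp = net_gwp(ch4_kg_ha=250.0, n2o_kg_ha=3.0, soc_seq_t_c_ha=0.20)
print(f"net GWP = {gwp:.0f} kg CO2e/ha, GHGI = {ghgi(gwp, grain_yield_t_ha=10.0):.0f} kg CO2e/t grain")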

Conclusions

In terms of net GWP and GHGI, as a comprehensive assessment of climate change mitigation and crop production, the reduced inorganic N fertilizer scenario, followed by the no-tillage scenario, would be advocated for this cropping system.

11.

Background

New evidence on the clinical effectiveness of acupuncture plus usual care (acupuncture) and counselling plus usual care (counselling) for patients with depression suggests the need to investigate the health-related quality of life and costs of these treatments to understand whether they should be considered a good use of limited health resources.

Methods and Findings

The cost-effectiveness analyses are based on the Acupuncture, Counselling or Usual care for Depression (ACUDep) trial results. Statistical analyses demonstrate a difference in mean quality-adjusted life years (QALYs) and suggest differences in mean costs which are mainly due to the price of the interventions. Probabilistic sensitivity analysis is used to express decision uncertainty. Acupuncture and counselling are found to have higher mean QALYs and costs than usual care. In the base case analysis, acupuncture has an incremental cost-effectiveness ratio (ICER) of £4,560 per additional QALY and is cost-effective with a probability of 0.62 at a cost-effectiveness threshold of £20,000 per QALY. Counselling compared with acupuncture is more effective and more costly, with an ICER of £71,757 and a probability of being cost-effective of 0.36. A scenario analysis of counselling versus usual care, excluding acupuncture as a comparator, results in an ICER of £7,935 and a probability of 0.91.
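A minimal sketch of the two quantities used in this analysis, the ICER and the probability of being cost-effective at a willingness-to-pay threshold, computed here from synthetic probabilistic sensitivity analysis draws; the numbers are placeholders, not the ACUDep data.

import numpy as np

# Hedged sketch: ICER and probability of cost-effectiveness from paired draws of
# incremental costs and QALYs. All draws below are synthetic placeholders.
rng = np.random.default_rng(1)
n = 10_000
d_cost = rng.normal(loc=320.0, scale=400.0, size=n)   # incremental cost (GBP)
d_qaly = rng.normal(loc=0.07, scale=0.15, size=n)     # incremental QALYs

icer = d_cost.mean() / d_qaly.mean()                  # ratio of mean increments
threshold = 20_000.0                                  # willingness to pay (GBP per QALY)
net_benefit = threshold * d_qaly - d_cost             # incremental net monetary benefit
p_cost_effective = (net_benefit > 0).mean()

print(f"ICER = £{icer:,.0f} per additional QALY")
print(f"P(cost-effective at £{threshold:,.0f}/QALY) = {p_cost_effective:.2f}")
# Repeating the net-benefit calculation across a range of thresholds traces the
# cost-effectiveness acceptability curve used to express decision uncertainty.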

Conclusions

Acupuncture is cost-effective compared with counselling or usual care alone, although the ranking of counselling and acupuncture depends on the relative cost of delivering these interventions. For patients in whom acupuncture is unavailable or perhaps inappropriate, counselling has an ICER less than most cost-effectiveness thresholds. However, further research is needed to determine the most cost-effective treatment pathways for depressed patients when the full range of available interventions is considered.

12.

Aims

To estimate the number of coronary heart disease (CHD) deaths potentially preventable in England in 2020 under four risk factor change scenarios.

Methods and Results

Using 2007 as baseline, the IMPACTSEC model was extended to estimate the potential number of CHD deaths preventable in England in 2020 by age, gender and Index of Multiple Deprivation 2007 quintiles given four risk factor change scenarios: (a) assuming recent trends will continue; (b) assuming optimal but feasible levels already achieved elsewhere; (c) an intermediate point, halfway between current and optimal levels; and (d) assuming plateauing or worsening levels, the worst case scenario. These four scenarios were compared to the baseline scenario with both risk factors and CHD mortality rates remaining at 2007 levels. This would result in approximately 97,000 CHD deaths in 2020. Assuming recent trends will continue would avert approximately 22,640 deaths (95% uncertainty interval: 20,390-24,980). There would be some 39,720 (37,120-41,900) fewer deaths in 2020 with optimal risk factor levels and 22,330 fewer (19,850-24,300) in the intermediate scenario. In the worst case scenario, 16,170 additional deaths (13,880-18,420) would occur. If optimal risk factor levels were achieved, the gap in CHD rates between the most and least deprived areas would halve with falls in systolic blood pressure, physical inactivity and total cholesterol providing the largest contributions to mortality gains.

Conclusions

CHD mortality reductions of up to 45%, accompanied by significant reductions in area deprivation mortality disparities, would be possible by implementing optimal preventive policies.

13.

Background

Malaria is the second highest contributor to the disease burden in Africa and there is a need to identify low cost prevention strategies. The objectives of this study were to estimate the prevalence of malaria parasitaemia among infants and to measure the association between peer counselling for exclusive breastfeeding (EBF), vitamin A supplementation, anthropometric status (weight and length) and malaria parasitaemia.

Methods

A cluster randomized intervention trial was conducted between 2006 and 2008 where 12 of 24 clusters, each comprising one or two villages, in Eastern Uganda were allocated to receive peer counselling for EBF. Women in their third trimester of pregnancy (based on the last normal menstrual period) were recruited in all 24 clusters and followed up until their children's first birthday. Blood was drawn from 483 infants between 3 and 12 months of age, to test for malaria parasitaemia.

Results

The prevalence of malaria parasitaemia was 11% in the intervention areas and 10% in the control areas. The intervention did not appear to decrease the prevalence of malaria (PR 1.7; 95% CI: 0.9, 3.3). After controlling for potential confounders, infants not supplemented with vitamin A had a higher prevalence of malaria than those who had been supplemented (PR 6.1; 95% CI: 2.1, 17.6). Among children supplemented with vitamin A, every unit increase in length-for-age Z (LAZ) score was associated with a reduced prevalence of malaria (PR 0.5; 95% CI: 0.4, 0.6). There was no association between LAZ scores and malaria among children who had not been supplemented.
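For readers unfamiliar with the effect measure, a minimal sketch of how a crude prevalence ratio is computed from a 2×2 table; the counts are hypothetical, and the adjusted PRs reported above come from a multivariable model rather than this crude calculation.

# Hedged sketch: crude prevalence ratio (PR) of malaria parasitaemia by vitamin A
# supplementation status. The counts are hypothetical (an invented split of the
# 483 infants); the abstract's adjusted PRs come from a multivariable model.

malaria_unsupplemented, total_unsupplemented = 20, 80
malaria_supplemented, total_supplemented = 30, 403

prev_unsupplemented = malaria_unsupplemented / total_unsupplemented
prev_supplemented = malaria_supplemented / total_supplemented
pr = prev_unsupplemented / prev_supplemented

print(f"prevalence, no vitamin A: {prev_unsupplemented:.1%}")
print(f"prevalence, vitamin A:    {prev_supplemented:.1%}")
print(f"crude PR = {pr:.1f}")   # PR > 1: higher prevalence among unsupplemented infants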

Conclusion

Peer counselling for exclusive breastfeeding did not decrease the prevalence of malaria parasitaemia. Children who had not received vitamin A supplementation had a higher prevalence of malaria than children who had been supplemented.

Trial registration

Clinicaltrials.gov: NCT00397150.

14.
15.

Background

Noninvasive physical management is often prescribed for headache and neck pain. Systematic reviews, however, indicate that the evidence of its efficacy is limited. Our aim was to evaluate the effectiveness of a workplace educational and physical program in reducing headache and neck/shoulder pain.

Methodology/Principal Findings

Cluster-randomized controlled trial. All municipal workers of the City of Turin, Italy, were invited to participate. Those who agreed were randomly assigned, according to their departments, to the intervention group (IG) or the control group, and were given diaries for daily recording of pain episodes for 1 month (baseline). Subsequently, only the IG (119 departments, 923 workers) began the physical and educational program, whereas the control group (117 departments, 990 workers) did not receive any intervention. All participants were again given diaries for daily recording of pain episodes after 6 months of intervention. The primary outcome was the change in the frequency of headache (expressed as the proportion of subjects with a ≥50% reduction in frequency; responder rate); secondary outcomes included the absolute reductions in the number of days per month with headache and with neck/shoulder pain. Differences between the two groups were evaluated using mixed-effect regression models. The IG showed a higher responder rate [risk ratio, 95% confidence interval (CI)] for headache (1.58; 1.28 to 1.92) and for neck/shoulder pain (1.53; 1.27 to 1.82), and a larger reduction in days per month (95% CI) with headache (−1.72; −2.40 to −1.04) and with neck/shoulder pain (−2.51; −3.56 to −1.47).

Conclusions

The program effectively reduced headache and neck/shoulder pain in a large working community and appears to be easily transferable to primary-care settings. Further trials are needed to investigate the program's effectiveness in a clinical setting, for highly selected patients suffering from specific headache types.

Trial Registration

ClinicalTrials.gov NCT00551980

16.

Background

The hospital standardized mortality ratio (HSMR) was developed to evaluate and improve hospital quality. Different methods can be used to standardize the hospital mortality ratio. Our aim was to assess the validity and applicability of directly and indirectly standardized hospital mortality ratios.

Methods

Retrospective scenario analysis using routinely collected hospital data to compare deaths predicted by the indirectly standardized case-mix adjustment method with observed deaths. Discharges from Dutch hospitals in the period 2003–2009 were used to estimate the underlying prediction models. We analysed variation in indirectly standardized hospital mortality ratios (HSMRs) when changing the case-mix distributions using different scenarios. Sixty-one Dutch hospitals were included in our scenario analysis.

Results

A numerical example showed that when interaction between hospital and case-mix is present and case-mix differs between hospitals, indirectly standardized HSMRs vary between hospitals providing the same quality of care. In empirical data analysis, the differences between directly and indirectly standardized HSMRs for individual hospitals were limited.
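The numerical example can be made concrete with a small sketch: two hospitals with identical stratum-specific mortality (i.e. the same quality of care) but different case-mix receive different indirectly standardized ratios when a hospital-by-case-mix interaction is present, whereas direct standardization rates them equally. All rates and counts below are invented for illustration.

# Hedged sketch: indirect vs. direct standardization under a hospital-by-case-mix
# interaction. All rates and counts are invented for illustration.

reference_rate = {"low_risk": 0.05, "high_risk": 0.20}   # benchmark (reference) mortality rates
hospital_rate  = {"low_risk": 0.04, "high_risk": 0.22}   # identical in both hospitals

case_mix = {
    "Hospital A": {"low_risk": 800, "high_risk": 200},
    "Hospital B": {"low_risk": 200, "high_risk": 800},
}
standard_population = {"low_risk": 500, "high_risk": 500}  # reference mix for direct standardization

def indirect_ratio(counts):
    """Observed deaths divided by deaths expected under the reference rates (indirect)."""
    observed = sum(n * hospital_rate[s] for s, n in counts.items())
    expected = sum(n * reference_rate[s] for s, n in counts.items())
    return observed / expected

def direct_ratio():
    """Hospital vs. reference rates applied to one standard population (direct)."""
    observed = sum(n * hospital_rate[s] for s, n in standard_population.items())
    expected = sum(n * reference_rate[s] for s, n in standard_population.items())
    return observed / expected

for name, counts in case_mix.items():
    print(f"{name}: indirect = {indirect_ratio(counts):.2f}, direct = {direct_ratio():.2f}")
# Indirect ratios differ (0.95 vs. 1.08) despite identical care; direct is 1.04 for both.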

Conclusion

Direct standardization is not affected by the presence of interaction between hospital and case-mix and is therefore theoretically preferable to indirect standardization. Since direct standardization is practically impossible when multiple predictors are included in the case-mix adjustment model, indirect standardization is the only available method to compute the HSMR. Before interpreting such indirectly standardized HSMRs, the case-mix distributions of individual hospitals and the presence of interactions between hospital and case-mix should be assessed.

17.

Background

Although the number of newly detected leprosy cases has decreased globally, a quarter of a million new cases are detected annually and eradication remains far away. Current options for leprosy prevention are contact tracing and BCG vaccination of infants. Future options may include chemoprophylaxis and early diagnosis of subclinical infections. This study compared the predicted trends in leprosy case detection of future intervention strategies.

Methods

Seven leprosy intervention scenarios were investigated with a microsimulation model (SIMCOLEP) to predict future leprosy trends. The baseline scenario consisted of passive case detection, multidrug therapy, contact tracing, and BCG vaccination of infants. The other six scenarios were modifications of the baseline, as follows: no contact tracing; with chemoprophylaxis; with early diagnosis of subclinical infections; replacement of the BCG vaccine with a new tuberculosis vaccine ineffective against Mycobacterium leprae (“no BCG”); no BCG with chemoprophylaxis; and no BCG with early diagnosis.

Findings

Without contact tracing, the model predicted an initial drop in the new case detection rate due to a delay in detecting clinical cases among contacts; eventually, however, this scenario would lead to new case detection rates higher than under the baseline program. Both chemoprophylaxis and early diagnosis would prevent new cases by shortening the infectious period of subclinical cases through their detection and cure. Replacing BCG would increase the new case detection rate of leprosy, but this effect could be offset with either chemoprophylaxis or early diagnosis.

Conclusions

This study showed that the leprosy incidence would be reduced substantially by good BCG vaccine coverage and the combined strategies of contact tracing, early diagnosis, and treatment of infection and/or chemoprophylaxis among household contacts. To effectively interrupt the transmission of M. leprae, it is crucial to continue developing immuno- and chemoprophylaxis strategies and an effective test for diagnosing subclinical infections.

18.

Background

Effective interprofessional collaboration requires that team members share common perceptions and expectations of each other's roles.

Objective

Describe and compare residents’ and nurses’ perceptions and expectations of their own and each other’s professional roles in the context of an Internal Medicine ward.

Methods

A convenience sample of 14 resident and 14 nurse volunteers from the General Internal Medicine Division at the University Hospitals of Geneva, Switzerland, was interviewed to explore their perceptions and expectations of residents’ and nurses’ professional roles, for their own and the other profession. Interviews were analysed using thematic content analysis. The same respondents also completed a questionnaire asking about their own intended actions, and the actions they expected from the other profession, in response to 11 clinical scenarios.

Results

Three main themes emerged from the interviews: patient management, clinical reasoning and decision-making processes, and roles in the team. Nurses and residents shared general perceptions about patient management. However, there was a lack of shared perceptions and expectations regarding nurses’ autonomy in patient management, nurses’ participation in the decision-making process, professional interdependence, and residents’ involvement in teamwork. Results from the clinical scenarios showed that nurses’ intended actions differed from residents’ expectations, mainly regarding autonomy in patient management. The correlation between residents’ expectations and nurses’ intended actions was 0.56 (p = 0.08), while the correlation between nurses’ expectations and residents’ intended actions was 0.80 (p<0.001).

Conclusions

There are discordant perceptions and unmet expectations among nurses and residents about each other’s roles, including several aspects related to the decision-making process. Interprofessional education should foster a shared vision of each other’s roles and clarify the boundaries of autonomy of each profession.

19.

Background

The most recent World Health Organization (WHO) antiretroviral treatment guidelines recommend the inclusion of zidovudine (ZDV) or tenofovir (TDF) in first-line therapy. We conducted a cost-effectiveness analysis with emphasis on emerging patterns of drug resistance upon treatment failure and their impact on second-line therapy.

Methods

We used a stochastic simulation of a generalized HIV-1 epidemic in sub-Saharan Africa to compare two strategies for first-line combination antiretroviral treatment including lamivudine, nevirapine and either ZDV or TDF. Model input parameters were derived from literature and, for the simulation of resistance pathways, estimated from drug resistance data obtained after first-line treatment failure in settings without virological monitoring. Treatment failure and cost effectiveness were determined based on WHO definitions. Two scenarios with optimistic (no emergence; base) and pessimistic (extensive emergence) assumptions regarding occurrence of multidrug resistance patterns were tested.

Results

In the base scenario, the cumulative proportion of treatment failure according to WHO criteria was higher among first-line ZDV users (median after six years 36% [95% simulation interval 32%; 39%]) than among first-line TDF users (31% [29%; 33%]). Consequently, a higher proportion initiated second-line therapy (including lamivudine, boosted protease inhibitors and either ZDV or TDF) in the first-line ZDV user group (34% [31%; 37%]) than among first-line TDF users (30% [27%; 32%]). At the time of second-line initiation, a higher proportion of first-line ZDV users harboured TDF-resistant HIV (16%) than first-line TDF users harboured ZDV-resistant viruses (0% and 6% in the base and pessimistic scenarios, respectively). In the base scenario, the incremental cost-effectiveness ratio with respect to quality-adjusted life years (QALYs) was US$ 83 when TDF instead of ZDV was used in first-line therapy (pessimistic scenario: US$ 315), which was below the WHO threshold for high cost-effectiveness (US$ 2154).

Conclusions

Using TDF instead of ZDV in first-line treatment in resource-limited settings is very cost-effective and likely to better preserve future treatment options in the absence of virological monitoring.

20.

Trial Design

Best practices for training mid-level practitioners (MLPs) to improve global health services are not well characterized. Two hypotheses were tested: 1) Integrated Management of Infectious Disease (IMID) training would improve clinical competence, tested with a single-arm, pre-post design; and 2) on-site support (OSS) would yield additional improvements, tested with a cluster-randomized trial.

Methods

Thirty-six Ugandan health facilities (randomized 1:1 to parallel OSS and control arms) enrolled two MLPs each. All MLPs participated in IMID (a 3-week core course, two 1-week boost sessions, and distance learning). After the 3-week course, OSS-arm trainees participated in monthly OSS. Twelve written case scenarios tested clinical competencies in HIV/AIDS, tuberculosis, malaria, and other infectious diseases. Each participant completed different randomly assigned blocks of four scenarios before IMID (t0), after the 3-week course (t1), and after the second boost session (t2, 24 weeks after t1). Scoring guides were harmonized with IMID content and Ugandan national policy. Score analyses used a linear mixed-effects model. The primary outcome measure was longitudinal change in scenario scores.

Results

Scores were available for 856 scenarios. Mean correct scores at t0, t1, and t2 were 39.3%, 49.1%, and 49.6%, respectively. Mean score increases (95% CI, p-value) for t0–t1 (the pre-post period) and t1–t2 (the parallel-arm period) were 12.1 ((9.6, 14.6), p<0.001) and −0.6 ((−3.1, +1.9), p = 0.647) percentage points for the OSS arm, and 7.5 ((5.0, 10.0), p<0.001) and 1.6 ((−1.0, +4.1), p = 0.225) percentage points for the control arm. The estimated mean difference in t1–t2 score change, comparing the OSS arm with the control arm, was −2.2 ((−5.8, +1.4), p = 0.237) percentage points. From t0 to t2, mean scores increased for all 12 scenarios.
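A minimal sketch of the kind of linear mixed-effects score analysis described above, with a random intercept for health facility; the simulated data frame and column names are hypothetical, not the trial dataset.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hedged sketch: linear mixed-effects model of scenario scores over time by arm,
# with a random intercept per health facility. All data below are simulated.
rng = np.random.default_rng(2)
facility = np.repeat(np.arange(36), 2 * 3)        # 36 facilities x 2 trainees x 3 time points
arm = np.where(facility < 18, "OSS", "control")
time = np.tile(np.array([0, 1, 2]), 36 * 2)       # t0, t1, t2
facility_effect = rng.normal(0, 5, 36)[facility]  # between-facility variation
score = 39 + 10 * (time >= 1) + facility_effect + rng.normal(0, 8, facility.size)

df = pd.DataFrame({"score": score, "time": time, "arm": arm, "facility": facility})
fit = smf.mixedlm("score ~ C(time) * arm", data=df, groups=df["facility"]).fit()
print(fit.summary())   # fixed effects give the pre-post gain and the arm-by-time interaction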

Conclusions

Clinical competence increased significantly after the 3-week core course, and the improvement persisted for 24 weeks. No additional impact of OSS was observed. Data on clinical practice, facility-level performance and health outcomes will complete the assessment of the overall impact of IMID and OSS.

Trial Registration

ClinicalTrials.gov NCT01190540
