Similar Documents
 20 similar documents retrieved
1.

Background

Despite increasing interest in possible differences in virulence and transmissibility between different genotypes of M. tuberculosis, very little is known about how genotypes within a population change over decades, or about relationships to HIV infection.

Methods and Principal Findings

In a population-based study in rural Malawi, we examined smears and cultures from tuberculosis patients over a 20-year period using spoligotyping. Isolates were grouped into spoligotype families and lineages following previously published criteria. Time trends, HIV status, drug resistance and outcome were examined by spoligotype family and lineage. In addition, transmissibility was examined among pairs of cases with known epidemiological contact by assessing the proportion of transmissions confirmed for each lineage, on the basis of IS6110 RFLP similarity of the M. tuberculosis strains. 760 spoligotypes were obtained from smears from 518 patients from 1986–2002, and 377 spoligotypes from cultures from 347 patients from 2005–2008. There was good consistency in patients with multiple specimens. Among 781 patients with first-episode tuberculosis, the majority (76%) had Lineage 4 (“European/American”) strains; 9% had Lineage 3 (“East-African/Indian”); 8% Lineage 1 (“Indo-Oceanic”); and 2% Lineage 2 (“East-Asian”); the remainder were unclassifiable. Over time the proportion of Lineage 4 decreased from >90% to 60%, with an increase in the other three lineages (p<0.001). Lineage 1 strains were more common in those with HIV infection, even after adjusting for age, sex and year. There were no associations with drug resistance or outcome, and no differences by lineage in the proportion of pairs in which transmission was confirmed.

Conclusions

This is the first study to describe long-term trends in the four M. tuberculosis lineages in a population. Lineage 4 has probably been longstanding in this population, with relatively recent introductions and spread of Lineages 1–3, perhaps influenced by the HIV epidemic.

2.

Background

Intense interest surrounds the recent expansion of US National Institutes of Health (NIH) budgets as part of economic stimulus legislation. However, the relationship between NIH funding and cardiovascular disease research is poorly understood, making the likely impact of this policy change unclear.

Methods

The National Library of Medicine's PubMed database was searched for articles published from 1996 to 2006, originating from U.S. institutions, and containing the phrases “cardiolog,” “cardiovascular,” or “cardiac” in the first author's department. Research methodology, journal of publication, journal impact factor, and receipt of NIH funding were recorded. Differences in means and trends were tested with t-tests and linear regression, respectively, with P≤0.05 for significance.

Results

Of 117,643 world cardiovascular articles, 36,684 (31.2%) originated from the U.S., of which 10,293 (28.1%) received NIH funding. The NIH funded 40.1% of U.S. basic science articles, 20.3% of overall clinical trials, 18.1% of randomized controlled trials, and 12.2% of multicenter clinical trials. NIH-funded and total articles grew significantly (65 articles/year, P<0.001 and 218 articles/year, P<0.001, respectively). The proportion of articles receiving NIH funding was stable overall, but grew significantly for basic science and clinical trials (0.87%/year, P<0.001 and 0.67%/year, P = 0.029, respectively). NIH-funded articles had greater journal impact factors than non-NIH-funded articles (5.76 vs. 3.71, P<0.001).

Conclusions

NIH influence on U.S. cardiovascular research expanded in the past decade, during the period of NIH budget doubling. A substantial fraction of research is now directly funded and thus likely sensitive to budget fluctuations, particularly in basic science research. NIH funding predicts greater journal impact.

3.

Background

Cystic Echinococcosis (CE) is a zoonotic disease caused by the larval stage of Echinococcus granulosus. We determined the effects of high-dose Oxfendazole (OXF), combination Oxfendazole/Praziquantel (PZQ), and combination Albendazole (ABZ)/Praziquantel against CE in sheep.

Methodology/Principal Findings

A randomized placebo-controlled trial was carried out on 118 randomly selected ewes. They were randomly assigned to one of the following groups: 1) placebo; 2) OXF 60 mg/kg of body weight (BW) weekly for four weeks; 3) ABZ 30 mg/kg BW + PZQ 40 mg/kg BW weekly for 6 weeks; and 4) OXF 30 mg/kg BW + PZQ 40 mg/kg BW biweekly for 3 administrations (6 weeks). Percent protoscolex (PSC) viability was evaluated for each cyst using a 0.1% aqueous eosin vital stain. “Noninfective” sheep were those with no viable PSCs; “low-medium infective” were those with 1% to 60% PSC viability; and “high infective” were those with more than 60% PSC viability. We evaluated 92 of the 118 sheep. ABZ/PZQ led to the lowest PSC viability for lung cysts (12.7%), while OXF/PZQ did so for liver cysts (13.5%). The percentage of either “noninfective” or “low-medium infective” sheep was 90%, 93.8% and 88.9% for the OXF, ABZ/PZQ and OXF/PZQ groups, respectively, compared to 50% for placebo. After all necropsies were performed, CE prevalence in the flock was 95.7% (88/92), with a total of 1094 cysts (12.4 cysts/animal). On average, the two drug-combination groups resulted in pulmonary cysts that were 6 mm smaller and hepatic cysts that were 4.2 mm smaller than placebo (p<0.05).

Conclusions/Significance

We demonstrate that Oxfendazole at 60 mg/kg, combination Oxfendazole/Praziquantel and combination Albendazole/Praziquantel are effective regimens that can be added to control measures in animals and merit further study for the treatment of animal CE. Further investigations of different schedules of monotherapy or combined chemotherapy are needed, as well as studies to evaluate the safety and efficacy of Oxfendazole in humans.

4.

Background

Cataract is the leading cause of blindness in the world, and blindness from cataract is particularly common in low-income countries. The aim of this study is to explore the impact of cataract surgery on daily activities and time-use in Kenya, Bangladesh and the Philippines.

Methods/Principal Findings

A multi-centre intervention study was conducted in three countries. Time-use data were collected through interview from cases aged ≥50 years with visually impairing cataract (VA <6/24) and age- and gender-matched controls with normal vision (VA ≥6/18). Cases were offered free/subsidized cataract surgery. Approximately one year later participants were re-interviewed about time-use. At baseline across the three countries there were 651 cases and 571 controls. Fifty-five percent of cases accepted surgery. The response rate at follow-up was 84% (303 of 361) for operated cases and 80% (459 of 571) for controls. At baseline, cases were less likely to carry out productive activities (paid and non-paid work), spent less time on them, and spent more time in “inactivity” compared to controls. Approximately one year after cataract surgery, operated cases were more likely to undertake productive activities compared to baseline (Kenya from 55% to 88%; Bangladesh 60% to 95%; the Philippines 81% to 94%; p<0.001), and mean time spent on productive activities increased by one to two hours in each setting (p<0.001). Time spent in “inactivity” in Kenya and Bangladesh decreased by approximately two hours (p<0.001). The frequency of reported assistance with activities was more than halved in each setting (p<0.001).

Conclusions/Significance

This study provides empirical evidence that, following cataract surgery, older adults in low-income settings spend more time on productive activities, spend less time in inactivity and require less assistance. These findings have positive implications for well-being and inclusion, and support arguments of economic benefit at the household level from cataract surgery.

5.

Objective

To assess the validity of CRB-65 (Confusion, Respiratory rate >30 breaths/min, BP <90/60 mmHg, age >65 years) as a pneumonia severity index in a Malawian hospital population, and to determine whether an alternative score has greater accuracy in this setting.

Design

Forty-three variables were prospectively recorded during the first 48 hours of admission in all patients admitted to Queen Elizabeth Central Hospital, Malawi, for management of lower respiratory tract infection over a two-month period (N = 240). Calculation of sensitivity and specificity for CRB-65 in predicting mortality was followed by multivariate modeling to create a score with superior performance in this population.

Results

The median age was 37, HIV prevalence 79.9%, and overall mortality 18.3%. CRB-65 predicted mortality poorly, with an area under the ROC curve of 0.649. Independent predictors of death were: male sex, “S” (AOR 2.6); wasting, “W” (AOR 6.6); non-ambulatory, “A” (AOR 2.5); temperature >38°C or <35°C, “T” (AOR 3.2); and BP <100/60, “Bp” (AOR 3.7). Combining these factors into a severity index (SWAT-Bp) predicted mortality with high sensitivity and specificity (AUC: 0.867). Mortality for scores 0–5 was 0%, 3.3%, 7.4%, 29.2%, 61.5% and 87.5%, respectively. A score ≥3 was 84% sensitive and 77% specific for mortality prediction, with a negative predictive value of 95.8%.

Conclusion

CRB-65 performs poorly in this population. The SWAT-Bp score can accurately stratify patients: a score ≤2 indicates non-severe infection (mortality 4.4%) and ≥3 severe illness (mortality 45%).
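The scoring rule described in this abstract is simple enough to sketch directly. The following is an illustrative Python rendering of the SWAT-Bp score as restated here (one point per factor, severe if the score is ≥3); the function and variable names are our own, not from the original paper, and the mortality figures are copied from the abstract.

```python
# Illustrative SWAT-Bp sketch; names and thresholds restated from the abstract.

# Observed mortality (%) by score, as reported in the abstract.
MORTALITY_BY_SCORE = {0: 0.0, 1: 3.3, 2: 7.4, 3: 29.2, 4: 61.5, 5: 87.5}

def swat_bp_score(male: bool, wasting: bool, non_ambulatory: bool,
                  abnormal_temp: bool, low_bp: bool) -> int:
    """One point per factor: Sex (male), Wasting, non-Ambulatory,
    Temperature >38C or <35C, Blood pressure <100/60."""
    return sum([male, wasting, non_ambulatory, abnormal_temp, low_bp])

def classify(score: int) -> str:
    # The abstract's cut-off: a score >= 3 flags severe illness.
    return "severe" if score >= 3 else "non-severe"
```

For example, a male patient with wasting and an abnormal temperature scores 3 and is classified as severe.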

6.

Background

In July 2009, French public health authorities embarked on a mass vaccination campaign against the A/H1N1 2009 influenza pandemic. We explored the attitudes and behaviors of the general population toward pandemic vaccination.

Methodology/Principal Findings

We conducted a cross-sectional online survey among 2,253 French representative adults aged 18 to 64 from November 17 to 25, 2009 (completion rate: 93.8%). The main outcome was the acceptability of A/H1N1 vaccination, defined as previous receipt of the vaccine or intention to get vaccinated (“Yes, certainly”, “Yes, probably”). Overall, 17.0% (95% CI, 15.5% to 18.7%) of respondents accepted A/H1N1 vaccination. Independent factors associated with acceptability included: male sex (p = .0001); older age (p = .002); highest or lowest level of education (p = .016); non-clerical occupation (p = .011); having only one child (p = .008); and having received seasonal flu vaccination in the prior 3 years (p<.0001). Acceptability was also significantly higher among pregnant women (37.9%) and other at-risk groups with chronic diseases (34.8%) (p = .002). Only 35.5% of respondents perceived A/H1N1 influenza as a severe illness, and 12.7% had experienced A/H1N1 cases among close contacts; both factors were associated with higher acceptability (p<.0001 and p = .006, respectively). Acceptability was significantly higher among the 8.0% of respondents formally advised by their primary care physician to get vaccinated (59.5%) than among the 26.0% who did not consult their physician (15.8%) or the 63.7% who consulted but were not advised to get vaccinated (11.7%) (p<.0001). Among respondents who refused vaccination, 71.2% expressed concerns about vaccine safety.

Conclusions/Significance

Our survey occurred one week before the peak of the pandemic in France. We found that alarming public health messages aimed at increasing the perceived severity of the risk were counteracted by daily personal experience that did not confirm the threat, while vaccine safety remained a major concern. This dissonance may have been amplified by the fact that primary care physicians were not involved in the mass vaccination campaign.

7.

Background

This study evaluated two models of routine HIV testing of hospitalized children in a high HIV-prevalence resource-constrained African setting. Both models incorporated “task shifting,” or the allocation of tasks to the least-costly, capable health worker.

Methods and Findings

Two models were piloted for three months each within the pediatric department of a referral hospital in Lilongwe, Malawi, between January 1 and June 30, 2008. Model 1 utilized lay counselors for HIV testing instead of nurses and clinicians. Model 2 further shifted program flow and advocacy responsibilities from counselors to volunteer parents of HIV-infected children, called “patient escorts.” A retrospective review of data from 6318 hospitalized children offered HIV testing between January and December 2008 was conducted. The pilot quarters of Model 1 and Model 2 were compared, with Model 2 selected to continue after the pilot period. There was a 2-fold increase in patients offered HIV testing with Model 2 compared with Model 1 (43.1% vs 19.9%, p<0.001). Furthermore, patients in Model 2 were younger (17.3 vs 26.7 months, p<0.001) and tested sooner after admission (1.77 vs 2.44 days, p<0.001). There were no differences in test acceptance or enrollment rates into HIV care, and the program trends continued 6 months after the pilot period. Overall, 10,244 HIV antibody tests (4779 maternal; 5465 child) and 453 DNA-PCR tests were completed, with 97.8% accepting testing. 19.6% of all mothers (n = 1112) and 8.5% of all children (n = 525) were HIV-infected. Furthermore, 6.5% of children were HIV-exposed (n = 405). Cumulatively, 72.9% (n = 678) of eligible children were evaluated in the hospital by an HIV-trained clinician, and 68.3% (n = 387) successfully enrolled into outpatient HIV care.

Conclusions/Significance

The strategy presented here, task shifting from lay counselors alone to lay counselors and patient escorts, greatly improved program outcomes while only marginally increasing operational costs. The wider implementation of this strategy could accelerate pediatric HIV care access in high-prevalence settings.

8.

Background

Policymakers advocate universal electronic medical records (EMRs) and propose incentives for “meaningful use” of EMRs. Though emergency departments (EDs) are particularly sensitive to the benefits and unintended consequences of EMR adoption, surveillance has been limited. We analyze data from a nationally representative sample of US EDs to ascertain the adoption of various EMR functionalities.

Methodology/Principal Findings

We analyzed data from the National Hospital Ambulatory Medical Care Survey, after pooling data from 2005 and 2006, reporting proportions with 95% confidence intervals (95% CI). In addition to reporting adoption of various EMR functionalities, we used logistic regression to ascertain patient and hospital characteristics predicting “meaningful use,” defined as a “basic” system (managing demographic information, computerized provider order entry, and lab and imaging results). We found that 46% (95% CI 39–53%) of US EDs reported having adopted EMRs. Computerized provider order entry was present in 21% (95% CI 16–27%), and only 15% (95% CI 10–20%) had warnings for drug interactions or contraindications. The “basic” definition of “meaningful use” was met by 17% (95% CI 13–21%) of EDs. Rural EDs were substantially less likely to have a “basic” EMR system than urban EDs (odds ratio 0.19, 95% CI 0.06–0.57, p = 0.003), and Midwestern (odds ratio 0.37, 95% CI 0.16–0.84, p = 0.018) and Southern (odds ratio 0.47, 95% CI 0.26–0.84, p = 0.011) EDs were substantially less likely than Northeastern EDs to have a “basic” system.

Conclusions/Significance

EMRs are becoming more prevalent in US EDs, though only a minority use EMRs in a “meaningful” way, no matter how “meaningful” is defined. Rural EDs are less likely to have an EMR than metropolitan EDs, and Midwestern and Southern EDs are less likely to have an EMR than Northeastern EDs. We discuss the nuances of how to define “meaningful use,” and the importance of considering not only adoption, but also full implementation and consequences.

9.
10.

Importance

A number of officially approved disease-modifying drugs (DMDs) are currently available for early intervention in patients with relapsing-remitting multiple sclerosis (RRMS). The aim of the present study was to systematically evaluate the effect of DMDs on disability progression in RRMS.

Methods

We performed a systematic review of the MEDLINE and SCOPUS databases to include all available placebo-controlled randomized clinical trials (RCTs) of RRMS patients that reported absolute numbers or percentages of disability progression during each study period. Observational studies, case series, case reports, RCTs without placebo subgroups, and studies of RRMS therapies not yet officially approved were excluded. Risk ratios (RRs) were calculated for each study protocol to compare disability progression in RRMS patients treated with a DMD against RRMS patients receiving placebo. A mixed-effects model was used to calculate both the pooled point estimate in each subgroup and the overall estimates.

Results

DMDs for RRMS were found to confer a significantly lower risk of disability progression compared to placebo (RR = 0.72, 95% CI: 0.66–0.79; p<0.001), with no evidence of heterogeneity or publication bias. In subsequent subgroup analyses, neither dichotomization of DMDs as “first” and “second” line RRMS therapies [(RR = 0.72, 95% CI = 0.65–0.80) vs. (RR = 0.72, 95% CI = 0.57–0.91); p = 0.96] nor the route of administration (injectable or oral) [RR = 0.75 (95% CI = 0.64–0.87) vs. RR = 0.74 (95% CI = 0.66–0.83); p = 0.92] had a differential effect on the risk of disability progression. Considerable (5–20%) or substantial (>20%) rates of loss to follow-up were reported in many study protocols, while financial and/or other support from pharmaceutical companies with a clear conflict of interest in the study outcomes was documented in all included studies.

Conclusions

Available DMDs are effective in reducing disability progression in patients with RRMS, independently of the route of administration and their classification as “first” or “second” line therapies. Attrition bias needs to be taken into account in the interpretation of these findings.

11.

Introduction

Celiac disease (CD) may initially present as a neurological disorder or may be complicated by neurological changes. To date, neurophysiological studies aimed at an objective evaluation of potential central nervous system involvement in CD are lacking.

Objective

To assess the profile of cortical excitability to Transcranial Magnetic Stimulation (TMS) in a group of de novo CD patients.

Materials and methods

Twenty CD patients underwent screening for cognitive and neuropsychiatric symptoms by means of the Mini Mental State Examination and the Structured Clinical Interview for DSM-IV Axis I Disorders, respectively. Instrumental exams, including electroencephalography and brain computed tomography, were also performed. Cortico-spinal excitability was assessed by means of single- and paired-pulse TMS, recording from the first dorsal interosseus muscle of the dominant hand. TMS measures consisted of resting motor threshold, motor evoked potentials, cortical silent period (CSP), intracortical inhibition (ICI) and facilitation (ICF). None of the CD patients was on a gluten-free diet. A group of 20 age-matched healthy controls was used for comparisons.

Results

CD patients showed a significantly shorter CSP (78.0 vs 125.0 ms, p<0.025), reduced ICI (0.3 vs 0.2, p<0.045) and enhanced ICF (1.1 vs 0.7, p<0.042) compared to controls. A dysthymic disorder was identified in five patients. The effect size between dysthymic and non-dysthymic CD patients indicated a low probability of interference with the CSP (Cohen's d = −0.414), ICI (−0.278) and ICF (−0.292) measurements.

Conclusion

A pattern of cortical excitability characterized by “disinhibition” and “hyperfacilitation” was found in CD patients. Immune system dysregulation might play a central role in triggering changes in motor cortex excitability.

12.

Background

Tuberculosis (TB) is common among HIV-infected individuals in many resource-limited countries and has been associated with poor survival. We evaluated morbidity and mortality among individuals first starting antiretroviral therapy (ART) with concurrent active TB or other AIDS-defining disease using data from the “Prospective Evaluation of Antiretrovirals in Resource-Limited Settings” (PEARLS) study.

Methods

Participants were categorized retrospectively into three groups according to presence of active confirmed or presumptive disease at ART initiation: those with pulmonary and/or extrapulmonary TB (“TB” group), those with other non-TB AIDS-defining disease (“other disease”), or those without concurrent TB or other AIDS-defining disease (“no disease”). Primary outcome was time to the first of virologic failure, HIV disease progression or death. Since the groups differed in characteristics, proportional hazard models were used to compare the hazard of the primary outcome among study groups, adjusting for age, sex, country, screening CD4 count, baseline viral load and ART regimen.

Results

31 of 102 participants (30%) in the “TB” group, 11 of 56 (20%) in the “other disease” group, and 287 of 1413 (20%) in the “no disease” group experienced a primary outcome event (p = 0.042). This difference reflected higher mortality in the TB group: 15 (15%), 0 (0%) and 41 (3%) participants died, respectively (p<0.001). The adjusted hazard ratio comparing the “TB” and “no disease” groups was 1.39 (95% confidence interval: 0.93–2.10; p = 0.11) for the primary outcome and 3.41 (1.72–6.75; p<0.001) for death.

Conclusions

Active TB at ART initiation was associated with increased risk of mortality in HIV-1 infected patients.

13.

Background

Data from HIV treatment programs in resource-limited settings show high rates of loss to follow-up (LTFU), ranging from 5% to 40% within 6 mo of antiretroviral therapy (ART) initiation. Our objective was to project the clinical impact and cost-effectiveness of interventions to prevent LTFU from HIV care in West Africa.

Methods and Findings

We used the Cost-Effectiveness of Preventing AIDS Complications (CEPAC) International model to project the clinical benefits and cost-effectiveness of LTFU-prevention programs from a payer perspective. These programs include components such as eliminating ART co-payments, eliminating charges to patients for opportunistic infection-related drugs, improving personnel training, and providing meals and reimbursing transportation costs for participants. The efficacies and costs of these interventions were extensively varied in sensitivity analyses. We used the World Health Organization criterion of <3× gross domestic product per capita (3× GDP per capita = US$2,823 for Côte d'Ivoire) as a plausible threshold for “cost-effectiveness.” The main results are based on a reported 18% 1-y LTFU rate. With full retention in care, projected per-person discounted life expectancy starting from age 37 y was 144.7 mo (12.1 y). Survival losses from LTFU within 1 y of ART initiation ranged from 73.9 to 80.7 mo. An intervention costing US$22/person/year (e.g., eliminating the ART co-payment) would be cost-effective with an efficacy of at least 12%. An intervention costing US$77/person/year (inclusive of all the components described above) would be cost-effective with an efficacy of at least 41%.

Conclusions

Interventions that prevent LTFU in resource-limited settings would substantially improve survival and would be cost-effective by international criteria with an efficacy of at least 12%–41%, depending on the cost of the intervention, based on a reported 18% cumulative incidence of LTFU at 1 y after ART initiation. The commitment to start ART and treat HIV in these settings should include interventions to prevent LTFU.
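The decision rule underlying this abstract (an intervention is "cost-effective" if its incremental cost per year of life gained falls below 3× GDP per capita) can be sketched as a small helper. This is an illustrative calculation only, not the CEPAC model; the function names are our own, and the US$2,823 threshold is taken from the abstract.

```python
# WHO-style cost-effectiveness check (illustrative sketch; not the CEPAC model).
WTP_THRESHOLD_USD = 2823  # 3x GDP per capita for Cote d'Ivoire, per the abstract

def icer(delta_cost_usd: float, delta_effect_years: float) -> float:
    """Incremental cost-effectiveness ratio: extra cost per extra life-year."""
    return delta_cost_usd / delta_effect_years

def is_cost_effective(delta_cost_usd: float, delta_effect_years: float,
                      threshold: float = WTP_THRESHOLD_USD) -> bool:
    # Cost-effective when the ICER does not exceed the willingness-to-pay threshold.
    return icer(delta_cost_usd, delta_effect_years) <= threshold
```

For instance, an intervention that costs an extra US$2,000 and gains one discounted life-year clears the threshold, while one costing US$6,000 per life-year does not.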

14.

Background

The 2009 influenza A(H1N1) pandemic has generated thousands of articles and news items. However, finding relevant scientific articles in such rapidly developing health crises is a major challenge which, in turn, can affect decision-makers' ability to utilise up-to-date findings and ultimately shape public health interventions. This study set out to show the impact that the inconsistent naming of the pandemic can have on retrieving relevant scientific articles in PubMed/MEDLINE.

Methodology

We first formulated a PubMed search algorithm covering different names of the influenza pandemic and simulated the results that it would have retrieved from weekly searches for relevant new records during the first 10 weeks of the pandemic. To assess the impact of failing to include every term in this search, we then conducted the same searches but omitted in turn “h1n1,” “swine,” “influenza” and “flu” from the search string, and compared the results to those for the full string.

Principal Findings

On average, our core search string identified 44.3 potentially relevant new records at the end of each week. Of these, we determined that an average of 27.8 records were relevant. When we excluded one term from the string, the percentage of records missed out of the total number of relevant records averaged 18.7% for omitting “h1n1,” 13.6% for “swine,” 17.5% for “influenza,” and 20.6% for “flu.”

Conclusions

Because of inconsistent naming, searches for scientific material about rapidly evolving situations such as the influenza A(H1N1) pandemic risk missing relevant articles. To address this problem, the international scientific community should agree earlier on nomenclature and the specific name to be used, and the US National Library of Medicine could index potentially relevant materials faster and allow publishers to add alert tags to such materials.
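The core measurement in this study (the share of relevant records missed when a term is dropped from the search string) is straightforward set arithmetic. The sketch below illustrates that calculation with hypothetical record sets; only the arithmetic mirrors the abstract (missed relevant records divided by total relevant records).

```python
# Illustrative recall-loss calculation in the spirit of the analysis above.
# Record IDs and sets are hypothetical; real inputs would be PubMed IDs.

def recall_loss(full_hits: set, reduced_hits: set, relevant: set) -> float:
    """Percent of relevant records missed when a term is dropped from the query."""
    missed = (full_hits - reduced_hits) & relevant
    return 100.0 * len(missed) / len(relevant)

full = {1, 2, 3, 4, 5, 6}          # records matched by the full search string
without_flu = {1, 2, 4, 6}         # records still matched after dropping one term
relevant = {1, 2, 3, 4, 5}         # records judged relevant by reviewers
loss = recall_loss(full, without_flu, relevant)  # records 3 and 5 missed: 40.0%
```

With these toy sets, two of the five relevant records are lost, a 40% recall loss; the abstract's per-term figures (13.6%–20.6%) were computed the same way over ten weekly searches.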

15.
You JH, Chan ES, Leung MY, Ip M, Lee NL. PLoS One. 2012;7(3):e33123.

Background

Seasonal and 2009 H1N1 influenza viruses may cause severe disease and result in excess hospitalization and mortality in older and younger adults, respectively. Early antiviral treatment may improve clinical outcomes. We examined potential outcomes and costs of test-guided versus empirical treatment in patients hospitalized for suspected influenza in Hong Kong.

Methods

We designed a decision tree to simulate potential outcomes of four management strategies in adults hospitalized for severe respiratory infection with suspected influenza: “immunofluorescence-assay (IFA)-guided” or “polymerase-chain-reaction (PCR)-guided” oseltamivir treatment, “empirical treatment plus PCR”, and “empirical treatment alone”. Model inputs were derived from the literature. The average prevalence (11%) of influenza among cases of respiratory infection in 2010–2011 (58% being 2009 H1N1) was used in the base-case analysis. The primary outcome simulated was the incremental cost per quality-adjusted life-year (QALY) gained (ICER) from the perspective of Hong Kong healthcare providers.

Results

In the base-case analysis, “empirical treatment alone” was the most cost-effective strategy and dominated the other three options. Sensitivity analyses showed that “PCR-guided treatment” would dominate “empirical treatment alone” when the daily cost of oseltamivir exceeded USD18, or when influenza prevalence was <2.5% and the predominant circulating viruses were not 2009 H1N1. Using USD50,000 as the willingness-to-pay threshold, “empirical treatment alone” and “PCR-guided treatment” were cost-effective 97% and 3% of the time, respectively, in 10,000 Monte Carlo simulations.

Conclusions

During influenza epidemics, empirical antiviral treatment appears to be a cost-effective strategy for managing patients hospitalized with severe respiratory infection with suspected influenza, from the perspective of healthcare providers in Hong Kong.

16.

Background

Although many case reports have described patients with proton pump inhibitor (PPI)-induced hypomagnesemia, the impact of PPI use on hypomagnesemia has not been fully clarified through comparative studies. We aimed to evaluate the association between the use of PPI and the risk of developing hypomagnesemia by conducting a systematic review with meta-analysis.

Methods

We conducted a systematic search of MEDLINE, EMBASE, and the Cochrane Library using the primary keywords “proton pump,” “dexlansoprazole,” “esomeprazole,” “ilaprazole,” “lansoprazole,” “omeprazole,” “pantoprazole,” “rabeprazole,” “hypomagnesemia,” “hypomagnesaemia,” and “magnesium.” Studies were included if they evaluated the association between PPI use and hypomagnesemia and reported relative risks or odds ratios or provided data for their estimation. Pooled odds ratios with 95% confidence intervals were calculated using the random-effects model. Statistical heterogeneity was assessed with Cochran’s Q test and the I² statistic.

Results

Nine studies including 115,455 patients were analyzed. The median Newcastle-Ottawa quality score for the included studies was seven (range, 6–9). Among patients taking PPIs, the median proportion of patients with hypomagnesemia was 27.1% (range, 11.3–55.2%) across all included studies. Among patients not taking PPIs, the median proportion of patients with hypomagnesemia was 18.4% (range, 4.3–52.7%). On meta-analysis, the pooled odds ratio for PPI use was found to be 1.775 (95% confidence interval 1.077–2.924). Significant heterogeneity was identified using Cochran’s Q test (df = 7, P<0.001, I² = 98.0%).

Conclusions

PPI use may increase the risk of hypomagnesemia. However, significant heterogeneity among the included studies prevented us from reaching a definitive conclusion.
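The pooled odds ratio and heterogeneity statistics reported in this abstract come from a random-effects meta-analysis. A standard DerSimonian-Laird computation can be sketched as follows; this is a generic illustration with hypothetical inputs, not the authors' analysis code, and standard errors are recovered from the 95% CI width on the log scale.

```python
import math

def dersimonian_laird(or_list, ci_list):
    """Random-effects pooled OR from per-study ORs and 95% CIs.
    Generic DerSimonian-Laird sketch; inputs are hypothetical."""
    y = [math.log(o) for o in or_list]
    # Standard error recovered from the 95% CI width on the log scale.
    se = [(math.log(hi) - math.log(lo)) / (2 * 1.96) for lo, hi in ci_list]
    w = [1 / s ** 2 for s in se]                              # fixed-effect weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))    # Cochran's Q
    df = len(y) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0       # I^2 statistic (%)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                             # between-study variance
    w_re = [1 / (s ** 2 + tau2) for s in se]                  # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    return math.exp(pooled), q, i2
```

With two identical studies (OR 2.0, CI 1.0–4.0 each) the pooled OR is 2.0 with no heterogeneity; with discordant studies the pooled estimate lies between the study ORs and Q grows.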

17.

Background

Improvements in life expectancy and quality of life for HIV-positive women, coupled with reduced vertical transmission, will likely lead numerous HIV-positive women to consider becoming pregnant. To clarify the demand and aid appropriate health-services planning for this population, our study aims to assess the fertility desires and intentions of HIV-positive women of reproductive age living in Ontario, Canada.

Methodology/Principal Findings

A cross-sectional study was carried out, with recruitment stratified to match the geographic distribution of HIV-positive women of reproductive age (18–52) living in Ontario. Women were recruited from 38 sites between October 2007 and April 2009 and invited to complete a 189-item self-administered survey entitled “The HIV Pregnancy Planning Questionnaire”, designed to assess fertility desires, intentions and actions. Logistic regression models were fit to calculate unadjusted and adjusted odds ratios for significant predictors of fertility intentions. The median age of the 490 participating HIV-positive women was 38 (IQR, 32–43), and 61%, 52%, 47% and 74% were born outside of Canada, living in Toronto, of African ethnicity and currently on antiretroviral therapy, respectively. Of all respondents, 69% (95% CI, 64%–73%) desired to give birth and 57% (95% CI, 53%–62%) intended to give birth in the future. In the multivariable model, the significant predictors of fertility intentions were: younger age (<40 years) (p<0.0001), African ethnicity (p<0.0001), living in Toronto (p = 0.002), and a lower number of lifetime births (p = 0.02).

Conclusions/Significance

The proportions of HIV-positive women of reproductive age living in Ontario who desire and intend pregnancy were higher than reported in earlier North American studies and more similar to those reported from African populations. Healthcare providers and policy makers need to consider increasing services and support for pregnancy planning for HIV-positive women. This may be particularly significant in jurisdictions with high levels of African immigration.

18.

Background

Thyrotoxicosis is conceptualized as an “autoimmune” disease with no accepted infectious etiology. There are increasingly compelling data that another “autoimmune” affliction, Crohn disease, may be caused by Mycobacterium avium subspecies paratuberculosis (MAP). Like M. tb, MAP is systemic. We hypothesized that some cases of thyrotoxicosis may be initiated by MAP infection. Because other thioamides treat tuberculosis, leprosy and M. avium complex infections, we hypothesized that a mode of action of some thioamide anti-thyrotoxicosis medications may include inhibition of MAP growth.

Methods

The effects of the thioamides thiourea, methimazole and 6-propyl-2-thiouracil (6-PTU) were studied in radiometric Bactec® culture on ten strains of three mycobacterial species (six strains of MAP, two of M. avium and two of the M. tb complex). Data are presented as the "cumulative growth index" (cGI) or the "percent decrease in cumulative GI" (%-ΔcGI).

Principal Findings

Methimazole was the most effective thioamide at inhibiting MAP growth: at 128 µg/ml, MAP UCF-4 showed 65%-ΔcGI and MAP ATCC 19698 showed 90%-ΔcGI. Thiourea inhibited MAP "Ben" the most strongly (70%-ΔcGI). Neither methimazole nor thiourea inhibited M. avium or M. tb at the doses tested. 6-PTU showed no inhibition of any strain studied, although a structurally analogous control, 5-PTU, was the most inhibitory thioamide tested.

Significance

We show inhibition of MAP growth in culture by the thioamides thiourea and methimazole. These data are compatible with the hypothesis that these thioamides may have anti-prokaryotic actions in addition to their well-established eukaryotic actions in thyrotoxic individuals.

19.

Background

Optimal timing of antiretroviral therapy (ART) initiation for individuals presenting with AIDS-related opportunistic infections (OIs) has not been defined.

Methods and Findings

A5164 was a randomized strategy trial of "early ART" (given within 14 days of starting acute OI treatment) versus "deferred ART" (given after acute OI treatment was completed). Randomization was stratified by presenting OI and entry CD4 count. The primary week-48 endpoint was a 3-level ordered categorical variable: 1. death/AIDS progression; 2. no progression with incomplete viral suppression (i.e., HIV viral load (VL) ≥50 copies/ml); 3. no progression with optimal viral suppression (i.e., HIV VL <50 copies/ml). Secondary endpoints included AIDS progression/death; plasma HIV RNA and CD4 responses; and safety parameters, including IRIS. 282 subjects were evaluable, 141 per arm. Entry OIs included Pneumocystis jirovecii pneumonia (63%), cryptococcal meningitis (12%), and bacterial infections (12%). The early and deferred arms started ART a median of 12 and 45 days after the start of OI treatment, respectively. The difference in the primary endpoint did not reach statistical significance: AIDS progression/death was seen in 20 (14%) vs. 34 (24%); no progression with incomplete viral suppression in 54 (38%) vs. 44 (31%); and no progression with optimal viral suppression in 67 (48%) vs. 63 (45%) in the early vs. deferred arm, respectively (p = 0.22). However, the early ART arm had fewer AIDS progressions/deaths (OR = 0.51; 95% CI = 0.27–0.94) and a longer time to AIDS progression/death (stratified HR = 0.53; 95% CI = 0.30–0.92). The early ART arm also had a shorter time to achieving a CD4 count above 50 cells/mm³ (p<0.001) and no increase in adverse events.
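The reported OR = 0.51 (0.27–0.94) likely reflects the trial's stratified or adjusted analysis. As a rough consistency check, the crude odds ratio and a Wald 95% CI can be recomputed directly from the counts in the abstract (20/141 early-arm vs. 34/141 deferred-arm progressions/deaths), and they land close to the reported values:

```python
import math

# Counts reported in the abstract (AIDS progression/death at week 48)
early_events, early_n = 20, 141        # early-ART arm
deferred_events, deferred_n = 34, 141  # deferred-ART arm

# Crude (unadjusted) odds ratio: odds of progression/death, early vs. deferred
odds_early = early_events / (early_n - early_events)
odds_deferred = deferred_events / (deferred_n - deferred_events)
or_crude = odds_early / odds_deferred

# Wald 95% CI on the log-odds-ratio scale: SE = sqrt(1/a + 1/b + 1/c + 1/d)
se_log_or = math.sqrt(1 / early_events + 1 / (early_n - early_events)
                      + 1 / deferred_events + 1 / (deferred_n - deferred_events))
ci_lo = math.exp(math.log(or_crude) - 1.96 * se_log_or)
ci_hi = math.exp(math.log(or_crude) + 1.96 * se_log_or)

print(f"crude OR = {or_crude:.2f}, 95% CI = ({ci_lo:.2f}, {ci_hi:.2f})")
# → crude OR = 0.52, 95% CI = (0.28, 0.96)
```

The small gap between 0.52 (0.28–0.96) and the reported 0.51 (0.27–0.94) is what one would expect from stratification by presenting OI and entry CD4 count.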

Conclusions

Early ART resulted in less AIDS progression/death, with no increase in adverse events or loss of virologic response, compared to deferred ART. These results support the early initiation of ART in patients presenting with acute AIDS-related OIs, in the absence of major contraindications.

Trial Registration

ClinicalTrials.gov NCT00055120

20.

Background

Whereas work-hour regulations have been taken for granted since 1940 in other occupational settings, such as commercial aviation, they have been implemented only recently in the medical professions, where they have led to a lively debate. The aim of the present study was to evaluate arguments for and against work-hour limitations in medicine given by Swiss surgeons, lawyers, and pilots.

Methods

An electronic questionnaire survey with four free-response items, addressing what arguments speak for or against work-hour limitations in general and in medicine, was sent to a random sample of board-certified surgeons, lawyers specializing in labour law, and pilots from SWISS International Airlines Ltd.

Results

In all, 279/497 (56%) of those surveyed answered: 67/117 surgeons, 92/226 lawyers, and 120/154 pilots. Support for work-hour limitations, both in general and in medicine, was higher among lawyers and pilots than among surgeons (p<0.001). Surgeons agreed more with work-hour limitations in general than in medicine (p<0.001). The arguments most often cited in favour of work-hour limitations were "quality and patient safety," "health and fitness," and "leisure and work-family balance," whereas a lack of "flexibility" was the most important argument against them. Surgeons more often expected their "education" and the "quality of their work" to be threatened (p<0.001).

Conclusions

Work-hour limitations should be supported in medicine as well, but ways must be found to reduce the problems arising from discontinuity in patient care and to minimise work that has no educational value.
