Similar Articles
20 similar articles found.
1.

Background

The diagnosis of tuberculosis (TB) in resource-limited settings relies on Ziehl-Neelsen (ZN) smear microscopy. LED fluorescence microscopy (LED-FM) has many potential advantages over ZN smear microscopy, but requires evaluation in the field. The aim of this study was to assess the sensitivity/specificity of LED-FM for the diagnosis of pulmonary TB and whether its performance varies with the timing of specimen collection.

Methods and Findings

Adults with cough ≥2 wk were enrolled consecutively in Ethiopia, Nepal, Nigeria, and Yemen. Sputum specimens were examined by ZN smear microscopy and LED-FM and compared with culture as the reference standard. Specimens were collected using a spot-morning-spot (SMS) or spot-spot-morning (SSM) scheme, to explore whether collecting the first two smears at the health care facility (i.e., “on the spot”) on the first day of consultation, followed by a morning sample the next day (SSM), would identify similar numbers of smear-positive patients as the SMS scheme (i.e., one on-the-spot smear the first day, followed by a morning specimen collected at home and a second on-the-spot sample the second day). In total, 529 (21.6%) culture-positive and 1,826 (74.6%) culture-negative patients were enrolled, of whom 1,156 (49%) submitted SSM specimens and 1,199 (51%) submitted SMS specimens. Single LED-FM smears had higher sensitivity but lower specificity than single ZN smears. Two LED-FM smears per patient were 72.8% (385/529, 95% CI 68.8%–76.5%) sensitive and 90.9% (1,660/1,826, 95% CI 89.5%–92.2%) specific, compared with 65.8% (348/529, 95% CI 61.6%–69.8%) sensitivity and 98% (1,790/1,826, 95% CI 97.3%–98.6%) specificity for two ZN smears (p<0.001 for both comparisons). Three LED-FM smears per patient were 77% (408/529, 95% CI 73.3%–80.6%) sensitive and 88.1% (95% CI 86.5%–89.6%) specific, compared with 70.5% (373/529, 95% CI 66.4%–74.4%) sensitivity and 96.5% (95% CI 96.8%–98.2%) specificity for three ZN smears (p<0.001 for both comparisons). The sensitivity and specificity of ZN smear microscopy and LED-FM did not vary between SMS and SSM.
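The sensitivities and specificities above are binomial proportions computed from the culture-confirmed cases and non-cases, each with a 95% confidence interval. A minimal Python sketch of that arithmetic, assuming a Wilson score interval (the paper does not state which interval method was actually used):

    from math import sqrt

    def wilson_ci(successes: int, n: int, z: float = 1.96):
        """Wilson score 95% confidence interval for a binomial proportion.
        The Wilson interval is an assumption; the study's CI method is not stated."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    # Two LED-FM smears detected 385 of 529 culture-positive patients.
    lo, hi = wilson_ci(385, 529)
    print(f"sensitivity {385/529:.1%} (95% CI {lo:.1%}-{hi:.1%})")  # ~72.8% (68.8%-76.4%)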

Conclusions

LED-FM had higher sensitivity but, in this study, lower specificity than ZN smear microscopy for diagnosis of pulmonary TB. Performance was independent of the scheme used for collecting specimens. The introduction of LED-FM needs to be accompanied by appropriate training, quality management, and monitoring of performance in the field.

Trial Registration

Current Controlled Trials ISRCTN53339491

2.

Background

We conducted a tuberculosis (TB) prevalence survey and evaluated the screening methods used in it, to assess whether screening in TB prevalence surveys could be simplified and to assess the accuracy of screening algorithms that may be applicable for active case finding.

Methods

All participants with a positive screen on any of a symptom questionnaire, chest radiography (CXR), or sputum smear microscopy submitted sputum for culture. HIV status was obtained from prevalent cases. We estimated the accuracy of modified screening strategies with bacteriologically confirmed TB as the gold standard, and compared these with other survey reports. We also assessed whether sequential rather than parallel application of symptom, CXR, and HIV screening would substantially reduce the number of participants requiring CXR and/or sputum culture.

Results

Presence of any abnormality on CXR had 94% (95% CI 88–98) sensitivity (92% in HIV-infected and 100% in HIV-uninfected participants) and 73% (95% CI 68–77) specificity. Symptom screening combinations had significantly lower sensitivity than CXR, except for ‘any TB symptom’, which had 90% (95% CI 84–95) sensitivity (96% in HIV-infected and 82% in HIV-uninfected participants) and 32% (95% CI 30–34) specificity. Smear microscopy did not yield additional suspects, so the combined symptom/CXR screen applied in the survey had 100% (95% CI 97–100) sensitivity; specificity was 65% (95% CI 61–68). Sequential application of a symptom screen for ‘any symptom’ first, followed by CXR evaluation and different suspect criteria depending on HIV status, would yield the largest reduction in the need for CXR and sputum culture (approximately 36%), but would underestimate prevalence by 11%.
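The sequential strategy evaluated above amounts to short-circuit screening: a participant only moves to the next, more costly step (CXR, then culture) if the previous step is positive, which is what reduces the number of CXRs and cultures needed. A minimal sketch of that logic (field names are illustrative assumptions, and the survey's additional HIV-status-dependent suspect criteria are omitted for brevity):

    def sequential_screen(participant: dict) -> dict:
        """Symptom questionnaire first, CXR only if symptomatic,
        sputum culture only if the CXR is abnormal. Keys are assumed names."""
        needs_cxr = bool(participant["any_symptom"])
        needs_culture = needs_cxr and bool(participant["cxr_abnormal"])
        return {"needs_cxr": needs_cxr, "needs_culture": needs_culture}

    cohort = [
        {"any_symptom": True,  "cxr_abnormal": True},
        {"any_symptom": True,  "cxr_abnormal": False},
        {"any_symptom": False, "cxr_abnormal": False},
    ]
    results = [sequential_screen(p) for p in cohort]
    print(sum(r["needs_cxr"] for r in results), "need CXR;",
          sum(r["needs_culture"] for r in results), "need culture")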

Conclusion

CXR screening alone had higher accuracy than symptom screening alone. Combined CXR and symptom screening had the highest sensitivity and remains important for suspect identification in TB prevalence surveys in settings where bacteriological sputum examination of all participants is not feasible.

3.
Chen J, Zhang R, Wang J, Liu L, Zheng Y, Shen Y, Qi T, Lu H. PLoS ONE 2011;6(11):e26827

Background

Interferon-gamma release assays (IGRAs) have provided a new method for the diagnosis of Mycobacterium tuberculosis infection. However, the role of IGRAs in the diagnosis of active tuberculosis (TB), especially in HIV-infected patients, remains unclear.

Methods

We searched the PubMed, EMBASE, and Cochrane databases to identify studies published between January 2001 and July 2011 that evaluated the use of QuantiFERON-TB Gold In-Tube (QFT-GIT) and T-SPOT.TB (T-SPOT) on blood for the diagnosis of active TB in HIV-infected patients.

Results

The search identified 16 eligible studies that included 2801 HIV-infected individuals (637 culture-confirmed TB cases). The pooled sensitivity for the diagnosis of active TB was 76.7% (95% CI, 71.6–80.5%) for QFT-GIT and 77.4% (95% CI, 71.4–82.6%) for T-SPOT, while the specificity was 76.1% (95% CI, 74.0–78.0%) and 63.1% (95% CI, 57.6–68.3%), respectively, after excluding indeterminate results. Studies conducted in low- and middle-income countries showed slightly lower sensitivity and specificity than those in high-income countries. The proportion of indeterminate results was as high as 10% (95% CI, 8.8–11.3%) for QFT-GIT and 13.2% (95% CI, 10.6–16.0%) for T-SPOT.
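The pooled sensitivities and specificities quoted above combine per-study proportions from the 16 included studies under a meta-analysis model. A minimal sketch of one standard approach, DerSimonian–Laird random-effects pooling on the logit scale (whether the review used this exact model is an assumption, and the study counts below are hypothetical):

    import numpy as np

    def pool_proportions_dl(events, totals):
        """DerSimonian-Laird random-effects pooling of proportions (logit scale).
        A generic sketch; not necessarily the review's exact model."""
        events = np.asarray(events, dtype=float)
        totals = np.asarray(totals, dtype=float)
        y = np.log(events / (totals - events))        # logit of each study proportion
        v = 1 / events + 1 / (totals - events)        # approximate variance of the logit
        w = 1 / v
        y_fixed = np.sum(w * y) / np.sum(w)
        q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q heterogeneity statistic
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
        w_re = 1 / (v + tau2)
        y_re = np.sum(w_re * y) / np.sum(w_re)
        se = np.sqrt(1 / np.sum(w_re))
        to_prop = lambda x: 1 / (1 + np.exp(-x))
        return to_prop(y_re), to_prop(y_re - 1.96 * se), to_prop(y_re + 1.96 * se)

    # Hypothetical (TB cases detected, TB cases tested) per study:
    print(pool_proportions_dl([40, 55, 70], [50, 75, 90]))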

Conclusion

IGRAs in their current formulations have limited accuracy in diagnosing active TB in HIV-infected patients and should not be used alone to rule active TB in or out in this population. Further modification is needed to improve their accuracy.

4.

Background

Risk factors for ischemic stroke are largely known, but their combined population-attributable risk percent (PAR%) remains unclear in most countries. In a case-control study, we estimated the individual odds ratios (ORs) and the individual and combined PAR%, including risk factors not addressed in previous studies.

Methods

Cases and controls were selected from patients attending an emergency department. Cases were patients aged 45 years or older with a first episode of ischemic stroke, characterized by a focal neurological deficit or a change in mental status occurring during the previous 24 hours. Controls, matched to cases by age and gender, were selected from patients without neurological complaints.

Results

133 cases and 272 controls were studied. Odds ratios for ischemic stroke were: atrial fibrillation (27.3; 95% CI 7.5–99.9), left ventricular hypertrophy (20.3; 95% CI 8.8–46.4), history of hypertension (11.2; 95% CI 5.4–23.3), physical inactivity (6.6; 95% CI 3.3–13.1), low levels of HDL-cholesterol (5.0; 95% CI 2.8–8.9), heavy smoking (2.8; 95% CI 1.5–5.0), carotid bruit (2.5; 95% CI 1.3–4.6), diabetes (2.4; 95% CI 1.4–4.0), and alcohol abuse (2.1; 95% CI 1.1–4.0). The combination of these risk factors accounted for 98.9% (95% CI 96.4%–99.7%) of the PAR% for all stroke.
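The 98.9% combined PAR% reported above is not the simple sum of the individual PAR% values; attributable fractions are usually combined multiplicatively. A minimal sketch, assuming Levin's formula with the OR approximating the relative risk and with hypothetical exposure prevalences (the study's actual estimation method and exposure prevalences are not given in the abstract):

    def levin_par(exposure_prev: float, odds_ratio: float) -> float:
        """Levin's population-attributable risk fraction, using the OR
        as an approximation of the relative risk (an assumption here)."""
        excess = exposure_prev * (odds_ratio - 1)
        return excess / (1 + excess)

    def combined_par(individual_pars) -> float:
        """Combine individual attributable fractions multiplicatively,
        assuming independent risk factors."""
        remaining = 1.0
        for par in individual_pars:
            remaining *= 1 - par
        return 1 - remaining

    # Hypothetical exposure prevalences paired with two of the reported ORs:
    pars = [levin_par(0.30, 11.2),   # history of hypertension
            levin_par(0.05, 27.3)]   # atrial fibrillation
    print([round(p, 3) for p in pars], "combined:", round(combined_par(pars), 3))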

Conclusions

Nine easily identified risk factors explain almost 100% of the population-attributable risk for ischemic stroke.

5.

Background

The clinical manifestations of neurocysticercosis (NCC) are poorly understood. This systematic review aims to estimate the frequencies of different manifestations, complications and disabilities associated with NCC.

Methods

A systematic search of the literature published from January 1, 1990, to June 1, 2008, in 24 different electronic databases and 8 languages was conducted. Meta-analyses were conducted when appropriate.

Results

A total of 1569 documents were identified, and 21 were included in the analysis. Among patients seen in neurology clinics, seizures/epilepsy were the most common manifestation (78.8%, 95% CI: 65.1%–89.7%), followed by headaches (37.9%, 95% CI: 23.3%–53.7%), focal deficits (16.0%, 95% CI: 9.7%–23.6%), and signs of increased intracranial pressure (11.7%, 95% CI: 6.0%–18.9%). All other manifestations occurred in less than 10% of symptomatic NCC patients. Only four studies reported on the mortality rate of NCC.

Conclusions

NCC is a pleomorphic disease linked to a range of manifestations. Although definitions of manifestations were very rarely provided and varied from study to study, the proportions of NCC cases with seizures/epilepsy and with headaches were consistent across studies. These estimates are only applicable to patients who are ill enough to seek care in neurology clinics and likely overestimate the frequency of manifestations among all NCC cases.

6.

Background

Sensitive diagnostic tools are required for an accurate assessment of the prevalence and intensity of helminth infections in areas undergoing regular deworming, and for monitoring anthelmintic drug efficacy. We compared the diagnostic accuracy of the Kato-Katz and FLOTAC techniques as part of a drug efficacy trial.

Methodology/Principal Findings

Stool samples from 343 Zanzibari children were subjected to duplicate Kato-Katz thick smears and the FLOTAC basic technique in a baseline screening in early 2009. The FLOTAC showed a higher sensitivity than the Kato-Katz method for the diagnosis of Trichuris trichiura (95% vs. 88%, p = 0.012) and Ascaris lumbricoides (88% vs. 68%, p = 0.098), but a lower sensitivity for hookworm diagnosis (54% vs. 81%, p = 0.006). Considering the combined results from both methods as the ‘gold’ standard, the prevalences of T. trichiura, hookworm and A. lumbricoides were 71% (95% confidence interval (CI): 66–75%), 22% (95% CI: 17–26%) and 12% (95% CI: 8–15%), respectively. At follow-up, 3–5 weeks after anthelmintic drugs were administered to 174 of the 269 re-examined children, we observed cure rates (CRs) against A. lumbricoides, hookworm and T. trichiura of 91% (95% CI: 80–100%), 61% (95% CI: 48–75%) and 41% (95% CI: 34–49%), respectively, when using the Kato-Katz method. FLOTAC revealed lower CRs against A. lumbricoides (83%, 95% CI: 67–98%) and T. trichiura (36%, 95% CI: 29–43%), but a higher CR against hookworm (69%, 95% CI: 57–82%). These differences, however, lacked statistical significance. Considerable differences were observed in the geometric mean fecal egg counts between the two methods, with lower egg reduction rates (ERRs) determined by FLOTAC.
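The egg reduction rate (ERR) mentioned above is derived from the pre- and post-treatment fecal egg counts as ERR = 100 × (1 − GM_post / GM_pre), with GM the geometric mean egg count. A minimal sketch, assuming a +1 offset to accommodate zero counts (a common convention, not necessarily the one used in this trial) and hypothetical eggs-per-gram values:

    import math

    def geometric_mean(egg_counts):
        """Geometric mean of egg counts; the +1 offset for zeros is an assumption."""
        logs = [math.log(count + 1) for count in egg_counts]
        return math.exp(sum(logs) / len(logs)) - 1

    def egg_reduction_rate(pre_counts, post_counts) -> float:
        """ERR (%) = 100 * (1 - geometric mean post / geometric mean pre)."""
        return 100 * (1 - geometric_mean(post_counts) / geometric_mean(pre_counts))

    # Hypothetical eggs-per-gram values before and 3-5 weeks after treatment:
    print(round(egg_reduction_rate([480, 1200, 96, 2400], [24, 0, 12, 48]), 1))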

Conclusion/Significance

Our results suggest that the FLOTAC technique, following further optimization, might become a viable alternative to the Kato-Katz method for anthelmintic drug efficacy studies and for monitoring and evaluation of deworming programs. The lower CRs and ERRs determined by FLOTAC warrant consideration and could strategically impact future helminth control programs.

7.

Background

The control of soil-transmitted helminth (STH) infections currently relies on the large-scale administration of single-dose oral albendazole or mebendazole. However, these treatment regimens have limited efficacy against hookworm and Trichuris trichiura in terms of cure rates (CR), whereas fecal egg reduction rates (ERR) are generally high for all common STH species. We compared the efficacy of single-dose versus triple-dose treatment against hookworm and other STHs in a community-based randomized controlled trial in the People's Republic of China.

Methodology/Principal findings

The hookworm CR and fecal ERR were assessed in 314 individuals aged ≥5 years who submitted two stool samples before and 3–4 weeks after administration of single-dose oral albendazole (400 mg) or mebendazole (500 mg) or triple-dose albendazole (3×400 mg over 3 consecutive days) or mebendazole (3×500 mg over 3 consecutive days). Efficacy against T. trichiura, Ascaris lumbricoides, and Taenia spp. was also assessed. Albendazole cured significantly more hookworm infections than mebendazole in both treatment regimens (single dose: respective CRs 69% (95% confidence interval [CI]: 55–81%) and 29% (95% CI: 20–45%); triple dose: respective CRs 92% (95% CI: 81–98%) and 54% (95% CI: 46–71%)). ERRs followed the same pattern (single dose: 97% versus 84%; triple dose: 99.7% versus 96%). Triple-dose regimens outperformed single doses against T. trichiura; three doses of mebendazole – the most efficacious treatment tested – cured 71% (95% CI: 57–82%). Both single and triple doses of either drug were highly efficacious against A. lumbricoides (CR: 93–97%; ERR: all >99.9%). Triple-dose regimens cured all Taenia spp. infections, whereas single-dose applications cured only half of them.

Conclusions/Significance

Single-dose oral albendazole is more efficacious against hookworm than mebendazole. To achieve high CRs against both hookworm and T. trichiura, triple-dose regimens are warranted.

Trial Registration

www.controlled-trials.com ISRCTN47375023

8.

Background

The safety, tolerability, and immunogenicity of a monovalent intranasal 2009 A/H1N1 live attenuated influenza vaccine (LAIV) were evaluated in children and adults.

Methods/Principal Findings

Two randomized, double-blind, placebo-controlled studies were completed in children (2–17 y) and adults (18–49 y). Subjects were assigned 4:1 to receive 2 doses of H1N1 LAIV or placebo 28 days apart. The primary safety endpoint was fever ≥38.3°C during days 1–8 after the first dose; the primary immunogenicity endpoint was the proportion of subjects experiencing a postdose seroresponse. Solicited symptoms and adverse events were recorded for 14 days after each dose and safety data were collected for 180 days post-final dose. In total, 326 children (H1N1 LAIV, n = 261; placebo, n = 65) and 300 adults (H1N1 LAIV, n = 240; placebo, n = 60) were enrolled. After dose 1, fever ≥38.3°C occurred in 4 (1.5%) pediatric vaccine recipients and 1 (1.5%) placebo recipient (rate difference, 0%; 95% CI: –6.4%, 3.1%). No adults experienced fever following dose 1. Seroresponse rates in children (H1N1 LAIV vs. placebo) were 11.1% vs. 6.3% after dose 1 (rate difference, 4.8%; 95% CI: –9.6%, 13.8%) and 32.0% vs. 14.5% after dose 2 (rate difference, 17.5%; 95% CI: 5.5%, 27.1%). Seroresponse rates in adults were 6.1% vs. 0% (rate difference, 6.1%; 95% CI: –5.6%, 12.6%) and 14.9% vs. 5.6% (rate difference, 9.3%; 95% CI: –0.8%, 16.3%) after dose 1 and dose 2, respectively. Solicited symptoms after dose 1 (H1N1 LAIV vs. placebo) occurred in 37.5% vs. 32.3% of children and 41.7% vs. 31.7% of adults. Solicited symptoms occurred less frequently after dose 2 in adults and children. No vaccine-related serious adverse events occurred.

Conclusions/Significance

In subjects aged 2 to 49 years, two doses of H1N1 LAIV have a safety and immunogenicity profile similar to other previously studied and efficacious formulations of seasonal trivalent LAIV.

Trial Registration

ClinicalTrials.gov NCT00946101, NCT00945893

9.

Background

In general, point-of-care (POC) tests for Chlamydia trachomatis (Ct) show disappointing performance, especially low sensitivity. However, one study sponsored by the manufacturer (Diagnostics for the Real World) reported over 80% sensitivity with their Chlamydia Rapid Test (CRT). We evaluated the performance of this CRT in a non–manufacturer-sponsored trial.

Methods

Between July 2009 and February 2010, we included samples from 912 women attending both high- and low-risk clinics for sexually transmitted infections (STIs) in Paramaribo, Suriname. Sensitivity, specificity, and positive and negative predictive values (PPV and NPV) of CRT compared with NAAT (Aptima, Gen-Probe) were determined. Quantitative Ct load and human cell load were determined in all CRT- and/or NAAT-positive samples.

Results

CRT compared to NAAT showed a sensitivity and specificity of 41.2% (95% CI, 31.9%–50.9%) and 96.4% (95% CI, 95.0%–97.5%), respectively. PPV and NPV were 59.2% (95% CI, 47.5%–70.1%) and 92.9% (95% CI, 91.0%–94.5%), respectively. Quantitative Ct bacterial load was 73 times higher in NAAT-positive/CRT-positive samples compared to NAAT-positive/CRT-negative samples (p<0.001). Human cell load did not differ between true-positive and false-negative CRT results (p = 0.835). Sensitivity of CRT in samples with low Ct load was 12.5% (95% CI, 5.2%–24.2%) and in samples with high Ct load 73.5% (95% CI, 59.9%–84.4%).
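The PPV and NPV reported above depend on the Ct prevalence in the tested population as well as on sensitivity and specificity, which is why they cannot be transferred directly to clinics with a different prevalence. A minimal sketch of that relationship using the reported CRT sensitivity and specificity (the prevalences below are illustrative, not the study's):

    def predictive_values(sensitivity: float, specificity: float, prevalence: float):
        """PPV and NPV from sensitivity, specificity and prevalence (Bayes' rule)."""
        tp = sensitivity * prevalence
        fp = (1 - specificity) * (1 - prevalence)
        tn = specificity * (1 - prevalence)
        fn = (1 - sensitivity) * prevalence
        return tp / (tp + fp), tn / (tn + fn)

    # Reported CRT sensitivity/specificity, at two illustrative Ct prevalences:
    for prevalence in (0.05, 0.20):
        ppv, npv = predictive_values(0.412, 0.964, prevalence)
        print(f"prevalence {prevalence:.0%}: PPV {ppv:.1%}, NPV {npv:.1%}")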

Conclusions

The sensitivity of CRT for detecting urogenital Ct in this non–manufacturer-sponsored study did not meet the expectations raised by the earlier manufacturer-sponsored study. The CRT missed samples with a low Ct load. Improved POC tests are needed as meaningful diagnostics to reduce the disease burden of Ct.

10.

Background

The imperfect sensitivity of interferon-γ release assays (IGRAs) is a potential problem for the detection of tuberculosis. We thoroughly investigated the factors that can lead to false-negative IGRA results.

Methods

We recruited 543 patients with new smear-positive pulmonary tuberculosis in Hanoi, Viet Nam. At diagnosis, peripheral blood was collected and IGRA (QuantiFERON-TB Gold In-Tube) was performed. Clinical and epidemiological information on the host and pathogen was collected. The test sensitivity was calculated, and factors negatively influencing IGRA results were evaluated using a logistic regression model in 504 patients with culture-confirmed pulmonary tuberculosis.
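The factor analysis described above fits a logistic regression model with IGRA negativity as the outcome and reports exponentiated coefficients as odds ratios. A minimal sketch of such a model on simulated data (column names, effect sizes, and the exact model specification are assumptions, not the study's analysis code):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    # Simulated stand-in for the 504 culture-confirmed patients (assumed columns).
    df = pd.DataFrame({
        "igra_negative": rng.binomial(1, 0.05, 504),
        "age":           rng.integers(18, 80, 504),
        "bmi_under_16":  rng.binomial(1, 0.10, 504),
        "hiv_positive":  rng.binomial(1, 0.08, 504),
        "hla_drb1_0701": rng.binomial(2, 0.15, 504),   # number of alleles: 0, 1 or 2
    })

    model = smf.logit(
        "igra_negative ~ age + bmi_under_16 + hiv_positive + hla_drb1_0701",
        data=df,
    ).fit(disp=False)

    odds_ratios = np.exp(model.params)       # exponentiated coefficients = ORs
    conf_int = np.exp(model.conf_int())      # 95% CIs on the OR scale
    print(pd.concat([odds_ratios, conf_int], axis=1))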

Results

The overall sensitivity of IGRA was 92.3% (95% CI, 89.6%–94.4%). The proportions of IGRA-negative and -indeterminate results were 4.8% (95% CI, 3.1%–7.0%) and 3.0% (95% CI, 1.7%–4.9%), respectively. Increasing age (per year), body mass index <16.0, HIV co-infection, and the number of HLA-DRB1*0701 alleles carried were significantly associated with IGRA negativity (OR = 1.04 [95% CI, 1.01–1.07], 5.42 [1.48–19.79], 6.38 [1.78–22.92], and 5.09 [2.31–11.22], respectively). HIV co-infection and the same HLA allele were also associated with indeterminate results (OR = 99.59 [95% CI, 15.58–625.61] and 4.25 [1.27–14.16]).

Conclusions

Aging, emaciation, HIV co-infection and HLA genotype affected IGRA results. Assessment of these factors might contribute to a better understanding of the assay.

11.

Objective

UK Indian adults have higher risks of coronary heart disease and type 2 diabetes than Indian and UK European adults. With growing evidence that these diseases originate in early life, we compared cardiometabolic risk markers in Indian, UK Indian and white European children.

Methods

Comparisons were based on the Mysore Parthenon Birth Cohort Study (MPBCS), India, and the Child Heart Health Study in England (CHASE), which studied 9–10 year-old children (538 Indian, 483 UK Indian, 1375 white European) using similar methods. Analyses were adjusted for study differences in age and sex.

Results

Compared with Mysore Indians, UK Indians had markedly higher BMI (% difference 21%, 95% CI 18 to 24%), skinfold thickness (% difference 34%, 95% CI 26 to 42%), LDL-cholesterol (mean difference 0.48, 95% CI 0.38 to 0.57 mmol/L), systolic BP (mean difference 10.3, 95% CI 8.9 to 11.8 mmHg) and fasting insulin (% difference 145%, 95% CI 124 to 168%). These differences (similar in both sexes and little affected by adiposity adjustment) were larger than those between UK Indians and white Europeans. Compared with white Europeans, UK Indians had higher skinfold thickness (% difference 6.0%, 95% CI 1.5 to 10.7%), fasting insulin (% difference 31%, 95% CI 22 to 40%), triglyceride (% difference 13%, 95% CI 8 to 18%) and LDL-cholesterol (mean difference 0.12 mmol/L, 95% CI 0.04 to 0.19 mmol/L).

Conclusions

UK Indian children have an adverse cardiometabolic risk profile, especially compared to Indian children. These differences, not simply reflecting greater adiposity, emphasize the need for prevention strategies starting in childhood or earlier.

12.

Background

Rapid PCR-based tests for the diagnosis of leptospirosis can provide information that contributes towards early patient management, but these have not been adopted in Thailand. Here, we compare the diagnostic sensitivity and specificity of two real-time PCR assays targeting rrs or lipL32 for the diagnosis of leptospirosis in northeast Thailand.

Methods/Principal Findings

A case-control study of 266 patients (133 cases of leptospirosis and 133 controls) was constructed to evaluate the diagnostic sensitivity and specificity (DSe and DSp) of both PCR assays. The median duration of illness prior to admission of cases was 4 days (IQR 2–5 days; range 1–12 days). DSe and DSp were determined using positive culture and/or microscopic agglutination test (MAT) as the gold standard. The DSe was higher for the rrs assay than for the lipL32 assay (56% [95% CI 47–64%] versus 43% [95% CI 34–52%], p<0.001). No cases were positive by the lipL32 assay alone. There was borderline evidence to suggest that the DSp of the rrs assay was lower than that of the lipL32 assay (90% [95% CI 83–94%] versus 93% [95% CI 88–97%], p = 0.06). Nine controls gave positive reactions in both assays and 5 controls gave a positive reaction in the rrs assay alone. The DSe of the rrs and lipL32 assays was high in the subgroup of 39 patients who were culture positive for Leptospira spp. (95% and 87%, respectively, p = 0.25).

Conclusions/Significance

Early detection of Leptospira using PCR is possible for more than half of patients presenting with leptospirosis and could contribute to individual patient care.

13.

Background

Schistosomiasis and soil-transmitted helminthiasis (STH) are widely distributed in Cameroon. Although mass drug administration (MDA) of mebendazole is implemented nationwide, treatment with praziquantel has so far been limited to the three northern regions and a few health districts in the southern part of Cameroon, based on previous mapping conducted 25 years ago. To update the disease distribution map and determine where treatment with praziquantel should be extended, mapping surveys were conducted in three of the seven southern regions of Cameroon, i.e. Centre, East and West.

Methodology

Parasitological surveys were conducted in April–May 2010 in selected schools in all 63 health districts of the three targeted regions, using the Kato-Katz technique and urine filtration.

Principal Findings

The results showed significant variation of schistosomiasis and STH prevalence between schools, villages, districts and regions. Schistosoma mansoni was the most prevalent schistosome species, with an overall prevalence of 5.53%, followed by S. haematobium (1.72%) and S. guineensis (0.14%). The overall prevalence of schistosomiasis across the three regions was 7.31% (95% CI: 6.86–7.77%). The prevalence of Ascaris lumbricoides was 11.48% (95% CI: 10.93–12.04%), Trichuris trichiura 18.22% (95% CI: 17.56–18.90%) and hookworms 1.55% (95% CI: 1.35–1.78%), with an overall STH prevalence of 24.10% (95% CI: 23.36–24.85%) across the three regions. STH was more prevalent in the East region (46.57%; 95% CI: 44.41–48.75%) than in the Centre (25.12%; 95% CI: 24.10–26.17%) and West (10.49%; 95% CI: 9.57–11.51%) regions.

Conclusions/Significance

In comparison to previous data, the results showed an increase in schistosomiasis transmission in several health districts, whereas there was a significant decline in STH infections. Based on the prevalence data, the continuation of annual or bi-annual MDA for STH is recommended, as well as an extension of praziquantel treatment to identified moderate- and high-risk communities for schistosomiasis.

14.

Background

Serological (antibody detection) tests for tuberculosis (TB) are widely used in developing countries. As part of a World Health Organization policy process, we performed an updated systematic review to assess the diagnostic accuracy of commercial serological tests for pulmonary and extrapulmonary TB with a focus on the relevance of these tests in low- and middle-income countries.

Methods and Findings

We used methods recommended by the Cochrane Collaboration and the GRADE approach for rating quality of evidence. In a previous review, we searched multiple databases for papers published from 1 January 1990 to 30 May 2006, and in this update, we added papers published from the end of that period to 29 June 2010. We prespecified subgroups to address heterogeneity and summarized test performance using bivariate random-effects meta-analysis. For pulmonary TB, we included 67 studies (48% from low- and middle-income countries) with 5,147 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (31% to 100%). For anda-TB IgG, the only test with enough studies for meta-analysis, pooled sensitivity was 76% (95% CI 63%–87%) in smear-positive (seven studies) and 59% (95% CI 10%–96%) in smear-negative (four studies) patients; pooled specificities were 92% (95% CI 74%–98%) and 91% (95% CI 79%–96%), respectively. Compared with ELISA (pooled sensitivity 60% [95% CI 6%–65%]; pooled specificity 98% [95% CI 96%–99%]), immunochromatographic tests yielded lower pooled sensitivity (53%, 95% CI 42%–64%) and comparable pooled specificity (98%, 95% CI 94%–99%). For extrapulmonary TB, we included 25 studies (40% from low- and middle-income countries) with 1,809 participants. For all tests, estimates were variable for sensitivity (0% to 100%) and specificity (59% to 100%). Overall, the quality of evidence was graded very low for studies of pulmonary and extrapulmonary TB.

Conclusions

Despite expansion of the literature since 2006, commercial serological tests continue to produce inconsistent and imprecise estimates of sensitivity and specificity. Quality of evidence remains very low. These data informed a recently published World Health Organization policy statement against serological tests.

15.

Introduction

The incidence of end-stage renal disease is increasing worldwide. Earlier studies reported high prevalence rates of obesity and hypertension, two major risk factors for chronic kidney disease (CKD), in Golestan Province, Iran. We aimed to investigate the prevalence of moderate to severe CKD and its risk factors in the region.

Methods

Questionnaire data and blood samples were collected from 3591 participants (≥18 years old) from the general population. Based on serum creatinine levels, glomerular filtration rate (GFR) was estimated.
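GFR in such surveys is usually estimated from serum creatinine together with age, sex, and ethnicity. A minimal sketch of one widely used option, the 4-variable MDRD study equation for IDMS-traceable creatinine (whether this survey used MDRD or another equation such as CKD-EPI is not stated in the abstract):

    def egfr_mdrd(serum_creatinine_mg_dl: float, age_years: float,
                  female: bool, black: bool = False) -> float:
        """4-variable MDRD study equation, returning eGFR in mL/min/1.73 m^2.
        Shown as one common option; the survey's actual equation is not stated."""
        gfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
        if female:
            gfr *= 0.742
        if black:
            gfr *= 1.212
        return gfr

    # A participant with eGFR < 60 mL/min/1.73 m^2 falls into CKD stages 3-5:
    print(round(egfr_mdrd(1.4, 62, female=True), 1))   # ~38, i.e. stage 3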

Results

High body mass index (BMI) was common: 35.0% of participants were overweight (BMI 25–29.9) and 24.5% were obese (BMI ≥30). The prevalence of CKD stages 3 to 5 (CKD-S3-5), i.e., GFR <60 mL/min/1.73 m2, was 4.6%. The odds ratio (OR) and 95% confidence interval (95% CI) for the risk of CKD-S3-5 associated with each one-year increase in age was 1.13 (1.11–1.15). Men were at lower risk of CKD-S3-5 than women (OR = 0.28; 95% CI 0.18–0.45). Obesity (OR = 1.78; 95% CI 1.04–3.05), self-reported diabetes (OR = 1.70; 95% CI 1.00–2.86), hypertension (OR = 3.16; 95% CI 2.02–4.95), ischemic heart disease (OR = 2.73; 95% CI 1.55–4.81), and myocardial infarction (OR = 2.69; 95% CI 1.14–6.32) were associated with increased risk of CKD-S3-5 in models adjusted for age and sex. The association persisted for self-reported hypertension even after adjustment for BMI and history of diabetes (OR = 2.85; 95% CI 1.77–4.59).

Conclusion

A considerable proportion of inhabitants of Golestan have CKD-S3-5. Screening individuals with major risk factors for CKD, in order to detect and treat impaired renal function early, may be worthwhile. Further studies on optimal risk prediction of future end-stage renal disease and on the effectiveness of any screening program are warranted.

16.

Background

The estimated number of new HIV infections in the United States reflects the leading edge of the epidemic. Previously, CDC estimated HIV incidence in the United States in 2006 as 56,300 (95% CI: 48,200–64,500). We updated the 2006 estimate and calculated incidence for 2007–2009 using improved methodology.

Methodology

We estimated incidence using incidence surveillance data from 16 states and 2 cities and a modification of our previously described stratified extrapolation method based on a sample survey approach with multiple imputation, stratification, and extrapolation to account for missing data and heterogeneity of HIV testing behavior among population groups.

Principal Findings

Estimated HIV incidence among persons aged 13 years and older was 48,600 (95% CI: 42,400–54,700) in 2006, 56,000 (95% CI: 49,100–62,900) in 2007, 47,800 (95% CI: 41,800–53,800) in 2008 and 48,100 (95% CI: 42,200–54,000) in 2009. From 2006 to 2009, incidence did not change significantly overall or among specific race/ethnicity or risk groups. However, there was a 21% (95% CI: 1.9%–39.8%; p = 0.017) increase in incidence for people aged 13–29 years, driven by a 34% (95% CI: 8.4%–60.4%) increase in young men who have sex with men (MSM). There was a 48% increase among young black/African American MSM (12.3%–83.0%; p<0.001). Among people aged 13–29 years, only MSM experienced significant increases in incidence, and within this group the increase was significant among black/African American MSM. In 2009, MSM accounted for 61% of new infections, heterosexual contact for 27%, injection drug use (IDU) for 9%, and MSM/IDU for 3%.

Conclusions/Significance

Overall, HIV incidence in the United States was relatively stable 2006–2009; however, among young MSM, particularly black/African American MSM, incidence increased. HIV continues to be a major public health burden, disproportionately affecting several populations in the United States, especially MSM and racial and ethnic minorities. Expanded, improved, and targeted prevention is necessary to reduce HIV incidence.

17.

Background

Soil-transmitted helminth (STH) infections (i.e., Ascaris lumbricoides, hookworm, and Trichuris trichiura) affect more than a billion people. Preventive chemotherapy (i.e., repeated administration of anthelmintic drugs to at-risk populations) is the mainstay of control. This strategy, however, does not prevent reinfection. We performed a systematic review and meta-analysis to assess patterns and dynamics of STH reinfection after drug treatment.

Methodology

We systematically searched PubMed, ISI Web of Science, EMBASE, Cochrane Database of Systematic Reviews, China National Knowledge Infrastructure, WanFang Database, Chinese Scientific Journal Database, and Google Scholar. Information on study year, country, sample size, age of participants, diagnostic method, drug administration strategy, prevalence and intensity of infection pre- and posttreatment, cure and egg reduction rate, evaluation period posttreatment, and adherence was extracted. Pooled risk ratios from random-effects models were used to assess the risk of STH reinfection after treatment. Our protocol is available on PROSPERO, registration number: CRD42011001678.

Principal Findings

From 154 studies identified, 51 were included and 24 provided STH infection rates pre- and posttreatment, whereas 42 reported determinants of predisposition to reinfection. At 3, 6, and 12 months posttreatment, A. lumbricoides prevalence reached 26% (95% confidence interval (CI): 16–43%), 68% (95% CI: 60–76%) and 94% (95% CI: 88–100%) of pretreatment levels, respectively. For T. trichiura, the respective reinfection prevalences were 36% (95% CI: 28–47%), 67% (95% CI: 42–100%), and 82% (95% CI: 62–100%), and for hookworm, 30% (95% CI: 26–34%), 55% (95% CI: 34–87%), and 57% (95% CI: 49–67%). Prevalence and intensity of reinfection were positively correlated with pretreatment infection status.

Conclusion

STH reinfections occur rapidly after treatment, particularly for A. lumbricoides and T. trichiura. Hence, there is a need for frequent anthelmintic drug administrations to maximize the benefit of preventive chemotherapy. Integrated control approaches emphasizing health education and environmental sanitation are needed to interrupt transmission of STH.

18.

Background

In Norway, women with negative or low-grade cervical biopsies (normal/CIN1) are followed up after six months in order to decide on further follow-up or recall for screening at three-year intervals. High specificity and positive predictive value (PPV) of the triage test are important to avoid unnecessary diagnostic and therapeutic procedures, whereas a low risk of high-grade disease among triage-negative women assures safety.

Materials and Methods

At the University Hospital of North Norway, cytology and the HPV mRNA test PreTect HPV-Proofer, which detects E6/E7 mRNA from HPV types 16, 18, 31, 33 and 45, are used in post-colposcopy follow-up of women with a negative or low-grade biopsy. In this study, women with a negative biopsy after high-grade cytology (ASC-H/HSIL) and/or a positive HPV mRNA test in the period 2005–2009 were included (n = 520). Histologically confirmed cervical intraepithelial neoplasia of grade 2 or worse (CIN2+) was used as the study endpoint.

Results

Of 520 women with negative or low-grade biopsy, 124 (23.8%) had CIN2+ in a follow-up biopsy. The sensitivity and specificity of the HPV mRNA test were 89.1% (95% CI, 80.1–98.1) and 92.5% (95% CI, 88.2–96.7), respectively. The ratios of the sensitivity, specificity and PPV of HPV mRNA testing compared with repeat cytology for finding CIN2+ were 1.05 (95% CI: 0.92–1.21), 1.21 (95% CI: 1.12–1.32), and 1.49 (95% CI: 1.20–1.86), respectively. The PPV of mRNA testing was 77.3% (95% CI, 59.8–94.8) in women aged 40 or older.

Conclusion

Women with a negative cervical biopsy require follow-up before resumption of routine screening. Post-colposcopy HPV mRNA testing was as sensitive as, but more specific than, post-colposcopy cytology. In addition, the HPV mRNA test showed a higher PPV. A positive mRNA test post-colposcopy could justify treatment in women above 40 years.

19.

Background

Drug resistance among tuberculosis patients in sub-Saharan Africa is increasing, possibly due to association with HIV infection. We studied drug resistance and HIV infection in a representative sample of 533 smear-positive tuberculosis patients diagnosed in Kampala, Uganda.

Methods/Principal Findings

Among 473 new patients, multidrug resistance was found in 5 (1.1%, 95% CI 0.3–2.5) and resistance to any drug in 57 (12.1%, 9.3–15.3). Among 60 previously treated patients, the corresponding figures were 7 (11.7%, 4.8–22.6) and 17 (28.3%, 17.5–41.4). Of 517 patients with HIV results, 165 (31.9%, 27.9–36.1) tested positive. Neither multidrug resistance (adjusted odds ratio [ORadj] 0.7; 95% CI 0.19–2.6) nor resistance to any drug (ORadj 0.7; 0.43–1.3) was associated with HIV status. Primary resistance to any drug was more common among patients who had worked in health care (ORadj 3.5; 1.0–12.0).

Conclusion/Significance

Anti-tuberculosis drug resistance rates in Kampala are low and not associated with HIV infection, but may be associated with exposure during health care.

20.

Objectives

To use electronic health records to assess the comorbidity burden of Autism Spectrum Disorder (ASD) in children and young adults.

Study Design

A retrospective prevalence study was performed using a distributed query system across three general hospitals and one pediatric hospital. Over 14,000 individuals under age 35 with ASD were characterized by their comorbidities and, conversely, the prevalence of ASD within these comorbidities was measured. The comorbidity prevalence of younger (age <18 years) and older (age 18–34 years) individuals with ASD was compared.

Results

19.44% of ASD patients had epilepsy, compared to 2.19% in the overall hospital population (95% confidence interval for the difference in percentages 13.58–14.69%); 2.43% of ASD patients had schizophrenia vs. 0.24% in the hospital population (95% CI 1.89–2.39%); inflammatory bowel disease (IBD) 0.83% vs. 0.54% (95% CI 0.13–0.43%); bowel disorders (without IBD) 11.74% vs. 4.5% (95% CI 5.72–6.68%); CNS/cranial anomalies 12.45% vs. 1.19% (95% CI 9.41–10.38%); diabetes mellitus type I (DM1) 0.79% vs. 0.34% (95% CI 0.3–0.6%); muscular dystrophy 0.47% vs. 0.05% (95% CI 0.26–0.49%); sleep disorders 1.12% vs. 0.14% (95% CI 0.79–1.14%). Autoimmune disorders (excluding DM1 and IBD) were not significantly different at 0.67% vs. 0.68% (95% CI −0.14% to 0.13%). Three of the studied comorbidities increased significantly when comparing ages 0–17 vs. 18–34 (p<0.001): schizophrenia (1.43% vs. 8.76%), diabetes mellitus type I (0.67% vs. 2.08%), and IBD (0.68% vs. 1.99%), whereas sleep disorders, bowel disorders (without IBD) and epilepsy did not change significantly.
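Each comparison above is a difference between two independent proportions with a 95% confidence interval. A minimal sketch of that calculation using a Wald interval and hypothetical counts (the hospitals' exact denominators and interval method are not given in the abstract):

    from math import sqrt

    def diff_proportions_ci(x1: int, n1: int, x2: int, n2: int, z: float = 1.96):
        """Wald 95% CI for the difference between two independent proportions.
        A generic sketch; not necessarily the study's interval method."""
        p1, p2 = x1 / n1, x2 / n2
        diff = p1 - p2
        se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
        return diff, diff - z * se, diff + z * se

    # Hypothetical counts: epilepsy in 2800 of 14400 ASD patients
    # versus 22000 of 1000000 patients in the overall hospital population.
    d, lo, hi = diff_proportions_ci(2800, 14400, 22000, 1000000)
    print(f"difference {d:.2%} (95% CI {lo:.2%} to {hi:.2%})")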

Conclusions

The comorbidities of ASD encompass disease states that are significantly overrepresented in ASD even relative to the patient populations of tertiary health centers. This burden of comorbidities goes well beyond those routinely managed in developmental medicine centers and requires broad multidisciplinary management that payors and providers will have to plan for.
