Similar Articles
1.
BackgroundIt is estimated that over 250 million children under 5 years of age in low- and middle-income countries (LMICs) do not reach their full developmental potential. Poor maternal diet, anemia, and micronutrient deficiencies during pregnancy are associated with suboptimal neurodevelopmental outcomes in children. However, the effect of prenatal macronutrient and micronutrient supplementation on child development in LMIC settings remains unclear due to limited evidence from randomized trials.Methods and findingsWe conducted a 3-arm cluster-randomized trial (n = 53 clusters) that evaluated the efficacy of (1) prenatal multiple micronutrient supplementation (MMS; n = 18 clusters) and (2) lipid-based nutrient supplementation (LNS; n = 18 clusters) as compared to (3) routine iron–folic acid (IFA) supplementation (n = 17 clusters) among pregnant women in the rural district of Madarounfa, Niger, from March 2015 to August 2019 (ClinicalTrials.gov identifier NCT02145000). Children were followed until 2 years of age, and the Bayley Scales of Infant and Toddler Development III (BSID-III) were administered to children every 3 months from 6 to 24 months of age. Maternal report of WHO gross motor milestone achievement was assessed monthly from 3 to 24 months of age. An intention-to-treat analysis was followed. Child BSID-III data were available for 559, 492, and 581 singleton children in the MMS, LNS, and IFA groups, respectively. Child WHO motor milestone data were available for 691, 781, and 753 singleton children in the MMS, LNS, and IFA groups, respectively. Prenatal MMS had no effect on child BSID-III cognitive (standardized mean difference [SMD]: 0.21; 95% CI: −0.20, 0.62; p = 0.32), language (SMD: 0.16; 95% CI: −0.30, 0.61; p = 0.50) or motor scores (SMD: 0.18; 95% CI: −0.39, 0.74; p = 0.54) or on time to achievement of the WHO gross motor milestones as compared to IFA. Prenatal LNS had no effect on child BSID-III cognitive (SMD: 0.17; 95% CI: −0.15, 0.49; p = 0.29), language (SMD: 0.11; 95% CI: −0.22, 0.44; p = 0.53) or motor scores (SMD: −0.04; 95% CI: −0.46, 0.37; p = 0.85) at the 24-month endline visit as compared to IFA. However, the trajectory of BSID-III cognitive scores during the first 2 years of life differed between the groups with children in the LNS group having higher cognitive scores at 18 and 21 months (approximately 0.35 SD) as compared to the IFA group (p-value for difference in trajectory <0.001). Children whose mothers received LNS also had earlier achievement of sitting alone (hazard ratio [HR]: 1.57; 95% CI: 1.10 to 2.24; p = 0.01) and walking alone (1.52; 95% CI: 1.14 to 2.03; p = 0.004) as compared to IFA, but there was no effect on time to achievement of other motor milestones. A limitation of our study is that we assessed child development up to 2 years of age, and, therefore, we may have not captured effects that are easier to detect or emerge at older ages.ConclusionsThere was no benefit of prenatal MMS on child development outcomes up to 2 years of age as compared to IFA. There was evidence of an apparent positive effect of prenatal LNS on cognitive development trajectory and time to achievement of selected gross motor milestones.Trial registrationClinicalTrials.gov NCT02145000.
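The cognitive, language, and motor effects above are reported as standardized mean differences (SMDs), i.e., the between-arm difference in mean BSID-III scores divided by a pooled standard deviation. A minimal sketch of that calculation is below; the scores are simulated placeholders rather than trial data, and the sketch ignores the cluster-randomized design, which the trial's analysis would additionally account for.

```python
import numpy as np

def standardized_mean_difference(treated, control):
    """Cohen's-d-style SMD: difference in means divided by the pooled SD."""
    treated = np.asarray(treated, dtype=float)
    control = np.asarray(control, dtype=float)
    n1, n2 = len(treated), len(control)
    pooled_sd = np.sqrt(((n1 - 1) * treated.var(ddof=1) +
                         (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
    return (treated.mean() - control.mean()) / pooled_sd

rng = np.random.default_rng(0)
mms_scores = rng.normal(101, 15, 559)  # hypothetical BSID-III cognitive scores, MMS arm
ifa_scores = rng.normal(98, 15, 581)   # hypothetical scores, IFA arm
print(f"SMD = {standardized_mean_difference(mms_scores, ifa_scores):.2f}")
```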

Christopher R. Sudfeld and colleagues evaluate the benefit of multiple micronutrient supplementation and medium-quantity lipid-based nutrient supplementation in pregnancy on child development in rural Niger.

2.

Objective

We performed a systematic review and meta-analysis of double-blind, randomized, placebo-controlled trials evaluating suvorexant for primary insomnia.

Methods

Relevant studies were identified through searches of PubMed, databases of the Cochrane Library, and PsycINFO citations through June 27, 2015. We performed a systematic review and meta-analysis of suvorexant trial efficacy and safety outcomes. The primary efficacy outcomes were either subjective total sleep time (sTST) or subjective time-to-sleep onset (sTSO) at 1 month. The secondary outcomes were other efficacy outcomes, discontinuation rate, and individual adverse events. The risk ratio, number-needed-to-treat/harm, and weighted mean difference (WMD) and 95% confidence intervals (CI) based on a random effects model were calculated.
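For illustration, the sketch below shows how a random-effects pooled weighted mean difference and its 95% CI can be computed with DerSimonian–Laird weighting; the three per-study values are invented placeholders, not the suvorexant trial data.

```python
import numpy as np
from scipy import stats

def dersimonian_laird(effects, variances):
    """Pool per-study mean differences with DerSimonian-Laird random effects."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w_fixed = 1.0 / variances
    fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)
    q = np.sum(w_fixed * (effects - fixed) ** 2)               # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
    tau2 = max(0.0, (q - df) / c)                              # between-study variance
    w = 1.0 / (variances + tau2)
    pooled = np.sum(w * effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    ci = pooled + np.array([-1, 1]) * stats.norm.ppf(0.975) * se
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0        # heterogeneity
    return pooled, ci, i2

# hypothetical per-trial WMDs (minutes of subjective total sleep time) and their variances
wmd, ci, i2 = dersimonian_laird([-18.0, -22.5, -19.8], [6.2, 9.8, 7.4])
print(f"pooled WMD {wmd:.1f} min, 95% CI {ci[0]:.1f} to {ci[1]:.1f}, I2 = {i2:.0f}%")
```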

Results

The computerized literature database search initially yielded 48 results; 37 articles were excluded following a review of titles and abstracts, and another eight review articles were excluded after full-text review. Thus, we identified 4 trials that included a total of 3,076 patients. Suvorexant was superior to placebo with regard to the two primary efficacy outcomes (sTST: WMD = −20.16, 95% CI = −25.01 to −15.30, 1889 patients, 3 trials; sTSO: WMD = −7.62, 95% CI = −11.03 to −4.21, 1889 patients, 3 trials) and did not differ from placebo in trial discontinuations. Suvorexant caused a higher incidence than placebo of experiencing at least one side effect, as well as of abnormal dreams, somnolence, excessive daytime sleepiness/sedation, fatigue, dry mouth, and rebound insomnia.

Conclusions

Our analysis of published trial results suggests that suvorexant is effective in treating primary insomnia and is well-tolerated.

3.
BackgroundImplementing evidence into clinical practice is a key focus of healthcare improvements to reduce unwarranted variation. Dissemination of evidence-based recommendations and knowledge brokering have emerged as potential strategies to achieve evidence implementation by influencing resource allocation decisions. The aim of this study was to determine the effectiveness of these two research implementation strategies to facilitate evidence-informed healthcare management decisions for the provision of inpatient weekend allied health services.Methods and findingsThis multicentre, single-blinded (data collection and analysis), three-group parallel cluster randomised controlled trial with concealed allocation was conducted in Australian and New Zealand hospitals between February 2018 and January 2020. Clustering and randomisation took place at the organisation level where weekend allied health staffing decisions were made (e.g., network of hospitals or single hospital). Hospital wards were nested within these decision-making structures. Three conditions were compared over a 12-month period: (1) usual practice waitlist control; (2) dissemination of written evidence-based practice recommendations; and (3) access to a webinar-based knowledge broker in addition to the recommendations. The primary outcome was the alignment of weekend allied health provision with practice recommendations at the cluster and ward levels, addressing the adoption, penetration, and fidelity to the recommendations. The secondary outcome was mean hospital length of stay at the ward level. Outcomes were collected at baseline and 12 months later. A total of 45 clusters (n = 833 wards) were randomised to either control (n = 15), recommendation (n = 16), or knowledge broker (n = 14) conditions. Four (9%) did not provide follow-up data, and no adverse events were recorded. No significant effect was found with either implementation strategy for the primary outcome at the cluster level (recommendation versus control β 18.11 [95% CI −8,721.81 to 8,758.02] p = 0.997; knowledge broker versus control β 1.24 [95% CI −6,992.60 to 6,995.07] p = 1.000; recommendation versus knowledge broker β −9.12 [95% CI −3,878.39 to 3,860.16] p = 0.996) or ward level (recommendation versus control β 0.01 [95% CI 0.74 to 0.75] p = 0.983; knowledge broker versus control β −0.12 [95% CI −0.54 to 0.30] p = 0.581; recommendation versus knowledge broker β −0.19 [−1.04 to 0.65] p = 0.651). There was no significant effect between strategies for the secondary outcome at ward level (recommendation versus control β 2.19 [95% CI −1.36 to 5.74] p = 0.219; knowledge broker versus control β −0.55 [95% CI −1.16 to 0.06] p = 0.075; recommendation versus knowledge broker β −3.75 [95% CI −8.33 to 0.82] p = 0.102). None of the control or knowledge broker clusters transitioned to partial or full alignment with the recommendations. Three (20%) of the clusters who only received the written recommendations transitioned from nonalignment to partial alignment. Limitations include underpowering at the cluster level sample due to the grouping of multiple geographically distinct hospitals to avoid contamination.ConclusionsOwing to a lack of power at the cluster level, this trial was unable to identify a difference between the knowledge broker strategy and dissemination of recommendations compared with usual practice for the promotion of evidence-informed resource allocation to inpatient weekend allied health services. 
Future research is needed to determine the interactions between different implementation strategies and healthcare contexts when translating evidence into healthcare practice.Trial registrationAustralian New Zealand Clinical Trials Registry ACTRN12618000029291.

In a cluster randomized controlled implementation trial, Dr. Mitchell N Sarkies and colleagues examine the effectiveness of knowledge brokering and recommendation dissemination in influencing healthcare resource allocation decisions in Australia and New Zealand.

4.
[Purpose]This study aimed to analyze the prevalence of hypertension according to body mass index (BMI) and relative handgrip strength (RHGS) among elderly individuals in Korea. [Methods]We analyzed data from 44,183 Korean elderly individuals over 65 years old (men: n = 15,798, age = 73.31 ± 5.04 years; women: n = 28,385, age = 72.14 ± 5.04 years) obtained from the Korean National Fitness Assessment in 2019. All participants were categorized into three groups according to BMI and RHGS, and one-way ANOVA and logistic regression analyses were performed. [Results]Overweight (men: odds ratio [OR] 1.16, 95% confidence interval [CI] 1.06–1.26; women: OR 1.15, 95% CI 1.07–1.23) and obese (men: OR 1.54, 95% CI 1.42–1.66; women: OR 1.44, 95% CI 1.36–1.53) elderly individuals showed a higher prevalence of hypertension than elderly individuals with normal weight, after controlling for age. In men, a lower RHGS was associated with a higher prevalence of hypertension after controlling for age (weak RHGS: OR 1.09, 95% CI 1.00–1.17; middle RHGS: OR 1.21, 95% CI 1.12–1.31 vs. strong RHGS). [Conclusion]A higher BMI was associated with a higher prevalence of hypertension in the elderly Korean population. In addition, a lower RHGS was associated with a higher prevalence of hypertension in elderly Korean men.
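The age-adjusted odds ratios in this abstract come from logistic regression with BMI (or RHGS) category as the exposure. A minimal sketch of that kind of model is below; the data frame and column names are hypothetical, and the simulated data will not reproduce the reported ORs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "hypertension": rng.integers(0, 2, n),                       # 1 = hypertensive
    "bmi_group": rng.choice(["normal", "overweight", "obese"], n),
    "age": rng.uniform(65, 90, n),
})

# logistic regression of hypertension on BMI category, controlling for age
model = smf.logit("hypertension ~ C(bmi_group, Treatment('normal')) + age",
                  data=df).fit(disp=0)
odds_ratios = np.exp(model.params).rename("OR")   # exponentiated coefficients = odds ratios
or_ci = np.exp(model.conf_int())                  # 95% CIs on the OR scale
print(pd.concat([odds_ratios, or_ci], axis=1))
```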

5.
BackgroundThe risk of perinatal death and severe neonatal morbidity increases gradually after 41 weeks of pregnancy. Several randomised controlled trials (RCTs) have assessed if induction of labour (IOL) in uncomplicated pregnancies at 41 weeks will improve perinatal outcomes. We performed an individual participant data meta-analysis (IPD-MA) on this subject.Methods and findingsWe searched PubMed, Excerpta Medica dataBASE (Embase), The Cochrane Library, Cumulative Index of Nursing and Allied Health Literature (CINAHL), and PsycINFO on February 21, 2020 for RCTs comparing IOL at 41 weeks with expectant management until 42 weeks in women with uncomplicated pregnancies. Individual participant data (IPD) were sought from eligible RCTs. Primary outcome was a composite of severe adverse perinatal outcomes: mortality and severe neonatal morbidity. Additional outcomes included neonatal admission, mode of delivery, perineal lacerations, and postpartum haemorrhage. Prespecified subgroup analyses were conducted for parity (nulliparous/multiparous), maternal age (<35/≥35 years), and body mass index (BMI) (<30/≥30). Aggregate data meta-analysis (MA) was performed to include data from RCTs for which IPD was not available.From 89 full-text articles, we identified three eligible RCTs (n = 5,161), and two contributed with IPD (n = 4,561). Baseline characteristics were similar between the groups regarding age, parity, BMI, and higher level of education. IOL resulted overall in a decrease of severe adverse perinatal outcome (0.4% [10/2,281] versus 1.0% [23/2,280]; relative risk [RR] 0.43 [95% confidence interval [CI] 0.21 to 0.91], p-value 0.027, risk difference [RD] −57/10,000 [95% CI −106/10,000 to −8/10,000], I2 0%). The number needed to treat (NNT) was 175 (95% CI 94 to 1,267).Perinatal deaths occurred in one (<0.1%) versus eight (0.4%) pregnancies (Peto odds ratio [OR] 0.21 [95% CI 0.06 to 0.78], p-value 0.019, RD −31/10,000, [95% CI −56/10,000 to −5/10,000], I2 0%, NNT 326, [95% CI 177 to 2,014]) and admission to a neonatal care unit ≥4 days occurred in 1.1% (24/2,280) versus 1.9% (46/2,273), (RR 0.52 [95% CI 0.32 to 0.85], p-value 0.009, RD −97/10,000 [95% CI −169/10,000 to −26/10,000], I2 0%, NNT 103 [95% CI 59 to 385]). There was no difference in the rate of cesarean delivery (10.5% versus 10.7%; RR 0.98, [95% CI 0.83 to 1.16], p-value 0.81) nor in other important perinatal, delivery, and maternal outcomes. MA on aggregate data showed similar results.Prespecified subgroup analyses for the primary outcome showed a significant difference in the treatment effect (p = 0.01 for interaction) for parity, but not for maternal age or BMI. The risk of severe adverse perinatal outcome was decreased for nulliparous women in the IOL group (0.3% [4/1,219] versus 1.6% [20/1,264]; RR 0.20 [95% CI 0.07 to 0.60], p-value 0.004, RD −127/10,000, [95% CI −204/10,000 to −50/10,000], I2 0%, NNT 79 [95% CI 49 to 201]) but not for multiparous women (0.6% [6/1,219] versus 0.3% [3/1,264]; RR 1.59 [95% CI 0.15 to 17.30], p-value 0.35, RD 27/10,000, [95% CI −29/10,000 to 84/10,000], I2 55%).A limitation of this IPD-MA was the risk of overestimation of the effect on perinatal mortality due to early stopping of the largest included trial for safety reasons after the advice of the Data and Safety Monitoring Board. 
Furthermore, only two RCTs were eligible for the IPD-MA; thus, our ability to assess severe adverse neonatal outcomes, for which there were few events, was limited.ConclusionsIn this study, we found that, overall, IOL at 41 weeks improved perinatal outcome compared with expectant management until 42 weeks without increasing the cesarean delivery rate. This benefit is shown only in nulliparous women, whereas for multiparous women, the incidence of mortality and morbidity was too low to demonstrate any effect. The magnitude of risk reduction of perinatal mortality remains uncertain. Women with pregnancies approaching 41 weeks should be informed of the risk differences according to parity so that they are able to make an informed choice for IOL at 41 weeks or expectant management until 42 weeks.Study Registration: PROSPERO CRD42020163174
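The relative risk, risk difference, and number needed to treat quoted for the primary outcome follow directly from the event counts in the abstract; a small worked check (unadjusted, so it will not exactly match the model-based estimates):

```python
# severe adverse perinatal outcome counts from the abstract
events_iol, n_iol = 10, 2281    # induction of labour at 41 weeks
events_exp, n_exp = 23, 2280    # expectant management until 42 weeks

risk_iol = events_iol / n_iol
risk_exp = events_exp / n_exp

rr = risk_iol / risk_exp        # relative risk, ~0.43
rd = risk_iol - risk_exp        # risk difference, ~ -57 per 10,000 (favours IOL)
nnt = 1 / abs(rd)               # number needed to treat, ~175

print(f"RR = {rr:.2f}, RD = {rd * 10_000:.0f}/10,000, NNT = {nnt:.0f}")
```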

Mårten Alkmark and co-workers report on a meta-analysis of randomized trials of labour induction at 41 weeks' gestation as compared with expectant management until 42 weeks.

6.
[Purpose]The present study compared energy metabolism between walking and running at equivalent speeds during two incremental exercise tests.[Methods]Thirty-four university students (18 males, 16 females) were recruited. Each participant completed two trials, a walking (Walk) trial and a running (Run) trial, performed on different days 2–3 days apart. Treadmill exercise began with an initial 3-min stage (3.0 km/h in the Walk trial, 5.0 km/h in the Run trial), and the walking or running speed was then increased by 0.5 km/h every minute. Changes in metabolic variables, heart rate (HR), and rating of perceived exertion (RPE) during exercise were compared between the trials.[Results]Energy expenditure (EE) increased with speed in each trial. However, the Walk trial had a significantly higher EE than the Run trial at speeds exceeding 92 ± 2 % of the maximal walking speed (MWS, p < 0.01). Similarly, carbohydrate (CHO) oxidation was significantly higher in the Walk trial than in the Run trial above 92 ± 2 %MWS in males (p < 0.001) and above 93 ± 1 %MWS in females (p < 0.05).[Conclusion]These findings suggest that EE and CHO oxidation during walking increase non-linearly with speed, and that walking at a fast speed elicits greater metabolic responses than running at the equivalent speed in young participants.
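The abstract does not state how EE and CHO oxidation were calculated, but a common approach with treadmill gas-exchange data is the Weir equation for energy expenditure and Frayn's stoichiometric equations for substrate oxidation (neglecting protein). A sketch under that assumption, with illustrative VO2/VCO2 values:

```python
def energy_expenditure_kcal_min(vo2, vco2):
    """Weir equation (kcal/min), urinary nitrogen ignored; VO2/VCO2 in L/min."""
    return 3.941 * vo2 + 1.106 * vco2

def cho_oxidation_g_min(vo2, vco2):
    """Frayn (1983) carbohydrate oxidation (g/min), protein oxidation neglected."""
    return 4.55 * vco2 - 3.21 * vo2

def fat_oxidation_g_min(vo2, vco2):
    """Frayn (1983) fat oxidation (g/min), protein oxidation neglected."""
    return 1.67 * vo2 - 1.67 * vco2

# illustrative gas-exchange values for walking near maximal walking speed
vo2, vco2 = 2.1, 2.0  # L/min
print(f"EE  = {energy_expenditure_kcal_min(vo2, vco2):.1f} kcal/min")
print(f"CHO = {cho_oxidation_g_min(vo2, vco2):.2f} g/min")
print(f"Fat = {fat_oxidation_g_min(vo2, vco2):.2f} g/min")
```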

7.
Background & ObjectiveCurrent evidence is debatable regarding the possible effects of zinc supplementation on the inflammation and oxidative stress status of adults. This systematic review and meta-analysis aimed to clarify this inconclusiveness.Materials and MethodsA literature search was conducted via online databases, including PubMed, Scopus, ISI Web of Science, Cochrane Library, and Google Scholar, up to June 2020. The overall effect was presented as the weighted mean difference (WMD) with 95 % confidence interval (CI) in a random-effects meta-analysis model. Publication bias was also assessed using Egger’s and Begg’s statistics.ResultsIn total, 25 clinical trials (n = 1428) were reviewed, which indicated that zinc supplementation significantly affects the concentration of C-reactive protein (WMD: -0.03 mg/l; 95 % CI: -0.06, 0.0; P = 0.029), interleukin-6 (WMD: -3.81 pg/mL; 95 % CI: -6.87, -0.76; P = 0.014), malondialdehyde (WMD: -0.78 μmol/l; 95 % CI: -1.14, -0.42; P < 0.001), and total antioxidant capacity (WMD: 95.96 mmol/l; 95 % CI: 22.47, 169.44; P = 0.010). In addition, there was significant between-study heterogeneity, and non-significant increases were reported in nitric oxide (WMD: 1.47 μmol/l; 95 % CI: -2.45, 5.40; P = 0.461) and glutathione (WMD: 34.84 μmol/l; 95 % CI: -5.12, 74.80; P = 0.087).ConclusionAccording to the results, zinc supplementation may have beneficial anti-inflammatory and anti-oxidative effects in adults.

8.
BackgroundThe tolerability of oral iron supplementation for the treatment of iron deficiency anemia is disputed.ObjectiveOur aim was to quantify the odds of GI side-effects in adults related to the current gold standard oral iron therapy, namely ferrous sulfate.MethodsSystematic review and meta-analysis of randomized controlled trials (RCTs) evaluating GI side-effects that included ferrous sulfate and a comparator that was either placebo or intravenous (IV) iron. Random effects meta-analysis modelling was undertaken and study heterogeneity was summarised using I2 statistics.ResultsForty-three trials comprising 6831 adult participants were included. Twenty trials (n = 3168) had a placebo arm and twenty-three trials (n = 3663) had an active comparator arm of IV iron. Ferrous sulfate supplementation significantly increased the risk of GI side-effects versus placebo with an odds ratio (OR) of 2.32 [95% CI 1.74–3.08, p<0.0001, I2 = 53.6%] and versus IV iron with an OR of 3.05 [95% CI 2.07–4.48, p<0.0001, I2 = 41.6%]. Subgroup analysis in inflammatory bowel disease (IBD) patients showed a similar effect versus IV iron (OR = 3.14, 95% CI 1.34–7.36, p = 0.008, I2 = 0%). Likewise, subgroup analysis of pooled data from 7 RCTs in pregnant women (n = 1028) showed a statistically significant increased risk of GI side-effects for ferrous sulfate, although there was marked heterogeneity in the data (OR = 3.33, 95% CI 1.19–9.28, p = 0.02, I2 = 66.1%). Meta-regression did not provide significant evidence of an association between the study OR and the iron dose.ConclusionsOur meta-analysis confirms that ferrous sulfate is associated with a significant increase in gastrointestinal-specific side-effects but does not find a relationship with dose.
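The meta-regression mentioned above asks whether the per-study (log) odds ratio changes with iron dose. A minimal weighted-regression sketch of that idea follows; the per-study values and doses are entirely hypothetical, and a full analysis would also model between-study variance (e.g., a REML tau-squared term).

```python
import numpy as np
import statsmodels.api as sm

# hypothetical per-study log odds ratios, their variances, and daily elemental iron dose (mg)
log_or = np.log([2.1, 2.6, 1.8, 3.4, 2.0])
var_lor = np.array([0.09, 0.15, 0.11, 0.20, 0.08])
dose_mg = np.array([60.0, 100.0, 65.0, 200.0, 80.0])

# inverse-variance weighted least squares approximates a fixed-effect meta-regression
X = sm.add_constant(dose_mg)
fit = sm.WLS(log_or, X, weights=1.0 / var_lor).fit()
print(fit.params)    # slope = change in log OR per mg of iron
print(fit.pvalues)
```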

9.
10.
BackgroundThe purpose of this study was to determine the influence of chromium supplementation on lipid profile in patients with type 2 diabetes mellitus (T2DM).MethodsA systematic search was performed in Scopus, Embase, Web of Science, the Cochrane Library and PubMed databases to find randomized controlled trials (RCTs) related to the effect of chromium supplementation on lipid profile in patients with T2DM, up to June 2020. Meta-analyses were performed using the random-effects model, and the I2 index was used to evaluate heterogeneity.ResultsThe primary search yielded 725 publications. 24 RCTs (with 28 effect sizes) were eligible. Our meta-analysis indicated that chromium supplementation resulted in a significant decrease in serum levels of triglyceride (TG) (MD: -6.54 mg/dl, 95 % CI: -13.08 to -0.00, P = 0.050) and total cholesterol (TC) (WMD: -7.77 mg/dl, 95 % CI: -11.35 to -4.18, P < 0.001). Furthermore, chromium significantly increased the high-density lipoprotein (HDL) (WMD: 2.23 mg/dl, 95 % CI: 0.07–4.40, P = 0.043) level. However, chromium supplementation did not have a significant effect on the low-density lipoprotein (LDL) (WMD: -8.54 mg/dl, 95 % CI: -19.58 to 2.49, P = 0.129) level.ConclusionChromium supplementation may significantly improve lipid profile in patients with T2DM by decreasing TG and TC and increasing HDL. However, based on our analysis, chromium failed to affect LDL. It should be noted that the lipid-lowering effects of chromium supplementation were small and may not reach clinical importance.

11.
[Purpose]Skeletal muscle glycogen is a determinant of endurance capacity for some athletes. Ginger is well known to possess nutritional effects, such as anti-diabetic effects. We hypothesized that ginger extract (GE) ingestion increases skeletal muscle glycogen by enhancing fat oxidation. Thus, we investigated the effect of GE ingestion on exercise capacity, skeletal muscle glycogen, and certain blood metabolites in exercised rats. [Methods]First, we evaluated the influence of GE ingestion on body weight and exercise performance in rats fed different volumes of GE. Next, we measured the skeletal muscle glycogen content and free fatty acid (FFA) levels in GE-fed rats. Finally, we examined whether GE ingestion contributes to endurance capacity during intermittent exercise to exhaustion. [Results]We confirmed that GE ingestion increased exercise performance (p<0.05) and elevated the skeletal muscle glycogen content compared to the non-GE-fed (CE, control exercise) group before exercise (Soleus: p<0.01, Plantaris: p<0.01, Gastrocnemius: p<0.05). Blood FFA levels in the GE group were significantly higher than those in the CE group after exercise (p<0.05). Moreover, we demonstrated that exercise capacity was maintained in the GE group during intermittent exercise (p<0.05). [Conclusion]These findings indicate that GE ingestion increases skeletal muscle glycogen content and exercise performance through the upregulation of fat oxidation.

12.

Background

Perinatal common mental disorders (PCMDs) are a major cause of disability among women. Psychosocial interventions are one approach to reduce the burden of PCMDs. Working with care providers who are not mental health specialists, in the community or in antenatal health care facilities, can expand access to these interventions in low-resource settings. We assessed effects of such interventions compared to usual perinatal care, as well as effects of interventions based on intervention type, delivery method, and timing.

Methods and Findings

We conducted a systematic review, meta-analysis, and meta-regression. We searched databases including Embase and the Global Health Library (up to 7 July 2013) for randomized and non-randomized trials of psychosocial interventions delivered by non-specialist mental health care providers in community settings and antenatal health care facilities in low- and middle-income countries. We pooled outcomes from ten trials for 18,738 participants. Interventions led to an overall reduction in PCMDs compared to usual care when using continuous data for PCMD symptomatology (effect size [ES] −0.34; 95% CI −0.53, −0.16) and binary categorizations for presence or absence of PCMDs (odds ratio 0.59; 95% CI 0.26, 0.92). We found a significantly larger ES for psychological interventions (three studies; ES −0.46; 95% CI −0.58, −0.33) than for health promotion interventions (seven studies; ES −0.15; 95% CI −0.27, −0.02). Both individual (five studies; ES −0.18; 95% CI −0.34, −0.01) and group (three studies; ES −0.48; 95% CI −0.85, −0.11) interventions were effective compared to usual care, though delivery method was not associated with ES (meta-regression β coefficient −0.11; 95% CI −0.36, 0.14). Combined group and individual interventions (based on two studies) had no benefit compared to usual care, nor did interventions restricted to pregnancy (three studies). Intervention timing was not associated with ES (β 0.16; 95% CI −0.16, 0.49). The small number of trials and heterogeneity of interventions limit our findings.

Conclusions

Psychosocial interventions delivered by non-specialists are beneficial for PCMDs, especially psychological interventions. Research is needed on interventions in low-income countries, treatment versus preventive approaches, and cost-effectiveness. Please see later in the article for the Editors' Summary.

13.
BackgroundFinancial incentives and audit/feedback are widely used in primary care to influence clinician behaviour and increase quality of care. While observational data suggest a decline in quality when these interventions are stopped, their removal has not been evaluated in a randomised controlled trial (RCT), to our knowledge. This trial aimed to determine whether chlamydia testing in general practice is sustained when financial incentives and/or audit/feedback are removed.Methods and findingsWe undertook a 2 × 2 factorial cluster RCT in 60 general practices in 4 Australian states targeting 49,525 patients aged 16–29 years for annual chlamydia testing. Clinics were recruited between July 2014 and September 2015 and were followed for up to 2 years or until 31 December 2016. Clinics were eligible if they were in the intervention group of a previous cluster RCT where general practitioners (GPs) received financial incentives (AU$5–AU$8) for each chlamydia test and quarterly audit/feedback reports of their chlamydia testing rates. Clinics were randomised into 1 of 4 groups: incentives removed but audit/feedback retained (group A), audit/feedback removed but incentives retained (group B), both removed (group C), or both retained (group D). The primary outcome was the annual chlamydia testing rate among 16- to 29-year-old patients, where the numerator was the number who had at least 1 chlamydia test within 12 months and the denominator was the number who had at least 1 consultation during the same 12 months. We undertook a factorial analysis in which we investigated the effects of removal versus retention of incentives (groups A + C versus groups B + D) and the effects of removal versus retention of audit/feedback (group B + C versus groups A + D) separately. Of 60 clinics, 59 were randomised and 55 (91.7%) provided data (group A: 15 clinics, 11,196 patients; group B: 14, 11,944; group C: 13, 11,566; group D: 13, 14,819). Annual testing decreased from 20.2% to 11.7% (difference −8.8%; 95% CI −10.5% to −7.0%) in clinics with incentives removed and decreased from 20.6% to 14.3% (difference −7.1%; 95% CI −9.6% to −4.7%) where incentives were retained. The adjusted absolute difference in treatment effect was −0.9% (95% CI −3.5% to 1.7%; p = 0.2267). Annual testing decreased from 21.0% to 11.6% (difference −9.5%; 95% CI −11.7% to −7.4%) in clinics where audit/feedback was removed and decreased from 19.9% to 14.5% (difference −6.4%; 95% CI −8.6% to −4.2%) where audit/feedback was retained. The adjusted absolute difference in treatment effect was −2.6% (95% CI −5.4% to −0.1%; p = 0.0336). Study limitations included an unexpected reduction in testing across all groups impacting statistical power, loss of 4 clinics after randomisation, and inclusion of rural clinics only.ConclusionsAudit/feedback is more effective than financial incentives of AU$5–AU$8 per chlamydia test at sustaining GP chlamydia testing practices over time in Australian general practice.Trial registrationAustralian New Zealand Clinical Trials Registry ACTRN12614000595617
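The primary outcome is a simple ratio per clinic (patients with at least one chlamydia test among patients with at least one consultation in the same 12 months), and the factorial analysis pools the four arms into two pairwise contrasts. A schematic sketch with hypothetical patient-level data (the real analysis additionally adjusts for clustering and baseline rates):

```python
import pandas as pd

# hypothetical patient-level rows: one 16-29-year-old per row, for one 12-month period
df = pd.DataFrame({
    "clinic":    ["c1", "c1", "c1", "c2", "c2", "c3", "c3", "c4", "c4"],
    "arm":       ["A",  "A",  "A",  "B",  "B",  "C",  "C",  "D",  "D"],
    "consulted": [1, 1, 1, 1, 1, 1, 1, 1, 1],
    "tested":    [1, 0, 1, 0, 0, 1, 0, 1, 1],
})

# annual testing rate per clinic: tested patients / consulting patients
rates = (df[df["consulted"] == 1]
         .groupby(["clinic", "arm"])["tested"].mean()
         .rename("testing_rate")
         .reset_index())

# factorial contrast 1: incentives removed (A + C) versus incentives retained (B + D)
removed = rates["arm"].isin(["A", "C"])
print(rates)
print("incentives removed:", rates.loc[removed, "testing_rate"].mean())
print("incentives retained:", rates.loc[~removed, "testing_rate"].mean())
```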

In a cluster randomized trial, Jane S Hocking and colleagues investigate the impact of removing financial incentives and/or audit and feedback on chlamydia testing in general practice in Australia.

14.
BackgroundPsychosocial interventions for adolescent mental health problems are effective, but evidence on their longer-term outcomes is scarce, especially in low-resource settings. We report on the 12-month sustained effectiveness and costs of scaling up a lay counselor–delivered, transdiagnostic problem-solving intervention for common adolescent mental health problems in low-income schools in New Delhi, India.Methods and findingsParticipants in the original trial were 250 school-going adolescents (mean [M] age = 15.61 years, standard deviation [SD] = 1.68), including 174 (69.6%) who identified as male. Participants were recruited from 6 government schools over a period of 4 months (August 20 to December 14, 2018) and were selected on the basis of elevated mental health symptoms and distress/functional impairment. A 2-arm, randomized controlled trial design was used to examine the effectiveness of a lay counselor–delivered, problem-solving intervention (4 to 5 sessions over 3 weeks) with supporting printed booklets (intervention arm) in comparison with problem solving delivered via printed booklets alone (control arm), at the original endpoints of 6 and 12 weeks. The protocol was modified, as per the recommendation of the Trial Steering Committee, to include a post hoc extension of the follow-up period to 12 months. Primary outcomes were adolescent-reported psychosocial problems (Youth Top Problems [YTP]) and mental health symptoms (Strengths and Difficulties Questionnaire [SDQ] Total Difficulties scale). Other self-reported outcomes included SDQ subscales, perceived stress, well-being, and remission. The sustained effects of the intervention were estimated at the 12-month endpoint and over 12 months (the latter assumed a constant effect across 3 follow-up points) using a linear mixed model for repeated measures and involving complete case analysis. Sensitivity analyses examined the effect of missing data using multiple imputations. Costs were estimated for delivering the intervention during the trial and from modeling a scale-up scenario, using a retrospective ingredients approach. Out of the 250 original trial participants, 176 (70.4%) adolescents participated in the 12-month follow-up assessment. One adverse event was identified during follow-up and deemed unrelated to the intervention. Evidence was found for intervention effects on both SDQ Total Difficulties and YTP at 12 months (YTP: adjusted mean difference [AMD] = −0.75, 95% confidence interval [CI] = −1.47, −0.03, p = 0.04; SDQ Total Difficulties: AMD = −1.73, 95% CI = −3.47, 0.02, p = 0.05), with stronger effects over 12 months (YTP: AMD = −0.98, 95% CI = −1.51, −0.45, p < 0.001; SDQ Total Difficulties: AMD = −1.23, 95% CI = −2.37, −0.09; p = 0.03). There was also evidence for intervention effects on internalizing symptoms, impairment, perceived stress, and well-being over 12 months. The intervention effect was stable for most outcomes on sensitivity analyses adjusting for missing data; however, for SDQ Total Difficulties and impairment, the effect was slightly attenuated. The per-student cost of delivering the intervention during the trial was $3 United States dollars (USD; or $158 USD per case) and for scaling up the intervention in the modeled scenario was $4 USD (or $23 USD per case). The scaling up cost accounted for 0.4% of the per-student school budget in New Delhi. 
The main limitations of the study’s methodology were the lack of sample size calculations powered for 12-month follow-up and the absence of cost-effectiveness analyses using the primary outcomes.ConclusionsIn this study, we observed that a lay counselor–delivered, brief transdiagnostic problem-solving intervention had sustained effects on psychosocial problems and mental health symptoms over the 12-month follow-up period. Scaling up this resource-efficient intervention is an affordable policy goal for improving adolescents’ access to mental health care in low-resource settings. The findings need to be interpreted with caution, as this study was a post hoc extension, and thus, the sample size calculations did not take into account the relatively high attrition rate observed during the long-term follow-up.Trial registrationClinicalTrials.gov NCT03630471.
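The "linear mixed model for repeated measures" used to estimate effects at 12 months and over 12 months can be sketched with a random intercept per adolescent and an arm-by-visit interaction; the variable names and simulated data below are illustrative only, not the trial dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 120
long = pd.DataFrame({
    "id": np.repeat(np.arange(n), 3),
    "arm": np.repeat(rng.integers(0, 2, n), 3),      # 1 = counsellor-delivered arm
    "visit": np.tile(["6wk", "12wk", "12mo"], n),
})
# hypothetical Youth Top Problems scores, slightly lower in the intervention arm
long["ytp"] = 6 - 1.0 * long["arm"] + rng.normal(0, 2, len(long))

# random intercept per adolescent; the arm-by-visit interaction gives endpoint-specific effects
model = smf.mixedlm("ytp ~ arm * C(visit)", data=long, groups=long["id"]).fit(reml=True)
print(model.summary())
```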

Kanika Malik, Daniel Michelson, and colleagues study the sustained effectiveness of a lay counsellor-delivered problem-solving intervention on the mental health of adolescents enrolled in low-income schools in New Delhi, India.

15.
Prior studies of upper gastrointestinal bleeding (UGIB) and acute myocardial infarction (AMI) are small, and long-term effects of UGIB on AMI have not been delineated. We investigated whether UGIB in patients diagnosed with coronary artery disease (CAD) increased their risk of subsequent AMI. This was a population-based, nested case-control study using Taiwan’s National Health Insurance Research Database. After propensity-score matching for age, gender, comorbidities, CAD date, and follow-up duration, we identified 1,677 new-onset CAD patients with AMI (AMI[+]) between 2001 and 2006 as the case group and 10,062 new-onset CAD patients without (AMI[−]) as the control group. Conditional logistic regression was used to examine the association between UGIB and AMI. Compared with UGIB[−] patients, UGIB[+] patients had twice the risk for subsequent AMI (adjusted odds ratio [AOR] = 2.08; 95% confidence interval [CI], 1.72–2.50). In the subgroup analysis for gender and age, UGIB[+] women (AOR = 2.70; 95% CI, 2.03–3.57) and patients < 65 years old (AOR = 2.23; 95% CI, 1.56–3.18) had higher odds of an AMI. UGIB[+] AMI[+] patients used nonsignificantly less aspirin than did UGIB[−] AMI[+] patients (27.69% vs. 35.61%, respectively). UGIB increased the risk of subsequent AMI in CAD patients, especially in women and patients < 65. This suggests that physicians need to use earlier and more aggressive intervention to detect UGIB and prevent AMI in CAD patients.

16.
BackgroundPrior research has underscored negative impacts of perinatal parental depression on offspring cognitive performance in early childhood. However, little is known about the effects of parental depression during adolescence on offspring cognitive development.Methods and findingsThis study used longitudinal data from the nationally representative China Family Panel Studies (CFPS). The sample included 2,281 adolescents aged 10–15 years (the median age was 13 years with an interquartile range between 11 and 14 years) in 2012 when their parents were surveyed for depression symptoms with the 20-item Center for Epidemiologic Studies Depression Scale (CES-D). The sample was approximately balanced by sex, with 1,088 females (47.7%). We examined the associations of parental depression in 2012 with offspring cognitive performance (measured by mathematics, vocabulary, immediate word recall, delayed word recall, and number series tests) in subsequent years (i.e., 2014, 2016, and 2018) using linear regression models, adjusting for various offspring (i.e., age, sex, and birth order), parent (i.e., parents’ education level, age, whether living with the offspring, and employment status), and household characteristics (i.e., place of residence, household income, and the number of offspring). We found parental depression during adolescence to be significantly associated with worse cognitive performance in subsequent years, in both crude and adjusted models. For example, in the crude models, adolescents whose mothers had depression symptoms in 2012 scored 1.0 point lower (95% confidence interval [CI]: −1.2 to −0.8, p < 0.001) in mathematics in 2014 compared to those whose mothers did not have depression symptoms; after covariate adjustment, this difference marginally reduced to 0.8 points (95% CI: −1.0 to −0.5, p < 0.001); the associations remained robust after further adjusting for offspring earlier cognitive ability in toddlerhood (−1.2, 95% CI: −1.6, −0.9, p < 0.001), offspring cognitive ability in 2012 (−0.6, 95% CI: −0.8, −0.3, p < 0.001), offspring depression status (−0.7, 95% CI: −1.0, −0.5, p < 0.001), and parents’ cognitive ability (−0.8, 95% CI: −1.2, −0.3, p < 0.001). In line with the neuroplasticity theory, we observed stronger associations between maternal depression and mathematical/vocabulary scores among the younger adolescents (i.e., 10–11 years) than the older ones (i.e., 12–15 years). For example, the association between maternal depression and 2014 vocabulary scores was estimated to be −2.1 (95% CI: −2.6, −1.6, p < 0.001) in those aged 10–11 years, compared to −1.2 (95% CI: −1.6, −0.8, p < 0.001) in those aged 12–15 years with a difference of 0.9 (95% CI: 0.2, 1.6, p = 0.010). We also observed a stronger association of greater depression severity with worse mathematical scores. The primary limitations of this study were the relatively high attrition rate and residual confounding.ConclusionsIn this study, we observed that parental depression during adolescence was associated with adverse offspring cognitive development assessed up to 6 years later. These findings highlight the intergenerational association between depression in parents and cognitive development across the early life course into adolescence.
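The crude and adjusted estimates described here are linear regressions of a later cognitive score on an indicator of parental depression, with covariates added stepwise. A minimal sketch with simulated data and hypothetical column names:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2281
df = pd.DataFrame({
    "maternal_depression": rng.integers(0, 2, n),     # CES-D above cut-off in 2012
    "age": rng.integers(10, 16, n),
    "sex": rng.choice(["female", "male"], n),
    "household_income": rng.lognormal(10, 0.5, n),
})
# hypothetical 2014 maths score, set ~1 point lower if the mother had depression symptoms
df["math_2014"] = 15 + 0.8 * df["age"] - 1.0 * df["maternal_depression"] + rng.normal(0, 4, n)

crude = smf.ols("math_2014 ~ maternal_depression", data=df).fit()
adjusted = smf.ols("math_2014 ~ maternal_depression + age + C(sex) + np.log(household_income)",
                   data=df).fit()
print(crude.params["maternal_depression"], adjusted.params["maternal_depression"])
```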

In this cohort study, Zhihui Li and colleagues explore associations between parental depression and offspring cognitive development up to six years later.

17.
BackgroundDevelopment of an effective antiviral drug for Coronavirus Disease 2019 (COVID-19) is a global health priority. Although several candidate drugs have been identified through in vitro and in vivo models, consistent and compelling evidence from clinical studies is limited. The lack of evidence from clinical trials may stem in part from the imperfect design of the trials. We investigated how clinical trials for antivirals need to be designed, especially focusing on the sample size in randomized controlled trials.Methods and findingsA modeling study was conducted to help understand the reasons behind inconsistent clinical trial findings and to design better clinical trials. We first analyzed longitudinal viral load data for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) without antiviral treatment by use of a within-host virus dynamics model. The fitted viral load was categorized into 3 different groups by a clustering approach. Comparison of the estimated parameters showed that the 3 distinct groups were characterized by different virus decay rates (p-value < 0.001). The mean decay rates were 1.17 d−1 (95% CI: 1.06 to 1.27 d−1), 0.777 d−1 (0.716 to 0.838 d−1), and 0.450 d−1 (0.378 to 0.522 d−1) for the 3 groups, respectively. Such heterogeneity in virus dynamics could be a confounding variable if it is associated with treatment allocation in compassionate use programs (i.e., observational studies).Subsequently, we mimicked randomized controlled trials of antivirals by simulation. An antiviral effect causing a 95% to 99% reduction in viral replication was added to the model. To be realistic, we assumed that randomization and treatment are initiated with some time lag after symptom onset. Using the duration of virus shedding as an outcome, the sample size to detect a statistically significant mean difference between the treatment and placebo groups (1:1 allocation) was 13,603 and 11,670 (when the antiviral effect was 95% and 99%, respectively) per group if all patients are enrolled regardless of timing of randomization. The sample size was reduced to 584 and 458 (when the antiviral effect was 95% and 99%, respectively) if only patients who are treated within 1 day of symptom onset are enrolled. We confirmed the sample size was similarly reduced when using cumulative viral load in log scale as an outcome.We used a conventional virus dynamics model, which may not fully reflect the detailed mechanisms of viral dynamics of SARS-CoV-2. The model needs to be calibrated in terms of both parameter settings and model structure, which would yield more reliable sample size calculations.ConclusionsIn this study, we found that estimated associations in observational studies can be biased due to large heterogeneity in viral dynamics among infected individuals, and a statistically significant effect in randomized controlled trials may be difficult to detect due to small sample sizes. The sample size can be dramatically reduced by recruiting patients immediately after developing symptoms. We believe this is the first study to investigate the design of clinical trials for antiviral treatment using a viral dynamics model.
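The per-group sample sizes above come from simulation, but the underlying logic matches the standard two-sample formula n = 2(z_{1−α/2} + z_{1−β})²σ²/Δ², where Δ is the mean between-group difference in shedding duration and σ its standard deviation; enrolling patients soon after symptom onset enlarges Δ and therefore shrinks n. A small sketch with placeholder values (not the paper's simulated parameters):

```python
from scipy.stats import norm

def n_per_group(delta, sd, alpha=0.05, power=0.8):
    """Per-group sample size to detect a mean difference (two-sided test, 1:1 allocation)."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return 2 * (z_alpha + z_beta) ** 2 * sd ** 2 / delta ** 2

# placeholder values: a small shortening of shedding when enrolment timing is ignored...
print(round(n_per_group(delta=0.5, sd=4.0)))   # -> large trial
# ...versus a larger shortening when only early-treated patients are enrolled
print(round(n_per_group(delta=2.5, sd=4.0)))   # -> much smaller trial
```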

Using a viral dynamics model, Shingo Iwami and colleagues investigate the sample sizes required to detect significant antiviral drug effects on COVID-19 in randomized controlled trials.

18.
《Endocrine practice》2014,20(11):1201-1213
ObjectiveThis review provides a comprehensive overview of the most recent findings from the Women’s Health Initiative (WHI) hormone therapy (HT) trials and highlights the role of age and other clinical risk factors in risk stratification.MethodsWe review the findings on cardiovascular disease, cancer outcomes, all-cause mortality, and other major endpoints in the two WHI HT trials (conjugated equine estrogens [CEEs, 0.625 mg/day] with or without medroxyprogesterone acetate [MPA, 2.5 mg/day]).ResultsThe hazard ratio (HR) for coronary heart disease (CHD) was 1.18 (95% confidence interval [CI], 0.95 to 1.45) in the CEE + MPA trial and 0.94 (95% CI, 0.78 to 1.14) in the CEE-alone trial. In both HT trials, there was an increased risk of stroke and deep vein thrombosis and a lower risk of hip fractures and diabetes. The HT regimens had divergent effects on breast cancer. CEE + MPA increased breast cancer risk (cumulative HR, 1.28; 95% CI, 1.11 to 1.48), whereas CEE alone had a protective effect (cumulative HR, 0.79; 95% CI, 0.65 to 0.97). The absolute risks of HT were low in younger women (ages 50 to 59 years) and those who were within 10 years of menopause onset. Furthermore, for CHD, the risks were elevated for women with metabolic syndrome or high low-density lipoprotein cholesterol concentrations but not in women without these risk factors. Factor V Leiden genotype was associated with elevated risk of venous thromboembolism on HT.ConclusionHT has a complex pattern of benefits and risks. Women in early menopause have low absolute risks of chronic disease outcomes on HT. Use of HT for management of menopausal symptoms remains appropriate, and risk stratification will help to identify women in whom benefits would be expected to outweigh risks. (Endocr Pract. 2014;20:1201-1213)

19.
BackgroundCatheter radiofrequency (RF) ablation for cardiac arrhythmias is a painful procedure. Prior work using functional near-infrared spectroscopy (fNIRS) in patients under general anesthesia has indicated that ablation results in activity in pain-related cortical regions, presumably due to inadequate blockade of afferent nociceptors originating within the cardiac system. Having an objective brain-based measure for nociception and analgesia may in the future allow for enhanced analgesic control during surgical procedures. Hence, the primary aim of this study is to demonstrate that the administration of remifentanil, an opioid widely used during surgery, can attenuate the fNIRS cortical responses to cardiac ablation.Methods and findingsWe investigated the effects of continuous remifentanil on cortical hemodynamics during cardiac ablation under anesthesia. In a randomized, double-blinded, placebo (PL)-controlled trial, we examined 32 pediatric patients (mean age of 15.8 years, 16 females) undergoing catheter ablation for cardiac arrhythmias at the Cardiology Department of Boston Children’s Hospital from October 2016 to March 2020; 9 received 0.9% NaCl, 12 received low-dose (LD) remifentanil (0.25 mcg/kg/min), and 11 received high-dose (HD) remifentanil (0.5 mcg/kg/min). The hemodynamic changes of primary somatosensory and prefrontal cortices were recorded during surgery using a continuous wave fNIRS system. The primary outcome measures were the changes in oxyhemoglobin concentration (NadirHbO, i.e., lowest oxyhemoglobin concentration and PeakHbO, i.e., peak change and area under the curve) of medial frontopolar cortex (mFPC), lateral prefrontal cortex (lPFC) and primary somatosensory cortex (S1) to ablation in PL versus remifentanil groups. Secondary measures included the fNIRS response to an auditory control condition. The data analysis was performed on an intention-to-treat (ITT) basis. The remifentanil group (dosage subgroups combined) was compared with PL, and a post hoc analysis was performed to identify dose effects. There were no adverse events. The groups were comparable in age, sex, and number of ablations. Results comparing remifentanil versus PL showed that the PL group exhibited greater NadirHbO in inferior mFPC (mean difference (MD) = 1.229, 95% confidence interval [CI] = 0.334, 2.124, p < 0.001) and superior mFPC (MD = 1.206, 95% CI = 0.303, 2.109, p = 0.001) and greater PeakHbO in inferior mFPC (MD = −1.138, 95% CI = −2.062, −0.214, p = 0.002) and superior mFPC (MD = −0.999, 95% CI = −1.961, −0.036, p = 0.008) in response to ablation. S1 activation from ablation was greatest in the PL, then LD, and HD groups, but failed to reach significance, whereas lPFC activation to ablation was similar in all groups. Ablation versus auditory stimuli resulted in higher PeakHbO in inferior mFPC (MD = 0.053, 95% CI = 0.004, 0.101, p = 0.004) and superior mFPC (MD = 0.052, 95% CI = 0.013, 0.091, p < 0.001) and higher NadirHbO in posterior superior S1 (Pos. SS1; MD = −0.342, 95% CI = −0.680, −0.004, p = 0.007) during ablation of all patients. The remifentanil group had smaller NadirHbO in inferior mFPC (MD = 0.098, 95% CI = 0.009, 0.130, p = 0.003) and superior mFPC (MD = 0.096, 95% CI = 0.008, 0.116, p = 0.003) and smaller PeakHbO in superior mFPC (MD = −0.092, 95% CI = −0.680, −0.004, p = 0.007) during both stimuli. 
Study limitations were small sample size, motion from surgery, indirect measure of nociception, and shallow penetration depth of fNIRS only allowing access to superficial cortical layers.ConclusionsWe observed cortical activity related to nociception during cardiac ablation under general anesthesia with remifentanil. It highlights the potential of fNIRS to provide an objective pain measure in unconscious patients, where cortical-based measures may be more accurate than current evaluation methods. Future research may expand on this application to produce a real-time indication of pain that will aid clinicians in providing immediate and adequate pain treatment.Trial registrationClinicalTrials.gov NCT02703090
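The outcome measures (NadirHbO, PeakHbO, and the area under the HbO curve) can be read off an event-locked, baseline-corrected oxyhemoglobin trace. A simplified sketch with a synthetic trace; the window length, sampling rate, and dip/overshoot shape are made up for illustration.

```python
import numpy as np

def hbo_response_metrics(hbo, fs, window_s=(0.0, 20.0)):
    """Nadir, peak change, and AUC of an event-locked, baseline-corrected HbO trace.

    hbo: oxyhemoglobin concentration samples (e.g., micromolar); fs: sampling rate (Hz).
    """
    start, stop = (int(w * fs) for w in window_s)
    segment = np.asarray(hbo, dtype=float)[start:stop]
    return {
        "NadirHbO": segment.min(),      # lowest HbO after the event
        "PeakHbO": segment.max(),       # peak HbO change
        "AUC": segment.sum() / fs,      # area under the curve (simple Riemann sum)
    }

# synthetic 20-s post-ablation trace at 10 Hz: brief dip followed by an overshoot
fs = 10
t = np.arange(0, 20, 1 / fs)
hbo = -0.4 * np.exp(-((t - 4) ** 2) / 4) + 0.6 * np.exp(-((t - 12) ** 2) / 9)
print(hbo_response_metrics(hbo, fs))
```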

In a randomized controlled trial, Keerthana Karunakaran, Barry Kussman, and colleagues study whether remifentanil attenuates pain-related brain activity during cardiac ablation surgery.

20.
BackgroundBone morphogenetic proteins (BMPs) have been increasingly widely used as a substitute for iliac crest bone graft (ICBG) in lumbar fusion. The purpose of this study was to systematically compare the effectiveness and safety of fusion with BMPs for the treatment of lumbar disease.MethodsCochrane review methods were used to analyze all relevant randomized controlled trials (RCTs) published up to November 2013.Results19 RCTs (1,852 patients) met the inclusion criteria. The BMP group had a significantly increased fusion rate (RR: 1.13; 95% CI 1.05–1.23, P = 0.001), while there was no statistical difference in the overall success of clinical outcomes (RR: 1.04; 95% CI 0.95–1.13, P = 0.38) or complications (RR: 0.96; 95% CI 0.85–1.09, p = 0.54). A significant reduction in the reoperation rate was found in the BMP group (RR: 0.57; 95% CI 0.42–0.77, p = 0.0002). A significant difference was found in operating time (MD −0.32; 95% CI −0.55, −0.08; P = 0.009), but no significant difference was found in blood loss, hospital stay, patient satisfaction, or work status.ConclusionCompared with ICBG, BMPs in lumbar fusion can increase the fusion rate while reducing the reoperation rate and operating time. However, they do not increase the complication rate, the amount of blood loss, or the length of hospital stay. No significant difference was found in the overall success of clinical outcomes between the two groups.
