Similar Articles
20 similar articles found (search time: 46 ms)
1.

Background

Previous studies have shown that the time of day (TD) of glucose measurement and the fasting duration (FD) influence glucose levels in adults. Few studies have examined the effects of the TD and FD on the glucose level following a 1-hour, 50-gram glucose challenge test (GCT) in pregnant women when screening for or diagnosing gestational diabetes mellitus (GDM). The objective of this study was to investigate the influence of the TD (morning, afternoon, night) and the FD (time since the last food ingestion: ≤1 hour, 1–2 hours, and >2 hours) by examining their combined effects on glucose levels following a 50-gram GCT in pregnant women.

Methods and Results

We analyzed the data of 1,454 non-diabetic pregnant Taiwanese women in a prospective study. Multiple linear regression and multiple logistic regression were used to estimate the relationships between the 9 TD-FD groups and the continuous and binary glucose levels (cut-off at 140 mg/dL) following a 50-gram GCT, after adjusting for maternal age, nulliparity, pre-pregnancy body mass index, and weight gain. Different TD and FD groups were associated with variable glucose responses to the 50-gram GCT, some of which were significant. The estimated coefficients (β) of the TD-FD groups “night, ≤1 hr” and “night, 1–2 hr” revealed significantly lower glucose concentrations [β (95% confidence interval [CI]): −6.46 (−12.53, −0.38) and −6.85 (−12.50, −1.20)] compared with the “morning, >2 hr” group. The TD-FD groups “afternoon, ≤1 hr” and “afternoon, 1–2 hr” showed significantly lower odds ratios (OR) of a positive GCT; the adjusted ORs (95% CI) were 0.54 (0.31–0.95) and 0.58 (0.35–0.96), respectively.

Conclusions

Our findings demonstrate the importance of standardizing the TD and FD for the 1-hour, 50-gram GCT. In screening for and diagnosing GDM, the TD and FD are modifiable factors that should be considered in clinical practice and epidemiological studies.

2.
Arousal has long been known to influence behavior and serves as an underlying component of cognition and consciousness. However, the consequences of hyper-arousal for visual perception remain unclear. The present study evaluates the impact of hyper-arousal on two aspects of visual sensitivity: visual stereoacuity and contrast thresholds. Sixty-eight individuals participated across two experiments. Thirty-four participants were randomly divided into two groups in each experiment: Arousal Stimulation or Sham Control. The Arousal Stimulation group underwent a 50-second cold pressor stimulation (immersing the foot in 0–2 °C water), a technique known to increase arousal. In contrast, the Sham Control group immersed their foot in room-temperature water. Stereoacuity thresholds (Experiment 1) and contrast thresholds (Experiment 2) were measured before and after stimulation. The Arousal Stimulation groups demonstrated significantly lower stereoacuity and contrast thresholds following cold pressor stimulation, whereas the Sham Control groups showed no difference in thresholds. These results provide the first evidence that hyper-arousal from sensory stimulation can lower visual thresholds. Hyper-arousal's ability to decrease visual thresholds has important implications for survival, sports, and everyday life.

3.
This study examined the relationship between alcohol consumption and health-related quality of life (HRQOL) in a nationally representative sample of middle-aged to older South Koreans. Data collected from 3,408 men and 3,361 women aged ≥ 40 years were obtained from the 2010 and 2011 Korea National Health and Nutrition Examination Survey. Based on the World Health Organization guidelines, the participants were categorized into zones I (0–7), II (8–15), III (16–19), or IV (20–40) according to their Alcohol Use Disorders Identification Test (AUDIT) scores, with a higher zone indicating a higher level of alcohol consumption. Data collected from the AUDIT and EuroQol 5-Dimension (EQ-5D) test were subjected to multiple regression analysis in order to examine the relationship between alcohol consumption patterns and health-related quality of life, and to identify between-sex and between-zone differences. Significant between-sex differences were found for the mean total AUDIT and EQ-5D scores and the proportion of participants rating their pain/discomfort and impairment in mobility and usual activities as “moderate” or “severe” (p < 0.001). The analysis of the EQ-5D scores by alcohol consumption pattern and sex suggested the existence of an inverted U-shaped relationship between the total AUDIT and EQ-5D scores. The HRQOL of moderate alcohol drinkers was higher than that of non-drinkers and heavy drinkers. The results of this study will be valuable in designing appropriate interventions to increase the HRQOL impaired by the harmful use of alcohol, in comparing HRQOL among different countries, and in implementing alcohol-related health projects.

4.
Observational and intervention studies have revealed inconsistent findings with respect to the relationship between vitamin D and insulin resistance. No intervention studies have been conducted in community samples, even though such samples may be particularly relevant to the primary prevention of type 2 diabetes (T2D) and cardiovascular disease (CVD). In the present study we examined whether temporal improvements in vitamin D status, measured as serum 25-hydroxyvitamin D [25(OH)D], reduce the risk of insulin resistance among individuals without T2D. We accessed and analyzed data from 5730 nondiabetic participants with repeated measures of serum 25(OH)D who enrolled in a preventive health program. We used the homeostatic model assessment for insulin resistance (HOMA-IR) and applied logistic regression to quantify the independent contribution of baseline serum 25(OH)D and temporal increases in 25(OH)D on HOMA-IR. The median time between baseline and follow up was 1.1 years. On average, serum 25(OH)D concentrations increased from 89 nanomoles per liter (nmol/L) at baseline to 122 nmol/L at follow up. Univariate analyses showed that relative to participants with baseline serum 25(OH)D less than 50 nmol/L, participants with baseline concentrations of “50-<75”, “75-<100”, “100-<125”, and ≥125 nmol/L were 0.76 (95% confidence intervals: 0.61–0.95), 0.54 (0.43–0.69), 0.48 (0.36–0.64) and 0.36 (0.27–0.49) times as likely to have insulin resistance at follow up, respectively. More importantly, relative to participants without temporal increases in 25(OH)D, those with increases in serum 25(OH)D of “<25”, “25-<50”, “50-<75”, “≥75” nmol/L were 0.92 (0.72–1.17), 0.86 (0.65–1.13), 0.66 (0.47–0.93), and 0.74 (0.55–0.99) times as likely to have insulin resistance at follow up, respectively. In the subgroup of participants without insulin resistance at baseline, the corresponding estimates were 0.96 (0.72–1.27), 0.78 (0.56–1.10), 0.66 (0.44–0.99), and 0.67 (0.48–0.94), respectively.
These observations suggest that improvements in vitamin D status reduce the risk for insulin resistance and thereby may contribute to the primary prevention of T2D and CVD.
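The HOMA-IR index used in this study follows a standard published formula: HOMA-IR = fasting glucose (mmol/L) × fasting insulin (µU/mL) / 22.5. A minimal sketch, using invented example values rather than study data:

```python
def homa_ir(glucose_mmol_l: float, insulin_uu_ml: float) -> float:
    """Homeostatic model assessment for insulin resistance (standard formula).

    glucose_mmol_l: fasting plasma glucose in mmol/L
    insulin_uu_ml: fasting insulin in microunits per mL
    """
    return glucose_mmol_l * insulin_uu_ml / 22.5

# Illustrative (hypothetical) values: glucose 5.5 mmol/L, insulin 10 uU/mL
print(round(homa_ir(5.5, 10.0), 2))  # 2.44
```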

5.
Background

HIV testing is the gateway to HIV prevention, treatment, and care. Despite the established vulnerability of young Thai people to HIV infection, studies examining the prevalence and correlates of HIV testing among the general population of Thai youth are still very limited. This study investigates socio-demographic, behavioral, and psychosocial factors associated with HIV testing among young Thai people enrolled in Non-formal Education Centers (NFEC) in urban Chiang Mai, Northern Thailand.

Methods

This was a cross-sectional quantitative study conducted among unmarried Thai youth, between the ages of 15 and 24, who were enrolled in NFEC in urban Chiang Mai. Multiple logistic regressions were used to identify correlates of “ever tested for HIV” among the sexually active participants.

Findings

Of the 295 sexually active participants, 27.3% reported “ever tested for HIV;” 65.4% “did not consistently use condom;” and 61.7% “had at least 2 lifetime partners.” We found that “self-efficacy” (AOR, 4.92; CI, 1.22–19.73); “perception that it is easy to find a location nearby to test for HIV” (AOR, 4.67; CI, 1.21–18.06); “having at least 2 lifetime sexual partners” (AOR, 2.05; CI, 1.09–3.85); and “ever been pregnant or made someone pregnant” (AOR, 4.06; CI, 2.69–9.15) were associated with increased odds of having ever been tested. Conversely, “fear of HIV test results” (AOR, 0.21; CI, 0.08–0.57) was associated with lower odds of ever having been tested for HIV.

Conclusion

A substantial proportion of Thai youth engages in risky sexual behaviors, yet reports low rates of ever having been tested for HIV. This highlights an urgent need to develop appropriate interventions based on the identified correlates of HIV testing, to enhance HIV testing, and to promote safer sexual behaviors among young Thai people, particularly those who are out of school.
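The adjusted odds ratios above come from multiple logistic regression. As a rough sketch of the underlying quantity, an unadjusted odds ratio with a Woolf-type 95% confidence interval can be computed from a 2×2 table; the counts below are hypothetical, not data from this study:

```python
import math

def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
    """Unadjusted odds ratio and Woolf 95% CI from a 2x2 table.

    a = exposed & tested, b = exposed & not tested,
    c = unexposed & tested, d = unexposed & not tested.
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts: OR comes out to about 4.53
print(odds_ratio_ci(40, 60, 25, 170))
```

Adjusted ORs like those reported would additionally condition on covariates in a fitted regression model; this sketch only shows the raw two-group comparison.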

6.
Several steering models in the visual science literature attempt to capture the visual strategies used in curve driving. Some are based on steering points on the future path (FP), others on tangent points (TP). It is, however, challenging to differentiate between the models’ predictions in real-world contexts. Analysis of optokinetic nystagmus (OKN) parameters is one useful measure, as the different strategies predict measurably different OKN patterns. Here, we directly test this prediction by asking drivers either to a) “drive as they normally would” or b) “look at the TP”. The design of the experiment is similar to a previous study by Kandil et al., but uses more sophisticated methods of eye-movement analysis. We find that the eye-movement patterns in the “normal” condition are indeed markedly different from those in the “TP” condition, and consistent with drivers looking at waypoints on the future path. This is the case for the overall fixation distribution as well as the more informative fixation-by-fixation analysis of OKN. We find that the horizontal gaze speed during OKN corresponds well to the quantitative prediction of the future-path models. The results also definitively rule out the alternative explanation that the OKN is produced by an involuntary reflex even while the driver is “trying” to look at the TP. The results are discussed in terms of the sequential organization of curve driving.

7.
Assessment of visual acuity is a well-standardized procedure, at least for expert opinions and clinical trials. It is often recommended not to give patients feedback on the correctness of their responses. As this viewpoint has not been quantitatively examined so far, we assessed possible effects of feedback on visual acuity testing. In 40 normal participants we presented Landolt Cs in 8 orientations using the automated Freiburg Acuity Test (FrACT, michaelbach.de/fract). Over a run comprising 24 trials, the acuity threshold was measured with an adaptive staircase procedure. In an ABCDDCBA scheme, trial-by-trial feedback was provided in 2 × 4 conditions: (A) no feedback, (B) acoustic signals indicating correctness, (C) visual indication of the correct orientation, and (D) a combination of (B) and (C). After each run the participants judged comfort. Main outcome measures were absolute visual acuity (logMAR), its test-retest agreement (limits of agreement), and participants’ comfort estimates on a 5-step symmetric Likert scale. Feedback influenced the acuity outcome significantly (p = 0.02), but with a tiny effect size: 0.02 logMAR poorer acuity for (D) compared to (A), with even weaker effects for (B) and (C). Test-retest agreement was high (limits of agreement: ±1.0 lines) and did not depend on feedback (p>0.5). The comfort ratings clearly differed, by 2 steps on the Likert scale: condition (A), no feedback, was on average “slightly uncomfortable”, while the other three conditions were “slightly comfortable” (p<0.0001). Feedback affected neither reproducibility nor the acuity outcome to any relevant extent. The participants, however, reported markedly greater comfort with any kind of feedback. We conclude that systematic feedback (as implemented in FrACT) offers nothing but advantages for routine use.

8.
We present, to our knowledge, the first demonstration that a non-invasive brain-to-brain interface (BBI) can be used to allow one human to guess what is on the mind of another human through an interactive question-and-answering paradigm similar to the “20 Questions” game. As in previous non-invasive BBI studies in humans, our interface uses electroencephalography (EEG) to detect specific patterns of brain activity from one participant (the “respondent”), and transcranial magnetic stimulation (TMS) to deliver functionally-relevant information to the brain of a second participant (the “inquirer”). Our results extend previous BBI research by (1) using stimulation of the visual cortex to convey visual stimuli that are privately experienced and consciously perceived by the inquirer; (2) exploiting real-time rather than off-line communication of information from one brain to another; and (3) employing an interactive task, in which the inquirer and respondent must exchange information bi-directionally to collaboratively solve the task. The results demonstrate that using the BBI, ten participants (five inquirer-respondent pairs) can successfully identify a “mystery item” using a true/false question-answering protocol similar to the “20 Questions” game, with high levels of accuracy that are significantly greater than a control condition in which participants were connected through a sham BBI.

9.
Humans need to solve computationally intractable problems such as visual search, categorization, and simultaneous learning and acting, yet an increasing body of evidence suggests that their solutions to instantiations of these problems are near optimal. Computational complexity offers an explanation for this apparent paradox: (1) only a small portion of instances of such problems are actually hard, and (2) successful heuristics exploit structural properties of the typical instance to selectively improve parts that are likely to be sub-optimal. We hypothesize that these two ideas largely account for the good performance of humans on computationally hard problems. We tested part of this hypothesis by studying the solutions of 28 participants to 28 instances of the Euclidean Traveling Salesman Problem (TSP). Participants were provided feedback on the cost of their solutions and were allowed unlimited solution attempts (trials). We found a significant improvement between the first and last trials, and that solutions are significantly different from random tours that follow the convex hull and do not have self-crossings. More importantly, we found that participants modified their current best solutions in such a way that edges belonging to the optimal solution (“good” edges) were significantly more likely to stay than other edges (“bad” edges), a hallmark of structural exploitation. We found, however, that more trials harmed the participants' ability to tell good from bad edges, suggesting that after too many trials the participants “ran out of ideas.” In sum, we provide the first demonstration of significant performance improvement on the TSP under repetition and feedback, and evidence that human problem-solving may exploit the structure of hard problems, paralleling the behavior of state-of-the-art heuristics.

10.
We examined the course of repetitive behavior and restricted interests (RBRI) in children with and without Down syndrome (DS) over a two-year period. Forty-two typically developing children and 43 persons with DS represented two mental age (MA) levels: “younger” (2–4 years) and “older” (5–11 years). For typically developing younger children, some aspects of RBRI increased from Time 1 to Time 2. In older children, these aspects remained stable or decreased over the two-year period. For participants with DS, RBRI remained stable or increased over time. Time 1 RBRI predicted Time 2 adaptive behavior (measured by the Vineland Scales) in typically developing children, whereas for participants with DS, Time 1 RBRI predicted poor adaptive outcome (Child Behavior Checklist) at Time 2. The results add to the body of literature examining the adaptive and maladaptive nature of repetitive behavior.

11.
We present a novel “Gaze-Replay” paradigm that allows the experimenter to directly test how particular patterns of visual input—generated from people’s actual gaze patterns—influence the interpretation of the visual scene. Although this paradigm can potentially be applied across domains, here we applied it specifically to social comprehension. Participants viewed complex, dynamic scenes through a small window displaying only the foveal gaze pattern of a gaze “donor.” This was intended to simulate the donor’s visual selection, such that a participant could effectively view scenes “through the eyes” of another person. Throughout the presentation of scenes presented in this manner, participants completed a social comprehension task, assessing their abilities to recognize complex emotions. The primary aim of the study was to assess the viability of this novel approach by examining whether these Gaze-Replay windowed stimuli contain sufficient and meaningful social information for the viewer to complete this social perceptual and cognitive task. The results of the study suggested this to be the case; participants performed better in the Gaze-Replay condition compared to a temporally disrupted control condition, and compared to when they were provided with no visual input. This approach has great future potential for the exploration of experimental questions aiming to unpack the relationship between visual selection, perception, and cognition.

12.
The most common lethal accidents in General Aviation are caused by improperly executed landing approaches in which a pilot descends below the minimum safe altitude without proper visual references. To understand how expertise might reduce such erroneous decision-making, we examined relevant neural processes in pilots performing a simulated landing approach inside a functional MRI scanner. Pilots (aged 20–66) were asked to “fly” a series of simulated “cockpit view” instrument landing scenarios in an MRI scanner. The scenarios were either high risk (heavy fog; legally unsafe to land) or low risk (medium fog; legally safe to land). Pilots with one of two levels of expertise participated: Moderate Expertise (Instrument Flight Rules pilots, n = 8) or High Expertise (Certified Instrument Flight Instructors or Air-Transport Pilots, n = 12). High Expertise pilots were more accurate than Moderate Expertise pilots in making a “land” versus “do not land” decision (CFII: d′ = 3.62±2.52; IFR: d′ = 0.98±1.04; p<.01). Brain activity in the bilateral caudate nucleus was examined for main effects of expertise during a “land” versus “do not land” decision, with the no-decision control condition modeled as baseline. In making landing decisions, High Expertise pilots showed lower activation in the bilateral caudate nucleus (0.97±0.80) compared to Moderate Expertise pilots (1.91±1.16) (p<.05). These findings provide evidence for increased “neural efficiency” in High Expertise pilots relative to Moderate Expertise pilots. During an instrument approach the pilot is engaged in detailed examination of flight instruments while monitoring certain visual references for making landing decisions. The caudate nucleus, the brain area where the “expertise” effect was observed, regulates saccadic control of gaze. These data provide evidence that performing “real world” aviation tasks in an fMRI scanner provides objective data regarding the relative expertise of pilots and the brain regions involved.
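The d′ values reported for the land versus do-not-land decisions follow standard signal detection theory, where d′ = z(hit rate) − z(false-alarm rate) under an equal-variance Gaussian model. A minimal sketch, with invented rates rather than the study's data:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity index: z(HR) - z(FAR).

    Rates must lie strictly between 0 and 1 (a correction such as
    1/(2N) is typically applied to perfect scores in practice).
    """
    z = NormalDist().inv_cdf  # inverse standard-normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical example: 95% hits, 15% false alarms
print(round(d_prime(0.95, 0.15), 2))  # 2.68
```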

13.

Background

Clarity of the transplanted tissue and restoration of visual acuity are the two primary metrics for evaluating the success of corneal transplantation. Participation of the transplanted eye in habitual binocular viewing is seldom evaluated post-operatively. In unilateral corneal disease, the transplanted eye may remain functionally inactive during binocular viewing due to its suboptimal visual acuity and poor image quality, vis-à-vis the healthy fellow eye.

Methods and Findings

This study prospectively quantified the contribution of the transplanted eye towards habitual binocular viewing in 25 cases with unilateral transplants [40 yrs (IQR: 32–42 yrs)] and 25 age-matched controls [30 yrs (IQR: 25–37 yrs)]. Binocular functions including visual field extent, high-contrast logMAR acuity, suppression threshold, and stereoacuity were assessed using standard psychophysical paradigms. Optical quality of all eyes was determined from wavefront aberrometry measurements. The binocular visual field expanded by a median 21% (IQR: 18–29%) compared to the monocular field in both cases and controls (p = 0.63). Binocular logMAR acuity [0.0 (0.0–0.0)] almost always followed the fellow eye’s acuity [0.00 (0.00 to −0.02)] (r = 0.82), independent of the transplanted eye’s acuity [0.34 (0.2–0.5)] (r = 0.04). Suppression threshold and stereoacuity were poorer in cases [30.1% (13.5–44.3%); 620.8 arc sec (370.3–988.2 arc sec)] than in controls [79% (63.5–100%); 16.3 arc sec (10.6–25.5 arc sec)] (p<0.001). Higher-order wavefront aberrations of the transplanted eye [0.34 μm (0.21–0.51 μm)] were greater than those of the fellow eye [0.07 μm (0.05–0.11 μm)] (p<0.001), and their reduction with RGP contact lenses [0.09 μm (0.08–0.12 μm)] significantly improved the suppression threshold [65% (50–72%)] and stereoacuity [56.6 arc sec (47.7–181.6 arc sec)] (p<0.001).

Conclusions

In unilateral corneal disease, the transplanted eye does participate in gross binocular viewing but offers limited support to fine levels of binocularity. Improvement in the transplanted eye’s optics enhances its participation in binocular viewing. Current metrics of treatment success could be expanded to include measures of binocularity to assess the functional benefit of transplantation in unilateral corneal disease.
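The logMAR notation used in this and other abstracts is the base-10 logarithm of the minimum angle of resolution (MAR) in arc minutes, so Snellen 20/20 corresponds to logMAR 0.0 and 20/40 to roughly 0.3. A small sketch of the conversion; the Snellen fractions below are just examples:

```python
import math

def snellen_to_logmar(numerator: float, denominator: float) -> float:
    """Convert a Snellen fraction (e.g. 20/40) to logMAR.

    MAR in arc minutes is denominator / numerator for a Snellen
    fraction referenced to the standard 1-arc-minute letter detail.
    """
    mar_arcmin = denominator / numerator
    return math.log10(mar_arcmin)

print(round(snellen_to_logmar(20, 20), 2))  # 0.0
print(round(snellen_to_logmar(20, 40), 2))  # 0.3
```

Lower (or negative) logMAR means better acuity, which is why the binocular acuities above cluster near 0.0 while the transplanted eyes averaged 0.34.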

14.

Introduction

Respondent-driven sampling (RDS) offers a recruitment strategy for hard-to-reach populations. However, RDS faces logistical and theoretical challenges that threaten efficiency and validity in settings worldwide. We present innovative adaptations to conventional RDS to overcome barriers encountered in recruiting a large, representative sample of men who have sex with men (MSM) who travel internationally.

Methods

Novel methodological adaptations for the “International Travel Research to Inform Prevention” or “I-TRIP” study were offering participants a choice between electronic and paper coupon referrals for recruitment, and modifying the secondary incentive structure from small cash amounts to raffle entries for periodic large cash prize drawings. Staged referral limit increases from 3 to 10 referrals and the progressive addition of 70 seeds were also implemented.

Results

There were 501 participants enrolled in up to 13 waves of growth. Among participants with a choice of referral methods, 81% selected electronic referrals. Of participants who were recruited electronically, 90% chose to remain with electronic referrals when it was their turn to recruit. The mean number of enrolled referrals was 0.91 for electronic referrals compared to 0.56 for paper coupons. Median referral lag time, i.e., the time interval between when recruiters were given their referrals and when a referred individual enrolled in the study, was 20 days (IQR 10–40) for electronic referrals, 20 days (IQR 8–58) for paper coupons, 20 days (IQR 10–41) for raffle entries and 33 days (IQR 16–148) for small cash incentives.

Conclusions

The recruitment of MSM who travel internationally required maximizing the known flexible tools of RDS while also necessitating innovations to increase recruitment efficiency. Electronic referrals emerged as a major advantage in recruiting this hard-to-reach population, which is of high socio-economic status, geographically diffuse, and highly mobile. These enhancements may improve the performance of RDS in target populations with similar characteristics.

15.

Context

Depression is associated with increased mortality, but it is unclear if this relationship is dose-dependent and if it can be modified by treatment with antidepressants.

Objective

To determine if (1) the association between depression and mortality is independent of other common potential causes of death in later life, (2) there is a dose-response relationship between increasing severity of depression and mortality rates, and (3) the use of antidepressant drugs reduces mortality rates.

Methods

Cohort study of 5,276 community-dwelling men aged 68–88 years living in Perth, Australia. We used the 15-item Geriatric Depression Scale (GDS-15) to ascertain the presence and severity of depression, with GDS-15 ≥ 7 indicating clinically significant depression. Men were also grouped according to the severity of symptoms: “no symptoms” (GDS-15 = 0), “questionable” (1 ≤ GDS-15 ≤ 4), “mild to moderate” (5 ≤ GDS-15 ≤ 9), and “severe” (GDS-15 ≥ 10). Participants listed all medications used regularly. We used the Western Australian Data Linkage System to monitor mortality.

Results

There were 883 deaths between the study assessment and 30 June 2008 (mean follow-up: 6.0±1.1 years). The adjusted mortality hazard (MH) of men with clinically significant depression was 1.98 (95% CI: 1.61–2.43), and it increased with the severity of symptoms: 1.39 (95% CI: 1.13–1.71) for questionable, 2.71 (95% CI: 2.13–3.46) for mild/moderate, and 3.32 (95% CI: 2.31–4.78) for severe depression. The use of antidepressants increased the MH (HR = 1.31, 95% CI: 1.02–1.68). Compared with men who were not depressed and were not taking antidepressants, the MH increased from 1.22 (95% CI: 0.91–1.63) for men with no depression who were using antidepressants, to 1.85 (95% CI: 1.47–2.32) for participants who were depressed but not using antidepressants, and 2.97 (95% CI: 1.94–4.54) for those who were depressed and using antidepressants. All analyses were adjusted for age, educational attainment, migrant status, physical activity, smoking, alcohol use, and the Charlson comorbidity index.

Conclusions

The mortality associated with depression increases with the severity of depressive symptoms and is largely independent of comorbid conditions. The use of antidepressants does not reduce the mortality rates of older men with persistent symptoms of depression.

16.
Human performance on various visual tasks can be improved substantially via training. However, the enhancements are frequently specific to relatively low-level stimulus dimensions. While such specificity has often been thought to indicate a low-level neural locus of learning, recent research suggests that these same effects can be accounted for by changes in higher-level areas, in particular in the way higher-level areas read out information from lower-level areas in the service of highly practiced decisions. Here we contrast the degree of orientation transfer seen after training on two different tasks: vernier acuity and stereoacuity. Importantly, while the decision rule that could improve vernier acuity (i.e., a discriminant in the image plane) would not be transferable across orientations, the simplest rule that could be learned to solve the stereoacuity task (i.e., a discriminant in the depth plane) would be insensitive to changes in orientation. Thus, given a read-out hypothesis, more substantial transfer would be expected from stereoacuity than from vernier acuity training. To test this prediction, participants were trained (7,500 total trials) on either a stereoacuity (N = 9) or vernier acuity (N = 7) task with the stimuli in either a vertical or horizontal configuration (balanced across participants). Following training, transfer to the untrained orientation was assessed. As predicted, evidence for relatively orientation-specific learning was observed in vernier-trained participants, while no evidence of specificity was observed in stereo-trained participants. These results build upon the emerging view that perceptual learning (even very specific learning effects) may reflect changes in inferences made by high-level areas, rather than necessarily reflecting changes in the receptive field properties of low-level areas.

17.

Background

Tuberculosis (TB) is common among HIV-infected individuals in many resource-limited countries and has been associated with poor survival. We evaluated morbidity and mortality among individuals first starting antiretroviral therapy (ART) with concurrent active TB or other AIDS-defining disease using data from the “Prospective Evaluation of Antiretrovirals in Resource-Limited Settings” (PEARLS) study.

Methods

Participants were categorized retrospectively into three groups according to presence of active confirmed or presumptive disease at ART initiation: those with pulmonary and/or extrapulmonary TB (“TB” group), those with other non-TB AIDS-defining disease (“other disease”), or those without concurrent TB or other AIDS-defining disease (“no disease”). Primary outcome was time to the first of virologic failure, HIV disease progression or death. Since the groups differed in characteristics, proportional hazard models were used to compare the hazard of the primary outcome among study groups, adjusting for age, sex, country, screening CD4 count, baseline viral load and ART regimen.

Results

31 of 102 participants (30%) in the “TB” group, 11 of 56 (20%) in the “other disease” group, and 287 of 1413 (20%) in the “no disease” group experienced a primary outcome event (p = 0.042). This difference reflected higher mortality in the TB group: 15 (15%), 0 (0%) and 41 (3%) participants died, respectively (p<0.001). The adjusted hazard ratio comparing the “TB” and “no disease” groups was 1.39 (95% confidence interval: 0.93–2.10; p = 0.11) for the primary outcome and 3.41 (1.72–6.75; p<0.001) for death.

Conclusions

Active TB at ART initiation was associated with increased risk of mortality in HIV-1 infected patients.

18.
Neuroimaging has identified many correlates of emotion but has not yet yielded brain representations predictive of the intensity of emotional experiences in individuals. We used machine learning to identify a sensitive and specific signature of emotional responses to aversive images. This signature predicted the intensity of negative emotion in individual participants in cross-validation (n = 121) and test (n = 61) samples (high vs. low emotion: 93.5% accuracy). It was unresponsive to physical pain (emotion vs. pain: 92% discriminative accuracy), demonstrating that it is not a representation of generalized arousal or salience. The signature comprised mesoscale patterns spanning multiple cortical and subcortical systems, with no single system necessary or sufficient for predicting experience. Furthermore, it was not reducible to activity in traditional “emotion-related” regions (e.g., amygdala, insula) or resting-state networks (e.g., “salience,” “default mode”). Overall, this work identifies differentiable neural components of negative emotion and pain, providing a basis for new, brain-based taxonomies of affective processes.

19.
Why do people self-report an aversion to words like “moist”? The present studies represent an initial scientific exploration into the phenomenon of word aversion by investigating its prevalence and cause. Results of five experiments indicate that about 10–20% of the population is averse to the word “moist.” This population often speculates that phonological properties of the word are the cause of their displeasure. However, data from the current studies point to semantic features of the word, namely associations with disgusting bodily functions, as a more prominent source of people’s unpleasant experience. “Moist,” for averse participants, was notable for its valence and personal use, rather than imagery or arousal, a finding that was confirmed by an experiment designed to induce an aversion to the word. Analyses of individual-difference measures suggest that word aversion is more prevalent among younger, more educated, and more neurotic people, and is more commonly reported by females than males.

20.

Objective

To describe different end criteria for reaching maximal oxygen uptake (VO2max) during a continuous graded exercise test on the treadmill, and to explore how the choice of end criterion affects the magnitude of the VO2max result.

Methods

A sample of 861 individuals (390 women) aged 20–85 years performed an exercise test on a treadmill until exhaustion. Gas exchange, heart rate, blood lactate concentration, and Borg Scale (6–20) rating were measured, and the impact of different end criteria on VO2max was studied: VO2 leveling off, maximal heart rate (HRmax), different levels of respiratory exchange ratio (RER), and postexercise blood lactate concentration.

Results

Eight hundred and four healthy participants (93%) completed the exercise test to voluntary exhaustion. There were no sex-related differences in HRmax, RER, or Borg Scale rating, whereas blood lactate concentration was 18% lower in women (P<0.001). Forty-two percent of the participants achieved a plateau in VO2; these individuals had 5% higher ventilation (P = 0.033), 4% higher RER (P<0.001), and 5% higher blood lactate concentration (P = 0.047) compared with participants who did not reach a VO2 plateau. When using RER ≥1.15 or blood lactate concentration ≥8.0 mmol/L as the criterion, VO2max was 4% (P = 0.012) and 10% (P<0.001) greater, respectively. A blood lactate concentration ≥8.0 mmol/L excluded 63% of the participants in the 50–85-year-old cohort.
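The RER end criterion discussed above is the ratio of carbon dioxide output to oxygen uptake measured from gas exchange. A minimal sketch of applying the ≥1.15 threshold; the gas-exchange values below are made up for illustration, not measurements from the study:

```python
def rer(vco2_l_min: float, vo2_l_min: float) -> float:
    """Respiratory exchange ratio: VCO2 / VO2 (both in L/min)."""
    return vco2_l_min / vo2_l_min

def meets_rer_criterion(vco2_l_min: float, vo2_l_min: float,
                        threshold: float = 1.15) -> bool:
    """Check one commonly proposed VO2max end criterion: RER >= threshold."""
    return rer(vco2_l_min, vo2_l_min) >= threshold

# Hypothetical end-of-test values: VCO2 = 4.1 L/min, VO2 = 3.5 L/min
print(meets_rer_criterion(4.1, 3.5))  # True (RER ≈ 1.17)
```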

Conclusions

A range of typical end criteria is presented for a random sample of subjects aged 20–85 years. The choice of end criteria has an impact on the number of participants retained as well as on the VO2max outcome. Suggestions for new recommendations are given.
