Similar Documents

20 similar documents found.
1.

Purpose

Prostate imaging requires optimization in young and old mouse models. We tested which MR sequences and field strengths best depict the prostate gland in young and old mice, and whether prostate MR signal, size, and architecture change with age.

Technique

Magnetic resonance imaging (MRI) of the prostate was performed in young (2 months) and old (18 months) male nude mice (n = 6) at 4.7 and 7 T, and in SCID mice (n = 6) at 7 T, using T1, fat-suppressed T1, DWI, T2, fat-suppressed T2, and T2-based and proton-density-based Dixon “water only” sequences. Images were ranked for best overall sequence for prostate visualization, prostate delineation, and quality of fat suppression. Prostate volume and signal characteristics were compared, and histology was performed.

Results

T2-based Dixon “water only” images ranked best overall for prostate visualization, delineation, and fat suppression (n = 6, P<0.001) at both 4.7 and 7 T in nude mice and at 7 T in SCID mice. Evaluated in nude mice, T2-based Dixon “water only” imaging had greater prostate CNR and lower fat SNR at 7 T than at 4.7 T (P<0.001). Prostate volume was smaller in older than in younger mice (n = 6, P<0.02 nude mice; n = 6, P<0.002 SCID mice). Prostate T2 FSE, proton-density-based, and T2-based Dixon “water only” signal intensity was higher in younger than in older mice (P<0.001 nude mice; P<0.01 SCID mice) at both 4.7 and 7 T. This corresponded to an increase in glandular hyperplasia in older mice on histology (P<0.01, n = 6).

Conclusion

T2-based Dixon “water only” images best depict the mouse prostate in young and old nude mice at 4.7 and 7 T. The mouse prostate decreases in size with age. The decrease in T2 and T2-based Dixon “water only” signal with age corresponds to glandular hyperplasia. These findings suggest that age should be an important determinant when choosing models of prostate biology and disease.

2.

Background

Experiencing acute pain can affect the social behaviour of both humans and animals and can increase the risk of aggressive or violent behaviour. However, studies have focused mainly on the impact of acute rather than chronic painful experiences. As recent results suggest that chronic pain or chronic discomfort could increase aggressiveness in humans and other mammals, we tested the hypothesis that, in horses, aggression towards humans (a common source of accidents for professionals) could be linked to the vertebral problems regularly reported in riding horses.

Methodology/Principal Findings

Vertebral examination and standardized behavioural tests were performed independently on the same horses. We showed that most horses severely affected by vertebral problems were prone to react aggressively towards humans (33/43 horses; chi-square test, df = 1, χ2 = 12.30, p<0.001), which was not the case for unaffected or only slightly affected horses (9/16 horses; chi-square test, df = 1, χ2 = 0.25, p>0.05). The more affected the horses were, the fewer positive reactions they exhibited (rs = −0.31, p = 0.02).
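The reported chi-square statistics follow directly from the counts given; a minimal stdlib-only Python sketch (the equal-split null hypothesis is our assumption about how the test was framed):

```python
import math

def chi_square_vs_equal_split(observed):
    """Chi-square goodness-of-fit test against a 50/50 split, df = 1.
    For df = 1, the survival function P(X > stat) equals erfc(sqrt(stat/2))."""
    expected = sum(observed) / 2.0
    stat = sum((o - expected) ** 2 / expected for o in observed)
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Severely affected horses: 33 of 43 reacted aggressively towards humans
stat_a, p_a = chi_square_vs_equal_split([33, 10])  # χ² ≈ 12.30, p < 0.001
# Unaffected or slightly affected horses: 9 of 16
stat_b, p_b = chi_square_vs_equal_split([9, 7])    # χ² = 0.25, p > 0.05
```

Both statistics match the values reported above, which suggests the tests compared aggressive versus non-aggressive counts against an equal split.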

Conclusions/Significance

This is, to our knowledge, the first experimental evidence of a link between chronic discomfort/potential pain (inferred from the presence of vertebral problems) and aggression, suggesting that chronic painful experiences may act in ways similar to acute ones. Chronic discomfort or pain may often be overlooked when facing “bad tempered” individuals, whether humans or animals. This experimental study confirms the importance of including chronic discomfort or pain as a major factor in interpersonal relations and models of aggression.

3.
Laboratory-based CD4 monitoring of HIV patients presents challenges in resource-limited settings (RLS), including frequent machine breakdown, poor engineering support, and limited cold-chain and specimen-transport logistics. This study assessed the performance of two CD4 tests designed for use in RLS: the Dynal assay and the Alere PIMA test (PIMA). The accuracy of Dynal and PIMA using venous blood was assessed in a centralised laboratory by comparison to BD FACSCount (BD FACS). Dynal had a mean bias of −50.35 cells/µl (r² = 0.973, p<0.0001, n = 101) and PIMA −22.43 cells/µl (r² = 0.964, p<0.0001, n = 139) compared to BD FACS. Similar results were observed for PIMA operated by clinicians in one urban (n = 117) and two rural clinics (n = 98). Using internal control beads, PIMA precision was 10.34% CV (low bead mean 214.24 cells/µl) and 8.29% CV (high bead mean 920.73 cells/µl), and similar %CV results were observed for external quality assurance (EQA) and replicate patient samples. Dynal did not perform with EQA material and no internal controls are supplied by the manufacturer; however, duplicate testing of samples resulted in r² = 0.961, p<0.0001, mean bias = −1.44 cells/µl. Using the cut-off of 350 cells/µl compared to BD FACS, PIMA had a sensitivity of 88.85% and specificity of 98.71%, and Dynal 88.61% and 100%. With PIMA, 0.44% (2/452) of patient samples were misclassified as “no treat” and 7.30% (33/452) as “treat,” whereas with Dynal 8.91% (9/101) were misclassified as “treat” and 0% as “no treat.” In our setting PIMA was found to be accurate, precise, and user-friendly in both laboratory and clinic settings. Dynal performed well in the initial centralized laboratory evaluation but lacks requisite quality control measures and was technically more difficult to use, making it less suitable for use at lower-tier laboratories.
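The headline accuracy metrics in a method-comparison study of this kind (mean bias against the reference, and sensitivity/specificity at the 350 cells/µl treatment threshold) can be sketched as follows; the paired CD4 counts below are invented placeholders, not study data:

```python
# Hypothetical paired CD4 counts (reference BD FACS, point-of-care test),
# illustrating the comparisons reported. All values are invented.
pairs = [(300, 280), (200, 240), (400, 360),
         (500, 480), (340, 370), (600, 590)]

# Mean bias: average of (test - reference) across all pairs.
bias = sum(test - ref for ref, test in pairs) / len(pairs)

CUTOFF = 350  # "treat" if CD4 <= 350 cells/µl
tp = sum(1 for ref, test in pairs if ref <= CUTOFF and test <= CUTOFF)
fn = sum(1 for ref, test in pairs if ref <= CUTOFF and test > CUTOFF)
tn = sum(1 for ref, test in pairs if ref > CUTOFF and test > CUTOFF)
fp = sum(1 for ref, test in pairs if ref > CUTOFF and test <= CUTOFF)

sensitivity = tp / (tp + fn)  # fraction of true "treat" cases detected
specificity = tn / (tn + fp)  # fraction of true "no treat" cases detected
```

Misclassification as “treat” corresponds to `fp` and as “no treat” to `fn`, matching how the abstract reports them.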

4.
Aminoacyl-tRNA synthetases (ARSs) are essential for cellular protein synthesis and carry additional domains with versatile functions beyond translation. Eight core ARSs (EPRS, MRS, QRS, RRS, IRS, LRS, KRS, DRS) combine with three nonenzymatic components to form the multisynthetase complex (MSC). We hypothesized that single-nucleotide polymorphisms (SNPs) in the eight core ARS coding genes might influence susceptibility to sporadic congenital heart disease (CHD). We therefore conducted a case-control study of 984 CHD cases and 2,953 non-CHD controls in the Chinese Han population to evaluate the associations of 16 potentially functional SNPs within the eight ARS coding genes with the risk of CHD. We observed significant associations with the risk of CHD for rs1061248 [G/A; odds ratio (OR) = 0.90, 95% confidence interval (CI) = 0.81–0.99; P = 3.81×10−2], rs2230301 [A/C; OR = 0.73, 95% CI = 0.60–0.90; P = 3.81×10−2], rs1061160 [G/A; OR = 1.18, 95% CI = 1.06–1.31; P = 3.53×10−3], and rs5030754 [G/A; OR = 1.39, 95% CI = 1.11–1.75; P = 4.47×10−3] of the EPRS gene. After correction for multiple comparisons, rs1061248 conferred no predisposition to CHD. Additionally, a combined analysis showed a significant dose-response effect of CHD risk among individuals carrying different numbers of risk alleles (P trend = 5.00×10−4). Compared with individuals carrying “0–2” risk alleles, those carrying “3”, “4”, or “5 or more” risk alleles had a 0.97-, 1.25-, or 1.38-fold risk of CHD, respectively. These findings indicate that genetic variants of the EPRS gene may influence individual susceptibility to CHD in the Chinese Han population.

5.
Phenotypic assortative mating in human marriages has been widely observed. Using genome-wide genotype data from the Framingham Heart Study (FHS; 989 married couples) and the Health and Retirement Study (HRS; 3,474 married couples), this study investigates genomic assortative mating in human marriages. Two types of genomic marital correlation are calculated. The first is a correlation specific to a single married couple, “averaged” over all available autosomal single-nucleotide polymorphisms (SNPs). In FHS, the average married-couple correlation is 0.0018 with p = 3×10−5; in HRS, it is 0.0017 with p = 7.13×10−13. The marital correlation among the positively assorting SNPs is 0.001 (p = .0043) in FHS and 0.015 (p = 1.66×10−24) in HRS. The sizes of these estimates in FHS and HRS are consistent with those suggested by the distribution of allelic combinations. The study also estimated SNP-specific correlations “averaged” over all married couples; suggestive evidence is reported. Future studies need to consider a more general form of genomic assortment, in which different allelic forms in homologous and non-homologous genes result in the same phenotype.

6.

Background

Although many case reports have described patients with proton pump inhibitor (PPI)-induced hypomagnesemia, the impact of PPI use on hypomagnesemia has not been fully clarified through comparative studies. We aimed to evaluate the association between the use of PPI and the risk of developing hypomagnesemia by conducting a systematic review with meta-analysis.

Methods

We conducted a systematic search of MEDLINE, EMBASE, and the Cochrane Library using the primary keywords “proton pump,” “dexlansoprazole,” “esomeprazole,” “ilaprazole,” “lansoprazole,” “omeprazole,” “pantoprazole,” “rabeprazole,” “hypomagnesemia,” “hypomagnesaemia,” and “magnesium.” Studies were included if they evaluated the association between PPI use and hypomagnesemia and reported relative risks or odds ratios or provided data for their estimation. Pooled odds ratios with 95% confidence intervals were calculated using the random-effects model. Statistical heterogeneity was assessed with Cochran’s Q test and I² statistics.
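A random-effects pooled odds ratio with Cochran’s Q and I², as described here, is commonly computed with the DerSimonian-Laird estimator; a stdlib-only sketch under that assumption, using invented study data rather than the nine studies analysed:

```python
import math

# Invented placeholder studies: (odds ratio, standard error of the log OR).
studies = [(1.50, 0.20), (2.10, 0.35), (0.90, 0.25), (1.80, 0.30)]

y = [math.log(or_) for or_, _ in studies]   # log odds ratios
w = [1.0 / se ** 2 for _, se in studies]    # fixed-effect (inverse-variance) weights

# Fixed-effect mean, Cochran's Q, and the I² heterogeneity statistic.
y_fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, y))
df = len(studies) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# DerSimonian-Laird between-study variance tau², then random-effects pooling.
tau2 = max(0.0, (Q - df) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
w_re = [1.0 / (se ** 2 + tau2) for _, se in studies]
log_pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se_pooled = math.sqrt(1.0 / sum(w_re))

pooled_or = math.exp(log_pooled)
ci = (math.exp(log_pooled - 1.96 * se_pooled),
      math.exp(log_pooled + 1.96 * se_pooled))
```

Because the pooled log OR is a weighted average of the study log ORs, the pooled estimate always lies between the smallest and largest study odds ratios.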

Results

Nine studies including 115,455 patients were analyzed. The median Newcastle-Ottawa quality score for the included studies was seven (range, 6–9). Among patients taking PPIs, the median proportion of patients with hypomagnesemia was 27.1% (range, 11.3–55.2%) across all included studies. Among patients not taking PPIs, the median proportion of patients with hypomagnesemia was 18.4% (range, 4.3–52.7%). On meta-analysis, the pooled odds ratio for PPI use was 1.775 (95% confidence interval 1.077–2.924). Significant heterogeneity was identified using Cochran’s Q test (df = 7, P<0.001, I² = 98.0%).

Conclusions

PPI use may increase the risk of hypomagnesemia. However, significant heterogeneity among the included studies prevented us from reaching a definitive conclusion.

7.

Background

Dying at home and dying at the preferred place of death are advocated as desirable outcomes of palliative care. More insight is needed into their usefulness as quality indicators. Our objective is to describe whether “the percentage of patients dying at home” and “the percentage of patients who died in their place of preference” are feasible and informative quality indicators.

Methods and Findings

A mortality follow-back study was conducted, based on data recorded by representative GP networks regarding home-dwelling patients who died non-suddenly in Belgium (n = 1036), the Netherlands (n = 512), Italy (n = 1639) or Spain (n = 565). “The percentage of patients dying at home” ranged between 35.3% (Belgium) and 50.6% (the Netherlands) in the four countries, while “the percentage of patients dying at their preferred place of death” ranged between 67.8% (Italy) and 86.0% (Spain). Both indicators were strongly associated with palliative care provision by the GP (odds ratios of 1.55–13.23 and 2.30–6.63, respectively). The quality indicator concerning the preferred place of death offers a broader view than the indicator concerning home deaths, as it takes into account preferences met in all locations. However, GPs did not know the preference for place of death in 39.6% (the Netherlands) to 70.3% (Italy) of cases, whereas the actual place of death was known in almost all cases.

Conclusion

GPs know their patients’ actual place of death, making the percentage of home deaths a feasible indicator for collection by GPs. However, patients’ preferred place of death was often unknown to the GP. We therefore recommend using information from relatives as long as information from GPs on the preferred place of death is lacking. Timely communication about the place where patients want to be cared for at the end of life remains a challenge for GPs.

8.

Background

Isoniazid preventive therapy (IPT), with or without concomitant antiretroviral therapy (ART), is a proven intervention to prevent tuberculosis among people living with HIV (PLHIV). However, there are few data on the routine implementation of this intervention and its effectiveness in settings with limited resources.

Objectives

To measure the level of uptake and effectiveness of IPT in reducing tuberculosis incidence in a cohort of PLHIV enrolled into HIV care between 2007 and 2010 in five hospitals in southern Ethiopia.

Methods

A retrospective cohort analysis of an electronic patient database was performed. The independent effects of no intervention, “IPT-only,” “IPT-before-ART,” “IPT-and-ART started simultaneously,” “ART-only,” and “IPT-after-ART” on TB incidence were measured. Cox proportional hazards regression was used to assess the association of treatment categories with TB incidence.

Results

Of 7,097 patients, 867 were excluded because they were transferred in; a further 823 (12%) were excluded because they were either identified as having TB through screening (292 patients) or were already on TB treatment (531 patients). Among the remaining 5,407 patients observed, IPT had been initiated in 39% of eligible patients. Children, males, patients with advanced disease, and those in pre-ART care were less likely to be initiated on IPT. The overall TB incidence was 2.6 per 100 person-years. Compared to no intervention, “IPT-only” (aHR = 0.36, 95% CI = 0.19–0.66) and “ART-only” (aHR = 0.32, 95% CI = 0.24–0.43) were associated with significant reductions in TB incidence. Combining ART and IPT had a more profound effect: starting IPT before ART (aHR = 0.18, 95% CI = 0.08–0.42) or simultaneously with ART (aHR = 0.20, 95% CI = 0.10–0.42) reduced TB incidence further, by roughly 80%.
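The rate and effect-size arithmetic above can be made explicit; the case and person-year counts below are hypothetical, chosen only to reproduce the reported overall rate:

```python
# Person-time incidence: TB cases per 100 person-years of observation.
# Hypothetical counts (not the study's raw data).
tb_cases = 140
person_years = 5385.0
incidence_per_100py = tb_cases / person_years * 100  # ≈ 2.6 per 100 PY

# An adjusted hazard ratio of 0.18 for "IPT-before-ART" corresponds to
# roughly an 82% reduction in the TB incidence rate versus no intervention.
aHR = 0.18
percent_reduction = (1 - aHR) * 100
```

This is why the abstract describes the combined strategies (aHR 0.18–0.20) as reducing TB by about 80%.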

Conclusions

IPT was found to be effective in reducing TB incidence, both independently and with concomitant ART, under programme conditions in resource-limited settings. The level of IPT provision and its effectiveness in reducing TB were encouraging in the study setting. Scaling up and strengthening IPT services in addition to ART can have a beneficial effect in reducing the TB burden among PLHIV in settings with a high TB/HIV burden.

9.

Objectives

There is a need for better, noninvasive quantitative biomarkers for assessing the rate of progression and possible response to therapy in spinal muscular atrophy (SMA). In this study, we compared three electrophysiological measures: compound muscle action potential (CMAP) amplitude, motor unit number estimate (MUNE), and electrical impedance myography (EIM) 50 kHz phase values in a mild mouse model of spinal muscular atrophy, the Smn1c/c mouse.

Methods

Smn1c/c mice (N = 11) and wild type (WT) animals (−/−, N = 13) were measured on average triweekly until approximately 1 year of age. Measurements included CMAP, EIM, and MUNE of the gastrocnemius muscle as well as weight and front paw grip strength. At the time of sacrifice at one year, additional analyses were performed on the animals including serum survival motor neuron (SMN) protein levels and muscle fiber size.

Results

Both EIM 50 kHz phase and CMAP showed strong differences between WT and SMA animals (repeated measures 2-way ANOVA, P<0.0001 for both), whereas MUNE did not. Both body weight and EIM showed differences in trajectory over time (p<0.001 and p = 0.005, respectively). At the time of sacrifice at one year, EIM values correlated with motor neuron counts in the spinal cord and SMN levels across both groups of animals (r = 0.41, p = 0.047 and r = 0.57, p = 0.003, respectively), while CMAP did not. Motor neuron number in Smn1c/c mice was not significantly reduced compared to WT animals.

Conclusions

EIM appears sensitive to muscle status in this mild animal model of SMA. The lack of a reduction in MUNE or motor neuron number, alongside reduced EIM and CMAP values, supports the view that much of the pathology in these animals is distal to the cell body, likely at the neuromuscular junction or in the muscle itself.

10.
11.
Recent analysis of the cannabinoid content of cannabis plants suggests a shift towards use of high-potency plant material with high levels of Δ9-tetrahydrocannabinol (THC) and low levels of other phytocannabinoids, particularly cannabidiol (CBD). Use of this type of cannabis is thought by some to predispose users to greater adverse mental health outcomes and fewer therapeutic benefits. Australia has one of the highest per capita rates of cannabis use in the world, yet there has been no previous systematic analysis of the cannabis being used. In the present study we examined the cannabinoid content of 206 cannabis samples that had been confiscated by police from recreational users holding 15 g of cannabis or less, under the New South Wales “Cannabis Cautioning” scheme. A further 26 “Known Provenance” samples, seized by police from larger indoor or outdoor cultivation sites rather than from street-level users, were also analysed. An HPLC method was used to determine the content of nine cannabinoids: THC, CBD, cannabigerol (CBG), their plant-based carboxylic acid precursors THC-A, CBD-A, and CBG-A, as well as cannabichromene (CBC), cannabinol (CBN), and tetrahydrocannabivarin (THC-V). The “Cannabis Cautioning” samples showed high mean THC content (THC+THC-A = 14.88%) and low mean CBD content (CBD+CBD-A = 0.14%). A modest level of CBG was detected (CBG+CBG-A = 1.18%) and very low levels of CBC, CBN, and THC-V (<0.1%). “Known Provenance” samples showed no significant differences in THC content between those seized from indoor versus outdoor cultivation sites. The present analysis echoes trends reported in other countries towards the use of high-potency cannabis with very low CBD content. The implications for public health outcomes and harm reduction strategies are discussed.

12.
13.
The head louse, Pediculus humanus capitis, is an obligate ectoparasite that infests humans. Studies have demonstrated a correlation between sales figures for over-the-counter (OTC) treatment products and the number of people with head lice. The deregulation of the Swedish pharmacy market on July 1, 2009, reduced the possibility of obtaining complete sales figures, and thereby of tracking yearly trends in head lice infestations. In the present study we investigated whether web queries on head lice can be used as a substitute for OTC sales figures. The numbers of queries on “huvudlöss” (head lice) and “hårlöss” (lice in hair) were obtained via Google Insights for Search and the Vårdguiden medical website. The analysis showed that both the Vårdguiden series and the Google series were statistically significant (p<0.001) when added separately, but if the Google series was already included in the model, the Vårdguiden series was not statistically significant (p = 0.5689). In conclusion, web queries can detect whether the number of people infested with head lice in Sweden is increasing or decreasing over a period of years, and can be as reliable a proxy as OTC sales figures.

14.

Background

Learning followed by a period of sleep, even as little as a nap, promotes memory consolidation. It is now generally recognized that sleep facilitates the stabilization of information acquired prior to sleep. However, the temporal nature of the effect of sleep on retention of declarative memory is yet to be understood. We examined the impact of a delayed nap onset on the recognition of neutral pictorial stimuli with an added spatial component.

Methodology/Principal Findings

Participants completed an initial study session involving 150 neutral pictures of people, places, and objects. Immediately following the picture presentation, participants were asked to make recognition judgments on a subset of “old,” previously seen, pictures versus intermixed “new” pictures. Participants were then divided into one of four groups, who either took a 90-minute nap immediately, 2 hours, or 4 hours after learning, or remained awake for the duration of the experiment. Six hours after initial learning, participants were again tested on the remaining “old” pictures, with “new” pictures intermixed.

Conclusions/Significance

Interestingly, we found a stabilizing benefit of sleep on the memory trace reflected as a significant negative correlation between the average time elapsed before napping and decline in performance from test to retest (p = .001). We found a significant interaction between the groups and their performance from test to retest (p = .010), with the 4-hour delay group performing significantly better than both those who slept immediately and those who remained awake (p = .044, p = .010, respectively). Analysis of sleep data revealed a significant positive correlation between amount of slow wave sleep (SWS) achieved and length of the delay before sleep onset (p = .048). The findings add to the understanding of memory processing in humans, suggesting that factors such as waking processing and homeostatic increases in need for sleep over time modulate the importance of sleep to consolidation of neutral declarative memories.

15.
Objective

Best long-term practice in primary HIV-1 infection (PHI) remains unknown for the individual. A risk-based scoring system associated with surrogate markers of HIV-1 disease progression could help stratify patients with PHI at highest risk for HIV-1 disease progression.

Methods

We prospectively enrolled 290 individuals with well-documented PHI in the Zurich Primary HIV-1 Infection Study, an open-label, non-randomized, observational, single-center study. Patients could choose to undergo early antiretroviral treatment (eART) and stop it after one year of undetectable viremia, to continue treatment indefinitely, or to defer treatment. For each patient we calculated an a priori defined “Acute Retroviral Syndrome Severity Score” (ARSSS), consisting of clinical and basic laboratory variables and ranging from zero to ten points. We used linear regression models to assess the association between ARSSS and log baseline viral load (VL), baseline CD4+ cell count, and log viral setpoint (sVL) (i.e. VL measured ≥90 days after infection or treatment interruption).

Results

Mean ARSSS was 2.89. CD4+ cell count at baseline was negatively correlated with ARSSS (p = 0.03, n = 289), whereas baseline HIV-RNA levels showed a strong positive correlation with ARSSS (p<0.001, n = 290). In the regression models, a 1-point increase in the score corresponded to a 0.10 log increase in baseline VL and a CD4+ cell count decline of 12 cells/µl, respectively. In patients with PHI not undergoing eART, a higher ARSSS was significantly associated with higher sVL (p = 0.029, n = 64). In contrast, in patients undergoing eART with subsequent structured treatment interruption, no correlation was found between sVL and ARSSS (p = 0.28, n = 40).
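Regression coefficients of this kind (a 0.10 log-VL increase per score point) come from ordinary least squares; a minimal sketch with invented (ARSSS, log10 viral load) pairs, not study data:

```python
# Invented placeholder data: (ARSSS score, log10 baseline viral load).
data = [(1, 4.56), (2, 4.68), (3, 4.78), (4, 4.86), (5, 4.97)]

n = len(data)
mean_x = sum(x for x, _ in data) / n
mean_y = sum(y for _, y in data) / n

# Least-squares slope: covariance of (x, y) divided by variance of x.
slope = (sum((x - mean_x) * (y - mean_y) for x, y in data)
         / sum((x - mean_x) ** 2 for x, _ in data))
intercept = mean_y - slope * mean_x
```

With these placeholder points the slope works out to 0.10 log VL per score point, mirroring the size of the reported coefficient.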

Conclusion

The ARSSS is a simple clinical score that correlates with the best-validated surrogate markers of HIV-1 disease progression. In regions where ART is not universally available and eART is not standard, this score may help identify the patients who will profit most from early antiretroviral therapy.

16.

Background

Access to safe drinking-water is a fundamental requirement for good health and is also a human right. Global access to safe drinking-water is monitored by WHO and UNICEF using as an indicator “use of an improved source,” which does not account for water quality measurements. Our objectives were to determine whether water from “improved” sources is less likely to contain fecal contamination than “unimproved” sources and to assess the extent to which contamination varies by source type and setting.

Methods and Findings

Studies in Chinese, English, French, Portuguese, and Spanish were identified from online databases, including PubMed and Web of Science, and grey literature. Studies in low- and middle-income countries published between 1990 and August 2013 that assessed drinking-water for the presence of Escherichia coli or thermotolerant coliforms (TTC) were included provided they associated results with a particular source type. In total 319 studies were included, reporting on 96,737 water samples. The odds of contamination within a given study were considerably lower for “improved” sources than “unimproved” sources (odds ratio [OR] = 0.15 [0.10–0.21], I² = 80.3% [72.9–85.6]). However, in 38% of 191 studies, over a quarter of samples from improved sources contained fecal contamination. Water sources in low-income countries (OR = 2.37 [1.52–3.71]; p<0.001) and rural areas (OR = 2.37 [1.47–3.81]; p<0.001) were more likely to be contaminated. Studies rarely reported stored water quality or sanitary risks, and few achieved robust random selection. Safety may be overestimated due to infrequent water sampling and deterioration in quality prior to consumption.

Conclusion

Access to an “improved source” provides a measure of sanitary protection but does not ensure water is free of fecal contamination, nor is it consistent between source types or settings. International estimates therefore greatly overstate use of safe drinking-water and do not fully reflect disparities in access. An enhanced monitoring strategy would combine indicators of sanitary protection with measures of water quality. Please see later in the article for the Editors’ Summary.

17.
Human and animal cremated osteological remains from twelve Roman-period graves at the archaeological site of Sepkovcica near Velika Gorica (Turopolje region, NW Croatia) were analysed. Besides the contents of urns and grave pits, the fills of grave vessels such as bowls, pots, and amphorae from twenty-two grave samples were included in this study. The preservation of the osteological and dental remains of human and animal origin was very poor; the majority of fragments barely reached lengths of 10 mm, and the weight of the remains barely exceeded 100 g per person. In addition to traditional macroscopic methods of analysing cremated remains, a microscopic method for determining age at death was also tested. Fragments of femoral diaphyses from the eighteen individuals whose remains had been found at the site were analysed. Age at death was estimated within five- or ten-year ranges, and long-bone fragments of infants were detected. The taxonomic position of each analysed animal specimen was determined by microscopic analysis of the cremated bones. The results confirm the validity of the microscopic method for determining age at death in human remains and for the taxonomic classification of cremated animal remains from archaeological sites.

18.
This study investigated the relationship between the level of stress in middle and high school students aged 12–18 and the risk of atopic dermatitis. Data from the Sixth Korea Youth Risk Behavior Web-based Survey (KYRBWS-VI), a cross-sectional study of 74,980 students in 800 middle and high schools with a response rate of 97.7%, were analyzed. Ordinal logistic regression analyses were conducted to determine the relationship between stress and atopic dermatitis severity. A total of 5,550 boys and 6,964 girls reported having been diagnosed with atopic dermatitis. Younger students were more likely to have atopic dermatitis. Interestingly, the educational level of the parents was found to be associated with having atopic dermatitis and with more severe disease. In particular, girls whose mothers had at least a college education had a 41% higher risk of having atopic dermatitis and a severe atopic condition (odds ratio [OR] = 1.41, 95% CI, 1.22–1.63; P<0.0001) compared with those whose mothers had attended middle school at most. A similar trend was observed for both boys and girls with respect to their father’s education level. Stress level was found to be significantly associated with the risk of atopic dermatitis. Compared to boys who reported “no stress,” boys with “very high” stress had a 46% higher risk of having more severe atopic dermatitis (OR = 1.46, 95% CI, 1.20–1.78; P<0.0001), those with “high” stress a 44% higher risk (OR = 1.44, 95% CI, 1.19–1.73; P<0.0001), and those with “moderate” stress a 21% higher risk (OR = 1.21, 95% CI, 1.00–1.45; P = 0.05). In contrast, we found no statistically significant relationship between stress and atopic dermatitis in girls. This study suggests that stress and parents’ education level are associated with atopic dermatitis; specifically, the degree of stress is positively correlated with the likelihood of being diagnosed with the condition and with its severity.

19.
A mural excavated at the Neolithic Çatalhöyük site (Central Anatolia, Turkey) has been interpreted as the oldest known map. Dating to ∼6600 BCE, it putatively depicts an explosive summit eruption of the Hasan Dağı twin-peaks volcano located ∼130 km northeast of Çatalhöyük, and a bird’s-eye view of a town plan in the foreground. This interpretation, however, has remained controversial, not least because independent evidence for a contemporaneous explosive volcanic eruption of Hasan Dağı has been lacking. Here, we document the presence of andesitic pumice veneer on the summit of Hasan Dağı, which we dated using (U-Th)/He zircon geochronology. The (U-Th)/He zircon eruption age of 8.97±0.64 ka (or 6960±640 BCE; uncertainties 2σ) overlaps closely with 14C ages for cultural strata at Çatalhöyük, including level VII containing the “map” mural. A second pumice sample from a surficial deposit near the base of Hasan Dağı records an older explosive eruption at 28.9±1.5 ka. U-Th zircon crystallization ages in both samples range from near-eruption to secular equilibrium (>380 ka). Collectively, our results reveal protracted intrusive activity at Hasan Dağı punctuated by explosive venting, and provide the first radiometric ages for a Holocene explosive eruption that was most likely witnessed by humans in the area. Geologic and geochronologic lines of evidence thus support previous interpretations that residents of Çatalhöyük artistically represented an explosive eruption of Hasan Dağı volcano. The magmatic longevity recorded by quasi-continuous zircon crystallization, coupled with new evidence for late-Pleistocene and Holocene explosive eruptions, implicates Hasan Dağı as a potential volcanic hazard.

20.

Background and Purpose

A growing number of studies have examined the association between short-term exposure to particulate matter (PM) and the morbidity of stroke attack, but few have focused on stroke subtypes. The objective of this study is to assess the relationship between PM exposure and attacks of different stroke subtypes, which remains uncertain.

Methods

Meta-analyses, meta-regression and subgroup analyses were conducted to investigate the association between short-term effects of exposure to PM and the morbidity of different stroke subtypes from a number of epidemiologic studies (from 1997 to 2012).

Results

Nineteen articles were identified. The odds ratio (OR) of stroke attack associated with a 10 µg/m3 increment of particulate matter (“thoracic particles” [PM10], <10 µm in aerodynamic diameter; “fine particles” [PM2.5], <2.5 µm in aerodynamic diameter) was used as the effect size. PM10 exposure was related to an increased risk of stroke attack (OR per 10 µg/m3 = 1.004, 95% CI: 1.001∼1.008), and PM2.5 exposure was not significantly associated with stroke attack (OR per 10 µg/m3 = 0.999, 95% CI: 0.994∼1.003). However, when focusing on stroke subtypes, PM2.5 (OR per 10 µg/m3 = 1.025; 95% CI, 1.001∼1.049) and PM10 (OR per 10 µg/m3 = 1.013; 95% CI, 1.001∼1.025) exposure were statistically significantly associated with an increased risk of ischemic stroke attack, while PM2.5 (all studies showed no significant association) and PM10 (OR per 10 µg/m3 = 1.007; 95% CI, 0.992∼1.022) exposure were not associated with an increased risk of hemorrhagic stroke attack. Meta-regression identified study design and study area as two significant covariates.

Conclusion

PM2.5 and PM10 had different effects on different stroke subtypes. In the future, it is worthwhile to study the effects of PM on ischemic stroke and hemorrhagic stroke separately.
