Similar Articles
20 similar articles retrieved.
1.
2.
The research and analysis presented in this special issue show that a limited number of consumption categories are consistently responsible for the largest share of environmental impact: mobility (automobile and air transport), food (meat, poultry, fish, and dairy, followed by plant-based food), and residential energy use (heating, cooling, electrical appliances, and lighting). Differences in impact per euro between the product groupings appear to be relatively limited, so it is essential to reduce the life-cycle impacts of products as such, rather than to shift expenditures to less impact-intensive product groupings. Furthermore, the effectiveness of expenditure on material products in improving quality of life leaves much room for improvement. Environmentally extended input-output (EEIO) tables probably form, in this field, the most appropriate information support tool for priority setting, prospective assessment of options, scenario analysis, and monitoring. A clear benefit would result from integrating the input-output (IO) tables that the 25 individual countries of the European Union (EU) report to Eurostat with other officially available information on emissions and resource use into a 60-sector EEIO table for the EU. This would be only the first step toward more detailed tables. Three strategies are suggested to realize the additional, desirable detail of 150 sectors or more, each achievable at an increasing time horizon and with increasing effort: (1) further developing the current CEDA EU25 table; (2) building a truly European detailed input-output table within the restrictions of existing data-gathering procedures; and (3) as (2), but developing new, dedicated data-gathering and classification procedures. In all cases, a key issue is harmonizing classification systems for industry sectors, consumer expenditure categories, and product classifications (as in import/export statistics) in such a way that data sets may adequately be linked to input-output tables.

3.
By using a generalization of the Poisson process, distributions can be constructed that accommodate the degree of underdispersion, relative to the Poisson distribution, that may be apparent in observed data. These distributions are then used to examine differences between the distributions of the numbers of fetal implants in mice exposed to different doses of the herbicide 2,4,5-T.
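The kind of underdispersed fit described above can be illustrated with a short sketch. Here the Conway-Maxwell-Poisson family stands in as one convenient generalization of the Poisson distribution that allows underdispersion (the paper's own generalization may differ), and the implant counts are hypothetical:

```python
# Sketch: quantifying underdispersion in implant counts and fitting a Poisson
# generalization (Conway-Maxwell-Poisson, chosen here for illustration only).
import numpy as np
from scipy.special import gammaln
from scipy.optimize import minimize

implants = np.array([10, 11, 12, 12, 13, 13, 13, 14, 14, 15])       # hypothetical litter counts
print("dispersion index:", implants.var(ddof=1) / implants.mean())  # < 1 => underdispersed

def cmp_neg_loglik(params, y, kmax=100):
    lam, nu = np.exp(params)                       # enforce positive parameters
    k = np.arange(kmax + 1)
    logw = k * np.log(lam) - nu * gammaln(k + 1)   # unnormalized log pmf on 0..kmax
    logz = np.logaddexp.reduce(logw)               # truncated normalizing constant
    return -np.sum(y * np.log(lam) - nu * gammaln(y + 1) - logz)

fit = minimize(cmp_neg_loglik, x0=[np.log(implants.mean()), 0.0], args=(implants,))
lam_hat, nu_hat = np.exp(fit.x)
print(f"lambda = {lam_hat:.2f}, nu = {nu_hat:.2f}  (nu > 1 indicates underdispersion)")
```

Comparing the maximized likelihood with that of an ordinary Poisson fit (nu fixed at 1) then gives a direct check on underdispersion within each dose group.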

4.
Leeyoung Park and Ju H. Kim, Genetics, 2015, 199(4):1007-1016
Causal models including genetic factors are important for understanding the presentation mechanisms of complex diseases. Familial aggregation and segregation analyses based on polygenic threshold models have been the primary approach to fitting genetic models to the family data of complex diseases. In the current study, an advanced approach to obtaining appropriate causal models for complex diseases, based on the sufficient component cause (SCC) model and involving combinations of traditional genetics principles, was proposed. The probabilities for the entire population, i.e., normal-normal, normal-disease, and disease-disease, were considered for each model for the appropriate handling of common complex diseases. The causal model in the current study included the genetic effects from single genes involving epistasis, complementary gene interactions, gene-environment interactions, and environmental effects. Bayesian inference using a Markov chain Monte Carlo (MCMC) algorithm was used to assess the proportions of each component for a given population lifetime incidence. This approach is flexible, allowing both common and rare variants within a gene and across multiple genes. An application to schizophrenia data confirmed the complexity of the causal factors. An analysis of diabetes data demonstrated that environmental factors and gene-environment interactions are the main causal factors for type II diabetes. The proposed method is effective and useful for identifying causal models, which can accelerate the development of efficient strategies for identifying causal factors of complex diseases.

5.
6.
While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and the negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously hypothesized signals of balancing selection, as well as discover novel and interesting candidates.
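A toy composite likelihood ratio conveys the flavor of such a scan: the alternative model simply assumes allele frequencies cluster near an equilibrium value, while the null uses the standard neutral frequency spectrum. The published tests are considerably more sophisticated (for example, they account for linked neutral variation), so everything below, including the window of counts, is illustrative:

```python
# Toy composite likelihood ratio (CLR) for balancing selection in one window of SNPs.
# Null: standard neutral site-frequency spectrum, P(i derived of n) proportional to 1/i.
# Alternative: frequencies cluster near an equilibrium value x_eq (binomial sampling).
import numpy as np
from scipy.stats import binom

def neutral_logprob(i, n):
    probs = 1.0 / np.arange(1, n)            # frequency classes 1..n-1 (polymorphic sites)
    return np.log(probs[i - 1] / probs.sum())

def clr_window(counts, n, x_eq=0.5):
    counts = np.asarray(counts)
    alt = binom.logpmf(counts, n, x_eq).sum()
    null = sum(neutral_logprob(i, n) for i in counts)
    return 2 * (alt - null)

window = [24, 26, 25, 23, 27, 25, 24, 26]            # hypothetical derived-allele counts, n = 50
print("CLR =", round(clr_window(window, n=50), 2))   # large values point to balancing selection
```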

7.
Petroleum hydrocarbons may pose risks to humans and the environment that must be properly managed. Some methodologies cluster hundreds of hydrocarbon substances into one single parameter, total petroleum hydrocarbon (TPH), ranging from C10 to C40. Several national policies establish a maximum acceptable concentration in soil to determine directly whether a site is seriously contaminated; this scope may be described as a total content approach. Another approach divides TPH into fractions according to their physico-chemical and toxicological properties, defined in terms of environmental behavior (aliphatic and aromatic compounds) and equivalent carbon number (EC). This approach allows the associated risk for human health to be determined through the Human Risk Index (HRI). The consequences of applying the total content and fraction approaches are discussed in this study, evaluating how the two approaches differ for volatile and semi-volatile hydrocarbons and with regard to the origin of the contamination. When focusing on volatile substances, the fraction approach is much more restrictive than the total content approach, in which all oil products are assessed in the same way. When assessing semi-volatile hydrocarbons, their behavior varies depending on the oil product. This work contributes to the implementation of risk-based assessment for petroleum hydrocarbons.
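A minimal sketch of how the two approaches differ numerically, assuming a hazard-index style HRI (the sum of each fraction's concentration over a fraction-specific screening level); the fraction boundaries, concentrations, and screening levels below are placeholders, not regulatory values:

```python
# Sketch of a fraction-based Human Risk Index (HRI) for TPH versus a total-content check.
# All numbers are placeholders for illustration, not regulatory screening values.
soil_mg_kg = {          # measured concentrations per TPH fraction (hypothetical)
    "aliphatic_EC10-12": 120.0,
    "aliphatic_EC12-16": 340.0,
    "aromatic_EC10-12":  45.0,
    "aromatic_EC16-21":  210.0,
}
screening_mg_kg = {     # hypothetical risk-based screening levels per fraction
    "aliphatic_EC10-12": 1000.0,
    "aliphatic_EC12-16": 2500.0,
    "aromatic_EC10-12":  200.0,
    "aromatic_EC16-21":  800.0,
}

# Fraction approach: sum of concentration / screening level over the fractions.
hri = sum(soil_mg_kg[f] / screening_mg_kg[f] for f in soil_mg_kg)
print(f"HRI = {hri:.2f} -> {'further assessment needed' if hri > 1 else 'below the risk benchmark'}")

# Total-content approach: one summed C10-C40 value compared against a single generic limit,
# regardless of the aliphatic/aromatic composition.
tph_total = sum(soil_mg_kg.values())
print("total TPH =", tph_total, "mg/kg (compared against a single generic limit)")
```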

8.
Normal variation in gene expression due to regulatory polymorphisms is often masked by biological and experimental noise. In addition, some regulatory polymorphisms may become apparent only in specific tissues. We derived human induced pluripotent stem (iPS) cells from adult skin primary fibroblasts and attempted to detect tissue-specific cis-regulatory variants using in vitro cell differentiation. We used padlock probes and high-throughput sequencing for digital RNA allelotyping and measured allele-specific gene expression in primary fibroblasts, lymphoblastoid cells, iPS cells, and their differentiated derivatives. We show that allele-specific expression is both cell-type- and genotype-dependent, but the majority of detectable allele-specific expression loci remain consistent despite large changes in cell type or experimental condition following iPS reprogramming, except on the X chromosome. We show that our approach to mapping cis-regulatory variants reduces in vitro experimental noise and reveals additional tissue-specific variants using skin-derived human iPS cells.

9.
The geochemical evaluation methodology described in this paper is used to distinguish contaminated samples from those that contain only naturally occurring levels of inorganic constituents. Site-to-background comparisons of trace elements in soil based solely on statistical techniques are prone to high false positive indications. Trace element distributions in soil tend to span a wide range of concentrations and are highly right-skewed, approximating lognormal distributions, and background data sets are typically too small to capture this range. Geochemical correlations of trace versus major elements are predicated on the natural elemental associations in soil. Linear trends with positive slopes are expected for scatter plots of specific trace versus major elements in uncontaminated samples. Individual samples that may contain a component of contamination are identified by their positions off the trend formed by uncontaminated samples. In addition to pinpointing which samples may be contaminated, this technique provides mechanistic explanations for naturally elevated element concentrations, information that a purely statistical approach cannot provide. These geochemical evaluations have been successfully performed at numerous facilities across the United States. Removing naturally occurring constituents from consideration early in a site investigation reduces or eliminates unnecessary investigation and risk assessment, and focuses remediation efforts.
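The screening logic can be sketched as a regression of a trace element on a major element in background samples, with site samples flagged when they plot above the upper bound of the natural trend. The As-Fe pairing, the synthetic background data, and the simplified prediction bound are all illustrative assumptions:

```python
# Sketch: flag site samples that plot above the natural trace-vs-major element trend
# fitted to background samples. Synthetic data; the bound is a simplified one-sided
# prediction limit rather than a full prediction interval.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
fe_bkg = rng.uniform(0.5, 4.0, 40)                       # background Fe (%)
as_bkg = 2.0 + 3.0 * fe_bkg + rng.normal(0, 0.8, 40)     # background As (mg/kg), natural trend

slope, intercept, *_ = stats.linregress(fe_bkg, as_bkg)
resid = as_bkg - (intercept + slope * fe_bkg)
s = resid.std(ddof=2)                                    # residual standard deviation

def flag(fe_site, as_site, alpha=0.05):
    """True if the site sample sits above the one-sided upper bound of the background trend."""
    upper = intercept + slope * fe_site + stats.t.ppf(1 - alpha, df=len(fe_bkg) - 2) * s
    return as_site > upper

print(flag(2.0, 8.5))    # on the natural As-Fe trend  -> False
print(flag(2.0, 25.0))   # far above the trend         -> True (possible contamination)
```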

10.

Longitudinal studies with binary outcomes characterized by informative right censoring are commonly encountered in the clinical, basic, behavioral, and health sciences. Approaches developed to analyze such data have mainly been tailored to clustered or longitudinal data that are missing completely at random or missing at random. Studies that have addressed informative right censoring with binary outcomes are characterized by considerable computational complexity and difficulty of implementation. Here we present a new maximum likelihood-based approach in which repeated binary measures are modeled in a generalized linear mixed model as a function of time and other covariates. The longitudinal binary outcome and the censoring process, determined by the number of times a subject is observed, share latent random variables (a random intercept and slope); these subject-specific random effects are common to both models. A simulation study and sensitivity analysis were conducted to test the model under different assumptions and censoring settings. Our results showed that the estimates generated under this model were accurate when censoring was fully informative or partially informative with dependence on the slopes. A successful implementation was undertaken on a cohort of renal transplant patients, with blood urea nitrogen as a binary outcome measured over time to indicate normal or abnormal kidney function until graft rejection, which resulted in informative right censoring. In addition to its novelty and accuracy, a key advantage of the proposed model is that it can be implemented with available analytical tools and applied to any other longitudinal dataset with informative censoring.
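The shared-latent-variable idea can be sketched by simulating data in which the same subject-specific random intercept and slope drive both the repeated binary outcome and the number of visits observed. This mimics the data-generating assumption only; it is not the authors' likelihood or estimation code, and all parameter values are illustrative:

```python
# Sketch of the shared-random-effects mechanism behind informative right censoring:
# each subject's random intercept and slope enter both the binary-outcome model and
# the model for how many visits are observed.
import numpy as np

rng = np.random.default_rng(1)
expit = lambda x: 1.0 / (1.0 + np.exp(-x))

n_subjects, max_visits = 200, 8
b = rng.multivariate_normal([0, 0], [[1.0, 0.3], [0.3, 0.25]], n_subjects)  # (intercept, slope)

records = []
for i in range(n_subjects):
    b0, b1 = b[i]
    # Censoring process: subjects with steeper (worse) latent slopes drop out earlier.
    n_obs = 1 + rng.binomial(max_visits - 1, expit(0.5 - 1.5 * b1))
    for t in range(n_obs):
        p_abnormal = expit(-1.0 + 0.4 * t + b0 + b1 * t)   # longitudinal binary outcome model
        records.append((i, t, rng.binomial(1, p_abnormal)))

print(len(records), "subject-visit records; first few:", records[:3])
```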


11.
In this paper we introduce a new method that uses measured movement data directly to quantify differences between time series with an underlying limit cycle attractor, and apply it to gait data. Our intention is to identify gait pattern differences between diverse situations and classify them at the group and individual subject levels. First we approximated the limit cycle attractors, from which three measures were calculated: δM, the difference between two attractors (a measure of the difference between two movements); δD, the difference between the two associated deviations of the state vector away from the attractor (a measure of the change in movement variation); and δF, a combination of the previous two, which serves as an index of the overall change. As an application, we quantified these measures for walking on a treadmill under three different conditions: normal walking, dual-task walking, and walking with additional weights at the ankle. The new method successfully differentiated between the three walking conditions. Day-to-day repeatability, studied with repeated trials approximately one week apart, indicated excellent reliability for δM (ICCave > 0.73, with no differences across days; p > 0.05) and good reliability for δD (ICCave = 0.414 to 0.610, with no differences across days; p > 0.05). Based on its ability to detect differences between gait conditions and the good repeatability of the measures across days, the new method is recommended as an alternative to expensive and time-consuming techniques of gait classification and assessment. In particular, it offers an easy-to-use diagnostic tool for quantifying clinical changes in neurological patients.
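A sketch of how such measures might be computed, under the simplifying assumptions that δM is the mean pointwise distance between two phase-averaged limit cycles and δD is the change in mean deviation of individual cycles from their own attractor; the paper's exact definitions may differ, and the gait signals below are synthetic:

```python
# Sketch of attractor-difference measures for time-normalized gait cycles.
# deltaM: mean distance between two phase-averaged limit cycles (assumed definition).
# deltaD: change in mean deviation of cycles from their own attractor (assumed definition).
import numpy as np

def attractor_and_deviation(cycles):
    """cycles: array (n_cycles, n_phase_points, n_dims) of time-normalized gait cycles."""
    attractor = cycles.mean(axis=0)                                   # phase-averaged limit cycle
    deviation = np.linalg.norm(cycles - attractor, axis=2).mean()     # mean distance from attractor
    return attractor, deviation

def delta_measures(cycles_a, cycles_b):
    att_a, dev_a = attractor_and_deviation(cycles_a)
    att_b, dev_b = attractor_and_deviation(cycles_b)
    delta_m = np.linalg.norm(att_a - att_b, axis=1).mean()   # difference between attractors
    delta_d = dev_b - dev_a                                  # change in movement variation
    return delta_m, delta_d

# Synthetic example: condition B is a phase-shifted, noisier version of condition A.
phase = np.linspace(0, 2 * np.pi, 101)
make = lambda shift, noise: np.stack([
    np.column_stack((np.sin(phase + shift), np.cos(phase + shift)))
    + np.random.normal(0, noise, (101, 2))
    for _ in range(30)])
dm, dd = delta_measures(make(0.0, 0.02), make(0.3, 0.05))
print(f"deltaM = {dm:.3f}, deltaD = {dd:.3f}")
```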

12.
Recognizing the imperiled status of biodiversity and its benefit to human well-being, the world's governments committed in 2010 to take effective and urgent action to halt biodiversity loss through the Convention on Biological Diversity's “Aichi Targets”. These targets, and many conservation programs, require monitoring to assess progress toward specific goals. However, comprehensive and easily understood information on biodiversity trends at appropriate spatial scales is often not available to the policy makers, managers, and scientists who require it. We surveyed conservation stakeholders in three geographically diverse regions of critical biodiversity concern (the Tropical Andes, the African Great Lakes, and the Greater Mekong) and found high demand for biodiversity indicator information but uneven availability. To begin to address this need, we present a biodiversity “dashboard” – a visualization of biodiversity indicators designed to enable tracking of biodiversity and conservation performance data in a clear, user-friendly format. This builds on previous, more conceptual, indicator work to create an operationalized online interface communicating multiple indicators at multiple spatial scales. We structured this dashboard around the Pressure-State-Response-Benefit framework, selecting four indicators to measure pressure on biodiversity (deforestation rate), state of species (Red List Index), conservation response (protection of key biodiversity areas), and benefits to human populations (freshwater provision). Disaggregating global data, we present dashboard maps and graphics for the three regions surveyed and their component countries. These visualizations provide charts showing regional and national trends and lay the foundation for a web-enabled, interactive biodiversity indicators dashboard. This new tool can help track progress toward the Aichi Targets, support national monitoring and reporting, and inform outcome-based policy-making for the protection of natural resources.

13.
The present paper deals with a simple alternative procedure for analysing non-orthogonal data. The procedure is general in nature but has particular advantages for data that are non-orthogonal because of missing observations. The procedure is applied to (i) two-way classifications with an unequal number of observations per cell, (ii) randomized block designs with some missing observations, and (iii) balanced incomplete block designs, and is illustrated with numerical examples.

14.

Background

Health inequities in developing countries are difficult to eradicate because of limited resources. The neglect of adult mortality in Sub-Saharan Africa (SSA) is a particular concern. Advances in data availability, software, and analytic methods have created opportunities to address this challenge and tailor interventions to small areas. This study demonstrates how a generic framework can be applied to guide policy interventions to reduce adult mortality in high-risk areas. The framework incorporates the spatial clustering of adult mortality, estimates the impact of a range of determinants, and quantifies the impact of their removal to ensure optimal returns on scarce resources.

Methods

Data from a national cross-sectional survey in 2007 were used to illustrate the use of the generic framework for SSA and elsewhere. Adult mortality proportions were analyzed at four administrative levels and spatial analyses were used to identify areas with significant excess mortality. An ecological approach was then used to assess the relationship between mortality “hotspots” and various determinants. Population attributable fractions were calculated to quantify the reduction in mortality as a result of targeted removal of high-impact determinants.
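The attributable-fraction step can be sketched with Levin's formula, PAF = p(RR − 1) / (1 + p(RR − 1)), applied with local-area exposure prevalences; the prevalences, relative risks, and baseline rate below are placeholders, not study estimates:

```python
# Sketch of the population attributable fraction (PAF) calculation used to quantify
# the mortality reduction from removing a determinant. All values are placeholders.
def paf(prevalence, relative_risk):
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)          # Levin's formula

determinants = {                            # hypothetical local-area prevalence and RR
    "HIV seroprevalence": (0.30, 4.0),
    "low SES":            (0.55, 1.8),
    "not in union":       (0.40, 1.5),
}

baseline_rate = 145.0                       # deaths per 10,000 (overall rate reported above)
for name, (p, rr) in determinants.items():
    frac = paf(p, rr)
    print(f"{name:20s} PAF = {frac:.2f} -> avoidable ~{frac * baseline_rate:.0f} deaths per 10,000")

# Note: PAFs for separate determinants are not additive; a joint estimate needs the
# combined-exposure formula or model-based standardization.
```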

Results

The overall adult mortality rate was 145 per 10,000. Spatial disaggregation identified a highly non-random pattern, and 67 local municipalities with significantly elevated risk were identified. The most prominent determinants of adult mortality included HIV antenatal sero-prevalence, low SES, and lack of formal marital union status. The removal of the factors with the highest attributable fractions, based on local-area prevalence, suggests that overall adult mortality could potentially be reduced by ∼90 deaths per 10,000.

Conclusions

Secondary data and advanced epidemiological techniques can be combined innovatively in a generic framework to identify and map mortality down to the lowest administrative level. The identification of high-risk mortality determinants allows health authorities to tailor interventions at the local level. This approach can be replicated elsewhere.

15.

Background

Evaluating environmental health risks in communities requires models characterizing geographic and demographic patterns of exposure to multiple stressors. These exposure models can be constructed from multivariable regression analyses using individual-level predictors (microdata), but these microdata are not typically available with sufficient geographic resolution for community risk analyses given privacy concerns.

Methods

We developed synthetic geographically-resolved microdata for a low-income community (New Bedford, Massachusetts) facing multiple environmental stressors. We first applied probabilistic reweighting using simulated annealing to data from the 2006–2010 American Community Survey, combining 9,135 microdata samples from the New Bedford area with census tract-level constraints for individual and household characteristics. We then evaluated the synthetic microdata using goodness-of-fit tests and by examining spatial patterns of microdata fields not used as constraints. As a demonstration, we developed a multivariable regression model predicting smoking behavior as a function of individual-level microdata fields using New Bedford-specific data from the 2006–2010 Behavioral Risk Factor Surveillance System, linking this model with the synthetic microdata to predict demographic and geographic smoking patterns in New Bedford.
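A toy version of the reweighting step: simulated annealing selects survey records (with replacement) for a single tract so that the synthetic population matches tract-level marginal constraints. Only two constraints and a tiny record pool are used here, and none of the numbers come from the study:

```python
# Toy sketch of spatial microsimulation by simulated annealing for one census tract.
import math
import random
random.seed(0)

# Hypothetical survey microdata records: (age group, income group).
sample = [("18-39", "low"), ("18-39", "high"), ("40-64", "low"),
          ("40-64", "high"), ("65+", "low"), ("65+", "high")]

# Hypothetical tract-level marginal constraints (person counts).
constraints = {"age": {"18-39": 50, "40-64": 30, "65+": 20},
               "income": {"low": 60, "high": 40}}
tract_pop = 100

def error(pop):
    """Total absolute deviation of the synthetic population from all constraints."""
    err = 0
    for key, idx in (("age", 0), ("income", 1)):
        for cat, target in constraints[key].items():
            err += abs(sum(1 for r in pop if r[idx] == cat) - target)
    return err

pop = [random.choice(sample) for _ in range(tract_pop)]
temp = 10.0
for _ in range(5000):
    candidate = list(pop)
    candidate[random.randrange(tract_pop)] = random.choice(sample)   # replace one record
    delta = error(candidate) - error(pop)
    if delta <= 0 or random.random() < math.exp(-delta / temp):      # annealing acceptance rule
        pop = candidate
    temp = max(0.01, temp * 0.998)                                   # cooling schedule

print("final constraint error:", error(pop))
```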

Results

Our simulation produced microdata representing all 94,944 individuals living in New Bedford in 2006–2010. Variables in the synthetic population matched the constraints well at the census tract level (e.g., ancestry, gender, age, education, household income) and reproduced the census-derived spatial patterns of non-constraint microdata. Smoking in New Bedford was significantly associated with numerous demographic variables found in the microdata, with estimated tract-level smoking rates varying from 20% (95% CI: 17%, 22%) to 37% (95% CI: 30%, 45%).

Conclusions

We used simulation methods to create geographically-resolved individual-level microdata that can be used in community-wide exposure and risk assessment studies. This approach provides insights regarding community-scale exposure and vulnerability patterns, valuable in settings where policy can be informed by characterization of multi-stressor exposures and health risks at high resolution.

16.
Persistent organic pollutants (POPs) are typically monitored via targeted mass spectrometry, which potentially identifies only a fraction of the contaminants actually present in environmental samples. With new anthropogenic compounds continuously introduced to the environment, novel and proactive approaches that provide a comprehensive alternative to targeted methods are needed in order to more completely characterize the diversity of known and unknown compounds likely to cause adverse effects. Nontargeted mass spectrometry attempts to extensively screen for compounds, providing a feasible approach for identifying contaminants that warrant future monitoring. We employed a nontargeted analytical method using comprehensive two-dimensional gas chromatography coupled to time-of-flight mass spectrometry (GC×GC/TOF-MS) to characterize halogenated organic compounds (HOCs) in California Black skimmer (Rynchops niger) eggs. Our study identified 111 HOCs; 84 of these compounds were regularly detected via targeted approaches, while 27 were classified as typically unmonitored or unknown. Typically unmonitored compounds of note in bird eggs included tris(4-chlorophenyl)methane (TCPM), tris(4-chlorophenyl)methanol (TCPMOH), triclosan, permethrin, heptachloro-1'-methyl-1,2'-bipyrrole (MBP), as well as four halogenated unknown compounds that could not be identified through database searching or the literature. The presence of these compounds in Black skimmer eggs suggests they are persistent, bioaccumulative, potentially biomagnifying, and maternally transferring. Our results highlight the utility and importance of employing nontargeted analytical tools to assess true contaminant burdens in organisms, as well as to demonstrate the value in using environmental sentinels to proactively identify novel contaminants.

17.

Background

Electronic medical records (EMR) form a rich repository of information that could benefit public health. We asked how structured and free-text narrative EMR data should be combined to improve epidemic surveillance for acute respiratory infections (ARI).

Methods

Eight previously characterized ARI case detection algorithms (CDA) were applied to historical EMR entries to create authentic time series of daily ARI case counts (background). An epidemic model simulated influenza cases (injection). From the time of the injection, cluster-detection statistics were applied daily on paired background+injection (combined) and background-only time series. This cycle was then repeated with the injection shifted to each week of the evaluation year. We computed: a) the time from injection to the first statistical alarm uniquely found in the combined dataset (Detection Delay); b) how often alarms originated in the background-only dataset (false-alarm rate, or FAR); and c) the number of cases found within these false alarms (Caseload). For each CDA, we plotted the Detection Delay as a function of FAR or Caseload, over a broad range of alarm thresholds.
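The evaluation loop can be sketched with synthetic counts and a simple moving-baseline threshold alarm standing in for the study's cluster-detection statistics; the point is only to show how Detection Delay and FAR are read off the paired background and background+injection series:

```python
# Sketch of the Detection Delay / FAR evaluation on paired time series.
# A moving-baseline threshold alarm and synthetic counts stand in for the study's
# actual cluster-detection statistics and EMR-derived backgrounds.
import numpy as np

rng = np.random.default_rng(2)
background = rng.poisson(20, 365).astype(float)            # daily ARI counts from one CDA
injection = np.zeros(365)
start = 180
injection[start:start + 14] = np.linspace(2, 30, 14)       # simulated influenza outbreak
combined = background + injection

def alarms(series, window=28, z=3.0):
    """Days on which the count exceeds mean + z*sd of the preceding window."""
    out = []
    for t in range(window, len(series)):
        base = series[t - window:t]
        if series[t] > base.mean() + z * base.std(ddof=1):
            out.append(t)
    return out

bg_alarms = set(alarms(background))                        # alarms present without the injection
combined_alarms = [t for t in alarms(combined) if t >= start and t not in bg_alarms]

detection_delay = (combined_alarms[0] - start) if combined_alarms else None
far = len(bg_alarms) / len(background)                      # background-only alarms per day
print(f"Detection Delay = {detection_delay} days, FAR = {far:.3f} alarms/day")
```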

Results

CDAs that combined text analyses seeking ARI symptoms in clinical notes with provider-assigned diagnostic codes, in order to maximize the precision rather than the sensitivity of case detection, lowered the Detection Delay at any given FAR or Caseload.

Conclusion

An empirical approach can guide the integration of EMR data into case-detection methods that improve both the timeliness and efficiency of epidemic detection.

18.
19.

Background

Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data.

Methodology/Principal Findings

The null hypothesis that two observed mapped variables are the result of independent pattern generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for alternative methods.
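The Monte Carlo test itself can be sketched compactly. For brevity the surrogates below are Fourier amplitude-matched (one of the alternative surrogate generators mentioned in this abstract) rather than dual-tree complex wavelet surrogates, and the two maps are synthetic autocorrelated fields:

```python
# Sketch of a surrogate-data Monte Carlo test of association between two maps:
# compare the observed correlation with correlations against surrogates of one map
# that preserve its autocorrelation (here via Fourier amplitude matching).
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(3)

def smooth_field(n=64, sigma=4):
    """Synthetic autocorrelated map: smoothed white noise."""
    return gaussian_filter(rng.normal(size=(n, n)), sigma)

def surrogate(img):
    """Random-phase field with the amplitude spectrum (hence autocorrelation) of img."""
    amp = np.abs(np.fft.fft2(img))
    phase = np.angle(np.fft.fft2(rng.normal(size=img.shape)))   # Hermitian-symmetric phases
    return np.real(np.fft.ifft2(amp * np.exp(1j * phase)))

def corr(a, b):
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

A, B = smooth_field(), smooth_field()       # two independently generated autocorrelated maps
r_obs = corr(A, B)
null = np.array([corr(A, surrogate(B)) for _ in range(199)])
p = (np.sum(np.abs(null) >= abs(r_obs)) + 1) / (len(null) + 1)
print(f"observed r = {r_obs:.3f}, Monte Carlo p = {p:.3f}")
```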

Conclusions/Significance

The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material.

20.
Cryopreservation is an efficient way to store spermatozoa and plays a critical role in the livestock industry as well as in clinical practice. During cryopreservation, cryo-stress causes substantial damage to spermatozoa. In the present study, the effects of cryo-stress at various cryopreservation steps, such as dilution/cooling, addition of cryoprotectant, and freezing, were studied in spermatozoa collected from nine individual bull testes. The motility (%), motion kinematics, capacitation status, mitochondrial activity, and viability of bovine spermatozoa at each step of the cryopreservation process were assessed using computer-assisted sperm analysis, Hoechst 33258/chlortetracycline fluorescence, rhodamine 123 staining, and the hypo-osmotic swelling test. The results demonstrate that the cryopreservation steps reduced motility (%), rapid speed (%), and mitochondrial activity, whereas medium/slow speed (%) and the acrosome reaction increased (P < 0.05). Differences (Δ) in the acrosome reaction were greater at the dilution/cooling step (P < 0.05), whereas differences (Δ) in motility, rapid speed, and non-progressive motility were greater at the cryoprotectant and freezing steps compared with dilution/cooling (P < 0.05). On the other hand, differences (Δ) in mitochondrial activity, viability, and progressive motility were greater at the freezing step (P < 0.05). Based on these results, we propose that the freezing/thawing steps are the most critical in cryopreservation and may provide a logical basis for understanding cryo-damage. Moreover, these sperm parameters might be used as physical markers of sperm cryo-damage.
