Similar Articles
20 similar articles found (search time: 0 ms)
1.
2.
3.
Bacterial type IV pili are essential for adhesion to surfaces, motility, microcolony formation, and horizontal gene transfer in many bacterial species. These polymers are strong molecular motors that can retract at two different speeds. In the human pathogen Neisseria gonorrhoeae, speed switching of single pili from 2 µm/s to 1 µm/s can be triggered by oxygen depletion. Here, we address the question of how proton motive force (PMF) influences motor speed. Using pHluorin expression in combination with dyes that are sensitive to the transmembrane ΔpH gradient or the transmembrane potential ΔΨ, we measured both components of the PMF at varying external pH. Depletion of the PMF using uncouplers reversibly triggered switching into the low-speed mode. Reduction of the PMF by ≈35 mV was enough to trigger speed switching. Reducing ATP levels by inhibition of the ATP synthase did not induce speed switching. Furthermore, we showed that the strictly aerobic Myxococcus xanthus failed to move upon depletion of PMF or oxygen, indicating that although the mechanical properties of the motor are conserved, its regulatory inputs have evolved differently. We conclude that depletion of PMF triggers speed switching of gonococcal pili. Although ATP is required for gonococcal pilus retraction, our data indicate that PMF is an independent additional energy source driving the high-speed mode.

4.
With the continuing growth of China's vehicle fleet, motor vehicle exhaust emissions have become one of the major factors affecting air quality. Fuel ethanol is a green, environmentally friendly, and renewable resource that promotes combustion and reduces emission pollution. This article reviews the importance and necessity of developing the cellulosic ethanol industry from the perspectives of national energy security, food security, farmers' income growth, and environmental pollution, and, in light of the industry's current state of development, offers recommendations on cellulosic ethanol industry policy.

5.
Multivariate data analysis (MVDA) is a highly valuable and significantly underutilized resource in biomanufacturing. It offers the opportunity to enhance understanding and leverage useful information from complex high‐dimensional data sets, recorded throughout all stages of therapeutic drug manufacture. To help standardize the application and promote this resource within the biopharmaceutical industry, this paper outlines a novel MVDA methodology describing the necessary steps for efficient and effective data analysis. The MVDA methodology is followed to solve two case studies: a “small data” and a “big data” challenge. In the “small data” example, a large‐scale data set is compared to data from a scale‐down model. This methodology enables a new quantitative metric for equivalence to be established by combining a two one‐sided test with principal component analysis. In the “big data” example, this methodology enables accurate predictions of critical missing data essential to a cloning study performed in the ambr15 system. These predictions are generated by exploiting the underlying relationship between the off‐line missing values and the on‐line measurements through the generation of a partial least squares model. In summary, the proposed MVDA methodology highlights the importance of data pre‐processing, restructuring, and visualization during data analytics to solve complex biopharmaceutical challenges.
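The "small data" equivalence idea — combining a two one-sided test (TOST) with principal component analysis — can be sketched at its simplest on a single summary score. The sketch below is a hedged stand-in, not the paper's implementation: it applies a plain TOST with a normal approximation to two hypothetical score vectors (standing in for principal-component scores from the large-scale and scale-down runs); the data, the ±1.0 equivalence margin, and the function name are all illustrative.

```python
import math
import statistics as stats

def tost_equivalence(a, b, delta):
    """Two one-sided tests (TOST) for mean equivalence within +/- delta.

    Returns True when both one-sided tests reject at the 5% level,
    i.e. the mean difference is confidently inside (-delta, +delta).
    Uses a normal approximation to the t reference distribution to
    stay dependency-free; adequate for this toy-sized sketch.
    """
    na, nb = len(a), len(b)
    diff = stats.mean(a) - stats.mean(b)
    se = math.sqrt(stats.variance(a) / na + stats.variance(b) / nb)
    z_crit = 1.645  # one-sided 5% critical value (normal approximation)
    t_lower = (diff + delta) / se  # tests H0: true diff <= -delta
    t_upper = (diff - delta) / se  # tests H0: true diff >= +delta
    return t_lower > z_crit and t_upper < -z_crit

# Hypothetical PC scores: large-scale run vs. scale-down model
large = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 9.7]
small = [10.0, 10.2, 9.9, 10.1, 9.8, 10.0, 10.3, 9.9]
print(tost_equivalence(large, small, delta=1.0))
```

In the full methodology the same test would be applied to scores on each retained principal component rather than to a single raw variable.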

6.
Michael Friendly 《Biometrics》2011,67(3):1177-1177
Book reviews: Graphics for Statistics and Data Analysis with R (K. J. Keen), reviewed by Michael Friendly; Hidden Markov Models for Time Series: An Introduction Using R (W. Zucchini and I. L. MacDonald), reviewed by Peter Guttorp; Bayesian Adaptive Methods for Clinical Trials (S. M. Berry, B. P. Carlin, J. J. Lee, and P. Muller), reviewed by Say Beng Tan; SAS and R: Data Management, Statistical Analysis, and Graphics (K. Kleinman and N. Horton), reviewed by Juan P. Steibel; Numerical Analysis for Statisticians, 2nd edition (K. Lange), reviewed by Maria Rizzo; Statistical Methods for Disease Clustering (T. Tango), reviewed by Lance A. Waller. Brief reports by the Editor: Survival Analysis Using SAS: A Practical Guide (P. D. Allison); Design and Analysis of Quality of Life Studies in Clinical Trials (D. L. Fairclough); Nonparametric Statistical Inference (J. D. Gibbons and S. Chakraborti); Stochastic Processes: An Introduction (P. W. Jones and P. Smith).

7.
Cook AJ  Li Y 《Biometrics》2008,64(4):1289-1292
Summary. This short note evaluates the assumptions required for a permutation test to approximate the null distribution of the spatial scan statistic for censored outcomes proposed in Cook et al. (2007). In particular, we study the exchangeability conditions required for such a test under survival models. A simulation study is further performed to assess the impact on the type I error when the global exchangeability assumption is violated and to determine whether the permutation test still well approximates the null distribution.
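The permutation mechanics under discussion can be illustrated with a toy maximum statistic over candidate spatial clusters. This is a hedged sketch of the general idea only — the statistic, regions, and data below are invented, and the actual scan statistic for censored outcomes is more involved. Under the (strong) exchangeability assumption the note examines, relabelling outcomes across regions approximates the null distribution of the maximum.

```python
import random

random.seed(1)

def max_cluster_stat(outcomes, clusters):
    """Maximum excess event rate over candidate clusters (toy statistic)."""
    overall = sum(outcomes) / len(outcomes)
    return max(sum(outcomes[i] for i in c) / len(c) - overall for c in clusters)

def permutation_pvalue(outcomes, clusters, n_perm=999):
    """Monte Carlo permutation p-value: shuffle outcome labels across
    regions, recompute the max statistic, and count exceedances."""
    observed = max_cluster_stat(outcomes, clusters)
    shuffled = list(outcomes)
    exceed = 0
    for _ in range(n_perm):
        random.shuffle(shuffled)
        if max_cluster_stat(shuffled, clusters) >= observed:
            exceed += 1
    return (exceed + 1) / (n_perm + 1)

# Twelve regions; the first four form an elevated-rate cluster.
outcomes = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
clusters = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11]]
p_value = permutation_pvalue(outcomes, clusters)
```

The note's point is precisely that this relabelling step is only valid when censoring and covariates are independent of the potential clusters.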

8.
Blood-borne lymphocytes migrate continuously to peripheral lymph nodes (PLN) and other organized lymphoid tissues where they are most likely to encounter their cognate antigen. Lymphocyte homing to PLN is a highly regulated process that occurs exclusively in specialized high endothelial venules (HEV) in the nodal paracortex. Recently, it has become possible to explore this vital aspect of peripheral immune surveillance by intravital microscopy of the subiliac lymph node microcirculation in anesthetized mice. This paper reviews technical and experimental aspects of the new model and summarizes recent advances in our understanding of the molecular mechanisms of lymphocyte homing to PLN which were derived from its use. Both lymphocytes and granulocytes initiate rolling interactions via L-selectin binding to the peripheral node addressin (PNAd) in PLN HEV. Subsequently, a G protein-coupled chemoattractant stimulus activates LFA-1 on rolling lymphocytes, but not on granulocytes. Thus, granulocytes continue to roll through the PLN, whereas LFA-1 activation allows lymphocytes to arrest and emigrate into the extravascular compartment. We have also identified a second homing pathway that allows L-selectin-low (activated/memory) lymphocytes to home to PLN. P-selectin on circulating activated platelets can mediate simultaneous platelet adhesion to PNAd in HEV and to P-selectin glycoprotein ligand (PSGL)-1 on lymphocytes. Through this mechanism, platelets can form a cellular bridge which can effectively substitute for the loss of L-selectin on memory cell subsets.

9.

Background

Atheoretical large-scale data mining techniques using machine learning algorithms have promise in the analysis of large epidemiological datasets. This study illustrates the use of a hybrid methodology for variable selection that took account of missing data and complex survey design to identify key biomarkers associated with depression from a large epidemiological study.

Methods

The study used a three-step methodology amalgamating multiple imputation, a machine learning boosted regression algorithm and logistic regression, to identify key biomarkers associated with depression in the National Health and Nutrition Examination Survey (NHANES, 2009–2010). Depression was measured using the Patient Health Questionnaire-9 and 67 biomarkers were analysed. Covariates in this study included gender, age, race, smoking, food security, Poverty Income Ratio, Body Mass Index, physical activity, alcohol use, medical conditions and medications. The final imputed weighted multiple logistic regression model included possible confounders and moderators.

Results

After the creation of 20 imputation data sets from multiple chained regression sequences, machine learning boosted regression initially identified 21 biomarkers associated with depression. Using traditional logistic regression methods, including controlling for possible confounders and moderators, a final set of three biomarkers was selected. The final three biomarkers from the novel hybrid variable selection methodology were red cell distribution width (OR 1.15; 95% CI 1.01, 1.30), serum glucose (OR 1.01; 95% CI 1.00, 1.01) and total bilirubin (OR 0.12; 95% CI 0.05, 0.28). Significant interactions were found between total bilirubin and the Mexican American/Hispanic group (p = 0.016), and between total bilirubin and current smokers (p < 0.001).

Conclusion

The systematic use of a hybrid methodology for variable selection, fusing data mining techniques using a machine learning algorithm with traditional statistical modelling, accounted for missing data and complex survey sampling methodology and was demonstrated to be a useful tool for detecting three biomarkers associated with depression for future hypothesis generation: red cell distribution width, serum glucose and total bilirubin.
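The shape of the hybrid pipeline — impute missing values, screen many biomarkers with a machine-learning step, then keep a short list for conventional modelling — can be sketched at toy scale. This is a hedged illustration only: mean imputation stands in for multiple imputation, a crude univariate separation score stands in for boosted regression, and every biomarker name and value below is invented, not NHANES data.

```python
import statistics as stats

def mean_impute(column):
    """Replace missing (None) values with the observed mean."""
    observed = [v for v in column if v is not None]
    mu = sum(observed) / len(observed)
    return [mu if v is None else v for v in column]

def separation_score(x, y):
    """|difference in group means| divided by the summed group spreads."""
    x1 = [v for v, t in zip(x, y) if t == 1]
    x0 = [v for v, t in zip(x, y) if t == 0]
    return abs(stats.mean(x1) - stats.mean(x0)) / (stats.stdev(x1) + stats.stdev(x0))

depressed = [0, 0, 0, 0, 1, 1, 1, 1]  # toy outcome labels
biomarkers = {  # invented values; None marks a missing measurement
    "rdw":     [11.5, 12.0, None, 11.8, 14.2, 14.8, 14.5, None],
    "glucose": [90.0, 95.0, 92.0, None, 130.0, 125.0, 140.0, 128.0],
    "noise":   [5.0, 5.1, 4.9, 5.2, 5.0, None, 5.1, 4.9],
}

imputed = {name: mean_impute(col) for name, col in biomarkers.items()}
selected = [name for name, col in imputed.items()
            if separation_score(col, depressed) > 1.0]
```

In the real study the surviving biomarkers would then enter a weighted logistic regression with confounders and moderators; here the screen simply keeps the two informative toy markers and drops the noise column.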

10.

Background

Over recent years there has been a strong movement towards the improvement of vital statistics and other types of health data that inform evidence-based policies. Collecting such data is not cost-free. To date, there is no systematic framework to guide investment decisions on methods of data collection for vital statistics or health information in general. We developed a framework to systematically assess the comparative costs and outcomes/benefits of the various data collection methods for vital statistics.

Methodology

The proposed framework is four-pronged and utilises two major economic approaches to systematically assess the available data collection methods: cost-effectiveness analysis and efficiency analysis. We built a stylised example of a hypothetical low-income country and performed a simulation exercise to illustrate an application of the framework.

Findings

Using simulated data, the results from the stylised example show that the rankings of the data collection methods are not affected by the use of either cost-effectiveness or efficiency analysis. However, the rankings are affected by how quantities are measured.

Conclusion

There have been several calls for global improvements in collecting useable data, including vital statistics, from health information systems to inform public health policies. Ours is the first study that proposes a systematic framework to assist countries in undertaking an economic evaluation of data collection methods (DCMs). Despite numerous challenges, we demonstrate that a systematic assessment of the outputs and costs of DCMs is not only necessary, but also feasible. The proposed framework is general enough to be easily extended to other areas of health information.
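The core cost-effectiveness comparison the framework formalizes can be sketched very simply: compute cost per unit of outcome for each candidate method and rank. This is a hedged toy only — the method names, costs, and outcome units below are invented, and the paper's framework additionally covers efficiency analysis and sensitivity to how quantities are measured.

```python
# Hypothetical data collection methods (DCMs) with illustrative annual
# cost (in arbitrary currency units) and outcome (e.g. usable records
# produced, in arbitrary units). All figures are invented.
dcms = {
    "civil_registration":  {"cost": 120.0, "outcome": 95.0},
    "household_survey":    {"cost": 60.0,  "outcome": 40.0},
    "sample_registration": {"cost": 30.0,  "outcome": 25.0},
}

# Rank by cost-effectiveness ratio: lower cost per unit of outcome is better.
ranked = sorted(dcms, key=lambda m: dcms[m]["cost"] / dcms[m]["outcome"])
```

As the Findings note, the resulting ranking is stable across the two economic approaches in the simulation but sensitive to how the outcome quantity is measured, which a real application would probe with alternative outcome definitions.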

11.
12.
13.
Summary Cook, Gold, and Li (2007, Biometrics 63, 540–549) extended the Kulldorff (1997, Communications in Statistics 26, 1481–1496) scan statistic for spatial cluster detection to survival‐type observations. Their approach was based on the score statistic and they proposed a permutation distribution for the maximum of score tests. The score statistic makes it possible to apply the scan statistic idea to models including explanatory variables. However, we show that the permutation distribution requires strong assumptions of independence between potential cluster and both censoring and explanatory variables. In contrast, we present an approach using the asymptotic distribution of the maximum of score statistics in a manner not requiring these assumptions.

14.
15.
Complex networks underlie an enormous variety of social, biological, physical, and virtual systems. A profound complication for the science of complex networks is that in most cases, observing all nodes and all network interactions is impossible. Previous work addressing the impacts of partial network data is surprisingly limited, focuses primarily on missing nodes, and suggests that network statistics derived from subsampled data are not suitable estimators for the same network statistics describing the overall network topology. We generate scaling methods to predict true network statistics, including the degree distribution, from only partial knowledge of nodes, links, or weights. Our methods are transparent and do not assume a known generating process for the network, thus enabling prediction of network statistics for a wide variety of applications. We validate analytical results on four simulated network classes and empirical data sets of various sizes. We perform subsampling experiments by varying proportions of sampled data and demonstrate that our scaling methods can provide very good estimates of true network statistics while acknowledging limits. Lastly, we apply our techniques to a set of rich and evolving large-scale social networks, Twitter reply networks. Based on 100 million tweets, we use our scaling techniques to propose a statistical characterization of the Twitter Interactome from September 2008 to November 2008. Our treatment allows us to find support for Dunbar's hypothesis in detecting an upper threshold for the number of active social contacts that individuals maintain over the course of one week.
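The simplest instance of this kind of scaling correction is mean degree under uniform link subsampling: if each link is observed independently with probability p, the observed mean degree is roughly p times the true mean degree, so dividing by p recovers an estimate. The sketch below is a hedged toy illustration of that one idea on a random graph, not the paper's full suite of methods (which also cover the degree distribution, node sampling, and weights).

```python
import random

random.seed(2)

def mean_degree(n_nodes, edge_list):
    """Mean degree of an undirected graph: each edge contributes 2."""
    return 2 * len(edge_list) / n_nodes

# Build a toy Erdos-Renyi-style graph on n nodes.
n = 200
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if random.random() < 0.05]
true_md = mean_degree(n, edges)

# Observe each link independently with probability p, then scale the
# observed mean degree back up by 1/p.
p = 0.3
observed = [e for e in edges if random.random() < p]
estimate = mean_degree(n, observed) / p
```

With enough links the scaled estimate lands close to the true mean degree; the paper's contribution is extending this style of correction to richer statistics while quantifying when it breaks down.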

16.
One goal of cluster analysis is to sort characteristics into groups (clusters) so that those in the same group are more highly correlated to each other than they are to those in other groups. An example is the search for groups of genes whose expression of RNA is correlated in a population of patients. These genes would be of greater interest if their common level of RNA expression were additionally predictive of the clinical outcome. This issue arose in the context of a study of trauma patients on whom RNA samples were available. The question of interest was whether there were groups of genes that were behaving similarly, and whether each gene in the cluster would have a similar effect on who would recover. For this, we develop an algorithm to simultaneously assign characteristics (genes) into groups of highly correlated genes that have the same effect on the outcome (recovery). We propose a random effects model where the genes within each group (cluster) equal the sum of a random effect, specific to the observation and cluster, and an independent error term. The outcome variable is a linear combination of the random effects of each cluster. To fit the model, we implement a Markov chain Monte Carlo algorithm based on the likelihood of the observed data. We evaluate the effect of including outcome in the model through simulation studies and describe a strategy for prediction. These methods are applied to trauma data from the Inflammation and Host Response to Injury research program, revealing a clustering of the genes that are informed by the recovery outcome.

17.
18.
19.
Over the last few decades, the nature of life sciences research has changed enormously, generating a need for a workforce with a variety of computational skills such as those required to store, manage, and analyse the large biological datasets produced by next-generation sequencing. Those with such expertise are increasingly in demand for employment in both research and industry. Despite this, bioinformatics education has failed to keep pace with advances in research. At secondary school level, computing is often taught in isolation from other sciences, and its importance in biological research is not fully realised, leaving pupils unprepared for the computational component of Higher Education and, subsequently, research in the life sciences. The 4273pi Bioinformatics at School project (https://4273pi.org) aims to address this issue by designing and delivering curriculum-linked, hands-on bioinformatics workshops for secondary school biology pupils, with an emphasis on equitable access. So far, we have reached over 180 schools across Scotland through visits or teacher events, and our open education resources are used internationally. Here, we describe our project, our aims and motivations, and the practical lessons we have learned from implementing a successful bioinformatics education project over the last 5 years.

20.
Clustering analysis is an important tool in studying gene expression data. The Bayesian hierarchical clustering (BHC) algorithm can automatically infer the number of clusters and uses Bayesian model selection to improve clustering quality. In this paper, we present an extension of the BHC algorithm. Our Gaussian BHC (GBHC) algorithm represents data as a mixture of Gaussian distributions. It uses normal-gamma distribution as a conjugate prior on the mean and precision of each of the Gaussian components. We tested GBHC over 11 cancer and 3 synthetic datasets. The results on cancer datasets show that in sample clustering, GBHC on average produces a clustering partition that is more concordant with the ground truth than those obtained from other commonly used algorithms. Furthermore, GBHC frequently infers the number of clusters that is often close to the ground truth. In gene clustering, GBHC also produces a clustering partition that is more biologically plausible than several other state-of-the-art methods. This suggests GBHC as an alternative tool for studying gene expression data. The implementation of GBHC is available at https://sites.google.com/site/gaussianbhc/
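The normal-gamma prior at the heart of a GBHC-style model has a closed-form conjugate update, which is what makes the marginal likelihoods of candidate merges cheap to evaluate. The sketch below shows just that standard conjugacy step — the posterior NormalGamma(mu_n, kappa_n, alpha_n, beta_n) after observing data under a NormalGamma(mu0, kappa0, alpha0, beta0) prior on a Gaussian component's mean and precision. The hyperparameter defaults are illustrative, not taken from the paper.

```python
def normal_gamma_update(data, mu0=0.0, kappa0=1.0, alpha0=1.0, beta0=1.0):
    """Conjugate posterior update for a Gaussian with unknown mean and
    precision under a normal-gamma prior (standard closed-form result)."""
    n = len(data)
    xbar = sum(data) / n
    ss = sum((x - xbar) ** 2 for x in data)  # sum of squared deviations
    kappa_n = kappa0 + n
    mu_n = (kappa0 * mu0 + n * xbar) / kappa_n
    alpha_n = alpha0 + n / 2
    beta_n = beta0 + 0.5 * ss + (kappa0 * n * (xbar - mu0) ** 2) / (2 * kappa_n)
    return mu_n, kappa_n, alpha_n, beta_n

# Posterior after four identical observations at 2.0 under the defaults.
mu_n, kappa_n, alpha_n, beta_n = normal_gamma_update([2.0, 2.0, 2.0, 2.0])
```

In the full algorithm this update feeds the Bayesian model-selection step that decides whether two subtrees should be merged into one cluster.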
