Found 20 similar documents (search time: 0 ms)
1.
Pierre Legendre Olivier Gauthier 《Proceedings. Biological sciences / The Royal Society》2014,281(1778)
This review focuses on the analysis of temporal beta diversity, which is the variation in community composition over time in a study area. Temporal beta diversity is measured by the variance of the multivariate community composition time series, and that variance can be partitioned using appropriate statistical methods. Some of these methods are classical, such as simple or canonical ordination, whereas others are recent, including the methods of temporal eigenfunction analysis developed for multiscale exploration (i.e. addressing several scales of variation) of univariate or multivariate response data, reviewed here, to our knowledge, for the first time. These methods are illustrated with ecological data from 13 years of benthic surveys in Chesapeake Bay, USA. The following methods are applied to the Chesapeake data: distance-based Moran's eigenvector maps, asymmetric eigenvector maps, scalogram, variation partitioning, multivariate correlogram, multivariate regression tree, and two-way MANOVA to study temporal and space–time variability. Local (temporal) contributions to beta diversity (LCBD indices) are computed and analysed graphically and by regression against environmental variables, and the role of species in determining the LCBD values is analysed by correlation analysis. A tutorial detailing the analyses in the R language is provided in an appendix.
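The LCBD computation mentioned above follows a known variance decomposition (Legendre & De Cáceres 2013): transform the community matrix, take squared deviations from the species means, and assign each time point its share of the total sum of squares. The paper's own tutorial is in R; the NumPy sketch below is an illustrative translation only, with a made-up abundance matrix.

```python
# Minimal sketch of LCBD indices from a time x species abundance matrix.
import numpy as np

def lcbd(Y):
    """Local contributions to beta diversity for an abundance matrix Y
    (rows = time points, columns = species)."""
    # Hellinger transformation: square root of relative abundances per row.
    H = np.sqrt(Y / Y.sum(axis=1, keepdims=True))
    # Squared deviations from the species (column) means.
    S = (H - H.mean(axis=0)) ** 2
    ss_total = S.sum()                # total beta diversity (variance of Y)
    return S.sum(axis=1) / ss_total   # one LCBD value per time point

Y = np.array([[10, 0, 3], [8, 2, 1], [0, 5, 9]], dtype=float)  # hypothetical
print(lcbd(Y))  # values sum to 1; large values flag unusual time points
```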
2.
《Computer programs in biomedicine》1981,13(3-4):217-224
An interactive computer system for the storage, retrieval and analysis of standardized clinical and material characterization data associated with orthopaedic implants is described. The system consists of four independent modules and centers on its cross-referencing capabilities, which are virtually unlimited. It has been designed for use by non-computer-trained personnel.
3.
4.
We have developed a data management system, 'HOSEpipe' (High Output STS Evaluation pipeline), to aid sample tracking and data analysis in STS content mapping projects. The system is based around a World Wide Web (WWW) server that provides a number of pages, including forms for sample processing and data entry, accessible via a standard WWW browser application. The system is split into two main modules: first, a sequence evaluation and annotation module that takes de novo sequence for a potential STS, screens it against existing STSs and DNA sequence databases, and then designs appropriate primer sequences; second, a module that handles YAC library STS screening and includes facilities for both sample tracking and experimental data analysis. We present the design and rationale of the HOSEpipe system and its development to support a whole-chromosome physical mapping project. This software and design approach is potentially applicable to physical mapping projects of varying sizes and resolution, and to similar projects such as sample sequencing and the construction of sequence-ready maps.
Received: 18 November 1996 / Accepted: 19 March 1997
5.
6.
7.
Background
Array CGH (Comparative Genomic Hybridisation) is a molecular cytogenetic technique for the genome-wide detection of chromosomal imbalances. It is based on the co-hybridisation of differentially labelled test and reference DNA onto arrays of genomic BAC clones, cDNAs or oligonucleotides; after correction for various intervening variables, loss or gain in the test DNA is indicated by spots showing aberrant signal intensity ratios.
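As a rough illustration of the aberrant-ratio test described in this abstract, the sketch below computes per-spot log2(test/reference) intensity ratios and flags gains and losses against a symmetric threshold. The intensities and the ±0.3 cut-off are hypothetical; real array CGH pipelines add normalisation and segmentation steps.

```python
# Minimal sketch: call copy-number gain/loss from two-channel spot intensities.
import numpy as np

def call_aberrations(test, ref, threshold=0.3):
    """Return per-spot log2 ratios and calls: +1 gain, -1 loss, 0 balanced."""
    log_ratio = np.log2(np.asarray(test, float) / np.asarray(ref, float))
    calls = np.zeros_like(log_ratio, dtype=int)
    calls[log_ratio > threshold] = 1    # copy-number gain in the test DNA
    calls[log_ratio < -threshold] = -1  # copy-number loss
    return log_ratio, calls

ratios, calls = call_aberrations([980, 410, 1550], [1000, 1020, 990])
print(ratios.round(2), calls)  # [-0.03 -1.31  0.65] [ 0 -1  1]
```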
8.
9.
10.
C. R. Legéndy 《Biological cybernetics》1975,17(3):157-163
Certain experiments on the detection of low-contrast gratings, occasionally cited as evidence of Fourier analysis within the visual system, are interpreted without the assumption of Fourier analysis. Theoretical curves are obtained and compared with the published experimental points, showing mostly satisfactory agreement. The computations utilize Gaussian receptive fields (on-center and off-center) for the retinal ganglion cells, spatial summation, center-surround antagonism, quasilinear response at low contrasts (X-cells), and the assumption that the first significant convergence is primarily between cells of like response type and like receptive field geometry.
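A minimal sketch of the model ingredients listed above: an on-center ganglion cell modelled as a difference of Gaussians (center minus antagonistic surround), whose quasi-linear response to a sinusoidal grating can be evaluated directly in the frequency domain. All parameter values are illustrative assumptions, not the paper's fitted values.

```python
# Minimal sketch: band-pass grating response of a difference-of-Gaussians cell.
import numpy as np

def dog_response(spatial_freq, sigma_c=0.1, sigma_s=0.3, k_s=0.8):
    """Linear response of a difference-of-Gaussians receptive field to a
    sinusoidal grating (cycles/deg). The Fourier transform of a Gaussian
    is Gaussian, so sensitivity is a difference of Gaussians in frequency."""
    w = 2 * np.pi * spatial_freq
    center = np.exp(-0.5 * (w * sigma_c) ** 2)
    surround = k_s * np.exp(-0.5 * (w * sigma_s) ** 2)  # antagonistic surround
    return center - surround

for f in (0.5, 1, 2, 4, 8):
    print(f, round(dog_response(f), 3))  # peaks at mid frequencies (band-pass)
```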
11.
Background
DNA microarrays open up a new horizon for studying the genetic determinants of disease. The high-throughput nature of these arrays creates an enormous wealth of information, but also poses a challenge to data analysis. Inferential problems become even more pronounced as experimental designs used to collect data become more complex. An important example is multigroup data collected over different experimental groups, such as data collected from distinct stages of a disease process. We have developed a method specifically addressing these issues termed Bayesian ANOVA for microarrays (BAM). The BAM approach uses a special inferential regularization known as spike-and-slab shrinkage that provides an optimal balance between total false detections and total false non-detections. This translates into more reproducible differential calls. Spike-and-slab shrinkage is a form of regularization achieved by using information across all genes and groups simultaneously.
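To make the spike-and-slab idea concrete, the toy sketch below computes, for a single gene, the posterior probability that its effect comes from the slab (a wide normal) rather than the zero spike, and the resulting model-averaged, shrunken effect. This illustrates the regularization principle only; it is not the BAM algorithm itself, whose shrinkage pools information across all genes and groups.

```python
# Toy spike-and-slab posterior for one effect estimate b ~ N(beta, s2),
# with prior beta = 0 (spike) w.p. 1-p, beta ~ N(0, tau2) (slab) w.p. p.
import math

def spike_slab_posterior(b, s2, tau2=4.0, prior_incl=0.1):
    """Posterior inclusion probability and shrunken effect."""
    # Marginal likelihoods under spike (beta = 0) and slab (beta ~ N(0, tau2)).
    lik_spike = math.exp(-b * b / (2 * s2)) / math.sqrt(2 * math.pi * s2)
    v = s2 + tau2
    lik_slab = math.exp(-b * b / (2 * v)) / math.sqrt(2 * math.pi * v)
    p = prior_incl * lik_slab / (prior_incl * lik_slab
                                 + (1 - prior_incl) * lik_spike)
    shrink = tau2 / (tau2 + s2)       # posterior-mean multiplier under the slab
    return p, p * shrink * b          # model-averaged (shrunken) effect

print(spike_slab_posterior(b=0.2, s2=0.25))  # small effect: shrunk hard to 0
print(spike_slab_posterior(b=2.5, s2=0.25))  # large effect: mostly retained
```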
12.
Donatello Shane Cordella Mauro Kaps Renata Kowalska Malgorzata Wolf Oliver 《The International Journal of Life Cycle Assessment》2020,25(5):868-882
One possible reason for the poor uptake of the EU Ecolabel for furniture products may be that the criteria are too complex for applicants. …
13.
Mary P. Moore Rory P. Cunningham Ryan J. Dashek Justine M. Mucinski R. Scott Rector 《Obesity (Silver Spring, Md.)》2020,28(10):1843-1852
Nonalcoholic fatty liver disease (NAFLD) is a major health problem, and its prevalence has increased in recent years, concurrent with rising rates of obesity and other metabolic diseases. Currently, there are no FDA‐approved pharmacological therapies for NAFLD, and lifestyle interventions, including weight loss and exercise, remain the cornerstones for treatment. Manipulating diet composition and eating patterns may be a sustainable approach to NAFLD treatment. Dietary strategies including Paleolithic, ketogenic, Mediterranean, high‐protein, plant‐based, low‐carbohydrate, and intermittent fasting diets have become increasingly popular because of their purported benefits on metabolic disease. This review highlights what is currently known about these popular dietary approaches in the management of NAFLD in clinical populations with mechanistic insight from animal studies. It also identifies key knowledge gaps to better inform future preclinical and clinical studies aimed at the treatment of NAFLD.
14.
An extensive survey of radioimmunoassay calibration data for prednisolone, prednisone and digoxin indicated that the common practice of preparing calibration curves with an individual subject's pre-dose plasma or serum, and using this to estimate unknown concentrations for the same subject, is not supported by statistical considerations. Preparation of calibration plots from pooled data is better because this introduces less bias in estimated concentrations. Such a method also saves a great deal of time, since it is not necessary to repeat the calibration procedure each time "unknowns" are being assayed. The data suggest that there is no optimum calibration plot for all radioimmunoassays. Rather, each antibody-drug combination should be investigated thoroughly to determine the best calibration plot for the particular combination. We found that the best calibration plots are: the logistic-logarithmic plot for prednisolone; a nonlinear least-squares fit to a polyexponential equation for prednisone; and a weighted least-squares regression of normalized % bound concentration for digoxin. The error in the radioimmunoassay is usually concentration-dependent and, in certain regions of the standard curve, is larger than the literature indicates, since the error has frequently been gauged from % bound values, whereas it should be gauged from inversely estimated concentrations.
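The "logistic-logarithmic plot" named above for prednisolone is conventionally a four-parameter logistic in log concentration; the sketch below fits one with SciPy and inverts it to estimate an unknown, mirroring the inverse estimation the abstract recommends for gauging assay error. The standards and % bound values are invented illustration data, not values from the paper.

```python
# Minimal sketch: 4-parameter logistic RIA calibration and inverse estimation.
import numpy as np
from scipy.optimize import curve_fit

def logistic4(logc, top, bottom, mid, slope):
    """% bound as a logistic function of log10 concentration."""
    return bottom + (top - bottom) / (1 + np.exp(slope * (logc - mid)))

conc = np.array([0.1, 0.3, 1, 3, 10, 30, 100])   # ng/ml standards (hypothetical)
bound = np.array([95, 88, 72, 50, 28, 13, 6])    # % bound (hypothetical)
params, _ = curve_fit(logistic4, np.log10(conc), bound, p0=(100, 0, 0.5, 2))

def inverse_estimate(b, top, bottom, mid, slope):
    """Invert the fitted curve: concentration giving % bound b."""
    return 10 ** (mid + np.log((top - bottom) / (b - bottom) - 1) / slope)

print(inverse_estimate(40.0, *params))  # estimated ng/ml at 40% bound
```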
15.
Yitan Zhu Huai Li David J Miller Zuyi Wang Jianhua Xuan Robert Clarke Eric P Hoffman Yue Wang 《BMC bioinformatics》2008,9(1):1-18
Background
We sketch our species identification tool for palm-sized computers that helps knowledgeable observers with census activities. An algorithm turns an identification matrix into a minimal-length series of questions that guide the operator towards identification. Historic observation data from the census geographic area helps minimize question volume. We explore how much historic data is required to boost performance, and whether the use of history negatively impacts identification of rare species. We also explore how characteristics of the matrix interact with the algorithm, and how best to predict the probability of observing a previously unseen species.
Results
Point counts of birds taken at Stanford University's Jasper Ridge Biological Preserve between 2000 and 2005 were used to examine the algorithm. A computer identified species by correctly answering and counting the algorithm's questions. We also explored how the character density of the key matrix and the theoretical minimum number of questions for each bird in the matrix influenced the algorithm. Our investigation of the required probability smoothing determined whether Laplace smoothing of observation probabilities was sufficient, or whether the more complex Good-Turing technique is required.
Conclusion
Historic data improved identification speed, but only impacted the top 25% most frequently observed birds. For rare birds, the history-based algorithms did not impose a noticeable penalty in the number of questions required for identification. For our dataset, neither the age of the historic data nor the number of observation years impacted the algorithm. Density of characters for different taxa in the identification matrix did not impact the algorithm. Intrinsic differences in identifying different birds did affect the algorithm, but the differences affected the baseline method of not using historic data to exactly the same degree. We found that Laplace smoothing performed better for rare species than Simple Good-Turing, and that, contrary to expectation, the technique did not then adversely affect identification performance for frequently observed birds.
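The two smoothing schemes compared in this conclusion can be stated in a few lines: add-one (Laplace) smoothing of per-species observation probabilities, and the basic Good-Turing estimate of the probability mass held by species not yet observed. The sketch below uses hypothetical point-count data; Simple Good-Turing additionally fits the frequency-of-frequency counts, which is omitted here.

```python
# Minimal sketch: Laplace smoothing vs. the basic Good-Turing unseen mass.
from collections import Counter

history = ["junco", "towhee", "junco", "scrub_jay", "junco", "towhee", "bushtit"]
counts = Counter(history)
N = sum(counts.values())           # total observations
V = len(counts)                    # distinct species seen so far

def laplace_prob(species):
    """Add-one smoothed probability; V+1 leaves one slot for 'unseen'."""
    return (counts[species] + 1) / (N + V + 1)

# Good-Turing: mass reserved for unseen species = (# species seen once) / N.
singletons = sum(1 for c in counts.values() if c == 1)
print(laplace_prob("junco"), laplace_prob("warbler"))  # seen vs. unseen species
print(singletons / N)                                  # Good-Turing unseen mass
```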
16.
17.
18.
Electrical interconnects in Data Center Networks (DCNs) suffer from various problems, including high energy consumption, high latency, fixed link throughput and limited reconfigurability. Introducing optical interconnects in DCNs helps to reduce these problems to a large extent; optical interconnects are the technology of the future. To implement optical switching in DCNs, various optical components are used, including wavelength selective switches, tunable wavelength converters, arrayed waveguide gratings, semiconductor optical amplifier based switches, and wavelength division multiplexers and demultiplexers. All these optical components alter the shape of the optical signal, attenuate it, and introduce time delays in the bits. A comprehensive study of various architectures for optical interconnects in DCNs is carried out. The performance of these architectures is investigated in terms of jitter, bit error rate (BER), receiver sensitivity and eye diagram opening. It is also investigated how the different optical components used in optical interconnects affect signal degradation in the different architectures. The paper concludes with a categorization of the types of signal degradation in optical interconnects in DCNs and ways to reduce them, enabling the design of low-BER optical interconnects in DCNs.
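The receiver metrics surveyed above are linked by a standard relation: the eye-diagram Q-factor, computed from the means and noise of the one/zero levels, maps to BER as 0.5·erfc(Q/√2) under Gaussian noise. The sketch below uses illustrative signal levels, not measurements from the reviewed architectures.

```python
# Minimal sketch: eye-diagram Q-factor and the corresponding BER.
import math

def q_factor(mu1, mu0, sigma1, sigma0):
    """Eye-diagram Q: separation of the one/zero rails over their noise."""
    return (mu1 - mu0) / (sigma1 + sigma0)

def ber_from_q(q):
    """Gaussian-noise BER for optimal-threshold detection."""
    return 0.5 * math.erfc(q / math.sqrt(2))

q = q_factor(mu1=1.0, mu0=0.1, sigma1=0.09, sigma0=0.04)
print(q, ber_from_q(q))  # Q ~ 6.9 -> BER ~ 2e-12, a typical design target
```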
19.
20.
Moolgavkar SH 《Radiation research》2000,154(6):728-9; discussion 730-1