Similar Literature
20 similar records found (search time: 15 ms)
1.
Fractal characteristics of the spatial pattern of natural Pinus taiwanensis populations: box-counting and information dimensions   (Total citations: 11; self-citations: 1; citations by others: 11)
Box-counting and information dimensions were used to compare the fractal characteristics of the spatial patterns of natural Pinus taiwanensis populations in different communities at Pingnan and Shouning. The results show that the natural P. taiwanensis population patterns are fractal. Box-counting dimensions range from 1.2998 to 1.8626, with communities ordered Q3>Q1>Q2>Q4>Q7>Q8>Q5>Q6; information dimensions range from 1.2057 to 1.8637, ordered Q3>Q1>Q2>Q4>Q8>Q7>Q5>Q6. Both dimensions are higher for the near-pure P. taiwanensis stands at Pingnan than for the mixed stands at Shouning. The box-counting dimension quantifies the population's ability to occupy horizontal space, while the information dimension reveals the scale dependence of pattern intensity and the unevenness of the individual distribution. The magnitude of the fractal dimensions is related to the community environment, population density, the population's dominance within the community, the degree of aggregation of individuals, and the number of saplings. The fractal dimensions of the population pattern fluctuate with elevation; 1,250-1,270 m is the most suitable elevation range. In addition, the fractal behavior of the pattern holds only within a certain range of scales, and the inflection scale marks the lower limit of that fractal range.
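The two dimensions described above can be estimated directly from stem-map coordinates: cover the plot with grids of decreasing box size, count occupied boxes N(eps) and the entropy I(eps) of box occupation frequencies, and take the slopes against log(1/eps). A minimal sketch in Python; the point pattern and scales below are invented for illustration, not the paper's data:

```python
import math, random

def box_and_information_dimension(points, scales):
    """Estimate the box-counting and information dimensions of a 2D point
    pattern. For each box size eps, count occupied boxes N(eps) and the
    Shannon entropy I(eps) of box occupation frequencies; the dimensions are
    the slopes of log N(eps) and I(eps) against log(1/eps)."""
    xs, y_box, y_info = [], [], []
    n = len(points)
    for eps in scales:
        counts = {}
        for x, y in points:
            key = (int(x / eps), int(y / eps))
            counts[key] = counts.get(key, 0) + 1
        xs.append(math.log(1.0 / eps))
        y_box.append(math.log(len(counts)))
        y_info.append(-sum((c / n) * math.log(c / n) for c in counts.values()))
    def slope(ys):
        m = len(xs)
        mx, my = sum(xs) / m, sum(ys) / m
        return sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / \
               sum((a - mx) ** 2 for a in xs)
    return slope(y_box), slope(y_info)

# hypothetical stem map: a dense uniform scatter should give dimensions near 2
random.seed(0)
pts = [(random.random(), random.random()) for _ in range(5000)]
d_box, d_info = box_and_information_dimension(pts, [1/2, 1/4, 1/8, 1/16])
```

For a clustered or sparse population the slopes fall below 2, which is how the dimension quantifies the degree of space occupation.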

2.
Fetal loss often precludes the ascertainment of infection status in studies of perinatal transmission of HIV. The standard analysis based on liveborn babies can result in biased estimation and invalid inference in the presence of fetal death. This paper focuses on the problem of estimating treatment effects for mother-to-child transmission when infection status is unknown for some babies. Minimal data structures for identifiability of parameters are given. Methods using full likelihood and inverse-probability-of-selection weighted estimators are suggested. Simulation studies show that these estimators perform well in finite samples. The methods are applied to data from a clinical trial in Dar es Salaam, Tanzania. To validly estimate the treatment effect using likelihood methods, investigators should make sure that the design includes a mini-study among uninfected mothers and that efforts are made to ascertain the infection status of as many of the lost babies as possible. The inverse probability weighting methods need precise estimation of the probability of observing infection status. The methodology can be further applied to studies of other vertically transmissible infections which are potentially fatal pre- and perinatally.
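The inverse-probability-of-selection weighting idea can be illustrated with a small simulation: when ascertainment of the outcome depends on the (unknown) outcome itself, the complete-case estimate is biased, while weighting each observed outcome by the inverse of its ascertainment probability restores an approximately unbiased estimate. The transmission and ascertainment probabilities below are invented for illustration, not taken from the trial:

```python
import random

def ipw_mean(outcomes, observed, p_obs):
    """Horvitz-Thompson (inverse-probability-weighted) estimate of a mean
    outcome when some outcomes are missing. observed[i] is 1 if outcome i
    was ascertained; p_obs[i] is the probability of ascertainment."""
    return sum(y * r / p for y, r, p in zip(outcomes, observed, p_obs)) / len(outcomes)

random.seed(1)
true_p = 0.3           # hypothetical transmission probability
n = 20000
ys, rs, ps = [], [], []
for _ in range(n):
    y = 1 if random.random() < true_p else 0
    # infected outcomes are harder to ascertain (informative missingness)
    p = 0.5 if y == 1 else 0.9
    r = 1 if random.random() < p else 0
    ys.append(y); rs.append(r); ps.append(p)

naive = sum(y for y, r in zip(ys, rs) if r) / sum(rs)  # complete-case estimate
ipw = ipw_mean(ys, rs, ps)                             # close to true_p
```

The complete-case estimate is pulled well below the true transmission probability, while the weighted estimate recovers it; this is why precise estimation of the ascertainment probability matters for the IPW approach.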

3.
iTRAQ (isobaric tags for relative or absolute quantitation) is a mass spectrometry technology that allows quantitative comparison of protein abundance by measuring peak intensities of reporter ions released from iTRAQ-tagged peptides by fragmentation during MS/MS. However, current data analysis techniques for iTRAQ struggle to report reliable relative protein abundance estimates and suffer from problems of precision and accuracy. The precision of the data is affected by variance heterogeneity: low signal data have higher relative variability; however, low abundance peptides dominate data sets. Accuracy is compromised as ratios are compressed toward 1, leading to underestimation of the ratio. This study investigated both issues and proposed a methodology that combines the peptide measurements to give a robust protein estimate even when the data for the protein are sparse or at low intensity. Our data indicated that ratio compression arises from contamination during precursor ion selection, which occurs at a consistent proportion within an experiment and thus results in a linear relationship between expected and observed ratios. We proposed that a correction factor can be calculated from proteins spiked at known ratios. We then demonstrated that variance heterogeneity is present in iTRAQ data sets irrespective of the analytical packages, LC-MS/MS instrumentation, and iTRAQ labeling kit (4-plex or 8-plex) used. We proposed using an additive-multiplicative error model for peak intensities in MS/MS quantitation and demonstrated that a variance-stabilizing normalization is able to address the error structure and stabilize the variance across the entire intensity range. The resulting uniform variance structure simplifies the downstream analysis.
Heterogeneity of variance consistent with an additive-multiplicative model has been reported in other MS-based quantitation, including fields outside of proteomics; consequently the variance-stabilizing normalization methodology has the potential to increase the capabilities of MS quantitation across diverse areas of biology and chemistry. Different techniques are being used and developed in the field of proteomics to allow quantitative comparison of samples between one state and another. These can be divided into gel-based (1–4) or mass spectrometry-based (5–8) techniques. Comparative studies have found that each technique has strengths and weaknesses and plays a complementary role in proteomics (9, 10). There is significant interest in stable isotope labeling strategies of proteins or peptides, as every measurement can then carry an internal reference allowing relative quantitation, which significantly increases the sensitivity of detecting changes in abundance. Isobaric labeling techniques such as tandem mass tags (11, 12) or isobaric tags for relative or absolute quantitation (iTRAQ) (13, 14) allow multiplexing of four, six, or eight separately labeled samples within one experiment. In contrast to most other quantitative proteomics methods, where precursor ion intensities are measured, here the measurement and ensuing quantitation of iTRAQ reporter ions occurs after fragmentation of the precursor ion. Differentially labeled peptides are selected in MS as a single mass precursor ion, as the size difference of the tags is equalized by a balance group. The reporter ions are only liberated in MS/MS after the reporter ion and balance groups fragment from the labeled peptides during CID.
iTRAQ has been applied to a wide range of biological applications from bacteria under nitrate stress (15) to mouse models of cerebellar dysfunction (16). For the majority of MS-based quantitation methods (including MS/MS-based methods like iTRAQ), the measurements are made at the peptide level and then combined to compute a summarized value for the protein from which they arose. An advantage is that the protein can be identified and quantified from data of multiple peptides, often with multiple values per distinct peptide, thereby enhancing confidence in both the identity and the abundance. However, the question arises of how to summarize the peptide readings to obtain an estimate of the protein ratio. This will involve some sort of averaging, and we need to consider the distribution of the data, in particular the following three aspects. (i) Are the data centered around a single mode (which would be related to the true protein quantitation), or are there phenomena that make them multimodal? (ii) Are the data approximately symmetric (non-skewed) around the mode? (iii) Are there outliers? In the case of multimodality, it is recommended that an effort be made to separate the various phenomena into their separate variables and to dissect the multimodality. Li et al. (17) developed the ASAP ratio for ICAT data, which includes a complex data combination strategy. Peptide abundance ratios are calculated by combining data from multiple fractions across MS runs and then averaging across peptides to give an abundance ratio for each parent protein. GPS Explorer, a software package developed for iTRAQ, assumes normality in the peptide ratios for a protein once an outlier filter is applied (18). The iTRAQ package ProQuant assumes that peptide ratio data for a protein follow a log-normal distribution (19). Averaging can be via mean (20), weighted average (21, 22), or weighted correlation (23). Some of these methods try to take into account the varying precision of the peptide measurements.
There are many different ideas of how to process peptide data, but as yet no systematic study has been completed to guide analysis and ensure the methods being utilized are appropriate. The quality of a quantitation method can be considered in terms of precision, which refers to how well repeated measurements agree with each other, and accuracy, which refers to how much they on average deviate from the true value. Both of these types of variability are inherent to the measurement process. Precision is affected by random errors, non-reproducible and unpredictable fluctuations around the true value. (In)accuracy, by contrast, is caused by systematic biases that go consistently in the same direction. In iTRAQ, systematic biases can arise because of inconsistencies in iTRAQ labeling efficiency and protein digestion (22). Typically, ratiometric normalization has been used to address this tag bias, where all peptide ratios are multiplied by a global normalization factor determined to center the ratio distribution on 1 (19, 22). Even after such normalization, concerns have been raised that iTRAQ has imperfect accuracy with ratios shrunken toward 1, and this underestimation has been reported across multiple MS platforms (23–27). It has been suggested that this underestimation arises from co-eluting peptides with similar m/z values, which are co-selected during ion selection and co-fragmented during CID (23, 27). As the majority of these will be at a 1:1 ratio across the reporter ion tags (as required for normalization in iTRAQ experiments), they will contribute a background value equally to each of the iTRAQ reporter ion signals and diminish the computed ratios. With regard to random errors, iTRAQ data are seen to exhibit heterogeneity of variance; that is, the variance of the signal depends on its mean. In particular, the coefficient of variation (CV) is higher in data from low intensity peaks than in data from high intensity peaks (16, 22, 23).
This has also been observed in other MS-based quantitation techniques when quantifying from the MS signal (28–30). Different approaches have been proposed to model the variance heterogeneity. Pavelka et al. (31) used a power law global error model in conjunction with quantitation data derived from spectral counts. Other authors have proposed that the higher CV at low signal arises from the majority of MS instrumentation measuring ion counts as whole numbers (32). Anderle et al. (28) described a two-component error model in which Poisson statistics of ion counts measured as whole numbers dominate at the low intensity end of the dynamic range and multiplicative effects dominate at the high intensity end, and demonstrated its fit to label-free LC-MS quantitation data. Earlier, in the 1990s, Rocke and Lorenzato (29) proposed a two-component additive-multiplicative error model in an environmental toxin monitoring study utilizing gas chromatography MS. How can the variance heterogeneity be addressed in the data analysis? Some of the current approaches include outlier removal (18, 25), weighted means (21, 22), inclusion filters (16, 22), logarithmic transformation (19), and weighted correlation analysis (23). Outlier removal methods, for example using Dixon's test, assume a normal distribution for which there is little empirical basis. The inclusion filter method, where low intensity data are excluded, reduces the protein coverage considerably if the heterogeneity is to be significantly reduced. The weighted mean method results in higher intensity readings contributing more to the weighted mean than low intensity readings. Filtering, outlier removal, and weighted methods are of limited use for peptides for which only a few low intensity readings were made; however, such cases typically dominate the data sets. Even with a logarithmic transformation, heterogeneity has been reported for iTRAQ data (16, 19, 22).
Current methods struggle to address the issue and to maintain sensitivity. Here we investigate the data analysis issues that relate to precision and accuracy in quantitation and propose a robust methodology that is designed to make use of all data without ad hoc filtering rules. The additive-multiplicative model mentioned above motivates the so-called generalized logarithm transformation, a transformation that addresses heterogeneity of variance by approximately stabilizing the variance of the transformed signal across its whole dynamic range (33). Huber et al. (33) provided an open source software package, variance-stabilizing normalization (VSN), that determines the data-dependent transformation parameters. Here we report that the application of this transformation is beneficial for the analysis of iTRAQ data. We investigated the error structure of iTRAQ quantitation data using different peak identification and quantitation packages, LC-MS/MS data collection systems, and both the 4-plex and 8-plex iTRAQ systems. The usefulness of the VSN transformation for addressing heterogeneity of variance was demonstrated. Furthermore, we considered the correlations between multiple peptide-level readings for the same protein and proposed a method to summarize them into a protein abundance estimate. We considered same-same comparisons to assess the magnitude of experimental variability and then used a set of complex biological samples whose biology has been well characterized to assess the power of the method to detect true differential abundance. We assessed the accuracy of the system with a four-protein mixture at known ratios spanning a fold-change range of 1–4. From this, we proposed a methodology to address the accuracy issues of iTRAQ.
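The generalized logarithm at the heart of variance stabilization can be sketched as follows: under an additive-multiplicative error model, raw CVs blow up at low intensity, while the glog-transformed signal has roughly constant spread across the dynamic range. The error-model parameters below are invented for illustration (VSN itself estimates the transformation parameters from the data):

```python
import math, random

def glog(x, c):
    """Generalized logarithm: ~log(x) for x >> c, but defined and nearly
    linear around zero; stabilizes variance under an additive-multiplicative
    error model when c ~ s_add / s_mult."""
    return math.log((x + math.sqrt(x * x + c * c)) / 2.0)

def sd(vals):
    m = sum(vals) / len(vals)
    return math.sqrt(sum((v - m) ** 2 for v in vals) / len(vals))

# simulate peak intensities: y = mu * exp(multiplicative noise) + additive noise
random.seed(2)
def noisy(mu, n=4000, s_mult=0.1, s_add=50.0):
    return [mu * math.exp(random.gauss(0.0, s_mult)) + random.gauss(0.0, s_add)
            for _ in range(n)]

low, high = noisy(100.0), noisy(10000.0)
cv_low, cv_high = sd(low) / 100.0, sd(high) / 10000.0  # raw CVs differ ~5x
c = 50.0 / 0.1                                         # s_add / s_mult
sd_low = sd([glog(y, c) for y in low])                 # after glog the spread
sd_high = sd([glog(y, c) for y in high])               # is nearly constant
```

A plain log transform would instead inflate the spread of the low-intensity channel, which matches the report that heterogeneity persists in log-transformed iTRAQ data.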

4.
While recording surface electromyography (sEMG), one records both the electrical activity coming from the muscles and transients in the half-cell potential at the electrode–electrolyte interface caused by micromovements at the electrode–skin interface. Separating the two sources of electrical activity usually fails because their frequency characteristics overlap. This paper aims to develop a method that detects movement artifacts and suggests a minimization technique. Toward that aim, we first estimated the frequency characteristics of movement artifacts under various static and dynamic experimental conditions. We found that the extent of the movement artifact depended on the nature of the movement and varied from person to person. The highest movement artifact frequency in our study was 10 Hz for standing, 22 Hz for tiptoeing, 32 Hz for walking, 23 Hz for running, 41 Hz for jumping from a box, and 40 Hz for jumping up and down. Second, using a 40 Hz highpass filter, we cut out most of the frequencies belonging to the movement artifacts. Finally, we checked whether the latencies and amplitudes of reflex and direct muscle responses were still observed in the highpass-filtered sEMG. We showed that the 40 Hz highpass filter did not significantly alter reflex and direct muscle response variables. We therefore recommend that researchers who use sEMG under similar conditions employ this level of highpass filtering to reduce movement artifacts in their records. If different movement conditions are used, however, it is best to estimate the frequency characteristics of the movement artifact before applying any highpass filtering, so as to minimize movement artifacts and their harmonics in the sEMG.
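The effect of such a highpass cutoff can be sketched with a simple first-order filter (real sEMG pipelines typically use steeper Butterworth designs; the sampling rate, frequencies, and amplitudes below are invented): a 10 Hz "movement artifact" component is strongly attenuated, while a 150 Hz "EMG" component passes nearly intact.

```python
import math

def highpass(x, fc, fs):
    """First-order RC highpass filter with cutoff fc (Hz) at sample rate fs."""
    rc = 1.0 / (2.0 * math.pi * fc)
    dt = 1.0 / fs
    a = rc / (rc + dt)
    y = [0.0] * len(x)
    for n in range(1, len(x)):
        y[n] = a * (y[n - 1] + x[n] - x[n - 1])
    return y

def rms(v):
    return math.sqrt(sum(s * s for s in v) / len(v))

fs = 2000.0
t = [n / fs for n in range(4000)]                            # 2 s of data
artifact = [math.sin(2 * math.pi * 10 * ti) for ti in t]     # 10 Hz artifact
emg = [0.5 * math.sin(2 * math.pi * 150 * ti) for ti in t]   # 150 Hz "EMG"
record = [a + e for a, e in zip(artifact, emg)]
filtered = highpass(record, 40.0, fs)

# compare each component before and after filtering (skip startup transient)
art_out = rms(highpass(artifact, 40.0, fs)[500:])   # strongly attenuated
emg_out = rms(highpass(emg, 40.0, fs)[500:])        # passes nearly intact
```

With a single first-order stage the 10 Hz component is cut to roughly a quarter of its amplitude; the 40 Hz cutoff recommended above would in practice be applied with a higher-order filter for a sharper transition.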

5.
ABSTRACT Downing population reconstruction uses harvest-by-age data and backward addition of cohorts to estimate minimum population size over time. Although this technique is currently used for management of black bear (Ursus americanus) and white-tailed deer (Odocoileus virginianus) populations, it had not undergone a rigorous evaluation of accuracy. We used computer simulations to evaluate the impacts of collapsing age classes and of violating the technique's assumptions on population reconstruction estimates and trends. Changes in harvest rate or survival over time affected the accuracy of reconstructed population estimates and trends. The technique was quite robust to collapsing age classes down to 3+ for both bears and deer. The method is suitable for estimating population growth rate (λ) for populations experiencing no trend in harvest rate or natural mortality rate over time. Our evaluation showed Downing population reconstruction to be a potentially valuable tool for managing harvested species with high harvest rates and low natural mortality, with possible application to black bear and white-tailed deer populations.
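The backward-addition idea can be sketched directly: an animal harvested at age a in year t must have been alive in every year from t - a through t, so summing harvests backward along each cohort gives a minimum number alive per year. This is a simplified sketch of the cohort logic, not Downing's full procedure, and the harvest-at-age matrix is invented:

```python
def backward_reconstruction(harvest):
    """Minimum population per year from harvest-at-age data:
    harvest[year][age] = number harvested at that age in that year.
    Each harvested animal is credited to every year of its known lifespan."""
    years = sorted(harvest)
    min_pop = {y: 0 for y in years}
    for t in years:
        for age, n in harvest[t].items():
            for y in range(t - age, t + 1):  # alive from birth year to harvest
                if y in min_pop:
                    min_pop[y] += n
    return min_pop

# hypothetical harvest-at-age matrix for three years
harvest = {
    2000: {0: 10, 1: 8, 2: 5},
    2001: {0: 9, 1: 7, 2: 4},
    2002: {0: 11, 1: 6, 2: 5},
}
pop = backward_reconstruction(harvest)
```

Note that the most recent years are underestimated because their cohorts have not yet been fully harvested, which is one reason the technique assumes stable harvest and natural mortality rates over time.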

6.
Abstract

Purpose: This study investigated the effect of movement speed on task accuracy and precision when participants were provided temporally oriented vibrotactile prompts. Materials and methods: Participants recreated a simple wrist flexion/extension movement at fast and slow speeds based on target patterns conveyed via vibrating motors affixed to the forearm. Each participant was given five performance-blinded trials to complete the task at each speed. Movement accuracy (root mean square error) and precision (standard deviation) were calculated for each trial in both the spatial and temporal domains. Results: 28 participants completed the study. Temporal accuracy and precision improved with movement speed (both: fast > slow, p < 0.01), while all measures improved across trials (temporal accuracy and precision: trial 1 < all other trials, p < 0.05; spatial accuracy: trials 1 and 2 < all other trials, p < 0.05; spatial precision: trial 1 < all other trials, p < 0.05). Conclusions: Overall, the temporal and spatial results indicate that participants quickly recreated and maintained the desired pattern regardless of speed. Additionally, movement speed appears to influence movement accuracy and precision, particularly in the temporal domain. These results highlight the potential of vibrotactile prompts in rehabilitation paradigms aimed at motor re-education.

7.
8.
Based on monthly incidence data for hepatitis and typhoid in Benxi, Liaoning Province, from 1955 to 1996, the phase-space technique from chaotic dynamics was used to perform power-spectrum and chaos analyses of the epidemic processes. The typhoid epidemic process was found to be chaotic, with the chaotic iterative model X_{t+1} = r X_t exp{-0.0009287 (X_t - 33.25332)^2}; the hepatitis epidemic process was non-chaotic. As the model parameter varies over its range, the system passes through transitions between periodic and chaotic states, indicating that the typhoid epidemic process is complex. An epidemic "threshold" is given for controlling epidemic fluctuations, and the correlation dimension of typhoid was found to be 3.087.
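The reported iterative model can be simulated directly. The growth parameter r below is an invented value (the abstract does not fix it), so this sketch only demonstrates iterating the map and computing a finite-time Lyapunov exponent, not the paper's fitted dynamics:

```python
import math

K, C = 0.0009287, 33.25332  # constants from the reported model

def step(x, r):
    """One iteration of X_{t+1} = r * X_t * exp(-K * (X_t - C)^2)."""
    return r * x * math.exp(-K * (x - C) ** 2)

def trajectory(x0, r, n):
    xs = [x0]
    for _ in range(n):
        xs.append(step(xs[-1], r))
    return xs

def lyapunov(x0, r, n):
    """Finite-time Lyapunov exponent: mean of log|f'(x_t)| along the orbit;
    a positive value indicates sensitive dependence on initial conditions."""
    lam, x = 0.0, x0
    for _ in range(n):
        deriv = r * math.exp(-K * (x - C) ** 2) * (1.0 - 2.0 * K * x * (x - C))
        lam += math.log(max(abs(deriv), 1e-300))
        x = step(x, r)
    return lam / n

xs = trajectory(30.0, 5.0, 1000)   # r = 5.0 is hypothetical
lam = lyapunov(30.0, 5.0, 1000)
```

Because the map's output is bounded by r times the maximum of x·exp(-K(x - C)^2), the orbit stays positive and bounded for any positive r; scanning r and checking the sign of the Lyapunov exponent is one way to map the periodic-to-chaotic transitions the abstract describes.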

9.
10.
Pulmonary hypertension (PH) can result in vascular pruning and increased tortuosity of the blood vessels. In this study we examined whether automatic extraction of lung vessels from contrast-enhanced thoracic computed tomography (CT) scans, together with calculation of tortuosity and the 3D fractal dimension of the segmented lung vessels, yields measures associated with PH. In this pilot study, 24 patients (18 with and 6 without PH) were examined with thorax CT following their diagnostic or follow-up right-sided heart catheterisation (RHC). Images of the whole thorax were acquired with a 128-slice dual-energy CT scanner. After lung identification, a vessel enhancement filter was used to estimate the lung vessel centerlines, from which the vascular trees were generated. For each vessel segment the tortuosity was calculated using the distance metric. Fractal dimension was computed using 3D box counting. Hemodynamic data from RHC were used for correlation analysis. The distance metric, the readout of vessel tortuosity, correlated with mean pulmonary arterial pressure (Spearman correlation coefficient: ρ = 0.60) and other relevant parameters such as pulmonary vascular resistance (ρ = 0.59), arterio-venous difference in oxygen (ρ = 0.54), and arterial (ρ = −0.54) and venous oxygen saturation (ρ = −0.68). Moreover, the distance metric increased with WHO functional class. In contrast, the 3D fractal dimension was significantly correlated only with arterial oxygen saturation (ρ = 0.47). Automatic detection of the lung vascular tree can provide clinically relevant measures of blood vessel morphology. Non-invasive quantification of pulmonary vessel tortuosity may provide a tool to evaluate the severity of pulmonary hypertension.
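The distance metric used here is simply a vessel centerline's path length divided by the straight-line (chord) distance between its endpoints, so a perfectly straight vessel scores exactly 1 and tortuous vessels score higher. A minimal sketch on made-up centerlines:

```python
import math

def distance_metric(points):
    """Tortuosity of a vessel centerline: ratio of path length along the
    polyline to the chord distance between its endpoints (>= 1)."""
    path = sum(math.dist(points[i], points[i + 1])
               for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return path / chord

# hypothetical centerlines: one straight segment and one sinusoidal one
straight = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
wavy = [(x / 10, 0.3 * math.sin(2 * math.pi * x / 10), 0.0)
        for x in range(21)]
```

The 3D fractal dimension of the whole tree would then be obtained by box counting over the segmented vessel voxels, analogously to the 2D box-counting sketches elsewhere in this list.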

Trial Registration

ClinicalTrials.gov NCT01607489

11.
Summarized shape indices, such as the mean shape index (MSI) and the area-weighted mean shape index (AWMSI), can change across multiple spatial scales. This variation is important for describing the scale heterogeneity of landscapes, but the exact mathematical form of the dependence is rarely known. In this paper, fractal geometry (via the perimeter and area Hausdorff dimensions) enabled us to describe the scale dependence of these indices. Moreover, we showed how the fractal dimensions can be deduced from existing MSI and AWMSI data. In this way, the equivalence of a multiscale tabulated MSI and AWMSI dataset and two scale-invariant fractal dimensions was demonstrated.

12.
Fractal characteristics of the spatial pattern of natural Pinus taiwanensis populations: correlation dimension   (Total citations: 9; self-citations: 0; citations by others: 9)
The correlation dimension was used to analyze and compare the fractal characteristics of the spatial patterns of natural Pinus taiwanensis populations in different communities at Pingnan and Shouning. The results show that the correlation dimensions of the natural P. taiwanensis population patterns range from 1.077 to 1.563, with communities ordered Q2>Q8>Q5>Q3>Q1>Q7>Q6>Q4; the degree of spatial correlation among individuals differs little between the near-pure stands at Pingnan and the mixed stands at Shouning. With increasing elevation, the correlation dimension fluctuates in a rise-fall-rise pattern; 1,300-1,310 m and 1,400 m are the elevation ranges where individuals are most strongly spatially correlated, compete most intensely, and occupy space to the greatest degree. In addition, drawing on the earlier studies in this series, we discuss how to apply the three fractal dimensions jointly to describe the scale-dependent features of natural P. taiwanensis population patterns and how to plot a fractal-dimension spectrum.
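The correlation dimension can be estimated with the Grassberger-Procaccia approach: compute the correlation integral C(r), the fraction of point pairs closer than r, and take the slope of log C(r) against log r over the scaling range. A sketch on a hypothetical uniform point pattern (true dimension 2), not the stand data:

```python
import math, random

def correlation_dimension(points, r1, r2):
    """Grassberger-Procaccia estimate: slope of log C(r) between two radii,
    where C(r) is the fraction of point pairs with separation below r."""
    n = len(points)
    def corr(r):
        hits = 0
        for i in range(n):
            for j in range(i + 1, n):
                if math.dist(points[i], points[j]) < r:
                    hits += 1
        return 2.0 * hits / (n * (n - 1))
    return (math.log(corr(r2)) - math.log(corr(r1))) / (math.log(r2) - math.log(r1))

random.seed(3)
pts = [(random.random(), random.random()) for _ in range(400)]
d2 = correlation_dimension(pts, 0.05, 0.2)  # near 2 for a uniform scatter
```

For clustered tree positions the slope falls below 2, and the radii must be chosen inside the scaling range, which echoes the abstracts' point that the fractal behavior holds only over a limited range of scales.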

13.
ABSTRACT Variance in population estimates is affected by the number of samples chosen for genotyping when multiple samples are available during a sampling period. Using genetic data obtained from noninvasive hair snags used to sample black bears (Ursus americanus) in the Northern Lower Peninsula of Michigan, USA, we developed a bootstrapping simulation to determine how the precision of population estimates varied with the number of samples genotyped. Improvements in the precision of population estimates were not monotonic over all sample sizes available for genotyping. Estimates of cost, both financial and in terms of bias associated with increasing genotyping error, and of benefit in terms of greater estimate precision, will vary by species and field conditions and should be determined empirically.
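The bootstrapping idea can be sketched as follows: repeatedly subsample k of the available hair samples, recompute a simple statistic (here, the number of distinct individuals detected), and use the spread across replicates as a measure of precision at that genotyping effort. The pool sizes and detection statistic below are invented; the actual study bootstrapped full mark-recapture population estimates:

```python
import random

def bootstrap_precision(samples, k, reps=500, seed=0):
    """Bootstrap the number of distinct individuals detected when only k of
    the available samples are genotyped; returns (mean, sd) across reps."""
    rng = random.Random(seed)
    counts = []
    for _ in range(reps):
        sub = rng.sample(samples, k)      # subsample without replacement
        counts.append(len(set(sub)))
    m = sum(counts) / reps
    sd = (sum((c - m) ** 2 for c in counts) / reps) ** 0.5
    return m, sd

# hypothetical pool: 300 hair samples drawn from 120 bears
pool_rng = random.Random(42)
samples = [pool_rng.randrange(120) for _ in range(300)]
m50, sd50 = bootstrap_precision(samples, 50)
m200, sd200 = bootstrap_precision(samples, 200)
```

Plotting the bootstrap standard deviation against k shows directly where additional genotyping stops buying much precision, which is the trade-off the abstract describes.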

14.
Fractal characteristics of heart sound signals based on the box dimension   (Total citations: 3; self-citations: 0; citations by others: 3)
Building on the traditional box dimension and considering scale variation, a new dyadic box-counting algorithm for computing the fractal dimension of the time-domain waveform of heart sound signals is proposed, and the underlying idea and estimation method are given. The method was then used to compute the fractal dimensions of normal heart sounds and several typical pathological heart sounds, and their fractal characteristics were studied. The results show that heart sound signals have clear fractal characteristics, that the fractal dimension reflects the complexity of the signal, and that it can clearly distinguish normal from pathological heart sounds.

15.
16.
Segments of identity-by-descent (IBD) detected from high-density genetic data are useful for many applications, including long-range phase determination, phasing family data, imputation, IBD mapping, and heritability analysis in founder populations. We present Refined IBD, a new method for IBD segment detection. Refined IBD achieves both computational efficiency and highly accurate IBD segment reporting by searching for IBD in two steps. The first step (identification) uses the GERMLINE algorithm to find shared haplotypes exceeding a length threshold. The second step (refinement) evaluates candidate segments with a probabilistic approach to assess the evidence for IBD. Like GERMLINE, Refined IBD allows for IBD reporting on a haplotype level, which facilitates determination of multi-individual IBD and allows for haplotype-based downstream analyses. To investigate the properties of Refined IBD, we simulate SNP data from a model with recent superexponential population growth that is designed to match United Kingdom data. The simulation results show that Refined IBD achieves a better power/accuracy profile than fastIBD or GERMLINE. We find that a single run of Refined IBD achieves greater power than 10 runs of fastIBD. We also apply Refined IBD to SNP data for samples from the United Kingdom and from Northern Finland and describe the IBD sharing in these data sets. Refined IBD is powerful, highly accurate, and easy to use and is implemented in Beagle version 4.

17.
It has been commonly recognized that residual dipolar coupling data provide a measure of quality for protein structures. To quantify this observation, a database of 100 single-domain proteins was compiled in which each protein is represented by two independently solved structures. Backbone 1H–15N dipolar couplings were simulated for the target structures and then fitted to the model structures. The fits were characterized by an R-factor corrected for the effects of a non-uniform distribution of dipolar vectors on the unit sphere. The analyses show that favorable R-factor values virtually guarantee high accuracy of the model structure (where accuracy is defined as the backbone coordinate rms deviation). On the other hand, unfavorable values do not necessarily indicate low accuracy. Based on the simulated data, a simple empirical formula is proposed to estimate the accuracy of protein structures. The method is illustrated with a number of examples, including the PDZ2 domain of human phosphatase hPTP1E. Electronic supplementary material is available for this article to authorised users.

18.
Reference scanners are used in dental medicine to verify many procedures; the main interest is verifying impression methods, as impressions serve as the base for dental restorations. The current limitation of many reference scanners is a lack of accuracy when scanning large objects such as full dental arches, or a limited ability to assess detailed tooth surfaces. A new reference scanner, based on the focus-variation scanning technique, was evaluated with regard to its local and general accuracy. A specific scanning protocol was tested for scanning original tooth surfaces from dental impressions, and different model materials were verified. The results showed high scanning accuracy of the reference scanner, with a mean deviation of 5.3 ± 1.1 µm for trueness and 1.6 ± 0.6 µm for precision for full arch scans. Current dental impression methods showed much larger deviations (trueness: 20.4 ± 2.2 µm, precision: 12.5 ± 2.5 µm) than the internal scanning accuracy of the reference scanner. Smaller objects such as single tooth surfaces can be scanned with even higher accuracy, enabling the system to assess erosive and abrasive tooth surface loss. The reference scanner can be used to measure differences in many fields of dental research. The different magnification levels, combined with high local and general accuracy, can be used to assess changes from single teeth or restorations up to full arches.
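The trueness/precision distinction used above can be made concrete: trueness describes the average deviation of measurements from a reference value, while precision describes the spread of repeated measurements around their own mean. A simplified sketch (the repeated scan values and reference length are invented, and trueness is taken here as the mean absolute deviation):

```python
import math

def trueness_and_precision(measured, reference):
    """Trueness as the mean absolute deviation from a reference value, and
    precision as the sample standard deviation of repeated measurements
    (a simplified reading of ISO 5725 terminology)."""
    trueness = sum(abs(m - reference) for m in measured) / len(measured)
    mean = sum(measured) / len(measured)
    precision = math.sqrt(sum((m - mean) ** 2 for m in measured)
                          / (len(measured) - 1))
    return trueness, precision

# hypothetical repeated scans of a 10000.0 um reference distance
scans = [10005.2, 10006.1, 10004.8, 10005.9, 10005.4]
t, p = trueness_and_precision(scans, 10000.0)
```

A scanner can thus be precise but not true (a consistent offset, as in this example) or true but imprecise; the reference scanner above scores well on both.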

19.
20.
杨灿朝 (Canchao Yang), 梁伟 (Wei Liang). Chinese Journal of Zoology (动物学杂志), 2016, 51(4): 663-667
Blind experimental methods, widely used in clinical medicine, pharmacology, psychology, and related disciplines, are experimental designs intended to avoid bias in observed results caused by human subjectivity. However, this approach has long been neglected in animal behavior research. In recent years some researchers have become aware of the problem; reviews and analyses of the literature show that most studies that should have used blinding ignored it, and their reported effect sizes are markedly higher than those of studies that used blinding, indicating that observer bias influenced the results. This paper introduces blind experimental methods, emphasizes their importance in animal behavior research, and offers an account of, and suggestions for, their application in this field.
