Similar Literature (20 results)
1.
This research focuses on the cooperative relationships and strategic tendencies among three mutually interacting parties in financing: small enterprises, commercial banks, and micro-credit companies. Complex network theory and time series analysis were applied to obtain quantitative evidence, and a fundamental model of the interaction among the three parties was built using evolutionary game theory. Combining the results of the data analysis with the current situation, the paper puts forward legislative recommendations for regulating lending activities among small enterprises, commercial banks, and micro-credit companies. The approach provides a framework for constructing mathematical models and applying econometrics and evolutionary game theory to problems of corporate financing.
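As a concrete illustration of the kind of evolutionary-game model the abstract describes, the sketch below iterates two-strategy replicator dynamics for a bank's lend/refuse decision. All payoff values and the repayment probability are hypothetical placeholders, not the paper's calibrated model.

```python
# Minimal replicator-dynamics sketch for a two-strategy lending game.
# All payoff values are hypothetical placeholders, not the paper's model.
import numpy as np

# Payoff matrix for banks: rows = bank strategy (lend, refuse),
# columns = small-enterprise behaviour (repay, default).
payoffs = np.array([[0.08, -1.0],   # lend:  interest gained vs. principal lost
                    [0.02,  0.02]]) # refuse: risk-free return either way

def replicator_step(x, q_repay, dt=0.01):
    """One Euler step of replicator dynamics for the share x of lending
    banks, given the probability q_repay that an enterprise repays."""
    f_lend   = q_repay * payoffs[0, 0] + (1 - q_repay) * payoffs[0, 1]
    f_refuse = q_repay * payoffs[1, 0] + (1 - q_repay) * payoffs[1, 1]
    f_mean   = x * f_lend + (1 - x) * f_refuse
    return x + dt * x * (f_lend - f_mean)

x = 0.5                      # initial share of banks willing to lend
for _ in range(10_000):
    x = replicator_step(x, q_repay=0.95)
print(f"long-run share of lending banks: {x:.3f}")
```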

2.
3.
In this paper we analyze the behavior of tornado time series in the U.S. from the perspective of dynamical systems. A tornado is a violently rotating column of air extending from a cumulonimbus cloud down to the ground. Such phenomena reveal features that are well described by power law functions and unveil characteristics found in systems with long-range memory effects. Tornado time series are viewed as the output of a complex system and are interpreted as a manifestation of its dynamics. Tornadoes are modeled as sequences of Dirac impulses with amplitude proportional to the event size. First, a collection of time series spanning 64 years is analyzed in the frequency domain by means of the Fourier transform. The amplitude spectra are approximated by power law functions and their parameters are read as an underlying signature of the system dynamics. Second, the concept of circular time is adopted and the collective behavior of tornadoes is analyzed. Clustering techniques are then adopted to identify and visualize the emerging patterns.
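A minimal sketch of the first step, assuming synthetic event data in place of the tornado records: build a Dirac-impulse series, compute the FFT amplitude spectrum, and fit a power law in log-log coordinates.

```python
# Sketch: model events as a Dirac-impulse train, take the FFT amplitude
# spectrum, and fit a power law |S(f)| ~ c * f^(-b) in log-log coordinates.
# The synthetic event data below are illustrative, not the tornado records.
import numpy as np

rng = np.random.default_rng(0)
n_days = 64 * 365                        # ~64 years of daily bins
signal = np.zeros(n_days)
events = rng.choice(n_days, size=5000, replace=False)
signal[events] = rng.pareto(2.0, size=5000) + 1.0   # heavy-tailed "sizes"

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(n_days, d=1.0)

# Least-squares line in log-log space (skip the DC component at f = 0).
mask = freqs > 0
slope, intercept = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
print(f"fitted power-law exponent: {slope:.3f}")
```

For uncorrelated synthetic events the fitted exponent comes out near zero (a flat, white-noise-like spectrum); the paper's point is that real tornado series deviate from that baseline in a way that signals long-range memory.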

4.
Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy between two processes requires the observation of multiple realizations of these processes to estimate the associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of the processes to allow pooling of observations over time. This assumption, however, is a major obstacle to the application of these estimators in neuroscience, as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues showed theoretically that the stationarity assumption can be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method on neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems.
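The ensemble trick can be shown with a deliberately simple plug-in estimator: probabilities at one time point t are estimated across trials rather than across time, so no stationarity over t is assumed. The binary binning and the coupled Gaussian test data below are illustrative simplifications; the paper's estimator is nearest-neighbour based and GPU-parallelized.

```python
# Sketch of the ensemble idea: estimate transfer entropy X -> Y at a single
# time point t by pooling observations across trials instead of across time.
# Coarse binary binning is used for brevity; all data are simulated.
import numpy as np
from collections import Counter

def ensemble_te(x, y, t):
    """Plug-in transfer entropy X->Y at time t from trial matrices x, y
    of shape (n_trials, n_samples), binarized at the median."""
    xb = (x > np.median(x)).astype(int)
    yb = (y > np.median(y)).astype(int)
    triples = Counter(zip(yb[:, t + 1], yb[:, t], xb[:, t]))
    n = len(xb)
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n                                   # p(y_{t+1}, y_t, x_t)
        p_y0x0 = np.mean((yb[:, t] == y0) & (xb[:, t] == x0))
        p_y1_given_y0x0 = p_joint / p_y0x0
        p_y0 = np.mean(yb[:, t] == y0)
        p_y1y0 = np.mean((yb[:, t + 1] == y1) & (yb[:, t] == y0))
        p_y1_given_y0 = p_y1y0 / p_y0
        te += p_joint * np.log2(p_y1_given_y0x0 / p_y1_given_y0)
    return te

rng = np.random.default_rng(1)
x = rng.normal(size=(500, 100))          # 500 trials, 100 samples each
y = np.roll(x, 1, axis=1) + 0.5 * rng.normal(size=(500, 100))  # Y lags X
print(f"TE(X->Y) at t=50: {ensemble_te(x, y, 50):.3f} bits")
```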

5.
Genomic time series data generated by evolve-and-resequence (E&R) experiments offer a powerful window into the mechanisms that drive evolution. However, standard population genetic inference procedures do not account for sampling serially over time, and new methods are needed to make full use of modern experimental evolution data. To address this problem, we develop a Gaussian process approximation to the multi-locus Wright-Fisher process with selection over a time course of tens of generations. The mean and covariance structure of the Gaussian process are obtained by computing the corresponding moments in discrete-time Wright-Fisher models conditioned on the presence of a linked selected site. This enables our method to account for the effects of linkage and selection, both along the genome and across sampled time points, in an approximate but principled manner. We first use simulated data to demonstrate the power of our method to correctly detect and locate a selected allele from among several linked sites and to estimate its fitness. We study how this power changes with selection strength, initial haplotypic diversity, population size, sampling frequency, experimental duration, number of replicates, and sequencing coverage depth. In addition to providing quantitative estimates of selection parameters from experimental evolution data, our model can be used by practitioners to design E&R experiments with the requisite power. We also explore how our likelihood-based approach can be used to infer other model parameters, including effective population size and recombination rate. Finally, we apply our method to analyze genome-wide data from a real E&R experiment designed to study the adaptation of D. melanogaster to a new laboratory environment with alternating cold and hot temperatures.
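A minimal sketch of the process being approximated: forward simulation of allele-frequency trajectories under the discrete-time Wright-Fisher model with genic selection. Parameter values are illustrative, not those of the study.

```python
# Sketch: forward simulation of allele-frequency trajectories under the
# discrete-time Wright-Fisher model with selection, the process the paper
# approximates with a Gaussian process. Parameters are illustrative.
import numpy as np

def wright_fisher(p0, s, N, generations, rng):
    """Trajectory of an allele with initial frequency p0 and selection
    coefficient s in a population of N diploid individuals."""
    traj = [p0]
    p = p0
    for _ in range(generations):
        # Selection shifts the expected frequency before binomial drift.
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        p = rng.binomial(2 * N, p_sel) / (2 * N)
        traj.append(p)
    return np.array(traj)

rng = np.random.default_rng(42)
for rep in range(3):  # three experimental replicates
    traj = wright_fisher(p0=0.1, s=0.05, N=1000, generations=60, rng=rng)
    print(f"replicate {rep}: final frequency {traj[-1]:.3f}")
```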

6.
7.
8.
A recently proposed methodology called the Horizontal Visibility Graph (HVG) [Luque et al., Phys. Rev. E 80, 046103 (2009)], which constitutes a geometrical simplification of the well-known Visibility Graph algorithm [Lacasa et al., Proc. Natl. Acad. Sci. U.S.A. 105, 4972 (2008)], has been used to study the distinction between deterministic and stochastic components in time series [L. Lacasa and R. Toral, Phys. Rev. E 82, 036120 (2010)]. Specifically, the authors propose that the node degree distribution of these processes follows an exponential of the form P(k) ~ exp(-λk), in which k is the node degree and λ is a positive parameter able to distinguish between deterministic (chaotic) and stochastic (uncorrelated and correlated) dynamics. In this work, we investigate the characteristics of the node degree distributions constructed by using the HVG for time series corresponding to chaotic maps, two chaotic flows, and different stochastic processes. We thoroughly study the methodology proposed by Lacasa and Toral, finding several cases for which their hypothesis is not valid. We propose a methodology that uses the HVG together with Information Theory quantifiers. An extensive and careful analysis of the node degree distributions obtained by applying the HVG allows us to conclude that the Fisher-Shannon information plane is a remarkable tool able to graphically represent the different nature, deterministic or stochastic, of the systems under study.
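A short sketch of the quantity under discussion, assuming the textbook HVG definition: build the graph, tabulate node degrees, and fit λ from the log of the empirical degree distribution. An i.i.d. uniform series is used because its HVG degree distribution is known exactly, P(k) = (1/3)(2/3)^(k-2), i.e. λ = ln(3/2) [Luque et al., 2009].

```python
# Sketch: build the Horizontal Visibility Graph of a series, tabulate node
# degrees, and fit the exponential form P(k) ~ exp(-lambda * k).
import numpy as np

def hvg_degrees(x):
    """Node degrees of the HVG: i and j are linked iff every sample
    strictly between them lies below min(x[i], x[j])."""
    n = len(x)
    deg = np.zeros(n, dtype=int)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if i + 1 == j or x[i + 1:j].max() < min(x[i], x[j]):
                deg[i] += 1
                deg[j] += 1
            if x[j] >= x[i]:   # nothing past a bar at least as tall as x[i]
                break
    return deg

rng = np.random.default_rng(0)
deg = hvg_degrees(rng.random(4000))      # i.i.d. uniform test series

ks, counts = np.unique(deg, return_counts=True)
mask = counts >= 5                       # drop noisy tail bins before fitting
lam = -np.polyfit(ks[mask], np.log(counts[mask] / counts.sum()), 1)[0]
print(f"fitted lambda = {lam:.3f}   (theory: ln(3/2) = {np.log(1.5):.3f})")
```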

9.
Similarity measures for time series (cited 1 time: 0 self-citations, 1 by others)
A time series is a set of observations ordered in time; in bioinformatics, both DNA sequences and gene expression data can be treated as time-series data. A key step in time-series analysis is characterizing the similarity between two time series, or between subsequences, for tasks such as sequence alignment. Similarity measures are fundamental to time-series research and directly affect the efficiency and accuracy of downstream computations such as querying and clustering; they have important applications in the analysis of high-throughput microarray data and in gene network construction. The topic has attracted wide attention, and a large body of work builds on the Euclidean distance. This paper reviews time-series similarity measures based on Euclidean distance and dynamic time warping, together with progress in related areas, and may serve as a reference for further research.
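A compact sketch of the two families of measures the review covers: Euclidean distance (lock-step, equal lengths) and dynamic time warping (elastic alignment). The DTW below is the textbook O(nm) dynamic program, not an optimized implementation.

```python
# Sketch: Euclidean distance vs. dynamic time warping (DTW) for two series.
import numpy as np

def euclidean(a, b):
    """Lock-step distance; assumes equal-length series."""
    return np.sqrt(np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def dtw(a, b):
    """DTW distance: minimal cumulative |a_i - b_j| cost over monotone
    alignments of the two series (textbook O(nm) dynamic program)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = [1.0, 2.0, 3.0, 2.0, 1.0]
b = [1.0, 1.0, 2.0, 3.0, 2.0]            # same shape, shifted in time
print(euclidean(a, b), dtw(a, b))        # DTW is smaller: it absorbs the shift
```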

10.
DQ-FIT and CV-SORT have been developed to facilitate the automatic analysis of data sampled by radiotelemetry, but they can also be used with other data sampled in chronobiological settings. After import of data, DQ-FIT performs conventional linear analysis as well as rhythm analysis according to user-defined specifications. Linear analysis includes calculation of mean values, load values (percentage of values above a defined limit), highest and lowest readings, and areas under the (parameter-time) curve (AUC). All of these parameters are calculated for the total sampling interval and for user-defined day and night periods. Rhythm analysis is performed by fitting partial Fourier series with up to six harmonics. The contribution of each harmonic to the overall variation of the data is tested statistically; only those components that contribute significantly are included in the best-fit function. Parameters calculated in DQ-FIT's rhythm analysis include the mesor, amplitudes, and acrophases of all rhythmic components; the significance and percentage rhythm of the combined best fit; and the maximum and minimum of the fitted curve together with the times of their occurrence. In addition, DQ-FIT uses the first derivative of the fitted curve (i.e., its slope) to determine the time and extent of maximal increases and decreases within the total sampling interval or within user-defined intervals of interest, such as the times of lights on or off. CV-SORT can be used to create tables or graphs from groups of data sets analyzed by DQ-FIT. Graphs are created in CV-SORT by calculating group mean profiles from the individual best-fit curves rather than from their curve parameters; this approach allows the user to combine data sets that differ in the number and/or period length of the included harmonics. In conclusion, DQ-FIT and CV-SORT can be helpful in the analysis of time-dependent data sampled by telemetry or other monitoring systems. The software can be obtained on request by every interested researcher. (Chronobiology International, 14(6), 561-574, 1997)
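The heart of such a rhythm analysis can be sketched as an ordinary least-squares fit of a partial Fourier series with a 24 h fundamental, from which mesor, amplitude, and acrophase are read off. DQ-FIT's statistical selection of significant harmonics is omitted here, and the blood-pressure-like data are simulated placeholders.

```python
# Sketch: least-squares fit of a partial Fourier series (24 h fundamental,
# two harmonics) to simulated telemetry data; reads off mesor, amplitude,
# and acrophase. Harmonic significance testing, as in DQ-FIT, is omitted.
import numpy as np

t = np.arange(0, 72, 0.25)               # 72 h sampled every 15 min
rng = np.random.default_rng(7)
bp = 100 + 12 * np.cos(2 * np.pi * (t - 16) / 24) + rng.normal(0, 3, t.size)

def fit_fourier(t, y, period=24.0, n_harmonics=2):
    """OLS fit of mesor + cos/sin pairs; returns coefficients and design."""
    cols = [np.ones_like(t)]
    for h in range(1, n_harmonics + 1):
        w = 2 * np.pi * h / period
        cols += [np.cos(w * t), np.sin(w * t)]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta, X

beta, X = fit_fourier(t, bp)
mesor = beta[0]
amp1 = np.hypot(beta[1], beta[2])                      # first-harmonic amplitude
peak = (np.arctan2(beta[2], beta[1]) * 24 / (2 * np.pi)) % 24   # acrophase (h)
print(f"mesor {mesor:.1f}, amplitude {amp1:.1f}, peak at {peak:.1f} h")
```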

11.
A method of heart rate variability analysis based on graph theory is proposed. The main parameters of the heart rate graph structure were determined and analyzed using models of harmonic oscillations, white noise, and various functional tests (including controlled respiration and mental load). The usefulness of the heart rate graph parameters for diagnosing certain functional states was considered, and the correlation of the graph structure parameters with the frequency characteristics of heart rate variability was studied. A general model of changes in the heart rate graph structure parameters at different levels of mental activity was constructed in terms of entropy changes.

12.
13.
14.
In recent years, recommender systems have become an effective method for coping with information overload. However, recommendation technology still suffers from many problems, one of which is shilling attacks: attackers inject spam user profiles to disturb the list of recommended items. All types of shilling attacks share two characteristics: 1) item abnormality, in that target items are always given the maximum or minimum rating; and 2) attack promptness, in that attack profiles are injected within a very short period of time. Some papers have proposed item anomaly detection methods based on these two characteristics, but their detection rate, false alarm rate, and generality need further improvement. To address these problems, this paper proposes an item anomaly detection method based on dynamic partitioning of time series. The method first dynamically partitions item-rating time series based on important points, and then uses the chi-square distribution (χ²) to detect abnormal intervals. Experimental results on the MovieLens 100K and 1M data sets indicate that this approach achieves a high detection rate and a low false alarm rate and remains stable across different attack models and filler sizes.
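A toy version of the detection step, assuming fixed-length windows in place of the paper's important-point partitioning: each window's rating histogram is tested against the item's overall distribution with a chi-square test, so a burst of extreme ratings stands out. The rating stream and the injected attack are simulated placeholders.

```python
# Sketch: chi-square screening of rating windows for shilling bursts.
# Fixed windows stand in for the paper's dynamic important-point partitioning.
import numpy as np
from scipy.stats import chisquare

rng = np.random.default_rng(3)
ratings = rng.choice([1, 2, 3, 4, 5], size=300, p=[0.1, 0.2, 0.4, 0.2, 0.1])
ratings[200:230] = 5                     # injected push-attack burst

global_p = np.bincount(ratings, minlength=6)[1:] / ratings.size

for start in range(0, 300, 30):          # fixed 30-rating windows
    window = ratings[start:start + 30]
    obs = np.bincount(window, minlength=6)[1:]
    stat, pval = chisquare(obs, f_exp=global_p * window.size)
    flag = "  <-- abnormal" if pval < 0.01 else ""
    print(f"window {start:3d}-{start + 29}: p = {pval:.4f}{flag}")
```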

15.
Objective: To review the key themes in research on multidisciplinary tumor treatment (MDT) teams and to analyze research hotspots and trends. Methods: CiteSpace was applied to visually analyze the research on multidisciplinary tumor treatment teams. Results: A total of 2,160 relevant papers were retrieved, comprising 10 key papers, 15 main topic clusters, and 29 burst terms. Conclusion: Research on tumor MDTs focuses mainly on the construction and operation of multidisciplinary teams, their adoption, and the evaluation of their effects. Research frontiers such as improving implementation efficiency, cost-effectiveness analysis, and randomized controlled studies offer a new perspective for the research and development of multidisciplinary tumor treatment teams in China.

16.
The analysis of signals consisting of discrete and irregular data causes methodological problems for Fourier spectral analysis: since it is based on sinusoidal functions, rectangular signals with unequal periodicities cannot easily be replicated. Walsh spectral analysis is based on the so-called "Walsh functions", a complete set of orthonormal rectangular waves, and thus seems to be the method of choice for analysing signals consisting of binary or ordinal data. This paper compares Walsh spectral analysis and Fourier spectral analysis on the basis of simulated and real binary data sets of various lengths. Simulated data were derived from signals with defined cyclic patterns, corrupted by randomly generated noise signals of the same length. The Walsh and Fourier spectra of each set were determined, and up to 25% of the periodogram coefficients were used as input for an inverse transform. The mean square approximation error (MSE) was calculated for each of the series in order to compare the goodness of fit between the original and the reconstructed signal. The same procedure was performed with real data derived from a behavioral observation in pigs. The comparison of the two methods revealed that, in the analysis of discrete and binary time series, Walsh spectral analysis is the more appropriate method if the time series is rather short. As the length of the signal increases, the difference between the two methods becomes less substantial.
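The comparison can be sketched as follows, assuming a synthetic binary square wave in place of the pig-behaviour data: reconstruct the signal from the largest 25% of its Walsh-Hadamard and Fourier coefficients, then compare mean square errors. Note that scipy's hadamard matrix is in natural rather than sequency ("Walsh") order, which does not affect a magnitude-based coefficient selection.

```python
# Sketch: keep the largest 25% of Walsh-Hadamard vs. Fourier coefficients of
# a noisy binary square wave and compare reconstruction MSEs. The signal is
# a simulated placeholder, not the behavioural data from the paper.
import numpy as np
from scipy.linalg import hadamard

n = 256                                   # length must be a power of two
t = np.arange(n)
clean = ((t // 16) % 2).astype(float)     # rectangular cycle, period 32
rng = np.random.default_rng(5)
noisy = np.where(rng.random(n) < 0.1, 1 - clean, clean)   # 10% bit flips

def top_k_reconstruction_mse(y, forward, inverse, k):
    """Keep the k largest-magnitude coefficients, invert, return MSE."""
    coeffs = forward(y)
    idx = np.argsort(np.abs(coeffs))[:-k]  # indices of all but the top k
    coeffs[idx] = 0
    return np.mean((inverse(coeffs).real - y) ** 2)

H = hadamard(n).astype(float)             # H @ H = n * I, so inverse is H/n
k = n // 4                                # keep 25% of the coefficients
mse_walsh = top_k_reconstruction_mse(noisy, lambda y: H @ y / n,
                                     lambda c: H @ c, k)
mse_fourier = top_k_reconstruction_mse(noisy, np.fft.fft, np.fft.ifft, k)
print(f"MSE  Walsh: {mse_walsh:.4f}   Fourier: {mse_fourier:.4f}")
```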

17.

18.
《IRBM》2022,43(4):309-316
Objectives: This study aimed to investigate whether DistEn is capable of identifying complexity or irregularity in gait data, and whether it shows low sensitivity to parameter choices in comparison with Approximate Entropy (ApEn) and Sample Entropy (SampEn). Material and methods: The data were divided into three groups according to gait maturation. First, the mean amplitude histogram, standard deviation (SD), and power spectrum were calculated for each group. Second, the ApEn, SampEn, and DistEn algorithms were computed, and statistical analyses were performed to compare the groups. Results: For m=3 with M=256 and M=512, DistEn showed statistically significant differences in pairwise comparisons between all groups (Pa, Pb, and Pc < 0.05), and DistEn consistently decreased from Group 1 to Group 2 to Group 3. For m=2 with r=0.30, SampEn showed a statistically significant difference only between Group 1 and Group 3 (Pb < 0.05). For m=3 with r=0.30, SampEn also showed statistically significant differences between Group 1 and Group 3 (Pc < 0.05) and between Group 2 and Group 3 (Pc < 0.05); SampEn increased from Group 1 to Group 3 and from Group 2 to Group 3. ApEn showed no statistically significant differences in any pairwise comparison. Furthermore, DistEn showed less parameter dependency than ApEn and SampEn. Conclusion: DistEn showed the best performance in capturing changes in the complexity of gait patterns with growth.
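A sketch of DistEn following the published definition (Li et al., 2015): embed the series in m dimensions, collect all pairwise Chebyshev distances, histogram them into M bins, and normalise the Shannon entropy of that histogram to [0, 1]. The gait-like series below is a simulated placeholder.

```python
# Sketch of Distribution Entropy (DistEn): normalised Shannon entropy of the
# empirical distribution of pairwise Chebyshev distances between embedded
# vectors. The input series here is simulated, not the study's gait data.
import numpy as np

def dist_en(x, m=3, M=512):
    x = np.asarray(x, dtype=float)
    # Embedding: vectors of m consecutive samples.
    emb = np.lib.stride_tricks.sliding_window_view(x, m)
    n = len(emb)
    # Chebyshev distance between every pair of embedded vectors.
    d = np.abs(emb[:, None, :] - emb[None, :, :]).max(axis=2)
    d = d[np.triu_indices(n, k=1)]
    # Empirical distance distribution on M bins, then normalised entropy.
    p, _ = np.histogram(d, bins=M)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p)) / np.log2(M)

rng = np.random.default_rng(9)
stride = np.sin(np.arange(500) * 2 * np.pi / 10) + 0.2 * rng.normal(size=500)
print(f"DistEn(m=3, M=512) = {dist_en(stride):.3f}")
```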

19.
A preliminary study of litterfall dynamics in a Chinese fir forest in Wuyishan (cited 14 times: 0 self-citations, 14 by others)
The fluctuating-type time series method was used to model the monthly litterfall dynamics of the Chinese fir forest in the Wuyishan National Nature Reserve. The results were satisfactory, showing that fluctuating-type time series analysis can be applied to the simulation of forest litterfall dynamics.

20.

Background

Carpal tunnel release (CTR) is among the most common hand surgeries, although little is known about its patterns of use. In this study, we aimed to investigate temporal trends, age and gender variation, and current practice patterns in CTR surgery.

Methods

We conducted a population-based time series analysis of residents of Ontario (population over 13 million) who underwent operative management for carpal tunnel syndrome (CTS) from April 1, 1992 to March 31, 2010, using administrative claims data.

Results

The primary analysis revealed a fairly stable procedure rate of approximately 10 patients per 10,000 population per year receiving CTRs, without any significant, consistent temporal trend (p = 0.94). Secondary analyses revealed different trends in procedure rates according to age. The annual procedure rate among those aged >75 years increased from 22 per 10,000 population at the beginning of the study period to over 26 patients per 10,000 population by the end of the study period (p<0.01). CTR surgical procedures were approximately two-fold more common among females than males (64.9% vs. 35.1%, respectively; p<0.01). Lastly, CTR procedures are increasingly being conducted in the outpatient setting while procedures in the inpatient setting have been declining steadily: the proportion of procedures performed in the outpatient setting increased from 13% to over 30% by 2010 (p<0.01).

Conclusion

Overall, CTR surgical procedures are conducted at a rate of approximately 10 patients per 10,000 population annually, with significant variation with respect to age and gender. CTR surgical procedures in ambulatory-care facilities may soon outpace procedure rates in the in-hospital setting.
