Similar Articles
20 similar articles retrieved.
1.
Summary: We introduce a nearly automatic procedure to locate and count the quantum dots in images of kinesin motor assays. Our procedure employs an approximate likelihood estimator based on a two-component mixture model for the image data; the first component has a normal distribution, and the other component is distributed as a normal random variable plus an exponential random variable. The normal component has an unknown variance, which we model as a function of the mean. We use B-splines to estimate the variance function during a training run on a suitable image, and the estimate is used to process subsequent images. Parameter estimates are generated for each image along with estimates of standard errors, and the number of dots in the image is determined using an information criterion and likelihood ratio tests. Realistic simulations show that our procedure is robust and that it leads to accurate estimates, both of parameters and of standard errors.
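A minimal sketch of the two-component mixture idea described above, not the authors' code: background pixels follow a normal distribution and dot pixels a normal-plus-exponential (exponentially modified Gaussian) distribution, fit by maximum likelihood. The B-spline variance function and the model-selection step are omitted, and all parameter values are illustrative.

```python
# Sketch only: two-component mixture, background ~ Normal, dots ~ normal + exponential.
import numpy as np
from scipy import stats, optimize

def neg_log_lik(theta, y):
    logit_w, mu, log_sigma, log_rate = theta
    w = 1.0 / (1.0 + np.exp(-logit_w))            # mixing weight of the dot component, in (0, 1)
    sigma, rate = np.exp(log_sigma), np.exp(log_rate)
    bg = stats.norm.pdf(y, loc=mu, scale=sigma)   # background component
    # scipy's exponnorm parameterizes the normal-plus-exponential by K = 1/(sigma*rate)
    dot = stats.exponnorm.pdf(y, K=1.0 / (sigma * rate), loc=mu, scale=sigma)
    return -np.sum(np.log(w * dot + (1.0 - w) * bg + 1e-300))

y = np.random.default_rng(0).normal(100.0, 5.0, 5000)   # stand-in pixel intensities
fit = optimize.minimize(neg_log_lik, x0=[-2.0, 100.0, np.log(5.0), np.log(0.1)],
                        args=(y,), method="Nelder-Mead")
print(fit.x)
```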

2.
For multicenter randomized trials or multilevel observational studies, the Cox regression model has long been the primary approach to study the effects of covariates on time-to-event outcomes. A critical assumption of the Cox model is the proportionality of the hazard functions for modeled covariates, violations of which can result in ambiguous interpretations of the hazard ratio estimates. To address this issue, the restricted mean survival time (RMST), defined as the mean survival time up to a fixed time in a target population, has been recommended as a model-free target parameter. In this article, we generalize the RMST regression model to clustered data by directly modeling the RMST as a continuous function of restriction times with covariates while properly accounting for within-cluster correlations to achieve valid inference. The proposed method estimates regression coefficients via weighted generalized estimating equations, coupled with a cluster-robust sandwich variance estimator to achieve asymptotically valid inference with a sufficient number of clusters. In small-sample scenarios where a limited number of clusters are available, however, the proposed sandwich variance estimator can exhibit negative bias in capturing the variability of regression coefficient estimates. To overcome this limitation, we further propose and examine bias-corrected sandwich variance estimators to reduce the negative bias of the cluster-robust sandwich variance estimator. We study the finite-sample operating characteristics of proposed methods through simulations and reanalyze two multicenter randomized trials.
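For orientation, a hedged sketch of the target parameter only: the RMST at a single restriction time tau is the area under the Kaplan-Meier curve up to tau. The weighted GEE regression over restriction times and the cluster-robust and bias-corrected variance estimators proposed above are not shown; the data below are simulated.

```python
# Sketch: RMST(tau) as the area under a hand-rolled Kaplan-Meier estimate of S(t).
import numpy as np

def rmst(time, event, tau):
    order = np.argsort(time)
    time, event = time[order], event[order]
    at_risk = len(time)
    surv, t_prev, area = 1.0, 0.0, 0.0
    for t, d in zip(time, event):
        if t > tau:
            break
        area += surv * (t - t_prev)        # rectangle under the current step
        if d:                              # event: the survival curve drops
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1
        t_prev = t
    area += surv * (tau - t_prev)          # last piece up to tau
    return area

rng = np.random.default_rng(1)
t = rng.exponential(10.0, 200)             # simulated event times
c = rng.exponential(15.0, 200)             # simulated censoring times
print(rmst(np.minimum(t, c), (t <= c).astype(int), tau=12.0))
```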

3.
The goal of this study was to identify which muscle activation patterns and gait features best predict the metabolic cost of inclined walking. We measured muscle activation patterns, joint kinematics and kinetics, and metabolic cost in sixteen subjects during treadmill walking at inclines of 0%, 5%, and 10%. Multivariate regression models were developed to predict the net metabolic cost from selected groups of the measured variables. A linear regression model including incline and the squared integrated electromyographic signals of the soleus and vastus lateralis explained 96% of the variance in metabolic cost, suggesting that the activation patterns of these large muscles have a high predictive value for metabolic cost. A regression model including only the peak knee flexion angle during stance phase, peak knee extension moment, peak ankle plantarflexion moment, and peak hip flexion moment explained 89% of the variance in metabolic cost; this finding indicates that kinematics and kinetics alone can predict metabolic cost during incline walking. The ability of these models to predict metabolic cost from muscle activation patterns and gait features points the way toward future work aimed at predicting metabolic cost when gait is altered by changes in neuromuscular control or the use of an assistive technology.
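A hedged sketch of the kind of regression reported above: net metabolic cost regressed on incline plus the squared integrated EMG of two muscles. The arrays and coefficients are placeholders, not the study's data.

```python
# Sketch: ordinary least squares on [incline, iEMG_soleus^2, iEMG_vastus^2].
import numpy as np

rng = np.random.default_rng(0)
n = 48                                             # e.g., 16 subjects x 3 inclines
incline = np.repeat([0.0, 5.0, 10.0], n // 3)
iemg_sol = rng.uniform(0.2, 1.0, n)                # integrated EMG, arbitrary units
iemg_vl = rng.uniform(0.2, 1.0, n)
cost = 2.5 + 0.35 * incline + 1.2 * iemg_sol**2 + 0.8 * iemg_vl**2 + rng.normal(0, 0.2, n)

X = np.column_stack([np.ones(n), incline, iemg_sol**2, iemg_vl**2])
beta, *_ = np.linalg.lstsq(X, cost, rcond=None)
resid = cost - X @ beta
r2 = 1.0 - resid.var() / cost.var()
print("coefficients:", np.round(beta, 3), "R^2:", round(r2, 3))
```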

4.
Propensity score methods are used to estimate a treatment effect with observational data. This paper considers the formation of propensity score subclasses by investigating different methods for determining subclass boundaries and the number of subclasses used. We compare several methods, including subclasses formed by balancing a summary of the observed information matrix and equal-frequency subclasses. Subclasses that balance the inverse variance of the treatment effect reduce the mean squared error of the estimates and maximize the number of usable subclasses.
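A minimal sketch of the equal-frequency comparison method named above: quantile-based subclass boundaries on the propensity score and a stratified treatment-effect estimate. The inverse-variance-balancing variant is not shown, and the propensity scores are simulated rather than estimated from real data.

```python
# Sketch: equal-frequency propensity-score subclassification.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
ps = rng.uniform(0.05, 0.95, n)                    # fitted propensity scores (placeholder)
treat = rng.binomial(1, ps)
y = 1.0 + 0.5 * treat + 2.0 * ps + rng.normal(0, 1, n)

k = 5                                              # number of subclasses
edges = np.quantile(ps, np.linspace(0, 1, k + 1))  # equal-frequency boundaries
sub = np.clip(np.searchsorted(edges, ps, side="right") - 1, 0, k - 1)

effects, weights = [], []
for s in range(k):
    m = sub == s
    if treat[m].sum() and (1 - treat[m]).sum():    # need both arms in the subclass
        effects.append(y[m][treat[m] == 1].mean() - y[m][treat[m] == 0].mean())
        weights.append(m.sum())
print("stratified estimate:", np.average(effects, weights=weights))
```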

5.
ESTIMATED POPULATION SIZE OF THE CALIFORNIA GRAY WHALE
Abstract: The 1987-1988 counts of gray whales passing Monterey are reanalyzed to provide a revised population size estimate. The double count data are modeled using iterative logistic regression to allow for the effects of various covariates on probability of detection, and a correction factor is introduced for night rate of travel. The revised absolute population size estimate is 20,869 animals, with CV = 4.37% and 95% confidence interval (19,200, 22,700). In addition, the series of relative population size estimates from 1967-1968 to 1987-1988 is scaled to pass through this estimate and modeled to provide variance estimates from interannual variation in population size estimates. This method yields an alternative population size estimate for 1987-1988 of 21,296 animals, with CV = 6.05% and 95% confidence interval (18,900, 24,000). The average annual rate of increase between 1967-1968 and 1987-1988 was estimated to be 3.29% with standard error 0.44%.
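A loose illustration, not the paper's estimator: logistic regression of detection probability on covariates from double-observer data, followed by a Horvitz-Thompson-style correction of the observed count. The covariates and data are invented.

```python
# Sketch: covariate-dependent detection probability and count correction by 1/p_hat.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 400
visibility = rng.uniform(0, 1, n)                  # hypothetical covariate
pod_size = rng.integers(1, 6, n)
p_true = 1 / (1 + np.exp(-(-0.5 + 2.0 * visibility + 0.4 * pod_size)))
detected = rng.binomial(1, p_true)                 # detected by the second team?

X = np.column_stack([visibility, pod_size])
model = LogisticRegression().fit(X, detected)
p_hat = model.predict_proba(X)[:, 1]

observed = detected.sum()
corrected = np.sum(detected / np.clip(p_hat, 1e-6, None))   # inflate detections by 1/p_hat
print(observed, round(corrected, 1))
```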

6.
As a first step towards developing a dynamic model of the rat hindlimb, we measured muscle attachment and joint center coordinates relative to bony landmarks using stereophotogrammetry. Using these measurements, we analyzed muscle moment arms as functions of joint angle for most hindlimb muscles, and tested the hypothesis that postural change alone is sufficient to alter the function of selected muscles of the leg. We described muscle attachment sites as second-order curves. The length of the fit parabola and residual errors in the orthogonal directions give an estimate of muscle attachment sizes, which are consistent with observations made during dissection. We modeled each joint as a moving point dependent on joint angle; relative endpoint errors of less than 7% indicate that this method is accurate. Most muscles have moment arms with a large range across the physiological domain of joint angles, but their moment arms peak and vary little within the locomotion domain. The small variation in moment arms during locomotion potentially simplifies the neural control requirements during this phase. The moment arms of a number of muscles cross zero as angle varies within the quadrupedal locomotion domain, indicating they are intrinsically stabilizing. However, in the bipedal locomotion domain, the moment arms of these muscles do not cross zero and thus are no longer intrinsically stabilizing. We found that muscle function is largely determined by the change in moment arm with joint angle, particularly the transition from quadrupedal to bipedal posture, which may alter an intrinsically stabilizing arrangement or change the control burden.
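A sketch of the tendon-excursion definition of a moment arm, r(theta) = -dL/dtheta, evaluated numerically for a toy planar muscle path. The geometry and sign convention below are invented for illustration and are not the rat hindlimb model described above.

```python
# Sketch: moment arm as the (negative) derivative of musculotendon length w.r.t. joint angle.
import numpy as np

def muscle_length(theta):
    """Straight-line muscle from a fixed origin to an insertion that rotates with the joint."""
    origin = np.array([0.0, 0.05])                 # m, fixed on the proximal segment
    insertion_local = np.array([0.03, 0.0])        # m, fixed on the rotating distal segment
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.linalg.norm(rot @ insertion_local - origin)

def moment_arm(theta, h=1e-5):
    return -(muscle_length(theta + h) - muscle_length(theta - h)) / (2 * h)

for a in np.linspace(0.0, np.pi / 2, 5):
    print(f"theta = {np.degrees(a):5.1f} deg, moment arm = {moment_arm(a) * 1000:6.2f} mm")
```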

7.
We present a new algorithm to estimate the hemodynamic response function (HRF) and drift components of fMRI data in the wavelet domain. The HRF is modeled by both parametric and nonparametric models. The functional magnetic resonance imaging (fMRI) noise is modeled as fractional Brownian motion (fBm). The HRF parameters are estimated in the wavelet domain by exploiting the property that wavelet transforms with a sufficient number of vanishing moments decorrelate an fBm process. Using this property, the noise covariance matrix in the wavelet domain can be assumed to be diagonal, with entries estimated using the sample variance estimator at each scale. We study the influence of the sampling rate of the fMRI time series and the shape assumption of the HRF on the estimation performance. Results are presented by adding synthetic HRFs to simulated and null fMRI data. We also compare these methods with an existing method (1), where correlated fMRI noise is modeled by second-order polynomial functions.
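A sketch of the diagonal-covariance idea described above: a wavelet with enough vanishing moments approximately decorrelates 1/f-like (fBm) noise, so the noise variance at each scale can be taken as the sample variance of that scale's detail coefficients. The series is synthetic, and PyWavelets is assumed to be available.

```python
# Sketch: scale-wise noise variance from wavelet detail coefficients.
import numpy as np
import pywt

rng = np.random.default_rng(0)
white = rng.normal(size=1024)
noise = np.cumsum(white) / np.sqrt(np.arange(1, 1025))   # crude correlated stand-in for fMRI noise

coeffs = pywt.wavedec(noise, "db4", level=5)             # [cA5, cD5, cD4, ..., cD1]
for j, d in enumerate(coeffs[1:], start=1):
    level = len(coeffs) - j
    print(f"detail level {level}: sample variance = {np.var(d):.4f}")
```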

8.

Neuromusculoskeletal models are a powerful tool to investigate the internal biomechanics of an individual. However, commonly used neuromusculoskeletal models are generated via linear scaling of generic templates derived from elderly adult anatomies and poorly represent a child, let alone children with a neuromuscular disorder whose musculoskeletal structures and muscle activation patterns are profoundly altered. Model personalization can capture abnormalities and appropriately describe the underlying (altered) biomechanics of an individual. In this work, we explored the effect of six different levels of neuromusculoskeletal model personalization on estimates of muscle forces and knee joint contact forces to tease out the importance of model personalization for normal and abnormal musculoskeletal structures and muscle activation patterns. For six children, with and without cerebral palsy, generic scaled models were developed and progressively personalized by (1) tuning and calibrating musculotendon units’ parameters, (2) implementing an electromyogram-assisted approach to synthesize muscle activations, and (3) replacing generic anatomies with image-based bony geometries, and physiologically and physically plausible muscle kinematics. Biomechanical simulations of gait were performed in the OpenSim and CEINMS software on ten overground walking trials per participant. A mixed-ANOVA test, with Bonferroni corrections, was conducted to compare all models’ estimates. The model with the highest level of personalization produced the most physiologically plausible estimates. Model personalization is crucial to produce physiologically plausible estimates of internal biomechanical quantities. In particular, personalization of musculoskeletal anatomy and muscle activation patterns had the largest effect overall. Increased research efforts are needed to ease the creation of personalized neuromusculoskeletal models.


9.
Important activities of daily living, like walking and stair climbing, may be impaired by muscle weakness. In particular, quadriceps weakness is common in populations such as those with knee osteoarthritis (OA) and following ACL injury and may be a result of muscle atrophy or reduced voluntary muscle activation. While weak quadriceps have been strongly correlated with functional limitations in these populations, the important cause–effect relationships between abnormal lower extremity muscle function and patient function remain unknown. As a first step towards determining those relationships, the purpose of this study was to estimate changes in muscle forces and contributions to support and progression to maintain normal gait in response to two sources of quadriceps weakness: atrophy and activation failure. We used muscle-driven simulations to track normal gait kinematics in healthy subjects and applied simulated quadriceps weakness as atrophy and activation failure to evaluate compensation patterns associated with the individual sources of weakness. We found that the gluteus maximus and soleus muscles display the greatest ability to compensate for simulated quadriceps weakness. Also, by simulating two different causes of muscle weakness, this model suggested different compensation strategies by the lower extremity musculature in response to atrophy and activation deficits. Estimating the compensation strategies that are necessary to maintain normal gait will enable investigations of the role of muscle weakness in abnormal gait and inform potential rehabilitation strategies to improve such conditions.
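A hedged sketch of the two ways of imposing weakness distinguished above, applied to a toy Hill-type muscle: "atrophy" scales the maximum isometric force, while "activation failure" caps the neural activation. The numbers are illustrative, not values from the study.

```python
# Sketch: atrophy vs. activation failure in a simplified Hill-type muscle (passive force omitted).
import numpy as np

def muscle_force(activation, f_max=3000.0, fl=1.0, fv=1.0):
    """Active force = activation * max isometric force * force-length * force-velocity factors."""
    return activation * f_max * fl * fv

a = np.linspace(0, 1, 5)                           # commanded activation
normal   = muscle_force(a)
atrophy  = muscle_force(a, f_max=0.6 * 3000.0)     # 40% loss of force-generating capacity
act_fail = muscle_force(np.minimum(a, 0.6))        # activation cannot exceed 60%

for ai, fn, fa, ff in zip(a, normal, atrophy, act_fail):
    print(f"a={ai:.2f}  normal={fn:7.1f} N  atrophy={fa:7.1f} N  activation failure={ff:7.1f} N")
```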

10.
11.
Zhang D, Lin X, Sowers M. Biometrics. 2000;56(1):31-39.
We consider semiparametric regression for periodic longitudinal data. Parametric fixed effects are used to model the covariate effects and a periodic nonparametric smooth function is used to model the time effect. The within-subject correlation is modeled using subject-specific random effects and a random stochastic process with a periodic variance function. We use maximum penalized likelihood to estimate the regression coefficients and the periodic nonparametric time function, whose estimator is shown to be a periodic cubic smoothing spline. We use restricted maximum likelihood to simultaneously estimate the smoothing parameter and the variance components. We show that all model parameters can be easily obtained by fitting a linear mixed model. A common problem in the analysis of longitudinal data is to compare the time profiles of two groups, e.g., between treatment and placebo. We develop a scaled chi-squared test for the equality of two nonparametric time functions. The proposed model and the test are illustrated by analyzing hormone data collected during two consecutive menstrual cycles and their performance is evaluated through simulations.
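A simplified sketch: a periodic time effect represented with a low-order Fourier basis inside a linear model, alongside a parametric covariate effect. This stands in for the periodic cubic smoothing spline and mixed-model machinery described above and ignores the within-subject correlation structure; the data are simulated.

```python
# Sketch: fixed covariate effect plus a periodic (Fourier-basis) time effect.
import numpy as np

rng = np.random.default_rng(0)
t = rng.uniform(0, 1, 300)                         # time within cycle, scaled to [0, 1)
x = rng.normal(size=300)                           # a fixed-effect covariate
y = 0.8 * x + np.sin(2 * np.pi * t) + 0.3 * np.cos(4 * np.pi * t) + rng.normal(0, 0.3, 300)

def periodic_basis(t, k=3):
    cols = [np.ones_like(t)]
    for h in range(1, k + 1):
        cols += [np.sin(2 * np.pi * h * t), np.cos(2 * np.pi * h * t)]
    return np.column_stack(cols)

X = np.column_stack([periodic_basis(t), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated covariate effect:", round(beta[-1], 3))
```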

12.
The objective of this study was to develop an efficient methodology for generating muscle-actuated simulations of human walking that closely reproduce experimental measures of kinematics and ground reaction forces. We first introduce a residual elimination algorithm (REA) to compute pelvis and low back kinematic trajectories that ensure consistency between whole-body dynamics and measured ground reactions. We then use a computed muscle control (CMC) algorithm to vary muscle excitations to track experimental joint kinematics within a forward dynamic simulation. CMC explicitly accounts for delays in muscle force production resulting from activation and contraction dynamics while using a general static optimization framework to resolve muscle redundancy. CMC was used to compute muscle excitation patterns that drove a 21-degree-of-freedom, 92-muscle model to track experimental gait data of 10 healthy young adults. Simulated joint kinematics closely tracked experimental quantities (mean root-mean-squared errors generally less than 1 degree), and the time histories of muscle activations were similar to electromyographic recordings. A simulation of a half-cycle of gait could be generated using approximately 30 min of computer processing time. The speed and accuracy of REA and CMC make it practical to generate subject-specific simulations of gait.
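A sketch of the static-optimization step used inside CMC-like schemes to resolve muscle redundancy: choose activations that produce a required joint moment while minimizing the sum of squared activations. The moment arms and strengths are made up, and the feedback tracking plus activation/contraction dynamics of CMC are omitted.

```python
# Sketch: resolve muscle redundancy at one instant via constrained optimization.
import numpy as np
from scipy.optimize import minimize

r = np.array([0.04, 0.05, -0.03])                  # moment arms (m); sign gives extensor/flexor
f_max = np.array([3000.0, 2500.0, 2000.0])         # maximum isometric forces (N)
required_moment = 60.0                             # N*m to be produced at the joint

objective = lambda a: np.sum(a**2)                 # minimize summed squared activations
constraint = {"type": "eq", "fun": lambda a: r @ (a * f_max) - required_moment}
res = minimize(objective, x0=np.full(3, 0.2), bounds=[(0, 1)] * 3,
               constraints=[constraint], method="SLSQP")
print("activations:", np.round(res.x, 3))
```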

13.
MOTIVATION: The numerical values of gene expression measured using microarrays are usually presented to the biological end-user as summary statistics of spot pixel data, such as the spot mean, median and mode. Much of the subsequent data analysis reported in the literature, however, uses only one of these spot statistics. This results in sub-optimal estimates of gene expression levels and a need for improvement in quantitative spot variation surveillance. RESULTS: This paper develops a maximum-likelihood method for estimating gene expression using spot mean, variance and pixel number values available from typical microarray scanners. It employs a hierarchical model of variation between and within microarray spots. The hierarchical maximum-likelihood estimate (MLE) is shown to be a more efficient estimator of the mean than the 'conventional' estimate using solely the spot mean values (i.e. without spot variance data). Furthermore, under the assumptions of our model, the spot mean and spot variance are shown to be sufficient statistics that do not require the use of all pixel data. The hierarchical MLE method is applied to data from both Monte Carlo (MC) simulations and a two-channel dye-swapped spotted microarray experiment. The MC simulations show that the hierarchical MLE method leads to improved detection of differential gene expression, particularly when 'outlier' spots are present on the arrays. Compared with the conventional method, the MLE method applied to data from the microarray experiment leads to an increase in the number of differentially expressed genes detected for low cut-off P-values of interest.
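Not the paper's hierarchical MLE, but a simplified illustration of why spot variance and pixel number matter: replicate spot means can be combined with precision weights derived from each spot's pixel variance and pixel count (variance of a spot mean is roughly s^2 / n_pixels), which down-weights outlier-like spots. All numbers are invented.

```python
# Sketch: precision-weighted combination of spot means using spot variance and pixel count.
import numpy as np

spot_mean = np.array([10.2, 9.8, 10.6, 12.1])      # per-spot mean log-intensity
spot_var  = np.array([0.40, 0.35, 0.50, 3.00])     # per-spot pixel variance (last spot is outlier-like)
n_pixels  = np.array([80, 75, 82, 78])

w = n_pixels / spot_var                            # precision of each spot mean
pooled = np.sum(w * spot_mean) / np.sum(w)
se = np.sqrt(1.0 / np.sum(w))
print(f"precision-weighted expression = {pooled:.3f} +/- {se:.3f}")
print(f"unweighted mean of spot means = {spot_mean.mean():.3f}")
```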

14.
Previous neural field models have mostly been concerned with prediction of mean neural activity and with second-order quantities such as its variance, but without feedback of second-order quantities on the dynamics. Here the effects of feedback of the variance on the steady states and adiabatic dynamics of neural systems are calculated by using linear neural field theory to estimate the neural voltage variance, then including this quantity in the total variance parameter of the nonlinear firing rate-voltage response function, and thus in the determination of the fixed points and of the variance itself. The general results further clarify the limits of validity of approaches with and without inclusion of variance dynamics. Specific applications show that stability against a saddle-node bifurcation is reduced in a purely cortical system, but can be either increased or decreased in the corticothalamic case, depending on the initial state. Estimates of critical variance scalings near the saddle-node bifurcation are also found, including physiologically based normalizations and new scalings for the mean firing rate and the position of the bifurcation.

15.
Kirkpatrick M, Lofsvold D, Bulmer M. Genetics. 1990;124(4):979-993.
We present methods for estimating the parameters of inheritance and selection that appear in a quantitative genetic model for the evolution of growth trajectories and other "infinite-dimensional" traits that we recently introduced. Two methods for estimating the additive genetic covariance function are developed: a "full" model that fully fits the data and a "reduced" model that generates a smoothed estimate consistent with the sampling errors in the data. By decomposing the covariance function into its eigenvalues and eigenfunctions, it is possible to identify potential evolutionary changes in the population's mean growth trajectory for which there is (and those for which there is not) genetic variation. Algorithms for estimating these quantities and their confidence intervals, and for testing hypotheses about them, are developed. These techniques are illustrated by an analysis of early growth in mice. Compatible methods for estimating the selection gradient function acting on growth trajectories in natural or domesticated populations are presented. We show how the estimates of the additive genetic covariance function and the selection gradient function can be used to predict the evolutionary change in a population's mean growth trajectory.
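A sketch of the eigen-decomposition step described above: an additive genetic covariance function, evaluated on a grid of ages, decomposed into eigenvalues and eigenfunctions. The covariance matrix below is a made-up smooth example, not the mouse growth data.

```python
# Sketch: eigenvalues/eigenfunctions of a genetic covariance function on an age grid.
import numpy as np

ages = np.linspace(0, 1, 6)
# toy smooth, positive-definite covariance: nearby ages strongly genetically correlated
G = 0.5 * np.exp(-5.0 * (ages[:, None] - ages[None, :])**2) + 0.05 * np.eye(len(ages))

eigvals, eigvecs = np.linalg.eigh(G)
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]     # largest first

print("eigenvalues:", np.round(eigvals, 4))
print("share of genetic variance:", np.round(eigvals / eigvals.sum(), 3))
# Eigenfunctions with near-zero eigenvalues mark directions of trajectory change
# for which there is essentially no additive genetic variation.
```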

16.
The fate of scientific hypotheses often relies on the ability of a computational model to explain the data, quantified in modern statistical approaches by the likelihood function. The log-likelihood is the key element for parameter estimation and model evaluation. However, the log-likelihood of complex models in fields such as computational biology and neuroscience is often intractable to compute analytically or numerically. In those cases, researchers can often only estimate the log-likelihood by comparing observed data with synthetic observations generated by model simulations. Standard techniques to approximate the likelihood via simulation either use summary statistics of the data or are at risk of producing substantial biases in the estimate. Here, we explore another method, inverse binomial sampling (IBS), which can estimate the log-likelihood of an entire data set efficiently and without bias. For each observation, IBS draws samples from the simulator model until one matches the observation. The log-likelihood estimate is then a function of the number of samples drawn. The variance of this estimator is uniformly bounded, achieves the minimum variance for an unbiased estimator, and we can compute calibrated estimates of the variance. We provide theoretical arguments in favor of IBS and an empirical assessment of the method for maximum-likelihood estimation with simulation-based models. As case studies, we take three model-fitting problems of increasing complexity from computational and cognitive neuroscience. In all problems, IBS generally produces lower error in the estimated parameters and maximum log-likelihood values than alternative sampling methods with the same average number of samples. Our results demonstrate the potential of IBS as a practical, robust, and easy to implement method for log-likelihood evaluation when exact techniques are not available.
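A minimal sketch of IBS for a simulator with discrete responses: for each trial, draw from the simulator until a draw matches the observed response; with K draws needed, the per-trial log-likelihood estimate is the negative truncated harmonic sum -sum_{k=1}^{K-1} 1/k. The Bernoulli "simulator" and its parameter are invented stand-ins; a real application would plug in the actual model.

```python
# Sketch: inverse binomial sampling (IBS) log-likelihood estimate for a toy simulator.
import numpy as np

rng = np.random.default_rng(0)

def simulator(stimulus, theta):
    """Toy model: respond 1 with probability sigmoid(theta * stimulus)."""
    p = 1.0 / (1.0 + np.exp(-theta * stimulus))
    return rng.binomial(1, p)

def ibs_loglik(stimuli, responses, theta, max_draws=10_000):
    total = 0.0
    for s, r in zip(stimuli, responses):
        k = 1
        while simulator(s, theta) != r and k < max_draws:   # draw until a match (with a safety cap)
            k += 1
        total += -np.sum(1.0 / np.arange(1, k))             # empty sum (0) when the first draw matches
    return total

stimuli = rng.normal(size=200)
responses = np.array([simulator(s, 1.5) for s in stimuli])
print("IBS log-likelihood at theta=1.5:", round(ibs_loglik(stimuli, responses, 1.5), 1))
print("IBS log-likelihood at theta=0.0:", round(ibs_loglik(stimuli, responses, 0.0), 1))
```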

17.
A variance statistic was used to partition the total variance into the portion attributable to each step of a TMEN assay procedure. Estimation of the TMEN of wheat was used as an example. The variance statistic can also be used to optimize the design of a TMEN experiment with respect to the cost of the experiment and the desired accuracy of the result. Experimental design optimization is accomplished by providing a functional relationship between the accuracy of the estimate and the number of replicates of feed, the number of birds used in the experiment, and the cost of each step. The variance statistic is also a useful tool for identifying and removing outliers and highly variable measurements. This feature was demonstrated with the chosen example data. Gross energy of the feed will explain approximately 50% of the variance of the TMEN estimate, depending on how many replicates are evaluated. Nitrogen content of the feed sample will explain approximately 40% of the total variance. It is recommended to replicate this measurement as many times as possible; ten replicates were recommended for the example data. The energy content of excreta from fed birds represented the next largest source of variance, at approximately 4% of the total variance. If within-bird variance is large, better homogenization of the sample and more replicates are recommended. If among-bird variance is significant, more birds should be used. Nitrogen content of excreta from fed birds represented less than 2.5% of the total variance. Energy and nitrogen content of excreta from unfed birds combined represented less than 2% of the total variance, suggesting that the number of unfed birds and the amount of excreta sub-samples may be reduced without adversely affecting the accuracy of the TMEN estimate. Variance due to the amount of excreta collected from the fed birds, and variance due to the amount of feed consumed by the birds, are expected to be small. This result suggested that force-feeding may not be necessary for accurate TMEN estimates.

18.
The SARS-CoV-2 pathogen is currently spreading worldwide and its propensity for presymptomatic and asymptomatic transmission makes it difficult to control. The control measures adopted in several countries aim at isolating individuals once diagnosed, limiting their social interactions and consequently their transmission probability. These interventions, which have a strong impact on the disease dynamics, can affect the inference of the epidemiological quantities. We first present a theoretical explanation of the effect caused by non-pharmaceutical intervention measures on the mean serial and generation intervals. Then, in a simulation study, we vary the assumed efficacy of control measures and quantify the effect on the mean and variance of realized generation and serial intervals. The simulation results show that the realized serial and generation intervals both depend on control measures and their values contract according to the efficacy of the intervention strategies. Interestingly, the mean serial interval differs from the mean generation interval. The deviation between these two values depends on two factors. First, the number of undiagnosed infectious individuals. Second, the relationship between infectiousness, symptom onset and timing of isolation. Similarly, the standard deviations of realized serial and generation intervals do not coincide, with the former shorter than the latter on average. The findings of this study are directly relevant to estimates performed for the current COVID-19 pandemic. In particular, the effective reproduction number is often inferred using both daily incidence data and the generation interval. Failing to account for either contraction or mis-specification by using the serial interval could lead to biased estimates of the effective reproduction number. Consequently, this might affect the choices made by decision makers when deciding which control measures to apply based on its estimated value.
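A simulation sketch of the contraction effect described above: transmission times are drawn from an intrinsic infectiousness profile but censored at the infector's isolation time, which shortens the realized generation interval, while serial intervals additionally involve each person's incubation period. All distributions and delays below are illustrative assumptions, not fitted values.

```python
# Sketch: realized generation and serial intervals under increasingly fast isolation.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def realized_intervals(isolation_delay_mean):
    incubation_infector = rng.gamma(shape=5.8, scale=0.95, size=n)   # ~5.5 days (illustrative)
    incubation_infectee = rng.gamma(shape=5.8, scale=0.95, size=n)
    intrinsic_gen = rng.gamma(shape=2.5, scale=2.0, size=n)          # intrinsic generation time
    isolation = incubation_infector + rng.exponential(isolation_delay_mean, n)
    transmitted = intrinsic_gen < isolation                          # censoring by isolation
    gen = intrinsic_gen[transmitted]
    ser = (incubation_infectee[transmitted] + gen) - incubation_infector[transmitted]
    return gen.mean(), ser.mean()

for label, delay in [("no isolation", 1e6), ("slow isolation", 5.0), ("fast isolation", 1.0)]:
    g, s = realized_intervals(delay)
    print(f"{label}: mean generation {g:.2f} d, mean serial {s:.2f} d")
```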

19.
1. To determine the variable distributions of five activation-dependent EEG activity patterns occurring during visual information processing, the means and standard deviations of the percentage shares of the frequencies 4, 5, ..., 13 Hz, 14-20 Hz, and 21-30 Hz, as well as the mean amplitudes in the frequency bands 3.5-7.4 Hz, 7.5-13.4 Hz, and 13.5-30 Hz, were determined from corresponding 10-s samples. Regression analysis demonstrated that an interval scale level can already be assumed on the basis of the percentage shares in the three last-mentioned frequency bands. 2. On the basis of 18 relevant variables, all adjacent activity patterns could be separated from each other by univariate analysis of variance with pairwise mean comparisons, using at least two variables. 3. After stepwise elimination of dispensable variables within a linear discriminant analysis, an optimal set of variables was determined, comprising the percentage shares of the frequencies 4, 5, 6, 10, and 12 Hz and 14-20 Hz, as well as the mean amplitude in the 3.5-7.4 Hz band. In 4 out of 5 elementary discriminant functions, the mean values calculated for each pattern were significantly distinguishable from each other (analysis of variance, Newman-Keuls test). 4. Linear regression analysis showed that, after variable reduction, the classification system of EEG activity patterns during visual information processing can likewise be mapped onto an interval scale. Finally, data on the reliability of the scoring procedure are presented.

20.
MOTIVATION: DNA microarrays are now capable of providing genome-wide patterns of gene expression across many different conditions. The first level of analysis of these patterns requires determining whether observed differences in expression are significant or not. Current methods are unsatisfactory due to the lack of a systematic framework that can accommodate noise, variability, and low replication often typical of microarray data. RESULTS: We develop a Bayesian probabilistic framework for microarray data analysis. At the simplest level, we model log-expression values by independent normal distributions, parameterized by corresponding means and variances with hierarchical prior distributions. We derive point estimates for both parameters and hyperparameters, and regularized expressions for the variance of each gene by combining the empirical variance with a local background variance associated with neighboring genes. An additional hyperparameter, inversely related to the number of empirical observations, determines the strength of the background variance. Simulations show that these point estimates, combined with a t-test, provide a systematic inference approach that compares favorably with simple t-test or fold methods, and partly compensate for the lack of replication.
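A hedged sketch in the spirit of the framework above: each gene's empirical variance is shrunk toward a local background variance (the average variance of genes with similar mean expression), with a pseudo-count hyperparameter controlling the strength of the background, and the regularized variance is then used in a t-statistic. The shrinkage formula, window size, and degrees of freedom are illustrative choices rather than the paper's exact expressions.

```python
# Sketch: variance regularization toward a local background, then a regularized t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_genes, n_rep = 2000, 3                            # few replicates, as is typical
log_expr = rng.normal(8.0, 1.0, size=(n_genes, n_rep))

gene_mean = log_expr.mean(axis=1)
gene_var = log_expr.var(axis=1, ddof=1)

# local background variance: mean variance within a sliding window of genes ranked by mean expression
order = np.argsort(gene_mean)
window = 101
bg_var = np.empty(n_genes)
padded = np.pad(gene_var[order], window // 2, mode="edge")
for i in range(n_genes):
    bg_var[order[i]] = padded[i:i + window].mean()

v0 = 10.0                                           # strength of the background (pseudo-observations)
reg_var = (v0 * bg_var + (n_rep - 1) * gene_var) / (v0 + n_rep - 1)

# regularized one-sample t statistic against a reference level of 8.0
t_stat = (gene_mean - 8.0) / np.sqrt(reg_var / n_rep)
p_val = 2 * stats.t.sf(np.abs(t_stat), df=v0 + n_rep - 1)
print("genes with p < 0.01:", int((p_val < 0.01).sum()))
```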
