Similar Literature (20 results)
1.
Dimension reduction methods have been proposed for regression analysis with high-dimensional predictors, but have received little attention for problems with censored data. In this article, we present an iterative imputed spline approach based on principal Hessian directions (PHD) for censored survival data, in order to reduce the dimension of the predictors without requiring a prespecified parametric model. Our proposal is to replace the right-censored survival time with its conditional expectation, adjusting for the censoring effect by using the Kaplan-Meier estimator and an adaptive polynomial spline regression in the residual imputation. A sparse estimation strategy is incorporated to enhance the interpretability of variable selection. The approach can be implemented not only with PHD but also with other methods developed for estimating the central mean subspace. Simulation studies with right-censored data compare the imputed spline approach to PHD (IS-PHD) with sliced inverse regression, minimum average variance estimation, and naive PHD that ignores censoring. The results demonstrate that the proposed IS-PHD method is particularly useful for survival time responses with approximately symmetric or bending structures. Illustrative applications to two real data sets are also presented.
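The core censoring adjustment described above (replacing a censored time with its conditional expectation under the Kaplan-Meier estimate) can be sketched in a few lines of numpy. This is a minimal illustration, not the authors' code: the function names are hypothetical and the spline regression step of the full method is omitted.

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate S(t) at each distinct event time.
    events: 1 = observed failure, 0 = right-censored."""
    order = np.argsort(times)
    t, e = times[order], events[order]
    uniq = np.unique(t[e == 1])
    n_at_risk = np.array([(t >= u).sum() for u in uniq])
    n_events = np.array([((t == u) & (e == 1)).sum() for u in uniq])
    surv = np.cumprod(1.0 - n_events / n_at_risk)
    return uniq, surv

def impute_censored(times, events):
    """Replace each censored time c with E[T | T > c] under the KM estimate."""
    uniq, surv = kaplan_meier(times, events)
    # step heights of the KM curve give the probability mass at each event time
    mass = np.diff(np.concatenate(([1.0], surv))) * -1.0
    out = times.astype(float).copy()
    for i in np.where(events == 0)[0]:
        c = times[i]
        tail = uniq > c
        if mass[tail].sum() > 0:
            out[i] = (uniq[tail] * mass[tail]).sum() / mass[tail].sum()
        # if no event time exceeds c, the censored value is kept as-is
    return out
```

With times (1, 2, 3, 4, 5) and the third observation censored, the KM masses beyond t = 3 sit at t = 4 and t = 5, so the censored value is imputed as their mass-weighted mean, 4.5.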

2.
In this paper, an autonomic performance management approach is introduced that can be applied to a general class of web services deployed in large-scale distributed environments. The proposed approach adapts traditional large-scale control-based algorithms, using the interaction balance approach in a web service environment to manage response time and system-level power consumption. The approach is developed in a generic fashion that makes it suitable for web service deployments in which performance can be adjusted through a finite set of control inputs. It maintains the service level agreements, maximizes revenue, and minimizes the infrastructure operating cost. Additionally, the proposed approach is fault-tolerant with respect to failures of the computing nodes inside the distributed deployment. Moreover, its computational overhead can be managed by choosing appropriate values of the configuration parameters during deployment.
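The "finite set of control inputs" idea can be illustrated with a toy selector. Everything below is an assumption-laden sketch, not the paper's controller: the control table, the M/M/1 response-time model, and the names (`CONTROLS`, `pick_control`) are illustrative.

```python
# Hypothetical control inputs: (name, service rate mu in req/s, power draw in W).
CONTROLS = [
    ("low", 8.0, 40.0),
    ("medium", 12.0, 70.0),
    ("high", 20.0, 120.0),
]

def mm1_response_time(arrival_rate, service_rate):
    """Mean response time of an M/M/1 queue; infinite if the queue is unstable."""
    if arrival_rate >= service_rate:
        return float("inf")
    return 1.0 / (service_rate - arrival_rate)

def pick_control(arrival_rate, sla_response_time):
    """Choose the cheapest control input whose predicted response time meets the SLA,
    i.e. minimize power subject to the service level agreement."""
    feasible = [
        (power, name)
        for name, mu, power in CONTROLS
        if mm1_response_time(arrival_rate, mu) <= sla_response_time
    ]
    return min(feasible)[1] if feasible else None
```

For an arrival rate of 7 req/s and an SLA of 0.5 s, the "low" setting is too slow (1.0 s), so the selector returns "medium" rather than the more power-hungry "high".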

3.
Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as an important technique for analyzing the structures of complex biological samples. However, most existing reconstruction methods are not suitable for extremely noisy and incomplete data. We present an adaptive simultaneous algebraic reconstruction technique (ASART) in which a modified multilevel access scheme and an adaptive relaxation parameter adjustment method are developed to improve the quality of the reconstructed 3D structure. The reconstruction process is further facilitated by a column-sum substitution approach. The modified multilevel access scheme arranges the order of projections so as to minimize the correlation between consecutive views within a limited angular range. In the adaptive relaxation parameter adjustment method, not only the weight matrix (as in existing methods) but also the gray levels of the pixels are employed to adjust the relaxation parameters, so that reconstruction quality is improved while convergence is accelerated. In the column-sum substitution approach, computing the reciprocal of the column sums in each view is avoided, reducing the computation needed per iteration. Experimental results show that ASART outperforms other methods on objective quality measures, especially when the data are noisy and limited in tilt angle, and that ASART is faster than the simultaneous algebraic reconstruction technique (SART).
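For context, the core SART update that ASART builds on can be written compactly for a linear system A x = b (projection matrix A, measured projections b). The sketch below shows only the basic simultaneous update with a fixed relaxation parameter; the paper's contributions (view ordering, per-pixel adaptive relaxation, column-sum substitution) are deliberately left out.

```python
import numpy as np

def sart(A, b, n_iter=200, relax=1.0):
    """Basic SART iteration for A @ x = b with a fixed relaxation parameter.
    Residuals are normalized by row sums (per view) and the back-projection
    by column sums (per pixel)."""
    A = np.asarray(A, dtype=float)
    row_sums = A.sum(axis=1)   # per-view normalization
    col_sums = A.sum(axis=0)   # per-pixel normalization (the "column sums")
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = (b - A @ x) / np.where(row_sums == 0, 1, row_sums)
        x = x + relax * (A.T @ residual) / np.where(col_sums == 0, 1, col_sums)
    return x
```

On a tiny consistent system the iteration converges to the exact solution, which is enough to see the mechanics; real ET problems are huge, sparse, and inconsistent.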

4.
In this paper, we present a neural adaptive control scheme for active vibration suppression of a composite aircraft fin tip. The mathematical model of the fin tip is derived using the finite element approach, and the finite element model is updated experimentally to reflect the natural frequencies and mode shapes very accurately. Piezoelectric actuators and sensors are placed at optimal locations so that vibration suppression is maximized. A model-reference direct adaptive neural network control scheme is proposed to keep the vibration level within the minimum acceptable limit. In this scheme, a Gaussian neural network with linear filters is used to approximate the inverse dynamics of the system, and the parameters of the neural controller are estimated using a Lyapunov-based update law. To reduce the computational burden, which is critical for real-time applications, the number of hidden neurons is also estimated within the proposed scheme. Global asymptotic stability of the overall system is ensured using Lyapunov principles. Simulation studies are carried out using sinusoidal force functions of varying frequency. Experimental results show that the proposed neural adaptive control scheme provides significant vibration suppression in the multiple bending modes of interest, and that its performance is better than that of an H∞ control scheme.

5.
《Médecine Nucléaire》2007,31(10):545-552
Aim: The aim of this work is to study the influence of medium density on CT- or external-source attenuation-corrected images, by simulation on a phantom, with various positron emission tomographs. Material and method: A series of experiments on a cylindrical phantom filled with water labeled with [18F]-FDG, containing six vials filled per pair with media of different densities or with solutions of KI, CaCl2 and saccharose of various densities, was carried out under comparable conditions on three different tomographs. In only one vial of each pair, an identical radioactivity of [18F]-FDG was added, three- to five-fold the surrounding activity. The reconstructions and attenuation corrections suggested by the manufacturers were carried out under the usual conditions of each site. The activity of each structure was estimated by the methods of profiles and regions of interest, on the non-attenuation-corrected images (NAC), the images corrected by CT (CTAC), and/or by external source (GPAC). Results: With all three tomographs, the activities estimated on the NAC images show an inverse correlation with medium density (important absorption by dense material). On CTAC images, we observed with only two of the three tomographs an overestimation of the activity in the "radioactive" vials, depending on the medium's mean Z number and density (over-correction), and an artefactual "activity" in the denser "cold" vial (incorrect attenuation correction). The dense saccharose solutions, whose Z number is not elevated, do not affect the CT attenuation correction.
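As a rough numeric illustration (not part of the study), a narrow-beam attenuation model shows how a mismatched attenuation coefficient produces exactly this kind of over-correction. The numbers are arbitrary, though mu ≈ 0.096 cm⁻¹ is approximately water at 511 keV; mu_assumed stands for the value inferred from the CT image.

```python
import math

def measured_counts(true_activity, mu_true, path_cm):
    """Counts surviving attenuation along path_cm of medium (narrow-beam model)."""
    return true_activity * math.exp(-mu_true * path_cm)

def corrected_counts(measured, mu_assumed, path_cm):
    """Attenuation correction using the mu value assumed from the CT scan.
    If mu_assumed > mu_true (dense, high-Z medium mapped too aggressively),
    the recovered activity is inflated."""
    return measured * math.exp(mu_assumed * path_cm)
```

With the correct mu the original activity is recovered exactly; assuming mu = 0.120 instead of 0.096 over a 10 cm path inflates it by a factor of e^0.24 ≈ 1.27.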

6.
Runtime analysis of continuous evolutionary algorithms (EAs) is a hard topic in the theoretical study of evolutionary computation, and relatively few results have been obtained compared to discrete EAs. In this paper, we use martingale and stopping time theory to establish a general average gain model that estimates an upper bound on the expected first hitting time. The proposed model is built on a non-negative stochastic process and does not require the Markov property, and is thus more general. We then demonstrate how the model can be applied to the runtime analysis of continuous EAs. Finally, as a case study, we analyze the runtime of the (1, \(\lambda\))-ES with adaptive step size on the Sphere function using the proposed approach, and derive a closed-form expression for the time upper bound in the 3-dimensional case. We also discuss the relationship between the step size and the offspring size \(\lambda\) required to ensure convergence. The experimental results show that the proposed approach helps to derive tight upper bounds on the expected first hitting time of continuous EAs.
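The first hitting time being bounded here is easy to measure empirically. The sketch below runs a (1, λ)-ES on the 3-dimensional Sphere function and reports the first generation at which the target precision is reached; the multiplicative step-size rule is a simple stand-in, not the paper's adaptive scheme.

```python
import numpy as np

def es_hitting_time(dim=3, lam=10, target=1e-3, sigma0=1.0, max_gen=10_000, seed=0):
    """First generation at which a (1, lambda)-ES reaches ||x||^2 <= target
    on the Sphere function, with a crude multiplicative step-size adaptation."""
    rng = np.random.default_rng(seed)
    x, sigma = np.ones(dim), sigma0
    for gen in range(1, max_gen + 1):
        parent_fitness = np.sum(x**2)
        offspring = x + sigma * rng.standard_normal((lam, dim))
        fitness = np.sum(offspring**2, axis=1)
        best = np.argmin(fitness)
        # grow the step on success, shrink it on failure
        sigma *= 1.2 if fitness[best] < parent_fitness else 0.8
        x = offspring[best]  # comma selection: the parent is always replaced
        if np.sum(x**2) <= target:
            return gen
    return max_gen
```

Averaging this hitting time over many seeds gives the empirical quantity that the paper's average gain model bounds from above.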

7.
Optimal coordination and control of posture and locomotion.
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton-Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory, and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and stance model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses.
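For linear dynamics x' = Ax + Bu with quadratic cost, the Hamilton-Jacobi equation reduces to the algebraic Riccati equation mentioned above. A minimal numpy sketch using the textbook Hamiltonian eigenvector method (a standard technique; the paper's own solver may differ), checked on a double-integrator model:

```python
import numpy as np

def solve_care(A, B, Q, R):
    """Solve A'P + PA - P B R^-1 B' P + Q = 0 via the stable invariant
    subspace of the Hamiltonian matrix (eigenvector method)."""
    n = A.shape[0]
    Rinv = np.linalg.inv(R)
    H = np.block([[A, -B @ Rinv @ B.T], [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]          # the n eigenvectors with Re(lambda) < 0
    X, Y = stable[:n, :], stable[n:, :]
    return np.real(Y @ np.linalg.inv(X))
```

For the double integrator A = [[0,1],[0,0]], B = [[0],[1]] with Q = I and R = 1, the known solution is P = [[sqrt(3), 1], [1, sqrt(3)]], and the optimal feedback is u = -R^-1 B' P x.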

8.
To adapt automatically to the hardware and software environments of different devices, this paper presents a time-critical adaptive approach for visualizing natural scenes. In this method, a simplified expression of a tree model is used across devices. The best rendering scheme for a particular scene is selected by estimating the rendering time of the trees based on their visual importance. The approach can therefore preserve the realism of natural scenes while maintaining a constant frame rate for interactive display. To verify its effectiveness and flexibility, the method is applied on different devices, including a desktop computer, a laptop, an iPad and a smartphone. These applications show that the method adapts well to devices with different computing abilities and system resources, while achieving good visual realism and a constant frame rate for natural scenes.
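The budget-driven selection can be sketched as a greedy refinement loop: start every tree at its coarsest level of detail and spend the remaining frame budget on the most visually important trees first. This is an illustrative stand-in for the paper's scheme; the cost numbers and the name `select_lods` are hypothetical.

```python
def select_lods(trees, frame_budget_ms):
    """Pick a level of detail (LOD) per tree so the estimated total render time
    fits the frame budget, refining the most important trees first.
    trees: list of (importance, [cost_ms per LOD, coarsest first]).
    Returns one LOD index per tree (higher = more detailed)."""
    choice = [0] * len(trees)                       # start at the coarsest level
    spent = sum(costs[0] for _, costs in trees)
    while True:
        # trees that can be refined one level without blowing the budget
        candidates = [
            (imp, i) for i, (imp, costs) in enumerate(trees)
            if choice[i] + 1 < len(costs)
            and spent - costs[choice[i]] + costs[choice[i] + 1] <= frame_budget_ms
        ]
        if not candidates:
            return choice
        _, i = max(candidates)                      # most important tree wins
        spent += trees[i][1][choice[i] + 1] - trees[i][1][choice[i]]
        choice[i] += 1
```

With two identical trees of importance 0.9 and 0.1 and a 10 ms budget, the important tree gets the finest level while the other stays coarse; a larger budget refines both.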

9.
This paper presents a theoretical model of stability and coordination of posture and locomotion, together with algorithms for continuous-time quadratic optimization of motion control. Explicit solutions to the Hamilton–Jacobi equation for optimal control of rigid-body motion are obtained by solving an algebraic matrix equation. The stability is investigated with Lyapunov function theory and it is shown that global asymptotic stability holds. It is also shown how optimal control and adaptive control may act in concert in the case of unknown or uncertain system parameters. The solution describes motion strategies of minimum effort and variance. The proposed optimal control is formulated to be suitable as a posture and movement model for experimental validation and verification. The combination of adaptive and optimal control makes this algorithm a candidate for coordination and control of functional neuromuscular stimulation as well as of prostheses. Validation examples with experimental data are provided.

10.
Many real-world situations require job scheduling: entities, machines, or workers must execute certain jobs as soon as possible. Frequently, several workers or machines are unavailable during some time periods, due to different circumstances. This paper considers stochastic scheduling models to study such problems. When scheduling models are used in practice, they have to account for machines that may not be working. Such temporary lack of machine availability is known as a breakdown; breakdowns happen randomly at any time, and the times required to repair the machines are also random variables. The jobs have operations with stochastic processing times and their own release times, with no precedence between jobs. Each job is divided into operations, each performed on the corresponding specialized machine, and the order in which the operations of a job are done is irrelevant. We develop a heuristic approach to solve these stochastic open-shop scheduling problems with random machine breakdowns. The proposed approach is general and does not depend on the distributions of the random input data. It provides solutions that minimize the expected makespan. Computational experiments are also reported; the results show that the proposed approach performs solidly, finding suitable solutions within short CPU times.
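A heuristic like this needs a way to evaluate the expected makespan of a candidate schedule under random processing times and breakdowns, typically by Monte Carlo simulation. The sketch below is a deliberately simplified evaluator: open-shop routing decisions are abstracted away (each machine just processes a fixed queue), and all distributions are exponential for illustration.

```python
import random

def makespan_with_breakdowns(jobs_on_machine, mean_time_to_fail, mean_repair, rng):
    """One sampled makespan: independent machines work through their queued
    operations; exponential breakdowns interrupt work and add a repair delay.
    jobs_on_machine: list of lists of mean processing times, one list per machine."""
    finish = []
    for ops in jobs_on_machine:
        t = 0.0
        for mean_p in ops:
            p = rng.expovariate(1.0 / mean_p)          # stochastic processing time
            while p > 0:
                fail_in = rng.expovariate(1.0 / mean_time_to_fail)
                if fail_in >= p:                       # operation finishes first
                    t += p
                    p = 0.0
                else:                                  # breakdown: repair, resume
                    t += fail_in + rng.expovariate(1.0 / mean_repair)
                    p -= fail_in
        finish.append(t)
    return max(finish)

def expected_makespan(jobs_on_machine, mttf, mrt, n_runs=2000, seed=1):
    """Monte Carlo estimate of the expected makespan."""
    rng = random.Random(seed)
    runs = [makespan_with_breakdowns(jobs_on_machine, mttf, mrt, rng)
            for _ in range(n_runs)]
    return sum(runs) / n_runs
```

With a practically infinite time-to-fail the estimate approaches the breakdown-free expectation; frequent breakdowns inflate it, which is exactly the effect the heuristic must anticipate.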

11.
Remodelling of trabecular bone is essentially driven by the mechanical load on the trabeculae. Mathematical modelling and simulation of the remodelling process require time-consuming calculations of the displacement field within the complex trabecular structure under load. We present an adaptive diffuse domain approach for calculating the elastic bone deformation based on micro-computed tomography data of real trabecular bone structures, and compare it with a conventional voxel-based finite element method. In addition to its higher computational efficiency, the adaptive approach provides a very smooth representation of the bone surface, which suggests that it would be suitable as a basis for future simulations of bone resorption and formation processes within the trabecular structure.

12.
This paper presents an adaptive statistical test for QRS detection in electrocardiography (ECG) signals. The method is based on an M-ary generalized likelihood ratio test (LRT) defined over a multiple-observation window in the Fourier domain. The motivation for proposing another detection algorithm based on maximum a posteriori (MAP) estimation lies in the high complexity of the signal models proposed in previous approaches, which (i) makes them computationally unfeasible or unsuitable for real-time applications such as intensive care monitoring, and (ii) lets the parameter selection condition the overall performance. We therefore propose an alternative model based on the independent Gaussian properties of the discrete Fourier transform (DFT) coefficients, which allows a simplified MAP probability function to be defined. In addition, the proposed approach defines an adaptive MAP statistical test in which a global hypothesis is built from particular hypotheses on the multiple-observation window. The observation interval is modeled as a discontinuous-transmission discrete-time stochastic process, avoiding parameters that constrain the morphology of the QRS complexes.
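Under an independent-Gaussian model of the DFT coefficients, a likelihood-ratio detector for an unknown signal reduces, in its simplest form, to thresholding band-limited spectral energy. The sketch below is that simplified stand-in, not the paper's M-ary MAP test; the band and sampling rate are assumptions.

```python
import numpy as np

def qrs_detection_statistic(window, band=(5.0, 25.0), fs=360.0):
    """Fraction of windowed-DFT energy inside a band where QRS energy
    concentrates. Thresholding this statistic gives a crude LRT-style
    detector under the independent-Gaussian DFT model."""
    X = np.fft.rfft(window * np.hanning(len(window)))
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    energy = np.abs(X) ** 2
    return energy[in_band].sum() / (energy.sum() + 1e-12)
```

A window dominated by a 15 Hz burst (QRS-like) scores near 1, while slow baseline drift at 1 Hz scores near 0, so a mid-range threshold separates the two.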

13.
This study proposes a new quantitative technique for identifying suitable but unoccupied habitats for metapopulation studies in plants. It is based on the species composition of the habitat and on knowledge of species co-occurrence patterns, using data from a large phytosociological database as background for estimating those patterns. If such a database is not available, the technique can still be applied by estimating the co-occurrence patterns from the same data for which the prediction is made. Using the technique, we were able to identify suitable unoccupied habitats and differentiate them from unsuitable unoccupied ones; we also identified occupied habitats with a low probability of being suitable. Compared to the direct approach of identifying suitable habitats, which involves introducing a species to a habitat and studying its performance, the approach presented here is much easier to apply and can provide extensive information on habitat suitability for a range of species with far less effort and time.
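One standard way to turn co-occurrence patterns into a habitat suitability score is Beals smoothing; whether it matches this study's exact technique is an assumption, so treat the sketch as a generic illustration. The simple (non-self-excluding) variant is shown.

```python
import numpy as np

def beals(X):
    """Beals smoothing: for each site and target species, the probability of
    occurrence predicted from the other species present at the site.
    X: binary site-by-species matrix (1 = present)."""
    X = np.asarray(X, dtype=float)
    M = X.T @ X                       # M[i, j] = sites where species i and j co-occur
    occur = np.diag(M).copy()         # occurrences of each species
    # cond[i, j] = P(species j present | species i present)
    with np.errstate(divide="ignore", invalid="ignore"):
        cond = np.where(occur[:, None] > 0, M / occur[:, None], 0.0)
    np.fill_diagonal(cond, 0.0)
    richness = X.sum(axis=1)
    # average the conditional probabilities over the species present at each site
    return X @ cond / np.maximum(richness, 1)[:, None]
```

A site can then be flagged as "suitable but unoccupied" for a target species when its Beals score is high although the species is absent.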

14.
In this paper we propose a new technique that adaptively extracts subject specific motor imagery related EEG patterns in the space–time–frequency plane for single trial classification. The proposed approach requires no prior knowledge of reactive frequency bands, their temporal behavior or cortical locations. For a given electrode array, it finds all these parameters by constructing electrode adaptive time–frequency segmentations that are optimized for discrimination. This is accomplished first by segmenting the EEG along the time axis with Local Cosine Packets. Next the most discriminant frequency subbands are selected in each time segment with a frequency axis clustering algorithm to achieve time and frequency band adaptation individually. Finally the subject adapted features are sorted according to their discrimination power to reduce dimensionality and the top subset is used for final classification. We provide experimental results for 5 subjects of the BCI competition 2005 dataset IVa to show the superior performance of the proposed method. In particular, we demonstrate that by using a linear support vector machine as a classifier, the classification accuracy of the proposed algorithm varied between 90.5% and 99.7% and the average classification accuracy was 96%.

15.
Due to the many advantages associated with mixed cultures, their application in biotechnology has expanded rapidly in recent years. At the same time, many challenges remain for effective mixed culture applications; one obstacle is how to efficiently and accurately monitor the individual cell populations. Current approaches to individual cell mass quantification are suitable only for off-line, infrequent characterization. In this study, we propose a fast and accurate "soft sensor" approach for estimating individual cell concentrations in mixed cultures. The proposed approach uses the optical density scanning spectrum of a mixed culture sample, measured by a spectrophotometer over a range of wavelengths. A multivariate linear regression method, partial least squares (PLS), is applied to correlate individual cell concentrations to the spectrum. Three experimental case studies are used to examine the performance of the proposed soft sensor approach. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:347–354, 2017
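The soft-sensor idea (a linear map from a multi-wavelength spectrum to per-species concentrations) can be sketched with synthetic data. Ordinary least squares stands in for PLS here purely for brevity; PLS would be preferred when the wavelengths are many and collinear. All spectra and names below are made up.

```python
import numpy as np

def fit_unmixing(spectra, concentrations):
    """Fit a linear map from optical-density spectrum to individual cell
    concentrations. spectra: (n_samples, n_wavelengths);
    concentrations: (n_samples, n_species)."""
    B, *_ = np.linalg.lstsq(spectra, concentrations, rcond=None)
    return B

def predict(B, spectrum):
    """Estimate individual cell concentrations from one measured spectrum."""
    return spectrum @ B
```

If mixture spectra are additive in the component concentrations (Beer-Lambert-style), the fitted map recovers the concentrations of a new mixture exactly in this noise-free toy setting.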

16.
In this paper, we present a new evolutionary technique to train three general neural networks. Based on family competition principles and adaptive rules, the proposed approach integrates decreasing-based mutations and self-adaptive mutations to collaborate with each other. Different mutations act as global and local strategies respectively to balance the trade-off between solution quality and convergence speed. Our algorithm is then applied to three different task domains: Boolean functions, regular language recognition, and artificial ant problems. Experimental results indicate that the proposed algorithm is very competitive with comparable evolutionary algorithms. We also discuss the search power of our proposed approach.
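The family competition idea can be sketched on one of the paper's own task domains, a Boolean function (XOR), with a tiny 2-2-1 tanh network. The mutation parameters, family size, and network shape are assumptions made for the sketch, not the paper's settings.

```python
import numpy as np

def xor_loss(w):
    """Mean squared error of a 2-2-1 tanh network (9 weights) on XOR."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 1, 1, 0], dtype=float)
    out = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
    return np.mean((out - y) ** 2)

def family_competition_ea(loss, dim=9, pop=10, family=5, gens=200, seed=0):
    """Each parent spawns a family via two mutation operators; the family's
    best member replaces the parent only if it improves (family competition)."""
    rng = np.random.default_rng(seed)
    xs = rng.standard_normal((pop, dim))
    sigmas = np.full(pop, 0.5)                 # self-adaptive step sizes
    for g in range(gens):
        decreasing = 1.0 / (1.0 + 0.05 * g)    # decreasing-based mutation scale
        for i in range(pop):
            best_x, best_f, best_s = xs[i], loss(xs[i]), sigmas[i]
            for _ in range(family):
                if rng.random() < 0.5:         # global, decreasing-based mutation
                    s = decreasing
                else:                          # local, self-adaptive mutation
                    s = sigmas[i] * np.exp(0.3 * rng.standard_normal())
                child = xs[i] + s * rng.standard_normal(dim)
                f = loss(child)
                if f < best_f:
                    best_x, best_f, best_s = child, f, s
            xs[i], sigmas[i] = best_x, best_s
    fits = np.array([loss(x) for x in xs])
    return xs[np.argmin(fits)], fits.min()
```

Because replacement is elitist within each family, the best loss is non-increasing; on XOR the run should at least beat the constant-0.5 predictor (MSE 0.25).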

17.
In this paper, an approach to the estimation of multiple biomass growth rates and biomass concentration is proposed for a class of aerobic bioprocesses characterized by on-line measurements of dissolved oxygen and carbon dioxide concentrations, as well as off-line measurements of biomass concentration. The approach is based on adaptive observer theory and includes two steps. In the first step, an adaptive estimator of two of the three biomass growth rates is designed. In the second step, the third biomass growth rate and the biomass concentration are estimated, using two different adaptive estimators. One of them is based on on-line measurements of dissolved oxygen concentration and off-line measurements of biomass concentration, while the other needs only on-line measurements of the carbon dioxide concentration. Simulations demonstrated good performance of the proposed estimators under continuous and fed-batch conditions.

18.
Protein-RNA complexes provide a wide range of essential functions in the cell. Solving their atomic structures experimentally, though essential to understanding these functions, is often difficult and expensive. Docking approaches developed for proteins are challenging to adapt to RNA because of its inherent flexibility and because the available structural data are relatively scarce. In this study we adapted the RosettaDock protocol for protein-RNA complexes at both the nucleotide and atomic levels. Using a genetic algorithm-based strategy and a non-redundant protein-RNA dataset, we derived a RosettaDock scoring scheme able not only to discriminate but also to efficiently score docking decoys. The approach proved both efficient and robust for generating and identifying suitable structures when applied to two protein-RNA docking benchmarks, in both bound and unbound settings, and it compares well to existing strategies. This is the first approach that offers a multi-level optimized scoring scheme integrated in a full docking suite, leading the way to adaptive, fully flexible strategies.

19.
The present work investigates the preliminary feasibility and characteristics of a new radiation therapy modality based on a single convergent beam of photons. The proposal consists of the design of a device capable of generating convergent X-ray beams useful for radiotherapy, with the main goal of achieving highly concentrated dose delivery. The first step is an analytical approach to characterize the dosimetric performance of the hypothetical convergent photon beam. Then, the validated FLUKA Monte Carlo main code is used to perform complete radiation transport, so as to account for scattering effects as well. The proposed method for producing convergent X-rays is mainly based on the bremsstrahlung effect, so the operating principle of the device is described in terms of bremsstrahlung production. The work is mainly devoted to characterizing how the bremsstrahlung yield is affected by accessories present in the device, such as the anode material and geometry, and the filtration and collimation systems, among others.

The results obtained for in-depth dose distributions, by means of both analytical and stochastic approaches, confirm the presence of a high dose concentration around the irradiated target, as expected. Moreover, it is shown how this spot of high dose concentration depends on the relevant physical properties of the produced convergent photon beam.

In summary, the proposed design for producing single convergent X-rays attained satisfactory performance in achieving high dose concentration around small targets, depending on the beam spot size, and may be used for some applications in radiotherapy, such as radiosurgery.

20.
Here we demonstrate that the film refractive index (RI) can be an even more important parameter than film thickness for identifying polymer films that are nonfouling toward undiluted human blood plasma and serum. Film thickness and RI are the two parameters obtained from ellipsometry. Previously, film thickness has been correlated with ultra-low-fouling properties; the film RI, which can be used to characterize polymer density, is often overlooked. By varying the water content in the surface-initiated atom transfer radical polymerization of zwitterionic carboxybetaine, we found that a minimum RI of ~1.5 was necessary to achieve <5 ng/cm² of adsorption from undiluted human serum. A model of the film structure versus water content was also developed. These results point to an important parameter and a simple approach for identifying surface coatings suitable for real-world applications involving complex media: ultra-low fouling with a thin film is possible if the film is densely packed.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号