Similar Articles
20 similar articles found.
1.

Purpose

The aim of this study was to develop a robust post-processing workflow for motion-corrupted datasets in diffusion kurtosis imaging (DKI).

Materials and methods

The proposed workflow consisted of brain extraction, rigid registration, distortion correction, artifact rejection, spatial smoothing, and tensor estimation. Rigid registration was used to correct misalignments. Motion artifacts were rejected using the local Pearson correlation coefficient (LPCC). The performance of LPCC in characterizing relative differences between artifact and artifact-free images was compared with that of the conventional correlation coefficient in 10 randomly selected DKI datasets. The influence of rejecting artifacts (together with their gradient-direction and b-value information) on parameter estimation was investigated using the mean square error (MSE), with the noise variance serving as the reference criterion for the MSEs. The clinical practicality of the proposed workflow was evaluated by image quality and by measurements in regions of interest on 36 DKI datasets: 18 artifact-free datasets (18 pediatric subjects) and 18 motion-corrupted datasets (15 pediatric subjects and 3 essential tremor patients).

Results

The relative difference between artifact and artifact-free images calculated by LPCC was larger than that calculated by the conventional correlation coefficient (p<0.05), indicating that LPCC is more sensitive in detecting motion artifacts. The MSEs of all parameters derived from the data retained after artifact rejection were smaller than the noise variance, suggesting that the rejected artifacts influence the precision of the derived parameters less than noise does. The proposed workflow significantly improved image quality and reduced measurement biases on the motion-corrupted datasets (p<0.05).

Conclusion

The proposed post-processing workflow reliably improved image quality and the measurement precision of derived parameters on motion-corrupted DKI datasets, providing an effective post-processing method for clinical applications of DKI in subjects with involuntary movements.
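The abstract does not give the exact LPCC formula; the following is a minimal Python sketch of the general idea, a windowed (local) Pearson correlation against a reference image used to flag motion-corrupted acquisitions. The window size, threshold, and choice of reference are assumptions, not the paper's stated values.

```python
import numpy as np

def local_pearson(img, ref, win=8):
    """Mean Pearson correlation over non-overlapping windows of two slices."""
    scores = []
    for i in range(0, img.shape[0] - win + 1, win):
        for j in range(0, img.shape[1] - win + 1, win):
            a = img[i:i + win, j:j + win].ravel()
            b = ref[i:i + win, j:j + win].ravel()
            if a.std() > 0 and b.std() > 0:  # skip flat background patches
                scores.append(np.corrcoef(a, b)[0, 1])
    return float(np.mean(scores)) if scores else 0.0

def reject_artifacts(slices, ref, threshold=0.6):
    """Keep slices whose local correlation with the reference is high."""
    return [s for s in slices if local_pearson(s, ref) >= threshold]
```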

2.
Surface water temperature is a vital ecological and climate variable, and monitoring it is critical. An extensive sensor network measures the ocean, but the monitoring data are pervaded by outliers caused by sudden changes in the water surface level, and no single algorithm identifies them efficiently. This work therefore proposes and evaluates three statistics-based outlier detection algorithms for surface water temperature: 1) the standard Z-score method, 2) the modified Z-score coupled with decomposition, and 3) the exponential moving average coupled with the modified Z-score and decomposition. A threshold was set to flag outlier values, and the models' performance was evaluated using the F-score. Results showed that increasing the number of detections may reduce the precision of identifying actual outliers. The exponential moving average with the modified Z-score gave the highest F-score (0.83) of the three methods, so this algorithm is recommended for efficient outlier detection in large surface water temperature datasets.
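As a concrete illustration of the core step shared by the second and third methods, here is a minimal sketch of the modified Z-score (median/MAD-based). The 0.6745 scaling constant and the 3.5 flagging threshold follow the usual convention, since the abstract does not state the values used.

```python
import numpy as np

def modified_zscore_outliers(x, threshold=3.5):
    """Flag outliers using the median/MAD-based modified Z-score."""
    x = np.asarray(x, dtype=float)
    med = np.median(x)
    mad = np.median(np.abs(x - med))    # median absolute deviation
    if mad == 0:
        return np.zeros(x.shape, dtype=bool)
    mz = 0.6745 * (x - med) / mad
    return np.abs(mz) > threshold

# example: a sudden spike in a temperature series is flagged
temps = [18.2, 18.4, 18.3, 25.9, 18.5, 18.4]
print(modified_zscore_outliers(temps))  # only the 25.9 reading is True
```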

3.
Diffusion tensor imaging (DTI) enables non-invasive, cyto-architectural mapping of in vivo tissue microarchitecture through voxel-wise mathematical modeling of multiple magnetic resonance imaging (MRI) acquisitions, each differently sensitized to water diffusion. DTI computations are fundamentally estimation processes and are sensitive to noise and artifacts. Despite widespread adoption in the neuroimaging community, maintaining consistent DTI data quality remains challenging given the propensity for patient motion, artifacts associated with fast imaging techniques, and the possibility of hardware changes or failures. Furthermore, the quantity of data acquired per voxel, the non-linear estimation process, and the numerous potential use cases complicate traditional visual inspection approaches. To date, quality inspection of DTI data has relied on visual inspection and individual processing in DTI analysis software (e.g., DTIPrep, DTI-studio). However, recent advances in applied statistical methods have yielded several metrics to assess noise level, artifact propensity, quality of tensor fit, variance of estimated measures, and bias in estimated measures. These metrics have so far been studied largely in isolation. Herein, we select complementary metrics for integration into an automatic DTI analysis and quality assurance (QA) pipeline. The pipeline completes in 24 hours, stores statistical outputs, and produces a graphical summary QA report. We assess the utility of this streamlined approach for empirical quality assessment on 608 DTI datasets from pediatric neuroimaging studies, comparing the efficiency and accuracy of quality analysis using the proposed pipeline with quality analysis based on visual inspection. The unified pipeline saves a statistically significant amount of time (over 70%) while improving the consistency of QA between a DTI expert and a pool of research associates. Projection of the QA metrics to a low-dimensional manifold reveals qualitative, but clear, QA-study associations and suggests that automated outlier/anomaly detection would be feasible.
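The abstract mentions projecting the QA metrics to a low-dimensional manifold; a minimal sketch of that idea using PCA is below. The metric names, the use of PCA specifically, and the distance-based screening rule are illustrative assumptions, not the pipeline's actual method.

```python
import numpy as np
from sklearn.decomposition import PCA

# rows: DTI datasets; columns: QA metrics (noise level, artifact
# propensity, tensor-fit quality, variance and bias of estimates)
qa_metrics = np.random.rand(608, 5)          # placeholder values

embedding = PCA(n_components=2).fit_transform(qa_metrics)

# crude anomaly screen: distance from the embedding centroid
d = np.linalg.norm(embedding - embedding.mean(axis=0), axis=1)
suspects = np.where(d > d.mean() + 3 * d.std())[0]
print(f"{len(suspects)} datasets flagged for manual review")
```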

4.
5.
Emi Tanaka, Biometrics, 2020, 76(4):1374-1382
The aim of plant breeding trials is often to identify crop varieties that are well adapted to target environments. These varieties are identified through genomic prediction from the analysis of multi-environment field trials (METs) using linear mixed models. Outliers in MET data are common and known to adversely impact the accuracy of genomic prediction, yet outlier detection is often neglected. There are several reasons for this. First, complex data such as METs give rise to distinct levels of residuals (e.g., at the trial level or at the individual-observation level), and this complexity poses additional challenges for outlier detection methods. Second, many linear mixed model software packages that cater for the complex variance structures needed in MET analysis are not well streamlined for diagnostics by practitioners. We demonstrate outlier detection methods that are simple to implement in any linear mixed model software package and computationally fast. Although not optimal, these methods offer practical value for ease of application in the analysis pipeline of regularly collected data. They are demonstrated using simulations based on two real bread wheat yield METs. In particular, models that analyze yield trials either independently or jointly (thus borrowing strength across trials) are considered. Case studies are presented to highlight the benefit of joint analysis for outlier detection.
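A minimal sketch of the kind of simple, software-agnostic residual check the paper advocates, here with statsmodels. The column names and the 3-standard-deviation cutoff are illustrative assumptions, not the paper's exact procedure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# MET data with columns: yield_t, variety, trial (illustrative names)
df = pd.read_csv("met_trials.csv")

# joint analysis: variety as a fixed effect, trial as a grouping factor
fit = smf.mixedlm("yield_t ~ variety", df, groups=df["trial"]).fit()

# flag observations with large standardized conditional residuals
z = (fit.resid - fit.resid.mean()) / fit.resid.std()
print(df.loc[z.abs() > 3, ["trial", "variety", "yield_t"]])
```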

6.
Objectives: To investigate whether routinely collected data from hospital episode statistics could be used to identify the gynaecologist Rodney Ledward, who was suspended in 1996 and was the subject of the Ritchie inquiry into quality and practice within the NHS.
Design: A mixed scanning approach was used to identify seven variables from hospital episode statistics that were likely to be associated with potentially poor performance. A blinded multivariate analysis was undertaken to determine the distance (known as the Mahalanobis distance) in the seven-indicator multidimensional space between each consultant and the average consultant in each year. The change in Mahalanobis distance over time was also investigated using a mixed effects model.
Setting: NHS hospital trusts in two English regions, in the five years from 1991-2 to 1995-6.
Population: Gynaecology consultants (n = 143) and their hospital episode statistics data.
Main outcome measure: Whether Ledward was a statistical outlier at the 95% level.
Results: The proportion of consultants who were outliers in any one year (at the 95% significance level) ranged from 9% to 20%. Ledward appeared as an outlier in three of the five years. Our mixed effects (multi-year) model identified nine high-outlier consultants, including Ledward.
Conclusion: It was possible to identify Ledward as an outlier using hospital episode statistics data. Although our method found other outlier consultants, we strongly caution that these outliers should not be overinterpreted as indicative of “poor” performance. Instead, a scientific search for a credible explanation should be undertaken, although this was outside the remit of our study. The set of indicators used means that cancer specialists, for example, are likely to have high values for several indicators, and the approach needs to be refined to deal with case-mix variation. Even after allowing for that, the interpretation of outlier status is still unclear. Further prospective evaluation of our method is warranted, but our overall approach may be useful in other settings, especially where performance entails several indicator variables.
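A minimal sketch of the core computation: each consultant's Mahalanobis distance from the yearly average across the seven indicators, flagged at the 95% level. Using the chi-squared reference distribution for the cutoff is an assumption for illustration.

```python
import numpy as np
from scipy.stats import chi2

def mahalanobis_outliers(X, level=0.95):
    """X: consultants x indicators. Flags rows far from the average."""
    diff = X - X.mean(axis=0)
    vi = np.linalg.inv(np.cov(X, rowvar=False))
    d2 = np.einsum("ij,jk,ik->i", diff, vi, diff)  # squared distances
    return d2 > chi2.ppf(level, df=X.shape[1])

X = np.random.rand(143, 7)   # 143 consultants, 7 indicators (illustrative)
print(mahalanobis_outliers(X).sum(), "outlier consultants flagged")
```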

7.
Tissue typing has been reviewed in a series of 100 technically successful cadaveric-donor kidney grafts. The criterion of transplant failure was immunological rejection causing total loss of function within three months of operation. No significant correlation was observed between matching grade and graft failure due to early acute rejection. This is attributed to the failure to detect at least one “LA” or “4” antigen (as defined in our laboratory), representing a potential incompatibility, in 89% of the grafts, and in the remaining 11% to the lack of an available recipient with identical “LA” and “4” typing. Undetected antigens on the donor are usually incompatible, and these incompatibilities probably influence early graft survival unfavourably. If the results of cadaveric-donor renal transplantation are to equal those of transplantation from well-matched living related donors, it will be necessary to type with sera that can recognize individually all HL-A antigens, including those not yet identified, and to create an international pool of over 1,000 potential recipients.

8.

Background

Ring artifacts are concentric rings superimposed on tomographic images, often caused by defective or insufficiently calibrated detector elements as well as by damaged scintillator crystals of the flat-panel detector. They may also be generated by objects that attenuate X-rays very differently across projection directions. Ring artifact reduction techniques reported in the literature can be broadly classified into two groups: one category operates on the sinogram and is known as pre-processing, while the other operates on the 2-D reconstructed images and is recognized as post-processing. The strengths and weaknesses of these two categories have yet to be explored from a common platform.

Method

In this paper, a comparative study of the two categories of ring artifact reduction techniques, designed primarily for multi-slice CT instruments, is presented from a common platform. For comparison, two representative algorithms from each of the two categories are selected from the published literature. A recently reported state-of-the-art sinogram-domain correction method, which classifies ring artifacts by strength and corrects them with class-adaptive schemes, is also included. The first sinogram-domain method uses a wavelet-based technique to detect corrupted pixels and then estimates the responses of the bad pixels by simple linear interpolation. The second sinogram-based method performs all filtering operations in the transform domain, i.e., the wavelet and Fourier domains. The two post-processing techniques, in contrast, operate on the polar transform of the reconstructed CT images. The first extracts a ring artifact template vector using a homogeneity test and corrects the CT images by subtracting this template vector from the uncorrected images. The second applies median and mean filtering to the reconstructed images to produce the corrected images.

Results

The performance of the compared algorithms has been tested using both quantitative and perceptual measures. For quantitative analysis, two numerical performance indices were chosen. For perceptual analysis, different artifact patterns were examined to investigate the strengths and weaknesses of the five methods: single and band rings, artifacts from defective and mis-calibrated detector elements, rings in highly structured and in hard objects, and rings from different flat-panel detectors. An investigation was also carried out to compare the efficacy of the algorithms in correcting volume images from a cone-beam CT using parameters determined from one particular slice. Finally, the capability of each correction technique to retain image information (e.g., a small object at the iso-center) accurately in the corrected CT image was tested.

Conclusions

The results show that the performance of all the algorithms is limited, and none is fully suitable for correcting different types of ring artifacts without introducing processing distortion into the image structure. To achieve diagnostic quality in the corrected slices, a combination of the two approaches (sinogram processing and post-processing) can be used. The compared methods are also not suitable for correcting volume images from a cone-beam, flat-panel-detector-based CT.
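A minimal sketch of the post-processing idea shared by both polar-domain methods: map the image to polar coordinates (where rings become stripes of constant radius), estimate a per-radius artifact template, and subtract it. The interpolation order, filter size, and the assumption that rings are centered at the image center are simplifications for illustration.

```python
import numpy as np
from scipy.ndimage import map_coordinates, median_filter

def suppress_rings(img, n_theta=720):
    """Estimate and subtract a per-radius ring template in polar space."""
    cy, cx = img.shape[0] / 2.0, img.shape[1] / 2.0
    r = np.arange(int(min(cy, cx)))
    t = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, t, indexing="ij")
    coords = np.array([cy + R * np.sin(T), cx + R * np.cos(T)])
    polar = map_coordinates(img, coords, order=1)   # rows = radii

    profile = polar.mean(axis=1)                    # mean per radius
    smooth = median_filter(profile, size=9)         # rings are narrow bumps
    template = profile - smooth                     # ring artifact template
    corrected = polar - template[:, None]
    return corrected   # mapping back to Cartesian omitted for brevity
```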

9.
Pasteurellaceae are among the most prevalent bacterial pathogens isolated from mice housed in experimental animal facilities. Reliable detection and differentiation of Pasteurellaceae are essential for high-quality health monitoring. In this study, we combined a real-time PCR assay amplifying a variable region in the 16S rRNA sequence with high-resolution melting curve analysis (HRM) to identify and differentiate among the commonly isolated species Pasteurella pneumotropica biotypes “Jawetz” and “Heyl”, Actinobacillus muris, and Haemophilus influenzaemurium. We used a set of six reference strains for assay development, with the melting profiles of these strains clearly distinguishable due to DNA sequence variations in the amplicon. For evaluation, we used real-time PCR/HRM to test 25 unknown Pasteurellaceae isolates obtained from an external diagnostic laboratory and found the results to be consistent with those of partial 16S rRNA sequencing. The real-time PCR/HRM method provides a sensitive, rapid, and closed-tube approach for Pasteurellaceae species identification for health monitoring of laboratory mice.

10.
A number of circular regression models have been proposed in the literature, and in recent years there has been strong interest in outlier detection for circular regression. An outlier detection procedure can be developed by defining a new statistic in terms of the circular residuals. In this paper, we propose a new measure that transforms the circular residuals into linear measures using a trigonometric function. We then employ the row-deletion approach to identify the observations that affect the measure the most, each a candidate outlier. The corresponding cut-off points and the performance of the detection procedure when applied to Downs and Mardia's model are studied via simulations. For illustration, we apply the procedure to circadian data.
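The abstract does not name the trigonometric transformation; the sketch below uses the common circular distance 1 - cos(r) as a stand-in linear measure and flags large values. The paper's full procedure refits the model with each row deleted, which is omitted here for brevity.

```python
import numpy as np

def linear_measure(theta_obs, theta_fit):
    """Map circular residuals, wrapped to (-pi, pi], into [0, 2]."""
    r = np.arctan2(np.sin(theta_obs - theta_fit),
                   np.cos(theta_obs - theta_fit))
    return 1.0 - np.cos(r)

def candidate_outliers(theta_obs, theta_fit, k=3.0):
    """Flag observations whose linearized residual is unusually large."""
    m = linear_measure(np.asarray(theta_obs), np.asarray(theta_fit))
    return m > m.mean() + k * m.std()
```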

11.
The analysis of motion crowds is concerned with detecting potential hazards for individuals in the crowd. Existing methods analyze the statistics of pixel motion to classify behavior as dangerous or non-dangerous, to detect outlier motions, or to estimate the mean throughput of people in an image region. We suggest a biologically inspired model for the analysis of motion crowds that extracts motion features indicative of potential dangers in crowd behavior. Our model consists of stages for motion detection, integration, and pattern detection that model functions of the primate primary visual cortex (V1), the middle temporal area (MT), and the medial superior temporal area (MST), respectively. The model processes motion transparency, the appearance of multiple motions in the same visual region, in addition to opaque motion. We suggest that motion transparency helps to identify “danger zones” in motion crowds. For instance, motion transparency occurs in small exit passages during evacuation; however, it also occurs in non-dangerous crowd behavior when people moving in opposite directions organize into separate lanes. Our analysis suggests that the combination of motion transparency and slow motion speed can be used to label candidate regions containing dangerous behavior. In addition, locally detected decelerations, or negative speed gradients of motion, are a precursor of danger in crowd behavior, as are globally detected motion patterns that contract toward a single point. In sum, motion transparency, image speeds, motion patterns, and speed gradients extracted from visual motion in videos are important features for describing the behavioral state of a motion crowd.
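As a minimal illustration of two of the named cues (slow speed and negative speed gradients), the sketch below derives them from a dense optical-flow field, which is assumed to be given by an off-the-shelf flow estimator; the thresholds and the crude gradient rule are arbitrary assumptions, not the model's V1/MT/MST stages.

```python
import numpy as np

def danger_cues(flow, slow_thresh=0.5):
    """flow: H x W x 2 optical-flow field in pixels/frame.
    Returns boolean masks for slow motion and local speed decrease."""
    speed = np.linalg.norm(flow, axis=2)
    gy, gx = np.gradient(speed)              # spatial speed gradients
    slow = speed < slow_thresh
    negative_gradient = (gx < 0) & (gy < 0)  # crude deceleration cue
    return slow, negative_gradient
```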

12.

Background

Machine learning researchers in neuroimaging have often relied on regularization techniques when classifying MRI images. Although these techniques were originally introduced to deal with “ill-posed” problems, studies that evaluate the ill-posedness of MRI image classification problems are rare. In addition, dimension reduction is very often applied to the data to avoid the effects of the “curse of dimensionality”.

Methodology

Baseline structural MRI data from cognitively normal subjects and Alzheimer's disease (AD) patients from the AD Neuroimaging Initiative database were used in this study. We evaluated the ill-posedness of this classification problem across different dimensions and sample sizes, and its relationship to the performance of regularized logistic regression (RLR), linear support vector machines (SVM), and a linear regression classifier (LRC). In addition, these methods were compared with their principal-components-space counterparts.

Principal Findings

In voxel space, the prediction performance of all methods increased with sample size. The methods were not only relatively robust to increases in dimension but often showed improved accuracy, a behavior we linked to improvements in the conditioning of the linear kernel matrices. In general, RLR and SVM performed similarly. Surprisingly, the LRC was often very competitive when the linear kernel matrices were best conditioned. Finally, when comparing these methods in voxel and principal-component spaces, we did not find large differences in prediction performance.

Conclusions and Significance

We analyzed the problem of classifying AD MRI images from the perspective of linear ill-posed problems and demonstrate empirically the impact of linear kernel matrix conditioning on the performance of different classifiers. This dependence is characterized across sample sizes and dimensions. In this context we also show that increased dimensionality does not necessarily degrade the performance of machine learning methods; in general, this depends on the nature of the problem and the type of method.
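A minimal sketch of the conditioning check behind these findings: form the linear kernel (Gram) matrix of the training data and inspect its condition number. The matrix sizes and the ridge-style constant added for comparison are arbitrary illustrative choices.

```python
import numpy as np

X = np.random.randn(200, 50000)   # subjects x voxels (illustrative sizes)

K = X @ X.T                       # linear kernel (Gram) matrix, n x n
print("condition number:", np.linalg.cond(K))

# regularization improves the conditioning of the kernel system
lam = 1e-3 * np.trace(K) / K.shape[0]
print("after ridge term:", np.linalg.cond(K + lam * np.eye(K.shape[0])))
```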

13.
Since the 1990s, the massive use of drifting Fish Aggregating Devices (dFADs) to aggregate tropical tunas has strongly modified global purse-seine fisheries. For the first time, a large dataset of GPS positions from buoys deployed by French purse-seiners to monitor dFADs is analysed to provide information on spatio-temporal patterns of dFAD use in the Atlantic and Indian Oceans during 2007-2011. First, we select, from among four classification methods, the model that best separates “at sea” from “on board” buoy positions. A random forest model had the best performance, both in the rate of false “at sea” predictions and in the amount of over-segmentation of “at sea” trajectories (i.e., artificial division of trajectories into multiple, shorter pieces due to misclassification). Performance is further improved by post-processing that removes unrealistically short “at sea” trajectories. Results from the selected model enable us to identify the main areas and seasons of dFAD deployment and the spatial extent of their drift. We find that dFADs drift at sea for 39.5 days on average, with time at sea being shorter and distance travelled longer in the Indian Ocean than in the Atlantic. Of all trajectories, 9.9% end with a beaching event, suggesting that 1,500-2,000 dFADs may be lost onshore each year, potentially impacting sensitive habitats such as the coral reefs of the Maldives, the Chagos Archipelago, and the Seychelles.
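A minimal sketch of the selected classification step. The feature names (speed, distance to the nearest port) and file layout are illustrative assumptions, since the abstract does not list the predictors actually used.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# buoy fixes with a labeled state column: "at sea" or "on board"
df = pd.read_csv("buoy_positions.csv")   # speed_kt, dist_port_km, state
X, y = df[["speed_kt", "dist_port_km"]], df["state"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```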

14.
The study of two biological indicators in monitoring “flash” sterilization demonstrated that indicator construction often leads to a false interpretation of spore survival.

15.
Matrix-Assisted Laser Desorption Ionization-Imaging Mass Spectrometry (MALDI-IMS) is a rapidly evolving method used for the in situ visualization and localization of molecules such as drugs, lipids, peptides, and proteins in tissue sections. Therefore, molecules such as lipids, for which antibodies and other convenient detection reagents do not exist, can be detected, quantified, and correlated with histopathology and disease mechanisms. Furthermore, MALDI-IMS has the potential to enhance our understanding of disease pathogenesis through the use of “biochemical histopathology”. Herein, we review the underlying concepts, basic methods, and practical applications of MALDI-IMS, including post-processing steps such as data analysis and identification of molecules. The potential utility of MALDI-IMS as a companion diagnostic aid for lipid-related pathological states is discussed.

16.
Zhang Hancui, Zhou Weida, Cluster Computing, 2022, 25(1):203-214

Virtual machine abnormal behavior detection is an effective way to help cloud platform administrators monitor the running status of the cloud platform and improve its reliability, and it has become a research hotspot in cloud computing. To address the high computational complexity and high false alarm rate of existing virtual machine anomaly monitoring mechanisms, this paper proposes a two-stage detection mechanism for abnormal virtual machine behavior. First, a workload-based incremental clustering algorithm monitors and analyzes both virtual machine workload information and performance index information. Then, an online anomaly detection mechanism based on the incremental local outlier factor (LOF) algorithm is designed to enhance detection efficiency. Applying this two-phase mechanism significantly reduces computational complexity and meets real-time requirements. The experimental results are verified on the mainstream OpenStack cloud platform.

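Scikit-learn does not provide the incremental LOF variant designed in the paper, but a minimal sketch of local-outlier-factor scoring on VM performance samples, using the library's novelty mode as a stand-in for online detection, could look like this (the feature choices are assumptions):

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

# rows: time-windowed VM samples; columns: e.g. CPU, memory, disk I/O
baseline = np.random.rand(500, 3)   # history of normal behavior
incoming = np.random.rand(20, 3)    # new samples arriving online

# novelty=True fits on normal data and scores unseen samples
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(baseline)
flags = lof.predict(incoming)       # -1 = abnormal, 1 = normal
print("abnormal sample indices:", np.where(flags == -1)[0])
```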

17.
The synthesis of human motion is a complex procedure that involves accurate reconstruction of movement sequences; modeling of musculoskeletal kinematics, dynamics, and actuation; and characterization of reliable performance criteria. Many of these processes have much in common with problems found in robotics research, and task-based methods used in robotics may be leveraged to provide novel musculoskeletal modeling methods and physiologically accurate performance predictions. In this paper, we present (i) a new method for the real-time reconstruction of human motion trajectories using direct marker tracking, (ii) a task-driven muscular effort minimization criterion, and (iii) new human performance metrics for the dynamic characterization of athletic skills. Dynamic motion reconstruction is achieved by controlling a simulated human model to follow the captured marker trajectories in real time. Operational space control and real-time simulation provide the human dynamics at any configuration of the performance. A new criterion of muscular effort minimization is introduced to analyze human static postures, and extensive motion capture experiments were conducted to validate it. Finally, new human performance metrics are introduced to study an athletic skill in detail. These metrics include the effort expenditure and the feasible set of operational-space accelerations during performance of the skill. The dynamic characterization takes into account skeletal kinematics as well as muscle-routing kinematics and force-generating capacities. The developments draw upon an advanced musculoskeletal modeling platform and a task-oriented framework for the effective integration of biomechanics and robotics methods.

18.
Behavioral phenotyping of model organisms has played an important role in unravelling the complexities of animal behavior. Techniques for classifying behavior often rely on easily identified changes in posture and motion; however, such approaches are likely to miss complex behaviors that cannot be readily distinguished by eye (e.g., behaviors produced by high-dimensional dynamics). To explore this issue, we focus on the model organism Caenorhabditis elegans, whose behaviors have been extensively recorded and classified. Using a dynamical systems lens, we identify high-dimensional, nonlinear causal relationships between four basic shapes that describe worm motion (eigenmodes, also called “eigenworms”). We find relationships between all pairs of eigenmodes, but the timescales of the interactions vary between pairs and across individuals. Using these varying timescales, we create “interaction profiles” to represent an individual's behavioral dynamics. As desired, these profiles distinguish well-known behavioral states: the profiles of foraging individuals are distinct from those of individuals exhibiting an escape response. More importantly, we find that interaction profiles can distinguish high-dimensional behaviors among divergent mutant strains previously classified as phenotypically similar. Specifically, they detect phenotypic behavioral differences not previously identified in strains related to dysfunction of hermaphrodite-specific neurons.

19.
Within the animal kingdom, human cooperation represents an outlier. As such, there has been great interest across a number of fields in identifying the factors that support the complex and flexible variety of cooperation that is uniquely human. The ability to identify and preferentially interact with better social partners (partner choice) is proposed to be a major factor in maintaining costly cooperation between individuals. Here we show that the ability to engage in flexible and effective partner choice behavior can be traced back to early childhood. Specifically, across two studies, we demonstrate that by 3 years of age, children identify effective communication as “helpful” (Experiments 1 & 2), reward good communicators with information (Experiment 1), and selectively reciprocate communication with diverse cooperative acts (Experiment 2). Taken together, these results suggest that even in early childhood, humans take advantage of cooperative benefits, while mitigating free-rider risks, through appropriate partner choice behavior.

20.