Similar Articles
 20 similar articles found (search time: 31 ms)
1.
The purpose of this study was to compare three camera calibration approaches for underwater applications: (1) static control points with nonlinear DLT; (2) a moving wand with a nonlinear camera model and bundle adjustment; (3) a moving plate with a nonlinear camera model. The DVideo kinematic analysis system was used for underwater data acquisition. The system consisted of two gen-locked Basler cameras working at 100 Hz, with wide-angle lenses enclosed in underwater housings. The accuracy of the methods was compared in a dynamic rigid bar test (acquisition volume: 4.5×1×1.5 m³). The mean absolute errors were 6.19 mm for the nonlinear DLT, 1.16 mm for the wand calibration, 1.20 mm for the 2D plate calibration using 8 control points and 0.73 mm for the 2D plate calibration using 16 control points. The results of the wand and 2D plate calibration methods were less dependent on the rigid body position in the working volume and were more accurate than the nonlinear DLT. The wand and 2D plate calibration methods produced similar, highly accurate results, making both viable alternatives for underwater 3D motion analysis.
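For readers unfamiliar with the baseline method compared above, the sketch below illustrates the classical 11-parameter DLT: each camera is calibrated by linear least squares from known control points, and a marker seen by two or more cameras is then triangulated. This is a minimal illustration, not the DVideo implementation; the nonlinear lens-distortion terms used in the study are omitted.

```python
import numpy as np

def dlt_calibrate(xyz, uv):
    """Estimate the 11 DLT parameters of one camera by linear least squares.
    xyz: (N, 3) known control-point coordinates; uv: (N, 2) image coordinates; N >= 6."""
    A, b = [], []
    for (X, Y, Z), (u, v) in zip(xyz, uv):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z]); b.append(u)
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L  # parameters L1..L11

def dlt_reconstruct(Ls, uvs):
    """Triangulate one 3D point from two or more calibrated cameras.
    Ls: list of 11-parameter vectors; uvs: list of (u, v) image coordinates."""
    A, b = [], []
    for L, (u, v) in zip(Ls, uvs):
        A.append([L[0] - u * L[8], L[1] - u * L[9], L[2] - u * L[10]]); b.append(u - L[3])
        A.append([L[4] - v * L[8], L[5] - v * L[9], L[6] - v * L[10]]); b.append(v - L[7])
    X, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return X  # (X, Y, Z)
```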

2.
Traditional techniques of human motion analysis use markers located on body articulations. The position of each marker is extracted from each image, and temporal and kinematic analysis is obtained by matching these data with a reference model of the human body. However, because human skin is not rigidly linked to the skeleton, each movement displaces the markers and introduces uncertainty into the results. Moreover, such experiments are mostly conducted under restricted laboratory conditions. The aim of our project was to develop a new method for human motion analysis that requires only simple recording devices, avoids constraining the subject studied, and can be used in various surroundings such as stadiums or gymnasiums. Our approach consisted of identifying and locating body parts in the image, without markers, using a multi-sensory sensor. This sensor exploits both the data given by a video camera delivering intensity images and the data given by a 3D sensor delivering depth images. Our goal in this design was to demonstrate the feasibility of our approach; in any case, the hardware we used could support automated motion analysis. We used a linked segment model based on Winter's model, and we applied our method not to a human subject but to a life-size articulated locomotion model. Our approach consists of finding the posture of this articulated locomotion model in the image. By performing a telemetric image segmentation, we obtained an approximate correspondence between the linked segment model position and the locomotion model position. This posture was then refined by injecting the segmentation results into an intensity image segmentation algorithm. Several tests were conducted with video/telemetric images of the articulated model taken in an outdoor setting. This real life-size model was equipped with movable joints which, in static positions, described two strides of a runner. With our fusion method, we obtained reliable limb identification and location for most postures.

3.
In this study we investigate the applicability of underwater 3D motion capture based on submerged video cameras, in terms of 3D accuracy analysis and trajectory reconstruction. Static points with the classical direct linear transform (DLT) solution, a moving wand with bundle adjustment and a moving 2D plate with Zhang's method were considered for camera calibration. As an example of the final application, we reconstructed the hand motion trajectories in different swimming styles and qualitatively compared them with Maglischo's model. Four highly trained male swimmers performed butterfly, breaststroke and freestyle tasks. The middle fingertip trajectories of both hands in the underwater phase were considered. The accuracy (mean absolute error) of the two calibration approaches (wand: 0.96 mm; 2D plate: 0.73 mm) was comparable to out-of-water results and far better than the classical DLT results (9.74 mm). Among all the swimmers, the hand trajectories of the expert swimmer in each style were almost symmetric and in good agreement with Maglischo's model. The kinematic results highlight symmetry or asymmetry between the two hand sides, intra- and inter-subject variability in the motion patterns, and agreement or disagreement with the model. The two outcomes, calibration accuracy and trajectory reconstruction, both represent steps towards quantitative 3D underwater motion analysis.
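As a rough illustration of the plate-based ("Zhang's method") calibration mentioned above, the sketch below uses OpenCV's calibrateCamera on images of a moving planar target. The checkerboard geometry, square size and file names are assumptions for illustration only, not the setup used in the study.

```python
import glob
import cv2
import numpy as np

board = (9, 6)          # inner-corner grid of the assumed checkerboard
square = 0.025          # square size in metres (assumption)

# 3D coordinates of the corners in the plate's own plane (Z = 0)
obj = np.zeros((board[0] * board[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for path in glob.glob("plate_*.png"):          # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, board)
    if ok:
        obj_pts.append(obj)
        img_pts.append(corners)

# Nonlinear refinement of intrinsics and distortion over all plate poses
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error (px):", rms)
```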

4.
This article describes a method that allows estimating, with the 2D version of the direct linear transformation (DLT), the actual 2D coordinates of a point when that point does not lie strictly in the calibration plane. Markers placed in a vertical line above, below and in the centre of a horizontal calibration plane were filmed by a moving camera. Without correction, large errors (up to 64.5%) were observed for markers out of the calibration plane. After correction, the calculated coordinates were consistent with the actual values (error < 0.55%). The method was then applied to slip distance measurement, using a marker fixed on the hoof of a horse trotting on a calibrated track while being followed with a camera. The correction accounted for 6.6% of the measured slip distance. Combined with the 2D-DLT transformation, the proposed corrective method allows accurate measurement of slip distances for high-speed outdoor locomotion analysis using a moving camera.
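The correction builds on the planar (8-parameter) 2D-DLT; a minimal sketch of that base mapping is given below, with calibration from coplanar control points and the inverse mapping from image to plane coordinates. The out-of-plane correction itself is not reproduced here.

```python
import numpy as np

def dlt2d_calibrate(XY, uv):
    """Least-squares estimate of the 8 planar DLT parameters from >= 4 coplanar points."""
    A, b = [], []
    for (X, Y), (u, v) in zip(XY, uv):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y]); b.append(u)
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y]); b.append(v)
    L, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return L

def dlt2d_image_to_plane(L, u, v):
    """Invert the planar DLT: image coordinates -> coordinates in the calibration plane."""
    A = np.array([[L[0] - u * L[6], L[1] - u * L[7]],
                  [L[3] - v * L[6], L[4] - v * L[7]]])
    b = np.array([u - L[2], v - L[5]])
    return np.linalg.solve(A, b)
```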

5.
The ability to analyze human movement is an essential tool of biomechanical analysis for both sport and clinical applications. Traditional 3D motion capture technology limits the feasibility of large scale data collections and therefore the ability to address clinical questions. Ideally, the measurement system/protocol should be non-invasive, mobile, generate nearly instantaneous feedback to the clinician and athlete, and be relatively inexpensive. The retro-grate reflector (RGR) is a new technology that allows for three-dimensional motion capture using a single camera. Previous studies have shown that orientation and position information recorded by the RGR system has high measurement precision and is strongly correlated with a traditional multi-camera system across a series of static poses. The technology has since been refined to record moving pose information from multiple RGR targets at sampling rates adequate for assessment of athletic movements. The purpose of this study was to compare motion data for a standard athletic movement recorded simultaneously with the RGR and multi-camera (Motion Analysis Eagle) systems. Nine subjects performed three single-leg land-and-cut maneuvers. Thigh and shank three-dimensional kinematics were collected with the RGR and Eagle camera systems simultaneously at 100 Hz. Results showed a strong agreement between the two systems in all three planes, which demonstrates the ability of the RGR system to record moving pose information from multiple RGR targets at a sampling rate adequate for assessment of human movement and supports the ability to use the RGR technology as a valid 3D motion capture system.  相似文献   

6.
Action sport cameras (ASC) have gained broad acceptance for recreational purposes thanks to decreasing cost, increasing image resolution and frame rate, and plug-and-play usability. Consequently, they have recently been considered for sport gesture studies and quantitative athletic performance evaluation. In this paper, we evaluated the potential of two ASCs (GoPro Hero3+) for in-air (laboratory) and underwater (swimming pool) three-dimensional (3D) motion analysis as a function of different camera setups involving the acquisition frequency, image resolution and field of view. This is motivated by the fact that in swimming, movement cycles are characterized by underwater and in-air phases, which imposes the technical challenge of a split volume configuration: an underwater measurement volume observed by underwater cameras and an in-air measurement volume observed by in-air cameras. The reconstruction of whole swimming cycles thus requires merging simultaneous measurements acquired in both volumes, which in turn makes it mandatory to assess the instrumental errors of both volumes. In order to calibrate the camera stereo pair, black spherical markers placed on two calibration tools, used both in-air and underwater, and a two-step nonlinear optimization were exploited. The 3D reconstruction accuracy of testing markers and the repeatability of the estimated camera parameters accounted for system performance. For both environments, statistical tests focused on the comparison of the different camera configurations; each camera configuration was then compared across the two environments. In all assessed resolutions, and in both environments, the reconstruction error (true distance between the two testing markers) was less than 3 mm, and the error related to the working volume diagonal was in the range of 1:2000 (3×1.3×1.5 m³) to 1:7000 (4.5×2.2×1.5 m³), in agreement with the literature. Statistically, the 3D accuracy obtained in the in-air environment was poorer (p < 10⁻⁵) than that obtained underwater, across all the tested camera configurations. Regarding the repeatability of the camera parameters, variability was very low in both environments (1.7% in-air and 2.9% underwater). These results encourage the use of ASC technology for quantitative reconstruction in both in-air and underwater environments.

7.
Recently, it has been shown that the resolution in cryo-tomography could be improved by considering the sample motion in tilt-series alignment and reconstruction, where a set of quadratic polynomials were used to model this motion. One requirement of this polynomial method is the optimization of a large number of parameters, which may limit its practical applicability. In this work, we propose an alternative method for modeling the sample motion. Starting from the standard fiducial-based tilt-series alignment, the method uses the alignment residual as local estimates of the sample motion at the 3D fiducial positions. Then, a scattered data interpolation technique characterized by its smoothness and a closed-form solution is applied to model the sample motion. The motion model is then integrated in the tomographic reconstruction. The new method improves the tomogram quality similar to the polynomial one, with the important advantage that the determination of the motion model is greatly simplified, thereby overcoming one of the major limitations of the polynomial model. Therefore, the new method is expected to make the beam-induced motion correction methodology more accessible to the cryoET community.  相似文献   
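A hedged sketch of the interpolation idea is given below: per-fiducial alignment residuals are treated as samples of the sample-motion field and interpolated smoothly over the volume with a closed-form scattered-data interpolator. The thin-plate-spline radial basis function used here is an assumption standing in for the paper's specific interpolant, and the coordinates and residuals are synthetic.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# fiducial_xyz: (P, 3) 3D fiducial positions; residual_xyz: (P, 3) alignment
# residuals at one tilt (illustrative random data standing in for real values).
fiducial_xyz = rng.uniform(0, 1000, size=(30, 3))
residual_xyz = rng.normal(0, 2, size=(30, 3))

# Smooth, closed-form scattered-data interpolation of the motion field
motion_field = RBFInterpolator(fiducial_xyz, residual_xyz,
                               kernel="thin_plate_spline", smoothing=1.0)

# Evaluate the modelled motion at arbitrary positions before reconstruction.
query = np.array([[100.0, 250.0, 400.0], [600.0, 300.0, 120.0]])
print(motion_field(query))   # (2, 3) interpolated displacements
```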

8.
The aim of this study is to determine the errors in scapular localisation due to skin motion relative to the underlying bone, using an optoelectronic tracking system. We compared three-dimensional (3D) scapular positions obtained with skin markers to those obtained through palpation of three scapular anatomical landmarks. The scapular kinematics of nine subjects were collected. Static positions of the scapula were recorded with the right arm elevated at 0°, 40°, 80°, 120° and 160° in the sagittal plane. Palpation and subsequent digitisation of anatomical landmarks on the scapula and thorax were done at the same positions. Scapular 3D orientation was also computed during 10 repeated movements of arm elevation between 0° and 180°. Significant differences in scapular kinematics between static positions and palpation were seen for anterior/posterior tilt and upward/downward rotation at angles over 120° of humeral elevation, and only at 120° for internal/external rotation. There was no significant difference between the orientations computed in static positions and those computed during movement, for the three scapular orientations. A rotation correction model is presented in order to reduce the errors between static position and palpation measurements.

9.
Magnetic resonance imaging (MRI) is a widely used method for non-invasive study of the structure and function of the human brain. Increasing magnetic field strengths enable higher resolution imaging; however, long scan times and high motion sensitivity mean that image quality is often limited by the involuntary motion of the subject. Prospective motion correction is a technique that addresses this problem by tracking head motion and continuously updating the imaging pulse sequence, locking the imaging volume position and orientation relative to the moving brain. The accuracy and precision of current MR-compatible tracking systems and navigator methods allow the quantification and correction of large-scale motion, but not the correction of very small involuntary movements in six degrees of freedom. In this work, we present an MR-compatible tracking system comprising a single camera and a single 15 mm marker that provides tracking precision on the order of 10 μm and 0.01 degrees. We show preliminary results indicating that, when used for prospective motion correction, the system enables improvement in image quality at both 3 T and 7 T, even in experienced and cooperative subjects trained to remain motionless during imaging. We also report direct observation and quantification of the mechanical ballistocardiogram (BCG) during simultaneous MR imaging. This is particularly apparent in the head-feet direction, with a peak-to-peak displacement of 140 μm.

10.
A new method based on image matching and frame coupling to handle the problems of object detection caused by a moving camera and object motion is presented in this paper. First, feature points are extracted from each frame. Then, motion parameters can be obtained. Sub-images are extracted from the corresponding frame via these motion parameters. Furthermore, a novel searching method for potential orientations improves efficiency and accuracy. Finally, a method based on frame coupling is adopted, which improves the accuracy of object detection. The results demonstrate the effectiveness and feasibility of our proposed method for a moving object with changing posture and with a moving camera.  相似文献   
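The paper does not specify its feature detector or motion model; as a stand-in, the sketch below estimates the inter-frame camera motion with ORB features and a RANSAC homography, which could then be used to extract motion-compensated sub-images before detecting the independently moving object.

```python
import cv2
import numpy as np

def frame_motion(prev_gray, curr_gray):
    """Estimate inter-frame camera motion as a homography from matched features."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# The homography can then warp the previous frame onto the current one, e.g.
# stabilized = cv2.warpPerspective(prev_frame, H, (width, height)),
# so that residual differences highlight the moving object.
```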

11.
12.
This paper proposes a method for comparing data from accelerometers, optical 3D motion capture systems, and force platforms (FPs) in terms of spatial and temporal differences. The testing method is based on the motion laboratory accreditation test (MLAT), which can be used to test the FP and camera-based motion capture components of a motion analysis laboratory. This study extends MLAT to include accelerometer data. Accelerometers were attached to a device similar to the MLAT rod. The elevation of the rod from the plane of the floor is computed and compared with the force platform vector orientation and with the rod orientation obtained by the optical motion capture system. The orientation of the test device is obtained by forming a nonlinear equation group that describes the components of the measured accelerations, and the solution of this equation group is estimated using the Gauss-Newton method. This expanded MLAT procedure can be used in laboratory settings where an FP, camera-based motion capture, or any other motion capture system is used together with accelerometer measurements.
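A minimal sketch of the orientation-from-accelerations step is shown below: under quasi-static conditions a tri-axial accelerometer measures gravity in the sensor frame, so the tilt of the test device can be estimated by nonlinear least squares (here via SciPy's Levenberg-Marquardt solver as a stand-in for the Gauss-Newton iteration described above). The angles and noise level are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

G = 9.81

def rot(roll, pitch):
    """Rotation from sensor frame to world frame (x-roll, then y-pitch)."""
    cr, sr, cp, sp = np.cos(roll), np.sin(roll), np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    return Ry @ Rx

def residuals(angles, measured):
    """Difference between measured and predicted gravity in the sensor frame."""
    predicted = rot(*angles).T @ np.array([0.0, 0.0, G])
    return measured - predicted

# Simulated static reading for roll = 10 deg, pitch = 25 deg, plus noise
true = np.radians([10.0, 25.0])
meas = rot(*true).T @ np.array([0.0, 0.0, G]) + np.random.normal(0, 0.02, 3)

fit = least_squares(residuals, x0=[0.0, 0.0], args=(meas,), method="lm")
print("estimated roll/pitch (deg):", np.degrees(fit.x))
```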

13.
Objective: Dynamic PET imaging is extensively used in brain imaging to estimate parametric maps. Inter-frame motion can substantially disrupt the voxel-wise time-activity curves (TACs), leading to erroneous maps during kinetic modelling. Therefore, it is important to characterize the robustness of kinetic parameters under various motion and kinetic model related factors. Methods: Fully 4D brain simulations ([15O]H2O and [18F]FDG dynamic datasets) were performed using a variety of clinically observed motion patterns. Increasing levels of head motion were investigated, as well as varying temporal frames of motion initiation. Kinetic parameter estimation was performed using both post-reconstruction kinetic analysis and direct 4D image reconstruction to assess bias from inter-frame emission blurring and emission/attenuation mismatch. Results: Kinetic parameter bias heavily depends on the time point of motion initiation. Motion initiated towards the end of the scan results in the most biased parameters. For the [18F]FDG data, k4 is the parameter most sensitive to positional changes, while K1 and blood volume proved relatively robust to motion. Direct 4D image reconstruction appeared more sensitive to changes in TACs due to motion, with parameter bias propagating spatially and depending on the level of motion. Conclusion: Kinetic parameter bias highly depends upon the time frame at which motion occurred, with late-frame motion-induced TAC discontinuities resulting in the least accurate parameters. This is of importance during prolonged data acquisition, as is often the case in neuro-receptor imaging studies. In the absence of motion correction, use of TOF information within 4D image reconstruction could limit the error propagation.
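As background to the post-reconstruction kinetic analysis mentioned above, the sketch below fits a simple one-tissue compartment model to a synthetic voxel TAC; the study itself uses richer models (K1, k4, blood volume), and all rates and the input function here are synthetic.

```python
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(0, 60, 0.5)                 # minutes, uniform frame mid-times
Cp = 10.0 * t * np.exp(-t / 2.0)          # synthetic plasma input function

def tissue_tac(t, K1, k2):
    """One-tissue model: C_T(t) = K1 * [Cp convolved with exp(-k2 t)], discretised."""
    dt = t[1] - t[0]
    irf = np.exp(-k2 * t)
    return K1 * np.convolve(Cp, irf)[:len(t)] * dt

# Simulate a noisy voxel TAC and refit the kinetic parameters.
true_K1, true_k2 = 0.3, 0.1
tac = tissue_tac(t, true_K1, true_k2)
noisy = tac + np.random.normal(0, 0.02 * tac.max(), len(t))

(K1_hat, k2_hat), _ = curve_fit(tissue_tac, t, noisy, p0=[0.1, 0.05])
print(f"K1 = {K1_hat:.3f}, k2 = {k2_hat:.3f}")
# Inter-frame motion would perturb individual frames of 'noisy', biasing these fits.
```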

14.
Current computational models of motion processing in the primate motion pathway do not cope well with image sequences in which a moving pattern is superimposed upon a static texture. The use of non-linear operations and the need for contrast normalization in motion models mean that the separation of the influences of moving and static patterns on the motion computation is not trivial. Therefore, the response to the superposition of static and moving patterns provides an important means of testing various computational strategies. Here we describe a computational model of motion processing in the visual cortex, one of the advantages of which is that it is highly resistant to interference from static patterns.  相似文献   

15.
Missing information in motion capture data caused by occlusion or detachment of markers is a common problem that is difficult to avoid entirely. The aim of this study was to develop and test an algorithm for reconstruction of corrupted marker trajectories in datasets representing human gait. The reconstruction was facilitated using information on marker inter-correlations obtained from a principal component analysis, combined with a novel weighting procedure. The method was completely data-driven and did not require any training data. We tested the algorithm on datasets with movement patterns that can be considered both well suited (healthy subject walking on a treadmill) and less suited (transitioning from walking to running, and the gait of a subject with cerebral palsy) to reconstruction. Specifically, we created 50 copies of each dataset and corrupted them with gaps in multiple markers at random temporal and spatial positions. The reconstruction error, quantified by the average Euclidean distance between predicted and measured marker positions, was ≤ 3 mm for the well suited dataset, even when there were gaps in up to 70% of all time frames. For the less suited datasets, median reconstruction errors were in the range of 5–6 mm; however, a few reconstructions had substantially larger errors (up to 29 mm). Our results suggest that the proposed algorithm is a viable alternative both to conventional gap-filling algorithms and to state-of-the-art reconstruction algorithms developed for motion capture systems. The strengths of the proposed algorithm are that it can fill gaps anywhere in the dataset and that the gaps can be considerably longer than with conventional interpolation techniques. Limitations are that it does not enforce musculoskeletal constraints and that the reconstruction accuracy declines when applied to datasets with less predictable movement patterns.
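A hedged sketch of PCA-based gap filling in the spirit of the study (the paper's specific weighting procedure is not reproduced) is shown below: missing marker coordinates are reconstructed iteratively from the low-dimensional inter-marker correlation structure.

```python
import numpy as np

def pca_fill(X, n_components=5, n_iter=50):
    """Iteratively impute gaps (NaNs) in a frames x (3*markers) trajectory matrix."""
    X = X.copy()
    missing = np.isnan(X)
    col_mean = np.nanmean(X, axis=0)
    X[missing] = np.take(col_mean, np.where(missing)[1])   # crude initial guess
    for _ in range(n_iter):
        mu = X.mean(axis=0)
        Xc = X - mu
        # principal directions from the SVD of the centred data
        U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
        V = Vt[:n_components]
        Xhat = (Xc @ V.T) @ V + mu        # rank-limited reconstruction
        X[missing] = Xhat[missing]        # update only the gap entries
    return X
```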

16.
This paper describes the complete design of a high-speed optical motion analyzer system. The core of the image processing unit is implemented using a differential algorithm, and several intelligent, conservative procedures that speed up the marker search have also been proposed and implemented for the processing of human motion. Moreover, an optimized modified direct linear transformation (MDLT) method is used to reconstruct 3D marker positions, from which kinematic characteristics of the motion are derived. A set of tests using simple mechanical devices was conducted to verify the system outputs. For verification on human motion, the system was used for gait analysis, and the resulting joint angles showed good agreement with other investigations. Furthermore, a sport application of the system is quantitatively presented and discussed for Iranian national karate-kas. The low computational cost, the high precision in detecting and reconstructing marker positions (2.39 mm error), and the capability of capturing from any number of cameras to enlarge the subject's operating volume make the proposed method a reliable approach for real-time human motion analysis. Other notable features of the system are the absence of special environment limitations, portability, low-cost hardware, and built-in units for simulation and kinematic analysis.

17.

Background

Motion-defined form can seem to persist briefly after motion ceases, before seeming to gradually disappear into the background. Here we investigate if this subjective persistence reflects a signal capable of improving objective measures of sensitivity to static form.

Methodology/Principal Findings

We presented a sinusoidal modulation of luminance, masked by a background noise pattern. The sinusoidal luminance modulation was usually subjectively invisible when static, but visible when moving. We found that drifting then stopping the waveform resulted in a transient subjective persistence of the waveform in the static display. Observers' objective sensitivity to the position of the static waveform was also improved after viewing moving waveforms, compared to viewing static waveforms for a matched duration. This facilitation did not occur simply because movement provided more perspectives of the waveform, since performance following pre-exposure to scrambled animations did not match that following pre-exposure to smooth motion. Observers did not simply remember waveform positions at motion offset, since removing the waveform before testing reduced performance.

Conclusions/Significance

Motion processing therefore interacts with subsequent static visual inputs in a way that can improve performance in objective sensitivity measures. We suggest that the brief subjective persistence of motion-defined forms that can occur after motion offsets is a consequence of the decay of a static form signal that has been transiently enhanced by motion processing.  相似文献   

18.
Positron Emission Tomography (PET) images are prone to motion artefacts due to the long acquisition time of PET measurements. Recently, simultaneous magnetic resonance imaging (MRI) and PET have become available in the first generation of Hybrid MR-PET scanners. In this work, the elimination of artefacts due to head motion in PET neuroimages is achieved by a new approach utilising MR-based motion tracking in combination with PET list mode data motion correction for simultaneous MR-PET acquisitions. The method comprises accurate MR-based motion measurements, an intra-frame motion minimising and reconstruction time reducing temporal framing algorithm, and a list mode based PET reconstruction which utilises the Ordinary Poisson Algorithm and avoids axial and transaxial compression. Compared to images uncorrected for motion, an increased image quality is shown in phantom as well as in vivo images. In vivo motion corrected images show an evident increase of contrast at the basal ganglia and a good visibility of uptake in tiny structures such as superior colliculi.  相似文献   

19.
Frost NA, Lu HE, Blanpied TA. PLoS ONE 2012, 7(5): e36751
In neurons, the shape of dendritic spines relates to synapse function, which is rapidly altered during experience-dependent neural plasticity. The small size of spines makes detailed measurement of their morphology in living cells best suited to super-resolution imaging techniques. Mapping the distribution of molecular positions via live-cell Photoactivated Localization Microscopy (PALM) is a powerful approach, but molecular motion complicates this analysis and can degrade the overall resolution of the morphological reconstruction. Nevertheless, the motion is of additional interest because tracking single molecules provides diffusion coefficients, bound fraction, and other key functional parameters. We used Monte Carlo simulations to examine features of single-molecule tracking of practical utility for the simultaneous determination of cell morphology. We find that the accuracy of determining both the distance and angle of motion depends heavily on the precision with which molecules are localized. Strikingly, diffusion within a bounded region resulted in an inward bias of localizations away from the edges, inaccurately reflecting the region structure. This inward bias additionally resulted in a counterintuitive reduction of the measured diffusion coefficient for fast-moving molecules; this effect was accentuated by the long camera exposures typically used in single-molecule tracking. Thus, accurate determination of cell morphology from rapidly moving molecules requires the use of short integration times within each image to minimize artifacts caused by motion during image acquisition. Sequential imaging of neuronal processes using excitation pulses of either 2 ms or 10 ms within imaging frames confirmed this: processes appeared erroneously thinner when imaged using the longer excitation pulse. Using this pulsed excitation approach, we show that PALM can be used to image spine and spine neck morphology in living neurons. These results clarify a number of issues involved in interpretation of single-molecule data in living cells and provide a method to minimize artifacts in single-molecule experiments.
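The inward-bias effect described above can be reproduced with a few lines of Monte Carlo simulation: molecules diffusing in a bounded region and localized as their average position over a finite exposure pile up away from the edges, and the effect grows with exposure time. All parameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 0.1            # um^2/s, diffusion coefficient (assumed)
width = 0.2        # um, width of the bounded region, e.g. a thin neuronal process
dt = 1e-4          # s, simulation time step
exposure = 0.010   # s, camera exposure; longer exposures accentuate the bias
steps = int(exposure / dt)

def blurred_localizations(n_molecules=20000):
    """Average position of each diffusing molecule over one camera exposure."""
    x = rng.uniform(0, width, n_molecules)
    total = np.zeros(n_molecules)
    for _ in range(steps):
        x = x + rng.normal(0.0, np.sqrt(2 * D * dt), n_molecules)
        x = np.abs(x)                      # reflect at the lower boundary
        x = width - np.abs(width - x)      # reflect at the upper boundary
        total += x
    return total / steps

locs = blurred_localizations()
# True positions are uniform, so 20% of molecules should sit in the outer 10% bands;
# motion blur pulls localizations inward, so the measured fraction is smaller.
edge_frac = np.mean((locs < 0.1 * width) | (locs > 0.9 * width))
print(f"fraction of localizations within 10% of an edge: {edge_frac:.3f} (uniform: 0.200)")
```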

20.
Measuring three-dimensional (3D) forearm rotational motion is difficult. We aimed to develop and validate a new method for analyzing 3D forearm rotational motion. We proposed biplane fluoroscopic intensity-based 2D–3D matching, which employs automatic registration processing using an evolutionary optimization strategy. Biplane fluoroscopy was conducted for forearm rotation at 12.5 frames per second, along with computed tomography (CT) at one static position. An arm phantom was embedded with eight stainless steel spheres (diameter, 1.5 mm), and forearm rotational motion measurements using the proposed method were compared with those using radiostereometric analysis, which is considered the ground truth. For the temporal resolution analysis, we measured radiohumeral joint motion in a patient with posterolateral rotatory instability and compared the 2D–3D matching method with a simulated multiple-CT method, which uses CTs at multiple positions and interpolates between them. Rotation errors of the radius and ulna between the two methods were 0.31 ± 0.35° and 0.32 ± 0.33°, respectively; translation errors were 0.43 ± 0.35 mm and 0.29 ± 0.25 mm, respectively. Although the 2D–3D method could detect joint dislocation, the multiple-CT method could not capture the quick motion during joint dislocation. The proposed method enables high temporal- and spatial-resolution motion analysis with low radiation exposure. Moreover, it enables the detection of sudden motion, such as joint dislocation, and may contribute to 3D motion analysis, including joint dislocation, which currently cannot be analyzed using conventional methods.
