Similar Articles (20 results)
1.
The use of remote imagery captured by unmanned aerial vehicles (UAV) has tremendous potential for designing detailed site-specific weed control treatments in early post-emergence, which has not been possible previously with conventional airborne or satellite images. A robust and entirely automatic object-based image analysis (OBIA) procedure was developed on a series of UAV images using a six-band multispectral camera (visible and near-infrared range) with the ultimate objective of generating a weed map in an experimental maize field in Spain. The OBIA procedure combines several contextual, hierarchical and object-based features and consists of three consecutive phases: 1) classification of crop rows by application of a dynamic and auto-adaptive classification approach, 2) discrimination of crops and weeds on the basis of their relative positions with reference to the crop rows, and 3) generation of a weed infestation map in a grid structure. The estimation of weed coverage from the image analysis yielded satisfactory results. The relationship of estimated versus observed weed densities had a coefficient of determination of r² = 0.89 and a root mean square error of 0.02. A map of three categories of weed coverage was produced with an overall accuracy of 86%. In the experimental field, the area free of weeds was 23%, and the area with low weed coverage (<5% weeds) was 47%, which indicated a high potential for reducing herbicide applications or other weed-control operations. The OBIA procedure computes multiple data and statistics derived from the classification outputs, which permits calculation of herbicide requirements and estimation of the overall cost of weed management operations in advance.
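Purely as an illustration of the final grid-mapping phase described above (not the authors' code), the sketch below turns a hypothetical per-pixel crop/weed/bare classification into a gridded weed-coverage map with three coverage categories; the class codes, cell size and test array are assumptions.

```python
import numpy as np

WEED, CROP, BARE = 2, 1, 0          # hypothetical class codes from the OBIA output

def weed_coverage_grid(classified: np.ndarray, cell: int = 50) -> np.ndarray:
    """Percent weed cover per cell x cell pixel block."""
    h, w = classified.shape
    rows, cols = h // cell, w // cell
    cover = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            block = classified[i * cell:(i + 1) * cell, j * cell:(j + 1) * cell]
            cover[i, j] = 100.0 * np.mean(block == WEED)
    return cover

def coverage_classes(cover: np.ndarray) -> np.ndarray:
    """Map percent cover to three categories: 0 = weed-free, 1 = <5 %, 2 = >=5 %."""
    return np.digitize(cover, bins=[0.01, 5.0])

# Example on random data: fraction of the field that could be excluded from spraying
cover = weed_coverage_grid(np.random.randint(0, 3, (1000, 1000)))
print("weed-free fraction of grid cells:", np.mean(coverage_classes(cover) == 0))
```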

2.
Traditional bird-monitoring methods suffer from long survey times, high labour and material costs, and inaccurate results. In recent years, UAV remote sensing has been applied increasingly widely in ecology, but mature techniques for bird surveys are still lacking. From 19 to 25 November 2019, a small multi-rotor UAV carrying a visible-light camera (DJI Mavic 2 Enterprise Zoom) was used in the West Dongting Lake National Nature Reserve, Hunan, where monitoring plots were delineated in four waterbird aggregation areas, flight routes were planned, and suitable flight and imaging parameters were set before acquiring the remote sensing data. Disturbance levels were defined according to the birds' reactions to the UAV, and disturbance during imaging was recorded. The acquired images were stitched and colour-balanced with PTGui Pro 11.0 before interpretation. A waterbird classification and annotation table was built for the composite images, which were interpreted manually, and the disturbance records were analysed statistically. In total, 11 survey flights were carried out over 10 plots; the largest plot covered about 18 ha, and the image resolution at a flight altitude of 75 m was 0.012 m/pixel. Six larger-bodied waterbird species within the plots, namely grey heron (Ardea cinerea), great egret (A. alba), tundra swan (Cygnus columbianus), northern lapwing (Vanellus vanellus), common teal (Anas crecca) and falcated duck (Mareca falcata), were classified and counted. Common teal and falcated duck could not be distinguished from the imagery alone; the other four species were successfully interpreted and counted. The disturbance records showed that the UAV survey caused only weak disturbance to the waterbirds. The results indicate that rapid remote sensing surveys of large and medium-sized wetland waterbirds using a small UAV with a visible-light camera are feasible and have application potential for bird surveys in lake wetlands; by choosing a suitable flight platform and setting appropriate flight altitude, speed and image overlap, accurate interpretation can be achieved while avoiding excessive disturbance to the birds.

3.
Images taken at different spectral bands are increasingly used for characterizing plants and their health status. In contrast to conventional point measurements, imaging detects the distribution and quantity of signals and thus improves the interpretation of fluorescence and reflectance signatures. In multispectral fluorescence and reflectance set-ups, images are separately acquired for the fluorescence in the blue, green, red, and far red, as well as for the reflectance in the green and in the near infrared regions. In addition, 'reference' colour images are taken with an RGB (red, green, blue) camera. Examples of imaging for the detection of photosynthetic activity, UV screening caused by UV-absorbing substances, fruit quality, leaf tissue structure, and disease symptoms are introduced. Subsequently, the different instrumentations used for multispectral fluorescence and reflectance imaging of leaves and fruits are discussed. Various types of irradiation and excitation light sources, detectors, and components for image acquisition and image processing are outlined. The acquired images (or image sequences) can be analysed either directly for each spectral range (wherein they were captured) or after calculating ratios of the different spectral bands. This analysis can be carried out for different regions of interest selected manually or (semi)-automatically. Fluorescence and reflectance imaging in different spectral bands represents a promising tool for non-destructive plant monitoring and a 'road' to a broad range of identification tasks.

4.
Extraction of urban vegetation coverage based on image fusion and spectral mixture analysis
刘勇, 岳文泽. 《生态学报》, 2010, 30(1): 93-99.
Extracting urban vegetation coverage is important for protecting urban green space and for urban planning. With the development of remote sensing technology, spectral mixture analysis has been widely used to extract urban vegetation coverage from medium-resolution multispectral imagery, but the relatively coarse spatial resolution of such imagery limits the model's applications. Taking Hangzhou as a case study, the Gram-Schmidt (GS) method was first used to fuse the multispectral and panchromatic bands of Landsat ETM+; urban vegetation coverage was then extracted from the fused ETM+ image with a spectral mixture model, and the accuracy was checked against SPOT imagery. After GS fusion, the standard deviation, entropy and mean gradient of the image increased while the relative deviation stayed below 0.07, indicating that spatial resolution was improved while the multispectral information was preserved. Compared with the SPOT image, more than 75% of the samples had similar vegetation coverage values on the fused image; larger errors occurred in urban pixels where vegetation was especially sparse or especially dense. Compared with the source image, the root mean square error and systematic error of the vegetation coverage extracted from the fused image decreased by 0.01. The method shows potential for reducing the cost and improving the accuracy of urban vegetation monitoring.
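A minimal sketch of the spectral-unmixing idea used above: linear spectral mixture analysis with a non-negativity and an approximate sum-to-one constraint, solved per pixel. The endmember spectra, band count and pixel values are invented for illustration, and the Gram-Schmidt fusion step is not shown.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel: np.ndarray, endmembers: np.ndarray) -> np.ndarray:
    """Solve pixel ~= endmembers @ fractions with fractions >= 0 and sum ~ 1.

    endmembers: (n_bands, n_endmembers). Sum-to-one is enforced softly by
    appending a heavily weighted constraint row, a common trick but not
    necessarily the approach used in the paper.
    """
    w = 1e3                                           # weight of the sum-to-one row
    A = np.vstack([endmembers, w * np.ones(endmembers.shape[1])])
    b = np.append(pixel, w)
    fractions, _ = nnls(A, b)
    return fractions

# Hypothetical endmembers (vegetation, impervious surface, soil) for a 4-band fused image
E = np.array([[0.05, 0.20, 0.25],
              [0.08, 0.22, 0.28],
              [0.06, 0.25, 0.35],
              [0.45, 0.30, 0.40]])
pixel = np.array([0.20, 0.16, 0.18, 0.42])
print(unmix(pixel, E))                                # first value ~ vegetation fraction
```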

5.
Aerial surveys of marine mammals are routinely conducted to assess and monitor species’ habitat use and population status. In Australia, dugongs (Dugong dugon) are regularly surveyed and long-term datasets have formed the basis for defining habitat of high conservation value and risk assessments of human impacts. Unmanned aerial vehicles (UAVs) may facilitate more accurate, human-risk-free, and cheaper aerial surveys. We undertook the first Australian UAV survey trial in Shark Bay, Western Australia. We conducted seven flights of the ScanEagle UAV, mounted with a digital SLR camera payload. During each flight, ten transects, covering a 1.3 km² area frequently used by dugongs, were flown at 500, 750 and 1000 ft. Image (photograph) capture was controlled via the Ground Control Station and the capture rate was scheduled to achieve a prescribed 10% overlap between images along transect lines. Images were manually reviewed post hoc for animals and scored according to sun glitter, Beaufort sea state and turbidity. We captured 6243 images, 627 containing dugongs. We also identified whales, dolphins, turtles and a range of other fauna. Of all possible dugong sightings, 95% (CI = 90–98%) were subjectively classed as ‘certain’ (unmistakably dugongs). Neither our dugong sighting rate nor our ability to identify dugongs with certainty was affected by UAV altitude. Turbidity was the only environmental variable significantly affecting the dugong sighting rate. Our results suggest that UAV systems may not be limited by sea state conditions in the same manner as sightings from manned surveys. The overlap between images proved valuable for detecting animals that were masked by sun glitter in the corners of images, and identifying animals initially captured at awkward body angles. This initial trial of a basic camera system has successfully demonstrated that the ScanEagle UAV has great potential as a tool for marine mammal aerial surveys.

6.
Mapping the distribution and quantity of soil properties is important for black soil protection, management, and restoration in northeastern China. The objective of this study was to evaluate the effect of spatial resolution on soil pH mapping using satellite images of the black soil region in northeastern China. A high spatial resolution Gaofen (GF)-2 high-definition image and multispectral images acquired by the Landsat 8 Operational Land Imager and Sentinel-2 MultiSpectral Instrument were used to compare their performance in soil pH prediction. The spectral variables, including the original bands of the three satellite images and a variety of spectral indices derived from the original bands, were employed. Then, a machine learning model (quantile regression forest) was used to determine the relationships between the spectral variables and the measured soil pH, and prediction models were established to estimate the soil pH and to characterize its spatial pattern. The results revealed that the soil pH prediction model based on the GF-2 image had a slightly higher prediction accuracy than the models constructed using the Landsat 8 and Sentinel-2 images. The prediction models for Landsat 8, Sentinel-2, and GF-2 had root mean square errors of 0.34, 0.39, and 0.31, respectively. The use of remote sensing images with a high spatial resolution may not substantially increase the prediction accuracy of soil pH mapping compared with the results derived from medium-resolution images.
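A hedged sketch of the modelling chain described above: derive simple spectral indices from image bands and relate them to measured soil pH with a forest model. The study used a quantile regression forest; a plain RandomForestRegressor stands in here, and all band values, indices and coefficients are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 150
red, nir, swir = rng.uniform(0.05, 0.4, (3, n))       # synthetic band reflectance
X = np.column_stack([
    red, nir, swir,
    (nir - red) / (nir + red),                        # NDVI-type index
    nir / red,                                        # simple ratio index
])
pH = 6.5 + 2.0 * swir - 1.5 * X[:, 3] + rng.normal(0, 0.2, n)   # synthetic target

Xtr, Xte, ytr, yte = train_test_split(X, pH, test_size=0.3, random_state=0)
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(Xtr, ytr)
rmse = mean_squared_error(yte, rf.predict(Xte)) ** 0.5
print(f"RMSE = {rmse:.2f} pH units")
```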

7.
In an image fusion process, the spatial resolution of a multispectral image is improved by a panchromatic band. However, due to the spatial and spectral resolution differences between these two data sets, the enhanced image may have two kinds of distortion, spatial and spectral. Therefore, to evaluate the efficiency of a pansharpening method, both types of distortion must be examined. Unfortunately, there is still no accepted index that can thoroughly assess the quality of a pansharpened image; moreover, most of the proposed methods for reviewing the quality of output images have been developed with an emphasis on urban (residential) areas. Accordingly, to assess the quality of the pansharpened images in this study, we evaluated widely used conventional approaches, such as visual examination, quantitative evaluation and impact analysis on the change detection process of mangrove forests, and finally suggested a simple yet efficient approach for such research in natural ecosystems. In the proposed method, based on the nature of the ecosystem, a spectral vegetation index is applied to the pansharpened images; the spectral quality of the images is then evaluated using two parameters, 1) the area under the curve of the histogram of the spectral vegetation index in the natural ecosystem region and 2) its centroid. The spatial quality of the pansharpened images is evaluated by placing two mutually perpendicular transects in the spectral vegetation index images and examining the spatial deviation along them. Together with expert review and visual evaluation of the pansharpened images, the proposed method has clear advantages for assessing the quality of fused images, especially in natural ecosystems. Based on the evaluations, among 11 pansharpening methods (Ehlers Fusion, FuzeGO, Gram-Schmidt, HPF, HCS, PCA, Modified IHS, Brovey Transform, Projective Resolution Merge, Wavelet IHS, and Wavelet PCA), the HPF, Brovey Transform and Modified IHS methods, in that order, showed the best performance in the digital change detection of mangrove forests.
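The sketch below illustrates one possible reading of the proposed spectral-quality check: compute a vegetation index (NDVI here) on the reference and pansharpened data over the ecosystem region, then compare the area under the histogram curve and its centroid. The band order, mask and data are stand-ins, and the exact definition of the histogram area in the paper may differ.

```python
import numpy as np

def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
    return (nir - red) / (nir + red + 1e-9)

def hist_area_and_centroid(vi_values: np.ndarray, bins: int = 100):
    """Area under the VI histogram curve and its centroid for one image/region."""
    counts, edges = np.histogram(vi_values, bins=bins, range=(-1.0, 1.0))
    centers = 0.5 * (edges[:-1] + edges[1:])
    area = (counts * np.diff(edges)).sum()
    centroid = np.average(centers, weights=counts)
    return area, centroid

rng = np.random.default_rng(0)
red_ref = rng.uniform(0.05, 0.15, (256, 256))
nir_ref = rng.uniform(0.30, 0.60, (256, 256))
red_fused = red_ref + rng.normal(0, 0.01, red_ref.shape)   # slight spectral distortion
nir_fused = nir_ref + rng.normal(0, 0.01, nir_ref.shape)
mangrove_mask = np.ones((256, 256), dtype=bool)            # stand-in for the ecosystem region

for name, (r, n) in {"reference": (red_ref, nir_ref), "fused": (red_fused, nir_fused)}.items():
    area, centroid = hist_area_and_centroid(ndvi(r, n)[mangrove_mask])
    print(name, round(float(area), 1), round(float(centroid), 3))
```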

8.
Herbaceous aboveground biomass (HAB) is a key indicator of grassland vegetation, and indirect estimation tools, such as remote sensing imagery, increase the potential for covering larger areas in a timely and cost-efficient way. Structure from Motion (SfM) is an image analysis process that can create a variety of 3D spatial models as well as 2D orthomosaics from a set of images. Computed from Unmanned Aerial Vehicle (UAV) and ground camera measurements, the potential of SfM to estimate herbaceous aboveground biomass in Sahelian rangelands was tested in this study. Both UAV and ground camera recordings were used at three different scales: temporal, landscape, and national (across Senegal). All images were processed using PIX4D photogrammetry software and were used to extract vegetation indices and heights. A random forest algorithm was used to estimate the HAB, and the average estimation errors were around 150 g/m² for fresh mass (20% relative error) and 60 g/m² for dry mass (around 25% error). A comparison between different datasets revealed that the estimates based on camera data were slightly more accurate than those from UAV data. It was also found that combining datasets across scales for the same type of tool (UAV or camera) could be a useful option for monitoring HAB in Sahelian rangelands or in other grassy ecosystems.
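A hedged sketch of the estimation step described above: random forest regression of herbaceous biomass from an image vegetation index and SfM-derived canopy height. Feature choices, sample data and hyperparameters are placeholders, not values from the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 0.8, n),       # e.g. an excess-green index from the orthomosaic
    rng.uniform(0, 0.4, n),       # mean SfM canopy height (m)
])
y = 300 * X[:, 0] + 800 * X[:, 1] + rng.normal(0, 40, n)   # synthetic fresh mass (g/m2)

rf = RandomForestRegressor(n_estimators=300, random_state=0)
pred = cross_val_predict(rf, X, y, cv=5)
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(f"RMSE = {rmse:.1f} g/m2, relative error = {100 * rmse / y.mean():.1f} %")
```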

9.
Unmanned aerial vehicles (UAVs) have become a useful tool in polar research. While their performance is already proven, little is known about their impact on wildlife. To assess the disturbance caused to penguins, flights with a UAV were conducted over an Adélie penguin (Pygoscelis adeliae) colony. Vertical and horizontal flights were performed between 10 and 50 m in altitude. Penguins’ reactions were video-recorded, and the behavioural response was used to indicate the level of disturbance. In all flight modes, disturbance increased immediately after takeoff and remained elevated at all altitudes between 20 and 50 m. When the UAV descended below 20 m, the disturbance increased further, with almost all individuals being vigilant. Only at these low altitudes did vertical flights cause an even higher level of disturbance than horizontal ones. Repetitions of horizontal overflights showed no short-term habituation. Since the results are only valid for the specific UAV model used, we recommend a more extensive approach with different UAV specifications. As the highest flight altitudes already caused detectable but not subjectively visible responses, we also recommend treating subjective impressions of disturbance with caution.

10.
Soybean is an important food and oil crop worldwide, and accurate statistics on its planting area matter for optimizing cropping structure and for world food security. Techniques that accurately extract soybean planting areas at the field scale from UAV images combined with deep learning are therefore of practical importance. In this study, RGB images and multispectral (RGN) images were acquired simultaneously by a quad-rotor UAV (DJI Phantom 4 Pro) at a flying height of 200 m, and features were extracted from both image types. Fusion images of RGB + VIs and RGN + VIs were then obtained by concatenating the band reflectance of the original images with calculated vegetation indices (VIs). The soybean planting area was segmented from the feature-fusion images by U-Net, and the accuracy of the two sensors was compared. The Kappa coefficients obtained from the RGB image, the RGN image, CME (the combination of CIVE, MExG, and ExGR), ODR (the combination of OSAVI, DVI, and RDVI), RGB + CME, and RGN + ODR were 0.8806, 0.9327, 0.8437, 0.9330, 0.9420, and 0.9238, respectively. The Kappa coefficients of the original images combined with vegetation indices were higher than those of the original images alone, indicating that computing vegetation indices improved the soybean recognition accuracy of the U-Net model. Among them, the soybean planting area extracted from RGB + CME had the highest precision, with a Kappa coefficient of 0.9420. Finally, the soybean recognition accuracy of U-Net was compared with DeepLabv3+, Random Forest, and Support Vector Machine, and U-Net performed best. It can be concluded that training U-Net on fusion images that combine the original UAV imagery with vegetation-index features can effectively segment soybean planting areas. This work provides technical support for farms, family cooperatives, and other producers to manage soybean planting and production precisely and at low cost.
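A minimal sketch of the feature-fusion step described above: concatenating RGB reflectance with computed vegetation indices (here ExG and ExGR as stand-ins for the CME combination) before segmentation, and scoring a predicted mask with Cohen's kappa. The segmentation network itself is omitted, and the test data are random.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def fuse_rgb_with_indices(img: np.ndarray) -> np.ndarray:
    """img: (H, W, 3) float array with R, G, B reflectance in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    exg = 2 * g - r - b                        # excess green
    exgr = exg - (1.4 * r - g)                 # excess green minus excess red
    return np.dstack([img, exg, exgr])         # (H, W, 5) input stack for a U-Net

def kappa(pred_mask: np.ndarray, true_mask: np.ndarray) -> float:
    return cohen_kappa_score(true_mask.ravel(), pred_mask.ravel())

img = np.random.rand(64, 64, 3)
print(fuse_rgb_with_indices(img).shape)        # (64, 64, 5)

true_mask = np.random.randint(0, 2, (64, 64))
pred_mask = true_mask.copy()
pred_mask[:8] = 1 - pred_mask[:8]              # imperfect prediction for the demo
print("kappa:", round(kappa(pred_mask, true_mask), 3))
```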

11.
This article presents a multispectral image analysis approach for probing the spectral backscattered irradiance from algal cultures. It was demonstrated how this spectral information can be used to measure algal biomass concentration, detect invasive species, and monitor culture health in real time. To accomplish this, a conventional RGB camera was used as a three-band photodetector for imaging cultures of the green alga Chlorella sp. and the cyanobacterium Anabaena variabilis. A novel floating reference platform was placed in the culture, which enhanced the sensitivity of image color intensity to biomass concentration. Correlations were generated between the RGB color vector of culture images and the biomass concentrations for monocultures of each strain. These correlations predicted the biomass concentrations of independently prepared cultures with average errors of 22% and 14%, respectively. Moreover, the difference in spectral signatures between the two strains was exploited to detect the invasion of Chlorella sp. cultures by A. variabilis. Invasion was successfully detected for A. variabilis to Chlorella sp. mass ratios as small as 0.08. Finally, a method was presented for using multispectral imaging to detect thermal stress in A. variabilis. These methods can be extended to field applications to provide delay-free process control feedback for efficient operation of large-scale algae cultivation systems. © 2013 American Institute of Chemical Engineers Biotechnol. Prog., 30:233–240, 2014

12.
This paper proposes a supervised classification scheme to identify 40 tree species (2 coniferous, 38 broadleaf) belonging to 22 families and 36 genera in high spatial resolution QuickBird multispectral images (HMS). Overall kappa coefficient (OKC) and species conditional kappa coefficients (SCKC) were used to evaluate classification performance in training samples and estimate accuracy and uncertainty in test samples. Baseline classification performance using HMS images and vegetation index (VI) images was evaluated with OKC values of 0.58 and 0.48 respectively, but performance improved significantly (up to 0.99) when these were used in combination with an HMS spectral-spatial texture image (SpecTex). One of the 40 species had very high conditional kappa coefficient performance (SCKC ≥ 0.95) using 4-band HMS and 5-band VIs images, but only five species had lower performance (0.68 ≤ SCKC ≤ 0.94) using the SpecTex images. When SpecTex images were combined with a Visible Atmospherically Resistant Index (VARI), there was a significant improvement in performance in the training samples. The same level of improvement could not be replicated in the test samples, indicating that a high degree of uncertainty exists in species classification accuracy, which may be due to individual tree crown density, leaf greenness (inter-canopy gaps), and noise in the background environment (intra-canopy gaps). These factors increase uncertainty in the spectral texture features and therefore represent potential problems when using pixel-based classification techniques for multi-species classification.

13.
Comparison of soybean leaf area index estimation accuracy based on multi-source remote sensing data
Recent advances in remote sensing technology have greatly diversified the available data sources. To compare the leaf area index (LAI) estimation accuracy of multi-source remote sensing data, soybean was taken as the study object. Five vegetation indices, namely the ratio vegetation index (RVI), normalized difference vegetation index (NDVI), soil-adjusted vegetation index (SAVI), difference vegetation index (DVI) and triangular vegetation index (TVI), were combined with ground-measured LAI to build empirical regression models, and the ability of three kinds of remote sensing data (ground hyperspectral data, UAV multispectral imagery and GF-1 WFV imagery) to estimate soybean LAI was compared. Differences in LAI retrieval among the three data types were discussed in terms of sensor geometry, spectral response characteristics and pixel spatial resolution. The results showed that both the ground hyperspectral model and the UAV multispectral model predicted soybean LAI accurately (at the α = 0.01 significance level, R² > 0.69 and RMSE < 0.40 for both). The logarithmic RVI model from the ground hyperspectral data slightly outperformed the linear NDVI model from the UAV multispectral data, but the difference was small (estimation accuracy differed by 0.3%, R² by 0.04 and RMSE by 0.006). The model based on GF-1 WFV data predicted soybean LAI in the study area poorly (R² < 0.30, RMSE > 0.70). Among the satellite, UAV and ground data sources, ground hyperspectral data offered an advantage over conventional multispectral data for LAI retrieval, but not a pronounced one, and the 16 m spatial resolution GF-1 WFV imagery could not meet the needs of field-scale crop monitoring. Given the need for both high-accuracy soybean LAI prediction and high working efficiency, UAV remote sensing is arguably the best option for acquiring agronomic information. With ever more remote sensing sources becoming available, UAV remote sensing can provide an important basis for field-scale precision crop management and more accurate information for precision agriculture research.
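Illustrative only (not the study's code): the sketch below computes four of the vegetation indices named above from red/NIR reflectance and fits a simple empirical LAI model, here a logarithmic RVI model mirroring the best ground-hyperspectral model. TVI is omitted because its triangular formulation also needs a green band, and all reflectance values are synthetic.

```python
import numpy as np

def vegetation_indices(red, nir, L=0.5):
    rvi = nir / red                                   # ratio vegetation index
    ndvi = (nir - red) / (nir + red)                  # normalized difference VI
    savi = (1 + L) * (nir - red) / (nir + red + L)    # soil-adjusted VI
    dvi = nir - red                                   # difference VI
    return rvi, ndvi, savi, dvi

rng = np.random.default_rng(3)
lai_true = rng.uniform(0.5, 6.0, 80)
red = 0.25 * np.exp(-0.4 * lai_true) + rng.normal(0, 0.005, 80)
nir = 0.20 + 0.05 * lai_true + rng.normal(0, 0.010, 80)

rvi = vegetation_indices(red, nir)[0]
a, b = np.polyfit(np.log(rvi), lai_true, 1)           # empirical model: LAI = a*ln(RVI) + b
pred = a * np.log(rvi) + b
r2 = 1 - np.sum((pred - lai_true) ** 2) / np.sum((lai_true - lai_true.mean()) ** 2)
print(f"R2 = {r2:.2f}, RMSE = {np.sqrt(np.mean((pred - lai_true) ** 2)):.2f}")
```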

14.
Deriving mangrove biophysical parameters cost-effectively, at a fine spatial scale and over relatively large areas, remains a significant challenge. This study aims to provide a comprehensive, integrated technical method to map mangrove landscape biophysical parameters (height, canopy area, canopy perimeter and volume) of two typical mangrove areas in China based on unmanned aerial vehicle (UAV) techniques. Initially, response surface methodology (RSM) was applied to find the optimal flight parameters for obtaining good-quality synthesized orthophoto composite images. Afterward, a digital surface model (DSM) and a dense photogrammetric point cloud were used to derive the mangrove parameters, and manual visual interpretation was applied for species discrimination and mangrove community canopy coverage. The results showed that (1) the most efficient combination of flight parameters for mangrove extraction is UAV vertical shooting at 30 m altitude with a 75% overlap ratio, which could cover a maximum mangrove investigation area of 0.51 ha during low tide within a day; (2) the integrated technical methods performed well in retrieving high-precision mangrove landscape parameters, taking the Dongwei and Daguansha mangrove areas as examples; and (3) transect analysis showed an inverted U-curve of height, canopy area, and volume from the seaward mangrove edge to the landward mangrove edge. Overall, a UAV system with high-resolution (8 cm pixel) images has the potential to enable satisfactory extraction of mangrove landscape parameters using multi-software processing. The study will help policy-makers, ecologists and environmentalists formulate and implement sustainable development programs in mangrove ecosystems.

15.
For three-dimensional (3D) structure determination of large macromolecular complexes, single-particle electron cryomicroscopy is considered the method of choice. Within this field, structure determination de novo, as opposed to refinement of known structures, still presents a major challenge, especially for macromolecules without point-group symmetry. This is primarily because of technical issues: one of these is poor image contrast, and another is the often low particle concentration and sample heterogeneity imposed by the practical limits of biochemical purification. In this work, we tested a state-of-the-art 4k × 4k charge-coupled device (CCD) detector (TVIPS TemCam-F415) to see whether or not it can contribute to improving the image features that are especially important for structure determination de novo. The present study is therefore focused on a comparison of film and CCD detector in the acquisition of images in the low-to-medium (approximately 10–25 Å) resolution range using a 200 kV electron microscope equipped with a field emission gun. For comparison, biological specimens and radiation-insensitive carbon layers were imaged under various conditions to test the image phase transmission, spatial signal-to-noise ratio, visual image quality and power-spectral signal decay for the complete image-processing chain. At all settings of the camera, the phase transmission and spectral signal-to-noise ratio were significantly better on CCD than on film in the low-to-medium resolution range. Thus, the number of particle images needed for initial structure determination is reduced and the overall quality of the initial computed 3D models is improved. However, at high resolution, film is still significantly better than the CCD camera: without binning of the CCD camera and at a magnification of 70 k×, film is better beyond 21 Å resolution. With 4-fold binning of the CCD camera and at very high magnification (>300 k×), film is still superior beyond 7 Å resolution.

16.
Thosea sinensis Walker (TSW) spreads rapidly and severely damages tea plants, so finding a reliable operational method for identifying TSW-damaged areas via remote sensing has been a focus of the research community. Such methods also make it possible to calculate precise pesticide applications and prevent subsequent spread of the pest. In this work, five-band images from a multispectral red-edge camera on an unmanned aerial vehicle (UAV) platform were acquired and used to monitor TSW in tea plantations. A comprehensive spectral selection strategy was proposed by combining minimum redundancy maximum relevance (mRMR) with the selected spectral features. Then, based on the selected spectral features, three classic machine learning algorithms, namely random forest (RF), support vector machine (SVM), and k-nearest neighbors (KNN), were used to construct pest-monitoring models, which were evaluated and compared. The results showed that the proposed strategy achieved good monitoring accuracy using only a few optimized features (2 or 4). For differentiating healthy and TSW-damaged areas (2-class model), all three models reached accuracies above 96%; the RF model used the fewest features, only SAVI and Band_red. For further discriminating pest incidence levels (3-class model), all three models reached accuracies above 80%, with the RF algorithm based on SAVI, Band_red, VARI_green, and Band_red_edge achieving the highest accuracy (OAA of 87% and Kappa of 0.79). Considering computational cost and model accuracy, this work recommends the RF model based on a few optimal feature combinations to monitor and grade the severity of TSW in tea plantations. According to the UAV remote sensing maps, the TSW infestation exhibited an aggregated distribution pattern, and the spatial information on occurrence and severity can guide precise control of the pest. The methods also provide a reference for monitoring other leaf-eating pests, improving plant protection management in tea plantations and helping to guarantee tea yield and quality.
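A hedged stand-in for the pipeline above: rank candidate spectral features (a simple mutual-information surrogate for mRMR is used here), keep the top few, and train a random forest to separate healthy, lightly damaged and heavily damaged classes. Feature names echo the abstract (SAVI, red band, VARI_green, red-edge band) but the data are synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 300
y = rng.integers(0, 3, n)                               # 0 healthy, 1 light, 2 heavy damage
features = {
    "SAVI":          0.60 - 0.15 * y + rng.normal(0, 0.05, n),
    "Band_red":      0.08 + 0.03 * y + rng.normal(0, 0.01, n),
    "VARI_green":    0.30 - 0.08 * y + rng.normal(0, 0.05, n),
    "Band_red_edge": 0.20 + 0.02 * y + rng.normal(0, 0.02, n),
    "noise":         rng.normal(0, 1, n),
}
names = list(features)
X = np.column_stack([features[k] for k in names])

scores = mutual_info_classif(X, y, random_state=0)
top = np.argsort(scores)[::-1][:4]                      # keep the 4 best-ranked features
print("selected:", [names[i] for i in top])

Xtr, Xte, ytr, yte = train_test_split(X[:, top], y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
pred = clf.predict(Xte)
print(f"OA = {accuracy_score(yte, pred):.2f}, kappa = {cohen_kappa_score(yte, pred):.2f}")
```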

17.
林娜, 徐涵秋, 何慧. 《生态学报》, 2013, 33(10): 2983-2991.
Changting County in Fujian Province was once among the counties most severely affected by soil erosion in the red-soil region of southern China; after more than 20 years of effort, it has become a model for soil and water conservation in China. Using remote sensing and landscape pattern analysis, land use change detection and landscape pattern analysis were carried out for the Hetian basin, the most severely eroded part of Changting County, based on remote sensing images from 1988, 1998, 2004, 2009 and 2011. The results show that land use in the study area changed greatly over these 23 years, the most prominent features being a rapid increase in forest land dominated by coniferous forest and a large decrease in exposed bare soil. Landscape analysis shows that the small forest patches newly added through erosion control are gradually coalescing into contiguous stands, while bare-soil patches have become more fragmented as their area has shrunk substantially. Overall, 23 years of soil and water conservation have clearly improved the ecological condition of the study area.

18.
The growth of the eye, unlike other parts of the body, is not ballistic. It is guided by visual feedback with the eventual aim being optimal focus of the retinal image or emmetropization. It has been shown in animal models that interference with the quality of the retinal image leads to a disruption to the normal growth pattern, resulting in the development of refractive errors and defocused retinal images. While it is clear that retinal images rich in pattern information are needed to control eye growth, it is unclear what particular aspect of image structure is relevant. Retinal images comprise a range of spatial frequencies at different absolute and relative contrasts and in different degrees of spatial alignment. Here we show, by using synthetic images, that it is not the local edge structure produced by relative spatial frequency alignments within an image but rather the spatial frequency composition per se that is used to regulate the growth of the eye. Furthermore, it is the absolute energy at high spatial frequencies regardless of the spectral slope that is most effective. Neither result would be expected from currently accepted ideas of how human observers judge the degree of image "blur" in a scene where both phase alignments and the relative energy distribution across spatial frequency (i.e., spectral slope) are important.

19.
We report the development of a multichannel microscopy for whole-slide multiplane, multispectral and phase imaging. We use trinocular heads to split the beam path into 6 independent channels and employ a camera array for parallel data acquisition, achieving a maximum data throughput of approximately 1 gigapixel per second. To perform single-frame rapid autofocusing, we place 2 near-infrared light-emitting diodes (LEDs) at the back focal plane of the condenser lens to illuminate the sample from 2 different incident angles. A hot mirror is used to direct the near-infrared light to an autofocusing camera. For multiplane whole-slide imaging (WSI), we acquire 6 different focal planes of a thick specimen simultaneously. For multispectral WSI, we relay the 6 independent image planes to the same focal position and simultaneously acquire information at 6 spectral bands. For whole-slide phase imaging, we acquire images at 3 focal positions simultaneously and use the transport-of-intensity equation to recover the phase information. We also provide an open-source design to further increase the number of channels from 6 to 15. The reported platform provides a simple solution for multiplexed fluorescence imaging and multimodal WSI. Acquiring an instant focal stack without z-scanning may also enable fast 3-dimensional dynamic tracking of various biological samples.

20.
To develop a noninvasive, early-detection method for skin cancers, the feasibility of multispectral image analysis was investigated. The three most frequently occurring skin cancer types, ten basal-cell carcinomas (BCCs), ten squamous-cell carcinomas (SCCs) and five malignant melanomas (MMs), were studied, along with ten normal moles. Images were acquired by a charge-coupled device camera using eight narrow-band filters ranging from 450 nm to 800 nm, at 50-nm intervals. To extract the main features of these tumors, principal components analysis (PCA) was performed, because it projects the multidimensional (here, eight-dimensional) data in the direction of maximum data variance. Then, the primary PCA components for red, green, and blue subset images were analyzed in terms of hue-saturation-intensity (HSI). Based on hue distributions, the BCCs and SCCs were differentiated from the MMs and normal moles. Texture information was used to further classify tumor types after the HSI analysis. The texture analysis, performed using a spatial gray-level co-occurrence matrix (SGCM), could separate MMs from normal moles. The BCCs and SCCs were further studied by Fisher's linear discriminant analysis, with the distribution described by a Gaussian mixture model. By this classification procedure, seven BCCs, eight SCCs, five MMs, and ten normal moles were correctly classified; three BCCs and two SCCs were inseparable. Thus, multispectral skin cancer image analysis has the potential to diagnose skin cancers.
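A minimal sketch of the first two analysis steps described above: PCA on an 8-band multispectral stack, followed by a hue view of the three visible bands. The lesion image is random data, and the band-to-RGB mapping and the HSI conversion (done via HSV here) are simplifying assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from matplotlib.colors import rgb_to_hsv

cube = np.random.rand(128, 128, 8)               # 8 narrow bands, 450-800 nm
flat = cube.reshape(-1, 8)

pca = PCA(n_components=3)
components = pca.fit_transform(flat).reshape(128, 128, 3)
print("explained variance:", np.round(pca.explained_variance_ratio_, 3))

rgb = cube[..., [5, 2, 0]]                       # assumed red/green/blue band indices
hsv = rgb_to_hsv(np.clip(rgb, 0, 1))
hue_hist, _ = np.histogram(hsv[..., 0], bins=36, range=(0, 1))
print("dominant hue bin:", hue_hist.argmax())    # hue distribution used to separate lesion types
```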
