Similar articles
20 similar articles found
1.
During the last decade, three-dimensional, digital models have become increasingly important in geosciences and in particular in palaeontological research. Although significant advances in hard- and software technology have facilitated the acquisition and creation of such models, the presentation of three-dimensional data is still greatly handicapped by the traditionally two-dimensional means of publication. The ability to integrate three-dimensional (3D) models, which can be interactively manipulated, into portable document format (PDF) documents not only considerably improves their accessibility, but also represents an innovative, but so far neglected, approach for the presentation and communication of digital data. This article introduces and illustrates a comprehensive workflow for the creation of 3D PDFs, incorporating different techniques and methodological steps, and using both commercial and freely available software resources. Advantages and disadvantages of each method are discussed, and are accompanied by selected examples of digital models. These examples encompass different methods of data acquisition (computed tomography, synchrotron radiation X-ray tomographic microscopy, photogrammetry) and span a wide range of sizes and taxonomic groups. To the best of the author’s knowledge, this article represents the first application of 3D PDF technology fully integrated into a scientific publication in palaeontology or even geosciences, and not restricted to supplementary material. It provides the reader with extended visual information and facilitates the dissemination of data. As both authors and readers benefit greatly from their usage, it is argued that 3D PDFs should become an accepted standard in palaeontological publications of three-dimensional models.  相似文献   

2.
3.
Purpose: To investigate the impact of the compressed sensing – sensitivity encoding (CS-SENSE) acceleration factor on the diagnostic quality of magnetic resonance images within a standard brain protocol. Methods: Three routine clinical neuroimaging sequences were chosen for this study because of their long acquisition times: T2-weighted turbo spin echo (TSE), fluid-attenuated inversion recovery (FLAIR), and 3D time of flight (TOF). Fully sampled reference scans and multiple prospectively 2x to 5x undersampled CS scans were acquired. Retrospectively, the undersampled scans were compared to the fully sampled scans and visually assessed for image quality and diagnostic quality by three independent radiologists. Results: Images obtained with CS-SENSE accelerated acquisition were of diagnostically acceptable quality at up to 3x acceleration for T2 TSE (average qualitative score 3.53 on a 4-point scale, with an acquisition time reduction of 64%), up to 2x for FLAIR (average qualitative score 3.27, with an acquisition time reduction of 43%), and up to 4x for the 3D TOF sequence (average qualitative score 3.13, with an acquisition time reduction of 73%). There were no substantial differences between the readers' diagnostic quality scores (p > 0.05). Conclusions: CS-SENSE accelerated T2 TSE, FLAIR, and 3D TOF sequences of the brain show image quality similar to that of conventional acquisitions with reduced acquisition time. CS-SENSE can moderately reduce scan time, providing many benefits without loss of image quality.
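A minimal arithmetic sketch (not from the paper) relating the CS-SENSE acceleration factor R to the quoted scan-time savings: the ideal reduction is 1 - 1/R, and the reductions reported in the abstract (43%, 64%, 73%) fall slightly below this because of fixed sequence overhead. All timings in the example are hypothetical.

```python
# Hypothetical illustration only: relates an undersampling (acceleration) factor R
# to the fractional scan-time reduction. Real reductions fall short of the ideal
# 1 - 1/R because of calibration scans and other fixed sequence overhead.

def ideal_reduction(r: float) -> float:
    """Ideal fractional time saving for acceleration factor r."""
    return 1.0 - 1.0 / r

def measured_reduction(t_full_s: float, t_accel_s: float) -> float:
    """Fractional time saving computed from measured scan durations (seconds)."""
    return (t_full_s - t_accel_s) / t_full_s

if __name__ == "__main__":
    # Acceleration factors and the reductions quoted in the abstract above.
    for r, quoted in [(2, 0.43), (3, 0.64), (4, 0.73)]:
        print(f"R={r}: ideal {ideal_reduction(r):.0%}, quoted {quoted:.0%}")
    # Example with hypothetical timings: a 4-minute scan accelerated to 90 s.
    print(f"measured: {measured_reduction(240, 90):.0%}")
```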

4.

Introduction

Although it is still at a very early stage compared with its mass spectrometry (MS) counterpart, proton nuclear magnetic resonance (NMR) lipidomics is worth investigating as an original and complementary approach to lipidomics. Dedicated sample preparation protocols and adapted data acquisition methods have to be developed to set up an NMR lipidomics workflow; in particular, the considerable overlap observed for lipid signals in 1D spectra may hamper its applicability.

Objectives

The study describes the development of a complete proton NMR lipidomics workflow for application to serum fingerprinting. It includes the assessment of fast 2D NMR strategies, which, besides reducing signal overlap by spreading the signals along a second dimension, offer compatibility with the high-throughput requirements of food quality characterization.

Method

The robustness of the developed sample preparation protocol is assessed in terms of repeatability and its ability to provide informative fingerprints; further, different NMR acquisition schemes, including classical 1D and fast 2D schemes based on non-uniform sampling or ultrafast acquisition, are evaluated and compared. Finally, as a proof of concept, the developed workflow is applied to characterize the disruption of lipid profiles in serum from pigs fed a β-agonist diet.

Results

Our results show that the workflow can efficiently discriminate sample groups based on their lipid profiles while using fast 2D NMR methods in an automated acquisition framework.

Conclusion

This work demonstrates the potential of fast multidimensional 1H NMR, combined with an appropriate sample preparation, for lipidomics fingerprinting, as well as its applicability to chemical food safety issues.

5.
Cells are 3D objects. Therefore, volume EM (vEM) is often crucial for correct interpretation of ultrastructural data. Today, scanning EM (SEM) methods such as focused ion beam (FIB)–SEM are frequently used for vEM analyses. While they allow automated data acquisition, precise targeting of volumes of interest within a large sample remains challenging. Here, we provide a workflow to target FIB-SEM acquisition of fluorescently labeled cells or subcellular structures with micrometer precision. The strategy relies on fluorescence preservation during sample preparation and targeted trimming guided by confocal maps of the fluorescence signal in the resin block. Laser branding is used to create landmarks on the block surface to position the FIB-SEM acquisition. Using this method, we acquired volumes of specific single cells within large tissues such as 3D cultures of mouse mammary gland organoids, tracheal terminal cells in Drosophila melanogaster larvae, and ovarian follicular cells in adult Drosophila, discovering ultrastructural details that could not be appreciated before.  相似文献   

6.
Cryo-electron microscopy (cryoEM) entails flash-freezing a thin layer of sample on a support and then visualizing the sample in its frozen hydrated state by transmission electron microscopy (TEM). This can be achieved with very low quantities of protein and in the buffer of choice, without the use of any stain, which is very useful for determining structure-function correlations of macromolecules. When combined with single-particle image processing, the technique has found widespread use for 3D structural determination of purified macromolecules. The protocol presented here explains how to perform cryoEM and examines the causes of the most commonly encountered problems for rational troubleshooting; following all these steps should lead to the acquisition of high-quality cryoEM images. The technique requires access to an electron microscope and to a vitrification device. Knowledge of 3D reconstruction concepts and software is also needed for computerized image processing. Importantly, high-quality results depend on finding the right purification conditions leading to a uniform population of structurally intact macromolecules. The ability of cryoEM to visualize macromolecules, combined with the versatility of single-particle image processing, has proven very successful for structural determination of large proteins and macromolecular machines in their near-native state, identification of their multiple components by 3D difference mapping, and creation of pseudo-atomic structures by docking of X-ray structures. The relentless development of cryoEM instrumentation and image processing techniques over the last 30 years has made it possible to generate de novo 3D reconstructions at atomic resolution.

7.
STORM is a recently developed super-resolution microscopy technique with up to 10 times better resolution than standard fluorescence microscopy techniques. However, because the image is acquired very differently from a conventional one, by building it up molecule by molecule, there are significant challenges for users trying to optimize their image acquisition. To aid this process and gain more insight into how STORM works, we present the preparation of three test samples and the methodology for acquiring and processing STORM super-resolution images with typical resolutions of 30-50 nm. By combining the test samples with the freely available rainSTORM processing software, it is possible to obtain a great deal of information about image quality and resolution. Using these metrics, it is then possible to optimize the imaging procedure, from the optics to sample preparation, dye choice, buffer conditions, and image acquisition settings. We also show examples of common problems that result in poor image quality, such as lateral drift, where the sample moves during image acquisition, and density-related problems that result in the "mislocalization" phenomenon.

8.
This paper presents a pruning method for artificial neural networks (ANNs) based on the 'Lempel-Ziv complexity' (LZC) measure. We call this method the 'silent pruning algorithm' (SPA). The term 'silent' is used in the sense that SPA prunes ANNs without causing much disturbance to network training. SPA prunes hidden units during the training process according to their ranks computed from LZC. LZC counts the number of unique patterns in the time sequence obtained from the output of a hidden unit; a smaller LZC value indicates a more redundant hidden unit. SPA resembles biological brains in that it encourages higher complexity during training. SPA is similar to, yet different from, existing pruning algorithms. The algorithm has been tested on a number of challenging benchmark problems in machine learning, including the cancer, diabetes, heart, card, iris, glass, thyroid, and hepatitis problems. We compared SPA with other pruning algorithms and found that it outperforms the 'random deletion algorithm' (RDA), which prunes hidden units randomly. Our experimental results show that SPA can simplify ANNs while preserving good generalization ability.
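A minimal sketch of the general idea behind complexity-based pruning, assuming hidden-unit outputs are binarized around their median before the Lempel-Ziv complexity is estimated; this illustrates the principle only, not the authors' exact SPA implementation, and the function names and demo data are hypothetical.

```python
import numpy as np

def lz_complexity(bits: str) -> int:
    """Lempel-Ziv complexity estimated as the number of distinct phrases
    in a left-to-right (LZ78-style) parsing of a binary string."""
    phrases, phrase = set(), ""
    for b in bits:
        phrase += b
        if phrase not in phrases:
            phrases.add(phrase)
            phrase = ""
    return len(phrases) + (1 if phrase else 0)

def rank_hidden_units(activations: np.ndarray) -> np.ndarray:
    """Rank hidden units by the complexity of their output time sequence.

    activations: array of shape (n_samples, n_hidden), e.g. hidden-unit outputs
    collected over the training set. Returns unit indices sorted from least to
    most complex; low-complexity (redundant) units are pruning candidates.
    """
    scores = []
    for j in range(activations.shape[1]):
        seq = activations[:, j]
        med = np.median(seq)
        # Binarize around the unit's median output to obtain a symbol sequence.
        bits = "".join("1" if v > med else "0" for v in seq)
        scores.append(lz_complexity(bits))
    return np.argsort(scores)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    redundant = np.tile(rng.random(4), 50)    # repetitive output -> low LZC
    informative = rng.random(200)             # irregular output -> high LZC
    acts = np.column_stack([redundant, informative])
    print(rank_hidden_units(acts))            # the repetitive unit should rank first
```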

9.
Vascular endothelial proteins have been analyzed using two-dimensional (2D) gel electrophoresis and subsequent mass spectrometry, with separate methods compared for the intervening sample preparation steps. Compact disc (CD) technology was found to be rapid, giving a high overall yield both with ordinary Coomassie staining and with Sypro Ruby staining. Combined with automatic in-gel digestion, CD technology has great capacity for large-scale protein analysis, although for limited sample numbers manual methods can give similar sequence coverage. In a test set of 48 samples, 45 proteins were identified using the CD preparation technique: 32 were identified with higher sequence coverage using the CD technique, 7 with higher coverage using ZipTips in a robotic workstation, and 5 with higher coverage using dried droplets of unpurified samples. In the course of these methodological comparisons, basic patterns for 116 endothelial proteins were defined, representing 297 separate protein spots on the 2D gels.

10.
A review is presented of publications over the last ten years on methods in physical anthropology for determining the age, sex, race, and stature of human skeletal material. Comparisons are made with the types of papers published in the previous ten years (1958-1968) in six categories: (1) visual examination of bones; (2) anthropometric measurements of bones; (3) anthropometric measurements with subsequent use of statistics in the form of discriminant function analyses; (4) time and sequence of eruption of the teeth; (5) X-ray examination of the internal structure of bone sections; and (6) microscopic examination of the internal structure of bone.

11.

Background

Current guidelines for evaluating cleft palate treatments are mostly based on two-dimensional (2D) evaluation, but the use of three-dimensional (3D) imaging methods to assess treatment outcome is steadily increasing.

Objective

To identify 3D imaging methods for quantitative assessment of soft tissue and skeletal morphology in patients with cleft lip and palate.

Data sources

Literature was searched using PubMed (1948–2012), EMBASE (1980–2012), Scopus (2004–2012), Web of Science (1945–2012), and the Cochrane Library. The last search was performed September 30, 2012. Reference lists were hand searched for potentially eligible studies. There was no language restriction.

Study selection

We included publications using 3D imaging techniques to assess facial soft tissue or skeletal morphology in patients older than 5 years with a cleft lip with or without cleft palate. We reviewed studies involving the facial region when at least 10 subjects in the sample had at least one cleft type. Only primary publications were included.

Data extraction

Independent extraction of data and quality assessments were performed by two observers.

Results

Five hundred full-text publications were retrieved; 144 met the inclusion criteria, of which 63 were high-quality studies. Because of differences in study designs, topics studied, patient characteristics, and outcome measures, only a systematic review, and not a meta-analysis, could be conducted. The main 3D techniques used in cleft lip and palate patients are CT, CBCT, MRI, stereophotogrammetry, and laser surface scanning. These techniques are mainly used for soft tissue analysis, evaluation of bone grafting, and assessment of changes in the craniofacial skeleton. Digital dental casts are used to evaluate treatment and changes over time.

Conclusion

Available evidence implies that 3D imaging methods can be used for documentation of CLP patients. No data are yet available showing that 3D methods are more informative than conventional 2D methods. Further research is warranted to clarify this.

Systematic review registration

International Prospective Register of Systematic Reviews, PROSPERO CRD42012002041

12.

Context

Technological advancements have led craniofacial researchers and clinicians into the era of three-dimensional digital imaging for quantitative evaluation of craniofacial growth and treatment outcomes.

Objective

To give an overview of soft-tissue based methods for quantitative longitudinal assessment of facial dimensions in children until six years of age and to assess the reliability of these methods in studies with good methodological quality.

Data Source

PubMed, EMBASE, Cochrane Library, Web of Science, Scopus and CINAHL were searched. A hand search was performed to check for additional relevant studies.

Study Selection

Primary publications on facial growth and treatment outcomes in children younger than six years of age were included.

Data Extraction

Data were extracted independently by two observers. A quality assessment instrument was used to determine methodological quality. Methods used in studies with good methodological quality were assessed for reliability, expressed as the magnitude of the measurement error and the correlation coefficient between repeated measurements.

Results

In total, 47 studies were included, describing 4 methods: 2D x-ray cephalometry; 2D photography; anthropometry; and 3D imaging techniques (surface laser scanning, stereophotogrammetry, and cone beam computed tomography). In general, the measurement error was below 1 mm and 1°, and correlation coefficients ranged from 0.65 to 1.0.

Conclusion

Various methods have been shown to be reliable. However, at present stereophotogrammetry seems to be the best 3D method for quantitative longitudinal assessment of facial dimensions in children until six years of age, owing to its millisecond-fast image capture, archival capabilities, high resolution, and lack of exposure to ionizing radiation.
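The reliability statistics mentioned above (measurement error and a correlation coefficient between repeated measurements) can be computed, for example, with Dahlberg's formula and Pearson's r. The abstract does not name the exact statistics used, so the sketch below is illustrative and its data are made up.

```python
import numpy as np

def dahlberg_error(first, second) -> float:
    """Dahlberg's method error, sqrt(sum(d^2) / (2n)), for duplicate measurements."""
    d = np.asarray(first, float) - np.asarray(second, float)
    return float(np.sqrt(np.sum(d ** 2) / (2 * d.size)))

def repeatability(first, second):
    """Measurement error and Pearson correlation between repeated measurements."""
    r = float(np.corrcoef(first, second)[0, 1])
    return dahlberg_error(first, second), r

if __name__ == "__main__":
    # Duplicate landmark distances in mm (illustrative values only).
    m1 = np.array([31.2, 28.7, 45.1, 33.4, 29.9])
    m2 = np.array([31.0, 29.1, 44.8, 33.6, 30.2])
    err, r = repeatability(m1, m2)
    print(f"Dahlberg error: {err:.2f} mm, Pearson r: {r:.3f}")
```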

13.
Sphingosine (SPH) comprises the backbone of sphingolipids and is known as a second messenger involved in the modulation of cell growth, differentiation, and apoptosis. The currently available methods for the quantification of SPH are, in part, complicated, time-consuming, insensitive, or unselective. Therefore, a fast and convenient methodology for the quantification of SPH and the biosynthetic intermediate sphinganine (SPA) was developed. The method is based on an HPLC separation coupled to electrospray ionization tandem mass spectrometry (MS/MS). Quantitation is achieved by the use of a constant concentration of a non-naturally occurring internal standard, 17-carbon chain SPH (C17-SPH), together with a calibration curve established by spiking different concentrations of naturally occurring sphingoid bases. SPH and SPA coeluted with C17-SPH, which allows an accurate correction of the analyte response. Interference of the SPH+2 isotope with SPA quantification was corrected by an experimentally determined factor. The limits of detection were 9 fmol for SPH and 21 fmol for SPA. The overall coefficients of variation were 8% and 13% for SPH and SPA, respectively. The developed HPLC-tandem mass spectrometry methodology, with an analysis time of 3.5 min, simple sample preparation, and automated data analysis, allows high-throughput quantification of sphingoid bases from crude lipid extracts and is a valuable tool for studies of cellular sphingolipid metabolism and signaling.  相似文献   
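A minimal numeric sketch of the quantitation scheme described above: peak areas are normalized to the constant C17-SPH internal standard, converted to amounts via a calibration curve from spiked sphingoid bases, and the SPA signal is corrected for the interfering SPH M+2 isotope contribution. All peak areas, slopes, and the correction factor are hypothetical placeholders, not values from the paper.

```python
def concentration(analyte_area: float, is_area: float, slope: float, intercept: float = 0.0) -> float:
    """Amount from the analyte/internal-standard response ratio using a linear
    calibration curve: ratio = slope * amount + intercept."""
    ratio = analyte_area / is_area
    return (ratio - intercept) / slope

def corrected_spa_area(spa_area: float, sph_area: float, isotope_factor: float) -> float:
    """Subtract the SPH M+2 isotope contribution registered in the SPA channel;
    isotope_factor is determined experimentally (hypothetical value here)."""
    return spa_area - isotope_factor * sph_area

if __name__ == "__main__":
    c17_area = 1.0e6                     # internal standard (constant spike)
    sph_area, spa_raw = 4.2e5, 6.0e4     # hypothetical peak areas
    spa_area = corrected_spa_area(spa_raw, sph_area, isotope_factor=0.02)
    # Hypothetical calibration slopes (response ratio per pmol on column).
    print("SPH (pmol):", round(concentration(sph_area, c17_area, slope=0.015), 2))
    print("SPA (pmol):", round(concentration(spa_area, c17_area, slope=0.014), 2))
```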

14.
The purpose of this paper is to present a new approach to measuring the 3D green biomass of urban forest and to verify its precision. In this study, the 3D green biomass was derived from a remote sensing inversion model in which each standing tree was first scanned with a terrestrial laser scanner to capture its point cloud data; the point cloud was then opened in a digital mapping data acquisition system to obtain elevations in an independent coordinate system; and finally the individual volumes obtained were associated with SPOT5 (Système Probatoire d'Observation de la Terre) remote sensing imagery using tools such as SPSS (Statistical Product and Service Solutions), GIS (Geographic Information System), RS (Remote Sensing), and spatial analysis software (FARO SCENE and Geomagic Studio 11). The results showed that the 3D green biomass of Beijing's urban forest was 399.1295 million m3, of which conifers accounted for 28.7871 million m3 and broad-leaved trees for 370.3424 million m3. The accuracy of the 3D green biomass estimate was over 85% when compared with values from 235 field samples collected in a typical sampling design, suggesting that the precision of the 3D green biomass derived from SPOT5 imagery meets requirements. This represents an improvement over the conventional method because it not only provides a basis for evaluating Beijing's urban greening indices, but also introduces a new technique for assessing 3D green biomass in other cities.
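One common way to turn a single tree's laser-scanned point cloud into a volume is to take the volume of its convex hull; this is only an assumed stand-in for the per-tree volume extraction described above, not necessarily the procedure used in the study.

```python
import numpy as np
from scipy.spatial import ConvexHull

def crown_volume_m3(points_xyz: np.ndarray) -> float:
    """Approximate the 3D green volume of a single tree crown as the volume of
    the convex hull of its laser-scanned point cloud (coordinates in metres).
    A convex hull overestimates concave crowns; voxel- or alpha-shape-based
    volumes are common alternatives."""
    return float(ConvexHull(points_xyz).volume)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Synthetic crown: points inside an ellipsoid roughly 4 m wide and 6 m tall.
    pts = rng.uniform(-1, 1, size=(5000, 3))
    pts = pts[np.sum(pts ** 2, axis=1) <= 1.0] * np.array([2.0, 2.0, 3.0])
    print(f"approximate crown volume: {crown_volume_m3(pts):.1f} m^3")
```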

15.
Previous work has shown that IgG rheumatoid factors (RF) bind to the C gamma 2-C gamma 3 interface region of human IgG in the same area that binds staphylococcal protein A (SPA). Group A, C, and G strains of Streptococci possess Fc receptors that bind to IgG but not to fragments containing only the C gamma 2 or C gamma 3 domains. This work describes the location of the binding site on human IgG for the isolated Fc receptor from the T15 strain of a Group A streptococcus and its relationship to the site that binds SPA and the IgG RF. The isolated T15 Fc receptor (T15), with a molecular mass of 29.5 kD, inhibited the binding of IgG RF to IgG. The binding of T15 itself to IgG was strongly inhibited by SPA (42.0 kD) and its monovalent fragment D (7 kD). Human IgG fragments consisting of the C gamma 3 domains did not inhibit the binding of T15 to IgG, whereas those with both domains were effective inhibitors. T15 did not bind to rabbit IgG fragments consisting of either the C gamma 2 or C gamma 3 domains, but did bind to those with both domains. An IgG3 myeloma protein was a poor inhibitor and has been shown to bind poorly to the IgG RF. Most IgG3 myeloma proteins did not bind to SPA. The substitution of Arg and Phe for His 435 and Tyr 436 is responsible for the poor binding of IgG3 to SPA and to the IgG RF. Chemical modification of His or Tyr on IgG reduced its ability to inhibit the binding of T15 to IgG. Reversal of the chemical modifications with hydroxylamine resulted in near-complete restoration of inhibitory capacity. Taken together with the known positions in space of the His and Tyr residues in the C gamma 2-C gamma 3 interface region, this information verified that both His 435 and Tyr 436, and possibly His 310 and 433, are involved. These residues are also involved in binding SPA and the IgG RF. These data therefore indicate that the T15 Group A Streptococcal Fc receptor binds to the same location on the Fc of IgG as SPA and the IgG RF. The biologic relevance of these similarities between bacterial cell wall Fc receptors and IgG RF is not yet apparent, but it suggests that RF could bear the internal image of these bacterial structures.

16.
Acquiring information on neural structures at the whole-brain level is vital for systematically exploring the mechanisms and principles of brain function and dysfunction. Most methods for whole-brain imaging, while capable of capturing the complete morphology of neurons, usually involve complex sample preparation and several days of image acquisition. The whole process, including optical clearing or resin embedding, is too time-consuming for a quick survey of the distribution of specific neural circuits in the whole brain. Here, we develop a high-throughput light-sheet tomography platform (HLTP), which requires minimal sample preparation. This method does not require optical clearing for block-face light-sheet imaging. After fixation with paraformaldehyde, an aligned 3-dimensional image dataset of a whole mouse brain can be obtained within 5 hours at a voxel size of 1.30 × 1.30 × 0.92 μm. HLTP could be a very efficient tool for quick exploration and visualization of the brain-wide distribution of specific neurons or neural circuits.

17.
In this protocol, we describe a 3D imaging technique known as 'volume electron microscopy' or 'focused ion beam scanning electron microscopy (FIB/SEM)' applied to biological tissues. A scanning electron microscope equipped with a focused gallium ion beam, used to sequentially mill away the sample surface, and a backscattered electron (BSE) detector, used to image the milled surfaces, generates a large series of images that can be combined into a 3D rendered image of stained and embedded biological tissue. Structural information over volumes of tens of thousands of cubic micrometers can be obtained, revealing complex microanatomy with subcellular resolution. Methods are presented for tissue processing, for the enhancement of contrast with osmium tetroxide/potassium ferricyanide, for BSE imaging, for the preparation of and platinum deposition over a selected site in the embedded tissue block, and for sequential data collection with ion beam milling; all this takes approximately 90 h. The imaging conditions, procedures for alternating milling and data acquisition, and techniques for processing and partitioning the 3D data set are also described; these processes take approximately 30 h. The protocol is illustrated by application to developing chick cornea, in which cells organize collagen fibril bundles into complex, multilamellar structures essential for transparency in the mature connective tissue matrix. The techniques described could have wide application in a range of fields, including pathology, developmental biology, microstructural anatomy, and regenerative medicine.

18.
We attempted to outline the requirements for biomedical applications of SIMS microscopy. Sample preparation methodology should preserve both the structural and the chemical integrity of the tissue. Furthermore, it is often necessary to correlate ionic and light microscope images, which implies a common methodological approach to sample preparation for both microscopes. The use of low or high mass resolution depends on the elements studied and their concentrations. To improve the acquisition and processing of images, digital imaging systems have to be designed that allow superimposition of ionic and optical images. However, the images do not directly reflect element concentration; a relative quantitative approach is possible by measuring secondary ion beam intensity. Using an internal reference element (carbon) and standard curves, the results are expressed in micrograms/mg of tissue. Despite their limited lateral resolution (0.5 microns), current SIMS microscopes are well suited to biomedical problems posed by modes of action and drug localization in human pathology. SIMS microscopy should also provide a new tool for metabolic radiotherapy by facilitating dose evaluation. The advent of high lateral resolution SIMS imaging (less than 0.1 microns) should open up new fields in biomedical investigation.

19.
This review of the present status of limnological studies in India is based on Indian publications in Hydrobiologia since the inception of the journal. About 325 Indian papers have appeared up to 1979; nearly two-thirds of them appeared in the last decade. Ponds occupy first place among the freshwater bodies studied, and there are only a few papers on rivers and streams. Studies of the flora generally emphasize taxonomic and morphological aspects. Among the studies on fauna, several relate to different aspects of Crustacea and to the taxonomy of rotifers. The fish studies published in Hydrobiologia do not reflect the trends of progress in India. Work on production, energy flow, and the functioning of ecosystems is limited. About thirty papers relate to estuarine and marine environments. Suggestions for future work are made in the light of the present studies and the gaps identified.

20.
We introduce a fast, error-free tracking method applicable to sequences of two- and three-dimensional images. The core idea is to use Quadtree (resp. Octree) data structures to represent the spatial discretization of an image in two (resp. three) spatial dimensions. This representation allows regions that can be faithfully described at a coarse level to be merged into large computational cells, significantly reducing the total number of degrees of freedom that are processed without compromising accuracy. This encoding is particularly effective for algorithms based on moving fronts, since adaptive refinement provides a natural means to focus processing resources on information near the moving front. In this paper, we take an existing contour-based tracker and reformulate it for Quad-/Octree data structures. The relevant mathematical assumptions and derivations are presented for this purpose. We then demonstrate that, on standard biomedical image sequences, a speed-up of 5X is easily achieved in 2D and of about 10X in 3D.
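A minimal sketch of the Quadtree representation described above: the image is recursively subdivided, subdivision stops where a block is nearly homogeneous, and each leaf stores a single value, so large uniform regions collapse into single cells. It illustrates the data structure only, not the contour-tracker reformulation; the homogeneity tolerance is an assumption.

```python
import numpy as np

class QuadTreeNode:
    """A square image block: either a leaf storing one value, or four children."""
    def __init__(self, x, y, size, value=None, children=None):
        self.x, self.y, self.size = x, y, size
        self.value, self.children = value, children

def build_quadtree(img: np.ndarray, x=0, y=0, size=None, tol=5.0) -> QuadTreeNode:
    """Merge nearly homogeneous regions into single leaves (max - min <= tol)."""
    if size is None:
        size = img.shape[0]                      # assumes a square, power-of-two image
    block = img[y:y + size, x:x + size]
    if size == 1 or block.max() - block.min() <= tol:
        return QuadTreeNode(x, y, size, value=float(block.mean()))
    h = size // 2
    children = [build_quadtree(img, x + dx, y + dy, h, tol)
                for dy in (0, h) for dx in (0, h)]
    return QuadTreeNode(x, y, size, children=children)

def count_leaves(node: QuadTreeNode) -> int:
    if node.children is None:
        return 1
    return sum(count_leaves(c) for c in node.children)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:40, 20:40] = 255                      # one bright square on a flat background
    tree = build_quadtree(img)
    print(count_leaves(tree), "leaves instead of", img.size, "pixels")
```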
