Similar Articles
A total of 20 similar articles were retrieved (search time: 31 ms).
1.
I Schmid, P Schmid, J V Giorgi. Cytometry, 1988, 9(6):533-538
We describe a simple, reproducible, and generally applicable method to assess the performance of log amplifiers by using a fluorescent sample that provides multiple peaks of different intensities. The channel differences between multiple peaks are used to evaluate the logarithmic behavior of the fluorescence signal amplifier on the flow cytometer. A calibration curve can be created to correct the channel numbers for deviations from true logarithmic behavior and then convert data into relative linear intensities. By using these linear fluorescent intensities, we compared the capacity of different antisera against HIV-1 (human immunodeficiency virus type 1) peptides to inhibit the binding of HIV-1 to CEM, a CD4-positive T-cell line. A wide range of applications for this calibration procedure can be envisioned and the method is valuable for monitoring instrument performance over time.
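A minimal sketch (not the authors' code) of the calibration idea described above: fit the measured peak channels of a multi-intensity bead sample against the known relative intensities, then use the fitted curve to convert any channel number into a relative linear intensity. The peak channels, relative intensities, and the choice of a quadratic fit are hypothetical placeholders.

```python
# Hedged sketch: map measured peak channels of a multi-peak fluorescent sample to the
# known relative intensities, then use that calibration to convert channel numbers
# into relative linear intensities. All numbers are hypothetical placeholders.
import numpy as np

peak_channels = np.array([52.0, 118.0, 183.0, 247.0, 310.0])        # measured peak positions
relative_intensity = np.array([1.0, 10.0, 100.0, 1000.0, 10000.0])  # vendor relative intensities

# For a perfect log amplifier, channel vs. log10(intensity) is a straight line.
# A low-order polynomial fit absorbs deviations from true logarithmic behavior
# into the calibration curve.
coef = np.polyfit(peak_channels, np.log10(relative_intensity), deg=2)

def channel_to_linear(channel):
    """Convert a channel number into a relative linear intensity via the calibration."""
    return 10.0 ** np.polyval(coef, channel)

# Example: convert a whole histogram axis at once.
channels = np.arange(0, 256)
linear_axis = channel_to_linear(channels)
```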

2.
We describe a simple and rapid method for determining the linearity of a flow cytometer amplification system. The method is based on a fundamental characteristic of linear amplifiers: the difference between two amplified signals increases linearly with increasing amplifier gain. Two populations of beads or cells, differing slightly in fluorescence intensity, are analyzed by the flow cytometer at increasing photomultiplier tube high-voltage settings. The distribution of the populations' mean difference versus mean position is a straight line intersecting the origin for linear amplifiers. Although some types of nonlinearities cannot be detected with this technique, deviations from linearity indicate nonlinear components in the flow cytometer amplification system. The correlation coefficient is used to quantify the degree of nonlinearity. We also describe a method for amplifier nonlinearity compensation.
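A hedged sketch of the linearity test as described: for two populations measured at several PMT voltages, the difference of their means is regressed against their mean position; a linear amplification system should give a straight line through the origin, and the correlation coefficient quantifies the departure from linearity. All channel means below are hypothetical.

```python
# Hedged sketch (not the paper's code) of the linearity test: the separation of a dim
# and a bright population should grow in proportion to their mean position as the
# PMT voltage is raised, giving a straight line through the origin for linear amplifiers.
import numpy as np

# Hypothetical channel means of the two populations at each PMT setting.
mean_dim    = np.array([40.0,  80.0, 160.0, 320.0, 640.0])
mean_bright = np.array([50.0, 101.0, 199.0, 402.0, 805.0])

position   = (mean_dim + mean_bright) / 2.0   # mean position of the pair
difference = mean_bright - mean_dim           # separation of the pair

# Correlation coefficient quantifies how well the points follow a straight line.
r = np.corrcoef(position, difference)[0, 1]

# Slope/intercept of the best-fit line; an intercept near zero (relative to the
# differences) is expected for a linear amplification system.
slope, intercept = np.polyfit(position, difference, deg=1)
print(f"r = {r:.4f}, slope = {slope:.3f}, intercept = {intercept:.2f}")
```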

3.
A robust analysis of comparative genomic microarray data is critical for meaningful genomic comparison studies. In this paper, we compare our method (implemented in a new software tool, GENCOM, freely available at ) with three commonly used analysis methods: GACK (freely available at ); an empirical cut-off of a twofold difference between the fluorescence intensities after LOWESS normalization; and the same cut-off after AVERAGE normalization, in which each fluorescence intensity is divided by the average fluorescence intensity of the entire data set. Each method was tested using data sets from real experiments with prior knowledge of conserved and divergent genes. GENCOM and GACK were superior when a high proportion of genes were divergent. GENCOM was the most suitable method for the data set in which the relationship between the fluorescence intensities was not linear. GENCOM proved robust in the analysis of all the data sets tested.

4.
Summary. Principles and techniques are discussed for measuring, with high topological resolution, local emission in fluorescing objects using photographic negatives. Determination of fluorescence intensities is only possible when an unequivocal relation between the original local fluorescence emission intensities of the object and the transmittances or densities recorded in the microfluorophotograph is known. This relation is formulated in the theoretical part. From this relation it can be concluded that the recorded intensities can be measured optimally when the optical density values produced by the fluorescence emission fall in the range of the linear portion of the Hurter and Driffield (H-D) curve. To obtain this situation, a uniform, low-level pre-exposure of the film emulsion to (white) light is carried out prior to the actual fluorescence emission exposure. This pre-exposure elevates the signal exposure to the linear (steeper) part of the H-D curve. Inhomogeneity of the excitation beam in the object field, or differences in film emulsion response to the light exposure, will result in erroneous optical densities recorded in the photographic negative. Correction for such artifacts can be obtained by adding a low concentration of fluorophore to the mounting medium of the microscopic preparation. The overall fluorescent background produced in this way enables calibration of local fluorescence intensities in different parts of one fluorophotographic negative, and also of the intensities in different negatives taken from one microscopic preparation. The validity of this approach was checked by comparing data obtained from several photographic negatives of the same quinacrine-stained metaphase, taken with different exposure times to imitate fluctuations in excitation illumination, after conversion of the scanning data into emission intensity values with an algorithm based on the proposed theoretical relation. In another experiment, fluorescence emission intensities of Feulgen-stained chromosomes that had been measured with a cytofluorometer were compared with results obtained by conversion of the scanning data measured in the fluorophotographic negatives of the same metaphases. Both types of experiment confirmed the applicability of the procedure described. Supported by grant nr 28-169 of the Praeventiefonds, The Hague.
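As an illustration only, a minimal sketch of converting recorded optical densities back to relative emission intensities, assuming the standard straight-line form of the Hurter and Driffield curve (D = γ·log10 E + D0) over the working range and treating the uniform pre-exposure as an additive exposure to be subtracted; γ and all density values are hypothetical, and this is not the paper's algorithm.

```python
# Hedged sketch: invert the assumed linear H-D relation D = gamma*log10(E) + D0 to get
# relative exposure from density, then subtract the exposure equivalent of the uniform
# pre-exposure. gamma, D0, and all densities are hypothetical placeholders.
import numpy as np

gamma = 0.8   # slope of the linear portion of the H-D curve (assumed)
D0 = 0.1      # density offset (base fog plus constant term, assumed)

def density_to_exposure(density):
    """Relative exposure from optical density on the linear part of the H-D curve."""
    return 10.0 ** ((density - D0) / gamma)

# Measured densities in the object and in the uniformly pre-exposed background.
D_spot = np.array([0.85, 1.10, 1.42])
D_background = 0.55

# The pre-exposure adds to the fluorescence exposure, so subtract its exposure
# equivalent after inverting the H-D relation.
E_signal = density_to_exposure(D_spot) - density_to_exposure(D_background)
print(E_signal)  # relative local emission intensities
```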

5.
Deconvolution enhances contrast in fluorescence microscopy images, especially in low-contrast, high-background wide-field microscope images, improving characterization of features within the sample. Deconvolution can also be combined with other imaging modalities, such as confocal microscopy, and most software programs seek to improve resolution as well as contrast. Quantitative image analyses require instrument calibration and, with deconvolution, necessitate that this process itself preserves the relative quantitative relationships between fluorescence intensities. To ensure that the quantitative nature of the data remains unaltered, deconvolution algorithms need to be tested thoroughly. This study investigated whether the deconvolution algorithms in AutoQuant X3 preserve relative quantitative intensity data. InSpeck Green calibration microspheres were prepared for imaging, z-stacks were collected using a wide-field microscope, and the images were deconvolved using the iterative deconvolution algorithms with default settings. Afterwards, the mean intensities and volumes of microspheres in the original and the deconvolved images were measured. Deconvolved data sets showed higher average microsphere intensities and smaller volumes than the original wide-field data sets. In original and deconvolved data sets, intensity means showed linear relationships with the relative microsphere intensities given by the manufacturer. Importantly, upon normalization, the trend lines were found to have similar slopes. In original and deconvolved images, the volumes of the microspheres were quite uniform for all relative microsphere intensities. We were able to show that AutoQuant X3 deconvolution software data are quantitative. In general, the protocol presented can be used to calibrate any fluorescence microscope or image processing and analysis procedure.
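A hedged sketch (an assumed workflow, not the study's scripts) of the quantitation check described above: regress the measured mean microsphere intensities against the manufacturer's relative intensities for both the wide-field and the deconvolved data, normalize, and compare the slopes. All values are hypothetical placeholders.

```python
# Hedged sketch of the slope comparison: normalize each data set to its brightest
# microsphere, fit measured vs. vendor relative intensity, and compare slopes.
# Similar slopes after normalization indicate that deconvolution preserved the
# relative quantitative relationships. All numbers are hypothetical.
import numpy as np

relative = np.array([0.003, 0.01, 0.03, 0.1, 0.3, 1.0])   # vendor relative intensities
mean_widefield   = np.array([14.0, 45.0, 140.0, 460.0, 1390.0, 4600.0])
mean_deconvolved = np.array([55.0, 180.0, 560.0, 1830.0, 5600.0, 18400.0])

def normalized_slope(measured):
    """Normalize to the brightest microsphere, then fit measured vs. relative intensity."""
    norm = measured / measured.max()
    slope, intercept = np.polyfit(relative, norm, deg=1)
    return slope

print(normalized_slope(mean_widefield), normalized_slope(mean_deconvolved))
```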

6.
The fluorescence intensity of picoliter-volume samples was quantitated by taking samples and standards into a single siliconized capillary, fixing the capillary under the objective of a microscope-fluorometer, and defining an effective “fluorescence chamber” within the capillary by placing an imaging diaphragm in the emission path. Samples were then moved, by pneumatic control with an air syringe, within the capillary to this “fluorescence chamber.” A diaphragm in the excitation path limited the volume of sample excited. Fluorescence from all samples was thus directly determined under identical optical conditions. The coefficient of variation of replicate measurements was 3%; carry-over from sample to sample within the capillary was less than 2%. A 3-pl sample containing 0.3 amol of sodium fluorescein (about 200,000 molecules) could be discriminated from the background; fluorescence intensity was linear with concentration for three to four orders of magnitude. Fluorescence intensities of NADH and an ammonia-o-phthalaldehyde-thiol adduct were also determined. Using this “fluorescence chamber” allowed straightforward scaling down of a fluorescence assay for urea in 20-pl samples, lowering the limit of detection to 10 fmol, three orders of magnitude below a previously reported microscale assay. This technique is applicable to many fluorescence assays used in studies of cell physiology, and should allow routine measurement of metabolites in individual cells or of enzymes in individual subcellular organelles.

7.
This paper describes the use of fluorescent silica nanospheres as luminescent signal amplifiers in biological assays based on digital counting of individual particles instead of measuring averaged fluorescence intensity. We recently described a simple method to prepare highly fluorescent mono-dispersed silica nanospheres that avoids microemulsion formulations and the use of surfactants. A modification of the Stöber method was used successfully to prepare fluorescent silica spheres with the inorganic dye dichlorotris(1,10-phenanthroline)ruthenium(II) hydrate encapsulated during the condensation of tetraethylorthosilicate in ethanol and aqueous dye mixtures. Modifications in the ammonia and water content in the reaction mixture resulted in mono-dispersed silica spheres of 65, 440 and 800 nm in diameter. The dye-encapsulating particles emit intense red luminescence when excited at 460 nm. We observed an increased photostability and longer fluorescence lifetime in our particles, which we attributed to increased protection of the encapsulated dye molecules from molecular oxygen. The newly prepared fluorescent silica particles were easily modified using trialkoxysilane reagents for covalent conjugation of anti-HER2/neu. We demonstrated the utility of the fluorescent nanospheres to detect the cancer marker HER2/neu in a glass-slide-based assay. The assay was shown to be simple yet highly sensitive, with a limit of detection approaching 1 ng/mL and a linear range between 1 ng/mL and 10 µg/mL of HER2/neu.

8.
MOTIVATION: Assessment of gene expression on spotted microarrays is based on measurement of the fluorescence intensity emitted by hybridized spots. Unfortunately, quantifying fluorescence intensity from hybridized spots does not always correctly reflect gene expression level. Low gene expression levels produce low fluorescence intensities, which tend to be confounded with the local background, while high gene expression levels produce high fluorescence intensities, which rapidly reach the saturation level. Most algorithms that combine data acquired at different voltages of the photomultiplier tube (PMT) assume that a change in scanner setting transforms the intensity measurements by a multiplicative constant. METHODS AND RESULTS: In this paper we introduce a new model of spot foreground intensity which integrates a PMT-voltage-independent scanner optical bias. This new model is used to implement a "Combining Multiple Scan using a Two-way ANOVA" (CMS2A) method, which is based on a maximum likelihood estimation of the scanner optical bias. After the scanner bias has been computed, the coefficients of the two-way ANOVA model are used to correct the intensities of spots saturated at high PMT voltage by using their counterpart values at lower PMT voltages. The method was compared to state-of-the-art multiple-scan algorithms using data generated from the MAQC study. CMS2A produced fold-changes that were highly correlated with qPCR fold-changes. Because the scanner optical bias is accurately estimated within CMS2A, the method also avoids fold-change compression bias, whatever the value of this optical bias.
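The following is a deliberately simplified, hedged sketch of the general idea of combining scans taken at several PMT voltages under a model with a voltage-independent additive optical bias and a per-voltage multiplicative gain; it uses alternating least squares rather than the CMS2A maximum-likelihood estimator, and the function name and structure are assumptions, not the paper's implementation.

```python
# Hedged toy model, NOT the CMS2A estimator: assume I[spot, scan] ~ b + g[scan] * x[spot],
# with b a PMT-independent optical bias and g[scan] a per-voltage gain. Fit b, g, x by
# alternating least squares on the non-saturated readings, then replace saturated
# high-PMT readings by their model prediction.
import numpy as np

def combine_scans(I, saturated, n_iter=100):
    """I: (n_spots, n_scans) intensities of one slide scanned at increasing PMT voltages.
    saturated: boolean mask of clipped readings (assumes each spot has at least one
    unsaturated reading). Returns corrected intensities, bias, gains, spot signals."""
    n_spots, n_scans = I.shape
    b = 0.0
    g = np.ones(n_scans)
    x = np.nanmean(np.where(saturated, np.nan, I), axis=1)   # crude starting signals
    for _ in range(n_iter):
        for s in range(n_spots):                 # spot signals given bias and gains
            ok = ~saturated[s]
            x[s] = np.sum(g[ok] * (I[s, ok] - b)) / np.sum(g[ok] ** 2)
        for v in range(n_scans):                 # per-scan gains given bias and signals
            ok = ~saturated[:, v]
            g[v] = np.sum(x[ok] * (I[ok, v] - b)) / np.sum(x[ok] ** 2)
        scale = g[0]                             # fix the scale: lowest-PMT gain = 1
        g, x = g / scale, x * scale
        ok = ~saturated                          # common, PMT-independent optical bias
        b = np.mean((I - np.outer(x, g))[ok])
    corrected = np.where(saturated, b + np.outer(x, g), I)   # fill clipped readings
    return corrected, b, g, x

# Usage (hypothetical): saturated = (I >= 65535) for a 16-bit scanner.
```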

9.
J A Dvorak, S M Banks. Cytometry, 1989, 10(6):811-813
We describe an algorithm, V_out = Integer{[(2^12 − 1)/(2^(12λ) − 1)] (V_in^λ − 1)} + 1, λ > 0, based upon Box-Cox transformations as an alternative to nonlinear electronic amplifiers to expand or compress high- or low-amplitude flow cytometer-derived signals. If the indexing parameter λ < 1, input channels in the high-amplitude input range are compressed in the output range, as occurs when an electronic logarithmic amplifier is used. However, if λ > 1, input channels in the low-amplitude input range are compressed in the output range, as occurs when an electronic power amplifier is used. Our modified Box-Cox transform can be implemented either during data collection or off-line for the transformation of previously collected raw data. The transform is the equivalent of an infinite class of nonlinear amplifiers. As the transform is implemented in software, it does not suffer from many of the disadvantages of nonlinear electronic amplifiers.
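A small sketch implementing the channel transform as reconstructed above for 12-bit (4096-channel) data; the function name and the example channel values are illustrative.

```python
# Sketch of the Box-Cox-style channel transform as reconstructed above:
#   V_out = Integer( (2**12 - 1) / (2**(12*lam) - 1) * (V_in**lam - 1) ) + 1,  lam > 0.
# lam < 1 compresses high channels (log-amplifier-like); lam > 1 compresses low
# channels (power-amplifier-like).
import numpy as np

def box_cox_channel(v_in, lam, bits=12):
    """Remap linear channel numbers (1 .. 2**bits) with the Box-Cox-style transform."""
    if lam <= 0:
        raise ValueError("lambda must be greater than 0")
    v_in = np.asarray(v_in, dtype=float)
    scale = (2 ** bits - 1) / (2 ** (bits * lam) - 1)
    return np.floor(scale * (v_in ** lam - 1)).astype(int) + 1

channels = np.array([1, 16, 256, 1024, 4096])
print(box_cox_channel(channels, lam=0.5))   # compress high-amplitude channels
print(box_cox_channel(channels, lam=2.0))   # compress low-amplitude channels
```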

10.
Affymetrix high-density oligonucleotide arrays are a tool that can simultaneously measure the abundance of thousands of mRNA sequences in biological samples. In order to allow direct array-to-array comparisons, normalization is a necessity. When deciding on an appropriate normalization procedure, a couple of questions need to be addressed, e.g., on which level should the normalization be performed: on the level of feature intensities or on the level of expression indexes? Should all features/expression indexes be used, or can we choose a subset of features likely to be unregulated? Another question is how to actually perform the normalization: normalize using the overall mean intensity or use a smooth normalization curve? Most of the currently used normalization methods are linear; e.g., the normalization method implemented in the Affymetrix software GeneChip is based on the overall mean intensity. However, along with alternative methods of summarizing feature intensities into an expression index, nonlinear methods have recently started to appear. For many of these alternative methods, the natural choice is to normalize on the level of feature intensities, either using all feature intensities or only perfect-match intensities. In this report, a nonlinear normalization procedure aimed at normalizing feature intensities is proposed.
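As a hedged illustration (not the report's exact procedure), the sketch below normalizes the feature intensities of one array to a baseline array with a smooth, intensity-dependent correction estimated from binned medians of the log-ratio; all names and the simulated intensities are assumptions.

```python
# Hedged sketch of a nonlinear (smooth-curve) normalization of feature intensities:
# estimate an intensity-dependent correction from binned medians of the log-ratio
# against a baseline array, then flatten it. Names and simulated data are assumptions.
import numpy as np

def smooth_normalize(x, baseline, n_bins=50):
    """Return x normalized to baseline using a binned-median smooth correction curve."""
    lx, lb = np.log2(x), np.log2(baseline)
    a = (lx + lb) / 2.0                       # average log intensity
    m = lx - lb                               # log ratio to be flattened
    edges = np.quantile(a, np.linspace(0, 1, n_bins + 1))
    centers, medians = [], []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sel = (a >= lo) & (a <= hi)
        if sel.any():
            centers.append(a[sel].mean())
            medians.append(np.median(m[sel]))
    correction = np.interp(a, centers, medians)   # smooth, intensity-dependent curve
    return 2.0 ** (lx - correction)

# Usage with hypothetical perfect-match feature intensities of two arrays:
rng = np.random.default_rng(0)
baseline = rng.lognormal(mean=6.0, sigma=1.0, size=10000)
array = baseline * (1.2 + 0.05 * np.log(baseline)) * rng.lognormal(0, 0.1, size=10000)
normalized = smooth_normalize(array, baseline)
```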

11.
The influence of diffusion potentials across different phospholipid membranes on the fluorescence intensity of 1-anilinonaphthalene-8-sulphonate (ANS) was studied. With liposomes or chloroform spheres covered with a monolayer of egg lecithin, no specific effects were found. With liposomes of soy-bean phospholipids, generation of a diffusion potential leads to an enhancement or decrease, depending on the direction of the potential, of the intensity of ANS fluorescence. This effect is mainly due to a change in quantum yield of the bound ANS. These data support a mechanism according to which ANS molecules are pushed into or pulled out of the membrane by a potential, but not an electrophoretic one in which the potential causes movement of ANS across the membrane.

12.
13.
We propose a simple approach, the multiplicative background correction, to solve a perplexing problem in spotted microarray data analysis: correcting the foreground intensities for the background noise, especially for spots whose genes are weakly expressed or not expressed at all. The conventional approach, the additive background correction, directly subtracts the background intensities from the foreground intensities. When the foreground intensities only marginally dominate the background intensities, the additive background correction provides unreliable estimates of the differential gene expression levels and usually produces M-A plots with fishtails or fans. Unreliable additive background correction makes it preferable to ignore the background noise, which may increase the number of false positives. Based on the more realistic multiplicative assumption instead of the conventional additive assumption, we propose to logarithmically transform the intensity readings before the background correction, with the logarithmic transformation symmetrizing the skewed intensity readings. This approach not only precludes the fishtails and fans in the M-A plots, but also provides highly reproducible background-corrected intensities for both strongly and weakly expressed genes. The superiority of the multiplicative background correction to the additive one, as well as to no background correction, is justified using publicly available self-hybridization datasets.
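A hedged illustration of the contrast described above, under one reading of the multiplicative assumption (observed foreground ≈ true signal × background), so that the corrected log-intensity is log F − log B rather than log(F − B); this is an interpretation for illustration, not a transcription of the paper's formulas, and the spot intensities are hypothetical.

```python
# Hedged illustration: additive correction uses F - B (which can go negative or near
# zero for weak spots), while a multiplicative reading corrects on the log scale as
# log(F) - log(B), which stays defined for weak spots. All intensities are hypothetical.
import numpy as np

def ma_additive(Fr, Br, Fg, Bg):
    """M and A from additively background-corrected red/green intensities."""
    r, g = Fr - Br, Fg - Bg                   # can be <= 0 for weak spots
    with np.errstate(divide="ignore", invalid="ignore"):
        M = np.log2(r) - np.log2(g)
        A = 0.5 * (np.log2(r) + np.log2(g))
    return M, A

def ma_multiplicative(Fr, Br, Fg, Bg):
    """M and A from log-scale (multiplicative) background correction."""
    r = np.log2(Fr) - np.log2(Br)
    g = np.log2(Fg) - np.log2(Bg)
    return r - g, 0.5 * (r + g)

# Hypothetical weakly expressed spots: foreground barely above background.
Fr, Br = np.array([210.0, 190.0, 520.0]), np.array([200.0, 195.0, 180.0])
Fg, Bg = np.array([205.0, 230.0, 240.0]), np.array([190.0, 210.0, 185.0])
print(ma_additive(Fr, Br, Fg, Bg))        # unstable or undefined values
print(ma_multiplicative(Fr, Br, Fg, Bg))  # finite, better behaved
```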

14.
MOTIVATION: Because of the high cost of sequencing, the bulk of gene discovery is performed using anonymous cDNA microarrays. Though the clones on such arrays are easier and cheaper to construct and utilize than unigene and oligonucleotide arrays, they are there in proportion to their corresponding gene expression activity in the tissue being examined. The associated redundancy will be there in any pool of possibly interesting differentially expressed clones identified in a microarray experiment for subsequent sequencing and investigation. An a posteriori sampling strategy is proposed to enhance gene discovery by reducing the impact of the redundancy in the identified pool. RESULTS: The proposed strategy exploits the fact that individual genes that are highly expressed in a tissue are more likely to be present as a number of spots in an anonymous library and, as a direct consequence, are also likely to give higher fluorescence intensity responses when present in a probe in a cDNA microarray experiment. Consequently, spots that respond with low intensities will have a lower redundancy and so should be sequenced in preference to those with the highest intensities. The proposed method, which formalizes how the fluorescence intensity of a spot should be assessed, is validated using actual microarray data, where the sequences of all the clones in the identified pool had been previously determined. For such validations, the concept of a repeat plot is introduced. It is also utilized to visualize and examine different measures for the characterization of fluorescence intensity. In addition, as confirmatory evidence, sequencing from the lowest to the highest intensities in a pool, with all the sequences known, is compared graphically with their random sequencing. The results establish that, in general, the opportunity for gene discovery is enhanced by avoiding the pooling of different biological libraries (because their construction will have involved different hybridization episodes) and concentrating on the clones with lower fluorescence intensities.

15.
The increase in fluorescence upon interaction with several fluorescent dyes was found to depend on the base composition of DNA. 4',6-Diamidino-2-phenylindole-2 HCl and Hoechst 33258, which bind to AT base pairs, show a logarithmic relation. This relation is linear when DNAs interact with mithramycin, chromomycin A3, and olivomycin, which bind to GC base pairs. Deviations from these relationships were observed for T2 DNA, containing hydroxymethylcytosine, and for 2C DNA, containing hydroxymethyluracil. On the basis of these data, a simple technique is proposed for the determination of base composition. The presence of abnormal bases can be monitored by the use of given fluorophores. Fluorescence intensities were not modified upon linearization of the covalently closed circular plasmid pBR322. Denaturation of lambda DNA was accompanied by a decrease of fluorescence when complexed with the five dyes tested.

16.
A scanning pattern photobleaching method for the analysis of lateral transport is described and discussed. Fluorescence bleaching with a localized pattern allows for the concurrent analysis of motions over two very different characteristic distances: ξ0^-1, the repeat distance of the pattern, and W, the linear dimension of the illuminated region. The former motion is deduced from the decay of the modulation amplitude (of period ξ0^-1) of fluorescence scans with the attenuated pattern, the latter from the recovery of the average fluorescence intensity. Such analysis should prove useful for the study of samples with a wide range of diffusion coefficients, and for the separation of effects arising from lateral diffusion and association dynamics. Theoretical analyses are presented for three related problems: (a) the effect of pattern localization on the decay of the modulation amplitude, (b) the effect of the pattern modulation on the recovery of the average local fluorescence intensity, and (c) the effect of a limited diffusion space (with linear dimensions of only a few pattern periods) on the decay of the modulation amplitude.
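As a hedged aside, the standard diffusive relaxation times below (not the paper's exact expressions) show why the two readouts probe very different distance scales for a lateral diffusion coefficient D:

```latex
% Illustrative standard results for lateral diffusion with coefficient D:
% the modulation amplitude of a bleached pattern of repeat distance \xi_0^{-1}
% decays much faster than the average fluorescence over a region of size W recovers.
\begin{align}
  A(t) &\approx A(0)\,e^{-(2\pi\xi_0)^2 D t},
  & \tau_{\text{pattern}} &\sim \frac{1}{(2\pi\xi_0)^2 D},\\
  & & \tau_{\text{region}} &\sim \frac{W^2}{4D}
  \quad\Rightarrow\quad
  \frac{\tau_{\text{region}}}{\tau_{\text{pattern}}} \sim (2\pi\xi_0 W)^2 \gg 1 .
\end{align}
```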

17.

Background  

A common feature of microarray experiments is the occurrence of missing gene expression data. These missing values occur for a variety of reasons, in particular because of the filtering of poor-quality spots and the removal of undefined values when a logarithmic transformation is applied to negative background-corrected intensities. The efficiency and power of an analysis can be substantially reduced by an incomplete matrix of gene intensities. Additionally, most statistical methods require a complete intensity matrix. Furthermore, biases may be introduced into analyses through missing information on some genes. Thus, methods for appropriately replacing (imputing) missing data and/or weighting poor-quality spots are required.

18.
19.
We introduce and analyse a simple probabilistic model of genome evolution. It is based on three fundamental evolutionary events: gene loss, duplication, and accumulated change. This is motivated by previous works which consisted in fitting the available genomic data into what are called paralog distributions. The formalism is described by an infinite system of linear equations. We show that this system generates a semigroup of linear operators on the space l^1. We prove that the size distribution of paralogous gene families in a genome converges to equilibrium as time goes to infinity. Moreover, we show that when the probabilities of gene removal and duplication are close to each other, the resulting distribution is close to the logarithmic distribution. Some empirical results for yeast genomes are presented.
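For reference, a hedged note giving only the standard definition of the logarithmic (log-series) distribution mentioned above, with parameter 0 < p < 1; the paper's precise limit theorem is not reproduced here:

```latex
% Logarithmic (log-series) distribution with parameter 0 < p < 1: the standard
% definition of the family-size law referred to above.
\begin{equation}
  P(k) \;=\; \frac{-1}{\ln(1-p)}\,\frac{p^{k}}{k}, \qquad k = 1, 2, 3, \ldots
\end{equation}
```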

20.
The course of foveal dark adaptation was studied as a function of the intensity and duration of preexposure. Four intensities (11,300, 5,650, 1,130, and 565 mL.) and four durations (300, 150, 30, and 15 seconds) were used in all combinations of intensity and duration. The threshold-measuring instrument was a monocular Hecht-Shlaer adaptometer and the threshold measurements were recorded in log micromicrolamberts. There were two subjects and each went through the complete series of intensities and durations five times. The five logarithmic values obtained for each threshold were converted into a geometric mean and these means were the data used in the analysis of the results. The chief results were as follows:
1. For each subject the final steady threshold value was in the region of 7.0 log µµL.
2. As the intensity, or duration, or both, were increased, the initial foveal dark adaptation threshold rose, the slope of the curve decreased, and the time to reach a final steady threshold value increased.
3. For those values of preexposure intensity and time for which the product I × t is a constant, it was found that for the two higher intensities and two longer durations, and also for the two lower intensities and two shorter durations, the dark adaptation curves were the same. For other values of I × t = C the curves were generally not the same.
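A tiny hedged sketch of the averaging step described above: the five replicate thresholds recorded in log micromicrolamberts are averaged and converted back, which amounts to a geometric mean of the linear thresholds; the values are hypothetical.

```python
# Hedged sketch: averaging thresholds recorded on a log scale and converting back
# is equivalent to taking the geometric mean of the linear values. Values are hypothetical.
import numpy as np

log_thresholds = np.array([7.02, 6.95, 7.10, 7.05, 6.98])   # log10 micromicrolamberts
geometric_mean = 10.0 ** log_thresholds.mean()               # micromicrolamberts
print(geometric_mean)
```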
