Similar Articles
20 similar articles found (search time: 31 ms)
1.
2.
In protein crystallography, much time and effort are often required to trace an initial model from an interpretable electron density map and to refine it until it best agrees with the crystallographic data. Here, we present a method to build and refine a protein model automatically and without user intervention, starting from diffraction data extending to resolution higher than 2.3 Å and reasonable estimates of crystallographic phases. The method is based on an iterative procedure that describes the electron density map as a set of unconnected atoms and then searches for protein-like patterns. Automatic pattern recognition (model building) combined with refinement allows a structural model to be obtained reliably within a few CPU hours. We demonstrate the power of the method with examples of a few recently solved structures.

3.
The Guide to the Expression of Uncertainty in Measurement (usually referred to as the GUM) provides the basic framework for evaluating uncertainty in measurement. The GUM, however, does not always provide clearly identifiable procedures suitable for medical laboratory applications, particularly when internal quality control (IQC) is used to derive most of the uncertainty estimates. The GUM modelling approach requires advanced mathematical skills for many of its procedures, but Monte Carlo simulation (MCS) can be used as an alternative for many medical laboratory applications. In particular, calculations for determining how uncertainties in the input quantities to a functional relationship propagate through to the output can be accomplished using a readily available spreadsheet such as Microsoft Excel. The MCS procedure uses algorithmically generated pseudo-random numbers which are then forced to follow a prescribed probability distribution. When IQC data provide the uncertainty estimates, the normal (Gaussian) distribution is generally considered appropriate, but MCS is by no means restricted to this particular case. With input variations simulated by random numbers, the functional relationship then provides the corresponding variations in the output in a manner which also yields its probability distribution. The MCS procedure thus provides output uncertainty estimates without the need for the partial derivatives associated with GUM modelling. The aim of this article is to demonstrate the ease with which Microsoft Excel (or a similar spreadsheet) can be used to provide an uncertainty estimate for measurands derived through a functional relationship. In addition, we also consider the relatively common situation where an empirically derived formula includes one or more 'constants', each of which has an empirically derived numerical value. Such empirically derived 'constants' must also have associated uncertainties which propagate through the functional relationship and contribute to the combined standard uncertainty of the measurand.
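The spreadsheet procedure described above translates directly into a few lines of code. The sketch below is a generic Monte Carlo propagation, not the article's Excel workbook; the measurand y = (a - b)/c and all numeric values are hypothetical.

```python
import random
import statistics

def mcs_uncertainty(f, inputs, n=100_000, seed=1):
    """Monte Carlo propagation of uncertainty: sample each input from a
    normal distribution (mean, standard uncertainty), push the samples
    through the functional relationship f, and summarise the output."""
    rng = random.Random(seed)
    outputs = [f(*(rng.gauss(m, u) for m, u in inputs)) for _ in range(n)]
    return statistics.fmean(outputs), statistics.stdev(outputs)

# Hypothetical measurand y = (a - b) / c with IQC-derived standard
# uncertainties attached to each input quantity.
y, u_y = mcs_uncertainty(lambda a, b, c: (a - b) / c,
                         [(10.0, 0.2), (2.0, 0.1), (4.0, 0.05)])
```

For this nearly linear example the result agrees with the GUM law of propagation (u_y ≈ 0.061); the same loop applies unchanged when the distribution or the formula makes the analytic route awkward.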

4.
The course of foveal dark adaptation was studied as a function of the intensity and duration of preexposure. Four intensities (11,300, 5,650, 1,130, and 565 mL) and four durations (300, 150, 30, and 15 seconds) were used in all combinations of intensity and duration. The threshold-measuring instrument was a monocular Hecht-Shlaer adaptometer and the threshold measurements were recorded in log micromicrolamberts. There were two subjects and each went through the complete series of intensities and durations five times. The five logarithmic values obtained for each threshold were converted into a geometric mean, and these means were the data used in the analysis of the results. The chief results were as follows: (1) For each subject the final steady threshold value was in the region of 7.0 log µµL. (2) As the intensity, the duration, or both were increased, the initial foveal dark adaptation threshold rose, the slope of the curve decreased, and the time to reach a final steady threshold value increased. (3) For those values of preexposure intensity and time for which the product I × t is constant, the dark adaptation curves were the same for the two higher intensities with the two longer durations, and also for the two lower intensities with the two shorter durations. For other values of I × t = C the curves were generally not the same.

5.
Electron diffraction patterns of two-dimensional crystals of light-harvesting chlorophyll a/b-protein complex (LHC-II) from photosynthetic membranes of pea chloroplasts, tilted at different angles up to 60°, were collected to 3.2 Å resolution at -125°C. The reflection intensities were merged into a three-dimensional data set. The Friedel R-factor and the merging R-factor were 21.8 and 27.6%, respectively. Specimen flatness and crystal size were critical for recording electron diffraction patterns from crystals at high tilts. The principal sources of experimental error were attributed to limitations of the number of unit cells contributing to an electron diffraction pattern, and to the critical electron dose. The distribution of strong diffraction spots indicated that the three-dimensional structure of LHC-II is less regular than that of other known membrane proteins and is not dominated by a particular feature of secondary structure.

6.
The steady-state kinetics of enzymes in tissues, cells, and concentrated lysates can be characterized using high-resolution nuclear magnetic resonance spectroscopy; this is possible because almost invariably there are differences in the spectra of substrates and products of a reaction, and these spectra are obtainable even from optically opaque samples. We used 1H spin-echo NMR spectroscopy to study the hydrolysis of alpha-L-glutamyl-L-alanine by cytosolic peptidases of lysed human erythrocytes. Nonlinear regression of the integrated Michaelis-Menten expression onto the progress-curve data directly yielded estimates of Vmax and Km for the hydrolase; a procedure for analyzing progress curves in this manner was adapted and compared with a commonly used procedure which employs the Newton-Raphson algorithm. We also performed a sensitivity analysis of the integrated Michaelis-Menten expression; this yielded equations that indicate under what conditions estimates of Km and Vmax are most sensitive to variations in experimental observables. Specifically, we showed that the most accurate estimates of the steady-state parameters from analysis of progress curves are obtained when the initial substrate concentration is much greater than Km. Furthermore, estimates of these parameters obtained by such an analysis are most sensitive to data obtained when the reaction is 60-80% complete, having started with the highest practicable initial substrate concentration.
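A minimal version of the progress-curve analysis can be sketched with the integrated Michaelis-Menten equation written as t(S). The grid search below is a crude stand-in for the Newton-Raphson or regression step used in such studies; all concentrations are synthetic, with S0 = 10·Km chosen in line with the sensitivity result that a high initial substrate concentration gives the most accurate estimates.

```python
import math

def mm_time(s, s0, km, vmax):
    """Time for substrate to fall from s0 to s, from the integrated
    Michaelis-Menten equation: Vmax*t = (S0 - S) + Km*ln(S0/S)."""
    return ((s0 - s) + km * math.log(s0 / s)) / vmax

# Synthetic noise-free progress curve with S0 = 10*Km.
km_true, vmax_true, s0 = 0.5, 2.0, 5.0
data = [(s, mm_time(s, s0, km_true, vmax_true))
        for s in [4.5, 4.0, 3.0, 2.0, 1.5, 1.0, 0.5, 0.2, 0.1]]

def fit_progress_curve(data, s0):
    """Grid-search least squares on t(S); a real analysis would use a
    proper nonlinear optimiser, but the objective is the same."""
    best = None
    for km in [i / 100 for i in range(10, 101)]:
        for vmax in [i / 100 for i in range(100, 301)]:
            sse = sum((mm_time(s, s0, km, vmax) - t) ** 2 for s, t in data)
            if best is None or sse < best[0]:
                best = (sse, km, vmax)
    return best[1], best[2]

km_hat, vmax_hat = fit_progress_curve(data, s0)
```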

7.
In this paper, fluorescent microarray images and various analysis techniques are described to improve the microarray data acquisition process. Signal intensities produced by rarely expressed genes are initially correctly detected, but they are often lost in background correction, log transformation, or ratio calculations. Our analyses indicate that a simple correlation between the mean and median signal intensities may be the best way to eliminate inaccurate microarray signals. Unlike traditional quality control methods, this mean-median correlation retains low intensity signals while eliminating inaccurate ones. With larger amounts of microarray data being generated, it becomes increasingly difficult to analyze data on a visual basis. Our method allows for the automatic quantitative determination of accurate and reliable signals, which can then be used for normalization. We found that a mean to median correlation of 85% or higher not only retains more data than current methods, but the retained data is more accurate than traditional thresholds or common spot flagging algorithms. We have also found that by using pin microtapping and microvibrations, we can control spot quality independent of initial PCR volume.
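The abstract leaves the exact form of the mean-median criterion open; the sketch below applies one plausible reading, per-spot agreement between mean and median pixel intensity, to show why such a statistic separates artefacts from genuinely dim spots. The 0.85 threshold echoes the figure quoted above, but the ratio formulation is an assumption.

```python
import statistics

def spot_passes(pixels, threshold=0.85):
    """Accept a spot when its mean and median pixel intensities agree
    to within `threshold` (ratio of the smaller to the larger).  A
    bright-speck artefact inflates the mean far more than the median,
    so discordant spots are rejected while genuinely low-intensity
    spots are retained."""
    mean = statistics.fmean(pixels)
    median = statistics.median(pixels)
    if mean <= 0 or median <= 0:
        return False
    return min(mean, median) / max(mean, median) >= threshold

clean = [100, 102, 98, 101, 99, 100]      # uniform low-ish signal: kept
speckled = [100, 102, 98, 101, 99, 5000]  # one bright-spot artefact: flagged
```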

8.
A modification of a method of Gardner, which employs Fourier-transform techniques, is used to obtain initial estimates for the number of terms and the values of the parameters for data represented by a sum of exponential terms. New experimental methods have increased both the amount and the accuracy of data from radiopharmaceutical experiments. This in turn allows one to devise specific numerical methods that exploit the better data. The inherent difficulties of fitting exponentials to data, which is an ill-posed problem, cannot be overcome by any method. However, we show that the present accuracy of Fourier methods may be extended by our numerical methods applied to the improved data sets. In many cases the method yields accurate estimates for the parameters; these estimates are then used as initial estimates for a nonlinear least-squares analysis of the problem.
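Gardner's Fourier-transform method needs more scaffolding than fits here, but the older "curve peeling" technique produces the same kind of initial estimates for a sum of two exponentials and shows the structure of the problem: fit the slow term where the fast one has decayed, subtract, and fit again. All data below are synthetic, and the technique shown is a named substitute, not Gardner's method.

```python
import math

def linfit(xs, ys):
    """Ordinary least-squares line fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def peel_two_exponentials(ts, ys, tail=10, head=10):
    """Initial estimates for y(t) = a1*exp(-l1*t) + a2*exp(-l2*t) with
    l1 >> l2: fit log(y) on the tail for the slow term, subtract it,
    then fit the early residuals for the fast term."""
    s, i = linfit(ts[-tail:], [math.log(y) for y in ys[-tail:]])
    a2, l2 = math.exp(i), -s
    resid = [(t, y - a2 * math.exp(-l2 * t)) for t, y in zip(ts, ys)]
    early = [(t, r) for t, r in resid if r > 1e-9][:head]
    s, i = linfit([t for t, _ in early], [math.log(r) for _, r in early])
    return math.exp(i), -s, a2, l2

ts = [i * 0.1 for i in range(60)]
ys = [5.0 * math.exp(-3.0 * t) + 2.0 * math.exp(-0.4 * t) for t in ts]
a1, l1, a2, l2 = peel_two_exponentials(ts, ys)
```

As in the article, these values would then seed a nonlinear least-squares refinement rather than be reported as final.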

9.

Background

There has been increasing interest in measuring under-five mortality as a health indicator and as a critical measure of human development. In countries with complete vital registration systems that capture all births and deaths, under-five mortality can be directly calculated. In the absence of a complete vital registration system, however, child mortality must be estimated using surveys that ask women to report the births and deaths of their children. Two survey methods exist for capturing this information: summary birth histories and complete birth histories. A summary birth history requires a minimum of only two questions: how many live births has each mother had and how many of them have survived. Indirect methods are then applied using the information from these two questions and the age of the mother to estimate under-five mortality going back in time prior to the survey. Estimates generated from complete birth histories are viewed as the most accurate when surveys are required to estimate under-five mortality, especially for the most recent time periods. However, it is much more costly and labor intensive to collect these detailed data, especially for the purpose of generating small area estimates. As a result, there is a demand for improvement of the methods employing summary birth history data to produce more accurate as well as subnational estimates of child mortality.

Methods and Findings

We used data from 166 Demographic and Health Surveys (DHS) to develop new empirically based methods of estimating under-five mortality using children ever born and children dead data. We then validated them using both in- and out-of-sample analyses. We developed a range of methods on the basis of three dimensions of the problem: (1) approximating the average length of exposure to mortality from a mother's set of children using either maternal age or time since first birth; (2) using cohort and period measures of the fraction of children ever born that are dead; and (3) capturing country and regional variation in the age pattern of fertility and mortality. We focused on improving estimates in the most recent time periods prior to a survey, where the traditional indirect methods fail. In addition, all of our methods incorporated uncertainty. Validated against under-five estimates generated from complete birth histories, our methods outperformed the standard indirect method by an average of 43.7% (95% confidence interval [CI] 41.2–45.2). In the 5 y prior to the survey, the new methods resulted in a 53.3% (95% CI 51.3–55.2) improvement. To illustrate the value of this method for local area estimation, we applied our new methods to an analysis of summary birth histories in the 1990, 2000, and 2005 Mexican censuses, generating subnational estimates of under-five mortality for each of 233 jurisdictions.

Conclusions

The new methods significantly improve the estimation of under-five mortality using summary birth history data. In areas without vital registration data, summary birth histories can provide accurate estimates of child mortality. Because only two questions are required of a female respondent to generate these data, they can easily be included in existing survey programs as well as routine censuses of the population. With the wider application of these methods to census data, countries now have the means to generate estimates for subnational areas and population subgroups, important for measuring and addressing health inequalities and developing local policy to improve child survival. Please see later in the article for the Editors' Summary.
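The core of any summary birth history method is the conversion of the fraction dead among children ever born into an under-five mortality estimate. The Brass-type sketch below shows only that skeleton: the multiplier k is a placeholder, not one of the paper's empirically derived coefficients, and the survey tallies are invented.

```python
def q5_from_summary(children_ever_born, children_dead, k=1.05):
    """Indirect estimate from the two summary birth history questions:
    the fraction dead among children ever born, scaled by a multiplier
    k that adjusts for the children's average length of exposure to
    mortality.  The k value here is a hypothetical placeholder."""
    ceb = sum(children_ever_born)
    cd = sum(children_dead)
    fraction_dead = cd / ceb
    return k * fraction_dead

# Hypothetical tallies for five mothers in one maternal age group:
q5 = q5_from_summary([3, 2, 4, 1, 2], [0, 1, 1, 0, 0])
```

Real indirect methods choose k from model life tables and fertility schedules, which is exactly the step the article's empirical methods improve.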

10.
Membrane proteins arranged as two-dimensional crystals in the lipid environment provide close-to-physiological structural information, which is essential for understanding the molecular mechanisms of protein function. Previously, X-ray diffraction from individual two-dimensional crystals did not represent a suitable investigational tool because of radiation damage. The recent availability of ultrashort pulses from X-ray free-electron lasers (XFELs) has now provided a means to outrun the damage. Here, we report on measurements performed at the Linac Coherent Light Source XFEL on bacteriorhodopsin two-dimensional crystals mounted on a solid support and kept at room temperature. By merging data from about a dozen single crystal diffraction images, we unambiguously identified the diffraction peaks to a resolution of 7 Å, thus improving the observable resolution with respect to that achievable from a single pattern alone. This indicates that a larger dataset will allow for reliable quantification of peak intensities, and in turn a corresponding increase in the resolution. The presented results pave the way for further XFEL studies on two-dimensional crystals, which may include pump–probe experiments at subpicosecond time resolution.

11.
12.
Løkkeborg, Svein; Fernö, Anders; Jørgensen, Terje. Hydrobiologia (2002) 483(1–3): 259–264.
Ultrasonic telemetry using stationary positioning systems allows several fish to be tracked simultaneously, but systems that are incapable of sampling multiple frequencies simultaneously can record data from only one transmitter (individual) at a time. Tracking several individuals simultaneously thus results in longer intervals between successive position fixes for each fish. This deficiency leads to loss of detail in the tracking data collected, and may be expected to cause loss of accuracy in estimates of the swimming speeds and movement patterns of the fish tracked. Even systems that track fish on multiple frequencies are not capable of continuous tracking due to technical issues. We determined the swimming speed, area occupied, activity rhythm and movement pattern of cod (Gadus morhua) using a stationary single-channel positioning system, and analysed how estimates of these behavioural parameters were affected by the interval between successive position fixes. Single fish were tracked at a time, and position fixes were eliminated at regular intervals in the original data to generate new data sets, as if they had been collected in the course of tracking several fish (2–16). In comparison with the complete set, these data sets gave 30–70% decreases in estimates of swimming speed depending on the number of fish supposedly being tracked. These results were similar for two individuals of different size and activity level, indicating that they can be employed as correction factors to partly compensate for underestimates of swimming speed when several fish are tracked simultaneously. Tracking 'several' fish only slightly affected the estimates of area occupied (1–15%). The diurnal activity rhythm was also similar between the data sets, whereas details in search pattern were not seen when several fish were tracked simultaneously.
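The effect of sharing one receiver channel among several fish can be reproduced with a synthetic track: thinning the position fixes makes each straight-line segment cut across the real path, so travelled distance, and hence speed, is underestimated. The sinusoidal track below is invented for illustration, not the cod data.

```python
import math

def speed_estimate(track, step=1, dt=1.0):
    """Mean swimming speed from position fixes taken every `step`
    samples; larger steps mimic sharing a single-channel receiver
    among several fish, lengthening the interval between fixes."""
    pts = track[::step]
    dist = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
    return dist / (dt * step * (len(pts) - 1))

# Synthetic sinuous track, one fix per time unit (arbitrary units).
track = [(t, 5.0 * math.sin(t)) for t in range(61)]
v_full = speed_estimate(track, step=1)  # all fixes used (one fish tracked)
v_thin = speed_estimate(track, step=4)  # every 4th fix (channel shared)
```

The thinned estimate comes out roughly half the full-resolution one here, the same order of underestimate as the 30–70% decreases reported above.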

13.
Spatial capture–recapture (SCR) analysis is now used routinely to inform wildlife management and conservation decisions. It is therefore imperative that we understand the implications of and can diagnose common SCR model misspecifications, as flawed inferences could propagate to policy and interventions. The detection function of an SCR model describes how an individual's detections are distributed in space. Despite the detection function's central role in SCR, little is known about the robustness of SCR-derived abundance estimates and home range size estimates to misspecifications. Here, we set out to (a) determine whether abundance estimates are robust to a wider range of misspecifications of the detection function than previously explored, (b) quantify the sensitivity of home range size estimates to the choice of detection function, and (c) evaluate commonly used Bayesian p-values for detecting such misspecifications. We simulated SCR data using different circular detection functions to emulate a wide range of space use patterns. We then fit Bayesian SCR models with three detection functions (half-normal, exponential, and half-normal plateau) to each simulated data set. While abundance estimates were very robust, estimates of home range size were sensitive to misspecifications of the detection function. When misspecified, SCR models with the half-normal plateau and exponential detection functions produced the most and least reliable home range size estimates, respectively. Misspecifications with the strongest impact on parameter estimates were easily detected by Bayesian p-values. Practitioners using SCR exclusively for density estimation are unlikely to be impacted by misspecifications of the detection function. However, the choice of detection function can have substantial consequences for the reliability of inferences about space use. Although Bayesian p-values can aid the diagnosis of detection function misspecification under certain conditions, we urge the development of additional custom goodness-of-fit diagnostics for Bayesian SCR models to identify a wider range of model misspecifications.
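The sensitivity of home-range size to the detection function can be illustrated numerically: with the same spatial scale parameter sigma, the radius containing 95% of expected detections differs by nearly a factor of two between half-normal and exponential kernels. The ring-integration sketch below is generic, not the paper's Bayesian machinery, and the sigma value is arbitrary.

```python
import math

def hr95_radius(detfn, rmax=50.0, n=20000):
    """Radius of the circle containing 95% of expected detections for
    a circular detection function detfn(r), found by integrating
    detfn(r) * 2*pi*r over thin rings of width dr."""
    dr = rmax / n
    masses = [detfn((i + 0.5) * dr) * 2.0 * math.pi * (i + 0.5) * dr * dr
              for i in range(n)]
    target, acc = 0.95 * sum(masses), 0.0
    for i, m in enumerate(masses):
        acc += m
        if acc >= target:
            return (i + 1) * dr
    return rmax

sigma = 2.0  # same spatial scale parameter for both kernels
r_halfnormal = hr95_radius(lambda r: math.exp(-r * r / (2 * sigma ** 2)))
r_exponential = hr95_radius(lambda r: math.exp(-r / sigma))
```

Both kernels can fit the same detection counts about equally well while implying very different space use, which is why density estimates stay robust but home-range inferences do not.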

14.
Reliable population estimates are critical to implement effective management strategies. The Hawai’i Island spinner dolphin (Stenella longirostris) is a genetically distinct stock that displays a rigid daily behavioural pattern, foraging offshore at night and resting in sheltered bays during the day. Consequently, they are exposed to frequent human interactions and disturbance. We estimated population parameters of this spinner dolphin stock using a systematic sampling design and capture–recapture models. From September 2010 to August 2011, boat-based photo-identification surveys were undertaken monthly over 132 days (>1,150 hours of effort; >100,000 dorsal fin images) in the four main resting bays along the Kona Coast, Hawai’i Island. All images were graded according to photographic quality and distinctiveness. Over 32,000 images were included in the analyses, from which 607 distinctive individuals were catalogued and 214 were highly distinctive. Two independent estimates of the proportion of highly distinctive individuals in the population were not significantly different (p = 0.68). Individual heterogeneity and time variation in capture probabilities were strongly indicated for these data; therefore capture–recapture models allowing for these variations were used. The estimated annual apparent survival rate (the product of true survival and permanent emigration) was 0.97 (SE 0.05). Open and closed capture–recapture models for the highly distinctive individuals photographed at least once each month produced similar abundance estimates. An estimate of 221 (SE 4.3) highly distinctive spinner dolphins resulted in a total abundance of 631 (SE 60.1; 95% CI 524–761) spinner dolphins in the Hawai’i Island stock, which is lower than previous estimates. When this abundance estimate is considered alongside the rigid daily behavioural pattern, genetic distinctiveness, and the ease of human access to spinner dolphins in their preferred resting habitats, this Hawai’i Island stock is likely more vulnerable to negative impacts from human disturbance than previously believed.
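The final scaling step, from the highly distinctive subset to the whole stock, can be written down directly. The proportion-distinctive value and its standard error below are illustrative stand-ins (the study estimated this proportion from its own data); the point is the delta-method inflation of uncertainty when dividing one estimate by another.

```python
import math

def total_abundance(n_distinct, se_n, theta, se_theta):
    """Scale a capture-recapture estimate for highly distinctive
    individuals to the whole population by the estimated proportion
    distinctive (theta).  Standard error by the delta method for a
    ratio, assuming the two estimates are independent."""
    n_total = n_distinct / theta
    se_total = n_total * math.sqrt((se_n / n_distinct) ** 2
                                   + (se_theta / theta) ** 2)
    return n_total, se_total

# theta = 0.35 and se_theta = 0.033 are hypothetical illustration values.
n_total, se_total = total_abundance(221, 4.3, 0.35, 0.033)
```

Note how the uncertainty in theta, not in the 221 itself, dominates the standard error of the total.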

15.
Single fibres from the semitendinosus muscle of frog were illuminated at normal incidence with a He–Ne laser. The intensity transient and fine structure pattern of light diffracted from the fibre undergoing isometric twitches were measured. During fibre shortening, the intensity decreased rapidly and the fine structure pattern preserved its shape and moved swiftly away from the undiffracted laser beam. The fine structure patterns of the contracting and resting fibre were nearly identical. The ratio of intensities of the contracting and resting fibre at the same sarcomere length was determined as a function of the time elapsed after fibre stimulation. The time-resolved intensity ratio increased with sarcomere length and became unity when sarcomere length was between 3.5 μm and 3.7 μm. A diffraction theory based on the sarcomere unit was developed. It contained a parameter describing the strength of filament interaction. The comparison between the theory and data shows that the initial intensity drop during contraction is primarily due to filament interactions. At a later stage of contraction, sarcomere disorder becomes the major component causing the intensity to decrease. Diffraction models which use the Debye-Waller formalism to explain the intensity decrease are discussed. The sarcomere-unit diffraction model is applied to previously reported intensity measurements from active fibres.

16.
Single-molecule switching nanoscopy overcomes the diffraction limit of light by stochastically switching single fluorescent molecules on and off, and then localizing their positions individually. Recent advances in this technique have greatly accelerated the data acquisition speed and improved the temporal resolution of super-resolution imaging. However, it has not been quantified whether this speed increase comes at the cost of compromised image quality. The spatial and temporal resolution depends on many factors, among which laser intensity and camera speed are the two most critical parameters. Here we quantitatively compare the image quality achieved when imaging Alexa Fluor 647-immunolabeled microtubules over an extended range of laser intensities and camera speeds using three criteria – localization precision, density of localized molecules, and resolution of reconstructed images based on Fourier Ring Correlation. We found that, with optimized parameters, single-molecule switching nanoscopy at high speeds can achieve the same image quality as imaging at conventional speeds in a 5–25 times shorter time period. Furthermore, we measured the photoswitching kinetics of Alexa Fluor 647 from single-molecule experiments, and, based on this kinetic data, we developed algorithms to simulate single-molecule switching nanoscopy images. We used this software tool to demonstrate how laser intensity and camera speed affect the density of active fluorophores and influence the achievable resolution. Our study provides guidelines for choosing appropriate laser intensities for imaging Alexa Fluor 647 at different speeds and a quantification protocol for future evaluations of other probes and imaging parameters.
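The tension between camera speed and localization precision follows from photon statistics alone. The toy estimator below localizes an emitter by the centroid of its detected photons (the study's analysis uses PSF fitting and Fourier Ring Correlation, which this does not reproduce); precision scales as psf_sigma/sqrt(N), so halving the photons collected per frame costs a factor of sqrt(2). All parameter values are invented.

```python
import random
import statistics

def localization_precision(photons, psf_sigma=1.3, trials=1500, seed=3):
    """Localize a single emitter by the centroid of `photons` samples
    drawn from a Gaussian PSF centred at 0; the spread of the centroid
    over many repeats is the localization precision, which scales as
    psf_sigma / sqrt(photons)."""
    rng = random.Random(seed)
    estimates = [statistics.fmean(rng.gauss(0.0, psf_sigma)
                                  for _ in range(photons))
                 for _ in range(trials)]
    return statistics.stdev(estimates)

# Doubling camera speed at fixed laser intensity halves the photons
# collected per localization event:
p_slow = localization_precision(photons=1000)
p_fast = localization_precision(photons=500)
```

This is why raising the laser intensity (more photons per unit time) can hold precision constant as the camera runs faster, within the limits the photoswitching kinetics allow.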

17.
18.
Cellular barcoding methods offer the exciting possibility of ‘infinite-pseudocolor’ anatomical reconstruction—i.e., assigning each neuron its own random unique barcoded ‘pseudocolor,’ and then using these pseudocolors to trace the microanatomy of each neuron. Here we use simulations, based on densely-reconstructed electron microscopy microanatomy, with signal structure matched to real barcoding data, to quantify the feasibility of this procedure. We develop a new blind demixing approach to recover the barcodes that label each neuron, and validate this method on real data with known barcodes. We also develop a neural network which uses the recovered barcodes to reconstruct the neuronal morphology from the observed fluorescence imaging data, ‘connecting the dots’ between discontiguous barcode amplicon signals. We find that accurate recovery should be feasible, provided that the barcode signal density is sufficiently high. This study suggests the possibility of mapping the morphology and projection pattern of many individual neurons simultaneously, at high resolution and at large scale, via conventional light microscopy.

19.
Background
The prevalence of Schistosoma mansoni infection is usually assessed by the Kato-Katz diagnostic technique. However, Kato-Katz thick smears have low sensitivity, especially for light infections. Egg count models fitted to individual-level data can adjust for the infection intensity-dependent sensitivity and estimate the ‘true’ prevalence in a population. However, application of these models is complex and there is a need for adjustments that can be done without modeling expertise. This study provides estimates of the ‘true’ S. mansoni prevalence from population summary measures of observed prevalence and infection intensity, using extensive simulations parametrized with data from different settings in sub-Saharan Africa.

Methodology
An individual-level egg count model was applied to Kato-Katz data to determine the S. mansoni infection intensity-dependent sensitivity for various sampling schemes. Observations in populations with varying forces of transmission were simulated, using standard assumptions about the distribution of worms and their mating behavior. Summary measures such as the geometric mean infection intensity, the arithmetic mean infection intensity, and the observed prevalence of the simulations were calculated, and parametric statistical models were fitted to the summary measures for each sampling scheme. For validation, the simulation-based estimates were compared with an observational dataset not used to inform the simulation.

Principal findings
Overall, the sensitivity of Kato-Katz in a population varies with the mean infection intensity. Using a parametric model that takes into account different sampling schemes, varying from a single Kato-Katz slide to triplicate slides over three days, both geometric and arithmetic mean infection intensities improve estimation of sensitivity. The relation between observed and ‘true’ prevalence is remarkably linear, and triplicate slides per day on three consecutive days ensure close to perfect sensitivity.

Conclusions/significance
Estimation of ‘true’ S. mansoni prevalence is improved when taking into account the geometric or arithmetic mean infection intensity in a population. We supply parametric functions, and estimates of their parameters, to calculate the ‘true’ prevalence for sampling schemes of up to three days with triplicate Kato-Katz thick smears per day.
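The intensity dependence of Kato-Katz sensitivity is easy to reproduce in a toy simulation. The sketch below is not the paper's egg count model: the gamma shape and the per-slide scaling factor are invented for illustration. It does, however, show why reading more slides pushes observed prevalence among infected people toward the ‘true’ value.

```python
import math
import random

def observed_prevalence(mean_epg, n_slides, n_people=2000, seed=7):
    """Simulated Kato-Katz sensitivity among truly infected people.
    Each person gets a gamma-distributed egg density (overdispersion
    across hosts); each slide reads positive with probability
    1 - exp(-lam), where lam is the expected egg count on one slide.
    The shape 0.5 and the '/24' slide factor are assumptions, not
    calibrated constants."""
    rng = random.Random(seed)
    positives = 0
    for _ in range(n_people):
        lam = rng.gammavariate(0.5, mean_epg / 0.5) / 24.0
        if any(rng.random() < 1.0 - math.exp(-lam) for _ in range(n_slides)):
            positives += 1
    return positives / n_people

sens_single = observed_prevalence(24, 1)   # one slide, light infections
sens_triple3 = observed_prevalence(24, 9)  # triplicate slides x 3 days
```

With light infections a single slide misses over half the infected people in this toy model, while nine slides recover most of them, mirroring the sampling-scheme effect described above.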

20.
Oligonucleotide microarrays are commonly adopted for detecting and quantifying the abundance of molecules in biological samples. Analysis of microarray data starts with recording and interpreting hybridization signals from CEL images. However, many CEL images may be blemished by noise from various sources, observed as “bright spots”, “dark clouds”, and “shadowy circles”, etc. It is crucial that these image defects are correctly identified and properly processed. Existing approaches mainly focus on detecting defect areas and removing affected intensities. In this article, we propose to use a mixed effect model for imputing the affected intensities. The proposed imputation procedure is a single-array-based approach which does not require any biological replicate or between-array normalization. We further examine its performance by using Affymetrix high-density SNP arrays. The results show that this imputation procedure significantly reduces genotyping error rates. We also discuss the necessary adjustments for its potential extension to other oligonucleotide microarrays, such as gene expression profiling. The R source code implementing the approach is freely available upon request.
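A single-array imputation in the spirit described can be sketched with a plain additive two-way model. The medians-based fixed effects below are a crude stand-in for the article's mixed effect model, but they share its key property: flagged intensities are reconstructed from the unflagged cells of the same array, with no replicate or between-array step.

```python
import statistics

def impute_defects(grid, defects):
    """Impute flagged cells of an intensity grid from an additive
    model (overall + row effect + column effect), with the effects
    estimated by medians over the unflagged cells only."""
    ok = [(i, j) for i in range(len(grid))
          for j in range(len(grid[0])) if (i, j) not in defects]
    overall = statistics.median(grid[i][j] for i, j in ok)
    row_eff = {i: statistics.median([grid[i][j] - overall
                                     for i2, j in ok if i2 == i])
               for i in range(len(grid))}
    col_eff = {j: statistics.median([grid[i2][j] - overall
                                     for i2, j2 in ok if j2 == j])
               for j in range(len(grid[0]))}
    return {(i, j): overall + row_eff[i] + col_eff[j] for i, j in defects}

# A 3x4 probe patch following an additive model, with one cell hit by
# a "bright spot" artefact (999) that the defect mask flags:
grid = [[10, 11, 13, 8],
        [12, 13, 999, 10],
        [14, 15, 17, 12]]
imputed = impute_defects(grid, {(1, 2)})  # recovers the additive value 15
```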
