Similar Literature (20 results)
1.
2.
Wang W, Wu X, Xiong E, Tai F. Proteomics 2012, 12(7):938-943
The presence of high-abundance proteins in complex protein mixtures often masks low-abundance proteins and causes loss of resolution in 2DE. Protein fractionation steps conducted prior to 2DE can enhance the detection of low-abundance proteins and improve the resolution of 2DE. Here, we report a method to prefractionate soluble protein extracts based on protein thermal denaturation. Soluble proteins were extracted from maize embryos and leaves and from Escherichia coli cells. Through heating at 95°C for 5 min, soluble protein extracts were prefractionated into a heat-stable protein fraction (the supernatant) and a heat-labile protein fraction (the precipitate). Our results showed that heat prefractionation enhanced the separation of proteins in both fractions by 2DE, thereby increasing the chance of detecting low-abundance proteins, many of which were not visible in the unfractionated extract. In maize embryo, 330 spots were detected in the soluble protein extract, while 577 spots were detected after prefractionation. Furthermore, this prefractionation method facilitated the enrichment, detection, and identification of de novo synthesized stress proteins. Because of its simplicity, the one-step heat prefractionation minimizes protein loss. Finally, heat prefractionation requires no expensive special hardware or reagents, and provides an alternative prefractionation approach for increasing the resolving power of 2DE.

3.
LC-MS experiments can generate large quantities of data, for which a variety of database search engines are available to make peptide and protein identifications. Decoy databases are becoming widely used to place statistical confidence in result sets, allowing the false discovery rate (FDR) to be estimated. Different search engines produce different identification sets, so employing more than one search engine could increase the number of peptides (and proteins) identified, provided an appropriate mechanism for combining their results can be defined. We have developed a search-engine-independent score based on FDR, called the FDR Score, which allows peptide identifications from different search engines to be combined. The results demonstrate that the observed FDR differs significantly depending on whether identifications were made by all three search engines, by each pair of search engines, or by a single search engine. Our algorithm assigns identifications to groups according to the set of search engines that made the identification, and re-assigns each identification a score (the combined FDR Score). The combined FDR Score can differentiate between correct and incorrect peptide identifications with high accuracy, allowing on average 35% more peptide identifications to be made at a fixed FDR than with a single search engine.
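To illustrate the target-decoy FDR estimation that underlies the FDR Score, here is a minimal Python sketch under assumed inputs (one score and one target/decoy flag per identification). It computes a monotone, q-value-style FDR per identification; the grouping and re-scoring step of the published combined FDR Score algorithm is more involved and is only indicated in the closing comment.

```python
import numpy as np

def fdr_score(scores, is_decoy):
    """Estimate an FDR-based score per identification from a target-decoy
    search: sort best-score-first, estimate FDR at each rank as
    #decoys / #targets, then enforce monotonicity (q-value style)."""
    scores = np.asarray(scores, dtype=float)
    decoy = np.asarray(is_decoy, dtype=bool)
    order = np.argsort(-scores)                  # best score first
    n_decoy = np.cumsum(decoy[order])
    n_target = np.cumsum(~decoy[order])
    fdr = n_decoy / np.maximum(n_target, 1)
    qval = np.minimum.accumulate(fdr[::-1])[::-1]  # running min from the tail
    out = np.empty_like(qval)
    out[order] = qval
    return out

# To combine engines, identifications would be grouped by the set of
# engines that found them, and the FDR re-estimated within each group.
```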

4.
Acoustic recording units (ARUs) enable geographically extensive surveys of sensitive and elusive species. However, a hidden cost of using ARU data for modeling species occupancy is that prohibitive amounts of human verification may be required to correct species identifications made by automated software. Bat acoustic studies exemplify this challenge because large volumes of echolocation calls can be recorded and automatically classified to species. The standard occupancy model requires aggregating verified recordings to construct confirmed detection/non-detection datasets. The multistep data processing workflow is not necessarily transparent nor consistent among studies. We share a workflow diagramming strategy that could provide coherency among practitioners. We explore a false-positive occupancy model that accounts for misclassification errors and enables a potential reduction in the number of confirmed detections required. Simulations informed by real data were used to evaluate how much confirmation effort could be reduced without sacrificing the bias and precision of the site-occupancy and detection-error estimators. We found that even under a 50% reduction in total confirmation effort, estimator properties were reasonable for our assumed survey design, species-specific parameter values, and desired precision. For transferability, we provide a fully documented R package, OCacoustic, that implements the false-positive occupancy model. Practitioners can apply OCacoustic to optimize their own study design (required sample sizes, number of visits, and confirmation scenarios) for properly implementing a false-positive occupancy model with bat or other wildlife acoustic data. Additionally, our work highlights the importance of clearly defining research objectives and data processing strategies at the outset to align the study design with the desired statistical inferences.
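As a sketch of the kind of false-positive occupancy model described here (not the OCacoustic implementation), the following assumes the basic site-occupancy mixture with occupancy probability psi, detection probability p11 at occupied sites, and false-positive probability p10 at unoccupied sites; all parameter values and the simulation layout are illustrative. Note that this minimal variant needs a constraint such as p11 > p10 (or confirmation data) to be identifiable.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, y):
    """Negative log-likelihood of the basic false-positive occupancy model.
    y is a sites x visits binary detection matrix; params are the logits
    of (psi, p11, p10)."""
    psi, p11, p10 = 1 / (1 + np.exp(-np.asarray(params)))
    det = y.sum(axis=1)
    n = y.shape[1]
    l_occ = p11 ** det * (1 - p11) ** (n - det)   # occupied-site likelihood
    l_un = p10 ** det * (1 - p10) ** (n - det)    # unoccupied-site likelihood
    return -np.sum(np.log(psi * l_occ + (1 - psi) * l_un + 1e-300))

# Simulate detection histories under assumed values, then fit by ML.
rng = np.random.default_rng(1)
psi, p11, p10, n_sites, n_visits = 0.6, 0.7, 0.1, 200, 4
z = rng.random(n_sites) < psi                     # latent occupancy states
p = np.where(z[:, None], p11, p10)
y = (rng.random((n_sites, n_visits)) < p).astype(int)
fit = minimize(neg_log_lik, x0=[0.0, 1.0, -2.0], args=(y,))
print(1 / (1 + np.exp(-fit.x)))                   # recovered (psi, p11, p10)
```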

5.
We characterize the kinetics of two cancer cell lines: IGROV1 (ovarian carcinoma) and MOLT4 (leukemia). By means of flow cytometry, we selected two populations from exponentially growing in vitro cell lines, according to the cells' DNA synthesis activity during a preceding labeling period. For these populations we determined the time course of the percentages of cells in the different phases of the cycle, sampling every 3 hr for 60 hr. The initially semi-synchronous populations quickly converged to a stable age distribution, which is typical of the cell line at equilibrium; this desynchronization reflects the intercell variability in cell cycle duration. By matching these experimental observations to mathematical modelling, we related the convergence rate toward the asymptotic distribution (R) and the period of the phase-percentage oscillations (T) to the mean cell cycle duration and its coefficient of variation, giving two formulas involving these parameters. Since T and R can be obtained by fitting our data to an asymptotic formula derived from the model, we can estimate the other two kinetic parameters. IGROV1 cells have a shorter mean cell cycle time, but higher intercell variability, than the leukemia line, which takes longer to lose synchrony.
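The asymptotic formula itself is not reproduced in this abstract, so as an illustration only, the sketch below assumes a damped-cosine relaxation of a phase percentage toward its equilibrium value, from which the convergence rate R and oscillation period T can be extracted by nonlinear fitting. The functional form and the synthetic data are assumptions, not the authors' model.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, mean, amp, R, T, phi):
    """Assumed asymptotic form: a phase percentage relaxing to 'mean'
    with damping rate R while oscillating with period T."""
    return mean + amp * np.exp(-R * t) * np.cos(2 * np.pi * t / T + phi)

t = np.arange(0, 60, 3.0)                 # sampled every 3 hr for 60 hr
rng = np.random.default_rng(0)            # placeholder synthetic %S-phase data
y = damped_osc(t, 35, 20, 0.05, 18, 0.3) + rng.normal(0, 1.5, t.size)

p0 = [y.mean(), np.ptp(y) / 2, 0.05, 20, 0.0]
(mean, amp, R, T, phi), _ = curve_fit(damped_osc, t, y, p0=p0)
print(f"convergence rate R = {R:.3f}/hr, oscillation period T = {T:.1f} hr")
```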

6.
Despite progress in developing personal combat-protective gear, eye and brain injuries remain common and carry fatal or long-term repercussions. The complex nature of the cranial tissues means that simple methods (e.g., crash dummies) for testing the effectiveness of personal protective gear against non-penetrating impacts are both expensive and ineffective, and using animals or cadavers raises ethical issues. The present work presents a versatile testing framework for quantitatively evaluating the protective performance of head and eye combat-protective gear against non-penetrating impacts. The biomimetic finite element (FE) head model that was developed provides a realistic representation of cranial structure and tissue properties. Simulated crash impact results were validated against a previous cadaveric study and with a crash phantom developed in our lab. The model was then fitted with various helmet and goggle designs onto which a non-penetrating ballistic impact was applied. Example data show that reducing the elastic and shear moduli of the helmet's outer Kevlar-29 layer by 30% and 80%, respectively, lowered intracranial pressures by 20%. Our modeling suggests that the level of stress that develops in brain tissues, which ultimately causes brain damage, cannot be predicted solely from the properties of the helmet/goggle materials. We further found that a reduced contact area between goggles and face is a key factor in reducing the mechanical loads transmitted to the optic nerve and eyeballs following an impact. Overall, this work demonstrates the simplicity, flexibility, and usefulness of computational modeling for the development, evaluation, and testing of combat-protective equipment.
Highlights:
• A finite element head model was developed for testing head gear.
• Reducing the elastic and shear moduli of the helmet's outer layer lowered intracranial stresses.
• Gear material properties could not fully predict impact-related stress in the brain.
• Reduced goggles-face contact lowered the loads transmitted to the optic nerve and eyes.

7.
Fennell DA, Cotter FE. Cytometry 2000, 39(4):266-274
BACKGROUND: Cytofluorometric analysis allows single-cell resolution of all-or-none programmed cell death (apoptosis) responses and permits direct measurement of cumulative frequency distributions (CFDs) of apoptosis sensitivity, from which the median apoptosis tolerance can be estimated. Robust estimation of susceptibility to apoptosis within neoplastic cell populations provides a means of either accurately determining pharmacologically induced changes in apoptosis sensitivity or comparing cell population responses to different apoptosis inducers. METHODS: Experimentally determined CFDs for VP-16 (etoposide)-induced apoptosis were measured by phosphatidylserine surface expression and mitochondrial membrane potential dissipation (ΔΨm) in BV173 leukemia cells. CFDs were modelled by a modified Hill equation using four-parameter nonlinear regression, from which the median apoptosis tolerance (K) was estimated. RESULTS: Median apoptosis tolerance (K) was estimated from nonlinear regression analysis of CFDs for ΔΨm collapse and loss of membrane asymmetry. The error distribution of K, determined from nonlinear regression analysis of 100 simulated CFDs, was shown to be asymmetrical. The asymmetrical likelihood intervals for K were computed iteratively, thereby providing a measure of experimental error. CONCLUSIONS: A distribution-based approach to apoptosis assay using multivariate flow analysis offers a powerful, quantitative technique for investigating the phenotypic basis of neoplastic cell responsiveness to apoptosis therapy, permitting separation of cell populations on the basis of apoptosis susceptibility.
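A minimal sketch of the four-parameter Hill regression used to estimate the median apoptosis tolerance K might look as follows; the paper's exact parameterization of the modified Hill equation may differ, and the dose-response values here are illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill_cfd(dose, bottom, top, K, n):
    """Four-parameter Hill function for a cumulative frequency distribution:
    % apoptotic cells vs dose; K is the dose at half-maximal response
    (the median apoptosis tolerance)."""
    return bottom + (top - bottom) * dose**n / (K**n + dose**n)

dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100])   # e.g. uM VP-16 (illustrative)
pct = np.array([4, 7, 18, 42, 71, 88, 95])       # % apoptotic (illustrative)
popt, pcov = curve_fit(hill_cfd, dose, pct, p0=[0, 100, 5, 1])
print(f"median apoptosis tolerance K ~ {popt[2]:.1f}")
```

The asymmetrical likelihood intervals for K described above would be obtained by iteratively profiling the likelihood rather than from the covariance matrix of the fit.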

8.
Adipocytes are central players in energy metabolism and the obesity epidemic, yet their protein composition remains largely unexplored. We investigated the adipocyte proteome by combining high-accuracy, high-sensitivity protein identification technology with subcellular fractionation of nuclei, mitochondria, membrane, and cytosol of 3T3-L1 adipocytes. We identified 3,287 proteins while essentially eliminating false positives, making this one of the largest high-confidence proteomes reported to date. Comprehensive bioinformatics analysis revealed that the adipocyte proteome, despite its specialized role, is very complex. Comparison with microarray data showed that the mRNA abundance of detected versus non-detected proteins differed by less than 2-fold and that proteomics covered as large a proportion of the insulin signaling pathway as the microarray data did. We used the Endeavour gene prioritization algorithm to associate a number of factors with vesicle transport in response to insulin stimulation, a key function of adipocytes. Our data and analysis can serve as a model for cellular proteomics. The adipocyte proteome is available as supplemental material and from the Max-Planck Unified Proteome database.
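The mRNA comparison reduces to a simple computation; here is a hedged sketch with hypothetical input names.

```python
import numpy as np

def mrna_fold_difference(mrna_abundance, detected_proteins):
    """Median mRNA abundance of MS-detected vs non-detected gene products.
    mrna_abundance: dict gene -> microarray intensity (hypothetical input);
    detected_proteins: set of genes identified in the proteome."""
    det = [v for g, v in mrna_abundance.items() if g in detected_proteins]
    non = [v for g, v in mrna_abundance.items() if g not in detected_proteins]
    return np.median(det) / np.median(non)    # < 2-fold in this study
```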

9.
Advances in proteome analysis by mass spectrometry

10.
Measuring gene expression by quantitative proteome analysis
Proteome analysis is most commonly accomplished by the combination of two-dimensional gel electrophoresis for protein separation, visualization, and quantification and mass spectrometry for protein identification. Over the past year, exceptional progress has been made towards developing a new technology base for the precise quantification and identification of proteins in complex mixtures, that is, quantitative proteomics.

11.
12.
Advances in proteome analysis by mass spectrometry
Proteome characterization using mass spectrometry is essential for the systematic investigation of biological systems and for the study of gene function. Recent advances in this multifaceted field have occurred in four general areas: protein and peptide separation methodologies; selective labeling chemistries for quantitative measurement of peptide and protein abundances; characterization of post-translational protein modifications; and instrumentation.

13.
14.
We set up a twofold investigation: we assess left ventricular (LV) rotation and twist in the human heart through 3D-echocardiographic speckle tracking, and use representative experimental data as a benchmark for numerical results obtained by solving our mechanical model of the LV. We aim at new insight into the relationships between myocardial contraction patterns and the overall behavior at the scale of the whole organ. It is concluded that torsional rotation is sensitive to transmural gradients of contractility, which is assumed to be linearly related to action potential duration (APD). Pressure-volume loops and other basic strain measures are not affected by these gradients. Therefore, realistic torsional behavior of the human LV may indeed correspond to the electrophysiological and functional differences between endocardial and epicardial cells recently observed in non-failing hearts. Future investigations now need to integrate the mechanical model proposed here with minimal models of human ventricular APD to drive excitation-contraction coupling transmurally.
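As a worked illustration of the assumed linear relationship between contractility and APD across the wall, consider the following sketch; the endocardial and epicardial APD values and the normalization are illustrative assumptions.

```python
import numpy as np

depth = np.linspace(0.0, 1.0, 11)       # 0 = endocardium, 1 = epicardium
apd_endo, apd_epi = 330.0, 280.0        # ms, illustrative values
apd = apd_endo + (apd_epi - apd_endo) * depth
# Contractility assumed linear in APD, normalized to the endocardial value,
# giving a transmural contractility gradient to drive the mechanical model.
contractility = apd / apd_endo
```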

15.
Cao D, Parker R. Cell 2003, 113(4):533-545

16.
The present article considers the influence of heterogeneity in a mobile analyte or in an immobilized ligand population on surface binding kinetics and equilibrium isotherms. We describe strategies for solving the inverse problem of calculating two-dimensional distributions of rate and affinity constants from experimental data on surface binding kinetics, such as those obtained from optical biosensors. Although the characterization of a heterogeneous population of analytes binding to uniform surface sites may be possible under suitable experimental conditions, computational difficulties currently limit this approach. In contrast, the case of uniform analytes binding to heterogeneous populations of surface sites is computationally feasible, and can be combined with Tikhonov-Phillips and maximum entropy regularization techniques that provide the simplest distribution consistent with the data. The properties of this ligand distribution analysis are explored with several experimental and simulated data sets. The resulting two-dimensional rate and affinity constant distributions can describe experimental kinetic traces measured with optical biosensors well. The use of kinetic surface binding data can give significantly higher resolution than affinity distributions from the binding isotherms alone. The shape and the level of detail of the calculated distributions depend on the experimental conditions, such as the contact times and the concentration range of the analyte. Despite the flexibility introduced by considering surface site distributions, misapplication of this model to surface binding data from transport-limited binding processes or from analyte distributions can be identified by large residuals, provided a sufficient range of analyte concentrations and contact times is used. The distribution analysis can provide a rational interpretation of complex experimental surface binding kinetics, and provides an analytical tool for probing the homogeneity of populations of immobilized protein.
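The surface-site inverse problem can be sketched as a regularized linear fit: the measured binding trace is modeled as a superposition of 1:1 pseudo-first-order association curves on a grid of (ka, kd) pairs, solved with Tikhonov regularization and a nonnegativity constraint. The grid ranges, regularization weight, and synthetic data below are assumptions, and the maximum entropy variant is not shown.

```python
import numpy as np
from scipy.optimize import nnls

t = np.linspace(0, 100, 201)              # association phase, s
conc = 50e-9                              # analyte concentration, M
ka_grid = np.logspace(3, 6, 12)           # association rates, 1/(M*s)
kd_grid = np.logspace(-4, -1, 12)         # dissociation rates, 1/s

# Kernel: one 1:1 association curve per (ka, kd) grid point.
cols = []
for ka in ka_grid:
    for kd in kd_grid:
        kobs = ka * conc + kd
        cols.append(ka * conc / kobs * (1 - np.exp(-kobs * t)))
A = np.column_stack(cols)

# Synthetic "sensorgram": a two-site mixture plus noise.
rng = np.random.default_rng(2)
y = 0.7 * A[:, 30] + 0.3 * A[:, 100] + rng.normal(0, 0.005, t.size)

# Tikhonov regularization: augment with lam*I rows so inflated or noisy
# solutions are penalized; NNLS keeps the distribution nonnegative.
lam = 0.1
A_aug = np.vstack([A, lam * np.eye(A.shape[1])])
y_aug = np.concatenate([y, np.zeros(A.shape[1])])
w, _ = nnls(A_aug, y_aug)
distribution = w.reshape(len(ka_grid), len(kd_grid))   # 2-D (ka, kd) map
```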

17.
18.
Large identifiable landscape units, such as ecoregions, are used to prioritize global and continental conservation efforts, particularly where biodiversity knowledge is inadequate. Setting biodiversity representation targets using coarse, large-scale biogeographic boundaries can be inefficient and under-representative. Even when using fine-scale biodiversity data, representation deficiencies can occur through misalignment of target distributions with such prioritization frameworks. While this pattern has been recognized, quantitative approaches highlighting misalignments have been lacking, particularly for assemblages of mammal species. We tested the efficacy of Australia's bioregions as a spatial prioritization framework for representing mammal species within protected areas in New South Wales. We produced an approach based on mammal assemblages and assessed its performance in representing mammal distributions. Substantial spatial misalignment between New South Wales's bioregions and mammal assemblages was revealed, reflecting deficiencies in the representation of more than half of the identified mammal assemblages. Using a systematic approach driven by fine-scale mammalian data, we compared the efficacy of these two frameworks in securing mammalian representation within protected areas. Of the 61 species, 38 were better represented by the mammalian framework, with the remaining species only marginally better represented when guided by bioregions. Overall, the rate at which mammal species were incorporated into the protected area network was higher (5.1% ± 0.6 sd) when guided by mammal assemblages. Guided by bioregions, systematic conservation planning of protected areas may be constrained in realizing its full potential in securing representation for all of Australia's biodiversity. Adapting the boundaries of prioritization frameworks by incorporating amassed information from a broad range of taxa should therefore be of conservation significance.
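Representation-target selection of this kind is commonly driven by complementarity; the following greedy heuristic is a generic sketch (not the authors' algorithm), in which each candidate site contributes the species it contains toward per-species representation targets.

```python
def greedy_representation(species_by_site, target_per_species):
    """Greedy complementarity heuristic: repeatedly add the site that
    contributes most toward the still-unmet species targets.
    species_by_site: dict site -> iterable of species present (hypothetical);
    target_per_species: dict species -> representations required."""
    remaining = dict(target_per_species)
    sites = {k: set(v) for k, v in species_by_site.items()}
    chosen = []

    def gain(spp):
        return sum(1 for s in spp if remaining.get(s, 0) > 0)

    while any(v > 0 for v in remaining.values()) and sites:
        best = max(sites, key=lambda k: gain(sites[k]))
        if gain(sites[best]) == 0:
            break                       # no site helps any unmet target
        chosen.append(best)
        for s in sites.pop(best):
            if remaining.get(s, 0) > 0:
                remaining[s] -= 1
    return chosen
```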

19.
Park GW, Kwon KH, Kim JY, Lee JH, Yun SH, Kim SI, Park YM, Cho SY, Paik YK, Yoo JS. Proteomics 2006, 6(4):1121-1132
In shotgun proteomics, proteins can be fractionated by 1-D gel electrophoresis and digested into peptides, followed by liquid chromatography to separate the peptide mixture. Mass spectrometry generates hundreds of thousands of tandem mass spectra from these fractions, and proteins are identified by database searching. However, the search scores are usually not sufficient to distinguish correct from incorrect peptide identifications. In this study, we propose a method for confident protein identification in high-throughput analysis of the human proteome. To build a filtering protocol for database searching, we chose Pseudomonas putida KT2440 as a reference, because this bacterial proteome contains fewer modifications and is simpler than the human proteome. First, the P. putida KT2440 proteome was filtered by a reversed-sequence database search and correlated with molecular weight by 1-D-gel band position. The characterization protocol was then applied to determine the criteria for clustering the human plasma proteome into three different groups. This protein filtering method, based on bacterial proteome data analysis, represents a rapid way to generate a higher-confidence protein list for the human proteome, including some heavily modified and cleaved proteins.
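The reversed-sequence filtering step amounts to choosing a score cutoff at which the decoy-estimated error rate becomes acceptable; a minimal sketch follows (the paper's actual criteria also involve correlation with 1-D-gel molecular weight, which is not shown).

```python
import numpy as np

def score_cutoff(target_scores, decoy_scores, fdr=0.01):
    """Lowest score threshold at which the reversed-database estimate
    #decoy_hits / #target_hits stays at or below the desired FDR."""
    target_scores = np.asarray(target_scores)
    decoy_scores = np.asarray(decoy_scores)
    for thr in np.sort(np.concatenate([target_scores, decoy_scores])):
        n_t = np.sum(target_scores >= thr)
        n_d = np.sum(decoy_scores >= thr)
        if n_t > 0 and n_d / n_t <= fdr:
            return thr
    return None    # no threshold achieves the requested FDR
```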

20.
Contemporary small-molecule drug discovery frequently involves the screening of large compound files as a core activity, so cost, speed, and safety become critical issues. To meet this need, numerous technologies have been developed to allow mix-and-measure approaches, facilitate miniaturization, increase speed, and minimize the use of potentially hazardous reagents such as radioactive materials. However, despite the on-paper advantages of these new technologies, risks can remain undefined: for example, whether a novel method will identify active chemical series in a way comparable with conventional methods. To address this question, we have carried out experiments that directly compare the output of high-throughput screens using a given novel approach and a traditional method. The concordance between the screening methods can then be determined by comparing the numbers and structures of the active molecules identified. This article describes the approach taken in our laboratory to minimize variability in such experiments and shows data exemplifying the general result of lower-than-expected concordance. Statistical modeling was subsequently used to aid this interpretation. The model used a beta-distribution function to generate a true-activity frequency distribution, with added normal random error and occasional outliers representing assay variability. Hence, the effects of assay parameters such as the threshold, the number of real actives, the number of outliers, and the standard deviation could readily be explored. The model was found to describe the data reasonably well and, moreover, proved to be of great utility when planning further optimal experiments. A key conclusion from the model was that concordance between screening methods can appear poor even when one approach is compared with itself, simply because the result is a function of the assay threshold, the standard deviation, and the true compound % activity. In response to this finding we have adopted alternative experimental designs that more reliably measure the concordance between screening methods.
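The statistical model described can be sketched directly. The distribution parameters below are assumptions chosen only to illustrate the key conclusion: even an assay compared with itself yields imperfect concordance once the threshold, noise, and outliers interact.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sd, n_outliers, threshold = 100_000, 5.0, 200, 30.0

# True % activity: most compounds inactive, a thin tail of real actives.
true_act = 100 * rng.beta(0.2, 8.0, n)

def screen(act):
    """One screening campaign: true activity + normal error + outliers."""
    obs = act + rng.normal(0, sd, act.size)
    idx = rng.choice(act.size, n_outliers, replace=False)
    obs[idx] += rng.normal(0, 10 * sd, n_outliers)   # occasional outliers
    return obs >= threshold

hits_a, hits_b = screen(true_act), screen(true_act)  # same method, twice
concordance = np.sum(hits_a & hits_b) / np.sum(hits_a | hits_b)
print(f"self-concordance of an assay with itself: {concordance:.2f}")
```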
