Similar Literature
20 similar documents found.
1.
Target validation is one of the rate-limiting steps in modern drug discovery. The authors developed a strategy combining adenovirus-mediated gene transfer, for efficient target functionality validation both in vivo and in vitro, with baculovirus expression to produce sufficient quantities of protein for high-throughput screening (HTS). The incorporation of green fluorescent protein (GFP) in the adenovirus vectors accelerates recombinant adenovirus plaque purification, whereas the use of epitope and affinity tags facilitates the identification and purification of recombinant protein. In this generalized scheme, the flexible modular design of the viral vectors eases the transition between target validation and HTS. In the example presented, functional target validation was achieved by overexpressing the target gene in cell-based models and in the mouse cortex following adenovirus-mediated gene delivery. In this context, target overexpression resulted in the accumulation of a disease-related biomarker both in vitro and in vivo. A baculovirus-based expression system was then generated to produce enough target protein for HTS. Thus, the use of these viral expression systems represents a generalized method for rapid target functionality validation and HTS assay development that could be applied to the numerous target candidates emerging from gene discovery programs.

2.
The increasing supplementation of foods with carbohydrate substitutes and the growing regulatory requirements for controlling these products necessitate the development and validation of accurate analytical control techniques. This paper presents the simultaneous validation of two closely related analytical procedures for the determination of sucralose and fructooligosaccharides (FOS) in fruit juices using high-performance anion-exchange chromatography with pulsed amperometric detection (HPAE-PAD). The study applied the accuracy profile procedure with a three-level validation experimental design. The decision criteria, namely the acceptability limits (+/-10%) and the proportion of results contained in the calculated tolerance intervals (80%), were set on a consensus basis with end-users, as no official references were available. In conclusion, the proposed analytical procedures were validated over the selected validation domains for fruit juices and proved to be very capable techniques. The validation strategy was deliberately oriented towards ease of routine use and the reliability of the methods rather than extreme performance. This objective is consistent with that of contract laboratories, which need to reach a known level of guarantee for the results they produce. In that respect, the accuracy profile is a very convenient tool for ascertaining such a goal.
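As a hedged illustration of the decision rule just described, the following Python sketch computes a two-sided beta-expectation tolerance interval for recoveries at a single validation level and checks it against the +/-10% acceptability limits. The recovery values and the single-series formulation are assumptions for illustration only; the full accuracy profile pools within- and between-series variance across all levels.

```python
import numpy as np
from scipy import stats

def beta_expectation_tolerance_interval(recoveries, beta=0.80):
    """Two-sided beta-expectation tolerance interval for relative
    recoveries (%): a prediction interval expected to contain a
    proportion `beta` of future individual results."""
    x = np.asarray(recoveries, dtype=float)
    n = x.size
    mean, sd = x.mean(), x.std(ddof=1)
    k = stats.t.ppf((1 + beta) / 2, df=n - 1) * np.sqrt(1 + 1 / n)
    return mean - k * sd, mean + k * sd

# Hypothetical recoveries (%) at one validation level
recoveries = [98.2, 101.5, 99.7, 97.9, 102.1, 100.4]
lo, hi = beta_expectation_tolerance_interval(recoveries, beta=0.80)
# The level is declared valid if the interval lies within 100 +/- 10%
print(f"tolerance interval: [{lo:.1f}%, {hi:.1f}%]",
      "valid" if lo >= 90 and hi <= 110 else "not valid")
```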

3.
This review discusses data analysis strategies for the discovery of biomarkers in clinical proteomics. Proteomics studies produce large amounts of data, characterized by few samples on which many variables are measured. A wealth of classification methods exists for extracting information from the data. Feature selection plays an important role in reducing the dimensionality of the data prior to classification and in discovering biomarker leads. The question of which classification strategy works best is as yet unanswered. Validation is a crucial step in moving biomarker leads towards clinical use. Here we discuss only statistical validation, recognizing that biological and clinical validation are of utmost importance. First, validated model selection is needed to develop a generalized classifier that predicts new samples correctly. A cross-validation loop wrapped around the model development procedure assesses the performance using unseen data. The significance of the model should also be tested; we use permutations of the data for comparison with uninformative data. This procedure also tests the correctness of the performance validation. Preferably, a new set of samples is measured to test the classifier and rule out results specific to a machine, analyst, laboratory or the first set of samples; this is not yet standard practice. We present a modular framework that combines feature selection, classification, biomarker discovery and statistical validation; all of these data analysis aspects are discussed in this review. The feature selection, classification and biomarker discovery modules can be included or omitted at the researcher's preference. The validation modules, however, should not be optional. In each module, the researcher can select from a wide range of methods, since there is no single unique path to the correct model and proper validation. We discuss many possibilities for feature selection, classification and biomarker discovery. For validation we advise a combination of cross-validation and permutation testing, a validation strategy supported in the literature.
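A minimal sketch of the advised strategy, assuming scikit-learn as the toolkit (the review itself is method-agnostic): feature selection sits inside the pipeline so that the outer cross-validation loop wraps the entire model development procedure, and a permutation test compares the model against uninformative (label-shuffled) data. The data here are random placeholders.

```python
import numpy as np
from sklearn.model_selection import (StratifiedKFold, cross_val_score,
                                     permutation_test_score)
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 500))      # few samples, many variables
y = rng.integers(0, 2, size=60)     # placeholder labels

# Feature selection lives INSIDE the pipeline, so the outer
# cross-validation loop wraps the whole model-development procedure.
model = make_pipeline(StandardScaler(),
                      SelectKBest(f_classif, k=20),
                      LinearSVC())

outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
acc = cross_val_score(model, X, y, cv=outer).mean()

# Permutation test: compare against label-shuffled (uninformative) data.
score, perm_scores, p_value = permutation_test_score(
    model, X, y, cv=outer, n_permutations=200, random_state=0)
print(f"CV accuracy {acc:.2f}, permutation p-value {p_value:.3f}")
```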

4.
5.
6.
Many calcified structures produce periodic growth increments useful for age determination at the annual or daily scale. However, age determination is invariably accompanied by various sources of error, some of which can have a serious effect on age-structured calculations. This review highlights the best available methods for ensuring ageing accuracy and quantifying ageing precision, whether in support of large-scale production ageing or a small-scale research project. Included is a critical overview of methods used to initiate and pursue an accurate and controlled ageing program, including (but not limited to) validation of an ageing method. The distinction between validation of absolute age and of increment periodicity is emphasized, as is the importance of determining the age of first increment formation. Based on an analysis of 372 papers reporting age validation since 1983, considerable progress has been made in age validation efforts in recent years. Nevertheless, several of the age validation methods that have been used routinely are of dubious value, particularly marginal increment analysis. The two major measures of precision, average percent error (APE) and coefficient of variation (CV), are shown to be functionally equivalent, and a conversion factor relating the two is presented (see the equations below). Through use of quality control monitoring, ageing errors are readily detected and quantified; reference collections are the key to both quality control and reduction of costs. Although some level of random ageing error is unavoidable, such error can often be corrected after the fact using statistical ('digital sharpening') methods.
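The two precision measures have standard definitions; for $R$ age readings $X_{ij}$ of structure $j$ with mean $\bar{X}_j$, one common formulation (the exact constants used in the paper may differ) is:

```latex
\mathrm{APE}_j = \frac{100}{R}\sum_{i=1}^{R}\frac{\lvert X_{ij}-\bar{X}_j\rvert}{\bar{X}_j},
\qquad
\mathrm{CV}_j = \frac{100}{\bar{X}_j}\sqrt{\sum_{i=1}^{R}\frac{(X_{ij}-\bar{X}_j)^2}{R-1}}
```

For the common case of $R = 2$ readings per structure, the algebra gives $\mathrm{CV}_j = \sqrt{2}\,\mathrm{APE}_j$, i.e. a conversion factor of about 1.414 between the two measures.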

7.
Validation of computational methods in genomics
High-throughput technologies for genomics provide tens of thousands of genetic measurements, for instance gene-expression measurements on microarrays, and the availability of these measurements has motivated the use of machine learning (inference) methods for classification, clustering, and gene networks. Generally, a design method will yield a model that satisfies some model constraints and fits the data in some manner. A scientific theory, on the other hand, consists of two parts: (1) a mathematical model to characterize relations between variables, and (2) a set of relations between model variables and observables that are used to validate the model via predictive experiments. Although machine learning algorithms are constructed in the hope of producing valid scientific models, they do not ipso facto do so. In some cases, such as classifier estimation, there is a well-developed error theory that relates to model validity according to various statistical theorems, but in others, such as clustering, there is a lack of understanding of the relationship between the learning algorithms and validation. The issue of validation is especially problematic when the sample size is small in comparison with the dimensionality (number of variables), which is commonplace in genomics, because the convergence theory of learning algorithms is typically asymptotic and the algorithms often behave counter-intuitively when used with samples that are small relative to the number of variables. For translational genomics, validation is perhaps the most critical issue, because it is imperative that we understand the performance of a diagnostic or therapeutic procedure to be used in the clinic, and this performance relates directly to the validity of the model behind the procedure. This paper treats the validation issue as it appears in two classes of inference algorithms relating to genomics: classification and clustering. It formulates the problem and reviews salient results.
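The small-sample pitfall described here is easy to demonstrate. A minimal sketch, assuming scikit-learn and purely synthetic, uninformative data: with 40 samples and 2000 variables, resubstitution accuracy looks excellent while cross-validated accuracy stays near chance.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 2000))   # 40 samples, 2000 "genes", no signal
y = np.repeat([0, 1], 20)         # labels carry no real information

clf = make_pipeline(SelectKBest(f_classif, k=10),
                    LinearDiscriminantAnalysis())

resub = clf.fit(X, y).score(X, y)             # optimistic estimate
cv = cross_val_score(clf, X, y, cv=5).mean()  # honest estimate
print(f"resubstitution accuracy {resub:.2f} vs CV accuracy {cv:.2f}")
# Typical output: resubstitution near 1.00, CV near 0.50 (chance)
```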

8.
The newly available techniques for sensitive proteome analysis, and the resulting amounts of data, require a new bioinformatics focus on automatic methods for spectrum reprocessing and peptide/protein validation. Manual validation of results in such studies is neither feasible nor objective enough for quality-relevant interpretation. Tools enabling automatic quality control are therefore essential for producing reliable and comparable data in large consortia such as the Human Proteome Organization Brain Proteome Project, for which standards and well-defined processing pipelines are important. We show a way of choosing the right database model, collecting data, and processing the data against a decoy database, ending up with a quality-controlled protein list merged from several search engines, together with a known false-positive rate (see the sketch below).
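A minimal sketch of the target-decoy filtering step, assuming one common FDR convention (#decoy / #target above a score threshold); the score values and tuple layout are illustrative, not the consortium's actual pipeline.

```python
def fdr_threshold(psms, max_fdr=0.01):
    """psms: list of (score, is_decoy) tuples from a target-decoy search.
    Returns the lowest score threshold at which the estimated FDR
    (#decoy / #target, one common convention) stays <= max_fdr."""
    psms = sorted(psms, key=lambda p: p[0], reverse=True)
    targets = decoys = 0
    threshold = None
    for score, is_decoy in psms:
        if is_decoy:
            decoys += 1
        else:
            targets += 1
        if targets and decoys / targets <= max_fdr:
            threshold = score  # FDR still acceptable down to this score
    return threshold

# Toy data with a deliberately loose threshold for demonstration
psms = [(98.1, False), (97.5, False), (96.3, True),
        (95.0, False), (94.2, True), (93.8, False)]
print(fdr_threshold(psms, max_fdr=0.5))
```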

9.
10.
We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the governing electrophysiology equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje-muscle junctions covering the right and left ventricular endocardial surfaces, as well as transmural and apex-to-base gradients in action potential characteristics, is necessary to produce ECGs and time activation plots that agree with physiological observations.
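The abstract does not spell out the governing equations; a commonly used formulation for ventricular tissue of this kind is the monodomain model (the study may instead use a bidomain or pseudo-bidomain formulation to recover the ECG):

```latex
\chi\left(C_m\,\frac{\partial V_m}{\partial t} + I_{\mathrm{ion}}(V_m,\mathbf{w})\right)
 = \nabla\cdot\left(\boldsymbol{\sigma}\,\nabla V_m\right),
\qquad
\frac{d\mathbf{w}}{dt} = \mathbf{g}(V_m,\mathbf{w})
```

Here $V_m$ is the transmembrane potential, $\chi$ the membrane surface-to-volume ratio, $C_m$ the membrane capacitance, $\boldsymbol{\sigma}$ the conductivity tensor aligned with the fiber directions recovered from diffusion tensor imaging, and $\mathbf{w}$ the gating and state variables of the cellular ionic model.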

11.
Aims: To develop a new type of microbiological Reference Material (RM) displaying long-term stability at room temperature, and to produce and validate two batches of RMs for the enumeration of Bacillus cereus and Clostridium perfringens. Methods and Results: The RMs were based on spores of B. cereus and Cl. perfringens adsorbed on calcium carbonate pellets. Two batches of 1000 units were manufactured and validated in compliance with ISO Guide 35. After verification of their homogeneity, the stability of the 'RM-B. cereus' and 'RM-Cl. perfringens' batches was proven for at least 36 and 9 months, respectively, at room temperature. The validation study was completed by an international collaborative trial involving 12 laboratories, allowing validation of the assigned values. Conclusions: The methodology developed in this work enabled the production of easy-to-handle and cost-effective RMs displaying an unprecedented stability at room temperature, good homogeneity and precise, validated assigned values. Significance and Impact of the Study: This study revealed new paths for the development of stable microbiological RMs. Overcoming the intrinsic instability of living cells makes it possible to produce valuable tools for the quality assurance of microbiology laboratories.

12.
Structural biology and structural genomics are expected to produce many three-dimensional protein structures in the near future. Each new structure raises questions about its function and evolution. Correct functional and evolutionary classification of a new structure is difficult for distantly related proteins and error-prone using simple statistical scores based on sequence or structure similarity. Here we present an accurate numerical method for the identification of evolutionary relationships (homology). The method is based on the principle that natural selection maintains structural and functional continuity within a diverging protein family. The problem of different rates of structural divergence between different families is solved by first using structural similarities to produce a global map of folds in protein space and then further subdividing fold neighborhoods into superfamilies based on functional similarities. In a validation test against a classification by human experts (SCOP), 77% of homologous pairs were identified with 92% reliability. The method is fully automated, allowing fast, self-consistent and complete classification of large numbers of protein structures. In particular, the discrimination between analogy and homology of close structural neighbors will lead to functional predictions while avoiding overprediction.

13.
In vitro inhibition of Helicobacter pylori by extracts of thyme
Extracts of several plants were tested for inhibitory activity against Helicobacter pylori. Among these plants, thyme (aqueous extract) and cinnamon (alcoholic extract) were the most effective. Since the aqueous extract of thyme is easier to produce and consume, it was investigated further. Compared with several antibacterials, the thyme extract had a significant inhibitory effect on H. pylori, reducing both its growth and its potent urease activity. The results of this study indicate that the aqueous extract of thyme possesses therapeutic potential that merits validation in clinical studies.

14.
Gerard PD, Schucany WR (1999). Biometrics 55(3), 769-773
Seber (1986, Biometrics 42, 267-292) suggested an approach to biological population density estimation using kernel estimates of the probability density of detection distances in line transect sampling. Chen (1996a, Applied Statistics 45, 135-150) and others have employed cross-validation to choose a global bandwidth for the kernel estimator or have suggested adaptive kernel estimation (Chen, 1996b, Biometrics 52, 1283-1294). Because estimation of the density is required at only a single point, we investigate a local bandwidth selection procedure that is a modification of the method of Schucany (1995, Journal of the American Statistical Association 90, 535-540) for nonparametric regression. We report on simulation results comparing the proposed method, and a local normal scale rule, with cross-validation and adaptive estimation. The local bandwidths and the normal scale rule produce estimates with mean squared errors that are half the size of the others in most cases. Consistency results are also provided.
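In the notation generally used for line transect sampling (a reconstruction from the cited literature, not quoted from the paper), the density estimate needs the detection-distance density only at zero:

```latex
\hat{D} = \frac{n\,\hat{f}(0)}{2L},
\qquad
\hat{f}(0) = \frac{2}{nh}\sum_{i=1}^{n} K\!\left(\frac{x_i}{h}\right)
```

Here $x_1,\dots,x_n$ are perpendicular detection distances, $L$ is the total transect length, $K$ is a symmetric kernel, $h$ the bandwidth, and the factor 2 in $\hat{f}(0)$ comes from reflecting the data at the boundary $x=0$. Because only $\hat{f}(0)$ matters, a bandwidth tuned locally at zero, as the abstract proposes, can outperform a globally cross-validated choice.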

15.
McNemar's test is popular for assessing the difference between proportions when two observations are taken on each experimental unit. It is useful under a variety of epidemiological study designs that produce correlated binary outcomes. In studies involving outcome ascertainment, cost or feasibility concerns often lead researchers to employ error-prone surrogate diagnostic tests. Assuming a gold standard diagnostic method is available, we address point and confidence interval estimation of the true difference in proportions and of the paired-data odds ratio by incorporating external or internal validation data. We distinguish two special cases, depending on whether it is reasonable to assume that the diagnostic test properties remain the same for both assessments (e.g., at baseline and at follow-up). Likelihood-based analysis yields closed-form estimates when the validation data are external and requires numeric optimization when they are internal. The latter approach offers important advantages in terms of robustness and efficient odds ratio estimation. We consider internal validation study designs geared toward optimizing efficiency given a fixed cost allocated for measurements. Two motivating examples are presented, using gold standard and surrogate bivariate binary diagnoses of bacterial vaginosis (BV) in women participating in the HIV Epidemiology Research Study (HERS).
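For reference, with discordant pair counts $n_{10}$ (positive at the first assessment only) and $n_{01}$ (positive at the second only) out of $n$ pairs, the standard error-free quantities are:

```latex
\chi^2 = \frac{(n_{10}-n_{01})^2}{n_{10}+n_{01}},
\qquad
\widehat{\psi} = \frac{n_{10}}{n_{01}},
\qquad
\hat{p}_1 - \hat{p}_2 = \frac{n_{10}-n_{01}}{n}
```

With error-prone surrogate diagnoses these naive estimates are biased, which is what the validation-data corrections described in the abstract address.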

16.
Remote sensing can be a valuable alternative or complement to traditional techniques for monitoring wildlife populations, but often entails operational bottlenecks at the image analysis stage. For example, photographic aerial surveys have several advantages over surveys employing airborne observers or other more intrusive monitoring techniques, but produce onerous amounts of imagery for manual analysis when conducted across vast areas, such as the Arctic. Deep learning algorithms, chiefly convolutional neural networks (CNNs), have shown promise for automatically detecting wildlife in large and/or complex image sets. But for sparsely distributed species, such as polar bears (Ursus maritimus), there may not be sufficient known instances of the animals in an image set to train a CNN. We investigated the feasibility of instead providing 'synthesized' training data to a CNN to detect polar bears throughout large volumes of aerial imagery from a survey of the Baffin Bay subpopulation. We harvested 534 miscellaneous images of polar bears from the Web, which we edited to more closely resemble the 21 known images of bears from the aerial survey; the latter were used solely for validation. We combined the Web images of polar bears with 6292 random background images from the aerial survey to train a CNN (ResNet-50), which subsequently correctly classified 20/21 (95%) bear images from the survey and 1172/1179 (99.4%) random background validation images. Given that even a small background misclassification rate could produce multitudinous false positives over many thousands of photos, we describe a potential workflow to efficiently screen out erroneous detections. We also discuss potential avenues to improve CNN accuracy, and the broader applicability of our approach to other image-based wildlife monitoring scenarios. Our results demonstrate the feasibility of using miscellaneously sourced images of animals to train deep neural networks for specific wildlife detection tasks.
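A minimal PyTorch sketch of the transfer-learning setup implied above; the pretrained weights, optimizer, and hyperparameters are assumptions, not details from the study.

```python
import torch
import torch.nn as nn
from torchvision import models

# Binary classifier: polar bear vs. background. The weights enum
# requires torchvision >= 0.13; all hyperparameters are assumptions.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """images: (B, 3, 224, 224) float tensor of Web-harvested bear crops
    and random survey backgrounds; labels: (B,) long tensor, 1 = bear."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```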

17.
The six-toothed bark beetle Ips sexdentatus is one of the most important pests of coniferous trees; it can cause extensive tree mortality and change the structure and composition of forest ecosystems. Many abiotic and biotic factors affect infestation by bark beetles, and early detection of forest stands predisposed to infestation helps to reduce the impact of possible outbreaks. The study focused on the production and comparison of Ips sexdentatus susceptibility maps using the analytical hierarchy process (AHP), frequency ratio (FR), and logistic regression (LR) models. The research was carried out in the Crimean pine forests of the Taşköprü Forest Enterprise in Kastamonu City in the Western Black Sea region of Türkiye. The eight main criteria used to produce the maps were stand structure, site index, crown closure, stand age, slope, elevation, maximum temperature, and solar radiation. The map of the infested stands was used for model validation. Crown closure was determined to be one of the most important factors in all three models. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to determine the accuracy of the maps. The validation results showed an AUC of 0.747 for the FR model, 0.716 for the AHP model, and 0.638 for the LR model, revealing that the FR model was more accurate than the others in producing an I. sexdentatus susceptibility map; the AHP model was also reasonably accurate. This study could help decision makers produce bark beetle susceptibility maps easily and rapidly so that they can take the necessary precautions to slow or prevent infestations.
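A hedged sketch of the frequency-ratio computation for one criterion, assuming rasterized inputs; the class boundaries and data layout are illustrative, not taken from the study.

```python
import numpy as np

def frequency_ratio(factor_classes, infested_mask):
    """factor_classes: integer class label per pixel for one criterion
    (e.g. a crown-closure class); infested_mask: boolean per pixel.
    FR(class) = (% of infested pixels in class) / (% of all pixels in class)."""
    fr = {}
    n_total = factor_classes.size
    n_inf = infested_mask.sum()
    for c in np.unique(factor_classes):
        in_class = factor_classes == c
        pct_inf = (infested_mask & in_class).sum() / n_inf
        pct_area = in_class.sum() / n_total
        fr[c] = pct_inf / pct_area if pct_area > 0 else 0.0
    return fr

# A per-pixel susceptibility index is then the sum of FR values over
# all criteria; ROC/AUC against the infested-stand map validates it.
```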

18.
A method for the quantitative estimation of desoxycholic acid (200-700 micrograms/ml) in the presence of cholic and chenodesoxycholic acids is described. The method is based on the transformation of desoxycholic acid into fluorescent products (lambda ex = 350 nm, lambda em = 458 nm) by the action of concentrated sulphuric acid, the reaction being enhanced by the presence of Ce(IV). The sample is mixed with a solution of Ce(IV) and concentrated sulphuric acid under standard conditions, and fluorescence is measured against a reference containing the same components except Ce(IV). Cholic and chenodesoxycholic acids do not react under these conditions. Synthetic samples of bile acids were tested to validate the method.

19.
A non-invasive, in vivo method has been developed to predict skin flap shrinkage (retraction) following harvest. It involves the use of a novel custom-designed extensometer to measure the force-displacement behaviour of skin, with subsequent data analysis to estimate the shrinkage. In validation experiments performed on pigs, this method produced results with an average absolute error of 6.0% between the actual and predicted shrinkages. This may be close to what an experienced surgeon would estimate subjectively, indicating the potential usefulness of the method for predicting flap shrinkage at patients' donor sites.

20.
Spatio-temporal patterns of melanocytic proliferations observed in vivo are important for diagnosis, but the mechanisms that produce them are poorly understood. Here we present an agent-based model for simulating the emergence of the main biologic patterns found in melanocytic proliferations. Our model portrays the extracellular matrix of the dermo-epidermal junction as a two-dimensional manifold, and we simulate cellular migration in terms of geometric translations driven by adhesive, repulsive and random forces. Abstracted cellular functions and melanocyte-matrix interactions are modeled as stochastic events. For identification and validation we use visual renderings of simulated cell populations in a horizontal perspective, which reproduce growth patterns observed in vivo by sequential dermatoscopy, and corresponding vertical views, which reproduce the arrangement of melanocytes observed in histopathologic sections. Our results show that a balanced interplay of proliferation and migration produces the typical reticular pattern of nevi, whereas the globular pattern involves additional cellular mechanisms. We further demonstrate that slight variations in the three basic cellular properties (proliferation, migration, and adhesion) are sufficient to produce a large variety of morphological appearances of nevi. We anticipate our model to be a starting point for the reproduction of more complex scenarios that will help to establish functional connections between abstracted microscopic behavior and macroscopic patterns in all types of melanocytic proliferations, including melanoma.
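A toy sketch of the force balance that could drive such migration, assuming overdamped dynamics and illustrative parameters (the paper's actual interaction kernels and constants are not given in the abstract):

```python
import numpy as np

rng = np.random.default_rng(42)

def step(pos, dt=0.1, r_rep=1.0, r_adh=2.5, k_rep=2.0, k_adh=0.3, noise=0.05):
    """pos: (N, 2) cell positions on a flat patch of the junction.
    Cells repel below r_rep, adhere (attract) out to r_adh, and receive
    a random kick; positions update by overdamped (force = velocity) motion."""
    disp = pos[None, :, :] - pos[:, None, :]   # disp[i, j] points from i to j
    dist = np.linalg.norm(disp, axis=-1)
    np.fill_diagonal(dist, np.inf)             # ignore self-interaction
    unit = disp / dist[..., None]
    force = np.where((dist < r_rep)[..., None], -k_rep * unit, 0.0)
    force += np.where(((dist >= r_rep) & (dist < r_adh))[..., None],
                      k_adh * unit, 0.0)
    total = force.sum(axis=1) + noise * rng.normal(size=pos.shape)
    return pos + dt * total
```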
