20 similar articles were retrieved (search time: 15 ms).
1.
Klionsky DJ 《Autophagy》2012,8(3):291
In the August 2009 issue of Autophagy, I indicated that we were launching a new category of article, Protocols. At that time, I noted that we would ultimately be placing these articles on a new site online. Well, that time has finally arrived (see www.landesbioscience.com/journals/autophagy/protocols/ for links to these papers). Therefore, it seems appropriate for me to briefly distinguish among three types of community-oriented papers: Protocol, Toolbox and Resource.
2.
《Autophagy》2013,9(3)
3.
Paulo S. L. Souza Regina H. C. Santana Marcos J. Santana Ed Zaluska Bruno S. Faical Julio C. Estrella 《PloS one》2013,8(7)
The lack of precision when predicting service performance through load indices may lead to wrong decisions regarding the use of web services, compromising service performance and raising platform costs unnecessarily. This paper presents experimental studies to qualify the behaviour of load indices in the web service context. The experiments consider three services that generate controlled and significant server demands, four levels of workload for each service and six distinct execution scenarios. The evaluation considers three relevant perspectives: the capability to represent recent workloads, the capability to predict near-future performance and, finally, stability. Eight different load indices were analysed, including the JMX Average Time index (proposed in this paper), which was specifically designed to address the limitations of the other indices. A systematic approach is applied to evaluate the different load indices, considering a multiple linear regression model based on the stepwise-AIC method. The results show that the load indices studied represent the workload to some extent; however, in contrast to expectations, most of them do not exhibit a coherent correlation with service performance, and this can result in stability problems. The JMX Average Time index is an exception, showing a stable behaviour that is tightly coupled to the service runtime for all executions. Load indices are used to predict the service runtime and therefore their inappropriate use can lead to decisions that will impact negatively on both service performance and execution cost.
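The authors' implementation is not shown in the abstract; as a rough illustration of the evaluation it describes (regressing service runtime on candidate load indices and selecting indices by AIC), here is a minimal Python sketch using a forward-only simplification of stepwise selection. The CSV file and all column names, including jmx_avg_time, are hypothetical placeholders.

```python
# Sketch: forward stepwise selection (by AIC) of load indices for predicting
# service runtime. The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("load_indices.csv")               # hypothetical measurements
candidates = ["cpu_util", "load_avg_1m", "mem_used",
              "net_throughput", "jmx_avg_time"]    # candidate load indices
target = "service_runtime"

selected, remaining, best_aic = [], list(candidates), float("inf")
while remaining:
    # Try adding each remaining index; keep the one that lowers AIC the most.
    trials = []
    for c in remaining:
        X = sm.add_constant(df[selected + [c]])
        trials.append((sm.OLS(df[target], X).fit().aic, c))
    aic, best = min(trials)
    if aic >= best_aic:
        break                                      # no further AIC improvement
    best_aic = aic
    remaining.remove(best)
    selected.append(best)

print("Selected indices:", selected, "| AIC:", round(best_aic, 1))
```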
4.
5.
A General Method for Modeling Macromolecular Shape in Solution: A Graphical (II-G) Intersection Procedure for Triaxial Ellipsoids
Stephen E. Harding 《Biophysical journal》1987,51(4):673-680
A general method for modeling macromolecular shape in solution is described involving measurements of viscosity, radius of gyration, and the second thermodynamic virial coefficient. The method, which should be relatively straightforward to apply, does not suffer from uniqueness problems, involves shape functions that are independent of hydration, and models the gross conformation of the macromolecule in solution as a general triaxial ellipsoid. The method is illustrated by application to myosin, and the relevance and applicability of ellipsoid modeling to biological structures is discussed.
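The hydration-independent shape functions themselves are not given in the abstract, so the sketch below only illustrates the graphical intersection step: each function's contour at its experimentally determined value is drawn in the plane of the two axial ratios (a/b, b/c), and the crossing point estimates the triaxial shape. The functions f1 and f2 and the "measured" values are placeholders, not the paper's actual formulas.

```python
# Sketch of the graphical intersection step for triaxial ellipsoid modelling.
# f1/f2 are placeholder shape functions, NOT the paper's hydration-independent
# functions; the "experimental" values are likewise illustrative.
import numpy as np
import matplotlib.pyplot as plt

def f1(p, q):                  # placeholder function of axial ratios a/b, b/c
    return p * np.sqrt(q)

def f2(p, q):                  # second placeholder shape function
    return p / (1.0 + q)

p, q = np.meshgrid(np.linspace(1.0, 5.0, 300),    # a/b >= 1
                   np.linspace(1.0, 5.0, 300))    # b/c >= 1
f1_exp, f2_exp = 3.0, 1.2                         # illustrative "measured" values

plt.contour(p, q, f1(p, q), levels=[f1_exp], colors="C0")
plt.contour(p, q, f2(p, q), levels=[f2_exp], colors="C1")
plt.xlabel("a/b"); plt.ylabel("b/c")
plt.title("Intersection of shape-function contours (illustrative)")
plt.show()
```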
6.
Background
Modern analysis of high-dimensional SNP data requires a number of biometrical and statistical methods such as pre-processing, analysis of population structure, association analysis and genotype imputation. Software used for these purposes often relies on specific and incompatible input and output data formats. Therefore, extensive data management including multiple format conversions is necessary during analyses.
Methods
In order to support fast and efficient management and bio-statistical quality control of high-dimensional SNP data, we developed the publicly available software fcGENE using the C++ object-oriented programming language. This software simplifies and automates the use of different existing analysis packages, especially during the workflow of genotype imputations and corresponding analyses.
Results
fcGENE transforms SNP data and imputation results into different formats required for a large variety of analysis packages such as PLINK, SNPTEST, HAPLOVIEW, EIGENSOFT, GenABEL and tools used for genotype imputation such as MaCH, IMPUTE, BEAGLE and others. Data management tasks like merging, splitting, and extracting SNP and pedigree information can be performed. fcGENE also supports a number of bio-statistical quality control processes and quality-based filtering at the SNP and sample level. The tool also generates templates of the commands required to run specific software packages, especially those required for genotype imputation. We demonstrate the functionality of fcGENE by example workflows of SNP data analyses and provide a comprehensive manual of commands, options and applications.
Conclusions
We have developed a user-friendly open-source software, fcGENE, which comprehensively supports SNP data management, quality control and analysis workflows. Download statistics and corresponding feedback indicate that the software is highly recognised and extensively applied by the scientific community.
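fcGENE's own command syntax is not reproduced here; as a generic illustration of the kind of format conversion it automates, the following Python sketch reads a PLINK .ped/.map pair and writes a simple sample-by-SNP genotype table. The file names are hypothetical, and this is not fcGENE code.

```python
# Sketch: a generic PLINK .ped/.map to CSV genotype-table conversion, to
# illustrate the kind of reformatting fcGENE automates. Not fcGENE itself;
# file names are hypothetical.
import csv

with open("study.map") as f:                       # columns: chr, snp_id, cM, bp
    snp_ids = [line.split()[1] for line in f if line.strip()]

rows = []
with open("study.ped") as f:
    for line in f:
        fields = line.split()
        fid, iid = fields[0], fields[1]            # family and individual IDs
        alleles = fields[6:]                       # two allele calls per SNP
        genotypes = ["".join(alleles[2 * i:2 * i + 2])
                     for i in range(len(snp_ids))]
        rows.append([fid, iid] + genotypes)

with open("genotypes.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["FID", "IID"] + snp_ids)
    writer.writerows(rows)
```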
7.
Nikolas Kessler Frederik Walter Marcus Persicke Stefan P. Albaum Jörn Kalinowski Alexander Goesmann Karsten Niehaus Tim W. Nattkemper 《PloS one》2014,9(11)
Adduct formation, fragmentation events and matrix effects impose special challenges to the identification and quantitation of metabolites in LC-ESI-MS datasets. An important step in compound identification is the deconvolution of mass signals. During this processing step, peaks representing adducts, fragments, and isotopologues of the same analyte are allocated to a distinct group, in order to separate peaks from coeluting compounds. From these peak groups, neutral masses and pseudo spectra are derived and used for metabolite identification via mass decomposition and database matching. Quantitation of metabolites is hampered by matrix effects and nonlinear responses in LC-ESI-MS measurements. A common approach to correct for these effects is the addition of a U-13C-labeled internal standard and the calculation of mass isotopomer ratios for each metabolite. Here we present a new web platform for the analysis of LC-ESI-MS experiments. ALLocator covers the workflow from raw data processing to metabolite identification and mass isotopomer ratio analysis. The integrated processing pipeline for spectra deconvolution, “ALLocatorSD”, generates pseudo spectra and automatically identifies peaks emerging from the U-13C-labeled internal standard. Information from the latter improves mass decomposition and annotation of neutral losses. ALLocator provides an interactive and dynamic interface to explore and enhance the results in depth. Pseudo spectra of identified metabolites can be stored in user- and method-specific reference lists that can be applied to subsequent datasets. The potential of the software is exemplified in an experiment in which abundance fold-changes of metabolites of the L-arginine biosynthesis pathway in C. glutamicum type strain ATCC 13032 and the L-arginine-producing strain ATCC 21831 are compared. Furthermore, the capability for detection and annotation of uncommonly large neutral losses is shown by the identification of (γ-)glutamyl dipeptides in the same strains. ALLocator is available online at: https://allocator.cebitec.uni-bielefeld.de. A login is required but is freely available.
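The deconvolution logic of ALLocatorSD is not described in detail in the abstract; the following Python sketch only illustrates the general idea of grouping co-eluting peaks and annotating characteristic mass differences (for example the 13C isotopologue spacing or an Na-for-H adduct exchange). The peak list and tolerance values are hypothetical.

```python
# Sketch: group co-eluting LC-ESI-MS peaks and annotate mass differences
# typical of adducts/isotopologues. Not the ALLocatorSD algorithm; the
# peak list and tolerances are hypothetical.
from itertools import combinations

PEAKS = [  # (m/z, retention time [s], intensity)
    (175.119, 312.4, 9.1e5),   # hypothetical [M+H]+ of some analyte
    (176.122, 312.5, 1.0e5),   # +1.0034 -> 13C isotopologue
    (197.101, 312.6, 2.3e5),   # +21.982 -> Na-for-H adduct exchange
]
DELTAS = {1.00336: "13C isotopologue", 21.98194: "Na/H adduct exchange"}
RT_TOL, MZ_TOL = 2.0, 0.005    # co-elution and mass-difference tolerances

for (mz1, rt1, _), (mz2, rt2, _) in combinations(sorted(PEAKS), 2):
    if abs(rt1 - rt2) > RT_TOL:
        continue               # peaks do not co-elute
    diff = abs(mz2 - mz1)
    for delta, label in DELTAS.items():
        if abs(diff - delta) < MZ_TOL:
            print(f"{mz1:.3f} <-> {mz2:.3f}: {label} (Δm/z = {diff:.4f})")
```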
8.
《Journal of Bionic Engineering》2019,16(6):967-993
9.
The ability to evaluate the validity of data is essential to any investigation, and manual “eyes on” assessments of data quality have dominated in the past. Yet, as the size of collected data continues to increase, so does the effort required to assess their quality. This challenge is of particular concern for networks that automate their data collection, and has resulted in the automation of many quality assurance and quality control analyses. Unfortunately, the interpretation of the resulting data quality flags can become quite challenging with large data sets. We have developed a framework to summarize data quality information and facilitate interpretation by the user. Our framework consists of first compiling data quality information and then presenting it through two separate mechanisms: a quality report and a quality summary. The quality report presents the results of specific quality analyses as they relate to individual observations, while the quality summary takes a spatial or temporal aggregate of each quality analysis and provides a summary of the results. Included in the quality summary is a final quality flag, which further condenses data quality information to assess whether a data product is valid or not. This framework has the added flexibility to allow “eyes on” information on data quality to be incorporated for many data types. Furthermore, this framework can aid problem tracking and resolution, should sensor or system malfunctions arise.
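A minimal Python sketch of the two mechanisms described, a per-observation quality report and an aggregated quality summary that condenses flags into a final pass/fail flag. The flag names, the daily aggregation and the 10% threshold are assumptions for illustration, not the framework's actual schema.

```python
# Sketch: per-observation quality report plus a daily quality summary with a
# final flag. Column names and the 10% failure threshold are hypothetical.
import pandas as pd

obs = pd.DataFrame({
    "time":  pd.date_range("2024-06-01", periods=6, freq="12h"),
    "value": [3.1, 3.2, 95.0, 3.0, 2.9, -5.0],
    "range_flag": [0, 0, 1, 0, 0, 1],   # 1 = failed range test
    "step_flag":  [0, 0, 1, 0, 0, 0],   # 1 = failed step (spike) test
})

# Quality report: each observation with the number of tests it failed.
obs["failed_tests"] = obs[["range_flag", "step_flag"]].sum(axis=1)
print(obs)

# Quality summary: daily fraction of failed tests, condensed into a final flag.
daily = obs.set_index("time")[["range_flag", "step_flag"]].resample("D").mean()
daily["final_flag"] = (daily.max(axis=1) > 0.10).map({True: "fail", False: "pass"})
print(daily)
```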
10.
R. E. Brown 《CMAJ》1967,96(20):1349-1354
Although echoencephalography is a simple, convenient and atraumatic diagnostic technique, its accuracy, reliability and reproducibility can be greatly influenced by the way in which the examination is performed and subsequently analyzed. Echoencephalography in 850 patients has led to the development of a standardized technique which takes into account the positioning of the patient, the application of the transducer, the identification of the midline echo complex, and the elimination of observer bias. Once performed, the test must meet rigid standards of acceptability. If acceptable, the test is analyzed according to an established format.
11.
12.
13.
Bacterial small RNAs (sRNAs) have gained considerable attention due to their multivalent roles in the survival and pathogenesis of bacteria, and they are mostly identified through bio-computational methods. A manually curated web resource, sRNAbase, has been constructed to give comprehensive and exhaustive information on non-coding small RNAs (excluding tRNAs and rRNAs) in the Enterobacteriaceae family. The sRNA entries curated in sRNAbase comprise experimentally verified small RNAs available in the literature and their partial/non-homologs reported within the related genomes from our earlier studies. sRNAbase aims to facilitate the scientific community by providing information on the physical genomic location of each non-coding small RNA, its alias names, sequence, strand orientation, the gene identification numbers of the conserved genes that flank the particular sRNA together with its possible functional role, and links to the PubMed literature. Currently, sRNAbase holds 1,986 entries belonging to 80 sRNA families spread over 45 Enterobacteriaceae genomes. sRNAbase is accessible on the web at http://bicmku.in:8081/srnabase/.
14.
Understanding the structure–function relationship of cells and organelles in their natural context requires multidimensional imaging. As techniques for multimodal 3-D imaging have become more accessible, effective processing, visualization, and analysis of large datasets are posing a bottleneck for the workflow. Here, we present a new software package for high-performance segmentation and image processing of multidimensional datasets that improves and facilitates the full utilization and quantitative analysis of acquired data, which is freely available from a dedicated website. The open-source environment enables modification and insertion of new plug-ins to customize the program for specific needs. We provide practical examples of program features used for processing, segmentation and analysis of light and electron microscopy datasets, and detailed tutorials to enable users to rapidly and thoroughly learn how to use the program.
15.
Haiyan Wu Yu Han Xi Yang George G. Chase Qiong Tang Chen-Jung Lee Bin Cao Jiang Zhe Gang Cheng 《PloS one》2015,10(2)
The rapid, sensitive and low-cost detection of macromolecular biomarkers is critical in clinical diagnostics, environmental monitoring, research, etc. Conventional assay methods usually require bulky, expensive and dedicated instruments and relatively long assay times. For hospitals and laboratories that lack immediate access to analytical instruments, fast and low-cost assay methods for the detection of macromolecular biomarkers are urgently needed. In this work, we developed a versatile microparticle (MP)-based immunoaggregation method for the detection and quantification of macromolecular biomarkers. Antibodies (Abs) were first conjugated to MPs through streptavidin-biotin interaction; the addition of macromolecular biomarkers caused the aggregation of Ab-MPs, which was subsequently detected by an optical microscope or optical particle sizer. The invisible nanometer-scale macromolecular biomarkers caused a detectable change of micrometer-scale particle size distributions. Goat anti-rabbit immunoglobulin and human ferritin were used as model biomarkers to demonstrate the MP-based immunoaggregation assay in PBS and 10% FBS, to mimic a real biomarker assay in a complex medium. It was found that both the number ratio and the volume ratio of Ab-MP aggregates caused by the biomarker to all particles were directly correlated to the biomarker concentration. In addition, we found that the detection range could be tuned by adjusting the Ab-MP concentration. We envision that this novel MP-based immunoaggregation assay can be combined with multiple detection methods to detect and quantify macromolecular biomarkers at the nanogram-per-milliliter level.
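The number and volume ratios reported as the assay readout can be computed directly from a measured particle size distribution; the sketch below assumes, purely for illustration, that any particle larger than 1.5 times the monomer diameter counts as an aggregate, and uses made-up diameters.

```python
# Sketch: number and volume ratios of Ab-MP aggregates from a particle size
# distribution. The size data and the aggregate cut-off are hypothetical.
import numpy as np

MONOMER_D = 1.0                     # Ab-MP monomer diameter, in micrometres
CUTOFF = 1.5 * MONOMER_D            # larger particles are called aggregates

diameters = np.array([0.98, 1.02, 1.01, 1.6, 2.1, 0.99, 1.03, 2.6])  # measured

volumes = (np.pi / 6.0) * diameters**3          # sphere volume per particle
is_agg = diameters > CUTOFF

number_ratio = is_agg.sum() / diameters.size
volume_ratio = volumes[is_agg].sum() / volumes.sum()
print(f"number ratio = {number_ratio:.2f}, volume ratio = {volume_ratio:.2f}")
```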
16.
Methods to estimate microbial diversity have developed rapidly in an effort to understand the distribution and diversity of microorganisms in natural environments. For bacterial communities, the 16S rRNA gene is the phylogenetic marker gene of choice, but most studies select only a specific region of the 16S rRNA to estimate bacterial diversity. Whereas biases derived from DNA extraction, primer choice and PCR amplification are well documented, here we address how the choice of variable region can influence a wide range of standard ecological metrics, such as species richness, phylogenetic diversity, β-diversity and rank-abundance distributions. We have used Illumina paired-end sequencing to estimate the bacterial diversity of 20 natural lakes across Switzerland derived from three trimmed variable 16S rRNA regions (V3, V4, V5). Species richness, phylogenetic diversity, community composition, β-diversity, and rank-abundance distributions differed significantly between 16S rRNA regions. Overall, patterns of diversity quantified by the V3 and V5 regions were more similar to one another than those assessed by the V4 region. Similar results were obtained when analyzing the datasets with different sequence similarity thresholds used during sequence clustering and when the same analysis was applied to a reference dataset of sequences from the Greengenes database. In addition, we measured species richness from the same lake samples using ARISA fingerprinting, but did not find a strong relationship between species richness estimated by Illumina and ARISA. We conclude that the selection of the 16S rRNA region significantly influences the estimation of bacterial diversity and species distributions, and that caution is warranted when comparing data from different variable regions as well as when using different sequencing techniques.
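As a generic illustration of the per-region comparison described, the sketch below computes observed richness and Bray-Curtis dissimilarity from small, made-up OTU tables for the V3, V4 and V5 regions; it is not the authors' pipeline, which produced OTU tables from paired-end Illumina data upstream of such metrics.

```python
# Sketch: compare observed richness and Bray-Curtis dissimilarity between
# OTU tables derived from different 16S rRNA regions. Counts are made up.
import numpy as np

def richness(counts):
    """Number of OTUs with non-zero abundance in a sample."""
    return int((np.asarray(counts) > 0).sum())

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two samples (count vectors)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.abs(a - b).sum() / (a + b).sum()

# Hypothetical OTU counts for the same two lake samples, per 16S region.
otu_tables = {
    "V3": {"lake1": [120, 30, 0, 5], "lake2": [80, 10, 15, 0]},
    "V4": {"lake1": [100, 45, 2, 0], "lake2": [60, 25, 20, 1]},
    "V5": {"lake1": [130, 20, 0, 8], "lake2": [90, 5, 12, 0]},
}

for region, samples in otu_tables.items():
    r1, r2 = richness(samples["lake1"]), richness(samples["lake2"])
    bc = bray_curtis(samples["lake1"], samples["lake2"])
    print(f"{region}: richness lake1={r1}, lake2={r2}, Bray-Curtis={bc:.3f}")
```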
17.
In vitro measures of the pharmacodynamics of antibiotics that account for the factors anticipated for bacteria in infected patients are central to the rational design of antibiotic treatment protocols. We consider whether or not continuous culture devices are a way to obtain these measures. Staphylococcus aureus PS80 in high-density continuous cultures was exposed to oxacillin, ciprofloxacin, vancomycin, gentamicin, daptomycin and linezolid. Contrary to results from low-density retentostats as well as to predictions of traditional PK/MIC ratios, daily dosing with up to 100× MIC did not clear these cultures. The densities of S. aureus in these cultures oscillated with constant amplitude and never fell below 10^5 CFU per ml. Save for daptomycin "treated" populations, the densities of bacteria in these cultures remained significantly below that of similar antibiotic-free cultures. Although these antibiotics varied in their pharmacodynamic properties, there were only modest differences in their mean densities. Mathematical models and experiments suggest that the dominant factor preventing clearance was wall-adhering subpopulations reseeding the planktonic population, which can be estimated and corrected for. Continuous cultures provide a way to evaluate the potential efficacy of antibiotic treatment regimes in vitro under conditions that are more clinically realistic and comprehensive than traditional in vitro PK/PD indices.
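The mathematical models themselves are not given in the abstract; the following ODE sketch only illustrates the proposed mechanism, a wall-adhering subpopulation that is sheltered from killing and washout and continuously reseeds the planktonic phase. All rate constants and initial densities are hypothetical.

```python
# Sketch of the proposed mechanism: the antibiotic kills planktonic cells,
# but a wall-adhering subpopulation (less exposed to killing and washout)
# keeps reseeding them. All parameters are hypothetical.
import numpy as np
from scipy.integrate import odeint

r, K = 1.0, 1e9        # planktonic growth rate (1/h), carrying capacity (CFU/ml)
rw, Kw = 0.3, 1e7      # wall subpopulation growth rate and capacity
kill = 2.0             # antibiotic kill rate of planktonic cells (1/h)
washout = 0.2          # chemostat dilution rate (1/h)
attach, detach = 1e-3, 0.05   # exchange between wall and planktonic phases (1/h)

def model(y, t):
    P, W = y                                   # planktonic, wall-adhering
    dP = r * P * (1 - P / K) - (kill + washout + attach) * P + detach * W
    dW = rw * W * (1 - W / Kw) + attach * P - detach * W
    return [dP, dW]

t = np.linspace(0, 48, 481)                    # 48 h of continuous treatment
P, W = odeint(model, [1e8, 1e5], t).T
print(f"planktonic density after 48 h: {P[-1]:.1e} CFU/ml (not cleared)")
```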
18.
Yoonsuck Choe David Mayerich Jaerock Kwon Daniel E. Miller Chul Sung Ji Ryang Chung Todd Huffman John Keyser Louise C. Abbott 《Journal of visualized experiments : JoVE》2011,(58)
Major advances in high-throughput, high-resolution, 3D microscopy techniques have enabled the acquisition of large volumes of neuroanatomical data at submicrometer resolution. One of the first such instruments producing whole-brain-scale data is the Knife-Edge Scanning Microscope (KESM) [7,5,9], developed and hosted in the authors' lab. KESM has been used to section and image whole mouse brains at submicrometer resolution, revealing the intricate details of the neuronal networks (Golgi) [1,4,8], vascular networks (India ink) [1,4], and cell body distribution (Nissl) [3]. The use of KESM is restricted neither to the mouse nor to the brain. We have successfully imaged the octopus brain [6], mouse lung, and rat brain. We are currently working on whole zebrafish embryos. Data like these can greatly contribute to connectomics research [10]; to microcirculation and hemodynamic research; and to stereology research by providing an exact ground truth. In this article, we will describe the pipeline, including specimen preparation (fixing, staining, and embedding), KESM configuration and setup, sectioning and imaging with the KESM, image processing, data preparation, and data visualization and analysis. The emphasis will be on specimen preparation and on visualization/analysis of the obtained KESM data. We expect the detailed protocol presented in this article to help broaden access to KESM and increase its utilization.
19.
《Journal of molecular biology》2023,435(2):167895
Micrograph comparison remains useful in bioscience. This technology provides researchers with a quick snapshot of experimental conditions, but sometimes a two-condition comparison relies on researchers’ eyes to draw conclusions. Our Bioimage Analysis, Statistic, and Comparison (BASIN) software provides an objective and reproducible comparison, leveraging inferential statistics to bridge image data with other modalities. Users have access to machine learning-based object segmentation. BASIN provides several data points such as images’ object counts, intensities, and areas. Hypothesis testing may also be performed. To improve BASIN’s accessibility, we implemented it using R Shiny and provided both an online and an offline version. We used BASIN to process 498 image pairs involving five bioscience topics. Our framework supported either direct claims or extrapolations 57% of the time. Analysis results were manually curated to determine BASIN’s accuracy, which was shown to be 78%. Additionally, each BASIN version’s initial release shows an average 82% FAIR compliance score.
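BASIN itself is an R Shiny application; purely as a language-consistent illustration of the same workflow (segment objects in two conditions, extract per-object measurements, and run a hypothesis test), here is a Python sketch. The image file names, the Otsu threshold choice and the Mann-Whitney test are assumptions, not BASIN's internals.

```python
# Sketch: a two-condition micrograph comparison in the spirit of BASIN
# (which itself is an R Shiny application). Image files are hypothetical.
import numpy as np
from skimage import io, filters, measure
from scipy import stats

def object_areas(path):
    """Segment bright objects by Otsu thresholding and return their areas."""
    img = io.imread(path, as_gray=True)
    mask = img > filters.threshold_otsu(img)
    labels = measure.label(mask)
    return np.array([r.area for r in measure.regionprops(labels)])

areas_a = object_areas("condition_A.tif")      # hypothetical file names
areas_b = object_areas("condition_B.tif")

print(f"object counts: A={areas_a.size}, B={areas_b.size}")
u, p = stats.mannwhitneyu(areas_a, areas_b)    # non-parametric area comparison
print(f"Mann-Whitney U={u:.1f}, p={p:.3g}")
```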
20.
Wetlands play important ecological, economic, and cultural roles in societies around the world. However, wetland degradation has become a serious ecological issue, raising global sustainability concerns. An accurate wetland map is essential for wetland management. Here we used a fuzzy method to create a hybrid wetland map for China through the combination of five existing wetland datasets, including four spatially explicit wetland distribution datasets and one wetland census. Our results show the total wetland area is 384,864 km², 4.08% of China’s national surface area. The hybrid wetland map also shows the spatial distribution of wetlands at a spatial resolution of 1 km. The reliability of the map is demonstrated by comparing it with spatially explicit datasets on lakes and reservoirs. The hybrid wetland map is, so far, the first wetland map that is consistent with the statistical data at the national and provincial levels in China. It provides a benchmark map for research on wetland protection and management. The method presented here is applicable not only to wetland mapping but also to other thematic mapping in China and beyond.
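The abstract does not specify the fuzzy combination rule; the sketch below shows one simple possibility under stated assumptions: a per-pixel membership computed as a weighted agreement across binary wetland layers, defuzzified with a 0.5 cut. The layers, weights and threshold are hypothetical, not the paper's actual rule.

```python
# Sketch: fuzzy combination of several binary wetland layers into one hybrid
# map. The layers, weights, and 0.5 threshold are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
# Four spatially explicit wetland layers on a common 1-km grid (1 = wetland).
layers = rng.integers(0, 2, size=(4, 100, 100)).astype(float)
weights = np.array([0.4, 0.3, 0.2, 0.1])        # hypothetical reliabilities

# Fuzzy membership: weighted agreement among the source layers, in [0, 1].
membership = np.tensordot(weights, layers, axes=1)

hybrid = membership >= 0.5                       # defuzzify with a 0.5 cut
area_km2 = hybrid.sum() * 1.0                    # each cell is 1 km x 1 km
print(f"hybrid wetland area in this toy grid: {area_km2:.0f} km²")
```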