Similar Documents
20 similar documents retrieved (search time: 31 ms)
1.
Regional monitoring strategies frequently employ a nested sampling design in which a finite set of study areas is selected from throughout a region and intensive sampling occurs within a subset of sites within the individual study areas. This sampling protocol naturally lends itself to a hierarchical analysis to account for dependence among subsamples. Implementing such an analysis in a classic likelihood framework is computationally challenging when accounting for detection errors in species occurrence models. Bayesian methods offer an alternative approach for fitting models that readily allows spatial structure to be incorporated. We demonstrate a general approach for estimating occupancy when data come from a nested sampling design. We analyzed data from a regional monitoring program of wood frogs (Lithobates sylvaticus) and spotted salamanders (Ambystoma maculatum) in vernal pools using static and dynamic occupancy models. We analyzed observations from 2004 to 2013 that were collected within 14 protected areas located throughout the northeastern United States. We use the data set to estimate trends in occupancy at both the regional and individual protected area levels. We show that occupancy at the regional level was relatively stable for both species. However, substantial variation occurred among study areas, with some populations declining and some increasing for both species. In addition, when the hierarchical study design is not accounted for, one would conclude stronger support for a latitudinal gradient in trends than when using our approach that accounts for the nested design. In contrast to the model that does not account for nesting, the nested model did not include an effect of latitude in the 95% credible interval. These results shed light on the range-level population status of these pond-breeding amphibians, and our approach provides a framework that can be used to examine drivers of local and regional occurrence dynamics.
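A minimal sketch of the static, single-season occupancy likelihood that such hierarchical models build on, with imperfect detection handled explicitly; the parameter values and detection history below are illustrative only, not estimates from the monitoring program.

```python
# Minimal sketch of the static occupancy likelihood with imperfect detection
# (illustrative values only, not results from the wood frog / salamander data).
import numpy as np

def detection_history_likelihood(history, psi, p):
    """P(history) for one pool.

    history : array of 0/1 detections over repeat visits
    psi     : occupancy probability
    p       : per-visit detection probability given occupancy
    """
    history = np.asarray(history)
    k = history.size
    d = history.sum()
    occupied_term = psi * p**d * (1 - p)**(k - d)
    # An all-zero history can also arise because the pool is unoccupied.
    unoccupied_term = (1 - psi) * (d == 0)
    return occupied_term + unoccupied_term

# Example: three visits, one detection, at a site in a hypothetical study area
print(detection_history_likelihood([0, 1, 0], psi=0.6, p=0.4))
```

In the hierarchical setting described above, psi would itself vary by study area (e.g., drawn from a regional distribution), which is what allows regional and area-level trends to be estimated jointly.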

2.
For many parallel applications of Next-Generation Sequencing (NGS) technologies, short barcodes able to accurately multiplex a large number of samples are in demand. To address these competing requirements, the use of error-correcting codes is advised. Current barcoding systems are mostly built from short random error-correcting codes, a feature that strongly limits their multiplexing accuracy and experimental scalability. To overcome these problems on sequencing systems impaired by mismatch errors, the alternative use of binary BCH and pseudo-quaternary Hamming codes has been proposed. However, these codes either fail to provide fine-grained control over barcode length (BCH) or have intrinsically poor error-correcting abilities (Hamming). Here, the design of barcodes from shortened binary BCH codes and quaternary Low-Density Parity-Check (LDPC) codes is introduced. Simulation results show that although accurate barcoding systems of high multiplexing capacity can be obtained with any of these codes, using quaternary LDPC codes may be particularly advantageous due to the lower rates of read losses and undetected sample misidentification errors. Even at mismatch error rates of 10⁻² per base, 24-nt LDPC barcodes can be used to multiplex roughly 2000 samples with a sample misidentification error rate on the order of 10⁻⁹, at the expense of a read-loss rate on the order of only 10⁻⁶.
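As a simple illustration of why error-correcting barcodes reduce sample misidentification, the sketch below assigns a sequenced tag to its nearest codeword by Hamming distance and rejects ambiguous or distant reads; the codebook, tag length, and distance threshold are hypothetical and unrelated to the BCH/LDPC constructions studied in the paper.

```python
# Illustrative sketch (not the authors' decoder): assign a read's tag to the
# closest barcode by Hamming distance, rejecting ambiguous or distant reads.
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def assign_barcode(read_tag, codebook, max_dist=1):
    """Return the index of the unique closest barcode, or None (read loss)."""
    dists = [hamming(read_tag, bc) for bc in codebook]
    best = min(dists)
    if best > max_dist or dists.count(best) > 1:
        return None  # discard rather than risk sample misidentification
    return dists.index(best)

codebook = ["ACGTACGT", "TTGCAAGC", "GGATCCTA"]   # hypothetical 8-nt barcodes
print(assign_barcode("ACGAACGT", codebook))        # -> 0 (one mismatch tolerated)
print(assign_barcode("TTTTTTTT", codebook))        # -> None (too far from any code)
```

The trade-off the abstract quantifies is visible even here: a tighter distance threshold discards more reads (read loss) but makes silent misassignment to the wrong sample far less likely.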

3.
Pelagic ecosystems support a significant and vital component of the ocean's productivity and biodiversity. They are also heavily exploited and, as a result, are the focus of numerous spatial planning initiatives. Over the past decade, there has been increasing enthusiasm for protected areas as a tool for pelagic conservation; however, few have been implemented. Here we demonstrate an approach to planning protected areas that addresses the physical and biological dynamics typical of the pelagic realm. Specifically, we provide an example of an approach to planning protected areas that integrates pelagic and benthic conservation in the southern Benguela and Agulhas Bank ecosystems off South Africa. Our aim was to represent species of importance to fisheries and species of conservation concern within protected areas. In addition to representation, we ensured that protected areas were designed to consider pelagic dynamics, characterized from time-series data on key oceanographic processes, together with data on the abundance of small pelagic fishes. We found that, to have the highest likelihood of reaching conservation targets, protected area selection should be based on time-specific data rather than data averaged across time. More generally, we argue that innovative methods are needed to conserve ephemeral and dynamic pelagic biodiversity.

4.
Prior to performing linkage analysis, elimination of all Mendelian inconsistencies in the pedigree data is essential. Often, identification of erroneous genotypes by visual inspection can be very difficult and time consuming. In fact, sometimes the errors are not recognized until the stage of running linkage-analysis software. The effort then required to find the erroneous genotypes and to cross-reference pedigree and marker data that may have been recoded and renumbered can be not only tedious but also quite daunting in the case of very large pedigrees. We have implemented four error-checking algorithms in a new computer program, PedCheck, which will assist researchers in identifying all Mendelian inconsistencies in pedigree data and will provide them with useful and detailed diagnostic information to help resolve the errors. Our program, which uses many of the algorithms implemented in VITESSE, handles large data sets quickly and efficiently, accepts a variety of input formats, and offers various error-checking algorithms that match the subtlety of the pedigree error. These algorithms range from simple parent-offspring-compatibility checks to a single-locus likelihood-based statistic that identifies and ranks the individuals most likely to be in error. We use various real data sets to illustrate the power and effectiveness of our program.
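A toy version of the simplest class of check described here (parent-offspring compatibility at a single autosomal locus), assuming codominant allele calls; this is only an illustration of the idea, not PedCheck's algorithms.

```python
# Toy Mendelian-consistency check at one autosomal locus: the child's genotype
# must be formed from one allele of each genotyped parent.
from itertools import product

def mendelian_consistent(child, mother, father):
    """Genotypes are unordered allele pairs, e.g. (1, 3); None means untyped."""
    if mother is None or father is None:
        return True  # cannot falsify the trio with missing parental data
    return any(sorted(child) == sorted((m, f))
               for m, f in product(mother, father))

print(mendelian_consistent(child=(1, 3), mother=(1, 2), father=(3, 4)))  # True
print(mendelian_consistent(child=(2, 2), mother=(1, 3), father=(3, 4)))  # False -> flag for review
```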

5.
Barbieri introduced and developed the concept of organic codes. The most basic of them is the genetic code, a set of correspondence rules between otherwise unrelated sequences: strings of nucleotides on the one hand, polypeptide chains on the other. Barbieri noticed that it implies ‘coding by convention’ as arbitrary as the semantic relations a language establishes between words and outer objects. Moreover, the major transitions in the evolution of life originated in new organic codes similarly involving conventional rules. Independently, dealing with heredity as communication over time and relying on information theory, we asserted that the conservation of genomes over the ages demands that error-correcting codes make them resilient to casual errors. Moreover, the better conservation of very old parts of the genome demands that they result from combining successively established nested codes, such that the older a piece of information, the more numerous the component codes that protect it. Barbieri’s concept of organic codes and that of genomic error-correcting codes may seem unrelated. We show, however, that organic codes actually entail error-correcting properties. Error correction, in general, results from constraints being imposed on a set of sequences. Mathematical equalities are conveniently used in communication engineering for expressing constraints, but error correction only requires that constraints exist. Biological sequences are similarly endowed with error-correcting ability by physical-chemical or linguistic constraints, thus defining ‘soft codes’. These constraints are moreover presumably efficient for correcting errors. Insofar as biological sequences are subjected to constraints, organic codes necessarily involve soft codes, and their successive onset results in the nested structure we hypothesized. Organic codes are generated and maintained by means of molecular ‘semantic feedback loops’. Each of these loops involves genes that code for proteins, the enzymatic action of which controls a function needed for protein assembly. Taken together, they thus control the assembly of their own structure as instructed by the genome and, once closed, these loops ensure their own conservation. However, the semantic feedback loops do not prevent the genome from lengthening. Lengthening increases both the redundancy of the genome (as an error-correcting code) and the information quantity it bears, thus improving the genome’s reliability and the specificity of the enzymes, which enables further evolution.

6.
Combining several screening tests: optimality of the risk score
McIntosh MW, Pepe MS. Biometrics, 2002, 58(3):657-664
The development of biomarkers for cancer screening is an active area of research. While several biomarkers exist, none is sufficiently sensitive and specific on its own for population screening. It is likely that successful screening programs will require combinations of multiple markers. We consider how to combine multiple disease markers for optimal performance of a screening program. We show that the risk score, defined as the probability of disease given data on multiple markers, is the optimal function in the sense that the receiver operating characteristic (ROC) curve is maximized at every point. Arguments draw on the Neyman-Pearson lemma. This contrasts with the corresponding optimality result of classic decision theory, which is set in a Bayesian framework and is based on minimizing an expected loss function associated with decision errors. Ours is an optimality result defined from a strictly frequentist point of view and does not rely on the notion of associating costs with misclassifications. The implication for data analysis is that binary regression methods can be used to yield appropriate relative weightings of different biomarkers, at least in large samples. We propose some modifications to standard binary regression methods for application to the disease screening problem. A flexible biologically motivated simulation model for cancer biomarkers is presented and we evaluate our methods by application to it. An application to real data concerning two ovarian cancer biomarkers is also presented. Our results are equally relevant to the more general medical diagnostic testing problem, where results of multiple tests or predictors are combined to yield a composite diagnostic test. Moreover, our methods justify the development of clinical prediction scores based on binary regression.
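A hedged sketch of the practical implication stated in the abstract: fit a binary (logistic) regression to several markers, use the fitted disease probability as the risk score, and compare ROC performance against the individual markers. The data are simulated, not the ovarian cancer biomarkers analysed in the paper.

```python
# Sketch: logistic regression combines markers into a risk score P(disease | markers),
# which should dominate any single marker in ROC terms (simulated data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 500
disease = rng.integers(0, 2, n)
marker1 = rng.normal(loc=disease * 1.0, scale=1.0)   # informative marker
marker2 = rng.normal(loc=disease * 0.5, scale=1.0)   # weaker marker
X = np.column_stack([marker1, marker2])

model = LogisticRegression().fit(X, disease)
risk_score = model.predict_proba(X)[:, 1]             # estimated P(disease | markers)

for name, score in [("marker1", marker1), ("marker2", marker2), ("risk score", risk_score)]:
    print(name, round(roc_auc_score(disease, score), 3))
```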

7.
Computational wear prediction is an attractive concept for evaluating new total knee replacement designs prior to physical testing and implementation. An important hurdle to such technology is the lack of in vivo contact pressure predictions. To address this issue, this study evaluates a computationally efficient simulation approach that combines the advantages of rigid and deformable body modeling. The hybrid method uses rigid body dynamics to predict body positions and orientations and elastic foundation theory to predict contact pressures between general three-dimensional surfaces. To evaluate the method, we performed static pressure experiments with a commercial knee implant in neutral alignment using flexion angles of 0, 30, 60, and 90 degrees and loads of 750, 1500, 2250, and 3000 N. Using manufacturer CAD geometry for the same implant, an elastic foundation model with linear or nonlinear polyethylene material properties was implemented within a commercial multibody dynamics software program. The model's ability to predict experimental peak and average contact pressures simultaneously was evaluated by performing dynamic simulations to find the static configuration. Both the linear and nonlinear material models predicted the average contact pressure data well, while only the linear material model could simultaneously predict the trends in the peak contact pressure data. This novel modeling approach is sufficiently fast and accurate to be used in design sensitivity and optimization studies of knee implant mechanics and ultimately wear.
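A minimal sketch of the linear elastic-foundation (Winkler-type) idea underlying the hybrid approach: each contact element acts as an independent spring, so pressure follows from local interpenetration depth. The layer stiffness formula shown is one common form, and all material values, depths, and element sizes are assumed for illustration only.

```python
# Winkler-type elastic foundation sketch: per-element pressure from local
# interpenetration depth (illustrative values, not the implant polyethylene data).
import numpy as np

E, nu, layer_thickness = 500e6, 0.46, 8e-3            # Pa, -, m (assumed values)
spring_k = E * (1 - nu) / ((1 + nu) * (1 - 2 * nu) * layer_thickness)  # Pa per m of depth

# Interpenetration depths (m) on a small grid of contact elements
depths = np.array([[0.0,   10e-6, 20e-6],
                   [10e-6, 40e-6, 10e-6],
                   [0.0,   10e-6, 0.0]])
element_area = 1e-6                                    # m^2 per element (assumed)

pressure = spring_k * np.clip(depths, 0.0, None)       # Pa; no tension where surfaces separate
total_force = (pressure * element_area).sum()          # N, balanced against the applied load
print(round(pressure.max() / 1e6, 1), "MPa peak;", round(total_force, 1), "N total")
```

Because each element is independent, this evaluation is much cheaper than a full finite-element contact solve, which is what makes the rigid-body/elastic-foundation hybrid attractive for wear and optimization studies.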

8.
Unloader braces are a non-invasive treatment for knee osteoarthritis; they function primarily by applying an external abduction moment to the joint to reduce loads in the medial compartment of the knee. We developed a novel method using brace deflection to estimate the mechanical effect of valgus braces and validated this model using strain gauge instrumentation. Three subjects performed static and walking trials, in which the moment applied by an instrumented brace was calculated using both the deflection and strain methods. The deflection method predicted average brace moments of 8.7 Nm across static trials; mean error between the deflection model predictions and the gold-standard strain gauge measurements was 0.32 Nm. Mean brace moment predictions throughout gait ranged from 7.1 to 8.7 Nm using the deflection model. Maximum differences (and mean absolute errors, MAE) over the gait cycle in mean and peak brace moments between the methods were 1.50 Nm (0.96 Nm) and 0.60 Nm (0.42 Nm). Our proposed method enables quantification of brace abduction moments without the use of custom instrumentation. While the deflection-based method is similar to that implemented by Schmalz et al. (2010), the proposed method isolates abduction deflection from the 3-DOF angular changes that occur within the brace. Though the model should be viewed with more caution during swing (MAE = 1.16 Nm), the accuracy was shown to be influenced by uncertainty in angle measurement due to cluster spacing. In conclusion, the results demonstrate that the deflection-based method can predict brace moments comparable to those of the previously established strain method.
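The deflection idea can be illustrated very simply: if the brace's abduction stiffness is known from bench calibration, the applied moment follows from the measured abduction deflection. The sketch below is a hypothetical illustration under that assumption, with made-up stiffness and deflection values, not the authors' exact formulation.

```python
# Deflection-based brace moment sketch: moment = abduction stiffness x abduction deflection.
# Stiffness and deflection values are assumed for illustration.
import numpy as np

brace_stiffness = 1.2                                      # Nm per degree (assumed bench calibration)
abduction_deflection = np.array([5.8, 6.4, 7.1, 6.9])      # deg, isolated from the 3-DOF brace angles

brace_moment = brace_stiffness * abduction_deflection      # Nm
print(brace_moment)                                        # ~7-8.5 Nm, the order reported in the study
```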

10.
Identification of scapular dyskinesis and evaluation of interventions depend on the ability to properly measure scapulothoracic (ST) motion. The most widely used measurement approach is the acromion marker cluster (AMC), which can yield large errors in extreme humeral elevation and can be inaccurate in children and patient populations. Recently, an individualized regression approach has been proposed as an alternative to the AMC. This technique utilizes the relationship between ST orientation, humerothoracic orientation, and acromion process position derived from calibration positions to predict dynamic ST orientations from humerothoracic and acromion process measures during motion. These individualized regressions demonstrated promising results for healthy adults; however, this method had not yet been compared to the more conventional AMC. This study compared ST orientation estimates from the AMC and regression approaches to static ST angles determined by surface markers placed on palpated landmarks in typically developing adolescents performing functional tasks. Both approaches produced errors within the range reported in the literature for skin-based scapular measurement techniques. The performance of the regression approach suffered when applied to positions outside the range of motion covered by the set of calibration positions. The AMC significantly underestimated ST internal rotation across all positions and overestimated posterior tilt in some positions. Overall, root mean square errors for the regression approach were smaller than those for the AMC in every position across all axes of ST motion. Accordingly, we recommend the regression approach as a suitable technique for measuring ST kinematics in functional motion.
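An illustrative sketch of the individualized-regression idea: per subject, fit a linear map from humerothoracic angles measured in a few static calibration postures to the palpated scapulothoracic angles, then apply that map to dynamic frames. All angle values below are made up, and for brevity the acromion-position predictor used by the full method is omitted.

```python
# Individualized regression sketch: least-squares fit from calibration postures,
# then prediction of ST angles for a dynamic frame (made-up angle data).
import numpy as np

# Calibration data: rows = static calibration positions (humerothoracic angles, deg)
ht_angles = np.array([[ 20.0,  5.0], [ 60.0, 10.0], [100.0, 18.0], [140.0, 25.0]])
# Palpated scapulothoracic angles at the same positions (deg)
st_angles = np.array([[  5.0, -2.0], [ 15.0,  1.0], [ 30.0,  6.0], [ 45.0, 12.0]])

# Ordinary least squares with an intercept column
X = np.column_stack([np.ones(len(ht_angles)), ht_angles])
coef, *_ = np.linalg.lstsq(X, st_angles, rcond=None)

def predict_st(ht_frame):
    """Predict ST angles for one dynamic frame of humerothoracic angles."""
    return np.concatenate([[1.0], ht_frame]) @ coef

print(predict_st(np.array([80.0, 14.0])))   # ST angles for a frame inside the calibrated range
```

As the abstract notes, predictions degrade outside the range spanned by the calibration postures, which is the usual caveat of any regression-based extrapolation.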

13.
The classical view of genetics is based on the central dogma of molecular biology, which assigns to DNA a fundamental but static role. According to the dogma, DNA can be duplicated only in identical copies (except for random errors), and no smart mechanism can alter the information content of DNA: in more detail, the direction of transfer of genetic information is only from DNA through RNA to proteins, and never backwards. However, starting from the so-called dynamic genome (McClintock's jumping genes) and the so-called dynamic mutations (such as trinucleotide expansion or, more generally, the instability of the number of tandem repeats of longer sequences), there is now a growing body of important cases in which it is known that the DNA is altered in a more or less sophisticated way, often by smart enzymatic mechanisms. The study of all such dynamic phenomena and of their interpretations can naturally be called dynamical genetics. In this survey we examine a number of such dynamic phenomena, as well as some phenomena of great biological importance that have no universally accepted explanation within a static approach to genetics and for which a dynamical interpretation has only been proposed. Important examples are some controversial but interesting phenomena such as horizontal transmission and Creutzfeldt-Jakob disease, and those peculiar DNA structures known as G-quadruplexes.

14.
Mansoor SE, Dewitt MA, Farrens DL. Biochemistry, 2010, 49(45):9722-9731
Studying the interplay between protein structure and function remains a daunting task. Especially lacking are methods for measuring structural changes in real time. Here we report our most recent improvements to a method that can be used to address such challenges. This method, which we now call tryptophan-induced quenching (TrIQ), provides a straightforward, sensitive, and inexpensive way to address questions of conformational dynamics and short-range protein interactions. Importantly, TrIQ only occurs over relatively short distances (~5–15 Å), making it complementary to traditional fluorescence resonance energy transfer (FRET) methods, which operate over distances too large for precise studies of protein structure. As implied in the name, TrIQ measures the efficient quenching induced in some fluorophores by tryptophan (Trp). We present here our analysis of the TrIQ effect for five different fluorophores that span a range of sizes and spectral properties. Each probe was attached to four different cysteine residues on T4 lysozyme, and the extent of TrIQ caused by a nearby Trp was measured. Our results show that, at least for smaller probes, the extent of TrIQ is distance dependent. Moreover, we also demonstrate how TrIQ data can be analyzed to determine the fraction of fluorophores involved in a static, nonfluorescent complex with Trp. Based on this analysis, our study shows that each fluorophore has a different TrIQ profile, or "sphere of quenching", which correlates with its size, rotational flexibility, and the length of its attachment linker. This TrIQ-based "sphere of quenching" is unique to every Trp-probe pair and reflects the distance within which one can expect to see the TrIQ effect. Thus, TrIQ provides a straightforward, readily accessible approach for mapping distances within proteins and monitoring conformational changes using fluorescence spectroscopy.
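One standard photophysical route to a "fraction in a static, nonfluorescent complex" compares steady-state and lifetime quenching: the dark fraction does not emit at all, while the emitting fraction has its lifetime shortened by dynamic quenching. The numbers below are illustrative, and this sketch is not necessarily the exact analysis used in the paper.

```python
# Separating static from dynamic quenching (illustrative numbers, not paper data).
# Assumption: statically quenched probes are dark, so F/F0 = (1 - f_static) * (tau/tau0).
F_ratio   = 0.30   # steady-state intensity with Trp present / without (F/F0)
tau_ratio = 0.60   # fluorescence lifetime with Trp present / without (tau/tau0)

f_static = 1.0 - F_ratio / tau_ratio
print(f"fraction of fluorophores in a static, nonfluorescent complex: {f_static:.2f}")  # 0.50
```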

15.
High productivity is the primary goal of flexible manufacturing systems (FMSs), in which semi-independent workstations are integrated using automated material-transport systems and hierarchical local networks. Availability of the various subsystems, and of the system as a whole, is a prerequisite for achieving functional integration as well as high throughput. An FMS also has inherent routing and operation flexibilities that provide it with a certain degree of fault tolerance. A certain volume of production can thus be maintained in the face of subsystem (e.g., machine, robot, or material-handling system) failures. In this article, we propose two reliability measures, namely part reliability (PR) and FMS reliability (FMSR), for manufacturing systems and present algorithms to evaluate them. We also consider dynamic, or time-dependent, reliability analysis as a natural generalization of the static analysis. The methods outlined use an algorithm that generates process-spanning graphs (PSGs), which are used to evaluate the reliability measures.
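A toy sketch of the part-reliability (PR) idea, not the process-spanning-graph algorithm itself: a part can still be produced if at least one of its alternative process routes has all of its required subsystems working. Assuming independent subsystem failures, the probability is estimated here by simple Monte Carlo simulation with made-up reliabilities.

```python
# Part reliability sketch: probability that at least one alternative routing
# survives independent subsystem failures (made-up reliabilities).
import numpy as np

subsystem_rel = {"M1": 0.95, "M2": 0.90, "R1": 0.97, "AGV": 0.92}
routes = [("M1", "R1", "AGV"), ("M2", "R1", "AGV")]   # alternative routings for one part type

rng = np.random.default_rng(1)
n_trials = 200_000
up = {s: rng.random(n_trials) < r for s, r in subsystem_rel.items()}   # subsystem up/down samples
route_up = [np.logical_and.reduce([up[s] for s in route]) for route in routes]
part_reliability = np.logical_or.reduce(route_up).mean()
print(round(part_reliability, 4))   # probability the part can still be produced
```

Routing flexibility shows up directly: the union of routes (about 0.89 here) is noticeably higher than either single route, which is the fault-tolerance effect the abstract describes.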

16.
Using dynamic vegetation models to simulate plant range shifts
Dynamic vegetation models (DVMs) follow a process-based approach to simulate plant population demography, and have been used to address questions about disturbances, plant succession, community composition, and the provisioning of ecosystem services under climate change scenarios. Despite their potential, they have seldom been used for studying species range dynamics explicitly. In this perspective paper, we make the case that DVMs should be used to this end and can improve our understanding of the factors that influence species range expansions and contractions. We review the benefits of using process-based, dynamic models, emphasizing how DVMs can be applied specifically to questions about species range dynamics. Subsequently, we provide a critical evaluation of some of the limitations and trade-offs associated with DVMs, and we use those to guide our discussion of future model development. This includes a discussion of which processes are lacking, specifically a mechanistic representation of dispersal, inclusion of the seedling stage, trait variability, and a dynamic representation of reproduction. We also discuss upscaling techniques that offer promising solutions for running these models efficiently over large spatial extents. Our aim is to provide directions for future research efforts and to illustrate the value of the DVM approach.

17.
Genotypes produced from samples collected non-invasively in harsh field conditions often lack the full complement of data from the selected microsatellite loci. Their application to genetic mark-recapture methodology in wildlife species can therefore be prone to misidentifications, leading both to ‘true non-recaptures’ being falsely accepted as recaptures (Type I errors) and to ‘true recaptures’ going undetected (Type II errors). Here we present a new likelihood method that allows every pairwise genotype comparison to be evaluated independently. We apply this method to determine the total number of recaptures by estimating and optimising the balance between Type I and Type II errors. We show through simulation that the standard error of recapture estimates can be minimised through our algorithms. Interestingly, the precision of our recapture estimates actually improved when we included individuals with missing genotypes, as this increased the number of pairwise comparisons, potentially uncovering more recaptures. Simulations suggest that the method is tolerant of per-locus error rates of up to 5% and can theoretically work in datasets with as little as 60% of loci genotyped. Our methods can be implemented in datasets where standard mismatch analyses fail to distinguish recaptures. Finally, we show that by assigning a low Type I error rate to our matching algorithms we can generate a dataset of individuals with known capture histories that is suitable for downstream analysis with traditional mark-recapture methods.
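An illustrative sketch of the pairwise-comparison step: count comparable loci and allele mismatches between two multilocus genotypes while tolerating missing loci, so each pair can be scored independently. The data and any match threshold here are hypothetical; this is not the likelihood statistic developed in the paper.

```python
# Pairwise genotype comparison tolerating missing loci (hypothetical data).
def compare_genotypes(g1, g2):
    """g1, g2: dicts locus -> (allele, allele), or None if the locus failed to amplify."""
    compared, mismatched = 0, 0
    for locus in g1:
        a, b = g1[locus], g2.get(locus)
        if a is None or b is None:
            continue                      # missing data: locus is uninformative for this pair
        compared += 1
        if sorted(a) != sorted(b):
            mismatched += 1
    return compared, mismatched

s1 = {"L1": (120, 124), "L2": (88, 90), "L3": None,       "L4": (200, 204)}
s2 = {"L1": (120, 124), "L2": (88, 90), "L3": (150, 152), "L4": (200, 202)}
print(compare_genotypes(s1, s2))   # (3 comparable loci, 1 mismatch) -> recapture or error?
```

The paper's contribution is to turn counts like these into a per-pair likelihood and then choose the decision threshold that balances Type I against Type II errors across all pairs, rather than applying a fixed mismatch cut-off.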

18.
We aim at finding the smallest set of genes that can ensure highly accurate classification of cancers from microarray data using supervised machine learning algorithms. The significance of finding minimum gene subsets is three-fold: 1) it greatly reduces the computational burden and "noise" arising from irrelevant genes; in the examples studied in this paper, finding the minimum gene subsets even allows extraction of simple diagnostic rules that lead to accurate diagnosis without the need for any classifiers; 2) it simplifies gene expression tests to include only a very small number of genes rather than thousands, which can bring down the cost of cancer testing significantly; 3) it calls for further investigation into the possible biological relationship between these small numbers of genes and cancer development and treatment. Our simple yet very effective method involves two steps. In the first step, we choose some important genes using a feature importance ranking scheme. In the second step, we test the classification capability of all simple combinations of those important genes using a good classifier. For three "small" and "simple" data sets with two, three, and four cancer (sub)types, our approach obtained very high accuracy with only two or three genes. For a "large" and "complex" data set with 14 cancer types, we divided the whole problem into a group of binary classification problems and applied the two-step approach to each of them. Through this "divide-and-conquer" approach, we obtained accuracy comparable to previously reported results but with only 28 genes rather than 16,063 genes. In general, our method can significantly reduce the number of genes required for highly reliable diagnosis.
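A hedged sketch of the two-step idea on synthetic data: rank genes by a univariate importance score, then exhaustively test small combinations of the top-ranked genes with a cross-validated classifier. The feature-ranking statistic and classifier used here are stand-ins, not necessarily those used by the authors.

```python
# Two-step gene-subset search sketch: univariate ranking, then exhaustive testing
# of small combinations of the top genes (synthetic data, stand-in classifier).
from itertools import combinations
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=80, n_features=500, n_informative=5, random_state=0)

# Step 1: keep the top-ranked genes by ANOVA F-score
f_scores, _ = f_classif(X, y)
top_genes = np.argsort(f_scores)[::-1][:10]

# Step 2: test every 2-gene combination of those candidates with a classifier
best = max(
    ((pair, cross_val_score(SVC(), X[:, list(pair)], y, cv=5).mean())
     for pair in combinations(top_genes, 2)),
    key=lambda item: item[1],
)
print("best 2-gene subset:", best[0], "cross-validated accuracy:", round(best[1], 3))
```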
