Background
The collection of accurate data on adherence and sexual behaviour is crucial in microbicide (and other HIV-related) research. In the absence of a “gold standard”, the collection of such data relies largely on participant self-reporting. After reviewing the available methods, this paper describes a mixed-method/triangulation model for generating more accurate data on adherence and sexual behaviour in a multi-centre vaginal microbicide clinical trial. A companion paper presents some of the results from this model [1].
Methodology/Principal Findings
Data were collected from a random subsample of 725 women (7.7% of the trial population) using structured interviews, coital diaries, in-depth interviews, counts of returned gel applicators, focus group discussions, and ethnography. The core of the model was a customised, semi-structured in-depth interview. Triangulation operated at two levels: first, discrepancies between data from the questionnaires, diaries, in-depth interviews and applicator returns were identified, discussed with participants and, to a large extent, resolved; second, results from individual participants were related to the more general data emerging from the focus group discussions and ethnography. A democratic and equitable collaboration between clinical trialists and qualitative social scientists facilitated the success of the model, as did the preparatory studies preceding the trial. The process revealed some of the underlying assumptions and routinised practices in “clinical trial culture” that are potentially detrimental to the collection of accurate data, as well as some of the shortcomings of large qualitative studies, and pointed to potential solutions.
Conclusions/Significance
The integration of qualitative social science and the use of mixed methods and triangulation in clinical trials are feasible, and can reveal (and resolve) inaccuracies in data on adherence and sensitive behaviours, as well as illuminating aspects of “trial culture” that may also affect data accuracy.
Background
For several immune-mediated diseases, immunological analysis will become more complex in the future, with datasets in which cytokine and gene expression data play a major role. These data have characteristics that require sophisticated statistical analysis, such as strategies for non-normal distributions and censoring. Additionally, complex and multiple immunological relationships need to be adjusted for potential confounding and interaction effects.
Objective
We aimed to introduce and apply different methods for the statistical analysis of non-normal, censored cytokine and gene expression data. Furthermore, we assessed the performance and accuracy of a novel regression approach that allows adjustment for covariates and potential confounding.
Methods
For non-normally distributed censored data, traditional methods such as the Kaplan-Meier method or the generalized Wilcoxon test are described. To adjust for covariates, a novel approach named Tobit regression on ranks was introduced. Its performance and accuracy for the analysis of non-normal censored cytokine/gene expression data were evaluated in a simulation study and a statistical experiment applying permutation and bootstrapping.
Results
If adjustment for covariates is not necessary, traditional statistical methods are adequate for non-normal censored data. Tobit regression on ranks is a valid method, comparable with these and appropriate when additional adjustment is required. Its power, type I error rate and accuracy were comparable to those of classical Tobit regression.
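The approach evaluated above can be sketched in code. The following is a minimal, illustrative reading of “Tobit regression on ranks”, assuming left-censoring at a detection limit: the outcome is rank-transformed (censored observations share the lowest tied rank), and a left-censored Tobit likelihood is then maximized on the rank scale. The simulated data, the single covariate `x`, and the handling of ties are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 300
x = rng.normal(size=n)                                 # illustrative covariate
latent = 1.0 + 0.8 * x + rng.standard_t(df=3, size=n)  # heavy-tailed "cytokine"
limit = np.quantile(latent, 0.25)                      # detection limit: lowest 25%
y = np.maximum(latent, limit)                          # left-censored observations
censored = latent <= limit

# Rank transform; censored observations are tied at the minimal (average) rank.
ranks = stats.rankdata(y)
c = ranks[censored].max()                              # censoring point on rank scale

def negloglik(theta):
    """Negative log-likelihood of a left-censored (Tobit) model on the ranks."""
    b0, b1, log_s = theta
    mu = b0 + b1 * x
    s = np.exp(log_s)
    ll = np.where(censored,
                  stats.norm.logcdf((c - mu) / s),
                  stats.norm.logpdf((ranks - mu) / s) - log_s)
    return -ll.sum()

res = optimize.minimize(negloglik,
                        x0=[ranks.mean(), 0.0, np.log(ranks.std())],
                        method="Nelder-Mead",
                        options={"maxiter": 5000, "maxfev": 5000})
b0_hat, b1_hat, _ = res.x
print(f"slope on the rank scale: {b1_hat:.1f}")        # positive, mirroring the true effect
```

Because the outcome is on the rank scale, the fitted slope measures direction and strength of association rather than an effect in the original cytokine units, which is the trade-off this family of rank methods accepts in exchange for robustness to non-normality.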
Conclusion
Non-normally distributed censored immunological data require appropriate statistical methods. Tobit regression on ranks meets these requirements and can be used to adjust for covariates and potential confounding in large and complex immunological datasets.
Background
The inherent complexity of statistical methods and clinical phenomena compels researchers with diverse domains of expertise to work in interdisciplinary teams in which none has complete knowledge of the counterpart's field. As a result, knowledge exchange may often be characterized by miscommunication leading to misinterpretation, ultimately resulting in errors in research and even in clinical practice. Although communication has a central role in interdisciplinary collaboration, and although miscommunication can have a negative impact on research processes, to the best of our knowledge no study has yet explored how data analysis specialists and clinical researchers communicate over time.
Methods/Principal Findings
We conducted a qualitative analysis of encounters between clinical researchers and data analysis specialists (an epidemiologist, a clinical epidemiologist, and a data mining specialist). These encounters were recorded and systematically analyzed using a grounded theory methodology to extract emerging themes, followed by data triangulation and analysis of negative cases for validation. A policy analysis was then performed using a system dynamics methodology, looking for potential interventions to improve this process. Four major themes emerged. Definitions in lay language were frequently employed to bridge the language gap between the specialties. Thought experiments presented a series of “what if” situations that helped clarify how a method or piece of information from the other field would behave under alternative conditions, ultimately aiding in explaining its main purpose. Metaphors and analogies were used to translate concepts across fields, from the unfamiliar to the familiar. Prolepsis was used to anticipate study outcomes, helping specialists understand the current context in light of the final goal.
Conclusion/Significance
The communication between clinical researchers and data analysis specialists presents multiple challenges that can lead to errors.
Molecular data provide a more dynamic picture of polyploid evolution than has traditionally been espoused. Numerous studies have demonstrated multiple origins of both allopolyploids and autopolyploids. In several polyploid species studied in detail, multiple origins were found to be frequent on a local geographic scale, as well as within a short span of time. Molecular data strongly suggest that recurrent formation of polyploid species is the rule rather than the exception. In addition, molecular data indicate that recurrent formation of polyploids has important genetic consequences, introducing considerable genetic variation from diploid progenitors into polyploid derivatives.
Molecular data also suggest a much more important role for natural autopolyploids than has historically been envisioned. In contrast to the longstanding view of autopolyploidy as rare, molecular data continue to reveal steadily increasing numbers of well-documented autopolyploids with tetrasomic or higher-level polysomic inheritance. Although autopolyploidy undoubtedly occurs much less frequently than allopolyploidy in natural populations, it has nonetheless been a significant evolutionary mechanism. Molecular data also provide compelling genetic evidence that contradicts the traditional view of autopolyploidy as maladaptive. Electrophoretic studies have revealed three important attributes of autopolyploids compared with their diploid progenitors: (1) enzyme multiplicity, (2) increased heterozygosity, and (3) increased allelic diversity. Genetic variability is, in fact, typically substantially higher in autopolyploids than in their diploid progenitors. These genetic attributes are due to polysomic inheritance and provide strong genetic arguments for the potential success of autopolyploids in nature.
In addition to providing numerous important insights into the formation of polyploids and the immediate genetic consequences of polyploidy, molecular data also have been used to study the subsequent evolution of polyploid genomes. Common hypotheses on the subsequent evolution of polyploid genomes include (1) gene silencing, eventually leading to extensively diploidized polyploid genomes; (2) gene diversification, resulting in regulatory or functional divergence of duplicate genes; and (3) genome diversification, resulting in chromosomal repatterning. Compelling, but limited, genetic evidence for all of these factors has been obtained in molecular analyses of polyploid species. The occurrence of these processes in polyploid genomes indicates that polyploid genomes are plastic and susceptible to evolutionary change.
In summary, molecular data continue to demonstrate that polyploidization and the subsequent evolution of polyploid genomes are very dynamic processes.
Background
Quantitative PCR (qPCR) is a workhorse laboratory technique for measuring the concentration of a target DNA sequence with high accuracy over a wide dynamic range. The gold-standard method for estimating DNA concentrations via qPCR is quantification cycle (Cq) standard curve quantification, which requires the time- and labor-intensive construction of a standard curve. In theory, the shape of a qPCR data curve can be used to quantify DNA concentration directly by fitting a model to the data; however, current empirical model-based quantification methods are not as reliable as standard curve quantification.
Principal Findings
We have developed a two-parameter mass action kinetic model of PCR (MAK2) that can be fitted to qPCR data to quantify target concentration from a single qPCR assay. To compare the accuracy of MAK2 fitting with that of other qPCR quantification methods, we applied the quantification methods to qPCR dilution series data generated in three independent laboratories using different target sequences. Quantification accuracy was assessed by analyzing the reliability of concentration predictions for targets at known concentrations. Our results indicate that quantification by MAK2 fitting is as reliable as standard curve quantification for a variety of DNA targets and a wide range of concentrations.
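Model-based single-assay quantification of this kind can be sketched as follows. The recurrence below, D_n = D_{n-1} + k·ln(1 + D_{n-1}/k), is a MAK2-style two-parameter mass action form; the synthetic amplification curve, noise level, and parameter values are invented for illustration, so consult the original publication for the exact model.

```python
import math
import numpy as np
from scipy.optimize import curve_fit

def mak2_curve(cycles, d0, k):
    """Signal per cycle from initial target amount d0 and rate constant k."""
    d = d0
    out = []
    for _ in cycles:
        d = d + k * math.log1p(d / k)   # mass action update per PCR cycle
        out.append(d)
    return np.array(out)

cycles = np.arange(1, 41)
true_d0, true_k = 1e-4, 0.3
signal = mak2_curve(cycles, true_d0, true_k)
noisy = signal + np.random.default_rng(1).normal(0.0, 1e-3, size=cycles.size)

# Fit both parameters to a single amplification curve: no standard curve needed.
(est_d0, est_k), _ = curve_fit(mak2_curve, cycles, noisy,
                               p0=[1e-5, 0.1],
                               bounds=([1e-12, 1e-6], [1.0, 10.0]))
print(f"estimated initial amount: {est_d0:.2e}, rate constant: {est_k:.2f}")
```

On real data, background fluorescence would be subtracted before fitting, and the fitted initial amount is taken as proportional to the starting target concentration.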
Significance
We anticipate that MAK2 quantification will have a profound effect on the way qPCR experiments are designed and analyzed. In particular, MAK2 enables accurate quantification in portable qPCR assays with limited sample throughput, where construction of a standard curve is impractical.
Background
Understanding the current status of predatory fish communities, and the effects fishing has on them, is vitally important information for management. However, at region-wide scales, data are often insufficient to assess the effects of extraction in the coral reef ecosystems of developing nations.
Methodology/Principal Findings
Here, I overcome this difficulty by using a publicly accessible, fisheries-independent database to provide a broad-scale, comprehensive analysis of human impacts on predatory reef fish communities across the greater Caribbean region. Specifically, this study analyzed the presence and diversity of predatory reef fishes over a gradient of human population density. Across the region, as human population density increases, the presence of large-bodied fishes declines, and fish communities become dominated by a few smaller-bodied species.
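The kind of gradient analysis described above can be sketched as a logistic regression of species presence on log human population density. Everything below, including the assumed negative relationship, is simulated for illustration; it is not the study's data or code.

```python
import math
import random

# Hypothetical illustration: logistic regression of large-bodied predator
# presence (1/0) against log human population density, fitted by plain
# gradient descent. The data are simulated, not drawn from the study.
random.seed(42)

def simulate(n=200):
    data = []
    for _ in range(n):
        log_density = random.uniform(0, 8)       # log(people per km^2), assumed
        # Assumed true relationship: presence declines with density.
        p = 1.0 / (1.0 + math.exp(-(4.0 - 1.0 * log_density)))
        presence = 1 if random.random() < p else 0
        data.append((log_density, presence))
    return data

def fit_logistic(data, lr=0.01, epochs=2000):
    """Fit intercept and slope by gradient descent on the logistic loss."""
    b0, b1 = 0.0, 0.0
    n = len(data)
    for _ in range(epochs):
        g0 = g1 = 0.0
        for xi, yi in data:
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += (p - yi)
            g1 += (p - yi) * xi
        b0 -= lr * g0 / n
        b1 -= lr * g1 / n
    return b0, b1

data = simulate()
b0, b1 = fit_logistic(data)
print(f"slope on log density: {b1:.2f}")         # negative: presence declines
```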
Conclusions/Significance
The complete disappearance of several large-bodied fishes indicates that ecological and local extinctions have occurred in some densely populated areas. These findings fill a fundamentally important gap in our knowledge of the ecosystem effects of artisanal fisheries in developing nations, and support multiple approaches to data collection where such data are commonly unavailable.
Introduction
Mortality data provide essential evidence on the health status of populations in crisis-affected and resource-poor settings, and help to guide and assess relief operations. Retrospective surveys are commonly used to collect mortality data in such populations, but they require substantial resources and have important methodological limitations. We evaluated the feasibility of an alternative method for rapidly quantifying mortality (the informant method). The study objective was to assess the economic feasibility of the informant method.
Methods
The informant method captures deaths through an exhaustive search for all deaths occurring in a population over a defined and recent recall period, using key community informants and the next of kin of decedents. Between July and October 2008, we implemented and evaluated the informant method in: Kabul, Afghanistan; Mae La camp for Karen refugees, on the Thai-Burma border; Chiradzulu District, Malawi; and the Lugufu and Mtabila refugee camps, Tanzania. We documented the time and cost inputs for the informant method in each site, and compared these with projections for hypothetical retrospective mortality surveys implemented in the same sites with a 6-month recall period and with a 30-day recall period.
Findings
Across all four study sites, the informant method was estimated to require on average 29% less time input and 33% less monetary input than retrospective surveys with a 6-month recall period, and 88% less time input and 86% less monetary input than retrospective surveys with a 1-month recall period. Verbal autopsy questionnaires were feasible and efficient, constituting only 4% of the total person-time for the informant method's implementation in Chiradzulu District.
Conclusions
The informant method requires fewer resources and imposes less respondent burden. The method's generally impressive feasibility, and the near real-time mortality data it provides, warrant further work to develop it, given the importance of mortality measurement in such settings.
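For concreteness, the core quantity such exercises estimate, the crude mortality rate, is conventionally reported in humanitarian settings as deaths per 10,000 people per day. A minimal sketch with invented numbers:

```python
# Hypothetical illustration of the crude mortality rate (CMR) that both the
# informant method and retrospective surveys aim to estimate. All figures
# below are invented for the example.
def crude_mortality_rate(deaths, population, recall_days):
    """Deaths per 10,000 person-days over the recall period."""
    person_days = population * recall_days
    return deaths / person_days * 10_000

# e.g. 58 deaths found by informants in a camp of 24,000 over a 90-day recall
cmr = crude_mortality_rate(58, 24_000, 90)
print(f"CMR = {cmr:.2f} deaths / 10,000 / day")
```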
- Support Vector Machine Recursive Feature Elimination (SVMRFE)
- Leave-One-Out Calculation Sequential Forward Selection (LOOCSFS)
- Gradient based Leave-one-out Gene Selection (GLGS)
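The first of the methods listed, SVM-RFE, can be illustrated with scikit-learn, whose `RFE` wrapper around a linear SVM reproduces the basic procedure: repeatedly fit the SVM and drop the feature with the smallest weight magnitude. The synthetic dataset is a stand-in for a gene expression matrix; LOOCSFS and GLGS have no off-the-shelf equivalent here and are not sketched.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for an expression matrix: 100 samples, 20 features,
# of which 5 are informative.
X, y = make_classification(n_samples=100, n_features=20, n_informative=5,
                           random_state=0)

# SVM-RFE: eliminate one feature per step until 5 remain.
selector = RFE(estimator=SVC(kernel="linear"), n_features_to_select=5, step=1)
selector.fit(X, y)
print(selector.support_)    # boolean mask over the 20 features
print(selector.ranking_)    # rank 1 marks the retained features
```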