Similar documents
20 similar documents found.
1.
Numerous statistical methods have been developed for analyzing high-dimensional data. These methods often focus on variable selection and are of limited use for testing with high-dimensional data; they also typically require explicit likelihood functions. In this article, we propose a "hybrid omnibus test" for testing with high-dimensional data under much weaker requirements. Our hybrid omnibus test is developed under a semiparametric framework in which a likelihood function is no longer necessary. The test is a frequentist-Bayesian hybrid score-type test for a generalized partially linear single-index model, whose link function depends on a set of variables through a generalized partially linear single index. We propose an efficient score based on estimating equations, define local tests, and then construct the hybrid omnibus test from the local tests. We compare our approach with an empirical-likelihood ratio test and with Bayesian inference based on Bayes factors, using simulation studies. Our simulation results suggest that our approach outperforms the others in terms of type I error, power, and computational cost in both the low- and high-dimensional cases. The advantage of our approach is demonstrated by applying it to genetic pathway data for type II diabetes mellitus.
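
As a rough, hypothetical illustration of the score-type construction mentioned above (an efficient score built from estimating equations, with nuisance parameters profiled out), the sketch below implements a plain block score test for a set of markers in a working logistic model. It is not the authors' semiparametric hybrid omnibus test; the function name, data, and dimensions are invented for the example.

```python
import numpy as np
from scipy import stats

def score_test_block(y, Z, X):
    """Score test of H0: beta = 0 in logit P(y=1) = gamma0 + Z @ gamma + X @ beta."""
    n = len(y)
    Z1 = np.column_stack([np.ones(n), Z])
    gamma = np.zeros(Z1.shape[1])
    for _ in range(50):                      # Newton-Raphson fit of the null model
        mu = 1.0 / (1.0 + np.exp(-(Z1 @ gamma)))
        W = mu * (1.0 - mu)
        step = np.linalg.solve(Z1.T @ (Z1 * W[:, None]), Z1.T @ (y - mu))
        gamma += step
        if np.max(np.abs(step)) < 1e-10:
            break
    mu = 1.0 / (1.0 + np.exp(-(Z1 @ gamma)))
    W = mu * (1.0 - mu)
    U = X.T @ (y - mu)                                        # score for beta at beta = 0
    Ixx = X.T @ (X * W[:, None])
    Ixz = X.T @ (Z1 * W[:, None])
    Izz = Z1.T @ (Z1 * W[:, None])
    V = Ixx - Ixz @ np.linalg.solve(Izz, Ixz.T)               # efficient information
    T = U @ np.linalg.solve(V, U)
    return T, stats.chi2.sf(T, df=X.shape[1])                 # chi-square p-value

# Toy data: a small "pathway" of p markers tested jointly while adjusting for Z.
rng = np.random.default_rng(0)
n, p = 300, 5
Z = rng.normal(size=(n, 2))
X = rng.normal(size=(n, p))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-0.3 * Z[:, 0])), size=n)
print(score_test_block(y, Z, X))
```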

2.
Recent investigations have revealed 1) that the isochores of the human genome group into two super-families characterized by two different long-range 3D structures, and 2) that these structures, essentially based on the distribution and topology of short sequences, mold primary chromatin domains (and define nucleosome binding). More specifically, GC-poor, gene-poor isochores are low-heterogeneity sequences with oligo-A spikes that mold the lamina-associated domains (LADs), whereas GC-rich, gene-rich isochores are characterized by single or multiple GC peaks that mold the topologically associating domains (TADs). The formation of these "primary TADs" may be followed by extrusion under the action of cohesin and CTCF. Finally, the genomic code, which is responsible for the pervasive encoding and molding of primary chromatin domains (LADs and primary TADs, namely the "gene spaces"/"spatial compartments"), resolves the longstanding problems of "non-coding DNA," "junk DNA," and "selfish DNA," leading to a new vision of the genome as shaped by DNA sequences.

3.
We suspect that there is a level of granularity of protein structure intermediate between the classical levels of "architecture" and "topology," as reflected in such phenomena as extensive three-dimensional structural similarity above the level of (super)folds. Here, we examine this notion of architectural identity despite topological variability, starting with a concept that we call the "Urfold." We believe that this model could offer a new conceptual approach for protein structural analysis and classification: indeed, the Urfold concept may help reconcile various phenomena that have been frequently recognized or debated for years, such as the precise meaning of "significant" structural overlap and the degree of continuity of fold space. More broadly, the role of structural similarity in sequence-structure-function evolution has been studied via many models over the years; by addressing a conceptual gap that we believe exists between the architecture and topology levels of structural classification schemes, the Urfold eventually may help synthesize these models into a generalized, consistent framework. Here, we begin by qualitatively introducing the concept.

4.
Realistic power calculations for large cohort studies and nested case-control studies are essential for successfully answering important and complex research questions in epidemiology and clinical medicine. To this end, we provide a methodical framework for general, realistic power calculations via simulation, which we put into practice by means of an R-based template. We consider staggered recruitment, individual hazard rates, competing risks, interaction effects, and the misclassification of covariates. The study cohort is assembled with respect to given age, gender, and community distributions. Nested case-control analyses with a varying number of controls enable comparisons of power with a full cohort analysis. Time-to-event generation under competing risks, including delayed study-entry times, is realized on the basis of a six-state Markov model. Incidence rates, the prevalence of risk factors, and prespecified hazard ratios allow age-dependent transition rates to be assigned in the form of Cox models. These provide the basis for a central simulation algorithm that generates sample paths of the underlying time-inhomogeneous Markov processes. Including frailty terms in the Cox models deliberately departs from the Markov property: an "individual Markov process given frailty" introduces unobserved heterogeneity between individuals. Different left-truncation and right-censoring patterns call for the use of Cox models for data analysis. p-values are recorded over repeated simulation runs to allow the desired power calculations. For illustration, we consider scenarios with a "testing" character as well as realistic scenarios. This enables both validation of a correct implementation of the theoretical concepts and concrete sample size recommendations against an actual epidemiological background, here given by possible substudy designs within the German National Cohort.
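
As a greatly simplified, hypothetical companion to the framework above (the paper itself uses a six-state Markov model and an R-based template), the Python sketch below estimates power by repeated simulation of staggered recruitment, a binary risk factor with a prespecified hazard ratio, a competing risk, administrative censoring, and a two-sample log-rank test. All rates, prevalences, and sample sizes are invented.

```python
import numpy as np
from scipy import stats

def logrank_p(time, event, group):
    """Two-sample log-rank test; returns a two-sided p-value."""
    o1 = e1 = v = 0.0
    for t in np.unique(time[event == 1]):
        at_risk = time >= t
        n = at_risk.sum()
        n1 = (at_risk & (group == 1)).sum()
        d = ((time == t) & (event == 1)).sum()
        d1 = ((time == t) & (event == 1) & (group == 1)).sum()
        o1, e1 = o1 + d1, e1 + d * n1 / n
        if n > 1:
            v += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    if v == 0:
        return 1.0                                  # no usable events
    z = (o1 - e1) / np.sqrt(v)
    return 2 * stats.norm.sf(abs(z))

def simulate_power(n=2000, prevalence=0.3, hr=1.8, base_rate=0.01,
                   competing_rate=0.02, recruit_years=3.0, study_years=10.0,
                   n_sim=200, alpha=0.05, seed=0):
    """Proportion of simulated cohorts in which the exposure effect is detected."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_sim):
        exposed = rng.binomial(1, prevalence, size=n)
        t_event = rng.exponential(1.0 / (base_rate * hr**exposed))    # event of interest
        t_compete = rng.exponential(1.0 / competing_rate, size=n)     # competing risk
        entry = rng.uniform(0.0, recruit_years, size=n)               # staggered recruitment
        follow_up = study_years - entry                               # administrative censoring
        obs = np.minimum.reduce([t_event, t_compete, follow_up])
        event = (obs == t_event).astype(int)
        hits += logrank_p(obs, event, exposed) < alpha
    return hits / n_sim

print("estimated power:", simulate_power())
```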

5.
6.
This essay provides an introduction to the terminology, concepts, methods, and challenges of image-based modeling in biology. Image-based modeling and simulation aims at using systematic, quantitative image data to build predictive models of biological systems that can be simulated with a computer. This allows one to disentangle molecular mechanisms from effects of shape and geometry. Questions like "what is the functional role of shape" or "how are biological shapes generated and regulated" can be addressed in the framework of image-based systems biology. The combination of image quantification, model building, and computer simulation is illustrated here using the example of diffusion in the endoplasmic reticulum.
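
To make the idea of turning image data into a simulation domain concrete, here is a minimal, hypothetical sketch: explicit finite-difference diffusion restricted to a binary mask standing in for a segmented organelle shape. The geometry, diffusion coefficient, and grid are toy values, not the endoplasmic reticulum example from the essay.

```python
import numpy as np

def diffuse_on_mask(mask, c0, D=1.0, dx=1.0, dt=0.2, steps=500):
    """Explicit scheme for du/dt = D * laplacian(u) inside `mask`, with no-flux walls.
    Stable here because D * dt / dx**2 = 0.2 <= 0.25."""
    u = c0.astype(float) * mask
    for _ in range(steps):
        lap = np.zeros_like(u)
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            neighbor = np.roll(u, shift, axis=axis)
            inside = np.roll(mask, shift, axis=axis)
            # No-flux boundary: a neighbor outside the domain contributes nothing.
            lap += np.where(inside, neighbor, u) - u
        u = u + (D * dt / dx**2) * lap * mask
    return u

# Toy "organelle": a ring-shaped mask with an initial point source of concentration.
y, x = np.mgrid[0:64, 0:64]
r = np.hypot(x - 32, y - 32)
mask = (r > 10) & (r < 20)
c0 = np.zeros((64, 64))
c0[32, 47] = 1.0                      # source placed inside the ring
c = diffuse_on_mask(mask, c0)
print("mass conserved inside the mask:", bool(np.isclose(c.sum(), c0[mask].sum())))
```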

7.
Like many economic exchanges, industrial symbiosis (IS) is thought to be influenced by social relationships and shared norms among actors in a network. While many implicit references to social characteristics exist throughout the literature, there have been few explicit attempts to operationalize and measure these concepts. The "short mental distance," "trust," "openness," and "communication" recorded among managers in Kalundborg, Denmark, set a precedent for examining and encouraging social interactions among key personnel in the dozens of eco-industrial networks around the world. In this article, we explore the relationships among various aspects of social embeddedness, social capital, and IS. We develop a conceptual framework and an approach using quantitative and qualitative methods to identify and measure these social characteristics, including social network structure, communication, and similarities in norms and conceptions of waste, and apply them to an industrial network in Nanjangud, South India. The findings suggest that there is a fairly high level of shared norms about dealing with waste (the "short mental distance") in this network, but that by-product transactions are only weakly correlated with the structure and content of communication among managers. Replication of this approach can increase the understanding and comparability of the role of social characteristics in eco-industrial activities around the world.
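
As a purely illustrative sketch of how two such network measures could be compared (not necessarily the authors' analysis), the code below runs a QAP-style permutation test of the correlation between a by-product transaction network and a communication network, both encoded as adjacency matrices. The firm count and ties are random toy data.

```python
import numpy as np

def qap_correlation(A, B, n_perm=5000, seed=0):
    """Correlation between two adjacency matrices with a QAP permutation p-value."""
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    off = ~np.eye(n, dtype=bool)                        # ignore self-ties
    obs = np.corrcoef(A[off], B[off])[0, 1]
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(n)
        Bp = B[np.ix_(perm, perm)]                      # relabel the nodes of B
        if abs(np.corrcoef(A[off], Bp[off])[0, 1]) >= abs(obs):
            count += 1
    return obs, (count + 1) / (n_perm + 1)

# Toy data: 8 firms with binary communication and by-product transaction ties.
rng = np.random.default_rng(1)
n = 8
communication = (rng.random((n, n)) < 0.4).astype(int)
transactions = (rng.random((n, n)) < 0.25).astype(int)
np.fill_diagonal(communication, 0)
np.fill_diagonal(transactions, 0)
r, p = qap_correlation(transactions, communication)
print(f"observed correlation = {r:.2f}, QAP p-value = {p:.3f}")
```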

8.
Biofabrication of tissue analogues aspires to become a disruptive technology capable of solving long-standing biomedical problems, from the generation of improved tissue models for drug testing to the alleviation of the shortage of organs for transplantation. Arguably, the most powerful tool of this revolution is bioprinting, understood as the assembly of cells and biomaterials into three-dimensional structures. It is less appreciated, however, that bioprinting is not a uniform methodology but comprises a variety of approaches. These can be broadly classified into two categories, based on whether or not they use supporting biomaterials (known as "scaffolds," usually printable hydrogels also called "bioinks"). Importantly, several limitations of scaffold-dependent bioprinting can be avoided by "scaffold-free" methods. In this overview, we comparatively present these approaches and highlight the rapidly evolving scaffold-free bioprinting as applied to cardiovascular tissue engineering.

9.
A multistage single-arm phase II trial with a binary endpoint is considered. Bayesian posterior probabilities are used to monitor futility in the interim analyses and efficacy in the final analysis. For a beta-binomial model, decision rules based on Bayesian posterior probabilities are converted to "traditional" decision rules in terms of the number of responders among the patients observed so far. Analytical derivations are given for the probability of stopping for futility and for the probability of declaring efficacy. A workflow is presented for selecting the parameters specifying the Bayesian design, and the operating characteristics of the design are investigated. It is outlined how the presented approach can be transferred to statistical models other than the beta-binomial model.
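
The conversion from posterior-probability rules to responder-count boundaries can be illustrated with a short, hypothetical sketch for a beta-binomial model. The prior, probability thresholds, sample sizes, and response rates below are made up and are not the paper's design.

```python
from scipy.stats import beta

def min_responders_for_efficacy(n, p0, a=1.0, b=1.0, threshold=0.95):
    """Smallest r with P(p > p0 | r responders out of n) > threshold."""
    for r in range(n + 1):
        if beta.sf(p0, a + r, b + n - r) > threshold:   # posterior is Beta(a+r, b+n-r)
            return r
    return None

def futility_boundary(n_interim, p1, a=1.0, b=1.0, threshold=0.10):
    """Largest r with P(p > p1 | r of n_interim) < threshold (stop for futility)."""
    stop = None
    for r in range(n_interim + 1):
        if beta.sf(p1, a + r, b + n_interim - r) < threshold:
            stop = r
    return stop

# Example: n = 40 at the final analysis, uninteresting response rate p0 = 0.2,
# one interim futility look after 20 patients against a target rate p1 = 0.4.
print("declare efficacy if responders >=", min_responders_for_efficacy(40, 0.2))
print("stop for futility at interim if responders <=", futility_boundary(20, 0.4))
```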

10.
Characterizing an appropriate dose-response relationship and identifying the right dose in a clinical trial are two main goals of early drug development. MCP-Mod is one of the pioneering approaches developed within the last 10 years that combines modeling techniques with multiple comparison procedures to address these goals in clinical drug development. The MCP-Mod approach begins with a set of potential dose-response models, tests for a significant dose-response effect (proof of concept, PoC) using multiple linear contrast tests, and selects the "best" model among those with a significant contrast test. A disadvantage of the method is that the parameter values of the candidate models need to be fixed a priori for the contrast tests. This may lead to a loss in power and unreliable model selection. For this reason, several variations of the MCP-Mod approach and a hierarchical model selection approach have been suggested in which the parameter values need not be fixed in the proof-of-concept testing step and can be estimated after the model selection step. This paper provides a numerical comparison of the different MCP-Mod variants and the hierarchical model selection approach with regard to their ability to detect the dose-response trend, their potential to select the correct model, and their accuracy in estimating the dose-response shape and the minimum effective dose. Additionally, as one of the approaches is based on two-sided model comparisons only, we make it more consistent with the common goals of a PoC study by extending it to one-sided comparisons between the constant and alternative candidate models in the proof-of-concept step.
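
To make the PoC testing step concrete, here is a heavily simplified, hypothetical sketch of multiple contrast tests with a priori fixed candidate shapes: optimal contrasts are derived from guesstimated model means (equal allocation assumed), and, for brevity, a Bonferroni adjustment stands in for the multivariate-t critical value while the model with the largest significant contrast statistic is "selected." Doses, parameter guesses, and data are invented.

```python
import numpy as np
from scipy import stats

doses = np.array([0.0, 0.05, 0.2, 0.6, 1.0])
candidates = {                                   # guesstimated shapes (parameters fixed a priori)
    "emax": doses / (0.2 + doses),               # ED50 guess = 0.2
    "linear": doses,
    "exponential": np.expm1(doses / 0.7),        # rate guess = 0.7
}

def optimal_contrast(mu):
    c = mu - mu.mean()                           # equal group sizes assumed
    return c / np.linalg.norm(c)

def poc_test(ybar, s2_pooled, n_per_arm, alpha=0.05):
    """One-sided contrast tests for each candidate shape, Bonferroni-adjusted."""
    df = len(doses) * (n_per_arm - 1)
    results = {}
    for name, mu in candidates.items():
        c = optimal_contrast(mu)
        t = (c @ ybar) / np.sqrt(s2_pooled * np.sum(c**2) / n_per_arm)
        results[name] = (t, stats.t.sf(t, df))
    significant = {k: v for k, v in results.items() if v[1] < alpha / len(candidates)}
    best = max(significant, key=lambda k: significant[k][0]) if significant else None
    return results, best

# Toy data: group means simulated under an Emax-like truth.
rng = np.random.default_rng(2)
n_per_arm, sigma = 20, 1.0
truth = 0.2 + 0.6 * doses / (0.25 + doses)
ybar = truth + rng.normal(scale=sigma / np.sqrt(n_per_arm), size=len(doses))
results, best = poc_test(ybar, s2_pooled=sigma**2, n_per_arm=n_per_arm)
print(results, "-> selected model:", best)
```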

11.
12.
A wealth of information on proteins involved in many aspects of disease is encased within the formalin-fixed paraffin-embedded (FFPE) tissue repositories stored in hospitals worldwide. Access to this "hidden treasure" is now being actively pursued through two main extraction strategies: digestion of the entangled protein matrix to generate tryptic peptides, or decrosslinking and extraction of full-length proteins. Here, we describe an optimised method for the extraction of full-length proteins from FFPE tissues. This method builds on the classical "antigen retrieval" technique used for immunohistochemistry and allows the generation of protein extracts with elevated and reproducible yields. In model animal tissues, average yields of 16.3 μg and 86.8 μg of protein were obtained per 80 mm² tissue slice of formalin-fixed paraffin-embedded skeletal muscle and liver, respectively. Protein extracts generated with this method can be used for reproducible investigation of the proteome with a wide array of techniques. The results obtained by SDS-PAGE, western immunoblotting, protein arrays, ELISA, and, most importantly, nanoHPLC-nanoESI-Q-TOF MS of FFPE proteins resolved by SDS-PAGE are presented and discussed. An evaluation of the extent of the modifications introduced on proteins by formalin fixation and crosslink reversal, and of their impact on the quality of MS results, is also reported.

13.
Three-arm noninferiority trials (involving an experimental treatment, a reference treatment, and a placebo), the so-called "gold standard" noninferiority trials, are conducted in patients with mental disorders whenever feasible, but they often fail to show superiority of the experimental treatment and/or the reference treatment over the placebo. One possible reason is that some of the patients receiving the placebo show apparent improvement in their clinical condition. An approach to addressing this problem is the use of the sequential parallel comparison design (SPCD). Nonetheless, the SPCD has not yet been discussed in relation to gold standard noninferiority trials. In this article, our aim was to develop a hypothesis-testing method, and a corresponding sample size calculation method, for gold standard noninferiority trials with the SPCD. In a simulation, we show that the proposed hypothesis-testing method controls the type I error rate at the nominal level and attains the planned power, and that the proposed sample size calculation method has adequate power accuracy.

14.
Animal-borne data loggers (ABDLs) or "tags" are regularly used to elucidate animal ecology and physiology, but the current literature highlights the need to assess associated deleterious impacts, including increased resistive force to motion. Previous studies have used computational fluid dynamics (CFD) to estimate this impact, but many suffer from limitations (e.g., inaccurate turbulence modeling, neglecting boundary layer transition, neglecting added mass effects, and analyzing the ABDL in isolation from the animal). A novel CFD-based method is presented in which a "tag impact envelope" is defined using simulations with and without transition modeling to set the upper and lower drag limits, respectively. Added-mass coefficients are found via simulations with a sinusoidally varying inlet velocity, with modified Navier-Stokes conservation-of-momentum equations enforcing a shift to the animal's noninertial reference frame. The method generates coefficients for calculating total resistive force for any velocity and acceleration combination, and is validated against theory for a prolate spheroid. An example case shows an ABDL drag impact of 11.21%-16.24% on a harp seal, with negligible influence on added mass. By considering the effects of added mass and boundary layer transition, the approach presented is an enhancement of the CFD-based ABDL impact assessment methods previously applied by researchers.
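
The force reconstruction described above can be illustrated with a small, hypothetical calculation combining quasi-steady drag (bounded by the upper and lower limits of a drag-coefficient envelope) with an added-mass term. Every coefficient, reference area, and volume below is a placeholder, not a value from the study.

```python
import numpy as np

RHO_SEAWATER = 1025.0      # kg m^-3

def resistive_force(v, a, cd, area_ref, ca, volume):
    """Quasi-steady drag plus added-mass reaction for speed v (m/s) and acceleration a (m/s^2)."""
    drag = 0.5 * RHO_SEAWATER * cd * area_ref * v**2
    added_mass = ca * RHO_SEAWATER * volume * a
    return drag + added_mass

# Hypothetical "impact envelope": drag coefficients with and without the tag,
# bracketed by fully turbulent (upper) and transitional (lower) simulations.
animal = dict(area_ref=0.12, volume=0.055, ca=0.04)     # frontal area (m^2), volume (m^3)
cd_no_tag = {"lower": 0.050, "upper": 0.060}
cd_with_tag = {"lower": 0.057, "upper": 0.069}

v, a = 1.8, 0.3    # example swim speed and acceleration
for bound in ("lower", "upper"):
    f0 = resistive_force(v, a, cd_no_tag[bound], **animal)
    f1 = resistive_force(v, a, cd_with_tag[bound], **animal)
    print(f"{bound} bound: resistive-force impact of the tag = {100 * (f1 - f0) / f0:.1f} %")
```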

15.
A low-intervention approach to restoration that also allows restoration outcomes to be framed as trajectories of ecosystem change can be described as "open-ended" restoration. It is an approach that recognizes that long-term ecosystem behavior involves continual change at small and large spatial and temporal scales. There are a number of situations in which it is appropriate to adopt an open-ended approach to restoration, including: in remote and large areas; where ecological limiting factors will be changed by future climates; where antecedent conditions cannot be replicated; where there are novel starting points for restoration; where restoration relies strongly on processes outside the restoration area; in inherently dynamic systems; where costs are high; and where the public demands "wildness." Where this approach is adopted, managers need to explain the project and deal with public expectations and public risk. Monitoring the biotic and abiotic components of the project is very important, as an open-ended approach does not equate to "abandon and ignore it."

16.
Chainmail catalysts (transition metals or metal alloys encapsulated in carbon) are regarded as stable and efficient electrocatalysts for hydrogen generation. However, the fabrication of chainmail catalysts usually involves complex chemical vapor deposition (CVD) or prolonged calcination in a furnace, and slurry-based electrode assembly of chainmail catalysts often suffers from inferior mass transfer and an underutilized active surface. In this work, a freestanding, wood-based open carbon framework is designed that is embedded with nitrogen (N)-doped, few-graphene-layer-encapsulated nickel-iron (NiFe) alloy nanoparticles (N-C-NiFe). The 3D wood-derived carbon framework, with numerous open, low-tortuosity lumens decorated with carbon nanotube (CNT) "villi," facilitates electrolyte permeation and hydrogen gas removal. The N-C-NiFe chainmail catalysts are uniformly assembled in situ on the CNT "villi" using a rapid heat-shock treatment. The high heating and quenching rates of the heat-shock method lead to the formation of well-dispersed, ultrafine nanoparticles. The self-supported, wood-based carbon framework decorated with the chainmail catalyst displays high electrocatalytic activity and superior cycling durability for hydrogen evolution. The unique heat-shock method offers a promising strategy for rapidly synthesizing well-dispersed binary and polynary metallic nanoparticles in porous matrices for high-efficiency electrochemical energy storage and conversion.

17.
18.
Mass extinction events (MEEs), defined as significant losses of species diversity within very short time periods, have attracted the attention of biologists because of their link to major environmental change. MEEs have traditionally been studied through the fossil record, but the development of birth-death models has made it possible to detect their signature from extant-taxa phylogenies. Most birth-death models treat MEEs as instantaneous events in which a high proportion of species is simultaneously removed from the tree (the "single pulse" approach), in contrast to the paleontological record, where MEEs have a time duration. Here, we explore the power of a Bayesian Birth-Death Skyline (BDSKY) model to detect the signature of MEEs through changes in extinction rates under a "time-slice" approach. In this approach, MEEs are time intervals in which the extinction rate exceeds the speciation rate. The results showed that BDSKY can detect and locate MEEs, but that precision and accuracy depend on the phylogeny's size and the MEE's intensity. Comparisons of BDSKY with the single-pulse Bayesian model CoMET showed a similar frequency of Type II errors, and neither model exhibited Type I errors. However, while CoMET performed better in detecting and locating MEEs for smaller phylogenies, BDSKY showed higher accuracy in estimating extinction and speciation rates.
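
To give a feel for the "time-slice" idea, the toy forward simulation below tracks lineage counts under a piecewise-constant birth-death process with one interval in which the extinction rate exceeds the speciation rate. It is an illustration only (the paper's analysis fits the BDSKY model to phylogenies), and all rates and interval boundaries are hypothetical.

```python
import numpy as np

def simulate_lineages(t_max=100.0, n0=10, seed=0):
    """Gillespie-style simulation of lineage counts under piecewise-constant rates."""
    rng = np.random.default_rng(seed)
    # (start, end, speciation rate, extinction rate); the middle slice is the MEE,
    # where extinction temporarily exceeds speciation.
    slices = [(0.0, 60.0, 0.10, 0.05),
              (60.0, 63.0, 0.10, 0.80),
              (63.0, t_max, 0.10, 0.05)]
    t, n, history = 0.0, n0, [(0.0, n0)]
    while t < t_max and n > 0:
        # Rates are looked up at the time of each event draw; waiting times that
        # cross a slice boundary are handled only approximately (fine for a toy).
        lam, mu = next((lam, mu) for s, e, lam, mu in slices if s <= t < e)
        t += rng.exponential(1.0 / (n * (lam + mu)))
        if t >= t_max:
            break
        n += 1 if rng.random() < lam / (lam + mu) else -1
        history.append((t, n))
    return history

hist = simulate_lineages()
peak_before = max(n for t, n in hist if t < 60.0)
print(f"peak richness before the MEE slice: {peak_before}, final richness: {hist[-1][1]}")
```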

19.
Successful deployment of machine learning algorithms in healthcare requires careful assessment of their performance and safety. To date, the FDA approves locked algorithms prior to marketing and requires future updates to undergo separate premarket reviews. However, this negates a key feature of machine learning: the ability to learn from a growing dataset and improve over time. This paper frames the design of an approval policy, which we refer to as an automatic algorithmic change protocol (aACP), as an online hypothesis testing problem. As this process has an obvious analogy with noninferiority testing of new drugs, we investigate how repeated testing and adoption of modifications might lead to gradual deterioration in prediction accuracy, also known as "biocreep" in the drug development literature. We consider simple policies that one might adopt but that do not necessarily offer any error-rate guarantees, as well as policies that do provide error-rate control. For the latter, we define two online error rates appropriate for this context: the bad approval count (BAC) and the bad approval and benchmark ratios (BABR). We control these rates in the simple setting of a constant population and data source using the policies aACP-BAC and aACP-BABR, which combine alpha-investing, group-sequential, and gate-keeping methods. In simulation studies, biocreep regularly occurred when using policies with no error-rate guarantees, whereas aACP-BAC and aACP-BABR controlled the rate of biocreep without substantially impacting our ability to approve beneficial modifications.
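
To illustrate the online-testing ingredient mentioned above, here is a minimal sketch of a generic alpha-investing rule (in the style of Foster and Stine) applied to a stream of proposed algorithm modifications. It is not the paper's aACP-BAC or aACP-BABR policies, which additionally combine group-sequential and gate-keeping components; the wealth parameters, spending rule, and p-value stream are hypothetical.

```python
import numpy as np

def alpha_investing(p_values, w0=0.05, payout=0.05, spend_frac=0.5):
    """Approve/reject a stream of modification tests while tracking alpha-wealth."""
    wealth, decisions = w0, []
    for p in p_values:
        alpha_j = spend_frac * wealth / (1.0 + wealth)   # never spend more than we have
        reject = p <= alpha_j
        if reject:
            wealth += payout                              # earn wealth back on an approval
        else:
            wealth -= alpha_j / (1.0 - alpha_j)           # pay for the failed test
        decisions.append(bool(reject))
    return decisions

# Toy stream: each "modification" is tested against the current benchmark;
# the first few are genuinely better (tiny p-values), the rest are null.
rng = np.random.default_rng(3)
p_stream = np.concatenate([rng.uniform(0, 0.001, size=3), rng.uniform(0, 1, size=20)])
print(alpha_investing(p_stream))
```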

20.
Protein S-nitrosylation is a reversible post-translational modification of protein cysteines that is increasingly being considered a signal transduction mechanism. The "biotin switch" technique marked the beginning of the study of the S-nitrosoproteome; it is based on the specific replacement of the labile S-nitrosylation by a more stable biotinylation that allows subsequent detection and purification. However, its application to proteomic studies is limited by its relatively low sensitivity, so typical proteomic experiments require large quantities of protein extract, which precludes the use of this method in a number of biological settings. We have developed a "fluorescence switch" technique that, when coupled to 2-DE proteomic methodologies, allows the detection and identification of S-nitrosylated proteins from limited amounts of starting material, thus significantly improving sensitivity. We applied this methodology to detect proteins that become S-nitrosylated in endothelial cells exposed to S-nitroso-L-cysteine, a physiological S-nitrosothiol, identifying already known S-nitrosylation targets as well as novel ones. This "fluorescence switch" approach also allowed us to identify several proteins that are denitrosylated by thioredoxin in cytokine-activated RAW264.7 (murine macrophage) cells. We believe that this method represents an improvement for identifying S-nitrosylated proteins under physiological conditions.
