Similar Documents
 20 similar documents found (search time: 31 ms)
1.
G-protein-coupled receptors (GPCRs) comprise the largest and most pharmacologically targeted membrane protein family. Here, we used the visual receptor rhodopsin as an archetype for understanding how membrane lipids influence the conformational changes involved in GPCR activation. Visual rhodopsin was recombined with lipids varying in their degree of acyl chain unsaturation and polar headgroup size, using 1-palmitoyl-2-oleoyl-sn-glycero- and 1,2-dioleoyl-sn-glycerophospholipids with phosphocholine (PC) or phosphoethanolamine (PE) substituents. The receptor activation profile after light excitation was measured using time-resolved ultraviolet-visible spectroscopy. We discovered that the more saturated POPC lipids shifted the equilibrium back toward the inactive state, whereas the small-headgroup, highly unsaturated DOPE lipids favored the active state. Increasing unsaturation and decreasing headgroup size have similar effects that combine to control rhodopsin activation, implicating factors beyond proteolipid solvation energy and bilayer surface electrostatics. Hence, we consider a balance of curvature free energy with hydrophobic matching and demonstrate how our data support a flexible surface model (FSM) for the coupling between proteins and lipids. The FSM is based on the Helfrich formulation of membrane bending energy, which we previously were the first to apply to lipid-protein interactions. Membrane elasticity and curvature strain are induced by lateral pressure imbalances between the constituent lipids and drive key physiological processes at the membrane level. Spontaneous negative monolayer curvature toward water is mediated by unsaturated, small-headgroup lipids and couples directly to GPCR activation upon light absorption by rhodopsin. For the first time to our knowledge, we demonstrate this modulation in both the equilibrium and the pre-equilibrium evolving states using a time-resolved approach.
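The flexible surface model invoked here rests on the Helfrich formulation of membrane bending energy. For reference, the standard Helfrich curvature free energy per unit area can be written as follows (a textbook form, not the paper's specific parameterization):

```latex
% Helfrich curvature free energy per unit area (standard form)
f_c = \frac{\kappa}{2}\,(c_1 + c_2 - c_0)^2 + \bar{\kappa}\, c_1 c_2
```

Here κ is the bending modulus, κ̄ the Gaussian (saddle-splay) modulus, c₁ and c₂ the principal curvatures, and c₀ the spontaneous curvature; unsaturated, small-headgroup lipids such as DOPE make c₀ more negative, producing the curvature strain the abstract links to rhodopsin activation.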

2.
3.
4.
Summary: This paper is the second of two on statistical considerations for researchers doing in vitro plant biology research. The first paper covered the stages from developing a research plan through data collection. This paper continues with information about editing data, handling outliers, analyzing quantitative and qualitative data, comparing treatment means, preparing graphs and tables, and presenting results.
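Two of the paper's topics, outlier handling and comparison of treatment means, are generic enough to sketch; the data, group names, and threshold below are hypothetical and not taken from the paper:

```python
# Minimal sketch: screen outliers, then compare treatment means (hypothetical data).
import numpy as np
from scipy import stats

treatments = {
    "control":   np.array([4.1, 3.9, 4.3, 4.0, 9.8]),   # 9.8 looks suspect
    "hormone_A": np.array([5.2, 5.5, 5.1, 5.4, 5.3]),
    "hormone_B": np.array([6.1, 6.4, 5.9, 6.2, 6.0]),
}

def drop_outliers(x, z_thresh=3.0):
    """Drop points more than z_thresh robust z-scores from the median."""
    mad = stats.median_abs_deviation(x, scale="normal")
    z = (x - np.median(x)) / mad
    return x[np.abs(z) < z_thresh]

cleaned = {k: drop_outliers(v) for k, v in treatments.items()}

# One-way ANOVA across the cleaned treatment groups.
f_stat, p_val = stats.f_oneway(*cleaned.values())
print(f"F = {f_stat:.2f}, p = {p_val:.4f}")
```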

5.
In a study of 1,609 single live births occurring in San Francisco County, the information on the birth certificate was compared with that on the hospital record to determine completeness and accuracy of the items reported on the certificate. Items such as color or race of mother, age of mother, birth weight and birth length of child were well recorded on the certificate and agreed with information found in the hospital record. Medical conditions were grossly underreported on the birth certificate. Conditions relating to the mother were more frequently recorded than those relating to the infant, but the birth certificates recorded less than one-fifth of all medical conditions of both mother and infant that were entered in the hospital records. Methods suggested for improving the quality of maternal and newborn morbidity information include revision of the medical section of the present certificates of live birth and fetal death and use of a precoded hospital record.

6.
7.
Identification of differentially expressed (DE) genes across two conditions is a common task with microarrays. Most existing approaches examine each gene separately under a model and then control the false discovery rate over all genes. We took a different approach that uses a uniform platform to simultaneously depict the dynamics of the gene trajectories for all genes and to select differentially expressed genes. A new functional principal component (FPC) approach is developed for time-course microarray data to borrow strength across genes. The approach is flexible: the temporal trajectory of gene expression is modeled nonparametrically through a set of orthogonal basis functions, and fewer basis functions are often needed to capture the shape of the expression trajectory than in existing nonparametric methods. These basis functions are estimated from the data and reflect the major modes of variation in the data. The correlation structure of gene expression over time is also incorporated without any parametric assumptions and is estimated from all genes, so that information across genes can be shared when inferring any individual gene. Estimation of the parameters is carried out by an efficient hybrid EM algorithm. In simulations, the proposed method compared favorably across different scenarios with two-way mixed-effects ANOVA and with the EDGE method using B-spline basis functions. Application to real data on C. elegans developmental stages also suggested that FPC analysis combined with the hybrid EM algorithm provides a computationally fast and efficient method for identifying DE genes from time-course microarray data.
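Since the paper's estimator is a hybrid EM algorithm whose details are not given in the abstract, the sketch below illustrates only the core FPC idea on simulated trajectories: shared basis functions estimated as eigenvectors of the pooled across-gene covariance, onto which each gene is projected.

```python
# Minimal FPCA illustration on simulated gene trajectories.
# This is NOT the paper's hybrid EM estimator, just the core FPC idea.
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_times = 500, 12
t = np.linspace(0, 1, n_times)

# Simulate trajectories as random mixtures of two smooth modes plus noise.
mode1, mode2 = np.sin(np.pi * t), np.cos(2 * np.pi * t)
scores = rng.normal(size=(n_genes, 2))
X = scores @ np.vstack([mode1, mode2]) + 0.3 * rng.normal(size=(n_genes, n_times))

# Center, then eigendecompose the time-by-time covariance pooled over genes.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / n_genes
evals, evecs = np.linalg.eigh(cov)
evals, evecs = evals[::-1], evecs[:, ::-1]      # descending eigenvalue order

var_explained = evals / evals.sum()
print("variance explained by first 3 FPCs:", np.round(var_explained[:3], 3))

# Project each gene onto the leading eigenfunctions -> low-dimensional scores.
fpc_scores = Xc @ evecs[:, :2]
```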

8.
9.
10.
One hundred cases of common bile duct exploration were reviewed in an attempt to obtain information that might give insight into the diagnosis and definitive treatment of choledocholithiasis. Fifty of the hundred patients had common duct stones. Correlations were made between the incidence of choledocholithiasis as proved at operation and the following factors: the kind and number of choledochal exploratory criteria used, the clinical diagnosis of common duct stones, and the pathologic features of the gallbladders removed. The incidence of stones was statistically related to increasing age. The most frequent choledochal exploratory criteria were common duct dilatation or thickening (63 cases) and a history of jaundice (50 cases). The most reliable single criterion in "diagnosing" common duct stones was palpable common or hepatic duct stones, the diagnosis having been correct in 15 of 17 such cases. The most reliable combination of criteria was a history of jaundice plus palpable stones, with a correct diagnosis in all such cases. The clinical diagnosis of choledocholithiasis was correct in only 17 per cent of cases. Correlating the incidence of common duct stones with the degree of gallbladder disease, acute or chronic, did not provide information helpful in diagnosing choledocholithiasis. The incidence of proven retained common duct stones was 3 per cent, the non-fatal postoperative complication rate was 21 per cent, and the operative mortality was 1 per cent.

11.
Chelated lanthanides such as europium (Eu) have uniquely long fluorescence emission half-lives, permitting their use in time-resolved fluorescence (TRF) assays. In Förster resonance energy transfer (FRET), a donor fluorophore transfers its emission energy to an acceptor fluorophore if the two are in sufficiently close proximity. The use of time-resolved (TR) FRET minimizes the autofluorescence of molecules present in biological samples. In this report, we describe a homogeneous immunoassay prototype utilizing TR-FRET for the detection of antibodies in solution. The assay is based on labeled protein L, a bacterial protein that binds to the immunoglobulin (Ig) light chain, and a labeled antigen, which upon association with the same Ig molecule produce a TR-FRET-active complex. We show that the approach is functional and can be utilized for both mono- and polyvalent antigens. We also compare the assay performance to that of another homogeneous TR-FRET immunoassay reported earlier. This novel assay may have wide utility in infectious-disease point-of-care diagnostics.
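The proximity requirement mentioned above is quantified by the standard Förster relation (general FRET background, not a result of this report):

```latex
% Förster relation: transfer efficiency E vs donor-acceptor distance r
E = \frac{R_0^6}{R_0^6 + r^6}
```

R₀ is the Förster distance at which E = 50%; because E falls off with the sixth power of r, an appreciable TR-FRET signal arises only when the labeled protein L and the labeled antigen are bound to the same Ig molecule.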

12.

Background

School-aged children are a key link in the transmission of influenza. Most cases have little or no interaction with health services and are therefore missed by the majority of existing surveillance systems. As part of a public-engagement-with-science project, this study aimed to establish a web-based system for the collection of routine school absence data and to determine whether school absence prevalence was correlated with established surveillance measures of circulating influenza.

Methods

We collected data for two influenza seasons (2011/12 and 2012/13). The primary outcome was daily school absence prevalence (weighted to be nationally representative) for children aged 11 to 16. School absence prevalence was triangulated, graphically and through univariable linear regression, against the Royal College of General Practitioners (RCGP) influenza-like illness (ILI) episode incidence rate, national microbiological surveillance data on the proportion of samples positive for influenza (A and B), rhinovirus, and RSV, and laboratory-confirmed cases of norovirus.
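As an illustration of the univariable regression step described above, here is a minimal sketch with hypothetical weekly values (not the study's data):

```python
# Minimal sketch of the univariable linear regression step: weekly school
# absence prevalence vs RCGP ILI episode incidence. Numbers are hypothetical.
import numpy as np
from scipy import stats

absence_prev = np.array([1.2, 1.5, 2.1, 3.4, 4.8, 4.1, 2.9, 2.0])         # % absent
ili_rate     = np.array([8.0, 10.5, 14.2, 22.9, 30.1, 27.4, 18.8, 13.0])  # per 100,000

res = stats.linregress(absence_prev, ili_rate)
print(f"slope = {res.slope:.2f}, r = {res.rvalue:.2f}, p = {res.pvalue:.4f}")
```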

Results

Twenty-seven schools submitted data over the two respiratory seasons. During the first season, levels of influenza as measured by school absence prevalence and by established surveillance were low. In the 2012/13 season, school absence prevalence peaked in week 51, and the RCGP ILI surveillance data peaked in week 1. Linear regression showed a strong association between school absence prevalence and RCGP ILI (all ages, and 5-14 year olds) and laboratory-confirmed cases of influenza A and B, and weak evidence for a linear association with rhinovirus and norovirus.

Interpretation

This study provides initial evidence for using routine illness-related school absence prevalence as a novel tool for influenza surveillance. The network of web-based data collection platforms we established through active engagement provides an innovative model for conducting scientific research and could be used for a wide range of infectious disease studies in the future.

13.
This paper presents the findings of the Belmont Forum's survey on open data, which targeted the global environmental research and data infrastructure community. It highlights users' perceptions of the term "open data", expectations of infrastructure functionalities, and barriers and enablers for the sharing of data. Respondents pointed to a wide range of good-practice examples, demonstrating substantial uptake of data sharing through e-infrastructures and a further need for enhancement and consolidation. Among all policy responses, funder policies appear to be the most important motivator, supporting the conclusion that stronger mandates will strengthen the case for data sharing.

14.
As clinical and cognitive neuroscience mature, the need for sophisticated neuroimaging analysis becomes more apparent. Multivariate analysis techniques have recently received increasing attention because they have many attractive features that cannot be easily realized by the more commonly used univariate, voxel-wise techniques. Multivariate approaches evaluate the correlation/covariance of activation across brain regions rather than proceeding voxel by voxel, so their results can be interpreted more readily as a signature of neural networks; univariate approaches cannot directly address functional connectivity in the brain. The covariance approach can also yield greater statistical power than univariate techniques, which are forced to employ very stringent, and often overly conservative, corrections for voxel-wise multiple comparisons. Multivariate techniques also lend themselves much better to prospective application of results from the analysis of one dataset to entirely new datasets. They are thus well placed to provide information about mean differences and correlations with behavior, as univariate approaches do, but with potentially greater statistical power and better reproducibility checks. Set against these advantages is the high barrier to entry for multivariate approaches, which has prevented their more widespread application in the community. To a neuroscientist becoming familiar with multivariate analysis techniques, an initial survey of the field may present a bewildering variety of approaches that, although algorithmically similar, are presented with different emphases, typically by people with mathematics backgrounds. We believe that multivariate analysis techniques have sufficient potential to warrant better dissemination, so that researchers can employ them in an informed and accessible manner. The following article attempts to provide a basic introduction, with sample applications to simulated and real-world datasets.
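To make the univariate/multivariate contrast concrete, the toy sketch below runs voxel-wise t-tests and, on the same simulated data, extracts a single covariance pattern by SVD; it is an illustrative example, not any specific published pipeline:

```python
# Toy contrast: mass-univariate voxel-wise t-tests vs one multivariate
# covariance pattern (SVD/PCA) on the same simulated "activation" data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_subj, n_vox = 40, 2000

# Simulate a distributed network (one spatial pattern) expressed more
# strongly in group B than in group A, plus independent voxel noise.
pattern = rng.normal(size=n_vox)
expression = np.r_[rng.normal(0.0, 1.0, 20), rng.normal(1.0, 1.0, 20)]
data = np.outer(expression, pattern) + rng.normal(size=(n_subj, n_vox))
group = np.r_[np.zeros(20, bool), np.ones(20, bool)]

# Univariate route: one t-test per voxel, Bonferroni-corrected.
t, p = stats.ttest_ind(data[group], data[~group], axis=0)
print("voxels surviving Bonferroni:", np.sum(p < 0.05 / n_vox))

# Multivariate route: leading principal component as a network pattern,
# then a single t-test on each subject's pattern-expression score.
u, s, vt = np.linalg.svd(data - data.mean(axis=0), full_matrices=False)
scores = u[:, 0] * s[0]
print("pattern-score t-test:", stats.ttest_ind(scores[group], scores[~group]))
```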

15.
16.
The aggregation of α-synuclein is thought to play a role in the death of dopamine neurons in Parkinson's disease (PD). α-Synuclein transitions through an aggregation pathway consisting of pathogenic species referred to as protofibrils (or oligomers), which ultimately convert to mature fibrils. The structural heterogeneity and instability of protofibrils have significantly impeded progress toward understanding their structural characteristics and the puzzle of amyloid aggregation. Here we report, to our knowledge for the first time, on the structural characteristics of α-synuclein protofibrils using cryo-electron microscopy. Statistical analysis of annular protofibrils revealed a constant wall thickness as a common feature. Visualization of the assembly steps enabled us to propose a novel, to our knowledge, mechanism for α-synuclein aggregation involving ring-opening and protofibril-protofibril interaction events. Ion-channel-like protofibrils and their membrane permeability have also been found in other amyloid diseases, suggesting a common molecular mechanism of pathological aggregation. Our direct visualization of the aggregation pathway of α-synuclein opens up fresh opportunities to advance the understanding of protein aggregation mechanisms relevant to many amyloid diseases. In turn, this information would enable the development of additional therapeutic strategies aimed at suppressing the toxic protofibrils of amyloid proteins involved in neurological disorders.

17.
18.
Suicide rates follow a seasonal pattern that is related to rising air temperature and global radiation, and these findings are reproducible across different climatic regions. Numerous studies have attempted to explain this peak in relation to weather; however, many did not use meteorological data representative of the site of the suicide or attempted suicide, and previous studies also suffered from limitations in the methods of data analysis. The current study examined the relationship between weather (solar radiation and air temperature) and the rate of suicides and suicidality in the area of Mittelfranken, Germany, using regional meteorological data. Statistical risk estimation revealed associations between higher global radiation and air temperatures on the day of, and the day before, suicidal acts. The results could be of interest for general suicide prevention strategies. Future studies should examine additional possible influencing factors and adopt a strictly standardized study design, with the aim of obtaining reproducible data on seasonal influences on suicidal behavior that allow comparison across meteorological regions and patient subgroups.
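The abstract does not name the risk-estimation model; one common choice for daily event counts is Poisson regression on same-day and previous-day weather, sketched here with simulated data (variable names and effect sizes are hypothetical):

```python
# Hedged sketch: Poisson regression of daily counts on same-day and lagged
# weather covariates. Data are simulated; the study's model is unspecified.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_days = 365
day = np.arange(n_days)
temp = 10 + 10 * np.sin(2 * np.pi * day / 365) + rng.normal(0, 2, n_days)
radiation = np.clip(150 + 100 * np.sin(2 * np.pi * day / 365), 0, None)

# Simulated counts with a weak positive temperature effect.
counts = rng.poisson(np.exp(-0.5 + 0.02 * temp))

# Design matrix: same-day and previous-day temperature and radiation.
X = sm.add_constant(np.column_stack(
    [temp[1:], temp[:-1], radiation[1:], radiation[:-1]]))
model = sm.GLM(counts[1:], X, family=sm.families.Poisson()).fit()
print(model.summary())
```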

19.
Jajoo A., Bharti S., Kawamori A. Photosynthetica, 2004, 42(1): 59–64.
The decay of the tyrosine cation radical was found to be biphasic at 253 K. The fast phase corresponds to the YZ component, while the slow phase corresponds to the tyrosine D radical (YD) component. At 253 K, the t½ value was 28.6 s for the fast phase and 190.7 s for the slow phase. The fast phase is attributed to charge recombination between YZ and QA⁻. The activation energy for the reaction of YZ with QA⁻ between 253 and 293 K was 48 kJ mol⁻¹ in Cl⁻-depleted photosystem 2 (PS2) membranes. Both the decay rate and the amplitude of the PAR-induced YZ signal were affected by the addition of chloride anions. Changes in the decay rate and the amplitude of the PAR-induced YZ signal were also observed when other anions such as Br⁻, I⁻, F⁻, HCO₃⁻, NO₃⁻, and PO₄³⁻ were substituted into the Cl⁻-depleted PS2.
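A biphasic decay of this kind is conventionally fitted as a sum of two exponentials, and the reported half-lives convert to first-order rate constants via k = ln 2 / t½ (a standard relation, not stated in the abstract):

```latex
% Biphasic first-order decay of the tyrosine radical signal
S(t) = A_f\, e^{-k_f t} + A_s\, e^{-k_s t}, \qquad k = \frac{\ln 2}{t_{1/2}}
```

With t½ = 28.6 s and 190.7 s, this gives k_f ≈ 2.4 × 10⁻² s⁻¹ and k_s ≈ 3.6 × 10⁻³ s⁻¹ at 253 K.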

20.
Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed those of other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a "four-headed beast": it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the "genomical" challenges of the next decade.

Astronomy has faced the challenges of Big Data for over 20 years and continues with ever-more ambitious studies of the universe. YouTube burst on the scene in 2005 and has sparked extraordinary worldwide interest in creating and sharing huge numbers of videos. Twitter, created in 2006, has become the poster child of the burgeoning movement in computational social science [6], with unprecedented opportunities for new insights by mining the enormous and ever-growing amount of textual data [7]. Particle physics also produces massive quantities of raw data, although its footprint is surprisingly limited, since the vast majority of data are discarded soon after acquisition using processing power coupled to the sensors [8]. Consequently, we do not cover that domain in full detail here, although its model of rapid filtering and analysis will surely play an increasingly important role in genomics as the field matures. To compare these four disparate domains, we considered the four components that comprise the "life cycle" of a dataset: acquisition, storage, distribution, and analysis (see the table below).
| Data phase   | Astronomy | Twitter | YouTube | Genomics |
|--------------|-----------|---------|---------|----------|
| Acquisition  | 25 zettabytes/year | 0.5–15 billion tweets/year | 500–900 million hours/year | 1 zetta-bases/year |
| Storage      | 1 EB/year | 1–17 PB/year | 1–2 EB/year | 2–40 EB/year |
| Analysis     | In situ data reduction; real-time processing; massive volumes | Topic and sentiment mining; metadata analysis | Limited requirements | Heterogeneous data and analysis; variant calling (~2 trillion central processing unit (CPU) hours); all-pairs genome alignments (~10,000 trillion CPU hours) |
| Distribution | Dedicated lines from antennae to server (600 TB/s) | Small units of distribution | Major component of modern user's bandwidth (10 MB/s) | Many small (10 MB/s) and fewer massive (10 TB/s) data movements |
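As a back-of-envelope check on the table's genomics acquisition figure, the sketch below converts 1 zetta-bases/year to raw bytes, assuming an idealized 2 bits per base (an assumption; real sequence formats carry quality scores and metadata and therefore need more space):

```python
# Back-of-envelope: raw information content of 1 zetta-bases/year of sequence.
# Assumes an idealized 2 bits/base and ignores quality scores and metadata.
BASES_PER_YEAR = 1e21   # 1 zetta-bases/year, from the table's genomics row
BITS_PER_BASE = 2       # A/C/G/T encoded in 2 bits: a lower bound

bytes_per_year = BASES_PER_YEAR * BITS_PER_BASE / 8
print(f"{bytes_per_year / 1e18:.0f} EB/year at 2 bits/base")  # ~250 EB/year
```

Even this lower bound far exceeds the table's 2–40 EB/year storage estimate, suggesting substantial reduction and compression between acquisition and storage.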
