Full-text access type
Paid full text | 3,270 papers |
Free | 200 papers |
Free (domestic) | 54 papers |
Subject classification
3,524 papers |
Publication year
2023 | 31 papers |
2022 | 29 papers |
2021 | 84 papers |
2020 | 90 papers |
2019 | 85 papers |
2018 | 102 papers |
2017 | 91 papers |
2016 | 115 papers |
2015 | 99 papers |
2014 | 170 papers |
2013 | 257 papers |
2012 | 99 papers |
2011 | 130 papers |
2010 | 120 papers |
2009 | 158 papers |
2008 | 188 papers |
2007 | 196 papers |
2006 | 152 papers |
2005 | 113 papers |
2004 | 122 papers |
2003 | 108 papers |
2002 | 88 papers |
2001 | 80 papers |
2000 | 87 papers |
1999 | 82 papers |
1998 | 56 papers |
1997 | 63 papers |
1996 | 51 papers |
1995 | 39 papers |
1994 | 43 papers |
1993 | 38 papers |
1992 | 23 papers |
1991 | 26 papers |
1990 | 20 papers |
1989 | 29 papers |
1988 | 22 papers |
1987 | 25 papers |
1986 | 18 papers |
1985 | 18 papers |
1984 | 18 papers |
1983 | 11 papers |
1982 | 27 papers |
1981 | 11 papers |
1980 | 23 papers |
1979 | 9 papers |
1978 | 13 papers |
1977 | 14 papers |
1976 | 9 papers |
1975 | 11 papers |
1974 | 8 papers |
Sort order: 3,524 results found, search time 15 ms
51.
Cognition is not directly measurable. It is assessed using psychometric tests, which can be viewed as quantitative measures of cognition with error. The aim of this article is to propose a model describing the evolution in continuous time of unobserved cognition in the elderly and to assess the impact of covariates directly on it. The latent cognitive process is defined using a linear mixed model including a Brownian motion and time-dependent covariates. The observed psychometric tests are treated as the results of parameterized nonlinear transformations of the latent cognitive process at discrete occasions. The parameters of both the transformations and the linear mixed model are estimated by maximizing the observed likelihood, and graphical methods are used to assess the goodness of fit of the model. The method is applied to data from PAQUID, a French prospective cohort study of ageing.
52.
The pathophysiological mechanisms of progressive demyelinating disorders including multiple sclerosis are incompletely understood. Increasing evidence indicates a role for trace metals in the progression of several neurodegenerative disorders. The study of Skogholt disease, a recently discovered demyelinating disease affecting both the central and peripheral nervous system, might shed some light on the mechanisms underlying demyelination. Cerebrospinal fluid iron and copper concentrations are about four times higher in Skogholt patients than in controls. The transit into cerebrospinal fluid of these elements from blood probably occurs in protein bound form. We hypothesize that exchangeable fractions of iron and copper are further transferred from cerebrospinal fluid into myelin, thereby contributing to the pathogenesis of demyelination. Free or weakly bound iron and copper ions may exert their toxic action on myelin by catalyzing production of oxygen radicals. Similarities to demyelinating processes in multiple sclerosis and other myelinopathies are discussed.
53.
Climate change vulnerability assessment is a complex form of risk assessment which accounts for both geophysical and socio-economic components of risk. In indicator-based vulnerability assessment (IBVA), indicators are used to rank the vulnerabilities of socio-ecological systems (SESs). The predominant aggregation approach in the literature, sometimes based on multi-attribute utility theory (MAUT), typically builds a global-scale utility function based on weighted summation to generate rankings. However, the corresponding requirements of additive independence and complete knowledge of system interactions by the analyst are rarely if ever satisfied in IBVA. We build an analogy between the structures of Multi-Criteria Decision Analysis (MCDA) and IBVA problems and show that a set of techniques called outranking methods, developed in MCDA to deal with criteria incommensurability, data uncertainty and preference imprecision, offers IBVA a sound alternative to additive or multiplicative aggregation. We reformulate IBVA problems within an outranking framework, define thresholds of difference and use an outranking method, ELECTRE III, to assess the relative vulnerability to heat stress of 15 local government areas in metropolitan Sydney. We find that the ranking outcomes are robust and argue that an outranking approach is better suited for assessments characterized by a mix of qualitative, semi-quantitative and quantitative indicators, threshold effects and uncertainties about the exact relationships between indicators and vulnerability.
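The concordance step of an ELECTRE III-style outranking comparison can be sketched as follows. The indicator values, weights, and thresholds are hypothetical, and a full ELECTRE III analysis would also compute discordance and a credibility index before ranking:

```python
import numpy as np

# Hypothetical vulnerability indicators (rows: areas, cols: indicators,
# higher = more vulnerable) with assumed weights and thresholds.
scores = np.array([
    [0.8, 0.3, 0.6],   # area A
    [0.5, 0.7, 0.4],   # area B
    [0.2, 0.9, 0.5],   # area C
])
weights = np.array([0.5, 0.3, 0.2])
q = 0.05   # indifference threshold
p = 0.30   # preference threshold

def concordance(a, b):
    """Weighted support for 'a is at least as vulnerable as b'.
    Per criterion: 1 if b exceeds a by at most q, 0 if by at least p,
    linearly interpolated in between (ELECTRE III style)."""
    d = scores[b] - scores[a]                       # how much b exceeds a
    c = np.clip((p - d) / (p - q), 0.0, 1.0)
    return float(np.dot(weights, c))

C = np.array([[concordance(i, j) for j in range(3)] for i in range(3)])
print(C.round(2))
```

Each off-diagonal entry C[i, j] is the evidence that area i outranks area j; thresholds q and p encode the "thresholds of difference" mentioned in the abstract.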
54.
Objective: The purpose of this work is to evaluate the impact of optimization of magnification on performance parameters of the variable resolution X-ray (VRX) CT scanner.
Methods: A realistic model based on an actual VRX CT scanner was implemented in the GATE Monte Carlo simulation platform. To evaluate the influence of system magnification, the spatial resolution, field of view (FOV) and scatter-to-primary ratio of the scanner were estimated for both fixed and optimum object magnification at each detector rotation angle. These performance parameters were compared angle by angle to determine the appropriate object position at each opening half angle.
Results: Optimization of magnification resulted in a trade-off between spatial resolution and FOV of the scanner at opening half angles of 90°–12°, where the spatial resolution increased by up to 50% and the scatter-to-primary ratio decreased from 4.8% to 3.8% at a detector angle of about 90° for the same FOV and X-ray energy spectrum. The disadvantage of magnification optimization at these angles is a significant reduction of the FOV (up to 50%). Magnification optimization was clearly beneficial for opening half angles below 12°, improving the spatial resolution from 7.5 cy/mm to 20 cy/mm while the FOV increased by more than 50% at these angles.
Conclusion: Optimization of magnification is essential for opening half angles below 12°. For opening half angles between 90° and 12°, the VRX CT scanner magnification should be set according to the desired spatial resolution and FOV.
55.
Irradiation delivered by a synchrotron facility using a set of highly collimated, narrow and parallel photon beams spaced by 1 mm or less has been termed Microbeam Radiation Therapy (MRT). The tolerance of healthy tissue after MRT was found to be better than after standard broad X-ray beams, together with a more pronounced response of malignant tissue. The microbeam spacing and the transverse peak-to-valley dose ratio (PVDR) are considered the relevant biological MRT parameters. We investigated the MRT concept for proton microbeams, where, due to differences between the interactions of proton and photon beams in tissue, we expected different depth-dose profiles and PVDR dependences, resulting in skin sparing and homogeneous dose distributions at larger beam depths. Using the FLUKA Monte Carlo code we simulated PVDR distributions for differently spaced 0.1 mm (sigma) pencil-beams of entrance energies 60, 80, 100 and 120 MeV irradiating a cylindrical water phantom, with and without a bone layer, representing a human head. We calculated PVDR distributions and evaluated the uniformity of target irradiation at the distal beam ranges of the 60–120 MeV microbeams. We also calculated PVDR distributions for a 60 MeV spread-out Bragg peak microbeam configuration. Application of proton MRT optimised in terms of spot size, pencil-beam distribution, entrance beam energy and multiport irradiation, combined with relevant radiobiological investigations, could pave the way for hypofractionation scenarios in which tissue sparing at the entrance, better malignant tissue response and better dose conformity of target volume irradiation could be achieved, compared with present proton beam radiotherapy configurations.
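The peak-to-valley dose ratio itself is easy to illustrate with a toy one-dimensional profile of Gaussian pencil-beams. The 0.1 mm sigma matches the abstract; the 1 mm pitch and the primary-dose-only model are simplifying assumptions, so the resulting PVDR is far higher than values that include scatter:

```python
import numpy as np

sigma = 0.1        # lateral beam sigma (mm), as in the simulated pencil-beams
pitch = 1.0        # centre-to-centre beam spacing (mm), assumed
n_beams = 21

x = np.linspace(-5.0, 5.0, 20001)  # lateral position (mm)
centres = (np.arange(n_beams) - n_beams // 2) * pitch

# Superpose identical Gaussian lateral profiles (toy transverse dose model,
# primary dose only, no scatter tails).
dose = sum(np.exp(-0.5 * ((x - c) / sigma) ** 2) for c in centres)

peak = dose[np.argmin(np.abs(x))]                 # on a beam axis
valley = dose[np.argmin(np.abs(x - pitch / 2))]   # midway between beams
pvdr = peak / valley
print(f"PVDR = {pvdr:.3g}")
```

Adding a scatter component to the valley (as a Monte Carlo transport code does) is what brings simulated PVDRs down to the clinically discussed range.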
56.
Hellmich M. Biometrics, 2001, 57(3): 892-898
To benefit from the substantial overhead expenses of a large group sequential clinical trial, the simultaneous investigation of several competing treatments has become more popular. If at some interim analysis any treatment arm reveals itself to be inferior to any other treatment under investigation, this inferior arm may be, or may even need to be, dropped for ethical and/or economic reasons. Recently proposed methods for the monitoring and analysis of group sequential clinical trials with multiple treatment arms are compared and discussed. The main focus of the article is on the application and extension of (adaptive) closed testing procedures in the group sequential setting that strongly control the familywise error rate. A numerical example is given for illustration.
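The closed testing principle mentioned here can be sketched in a few lines: an elementary hypothesis is rejected only if every intersection hypothesis containing it is rejected by a local test. Using Bonferroni local tests (an assumption for illustration; the paper's group sequential extensions are more involved), the closure reduces to the familiar Holm procedure:

```python
from itertools import combinations

def closed_test(pvals, alpha=0.05):
    """Closed testing with Bonferroni tests of each intersection hypothesis.
    H_i is rejected iff every intersection H_S with i in S is rejected,
    i.e. min_{j in S} p_j <= alpha / |S| for all such S."""
    m = len(pvals)
    idx = range(m)
    rejected = []
    for i in idx:
        ok = True
        for r in range(1, m + 1):
            for S in combinations(idx, r):
                if i in S and min(pvals[j] for j in S) > alpha / len(S):
                    ok = False
                    break
            if not ok:
                break
        rejected.append(ok)
    return rejected

# Three treatment-vs-control comparisons (hypothetical p-values):
print(closed_test([0.01, 0.04, 0.30]))   # [True, False, False]
```

This strongly controls the familywise error rate; in a trial setting the Bonferroni local tests would be replaced by group sequential tests with interim-analysis boundaries.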
57.
David Dubbeldam, Sofía Calero, Donald E. Ellis, Randall Q. Snurr. Molecular Simulation, 2016, 42(2): 81-101
A new software package, RASPA, for simulating adsorption and diffusion of molecules in flexible nanoporous materials is presented. The code implements the latest state-of-the-art algorithms for molecular dynamics and Monte Carlo (MC) in various ensembles including symplectic/measure-preserving integrators, Ewald summation, configurational-bias MC, continuous fractional component MC, reactive MC and Baker's minimisation. We show example applications of RASPA in computing coexistence properties, adsorption isotherms for single and multiple components, self- and collective diffusivities, reaction systems and visualisation. The software is released under the GNU General Public License.
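A flavour of the grand-canonical MC moves underlying adsorption-isotherm calculations can be given with the standard insertion/deletion acceptance rule (the textbook Frenkel-Smit form). This is a toy sketch with made-up energy changes, not RASPA code:

```python
import math
import random

random.seed(1)

# Toy grand-canonical Monte Carlo: insertion/deletion acceptance rules only.
# All parameter values below are arbitrary illustration choices.
beta = 1.0         # 1 / (kB * T), reduced units
V = 1000.0         # box volume
mu = -2.0          # chemical potential
Lambda3 = 1.0      # thermal de Broglie wavelength cubed

N = 50
for step in range(10000):
    dU = random.gauss(0.0, 0.5)   # stand-in for a real energy change
    if random.random() < 0.5:
        # Insertion: acc = min[1, V / (Lambda^3 (N+1)) * exp(beta (mu - dU))]
        acc = min(1.0, V / (Lambda3 * (N + 1)) * math.exp(beta * (mu - dU)))
        if random.random() < acc:
            N += 1
    elif N > 0:
        # Deletion: acc = min[1, Lambda^3 N / V * exp(-beta (mu + dU))]
        acc = min(1.0, Lambda3 * N / V * math.exp(-beta * (mu + dU)))
        if random.random() < acc:
            N -= 1
print("final particle count:", N)
```

In a real code the energy change dU comes from the force field (with Ewald summation for electrostatics), and biased variants such as configurational-bias MC modify these acceptance probabilities accordingly.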
58.
V. Arunachalam, A. Bandyopadhyay. TAG. Theoretical and Applied Genetics (Theoretische und angewandte Genetik), 1979, 54(5): 203-207
Summary: A set of complex crosses with multiple crosses as female parents was made using multiple pollen in turnip rape (Brassica campestris L.). These multiple cross-multiple pollen hybrids (mucromphs) were evaluated for a large number of quantitative characters including yield. New methods were proposed to study such genetic material in depth so as to formulate suitable strategies to breed for attractive seed yield. Part of the Ph.D. thesis of the junior author, submitted to the Indian Agricultural Research Institute, New Delhi.
59.
Fluorescence in situ hybridization with multiple repeated DNA probes applied to the analysis of wheat-rye chromosome pairing. Total citations: 1 (self-citations: 0; citations by others: 1)
A. Cuadrado, F. Vitellozzi, N. Jouve, C. Ceoloni. TAG. Theoretical and Applied Genetics (Theoretische und angewandte Genetik), 1997, 94(3-4): 347-355
Fluorescence in situ hybridization (FISH) with multiple probes has been applied to meiotic chromosome spreads derived from ph1b common wheat x rye hybrid plants. The probes used included pSc74 and pSc119.2 from rye (the latter also hybridizes on wheat, mainly B genome chromosomes), the Ae. squarrosa pAs1 probe, which hybridizes almost exclusively on D genome chromosomes, and the wheat rDNA probes pTa71 and pTa794. Simultaneous and sequential FISH with two-by-two combinations of these probes allowed unequivocal identification of all of the rye (R) and most of the wheat (W) chromosomes, either unpaired or involved in pairing. Thus not only could wheat-wheat and wheat-rye associations be easily discriminated, which was already feasible by the sole use of the rye-specific pSc74 probe, but the individual pairing partners could also be identified. Of the wheat-rye pairing observed, which averaged from about 7% to 11% of the total pairing detected in six hybrid plants of the same cross combination, most involved B genome chromosomes (about 70%), and to a much lesser degree those of the D (almost 17%) and A (14%) genomes. Rye arms 1RL and 5RL showed the highest pairing frequency (over 30%), followed by 2RL (11%) and 4RL (about 8%), with much lower values for all the other arms. 2RS and 5RS were never observed to pair in the sample analysed. Chromosome arms 1RL, 1RS, 2RL, 3RS, 4RS and 6RS were observed to be exclusively bound to wheat chromosomes of the same homoeologous group. The opposite was true for 4RL (paired with 6BS and 7BS) and 6RL (paired with 7BL). 5RL, on the other hand, paired with 4WL arms or segments of them in more than 80% of the cases and with 5WL in the remaining ones. Additional cases of pairing involving wheat chromosomes belonging to more than one homoeologous group occurred with 3RL, 7RS and 7RL. These results, while adding support to previous evidence about the existence of several translocations in the rye genome relative to that of wheat, show that FISH with multiple probes is an efficient method by which to study fundamental aspects of chromosome behaviour at meiosis, such as interspecific pairing. The type of knowledge attainable from this approach is expected to have a significant impact on both theoretical and applied research concerning wheat and related Triticeae.
Received: 21 February 1996 / Accepted: 12 July 1996
60.
Dong K. Kim. In Vitro Cellular & Developmental Biology - Animal, 1997, 33(4): 289-293
Summary: Doubling time has been widely used to represent the growth pattern of cells. A traditional method for finding the doubling time is to apply gray-scaled cells, where the logarithmic transformed scale is used. As an alternative statistical method, the log-linear model was recently proposed, for which actual cell numbers are used instead of the transformed gray-scaled cells. In this paper, I extend the log-linear model and propose the extended log-linear model. This model is designed for extra-Poisson variation, where the log-linear model produces a less appropriate estimate of the doubling time. Moreover, I compare statistical properties of the gray-scaled method, the log-linear model, and the extended log-linear model. For this purpose, I perform a Monte Carlo simulation study with three data-generating models: the additive error model, the multiplicative error model, and the overdispersed Poisson model. From the simulation study, I found that the gray-scaled method highly depends on the normality assumption of the gray-scaled cells; hence, this method is appropriate when the error model is multiplicative with log-normally distributed errors. However, it is less efficient for other types of error distributions, especially when the error model is additive or the errors follow the Poisson distribution, and the estimated standard error for the doubling time is not accurate in this case. The log-linear model was found to be efficient when the errors follow the Poisson distribution or a nearly Poisson distribution. The efficiency of the log-linear model decreased as the overdispersion increased, compared to the extended log-linear model. When the error model is additive or multiplicative with Gamma-distributed errors, the log-linear model is more efficient than the gray-scaled method. The extended log-linear model performs well overall for all three data-generating models. The loss of efficiency of the extended log-linear model is observed only when the error model is multiplicative with log-normally distributed errors, where the gray-scaled method is appropriate. However, the extended log-linear model is more efficient than the log-linear model in this case.
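The log-linear (Poisson regression) estimate of doubling time can be sketched with a small iteratively reweighted least squares (IRLS) fit on actual counts. The cell counts below are synthetic, generated near a 12-hour doubling time; the fitting code is a generic Poisson GLM, not the paper's extended model:

```python
import numpy as np

# Hypothetical cell counts at successive times (hours), roughly exponential.
t = np.array([0.0, 6.0, 12.0, 18.0, 24.0, 30.0])
counts = np.array([100, 141, 205, 283, 397, 561], dtype=float)

# Log-linear (Poisson) model: log E[count] = b0 + b1 * t, fitted by IRLS.
X = np.column_stack([np.ones_like(t), t])
beta = np.array([np.log(counts[0]), 0.0])
for _ in range(100):
    mu = np.exp(X @ beta)                       # current fitted means
    z = X @ beta + (counts - mu) / mu           # working response
    new = np.linalg.solve(X.T @ (mu[:, None] * X), X.T @ (mu * z))
    if np.max(np.abs(new - beta)) < 1e-10:      # converged
        beta = new
        break
    beta = new

growth_rate = beta[1]                           # per hour
doubling_time = np.log(2.0) / growth_rate
print(f"doubling time = {doubling_time:.2f} h")
```

The extended log-linear model of the paper additionally accommodates extra-Poisson variation, which mainly widens the standard error of this estimate rather than changing the point estimate.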