Similar articles
20 similar articles found (search time: 31 ms)
1.
Objective: To measure the effect of giving out free smoke alarms on rates of fires and rates of fire related injury in a deprived multiethnic urban population.
Design: Cluster randomised controlled trial.
Setting: Forty electoral wards in two boroughs of inner London, United Kingdom.
Participants: Primarily households including elderly people or children and households in housing rented from the borough council.
Intervention: 20 050 smoke alarms, fittings, and educational brochures distributed free and installed on request.
Results: Giving out free smoke alarms did not reduce injuries related to fire (rate ratio 1.3; 95% confidence interval 0.9 to 1.9), admissions to hospital and deaths (1.3; 0.7 to 2.3), or fires attended by the fire brigade (1.1; 0.96 to 1.3). Similar proportions of intervention and control households had installed alarms (36/119 (30%) v 35/109 (32%); odds ratio 0.9; 95% confidence interval 0.5 to 1.7) and working alarms (19/118 (16%) v 18/108 (17%); 0.9; 0.4 to 1.8).
Conclusions: Giving out free smoke alarms in a deprived, multiethnic, urban community did not reduce injuries related to fire, mostly because few alarms had been installed or were maintained.

What is already known on this topic

  • In the United Kingdom, residential fires caused 466 deaths and 14 600 non-fatal injuries in 1999
  • The risk of death from fire is associated with socioeconomic class
  • One study reported an 80% decline in hospitalisations and deaths from residential fires after free smoke alarms were distributed in an area at high risk, but these results may not apply in other settings, and evidence from randomised controlled trials is lacking

What this study adds

  • Giving out free smoke alarms in a multiethnic poor urban population did not reduce injuries related to fire or fires
  • Giving smoke alarms away may be a waste of resources and of little benefit unless alarm installation and maintenance is assured
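The installed-alarm comparison above (36/119 intervention v 35/109 control) is a standard 2×2 odds-ratio calculation. As a minimal sketch, not the trial's own (possibly cluster-adjusted) analysis, the point estimate and a Woolf-type log-scale 95% confidence interval can be reproduced as:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table with a Woolf (log-scale) confidence interval.
    a/b = events/non-events in group 1; c/d = the same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log odds ratio
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Installed alarms: 36 of 119 intervention v 35 of 109 control households
or_, lo, hi = odds_ratio_ci(36, 119 - 36, 35, 109 - 35)
print(round(or_, 1), round(lo, 1), round(hi, 1))  # 0.9 0.5 1.6
```

The trial reports 0.5 to 1.7; the small difference in the upper limit is expected, since the published interval presumably accounts for the cluster randomised design while this sketch does not.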

2.
Background: During 2017, twenty health districts (locations) in Mexico implemented a dengue outbreak Early Warning and Response System (EWARS), which processes epidemiological, meteorological and entomological alarm indicators to predict dengue outbreaks and trigger early response activities. Of the 20 priority districts, where more than one fifth of all national disease transmission in Mexico occurs, eleven were purposely selected and analyzed. Nine districts presented outbreak alarms by EWARS but without subsequent outbreaks (“non-outbreak districts”) and two presented alarms with subsequent dengue outbreaks (“outbreak districts”). This evaluation study assesses and compares the impact of alarm-informed response activities, and the consequences of failing to mount a timely and adequate response, across the two groups.
Methods: Five indicators of dengue outbreak response (larval control, entomological studies with water container interventions, focal spraying, indoor residual spraying, and fogging) were analyzed across the two groups (“outbreak districts” and “non-outbreak districts”). The first four were analyzed quantitatively; for quality control purposes, only qualitative concluding remarks were derived from the fifth indicator (fogging).
Results: The average coverage of vector control responses was significantly higher in non-outbreak districts across all four quantitative indicators. In the “outbreak districts” the response activities started late and were of much lower intensity than in “non-outbreak districts”. Vector control teams at district level demonstrated diverse levels of compliance with local guidelines for ‘initial’, ‘early’ and ‘late’ responses to outbreak alarms, which could potentially explain the different outcomes observed following the outbreak alarms.
Conclusion: Failure to respond to EWARS alarm signals in a timely and adequate manner was shown to negatively impact outbreak control. On the other hand, districts with an adequate and timely response guided by alarm signals demonstrated successful outbreak prevention. This study presents important operational scenarios of failing or succeeding with EWARS, but warrants investigating the effectiveness and cost-effectiveness of EWARS using more robust designs.

3.
Nurses working in the hospital setting increasingly have become overburdened by managing alarms that, in many cases, provide low information value regarding patient health. The current trend, aided by disposable, wearable technologies, is to promote patient monitoring that does not require entering a patient's room. The development of telemetry alarms and middleware escalation devices adds to the continued growth of auditory, visual, and haptic alarms to the hospital environment but can fail to provide a more complete understanding of patient health. As we begin to innovate to both address alarm overload and improve patient management, perhaps using fundamentally different integration architectures, lessons from the aviation flight deck are worth considering. Commercial jet transport systems and their alarms have evolved slowly over many decades and have developed integration methods that account for operational context, provide multiple response protocol levels, and present a more integrated view of the airplane system state. We articulate three alarm system objectives: (1) supporting hazard management, (2) establishing context, and (3) supporting alarm prioritization. More generally, we present the case that alarm design in aviation can spur directions for innovation for telemetry monitoring systems in hospitals.

Healthcare, and the hospital setting in particular, has experienced rapid growth of auditory, visual, and haptic alarms. These alarms can be notoriously unreliable or can focus on narrowly defined changes to the patient's state.1 Further, this alarm proliferation has led nursing staff to become increasingly overburdened and distressed by managing alarms.2 Current alarm system architectures do not effectively integrate meaningful data that support increased patient status awareness and management.3 In contrast, commercial jet transports, over many decades, have developed integration methods that account for operational context, provide multiple response protocol levels, and present a more integrated view of airplane state to support operational decision making. Similar methods for advanced control rooms in nuclear power generation have been reviewed by Wu and Li.4

In healthcare, The Joint Commission (TJC) and hospital quality departments have generated guidance that further elevates the need to address the industry's “alarm problem.” In 2014, TJC issued an accreditation requirement (National Patient Safety Goal 06.01.01) titled, “Reduce patient harm associated with clinical alarm systems.”5 This requirement continues to be included in the 2020 requirements for accreditation.

From the authors' perspective, this requirement is leading to solutions that will not effectively support performance of essential tasks and is moving away from the types of innovations that are being sought in aviation and other settings. For example, healthcare administrators advocate categorizing alarms into high-priority (“run”), medium-priority (“walk”), and low-priority (“shuffle”) alarms independent of unit context, hospital context, situational context, and historical patient context.6 In addition, each alarm category is assigned a minimum response time.
When nurses do not meet response time targets, administrators may add staff (“telemetry monitor watchers”), increase the volume of alarms, escalate alarms to other staff to respond, increase the “startling” nature of alarms to better direct attention, and benchmark average response times by individual nurse identifiers. Although well intentioned, these approaches can sometimes add to the alarm overload problem by creating more alarms and involving more people in alarm response.

The authors, who have investigated human performance in several operational settings, believe that a need exists to reflect more broadly on the role of alarms in understanding and managing a system (be it an aircraft or a set of patients in a hospital department). Most alarms in hospitals signal when a variable is outside a prespecified range that is determined from the patient population (e.g., high heart rate), when a change in cardiac rhythm occurs (e.g., ventricular fibrillation [V-fib]), or when a problem occurs with the alarm system (e.g., change battery). These alarms support shifts in attention when the event being alarmed requires an action by a nurse and when the relative priority of the response is clear in relation to competing demands.

Certain alarms are useful for other purposes, such as aiding situation awareness about planned, routine tasks (e.g., an expected event of high heart rate has occurred, which indicates that a staff member is helping a patient to the bathroom). Increasingly, secondary alarm notification systems (SANSs), otherwise known as middleware escalation systems, are incorporating communications through alarms, such as patient call systems, staff emergency broadcasts, and demands for “code blue” teams to immediately go to a patient's bedside.

Thus, alarms are used to attract attention (i.e., to orient staff to an important change).
However, from a cognitive engineering perspective, we believe alarms can also be used to support awareness, prioritization, and decision making. That is, the current siloed approach to alarm presentation in healthcare, which is driven by technology, impedes the ability to properly understand and appreciate the implications of alarms. Understanding the meaning and implications of alarms can best be achieved when they are integrated via a system interface that places the alarm in the broader context of system state. We hope that sharing our insights can spur both design and alarm management innovations for bedside telemetry monitoring devices and related middleware escalation systems and dashboards.

In this article, we provide insights from human factors research, and from the integrated glass cockpit in particular, to prompt innovation with clinical alarm systems. To draw lessons from aviation and other domains, we conducted a series of meetings among three human factors engineers with expertise in alarm design in healthcare, aviation, nuclear power generation, and military command and control domains. In the process, we identified differences in the design, use, and philosophies for managing alarms in different domains; defined alarm systems; clarified common elements in the “alarm problem” across these domains; articulated objectives for an alarm system that supports a human operator in controlling a complex process (i.e., supervisory control); and identified levels of alarm system maturity. Based on these activities, we assert that:
  1. Clinical alarm systems fail to reduce unnecessary complexity compared with the integrated glass cockpit.
  2. Aviation and clinical alarm systems share core objectives.
  3. The challenges with aviation and clinical alarm systems are similar, including where alarm systems fall short of their objectives.
  4. We can demarcate levels in the process of alarm system evolution, largely based on alarm reliability, system integration, and how system state is described. The higher levels point the way for innovation in clinical alarm systems.

4.
Objective: To report the career choices and career destinations in 1995 of doctors who qualified in the United Kingdom in 1988.
Design: Postal questionnaire.
Setting: United Kingdom.
Subjects: All doctors who qualified in the United Kingdom in 1988.
Results: Of the 3724 doctors who were sent questionnaires, eight had died and three declined to participate. Of the remaining 3713 doctors, 2885 (77.7%) replied. 16.9% (608/3593; 95% confidence interval 16.1% to 17.8%) of all 1988 qualifiers from medical schools in Great Britain were not working in the NHS in Great Britain in 1995 compared with 17.0% (624/3674; 16.1% to 17.9%) of the 1983 cohort in 1990. The proportion of doctors working in general practice was lower than in previous cohorts. The percentage of women in general practice (44.3% (528/1192)) substantially exceeded that of men (33.1% (443/1340)). 53% (276/522) of the women in general practice and 20% (98/490) of the women in hospital specialties worked part time.
Conclusions: Concerns about recruitment difficulties in general practice are justified. Women are now entering general practice in greater numbers than men. There is no evidence of a greater exodus from the NHS from the 1988 qualifiers than from earlier cohorts.

Key messages

  • This study reports the career progress to September 1995 of doctors who qualified in 1988
  • Loss from the British NHS, at 16.9% (95% confidence interval, 16.1% to 17.8%), was no greater than among earlier qualifiers at the same time after qualification
  • The proportion of doctors working in general practice (38%) was lower than in earlier cohorts studied
  • In this generation of doctors, women in general practice now outnumber men
  • Fifty three per cent of the women in general practice and 20% of the women in hospital specialties were working on a part time or flexible basis

5.
Objective: To determine the prevalence of common mental disorders (anxiety and depression) and help seeking behaviour in African Caribbeans and white Europeans.
Design: Two phase survey in a general population sample. The first phase comprised screening with the 12 item general health questionnaire; the second phase was standardised psychiatric assessment and interview about help seeking.
Setting: People registered with four general practices in central Manchester.
Participants: Of 1467 people randomly selected from family health services authority lists, 864 were still resident. 337 African Caribbeans and 275 white Europeans completed the screening phase (response rate 71%); 127 African Caribbeans and 103 white Europeans were interviewed in the second phase.
Results: 13% of African Caribbeans (95% confidence interval 10% to 16%) and 14% (10% to 18%) of white Europeans had one or more disorder. Anxiety disorders were significantly less common among African Caribbeans (3% (1% to 5%) v 9% (6% to 12%) in white Europeans). Depressive disorders were significantly more common among African Caribbean women than white women (difference 8% (1% to 15%)). Medical help seeking was similar in the two groups, but African Caribbeans with mental disorders were more likely to seek additional help from non-medical sources (12/29 v 5/29, P=0.082).
Conclusions: In an inner city setting the prevalence of common mental disorders is similar in these two ethnic groups.

Key messages

  • Most studies of ethnic differences in mental health focus on psychotic illness rather than common mental disorders
  • In this inner city study the prevalence of anxiety and depression was similar in African Caribbeans and white Europeans
  • Anxiety disorders were less common, and depression more common, in African Caribbeans than white Europeans
  • Improved recognition and treatment of non-psychotic disorders are necessary, taking into account patients’ views of their illnesses

6.
Objective: To determine the career destinations, by 1995, of doctors who qualified in the United Kingdom in 1977; the relation between their destinations and early career choice; and their intentions regarding retirement age.
Design: Postal questionnaire.
Setting: United Kingdom.
Subjects: All (n=3135) medical qualifiers of 1977.
Results: After about 12 years the distribution of respondents by type of employment, and, for women, the percentage of doctors in part time rather than full time medical work, had stabilised. Of all 2997 qualifiers from medical schools in Great Britain, 2399 (80.0% (95% confidence interval 79.5% to 80.6%)) were working in medicine in the NHS in Great Britain 18 years after qualifying. Almost half the women (318/656) worked in the NHS part time. Of 1714 doctors in the NHS, 1125 intended to work in the NHS until normal retirement age, 392 did not, and 197 were undecided. Of the 1548 doctors for whom we had sufficient information, career destinations at 18 years matched the choices made at 1, 3, and 5 years in 58.9% (912), 78.2% (1211), and 86.6% (1341) of cases respectively.
Conclusions: Planning for the medical workforce needs to be supported by information about doctors' career plans, destinations, and whole time equivalent years of work. Postgraduate training needs to take account of doctors' eventual choice of specialty (and the timing of this choice).

Key messages

  • A large scale national study in the United Kingdom followed doctors from qualification to mid-career and beyond
  • Most doctors had made their choice of eventual career—at least in terms of broadly defined specialty—within 5 years of qualifying
  • Eighteen years on, 80% of the doctors were working in the NHS and nearly half of women doctors were working part time
  • Almost a quarter of NHS doctors planned to retire early

7.

Background

During an entomological survey in preparation for malaria control interventions in Mwea division, the number of malaria cases at the Kimbimbi sub-district hospital was in a steady decline. The underlying factors for this reduction were unknown and needed to be identified before any malaria intervention tools were deployed in the area. We therefore set out to investigate the potential factors that could have contributed to the decline of malaria cases in the hospital by analyzing the malaria control knowledge, attitudes and practices (KAP) that the residents in Mwea applied in an integrated fashion, also known as integrated malaria management (IMM).

Methods

Integrated Malaria Management was assessed among community members of Mwea division, central Kenya using a KAP survey. The KAP study evaluated community members' malaria disease management practices at the home and hospitals, personal protection measures used at the household level and malaria transmission prevention methods relating to vector control. Concurrently, we also passively examined the prevalence of malaria parasite infection via outpatient admission records at the major referral hospital in the area. In addition we studied the mosquito vector population dynamics, the malaria sporozoite infection status and entomological inoculation rates (EIR) over an 8 month period in 6 villages to determine the risk of malaria transmission in the entire division.

Results

A total of 389 households in Mwea division were interviewed in the KAP study, while 90 houses were surveyed in the entomological study. Ninety-eight percent of the households knew about malaria disease, and approximately 70% knew its symptoms and methods to manage it. Ninety-seven percent of the interviewed households went to a health center for malaria diagnosis and treatment. Similarly, a high proportion (81%) used anti-malarial medicines bought from local pharmacies. Almost 90% of households reported owning and using an insecticide treated bed net, and 81% reported buying the nets within the last 5 years. The community also used mosquito reduction measures including, in order of preference, environmental management (35%), mosquito repellent and smoke (31%), insecticide canister sprays (11%), and window and door screens (6%). These methods used by the community comprise an integrated malaria management (IMM) package. Over the 4 years prior to this study, malaria cases in the community hospital fell from about 40% in 2000 to less than 10% by 2004, and by the year 2007 malaria cases decreased to zero. In addition, a one-time cross-sectional malaria parasite survey detected no Plasmodium infection in 300 primary school children in the area. Mosquito vector populations were variable in the six villages but were generally lower in villages that did not engage in irrigation activities. The malaria risk as estimated by EIR remained low and varied by village and proximity to irrigation areas. The average EIR in the area was estimated at 0.011 infectious bites per person per day.

Conclusions

The use of a combination of malaria control tools in an integrated fashion by residents of Mwea division might have contributed to the decline in malaria cases at the district hospital and among the school children. A vigorous campaign emphasizing IMM should be adopted and expanded in Mwea division and in other areas with different eco-epidemiological patterns of malaria transmission. With sustained implementation and support from community members, integrated malaria management can reduce malaria significantly in affected communities in Africa.
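The entomological inoculation rate quoted above is conventionally the product of the human biting rate and the sporozoite rate. A minimal sketch of that arithmetic follows; the input values are hypothetical round numbers chosen to reproduce the reported figure, not the survey's actual vector data:

```python
def eir_daily(bites_per_person_per_night, sporozoite_rate):
    """Entomological inoculation rate (infectious bites per person per day):
    human biting rate x fraction of mosquitoes carrying sporozoites."""
    return bites_per_person_per_night * sporozoite_rate

# Hypothetical inputs: 5.5 bites/person/night, 0.2% sporozoite-positive mosquitoes
daily = eir_daily(5.5, 0.002)
annual = daily * 365
print(round(daily, 3), round(annual, 1))  # 0.011 4.0
```

An average of 0.011 infectious bites per person per day corresponds to roughly four infectious bites per person per year, which is why the abstract characterises transmission risk in the division as low.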

8.

Background

Unstable housing and homelessness are prevalent among injection drug users (IDU). We sought to examine whether accessing addiction treatment was associated with attaining stable housing in a prospective cohort of IDU in Vancouver, Canada.

Methods

We used data collected via the Vancouver Injection Drug User Study (VIDUS) between December 2005 and April 2010. Attaining stable housing was defined as two consecutive “stable housing” designations (i.e., living in an apartment or house) during the follow-up period. We assessed exposure to addiction treatment in the interview prior to the attainment of stable housing among participants who were homeless or living in single room occupancy (SRO) hotels at baseline. Bivariate and multivariate associations between the baseline and time-updated characteristics and attaining stable housing were examined using Cox proportional hazard regression models.

Principal Findings

Of the 992 IDU eligible for this analysis, 495 (49.9%) reported being homeless, 497 (50.1%) resided in SRO hotels, and 380 (38.3%) were enrolled in addiction treatment at the baseline interview. Only 211 (21.3%) attained stable housing during the follow-up period, and of this group, 69 (32.7%) had addiction treatment exposure prior to achieving stable housing. Addiction treatment was inversely associated with attaining stable housing in a multivariate model (adjusted hazard ratio [AHR] = 0.71; 95% CI: 0.52–0.96). Being in a partnered relationship was positively associated with the primary outcome (AHR = 1.39; 95% CI: 1.02–1.88). Receipt of income assistance (AHR = 0.65; 95% CI: 0.44–0.96), daily crack use (AHR = 0.69; 95% CI: 0.51–0.93) and daily heroin use (AHR = 0.63; 95% CI: 0.43–0.92) were negatively associated with attaining stable housing.

Conclusions

Exposure to addiction treatment in our study was negatively associated with attaining stable housing and may have represented a marker of instability among this sample of IDU. Efforts to stably house this vulnerable group may be occurring in contexts outside of addiction treatment.

9.

Background

Zinc treatment of childhood diarrhea has the potential to save 400,000 under-five lives per year in less developed countries. In 2004 the World Health Organization (WHO)/UNICEF revised their clinical management of childhood diarrhea guidelines to include zinc. The aim of this study was to monitor the impact of the first national campaign to scale up zinc treatment of childhood diarrhea in Bangladesh.

Methods/Findings

Between September 2006 and October 2008, seven repeated ecologic surveys were carried out in four representative population strata: mega-city urban slum and urban nonslum, municipal, and rural. Households of approximately 3,200 children with an active or recent case of diarrhea were enrolled in each survey round. Caretaker awareness of zinc as a treatment for childhood diarrhea by 10 months following the mass media launch was attained in 90%, 74%, 66%, and 50% of urban nonslum, municipal, urban slum, and rural populations, respectively. By 23 months into the campaign, approximately 25% of urban nonslum, 20% of municipal and urban slum, and 10% of rural under-five children were receiving zinc for the treatment of diarrhea. The scale-up campaign had no adverse effect on the use of oral rehydration salt (ORS).

Conclusions

Long-term monitoring of scale-up programs identifies important gaps in coverage and provides the information necessary to document that intended outcomes are being attained and unintended consequences avoided. The scale-up of zinc treatment of childhood diarrhea rapidly attained widespread awareness, but actual use has lagged behind. Disparities in zinc coverage favoring higher income, urban households were identified, but these gradually diminished over the two years of follow-up monitoring. The scale-up campaign has not had any adverse effect on the use of ORS.

10.
Objective: To test two methods of providing low cost information on the later health status of survivors of neonatal intensive care.
Design: Cluster randomised comparison.
Setting: Nine hospitals distributed across two UK health regions. Each hospital was randomised to use one of two methods of follow up.
Participants: All infants born ⩽32 weeks' gestation during 1997 in the study hospitals.
Method: Families were recruited at the time of discharge. In one method of follow up families were asked to complete a questionnaire about their child's health at the age of 2 years (corrected for gestation). In the other method the children's progress was followed by clerks in the local community child health department by using sources of routine information.
Results: 236 infants were recruited to each method of follow up. Questionnaires were returned by 214 parents (91%; 95% confidence interval 84% to 97%) and 223 clerks (95%; 86% to 100%). Completed questionnaires were returned by 201 parents (85%; 76% to 94%) and 158 clerks (67%; 43% to 91%). Most parents found the forms easy to complete, but some had trouble understanding the concept of “corrected age” and hence when to return the form. Community clerks often had to rely on information that was out of date and difficult to interpret.
Conclusion: Neither questionnaires from parents nor routinely collected health data are adequate methods of providing complete follow up data on children who were born preterm and required neonatal intensive care, though both methods show potential.

What is already known on this topic

  • Outcome of neonatal intensive care should include later health status, not just early mortality
  • Although these data are commonly sought, for various reasons no existing routine system currently delivers the information for ⩾95% of the population (95% representing the minimum acceptable standard)
  • Running one-off studies to gain later follow up data is difficult and costly

What this study adds

  • Potentially these data could come from parents, but to reach 95% ascertainment perhaps 5-10% of parents would require help and support to provide information
  • Existing data flows may be able to provide the required information if the timing of routine reviews and methods of data recording were harmonised across the United Kingdom
  • The costs attached to introducing such a system seem to be low

11.
Objective: To assess whether transferring knowledge from specialists at centres of excellence to referring doctors through online consultations can improve the management of patients requiring specialised care.
Design: Retrospective case review of the first year of internet based patient initiated consultations between referring doctors and consulting specialists.
Setting: US teaching hospitals affiliated with an organisation providing internet based consultations.
Participants: Doctors in various settings around the world engaging in internet based consultations with specialists.
Results: 79 consultations took place. 90% (n=71) of consultations were for services related to oncology. 90% of consultations involved new recommendations for treatment. The most common recommendation was a new chemotherapeutic regimen (68%, n=54). Diagnosis changed in 5% (n=4) of cases. The average turnaround time was 6.8 working days compared with an average of 19 working days to see a comparable specialist.
Conclusions: Internet based consultations between specialists at centres of excellence and referring doctors contribute to patient care through recommendations for new treatment and timely access to specialist knowledge. Although change in diagnosis occurred in only a few cases, the prognostic and therapeutic implications for these patients may be profound.

What is already known on this topic

  • Telemedicine could improve health care by transferring knowledge from centres of excellence to patients' doctors
  • Few studies have systematically assessed the value of such internet based specialty consultations

What this study adds

  • Patients can benefit from internet based consultations between their doctor and consulting specialists
  • New recommendations for treatment were discussed in 90% of cases, and change in diagnosis occurred in 5% of cases
  • Patients can access a specialist's opinion more quickly than waiting to see a specialist

12.

Background

Obese individuals who smoke have a 14 year reduction in life expectancy. Both obesity and smoking are independently associated with increased risk of malignancy. Natural killer (NK) cells are critical mediators of anti-tumour immunity and are compromised in obese patients and smokers. We examined whether NK cell function was differentially affected by cigarette smoke in obese and lean subjects.

Methodology and Principal Findings

Clinical data and blood were collected from 40 severely obese subjects (BMI >40 kg/m²) and 20 lean healthy subjects. NK cell levels and function were assessed using flow cytometry and cytotoxicity assays. The effect of cigarette smoke on the ability of NK cells to kill K562 tumour cells was assessed in the presence or absence of the adipokines leptin and adiponectin. NK cell levels were significantly decreased in obese subjects compared to lean controls (7.6% vs 16.6%, p = 0.0008). NK function was also significantly compromised in obese patients (30% ± 13% vs 42% ± 12%, p = 0.04). Cigarette smoke inhibited the ability of NK cells to kill tumour cell lines (p<0.0001). NK cells from obese subjects were even more susceptible to the inhibitory effects of smoke compared to lean subjects (33% vs 28%, p = 0.01). Cigarette smoke prevented NK cell activation, as well as perforin and interferon-gamma secretion upon tumour challenge. Adiponectin but not leptin partially reversed the effects of smoke on NK cell function in both obese (p = 0.002) and lean controls (p = 0.01).

Conclusions/Significance

Obese subjects have impaired NK cell activity that is more susceptible to the detrimental effects of cigarette smoke compared to lean subjects. This may play a role in the increase of cancer and infection seen in this population. Adiponectin is capable of restoring NK cell activity and may have therapeutic potential for immunity in obese subjects and smokers.

13.
Objective: To assess the hazards at an early phase of the growing epidemic of deaths from tobacco in China.
Design: Smoking habits before 1980 (obtained from family or other informants) of 0.7 million adults who had died of neoplastic, respiratory, or vascular causes were compared with those of a reference group of 0.2 million who had died of other causes.
Setting: 24 urban and 74 rural areas of China.
Subjects: One million people who had died during 1986-8 and whose families could be interviewed.
Results: Among male smokers aged 35-69 there was a 51% (SE 2) excess of neoplastic deaths, a 31% (2) excess of respiratory deaths, and a 15% (2) excess of vascular deaths. All three excesses were significant (P<0.0001). Among male smokers aged ⩾70 there was a 39% (3) excess of neoplastic deaths, a 54% (2) excess of respiratory deaths, and a 6% (2) excess of vascular deaths. Fewer women smoked, but those who did had tobacco attributable risks of lung cancer and respiratory disease about the same as men. For both sexes, the lung cancer rates at ages 35-69 were about three times as great in smokers as in non-smokers, but because the rates among non-smokers in different parts of China varied widely, the absolute excesses of lung cancer in smokers also varied. Of all deaths attributed to tobacco, 45% were due to chronic obstructive pulmonary disease and 15% to lung cancer; oesophageal cancer, stomach cancer, liver cancer, tuberculosis, stroke, and ischaemic heart disease each caused 5-8%. Tobacco caused about 0.6 million Chinese deaths in 1990 (0.5 million men). This will rise to 0.8 million in 2000 (0.4 million at ages 35-69), or to more if the tobacco attributed fractions increase.
Conclusions: At current age specific death rates in smokers and non-smokers one in four smokers would be killed by tobacco, but as the epidemic grows this proportion will roughly double. If current smoking uptake rates persist in China (where about two thirds of men but few women become smokers), tobacco will kill about 100 million of the 0.3 billion males now aged 0-29, with half of these deaths in middle age and half in old age.
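The "k% excess of deaths" figures above correspond to standard attributable-risk arithmetic: a 51% excess means a rate ratio of 1.51, from which the fraction of smokers' deaths attributable to smoking follows as (RR − 1)/RR, and the population attributable fraction follows from Levin's formula. A minimal sketch (the 51% excess is from the abstract; the 0.66 smoking prevalence is the abstract's "two thirds of men" and is used here only for illustration):

```python
def excess_to_rr(excess_pct):
    """A k% excess of deaths among smokers corresponds to a rate ratio RR = 1 + k/100."""
    return 1.0 + excess_pct / 100.0

def attributable_fraction_exposed(rr):
    """Fraction of deaths among smokers attributable to smoking: (RR - 1) / RR."""
    return (rr - 1.0) / rr

def population_attributable_fraction(prevalence, rr):
    """Levin's formula: fraction of all deaths attributable to the exposure."""
    excess = prevalence * (rr - 1.0)
    return excess / (1.0 + excess)

# 51% excess of neoplastic deaths among male smokers aged 35-69 (from the abstract)
rr = excess_to_rr(51)                               # 1.51
af = attributable_fraction_exposed(rr)              # ~0.34 of smokers' neoplastic deaths
paf = population_attributable_fraction(0.66, rr)    # assuming two thirds of men smoke
```

These formulas are textbook epidemiology, not the paper's own computation, which additionally adjusted for age and cause-specific reference rates.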

Key messages

  • Of the Chinese deaths now being caused by tobacco, 45% are from chronic lung disease, 15% from lung cancer, and 5-8% from each of oesophageal cancer, stomach cancer, liver cancer, stroke, ischaemic heart disease, and tuberculosis
  • Tobacco now causes 13% (and will probably eventually cause about 33%) of deaths in men but only 3% (and perhaps eventually about 1%) of deaths in women as the proportion of young women who smoke has become small
  • Two thirds of men now become smokers before age 25; few give up, and about half of those who persist will be killed by tobacco in middle or old age
  • If present smoking patterns continue about 100 million of the 0.3 billion Chinese males now aged 0-29 will eventually be killed by tobacco
  • Tobacco caused 0.6 million deaths in 1990 and will cause at least 0.8 million in 2000 (0.7 million in men) and about 3 million a year by the middle of the century on the basis of current smoking patterns

14.

Background

Mass treatment of trachoma endemic communities is a critical part of the World Health Organization SAFE strategy. However, non-participation may not occur at random, which can bias coverage surveys and reduce effectiveness if infection differs between participants and non-participants.

Methodology/Principal Findings

As part of the Partnership for Rapid Elimination of Trachoma (PRET), 32 communities in Tanzania and 48 in The Gambia had a detailed census taken, followed by mass treatment with azithromycin. The target coverage in each community was >80% of children aged <10 years. Community treatment assistants observed treatment and recorded compliance, so coverage could be determined at the community, household, and individual level. Within each community, we determined the actual proportions of households where all, some, or none of the children were treated. Assuming that coverage in children <10 years was as observed and that non-participation occurred at random, we ran 500 simulations to derive the expected proportions of households where all, some, or none of the children were treated. Clustering of household treatment was detected by comparing observed with expected proportions of households where none or all of the children were treated, and the intraclass correlation coefficient (ICC) was calculated. Tanzanian and Gambian mass treatment coverages for children <10 years of age ranged from 82–100% and 62–99%, respectively. Clustering of households where all children were treated or no children were treated was greater than expected. Compared with the model simulations, all Tanzanian communities and 44 of 48 (91.7%) Gambian communities had significantly higher proportions of households where all children were treated. Furthermore, 30 of 32 (93.8%) Tanzanian communities and 34 of 48 (70.8%) Gambian communities had a significantly higher than expected proportion of households where no children were treated. The ICC for Tanzania was 0.77 (95% CI 0.74–0.81) and for The Gambia 0.55 (95% CI 0.51–0.59).
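The null model described above (each child treated independently with the observed coverage probability) can be sketched as a small Monte Carlo simulation. This is an illustrative reconstruction, not the study's code; the household sizes and coverage value passed in are whatever the caller supplies:

```python
import random

def simulate_null_household_proportions(household_sizes, coverage, n_sims=500, seed=1):
    """Under the null that each child is treated independently with probability
    `coverage`, estimate the expected proportions of households in which all,
    or none, of the children are treated, averaged over `n_sims` replicates."""
    rng = random.Random(seed)
    all_props, none_props = [], []
    for _ in range(n_sims):
        n_all = n_none = 0
        for size in household_sizes:
            treated = sum(rng.random() < coverage for _ in range(size))
            if treated == size:
                n_all += 1
            elif treated == 0:
                n_none += 1
        all_props.append(n_all / len(household_sizes))
        none_props.append(n_none / len(household_sizes))
    mean = lambda xs: sum(xs) / len(xs)
    return mean(all_props), mean(none_props)
```

Observed proportions of all-treated or none-treated households exceeding these simulated expectations, as in the study, indicate within-household clustering of compliance.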

Conclusions/Significance

In programs aiming for high coverage, complete compliance or complete non-compliance with mass treatment clusters within households. Non-compliance cannot be assumed to occur at random.

15.
Objectives: To determine the prevalence of left ventricular systolic dysfunction, and of heart failure due to different causes, in patients with risk factors for these conditions.
Design: Epidemiological study, including detailed clinical assessment, electrocardiography, and echocardiography.
Setting: 16 English general practices, representative of socioeconomic status and practice type.
Participants: 1062 patients (66% response rate) with previous myocardial infarction, angina, hypertension, or diabetes.
Results: Definite systolic dysfunction (ejection fraction <40%) was found in 54/244 (22.1%, 95% confidence interval 17.1% to 27.9%) patients with previous myocardial infarction, 26/321 (8.1%, 5.4% to 11.6%) with angina, 7/388 (1.8%, 0.7% to 3.7%) with hypertension, and 12/208 (5.8%, 3.0% to 9.9%) with diabetes. In each group, approximately half of these patients had symptoms of dyspnoea, and therefore had heart failure. Overall rates of heart failure, defined as symptoms of dyspnoea plus objective evidence of cardiac dysfunction (systolic dysfunction, atrial fibrillation, or clinically significant valve disease), were 16.0% (11.6% to 21.2%) in patients with previous myocardial infarction, 8.4% (5.6% to 12.0%) in those with angina, 2.8% (1.4% to 5.0%) in those with hypertension, and 7.7% (4.5% to 12.2%) in those with diabetes.
Conclusion: Many people with ischaemic heart disease or diabetes have systolic dysfunction or heart failure. The data support the need for trials of targeted echocardiographic screening, in view of the major benefits of modern treatment. In contrast, patients with uncomplicated hypertension have rates similar to those in the general population.

What is already known on this topic

  • The prognosis and symptoms of patients with left ventricular systolic dysfunction and heart failure can be greatly improved by modern treatments
  • Many patients with heart failure do not have an assessment of left ventricular function, resulting in undertreatment of the condition

What this study adds

  • Patients with a history of ischaemic heart disease (especially those with previous myocardial infarction) or diabetes commonly have left ventricular systolic dysfunction
  • These patients would be candidates for a targeted echocardiographic screening programme
  • In contrast, the yield from screening patients with uncomplicated hypertension would be low

16.
Objective: To evaluate the performance of a near patient test for Helicobacter pylori infection in primary care.
Design: Validation study performed within a randomised trial of four management strategies for dyspepsia.
Setting: 43 general practices around Nottingham.
Subjects: 394 patients aged 18-70 years presenting with recent onset dyspepsia.
Results: When used in primary care the FlexSure test had a sensitivity and specificity of 67% (95% confidence interval 59% to 75%) and 98% (95% to 99%), compared with a sensitivity and specificity of 92% (87% to 97%) and 90% (83% to 97%) when used previously in secondary care. In the H pylori test and refer group, 14% (28/199) were found to have conditions for which H pylori eradication was appropriate, compared with 23% (39/170) of the group referred directly for endoscopy.
Conclusions: When used in primary care the sensitivity of the FlexSure test was significantly poorer than in secondary care. About a third of patients who would have benefited from H pylori eradication were not detected. Near patient tests need to be validated in primary care before they are incorporated into management policies for dyspepsia.
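Sensitivity and specificity, with their approximate confidence intervals, follow directly from a 2×2 table of test results against the reference standard. A minimal sketch; the counts below are hypothetical, chosen only to reproduce the reported 67%/98% point estimates, and are not the study's raw data:

```python
import math

def sens_spec(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def proportion_ci(k, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion k/n, clipped to [0, 1]."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts reproducing the reported primary care point estimates
sensitivity, specificity = sens_spec(tp=67, fn=33, tn=98, fp=2)
```

Note that a Wald interval is a rough approximation near 0 or 1; the paper's exact interval method is not stated here.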

Key messages

  • Near patient tests for H pylori infection have been recommended in the management of dyspepsia in primary care without proper evaluation
  • Such tests should have a high sensitivity to avoid missing treatable illness related to infection
  • The FlexSure near patient test had a lower sensitivity than previously reported in validation studies performed in secondary care
  • Fewer than expected numbers of patients with H pylori related pathology were identified with the FlexSure in primary care

17.
Objective: To compare the 10 year risk of coronary heart disease (CHD), stroke, and combined cardiovascular disease (CVD) estimated from the Framingham equations.
Design: Population based cross sectional survey.
Setting: Nine general practices in south London.
Population: 1386 men and women, aged 40-59 years, with no history of CVD (475 white people, 447 south Asian people, and 464 people of African origin), and a subgroup of 1069 without known diabetes, left ventricular hypertrophy, peripheral vascular disease, renal impairment, or target organ damage.
Results: People of African origin had the lowest 10 year risk estimate of CHD adjusted for age and sex (7.0%, 95% confidence interval 6.5 to 7.5) compared with white people (8.8%, 8.2 to 9.5) and south Asians (9.2%, 8.6 to 9.9), and the highest estimated risk of stroke (1.7% (1.5 to 1.9), 1.4% (1.3 to 1.6), and 1.6% (1.5 to 1.8), respectively). The estimated risk of combined CVD, however, was highest in south Asians (12.5%, 11.6 to 13.4) compared with white people (11.9%, 11.0 to 12.7) and people of African origin (10.5%, 9.7 to 11.2). In the subgroup of 1069, the probability that a risk of CHD ⩾15% would identify a risk of combined CVD ⩾20% was 91% in white people and 81% in both south Asians and people of African origin. The use of thresholds for risk of CHD of 12% in south Asians and 10% in people of African origin would increase the probability of identifying those at risk to 100% and 97%, respectively.
Conclusion: Primary care doctors should use a lower threshold of CHD risk when treating mild uncomplicated hypertension in people of African or south Asian origin.

18.
Objective: To evaluate two methods for identifying speech and language problems in preschool children.
Design: Prospective population based study.
Setting: Inner London.
Results: Reference assessments and usable scores were obtained for 458 (97%) of the 474 children screened. 98 (21%) children had severe language problems and 131 (29%) needed therapy. The sensitivity and specificity for the structured screening test were 66% (95% confidence interval 53% to 76%) and 89% (85% to 93%) respectively for severe language problems, and 54% (43% to 65%) and 90% (85% to 93%) for those needing therapy. The sensitivity and specificity for referral by the parent led method were 56% (40% to 71%) and 85% (78% to 90%) for severe language problems, and 58% (44% to 71%) and 90% (83% to 94%) for those needing speech and language therapy.
Conclusions: Both approaches failed to detect a substantial proportion of children with severe language problems and led to over-referral for diagnostic assessments. Screening is likely to be an ineffective approach to the management of speech and language problems in preschool children in this population.

What is already known on this topic

  • Moderate to severe language difficulties in young children are predictive of long term problems affecting learning, school achievement, and behaviour
  • Formal screening tests are widely used, but relying on parents' observations and health professionals' clinical judgment may be more effective in identifying children needing therapy

What this study adds

  • A commonly used screening test and an approach based on parents' observations and health visitors' judgment fail to identify a substantial proportion of children with serious language problems and lead to the over-referral of children without serious difficulties

19.
Objective: To evaluate the efficacy of using a nicotine patch for 5 months with a nicotine nasal spray for 1 year.
Design: Placebo controlled, double blind trial.
Setting: Reykjavik health centre.
Subjects: 237 smokers aged 22-66 years living in or around Reykjavik.
Interventions: Nicotine patch for 5 months with nicotine nasal spray for 1 year (n=118), or nicotine patch with placebo spray (n=119). Treatment with patches included 15 mg of nicotine for 3 months, 10 mg for the fourth month, and 5 mg for the fifth month, whereas nicotine in the nasal spray was available for up to 1 year. Both groups received supportive treatment.
Results: The log rank test for 6 years (χ2=8.5, P=0.004) shows a significant association between abstinence from smoking and type of treatment. Sustained abstinence rates for the patch and nasal spray group and patch only group were 51% v 35% after 6 weeks (P=0.011 (χ2), 95% confidence interval 1.17% to 3.32%), 37% v 25% after 3 months (P=0.045, 1.01% to 3.08%), 31% v 16% after 6 months (P=0.005, 1.27% to 4.50%), 27% v 11% after 12 months (P=0.001, 1.50% to 6.14%), and 16% v 9% after 6 years (P=0.077, 0.93% to 4.72%).
Conclusions: Short and long term abstinence rates show that the combination of using a nicotine patch for 5 months with a nicotine nasal spray for 1 year is a more effective method of stopping smoking than using a patch only. The low percentage of participants using the nasal spray at 1 year, and the few relapses during the second year, suggest that it is not cost effective to use a nasal spray for longer than 7 months after stopping a patch.
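The abstinence-rate comparisons above are chi-square tests of two proportions; an equivalent pooled two-proportion z test (whose square is the 1-df chi-square statistic without continuity correction) can be sketched as below. The counts 60/118 and 42/119 are reconstructed from the reported 51% v 35% at 6 weeks and are approximate, not the study's exact data:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Pooled two-sample z statistic for equal proportions;
    z**2 equals the 1-df chi-square statistic (no continuity correction)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# 6-week sustained abstinence, patch+spray vs patch+placebo (approximate counts)
z = two_proportion_z(60, 118, 42, 119)  # z**2 > 3.84 => significant at the 5% level
```

A z statistic above 1.96 (equivalently, chi-square above 3.84) rejects equality of the two abstinence rates at the 5% level, consistent with the reported P=0.011.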

Key messages

  • Combined methods of nicotine replacement therapy have a potential advantage over one method because of high levels of substitution
  • Nicotine patches release nicotine slowly, but nicotine nasal spray delivers nicotine more rapidly, enabling the smoker to respond quickly to any smoking urges
  • Treatment with a patch and nicotine nasal spray was significantly more effective than patch and placebo from day 15 after stopping smoking
  • Using a patch for 5 months with a nicotine nasal spray for 1 year provides a more effective means of stopping smoking than using a patch only
  • It is not cost effective to use a nicotine nasal spray for longer than 7 months after stopping a patch

20.
Objective: To estimate and interpret time trends in vertical transmission rates for HIV using data from national obstetric and paediatric surveillance registers.
Design: Prospective study of HIV infected women reported through obstetric surveillance. HIV infection status of the child and onset of AIDS were reported through paediatric surveillance. Rates of vertical transmission and of progression to AIDS were estimated by methods that take account of incomplete follow up of children with indeterminate infection status and of delay in AIDS reporting.
Setting: British Isles.
Subjects: Pregnant women infected with HIV whose infection was diagnosed before delivery, and their babies.
Results: By January 1999, 800 children born to diagnosed HIV infected women who had not breast fed had been reported. Vertical transmission rates rose to 19.6% (95% confidence interval 8.0% to 32.5%) in 1993 before falling to 2.2% (0% to 7.8%) in 1998. Between 1995 and 1998 use of antiretroviral treatment increased significantly each year, reaching 97% of live births in 1998. The rate of elective caesarean section remained constant, at around 40%, up to 1997 but increased to 62% in 1998. Caesarean section and antiretroviral treatment together were estimated to reduce the risk of transmission from 31.6% (13.6% to 52.2%) to 4.2% (0.8% to 8.5%). The proportion of infected children developing AIDS in the first 6 months fell from 17.7% (6.8% to 30.8%) before 1994 to 7.2% (0% to 15.7%) after, coinciding with increased use of prophylaxis against Pneumocystis carinii pneumonia.
Conclusions: In the British Isles both HIV related morbidity and vertical transmission are being reduced through increased use of interventions.
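The combined-intervention effect quoted above (transmission risk falling from 31.6% to 4.2%) corresponds to the usual absolute and relative risk reductions. A minimal helper, using the abstract's point estimates only (the confidence intervals are not propagated here):

```python
def risk_reduction(baseline_risk, treated_risk):
    """Absolute risk reduction (ARR) and relative risk reduction (RRR)
    between two event risks expressed as fractions in [0, 1]."""
    arr = baseline_risk - treated_risk
    rrr = arr / baseline_risk
    return arr, rrr

# Caesarean section plus antiretroviral treatment vs neither (point estimates)
arr, rrr = risk_reduction(0.316, 0.042)  # ARR ~= 0.274, RRR ~= 0.87
```

An RRR of about 87% for the combination of interventions is what the two point estimates imply; the paper itself reports only the risks with their confidence intervals.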

Key messages

  • Reliable estimates of HIV vertical transmission rates can be derived from surveillance data
  • Infected pregnant women are increasingly taking up elective caesarean section and antiretroviral treatment to reduce the risk of transmitting HIV to their babies
  • Vertical transmission rates have fallen greatly over the past four years and progression to AIDS among infected children may also have slowed
  • These benefits can occur only if infected women are diagnosed before or during pregnancy
