Similar Documents

20 similar documents found.
1.

Background

On November 24th, 2005, the Government of England and Wales removed regulatory restrictions on the times at which licensed premises could sell alcohol. This study tests availability theory by treating the implementation of the Licensing Act (2003) as a natural experiment in alcohol policy.

Methods

An interrupted time series design was employed to estimate the Act's immediate and delayed impact on violence in the City of Manchester (population 464,200). We collected police-recorded rates of violence, robbery, and total crime between 1 February 2004 and 31 December 2007. Events were aggregated by week, yielding a total of 204 observations (95 pre- and 109 post-intervention). Secondary analysis examined changes in daily patterns of violence. Pre- and post-intervention events were separated into four three-hour segments: 18:00–20:59, 21:00–23:59, 00:00–02:59, and 03:00–05:59.

Results

Analysis found no evidence that the Licensing Act (2003) affected the overall volume of violence. However, analyses of night-time violence found a gradual and permanent shift of weekend violence into later parts of the night. The results estimated an initial increase of 27.5% in violence between 03:00 and 06:00 (ω = 0.2433, 95% CI = 0.06, 0.42), which grew to 36% by the end of the study period (δ = −0.897, 95% CI = −1.02, −0.77).

Conclusions

This study found no evidence that a national policy increasing the physical availability of alcohol affected the overall volume of violence. There was, however, evidence suggesting that the policy may be associated with changes to patterns of violence in the early morning (3 a.m. to 6 a.m.).

2.
Statistics in Biosciences - Assessing the impact of complex interventions on measurable health outcomes is a growing concern in health care and health policy. Interrupted time series (ITS) designs...

3.
Objectives

We evaluated the impact of a COPD discharge care bundle on readmission rates following hospitalisation with an acute exacerbation.

Design

Interrupted time series analysis, comparing readmission rates for COPD exacerbations at nine trusts that introduced the bundle to two comparison groups: (1) other NHS trusts in London and (2) all other NHS trusts in England. Care bundles were implemented at different times at different NHS trusts, ranging from October 2009 to April 2011.

Setting

Nine NHS acute trusts in London, England.

Participants

Patients aged 45 years and older admitted to an NHS acute hospital in England for acute exacerbation of COPD. Data come from Hospital Episode Statistics, April 2002 to March 2012.

Results

In hospitals introducing the bundle, readmission rates were rising before implementation and falling afterwards (e.g. readmissions within 28 days +2.13% per annum (pa) pre and −5.32% pa post; p for difference in trends = 0.012). Following implementation, readmission rates within 7 and 28 days were falling faster than among other trusts in London, although this was not statistically significant (e.g. readmissions within 28 days −4.6% pa vs. −3.2% pa, p = 0.44). Comparisons with a national control group were similar.

Conclusions

The COPD discharge care bundle appeared to be associated with a reduction in readmission rates among hospitals using it. The significance of this is unclear because of changes to background trends in London and nationally.

4.

Background

In uncontrolled before-after studies, CONSORT was shown to improve the reporting of randomised trials. Before-after studies ignore underlying secular trends and may overestimate the impact of interventions. Our aim was to assess the impact of the 2007 STROBE statement publication on the quality of observational study reporting, using both uncontrolled before-after analyses and interrupted time series.

Methods

For this quasi-experimental study, original articles reporting cohort, case-control, and cross-sectional studies published between 2004 and 2010 in the four dermatological journals with the highest 5-year impact factors (≥4) were selected. We compared the proportions of STROBE items (STROBE score) adequately reported in each article during three periods: two pre-STROBE periods (2004–2005 and 2006–2007) and one post-STROBE period (2008–2010). Segmented regression analysis of the interrupted time series was also performed.

Results

Of the 456 included articles, 187 (41%) reported cohort studies, 166 (36.4%) cross-sectional studies, and 103 (22.6%) case-control studies. The median STROBE score was 57% (range, 18%–98%). Before-after analysis evidenced significant STROBE score increases between the two pre-STROBE periods and between the earliest pre-STROBE period and the post-STROBE period (median score 2004–05: 48% versus 2008–10: 58%, p<0.001) but not between the immediate pre-STROBE period and the post-STROBE period (median score 2006–07: 58% versus 2008–10: 58%, p = 0.42). In the pre-STROBE period, the six-monthly mean STROBE score increased significantly, by 1.19% per six-month period (absolute increase 95% CI, 0.26% to 2.11%, p = 0.016). By segmented analysis, no significant change in STROBE score trends occurred after the STROBE statement publication (−0.40%; 95% CI, −2.20 to 1.41; p = 0.64).

Interpretation

The quality of reports increased over time but was not affected by STROBE. Our findings raise concerns about the relevance of uncontrolled before-after analysis for estimating the impact of guidelines.

5.
6.

Background

Malaria-endemic countries have scaled up community health worker (CHW) interventions to diagnose and treat malaria in communities with limited access to public health systems. Evaluations of these programmes have centred on CHWs' compliance with guidelines, but the broader changes at public health centres, including utilisation and the diagnoses made, have received limited attention.

Methods

This analysis was conducted during a CHW intervention for malaria in Rukungiri District, Western Uganda. Outpatient department (OPD) visit data were collected for children under five attending three health centres for one year before the CHW intervention started (pre-intervention period) and for 20 months during the intervention (intervention period). An interrupted time series analysis with segmented regression models was used to compare trends in malaria, non-malaria and overall OPD visits between the pre-intervention and intervention periods.

Results

Following the introduction of the CHW intervention, the frequency of diagnoses of diarrhoeal diseases, pneumonia and helminth infections increased at health centres, whilst the frequency of malaria diagnoses declined. In May 2010, when the intervention began, overall health centre utilisation decreased by 63% compared to the pre-intervention period, and the health centres saw 32 fewer overall visits per month than in the pre-intervention period (p<0.001). Malaria visits also declined shortly after the intervention began, with 27 fewer visits per month during the intervention period compared with the pre-intervention period (p<0.05). The declines in overall and malaria visits were sustained for the entire intervention period. In contrast, there were no observable changes in trends of non-malarial visits between the pre-intervention and intervention periods.

Conclusions

This analysis suggests introducing a CHW intervention can reduce the number of child malaria visits and change the profile of cases presenting at health centres. The reduction in the workload of health workers may allow them to spend more time with patients or undertake additional curative or preventative roles.

7.
Objective: To investigate the influence of patient obesity on primary care physician practice style. Research Methods and Procedures: This was a randomized, prospective study of 509 patients assigned for care by 105 primary care resident physicians. Patient data collected included sociodemographic information, self‐reported health status (Medical Outcomes Study Short Form‐36), evaluation for depression (Beck Depression Index), and satisfaction. Height and weight were measured to calculate the BMI. Videotapes of the visits were analyzed using the Davis Observation Code (DOC). Results: Regression equations were estimated relating obesity to visit length, each of the 20 individual DOC codes, and the six DOC Physician Practice Behavior Clusters, controlling for patient health status and sociodemographics. Obesity was not significantly associated with the length of the visit, but influenced what happened during the visit. Physicians spent less time educating obese patients about their health (p = 0.0062) and more time discussing exercise (p = 0.0075). Obesity was not related to discussions regarding nutrition. Physicians spent a greater portion of the visit on technical tasks when the patient was obese (p = 0.0528). Mean pre‐visit general satisfaction for obese patients was significantly lower than for non‐obese patients (p = 0.0069); however, there was no difference in post‐visit patient satisfaction. Discussion: Patient obesity impacts the medical visit. Further research can promote a greater understanding of the relationships between obese patients and their physicians.

8.

Background

The aim of this study was to describe the use of gastrointestinal (GI) protection before, during and after hospitalisation for elderly patients using NSAIDs or low-dose ASA.

Methods

This study included all elderly patients (75+) admitted to Odense University Hospital, Denmark, between 1 April 2010 and 31 March 2011, who were regular users of NSAIDs or low-dose ASA before hospital admission, or who had one of these drugs initiated during their hospital stay. Using pharmacy dispensing data and a hospital-based pharmacoepidemiological database, the treatment strategy for individual patients was followed across the hospital stay.

Results

In total, 3,587 patients were included. Before hospital admission, 93 of 245 NSAID users (38.0%) and 597 of 1,994 users of low-dose ASA (29.9%) had used GI protection. During the hospital stay, use of GI protection increased to 75% and 33.9%, respectively. When hospital physicians initiated new treatment with NSAIDs or with low-dose ASA, 305 of 555 (55.0%) and 647 of 961 (67.3%) treatments, respectively, were initiated without concomitant GI protection. When hospital physicians initiated GI protection, 26.8–51.0% of treatments were continued in primary care after discharge.

Conclusions

During hospital stay, the use of GI protection increases, but when new treatment with NSAIDs or low-dose ASA is initiated in hospital, the use of GI protection is low. This low use of GI protection carries over into primary care after discharge.

9.
We sought to determine whether an intervention labeled "biofeedback" could be implemented, in a primary care setting, with patients diagnosed with "functional" disorders (Irritable Bowel Syndrome, Fibromyalgia/Chronic Fatigue Syndrome, Myofascial Pain, Anxiety with somatic features, or Noncardiac Chest Pain), and whether cost savings through lowered utilization of medical services would be realized. Seventy patients were initially randomized into a treatment group or comparison group based on willingness to participate. Ultimately, 19 patients completed treatment and 30 were followed through usual treatment as a comparison. Treatment patients completed symptom diaries while working with a biofeedback therapist in the primary care facility. Both groups' medical expenses were tracked for 6 months prior to and 6 months after the treatment time interval. Patients in the treatment group lowered symptom frequency and severity significantly. Medical costs were differentially reduced, such that costs fell by $72 in the treatment group versus $9 in the comparison group over the 6 months following the treatment period (p < .001). Unfortunately, a large group of assigned treatment patients did not start or complete treatment. These patients had high initial costs, which rose even higher afterwards. No comparable group could be found among the controls, limiting any inference regarding cost/benefit. Biofeedback-based interventions for "functional" disorders can be easily integrated into primary care settings, can reduce symptoms, and may be able to reduce overall medical costs in this group of patients known as heavy utilizers.

10.

Background

Because Taiwan has the fastest aging rate among developed countries, care for the elderly is becoming more prominent in the country. Primary family caregivers play an important role in patient health and health promotion behavior. Chronic obstructive pulmonary disease (COPD), an age-related disease, is a major public health problem with high morbidity and mortality and can be a long-term burden for family members; however, little attention has been given to the differences in COPD care between elder caregivers and other caregivers. This study aimed to investigate the differences between elder family caregivers and non-elder family caregivers caring for COPD patients in Taiwan, including caring behavior, caregiver response, and caring knowledge.

Methods

This cross-sectional study was conducted between March 2007 and January 2008; 406 primary family caregivers of COPD patients from the thoracic outpatient departments of 6 hospitals in north-central Taiwan were recruited to answer questionnaires measuring COPD characteristics, care behavior, caregiver response, and COPD knowledge. All questionnaires, which addressed caregiver knowledge, care behaviors, and care reactions, were shown to have acceptable validity and reliability, and the data were analyzed using univariate and generalized linear model techniques.

Results

The elder caregiver group had 79 participants, and the non-elder caregiver group had 327 participants. The COPD-related knowledge scale results were positively correlated with the family caregiver caring behavior scale, suggesting that better COPD-related knowledge among family caregivers may result in improved caring behavior. After adjusting for all possible confounding factors, the elder caregivers had significantly lower COPD-related knowledge than the non-elder caregivers (P<0.001). However, there were no significant differences in the family caregiver caring behavior scale or the caregiver reaction assessment scale between the two groups.

Conclusions

Elder family caregivers require increased education regarding medications and preventive care in COPD patient care.

11.
12.

Background

Birmingham is the largest UK city after London, and central Birmingham has an annual tuberculosis incidence of 80 per 100,000. We examined seasonality and sunlight as drivers of tuberculosis incidence: hours of sunshine are seasonal; sunshine exposure is necessary for the body's production of vitamin D; and vitamin D plays a role in the host response to tuberculosis.

Methods

We performed an ecological study that examined tuberculosis incidence in Birmingham from December 1981 to November 2009, using publicly available data from statutory tuberculosis notifications, and related this to the seasons and hours of sunshine (UK Meteorological Office data) using unobserved components models.

Results

There were 9,739 tuberculosis cases over the study period. There was strong evidence for seasonality, with notifications being 24.1% higher in summer than winter (p<0.001). Winter dips in sunshine correlated with peaks in tuberculosis incidence six months later (4.7% increase in incidence for each 100 hours decrease in sunshine, p<0.001).

Discussion and Conclusion

A potential mechanism for these associations includes decreased vitamin D levels with consequent impaired host defence arising from reduced sunshine exposure in winter. This is the longest time series of any published study and our use of statutory notifications means this data is essentially complete. We cannot, however, exclude the possibility that another factor closely correlated with the seasons, other than sunshine, is responsible. Furthermore, exposure to sunlight depends not only on total hours of sunshine but also on multiple individual factors. Our results should therefore be considered hypothesis-generating. Confirmation of a potential causal relationship between winter vitamin D deficiency and summer peaks in tuberculosis incidence would require a randomized-controlled trial of the effect of vitamin D supplementation on future tuberculosis incidence.

13.
The healthcare of people with HIV is transitioning from specialty care to the primary healthcare (PHC) system. However, many of the performance indicators used to measure the quality of HIV care pre-date this transition. The goal of this work was to examine how existing HIV care performance indicators measure the comprehensive and longitudinal care offered in a PHC setting. A scoping review consisting of peer-reviewed and grey literature searches was performed. Two reviewers evaluated study eligibility, and indicators in documents meeting inclusion criteria were extracted into a database. Indicators were matched to a PHC performance measurement framework to determine their applicability for evaluating quality of care in the PHC setting. The literature search identified 221 publications, of which 47 met inclusion criteria. In total, 1,184 indicators were extracted, and removal of duplicates left 558 unique indicators. The largest shares of the 558 indicators fell under the 'secondary prevention' (12%) and 'care of chronic conditions' (33%) domains when indicators were matched to the PHC performance framework. Despite the imbalance, nearly all performance domains in the PHC framework were populated by at least one indicator, with significant concentrations in domains such as patient-provider relationship, patient satisfaction, population and community characteristics, and access to care. Existing performance frameworks for the care of people with HIV provide a comprehensive set of indicators that align well with a PHC performance framework. Nonetheless, some important elements of care, such as patient-reported outcomes, are poorly covered by existing indicators. Advancing our understanding of how the experience of care for people with HIV is affected by changes in health services delivery, specifically more care within the PHC system, will require performance indicators that capture this aspect of HIV care.

14.

Background

Wide variations in mortality rates persist between different areas in England, despite an overall steady decline. To evaluate a conceptual model that might explain how population and service characteristics influence population mortality variations, an overall null hypothesis was tested: variations in primary healthcare service do not predict variations in mortality at population level, after adjusting for population characteristics.

Methodology/Principal Findings

In an observational study of all 152 English primary care trusts (geographical groupings of population and primary care services, total population 52 million), routinely available published data from 2008 and 2009 were modelled using negative binomial regression. Counts for all-cause, coronary heart disease, all cancers, stroke, and chronic obstructive pulmonary disease mortality were analyzed using explanatory variables of relevant population and service-related characteristics, including an age-correction factor. The main predictors of mortality variations were population characteristics, especially age and socio-economic deprivation. For the service characteristics, a 1% increase in the percentage of patients on a primary care hypertension register was associated with decreases in coronary heart disease mortality of 3% (95% CI 1–4%, p = 0.006) and in stroke mortality of 6% (CI 3–9%, p<0.0001); a 1% increase in the percentage of patients recalling being better able to see their preferred doctor was associated with decreases in chronic obstructive pulmonary disease mortality of 0.7% (CI 0.2–2.0%, p = 0.02) and in all cancer mortality of 0.3% (CI 0.1–0.5%, p = 0.009) (continuity of care). The study found no evidence of an association at primary care trust population level between variations in achievement of pay for performance and mortality.

Conclusions/Significance

Some primary healthcare service characteristics were also associated with variations in mortality at population level, supporting the conceptual model. Health care system reforms should strengthen these characteristics by delivering cost-effective evidence-based interventions to whole populations, and fostering sustained patient-provider partnerships.

15.

Background

Osteoporosis and associated fragility fractures are a major health problem; they are more common in women over 50 years old. Fracture liaison nurses have been widely used in secondary care to promote the recognition of fragility fractures and the use of bone-sparing medication to reduce the risk of recurrent fracture.

Objective

To audit the impact of a primary care based fracture liaison nurse on the detection of fragility fractures in people with osteoporosis and on their treatment with bone-sparing medication.

Method

This audit took place in 12 GP practices using 'before and after' cross-sectional extractions of anonymised routine data. We report, for females aged 50–74 years and ≥75 years, the socio-economic deprivation index, the prevalence of osteoporosis, and the recording of fragility fractures, dual-energy X-ray absorptiometry (DXA), smoking, body mass index (BMI), and use of appropriate bone-sparing medication. We used Altman's test of independent proportions to compare before and after data.

Results

Recording of the diagnosis of osteoporosis increased from 1.5% to 1.7% (p = 0.059); the rate of DXA scans fell (1.8% to 1.4%; p = 0.002); recording of fractures and fragility fractures more than doubled (0.8% to 2.0%, p<0.001, and 0.5% to 1.5%, p<0.001, respectively), with approximate doubling of the recording of smoking and BMI (both p<0.001). Fragility fracture recording rose from 8.8% to 15% in females aged 50 to 74, and from 0.8% to 2.3% in people aged ≥75 years (p<0.001). There appeared to be inequity in the service: the least deprived were more likely to receive DXA scans, and the more deprived more likely to be prescribed bone-sparing agents.

Conclusion

A fracture liaison nurse in primary care was associated with a period of improved management. Liaison nurses based in different parts of the health system should be tested in a prospective trial.

16.

Introduction

In low-income countries, Surgical Site Infection (SSI) is a common form of hospital-acquired infection. Antibiotic prophylaxis is an effective method of preventing these infections, if given immediately before the start of surgery. Although several studies in Africa have compared pre-operative versus post-operative prophylaxis, there are no studies describing the implementation of policies to improve prescribing of surgical antibiotic prophylaxis in African hospitals.

Methods

We conducted SSI surveillance at a typical government hospital in Kenya over a 16-month period between August 2010 and December 2011, using standard definitions of SSI and of the extent of contamination of surgical wounds. As an intervention, we developed a hospital policy that advised pre-operative antibiotic prophylaxis and discouraged extended post-operative antibiotic use. We measured the process, outcome and balancing effects of this intervention using an interrupted time series design.

Results

From a starting point of near-exclusive post-operative antibiotic use, after policy introduction in February 2011 there was rapid adoption of the use of pre-operative antibiotic prophylaxis (60% of operations at 1 week; 98% at 6 weeks) and a substantial decrease in the use of post-operative antibiotics (40% of operations at 1 week; 10% at 6 weeks) in Clean and Clean-Contaminated surgery. There was no immediate step-change in risk of SSI, but overall, there appeared to be a moderate reduction in the risk of superficial SSI across all levels of wound contamination. There were marked reductions in the costs associated with antibiotic use, the number of intravenous injections performed and nursing time spent administering these.

Conclusion

Implementation of a locally developed policy regarding surgical antibiotic prophylaxis is an achievable quality improvement target for hospitals in low-income countries, and can lead to substantial benefits for individual patients and the institution.

17.
Previous empirical work suggests that firms with high environmental performance tend to be profitable, but questions persist about the nature of the relationship. Does stronger environmental performance really lead to better financial performance, or is the observed relationship the outcome of some other underlying firm attribute? Does it pay to have clean-running facilities or to have facilities in relatively clean industries? To explore these questions, we analyze 652 U.S. manufacturing firms over the time period 1987–1996. Although we find evidence of an association between lower pollution and higher financial valuation, we find that a firm's fixed characteristics and strategic position might cause this association. Our findings suggest that "When does it pay to be green?" may be a more important question than "Does it pay to be green?"

18.

Background

The recent decline in fertility in India has been unprecedented, especially in southern India, where fertility is almost exclusively controlled by means of permanent contraceptive methods, mainly female sterilization, which constitutes about two-thirds of overall contraceptive use. Many Indian women undergo sterilization at relatively young ages as a consequence of early marriage and childbearing at short birth intervals. This research investigates the socioeconomic factors determining the choice of alternative contraceptive methods over the dominant preference for sterilization among married women in India.

Methods

Data for this study are drawn from the 2005–06 National Family Health Surveys focusing on a sample of married women who reported having used a method of contraception in the five years preceding the survey. A multilevel multinomial logit regression is used to estimate the impact of socioeconomic factors on contraceptive choices, differentiating temporary modern or traditional methods versus sterilization.

Findings

Religious affiliation, women's education and occupation had an overarching influence on method choices amongst recent users. Muslim women had higher odds of choosing a traditional or modern temporary method than sterilization. A higher level of women's education increased the odds of modern temporary method choices, but the education effect on traditional method choices was only marginally significant. Recent users belonging to wealthier households had higher odds of choosing modern methods over sterilization. Exposure to family planning messages through radio had a positive effect on modern and traditional method choices. Community variations in method choices were highly significant.

Conclusion

The persistent dominance of sterilization in the Indian family planning programme is largely determined by socioeconomic conditions. Reproductive health programmes should address the socioeconomic barriers and consider multiple cost-effective strategies such as mass media to promote awareness of modern temporary methods.

19.

Background

The analgesic co-proxamol (paracetamol/dextropropoxyphene combination) has been widely involved in fatal poisoning. Concerns about its safety/effectiveness profile and widespread use for suicidal poisoning prompted its withdrawal in the UK in 2005, with partial withdrawal between 2005 and 2007, and full withdrawal in 2008. Our objective in this study was to assess the association between co-proxamol withdrawal and prescribing and deaths in England and Wales in 2005–2010 compared with 1998–2004, including estimation of possible substitution effects by other analgesics.

Methods and Findings

We obtained prescribing data from the NHS Health and Social Care Information Centre (England) and Prescribing Services Partneriaeth Cydwasanaethau GIG Cymru (Wales), and mortality data from the Office for National Statistics. We carried out an interrupted time-series analysis of prescribing and deaths (suicide, open verdicts, accidental poisonings) involving single analgesics. The reduction in prescribing of co-proxamol following its withdrawal in 2005 was accompanied by increases in prescribing of several other analgesics (co-codamol, paracetamol, codeine, co-dydramol, tramadol, oxycodone, and morphine) during 2005–2010 compared with 1998–2004. These changes were associated with major reductions in deaths due to poisoning with co-proxamol receiving verdicts of suicide and undetermined cause of −21 deaths (95% CI −34 to −8) per quarter, equating to approximately 500 fewer suicide deaths (−61%) over the 6 years 2005–2010, and −25 deaths (95% CI −38 to −12) per quarter, equating to 600 fewer deaths (−62%) when accidental poisoning deaths were included. There was little observed change in deaths involving other analgesics, apart from an increase in oxycodone poisonings, but numbers were small. Limitations were that the study was based on deaths involving single drugs alone and changes in deaths involving prescribed morphine could not be assessed.

Conclusions

During the 6 years following the withdrawal of co-proxamol in the UK, there was a major reduction in poisoning deaths involving this drug, without apparent significant increase in deaths involving other analgesics.

20.
The detection of patterns in monitoring data of vital signs is of great importance for adequate bedside decision support in critical care. Currently used alarm systems, which are based on fixed thresholds and independence assumptions, are not satisfactory in clinical practice. Time series techniques such as AR models consider autocorrelations within the series, which can be used for pattern recognition in the data. For practical applications in intensive care, the data analysis has to be automated. An important issue is the suitable choice of the model order, which is difficult to accomplish online. In a comparative case study, we analyzed 34,564 univariate time series of hemodynamic variables in critically ill patients using autoregressive models of different orders and compared the results of pattern detection. AR(2) models seem to be most suitable for the detection of clinically relevant patterns, affirming that treating the data as independent leads to false alarms. Moreover, using AR(2) models requires only short estimation periods. These findings for pattern detection in intensive care data are of medical importance, as they justify a preselection of the model order, easing further automated statistical online analysis.
