Journal of Emergencies, Trauma, and Shock


 
ORIGINAL ARTICLE
Year: 2013 | Volume: 6 | Issue: 1 | Page: 3-10
Evaluating trauma center structural performance: The experience of a Canadian provincial trauma system


1 Department of Social and Preventive Medicine; Unité de Traumatologie-urgence-soins Intensifs, Center de Recherche du CHA (Hôpital de l'Enfant-Jésus), Laval University, Quebec (Qc), Canada
2 Unité de Traumatologie-urgence-soins Intensifs, Center de Recherche du CHA (Hôpital de l'Enfant-Jésus), Laval University, Quebec (Qc), Canada
3 Unité de Traumatologie-urgence-soins Intensifs, Center de Recherche du CHA (Hôpital de l'Enfant-Jésus), Laval University, Quebec (Qc); Department of Rehabilitation, Université Laval, Quebec (Qc), Canada
4 School of Rehabilitation, Faculty of Medicine, University of Montreal, Montreal (Qc), Canada


Date of Submission: 14-Mar-2012
Date of Acceptance: 08-Apr-2012
Date of Web Publication: 22-Jan-2013
 

   Abstract 

Background: Indicators of structure, process, and outcome are required to evaluate the performance of trauma centers and to improve the quality and efficiency of care. While periodic external accreditation visits are part of most trauma systems, a quantitative indicator of structural performance has yet to be proposed. The objective of this study was to develop and validate a trauma center structural performance indicator using accreditation report data. Materials and Methods: Analyses were based on accreditation reports completed during on-site visits in the Quebec trauma system (1994-2005). Qualitative report data were retrospectively transposed onto an evaluation grid, and the weighted average of grid items was used to quantify performance. The indicator of structural performance was evaluated in terms of test-retest reliability (kappa statistic), discrimination between centers (coefficient of variation), construct validity (correlation with accreditation decision, designation level, and patient volume), and forecasting (correlation between visits performed in 1994-1999 and 1998-2005). Results: Kappa statistics were >0.8 for 66 of the 73 (90%) grid items. The mean structural performance score over 59 trauma centers was 47.4 (95% CI: 43.6-51.1). Two centers were flagged as outliers, and the coefficient of variation was 31.2% (95% CI: 25.5-37.6%), showing good discrimination. Correlation coefficients of associations with accreditation decision, designation level, and volume were all statistically significant (r = 0.61, -0.40, and 0.24, respectively). No correlation was observed over time (r = 0.03). Conclusion: This study demonstrates the feasibility of quantifying trauma center structural performance using accreditation reports. The proposed performance indicator shows good test-retest reliability, between-center discrimination, and construct validity. The observed variability in structural performance across centers and over time underlines the importance of evaluating structural performance in trauma systems at regular intervals to drive quality improvement efforts.

Keywords: Performance indicators, quality of care, structure, trauma center, trauma system

How to cite this article:
Moore L, Lavoie A, Sirois MJ, Swaine B, Murat V, Sage NL, Emond M. Evaluating trauma center structural performance: The experience of a Canadian provincial trauma system. J Emerg Trauma Shock 2013;6:3-10. Available from: http://www.onlinejets.org/text.asp?2013/6/1/3/106318



Introduction


In addition to being the leading cause of death among persons under the age of 45, injury is the disease with the highest financial burden. [1],[2] Furthermore, demographic shifts and the continual development of new and costly technologies have driven a rapid rise in the costs of trauma care. [1],[2],[3] In light of evidence suggesting that acute clinical trauma care is suboptimal [4] and that patient outcomes vary across trauma centers, [5],[6] health care authorities worldwide are expressing an urgent need for information about health care performance. [3]

Performance in health care is widely measured using Donabedian's structure-process-outcome model. [7] Structure refers to the physical environment of the health care facility including human resources, process refers to clinical interventions in individual patients, and outcome refers to the status of the patient at the end of an episode of care. According to this model, improvements in structure influence process and ultimately, patient outcome. To improve quality of care, performance should be measured by indicators in each domain.

Performance evaluation in trauma began in 1976, when the American College of Surgeons Committee on Trauma (ACS-COT) introduced a series of 12 audit filters, updated to 22 in 1993. [8] However, these audit filters are based solely on clinical processes and patient outcomes. In organized trauma systems, structure is traditionally evaluated in a qualitative fashion through on-site accreditation visits by a committee of external experts who verify adherence to a series of criteria recommended by the ACS-COT. [8] Despite the widespread use of accreditation procedures, no quantitative indicator of structural performance has been proposed for trauma center evaluation.

The objective of this study was to develop and validate a structural performance indicator that can be ascertained from accreditation visit data and used to drive performance improvement efforts.


Materials and Methods


Study population

The study was based on the inclusive trauma system of the province of Quebec, Canada. The Quebec trauma system was established in 1993 and provides regionalized care ranging from urban level I trauma centers to rural community hospitals. At the end of the study period, the system included 6 level I (including 2 pediatric), 4 level II, 21 level III, and 28 level IV centers. Standardized prehospital protocols ensure that major trauma cases are taken to these hospitals, and standing agreements regulate interhospital transfers within the system.

Data

Accreditation data

Data were extracted from accreditation reports completed during on-site visits performed at each trauma center between 1994 and 2005. The Quebec accreditation process was conducted by an independent trauma medical council comprising 30 medical experts from provincial trauma centers. The council evaluated each center using a checklist of items based on ACS-COT criteria. [9] After each site visit, the accreditation committee made one of four final recommendations: (1) maintain the accreditation level without modification, (2) maintain the current level pending specific modifications, without an on-site control visit, (3) maintain the current level pending modifications, with an on-site control visit, or (4) revoke the designation status.

Two waves of accreditation visits were performed by the trauma medical council after the initial designation process in 1993: the first between 1994 and 1999 and the second between 1998 and 2005. Analyses were based on the second wave of visits because these visits were conducted after the system's run-in period and coincided with the collection of trauma registry data.

Patient-level data

To describe the patient population of the Quebec trauma system, data were extracted from the Quebec trauma registry (1999-2006). This registry is mandatory for all 59 provincially designated trauma centers and includes all deaths following injury, intensive care unit admissions, hospital stays ≥2 days, and interhospital transfers.

Development of the structural performance indicator

We first developed a standardized evaluation grid to extract qualitative information from the accreditation reports (Appendix). This grid included 73 items, based on the ACS-COT checklist, grouped under three themes: commitment, trauma program, and procedural protocols. To standardize the transposition process, a preliminary series of 10 randomly selected reports was transposed independently onto the grid by five experts from the trauma medical council. After this run-in period, a single evaluator transposed all reports onto the evaluation grid. A series of 12 randomly selected reports was duplicated and inserted among the reports under fictitious hospital names to verify intraobserver reliability. To generate a quantitative score from the evaluation grid, each item was scored either on a Likert scale ranging from 0 (very negative) to 4 (very positive) or on a binary scale (0, absent; 4, present).



Weights were then assigned to each item of the grid by a group of 10 experts from the trauma medical council according to perceived importance. The structural performance score was calculated for each trauma center as the weighted sum of the 73 items, standardized to range between 0 and 100 (see Appendix). A higher score thus represents better structural performance: a score of 0 would be given to a trauma center with all items absent or very negative, whereas a score of 100 would be given to a trauma center with all items present or very positive.
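The standardization described above can be sketched briefly. The study's analyses were performed with SAS; the Python below is an illustrative re-expression with made-up item values and weights, not the authors' code.

```python
def structural_score(item_scores, weights):
    """Weighted sum of 0-4 grid items, standardized to the 0-100 range.

    item_scores: one value per grid item (0 = absent/very negative,
    4 = present/very positive); weights: consensus-based item weights.
    """
    weighted_sum = sum(s * w for s, w in zip(item_scores, weights))
    max_possible = 4 * sum(weights)  # every item present/very positive
    return 100.0 * weighted_sum / max_possible

# Hypothetical three-item grid: all items very positive -> 100,
# all absent -> 0, all neutral -> 50.
print(structural_score([4, 4, 4], [1.0, 2.0, 3.0]))  # 100.0
print(structural_score([2, 2, 2], [1.0, 2.0, 3.0]))  # 50.0
```

Dividing by the maximum attainable weighted sum is what makes scores comparable across centers regardless of the weighting scheme.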

Since the accreditation reports were written in short form, not all elements of the evaluation grid were thoroughly documented in each report. In addition, some items were not scored because they did not apply to a particular level of trauma center (see Appendix). Missing data items were attributed a score of 2 (neutral). Sensitivity analyses verifying the robustness of results to this treatment of missing data are presented in the Results section.

Structural performance scores are presented using modified rank plots in which trauma centers are ordered by designation level and volume. Because data were based on only one accreditation visit, confidence intervals (CI) could not be calculated for each individual center. We therefore identified outliers by plotting limits at ±2 standard deviations around the global mean.
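The outlier rule (scores beyond ±2 standard deviations of the global mean) can be sketched as follows. This is an illustrative Python version with made-up scores (the original analyses used SAS), and the use of the population standard deviation is an assumption.

```python
import statistics

def flag_outliers(scores, k=2.0):
    """Return indices of centers falling outside mean +/- k standard deviations."""
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)  # population SD; an assumption in this sketch
    lower, upper = mean - k * sd, mean + k * sd
    return [i for i, s in enumerate(scores) if s < lower or s > upper]

# Ten mid-range centers plus one high and one low performer (made-up data):
print(flag_outliers([50] * 10 + [90, 10]))  # [10, 11]
```

The same limits drawn on a rank plot give the visual band used to spot the two outlying centers reported in the Results.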

Validation of the structural performance indicator

The structural performance indicator was evaluated in terms of intraobserver reliability, discrimination between trauma centers, construct validity, and forecasting, according to recommendations for evaluating composite indicators proposed by the Agency for Healthcare Research and Quality. [10]

Intraobserver reliability was evaluated by calculating quadratic weighted kappa statistics [11] on each item for a sample of 12 randomly selected reports that were coded twice by the same evaluator.
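The quadratic-weighted kappa penalizes disagreements by the squared distance between rating categories, so near-misses on the 0-4 scale count less than large discrepancies. A minimal illustrative implementation (not the study's SAS code):

```python
def quadratic_weighted_kappa(r1, r2, n_cat=5):
    """Quadratic-weighted kappa for two ratings of the same items (0..n_cat-1)."""
    n = len(r1)
    # Observed joint distribution of the two ratings
    obs = [[0.0] * n_cat for _ in range(n_cat)]
    for a, b in zip(r1, r2):
        obs[a][b] += 1.0 / n
    p1 = [sum(row) for row in obs]                              # rating-1 marginals
    p2 = [sum(obs[i][j] for i in range(n_cat)) for j in range(n_cat)]

    def w(i, j):  # quadratic disagreement weight
        return ((i - j) ** 2) / ((n_cat - 1) ** 2)

    observed = sum(w(i, j) * obs[i][j] for i in range(n_cat) for j in range(n_cat))
    expected = sum(w(i, j) * p1[i] * p2[j] for i in range(n_cat) for j in range(n_cat))
    return 1.0 - observed / expected

# Identical ratings give kappa = 1 (perfect agreement):
print(round(quadratic_weighted_kappa([0, 1, 2, 3, 4], [0, 1, 2, 3, 4]), 3))  # 1.0
```

Values above 0.8, the threshold used in this study, are conventionally read as almost perfect agreement. [14]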

Discrimination was defined as the ability of the indicator to differentiate performance between trauma centers and was evaluated with the coefficient of variation, calculated as the ratio of the standard deviation of the structural performance scores to their mean, multiplied by 100. The CI of the coefficient of variation was generated using bootstrap resampling (n = 500). [12] Coefficients of variation are expressed as percentages, where higher values indicate greater between-center variation. A coefficient of variation whose lower 95% confidence limit was above 10% was considered to reflect a distribution with high variation, indicating good between-center discrimination. [13]
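The coefficient of variation and its percentile bootstrap CI can be sketched as follows; this is an illustrative Python version (the paper used SAS), and the percentile method and seed are assumptions of the sketch.

```python
import random
import statistics

def cv_percent(xs):
    """Coefficient of variation: 100 * SD / mean."""
    return 100.0 * statistics.stdev(xs) / statistics.mean(xs)

def cv_with_bootstrap_ci(scores, n_boot=500, seed=1):
    """Point estimate plus a simple percentile bootstrap 95% CI (n_boot resamples)."""
    rng = random.Random(seed)
    boots = sorted(
        cv_percent([rng.choice(scores) for _ in scores])
        for _ in range(n_boot)
    )
    return cv_percent(scores), (boots[int(0.025 * n_boot)],
                                boots[int(0.975 * n_boot) - 1])
```

With the study's scores, this procedure yields the reported 31.2% (95% CI: 25.5-37.6%); the decision rule then checks whether the lower limit exceeds 10%.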

Construct validity was defined as the degree of association between the structural performance indicator and other measures of quality. It was assessed by evaluating the correlation of the structure scores with (1) designation level, (2) trauma patient volume, and (3) accreditation recommendations. Correlation was assessed with Spearman's correlation coefficients along with asymptotic 95% CI.
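Spearman's coefficient is a rank correlation: it asks whether centers that rank high on one measure also rank high on the other. A minimal sketch with made-up scores and volumes (illustrative Python, not the study's SAS code; ties would need midranks, which this sketch omits):

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic sum-of-squared-rank-differences
    formula. Assumes no tied values."""
    def ranks(values):
        order = sorted(range(len(values)), key=values.__getitem__)
        r = [0] * len(values)
        for rank, idx in enumerate(order, start=1):
            r[idx] = rank
        return r

    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n * n - 1))

# A perfectly monotone association gives rho = 1 (hypothetical score/volume pairs):
print(spearman_rho([35, 43, 52, 63], [100, 300, 600, 900]))  # 1.0
```

Because only ranks enter the calculation, the measure is insensitive to the scale of the score and suits ordinal quantities such as designation level.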

Forecasting was defined as the ability of the structural performance indicator to predict structural performance over time. For this analysis we calculated structural performance scores using accreditation reports completed between 1994 and 1999 during the run-in period of the Quebec trauma system (data not used in any other analyses). We then evaluated the correlation between structural performance scores generated from the two waves of accreditation visits (1994-1999 and 1998-2005). Correlation was assessed with Spearman's correlation coefficient with an asymptotic 95% CI. Since the first wave of visits was only available for 57/59 trauma centers, the correlation coefficient was based on n = 57.

Sensitivity analyses

Sensitivity analyses were performed to assess the robustness of the structural performance scores to the treatment of missing data items and to the weighting scheme. For missing data items, we evaluated the correlation in trauma center structural performance scores when missing values were coded 0 (very negative/absent: worst-case scenario) or 4 (very positive/present: best-case scenario) instead of 2 (neutral). For the weighting scheme, we evaluated the correlation in trauma center ranks when equal weights were used instead of consensus-based weights.
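The missing-data check amounts to recomputing each center's score under three fill values. A minimal sketch with hypothetical items and equal weights (the original analyses used SAS and consensus weights):

```python
def score_with_fill(items, weights, fill):
    """Standardized weighted score after recoding missing items (None) to `fill`."""
    values = [fill if v is None else v for v in items]
    total = sum(v * w for v, w in zip(values, weights))
    return 100.0 * total / (4 * sum(weights))

items = [4, None, 2, None]          # hypothetical grid with two missing items
weights = [1.0, 1.0, 1.0, 1.0]      # equal weights, for illustration only
worst = score_with_fill(items, weights, 0)    # missing -> very negative/absent
neutral = score_with_fill(items, weights, 2)  # missing -> neutral (main analysis)
best = score_with_fill(items, weights, 4)     # missing -> very positive/present
print(worst, neutral, best)  # 37.5 62.5 87.5
```

Correlating the per-center scores across the three fills (as done in [Table 3]) then shows how much the ranking depends on the neutral-fill assumption.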

All analyses were performed with the SAS system (version 9.2). Ethical approval was obtained from our institutional ethics committee. The identity of trauma centers is not revealed to protect institutional confidentiality.


Results


Level I trauma centers admitted on average 900 patients per year who met trauma registry criteria, whereas mean volume was below 40 for rural level IV centers [Table 1]. Level I/II centers admitted patients who were more seriously injured and more frequently presented with head injuries than those in lower level centers. Only 4% of the patient population was admitted for penetrating trauma.
Table 1: Characteristics of trauma centers in the Quebec trauma system (1999-2006)




The mean structural performance score among the 59 trauma centers was 47.4/100 (95% CI: 43.6-51.1). Most points were lost on the trauma program theme (mean of 12.8/33.4), followed by procedural protocols (20.3/44.4), and lastly commitment (15.6/22.2). There were two institutional outliers: one level III center was a high performer, whereas one level IV center was a low performer [Figure 1].
Figure 1: Structural performance scores for the 59 trauma centers in the Quebec trauma system (1998-2005)



Intraobserver agreement was almost perfect (kappa > 0.8) for 66 of the 73 items (90%) in the evaluation grid. [14]

Between-center discrimination of structural performance was good. The coefficient of variation of the structural performance scores was 31.2% with a bootstrap 95% CI of 25.5-37.6%, indicating that the distribution of structural performance scores had high variation.

Of the 59 trauma centers in the system, 20 (33.9%) maintained their designation status as is, 6 (10.2%) were asked to make specific changes with no on-site control visit, 15 (25.4%) were asked to make changes with a control visit, and 18 (30.5%) had their designation level revoked. In terms of construct validity, accreditation decisions were strongly correlated with structural performance scores [Table 2]; median scores were 52/100 for centers that maintained their status as is, 47/100 for centers with changes to make but no control visit, 43/100 for centers with changes and a control visit, and 35/100 for centers with revoked status (P = 0.0008). The indicator of structural performance also correlated with trauma center designation level [Table 2]. Median structural performance scores for levels I through IV were 63/100, 52/100, 54/100, and 40/100, respectively, showing a general trend of decreasing scores with decreasing designation level (P = 0.002). We observed a weaker but statistically significant correlation with patient volume, whereby higher volumes were associated with higher structural performance scores (P = 0.05). No correlation was observed between structural performance scores in the first wave of accreditation visits, performed just after implementation of the trauma system in 1993, and those in the second wave, performed between 1998 and 2005 [Figure 2].
Table 2: Association of structural performance scores with the accreditation decision, designation level, and trauma center volume (1998-2005)


Figure 2: Correlation between structural performance scores derived from accreditation visits performed in the first wave (1994-1999) and the second wave (1998-2005)



Sensitivity analyses

Structural performance scores calculated by attributing a value of 0 (very negative/absent: worst-case scenario) or 4 (very positive/present: best-case scenario) to all missing data items were highly correlated with scores calculated by attributing a value of 2 [Table 3]. Between-center variation was unchanged, as expected. Correlations with accreditation decision, designation level, and volume were similar, as was the correlation between the two waves of visits. The correlation between structural performance scores based on consensus weights and equal weights for each item of the grid was almost perfect: r = 0.99 (95% CI: 0.98-0.99).
Table 3: Sensitivity analyses: Comparison of study results when missing data values were coded 2 (neutral) with study results when missing data values were coded 0 (very negative/absent) or 4 (very positive/present)





Discussion


In this study, we have demonstrated the feasibility of deriving a quantitative composite indicator of structural performance from trauma system accreditation reports based on ACS-COT criteria. The proposed indicator shows good reliability, discriminates between centers, and correlates well with designation level, volume, and accreditation status. The lack of correlation over time may suggest that accreditation reports need to be completed in a more standardized manner, but it may also indicate that structural performance fluctuates over time. The proposed indicator can be used to describe structural performance and identify trauma center outliers to drive quality improvement efforts.

Indicators of structural performance are an essential part of trauma center performance evaluation. In addition, they are easier to assess than process or outcome indicators because they are collected at the hospital rather than the patient level, and they may also be more easily actionable. The data used to calculate the proposed indicator are already routinely collected in most trauma systems during periodic accreditation visits. Furthermore, many items could be collected remotely rather than during on-site visits. The observed association of the proposed structural performance indicator with designation level and volume is an indication of construct validity and suggests that the indicator is likely to have good predictive criterion validity, as both designation level and volume have been shown to have a negative correlation with risk-adjusted rates of mortality, severe disability at discharge, and hospital length of stay. [15],[16] In addition, the near-perfect correlation between structural performance scores based on consensus weights and equal weights suggests that the latter could be used with no impact on results. This would simplify the calculation of structural performance scores: while weights allow administrators to account for the perceived importance of structural aspects, they require periodic updating to reflect changes in best practice guidelines and may not be exportable across trauma systems.

The observed inability of the score to forecast performance at a later time could be due to the lack of standardization of the accreditation process between the first visit, conducted during the run-in period, and the second visit, performed when the system had had a chance to mature. This, along with the presence of missing data, suggests that the accreditation process would benefit from better standardization. Indeed, in our system, the upcoming round of visits will be based on mandatory electronic completion of all items in the evaluation grid. However, the observed lack of temporal correlation may also be explained by important changes in structural performance over time. Of interest, among the themes that make up the structural score, we observed better forecasting for procedural protocols (r = 0.19; 95% CI: -0.08 to 0.43) than for the trauma program (r = 0.009; 95% CI: -0.25 to 0.27) or commitment (r = -0.05; 95% CI: -0.30 to 0.22).

Accreditation visits are customary for trauma centers. However, a recent scoping review [17] and a systematic review [18] of trauma quality indicators did not identify any studies that implemented or validated a composite indicator of structural performance for trauma care. In other health care sectors, few studies have attempted to quantify hospital structural performance. Daley et al. used on-site visits to measure structural performance in Veterans Affairs medical centers and developed a quantitative score using consensus-based ratings; they demonstrated a relationship between this indicator of structural performance and risk-adjusted mortality/morbidity rates. [19] The Index of Hospital Quality includes a composite indicator of structure for 12 medical specialties, derived by a one-factor solution from principal components analysis, that is used by the US News and World Report (National Opinion Research Center) to generate US hospital rankings. [20] Although trauma care is not evaluated per se in US hospital rankings, the structural indicator includes an item on the presence and level of trauma care. [21]

Strengths and limitations

This study was based on accreditation data from a mature integrated trauma system that follows ACS-COT designation criteria. Information from reports was transposed onto a standardized evaluation grid calibrated over several experts. In addition, we were able to evaluate intra-rater reliability and partly address the internal validity of the score. However, limitations include the lack of standardized data collection, problems related to composite scores, and generalizability beyond the Quebec trauma system.

The accreditation visit reports used in the present study lacked standardization because they were not completed in the context of quantitative performance evaluation. We addressed this problem by retrospectively extracting qualitative information from the reports and transposing it onto a standardized evaluation grid. However, missing data items were frequent, which may have influenced the final structural performance scores. A value of 2 (neutral) was attributed to missing items of the evaluation grid. This would have led to an underestimation of between-center variation and is likely to have led to an underestimation of the associations of structural performance scores with designation level, volume, and accreditation status. However, the results of sensitivity analyses suggest that the strategy used to address missing data items was adequate. To address this problem, future accreditation visits will be based on mandatory prospective electronic completion of all items in the evaluation grid.

The proposed indicator of structural performance is a composite measure and while it is a useful indicator of global performance, it is associated with the limitations inherent to composite scores. It may mask item-specific differences across centers and may make action difficult as it is hard to identify the specific causes of high/low performance. For this reason, composite scores should always be accompanied by item-specific data.

There are several different methods for deriving composite scores and there is no clear consensus on which method is optimal. [22],[23] We used a simple weighted average (consensus or equal-weights) over more complex methods such as principal components analysis or latent variable models partly because these methods would have required at least 10 observations (trauma centers) per grid item (i.e., 10 × 84 trauma centers) to generate accurate results. [24]

Finally, the evaluation grid used in our trauma system is based on ACS-COT criteria but evaluation items are likely to vary across systems. Therefore, while this study provides evidence of the feasibility of generating a quantitative structural performance indicator from report data, methods will have to be adapted to different reporting formats. In addition, the validity and reliability of the proposed composite indicator of trauma center structural performance will need to be assessed in other trauma systems.


Conclusion


In summary, we have proposed a structural performance indicator that can be derived from trauma center accreditation visit reports and we have provided evidence of its reliability and internal validity. The indicator can be used to describe performance, identify institutional outliers, and inform accreditation decisions with the goal of driving performance improvement efforts. Further research needs to evaluate the influence of structural performance measured by the proposed indicator on clinical process performance and ultimately, on patient outcome. The observed variability in structural performance across centers and the change in structural scores over time underline the importance of evaluating structural performance in trauma systems at regular intervals to drive quality improvement efforts.

 
References

1. The economic burden of injury in Canada [Internet]. Toronto (ON): SMARTRISK; 2009. Available from: http://www.smartrisk.ca/downloads/burden/Canada2009/EBI-Eng-Final.pdf. [Last accessed on 2012 Apr 12].
2. The incidence and economic burden of injury in the United States [Internet]. Atlanta (GA): Centers for Disease Control and Prevention; 2006. Available from: http://www.cdc.gov/ncipc/factsheets/CostBook/Economic_Burden_of_Injury.htm. [Last accessed on 2012 Jan 12].
3. Romanow RJ. Building on values: The future of health care in Canada [Internet]. Commission on the Future of Health Care in Canada; 2002. Available from: http://www.dsp-psd.pwgsc.gc.ca/Collection/CP32-85-2002E.pdf. [Last accessed on 2012 Apr 12].
4. Chiara O, Cimbanassi S, Pitidis A, Vesconi S. Preventable trauma deaths: From panel review to population-based studies. World J Emerg Surg 2006;1:12.
5. Moore L, Hanley JA, Turgeon AF, Lavoie A, Eric B. A new method for evaluating trauma center outcome performance: TRAM-adjusted mortality estimates. Ann Surg 2010;251:952-8.
6. Nathens AB, Xiong W, Shafi S. Ranking of trauma center performance: The bare essentials. J Trauma 2008;65:628-35.
7. Donabedian A. Evaluating the quality of medical care. Milbank Mem Fund Q 1966;44(3 Suppl):166-206.
8. American College of Surgeons Committee on Trauma. Resources for optimal care of the injured patient. Chicago: American College of Surgeons; 2006.
9. Ohlssen DI, Sharples LD, Spiegelhalter D. A hierarchical modelling framework for identifying unusual performance in health care providers. J R Stat Soc 2006;170:865-90.
10. Inpatient Quality Indicators (IQI) Composite Measure Workgroup Final Report [Internet]. Rockville (MD): Agency for Healthcare Research and Quality (AHRQ); 2008. Available from: http://www.qualityindicators.ahrq.gov/downloads/iqi/AHRQ_IQI_Workgroup_Final.pdf. [Last accessed on 2012 Jan 18].
11. Cohen J. Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychol Bull 1968;70:213-20.
12. Carpenter J, Bithell J. Bootstrap confidence intervals: When, which, what? A practical guide for medical statisticians. Stat Med 2000;19:1141-64.
13. Hendricks WA, Robey KW. The sampling distribution of the coefficient of variation. Ann Math Stat 1936;7:129-32.
14. Landis JR, Koch GG. The measurement of observer agreement for categorical data. Biometrics 1977;33:159-74.
15. Demetriades D, Martin M, Salim A, Rhee P, Brown C, Chan L. The effect of trauma center designation and trauma volume on outcome in specific severe injuries. Ann Surg 2005;242:512-7; discussion 517-9.
16. Nathens AB, Jurkovich GJ, Maier RV, Grossman DC, MacKenzie EJ, Moore M, et al. Relationship between trauma center volume and outcomes. JAMA 2001;285:1164-71.
17. Stelfox HT, Straus SE, Nathens A, Bobranska-Artiuch B. Evidence for quality indicators to evaluate adult trauma care: A systematic review. Crit Care Med 2011;39:846-59.
18. Ratcliffe GE, Lowry A, Mashiter G, Smith MA, Young AE, Maisey MN. Thyroid hormone concentrations in Nepal: A study of potential Gurkha army recruits. The effect of changes in diet. J R Army Med Corps 1991;137:14-21.
19. Daley J, Forbes MG, Young GJ, Charns MP, Gibbs JO, Hur K, et al. Validating risk-adjusted surgical outcomes: Site visit assessment of process and structure. National VA Surgical Risk Study. J Am Coll Surg 1997;185:341-51.
20. Hill CA, Winfrey KL, Rudolph BA. "Best hospitals": A description of the methodology for the Index of Hospital Quality. Inquiry 1997;34:80-90.
21. Green J, Wintfeld N, Krasner M, Wells C. In search of America's best hospitals. The promise and reality of quality assessment. JAMA 1997;277:1152-5.
22. O'Brien SM, DeLong ER, Dokholyan RS, Edwards FH, Peterson ED. Exploring the behavior of hospital composite performance measures: An example from coronary artery bypass surgery. Circulation 2007;116:2969-75.
23. Shwartz M, Ren J, Pekoz EA, Wang X, Cohen AB, Restuccia JD. Estimating a composite measure of hospital quality from the Hospital Compare database: Differences when using a Bayesian hierarchical latent variable model versus denominator-based weights. Med Care 2008;46:778-85.
24. Kim JO, Mueller CW. Factor analysis: Statistical methods and practical issues. Beverly Hills: Sage; 1978.

Correspondence Address:
Lynne Moore
Department of Social and Preventive Medicine; Unité de Traumatologie-urgence-soins Intensifs, Center de Recherche du CHA (Hôpital de l'Enfant-Jésus), Laval University, Quebec (Qc)
Canada

Source of Support: The Canadian Health Services Research Foundation and the Fondation de recherche en Santé du Québec (project #RC2-1460-05); LM is a recipient of a Canadian Health Services Research Foundation new investigator award. Conflict of Interest: None.


DOI: 10.4103/0974-2700.106318



