Background
As many as 16% of hospitalized patients experience a medical error during their health encounter [
1]. Hospital culture and patient safety are interrelated, as organizational failures and system-driven errors contribute to the unintended events that produce poor-quality outcomes [
2]. Prominent international agencies, such as the World Health Organization (WHO) and the United States Agency for Healthcare Research and Quality (AHRQ), recommend hospitals address this problem by improving their organizational cultures [
3]. As such, health services researchers are interested in the intersection of organizational culture with patient safety [
4,
5]. Safety culture can be defined as an overarching, emergent property of a healthcare organization in which professional attitudes and work climates determine the system's reliability and resilience to adverse outcomes [
6].
Clinical quality and patient safety outcomes are linked to the organizational culture dimensions that can be measured with safety culture instruments for hospitals [
7]. Safety culture measures are correlated with employee performance (e.g., safety behavior), process and system errors, and accident rates across industries [
8] and cultures [
9] with similar results from the health sector [
10]. For example, researchers using the Hospital Survey on Patient Safety Culture (HSOPSC) have reported strong correlations between safety culture, adverse event frequency, and patient outcomes [
11,
12]. For these reasons, the measurement of safety culture has become a prerequisite for continuous quality improvement efforts, providing leaders with the essential feedback that stimulates organizational improvement [
13‐
15].
In developing countries, basic hospital safety indicators are largely incomplete or unavailable [
16‐
18]. The sparse available data indicates adverse event prevalence in South American countries, including Peru [
19‐
21], is much higher than in developed countries [
22,
23]. Patient safety remains a nascent focus in the Peruvian health sector, as evidenced by the absence of basic safety programs, processes, and practices [
24]. The HSOPSC is a validated tool to measure the effectiveness of the work environment and organizational processes associated with preventing the types of errors linked to consequential adverse events [
25]. When administered yearly, the HSOPSC can provide leaders with a proxy measurement for the effectiveness of their quality improvement efforts focused on achieving patient safety [
13].
Early instruments to measure safety culture were developed in the late 1980s and early 1990s [
14] but did not gain popularity due to poor psychometric properties [
15]. In a seminal review of contemporary instruments [
26], multiple safety culture instruments were identified but varied considerably in their general characteristics, dimensions covered, and psychometric properties. Furthermore, the items and associated dimensions were not derived from a theoretically constructed framework [
13]. In response to the need for a standard instrument to measure safety culture, the AHRQ commissioned the development of a psychometrically valid and reliable instrument [
27‐
29], called the HSOPSC.
The HSOPSC, developed in English [
28] for use in American hospitals, has demonstrated excellent psychometric properties [
27]. The instrument has been disseminated to other countries, in multiple languages [
8,
24,
30‐
36]; however, published studies generally neglect to report the instrument translation method and validation data. When provided, the information is either limited or of poor quality, such as a simple post hoc psychometric analysis. In this regard, a Spanish language version of the instrument is available for use with Spanish-speaking health workers in the United States [
37]. This version has also been used in Mexico [
38,
39], Colombia [
40], and most recently Peru [
24]. Also, a different Spanish-language version of the instrument [
41] was developed for use in Spain [
However, studies applying either of these Spanish language instruments for cross-cultural research do not report the validation method and provide limited psychometric data about reliability. Judged against the basic translation strategies recommended by the AHRQ [
43], no validated Spanish language version of the HSOPSC has been reported in the literature for cross-cultural research in Peru or in other Latin American countries. Because Spanish differs between countries and word meanings differ across cultures, a validated Spanish language version of the HSOPSC that confirms construct applicability and survey item integrity needs to be developed for cross-cultural research in Peru [
44,
45].
Researchers working with Spanish speaking populations [
46,
47] have reported that the traditional forward-reverse translation technique does not result in a valid target-language instrument [
48,
49]. Item meanings, dimension integrity, and construct validity need to remain constant across languages and cultures [
50], with an attention to eliminating bias and maximizing equivalence [
51]. In order to study patient safety culture from a global perspective, the HSOPSC needs to be translated from the source language (original language) into the target language (translated language) without losing the meaning [
52] and context of the items and associated scales and/or dimensions during the translation process [
51]. As such, the purpose of this study was to test a mixed-method approach for target-language instrument translation to produce a valid HSOPSC translation for cross-cultural research in Peru.
Developing a psychometrically sound target-language instrument for cross-cultural research from an English-language source, originally validated for use in the United States, is not straightforward [
53]. In cross-cultural research, the assumption that all instruments will automatically be equivalent across groups does not hold [
54]. Instruments are products of the local environment and era in which they were developed and are likely to be most reliable in that original context [
55]. Yet, researchers consistently fail to describe how the HSOPSC was translated and validated [
56] for cross-cultural comparability and compatibility [
57]. For cross-cultural research, the technical and semantic equivalence and cultural relevance of each item need to be evaluated prior to data collection [
58‐
60]. Without a sufficient pre-data collection evaluation to ensure the correct representation of the instrument [
60], the resulting factor analyses post-data collection will be flawed and less rigorous [
56].
Methods
Despite the increasing adaptation of English language instruments for cross-cultural research, there is no consensus about the gold-standard method for instrument translation and validation, including cultural adaptation [
56,
61,
62]. However, proper cross-cultural research with instrument translation generally requires multiple qualitative and quantitative methods and techniques [
55,
56,
61,
63], including feedback questionnaires, pilot testing, expert panels, and cognitive interviews [
63‐
65]. As such, using an iterative mixed-method approach to translate instruments [
46], with cognitive interviews [
57,
66] ensures the resulting instrument will have equivalence [
51]: one that asks the same questions, in the same manner, with the same intended meaning, as the source instrument.
With globalization in the context of cross-cultural research, evidence-based methods are required to produce equivalent target-language instrument translations from the sources [
52,
67]. As such, this study used a mixed-method approach adapted from the translation guidelines recommended by the AHRQ [
43,
68,
69] with reference to other best practices and strategies [
63,
70‐
72]. The approach adhered to an adapted version of the equivalence criteria for cross-cultural research with instruments [
50,
55,
63,
73], see Table
1. The AHRQ provided written permission by email (CRM:00350304) to use the HSOPSC in English and Spanish for this research project. In addition, permission was granted to publish the resulting instrument from this study as well as the existing Spanish and English HSOPSC instruments.
Table 1
Equivalence Criteria for Cross-Cultural Research with Instruments
Content Equivalence | The content of each item of the instrument is relevant to the phenomena of each culture being studied. | — Research Team Experts. — Clinical Practice Experts. — Subject Matter Experts. — Content validity index score. — Annotated survey dimension document. |
Semantic Equivalence | The meaning of each item is the same in each culture after translation into the language and idiom (written or oral) of each culture. | — Translation guide from AHRQ. — Qualified / experienced translators. — Forward- and reverse-translation. — Pilot test (cultural relevance & readability). — Confirmation of translation: Cognitive interviews and expert reviews. |
Technical Equivalence | The method of assessment is comparable in each culture with respect to the data that it yields. | — Translation guide from AHRQ — Experienced translators. — Subject Matter Experts. — Pilot test (evaluation scores). — Cognitive interviews. |
Criterion Equivalence | The interpretation of the measurement of the variable remains the same when compared with the norm for each culture studied. | — Research Team Experts. — Subject Matter Experts. — Pilot test (evaluation scores). — Cognitive interviews. |
Conceptual Equivalence | The instrument is measuring the same theoretical construct in each culture. | — Translation guide from AHRQ. — Qualified / experienced translators. — Forward- and reverse-translation. — Item to dimension selection/match. — Dual scoring process with content validity index and pilot test. |
Data collection
Per the recommendations for cross-cultural instrument research where the source country, culture, and language are different than the target [
72], the data collection process comprised three phases: instrument translation; cultural adaptation; and content validation and equivalence [
47,
72]. These phases included a forward and reverse translation, cognitive interviews, targeted participant review, a structured pilot test, content validation, and expert evaluation of equivalence. The work completed in each phase is described next.
Participants
Phase 1, the translation process, was completed by three bilingual (English/Spanish) professionals, each with at least an undergraduate degree in linguistics and extensive experience in translation and interpretation. Phase 2 included nine participants (two nurses and seven physicians), called clinical practice experts (CPEs), purposefully selected from licensed health professionals in Peru with current hospital work experience, fluency in Spanish, and advanced English skills as self-reported and observed by the principal researcher during a brief interview. Phase 3 included seven participants (four nurses, two physicians, and one pharmacist), called subject matter experts (SMEs), purposefully selected as recommended by Grant and Davis [
74]; intermediate to advanced English was required. All participants verbally consented to participate in this study, which was approved by the A.T. Still University Institutional Review Board (Protocol #01146).
Instrument HSOPSC
The HSOPSC is a 42-item instrument grouped into 12 composite dimensions, plus nine non-dimensional items comprising two safety assessment items and seven demographic items [
69]. Each dimension is represented by three to five items measured on a 5-point Likert scale ascertaining agreement (strongly disagree to strongly agree) or frequency (never to always). Of the items, 59.52% are positively worded and 40.48% are negatively worded. The outcome measures are the two single-item responses inquiring about the number of events reported within the past 12 months and the overall patient safety grade (excellent to failing). For the reported-error questions, errors are defined as any type, without regard to harm.
Instrument translation – phase 1
Step 1: instrument review and translator selection
The source American English version of the HSOPSC was reviewed by two bilingual healthcare experts with the primary investigator to establish content equivalence, that is, relevance and sensitivity [
50,
75]. Although some challenging terms and phrases were noted in items, through discussion, the experts determined the instrument content was generally adaptable for translation and implementation in Peru. As such, the instrument was sent for translation without changes [
74].
Step 2: forward translations with synthesis
The instrument was independently forward-translated from American English to Peruvian Spanish by two translators [
76,
77]. Then, the two versions were synthesized and consolidated into a single instrument [
75] by the two healthcare experts and the primary investigator, all knowledgeable about the instrument properties and scientific foundation. The translators were involved in this process specifically for grammar and language clarifications.
Step 3: reverse translation with reconciliation
The goal of the reverse-translation process was to establish a conceptual rather than a literal “word-by-word” meaning [
78]. An independent bilingual translator reverse-translated the synthesized instrument from Peruvian Spanish to American English. Then, the reverse-translated instrument was compared against the original instrument by the two healthcare experts and the primary investigator. Although discrete differences were identified in the reverse-translation, these were primarily related to nearly equivalent verb selections. There were five instances where phrases required clarification to achieve the same meaning in Spanish as expressed in English; these were noted for discussion during the cognitive interview phase. Again, the translator provided clarifications specific to grammar and language.
Cultural-adaptation – phase 2
Although a pilot test with a bilingual version of the instrument has been recommended to determine the equivalence between the newly translated and the original instruments [
79], there were not enough bilingual participants readily accessible for this study. Instead, this study used a pretest to review and refine the instrument through a series of three rounds. A cognitive interview followed each pretest to assess the four stages of participant engagement in completing the instrument: comprehension, retrieval, judgment, and responding [
80]. The procedure is described below.
Step 4. Pre-test
Nine CPEs engaged in the pretesting, with three CPEs per round across three sequential rounds. First, the CPEs completed the instrument without interruption. Then, the CPEs were asked for their general perspective on the instrument. Finally, all CPEs completed the revised instrument and provided a final assessment of item clarity and cultural equivalence. The assessment required each CPE to respond to each item and then rate the item on a five-point Likert scale for language clarity (5 – completely readable and understandable; 4 – mostly readable and understandable; 3 – readable and understandable; 2 – somewhat readable and somewhat understandable; 1 – not readable and understandable) and cultural equivalence (5 – completely culturally relevant; 4 – mostly culturally relevant; 3 – culturally relevant; 2 – somewhat culturally relevant; 1 – not culturally relevant). This coding scheme was necessary to identify the aggregate level of item clarity and cultural relevance across participants. The assessment data were entered into an Excel database for analysis. The item-specific scores guided the primary focus of the first round of cognitive interviews; scores less than four were considered opportunities for improvement.
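The per-item aggregation described above (mean clarity and cultural-relevance scores across the round's raters, with means below four flagged for cognitive-interview probing) can be sketched as follows. This is an illustrative reconstruction only; the item labels, scores, and function names are hypothetical, and the study performed this tabulation in Excel rather than in code.

```python
from statistics import mean

# Hypothetical ratings: item -> list of (clarity, relevance) pairs from the
# three CPEs in one round. Item labels and scores are illustrative only.
ratings = {
    "A1": [(5, 5), (4, 5), (5, 4)],
    "A7": [(3, 2), (4, 3), (2, 3)],
}

THRESHOLD = 4  # items averaging below 4 become cognitive-interview probes

def flag_items(ratings, threshold=THRESHOLD):
    """Return mean clarity/relevance per item and whether it needs review."""
    report = {}
    for item, scores in ratings.items():
        clarity = mean(s[0] for s in scores)
        relevance = mean(s[1] for s in scores)
        report[item] = {
            "clarity": round(clarity, 2),
            "relevance": round(relevance, 2),
            "flagged": clarity < threshold or relevance < threshold,
        }
    return report

report = flag_items(ratings)
```

Under this sketch, an item like the hypothetical "A7" above would be flagged for probing, while "A1" would pass both criteria.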
Step 5. Cognitive interviews
Once the data entry was completed, the primary investigator conducted a cognitive interview with each CPE [
69,
81], through a structured but open-ended discussion, to understand how participants read, comprehended, and responded to each item [
57]. A cognitive interview script guided the content probes for additional evaluation of items scoring three or less in the first round and items scoring four or less in the subsequent rounds. This probative process was constructed to identify translation deficiencies by asking CPEs to describe their understanding of the item in Spanish and then again after considering the original question in English. Furthermore, the other two Spanish HSOPSC versions, United States [
37] and Spain [
41], were incorporated into the process as resources to review, discuss, analyze, and refine problematic items. The CPEs were also asked to identify items with unfamiliar or inappropriate grammar and syntax. Cognitive interview data was thematically coded to identify problematic items [
81]. Notes were collected and compiled throughout the process.
Step 6. Research team review with item revision
The data from each pretest round were reviewed by the research team, individually and in aggregate, with the primary focus on improving items rated less than four. The notes from the cognitive interviews were also referenced when reviewing each item. Problematic items were revised to improve their performance in the next round. The translators from the forward- and reverse-translation were consulted with specific questions about word selections and phrase clarifications. All revisions were noted in the Excel spreadsheet to provide a record of the progressive evolution of each item across rounds. Each round was conducted sequentially and distinctly, but all items were compared between and across rounds.
Content validation and equivalence evaluation – phase 3
Step 7a. Subject matter expert evaluation of content validity
The SMEs were provided the final instrument for review with a content validity questionnaire [
82‐
87]. Each item in the preliminary final instrument was rated on a 4-point Likert scale (1 – irrelevant; 2 – of little relevance; 3 – relevant; 4 – extremely relevant). In addition, the item evaluation used during the pretesting for language clarity (readability/understandability) and cultural relevance (context) was incorporated with dichotomous acceptable/unacceptable scoring. Finally, there was space for open-ended comments for each item and at the end of the questionnaire. The content validity index (CVI) and the item evaluation for language and context were calculated per item [
83]. For analysis, the ratings were dichotomized: items scored one or two were considered irrelevant, and items scored three or four were considered relevant [
82]. The CVI was calculated individually for each item as the proportion of experts rating the item as relevant. Dimension-level CVIs for the twelve dimensions and the total-instrument CVI were then calculated as the mean of the corresponding item CVIs [
86]. A significant CVI for this process was set, a priori, at 0.70 or greater [
82,
85,
86].
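The CVI computation described above, item CVI as the proportion of experts rating the item relevant (a rating of 3 or 4), with scale-level CVI as the mean of item CVIs against the a priori 0.70 cutoff, can be sketched as follows. The ratings below are hypothetical and for illustration only; seven raters per item mirror the SME panel in this study.

```python
# Hypothetical SME relevance ratings (1-4) for three items; seven SMEs rated
# each item, mirroring the panel in this study. All values are illustrative.
ratings = {
    "A1": [4, 4, 3, 4, 3, 4, 4],
    "A2": [4, 3, 3, 2, 4, 3, 3],
    "A3": [2, 2, 3, 1, 2, 2, 3],
}

CVI_CUTOFF = 0.70  # a priori significance threshold from the study

def item_cvi(scores):
    """Proportion of experts rating the item 3 or 4 (i.e., relevant)."""
    relevant = sum(1 for s in scores if s >= 3)
    return relevant / len(scores)

item_cvis = {item: item_cvi(scores) for item, scores in ratings.items()}

# Scale-level (here, whole-set) CVI as the mean of the item CVIs.
scale_cvi = sum(item_cvis.values()) / len(item_cvis)
```

In this hypothetical example, the item "A3" falls below the 0.70 cutoff and would be revisited, even though the set as a whole clears the threshold.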
Step 7b. Subject matter expert evaluation of equivalence
The SMEs were provided the final items from the instrument and asked to indicate which item was associated with which dimension. Each expert was required to select only one of the twelve dimensions for each item but could also indicate a secondary dimension, with a short rationale, if unable to easily decide on a single dimension. Through this process, equivalence could be established. The kappa value was calculated for SME agreement in matching each item to its correct dimension. The minimally acceptable kappa value was set at 0.40 [
88‐
90] as values below this point are not considered robust [
91,
92]. According to the scale of Landis & Koch [
93], the strength of the kappa coefficients was interpreted in groupings (0.01–0.20, slight; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, substantial; 0.81–0.99, almost perfect; and 1.00, perfect).
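As an illustration of the agreement analysis described above, the sketch below computes a Cohen's kappa between one SME's item-to-dimension assignments and the intended (source-instrument) key, then maps the result onto the Landis & Koch bands. This is a simplified, hypothetical reconstruction: the study does not specify which kappa variant it used, and the dimension labels and assignments here are invented for the example.

```python
from collections import Counter

def cohens_kappa(rater, key):
    """Cohen's kappa between one rater's item->dimension assignments and the
    intended assignments (chance-corrected agreement)."""
    assert len(rater) == len(key) and len(key) > 0
    n = len(key)
    # Observed agreement: fraction of items assigned to the intended dimension.
    po = sum(r == k for r, k in zip(rater, key)) / n
    # Expected chance agreement from each side's marginal label frequencies.
    c1, c2 = Counter(rater), Counter(key)
    pe = sum(c1[c] * c2[c] for c in set(c1) | set(c2)) / (n * n)
    return (po - pe) / (1 - pe)

def interpret(kappa):
    """Landis & Koch strength bands as grouped in the study."""
    if kappa == 1.0:
        return "perfect"
    for upper, label in [(0.20, "slight"), (0.40, "fair"), (0.60, "moderate"),
                         (0.80, "substantial")]:
        if kappa <= upper:
            return label
    return "almost perfect"

# Hypothetical example: six items, three dimensions, one disagreement.
key = ["D1", "D1", "D2", "D2", "D3", "D3"]  # intended dimensions
sme = ["D1", "D1", "D2", "D3", "D3", "D3"]  # one SME's assignments
k = cohens_kappa(sme, key)  # 0.75 -> "substantial" on the Landis & Koch scale
```

A kappa of 0.75 in this toy example would clear the study's a priori 0.40 floor; values at or below 0.40 would fall in the "fair" or "slight" bands and be considered non-robust.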
Discussion
Similar to Levin et al. [
47], the item analysis revealed two types of translational issues: 1) words which, when translated from English to Spanish, did not convey similar constructs; and 2) phrases or specific words which, when translated from English to Spanish, were unfamiliar to the participants because they had different meanings across cultures and/or national borders. Through the cognitive interview process, however, the participants refined these issues into four distinct domains: 1) translation (“reads wrong”); 2) cultural relevance (“don’t understand”); 3) general instrument design (“looks strange”); and 4) navigational issues with completing the instrument (“where do I go” or “what do I do” next). The fourth category has not been reported in the literature but describes some problems associated with the format and flow of the instrument. Finally, the participants noticed that only one item asked about patient safety outcomes. As such, they all recommended additional items focused on the types and quantities of errors observed in practice or personally committed.
The general design issues identified during the cognitive probing process can be classified into four categories: 1) formal structures and processes; 2) physician work environment; 3) professional domains; and 4) terminology for the public and private systems. Although the first three categories were improved with relatively minor item adjustments, the final category necessitated an instrument design with wording that satisfied the distinct vocabularies of private hospitals (called
clínicas) and public hospitals (called
hospitales). As such, the final instrument included paired terms such as “hospital/clinica” and “area/unidad” to satisfy the differences in vocabulary. Table
8 presents examples of the four categories of general design issues.
Table 8
Examples of General Design Issues Identified from Cognitive Interviews
Health system differences between countries | Description of issue: In Peru, there is not a formal adverse event and reporting system at every facility. As such, the concept is unfamiliar to many participants. Revision: Addition of a question specific to whether or not the participant is aware of the presence of a formal error reporting system at their respective facility. |
Work environment for physicians is different | Description of issue: In Peru, most physicians work at more than one facility. Also, they are usually employed full-time by the public health system (or semi-public) and then work as contractors in the private health system. Revision: Incorporated a question to capture the number of hours worked at all health facilities in addition to their primary work facility in Peru. |
Professional domains are different or absent | Description of issue: There are many professions in the United States and in Peru which are different. For example, registered nurses are present in both places but respiratory therapists do not exist in Peru. Revision: Reconstruct the questions specific to professional roles and positions in facilities in Peru. |
Multiple concepts are different between the public and private health systems | Description of issue: Within Peru, there are differences between the reference terms for areas, departments, and even facilities. Revision: For example, the public sector term for an acute care facility is “hospital” while the private sector term is “clinica.” Additional step: Incorporate changes by crafting two instruments: 1) Instrument specific for public facilities; and 2) Instrument for private facilities but only with changes to distinct descriptions and terminology but without changes to the item meaning. |
| Description of issue: Within Peru, there are differences in job titles and descriptions between the public and the private health facilities. Revision: For public system, use the appropriate terms and for the private system use the specific terms. Additional step: Incorporate changes throughout the instrument for public facilities and then throughout the instrument for private facilities. |
Across the cognitive interviews, participants consistently identified similar issues with the problematic items. For example, in round 2, for about 82% of the items rated three or less, at least two of the three participants identified the same or a similar issue. This agreement also highlighted items that the traditional forward- and reverse-translation process would have missed. For example, a cultural issue specific to asking nurses and physicians about work hours in their primary facility was captured by the cognitive interviews. In Peru, nurses and physicians routinely work enough hours at two or three facilities to be equivalent to two full-time jobs in the United States. Often, professionals are employed full-time by the public health system (or semi-public system) and then work full-time in the private health system. As such, the four participants suggested incorporating an additional question to capture the total number of hours worked at all facilities in addition to the primary work facility.
Most items translated were etic [
52], that is, concepts that were universally transferable. These items had very good clarity and substantial cultural relevance, with no or only minor modification. However, some emic concepts needed to be addressed by the research team. These are items reflecting concepts, such as ideas and behaviors, with culturally specific meanings in the source language [
52] that prove inequivalent with translation alone [
96]. For example, the English item “We work in ‘crisis mode’ trying to do too much, too quickly” translated easily, but the language did not capture the meaning of the item. Although ‘crisis mode’ translates directly to ‘modo de crisis’ in Spanish, the concept carries a different sense in many countries, including Peru. As such, the initial translation, which included the phrase (Trabajamos en “modo de crisis”, tratando de hacer demasiado y muy rápidamente), required significant modification to a version without it (Trabajamos bajo presión intentando realizar demasiadas cosas muy rápidamente).
Another example of a seemingly discrete issue was discovered with the word ‘chance’. Although ‘chance’ exists in Spanish, when ‘chance’ is linked in a phrase with the word ‘mistake’, the participants had difficulty understanding the context of the item. In English, the item is “It is just by chance that more serious mistakes don’t happen around here”, and the initial translation was “Aquí no suceden errores más serios sólo por casualidad”. By rearranging the word order and using an underline to emphasize ‘sólo por casualidad’, the final translation was “Es sólo por casualidad que errores más serios no suceden aquí”. In this case, the item score improved significantly, from an average of 2.3 to 4.7 in the next round, and the cognitive interviews generated no additional concerns.
In the final item analysis for grammar, syntax, and other issues, we discovered that negatively worded items performed poorly, required multiple conversations during the cognitive interviews, and generated disagreement among experts in panel discussions. Using only positively worded questions carries a higher risk of acquiescence bias [
97‐
99]; however, the literature in this regard is primarily from English-language countries and cultures. Consistent with our finding about negatively worded questions, Solis-Salazar [
100] reported that combining positive and negative items can seriously damage the internal consistency of dimensions or subscales in Spanish language instruments.
Finally, this study has three limitations. First, the cognitive interviews required health professionals with advanced to near-native levels of English comprehension. As such, these professionals might have more knowledge about safety culture than those without the same level of comprehension. In addition, the English language requirement narrowed the range of professionals able to participate in the cognitive interviews. Second, this study assumed the etic approach to cross-cultural research: the theoretical assumptions and operational constructs specific to safety culture were assumed to be translatable to Peru. However, the study method was a strength in providing evidence that the language used to describe the assumptions and constructs was transferable. Third, the translators from the forward and reverse translation process were utilized as language consultants for the expert deliberations. In this regard, with justification, the “purity” of the forward and reverse translation was not maintained. A side-by-side comparison of each item, by dimension, is provided in Supplemental Table
1 for the AHRQ English and Spanish versions, the Spanish Version for Spain, and the instrument produced from this study.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.