Educators' practice toward clinical simulation
One hundred (47.2%) and fifty-nine (27.8%) participants reported that they sometimes and never plan for simulation teaching, respectively, while only 22 (10.4%) always plan for simulation teaching. Similarly, one hundred one (47.6%) participants responded that they sometimes plan specific activities to ensure that their simulation teaching skills are up-to-date with the latest clinical evidence, while only 14.6% always plan such activities. Likewise, only about one-fifth of the participants reported that they always give students the opportunity for return demonstration and provide feedback on their performance.
Respondents were also categorized as good or poor practitioners of clinical simulation. Thus, 121 (57.1%) of the 212 respondents were classified as good practitioners, whereas the remaining 91 (42.9%; 95% CI: 36%–50%) were considered poor practitioners of clinical simulation (Table 2) (Tables 3 and 4).
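For transparency about where the 36%–50% interval comes from, the following is a minimal sketch that recomputes it from the reported counts, assuming a normal-approximation (Wald) interval; the study does not state which CI method was actually used.

```python
import math

# Counts reported above: 121 of 212 respondents classified as good
# practitioners; the remaining 91 classified as poor practitioners.
n = 212
poor = n - 121            # 91 poor practitioners
p = poor / n              # sample proportion, ~0.429

# Wald (normal-approximation) 95% CI: p +/- 1.96 * sqrt(p * (1 - p) / n)
se = math.sqrt(p * (1 - p) / n)
lower, upper = p - 1.96 * se, p + 1.96 * se

print(f"Poor practice: {p:.1%} (95% CI: {lower:.0%}-{upper:.0%})")
# Output: Poor practice: 42.9% (95% CI: 36%-50%)
```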
Table 2
Practice toward clinical simulation among educators, Bahir Dar, Ethiopia, May 2022, n = 212
Item | Never | Sometimes | Often | Always |
I prepare a written plan for simulation teaching in the program in which I teach | 59(27.8%) | 100(47.2%) | 31(14.6%) | 22(10.4%) |
I make the module learning objectives available to my students | 20(9.4%) | 109(51.4%) | 43(20.3%) | 40(18.9%) |
I plan specific activities to ensure that my simulation teaching skills are up-to-date with the latest clinical evidence | 14(6.6%) | 101(47.6%) | 66(31.1%) | 31(14.6%) |
I have highlighted the learning objectives to be achieved by my students in the session | 15(7%) | 101(47.6%) | 61(28.8%) | 35(16.5%) |
I have prepared a set of teaching skills checklist for evaluating my students | 12(5.7%) | 92(43.4%) | 71(33.5%) | 37(17.5%) |
I have a written plan for the ongoing assessment of practical competencies of my students | 14(6.6%) | 102(48%) | 75(35.4%) | 21(10%) |
I identify the individual learning needs of each student for whom I have a practical teaching responsibility | 14(6.6%) | 99(46.7%) | 72(34%) | 27(12.7%) |
I assess the suitability of assigning a student to the simulation center | 14(6.6%) | 87(41%) | 80(37.7%) | 31(14.6%) |
I assess the degree to which simulation teaching time conforms to regulatory body requirements | 18(8.5%) | 83(39.2%) | 85(40.1%) | 26(12.3%) |
I often provide a procedure checklist to the learners | 13(6%) | 38(17.9%) | 113(53%) | 48(22.6%) |
I engage in self-reflection about my own strengths and weaknesses | 15(7%) | 81(38.2%) | 80(37.7%) | 36(17%) |
I update my personal knowledge of current best clinical practices | 14(6.6%) | 17(8%) | 95(44.8%) | 86(41%) |
I give my students the opportunity for return demonstration to improve their skills through clinical simulation | 21(9.9%) | 56(26.4%) | 90(42.5%) | 45(21.2%) |
I debrief with my students about the clinical encounters | 13(6%) | 71(33.5%) | 83(39.2%) | 45(21.2%) |
I give ongoing feedback | 16(7.5%) | 62(29.2%) | 92(43.4%) | 42(19.8%) |
I give my students the time and opportunity to reflect upon and discuss their performance | 16(7.5%) | 72(34%) | 78(36.8%) | 46(21.7%) |
I evaluate the effectiveness of my student skills assessment instruments on a regular basis | 13(6%) | 16(7.5%) | 83(39.2%) | 100(47.2%) |
Table 3
Factors associated with clinical simulation practice among undergraduate nurse and midwife educators, Bahir Dar, Ethiopia
Variable | Category | Poor practice | Good practice | COR (95% CI) | AOR (95% CI) | p-value |
Institution | Government | 52 | 52 | 1 |
Private | 39 | 69 | 1.77 (1.02–3.065) | 1.55 (0.83–2.91) | 0.17 |
Experience | 1–5 years | 16 | 7 | 0.18 (0.06–0.498) | 0.21 (0.07–0.64) | 0.006 |
6–10 years | 56 | 67 | 0.48 (0.255–0.92) | 0.57 (0.28–1.19) | 0.135 |
> 10 years | 19 | 47 | 1 |
Educational qualification | Masters | 64 | 108 | 1 |
B.Sc | 21 | 10 | 0.28 (0.125–0.64) | 0.37 (0.15–0.93) | 0.034 |
Clinical Nurse | 6 | 3 | 0.30 (0.07–1.23) | 0.37 (0.07–1.89) | 0.23 |
Training | Yes | 38 | 70 | 1 |
No | 53 | 51 | 0.52 (0.30–0.906) | 0.52 (0.27–0.98) | 0.044 |
Perceived cost of skill lab materials | Yes | 73 | 66 | 0.30 (0.16–0.55) | 0.37 (0.18–0.74) | 0.005 |
No | 18 | 55 | 1 |
Support of administration | Yes | 41 | 75 | 0.50 (0.28–0.87) | 0.53 (0.28–1.01) | 0.055 |
No | 50 | 46 | 1 |
No | 67 | 79 | 1 |
Age of respondent | 20–30 | 46 | 53 | 0.38 (0.13–1.14) | 0.36 (0.098–1.304) | 0.12 |
31–40 | 40 | 53 | 0.44 (0.15–1.32) | 0.42 (0.12–1.54) | 0.19 |
> 41 | 5 | 15 | 1 | | |
Table 4
Socio-demographic characteristics of nurse educators interviewed in Bahir Dar, Ethiopia, May 2022
Characteristic | Category | Frequency | Percent |
Educational Institution | Government | 2 | 25% |
Private | 6 | 75% |
Gender | Male | 3 | 38% |
Female | 5 | 63% |
Educational Qualification | Clinical Nurse | 2 | 25% |
B.Sc. Nurse | 2 | 25% |
M.Sc | 4 | 50% |
Teaching Experience | 6 months-1 year | 3 | 38% |
2–5 years | 2 | 25% |
Above 5 years | 3 | 38% |
Factors associated with clinical simulation practice
After adjusting for important covariates in a multivariable logistic regression model, work experience, educational qualification, training, and perceived cost of skill-lab materials showed a statistically significant association with the level of practice.
The odds of good practice among respondents with less than five years of teaching experience were 79% lower than among respondents with more than ten years of experience (AOR = 0.21; 95% CI: 0.07–0.64). Respondents with a bachelor's degree had 63% lower odds of good practice than respondents with a master's degree (AOR = 0.37; 95% CI: 0.15–0.93). Respondents who had not taken training in simulation practice had 48% lower odds of good practice than those who had (AOR = 0.52; 95% CI: 0.27–0.98). Respondents who perceived skill-lab materials as costly had 63% lower odds of good practice than those who did not (AOR = 0.37; 95% CI: 0.18–0.74).
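To illustrate how adjusted odds ratios such as these are commonly derived, the sketch below fits a multivariable logistic regression and exponentiates its coefficients. It is only an illustration under assumptions: the data file, column names, and category labels (good_practice, experience, and so on) are hypothetical, not taken from the study's dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level data: outcome coded 1 = good practice,
# 0 = poor practice, plus the covariates from Table 3. The file name and
# column names are illustrative assumptions.
df = pd.read_csv("simulation_practice.csv")

# Multivariable logistic regression with reference categories chosen to
# match Table 3 (> 10 years experience, master's degree, training = Yes,
# perceived cost = No).
model = smf.logit(
    "good_practice ~ C(experience, Treatment('>10 years'))"
    " + C(qualification, Treatment('Masters'))"
    " + C(training, Treatment('Yes'))"
    " + C(perceived_cost, Treatment('No'))",
    data=df,
).fit()

# An adjusted odds ratio (AOR) is the exponentiated model coefficient;
# exponentiating the confidence bounds gives the AOR's 95% CI.
aor = np.exp(model.params).rename("AOR")
ci = np.exp(model.conf_int()).rename(columns={0: "2.5%", 1: "97.5%"})
print(pd.concat([aor, ci], axis=1))
```

Exponentiating a coefficient turns a log-odds difference into an odds ratio, which is why an AOR of 0.21 corresponds to 79% lower odds rather than "21% less likely".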
Qualitative part
Qualitative data collection method: In-depth interviews were conducted with the support of an interview guide. The interviews were audio-recorded, and non-verbal information was captured through additional notes to supplement the recordings. The interviews and the questionnaire survey were conducted concurrently.
Sampling framework: A purposive sampling method was used, through which six institutions were selected. Two department heads, three technical assistants, and three instructors formed a sample of eight participants, with the final number determined by data saturation.
Target group: Nurse and midwife educators, managers, and technical staff who were actively involved in clinical simulation exercises.
Exclusion criteria: Individuals not involved in clinical simulation or who were not healthcare professionals.
Results of the qualitative part
The qualitative part explored the effectiveness of clinical simulation. Eight participants took part in the in-depth interviews: two department heads, three technical assistants, and three instructors. Two participants were selected from government institutions, whereas the other six were selected from private colleges. Interviews beyond eight were not conducted because of the redundancy of information and the likelihood of data saturation. Through thematic analysis, four main themes were identified: infrastructure and instructional resources; staff and student training and readiness; management support; and student evaluation methods. Each theme is discussed in the following section.
Infrastructure and instructional resources
The participants felt that insufficient infrastructure and instructional resources created a major barrier to effective clinical simulation. Participant 5: "Large class size and shortage of classrooms are common problems we are facing in our institute. Besides, some equipment is lacking."
Researcher interpretation: This underlines that logistical issues at the classroom level, such as lack of space and delays in securing appropriate equipment, are significant barriers to offering optimal simulation sessions. These factors limit students' opportunities for the repeated practice needed to develop their skills. Together with the mismatch between the number of classrooms and the number of students attending them, they make simulation experiences overcrowded and less efficient.
One respondent discussed resources not being appropriately utilized:
Participant quote: "I believe that it is not effective because we are not using the setup appropriately. It will be effective if we use it properly in the future" (Participant 2).
Researcher interpretation: This response shows that even when infrastructure is available, poor utilization of the resource blunts the potential impact of clinical simulation. The implication here is that clear protocols and better coordination are still needed to optimize resource utilization.
Staff and student training and readiness
Most participants indicated that training plays a pivotal role in ensuring that both educators and students are equipped for simulation-based learning.
Participant quote: "In my department, there is only one assistant. The number of teachers and assistants is not comparable" (Participant 5).
Researcher interpretation: The shortage of skilled staff directly influences the scalability and quality of simulation sessions. This calls for increased investment in human resources, especially skilled laboratory assistants and simulation facilitators, to help educators conduct effective sessions.
Notwithstanding these challenges, some participants expressed their belief in the effectiveness of simulation:
Participant quote: "I believe that it is effective because it increases students' confidence. Besides, it minimizes errors" (Participant 8).
Researcher interpretation: This response shows how clinical simulation positively influences students' confidence and reduces clinical errors, though these benefits are realized only when staff and students are trained regularly.
Management support
Management support was viewed as inconsistent, with participants citing irregular training schedules and insufficient institutional backing.
Participant quote: "The support is not considerable, and even the training is arranged rarely. Due to a lack of training, some instruments are in an idle stage, and the students cannot learn properly" (Participant 7).
Researcher interpretation: The infrequency of training and the lack of managerial support create obstacles to making optimal use of simulation equipment. This points to the need for a structured professional development program along with sustained managerial involvement.
Another participant reported dependence on external organizations for training:
Participant quote: "The support is somehow good, and we take training when some organizations volunteer to train us" (Participant 5).
Researcher interpretation: This portrays a reliance on external entities for training and suggests that few active steps are taken within the institution to maintain and develop simulation practices. Such dependence could be reduced by establishing internal training mechanisms.
Student evaluation methods
The participants also reported that, in most simulation sessions, evaluation is done by standardized checklists:
Participant quote: "We use a checklist and evaluate students at the middle and end of the session. Since they do the practice repeatedly, we give them comments while following them" (Participant 6).
Researcher interpretation: Checklists provide a systematic means of evaluating student performance, but embedding continuous formative feedback within the sessions may promote more effective learning as students are afforded an opportunity to address the deficits in real-time.
One such response was:
Participant quote: "We assess the students using a checklist in the middle and at the end of the session" (Participant 4).
Researcher interpretation: While this approach emphasizes periodic assessment, it also represents a missed opportunity for dynamic, ongoing feedback. Incorporating more iterative mechanisms for evaluation could further enhance student engagement and learning outcomes.
Complementarity of quantitative and qualitative findings
Broader context
Where the quantitative data identified the scope of problems such as poor practice and training deficits, the qualitative interviews revealed why these problems persisted, such as resource constraints and lack of managerial support.
Actionable insights
The statistical evidence for the association between training and better practice was corroborated by participants' comments insisting that training programs ought to be a continuous process.
Validation
Qualitative themes such as infrastructure and resource constraints corroborated the quantitative findings, making the data more robust and actionable.
Together, these approaches provide a comprehensive perspective in which the quantitative data outlined the "what", while the qualitative data investigated the "why" and "how" behind the findings. This synergy strengthens the conclusions and recommendations for better clinical simulation practices.
Complementary insights
Quantitative findings
These provided measurable data on the prevalence and effectiveness of clinical simulation practices. For example, 57% of respondents were categorized as having good practices, while 43% had poor practices, based on survey scores.
Statistical associations highlighted factors such as educational qualification, training, and perceived cost affecting simulation practice. For instance, educators with a bachelor's degree had 63% lower odds of good practice than those with a master's degree (AOR = 0.37).
Qualitative findings
These added depth to the quantitative data by uncovering underlying reasons for the challenges identified. Themes such as inadequate infrastructure, large class sizes, and insufficient staff training illustrated barriers to effective implementation.
Participants highlighted specific issues, like delays in equipment procurement, lack of classroom space, and insufficient training opportunities, which explained the quantitative trends.