Introduction
Some of the new demands that the knowledge society has placed on higher education include encouraging students to develop skills to address situations by applying their knowledge[
1]. In the field of nursing education, methodologies aimed at integrating theory with practice are fundamental, assessing both knowledge and skills as well as conveying attitudes [
2]. In this context, clinical simulation provides a method of learning and training in which knowledge and skills are intertwined, and it can lead to learning outcomes that are not achieved through lectures or through trial and error with real patients [
3].
Simulation has always been present in nursing education; however, in recent years it has gained significant popularity [
4]. Its growth and dissemination are related to the concern for quality and safety in patient care. Specifically, in Spain, simulation is taking center stage in both undergraduate and postgraduate nursing education, with the creation of multiple spaces for simulation within universities, although its implementation and curricular integration is still a challenge [
5].
According to Gaba [
6], simulation is a learning technique that amplifies real experiences with guided ones that evoke reality in an interactive manner. It has been shown to be effective for acquiring technical skills and integrating complex clinical knowledge and skills, increasing retention of what has been learned compared to traditional teaching methods [
7‐
10]. This type of training is associated with a feedback or debriefing session, in which students and teachers analyze the activity performed, its strengths and areas for improvement, accompanied by a phase of reflective-critical thinking to deepen understanding of the process trained [
10]. The student assumes an active role in their learning, as the protagonist in the construction of their knowledge in contexts that are similar to reality [
11].
Several published meta-analyses have demonstrated the effectiveness of undergraduate simulation programs compared with traditional teaching models [
12]. The meta-analysis published by Cook in 2013 identifies the success factors of simulation programs, highlighting the need for debriefing, integration of simulation into the formal curriculum, individualized simulation practice spread over time, and exposure to different variants or clinical contexts [
13].
In accordance with these needs, the School of Nursing (Fundación Jiménez Díaz – UAM School of Nursing) proposed a curricular design in which clinical simulation is not an independent subject but is integrated into the curriculum in a cross-cutting fashion. In the 2018–2019 academic year, simulated clinical experiences were carried out with students in the 1st and 2nd years of the nursing degree within the framework of the subjects Nursing Methodology, Adult Nursing I and II, and Psychosociology of Care. In the 2019–2020 academic year, the same simulated clinical experiences were repeated with 1st and 2nd year students and extended to the 3rd and 4th years within the framework of the subjects Pediatrics, Psychiatry and Management of Critical Situations. All these simulation activities were designed following the recommendations for a successful simulation program published in the latest systematic reviews [
13,
14].
In simulation training, higher student satisfaction results in better learning outcomes, and the design features of a simulation influence those outcomes [
15]. It is therefore essential to increase the impact of the simulated experience by designing simulation scenarios appropriate to the students’ level and learning objectives [
16].
In addition, the debriefing that takes place after the simulated event also requires prior preparation and should be linked to the completion of the learning process. In our case, each simulation module required at least three multidisciplinary work sessions among the teachers responsible for the course, clinical experts and simulation experts, with the aim of designing simulation experiences that matched the real needs of the students.
Therefore, it is essential that the teacher receive feedback from the student to understand whether the simulated experience has allowed the student to advance in their learning process or whether it has deviated from their real needs for complementing their theoretical knowledge base [
17].
According to the Standards of Best Practice in simulation [
18], teachers should ensure the effectiveness of the overall experience with the goal of identifying aspects of the simulation program that support optimal transfer of knowledge, skills and overall competence into practice. This evaluation of the simulation program should be comprehensive, combining evaluation of activities before, during and after the simulations [
19].
In this regard, several instruments have been developed to measure student satisfaction in the field of clinical simulation, teamwork and decision making, among others [
20‐
23]. At present, the only scale validated in Spanish is the High-Fidelity Simulation Satisfaction Scale for Students (ESSAF). This is a 33-item questionnaire, validated by Alconero et al. [
12], which assesses student satisfaction and the students’ perception of the usefulness of clinical simulation training, among other aspects. The questionnaire was validated with an initial sample of 150 students from the same academic year, so it is not known whether it is valid for students with different levels of experience, since the evidence underscores the importance of adapting the simulation design to the experience of the student [
24].
Although this is a valid questionnaire, it comprises 33 items, which makes it lengthy and therefore difficult to implement systematically for evaluating satisfaction with every simulation. Developing a simplified version with the same psychometric characteristics would help universalize this evaluation system. For this reason, the aim of this study was to validate a brief version of the ESSAF questionnaire for application in the different academic years of the nursing degree and in students with or without clinical experience.
Materials and methods
Design
A cross-sectional descriptive study was conducted within the framework of a teaching innovation project funded by the Universidad Autónoma de Madrid (UAM), involving undergraduate nursing students of the Fundación Jiménez Díaz – UAM School of Nursing. The study population comprised 1st and 2nd year students of the 2018–2019 academic year and 1st, 2nd, 3rd and 4th year students of the 2019–2020 academic year. In total, 425 students completed the satisfaction survey.
Between May and July 2018, the initial simulation program was designed by a multidisciplinary working group over six work sessions. The decision was made to start with first and second year students in order to consolidate and apply theoretical knowledge prior to their clinical placements, extending the program to all years during the following academic year.
Sample selection
The criteria for carrying out a factor analysis were used to calculate the sample size. These criteria call for 10 subjects per item [
25]; with 33 items, a sample of at least 330 participants was therefore needed.
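As a worked check of this rule of thumb (a minimal sketch; the function name is ours, not part of the study):

```python
# Rule-of-thumb sample size for factor analysis: N = 10 subjects per item.
# The ESSAF questionnaire has 33 items, giving the minimum of 330 used here.
def min_sample_size(n_items: int, subjects_per_item: int = 10) -> int:
    """Minimum N under the 'subjects per item' criterion."""
    return n_items * subjects_per_item

print(min_sample_size(33))  # 330
```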
Description of the activity
A total of 32 simulation sessions were carried out throughout the 2018–2019 academic year, and a total of 59 sessions took place during the 2019–2020 academic year. In each simulation session, small groups of 8–10 students participated, with an approximate duration of two to four hours, in which three scenarios were developed. These sessions were recorded on a video system and viewed in real time by the students. The following link shows an example of a scenario carried out by third year students for verbal restraint of a psychiatric patient.
https://www.youtube.com/watch?v=b8gM5u2ihsA.
All simulation scenarios were performed with the same teaching design:
1) Prebriefing or introduction to clinical simulation.
2) Patient presentation and work environment.
3) Three simulated clinical scenarios, in which all trainees participated in at least one of the scenarios.
4) A debriefing following each of the scenarios using the sound judgment approach [10].
To conduct the simulation, a main instructor was in charge of clinical simulation immersion and of coordinating the debriefing. A co-instructor provided support as an expert in the subject being trained and managed the simulator and video recording systems. On occasion, actors were used to faithfully recreate the real situation.
Data collection
The ESSAF scale (Additional File 1) was used, a self-administered questionnaire that students completed voluntarily and anonymously at the end of the simulation module. This scale contains 33 statements answered on a 5-point Likert-type scale, ranging from 1 (strongly disagree) to 5 (strongly agree). With appropriate indicators for factoring, the 33 items are grouped into 8 factors or dimensions of student perception of clinical simulation: “Usefulness”, “Characteristics of cases and applications”, “Communication”, “Perceived performance”, “Increased self-confidence”, “Relationship between theory and practice”, “Facilities and equipment” and “Negative aspects”.
Sociodemographic variables such as students’ age, sex and academic year were also collected.
To facilitate data collection and guarantee anonymity, an ad-hoc questionnaire was generated and completed by the students at the end of the simulation practices (Additional File 1. Supplementary material: ESSAF Questionnaire). All students who completed the simulation sessions were included, excluding those who for any reason did not complete the sessions.
Data analysis
All questionnaires were numerically coded using SPSS version 20 statistical software for data collection and analysis.
As baseline quality control, all variables included in the study were checked for missing values and recording errors (out-of-range values, incomplete data, and statistical screening for errors or outliers: descriptives, frequencies, means, ranges).
The distribution of variables was assessed by descriptive analysis and by Q-Q plots, histograms, and box plots; in case of doubt, the Kolmogorov–Smirnov test was used, whose null hypothesis assumes a normal (Gaussian) distribution. Parametric analyses were used for normally distributed variables, and nonparametric analyses for variables with a non-Gaussian distribution.
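The decision rule described above can be sketched as follows. The simulated data and the use of scipy's Kolmogorov–Smirnov test are illustrative assumptions on our part; the study itself used SPSS.

```python
import numpy as np
from scipy import stats

def choose_analysis(values, alpha: float = 0.05) -> str:
    """Return 'parametric' if K-S does not reject normality (H0: normal)."""
    a = np.asarray(values, dtype=float)
    # Standardize so the sample can be compared to the standard normal
    z = (a - a.mean()) / a.std(ddof=1)
    _, p = stats.kstest(z, "norm")
    return "parametric" if p > alpha else "nonparametric"

# Illustrative simulated Likert-like scores (hypothetical data)
rng = np.random.default_rng(7)
print(choose_analysis(rng.normal(4.0, 0.6, 300)))
```

Strictly, estimating the mean and SD from the same sample calls for the Lilliefors correction to the K-S test; this sketch ignores that refinement.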
Descriptive statistics: the results of quantitative variables with a normal distribution were expressed by their mean and standard deviation (SD). Quantitative variables with a non-Gaussian distribution were expressed as median and interquartile range (IQR), and qualitative variables were expressed as frequency and percentage. Ordinal variables were analyzed as continuous variables, expressed as median and interquartile ranges. To facilitate comprehensive data interpretation for our readers, both mean (SD) and median (IQR) values will be provided. This approach ensures clarity, especially when certain data exhibit a normal distribution while others do not.
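A minimal sketch of these summary statistics (the helper name is ours):

```python
import numpy as np

def describe(values):
    """Return (mean, SD, median, IQR) for one quantitative variable."""
    a = np.asarray(values, dtype=float)
    mean, sd = a.mean(), a.std(ddof=1)          # mean and sample SD
    q1, median, q3 = np.percentile(a, [25, 50, 75])
    return mean, sd, median, q3 - q1            # IQR = Q3 - Q1

print(describe([1, 2, 3, 4, 5]))  # mean 3.0, SD ~1.58, median 3.0, IQR 2.0
```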
In the present study, the option recommended by several authors was used [
26,
27], exploratory factor analysis (EFA) based on polychoric correlations, given that the univariate analysis of the ordinal items showed excess kurtosis and skewness. The robust unweighted least squares (ULS) method was used for factor estimation [
26]. Parallel analysis (PA) was used as the factor selection retention method, and the PROMIN method was used as the factor rotation method.
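The retention rule behind parallel analysis can be sketched as follows. This illustration uses Pearson correlations on simulated continuous data; the study's polychoric correlations, robust ULS estimation and PROMIN rotation (as implemented in FACTOR) are not reproduced here.

```python
import numpy as np

def parallel_analysis(data, n_sims: int = 100, seed: int = 0) -> int:
    """Horn's criterion: retain factors while each observed eigenvalue
    exceeds the mean eigenvalue of random data of the same shape."""
    rng = np.random.default_rng(seed)
    n, p = data.shape
    obs = np.sort(np.linalg.eigvalsh(np.corrcoef(data, rowvar=False)))[::-1]
    sims = np.empty((n_sims, p))
    for i in range(n_sims):
        r = np.corrcoef(rng.standard_normal((n, p)), rowvar=False)
        sims[i] = np.sort(np.linalg.eigvalsh(r))[::-1]
    threshold = sims.mean(axis=0)
    keep = 0
    for o, t in zip(obs, threshold):
        if o <= t:
            break
        keep += 1
    return keep

# Hypothetical two-factor data: 6 items loading 0.8 on two latent factors
rng = np.random.default_rng(1)
f = rng.standard_normal((500, 2))
loadings = np.zeros((2, 6))
loadings[0, :3] = 0.8
loadings[1, 3:] = 0.8
items = f @ loadings + 0.5 * rng.standard_normal((500, 6))
print(parallel_analysis(items))  # retains 2 factors
```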
The FACTOR program (version 10.9.02) was used for Exploratory Factor Analysis (EFA). Initially, a descriptive analysis was conducted for each item, assessing mean, standard deviation, skewness, and the corrected item-test correlation. To minimize noise in the subsequent factor analysis, items were removed with correlations below 0.20, as recommended [
27]. We further scrutinized the distribution of items by evaluating the kurtosis and skewness coefficients [
28]. Following Kline’s criteria [
29], the corrected item-test correlation was computed for the entire scale, excluding items with values below 0.20.
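The screening step can be sketched as follows (illustrative implementation; FACTOR computes this internally):

```python
import numpy as np

def corrected_item_test(scores):
    """Correlation of each item with the total of the remaining items.
    scores: (n_respondents, n_items) array of item responses."""
    scores = np.asarray(scores, dtype=float)
    totals = scores.sum(axis=1)
    r = np.empty(scores.shape[1])
    for j in range(scores.shape[1]):
        rest = totals - scores[:, j]  # total score excluding item j
        r[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return r

# Items with r < 0.20 would be removed before factoring (Kline's criterion)
```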
To gauge reliability, which we define as the internal consistency of items measuring a construct, we relied on both the ORION coefficient and Cronbach’s Alpha. ORION (an acronym for “Overall Reliability of fully Informative prior Oblique N-EAP scores”) measures the overall reliability of the aforementioned oblique scores [
30]. Cronbach’s alpha, which is based on the mean correlation between items, remains the most popular statistic for internal consistency, despite some controversies surrounding it [
31‐
33].
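For reference, Cronbach’s alpha can be computed directly from the item-score matrix using the standard formula (implementation ours, not the study's software):

```python
import numpy as np

def cronbach_alpha(scores) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total).
    scores: (n_respondents, n_items) array of item responses."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()
    total_variance = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)
```

Perfectly parallel items yield alpha = 1, while uncorrelated items drive it toward 0, which is why alpha is read as internal consistency.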
Ethical considerations
The study was sent for evaluation to the ethics committee of the Universidad Autónoma de Madrid, which ruled that the project did not contradict ethical standards and did not need to be evaluated as it was a satisfaction survey.
All experimental protocols were approved by the ethics committee of the Universidad Autónoma de Madrid on October 1, 2021.
Participants were informed about the study and gave their informed consent to participate in the research. All data were treated confidentially in accordance with Organic Law 3/2018 of 5 December on the Protection of Personal Data and Guarantee of Digital Rights and with Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on Data Protection (GDPR), keeping them strictly confidential and inaccessible to unauthorized third parties. The simulation scenarios were recorded for later analysis during the debriefing; all participants were informed that the recordings would be used exclusively for teaching or research purposes and signed informed consent to the recording.
Discussion
The ESSAF scale, reduced to 25 items and 6 factors, assesses pre-, intra- and post-simulation (debriefing) aspects with high reliability, making it a simpler and more reliable tool than the original and facilitating comprehensive simulation program evaluation.
This need for comprehensive simulation program evaluation has increased as a result of the development of best practice standards and is a key point for academic and clinical simulation programs to determine if efforts to improve knowledge, skills and/or attitudes have been effective [
18,
19]. At the same time, this assessment can be complex and having a simple tool that is applicable to students with different academic backgrounds can help in this evaluation process.
The ESSAF scale presented good internal reliability (α = 0.859) and high replicability indices (H-index close to unity). However, the reliability analysis of the different dimensions in the present study does not replicate the good reliability found by its authors [
12]; only two of the 8 factors of the ESSAF tool presented an α ≥ 0.70. The availability of a larger sample, with students from different academic years, made it possible to simplify the scale by eliminating items and to establish a new classification by subscales or factors.
These six factors perfectly cover all the key aspects of simulation training [
35] and encompass all areas of training described in the literature in a simple and reliable manner for all levels of experience in the nursing curriculum. The factors are not only focused on the direct assessment of nursing care; they also enable assessment of the benefit to cognitive competencies of reasoning and prior preparation, and of the feedback or subsequent debriefing, which is currently considered the key to any clinical simulation activity [
13,
35], yet it is not always evaluated, as reflected in the systematic review by Levett-Jones and Lapkin (2014), which included 10 controlled studies in undergraduate nursing, only two of which addressed the benefit or impact of feedback or subsequent debriefing [
36].
Table 4 shows the mean scores of the different factors. As noted in the literature reviewed, the factor referring to debriefing (F2) has the highest scores of all, reflecting that our students recognized the importance of debriefing for generating new models of thinking and applying them in future practice [
10].
As for the factors that encompass direct care competencies, we can clearly differentiate those not focused solely on care (F1), which are becoming increasingly important in undergraduate curricular design, such as communication skills with patients and family (F6), teamwork (F4), and safety and confidence (F5), all of which were recently highlighted in a systematic review showing the usefulness of simulation training for acquiring these competencies [
37].
Finally, the factor related to the benefits or usefulness of pre-planning (F3) completes the comprehensive evaluation described in the literature, in this case focused on the pre-simulation phase. It helps shape the narrative of the clinical scenario, which is tied to decision making within the scenario, and the teacher can use it to decide what information to provide and how to adjust the scenario’s level of complexity [
11,
38].
Limitations
As for the limitations of this work, we began with a convenience sample selected by non-probabilistic sampling. Although various criteria are currently used to determine the number of subjects required for validation studies, such as N/p-type rules and the criterion of 10 times more subjects than items, among others, they have been strongly discouraged as lacking a solid basis [
27]. In fact, there is no consensus, since the minimum recommended size depends on numerous factors. Logically, the larger the available sample, the more confident we can be that the solution obtained is stable, especially when communality is low or when there are many possible factors to extract and/or few items per factor. Nonetheless, to evaluate the quality of a test, a sample of at least 200 cases is clearly recommended, even under optimal conditions of high communalities and well-determined factors [
27]. We opted for the criterion of 10 subjects per item, which represents a sample much larger than 200 and therefore adequate for the purpose of the study.
Another limitation lies in the homogeneity of the sample: participant characteristics are imbalanced, since 70.1% had no previous experience in simulation and first and second year students are overrepresented. However, this could also be considered a strength, because the psychometric characteristics proved adequate despite the non-homogeneous sample.
Conclusions
Conducting ongoing evaluation of the simulation program provides teachers with the data needed to recognize and implement changes in future simulation experiences.
We have observed that the modified ESSAF scale, divided into six subscales, is a more practical and reliable tool than the original scale for nursing students from different academic years and with different degrees of clinical experience. This new classification is very useful for providing teachers with feedback not only on the competencies acquired, but also on the design of the simulated clinical experiences and their subsequent analysis or debriefing.
Evaluating each simulation program with different tools can be complex and tiring for teachers and students. This simple and concise tool can be the first step in comprehensively evaluating a simulation program for nursing students, and it can guide a second, more focused evaluation phase on the relevant aspects detected.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.