Background
Although often a healthy event, childbirth is the most frequent reason for hospitalization in Canada and the United States [
1,
2]. Given the risk for negative short- and long-term consequences for both the pregnant or birthing person and their infant(s), ensuring that high quality and evidence-based care is provided is essential. Improving maternal-newborn care
has therefore become an international priority, and there has been a growing focus on the implementation of quality care in these settings [
3]. However, evidence-practice gaps remain. For instance, compared to evidence-informed recommendations, there continues to be overuse of practices such as caesarean births, inductions, and formula supplementation [
4], among others. This overuse may further strain limited financial and human resources, lead to patient dissatisfaction, and cause physical, psychological, and social harm [
5,
6].
Closing these evidence-practice gaps requires evidence-based
implementation [
7]. Implementation science aims to develop and test methods and strategies to effectively move evidence into practice [
8]. Despite a growing body of literature, implementation science has not been well translated into practice-based settings [
9]. This has created a “secondary gap” where healthcare teams can identify the need to implement evidence into practice to improve care and outcomes, but they do not necessarily use evidence to inform the processes they use to implement that evidence into practice [
10]. However, there are a growing number of examples of teams successfully applying implementation science informed approaches in maternal-newborn care [
11‐
13] and other settings [
14], illustrating the potential of implementation science in improving care processes and outcomes in healthcare settings.
Nurses are frequently tasked with leading and supporting change initiatives, and there are increasing calls for nurses to embrace and apply implementation science to further improve evidence-based practice [
10,
15]. Yet many healthcare providers, including nurses, are not aware of or confident using current evidence on how to implement evidence-based programs, guidelines, or innovations [
16]. Nurses and other healthcare professionals are often not exposed to implementation science in their training, and clinical demands, lack of resources and time, and lack of professional development opportunities may preclude them from learning about implementation science and its applications to their work [
9,
10,
17].
Given that capacity for implementation (i.e., whether an organization has enough people with the right skills) has been identified as one of the key factors for implementation success [
18], understanding current implementation practice and implementation capacity-building needs in nursing and healthcare settings is critical to inform future interventions and supports. As a first step to explore this, our research team previously conducted a secondary qualitative analysis of 22 interviews with nursing managers and directors in Ontario, Canada [
19], and compared their described implementation approach to an implementation science framework [
20]. Overall, there was variability in implementation steps taken across hospitals. Several implementation steps were described infrequently or sub-optimally, suggesting that real-world processes were not as comprehensive as the implementation framework used for comparison [
19]. However, limitations of the secondary analysis were the inability to assess the implementation process at a more detailed activity level, and the lack of information about healthcare providers’ perceptions of the importance or value of different steps and their confidence completing them. To build on this work, we sought to collect more comprehensive and recent primary data from a broader range of Ontario maternal-newborn nurses, other healthcare professionals, and leaders.
Guiding framework
The implementation framework that guided this study was the Implementation Roadmap [
20]. The Implementation Roadmap is a planned-action framework (also known as an implementation process model [
21]) that is informed by best practices in implementation science and practice [
20]. We chose the Implementation Roadmap as it consolidates concepts and steps from established knowledge translation and implementation frameworks (e.g., Knowledge-to-Action framework [
22], CAN-Implement [
23]) as well as experiential knowledge from implementation science and practice experts. The Implementation Roadmap was designed to guide professionals, including nurses, through the core phases, steps, and activities of the implementation process, making it a suitable framework for exploring the current implementation practices of maternal-newborn teams.
Study aims
In this study we aimed to understand Ontario maternal-newborn teams’: (1) approaches to implementing practice changes, including who is involved and which activities are done, and how closely their processes align with an implementation science planned-action framework (the Implementation Roadmap [
20]); and (2) perceived importance and confidence completing different implementation activities.
Methods
We used the Checklist for Reporting of Survey Studies (CROSS) to guide the writing of this manuscript [
24] (see Additional file
1 for completed checklist).
Design
We used a cross-sectional survey design. This study was exploratory in nature; as such, we did not have pre-existing hypotheses for statistical testing.
Study setting and sampling
The setting for our study was Ontario, Canada, which has over 140,000 births per year. There are 93 hospitals in Ontario that provide perinatal, birthing and newborn care [
25], with most of these births occurring in hospital [
26]. Data on nearly every birth is captured in Ontario’s prescribed birth registry, the Better Outcomes Registry & Network (BORN) Ontario, which provides important data on maternal-newborn healthcare and outcomes to contributing hospitals, researchers, and policymakers [
27].
We used purposive sampling. Using BORN Ontario’s internal contact list (which included BORN’s primary contacts from 91 out of 93 hospitals), we compiled a list of individuals who were expected to be involved in implementation projects to improve practice, such as directors, nursing managers, clinical nurse educators, and nurses (
N = 278 contacts across 91 hospitals). Although these contacts were expected to meet the eligibility criteria, we asked any who did not, or who were unable to participate, to forward the invitation email to eligible colleagues (i.e., snowball sampling). We included nearly all Ontario maternal-newborn hospitals in our sampling frame (
n = 91) and aimed to collect at least one response from each hospital. To assess the representativeness of the sample, we compared the acuity levels and geographical locations of the responding organizations to the distribution across the province as a whole (Additional file
3).
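To illustrate the kind of representativeness check summarized in Additional file 3, the sketch below compares the distribution of responding hospitals to the provincial distribution by acuity level (a parallel comparison could be run for geographical region). The level labels and all counts are hypothetical placeholders for illustration, not study data.

```python
# Illustrative sketch only: compare responding hospitals to the provincial
# distribution by acuity level. All counts are hypothetical placeholders.
import pandas as pd

# Hypothetical split of Ontario's 93 maternal-newborn hospitals by acuity level
province = pd.Series({"Level 1": 40, "Level 2": 35, "Level 3": 18}, name="province_n")
# Hypothetical counts of hospitals with at least one survey response
responders = pd.Series({"Level 1": 24, "Level 2": 22, "Level 3": 12}, name="responders_n")

comparison = pd.concat([province, responders], axis=1)
comparison["province_pct"] = (comparison["province_n"] / comparison["province_n"].sum() * 100).round(1)
comparison["responders_pct"] = (comparison["responders_n"] / comparison["responders_n"].sum() * 100).round(1)
print(comparison)
```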
We followed Dillman’s guidelines for survey distribution and reminders [
28,
29], with recruitment occurring over a nine-week period (September to November 2023). Contacts received a “pre-notice” email from their designated BORN coordinator, alerting them to the upcoming questionnaire from a known contact. The lead researcher then sent the survey email invitation and two email reminders (at 2 and 4 weeks) to all contacts. Lastly, the internal BORN coordinators sent a final personal reminder by telephone or email (as per their usual communication methods) to their non-responding contacts.
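As a rough illustration of this staged contact schedule, the sketch below lays out the sequence of touchpoints relative to a hypothetical invitation date; the specific dates, the one-week pre-notice lead time, and the timing of the final reminder are assumptions, not the study's actual timeline.

```python
# Illustrative sketch of the staged contact schedule described above.
# The dates below are hypothetical placeholders within the Sept-Nov 2023 window.
from datetime import date, timedelta

invitation_date = date(2023, 9, 18)  # hypothetical launch date
schedule = {
    "Pre-notice email (BORN coordinator)": invitation_date - timedelta(days=7),
    "Survey invitation (lead researcher)": invitation_date,
    "Email reminder 1": invitation_date + timedelta(weeks=2),
    "Email reminder 2": invitation_date + timedelta(weeks=4),
    "Final personal reminder (BORN coordinator)": invitation_date + timedelta(weeks=8),
}
for step, when in schedule.items():
    print(f"{when:%Y-%m-%d}  {step}")
```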
Inclusion and exclusion criteria
The inclusion criteria were individuals who: (1) work in an Ontario maternal-newborn hospital unit (i.e., labour and delivery, postpartum, neonatal intensive care unit, or special care nursery); (2) are responsible for leading, supervising, or participating in implementation projects, quality improvement projects, or practice change initiatives in their current role; and (3) read and understand English.
Instrument
Because there was no existing validated questionnaire that met our needs, the research team developed the questionnaire, informed by two main sources.
First, we used the Implementation Roadmap activities, which are spread across three main phases: phase 1 involves issue identification and clarification; phase 2 involves building solutions and field-testing them; and phase 3 involves implementing, evaluating, and sustaining [
20]. The activities in this framework became a series of 28 questionnaire items to understand which are viewed as important, which are typically done, and which activities respondents feel confident completing (Table
1). We slightly modified the breakdown and wording of the Implementation Roadmap activities to better align with language used by nursing managers and directors in our previous interviews with a similar sample [
19] and to increase understandability based on team feedback. The developer of the framework was involved in this process and ensured the integrity of the framework's activities was maintained.
Table 1
Implementation Roadmap activities as presented in questionnaire
PHASE 1 – Issue identification and clarification
1. Identify a relevant problem or issue
2. Form a working group
3. Involve stakeholders as partners throughout change initiative
4. Create a formal implementation plan
5. Use research evidence to identify potential programs, guidelines, practices, or innovations to solve problem
6. Assess the quality of the program, guideline, practice, or innovation
7. Identify or create a tangible indicator of best practice
8. Collect local data to learn about current practice in your setting
9. Compare current practice in your setting to the best practice to determine how big the “gap” is

PHASE 2 – Build solutions and field test them
10. Work as a team to select the best practice to be implemented
11. Analyze best practice for who needs to do what, when, to whom, and under what circumstances
12. Confirm that key stakeholders endorse the selected best practice
13. Customize the selected best practice for your setting
14. Conduct a stakeholder analysis
15. Assess the barriers and drivers to implementing the best practice
16. Prioritize the identified barriers that are feasible to address
17. Select change strategies to address the identified barriers
18. Field-test the selected change strategies
19. Create a plan for a process evaluation
20. Create a plan for an outcome evaluation

PHASE 3 – Implement, evaluate, sustain
21. Complete a pre-launch checklist
22. Create a sustainability plan
23. Use data to assess if the best practice is being used
24. Use data to assess if use of best practice resulted in the desired outcomes
25. Use the monitoring and evaluation findings to adjust the change strategies
26. Use strategies to sustain use of the best practice over time
27. Use data to assess if sustainability strategies are maintaining use of best practice

Throughout implementation process
28. Consider equity, diversity, and inclusion (EDI)
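As a minimal sketch of how responses to these 28 items could be represented for analysis, the snippet below encodes one hypothetical record. The field names and three-point rating labels are assumptions for illustration only; the actual questionnaire wording and response options are in Additional file 2.

```python
# Illustrative sketch only: one possible way to represent a respondent's rating
# of a single Roadmap activity. Field names and labels are assumptions, not the
# actual REDCap export format.
from dataclasses import dataclass

# Assumed three-point response labels (placeholders)
IMPORTANCE = ("Not important", "Somewhat important", "Very important")
COMPLETION = ("Never", "Sometimes", "Always")
CONFIDENCE = ("Not confident", "Somewhat confident", "Very confident")

@dataclass
class ActivityResponse:
    item_number: int   # 1-28, as numbered in Table 1
    phase: str         # "Phase 1", "Phase 2", "Phase 3", or "Throughout" (item 28)
    importance: str    # one of IMPORTANCE
    completion: str    # one of COMPLETION
    confidence: str    # one of CONFIDENCE

# Hypothetical record for activity 22 ("Create a sustainability plan")
example = ActivityResponse(22, "Phase 3", "Very important", "Sometimes", "Somewhat confident")
print(example)
```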
Second, we used our findings from a previous secondary qualitative analysis of interviews with maternal-newborn nursing leaders [
19] to identify areas for exploration in the current survey. Our previous study identified the need to better understand the specific roles involved in the various implementation phases, the mechanisms for engagement, and considerations for equity, diversity, and inclusion (EDI); we therefore included these topics in the current questionnaire.
The research team, which included experts in nursing, implementation science, and implementation practice, reviewed the study questionnaire to assess its face validity. The questionnaire was piloted by two maternal-newborn nurses with experience leading implementation initiatives. Changes were made to enhance clarity (e.g., defining terms, including examples), flow (e.g., re-ordering the questions), and functionality (e.g., changing large matrices to individual questions). While feedback suggested the questionnaire was long and may not be feasible for busy healthcare providers and leaders to complete, we made the decision to maintain the length of the questionnaire to facilitate a comprehensive understanding of respondents’ views and approaches to implementing practice changes.
The final questionnaire had 159 questions across six parts. Question formats included 135 closed-ended questions (including multi-select, single-select, and rating scale formats) and 24 open-ended questions (including substitution, extension, and expansion questions [
30]). This paper reports on the first three questionnaire sections: eligibility screening, demographics, and implementation process and steps (see Additional file
2 for full questionnaire).
Data collection
Respondents were asked to complete an online questionnaire hosted in REDCap (Research Electronic Data Capture), a secure, web-based software platform designed to support data capture for research studies [
31]. The questionnaire took approximately 20 to 30 min to complete. The questionnaire was an “open survey,” meaning there was one generic link for all respondents. To minimize the risk of multiple entries by the same person, we collected detailed demographic information (including hospital name, unit name, and role), which allowed us to identify and remove duplicate entries if needed.
Data analysis
We imported the questionnaire responses into SPSS (version 29) for analysis. We used descriptive statistics for the closed-ended questions. We excluded records from respondents who did not answer any questions beyond the 28 Implementation Roadmap activities. For nominal and ordinal data, we present frequencies and percentages to show the response distribution across categories. For continuous data, we present measures of central tendency and dispersion (medians, interquartile ranges [IQR], and ranges). For open-ended text boxes, we grouped the written responses into categories and provide frequency counts, where appropriate, to illustrate recurring topics. Some data were missing because respondents skipped questions; we therefore indicate the denominator throughout to make clear the number of respondents and the extent of missingness for each question.
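The snippet below is a minimal sketch of this descriptive approach (frequencies and percentages with a per-question denominator for categorical items; median, IQR, and range for continuous items). The variable names and toy values are hypothetical, not study data, and the analysis itself was conducted in SPSS rather than Python.

```python
# Illustrative sketch only: descriptive summaries with explicit per-question
# denominators. Column names and values are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "role": ["Nurse", "Nurse manager", None, "Clinical educator", "Nurse"],
    "years_in_role": [4, 12, 7, None, 2],
})

# Frequencies and percentages for a categorical item, using its own denominator
role_counts = df["role"].value_counts()
n_role = int(df["role"].notna().sum())
print(f"Role (n = {n_role}):")
print((role_counts / n_role * 100).round(1))

# Median, IQR, and range for a continuous item
years = df["years_in_role"].dropna()
q1, q3 = years.quantile([0.25, 0.75])
print(f"Years in role (n = {len(years)}): median {years.median()}, "
      f"IQR {q1}-{q3}, range {years.min()}-{years.max()}")
```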
Ethical considerations
This study was approved by the Children’s Hospital of Eastern Ontario (CHEO) Research Ethics Board (protocol #23/59X) and the University of Ottawa Research Ethics Board (H–12–23–9959). The questionnaire homepage included detailed information on the study and consent was implied by starting the questionnaire. The questionnaire included indirect identifiers (e.g., hospital name, unit type, role), which together could identify the respondent; however, these were optional. To protect confidentiality, only the required research team members had access to the REDCap database and identifying information was saved in a master list separate from the data.
Discussion
In this study we aimed to explore the implementation processes and implementation capacity-building needs of maternal-newborn nurses, other healthcare professionals, and leaders. We grounded our data collection in an implementation planned-action framework, the Implementation Roadmap, to identify how usual implementation practices compare to recommendations from implementation science. We also learned about respondents’ perceptions of importance and confidence completing various implementation activities. Together, this highlights specific phases and activities where maternal-newborn teams may need additional support in the implementation process.
About half of respondents indicated their organization had a mandatory or optional practice change process or framework, and we observed that these respondents more frequently reported always completing activities in the Implementation Roadmap. This is unsurprising given that theories, models, and frameworks encourage a systematic and structured implementation process and create a shared language and understanding among team members, which may ultimately increase success and sustainability [
10,
16,
34]. In addition, some respondents named a formal theory, model, or framework they use. Similar to our previous findings [
19], most of these were approaches and tools grounded in quality improvement rather than implementation planned-action frameworks. With the growing recognition that implementation science can inform and improve change initiatives in healthcare settings, it will be important to consider how implementation science and quality improvement practice can work together to achieve optimal processes and outcomes [
35].
Respondents identified a variety of roles involved in different phases of the implementation process. Nurses in both leadership and clinical roles were the roles most frequently selected for leading change initiatives, identifying problems to address, developing solutions, and implementing solutions. These results highlight the substantial and essential role that nurses play in implementing practice changes [
15].
We observed that overall, there was strong involvement from nursing managers throughout the implementation process, which is promising given that leadership behaviors of nurse managers affect the unit climate for the implementation of evidence-based practices [
36]. However, the involvement of senior leadership was lower, which likely reflects their role in providing general support and infrastructure rather than engaging in the day-to-day activities. Senior nurse leaders have been identified as a key group that can embed implementation science in their organizations [
37]. Given our findings that evaluation and sustainability planning and execution were some of the least frequently completed activities, there may be a particularly important role for senior nurse leaders to play in supporting these activities.
We observed that pregnant and birthing people and their families were less frequently involved in the implementation process, a finding consistent with our previous qualitative study [
19]. While about half of respondents indicated pregnant and birthing people and their families were involved in identifying problems (likely through patient feedback systems and patient and family advisory committees), there was a notable decrease in the number of respondents who said they also involved them in developing and implementing solutions. This may be a missed opportunity given that involving patients in co-designing solutions can result in improvements to care, service delivery, and governance [
38].
Overall, we found that most respondents perceived the three phases and 28 implementation activities to be important, aligning with our previous work in a different setting [
39]. Despite these generally positive ratings of importance, we observed that these implementation activities were not always done. There are several potential explanations for the discrepancy between viewing an activity as important but not doing it. Implementation is complex and there are multi-level barriers to implementing evidence in practice [
40]. Implementation is often done with limited time and resources in conditions that are rapidly changing, resistant to change, and subject to policies and regulations [
41], and this may account for some implementation activities not being feasible to complete, despite being viewed as important. It is also possible that the difference relates to a lack of knowledge and skill in how to complete the activity. This is consistent with respondents most frequently indicating they were “somewhat” confident across the implementation activities, suggesting that room for improvement exists. Given that healthcare professionals are typically not taught about implementation and improvement practice in their training, nor are there sufficient professional development opportunities in this area [
9,
42], it is unsurprising that nurses and other healthcare professionals are reported to have low knowledge [
43] and skill proficiency in quality improvement [
44] and low confidence in knowledge translation and implementation [
45,
46].
We observed a general decrease in respondents’ ratings of importance, completion, and confidence when moving across the three implementation phases. Variability in practitioner perceptions across the implementation process has been reported in previous studies [
45‐
48]. For instance, allied health professionals reported higher confidence levels in the earlier steps of the implementation process (e.g., identifying an evidence-practice gap, finding and appraising evidence), and reported lower confidence levels with activities later in the process (e.g., implementation, monitoring, evaluation) [
45,
46]. McNett et al. [
48] described three core implementation stages (initiating, maintaining, completing and sustaining) and found that respondents reported moderate difficulty completing each of the stages, with the difficulty ratings increasing across stages. Similarly, respondents rated their success higher in the first stage, with success ratings decreasing in stages 2 and 3. This aligns with our finding that respondents’ ratings of importance, completion, and confidence were highest in the first phase and decreased in later phases, and signals that nurses, other healthcare professionals, and leaders may need additional support with later implementation activities.
Most respondents identified a need to increase their knowledge and skills in using an evidence-informed approach to implementing practice changes. Our study and previous literature [
45,
46] suggest that nurses and other healthcare professionals are in fact interested in learning more about how to implement evidence into practice. Respondents who had previous training in implementation, quality improvement, or knowledge translation rated their knowledge and confidence more positively than those with no training. Implementation capacity-building interventions may result in positive outcomes including increased knowledge and understanding of implementation and an increase in self-reported ability to implement [
49]. While we acknowledge that training and tools alone would be insufficient, it is promising to see there is an appetite for further learning about how to effectively move evidence into practice. Ongoing work is needed to develop and evaluate evidence-informed tools, resources, and training initiatives to support the application of implementation science in clinical and health services settings.
Strengths and limitations
Despite use of strategies known to improve response rates (e.g., pre-notice, reminders) [
50], our individual response rate and questionnaire completion rate were only 26% and 58%, respectively. These numbers may reflect the limited feasibility of the questionnaire for busy healthcare providers and leaders, as it took longer to complete than the recommended 10 minutes [
50]. Despite this, we obtained at least one response from nearly two-thirds of maternal-newborn hospitals in Ontario, providing good representation of both the geographical locations and acuity levels of maternal-newborn units across the province. However, because recruitment occurred only in Ontario, Canada, the sample may not be representative of maternal-newborn teams in other jurisdictions, and the findings should therefore be generalized with caution.
It is important to note that most respondents were nurses, and therefore our findings likely do not reflect the perceptions of other healthcare professionals. However, this nursing perspective is essential given that nurses are often the ones responsible for leading and participating in implementing practice changes. Another limitation is that all data were self-reported; this means that when a respondent said their team completed an implementation activity, we do not know whether this is accurate, nor can we say that it was done optimally. In addition, it is possible that respondents may have answered questions in a way that reflected positively on themselves and their units (i.e., social desirability bias). However, in many cases, respondents indicated activities their teams did not complete or areas where they were not confident, suggesting that respondents were likely not overestimating.
Finally, it is important to acknowledge that although we assessed face validity, the questionnaire did not undergo validity and reliability testing. In addition, in the questionnaire section with the 28 Implementation Roadmap activities we used a simple three-point scale to assess importance, completion, and confidence. While this decision was made to make the lengthy questionnaire easier to complete, we acknowledge that this poses limitations for our analysis, preventing us from obtaining a more nuanced understanding of respondents’ ratings.
Recommendations for research
This work highlights several opportunities for future research. First, while this study identified general patterns across implementation phases, we do not know the reasons for these differences. Future research could explore these gaps by asking respondents for more details on the “why” behind these findings. Second, given the limited evidence on how implementation science is being applied by practitioners in healthcare settings, further research is needed to explore this in settings beyond maternal-newborn care and in other jurisdictions. Third, while in this study we focused on describing implementation processes, future research is needed to understand how different implementation steps and activities result in different clinical and implementation outcomes. Finally, promoting equity and better engaging patients throughout the implementation process have been identified as implementation research priorities in maternal health [
51]. While we assessed how frequently respondents consider equity and engage patients throughout implementation, further research is needed to develop effective strategies to facilitate these essential components of effective, person-centered, and equitable implementation.
Implications for nursing
Implementing best practices in organizations is a collective activity, and leadership has been identified as an important mediator that can facilitate or hinder implementation [
52]. Nurse managers have indicated a desire to learn about implementation and better support clinical staff [
53]; ongoing supports and opportunities are needed to further enhance nursing leaders’ ability to foster environments that facilitate implementation. There are also considerations for nursing education. To date, nursing education programs have embedded evidence-based practice and research methods in curricula; however, there is a need to also include content on how to actually implement evidence into practice [
54], including exposure to implementation science and its application to nursing practice.