Design and randomization
This was a randomized, single-blind, parallel-group trial. Participants were stratified by years of experience as a PHN (two to three years, four to five years) and affiliation (prefectures, government-designated or other major cities, other municipalities) and allocated using a stratified permuted block method. The allocation table developed by the author was uploaded to the Research Electronic Data Capture (REDCap) system and then immediately deleted to prevent any subsequent arbitrary allocation. REDCap automatically assigned participants to the intervention or control group (1:1 ratio) based on the order of registration. The intervention group viewed our program, whereas the control group received no intervention during the trial. For ethical reasons, participants allocated to the control group were offered the same training after completing the questionnaires.
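The stratified permuted-block procedure described above can be sketched as follows. The stratum labels, block size, and slot counts are illustrative assumptions, not the trial's actual allocation table.

```python
import random

def stratified_permuted_blocks(strata_counts, block_size=4, seed=42):
    """Build a stratified permuted-block allocation table (1:1 ratio).

    strata_counts maps a stratum label (experience x affiliation) to the
    number of allocation slots to generate. Within each stratum, arms are
    assigned in shuffled blocks containing equal numbers of each arm,
    keeping group sizes balanced throughout recruitment.
    """
    rng = random.Random(seed)
    table = {}
    for stratum, n in strata_counts.items():
        sequence = []
        while len(sequence) < n:
            block = ["intervention", "control"] * (block_size // 2)
            rng.shuffle(block)          # permute arm order within the block
            sequence.extend(block)
        table[stratum] = sequence[:n]
    return table

# Illustrative strata: 2 experience bands x 3 affiliation types
strata = {(exp, aff): 28
          for exp in ("2-3y", "4-5y")
          for aff in ("prefecture", "designated_city", "municipality")}
allocation = stratified_permuted_blocks(strata)
```

Each participant registering within a stratum would then receive the next unused entry in that stratum's sequence, matching the order-of-registration assignment described above.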
The research manager then sent the URL, password, and training instructions for the program to the participants’ personal e-mail addresses. Participants were strictly prohibited from disclosing the URL, password, or other information about the program to third parties. The study was reported in accordance with the Consolidated Standards of Reporting Trials (CONSORT) [21] and the Template for Intervention Description and Replication (TIDieR) guidelines [22].
Sample and setting
Based on the findings of Yoshioka-Maeda et al. [20], a sample size of 134 was calculated using G*Power 3.1.9.2, with the effect size, alpha, and power set at 0.5, 0.05, and 80%, respectively. This was inflated to 174 to allow for an anticipated dropout rate of 30%.
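As a rough check on these numbers, the per-group size needed to detect a medium effect (d = 0.5) at α = 0.05 with 80% power can be approximated with the standard normal-approximation formula; G*Power's exact computation gave the slightly larger total of 134 reported above. The 30% dropout inflation is simple arithmetic. This is a sketch, not the study's actual calculation.

```python
import math
from scipy.stats import norm

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample
    comparison of means: n = 2 * ((z_{alpha/2} + z_beta) / d)**2."""
    z_a = norm.ppf(1 - alpha / 2)   # ~1.96 for alpha = 0.05, two-tailed
    z_b = norm.ppf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

approx_total = 2 * n_per_group(0.5)   # ~126 by approximation; G*Power: 134
target = round(134 * 1.3)             # inflate for 30% anticipated dropout
print(approx_total, target)           # 126 174
```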
To solicit participation, survey documents were mailed to 53 prefectural health centers, 123 health centers operated by government-designated or other major cities, and 138 municipal health centers in the Kansai, Chubu, and Chugoku regions of Japan in mid-October 2022. Booklets outlining this study were also sent so that PHNs could apply for participation after fully comprehending the study. Applicants were asked to register on the REDCap webpage by early December 2022.
Intervention
This program was named the Capacity Development Training Course for Evidence-based Program Implementation (“EPI-TRE”). Its ultimate objective was for participants to acquire basic program implementation capacity. The IDAS was adopted as the training framework because, as a competency list, it enables the assessment of individual program implementation capacities [19].
The IDAS was developed following the three phases proposed by Boateng et al. [23]. First, a validated implementation science framework suitable for this program was selected [19]. The CFIR, selected as the framework, was translated into Japanese and verified multiple times for accuracy [19]. To ensure that the framework’s content suited the context of Japanese health programs, multiple revisions were made through expert consultation and pre-testing [19]. In the second phase, a questionnaire survey was administered to the target population nationwide as the primary test [19]. In the final phase, the reliability and validity of the scale were examined using the obtained data (GFI: 0.87, CFI: 0.92, RMSEA: 0.06, Cronbach’s alpha > 0.8, Spearman’s coefficient: 0.95), all reaching acceptable levels [19].
The IDAS lists 31 core indicators across five domains to assist PHNs with implementing evidence-based programs [19]. The five domains are defined as follows: 1) “Intervention characteristics,” referring to activities to verify or adjust a program’s evidence, merits, procedures, adaptability to the local context, expenses, and so on; 2) “Outer setting,” referring to activities to identify and leverage external factors, including extrinsic incentives, best practices, and possibilities for collaboration; 3) “Inner setting,” referring to activities to encourage program introduction by setting or verifying objectives and considering organizational culture and the impact of other intrinsic factors; 4) “Characteristics of individuals,” referring to the competencies of individuals, measured in terms of self-efficacy, professional identity, and ongoing upskilling; and 5) “Process,” involving activities to plan, implement, review, and evaluate a program at both the individual and organizational levels, in collaboration with other stakeholders [24].
Three D&I researchers with public health nursing experience (two of whom created the IDAS) discussed a framework for the training program based on the four-step implementation science framework [25], the 10-step learning derivation of policy transfer [26], and the IDAS. Consequently, the framework for this training was set at eight items. Additionally, the research group decided to focus on three indicators within the five domains that PHNs with five years or less of experience had been found to place less importance on, and therefore had fewer opportunities to implement, based on a prior survey by Okamoto et al. [27]. These indicators are “verification of evidence,” “verification of trialability,” and “access to knowledge and information” [27]. The training was drafted by the research group over approximately one month, with input from five health program experts who had also participated in the creation of the IDAS. Currently, EPI-TRE is accessible at https://www.phn-waza.com/content2-3/ [28].
The program consists of four modules: an introduction (Module 0) and three competency-building modules (Modules 1–3). Each training module lasts approximately 30 minutes and is divided into several parts, so participants can start from any part. All parts remain accessible for four weeks, during which participants can view the content as many times as they like. However, the program does not allow fast-forwarding, so each part must be viewed in full. The program features animated characters as lecturers, facilitators, and supporters to guide participants through the learning process. It includes narrative simulations in which participants are asked to select their preferred course of action from two options in each situation.
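The completion-gating behavior described above (no fast-forwarding; every part must be viewed in full) could be tracked with a structure like the following. The class, part names, and durations are hypothetical illustrations, not the program's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class ModuleProgress:
    """Tracks viewing time per part; a part counts as complete only when
    accumulated playback reaches its full duration (skipping earns no credit)."""
    part_durations: dict                          # part id -> duration (seconds)
    watched: dict = field(default_factory=dict)   # part id -> seconds credited

    def record_playback(self, part, seconds):
        # Credit real playback time only, capped at the part's duration.
        self.watched[part] = min(
            self.part_durations[part],
            self.watched.get(part, 0.0) + seconds,
        )

    def is_complete(self):
        return all(self.watched.get(p, 0.0) >= d
                   for p, d in self.part_durations.items())

module1 = ModuleProgress({"part_a": 300, "part_b": 600})
module1.record_playback("part_a", 300)   # part_a fully viewed
module1.record_playback("part_b", 200)   # part_b only partially viewed
```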
An outline of the web-based training program is presented in Table 1. Module 1 aims to motivate participants to engage in the program by helping them comprehend the basic facts and skills regarding program implementation. This module includes simulations of the eight steps of program implementation and a lecture on evidence. It also builds on Lave’s learning transfer model [29].
Module 2 was designed to enhance the participants’ readiness to implement EBPH programs. In this module, participants learn how to introduce best practices by leveraging the most appropriate knowledge and skills. It includes simulations for capacity-building related to the “quality and level of evidence” and “verification of trialability.” The lecture in Module 2 focuses on “access to knowledge and information.”
Module 3 has the same objective as Module 2 and enables participants to review what they have learned. If participants answer a question incorrectly, they are automatically directed back to the relevant page for re-learning. An encouraging message for participants is included at the end of the module. Modules 2 and 3 are based on an experiential learning model [30]. As part of the study’s engagement strategy, reminder e-mails were sent to all participants advising them when to complete the survey. Thereafter, e-mails were sent frequently to those who had not yet participated, to encourage participation and ensure that all participants completed the program, including the questionnaires. The e-mail messages were changed several times to increase motivation. The program also used animated characters and music to encourage participation. As a further incentive, a downloadable PDF file of the program was sent to those who completed all the activities. In line with the findings of Yoshioka-Maeda et al. [20], we did not organize any group sessions but provided support for accessing the program.
Table 1
Components of the training program

| Module | Duration | Theme | Contents | Case scenario |
|---|---|---|---|---|
| 0 | 4 min | Introduction | 1) Explain how to use the training program; 2) Introduce terms and definitions used in the training | — |
| 1 | 28 min | Understand the main points of health program implementation | 1) Clarify the necessity of developing or improving an EBPH program; 2) Share the necessity of developing or improving a program among stakeholders and introduce a project; 3) Select an appropriate best practice case; 4) Establish evidence supporting the selected best practice program; 5) Decide whether to adopt a program and prepare for its application; 6) Increase the possibility of successful implementation of the adopted program; 7) Evaluate the new or improved program after implementation; 8) Build professional competencies; Lecture 1: What is evidence? / How to use evidence | Family support program |
| 2 | 27 min | Apply the best practice to a new program | 1) Review Module 1; 2) How to select an appropriate best practice case; 3) How to establish evidence supporting the selected best practice program; 4) How to decide to adopt a best practice / how to prepare for its application; Lecture 2: How to find evidence | Community health promotion program |
| 3 | 35 min | Use the program in your own practice | 1) Review Module 2 / how to study Module 3; Lecture 3: Positive outcomes resulting from the development of implementation skills; 2) Clarify the necessity of developing or improving an EBPH program; 3) Share the necessity of developing or improving a program among stakeholders and introduce a project; 4) Select an appropriate best practice case; 5) Establish evidence supporting the selected best practice program; 6) Decide whether to adopt a program and prepare for its application; 7) Increase the possibility of successful implementation of the adopted program; 8) Evaluate the new or improved program after implementation; 9) Encourage participants to build professional competencies | Disaster management for the community |
Statistical analysis
To compare participant demographics between the control and intervention groups, Fisher’s exact test was used for categorical variables and the Mann–Whitney U-test for continuous variables. Following the intention-to-treat principle, analyses were conducted using both a full analysis set (FAS) and a per-protocol set (PPS). The FAS included participants who completed the baseline questionnaire and proceeded to Module 1 of the intervention program. Missing data were not imputed, as Little’s test indicated that data were missing completely at random (χ² = 19.96, df = 20, P = 0.460). The primary outcome, the total IDAS score, was analyzed using t-tests at T1, T2, and T3, as were the scores for the five IDAS domains. Differences between T1 and T3 scores were adjusted for years of experience and affiliation using analysis of covariance (ANCOVA). The level of understanding was assessed at each time point using the Mann–Whitney U-test, and differences in scores between baseline and endpoint were adjusted for years of experience and affiliation using Quade’s non-parametric ANCOVA. Statistical analyses were conducted using IBM SPSS Statistics for Windows, version 29. All statistical tests were two-tailed, and P-values less than 0.05 were considered statistically significant.
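The analyses above (conducted in SPSS) map onto standard SciPy and statsmodels calls. The sketch below uses simulated scores and hypothetical group sizes, not trial data, purely to show the shape of each test.

```python
import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60                                        # hypothetical group size

# Simulated IDAS totals at baseline (t1) and endpoint (t3)
df = pd.DataFrame({
    "group": ["intervention"] * n + ["control"] * n,
    "t1": rng.normal(100, 12, 2 * n),
    "experience": rng.choice(["2-3y", "4-5y"], 2 * n),
})
df["t3"] = (df["t1"]
            + np.where(df["group"] == "intervention", 6.0, 0.0)
            + rng.normal(0, 8, 2 * n))

# Group comparison of the primary outcome at one time point: t-test
t, p_t = stats.ttest_ind(df.loc[df.group == "intervention", "t3"],
                         df.loc[df.group == "control", "t3"])

# Baseline continuous variable: Mann-Whitney U-test
u, p_u = stats.mannwhitneyu(df.loc[df.group == "intervention", "t1"],
                            df.loc[df.group == "control", "t1"],
                            alternative="two-sided")

# Categorical demographics: Fisher's exact test on a 2x2 table
odds, p_f = stats.fisher_exact(pd.crosstab(df.group, df.experience).values)

# ANCOVA: endpoint score adjusted for baseline and a stratification factor
ancova = smf.ols("t3 ~ group + t1 + C(experience)", data=df).fit()
```

Quade's non-parametric ANCOVA has no single canonical SciPy function; it is typically run by ranking the outcome and covariates and fitting an ANCOVA on the residuals of those ranks.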