The aim of this study was to establish expert consensus on the initial content of an algorithm to be used in creating a mobile wound care application specifically designed for newly graduated nurses. The e-Delphi approach was used. Wound care experts achieved consensus on 75 items for inclusion in the algorithm of the future application. Moreover, the response rate remained high in each round, surpassing the 70% threshold necessary to maintain methodological rigor [35] and outperforming rates seen in other e-Delphi studies of wound care experts [58-60].
Several strategies, such as incorporating an animated explanatory video on the initial questionnaire screen, avoiding distribution during the holiday season, and sending personalized email reminders, improved the usability of the online questionnaire and mitigated attrition [61]. Additionally, removing items that had reached consensus in the second round resulted in a more concise third-round questionnaire. While this methodological choice may have contributed to participant retention, it also meant that items that had already reached consensus in round 2 had no opportunity to achieve even greater consensus [
35]. In addition to the reminders sent, the high response rate can be attributed to the experts’ implicit recognition of the subject’s significance. This level of commitment aligns with the findings of Belton et al. [
62], who noted that experts are more likely to continue participating when they perceive the purpose and relevance of the Delphi exercise or when the outcome of the consensus process directly affects them. However, this commitment may introduce bias, as individuals with dissenting opinions are more likely to drop out of the study [
62].
Composition of the expert panel
While there are no formal, universal guidelines on the required number of experts for a representative panel in a consensus method, the number of experts who completed the e-Delphi exercise is considered satisfactory. The choice of sample size depends on various factors, including the consensus objective, the chosen method, available time, and practical logistics [
35,
48,
63,
64]. Wound care e-Delphi studies have shown a wide range of sample sizes, from 14 [
65] to 173 participants [
60]. Most publications and consensus method guides suggest that a minimum of six participants is necessary for reliable results [
35,
41,
42,
64,
66,
67]. While larger sample sizes enhance result reliability, groups exceeding 12 participants may encounter challenges related to attrition and coordination [
43,
68]. In their methodological paper on the adequacy of utilizing a small number of experts in a Delphi panel, Akins et al. [
69] argue that reliable results and response stability can be achieved with a relatively small expert panel (
n = 23) provided its members are selected based on strict inclusion criteria. This was particularly relevant in the present study, given the limited number of experts specializing in wound care.
Beyond the numbers, it is important to emphasize that the representativeness of the sample serves a qualitative rather than statistical purpose, focusing on the quality of the expert panel rather than its size [
35]. The heterogeneity of the expert panel is critical to the rigorous implementation of a consensus method because it expands the range of perspectives, fosters debate, and stimulates the development of innovative solutions [
41,
50,
61]. This principle is strongly supported by Niederberger and Spranger [
70], who suggest drawing experts from diverse backgrounds to create a broad knowledge base that can yield more robust and creative results. Additionally, the heterogeneity of the expert panel helps mitigate potential conflicts of interest related to publications, clinical environments, or affiliations with universities.
Consensus
The literature on Delphi techniques does not provide a universally agreed-upon definition of consensus [
35]. However, the 80% consensus threshold used in this study exceeds the thresholds proposed in some methodological literature, such as 51% [
71] and 75% [
63]. This threshold aligns with those used in other Delphi studies in wound care, which typically range between 75% [
58] and 80% [
72,
73]. The results of this e-Delphi study indicate consensus for 75 items, based on descriptive statistics and an analysis of comments. The decreasing number of comments and interquartile ranges of one or less demonstrate a convergence of opinions. In addition to scientific criteria, practical factors such as available time and participant fatigue were considered. Consequently, the e-Delphi concluded after three rounds, as consensus was achieved for most items. This aligns with the typical practice of Delphi exercises, which often involve two or three rounds [
70]. It was unlikely that a fourth round would introduce new items. Consensus aims to reconcile differences rather than eliminate them. Hence, it was decided to address remaining areas of debate and less stable items in the subsequent stage of the application design process, utilizing another method: focus groups with prospective application users.
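To illustrate how the consensus criteria described above (agreement of at least 80% and an interquartile range of one or less on a 5-point Likert scale) could be operationalized, the following minimal sketch applies them to a single item. The ratings are hypothetical, and the snippet is not the analysis script used in this study.

```python
# Minimal sketch (hypothetical data, not the study's analysis script):
# an item reaches consensus if at least 80% of experts rate it 4 or 5
# on a 5-point Likert scale AND the interquartile range is 1 or less.
import numpy as np

def item_reaches_consensus(ratings, agreement_threshold=0.80, max_iqr=1.0):
    """Return True if the item meets both consensus criteria."""
    ratings = np.asarray(ratings)
    agreement = np.mean(ratings >= 4)  # proportion of experts scoring 4 or 5
    iqr = np.percentile(ratings, 75) - np.percentile(ratings, 25)
    return bool(agreement >= agreement_threshold and iqr <= max_iqr)

# Hypothetical ratings from a 12-member panel for one item
example_ratings = [5, 4, 5, 4, 4, 5, 3, 5, 4, 5, 4, 4]
print(item_reaches_consensus(example_ratings))  # True: 11/12 agree and IQR = 1
```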
The substantial number of items that gained consensus in the second round reflects the complexity of the considerations involved in safe wound care delivery. Many of these items, including clinical situation assessment and factors affecting wound healing, were rated as highly essential. The top 10 consensus items were separated by only minimal differences. Notably, the distribution of agreement was markedly skewed, with experts more likely to strongly agree or agree (score of 4 or 5) than to disagree (score of 1 or 2) or remain neutral (score of 3).
The item that achieved the strongest consensus in this study, namely “signs and symptoms of infection”, aligns with the latest guidelines from the International Wound Infection Institute [
74]. This high ranking was anticipated due to ongoing concerns surrounding antimicrobial resistance and the pressing need to improve practices related to the assessment and management of wound infections [
74]. In addition to “signs and symptoms of infection”, there was also significant consensus on the appropriate timing for wound cultures. Wound cultures are often unnecessarily requested when wounds lack clinical signs of infection, resulting in approximately 161,000 wound cultures performed annually in Quebec and an average annual expenditure exceeding CAN$15.6 million [
75]. This problem could be addressed with the future application, which would recommend performing a culture only to guide treatment decisions, and only after a clinical diagnosis of infection based on signs and symptoms [
74,
76]. Certainly, the experts’ positions on these infection-related issues have the potential to foster safe, evidence-based wound care practice.
Some items, although considered essential, received notably lower average agreement levels. This was particularly evident in items related to dressings, including trade names and government reimbursement codes, which achieved some of the lowest consensus in the second round. Qualitative comments shed light on this phenomenon, suggesting that experts prioritize fundamental wound care principles: the identification and management of causal factors and adequate wound bed preparation should precede the selection of a dressing [
77,
78]. Similarly, the assessment of the ankle-brachial index and its indications also achieved some of the lowest consensus scores during the second round. This finding reflects Quebec’s initial wound care training, which designates the ankle-brachial index as a subject reserved for university-level education [
79]. This implies that recently graduated college-trained nurses may lack the necessary knowledge in this aspect of vascular assessment. Nonetheless, the item is recommended for inclusion, and this result fuels the ongoing debate regarding university training as the standard for entry into the profession [
80].
The shift in opinions between the second and third rounds is worth noting, as it underscores the value of the iterative process in the e-Delphi technique employed. The wide range of kappa coefficients highlights the impact of the process and feedback on the evolving views of experts. It is essential to remember that kappa measures the agreement of an individual expert’s responses between two rounds, not agreement among the experts on the panel [
51]. For example, some experts may have revised their opinions due to decreased confidence and aligned with the majority’s view. While methodologically adequate, the sample size can be considered statistically small, meaning that a single expert changing their stance can significantly affect the kappa coefficient [
53,
81]. Scheibe et al. describe these variations as “inevitable” [
82, p.272]. However, the average response for each item carried forward from the second round changed by less than one point in the third round, demonstrating the overall stability of the aggregate ranking and the reliability of the agreement for these items [
83]. In quantifying the extent of disagreement, the range of the standard deviations of the items that achieved consensus narrowed in the third round. This suggests a reduction in outliers and a convergence of viewpoints as the rounds progressed [
51]. These results support the conclusions of Greatorex and Dexter [
83], namely that each item submitted to the Delphi technique must show acceptable mean and standard deviation values to represent a consensus.
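The following minimal sketch illustrates the stability measures discussed above: Cohen’s kappa for an individual expert’s ratings between two rounds, and the round-to-round change in an item’s mean and standard deviation. The data are hypothetical, and the snippet is not the study’s own analysis code.

```python
# Minimal sketch (hypothetical data, not the study's analysis code):
# Cohen's kappa measures how consistently ONE expert rated the same items
# across two rounds; the change in an item's mean and standard deviation
# summarizes group-level stability and convergence between rounds.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical ratings by one expert for the same eight items in rounds 2 and 3
expert_round2 = [4, 5, 3, 4, 5, 2, 4, 5]
expert_round3 = [4, 5, 4, 4, 5, 3, 4, 5]
kappa = cohen_kappa_score(expert_round2, expert_round3)  # intra-rater agreement
print(f"kappa = {kappa:.2f}")

# Hypothetical panel ratings for one item in rounds 2 and 3
item_r2 = np.array([4, 5, 3, 4, 5, 4, 4, 5, 3, 4])
item_r3 = np.array([4, 5, 4, 4, 5, 4, 4, 5, 4, 4])
print(f"mean shift = {abs(item_r3.mean() - item_r2.mean()):.2f}")  # < 1 point
print(f"SD: round 2 = {item_r2.std(ddof=1):.2f}, round 3 = {item_r3.std(ddof=1):.2f}")
```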
Implications
Four main implications can be drawn from this study. First, as mentioned earlier, the results will inform the development of the algorithm that will be used to create a wound care mobile application. Second, the high levels of consensus demonstrated in this study indicate strong support among experts for the creation of digital wound care tools, which can help bridge the existing gap between wound care theory and practice. Third, presenting the items thematically can help stakeholders use parts of the results to create resources such as a comprehensive and holistic initial assessment tool. Finally, this study defines the expectations of expert wound care nurses regarding the competencies new nurses should possess upon entering the profession. While the future application can support knowledge, it cannot replace training, which forms the foundation of skill development. Therefore, this study provides a set of items that could be used to enhance initial training and professional development. For future research, it will be important to validate these results and compare them with the views of nurses working in research and academia. Because the expert panel for this study primarily consisted of clinical nurses, experts from the fields of research and education were under-represented. Given this composition, it was not possible to establish statistically significant differences between these groups (e.g., academic vs. clinical backgrounds). This would be an interesting avenue to explore with a larger sample and with members from various health disciplines.
Strengths
The primary strength of this study lies in the choice of the e-Delphi technique and its transparent and rigorous implementation to achieve consensus in a field where empirical data are often lacking [
35]. Given the challenges posed by the COVID-19 pandemic and the uncertainties surrounding in-person meetings, the use of an online questionnaire was a clear advantage and justified the choice of the e-Delphi technique. Moreover, experts are unlikely to travel long distances to participate in discussion groups, as the nominal group technique would require [86]. Additionally, the asynchronous completion of the questionnaire sets the e-Delphi technique apart, given the considerable challenge of coordinating the already busy schedules of experts.
The adoption of the e-Delphi technique in this study, following the classic Delphi technique used in nursing since the mid-1970s [
41], offered several advantages. It was cost-effective, efficient, environmentally friendly, and not constrained by geographical boundaries. Additionally, it allowed for pretesting, had no sampling limits, and enabled asynchronous participation, ensuring data accessibility for the research team at any time and location [
35,
87‐
90]. Considering the variable schedules of expert wound care nurses, these benefits undoubtedly contributed to the high retention rate. The iterative e-Delphi process enhanced the experts’ reflexivity, leading to a wealth of data. Beyond the advantages of standardization, such as improved external validity, this collaborative approach also enhanced the acceptability of the items [
55].
Another strength of this study is the protection of inter-participant anonymity. The e-Delphi technique enabled experts from diverse backgrounds and levels of expertise to express their views without fear of judgment from others. This approach minimized potential biases associated with dominant group opinions, social influences, and the halo effect [
87]. Additionally, each participant’s input held equal weight in the process [
35,
63].
Limitations
This study has several limitations. First, all the participating experts were from Quebec, which introduces a geographical bias, restricting the generalizability of the results beyond this region. This choice was deliberate to ensure that the experts had a deep understanding of the specific context in which the future application would be used. It is important to recognize that the findings of Delphi studies are typically specific to the expert panel [
35,
40]. Second, the use of purposive sampling introduced an inherent selection bias [
45]. Additionally, network recruitment might have led experts to recommend like-minded colleagues. To mitigate this, experts were recruited with the goal of achieving the broadest possible representation and encompassing a wide range of viewpoints. Another methodological limitation is that this e-Delphi study did not allow direct interaction among the experts, which prevented in-depth debate and discussion.
Despite the anonymity, the experts might have been influenced by the opinions of their peers or the results of previous rounds, potentially leading to a conformity bias associated with the bandwagon effect, which could have led them to withhold their honest opinions [
63,
67]. Conversely, an anchoring bias may have led experts to disregard alternative perspectives [
63,
67]. Finally, it is important to remember that expert consensus does not represent absolute truth. Instead, it is a valuable outcome based on the opinions of a selected group of experts and must be interpreted critically and contextually, in conjunction with the literature.