 
 
Article

Evaluating the Benefits and Implementation Challenges of Digital Health Interventions for Improving Self-Efficacy and Patient Activation in Cancer Survivors: Single-Case Experimental Prospective Study

by
Umut Arioz
1,*,†,
Urška Smrke
1,
Valentino Šafran
1,
Simon Lin
2,3,
Jama Nateqi
2,3,
Dina Bema
4,
Inese Polaka
4,
Krista Arcimovica
4,
Anna Marija Lescinska
4,
Gaetano Manzo
5,6,
Yvan Pannatier
5,
Shaila Calvo-Almeida
7,
Maja Ravnik
8,
Matej Horvat
8,
Vojko Flis
8,
Ariadna Mato Montero
9,
Beatriz Calderón-Cruz
10,
José Aguayo Arjona
10,
Marcela Chavez
11,
Patrick Duflot
11,
Valérie Bleret
11,
Catherine Loly
11,
Tunç Cerit
12,
Kadir Uguducu
12 and
Izidor Mlakar
1,†
1
Faculty of Electrical Engineering and Computer Science, University of Maribor, 2000 Maribor, Slovenia
2
Science Department, Symptoma GmbH, 1010 Vienna, Austria
3
Department of Internal Medicine, Paracelsus Medical University, 5020 Salzburg, Austria
4
Institute of Clinical and Preventive Medicine, Faculty of Medicine and Life Sciences, University of Latvia (UL), 1586 Riga, Latvia
5
Institute of Informatics, University of Applied Sciences and Arts Western Switzerland HES-SO, 3960 Sierre, Switzerland
6
Computational Health Research Branch, National Library of Medicine, Bethesda, MD 20894, USA
7
Galician Research & Development Center in Advanced Telecommunications (GRADIANT), 36214 Vigo, Spain
8
University Medical Centre Maribor (UKCM), 2000 Maribor, Slovenia
9
Research Group in Gastrointestinal Oncology Ourense (REGGIOu), Fundación Pública Galega de Investigación Biomédica Galicia Sur, Hospital Universitario de Ourense, SERGAS, 32005 Ourense, Spain
10
Statistics and Methodology Unit, Galicia Sur Health Research Institute (IIS Galicia Sur), SERGAS-UVIGO, 36312 Vigo, Spain
11
Centre Hospitalier Universitaire de Liège (CHU), 4000 Liege, Belgium
12
Emoda Yazilim, 35590 Izmir, Türkiye
*
Author to whom correspondence should be addressed.
†
These authors contributed equally to this work.
Appl. Sci. 2025, 15(9), 4713; https://doi.org/10.3390/app15094713
Submission received: 5 February 2025 / Revised: 2 April 2025 / Accepted: 17 April 2025 / Published: 24 April 2025

Abstract

Cancer survivors face numerous challenges, and digital health interventions can empower them by enhancing self-efficacy and patient activation. This prospective study aimed to assess the impact of an mHealth app on self-efficacy and patient activation in 166 breast and colorectal cancer survivors. Participants received a smart bracelet and used the app to access personalized care plans. Data were collected at baseline and follow-ups, including patient-reported outcomes and clinician feedback. The study demonstrated positive impacts on self-efficacy and patient activation. The overall trial retention rate was 75.3%. Participants reported high levels of activation (PAM levels 1–3: P = 1.0; level 4: P = 0.65) and expressed a willingness to stay informed about their disease (CASE-Cancer factor 1: P = 0.98; factor 2: P = 0.66; factor 3: P = 0.25). Usability of the app improved, with an increase in the proportion of participants rating the system as having excellent usability (from 14.82% to 22.22%). Additional qualitative analysis revealed positive experiences from both patients and clinicians. This paper contributes to cancer survivorship care by providing personalized care plans tailored to individual needs. The PERSIST platform shows promise in improving patient outcomes and enhancing self-management abilities in cancer survivors. Further research with larger and more diverse populations is needed to establish its effectiveness.

1. Introduction

Cancer survivorship is a transformative experience that encompasses both physical and psychological challenges [1]. As individuals navigate the complexities of managing their health post-treatment, fostering self-efficacy and patient activation is paramount to achieving optimal health outcomes. Digital health interventions (DHIs), owing to their capacity to personalize, engage, and empower patients, offer considerable promise in addressing these critical aspects of cancer survivorship [2,3].
Self-efficacy and patient activation are crucial components in cancer survivorship [4]. Self-efficacy refers to an individual’s belief in their capacity to perform a specific task or behavior, whereas patient activation pertains to an individual’s knowledge, skills, and confidence in managing their health [5]. Mazanec et al. [6] demonstrated that higher levels of self-efficacy and patient activation correlate with improved outcomes for cancer survivors, including enhanced quality of life, reduced symptomatology, and the adoption of healthier lifestyles.
Self-efficacy is a key determinant of health outcomes [7]. Cancer survivors with high self-efficacy are more likely to adhere to treatment regimens, engage in health-promoting behaviors, and effectively manage symptoms [8]. Activated patients assume responsibility for their care, collaborate with healthcare providers, and actively seek out necessary information and support [9].
Traditionally, interventions aimed at promoting self-efficacy and patient activation have relied on in-person counseling and education [10]. While these approaches offer value, they often lack the scalability and accessibility required to accommodate the growing global population of cancer survivors. DHIs provide a personalized, accessible, and continuous approach to enhancing self-efficacy and patient activation among cancer survivors [11]. These interventions have been shown to improve adherence to treatment regimens, increase physical activity, and reduce symptom burden. Furthermore, DHIs can assist survivors in managing stress and anxiety, thereby fostering resilience and promoting overall well-being [12,13,14].
Several studies have examined the impact of DHIs on self-efficacy and patient activation. Powley et al. [15] evaluated the effectiveness of a digital health coaching program for patients undergoing surgery. Participants received personalized guidance via a mobile application, focusing on enhancing self-efficacy and lifestyle factors. The results demonstrated significant improvements in both areas, with patients reporting increased confidence in managing their health and adopting healthier behaviors. Van Der Hout et al. [16] investigated the efficacy of Oncokompas, a web-based self-management platform, in improving health-related quality of life and reducing symptoms among cancer survivors. The findings indicated that individuals with lower self-efficacy and higher levels of personal control or health literacy exhibited greater improvements. These results suggest that tailoring interventions to address specific needs can enhance their effectiveness in supporting the well-being of cancer survivors.
A growing body of evidence indicates that the adoption of healthy lifestyle practices, such as regular physical exercise [17], increased consumption of fruits and vegetables [18], maintaining a healthy weight and body composition [18], smoking cessation [19], and engagement in cognitive behavioral therapy [20], can positively influence cancer prognosis. However, many cancer survivors face significant challenges in adhering fully to these recommendations [19]. Previous studies have predominantly focused on either self-efficacy [16,21,22] or patient activation [12,23], rather than integrating both factors within a unified framework.
This study distinguishes itself from prior research by adopting a personalized approach to cancer survivorship care. It utilizes a mobile health (mHealth) application (version 1.0) to deliver tailored interventions that address the unique needs of each patient. Moreover, this study conducts a multifaceted evaluation of the mHealth application’s impact on various dimensions of patient well-being, including self-efficacy, patient activation, satisfaction with care, and physical activity levels. Notably, this study also examines the real-world feasibility and acceptability of implementing digital health interventions in clinical settings, addressing critical challenges such as user engagement, data integration, and technical issues.
The hypothesis of this clinical trial posits that “A comparison of self-efficacy levels at the beginning and end of the intervention will demonstrate a significant increase in self-efficacy among participants who receive the personalized intervention supported by the mHealth application”. The primary objective of this study is to assess the acceptability, usability, and impact of the mHealth application on survivors’ perceived self-efficacy and satisfaction with their care. The secondary objective includes measuring patient activation and evaluating the acceptance of the mHealth application, as well as exploring survivors’ experiences during their use of the application. Ultimately, the aim of this study is to assess the effectiveness of a personalized mHealth intervention in enhancing self-efficacy, patient activation, and satisfaction with care among cancer survivors.

2. Materials and Methods

2.1. PERSIST Project

This study was conducted as part of a European project titled PERSIST [24], which aims to enhance health outcomes and quality of life while simultaneously reducing stress among cancer survivors through a patient-centered care plan that leverages big data and artificial intelligence. The PERSIST consortium seeks to develop an open, interconnected system to optimize the care provided to cancer survivors. The primary objective of the PERSIST project was to evaluate whether and how self-efficacy and patient activation can be addressed as a collaborative effort between patients and clinical professionals. The secondary objective focused on assessing patient engagement, willingness to use the mHealth application, and gathering feedback from both patients and clinicians regarding their experiences with the PERSIST solution (see study protocol [25]).
An overview of the PERSIST platform is depicted in Figure 1. Following the collection of real-world data from patients through mobile applications and smart bracelets, cutting-edge technologies were employed to extract multimodal features using the Multimodal Risk Assessment and Symptom Tracking (MRAST) framework. In addition to physical markers, subjective data were collected through questionnaires. All objective markers (vital signs) and subjective markers (Patient-Reported Experience Measures [PREMs], Patient-Reported Outcome Measures [PROMs], and linguistic/vocal/face cues) were gathered via the mHealth application. The fused data were processed using various tools (Cohort and Trajectory Analysis, Information Retrieval Tool, Alerts, and Motivation Mechanisms) and subsequently integrated into the Clinical Decision Support System (CDSS) (version 1.0) and the mClinician application (version 1.0).

Digital Interventions

To facilitate data collection and analysis, the study utilized several digital interventions. The mHealth application (version 1.0) enabled patients to self-report health parameters and vital signs, while the mClinician application (version 1.0) allowed healthcare providers to input and manage patient data. The MRAST platform (version 1.0) employed advanced artificial intelligence (AI) techniques, including automatic speech recognition and natural language processing, to extract valuable insights from patient interactions. These digital tools collectively contributed to the efficient and accurate collection of patient data, enabling the platform to deliver personalized care recommendations and support through the CDSS (version 1.0).
The mHealth application (version 1.0) served as the central component for populating the Big Data Platform (version 1.0) with diverse types of patient data, thereby enabling other services to process and analyze this information. The mHealth application (version 1.0) allows individuals to track their health parameters and vital signs (Figure 2). Furthermore, the application facilitates the flow of data between users and healthcare providers. The mobile applications were developed using the Flutter framework (version 2.0) and the Dart programming language. Interoperability of patient data was ensured using Fast Healthcare Interoperability Resources (FHIR) standards, which enabled data sharing among project partners.
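Since interoperability between partners relied on FHIR, a vital-sign reading collected by the smart band would typically be exchanged as a FHIR Observation resource. The sketch below is illustrative only: the patient identifier, timestamp, and helper function are hypothetical, and the PERSIST platform's actual payloads are not published in this paper. The structure follows the public FHIR R4 schema for a vital-signs observation.

```python
# Illustrative sketch: packaging a wearable heart-rate reading as a FHIR R4
# Observation. Patient id and timestamp are hypothetical; LOINC 8867-4 is the
# standard code for heart rate.
import json
from datetime import datetime, timezone

def heart_rate_observation(patient_id: str, bpm: int, when: datetime) -> dict:
    """Build a minimal FHIR R4 Observation resource for one heart-rate sample."""
    return {
        "resourceType": "Observation",
        "status": "final",
        "category": [{
            "coding": [{
                "system": "http://terminology.hl7.org/CodeSystem/observation-category",
                "code": "vital-signs",
            }]
        }],
        "code": {
            "coding": [{"system": "http://loinc.org", "code": "8867-4",
                        "display": "Heart rate"}]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": when.isoformat(),
        "valueQuantity": {"value": bpm, "unit": "beats/minute",
                          "system": "http://unitsofmeasure.org", "code": "/min"},
    }

obs = heart_rate_observation("example-123", 72,
                             datetime(2022, 9, 1, 8, 30, tzinfo=timezone.utc))
print(json.dumps(obs)[:60])
```

Serializing such resources as JSON is what allows the mHealth app, the Big Data Platform, and the CDSS to consume the same records without bespoke converters.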
The mClinician application (version 1.0) served as a data ingestion tool within the PERSIST project (Figure 3). It was primarily developed for clinicians to assist in collecting patient data and providing an overview of the acquired information. The mClinician application (version 1.0) displays concepts from Symptoma’s (Symptoma GmbH, Wien, Austria) API to generate structured data for patients in FHIR format. Clinicians determined the most relevant data to be collected from patients’ electronic health records.
The MRAST platform (version 1.0) incorporates multimodal analysis of patient video recordings [26] (Figure 4). It primarily consists of automatic speech recognition (SPREAD [27]), natural language processing, and facial landmark detection to extract linguistic, speech, and visual features. MRAST platform (version 1.0) also includes disease-centric discourse through the extraction of symptoms from free-text data, which is carried out using an information retrieval tool developed based on Symptoma’s (Symptoma GmbH, Wien, Austria) core technology.
Figure 5 illustrates the CDSS (version 1.0), which leverages patient-specific information and a comprehensive knowledge repository to generate evidence-based medical decisions or recommendations. Healthcare providers can initiate requests for decision support, which are processed by the inference engine to produce tailored clinical guidance. Additionally, the CDSS (version 1.0) is integrated with cohort and trajectory analysis in multi-agent support systems, a breast cancer survival analysis with high-risk marker detection [28], an AI service predicting five-year relapse recurrence for breast and colorectal cancer survivors, and a model for automatic circulating tumor cell (CTC) detection in liquid biopsy samples [29].

2.2. Clinical Trial

2.2.1. Trial Design

The clinical trial was designed using a single-case experimental design (SCED) methodology [30] as a prospective study to validate the PERSIST platform. SCED provides a robust approach for studying individual participants in-depth and offers valuable insights into the effects of interventions over an extended period. Its flexibility, repeated measures, and potential for strong internal validity render it a valuable tool for this trial. Cancer types with relatively high incidence and survival rates were selected to ensure the identification of a sufficiently large survivor population amenable to enhanced follow-up.

2.2.2. Participants

Breast and colorectal cancer survivors who had completed curative treatment were included in this study. A survivor was defined as a patient who had remained recurrence-free for 3–24 months post-treatment (surgery, radiation therapy, and/or chemotherapy). For colorectal cancer survivors, two subgroups were considered: those who had received chemotherapy and those who had not. Each subgroup comprised at least 33% of the total colorectal cancer survivor group. For breast cancer survivors, both patients who underwent surgery and those who received chemotherapy were included.
The inclusion criteria encompassed individuals aged 18–75 years, in stable health, with a life expectancy exceeding two years. Participants were required to understand the study information, attend follow-up appointments, provide informed consent, and possess sufficient technological skills to use mobile devices and reliable internet access. Exclusion criteria included individuals with a life expectancy of less than one year, dementia or cognitive impairments, dependence on others for daily tasks, inability to make dietary decisions, current participation in other clinical studies, anticipated relocation, or major depression/psychiatric conditions that could interfere with daily activities.
Patients meeting the inclusion criteria and not meeting the exclusion criteria were assessed as outpatients, informed about the study, and personally invited to participate by clinicians at four clinical centers: Centre Hospitalier Universitaire de Liège (CHU) in Belgium, University Medical Centre Maribor (UKCM) in Slovenia, Complejo Hospitalario Universitario de Ourense (SERGAS) in Spain, and Riga East Clinical University Hospital (REUH) in Latvia, in collaboration with the University of Latvia (UL).

2.2.3. Sample Size Calculation

The sample size for this study was estimated based on the expected effectiveness of mobile device interventions in promoting healthy habits among cancer survivors, as demonstrated in previous research [31,32]. A power analysis using G*Power software (version 3.1.9) indicated that 160 patients would be sufficient to detect significant differences between pre- and post-intervention measures of healthy habits, assuming a two-sided confidence level of 95%, a statistical power of 90%, and an effect size of 0.25.
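For reference, the reported parameters can be plugged into the standard normal-approximation formula for a paired two-sided test, n = ((z₁₋α/₂ + z₁₋β)/d)². The sketch below uses only the Python standard library; note that G*Power solves the exact noncentral-t problem, so its figure can differ slightly from this approximation.

```python
# Normal-approximation sample size for a paired two-sided test with the
# parameters stated in the text (effect size d = 0.25, alpha = 0.05,
# power = 0.90). G*Power's exact t-based answer differs slightly.
from math import ceil
from statistics import NormalDist

def paired_sample_size(d: float, alpha: float = 0.05, power: float = 0.90) -> int:
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided critical value (~1.96)
    z_beta = z.inv_cdf(power)            # power quantile (~1.28)
    return ceil(((z_alpha + z_beta) / d) ** 2)

print(paired_sample_size(0.25))  # ~169 under the normal approximation
```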

2.2.4. Recruitment

The recruitment process spanned a four-month screening period and a four-month enrollment period (Figure 6). To ensure successful patient recruitment, (1) clinical research staff at participating hospitals underwent comprehensive training on the PERSIST project (goals, criteria, timeline) and devices (mobile phones and smart bracelets) and received recruitment materials (brochures, videos); (2) trained clinicians then invited eligible post-treatment cancer patients; (3) eligible patients were assessed in outpatient settings; and (4) nurses or data managers assisted medical staff in explaining the study and obtaining informed consent from selected patients. The trial concluded after receiving all results from the participants within the allocated timeframe in the PERSIST project.

2.2.5. Data Collection

The data collection timeline is provided in the patient flow chart (Figure 6). Following a four-month screening period, eligible participants (n = 166) were invited to sign informed consent documents and receive the study devices. During the enrollment process, participants were given detailed explanations of the study phases, and their medical history was recorded. Three questionnaires were administered at three distinct time points: baseline, first follow-up, and last follow-up.
Participants in this study received both a mobile phone with the mHealth app (Huawei Y6 2019, Huawei Technologies España, Madrid, Spain) and a smart band (Naicom smart bracelet, Naicoms Ltd., Sofia, Bulgaria). The mHealth app facilitates the collection of a multifaceted dataset through manual user input into digital forms, including sociodemographic variables, clinical assessments, and lifestyle indicators. Physiological measurements, specifically heart rate and sleep architecture, as well as physical activity data, such as step counts and activity intensity, are derived from a smartphone-connected wearable smart band. The specific model of the smart band was selected by the technical partners based on data safety (General Data Protection Regulation (GDPR) compliance) and budget constraints. The device was capable of measuring steps/activity, sleep, heart rate, and blood pressure and was compatible with a smartphone-based data access kit that avoids cloud storage from the device manufacturer.
Prior to patient recruitment, clinical and technical partners developed training materials for patients, which were translated into Spanish, French, Slovenian, Latvian, and Russian. During recruitment, training on the study protocol and the use of the mHealth app and smart band was initially conducted for all clinical partners and subsequently for all participants at each hospital by nurses or data managers supporting the medical doctors/physiotherapists. The app also provided personalized follow-up based on patterns identified from big data. Additionally, patients were able to input additional data through online questionnaires, which were prompted by notifications from the app. Selected questionnaires were collected automatically during phone calls or medical follow-ups as a validation tool, while others required patients to record video diaries discussing their daily lives.
The data collected through the mClinician app (version 1.0) allowed clinicians to access information about patients, including their demographics, cancer diagnosis, treatment history, and diagnostic performance. Data collected via the CDSS (version 1.0) included results, scores, and the history of CDSS outputs presented to the clinician.
Clinical centers were selected based on the presence of survivor populations and the opportunity to implement the intervention across diverse regions in Europe, aligning with the PERSIST project’s goals. Although the last data collection occurred between September and October 2022, the clinical trial was extended to December 2022 to facilitate additional data collection and updates to the mClinician app (version 1.0). During this extension, a substantial number of patients voluntarily continued their participation, enabling a more comprehensive analysis of study outcomes.

2.2.6. Outcomes

To measure self-efficacy, usability, and patient activation, the Communication and Attitudinal Self-Efficacy scale for cancer (CASE-Cancer) [33], the System Usability Scale (SUS) [34], and the Patient Activation Measure (PAM) [35] were used, respectively. All questionnaires were made available online within the mHealth app.
The CASE-Cancer scale, validated by Wolf et al. [33], is a 12-item instrument designed to assess the communication and attitudinal self-efficacy of cancer patients. The participants’ perceived ability to engage in their care was measured across three domains: understanding and participating in care, maintaining a positive outlook, and actively seeking and obtaining relevant information. Higher scores on this scale indicate greater self-efficacy. This instrument has been used in various cancer cases in recent literature [36,37,38].
This study utilized the PAM-13 questionnaire as a secondary endpoint to assess patients’ self-management knowledge, skills, and confidence. The PAM-13 is a validated short form of the PAM-22 [27] and has demonstrated comparable effectiveness [39]. PAM levels 1 and 2 indicate lower patient activation, while PAM levels 3 and 4 indicate higher patient activation. Ng et al. [40] demonstrated the cross-cultural reliability and validity of the PAM tool, highlighting its correlation with health outcomes relevant to patient-centered care, as evidenced by a comprehensive review of 39 studies.
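As a point of reference, the four PAM levels correspond to bands of the 0–100 activation score. The cutoffs in the sketch below are the commonly published ones; the PAM-13 item-to-score conversion itself is licensed and is not reproduced here, so this mapping is illustrative only.

```python
# Illustrative mapping from a PAM activation score (0-100) to the four
# activation levels, using the commonly published cutoffs
# (level 1 <= 47.0, level 2 47.1-55.1, level 3 55.2-67.0, level 4 >= 67.1).
# The licensed PAM-13 item-to-score conversion is not reproduced here.
def pam_level(score: float) -> int:
    if score <= 47.0:
        return 1
    if score <= 55.1:
        return 2
    if score <= 67.0:
        return 3
    return 4

print([pam_level(s) for s in (40.0, 50.0, 60.0, 70.0)])  # [1, 2, 3, 4]
```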
The SUS is a ten-item Likert scale questionnaire designed to provide a global view of subjective assessments of system usability. Developed by John Brooke in 1986 [34], the SUS has been widely used to measure the usability of electronic office systems. It consists of five positively worded items and five negatively worded items, with responses ranging from strongly agree to strongly disagree. The SUS is a well-established tool for evaluating system usability and has been applied in various contexts [41].
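Brooke's scoring rule maps the ten Likert responses to a single 0–100 score: each odd-numbered (positively worded) item contributes (response − 1), each even-numbered (negatively worded) item contributes (5 − response), and the sum is multiplied by 2.5. A minimal sketch:

```python
# Standard SUS scoring as defined by Brooke: odd (positively worded) items
# contribute (response - 1), even (negatively worded) items contribute
# (5 - response), and the sum is scaled by 2.5 onto a 0-100 range.
def sus_score(responses: list[int]) -> float:
    """responses: ten Likert answers, 1 = strongly disagree ... 5 = strongly agree."""
    assert len(responses) == 10
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5

print(sus_score([3] * 10))    # all-neutral answers -> 50.0
print(sus_score([5, 1] * 5))  # best possible answers -> 100.0
```

The score bands referenced later in the Results (50–70 "usability issues", 70–85 "acceptable to good", above 85 "excellent usability") are interpretive groupings applied to this 0–100 score.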

2.2.7. Statistical Analysis

Statistical analysis involved descriptive analyses of epidemiological and clinical characteristics, including means, standard deviations, percentiles, frequencies, and percentages. To evaluate changes in CASE-Cancer, SUS, and PAM scores, paired t-tests, Wilcoxon signed-rank tests, and McNemar tests were employed. An intention-to-treat analysis and sensitivity analysis were conducted using R 3.4.2 and SPSS version 19. Results with p-values below 0.05 were considered statistically significant.
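Of these tests, McNemar's is the natural choice for paired binary outcomes (e.g., "activated vs. not activated" at baseline and follow-up) and can be sketched with the Python standard library alone, since the chi-square survival function with one degree of freedom reduces to the error function. The discordant-pair counts below are hypothetical, for illustration only.

```python
# Minimal McNemar test (chi-square approximation) for paired binary outcomes.
# b and c are the two discordant-pair counts (changed in one direction vs. the
# other). The chi2 survival function with 1 df is 1 - erf(sqrt(x / 2)).
from math import erf, sqrt

def mcnemar(b: int, c: int) -> tuple[float, float]:
    """Return (chi-square statistic, two-sided p-value) for discordant counts b, c."""
    stat = (b - c) ** 2 / (b + c)
    p = 1 - erf(sqrt(stat / 2))
    return stat, p

stat, p = mcnemar(15, 5)        # hypothetical counts
print(round(stat, 2), round(p, 4))  # chi2 = 5.0, p ~ 0.025
```

In practice one would use a library implementation (e.g., an exact binomial variant for small counts); this sketch shows only the large-sample approximation.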

2.2.8. Ancillary Analyses Design

A mixed-methods approach was employed to gather comprehensive data on both patient and clinician experiences. Patient feedback was collected through a longitudinal, app-based survey administered at three time points, supplemented by narrative feedback obtained during follow-up interactions. This survey was divided into three distinct sections focusing on different aspects of patient experience. (1) Part A: Understanding the Patient’s Study Experience aimed to capture the patient’s perspective on their overall participation in the study. It focused on the clarity of study instructions and sought to identify the most valuable insights gained during the process. (2) Part B: Evaluating the mHealth App’s User Experience concentrated on assessing the user experience of the mHealth app utilized in the study. It explored various aspects of the app’s design and functionality to understand how patients interacted with and perceived the technology. (3) Part C: Assessing Device Usage and Experience consisted of two questions designed to gather feedback on the experience of using the devices provided: the smart bracelet and the mobile phone.
Narrative feedback from follow-up interactions was transcribed verbatim and analyzed using thematic analysis. Two independent coders identified recurring themes and patterns. Discrepancies between coders were resolved through discussion to ensure inter-rater reliability.
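The article does not name the agreement statistic used when assessing inter-rater reliability; Cohen's kappa is the conventional chance-corrected measure for two coders, sketched here with hypothetical theme labels.

```python
# Cohen's kappa for two coders assigning one theme label per feedback excerpt.
# The article does not specify the agreement statistic; kappa is shown as the
# conventional chance-corrected measure. Labels below are hypothetical.
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    p_observed = sum(x == y for x, y in zip(coder_a, coder_b)) / n
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Expected chance agreement from each coder's marginal label frequencies
    p_chance = sum(counts_a[l] * counts_b[l]
                   for l in set(coder_a) | set(coder_b)) / n ** 2
    return (p_observed - p_chance) / (1 - p_chance)

a = ["benefit", "benefit", "burden", "benefit", "burden", "benefit"]
b = ["benefit", "burden", "burden", "benefit", "burden", "benefit"]
print(round(cohens_kappa(a, b), 3))  # 5/6 raw agreement -> kappa 0.667
```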
Clinician feedback was gathered using a standardized user acceptance questionnaire designed to assess the usability and effectiveness of the PERSIST system and mClinician app (version 1.0). Additionally, a generic 18-question questionnaire with mixed answer types, including free-text options, was developed and distributed across all participating hospitals.

2.2.9. Ethical Considerations

The study adhered to the highest ethical and legal standards, including the Charter of Fundamental Rights of the EU (2000/C 364/01), the GDPR (Regulation (EU) 2016/679), the European Code of Conduct for Research Integrity, and the OECD Council Recommendations on Health Data Governance. Furthermore, this clinical study was conducted in accordance with the laws and regulations in force in the participating countries (Slovenia, Spain, Latvia, and Belgium).
A study protocol, informed consent forms, and clinical data forms were developed and translated into the local languages of the participating countries. These documents were reviewed and approved by the Institutional Review Board and Ethical Committee of each clinical center (Institutional Ethics Committee of CHU de Liege, approval ref. no.: 2020/248; Riga Eastern Clinical University Hospital Support Foundation Medical and Biomedical Research Ethics Committee, approval ref. no.: 8-A/20; Slovenia National Ethics Committee, approval ref. no.: 0120–352/2020/5; Spain Regional Institutional Review Board, approval ref. no.: 2020/394.).
Once institutional approvals were obtained, pilot studies were initiated. The recruited patients received the necessary devices for the proper execution of the follow-up phase of the clinical study: a smartphone and a wearable device for quantifying physical activity and measuring health parameters (such as blood pressure, steps, and heart rate).
The personal data of patients were pseudonymized before leaving the clinical facilities, and a set of privacy metrics were calculated to assess the risk of reidentifying patient data. The data necessary for conducting the PERSIST project research activities related to the clinical studies were stored in a high-performance computer center provided by CESGA (Spanish public institution).

3. Results

3.1. Participant Flow

Figure 6 illustrates the participant flow throughout the conducted study, from initial screening to final analysis. Only SUS was administered at a different baseline time point due to the delay in the development of the mHealth app. While the retention rates at baseline and the first follow-up for the CASE-Cancer, PAM, and SUS questionnaires were relatively high (75.9%, 77.1%, and 46.4%, respectively), significant attrition occurred by the end of the study (CASE-Cancer = 40.5%, PAM = 39.1%, and SUS = 64.9%). This attrition led to a reduced sample size for the final analysis, with data included from CASE-Cancer (n = 75), PAM (n = 77), and SUS (n = 27). A total of 41 out of 166 patients (25%) withdrew from the clinical study by the final follow-up. The majority (n = 31 out of n = 41, 76% of the total) were female, and withdrawals were more frequent in the breast cancer group (n = 24 out of n = 41, 59% of the total). The overall attrition rate for the clinical trial at completion was 25% (n = 41 out of 166 patients). Attrition rates for the CASE-Cancer, PAM, and SUS questionnaires were 40.47% (n = 51 out of 126 patients), 40% (n = 51 out of 128 patients), and 64.93% (n = 50 out of 126 patients), respectively.
A comprehensive analysis of the reasons for withdrawal (total n = 65) (Table 1) revealed that these were primarily related to personal circumstances (n = 11 out of 65, 17% of the total), technical issues (n = 10 out of 65, 15% of the total), including smart-bracelet malfunctions and other technical problems, and time constraints associated with participation (n = 9 out of 65, 14% of the total).

3.2. Baseline Data

Patients diagnosed with breast cancer (ICD-10 C50) and colorectal cancer (ICD-10 C18/C19) were recruited for this study. The average age of the patients was 55 years. A total of 166 patients were included, with 37 male (22%) and 129 female (78%) participants. Table 2 presents a summary of the demographic characteristics of the patients enrolled in this study. UL recruited the highest number of patients (n = 46, 28% of the total), UKCM had the oldest cohort of patients with a mean age of 57, and SERGAS had the highest proportion of male patients (12 out of 37, 32% of the total).

3.3. Outcomes and Estimation

3.3.1. Perceived Self-Efficacy of Patients by CASE-Cancer

A total of 75 questionnaires were analyzed, and descriptive statistics were computed for each score factor. No statistically significant differences were observed in scores between the recruitment phase and the last follow-up across any of the three factors, as assessed using the Wilcoxon signed-rank test (Table 3).

3.3.2. Activation Levels of Patients by PAM

A total of 78 completed PAM questionnaires, incorporating all data collection points, were analyzed. As presented in Table 4, the majority of patients reported activation levels 3 or 4 at both recruitment (42.3% and 32.1%, respectively) and the final follow-up (35.9% and 35.9%, respectively), with a modest increase (3.8%) in the number of patients reporting level 4 activation at follow-up. No statistically significant differences were found in the distribution of activation levels between the recruitment and last follow-up.

3.3.3. User Acceptance of mHealth App by SUS

The SUS was the most significantly affected questionnaire due to participant attrition throughout the study. While 77 questionnaires were completed at baseline, this number decreased to 27 (35%) by the final follow-up, resulting in a low retention rate of 35.06%. Consequently, the SUS analysis was conducted using data from these 27 complete questionnaires across all data collection points (Table 5).
At recruitment, most participants rated the system either as having “usability issues” (level 50–70; n = 10 out of 27, 37%) or as “acceptable to good” (level 70–85). This could be attributed to participants’ prior technological proficiency and their ability to adapt to the evolving nature of the mHealth app, which was still under development. During the study, the percentage of participants who rated the system as having “excellent usability” (level > 85) increased from 14.82% to 22.22%. This improvement can likely be attributed to the ongoing upgrades made to the mHealth app in collaboration with technical partners.
At the final follow-up, the most frequently reported score group for the system was “Experiencing usability issues” (level 50–70), which could be explained by negative feedback from patients regarding the increasing complexity of the system.
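The banded scores discussed above follow the standard SUS scoring scheme, in which the ten 1–5 Likert items are rescaled and the sum multiplied by 2.5 to yield a 0–100 score. A minimal sketch follows; the band cut-offs are assumed from the labels used in this section, not taken from the authors’ code.

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten 1-5 Likert
    responses, using the standard SUS scoring rules."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    odd = sum(r - 1 for r in responses[0::2])   # items 1,3,5,7,9: score - 1
    even = sum(5 - r for r in responses[1::2])  # items 2,4,6,8,10: 5 - score
    return (odd + even) * 2.5

def sus_band(score):
    """Map a score to the bands used in this study (cut-offs assumed from
    the text: <=50, 50-70, 70-85, >85)."""
    if score <= 50:
        return "not easy to use"
    if score <= 70:
        return "usability issues"
    if score <= 85:
        return "acceptable to good"
    return "excellent usability"

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0 (ideal responses)
```

Reverse-scoring the even-numbered items is what makes a high SUS score always mean better perceived usability, regardless of each item’s positive or negative wording.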

3.4. Ancillary Analyses

3.4.1. General Feedback from Patients

A total of thirty-two participants were included in the analysis of Part A (six participants, 19%, from CHU; eight participants, 25%, from SERGAS; fourteen participants, 44%, from UKCM; and four participants, 12%, from UL) (Table 6 and Table 7).
Twenty participants responded to Part B at three different time points (four from CHU, four from SERGAS, and twelve from UKCM) (Table 8 and Table 9).
In total, fifteen participants completed the questions for Part C at all three time points (six participants, 40%, from CHU; three participants, 20%, from SERGAS; one participant, 7%, from UL; and five participants, 33%, from UKCM) (Table 10 and Table 11).
For all three sections, no statistically significant differences were found between any two time points for any of the questions. Additionally, narrative feedback provided valuable insights into patients’ experiences with healthcare services, complementing the quantitative measures (Box 1). Overall, the narrative feedback suggested that the mHealth app holds potential as a valuable tool for cancer survivors. However, it is important to balance its benefits with its potential drawbacks.
Box 1. Some examples from narrative feedback from patients.
‘It is interesting to record and monitor measurements. It’s good because it diverts your thoughts’ (female survivor of breast cancer).
‘I would like to know more about the development itself and how this technology works’ (female survivor of colorectal cancer).
‘I enjoyed the opportunity to monitor my features, and I think this may help other patients’ (male survivor of colorectal cancer).
‘The project has encouraged some positive emotions. It helps me to follow my state of health in general, the opportunity to view the data stimulates the consciousness of the need to get moving’ (female survivor of breast cancer).
‘The appreciation to the people who treated me motivated me to participate in clinical study. Technology can help cancer patients and survivors, however, the constant thinking about oneself may prevent them moving on’ (female survivor of colorectal cancer).

3.4.2. General Feedback from Clinicians

The user acceptance questionnaire was completed by 16 clinicians at recruitment and 17 clinicians at the last follow-up (see Table 12). According to the SUS results, most responding clinicians considered the system “not easy to use” (score ≤ 50; n = 7 at both time points) or reported usability issues (score 50–70; increasing from n = 6 to n = 7).
The generic questionnaire developed for this study was completed by eleven clinicians (four clinicians, 37%, from UL; two clinicians, 18%, from SERGAS; two clinicians, 18%, from CHU; and three clinicians, 27%, from UKCM) (see Supplementary File).
Clinician evaluations of the PERSIST system revealed generally favorable, albeit not uniformly enthusiastic, perceptions. The system received a mean overall rating of 6.27 out of 10, with usability assessed more positively, averaging 7. Precision in risk identification by the PERSIST system was rated with a mean score of 6.9. A substantial majority of clinicians (nine out of ten) expressed a desire to integrate the PERSIST system into their clinical practice; however, two clinicians raised concerns regarding system processing speed and alignment with oncology practice workflows. The most valued functionalities of the PERSIST system included the provision of patient feedback data, vital parameter tracking, subjective patient reports, statistical summaries, and risk factor identification. The system was most frequently associated with general practice, followed by psychology, infectious diseases, and inflammatory diseases. Among modifiable lifestyle factors, physical activity was considered the most salient, followed by blood pressure, heart rate, and depression indicators. The overarching added value of the PERSIST system was perceived to be its capacity for patient monitoring.
Regarding personalized care plans, clinician responses varied: some affirmed their contributions, while others expressed partial effectiveness or uncertainty. Suggested optimal strategies for implementing preventive measures based on individual patient trajectories included app automation, regular monitoring (weekly or bi-annually), and the incorporation of trained support staff.
In contrast, clinician evaluations of the mClinician web version (version 1.0) were less favorable than those of the PERSIST system. The web interface received a mean overall rating of 6.1 out of 10. Most respondents (nine out of ten) expressed reluctance or unwillingness to integrate the web version into their clinical workflows: five clinicians indicated ambivalence, while four explicitly declined. The most valued components of the web version were mHealth data, followed by tests, general and medical history, diagnoses, symptoms, and cancer treatment information. However, clinicians recommended modifying or removing the tests, diagnostic, and therapeutic modules from the mClinician web version (version 1.0).
By comparison, the mClinician mobile app (version 1.0) received notably more positive feedback. The app earned a mean overall rating of 6 out of 10. A substantial majority of clinicians (nine out of ten) expressed a willingness to utilize the app in their practice, with only one clinician dissenting. The most highly valued features of the mClinician app (version 1.0) included alerts, patient overviews, appointments, recurrence prediction, cardiovascular disease risk assessment, usage statistics, and patient trajectories. Recommendations for refining the mClinician app (version 1.0) focused on removing the trajectories feature and reducing electronic health record (EHR) data redundancy.

4. Discussion

4.1. Principal Findings

This study partially supports the initial hypothesis [25], suggesting that mHealth applications can positively influence self-efficacy, patient activation, and satisfaction with care among colorectal and breast cancer survivors, although statistical significance was not consistently achieved. While the CASE-Cancer questionnaire results (Table 3) did not show significant changes over time, the baseline scores for Factor 1 indicated a high level of patient understanding and engagement in their care at recruitment, consistent with previous findings [33]. The stability of the PAM scores (Table 4) suggests that patients maintained good self-management skills throughout the study, indicating the potential of the PERSIST app to support these skills [9], even without demonstrable improvement.
The increasing SUS scores (Table 5) reflect improvements in perceived usability, likely due to iterative app development based on user feedback. The final positive usability ratings, while not universally excellent, emphasize the importance of continuous user-centered design in mHealth development [42]. Positive patient feedback regarding study participation, clarity of instructions, and explanations (Table 6 and Table 7) indicates successful patient engagement. High patient ratings for the emotion wheel/detection feature, instructions, and questionnaires within the app (Table 8 and Table 9) suggest user acceptance of these specific functionalities. Furthermore, positive experiences with smart bracelets and mobile phones (Table 10 and Table 11), with no decline in user satisfaction over time, further support the feasibility of this technology within this population.
However, clinician evaluations of the mClinician web version (version 1.0) revealed significant challenges. Low SUS scores (Table 12) and predominantly negative or neutral responses regarding their integration into clinical practice highlight the need for substantial revisions to improve usability and align with clinical workflows. While the mClinician app (version 1.0) fared better, with most clinicians willing to use it, addressing identified usability issues could enhance its adoption and effectiveness. Critically, the disparity in clinician acceptance between the PERSIST system and the mClinician web version (version 1.0) suggests that system design and integration with existing clinical practices are crucial determinants of successful mHealth implementation. Positive clinician feedback on PERSIST, particularly regarding patient feedback integration, vital parameter tracking, and risk factor identification, points to specific areas of value in mHealth solutions for cancer survivors. Physical activity was identified as the most important potentially modifiable lifestyle factor for cancer survivors that the PERSIST system detects [43,44]. Future research should focus on identifying the factors contributing to successful mHealth implementation and tailoring interventions to meet the specific needs of both patients and clinicians.
The PERSIST system’s support for self-efficacy and patient activation can be attributed to several key features. The emotion wheel/detection feature, along with the app’s clear instructions and questionnaires, provided patients with tools to actively monitor their health and engage in their care. By empowering patients to track their emotional states and understand their health data, these features likely enhanced their sense of control and self-efficacy. Furthermore, the integration of smart bracelets and mobile phones facilitated consistent data collection and monitoring, reinforcing patient activation by enabling them to actively participate in their health management. The CDSS component, particularly the features related to patient feedback integration, vital parameter tracking, and risk factor identification, supported clinicians in providing personalized care, which in turn likely contributed to patient confidence and self-efficacy. Future studies should aim to isolate and evaluate the individual contributions of these components to better understand their specific impacts on self-efficacy and patient activation.

4.2. Comparison to Prior Work

This study builds upon a substantial body of research emphasizing the importance of self-efficacy and patient activation in cancer survivorship [4,5,6]. Consistent with previous findings [6,7,8,9], our study reinforces the notion that these constructs are critical for positive health outcomes, including improved quality of life, symptom management, and adherence to healthy behaviors. Prior research has also demonstrated the potential of DHIs to enhance these outcomes [11,12,13,14], with studies showing positive effects on treatment adherence, physical activity, and symptom burden [15,16]. Our findings align with this trend, suggesting that an mHealth application can contribute to self-efficacy, patient activation, and satisfaction with care among colorectal and breast cancer survivors. Specifically, the observed high baseline levels of understanding and participation in care (Factor 1 of CASE-Cancer) and maintained levels of patient activation throughout the study period, despite no statistically significant changes, suggest that the mHealth app effectively supported existing self-management skills. This aligns with findings from Van Der Hout et al. [16], who demonstrated that DHIs can be particularly beneficial for individuals with already moderate to high baseline self-efficacy and health literacy by reinforcing and maintaining positive behaviors.
However, this study distinguishes itself from prior work in several key aspects. While previous studies have often focused on either self-efficacy or patient activation in isolation [12,16,21,22,23], our research adopted a more holistic approach by examining the impact of a personalized mHealth intervention on multiple dimensions of patient well-being, including self-efficacy (measured using CASE-Cancer), patient activation (measured using PAM), satisfaction with care, and physical activity levels. Furthermore, this study addressed a critical gap in the literature by focusing on the real-world feasibility and acceptability of implementing a DHI in clinical settings. The high ratings received for the app’s instructions, explanations, and overall user experience, along with the positive feedback on the smart bracelets and the mHealth app’s features, suggest that the PERSIST system was well-received by patients and demonstrates promise for real-world implementation. This emphasis on implementation and user experience aligns with the growing recognition of the importance of user-centered design in DHI development [42]. Although the attrition rate posed a significant limitation, particularly for the SUS analysis, the positive feedback from the remaining participants, including their expressed enthusiasm for future participation, underscores the potential of this approach and highlights areas for improvement in future iterations.
Moreover, while previous studies have demonstrated the potential of DHIs to improve health outcomes, this study offers valuable insights into the specific challenges and facilitators of implementing such interventions in clinical practice. The identified usability issues with the mClinician app (version 1.0) highlight the importance of considering the needs of both patients and clinicians in DHI design and implementation. The clinicians’ feedback regarding the most useful aspects of the PERSIST system (patient feedback, alerts, vital parameter data, and risk factors) provides crucial information for future development and refinement of similar systems. By addressing the practical challenges of implementation, this study offers valuable guidance for translating the potential benefits of DHIs into real-world clinical practice.

4.3. Strengths and Limitations

The PERSIST project effectively engaged cancer survivors and provided a positive experience. Participants demonstrated patient activation and self-efficacy, and the project enhanced self-management among cancer patients. The majority of participants were satisfied with the mobile app and its usage. However, despite high attrition rates for questionnaires, particularly for the SUS, the overall retention rate for the trial remained moderate. The PERSIST system shows potential applications in various medical fields, particularly in primary care settings, where it could significantly improve patient outcomes and clinical decision-making. Patient engagement with the mHealth app was robust, as evidenced by consistent follow-up and active data monitoring. Additionally, the integration of mHealth apps and smart bracelets significantly reduced depression and anxiety symptoms, suggesting a positive impact on patients’ mental health. Clinicians also provided positive feedback, indicating that both the PERSIST system and the mClinician app (version 1.0) have the potential to be useful tools in clinical practice. They highlighted their value in monitoring patient parameters, receiving timely alerts, and providing personalized care plans.
Several limitations inherent to this study should be acknowledged. Firstly, the primary limitation is a sample size substantially smaller than initially projected by the power analysis, which directly limits our ability to detect statistically significant effects: even if the intervention had a positive impact on healthy habits, the observed effects may not reach statistical significance. This limitation is particularly relevant for the SUS questionnaire, which had the highest attrition rate and the smallest final sample size. While we report observed effect sizes, the lack of statistical power weakens the strength of our conclusions regarding the intervention’s effectiveness. This underscores the need for cautious interpretation of the findings and for future research with larger, more stable samples to confirm these preliminary observations. Factors contributing to attrition included participant burden, fluctuating health conditions, technical difficulties, and a lack of perceived benefit or decreasing motivation. To mitigate these challenges in future studies, researchers should focus on minimizing participant burden, recruiting more participants than initially estimated, providing enhanced technical support, maximizing perceived benefit, and incentivizing participation.
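To make the call for adequately powered future studies concrete, the number of paired observations needed to detect a given standardized effect can be approximated with a normal-approximation formula for a two-sided paired test. This is an assumption-laden sketch (the effect size, alpha, and power values are illustrative), not a reconstruction of the study’s actual power analysis.

```python
import math
from statistics import NormalDist

def paired_sample_size(effect_size: float, alpha: float = 0.05,
                       power: float = 0.80) -> int:
    """Approximate number of pairs needed for a two-sided paired test to
    detect a standardized effect (normal approximation; illustrative only,
    not the study's power-analysis method)."""
    z = NormalDist().inv_cdf
    n = ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return math.ceil(n)

# e.g., detecting a small-to-medium standardized change (d = 0.3)
print(paired_sample_size(0.3))  # -> 88 pairs
```

Under these assumptions, detecting a small-to-medium within-patient change requires far more complete before/after pairs than the 27–78 retained here, which is consistent with the non-significant results above; over-recruiting relative to such an estimate also buffers against the attrition observed in this study.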
The second limitation was the challenge of ensuring continuous data integration into the medical workflow, particularly regarding the timely transfer of lifestyle data, laboratory results, and other EHR data. Future research should prioritize the development of automated data entry systems to ensure that the warning system and prediction models remain up-to-date, fast, and accurate, ultimately optimizing clinical decision-making.
Thirdly, technical considerations limited participant recruitment. To avoid burdening participants with complex devices, simple and cost-effective options were chosen. However, this decision potentially impacted the richness of the collected data. In the future, careful device selection and testing will be essential to balance participant comfort with comprehensive data collection.
Fourthly, the absence of a traditional control or comparison group is a key limitation resulting from the SCED methodology employed in this study. While SCED offers strong internal validity by controlling for individual variability, it lacks the between-participant comparison provided by traditional randomized controlled trials or cohort studies. This means we cannot directly compare the outcomes of participants using the PERSIST platform with a separate group receiving standard care or no intervention. The rigorous requirements of SCED, particularly the need for complete data across all measurement points, led to the exclusion of participants with incomplete data. This attrition could potentially introduce selection bias. However, the strength of SCED lies in its focus on within-participant effects, allowing for strong inferences about individual responses to the intervention, even with smaller sample sizes. Consequently, while we can demonstrate changes within individuals over time, attributing these changes solely to the PERSIST platform and generalizing the findings to a broader population requires careful consideration.
Finally, interoperability across different data types and clinical centers remains a significant hurdle in healthcare research. This challenge requires further investigation to facilitate seamless data exchange and collaboration.

4.4. Future Directions

This study has provided valuable insights into the potential and challenges of implementing the PERSIST platform for cancer survivor care. To maximize its impact and facilitate wider adoption, future research and development should prioritize several key areas.
Firstly, enhancing usability and accessibility is paramount. The high attrition rate due to technical challenges underscores the need for rigorous device testing, careful selection, and user-friendly interfaces. Detailed analysis of patient feedback and usability testing with target users are essential to identify and address specific usability issues. Future iterations should focus on simplifying the interface, allowing personalized customization, and providing enhanced technical support and training. To further improve accessibility, future developments should prioritize simplified user interfaces, multimedia educational resources, and personalized technical support, particularly for individuals with varying health literacy and technological skills.
Secondly, integrating personalized and holistic care is crucial. Future PERSIST app development should incorporate features addressing psychological factors like stress and motivation, including stress management tools, personalized goal-setting, and resilience resources. AI-driven personalized feedback, adapting to mental states, warrants exploration. Furthermore, to enhance the platform’s ability to deliver tailored and adaptable support, future developments should focus on leveraging its existing capabilities for patient feedback integration, vital parameter tracking, and risk factor identification. Dynamic, adaptive algorithms can be implemented to tailor alerts and interventions based on individual patient profiles, real-time data, and evolving health conditions. Additionally, the CDSS should be refined to provide personalized risk predictions, while the MRAST framework can provide insights into patient emotional states. AI can analyze the diverse data collected through the mHealth (version 1.0) and mClinician (version 1.0) applications to personalize interventions and provide real-time alerts and feedback. Integrating AI-powered chatbots or virtual assistants can further enhance patient support.
Thirdly, fostering long-term engagement and trust is essential for sustained platform use. To enhance long-term patient motivation and engagement, future iterations should focus on improving real-time feedback and patient progress tracking and incorporating gamification and social support features. Methods such as personalized goal setting, community support, and regular content updates should be introduced. Integrating the platform with existing healthcare services and incorporating gamification can further enhance engagement. Personalized reminders, data portability, and continued technical support are also crucial. To build patient trust, improvements in data privacy and security are essential, including transparent data handling policies, robust encryption, and granular data access control.
Fourthly, optimizing clinical integration and collaboration is vital for seamless adoption. To better integrate clinician input and align the intervention with standard cancer care practices, future development should prioritize early and continuous clinician engagement, including design workshops, customizable dashboards, and EHR integration. Regular feedback loops and comprehensive training are also essential.
Finally, addressing demographic variations and potential negative effects is necessary to ensure equitable access and patient well-being. Future research should examine variations in self-efficacy and patient activation outcomes across diverse demographic groups, adapting the intervention with culturally tailored content, language accessibility, and age-appropriate design. The study revealed unexpected negative effects, including heightened anxiety and frustration among some participants, particularly related to technical difficulties. Future iterations should prioritize user-centered design, enhanced technical support, and a personalized approach to mitigate these issues.
By incorporating these improvements, future definitive trials can build upon the lessons learned from this study, increasing the likelihood of successful outcomes. The PERSIST system has the potential to significantly impact clinical routines by providing valuable insights into patient behavior, treatment adherence, and overall health outcomes. Further research is essential to demonstrate its efficacy and generalizability through large-scale studies, ultimately facilitating the wider adoption of digital therapies in cancer survivor care.

5. Conclusions

The PERSIST project’s tools represent a significant advancement in cancer survivorship care by delivering personalized and dynamic care plans tailored to the unique needs of individual survivors. This personalized approach holds the potential to improve patient outcomes and overall quality of life while reducing healthcare costs. The system is user-friendly and easy to navigate, with participants generally expressing a neutral to slightly positive attitude towards frequent use. The high adherence rates across nearly all hospitals suggest that patients found the app easy to use and manage on a daily basis.
The PERSIST project’s tools align with the goals of the Precision Medicine Initiative (PMI) [45], which seeks to tailor medical treatments and preventive strategies to an individual’s unique genetic and environmental profile. This technology has the potential to revolutionize cancer survivorship care by enabling personalized and adaptive care plans, leading to improved patient outcomes, enhanced patient satisfaction, and significant cost savings for healthcare systems. By facilitating seamless coordination among healthcare providers and promoting healthy behaviors through mHealth apps and smart bracelets, this technology can help prevent chronic diseases and reduce long-term healthcare costs.
Overall, the PERSIST project’s approach represents a key opportunity to improve survivorship care for cancer patients and transition towards more personalized medicine strategies.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/app15094713/s1, Supplementary File: Generic questionnaire for clinicians.

Author Contributions

All authors conceived of the idea for the intervention for cancer survivors and led the application for funding for the project. D.B., I.P., K.A., A.M.L., M.R., M.H., V.F., A.M.M., B.C.-C., J.A.A., M.C., P.D., V.B., and C.L. led the patient recruitment, data collection, and execution of clinical trials in four different clinical centers. U.A., I.M., and U.S. developed the MRAST framework. T.C. and K.U. developed the mHealth application. U.A., I.M., U.S., V.Š., S.L., J.N., T.C., K.U., G.M., Y.P., and S.C.-A. contributed to the development of CDSS. Data analysis and results of the questionnaires were prepared by all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This research project is partially funded by the European Union’s Horizon 2020 research and innovation program, project PERSIST (Grant Agreement No. 875406), and the Slovenian Research and Innovation Agency, Advanced Methods of Interaction in Telecommunication Research Programme (grant number P2-0069). The funding sources approved the funding of project PERSIST, of which this clinical study is a part, and ensured the funds for its implementation. The funding sources had no role in the design of this study and will not have a role beyond progress monitoring and evaluation of the quality of project PERSIST during the execution, analyses, or interpretation of the data of this clinical study, or the decision to submit results.

Institutional Review Board Statement

This study has been approved by relevant ethical committees in Belgium (Institutional Ethics Committee of CHU de Liege, approved 25 August 2020, approval ref. no: 2020/248), Latvia (Riga Eastern Clinical University Hospital Support Foundation Medical and Biomedical Research Ethics Committee, approved 6 August 2020, approval ref. no.: 8-A/20), Slovenia (National Ethics Committee, approved 18 August 2020, approval ref. no.: 0120–352/2020/5), and Spain (Regional Institutional Review Board, approved 21 October 2020, approval ref. no.: 2020/394).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patient(s) to publish this paper.

Data Availability Statement

The deidentified data presented in this study are available on request from the corresponding author due to ethical reasons.

Conflicts of Interest

Author Simon Lin was employed by the company Symptoma GmbH. Author Jama Nateqi is a shareholder of the company Symptoma GmbH. Authors Tunç Cerit and Kadir Uguducu were employed by the company Emoda Yazilim. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ANOVA Analysis of variance
ASR Automatic Speech Recognition
CASE-Cancer Communication and Attitudinal Self-Efficacy scale for cancer
CDSS Clinical Decision Support System
CHU Centre Hospitalier Universitaire De Liege
CI Confidence Interval
CTCs Circulating Tumor Cells
DHI Digital health intervention
HL7 FHIR Health Level 7 Fast Healthcare Interoperability Resources
ICD International Classification of Diseases
MRAST Multimodal Risk Assessment and Symptom Tracking
PAM Patient Activation Measure
PERSIST Acronym of project ‘Patients-centered SurvivorShIp care plan after Cancer treatments based on Big Data and Artificial Intelligence technologies’
PMI Precision Medicine Initiative
PREMs Patient Reported Experience Measures
PROMs Patient-reported outcome measures
REUH Riga East Clinical University Hospital
SERGAS Complejo Hospitalario Universitario de Ourense
SUS System Usability Scale
UKCM University Medical Centre Maribor
UL University of Latvia

References

  1. Chan, R.J.; Crawford-Williams, F.; Crichton, M.; Joseph, R.; Hart, N.H.; Milley, K.; Druce, P.; Zhang, J.; Jefford, M.; Lisy, K.; et al. Effectiveness and Implementation of Models of Cancer Survivorship Care: An Overview of Systematic Reviews. J. Cancer Surviv. 2023, 17, 197–221. [Google Scholar] [CrossRef]
  2. Shaffer, K.M.; Turner, K.L.; Siwik, C.; Gonzalez, B.D.; Upasani, R.; Glazer, J.V.; Ferguson, R.J.; Joshua, C.; Low, C.A. Digital Health and Telehealth in Cancer Care: A Scoping Review of Reviews. Lancet Digit. Health 2023, 5, e316–e327. [Google Scholar] [CrossRef]
  3. Bradbury, K.; Steele, M.; Corbett, T.; Geraghty, A.W.A.; Krusche, A.; Heber, E.; Easton, S.; Cheetham-Blake, T.; Slodkowska-Barabasz, J.; Müller, A.M.; et al. Developing a Digital Intervention for Cancer Survivors: An Evidence-, Theory- and Person-Based Approach. npj Digit. Med. 2019, 2, 85. [Google Scholar] [CrossRef] [PubMed]
  4. Hübner, J.; Welter, S.; Ciarlo, G.; Käsmann, L.; Ahmadi, E.; Keinki, C. Patient Activation, Self-Efficacy and Usage of Complementary and Alternative Medicine in Cancer Patients. Med. Oncol. 2022, 39, 192.
  5. Huang, Q.; Wu, F.; Zhang, W.; Stinson, J.; Yang, Y.; Yuan, C. Risk Factors for Low Self-care Self-efficacy in Cancer Survivors: Application of Latent Profile Analysis. Nurs. Open 2022, 9, 1805–1814.
  6. Mazanec, S.R.; Sattar, A.; Delaney, C.P.; Daly, B.J. Activation for Health Management in Colorectal Cancer Survivors and Their Family Caregivers. West. J. Nurs. Res. 2016, 38, 325–344.
  7. Albrecht, K.; Droll, H.; Giesler, J.M.; Nashan, D.; Meiss, F.; Reuter, K. Self-efficacy for Coping with Cancer in Melanoma Patients: Its Association with Physical Fatigue and Depression. Psychooncology 2013, 22, 1972–1978.
  8. Barlow, J.H.; Bancroft, G.V.; Turner, A.P. Self-Management Training for People with Chronic Disease: A Shared Learning Experience. J. Health Psychol. 2005, 10, 863–872.
  9. Moradian, S.; Maguire, R.; Liu, G.; Krzyzanowska, M.K.; Butler, M.; Cheung, C.; Signorile, M.; Gregorio, N.; Ghasemi, S.; Howell, D. Promoting Self-Management and Patient Activation Through eHealth: Protocol for a Systematic Literature Review and Meta-Analysis. JMIR Res. Protoc. 2023, 12, e38758.
  10. Hailey, V.; Rojas-Garcia, A.; Kassianos, A.P. A Systematic Review of Behaviour Change Techniques Used in Interventions to Increase Physical Activity among Breast Cancer Survivors. Breast Cancer 2022, 29, 193–208.
  11. Aapro, M.; Bossi, P.; Dasari, A.; Fallowfield, L.; Gascón, P.; Geller, M.; Jordan, K.; Kim, J.; Martin, K.; Porzig, S. Digital Health for Optimal Supportive Care in Oncology: Benefits, Limits, and Future Perspectives. Support. Care Cancer 2020, 28, 4589–4612.
  12. Elkefi, S.; Trapani, D.; Ryan, S. The Role of Digital Health in Supporting Cancer Patients’ Mental Health and Psychological Well-Being for a Better Quality of Life: A Systematic Literature Review. Int. J. Med. Inf. 2023, 176, 105065.
  13. Marthick, M.; McGregor, D.; Alison, J.; Cheema, B.; Dhillon, H.; Shaw, T. Supportive Care Interventions for People with Cancer Assisted by Digital Technology: Systematic Review. J. Med. Internet Res. 2021, 23, e24722.
  14. Burbury, K.; Wong, Z.; Yip, D.; Thomas, H.; Brooks, P.; Gilham, L.; Piper, A.; Solo, I.; Underhill, C. Telehealth in Cancer Care: During and beyond the COVID-19 Pandemic. Intern. Med. J. 2021, 51, 125–133.
  15. Powley, N.; Nesbitt, A.; Carr, E.; Hackett, R.; Baker, P.; Beatty, M.; Huddleston, R.; Danjoux, G. Effect of Digital Health Coaching on Self-Efficacy and Lifestyle Change. BJA Open 2022, 4, 100067.
  16. Van Der Hout, A.; Holtmaat, K.; Jansen, F.; Lissenberg-Witte, B.I.; Van Uden-Kraan, C.F.; Nieuwenhuijzen, G.A.P.; Hardillo, J.A.; Baatenburg De Jong, R.J.; Tiren-Verbeet, N.L.; Sommeijer, D.W.; et al. The eHealth Self-Management Application ‘Oncokompas’ That Supports Cancer Survivors to Improve Health-Related Quality of Life and Reduce Symptoms: Which Groups Benefit Most? Acta Oncol. 2021, 60, 403–411.
  17. Courneya, K.S. Physical Activity and Cancer Survivorship: A Simple Framework for a Complex Field. Exerc. Sport Sci. Rev. 2014, 42, 102–109.
  18. Bruggeman, A.R.; Kamal, A.H.; LeBlanc, T.W.; Ma, J.D.; Baracos, V.E.; Roeland, E.J. Cancer Cachexia: Beyond Weight Loss. J. Oncol. Pract. 2016, 12, 1163–1171.
  19. Blanchard, C.M.; Courneya, K.S.; Stein, K. Cancer Survivors’ Adherence to Lifestyle Behavior Recommendations and Associations with Health-Related Quality of Life: Results from the American Cancer Society’s SCS-II. J. Clin. Oncol. 2008, 26, 2198–2204.
  20. Henry-Amar, M.; Busson, R. Does Persistent Fatigue in Survivors Relate to Cancer? Lancet Oncol. 2016, 17, 1351–1352.
  21. Soto-Ruiz, N.; Escalada-Hernández, P.; Martín-Rodríguez, L.S.; Ferraz-Torres, M.; García-Vivar, C. Web-Based Personalized Intervention to Improve Quality of Life and Self-Efficacy of Long-Term Breast Cancer Survivors: Study Protocol for a Randomized Controlled Trial. Int. J. Environ. Res. Public Health 2022, 19, 12240.
  22. Merluzzi, T.V.; Pustejovsky, J.E.; Philip, E.J.; Sohl, S.J.; Berendsen, M.; Salsman, J.M. Interventions to Enhance Self-efficacy in Cancer Patients: A Meta-analysis of Randomized Controlled Trials. Psychooncology 2019, 28, 1781–1790.
  23. Ekstedt, M.; Schildmeijer, K.; Wennerberg, C.; Nilsson, L.; Wannheden, C.; Hellström, A. Enhanced Patient Activation in Cancer Care Transitions: Protocol for a Randomized Controlled Trial of a Tailored Electronic Health Intervention for Men with Prostate Cancer. JMIR Res. Protoc. 2019, 8, e11625.
  24. Patients-Centered SurvivorShIp Care Plan After Cancer Treatments Based on Big Data and Artificial Intelligence Technologies. Available online: https://cordis.europa.eu/project/id/875406 (accessed on 1 June 2023).
  25. Mlakar, I.; Lin, S.; Aleksandraviča, I.; Arcimoviča, K.; Eglītis, J.; Leja, M.; Salgado Barreira, Á.; Gómez, J.G.; Salgado, M.; Mata, J.G.; et al. Patients-Centered SurvivorShIp Care Plan after Cancer Treatments Based on Big Data and Artificial Intelligence Technologies (PERSIST): A Multicenter Study Protocol to Evaluate Efficacy of Digital Tools Supporting Cancer Survivors. BMC Med. Inform. Decis. Mak. 2021, 21, 243.
  26. Arioz, U.; Smrke, U.; Plohl, N.; Mlakar, I. Scoping Review on the Multimodal Classification of Depression and Experimental Study on Existing Multimodal Models. Diagnostics 2022, 12, 2683.
  27. Rojc, M.; Ariöz, U.; Šafran, V.; Mlakar, I. Multilingual Chatbots to Collect Patient-Reported Outcomes. In Chatbots—The AI-Driven Front-Line Services for Customers; Babulak, E., Ed.; IntechOpen: London, UK, 2023.
  28. Manzo, G.; Pannatier, Y.; Duflot, P.; Kolh, P.; Chavez, M.; Bleret, V.; Calvaresi, D.; Jimenez-del-Toro, O.; Schumacher, M.; Calbimonte, J.-P. Breast Cancer Survival Analysis Agents for Clinical Decision Support. Comput. Methods Programs Biomed. 2023, 231, 107373.
  29. Calvo-Almeida, S.; Serrano-Llabrés, I.; Cal-González, V.M.; Piairo, P.; Pires, L.R.; Diéguez, L.; González-Castro, L. Multichannel Fluorescence Microscopy Images CTC Detection: A Deep Learning Approach. In Proceedings of the International Conference of Computational Methods in Sciences and Engineering ICCMSE 2022, Virtual, 26–29 October 2022; AIP Publishing: Long Island, NY, USA, 2024; p. 030007.
  30. Krasny-Pacini, A.; Evans, J. Single-Case Experimental Designs to Assess Intervention Effectiveness in Rehabilitation: A Practical Guide. Ann. Phys. Rehabil. Med. 2018, 61, 164–179.
  31. Pope, Z.; Lee, J.E.; Zeng, N.; Lee, H.Y.; Gao, Z. Feasibility of Smartphone Application and Social Media Intervention on Breast Cancer Survivors’ Health Outcomes. Transl. Behav. Med. 2019, 9, 11–22.
  32. Quintiliani, L.M.; Mann, D.M.; Puputti, M.; Quinn, E.; Bowen, D.J. Pilot and Feasibility Test of a Mobile Health-Supported Behavioral Counseling Intervention for Weight Management Among Breast Cancer Survivors. JMIR Cancer 2016, 2, e4.
  33. Wolf, M.S.; Chang, C.-H.; Davis, T.; Makoul, G. Development and Validation of the Communication and Attitudinal Self-Efficacy Scale for Cancer (CASE-Cancer). Patient Educ. Couns. 2005, 57, 333–341.
  34. Brooke, J. SUS: A ‘Quick and Dirty’ Usability Scale. Usability Eval. Ind. 1996, 189, 4–7.
  35. Hibbard, J.H.; Stockard, J.; Mahoney, E.R.; Tusler, M. Development of the Patient Activation Measure (PAM): Conceptualizing and Measuring Activation in Patients and Consumers. Health Serv. Res. 2004, 39, 1005–1026.
  36. Alpert, J.M.; Amin, T.B.; Zhongyue, Z.; Markham, M.J.; Murphy, M.; Bylund, C.L. Evaluating the SEND eHealth Application to Improve Patients’ Secure Message Writing. J. Cancer Educ. 2024, 40, 182–191.
  37. Pomey, M.-P.; Nelea, M.I.; Normandin, L.; Vialaron, C.; Bouchard, K.; Côté, M.-A.; Duarte, M.A.R.; Ghadiri, D.P.; Fortin, I.; Charpentier, D.; et al. An Exploratory Cross-Sectional Study of the Effects of Ongoing Relationships with Accompanying Patients on Cancer Care Experience, Self-Efficacy, and Psychological Distress. BMC Cancer 2023, 23, 369.
  38. Baik, S.H.; Oswald, L.B.; Buscemi, J.; Buitrago, D.; Iacobelli, F.; Perez-Tamayo, A.; Guitelman, J.; Penedo, F.J.; Yanez, B. Patterns of Use of Smartphone-Based Interventions Among Latina Breast Cancer Survivors: Secondary Analysis of a Pilot Randomized Controlled Trial. JMIR Cancer 2020, 6, e17538.
  39. Hibbard, J.H.; Mahoney, E.R.; Stockard, J.; Tusler, M. Development and Testing of a Short Form of the Patient Activation Measure. Health Serv. Res. 2005, 40, 1918–1930.
  40. Ng, Q.X.; Liau, M.Y.Q.; Tan, Y.Y.; Tang, A.S.P.; Ong, C.; Thumboo, J.; Lee, C.E. A Systematic Review of the Reliability and Validity of the Patient Activation Measure Tool. Healthcare 2024, 12, 1079.
  41. Lewis, J.R. The System Usability Scale: Past, Present, and Future. Int. J. Hum. Comput. Interact. 2018, 34, 577–590.
  42. Bauer, A.M.; Iles-Shih, M.; Ghomi, R.H.; Rue, T.; Grover, T.; Kincler, N.; Miller, M.; Katon, W.J. Acceptability of mHealth Augmentation of Collaborative Care: A Mixed Methods Pilot Study. Gen. Hosp. Psychiatry 2018, 51, 22–29.
  43. Clare, L.; Wu, Y.-T.; Teale, J.C.; MacLeod, C.; Matthews, F.; Brayne, C.; Woods, B.; CFAS-Wales Study Team. Potentially Modifiable Lifestyle Factors, Cognitive Reserve, and Cognitive Function in Later Life: A Cross-Sectional Study. PLoS Med. 2017, 14, e1002259.
  44. Santiago, J.A.; Potashkin, J.A. Physical Activity and Lifestyle Modifications in the Treatment of Neurodegenerative Diseases. Front. Aging Neurosci. 2023, 15, 1185671.
  45. Collins, F.S.; Varmus, H. A New Initiative on Precision Medicine. N. Engl. J. Med. 2015, 372, 793–795.
Figure 1. The main data flow through the PERSIST platform.
Figure 2. mHealth App screenshots: (a) Summary of vital parameters; (b) Login screen.
Figure 3. mClinician app screenshots: (a) Patient trends; (b) Patient Summary.
Figure 4. Feature extraction in the MRAST framework.
Figure 5. Overview of the CDSS structure.
Figure 6. Participant flow diagram.
Table 1. Summary of reasons patients mentioned upon leaving the study.

| Reason for Leaving | Times Mentioned |
|---|---|
| Personal life situation | 11 |
| Device malfunction, technical problems | 10 |
| Participation takes too much time | 9 |
| Does not like the system in general | 7 |
| Complaints about app | 6 |
| Induces stress, anxiety | 6 |
| Not specified | 4 |
| Reminds of cancer | 3 |
| No need for follow-up | 2 |
| Light at night from bracelet | 2 |
| Tired of participating | 2 |
| Patient died | 2 |
| Recurrence | 1 |
Table 2. Characteristics of all participants at baseline.

| | UL | UKCM | CHU | SERGAS | TOTAL |
|---|---|---|---|---|---|
| Recruited Patients | 46 | 40 | 41 | 39 | 166 |
| Mean Age (years) | 54.17 | 56.3 | 54.92 | 54.85 | 55.03 |
| Std. Dev. Age (years) | 11.31 | 8.34 | 11.06 | 10.5 | 10.34 |
| Breast Cancer Cases | 24 | 20 | 21 | 20 | 85 |
| Colorectal Cancer Cases | 22 | 20 | 20 | 19 | 81 |
| Male | 7 | 11 | 7 | 12 | 37 |
| Female | 39 | 29 | 34 | 27 | 129 |
Table 3. Descriptive statistics of CASE-Cancer results (C.I.: confidence interval). Factor 1: Understand and Participate in Care; Factor 2: Maintain Positive Attitude; Factor 3: Seek and Obtain Information. R: recruitment; LF: last follow-up.

| | Factor 1 (R) | Factor 1 (LF) | Factor 2 (R) | Factor 2 (LF) | Factor 3 (R) | Factor 3 (LF) |
|---|---|---|---|---|---|---|
| N | 75 | 75 | 75 | 75 | 75 | 75 |
| Mean | 13.73 | 13.75 | 13.28 | 13.17 | 13.81 | 13.55 |
| Median | 14 | 14 | 14 | 14 | 15 | 14 |
| Std. Deviation | 1.9 | 2.01 | 2.3 | 2.44 | 2.31 | 2.21 |
| Minimum | 9 | 9 | 6 | 4 | 7 | 8 |
| Maximum | 16 | 16 | 16 | 16 | 16 | 16 |
| 25th Percentile | 12 | 12 | 12 | 12 | 12 | 12 |
| 50th Percentile | 14 | 14 | 14 | 14 | 15 | 14 |
| 70th Percentile | 16 | 15 | 15 | 15 | 16 | 16 |
| p-value (R vs. LF) | 0.98 | | 0.66 | | 0.25 | |
| 95% C.I. | [−0.99 to 0.50] | | [−0.50 to 0.99] | | [−1.00 to 1.31 × 10⁻⁵] | |
Table 4. PAM results: comparison of the percentage of patients in each level at recruitment vs. at the last follow-up (p-values have been calculated with McNemar's test; C.I.: confidence interval).

| Level | Recruitment (N = 78), n (%) [95% C.I.] | Last Follow-Up (N = 78), n (%) [95% C.I.] | p-Value |
|---|---|---|---|
| Level 1 | 5 (6.4) [2.2 to 14.9] | 6 (7.7) [3.0 to 16.6] | 1.0 |
| Level 2 | 15 (19.2) [11.7 to 30.8] | 16 (20.5) [12.7 to 32.3] | 1.0 |
| Level 3 | 33 (42.3) [32.6 to 55.9] | 28 (35.9) [26.4 to 49.3] | 0.49 |
| Level 4 | 25 (32.1) [22.9 to 45.2] | 28 (35.9) [26.4 to 49.3] | 0.65 |
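The per-level p-values in Table 4 come from McNemar's test, which compares paired proportions using only the discordant pairs (patients who moved into a level vs. those who moved out of it between recruitment and follow-up). A minimal sketch of the exact two-sided version follows; the discordant counts `b` and `c` below are hypothetical, since the table reports only the marginal frequencies per time point:

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Two-sided exact McNemar p-value from discordant-pair counts:
    b = pairs that moved into the level, c = pairs that moved out of it."""
    n = b + c
    if n == 0:
        return 1.0  # no discordant pairs: no evidence of change
    # Under H0, b ~ Binomial(n, 0.5); double the smaller tail and cap at 1.
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical discordant counts: 1 patient moved up, 2 moved down.
print(mcnemar_exact(1, 2))  # 1.0 — consistent with the null of no change
```

With the small discordant counts implied by the near-identical marginals in Table 4, an exact binomial tail like this readily produces the p-values of 1.0 seen for Levels 1 and 2.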
Table 5. Frequencies of each score group for SUS results.

| | Score Group | Frequency | Percent |
|---|---|---|---|
| Score at recruitment | ≤50 | 3 | 11.11 |
| | 50–70 | 10 | 37.04 |
| | 70–85 | 10 | 37.04 |
| | >85 | 4 | 14.82 |
| Score at last follow-up | ≤50 | 5 | 18.52 |
| | 50–70 | 10 | 37.04 |
| | 70–85 | 6 | 22.22 |
| | >85 | 6 | 22.22 |
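The score groups in Tables 5 and 12 are bins of the standard SUS score, which Brooke [34] defines over ten alternating positively and negatively worded items: odd-numbered items contribute (response − 1), even-numbered items contribute (5 − response), and the sum is scaled by 2.5 onto a 0–100 range. A minimal sketch:

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS score from ten 1-5 Likert responses (Brooke, 1996).
    Odd-numbered items are positively worded, even-numbered negatively."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# Fully favourable answers (5 on odd items, 1 on even items) give the maximum:
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
print(sus_score([3] * 10))                        # 50.0 (neutral throughout)
```

A score around 68 is conventionally treated as average usability, which is why the 50–70 and 70–85 bins above are the most informative groupings.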
Table 6. Results of patient feedback—Part A (means of all centers with standard deviation).

| Question ¹ | Time Point | Mean (SD) | Median |
|---|---|---|---|
| 1st Question—How do you rate your experience with participation in the PERSIST project (in general)? | Initial | 7.41 (1.64) | 8 |
| | Middle | 7.75 (1.70) | 8 |
| | Final | 7.69 (1.53) | 8 |
| 2nd Question—Are the instructions and explanations about the project from personnel understandable? | Initial | 8.53 (1.67) | 9.5 |
| | Middle | 8.53 (1.16) | 8.5 |
| | Final | 8.47 (1.24) | 8 |
| 3rd Question—How does the participation in the PERSIST project make you feel? | Initial | 8.13 (1.86) | 8 |
| | Middle | 8.19 (1.55) | 8 |
| | Final | 8.06 (1.69) | 8 |

¹ Friedman's ANOVA: for the 1st question, p = 0.58; for the 2nd question, p = 0.83; for the 3rd question, p = 0.50.
Table 7. Results of patient feedback—Part A (post hoc Conover's test p-values).

| Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final |
|---|---|---|---|
| 1st Question | 0.39 | 0.35 | 0.93 |
| 2nd Question | 0.87 | 0.67 | 0.55 |
| 3rd Question | 0.55 | 0.24 | 0.55 |
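The omnibus p-values in Tables 6, 8, and 10 come from Friedman's ANOVA, a rank-based repeated-measures test: each patient's three ratings (initial, middle, final) are ranked within that patient, and the statistic measures how unevenly the rank sums fall across time points; only when it is significant are the pairwise Conover comparisons of Tables 7, 9, and 11 meaningful. A minimal sketch of the statistic, using average ranks for ties and omitting the tie-correction factor; the ratings below are hypothetical:

```python
def friedman_statistic(*conditions: list[float]) -> float:
    """Friedman chi-square statistic for k related samples.
    Each argument holds one condition's values for the same n subjects."""
    k, n = len(conditions), len(conditions[0])
    rank_sums = [0.0] * k
    for subject in range(n):
        values = [cond[subject] for cond in conditions]
        for j, v in enumerate(values):
            # Average rank of v within this subject's k values (handles ties).
            less = sum(1 for w in values if w < v)
            equal = sum(1 for w in values if w == v)
            rank_sums[j] += less + (equal + 1) / 2
    return (12 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

# Hypothetical ratings of 4 patients at three time points:
initial, middle, final = [7, 5, 8, 6], [8, 6, 9, 7], [6, 4, 7, 5]
print(friedman_statistic(initial, middle, final))  # 8.0, the maximum for k=3, n=4
```

Comparing the statistic against a chi-square distribution with k − 1 degrees of freedom (e.g. `scipy.stats.chi2.sf`) yields the p-values reported in the footnotes; `scipy.stats.friedmanchisquare` performs both steps in one call.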
Table 8. Results of patient feedback—Part B (means of all centers with standard deviation).

| Question ¹ | Time Point | Mean (SD) | Median |
|---|---|---|---|
| 1st Question—How do you rate the emotion wheel/detection in the app? From 1 (bad, confusing) to 10 (super, interesting) | Initial | 6.60 (2.40) | 7 |
| | Middle | 6.35 (2.68) | 7.5 |
| | Final | 6.85 (2.21) | 8 |
| 2nd Question—How do you rate your experience with questionnaires in the app? From 1 (bad) to 10 (excellent) | Initial | 7.60 (1.64) | 8 |
| | Middle | 7.25 (2.02) | 8 |
| | Final | 7.60 (1.79) | 8 |
| 3rd Question—How do you rate your experience with diary recording? From 1 (bad, confusing) to 10 (super, interesting) | Initial | 6.65 (2.46) | 7 |
| | Middle | 7.00 (2.75) | 8 |
| | Final | 7.00 (2.70) | 8 |
| 4th Question—How do you rate your experience with the mHealth app? From 1 (really bad) to 10 (excellent) | Initial | 7.60 (1.67) | 7.5 |
| | Middle | 7.35 (1.90) | 8 |
| | Final | 7.90 (1.55) | 8 |
| 5th Question—Are the instructions and explanations about mHealth app usage understandable? From 1 (completely confusing) to 10 (completely clear) | Initial | 8.60 (1.31) | 9 |
| | Middle | 8.60 (1.27) | 9 |
| | Final | 8.25 (1.33) | 8 |
| 6th Question—Do you follow up on your gathered data in the mHealth app? From 1 (not at all) to 10 (all the time) | Initial | 7.35 (2.89) | 8 |
| | Middle | 6.80 (2.78) | 7.5 |
| | Final | 6.90 (2.53) | 8 |
| 7th Question—Does the mHealth app affect your behavior? From 1 (not at all) to 10 (I modify my behavior after looking at the data) | Initial | 5.50 (3.05) | 5 |
| | Middle | 5.75 (2.69) | 6 |
| | Final | 6.15 (2.98) | 6 |

¹ Friedman's ANOVA: for the 1st question, p = 0.11; for the 2nd question, p = 0.78; for the 3rd question, p = 0.58; for the 4th question, p = 0.28; for the 5th question, p = 0.11; for the 6th question, p = 0.39; for the 7th question, p = 0.75.
Table 9. Results of patient feedback—Part B (post hoc Conover's test p-values).

| Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final |
|---|---|---|---|
| 1st Question | >0.99 | 0.23 | 0.23 |
| 2nd Question | 0.49 | 0.84 | 0.62 |
| 3rd Question | 0.30 | 0.51 | 0.71 |
| 4th Question | 0.89 | 0.14 | 0.18 |
| 5th Question | 0.91 | 0.08 | 0.06 |
| 6th Question | 0.70 | 0.19 | 0.34 |
| 7th Question | 0.71 | 0.45 | 0.71 |
Table 10. Results of patient feedback—Part C (means of all centers with standard deviation).

| Question ¹ | Time Point | Mean (SD) | Median |
|---|---|---|---|
| 1st Question—How do you rate your experience with smart bracelets? | Initial | 6.87 (2.23) | 7 |
| | Middle | 6.00 (2.10) | 6 |
| | Final | 6.93 (1.53) | 7 |
| 2nd Question—How do you rate your experience with mobile phone? | Initial | 6.80 (2.15) | 7 |
| | Middle | 7.33 (1.99) | 8 |
| | Final | 6.87 (2.10) | 7 |

¹ Friedman's ANOVA: for the 1st question, p = 0.04; for the 2nd question, p = 0.227.
Table 11. Results of patient feedback—Part C (post hoc Conover's test p-values).

| Question | Initial vs. Middle | Initial vs. Final | Middle vs. Final |
|---|---|---|---|
| 1st Question | 0.03 | >0.99 | 0.035 |
| 2nd Question | 0.09 | 0.50 | 0.28 |
Table 12. SUS scores of mClinician at recruitment and last follow-up.

| | Score Group | Frequency | Percent |
|---|---|---|---|
| Score at recruitment | ≤50 | 7 | 43.75 |
| | 50–70 | 6 | 37.50 |
| | 70–85 | 2 | 12.50 |
| | >85 | 1 | 6.25 |
| Score at last follow-up | ≤50 | 7 | 41.18 |
| | 50–70 | 7 | 41.18 |
| | 70–85 | 2 | 11.76 |
| | >85 | 1 | 5.88 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Arioz, U.; Smrke, U.; Šafran, V.; Lin, S.; Nateqi, J.; Bema, D.; Polaka, I.; Arcimovica, K.; Lescinska, A.M.; Manzo, G.; et al. Evaluating the Benefits and Implementation Challenges of Digital Health Interventions for Improving Self-Efficacy and Patient Activation in Cancer Survivors: Single-Case Experimental Prospective Study. Appl. Sci. 2025, 15, 4713. https://doi.org/10.3390/app15094713

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
