Background: Medication errors, defined as preventable events that may lead to inappropriate medication use or patient harm, remain a persistent global challenge contributing to adverse drug events, increased morbidity, and escalating healthcare costs. While prior research has primarily examined system-level and technological factors, limited attention has been given to the emotional and experiential dimensions influencing medication error occurrence and reporting.
Objective: This study aimed to investigate public awareness, attitudes, and experiences related to medication errors, with a particular focus on the perceived role of artificial intelligence (AI) in enhancing medication safety.
Methods: A cross-sectional descriptive design was employed using a structured, self-administered online questionnaire. A total of 501 valid responses were collected from adult residents in Saudi Arabia. The survey included items on demographics, knowledge of medication errors, attitudes toward AI in pharmacy practice, and preventive behaviors. Descriptive statistics and binary logistic regression were used for data analysis.
Results: Approximately 79.9% of respondents reported awareness of medication errors, though 75.7% relied on informal online sources for information. While 47.6% strongly believed AI could improve dispensing accuracy, concerns regarding data privacy, reliability, and job displacement were noted. Logistic regression indicated a strong association between pro-AI attitudes and willingness to adopt AI, though model convergence was limited due to polarized responses.
Conclusion: Findings highlight the necessity of fostering non-punitive reporting cultures, strengthening digital health literacy, and carefully integrating AI into pharmacy practice. These measures are critical to enhancing patient safety and align with national healthcare transformation goals, including Saudi Arabia’s Vision 2030.
Medication errors remain a significant challenge to patient safety across healthcare systems worldwide. Broadly defined as preventable events that may lead to inappropriate medication use or patient harm, these errors can occur during prescribing, dispensing, administration, or monitoring phases [1-2]. Their consequences include adverse drug events, prolonged hospitalizations, increased morbidity, and rising healthcare costs [3].
Initial efforts to address medication errors have focused on systemic and technological interventions. The Institute of Medicine’s seminal report To Err Is Human emphasized structural changes to reduce clinical errors [2], and subsequent initiatives have promoted digital tools such as barcoded medication administration and electronic prescribing to minimize risk [4]. However, more recent scholarship has highlighted that emotional and human factors, such as fear of blame, knowledge gaps, and moral distress, also play a crucial role in both the occurrence and underreporting of errors [5-7]. Healthcare professionals operating in hierarchical or punitive environments may hesitate to report mistakes [6], while patients lacking formal education in medication safety often rely on unverified online sources, increasing their vulnerability [8].
Although pharmacists and physicians remain essential in ensuring safe medication practices, the introduction of artificial intelligence (AI) technologies presents both promise and concern. AI is increasingly utilized to enhance dispensing accuracy and detect errors, yet its adoption is often tempered by concerns about data privacy, algorithmic reliability, and workforce displacement [9-10]. These issues are especially pertinent in Saudi Arabia, where Vision 2030 advocates for healthcare digitalization, workforce training, and patient-centered reforms [8].
Despite growing interest in AI and digital health, limited research has explored how the general public perceives medication errors and the use of AI in medication management. Existing studies have not sufficiently addressed the emotional, educational, and experiential dimensions that influence medication safety behaviors.
This study addresses this gap by examining public awareness, attitudes, and experiences related to medication errors, with a specific focus on the perceived role of AI in enhancing medication safety. The study employs a cross-sectional descriptive design to assess current perceptions among adults in Saudi Arabia.
This paper contributes by offering empirical insights into how the public navigates medication safety challenges in a digitally evolving healthcare system. It integrates human, technological, and policy considerations to inform educational strategies, system design, and national transformation efforts.
Study Design
This study adopted a cross-sectional descriptive design to examine public knowledge, attitudes, and experiences related to medication errors, as well as perceptions of artificial intelligence (AI) in enhancing medication safety. A cross-sectional design was selected for its efficiency in capturing current views and behavioral patterns at a single time point. While it does not permit causal inference, this design is appropriate for identifying prevalent associations and informing hypothesis generation in exploratory research.
Study Setting
Data collection was conducted online without geographic restrictions, although the target population was situated in the Kingdom of Saudi Arabia. The digital setting enabled broad participation, reflecting the high internet penetration rate in the country. This approach allowed for real-time data acquisition while preserving participant anonymity, which is especially pertinent for topics involving medication safety and technology adoption.
Sampling Strategy and Participants
Sampling Procedure: Participants were recruited using a non-probability convenience sampling method. Eligibility was restricted to individuals aged 18 years or older who provided informed electronic consent. Recruitment was conducted via institutional mailing lists and social media platforms (e.g., Twitter, WhatsApp, Facebook), through a standardized message containing a hyperlink to the survey.
Inclusion and Exclusion Criteria
Inclusion criteria encompassed adults (≥18 years) willing to voluntarily participate and complete the electronic survey. Exclusion criteria included respondents who left entire sections of the questionnaire unanswered or provided duplicate submissions. Data integrity was maintained by identifying duplicates through IP checks and timestamp comparisons, with flagged entries removed from analysis.
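To illustrate this screening step, the sketch below shows one way such duplicate checks might be implemented in Python with pandas. The column names (ip_address, timestamp, the q-prefixed answer columns) and the five-minute window are hypothetical assumptions for illustration, not details reported by the study.

```python
import pandas as pd

# Minimal sketch of the duplicate-screening step described above.
# Column names are hypothetical; the actual export format depends
# on the survey platform used.
responses = pd.read_csv("survey_export.csv", parse_dates=["timestamp"])

# Flag rows sharing an IP address and submitted within a short window,
# one plausible operationalization of "IP checks and timestamp
# comparisons" (the 5-minute threshold is an assumption).
responses = responses.sort_values(["ip_address", "timestamp"])
gap = responses.groupby("ip_address")["timestamp"].diff()
suspected_dupes = gap.notna() & (gap < pd.Timedelta(minutes=5))

# Also drop fully identical answer sets (exact resubmissions),
# assuming answer columns are prefixed with "q".
answer_cols = [c for c in responses.columns if c.startswith("q")]
exact_dupes = responses.duplicated(subset=answer_cols, keep="first")

clean = responses[~(suspected_dupes | exact_dupes)]
print(f"Retained {len(clean)} of {len(responses)} responses")
```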
Sample Size
The survey remained open for four weeks. A total of 501 valid and complete responses were retained for analysis, exceeding the minimum target of 400 responses. This sample size provided sufficient statistical power for both descriptive analyses and exploratory regression modeling.
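The paper does not state how the 400-response target was derived; one common basis for such targets is Cochran's formula for estimating a proportion. The short calculation below, under assumed parameters (95% confidence, maximum variability, 5% margin of error), yields a minimum of 385, which a 400-response target would comfortably exceed.

```python
from math import ceil

# Cochran's sample-size formula for estimating a proportion:
#   n = z^2 * p * (1 - p) / e^2
# Assumed parameters (not stated in the paper): 95% confidence
# (z = 1.96), maximum variability (p = 0.5), 5% margin of error.
z, p, e = 1.96, 0.5, 0.05
n = ceil(z**2 * p * (1 - p) / e**2)
print(n)  # 385 -> a 400-response target adds headroom for invalid entries
```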
Data Collection Instrument
Questionnaire Development: A structured, self-administered online questionnaire was developed based on a review of relevant literature concerning medication errors, AI adoption in healthcare, and digital health literacy. The instrument was composed of four primary sections: (1) demographic characteristics; (2) knowledge of medication errors; (3) attitudes toward AI in pharmacy practice; and (4) experiences and preventive practices.
The questionnaire included a combination of closed-ended (multiple choice, Likert-scale) and open-ended questions. Likert items ranged from "Strongly Disagree" to "Strongly Agree" to capture the intensity of participant attitudes. Open-text fields were included for qualitative feedback on privacy concerns and personal experiences with medication errors.
Pilot Testing
Prior to full deployment, the questionnaire was pilot tested with 20 individuals who represented a range of educational backgrounds and had limited healthcare experience. Feedback focused on item clarity, neutrality of language, and survey length. Minor revisions were made to wording and layout to enhance usability and minimize response bias.
Data Collection Procedure
The finalized questionnaire was hosted on a secure online platform (e.g., Google Forms). A standardized recruitment message with study details and survey access was disseminated. Participants were informed of the study’s objectives, confidentiality measures, and their right to withdraw at any point. Completion of the survey required 10–15 minutes. The survey was designed to maintain a consistent item order, with an option for participants to revise their responses before final submission. Data were automatically captured and stored in a secure digital database.
Ethical Considerations
This study was approved by the Research Ethics Committee (REC), and the research adhered to the ethical principles outlined in the Declaration of Helsinki. Participants provided informed consent electronically prior to beginning the survey. Personally identifiable information was not collected, and IP addresses were not permanently stored. The raw dataset was accessible only to the principal investigator and designated team members and was secured in encrypted, password-protected storage.
Data Analysis
Following the closure of data collection, responses were exported to Microsoft Excel for preliminary data cleaning. Incomplete or duplicate entries were removed. The final dataset was imported into IBM SPSS Statistics (version 25) for analysis.
Descriptive statistics (frequencies, percentages, means, and standard deviations) were computed for demographic variables and key survey responses. Cross-tabulation and Pearson’s chi-square (χ²) tests were conducted to assess associations between demographic factors (e.g., age, gender, education) and primary outcomes (e.g., AI acceptance, error reporting attitudes).
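For readers wishing to reproduce this style of analysis outside SPSS, the following Python sketch illustrates an equivalent cross-tabulation and chi-square test of independence. The file name and column names (gender, ai_acceptance) are placeholders, not the study's actual variable labels.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Sketch of the cross-tabulation / chi-square step, assuming a cleaned
# dataset with hypothetical columns "gender" and "ai_acceptance".
df = pd.read_csv("clean_responses.csv")

# Build the contingency table and test for association.
table = pd.crosstab(df["gender"], df["ai_acceptance"])
chi2, p_value, dof, expected = chi2_contingency(table)
print(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```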
A binary logistic regression model was developed to evaluate the relationship between a positive attitude toward AI (independent variable) and the likelihood of adopting AI for medication safety (dependent variable), controlling for covariates including age, gender, and prior awareness of medication errors. Although the model demonstrated a strong association, it exhibited quasi-complete separation, which limited model convergence and suggests a high degree of polarization in participant views. As such, regression results are presented with interpretive caution.
The largest group of respondents (47.01%, N=236) was aged 21-23 years, followed by 22.71% (N=114) aged 18-20 years. The 24-26 years group accounted for 17.33% (N=87), while 12.55% (N=63) were older than 26 years.
In terms of gender, 67.53% (N=339) were female, whereas 32.27% (N=162) were male. The most common educational qualification was a bachelor's degree (48.61%, N = 244), followed by secondary education (34.86%, N = 175) and postgraduate education (11.16%, N = 56) (Table 1).
Table 1: Demographic Characteristics

| Category | Response Options | Frequency (N) | Percentage (%) |
|---|---|---|---|
| Age | 21-23 years | 236 | 47.01 |
| | 18-20 years | 114 | 22.71 |
| | 24-26 years | 87 | 17.33 |
| | Over 26 years | 63 | 12.55 |
| | Under 18 | 1 | 0.20 |
| Gender | Female | 339 | 67.53 |
| | Male | 162 | 32.27 |
| Highest Level of Education | Bachelor’s degree | 244 | 48.61 |
| | Secondary | 175 | 34.86 |
| | Postgraduate | 56 | 11.16 |
Table 2: Knowledge of Medication Errors

| Category | Response Options | Frequency (N) | Percentage (%) |
|---|---|---|---|
| Heard of Medication Errors | Yes | 401 | 79.88 |
| | No | 100 | 19.92 |
| | Not Sure | 2 | 0.20 |
| Primary Source of Knowledge | Online resources | 380 | 75.70 |
| | University courses | 52 | 10.36 |
| | Conferences/workshops | 48 | 9.56 |
| | Other | 2 | 0.40 |
Table 3: Attitudes Toward Medication Errors

| Category | Response Options | Frequency (N) | Percentage (%) |
|---|---|---|---|
| AI Improves Dispensing Accuracy | Strongly Agree | 239 | 47.61 |
| | Agree | 149 | 29.68 |
| | Neutral | 162 | 32.27 |
| | Disagree | 52 | 10.36 |
| | Strongly Disagree | 48 | 9.56 |
| Reporting Medication Errors | Strongly Agree | 254 | 50.60 |
| | Agree | 100 | 19.92 |
| | Neutral | 147 | 29.28 |
Table 4: Experiences and Preventative Practices

| Category | Response Options | Frequency (N) | Percentage (%) |
|---|---|---|---|
| Likelihood of Using AI | Very Likely | 105 | 20.92 |
| | Likely | 211 | 42.03 |
| | Neutral | 142 | 28.29 |
| | Unlikely | 19 | 3.78 |
| | Very Unlikely | 24 | 4.78 |
| Recommend AI Training | Definitely Yes | 127 | 25.30 |
| | Probably Yes | 213 | 42.43 |
| | Unsure | 103 | 20.52 |
Table 5: Model Summary

| Metric | Value |
|---|---|
| Observations | 650 |
| Pseudo R-squared | 0.587 |
| Log-Likelihood | -186.14 |
| LLR p-value | < 0.0001 (highly significant) |
| AIC | 376.27 |
| Converged | No (did not fully converge) |
Table 6: Coefficients

| Variable | Coefficient | Std. Error | z-score | p-value | 95% CI |
|---|---|---|---|---|---|
| Intercept | -20.51 | 1757.42 | ~0 | 0.991 | [-3464.98, 3423.96] |
| AI_Positive_View | 21.99 | 1757.42 | ~0 | 0.99 | [-3422.48, 3466.46] |
Figure 1: Concerns About AI in Medication Safety
A large proportion of respondents (79.88%, N=401) were aware of medication errors, while 19.92% (N=100) had never heard of them. The primary source of knowledge was online resources (75.70%, N=380), whereas formal education sources such as university courses (10.36%) and conferences/workshops (9.56%) played a minor role (Table 2).
Regarding AI’s role in improving dispensing accuracy, 47.61% of respondents strongly agreed that AI can enhance medication dispensing accuracy, while 32.27% remained neutral, reflecting a degree of uncertainty about AI’s effectiveness in reducing errors. When asked about reporting and medication safety, 50.60% strongly agreed that reporting medication errors contributes to overall patient safety. However, 29.28% were unsure, suggesting that while many recognize the importance of reporting errors, there is still some skepticism about how effectively such reports lead to meaningful improvements.
There was also caution regarding AI adoption in pharmacy practice, with 28.09% of respondents strongly agreeing that healthcare professionals should exercise caution when integrating AI into medication management. Concerns centered around data security, ethical implications, and the potential for AI errors to go undetected.
Despite the growing presence of AI in healthcare, respondents strongly supported the continued role of pharmacists in medication safety. A majority (54.98%) strongly agreed that pharmacists will continue to play a critical role in patient care, even with AI advancements, reinforcing the belief that human expertise remains essential in preventing and managing medication errors (Table 3).
The findings indicate that 42.03% of respondents were likely to use AI to prevent medication errors, while 20.92% were very likely. However, 3.78% were unlikely and 4.78% very unlikely to do so, reflecting a small but skeptical minority with doubts about AI’s reliability.
Additionally, 42.43% recommended integrating AI training in pharmacy curricula, but 20.52% remained unsure, suggesting a need for further research into how AI can be best incorporated into medication safety strategies (Table 4).
Figure 1 illustrates participants’ concerns regarding the implementation of artificial intelligence (AI) in medication safety practices. Among the surveyed concerns, job displacement emerged as the most prominent issue, accounting for 39.45% of total responses. This was followed by data security concerns, reported by 33.33%, and AI reliability, cited by 27.22% of respondents.
To further examine the relationship between participants’ attitudes toward artificial intelligence (AI) and their likelihood of using AI to prevent medication errors, a binary logistic regression analysis was conducted. The dependent variable was the likelihood of using AI, dichotomized as 1 = Likely/Very Likely and 0 = Neutral/Unlikely/Very Unlikely. The independent variable was a simplified binary indicator of positive attitude toward AI, where 1 = Agree/Strongly Agree that AI improves accuracy, and 0 = Neutral/Disagree/Strongly Disagree.
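The following Python sketch illustrates this dichotomization and model specification using statsmodels rather than SPSS; the column names are hypothetical placeholders. Under quasi-complete separation, such a fit typically fails to converge and produces inflated standard errors of the kind reported in Tables 5 and 6.

```python
import pandas as pd
import statsmodels.api as sm

# Sketch of the dichotomization and model described above. The column
# names ("use_ai_likelihood", "ai_accuracy_view") are hypothetical
# placeholders for the underlying Likert items.
df = pd.read_csv("clean_responses.csv")

# 1 = Likely/Very Likely; 0 = Neutral/Unlikely/Very Unlikely
y = df["use_ai_likelihood"].isin(["Likely", "Very Likely"]).astype(int)
# 1 = Agree/Strongly Agree that AI improves accuracy; 0 otherwise
x = df["ai_accuracy_view"].isin(["Agree", "Strongly Agree"]).astype(int)

X = sm.add_constant(x.rename("AI_Positive_View"))
model = sm.Logit(y, X)

# With quasi-complete separation, the optimizer typically warns that it
# failed to converge (or raises a PerfectSeparationError) and yields
# enormous standard errors, matching the pattern in Tables 5-6
# (SE ~ 1757, p ~ 0.99 despite a significant LLR test).
result = model.fit(maxiter=100, disp=True)
print(result.summary())
```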
Model Performance
As presented in Table 5, the model included 650 observations and yielded a pseudo-R-squared of 0.587, suggesting strong explanatory power. The log-likelihood was -186.14, and the model's likelihood ratio test was highly significant (LLR p<0.0001), indicating that the model as a whole significantly outperformed the null model. However, it is important to note that the model did not converge, likely due to quasi-complete separation in the data.
As shown in Table 6, the coefficient for positive attitude toward AI was 21.99, indicating a strong positive association with the likelihood of using AI. However, the standard error (1757.42) and wide confidence interval ([-3422.48, 3466.46]) suggest instability in the estimate, further supported by a non-significant p-value (p = 0.99). The intercept was similarly unstable, with a large standard error and no statistical significance.
This study provides empirical insight into the human dimensions of medication errors, particularly focusing on emotional, ethical, and behavioral factors that shape public awareness, reporting practices, and acceptance of artificial intelligence (AI) in medication safety. The findings respond to the central research question by demonstrating that while awareness of medication errors is relatively widespread among the general population, this knowledge is often derived from informal or unregulated digital sources. This underscores a critical gap in formal education and highlights the necessity of structured digital health literacy interventions.
The emotional impact of medication errors was evident on both professional and patient levels. Consistent with previous research, healthcare professionals may experience moral distress and fear of blame, which can deter open communication and reporting of errors [1-2]. Non-punitive reporting environments, by contrast, have been shown to promote transparency and collective learning, contributing to overall improvements in patient safety [3]. From the patient’s perspective, misunderstanding medication use and anxiety about errors further reinforce the need for empathetic communication and educational outreach [4].
In terms of technological integration, the study identified cautious optimism toward AI-based interventions. Nearly half of the participants strongly agreed that AI could improve dispensing accuracy, a sentiment that aligns with global trends in healthcare digitization [5]. However, substantial concerns regarding data privacy, algorithmic reliability, and job displacement were also noted, mirroring patterns observed in prior literature [6]. The relatively modest support for incorporating AI training in pharmacy education (42.43%) suggests lingering hesitancy, which may be mitigated through targeted, evidence-based curricular interventions.
A notable finding was the quasi-complete separation observed in logistic regression analysis, indicating polarized views on AI adoption. This polarization points to a broader challenge in policy and system design, where overly generalized digital health strategies may fail to address heterogeneous stakeholder perspectives. Future policy initiatives may benefit from incorporating interprofessional collaboration, participatory AI design, and transparent governance frameworks to promote trust and usability [7]. These strategies are especially relevant in the context of national health transformation agendas such as Saudi Arabia’s Vision 2030, which emphasizes both technological advancement and human-centered care.
Limitations
Several limitations should be acknowledged. First, the use of convenience sampling restricts the generalizability of findings, as the sample may not reflect the broader population's views. Second, the reliance on self-reported data introduces potential biases, including social desirability and recall bias. Third, the cross-sectional design limits the ability to infer causal relationships or longitudinal changes in perception. Finally, the observed quasi-complete separation in regression analysis indicates limitations in model stability and interpretability.
Future Work
Further research should consider probability-based sampling to improve generalizability, longitudinal designs to track changes in perception over time, and analytic approaches that remain stable when responses are highly polarized.
This study advances the understanding of medication errors by integrating public perspectives on emotional impact, digital literacy, and technological interventions. The findings emphasize the need for non-punitive error-reporting systems, supportive institutional environments, and evidence-informed AI education. These insights contribute to ongoing discussions on patient safety policy and digital transformation, particularly within healthcare systems undergoing reform, such as those guided by Saudi Arabia’s Vision 2030. By addressing both the human and technological dimensions of medication safety, the study supports more inclusive, effective, and ethically grounded approaches to healthcare innovation.
1. Alhur, Anas. “Redefining Healthcare With Artificial Intelligence (AI): The Contributions of ChatGPT, Gemini, and Co-Pilot.” Cureus, vol. 16, no. 4, April 2024. https://pubmed.ncbi.nlm.nih.gov/38721180/.
2. Alhur, Anas. “Overcoming Electronic Medical Records Adoption Challenges in Saudi Arabia.” Cureus, vol. 16, no. 2, February 2024. https://pubmed.ncbi.nlm.nih.gov/38465069/.
3. Alhur, Anas Ali et al. “Attitudes Towards AI in Healthcare Among University of Hail Health Sciences Students: A Qualitative Exploration.” Journal of Pioneering Medical Sciences, vol. 14, no. 3, March 2025, pp. 1–6. http://dx.doi.org/10.47310/jpms2025140301.
4. Alhur, Anas Ali. “Public Perspectives on Digital Innovations in Pharmacy: A Survey on Health Informatics and Medication Management.” Journal of Infrastructure, Policy and Development, vol. 8, no. 8, 2024. https://systems.enpress-publisher.com/index.php/jipd/article/view/5450.
5. Alhur, Anas et al. “Enhancing Patient Safety Through Effective Interprofessional Communication: A Focus on Medication Error Prevention.” Cureus, vol. 16, no. 4, April 2024. https://pubmed.ncbi.nlm.nih.gov/38738027/.
6. Parkinson, Beth et al. “How Sensitive Are Avoidable Emergency Department Attendances to Primary Care Quality? Retrospective Observational Study.” BMJ Quality & Safety, vol. 30, no. 11, November 2020, pp. 884–892. http://dx.doi.org/10.1136/bmjqs-2020-011651.
7. Carayon, P. et al. “Evaluation of Nurse Interaction With Bar Code Medication Administration Technology in the Work Environment.” Journal of Patient Safety, vol. 3, no. 1, 2007, pp. 34–42. https://doi.org/10.1097/01209203-200703000-00007.
8. James, John T. “A New, Evidence-Based Estimate of Patient Harms Associated With Hospital Care.” Journal of Patient Safety, vol. 9, no. 3, September 2013, pp. 122–128. http://dx.doi.org/10.1097/pts.0b013e3182948a69.
9. Kohn, L.T., J.M. Corrigan, and M.S. Donaldson, editors. To Err Is Human: Building a Safer Health System. National Academies Press, 2000.
10. Rassin, M. et al. “Chronology of Medication Errors by Nurses: Accumulation of Stresses and PTSD-like Consequences.” Issues in Mental Health Nursing, vol. 26, no. 8, 2005, pp. 873–886. https://doi.org/10.1080/01612840500184666.