Research Article | In-Press | Volume 15 Special Issue 1 (January to April, 2026) | Pages 44 - 52

Artificial Intelligence Literacy and Intention to Use AI in Clinical Practice among Healthcare Students in Saudi Arabia: A Cross-Sectional Study

1 Department of Nursing, Faculty of Nursing, Majmaah University, Al-Majmaah, Saudi Arabia
2 Nursing Management and Administration, Head of Basic Nursing Care Department, College of Nursing, Majmaah University, Al-Majmaah, 11952, Saudi Arabia
Under a Creative Commons license
Open Access

Abstract

Background: Artificial intelligence (AI) is increasingly shaping healthcare education and clinical practice; however, the preparedness of healthcare students in Saudi Arabia to adopt AI remains unclear. Aim: To assess AI literacy and examine its relationship with intention to use AI among healthcare students at Majmaah University. Methods: A cross-sectional study was conducted with 802 undergraduate students from Nursing, Applied Medical Sciences and Medicine. Data were collected using the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) and a Technology Acceptance Model (TAM)-based behavioural intention scale. Descriptive statistics, correlation and hierarchical regression analyses were performed. Results: A notable “education–interest gap” emerged. Although students reported high interest in AI (M = 5.13/7), 80% had little to no formal AI education and relied mainly on self-learning (71.7%) and social media (65.1%), with only 37.5% learning through coursework. Intention to use AI was moderately strong (M = 4.85/7), while overall readiness was moderate (M = 3.02/5). Domain scores showed higher “Ability” (M = 3.15) than “Cognition” (M = 2.90) or “Ethics” (M = 2.93). AI literacy correlated positively with intention (r = 0.381, p<0.001). Regression analysis confirmed AI literacy as a significant predictor, explaining an additional 11.2% of variance in intention (β = 0.383, p<0.001). Conclusion: Students show strong interest but limited formal preparation for AI adoption. Enhancing AI education, particularly theory and ethical/legal content, is essential to support safe and effective clinical integration aligned with Vision 2030.

Keywords
Artificial Intelligence, AI Literacy, Behavioural Intention, TAM, Clinical Practice, Healthcare Education

INTRODUCTION

Artificial intelligence (AI) has rapidly evolved into a transformative force across various sectors, with healthcare emerging as one of the fields experiencing the most profound impact. AI technologies ranging from machine learning algorithms and predictive modelling to automated diagnostic systems and virtual simulation platforms have fundamentally reshaped approaches to clinical diagnosis, treatment planning and health-professional education [1]. As these innovations continue to advance, AI is increasingly recognized as a central driver of improved accuracy, efficiency and safety in patient care. Evidence suggests that AI-supported tools can enhance diagnostic precision, streamline clinical workflows, reduce errors and support more personalized approaches to treatment [2]. Consequently, the integration of AI into healthcare systems has introduced both new opportunities and new expectations for future practitioners.

 

The growing digitalization of healthcare has intensified the need for a workforce capable of effectively engaging with AI technologies. It is no longer sufficient for healthcare professionals to possess traditional clinical skills; they must also demonstrate digital competence, technical literacy and the capacity to interpret and evaluate AI-generated outputs [3]. Within this context, AI literacy has emerged as a critical competency that encompasses theoretical understanding, practical skills, ethical and legal awareness and readiness to adopt AI tools in clinical environments [4,5]. Developing AI literacy enables students to critically appraise AI-driven recommendations, ensure patient safety during AI-supported decision-making and collaborate effectively with intelligent clinical systems.

 

Despite the increasing importance of AI literacy, international research indicates that healthcare students often feel unprepared to engage with AI technologies in practical settings. Studies conducted in Europe and Asia report that although medical and nursing students generally demonstrate positive attitudes toward AI, they frequently exhibit limited confidence and inadequate readiness to implement AI tools in clinical practice [6,7]. These findings suggest a substantial educational gap rooted in insufficient exposure to AI-related content within traditional health-science curricula. Students commonly express uncertainty regarding the ethical, legal and professional implications of AI, as well as concerns about its potential impact on job roles, clinical autonomy and patient trust. In the Saudi Arabian context, the urgency of addressing this gap is heightened by national digital transformation goals. Saudi Vision 2030 places significant emphasis on integrating advanced technologies, including AI, big data, telemedicine and automation, into the healthcare sector to enhance quality, accessibility and operational efficiency. The Ministry of Health, in collaboration with the Saudi Data and Artificial Intelligence Authority (SDAIA), has pioneered several initiatives aimed at expanding the application of AI in clinical care, health research and medical training [8]. These initiatives underscore the country’s commitment to building a digitally skilled health workforce capable of leveraging intelligent technologies. However, despite these national-level priorities, empirical studies exploring AI literacy, readiness and intention to use AI among healthcare students remain scarce, particularly in regional academic institutions such as Majmaah University.

 

Objectives

 

  • To measure the level of AI literacy and readiness among healthcare students
  • To assess students’ intention to use AI in clinical practice
  • To examine the relationship between AI literacy and the intention to use AI
  • To identify demographic and academic predictors, such as academic program, year of study and previous exposure to AI, that influence AI literacy and intention to use AI

METHODS

A quantitative, cross-sectional, descriptive–correlational study was conducted at Majmaah University, Saudi Arabia, including undergraduate students from three colleges: the College of Medicine, the College of Nursing and the College of Applied Medical Sciences. These colleges represent the primary health-science disciplines within the university, providing a diverse sample of future healthcare practitioners. A convenience sampling technique was used to recruit participants; an online survey link was distributed to all students enrolled in the target colleges across all academic levels. Each of the three colleges, Nursing, Applied Medical Sciences and Medicine, enrols approximately 700–800 students, giving a total population of about 2,100–2,400 male and female students. The minimum sample size was calculated using the formula for finite populations described by Krejcie and Morgan and outlined in Sample Size Determination for Social Science Research. Assuming a 95% confidence level, a 5% margin of error and a population proportion of 0.5, the required sample size was estimated using the equation:

 

n = N / (1 + N(e)²)

 

Substituting the upper population estimate (N = 2,400) and e = 0.05, the computed minimum sample size was 343 participants.
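The computation above can be sketched in a few lines of Python (a checking aid, not part of the study's methods; the formula and inputs are those reported in the text):

```python
import math

def finite_population_sample(N: int, e: float = 0.05) -> int:
    """Minimum sample size n = N / (1 + N * e^2), rounded up."""
    return math.ceil(N / (1 + N * e * e))

# The reported enrolment range is roughly 2,100-2,400 students.
low = finite_population_sample(2100)   # 336
high = finite_population_sample(2400)  # 343
print(low, high)
```

At the upper end of the reported enrolment range, the formula returns the 343-participant minimum used by the study; across the full range it yields 336–343.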

 

Additionally, an a priori power analysis using G*Power 3.1.9.7 for a two-tailed correlation test (α = 0.05, power = 0.80, medium effect size ρ = 0.30) indicated a minimum of 85 participants to detect significant relationships between study variables. To ensure both statistical power and representativeness, the study aimed to recruit at least 343 participants, with proportional distribution across the three colleges according to enrolment and gender ratios.
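The G*Power figure can be cross-checked analytically with Fisher's z transformation (a hedged sketch, not the software's exact algorithm; the constants are the standard two-tailed α = 0.05 and power = 0.80 critical values):

```python
import math

def n_for_correlation(rho: float, z_alpha: float = 1.959964,
                      z_beta: float = 0.841621) -> int:
    """Approximate N needed to detect correlation rho (two-tailed test):
    n = ((z_alpha + z_beta) / C)^2 + 3, where C = atanh(rho)."""
    C = math.atanh(rho)  # 0.5 * ln((1 + rho) / (1 - rho))
    return math.ceil(((z_alpha + z_beta) / C) ** 2 + 3)

print(n_for_correlation(0.30))  # 85, matching the reported minimum
```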

 

Inclusion Criteria

 

  • Currently enrolled undergraduate students in the three colleges (Nursing, Applied Medical Sciences and Medicine)

 

Exclusion Criteria

 

  • Interns and postgraduate students; incomplete responses were also excluded from analysis

 

Data Collection

The survey instrument comprised three main sections. The first section collected demographic and academic information, including gender, age, major, year of study, interest in AI and prior AI training. Interest in AI was rated on a 7-point Likert scale (1 = not interested at all to 7 = extremely interested), while prior AI training was assessed with four response options ranging from “hardly any training” to “more than 120 hours of intensive AI coursework.” The second section measured AI literacy and readiness using the Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) [4], a 22-item tool covering cognition, ability, vision and ethics on a 5-point Likert scale (1 = strongly disagree to 5 = strongly agree). Higher scores indicated greater readiness and the scale has shown strong reliability (α = 0.93). The English version was reviewed by two subject experts to ensure clarity and contextual relevance. The final section assessed behavioural intention to use AI in clinical practice using three items adapted from the Technology Acceptance Model (TAM) [9], rated on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). Example items included “I intend to use AI tools in clinical contexts” and “I plan to use AI tools in my future clinical practice.”

 

Pilot Study

A pilot study was conducted with a sample of 25–30 students from various programs to evaluate the clarity, completion time and reliability of the survey instrument. Participants were asked to provide feedback on the wording, structure and ease of understanding of the items. Based on their feedback, minor adjustments were made where necessary to improve clarity and consistency. As no major issues arose, the pilot data were excluded from the main analysis to maintain the integrity of the final dataset.

 

Data Analysis

Data were analysed using SPSS version 29. Descriptive statistics summarized the data and Cronbach’s alpha (α ≥0.70) assessed reliability. Pearson’s correlation examined the relationship between AI literacy and intention to use AI, while multiple regression identified significant predictors. A p-value <0.05 was considered statistically significant.
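For illustration, the Cronbach's alpha reliability check can be computed directly from item scores. This is a minimal sketch with hypothetical data, not the study's dataset:

```python
def cronbach_alpha(scores):
    """scores: list of respondents, each a list of item ratings.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores[0])

    def var(xs):  # sample variance (ddof = 1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical 5 respondents x 3 items on a 5-point scale
data = [[4, 4, 5], [3, 3, 3], [5, 4, 5], [2, 2, 1], [4, 5, 4]]
print(round(cronbach_alpha(data), 2))  # → 0.93
```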

RESULTS

A total of 802 undergraduate healthcare students at Majmaah University completed the survey, well above both the minimum target of 343 and the 85 participants indicated by the power analysis, which strengthens the precision of the estimates (Table 1).

 

Table 1: Descriptive Summary of Demographic & Background Information (N=802)

| Demographic & Background Information | Frequency | Percent |
|---|---|---|
| **Age** | | |
| 18–20 years | 235 | 29.3 |
| 21–25 years | 407 | 50.7 |
| 25 years & above | 160 | 20.0 |
| **Gender** | | |
| Female | 574 | 71.6 |
| Male | 228 | 28.4 |
| **College/Program** | | |
| College of Medicine | 200 | 24.9 |
| College of Applied Medical Sciences | 231 | 28.8 |
| College of Nursing | 371 | 46.3 |
| **Year of Study** | | |
| 1 | 99 | 12.3 |
| 2 | 119 | 14.8 |
| 3 | 183 | 22.8 |
| 4 | 153 | 19.1 |
| 5 | 106 | 13.2 |
| 6 | 36 | 4.5 |
| 7 | 50 | 6.2 |
| 8 | 56 | 7.0 |
| **Heard about AI in Healthcare before?** | | |
| No | 131 | 16.3 |
| Yes | 671 | 83.7 |
| **Primary Sources of AI Exposure** (multiple responses) | | |
| Coursework | 301 | 37.5 |
| Workshops/Seminars | 276 | 34.4 |
| Self-learning/Online | 575 | 71.7 |
| Clinical Placement | 169 | 21.1 |
| Social Media | 522 | 65.1 |
| Other (“Gemini”) | 1 | 0.1 |
| **Interest in AI** (mean=5.13) | | |
| (1) Not interested at all | 42 | 5.2 |
| (2) | 24 | 3.0 |
| (3) | 84 | 10.5 |
| (4) | 113 | 14.1 |
| (5) | 172 | 21.4 |
| (6) | 105 | 13.1 |
| (7) Extremely interested | 262 | 32.7 |
| **Prior AI Training** | | |
| Hardly any training | 396 | 49.4 |
| <30 hours of coursework or self-learning | 248 | 30.9 |
| >30 hours of coursework or self-learning | 162 | 20.2 |
| >120 hours of intensive AI coursework or study | 55 | 6.9 |
| **Digital/Tech Confidence** (mean=3.51): “I am confident using new digital tools in my studies.” | | |
| Strongly disagree | 47 | 5.9 |
| Disagree | 22 | 2.7 |
| Neutral | 336 | 41.9 |
| Agree | 269 | 33.5 |
| Strongly agree | 128 | 16.0 |

 

The majority of participants were female (71.6%, n = 574), while 28.4% (n = 228) were male. The age distribution showed that most students were between 21 and 25 years old (50.7%), followed by those aged 18–20 years (29.3%). By academic discipline, the College of Nursing contributed the largest group (371, 46.3%), followed by the College of Applied Medical Sciences (231, 28.8%) and the College of Medicine (200, 24.9%). Participants were distributed across all years of study, with the highest participation from Year 3 students (22.8%).

 

The majority of students (83.7%) had heard about AI in healthcare before this study, but the sources of this exposure varied considerably. As shown in Figure 1, most participants gained their knowledge through informal channels: 71.7% reported “Self-learning/Online” and 65.1% “Social Media” as primary sources. In contrast, formal educational sources were used far less, with only 37.5% reporting coursework and 21.1% clinical placement. Structured AI training was similarly scarce: approximately half of the participants (49.4%) reported having “hardly any training”, 30.9% had less than 30 hours of coursework or self-learning and only a small minority (6.9%) had completed intensive training (>120 hours).

 

 

Figure 1: AI Exposure and Training

 

Despite the limited formal training, interest in AI was high: the mean of 5.13 on the 7-point scale falls between “Slightly Interested” and “Interested”, confirming that students hold a positive attitude and are open to learning even without training. On this scale, 32.7% of students selected “Extremely Interested” (score = 7) and 21.4% selected a score of 5; conversely, only 5.2% reported being “not interested at all”. Self-reported digital confidence was moderate, with the largest proportion of students (41.9%) remaining neutral, while 33.5% agreed and 16.0% strongly agreed that they were confident using new digital tools.

 

 

Figure 2: Prior AI Training

 

Table 2 presents the descriptive statistics for the Medical Artificial Intelligence Readiness Scale (MAIRS-MS). The overall mean score for AI Literacy and Readiness was 3.02±0.983. On a 5-point scale, this places students just above the “Neutral” midpoint, suggesting they are in a transitional state: neither fully unprepared nor fully ready to integrate AI into clinical practice.

 

Table 2: Descriptive Summary of AI Literacy and Readiness (MAIRS-MS), N=802

| # | Item / Domain | (1) n (%) | (2) n (%) | (3) n (%) | (4) n (%) | (5) n (%) | M | SD |
|---|---|---|---|---|---|---|---|---|
| | **AI Literacy and Readiness (MAIRS-MS)** | (16.2) | (18.9) | (27.2) | (22.2) | (15.5) | 3.02 | 0.983 |
| | **Cognition** | (17.9) | (19.9) | (28.5) | (21.4) | (12.3) | 2.90 | 1.025 |
| 1 | I can define the basic concepts of data science. | 270 (33.7) | 157 (19.6) | 181 (22.6) | 111 (13.8) | 83 (10.3) | 2.48 | 1.350 |
| 2 | I can define the basic concepts of statistics. | 136 (17.0) | 166 (20.7) | 254 (31.7) | 160 (20.0) | 86 (10.7) | 2.87 | 1.224 |
| 3 | I can explain how AI systems are trained. | 137 (17.1) | 192 (23.9) | 213 (26.6) | 167 (20.8) | 93 (11.6) | 2.86 | 1.256 |
| 4 | I can define the basic concepts and terminology of artificial intelligence. | 122 (15.2) | 173 (21.6) | 226 (28.2) | 186 (23.2) | 95 (11.8) | 2.95 | 1.237 |
| 5 | I can properly analyse data obtained by AI in healthcare. | 129 (16.1) | 165 (20.6) | 250 (31.2) | 164 (20.4) | 94 (11.7) | 2.91 | 1.231 |
| 6 | I can differentiate between the functions and features of AI-related tools and applications. | 120 (15.0) | 131 (16.3) | 249 (31.0) | 196 (24.4) | 106 (13.2) | 3.05 | 1.239 |
| 7 | I can organize workflows in accordance with the logic of AI. | 112 (14.0) | 168 (20.9) | 242 (30.2) | 175 (21.8) | 105 (13.1) | 2.99 | 1.230 |
| 8 | I can express the importance of data collection, analysis, evaluation and safety for the development of AI in healthcare. | 121 (15.1) | 127 (15.8) | 212 (26.4) | 212 (26.4) | 130 (16.2) | 3.13 | 1.288 |
| | **Ability** | (13.8) | (16.9) | (27.3) | (24.8) | (17.3) | 3.15 | 1.102 |
| 9 | I can use AI-based information in combination with my professional knowledge. | 120 (15.0) | 131 (16.3) | 205 (25.6) | 216 (26.9) | 130 (16.2) | 3.13 | 1.290 |
| 10 | I can use AI technologies effectively and efficiently in healthcare delivery. | 104 (13.0) | 142 (17.7) | 238 (29.7) | 202 (25.2) | 116 (14.5) | 3.10 | 1.232 |
| 11 | I can use artificial intelligence applications in accordance with their purpose. | 110 (13.7) | 130 (16.2) | 181 (22.6) | 223 (27.8) | 158 (19.7) | 3.24 | 1.313 |
| 12 | I can access, evaluate, use, share and create new knowledge using information and communication technologies. | 108 (13.5) | 131 (16.3) | 223 (27.8) | 208 (25.9) | 132 (16.5) | 3.16 | 1.264 |
| 13 | I can explain how AI applications in healthcare offer solutions to problems. | 107 (13.3) | 135 (16.8) | 231 (28.8) | 197 (24.6) | 132 (16.5) | 3.14 | 1.260 |
| 14 | I find it valuable to use AI for education, service and research purposes. | 103 (12.8) | 128 (16.0) | 185 (23.1) | 199 (24.8) | 187 (23.3) | 3.30 | 1.329 |
| 15 | I can explain the AI applications used in healthcare services to patients. | 121 (15.1) | 149 (18.6) | 249 (31.0) | 167 (20.8) | 116 (14.5) | 3.01 | 1.256 |
| 16 | I can choose the proper AI application for problems encountered in healthcare. | 115 (14.3) | 136 (17.0) | 238 (29.7) | 176 (21.9) | 137 (17.1) | 3.10 | 1.279 |
| | **Vision** | (15.6) | (16.7) | (28.0) | (23.3) | (16.5) | 3.08 | 1.152 |
| 17 | I can explain the limitations of AI technology. | 142 (17.7) | 137 (17.1) | 242 (30.2) | 161 (20.1) | 119 (14.9) | 2.97 | 1.295 |
| 18 | I can explain the strengths and weaknesses of AI technology. | 124 (15.5) | 134 (16.7) | 225 (28.1) | 182 (22.7) | 136 (17.0) | 3.09 | 1.299 |
| 19 | I can foresee the opportunities and threats that AI technology can create. | 108 (13.5) | 131 (16.3) | 205 (25.6) | 217 (27.1) | 141 (17.6) | 3.19 | 1.281 |
| | **Ethics** | (18.4) | (23.5) | (22.9) | (16.6) | (18.5) | 2.93 | 1.143 |
| 20 | I can use health data in accordance with legal and ethical norms. | 116 (14.5) | 102 (12.7) | 194 (24.2) | 191 (23.8) | 199 (24.8) | 3.32 | 1.356 |
| 21 | I can act in accordance with ethical principles while using AI technologies. | 159 (19.8) | 221 (27.6) | 182 (22.7) | 113 (14.1) | 127 (15.8) | 2.79 | 1.341 |
| 22 | I can follow the legal regulations regarding the use of AI technologies in healthcare. | 168 (20.9) | 243 (30.3) | 176 (21.9) | 96 (12.0) | 119 (14.8) | 2.69 | 1.328 |

Scale: 1: Strongly disagree, 2: Disagree, 3: Neutral, 4: Agree, 5: Strongly agree. Statistics: M: Mean, SD: Standard Deviation.

 

There was a clear contrast between the Ability and Cognition domains. Ability was the highest-scoring domain (M = 3.15); Items 11 and 14 reflect students who are comfortable with the idea of using digital tools, which may be attributed to their being “digital natives” accustomed to technology in daily life. Conversely, Cognition, which measures technical understanding, was the lowest domain (M = 2.90). Item 1 received the lowest score in the entire survey (M = 2.48), falling into the “Disagree” range and Item 2 was also notably low (M = 2.87).

 

Table 3: Descriptive Summary of Intention to Use AI in Clinical Practice, N=802

| # | Item | (1) n (%) | (2) n (%) | (3) n (%) | (4) n (%) | (5) n (%) | (6) n (%) | (7) n (%) | M | SD |
|---|---|---|---|---|---|---|---|---|---|---|
| | **Intention to Use AI in Clinical Practice (overall)** | (6.3) | (6.5) | (12.0) | (12.8) | (18.7) | (21.4) | (22.4) | 4.85 | 1.594 |
| 1 | I intend to use AI tools in clinical contexts. | 64 (8.0) | 38 (4.7) | 105 (13.1) | 106 (13.2) | 153 (19.1) | 164 (20.4) | 172 (21.4) | 4.78 | 1.837 |
| 2 | I will frequently use AI applications in my clinical training. | 47 (5.9) | 56 (7.0) | 90 (11.2) | 92 (11.5) | 166 (20.7) | 169 (21.1) | 182 (22.7) | 4.88 | 1.799 |
| 3 | I plan to use AI tools in my future clinical practice. | 40 (5.0) | 62 (7.7) | 94 (11.7) | 109 (13.6) | 131 (16.3) | 181 (22.6) | 185 (23.1) | 4.89 | 1.799 |

Scale: 1: Strongly disagree, 2: Disagree, 3: Slightly disagree, 4: Neutral, 5: Slightly agree, 6: Agree, 7: Strongly agree. Statistics: M: Mean, SD: Standard Deviation.

 

The overall mean score for intention to use AI in clinical practice was 4.85±1.594, indicating a moderately positive tendency, falling between “Neutral” and “Slightly Agree”. Students showed the strongest intention regarding their long-term careers: Item 3 received the highest score (4.89±1.799), closely followed by Item 2 on use during clinical training (4.88±1.799), while the general statement of intent (Item 1) was slightly lower (4.78±1.837). The relatively high standard deviations (all ≥1.59) suggest a lack of consensus: while a large portion of students are willing to adopt AI (approximately 62.5% selected “Slightly Agree” or higher), a notable subgroup remains neutral or hesitant. This spread may reflect the differing levels of digital confidence and AI exposure described in the demographic analysis.

 

Cronbach’s alpha coefficient was used to assess the internal consistency of the Medical Artificial Intelligence Readiness Scale (MAIRS-MS) and the Intention to Use AI scale. According to Table 4, the overall MAIRS-MS demonstrated excellent reliability, with α of 0.967 and all four subdomains of AI literacy exceeded the recommended threshold of 0.70. For instance, Cronbach’s α was 0.928 for Cognition and 0.951 for Ability, showing particularly high internal consistency. The Intention to Use AI scale also demonstrated high reliability with α = 0.854. These results confirm that the instruments used in this study were reliable and consistent for the sample population at Majmaah University.

Table 4: Reliability Analysis Statistics

| Domain | N of Items | Cronbach’s alpha |
|---|---|---|
| AI Literacy and Readiness | 22 | 0.967 |
| Cognition | 8 | 0.928 |
| Ability | 8 | 0.951 |
| Vision | 3 | 0.871 |
| Ethics | 3 | 0.811 |
| Intention to Use AI in Clinical Practice | 3 | 0.854 |

 

 

Figure 3: Percent Distribution for Ability Domain Items

 

A Pearson correlation coefficient was computed to measure the relationship between AI literacy/readiness and the intention to use AI in clinical practice. As presented in the correlation matrix in Table 5, there was a statistically significant positive correlation between overall AI Literacy and Intention to Use AI (r = 0.381, p<0.001). All four subdomains of AI literacy were also significantly correlated with intention (p<0.001). The strongest relationship was between the Ability domain and Intention to Use AI (r = 0.373), followed by Cognition (r = 0.343) and Vision (r = 0.321). The Ethics domain showed the weakest correlation (r = 0.298).
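The coefficient itself is straightforward to reproduce; a minimal pure-Python Pearson r (illustrated with toy data, not the study's dataset) looks like:

```python
def pearson_r(x, y):
    """Pearson correlation: covariance of x and y over the product of their SDs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy example: perfectly linear data gives r = 1.0
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # → 1.0
```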

 

Table 5: Correlation Matrix

| Parameter | AI Literacy & Readiness | Cognition | Ability | Vision | Ethics | Intention to Use AI |
|---|---|---|---|---|---|---|
| AI Literacy and Readiness | 1 | - | - | - | - | - |
| Cognition | 0.934** | 1 | - | - | - | - |
| Ability | 0.953** | 0.833** | 1 | - | - | - |
| Vision | 0.872** | 0.767** | 0.795** | 1 | - | - |
| Ethics | 0.741** | 0.585** | 0.644** | 0.616** | 1 | - |
| Intention to Use AI | 0.381** | 0.343** | 0.373** | 0.321** | 0.298** | 1 |

**Correlation is significant at the 0.01 level (2-tailed)

 

A hierarchical regression was performed to predict intention to use AI (Table 6). In the first step, demographic and background variables explained 10.3% of the variance (R2 = 0.103). In the second step, AI Literacy and Readiness (MAIRS-MS) was added to the model, significantly improving its predictive power (Fchange(1, 783) = 111.612, p<0.001). The final model (Model 2) explained 21.5% of the total variance in intention (R2 = 0.215), with AI Literacy accounting for an additional 11.2% of the variance (R2change = 0.112) over and above the demographic and background variables. In the final model, AI Literacy was the strongest unique predictor of intention (β = 0.383, p<0.001), indicating that for every 1-SD increase in AI literacy, intention to use AI increases by 0.383 SD, holding all other factors constant. Even after controlling for literacy, interest in AI retained significant predictive power (β = 0.185, p<0.001), suggesting that literacy and interest are distinct motivators of adoption.
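The reported F-change statistic can be recovered from the R² values as a consistency check, using the standard increment-in-R² formula F = (ΔR²/Δk) / ((1 − R²_full)/df₂) with the degrees of freedom reported above:

```python
def f_change(r2_full: float, delta_r2: float, delta_k: int, df2: int) -> float:
    """F statistic for the increment in R^2 when delta_k predictors are added."""
    return (delta_r2 / delta_k) / ((1 - r2_full) / df2)

# Values reported in Table 6: R^2 = 0.215, delta R^2 = 0.112, df = (1, 783)
print(round(f_change(0.215, 0.112, 1, 783), 1))  # ≈ 111.7
```

This reproduces the reported 111.612 to within rounding error in the published R² values.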

 

 

Figure 4: Percent Distribution for Intention to Use AI in Clinical Practice Domain Items

 

 

Figure 5 a,b: Histogram & Normal Q-Q Plot of Regression Standardized Residuals

 

Table 6: Hierarchical Regression Analysis Predicting Intention to Use AI in Clinical Practice

| Predictor | Model 1 β | Model 1 t | Model 2 β | Model 2 t |
|---|---|---|---|---|
| (Constant) | - | 9.304** | - | 7.908** |
| *Demographics* | | | | |
| Age | -0.107** | -2.674 | -0.102** | -2.735 |
| Gender (Male) | 0.053 | 1.470 | 0.064 | 1.920 |
| College (Nursing) | 0.008 | 0.178 | -0.011 | -0.273 |
| College (Medicine) | -0.010 | -0.251 | 0.003 | 0.066 |
| *Background* | | | | |
| Interest in AI | 0.256** | 7.213 | 0.185** | 5.450 |
| Prior Training (<30h) | -0.089 | -1.445 | -0.084 | -1.458 |
| Coursework Exposure | 0.074* | 2.021 | 0.078* | 2.282 |
| *Main Predictor* | | | | |
| AI Literacy (MAIRS-MS) | - | - | 0.383** | 10.565 |
| **Model Summary** | | | | |
| R² | 0.103 | | 0.215 | |
| Adjusted R² | 0.083 | | 0.197 | |
| ΔR² | 0.103 | | 0.112 | |
| F Change | 5.292** | | 111.612** | |

N = 802, β: Standardized Beta Coefficient, *p<0.05, **p<0.01

 

 

Figure 6: Scatterplot of Regression Standardized Residuals

DISCUSSION

This study investigated healthcare students’ levels of AI literacy and readiness and examined how these factors shape their intention to adopt artificial intelligence within clinical practice. The results provide meaningful insights into the preparedness of future healthcare professionals for digital transformation and illuminate critical areas where educational reform is urgently required. A central finding of this study is the notable gap between students’ strong enthusiasm for AI and their limited formal exposure to AI-focused educational content. This discrepancy reflects a structural shortcoming within current healthcare curricula. Recent empirical evidence similarly demonstrates that while medical and health-profession students exhibit high interest in AI, they frequently report minimal formal instruction or structured competency-based AI training [10,11]. Studies among nursing and allied health students echo this pattern, underscoring that positive attitudes are often accompanied by low readiness and insufficient curricular integration [7,12]. The students’ reliance on self-directed and informal learning observed in this study aligns with global trends, where learners increasingly turn to online media to compensate for curricular gaps, raising questions regarding the accuracy, quality and ethical grounding of such information [13]. Taken together, these findings emphasize the urgent need for comprehensive, evidence-informed AI content within undergraduate healthcare curricula, consistent with WHO [14] recommendations for fostering safe, ethical and competency-based digital health education.

 

Although participants demonstrated a moderately high intention to use AI, their overall readiness, particularly in relation to theoretical understanding, interpretation of AI outputs and ethical competence, was only moderate. This discrepancy indicates that intention alone does not translate into meaningful or responsible AI integration. A similar conclusion was highlighted in the systematic review by El Arab et al. [15], which revealed that nursing students’ positive perceptions of AI often coexist with insufficient conceptual, procedural and ethical preparedness. The finding that students rated their “Ability” higher than “Cognition” and “Ethics” suggests a superficial sense of confidence that may not reflect deep or applied competence in AI use. This phenomenon has been documented internationally; Ahmad et al. [16] found that students tend to overestimate their practical familiarity with AI while lacking foundational understanding and ethical reasoning. These gaps are concerning in light of global frameworks that stress fairness, transparency, safety and accountability in healthcare AI [14]. Therefore, AI training must extend beyond technical operation to include critical appraisal, interpretation, legal considerations and ethical decision-making. The study’s finding that AI literacy is a significant predictor of students’ intention to adopt AI, even after controlling for demographic characteristics, reinforces the central role of knowledge and competence in technology adoption. This relationship aligns strongly with the Technology Acceptance Model (TAM), which posits that perceived usefulness and perceived competence are core determinants of behavioural intention [17]. Parallel results from European and Asian contexts provide additional support: Laupichler et al. [18] demonstrated that German medical students with higher AI literacy had more positive attitudes and stronger willingness to integrate AI into their future practice. 
Similar associations were reported in cross-national investigations by Busch et al. [19] and Adzim et al. [20]. These converging findings underscore that enhancing AI literacy, particularly conceptual, analytical and ethical dimensions, is essential for improving students’ readiness and for equipping the future workforce to engage safely and effectively with AI technologies. This evidence collectively supports the integration of structured, competency-based AI education within health-professional programs, including simulations, case-based learning and interprofessional digital health modules.

 

The results also hold broader implications for national and institutional strategic agendas, particularly in countries undergoing rapid digital health transformation such as Saudi Arabia under Vision 2030. While students’ positive attitudes toward AI represent an enabling foundation, the moderate levels of literacy, conceptual understanding and ethical awareness highlight potential risks, including inappropriate tool use and excessive reliance on algorithmic outputs. Addressing these gaps will require revising curricula to include competency-based AI modules, embedding ethical training throughout clinical education and offering interprofessional learning opportunities that align with real-world digital health workflows. Despite its contributions, the study is limited by its single-institution sample and reliance on self-reported measures, which may introduce bias. Future research should incorporate multi-institutional sampling, longitudinal designs to track literacy development over time and mixed-methods approaches to deepen understanding of how AI competence influences actual clinical behaviour.

CONCLUSIONS

This study contributes valuable evidence regarding healthcare students’ readiness to adopt artificial intelligence in clinical practice by examining AI literacy, readiness levels and predictors of adoption intention. The findings reveal strong interest but insufficient formal training, highlighting critical gaps in conceptual, practical and ethical competence. By identifying these gaps and linking them to students’ adoption intentions, the study offers direction for curriculum enhancement and workforce development. Strengthening AI education through structured, competency-based modules, ethical training and clinically integrated simulations will support the development of a digitally empowered healthcare workforce. In the context of national digital health priorities, such as Saudi Vision 2030, these efforts are essential to ensure safe, effective and ethically grounded AI adoption. Future multi-centre and longitudinal research will be crucial for further understanding how AI literacy evolves and how it ultimately influences clinical practice.

 

Limitations

Despite its contributions, the study has several limitations that should be acknowledged. First, data were collected from a single institution, which may limit the generalizability of findings to other healthcare colleges or regions. Second, the use of self-reported measures carries the risk of social-desirability bias and may not fully reflect students’ actual competence or behaviours when interacting with AI systems. Third, the cross-sectional design restricts the ability to assess how AI literacy and readiness evolve over time or how they influence real clinical performance. Future research should address these limitations by employing multi-centre and longitudinal designs and incorporating objective assessments or performance-based measures. Such efforts will deepen understanding of how AI competencies develop and how they translate into safe, ethical and effective clinical practice.

Ethical Statement

Ethical approval was obtained from the Majmaah University Institutional Review Board (HA-01-R-088). Participation was voluntary, with electronic consent obtained before data collection.
