Research Article | In-Press | Volume 14 Issue 11 (November, 2025) | Pages 8 - 15

Attitude Toward AI in Healthcare and the Divide Among the Stakeholders: A Cross-Sectional Study in Saudi Arabia

Department of Family and Community Medicine, Faculty of Medicine, University of Tabuk, Tabuk, Saudi Arabia
Open Access under a Creative Commons license

Abstract

Background: Artificial Intelligence (AI) has the potential to transform healthcare; however, its implementation faces ethical and perceptual barriers among different stakeholders. This study examined attitudes toward AI in healthcare among three such stakeholder groups: individuals with medical sciences backgrounds, individuals with computer science/information technology (IT) backgrounds, and the general public from other backgrounds. Methods: A cross-sectional survey was conducted in Tabuk, Saudi Arabia, from November 24, 2024, to May 15, 2025. A validated 43-item questionnaire assessed demographics, technology affinity, AI familiarity, and attitudes. Data analysis included descriptive statistics, chi-square, and Kruskal-Wallis tests. Results: Among 581 participants (mean age 31.24 years; 52.8% male), 15% had a medical sciences background, 18.2% came from the computer science/IT field, and 66.8% represented the general public. Technology affinity and AI familiarity were highest in the computer science/IT group, followed by the medical sciences group and then the general public (p<0.001). Significant attitude differences emerged on nine of 26 items, with a clear divide over ethical issues and patient safety: the medical sciences group disagreed that a physician should be held liable for harm resulting from not following AI recommendations. All groups agreed on the necessity of physician oversight. The computer science/IT group showed the most positive sentiment, followed by the general public from other backgrounds, while the medical sciences group was the most cautious (p<0.001). Conclusions: This study suggests that individuals with a medical sciences background, although they have a higher technology affinity than the general public, are more cautious toward AI, while those with an IT background are the most optimistic. It also highlights gaps that must be addressed to bridge the divide between these stakeholders, including the development of clear policies governing medicolegal and ethical issues.

Keywords
Artificial Intelligence, Healthcare, Perception, Attitude, Professional Background, Health Sciences, Computer Science, Information Technology

INTRODUCTION

Artificial Intelligence (AI) has made a massive leap in science. Many physicians never anticipated that a tool of such capability would become available, with profound implications for healthcare delivery and patient outcomes. The concept of a machine that can think is not recent; it was first introduced by Alan Turing in 1950 [1]. Since then, AI has evolved from a theoretical construct into sophisticated systems capable of mimicking human cognition and, in some cases, surpassing human capabilities in specific medical tasks [2]. Over the last few decades, we have witnessed how technology has transformed medicine, and the advent of AI has the potential to transform healthcare radically. The integration of advanced AI technologies has demonstrated remarkable potential to revolutionize multiple aspects of healthcare, including diagnosis, treatment planning, personalized medicine, and drug discovery [2-7]. Since 2020, there has been an escalating wave of literature about AI utilization in healthcare and a growing debate about its role [8-10].

 

The COVID-19 pandemic prompted healthcare leaders worldwide to adopt AI in the healthcare sector as an effective tool for managing large volumes of health-related data and improving healthcare delivery under such constraints [11]. Yet, despite its apparent effectiveness, its utilization in routine patient care remains limited owing to ethical, legal, technical, and security concerns [12-14]. These challenges are compounded by varying levels of acceptance among healthcare professionals and the general public, whose perspectives are shaped by cultural backgrounds, educational levels, personal values, and prior experiences with technology [15-19]. Although this is an important topic, only a few studies have explored attitudes toward AI in the Middle East and how professional and scientific backgrounds shape these attitudes [17-26]. To date, no study has collectively examined the public's attitude toward AI in healthcare compared with other key groups, such as individuals with backgrounds in medical sciences and those in the computer science/Information Technology (IT) field.

 

Saudi Arabia has positioned digital transformation of healthcare as a strategic priority, with AI playing a central role in this transformation [11]. The successful integration of AI into healthcare systems requires a comprehensive understanding of the attitudes of various stakeholders toward such integration. Therefore, this study aims to address this knowledge gap by examining the attitudes of these stakeholders toward the use of AI in healthcare. The findings can provide valuable insights and guide strategic planning by identifying stakeholder concerns and perceptual gaps, facilitating the development of targeted approaches for successful AI implementation in healthcare.

 

The objectives of this study are:

 

  • Explore the attitudes toward AI use in healthcare among participants from medical sciences and computer science/IT backgrounds, and the general public
  • Compare technology affinity and AI familiarity among these key stakeholders
  • Analyze how the scientific/professional background shapes attitudes toward various aspects of AI in healthcare

METHODS

Study Design and Settings

A cross-sectional study was conducted to assess the attitudes toward AI in healthcare among the public and other stakeholders. The study took place in the Tabuk region of Saudi Arabia from November 24, 2024, to May 15, 2025.

 

Participants and Sampling

The target population consisted of adults aged 18 and above living in Tabuk, Saudi Arabia. It included both the public and the local university community, to increase the likelihood of reaching individuals with medical sciences and computer science/IT backgrounds. A non-probability convenience sampling method was used. Participants within the university community were approached via the institutional email list with an online link to a Google Form. The outside community was approached with the help of local social influencers, who distributed the link to various groups on the country's most popular social media platform (WhatsApp). No exclusion criteria were applied other than unwillingness to participate. Participants were later stratified by background into three categories: a medical sciences group, a computer science/IT group, and a group for the general public from other backgrounds.

 

Data Collection Tool and Validation

The questionnaire was developed after a literature review. It was based on previously validated instruments used in similar studies, with modifications to address the specific research objectives relevant to our context [26]. The original questionnaire was first translated into Arabic and then a backward translation into English was done independently. A pilot test was conducted with 25 participants to ensure reliability, readability, and cultural appropriateness. Based on participants' feedback, minor adjustments were made to the wording of a few items to improve clarity and readability. The internal consistency of the questionnaire was assessed using Cronbach's α, which yielded a value of 0.76.
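The internal-consistency measure used above can be reproduced from raw item scores. The sketch below is a minimal illustration on hypothetical, perfectly consistent toy responses (not the study's data; the function name is ours), showing the standard Cronbach's α formula: α = (k/(k−1)) × (1 − Σ item variances / variance of total scores).

```python
import numpy as np

# Hypothetical Likert responses: rows = respondents, columns = questionnaire items.
scores = np.array([
    [1, 1, 1, 1],
    [2, 2, 2, 2],
    [3, 3, 3, 3],
    [4, 4, 4, 4],
    [5, 5, 5, 5],
], dtype=float)

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) matrix of item scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

alpha = cronbach_alpha(scores)  # perfectly consistent toy data -> 1.0
```

On real survey data, values around 0.7 or above are conventionally taken to indicate acceptable internal consistency, which is the threshold the reported 0.76 clears.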

 

The final questionnaire consisted of 43 items organized into three main sections:

 

  • Demographic information: Age, gender, education level, professional background/field of study. The background was classified into three categories: medical sciences, computer science and IT, and the general public from other backgrounds
  • Technology affinity and AI experience: Six 5-point Likert items assessed participants' affinity for technology, covering frequency of internet and computer use, confidence in using digital devices, and experience with electronic devices and functions; the average of these items served as the technology affinity score. An additional five dichotomous (yes/no) items reflected the use of smart devices and applications (ownership of smartphones/tablets, use of smartwatches/wearables, use of medical applications, and self-assessed technology expertise). Finally, one 5-point Likert item assessed perceived AI experience
  • Attitudes toward AI in healthcare: 26 Likert-scale items measuring various aspects of AI perception, including perceived benefits, concerns about doctor-patient relationships, trust in AI versus human judgment, data security, regulatory approval, and overall sentiment toward AI in medicine

 

Responses to attitude items were recorded on a 5-point Likert scale (1, strongly disagree; 5, strongly agree). For negatively worded items, scores were reversed during analysis to maintain consistency in interpretation, with higher scores consistently indicating more positive attitudes toward AI.
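The reverse-scoring step can be sketched as follows (a minimal illustration; the item keys are hypothetical, not the questionnaire's actual identifiers). On a 1-5 scale, a raw score s on a negatively worded item maps to 6 − s:

```python
# Hypothetical item keys; negatively worded items are reverse-coded so that
# higher scores always indicate a more positive attitude toward AI.
NEGATIVE_ITEMS = {"ai_scares_me", "impairs_doctor_patient_relationship"}

def recode(item: str, score: int) -> int:
    """Map a raw 5-point Likert score to its analysis value (1<->5, 2<->4, 3 unchanged)."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    return 6 - score if item in NEGATIVE_ITEMS else score
```

For example, a respondent who strongly agrees (5) that AI scares them is recoded to 1, the most negative attitude value.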

 

Sample Size Determination

The required sample size was estimated using an online calculator (OpenEpi V3.01), assuming a hypothesized outcome frequency of 50%, a margin of error of ±5%, and a 95% confidence level. This yielded a required sample size of 386; the 581 participants recruited were therefore considered sufficient.
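The calculator's figure can be approximated with the standard Cochran formula for a single proportion, n = z²p(1−p)/d². This sketch assumes an effectively infinite population; OpenEpi's population-based adjustment and rounding conventions account for the small difference from the reported 386:

```python
import math

def cochran_n(p: float = 0.5, margin: float = 0.05, z: float = 1.96) -> int:
    """Minimum sample size to estimate a proportion p within +/- margin,
    at the confidence level implied by z (1.96 for 95%)."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

required = cochran_n()  # 1.96^2 * 0.25 / 0.05^2 = 384.16, rounded up to 385
```

Using p = 0.5 maximizes p(1−p) and therefore gives the most conservative (largest) sample size when the true proportion is unknown.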

 

Ethical Considerations

The study protocol was reviewed and approved by the local Institutional Review Board at the University of Tabuk (Approval Number: UT-424-237-2024). It was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki and local regulations governing research involving human subjects. Participation was voluntary, and informed consent was obtained from all respondents prior to their participation. Participants were assured of anonymity and confidentiality, and no personally identifiable information was collected.

 

Statistical Analysis

Data were analyzed using SPSS version 27. Descriptive statistics were calculated for all variables, including frequencies and percentages for categorical variables and means and standard deviations for continuous variables. The normality of data was assessed using the Shapiro-Wilk test. Chi-square and Kruskal-Wallis tests were used to test differences between groups as appropriate. A p-value of <0.05 was considered statistically significant for all tests. RStudio (version 2025.09.1) was used to visualize the distribution of technology affinity among participants of different backgrounds.
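To illustrate the group-comparison step, the sketch below computes the Kruskal-Wallis H statistic in pure Python on hypothetical attitude scores for three groups. It is a simplified version that assumes no tied values (SPSS additionally applies a tie correction), and it uses the closed-form chi-square survival function exp(−x/2), which is valid only for df = 2, i.e. exactly three groups:

```python
from math import exp

def kruskal_wallis_h(groups):
    """Kruskal-Wallis H statistic for a list of independent samples.
    Simplified: assumes no tied values (ties would need average ranks)."""
    pooled = sorted((value, gi) for gi, group in enumerate(groups) for value in group)
    n = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    h = 12.0 / (n * (n + 1)) * sum(
        r * r / len(g) for r, g in zip(rank_sums, groups)
    )
    return h - 3 * (n + 1)

# Toy scores for three background groups (hypothetical, no ties).
h = kruskal_wallis_h([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
# With three groups the null distribution is approximately chi-square, df = 2,
# whose survival function has the closed form exp(-x / 2).
p = exp(-h / 2)
```

On this toy data H = 7.2 and p ≈ 0.027, which would be reported as significant at the 0.05 threshold used in the study.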

RESULTS

Participant Characteristics

Of the 606 individuals who accessed the survey link, 581 completed the questionnaire, while 18 declined participation. The demographic characteristics of the respondents are detailed in Table 1. The mean age of participants was 31.24 years, with a relatively balanced gender distribution (52.8% male, 47.2% female). The majority of respondents held a Bachelor's degree (65.4%).

 

Table 1: Characteristics of study participants (N = 581)

| Variable | Category | N (%) or Mean (SD) |
|---|---|---|
| Age | - | 31.24 (11.2) |
| Gender | Male | 307 (52.8) |
| | Female | 274 (47.2) |
| Education level | High school or below | 48 (8.3) |
| | Diploma | 53 (9.1) |
| | Bachelor's | 380 (65.4) |
| | Postgraduate | 100 (17.2) |
| Background | Medical sciences | 87 (15.0) |
| | Computer science/IT | 106 (18.2) |
| | Others (general public) | 388 (66.8) |

 

The sample was diverse, with most participants (66.8%) classified as members of the general public with backgrounds outside medical sciences or computer science/IT.

 

Technology Affinity and AI Experience

Significant differences in technology affinity and experience with AI were observed across professional backgrounds (Table 2). Participants from computer science/IT reported the highest frequency of computer use, confidence with electronic devices, and self-assessed technical expertise, with 71.7% considering themselves tech experts, compared to 27.6% in medical sciences and 29.1% in other fields (p<0.001). This group also reported the highest self-rated knowledge about AI (mean 3.45 on a 5-point scale). The general public reported the lowest frequency of computer use and the lowest self-reported knowledge about AI.

 

These findings indicate significant variation in technology affinity and AI experience by professional background. Computer science/IT participants generally demonstrated higher proficiency and confidence with technology and AI-related applications than the medical sciences and general public groups.

 

Total technology affinity scores differed significantly by professional background group, as illustrated in Figure 1. Participants in computer science/IT demonstrated the highest mean score, followed by the medical sciences group and then the general public (Others) (p<0.001). Scores in the computer science/IT group were concentrated at the upper end of the scale, whereas the medical sciences group and the general public showed wider, more variable distributions. These results suggest that a background in computer-related fields is associated with a greater and more uniform affinity for technology, and that a medical sciences background is associated with a higher affinity for technology than that of the general public from other backgrounds.

 

 

Figure 1: Violin Plot of Technology Affinity by Background

 

Attitudes Toward AI in Healthcare

Analysis of the 26 attitude items revealed statistically significant differences between professional groups for nine items (34.6%), indicating that professional background is a key factor influencing specific views on AI in medicine. For example, computer science/IT professionals were significantly more likely to believe that AI would lead to fewer treatment errors than their counterparts in the medical field (p = 0.005).

 

Table 2: Technology proficiency and AI experience by background

| Variable | Medical Sciences, Mean (SD) / % | Computer Science/IT, Mean (SD) / % | General Public, Mean (SD) / % | p-value |
|---|---|---|---|---|
| Technology affinity | | | | |
| How often do you use the internet (1-5: never-daily) | 5.00 (0.00) | 5.00 (0.00) | 4.92 (0.37) | 0.008 |
| How often do you use a computer (1-5: never-daily) | 3.76 (1.41) | 4.81 (0.46) | 3.48 (1.36) | <0.001 |
| Confidence in using computers/smartphones/tablets (1-5: low-high) | 4.52 (0.59) | 4.60 (0.55) | 4.11 (0.84) | <0.001 |
| Ease of learning a new electronic device/function (1-5: difficult-easy) | 4.06 (0.70) | 4.53 (0.64) | 3.95 (0.85) | <0.001 |
| Ease of learning a new application/electronic task (1-5: difficult-easy) | 4.09 (0.69) | 4.40 (0.75) | 3.93 (0.83) | <0.001 |
| How much do you like using computers (1-5: dislike-like) | 4.48 (0.76) | 4.62 (0.62) | 4.16 (0.85) | <0.001 |
| AI experience | | | | |
| Own smartphone or tablet | 97.7% | 100.0% | 98.2% | 0.341 |
| Used smartwatches | 71.3% | 62.3% | 54.1% | 0.009 |
| Used medical programs/applications | 73.6% | 57.5% | 64.2% | 0.068 |
| Used AI applications | 83.9% | 85.8% | 61.1% | <0.001 |
| Consider yourself a tech expert | 27.6% | 71.7% | 29.1% | <0.001 |
| Knowledge about AI (1-5 scale) | 3.08 (0.70) | 3.45 (0.82) | 2.87 (0.82) | <0.001 |

SD: standard deviation. Percentages were rounded.

 

Table 3: Participants’ attitudes toward AI use in medicine by background

| AI Attitude Item | Medical Sciences Mean (SD) | Computer Science/IT Mean (SD) | General Public Mean (SD) | p-value |
|---|---|---|---|---|
| I think that the use of AI brings benefits for the patient. | 3.69 (1.09) | 3.93 (1.06) | 3.82 (0.96) | 0.210 |
| Doctors will play a less important role in the therapy of patients in the future. | 2.62 (1.36) | 2.49 (1.12) | 2.63 (1.25) | 0.666 |
| Through the use of AI, there will be less treatment errors in the future. | 2.99 (1.22) | 3.54 (1.09) | 3.39 (1.07) | 0.005 |
| AI should not be used in medicine as a matter of principle. | 2.82 (1.21) | 2.58 (1.14) | 2.69 (1.09) | 0.278 |
| Doctors are becoming too dependent on computer systems. | 2.86 (0.86) | 2.91 (1.03) | 3.04 (0.97) | 0.169 |
| The testing of AI before it is used on patients should be carried out by an independent body (e.g. authority, SFDA). | 4.49 (0.71) | 4.40 (0.93) | 4.38 (0.86) | 0.675 |
| I would trust the assessment of an AI more than the assessment of a doctor. | 2.23 (1.03) | 2.54 (1.00) | 2.43 (1.01) | 0.076 |
| Doctors know too little about AI to use it on patients. | 2.67 (1.03) | 3.00 (0.91) | 3.05 (0.84) | 0.005 |
| If a patient has been harmed, a doctor should be held responsible for not following the recommendations of AI. | 2.02 (1.14) | 2.76 (0.99) | 2.53 (0.99) | <0.001 |
| The influence of AI on medical treatment scares me. | 3.21 (1.12) | 3.09 (1.05) | 3.19 (1.05) | 0.888 |
| The use of AI prevents doctors from learning to make their own correct judgment of the patient. | 3.67 (1.13) | 3.40 (1.07) | 3.30 (0.98) | 0.005 |
| If AI predicts a low chance of survival for the patient, doctors will not fight for that patient's life as much as before. | 2.63 (1.38) | 2.89 (1.30) | 3.01 (1.27) | 0.056 |
| The use of AI is changing the demands of the medical profession. | 2.56 (1.30) | 2.92 (1.05) | 3.06 (1.21) | 0.003 |
| I would like my personal medical treatment to be supported by AI. | 2.94 (1.08) | 3.13 (1.06) | 2.84 (1.11) | 0.021 |
| I would make my anonymous patient data available for non-commercial research (universities, hospitals, etc.) if this could improve future patient care. | 3.28 (1.22) | 3.46 (1.04) | 3.37 (1.17) | 0.663 |
| AI-based decision support systems for doctors should only be used for patient care if their benefit has been scientifically proven. | 3.61 (1.03) | 3.85 (1.07) | 3.69 (1.00) | 0.122 |
| I am more afraid of a technical malfunction of AI than of a wrong decision by a doctor. | 3.86 (1.01) | 3.69 (0.97) | 3.76 (0.96) | 0.288 |
| I am not worried about the security of my data. | 2.55 (1.19) | 2.79 (1.26) | 2.82 (1.15) | 0.099 |
| By using AI, doctors will again have more time for the patient. | 3.02 (1.07) | 2.92 (0.93) | 3.19 (1.04) | 0.050 |
| A doctor should always have the final control over diagnosis and therapy. | 4.49 (0.63) | 4.38 (0.96) | 4.33 (0.84) | 0.339 |
| I am worried that AI-based systems could be manipulated from the outside (terrorists, hackers, ...). | 4.07 (0.87) | 4.13 (1.04) | 3.96 (0.95) | 0.078 |
| The use of AI impairs the doctor-patient relationship. | 3.71 (1.07) | 3.28 (1.08) | 3.48 (1.14) | 0.017 |
| The use of AI is an effective instrument against the overload of doctors and the shortage of doctors. | 3.85 (0.90) | 3.57 (0.99) | 3.42 (0.98) | <0.001 |
| I would like my doctor to override the recommendations of AI if he comes to a different conclusion based on his experience or knowledge. | 4.02 (0.89) | 4.04 (0.98) | 3.79 (0.99) | 0.013 |
| The use of AI will reduce the workload of doctors. | 3.52 (1.07) | 3.41 (1.02) | 3.52 (0.90) | 0.765 |
| The use of AI will reduce appointment waiting time. | 3.66 (1.05) | 3.91 (0.99) | 3.80 (1.00) | 0.209 |

AI: Artificial Intelligence, SFDA: Saudi Food and Drug Authority

 

Table 4: Overall sentiment toward AI in medicine by background (taken all together: how positive or negative do participants feel about the use of AI in medicine?)

| Sentiment Level | Medical Sciences | Computer Science/IT | General Public | p-value |
|---|---|---|---|---|
| Very negative | 3.4% | 2.8% | 3.9% | <0.001 |
| Negative | 11.5% | 9.4% | 6.4% | |
| Neutral | 39.1% | 28.3% | 36.6% | |
| Positive | 32.2% | 23.6% | 38.1% | |
| Very positive | 13.8% | 35.8% | 14.9% | |

AI: Artificial intelligence. Percentages were rounded.

 

Conversely, medical professionals expressed greater concern than the IT group that the use of AI would prevent doctors from developing their own clinical judgment (p = 0.005). Additionally, the medical sciences group disagreed that a physician should be held liable in cases of harm resulting from noncompliance with AI recommendations (mean 2.02), while the other groups were nearly neutral (p<0.001). The significant gap and caution among the medical sciences group indicate the need for targeted educational initiatives to build further familiarity and trust, which is crucial for the successful adoption of AI in clinical practice.

 

Across all groups, there was strong agreement on the need for physician oversight and independent testing and regulation. There was also agreement about the fear that AI could be manipulated by intruders. The complete findings are presented in Table 3.

 

Overall Sentiment Toward AI in Medicine

When asked about their overall feeling toward the use of AI in medicine, a significant association was found with professional background (χ² = 31.173, p<0.001), as shown in Table 4. The computer science/IT group reported the most positive sentiment, with a combined 59.4% feeling 'positive' or 'very positive'. This was followed by the general public at 53.0%. The medical sciences group was the most cautious, with 46.0% reporting 'positive' or 'very positive' sentiment and a correspondingly larger proportion (39.1%) feeling 'neutral'. This pattern may reflect the medical sciences group's aforementioned concerns about medicolegal liability and patient safety, and such caution may hinder the adoption of AI in healthcare.
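The association test behind this result is a standard Pearson chi-square of independence. A pure-Python sketch on a small hypothetical contingency table (illustrative counts, not the study's data) looks like this:

```python
def chi_square(observed):
    """Pearson chi-square statistic and degrees of freedom for an
    r x c contingency table given as a list of rows of counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            # Expected count under independence of row and column factors.
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (obs - expected) ** 2 / expected
    df = (len(observed) - 1) * (len(observed[0]) - 1)
    return stat, df

# Hypothetical 2x2 counts (group x sentiment), not the study's table.
stat, df = chi_square([[10, 20], [20, 10]])
```

In the study itself the table is 3 backgrounds × 5 sentiment levels, giving df = 8 for the reported χ² = 31.173.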

DISCUSSION

This study analyzed the attitudes of various stakeholders toward the integration of AI into healthcare in Saudi Arabia. While the findings suggested that the public is generally receptive to AI use in healthcare, this optimism appeared to be moderated by concerns regarding patient safety and data security. Furthermore, this study identified a significant difference in attitudes toward AI in healthcare based on scientific/professional background, revealing a spectrum from cautious optimism among those in the medical sciences to pronounced optimism among those with an IT background.

 

The findings of this study align with and contribute to the emerging body of research on AI perceptions in Saudi Arabia and the broader Arab region. A recent study by Syed et al., conducted in Riyadh among 830 participants, found similarly favorable public opinion toward AI in healthcare, with 84.1% reporting awareness of AI and generally positive perceptions [27]. However, their study did not identify significant differences based on demographic factors and did not stratify by professional background. The current research builds upon this foundation by suggesting that, while the general sentiment may be positive across Saudi regions, statistically significant and practically important variations may exist in relation to professional expertise and technological familiarity. Complementary research by Alshutayli et al. investigated public acceptance of AI as a partial replacement for human doctors among 386 participants across multiple Saudi regions [28]. Their finding that 52.3% were comfortable with this more extreme proposition provides important context for our results. Understanding the difference between AI as a supportive tool (our primary focus) and AI as a replacement technology (their focus) helps clarify an important nuance in public acceptance. Our study's emphasis on AI as a supportive technology may explain the slightly more positive sentiment observed across our professional cohorts, suggesting that implementation strategies emphasizing human-AI collaboration rather than replacement might find greater public acceptance.

 

When situated within the wider Arab context, our findings are consistent with the multinational study by Allam et al. which surveyed 4,492 medical students across nine Arab countries [29]. Their discovery of a significant knowledge deficit regarding AI (87.1% had low knowledge) coupled with high optimism (84.9% believed AI would revolutionize medicine) provides crucial context for understanding the apprehension we observed among our medical sciences group. This paradox suggests that healthcare professionals' caution may stem from a recognized lack of formal teaching or training combined with a deeper awareness of clinical care realities, underscoring the potential need for integrating comprehensive AI education into medical curricula across the Arab region.

 

A strength of our study is the use of questionnaire items that have been applied in different cultural contexts, enabling international comparison. Most of the attitude items on AI perception in our survey are identical to those used by Fritsch et al. in their cross-sectional study of 452 hospital patients in Germany [26]. This parallel provides a unique opportunity to distinguish between culturally specific attitudes and more universal human responses to AI in healthcare. Fritsch et al. reported that 53.2% of respondents rated AI in medicine positively, a figure that aligns closely with the sentiment range of 46.0% to 59.4% we observed across our groups. More significantly, both studies identified a strong consensus on the need for physician oversight: Fritsch et al. noted that patients strongly agreed that AI must be controlled by a physician who retains ultimate responsibility for diagnosis and therapy [26]. This apparent cross-cultural consensus lends support to our findings and suggests that the demand for human-in-the-loop governance models for medical AI may transcend cultural boundaries.

 

International literature provides additional support for our findings. The comprehensive systematic review by Beets et al. of 11 nationally representative US surveys revealed that Americans generally view healthcare as an area where AI applications could be particularly beneficial, yet maintain substantial concerns about AI decision-making and health information privacy [30]. Similarly, the systematic review by Young et al. covering multiple countries found generally positive attitudes toward clinical AI, but with consistent preferences for AI as augmentative rather than replacement technology [17].

 

The finding that professional background appears to influence AI attitudes is supported by previous international literature. The Japanese study by Tamori et al. comparing 399 doctors with 600 members of the public revealed that healthcare professionals were more optimistic about AI in medicine (mean 3.43 vs 3.23 on a 5-point scale) [31]. This pattern aligns with our observation that individuals with healthcare knowledge, while expressing caution, may have a greater understanding of AI's potential benefits. A study in Australia by Stewart et al. found generally positive attitudes toward AI in healthcare, with strong support for integrating AI education into medical curricula [32]. The Canadian Digital Health Survey by Cinalioglu et al. revealed that 42% of Canadians were moderately knowledgeable about AI and 43% were moderately comfortable with AI in healthcare, with interesting age-related variations [33]. These findings collectively suggest that professional background and technical familiarity may be key factors associated with AI acceptance across diverse cultural contexts. However, the current study found that while those in the medical field have a higher affinity for technology than the public, they showed reservations toward AI due to medicolegal issues. The ambiguity and lack of knowledge regarding the medicolegal implications of AI use in health are supported by a recent study in Saudi Arabia [34].

 

While universal patterns emerge from international comparisons, the importance of cultural and regional variations should not be overlooked. A cross-cultural study by Ikkatai et al. comparing attitudes across Japan, the US, Germany, and South Korea found that Asian countries generally exhibited more positive attitudes toward AI, suggesting cultural factors that may influence acceptance patterns [35]. Their findings suggest that cultural factors specific to different regions may significantly influence how AI technologies are perceived and accepted in healthcare settings. Our study contributes to this understanding by indicating how the scientific and professional background in a different cultural context, such as Saudi Arabia, shapes attitudes toward AI in healthcare.

 

Implications for Policy and Practice

The results of this study have implications for the strategic integration of AI into the Saudi healthcare system. The divide between IT and medical stakeholders underscores the need for interdisciplinary collaboration from the outset of AI projects in healthcare. The concerns raised by different stakeholders may serve as a roadmap for policymakers and healthcare leaders. Policy development must prioritize clarifying medicolegal liability, as the strong disagreement from medical professionals on this issue represents a significant barrier to adoption. Furthermore, educational initiatives should be tailored: for the public, to build accurate understanding and manage expectations; for IT professionals, to foster a deeper appreciation of clinical workflows and ethical complexities; and for healthcare providers, to address their concerns and to position them as key partners in the co-design and implementation of AI solutions rather than as passive end-users.


Limitations and Future Directions

This study has several limitations that should be acknowledged. The cross-sectional design cannot establish causal relationships between professional background and attitudes. While our focus on the Tabuk region provides important local data, generalizability across all of Saudi Arabia requires validation, although comparisons of public attitudes with studies from other Saudi regions suggest broad consistency [27, 28]. The nonprobability online convenience sampling approach may introduce selection bias toward more digitally literate participants, potentially inflating measured technology affinity; however, this limitation is shared with much of the international literature in this field. As with all self-administered questionnaires, social desirability bias cannot be ruled out and may have led participants to express greater acceptance and understanding of AI than they genuinely hold. Finally, as a quantitative survey, this study captures the breadth of opinion about AI in healthcare but lacks the depth to fully explore the nuanced reasoning behind these attitudes.


Longitudinal studies are warranted to track the evolution of public perception as AI technologies become more integrated into daily life. Qualitative studies, such as in-depth interviews and focus groups with patients, clinicians, and AI developers, would also be invaluable for a deeper understanding of the hopes, fears, and expectations surrounding AI in healthcare. Finally, expanding the research to include more diverse geographic regions within Saudi Arabia and the wider Middle East would help to contextualize these findings further.

CONCLUSION

The findings of this study suggest that the general public in Saudi Arabia is broadly receptive to the use of AI in healthcare. Professional background is significantly associated with attitudes toward AI in healthcare, with a clear divide between the optimism of IT professionals and the caution of those in the medical sciences, especially regarding liability and patient safety. The alignment of these results with international findings highlights essential elements for effective AI implementation, including human-centric governance, stakeholder-specific education, and collaborative integration involving healthcare professionals. Our findings also highlight the need for clear policies governing the ethical and medicolegal issues associated with AI use in healthcare. Mixed-methods designs are needed for an in-depth examination of attitudes toward AI. Furthermore, broader, multi-regional studies are warranted to examine how attitudes evolve over time and to evaluate the impact of educational interventions and implementation strategies as AI adoption expands.


Data Availability Statement

The datasets are available from the author upon request.


Conflict of Interest

The author has no conflicts of interest to declare.


Funding

No funding was received for this study.

REFERENCES

1. A.M. Turing, "Computing machinery and intelligence," Mind, vol. LIX, no. 236, 1950, pp. 433–460.

2. H.A. Haenssle et al. "Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists," Ann Oncol, vol. 29, no. 8, 2018, pp. 1836–1842.

3. S.A.A. Alowais et al. "Revolutionizing healthcare: the role of artificial intelligence in clinical practice," BMC Medical Education, vol. 23, no. 1, 2023, p. 689.

4. C.C. Bennett and K. Hauser, "Artificial intelligence framework for simulating clinical decision-making: a Markov decision process approach," Artificial Intelligence in Medicine, vol. 57, no. 1, 2013, pp. 9–19.

5. A.A. Mirza et al. "The use of artificial intelligence in medical imaging: a nationwide pilot survey of trainees in Saudi Arabia," Clin Pract, vol. 12, no. 6, 2022, pp. 852–866.

6. S. Secinaro, D. Calandra, A. Secinaro, V. Muthurangu and P. Biancone, "The role of artificial intelligence in healthcare: a structured literature review," BMC Medical Informatics and Decision Making, vol. 21, no. 1, 2021, p. 125.

7. C. Leibig et al. "Combining the strengths of radiologists and AI for breast cancer screening: a retrospective analysis," Lancet Digit Health, vol. 4, no. 7, 2022, pp. e507–e519.

8. A.I. Stoumpos, F. Kitsios and M.A. Talias, "Digital transformation in healthcare: technology acceptance and its applications," Int J Environ Res Public Health, vol. 20, no. 4, 2023.

9. F. Jiang et al. "Artificial intelligence in healthcare: past, present and future," Stroke Vasc Neurol, vol. 2, no. 4, 2017, pp. 230–243.

10. A.R. Pedro et al. "Artificial intelligence in medicine: a comprehensive survey of medical doctor's perspectives in Portugal," PLoS One, vol. 18, no. 9, 2023, p. e0290613.

11. Al Knawy et al. "The Riyadh declaration: the role of digital health in fighting pandemics," Lancet, vol. 396, no. 10262, 2020, pp. 1537–1539.

12. Aldwean and D. Tenney, "Artificial intelligence in healthcare sector: a literature review of the adoption challenges," Open Journal of Business and Management, vol. 12, no. 1, 2024, pp. 129–147.

13. M. Mirbabaie et al. "Artificial intelligence in hospitals: providing a status quo of ethical considerations in academia to guide future research," AI Soc, vol. 37, no. 4, 2022, pp. 1361–1382.

14. Khan et al. "Drawbacks of artificial intelligence and their potential solutions in the healthcare sector," Biomed Mater Devices, 2023, pp. 1–8.

15. Hassan et al. "Clinicians' and patients' perceptions of the use of artificial intelligence decision aids to inform shared decision making: a systematic review," The Lancet, vol. 398, 2021, p. S80.

16. Schleidgen et al. "The concept of ‘interaction’ in debates on human–machine interaction," Humanities and Social Sciences Communications, vol. 10, no. 1, 2023, p. 551.

17. A.T. Young et al. "Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review," Lancet Digital Health, vol. 3, no. 9, 2021, pp. e599–e611.

18. P. Esmaeilzadeh et al. "Patients' perceptions toward human-artificial intelligence interaction in health care: experimental study," J Med Internet Res, vol. 23, no. 11, 2021, p. e25856.

19. Z. Zhang et al. "Patients' perceptions of using artificial intelligence (AI)-based technology to comprehend radiology imaging data," Health Informatics Journal, vol. 27, no. 2, 2021, p. 14604582211011215.

20. H.S.J. Chew and P. Achananuparp, "Perceptions and needs of artificial intelligence in health care to increase adoption: scoping review," J Med Internet Res, vol. 24, no. 1, 2022, p. e32939.

21. S. Gao et al. "Public perception of artificial intelligence in medical care: content analysis of social media," J Med Internet Res, vol. 22, no. 7, 2020, p. e16649.

22. B. Stai et al. "Public perceptions of artificial intelligence and robotics in medicine," Journal of Endourology, vol. 34, no. 10, 2020, pp. 1041–1048.

23. C. Wu et al. "Public perceptions on the application of artificial intelligence in healthcare: a qualitative meta-synthesis," BMJ Open, vol. 13, no. 1, 2023, p. e066322.

24. N. Aljerian et al. "Artificial intelligence in health care and its application in Saudi Arabia," International Journal of Innovative Research in Medical Science, vol. 7, no. 11, 2022, pp. 666–670.

25. A. Wu et al. "Assessment of patient perceptions of artificial intelligence use in dermatology: a cross-sectional survey," Skin Res Technol, vol. 30, no. 3, 2024, p. e13656.

26. J. Fritsch et al. "Attitudes and perception of artificial intelligence in healthcare: a cross-sectional survey among patients," Digital Health, vol. 8, 2022, p. 20552076221116772.

27. W. Syed et al. "Assessment of Saudi public perceptions and opinions towards artificial intelligence in health care," Medicina, vol. 60, no. 6, 2024, p. 938.

28. A.M. Alshutayli et al. "Assessing public knowledge and acceptance of using artificial intelligence doctors as a partial alternative to human doctors in Saudi Arabia: a cross-sectional study," Cureus, vol. 16, no. 7, 2024.

29. H. Allam et al. "Knowledge, attitude, and perception of Arab medical students towards artificial intelligence in medicine and radiology: a multi-national cross-sectional study," European Radiology, vol. 34, no. 7, 2024, pp. 1–14.

30. Beets et al. "Surveying public perceptions of artificial intelligence in health care in the United States: systematic review," J Med Internet Res, vol. 25, 2023, p. e40337.

31. Tamori et al. "Acceptance of the use of artificial intelligence in medicine among Japan’s doctors and the public: a questionnaire survey," JMIR Hum Factors, vol. 9, no. 1, 2022, p. e24680.

32. Stewart et al. "Western Australian medical students’ attitudes towards artificial intelligence in healthcare," PLoS One, vol. 18, no. 8, 2023, p. e0290642.

33. Cinalioglu et al. "Exploring differential perceptions of artificial intelligence in health care among younger versus older Canadians: results from the 2021 Canadian digital health survey," J Med Internet Res, vol. 25, 2023, p. e38169.

34. Alanazi, "Assessing clinicians’ legal concerns and the need for a regulatory framework for AI in healthcare: a mixed-methods study," Healthcare, vol. 13, no. 13, 2025, p. 1487.

35. Ikkatai et al. "The relationship between the attitudes of the use of AI and diversity awareness: comparisons between Japan, the US, Germany, and South Korea," AI & Society, vol. 40, no. 4, 2024, pp. 2369–2383.
