Artificial Intelligence in Medicine

Turk Thorac J. Published online 18 September 2025.
1. Clinic of Chest Diseases, Süreyyapaşa Chest Diseases and Thoracic Surgery Training and Research Hospital, İstanbul, Türkiye
Received Date: 26.01.2025
Accepted Date: 07.08.2025
E-Pub Date: 18.09.2025

Abstract

Artificial intelligence (AI) holds the potential to influence and change the world through many different fields such as science, economics, technology, and art. The modern foundations of AI, with theoretical roots dating back to ancient Egyptian and Greek civilisations, were laid by Alan Turing and John McCarthy in the twentieth century. Early practices in the medical field focused on the archiving and interpretation of radiologic images and possible preliminary diagnoses. As the processing capacity of computers has advanced, so has their competence, and it has become possible to implement them in different specialty branches of medicine. On the other hand, ethical and social problems, dilemmas, and conflicts have begun to arise with practices in the healthcare field. In this sense, AI should be addressed with its potential benefits and problems, knowing that it is a tool, free from social prejudices, demographic changes, socio-economic inequalities, and cultural differences, and without indulging in dichotomies such as technophilia and technophobia.

Keywords: Language models, ChatGPT, medicine, chest diseases, ethics

INTRODUCTION

Health can be defined as a dynamic process of adaptation between individuals and their continuously changing physical, social, and psychological environments. As these environments evolve, temporary periods of maladaptation are an inherent part of this process, underscoring the fluid and context-dependent nature of health.

Although ideal health may never be achieved, striving for a healthy life advances human civilization. However, explaining this development through the biomedical model alone is a reductionist approach. History has shown humankind that the incidence of most infectious diseases fell before the discovery of vaccines and antibiotics. In tuberculosis, for example, mortality due to the disease began to fall long before the discovery of any biomedical intervention. Indeed, reforms inspired by the sanitation movement and the rising standard of living brought by socioeconomic progress did more to reduce mortality from disease in society than the introduction of germ theory.

Epidemics, particularly the plague, and the research of competent scientists, notably Louis Pasteur and Robert Koch, popularized the “germ theory” and led to the refinement of antimicrobial therapies. However, besides these gains, germ theory also led to a world in which, lacking a “One Health” approach, the relationship between health and the environment was overlooked and the medical paradigm was constructed on the pillars of specific aetiology, the internal environment, and the machine body.

Today, similar to the past, “genetic theory” is once again seeking to shape the new medical paradigm based on the concepts of specific aetiology, internal environment, and machine body, all founded on genes and their interactions. In particular, the positive potential of combining omics technology with artificial intelligence (AI) suggests that existing health problems will be solved to a significant extent and that advancing technology will upgrade human civilisation. This article aims to examine AI alongside its possibilities and challenges for medical doctors.

Artificial Intelligence

Although the term AI is today associated with futuristic concepts, its origins date back to ancient Egypt and ancient Greece. Indeed, the hermetic scriptures of the era describe mechanical sculptures as wise and full of emotion.1 These scriptures refer to the desire of the artisans who crafted these sculptures to replicate that creativity by analysing God’s nature and magic.1

Although the idea of AI appeared in many literary works throughout history, it was tangibly introduced in computer science in 1950 with the words by Alan Turing, “Machines can think too,” and was named by John McCarthy in 1955.2

To understand the notion of AI, it is first necessary to properly define “intelligence”. Intelligence refers to the ability to learn and to use knowledge and skills. In this regard, it must be clearly distinguished from “consciousness”: the ability of an entity to recognise, perceive, comprehend, and realise its environment and what goes on around it. Since the advent of life, intelligence has contributed to the ability to survive, to excel, and to sustain life as part of the evolutionary process.

AI is the ability of a machine to perform mental functions through its own hardware. Its genesis and progression have been gradual, just like those of organic beings. Neurological research in the late 1930s showed that the human brain is a network of neurons that emit electrical pulses, paving the way for the idea among computer scientists that an electronic brain could be developed. Through the fields of logic and mathematics, the “Theory of Computation” was developed, and steps began to be taken to endow computers with the ability to perform mental operations.3 Research inspired by nature progressed until artificial neural networks could be formed through code and algorithms, using inorganic silicon instead of organic neurones, and machines that can learn became a possibility.4 However, much like certain neuronal pathways in the human brain remain incompletely understood, the internal workings of advanced AI models have become increasingly opaque—even to experts. This complexity gives rise to what is known as the “black-box” problem, where the reasoning behind an AI system’s outputs cannot be easily explained or interpreted.
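To make the idea of artificial neurones concrete, the following minimal sketch, not drawn from any system cited in this article and with a task and parameters invented purely for demonstration, trains a single perceptron, the simplest building block of a neural network, by repeatedly adjusting its weights against errors on labelled examples:

```python
# Minimal illustrative sketch: one artificial "neurone" (perceptron) that
# learns a simple rule from labelled examples. The task (logical AND) and
# all parameters are invented for demonstration only.
import random

def step(x):
    # Threshold activation: the neurone "fires" (1) if its weighted input exceeds zero.
    return 1 if x > 0 else 0

def train_perceptron(examples, epochs=100, lr=0.1):
    # examples: list of (feature_vector, label) pairs with labels 0 or 1.
    n_features = len(examples[0][0])
    weights = [random.uniform(-0.5, 0.5) for _ in range(n_features)]
    bias = 0.0
    for _ in range(epochs):
        for features, label in examples:
            prediction = step(sum(w * f for w, f in zip(weights, features)) + bias)
            error = label - prediction
            # Nudge the weights in the direction that reduces the error.
            weights = [w + lr * error * f for w, f in zip(weights, features)]
            bias += lr * error
    return weights, bias

# Toy data: the neurone learns the logical AND of two binary inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
predictions = [step(sum(w * x for w, x in zip(weights, xs)) + bias) for xs, _ in data]
print(predictions)  # expected: [0, 0, 0, 1]
```

Modern deep networks stack many such units in layers and learn millions of weights from far larger datasets, but the underlying principle of iteratively correcting weights against data is the same.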

Language Models

The first examples of speech robots that could recognise and respond to spoken language, which society today refers to as AI, date back sixty years.5 In this context, the Dendral® software was developed to analyse organic substances spectrophotometrically and was made available for academic and industrial use in 1965.6 However, since this software was unable to finalise the analyses on its own, it had to be used by specialists in the field of biochemistry.

Over the years, AI has evolved along with the advancement of computer technologies. Between 1950 and 2000, the processing capacity of computers increased one billion-fold.7 Along with such acceleration, the processing volume of AI has also expanded, reaching the capacity to achieve far more than a human can in specific fields per unit of time. Moreover, the “electronic brain” once dreamed of for AI took tangible form as “neural networks,” and AI could be taught to learn by analysing large amounts of data.8

On the other hand, the “big bang” regarding the capability repertoire of AI occurred with the advent of the language model called ChatGPT®. ChatGPT® has reached a level far beyond the performance of its predecessors in language analysis, translation, summarisation, and solving several mathematical problems. However, despite this advancement, ChatGPT® has a narrow spectrum of intelligence. Even if it can analyse, summarise, and write data within this framework, it lacks the capacity to “understand” what has been written. None of the AI models developed so far have been able to exhibit anything similar to the general intelligence of a human being.

Artificial Intelligence and Medicine

The first and simplest use of AI is in medical archiving. As internet-based medical literature search engines were developed, it became easier to access medical resources. Over time, as computer technologies have advanced and their processing capacities have expanded, data processing by AI has also accelerated, and its applicability in evidence-based medicine is being discussed. Basically, logic statements of the form “if so, do as follows” laid the groundwork for AI-based diagnostic programmes.9 “Computer Aided Detection” (CAD), developed for primary care physicians, was one of the earliest practices.10 Similarly, MYCIN®, a computer program developed at Stanford University in 1972, analysed the symptoms and examination data of the patient, especially in patients with blood infections, and suggested the probable diagnosis, additional examinations that might be required, and antibiotherapy options.10
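To make the “if so, do as follows” idea concrete, the toy sketch below encodes a few consultation rules in the spirit of early expert systems such as MYCIN®. The rules, findings, and suggestions are invented for illustration only; they are not MYCIN®’s actual knowledge base and are not clinical guidance.

```python
# Illustrative sketch only: a toy "if so, do as follows" rule base in the
# spirit of early expert systems. Rules and findings are invented and are
# not clinical guidance.
from typing import Callable

Rule = tuple[Callable[[set[str]], bool], str]

RULES: list[Rule] = [
    # (condition over observed findings, suggested action)
    (lambda f: {"fever", "productive_cough"} <= f,
     "Consider community-acquired pneumonia; request a chest radiograph."),
    (lambda f: {"fever", "positive_blood_culture"} <= f,
     "Suspect bloodstream infection; consider empirical antibiotherapy."),
    (lambda f: "chronic_cough" in f and "weight_loss" in f,
     "Consider tuberculosis or malignancy; request sputum studies and imaging."),
]

def consult(findings: set[str]) -> list[str]:
    """Return every suggestion whose 'if' part matches the observed findings."""
    return [advice for condition, advice in RULES if condition(findings)]

if __name__ == "__main__":
    patient = {"fever", "productive_cough"}
    for suggestion in consult(patient):
        print(suggestion)
```

Expert systems of that era held hundreds of such rules and weighted their conclusions with certainty factors, but the basic mechanism of matching encoded conditions against patient findings is the same.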

As medical imaging techniques became digitalised in the 1990s, the potential of AI to analyse these examinations and generate reports came to the forefront. Scientific research has shown that AI analysis of mammography examinations used in breast cancer screening supports radiologists when specialised breast radiologists are not available, improving the physician’s diagnostic accuracy.11 In 2017, the United States Food and Drug Administration (FDA) also approved Arterys®, an AI algorithm based on neural networks. This model interpreted cardiac magnetic resonance images in seconds and delivered reliable data on ejection fraction.9

The digitisation of endoscopic examinations led to the use of computer-aided diagnostic practices in this field, and gastroenterology has pioneered such applications. AI has assisted physicians with high accuracy, especially in the diagnosis of chronic pancreatitis and pancreatic cancer, which are very difficult to differentiate with endoscopic ultrasonography.9 Analysis of the localisation and morphology of polyps detected during colonoscopic procedures facilitated the differentiation of malignant from benign lesions.

Complicated relationships within the organism have also begun to be identified through “deep learning”—one of the basic learning methods of AI. Tracking chronic diseases and monitoring the response to oncotherapies were made possible through deep learning. Besides the diagnosis of chronic diseases such as diabetes and hypertension, critical data such as the prognosis of the disease, the risk of complications, and the key points to focus on in order to avoid them have also been predicted through AI.12
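As a schematic illustration of how such risk prediction works, the sketch below fits a plain logistic regression to synthetic tabular data and estimates a complication probability for a hypothetical patient. It is not the ensemble model of the cited study; the features, the data, and the assumed availability of the scikit-learn library are illustrative only.

```python
# Illustrative sketch only: predicting a complication risk from tabular
# patient features with a plain logistic regression. The cohort is synthetic
# and the model is NOT the ensemble approach of the cited study.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)

# Synthetic cohort: columns = [age, systolic blood pressure, HbA1c].
n = 500
age = rng.normal(60, 10, n)
sbp = rng.normal(135, 15, n)
hba1c = rng.normal(7.0, 1.2, n)
X = np.column_stack([age, sbp, hba1c])

# Synthetic outcome: complication risk rises with age, blood pressure, and HbA1c.
logit = 0.03 * (age - 60) + 0.02 * (sbp - 135) + 0.8 * (hba1c - 7.0) - 0.5
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)

# Estimated complication probability for a hypothetical new patient.
new_patient = np.array([[72, 150, 8.5]])
print(f"Predicted complication risk: {model.predict_proba(new_patient)[0, 1]:.2f}")
```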

Artificial Intelligence and Chest Diseases

Lung cancer: Owing to the mortality burden imposed by lung cancer and the contributions of AI to the field of imaging, the AI-based radiologic approach to thoracic malignancies was the pioneering AI practice in the field of chest diseases in the 1960s, and it rapidly became popular as AI advanced. AI-integrated radiologic practices aim to screen chest roentgenograms for nodules and give the physician involved in the case a preliminary impression.13 Because of the limits of the human eye and the variability in diagnostic interpretation among physicians, small nodules are often missed in screening.14 Research with AI-integrated CAD systems has shown that these systems can provide accurate diagnoses at a rate equal to or higher than that of radiologists, but also with a higher rate of false-positive errors than physicians.15, 16 Furthermore, it has become possible to reduce artefacts and achieve clearer images by post-processing low-dose lung computed tomography images with AIs trained through deep learning.17 It has also been found that AIs trained on validated pathological materials achieve great success in detecting adenocarcinoma and squamous cell carcinoma subtypes.18 Additional algorithms have allowed AI to obtain preliminary information about genetic mutations, normally detected by genetic sequencing with a turnaround time of up to two weeks, by analysing the image of the malignant tissue.19

Interstitial lung diseases: AI holds great promise in this field. A study conducted at Sapporo Medical School Hospital reported that an AI model trained on chest radiography and thoracic computed tomography produced a probability score with high diagnostic sensitivity for chronic fibrosing interstitial lung disease.20 A meta-analysis comparing radiologists and AI in analysing the tomographic findings of interstitial lung diseases showed that the diagnostic accuracy of AI ranged between 78% and 91%.21

Asthma-chronic obstructive pulmonary disease: A study showed that the success rate for recognising spirometry patterns was 74.4% for clinicians and 100% for AI.22 The rate of correct diagnosis of asthma-chronic obstructive pulmonary disease (COPD) patients by AI was 82%.22 Furthermore, an AI program that records and interprets lung sounds together with medical records has been shown to differentiate asthma, COPD, and asthma-COPD overlap syndrome (ACOS).23 An AI programme was also able to provide preliminary guidance on managing episodes in asthma patients, taking into account the frequency of episodes and previous treatments.24, 25 Similarly, environmental factors such as dust, particulate matter, temperature, and humidity to which children diagnosed with asthma are exposed, and physiological data such as pulse rate and blood pressure, were monitored with several sensors. These parameters were interpreted with AI to identify the risk of an episode with a success rate of 80% and to communicate the episode via mobile notification.26

Respiratory failure and acute respiratory distress syndrome: In addition to its applications in chronic pulmonary conditions, AI has also demonstrated value in the early detection of acute respiratory syndromes. Among these, acute respiratory distress syndrome (ARDS) poses particular diagnostic difficulties due to its complex presentation. Recent systematic reviews suggest that AI-based models can support clinicians by identifying ARDS more accurately and efficiently than traditional methods, contributing to earlier recognition and improved management strategies.27 As these technologies continue to evolve, their integration into critical care settings holds potential for enhancing diagnostic confidence and clinical outcomes.

Coronavirus disease-2019: During the pandemic, many AI-based programmes related to diagnosis, treatment, contact tracing, and transmission prediction were introduced. An information network was built using AI that took into account the polymerase chain reaction results of patients’ nasal swabs, the locations where cases resided, and the people with whom they had been in contact.28

Tuberculosis: There are AI programmes developed for the interpretation of chest radiographs, used as a screening method for tuberculosis in peripheral health institutions where specialist physicians are not available. The only commercially available computer-aided diagnostic program for tuberculosis is CAD4TB®, developed in the Netherlands. This software was developed based on deep learning algorithms and can screen a chest radiograph for abnormalities in less than 15 seconds.29

Smoking cessation: Smoking is another pandemic in itself, yet the support available from healthcare providers falls far short of what is needed to assist patients with smoking cessation. Beyond diagnostics and disease management, AI has also shown considerable promise in behavioral health, particularly in supporting smoking cessation. AI-driven interventions—especially conversational agents or chatbots—offer personalized, on-demand guidance that can adapt to the user’s needs over time. Recent systematic reviews have demonstrated that such tools can significantly enhance quit rates when compared to standard care, emphasizing their value as scalable, low-cost solutions in public health.30, 31 By carefully integrating AI into smoking cessation strategies, it would become possible to extend support to wider populations, including those with limited access to traditional healthcare services, while also reducing the burden of tobacco-related diseases.

Bronchoscopy: Invasive procedures are among the areas where technology has been used most extensively, and one of the most novel diagnostic opportunities in chest diseases is robotic bronchoscopy. Unlike traditional video bronchoscopy, this procedure involves no manual manipulation of the bronchoscope by the bronchoscopist; instead, a robotic mechanism manoeuvres the bronchoscope, controlled from a console. Using specialised, thin bronchoscopes and a three-dimensional bronchial tree map created from thoracic computed tomography before the procedure, this system allows easier access to and sampling of lesions than video bronchoscopy. On the other hand, the digital processing of images from traditional video bronchoscopy by computers, and the real-time tracking of these images by AI, have made it possible to identify a pathway by analysing morphology and tissues in scenarios where the endobronchial anatomy is difficult to differentiate.32

Neoliberalism and Medicine - Artificial Intelligence

Neoliberalism is a political and economic paradigm that emerged prominently in the late 20th century, characterized by an emphasis on free markets, privatization, deregulation, and reduced state intervention in social services. It promotes the idea that sustained economic growth is the driver of progress, and that economic growth is best achieved through freedom of trade and capital, with minimal government interference. Originating in economic theory, neoliberalism has gradually permeated numerous sectors—including healthcare and scientific research—by redefining public goods as commodities and public services as market-driven enterprises. Within this ideology, success is often measured through productivity, profitability, and consumer satisfaction rather than collective well-being or equity. As a result, healthcare and scientific institutions have gradually shifted away from values like solidarity and universal access, adopting priorities that reflect corporate and market-driven interests.

In the field of medicine, this ideological influence has led, and is still leading, to a significant transformation in how healthcare is organized, delivered, and evaluated. Medical systems influenced by neoliberal logic prioritize cost-effectiveness, competition among providers, and the use of performance-based metrics. Clinical decision-making is increasingly shaped by standardized protocols, output targets, and financial incentives. At the same time, public funding for health infrastructure and preventive care has declined in many regions, while privatized services and direct payment models have expanded.33 Scientific research has, perhaps unfortunately, not been immune to this influence, with priorities having begun to align with commercial viability and industry sponsorship, creating an environment where innovation is driven by profitability rather than public health need. AI technologies developed and implemented within this context reflect and often amplify these systemic priorities. Predictive algorithms used to allocate healthcare resources may be optimized for efficiency or revenue generation rather than equitable access.34 Health surveillance tools, biometric sensors, and wearable devices are frequently marketed as consumer products, with data flows directed toward private platforms that profit from behavioral analytics.35 These tools risk exacerbating disparities should they encode structural biases or be selectively deployed in high-income markets while neglecting underserved populations. Furthermore, the framing of health data as a commercial asset—rather than a collective resource—raises ethical concerns regarding consent, ownership, and accountability.

Neoliberal discourse also tends to individualize responsibility for health, framing outcomes as a function of personal choice rather than acknowledging the structural determinants—such as poverty, housing, education, and environmental exposure—that shape health trajectories. Within this framework, AI-based interventions risk reinforcing narratives that blame individuals for poor outcomes while obscuring the broader socioeconomic forces at play.36 As AI systems become increasingly embedded in clinical workflows, it is vital to ensure that they are designed and governed in ways that prioritize equity, transparency, and patient autonomy.

To guide ethical AI integration in medicine, policymakers, developers, and healthcare leaders must move beyond purely technological or market-based solutions and instead address the underlying power dynamics that shape how these tools are funded, deployed, and evaluated. This requires a multidisciplinary effort that includes ethicists, clinicians, public health experts, and communities affected by health inequities. Only through such inclusive governance can AI technologies avoid reproducing the limitations of the systems they are intended to improve.

The Other Side of the Coin

Given the digitalisation and virtualisation of the stethoscope, long identified with medicine, the prevalence of robotic surgeries, and the rapid inclusion of big data in health within the framework of the personalised medicine approach of the near future, it is essential to acknowledge that it is not possible to avoid digitalisation and AI—and even if it were possible, such an attitude would not be appropriate. In fact, AI itself imagines an all-digital world as the future healthcare delivery setting that will encircle the physician (Figure 1). Therefore, the pulmonologists of the future need to be physicians who have adapted to digital technology and who use it rationally for the benefit of patients and the public.

Smartphone applications that have expanded in recent years, wearable smart devices, and access to internet-based medical resources can empower individuals to protect and improve their own health and allow them to make accurate and conscious decisions about their health based on information.

However, this transformation, alongside the problems of personal privacy, stigma, and exclusion in a surveillance civilisation where everything and every value turns into quantifiable data, may turn the concept of health into an obsession with disease. People may come to believe they are not healthy enough because of the information and warnings that constantly reach them through big data and AI, leading them to organise their lives around illness and to maximise their consumption of healthcare services in pursuit of a better quality of health.

Health and science literacy can mitigate some of these problems. However, studies indicate that, contrary to expectations, the telehealth approach has the potential to exacerbate social health inequalities, with income and ethnicity acting as determinants of such inequality.37 As a reflection of inequality in the world, the ever-growing ‘digital inequality’ is both a barrier to the digitalisation of health and a problem that aggravates inequality within it. Since most of the software and hardware produced in the field of health requires continuous internet access and mobile data usage, AI-assisted applications are accessible mainly to people with medium-to-high socioeconomic status and/or those living in urban centres with adequate base-station infrastructure. In contrast, people with low socioeconomic status and/or those living in rural areas with limited internet access and inadequate base-station infrastructure cannot use the same devices and channels effectively, even when they have access to smart devices.38 This places people who lack easy access to healthcare services, and who therefore stand to benefit most from telehealth applications, at a disadvantage and perpetuates the problem of inverse care. The problems created by worldwide inequality in AI applications, and the ethical issues that may arise from such inequality, are realities already acknowledged by organizations and experts in AI (Figure 2).

Another problem with smart devices and telehealth applications is their inattention to cultural diversity and the under-representation of different social groups. It has been found that telemedicine and telehealth applications, which expanded with the Coronavirus disease-2019 (COVID-19) pandemic, are used more by women than by men in high-income countries.39 However, the language and imagery used in health and sports applications in particular are often designed with sexist biases; as a consequence of the patriarchal approach, muscle gain and strength are emphasised for men, while a slim physique and fitness are highlighted for women.40 Similarly, the role of women is almost always emphasised in applications used for following pregnancy, whereas applications aimed at men, or promoting men’s support before and during pregnancy, remain in the minority.41 Babylon®, an AI-based diagnostic application implemented by the National Health Service of the United Kingdom, reports an individual’s probable diagnosis based on their history and the need for elective or urgent consultation with a physician. It diagnosed depression and panic attacks in a 59-year-old female smoker complaining of chest pain, shortness of breath, and restlessness, while reporting suspicion of myocardial infarction in a male patient with the same background.42 Similar problems are also observed with respect to ethnicity.

Such examples reflect the broader issue of opacity in AI systems, many of which function as “black boxes” with internal processes that cannot be readily explained or interpreted. This lack of transparency poses serious concerns in clinical settings, where the inability to understand how an AI model arrives at a conclusion can undermine the trust between patients and physicians and complicate medico-legal accountability. These risks highlight the urgent need for clear regulatory frameworks to guide the ethical integration of AI into healthcare. International efforts such as the World Health Organization’s guidelines on AI ethics and the European Union’s AI Act emphasize principles like transparency, accountability, human oversight, and fairness to help ensure that technological advancement does not come at the cost of justice or patient safety.43, 44

Beyond concerns about the lack of a regulatory structure, the integration of AI into medical decision-making also raises bioethical dilemmas. The principle of informed consent is brought into question: can a decision driven by AI, whose underlying mechanisms cannot be fully understood by the patient, or perhaps even by the physician, satisfy the definition of informed consent? Questions also arise over how clinical responsibility and decision-making should be shared between human and machine, especially as AI systems become increasingly prevalent in areas where they assist, and even supplant, human judgment. The need for a sufficiently transparent AI infrastructure becomes even more apparent once these ethical concerns are addressed.

AI-assisted wearable health devices are another technological innovation that will become more important for chest diseases in the future. Smartwatches, one type of these devices, have started to be used for clinical follow-up. Wearable health devices, such as smartwatches integrated with glucose sensors and insulin pumps, are intended to be used successfully in many different areas, such as recognising the moment of a seizure in patients with epilepsy and calling emergency services, tracking sleep data, detecting cardiac arrhythmias, or monitoring inhaler drug doses. The collected data can also be added to a person’s electronic records, allowing multiple healthcare professionals to access immediate and reliable information on a patient’s history. Today, however, the fact that such resources are used only by people of a certain status, as a reflection of economic and social inequality, exacerbates the existing inequality in the field of health.

The advancement of technology not only significantly improves the possibilities of medical diagnosis and treatment but also provides a better understanding of the physiopathological basis of diseases. In this sense, medical genetic technologies are noteworthy. Such technological advancement will also enable the shaping of a prevention-and-treatment approach that prioritises the patient over the disease and is patient-specific. Certainly, such new knowledge can both better elucidate the developmental mechanisms of diseases and create new treatment options that target these developmental pathways. However, it should be noted that omics approaches are not a Holy Grail possessing miraculous powers. More importantly, this kind of research should not lead to ignoring the social determinants of health or to describing health as a purely individual issue.

Finally, the next century holds the potential to usher in a new era of transhumanism, described as positive eugenics. The concept was brought onto the agenda of physicians by the Chinese scientist He Jiankui, who intervened in the genomes of the infants named Lulu and Nana and ensured their birth despite ethical sanctions. Transhumanism means allowing people to transcend their biologically limited capacities with up-to-date technology and to upgrade their bodies. Undoubtedly, such an intervention could eliminate some hereditary diseases such as sickle cell anaemia. However, it also carries the danger of transforming existing socioeconomic inequality into an anatomobiological structure on an individual and corporeal basis and of bringing about new hierarchical inequalities in a world where national and global inequalities have been increasing.45 On the other hand, this transformation, called the ‘digital health revolution,’ may also entail a process of commercialisation leading to digital health colonialism, accompanied by the rhetoric of modernity, rationality, and progress.46 Digital health data may become a new arena in which transnational companies of the global north can maximise their profits.46 Therefore, physicians should not be satisfied with merely adapting technically to the knowledge and developments in their own professional fields; they should take into account the meaning and background of the changing demographic structure, disease burden, and novel technologies of the forthcoming age and their impact on inequalities, primarily in their own professional practice, and take a stand for a more equitable humanitarian health setting.

Neoliberal Life and AI

In 1840, the longest life expectancy in the world was 48 years, in Sweden. In 2019, it was 88 years, in Japan.47 However, this positive change has not applied equally to people all over the world. On the other hand, the future of the world seems bleak. Besides the destruction caused, or yet to be caused, by the climate crisis and the migrations this destruction will trigger, 3 billion people are predicted to have difficulty accessing water, one of the most basic determinants of health, in 2025. By 2050, one-third of the world’s population will be over 60 years old.

The major problem is that life expectancy in the United States of America (USA), a developed country, has shortened for the first time in this century. Life expectancy declined among non-Hispanic middle-aged white Americans between 1999 and 2013.48 In 2014-2015, life expectancy was observed to shorten in all groups in the USA.49 However, these shortened life expectancies have in no way meant that social classes were ‘equalised at the bottom’. Moreover, the inequality that already existed deepened further in the first two years of the COVID-19 pandemic, and the gap between the lowest and highest life expectancy reached 20.4 years in 2021.50 More interestingly, contrary to what was expected before the COVID-19 pandemic, life expectancy in the highest-earning groups, such as white Americans holding a high school degree, did not rank first but fourth or fifth.50

Epidemiological studies indicate that health outcomes in the United States have been declining since the 1990s, particularly among middle-class populations adversely affected by industrial and economic restructuring. The rise in substance use disorders, including alcohol and opioid dependency, alongside increasing suicide rates, has been described in public health literature as “deaths of despair”.47 These patterns are closely tied to structural changes driven by neoliberal economic policies, which have contributed to worsening social determinants of health and heightened political polarization.

In a global context marked by growing inequality in access to income, housing, nutrition, and social security, the integration of AI into healthcare systems raises significant ethical concerns. Without appropriate safeguards, AI technologies may disproportionately benefit private stakeholders while reinforcing existing power asymmetries—particularly in politically repressive environments. The proliferation of biometric sensors and wearable devices, although intended for health monitoring, also poses risks of large-scale surveillance and data exploitation if deployed without strong regulatory oversight.

CONCLUSION

AI holds immense potential to advance medical practice by enhancing diagnostic accuracy, streamlining workflows, enabling real-time data analysis, and expanding access to care—particularly in underserved regions. AI tools can assist clinicians in making more informed decisions, improve resource allocation, and facilitate predictive modeling for early disease detection and public health planning. As data infrastructures and computational power continue to grow, AI will likely become an integral part of medical ecosystems worldwide. Its expansion offers an unprecedented opportunity to improve health outcomes, personalize treatments, and optimize healthcare delivery—provided it is implemented with clear goals, robust oversight, and alignment with clinical needs.

However, the integration of AI into medicine does not occur in a vacuum. It is shaped by the economic, social, political, and ideological systems in which it is embedded. When AI technologies are developed and deployed within these systems, they risk reinforcing inequities, privatizing health data, and shifting healthcare toward profit-driven rather than patient-centered models. Ethical AI integration requires users to be aware of the landscape in which they operate and not to rely solely on the technical safeguards already implemented. Addressing these concerns is essential to ensure that AI contributes not only to innovation, but also to justice, inclusivity, and the collective well-being of societies.

Authorship Contributions

Concept: U.K., O.E., Design: U.K., O.E., Analysis or Interpretation: U.K., O.E., Literature Search: U.K., O.E., Writing: U.K., O.E.
Conflict of Interest: No conflict of interest was declared by the authors.
Financial Disclosure: The authors declared that this study received no financial support.

References

1
McCorduck P. Machines who think. 2nd ed. New York, NY: A K Peters/CRC Press; 2004:4-5.
2
Haenlein M, Kaplan A. A brief history of artificial intelligence: on the past, present, and future of artificial intelligence. Calif Manage Rev. 2019;61(4):5-14.
3
Sipser M. Introduction to the theory of computation. 3rd ed. Boston, MA: Cengage Learning; 2013.
4
Hebb D. The organization of behavior. New York, NY: Wiley; 1949.
5
Anyoha R. The history of artificial intelligence. Harvard’s SITN Blog. Published 2017. Last accessed date: 18.11.2024.
6
Encyclopaedia Britannica. DENDRAL. Last accessed date: 18.11.2024.
7
Mcguffie K, Henderson-Sellers A. Forty years of numerical climate modeling. Int J Climatol. 2001;21(9):1067-1109.
8
Burkov A. The hundred-page machine learning book. Poland: Andriy Burkov; 2019.
9
Kaul V, Enslin S, Gross SA, et al. History of artificial intelligence in medicine. Gastrointest Endosc. 2020;92(4):807-812.
10
Kulikowski CA. Beginnings of artificial intelligence in medicine (AIM): computational artifice assisting scientific inquiry and clinical art–with reflections on present aim challenges. Yearb Med Inform. 2019;28(1):249-256.
11
Dembrower K, Crippa A, Colón E, Eklund M, Strand F; ScreenTrustCAD Trial Consortium. Artificial intelligence for breast cancer detection in screening mammography in Sweden: a prospective, population-based, paired-reader, non-inferiority study. Lancet Digit Health. 2022;5(10):703-711.
12
Fitriyani NL, Syafrudin M, Alfian G, Rhee J. Development of disease prediction model based on ensemble learning approach for diabetes and hypertension. IEEE Access. 2019;7:144777-144789.
13
Lodwick GS, Keats TE, Dorst JP. The coding of roentgen images for computer analysis as applied to lung cancer. Radiology. 1963;81:185-200.
14
Svoboda E. Artificial intelligence is improving the detection of lung cancer. Nature. 2020;587:20-22.
15
Chassagnon G, De Margerie-Mellon C, Vakalopoulou M, et al. Artificial intelligence in lung cancer: Current applications and perspectives. Jpn J Radiol. 2023;41:235-244.
16
Sim Y, Chung MJ, Kotter E, et al. Deep convolutional neural network-based software improves radiologist detection of malignant lung nodules on chest radiographs. Radiology. 2020;294:199-209.
17
Artificial intelligence enables low-dose CT scans, faster scan time. National Institute of Biomedical Imaging and Bioengineering. Last accessed date: 18.11.2024.
18
Chang HY, Jung CK, Woo JI, et al. Artificial intelligence in pathology. J Pathol Transl Med. 2019;53(1):1-12.
19
Dercle L, Fronheiser M, Lu L, et al. Identification of non-small cell lung cancer sensitive to systemic cancer therapies using radiomics. Clin Cancer Res. 2020;26:2151-2162.
20
Nishikiori H, Hirota K, Suzuki T, et al. Validation of the artificial intelligence software to detect chronic fibrosing interstitial lung diseases in chest X-ray. Eur Respir J. 2021;58(Suppl 65):OA1211.
21
Soffer S, Morgenthau AS, Shimon O, et al. Artificial intelligence for interstitial lung disease analysis on chest computed tomography: A systematic review. Acad Radiol. 2022;29:226-235.
22
Topalovic M, Das N, Burgel PR, et al. Artificial intelligence outperforms pulmonologists in the interpretation of pulmonary function tests. Eur Respir J. 2019;53(4):1801660.
23
Hafke-Dys H, Kuźnar-Kamińska B, Grzywalski T, et al. Artificial intelligence approach to the monitoring of respiratory sounds in asthmatic patients. Front Physiol. 2021;12:745635.
24
Spathis D, Vlamos P. Diagnosing asthma and chronic obstructive pulmonary disease with machine learning. Health Inform J. 2019;25(3):811-827.
25
Qin Y, Wang J, Han Y, Lu L. Deep learning algorithms-based CT images in glucocorticoid therapy in asthma children with small airway obstruction. J Healthc Eng. 2021;2021:5317403.
26
Hosseini A, Buonocore CM, Hashemzadeh S, et al. Feasibility of a secure wireless sensing smartwatch application for the self-management of pediatric asthma. Sensors (Basel). 2017;17(8):1780.
27
Xiong Y, Gao Y, Qi Y, et al. Accuracy of artificial intelligence algorithms in predicting acute respiratory distress syndrome: a systematic review and meta-analysis. BMC Med Inform Decis Mak. 2025;25(1):44.
28
Cresswell K, Tahir A, Sheikh Z, et al. Understanding public perceptions of COVID-19 contact tracing apps: artificial intelligence–enabled social media analysis. J Med Internet Res. 2021;23:26618.
29
Feng PH, Lin YT, Lo CM. A machine learning texture model for classifying lung cancer subtypes using preliminary bronchoscopic findings. Med Phys. 2018;45:5509-5514.
30
Bendotti H, Lawler S, Chan GCK, Gartner C, Ireland D, Marshall HM. Conversational artificial intelligence interventions to support smoking cessation: a systematic review and meta-analysis. Digit Health. 2023;9:20552076231211634.
31
Li S, Qu Z, Li Y, Ma X. Efficacy of e-health interventions for smoking cessation management in smokers: a systematic review and meta-analysis. EClinicalMedicine. 2024;68:102412.
32
Yoo JY, Kang SY, Park JS, et al. Deep learning for anatomical interpretation of video bronchoscopy images. Sci Rep. 2021;11:23765.
33
Mataria WA, Chun S. Global health in the grip of neoliberalism: a combined retrospective comparative stages heuristic policy analysis. Medical Research Archives. Published November 2024;12(11). Last accessed date: 24.06.2025.
34
Ramezani M, Takian A, Bakhtiari A, Rabiee HR, Fazaeli AA, Sazgarnejad S. The application of artificial intelligence in health financing: a scoping review. Cost Eff Resour Alloc. 2023;21(83).
35
Banerjee S, Longstreet P, Hemphill T. Wearable devices and healthcare: data sharing and privacy. The Information Society. 2017;34:1-9.
36
Chinta SV, Wang Z, Palikhe A, et al. AI-driven healthcare: Fairness in AI healthcare: a survey. PLOS Digit Health. 2025;4(5):864.
37
Latulippe K, Hamel C, Giroux D. Social health inequalities and ehealth: a literature review with qualitative synthesis of theoretical and empirical studies. J Med Internet Res. 2017;19(4):136.
38
Koehle H, Kronk C, Lee YJ. Digital health equity: addressing power, usability, and trust to strengthen health systems. Yearb Med Inform. 2022;31(1):20-32.
39
Sandhu N, Gambon E, Stotz C, et al. Femtech is expansive—it’s time to start treating it as such. rock health. Published: 03.08.2020. Last accessed date: 18.11.2024.
40
Doshi MJ. Barbies, goddesses, and entrepreneurs: discourses of gendered digital embodiment in women’s health apps. Womens Stud Commun. 2018;41:183-203.
41
Gann B. Transforming lives: combating digital health inequality. IFLA J. 2019;45:187-195.
42
Trendall S. Gender bias concerns raised over GP app. Public Technology. Published: 13.09.2019. Last accessed date: 18.11.2024.
43
World Health Organization. Ethics and governance of artificial intelligence for health: WHO guidance. Geneva: World Health Organization; 2021. Last accessed date: 30.03.2025.
44
European Commission. Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act). Brussels: European Commission; 2021. Last accessed date: 30.03.2025.
45
Elbek O. Risk Medicine and transhumanism. Thorac Res Pract. 2023;24(6):325-329.
46
Sekalala S, Chatikobo T. Colonialism in the new digital health agenda. BMJ Glob Health. 2024;9(2):014131.
47
Zeitoun JD. Sağlığın Tarihi - Uzayan Ömrümüz ve Geleceğimiz [The History of Health: Our Lengthening Lives and Our Future]. Translated by Asçı Dalar Y. Türkiye İş Bankası Kültür Yayınları; 2024.
48
Case A, Deaton A. Rising morbidity and mortality in midlife among white non-Hispanic Americans in the 21st century. Proc Natl Acad Sci USA. 2015;112(49):15078-15083.
49
Case A, Deaton A. Mortality and morbidity in the 21st century. Brookings Pap Econ Act. 2017:397-476.
50
Dwyer-Lindgren L, Baumann MM, Li Z, et al. Ten Americas: a systematic analysis of life expectancy disparities in the USA. Lancet. 2024;404(10469):2299-2313.