Varieties of Decision-Making
Starting from a broad use of the term, this chapter examines different types of decision-making and agents. A typology is proposed that can help to avoid confusion and promote understanding, especially in discussions between different scientific disciplines. The typology includes reporting/non-reporting agents, fully/partially transparent agents, and observer-/self-transparent agents. The most profound distinction, however, turns out to be between non-discursive and discursive agents, mainly because discursivity and responsibility are closely linked. Discursive agents therefore occupy a special position: they must decide how to deal with other types of agents, what to use them for, and how to use them in their own decision-making. So far, only humans can be considered discursive agents.
Heinrichs, B. (2025). Varieties of Decision-Making. In: Ettinger, U., Heinrichs, B., Murawski, C. (eds) Decision Making. Studies in Neuroscience, Psychology and Behavioral Economics. Springer, Cham. https://doi.org/10.1007/978-3-032-00880-0_1
Künstliche Intelligenz in der Behandlung von Diabetes bei minderjährigen Patienten – Ethische Aspekte
In this chapter, we consider children suffering from diabetes and the role that artificial intelligence (AI) can play on their journey as patients. As ethicists, we use this case to examine the key ethical questions raised by the use of medical AI to support the diagnosis and treatment of diabetes. The introduction of AI into pediatrics is already underway and can no longer be stopped. We identify the most important ethical criteria that should be observed and conclude that the application of AI technologies in pediatrics is likely to have very positive effects if the demands of morality are met.
Bruni, T., Heinrichs, B. (2025). Künstliche Intelligenz in der Behandlung von Diabetes bei minderjährigen Patienten – Ethische Aspekte. In: Pfannstiel, M.A. (ed.) Künstliche Intelligenz im Einsatz für die erfolgreiche Patientenreise. Springer Gabler, Wiesbaden. https://doi.org/10.1007/978-3-658-48573-3_28
Künstliche Intelligenz in der Gesundheitsvorsorge von Kindern und Jugendlichen – Anwendungsmöglichkeiten und Akzeptanz
The use of artificial intelligence (AI) in pediatric and adolescent medicine offers a wide range of possibilities, particularly in the prevention of chronic diseases. AI-supported applications such as machine learning for the analysis of speech or movement patterns can, for example, help to diagnose autism spectrum disorders or motor developmental delays at an early stage. In addition, AI-based systems support the therapy of children with type 1 diabetes mellitus through automated insulin delivery (AID) systems.
AI enables more precise diagnoses, personalized therapeutic approaches, and a reduced workload for medical staff. At the same time, there are challenges affecting the use of AI that explain why only a few applications have so far found their way into clinical routine. These include the protection of sensitive data and the safeguarding of informational self-determination, ensuring freedom from discrimination, the transparency of algorithmic decision-making processes, and acceptance by all groups involved, such as children and adolescents, parents, and medical staff. All of these groups are critical of potential erroneous decisions, the loss of personal interaction, and the possible commercial use of data. Parents and professionals emphasize the importance of clear communication, participation, and training for better understanding. Moreover, there is often a lack of structured, high-quality, large datasets in compatible formats for training AI.
Sustainable integration of AI into pediatric and adolescent medicine requires large-scale clinical trials, access to high-quality datasets, and a differentiated analysis of ethical and social implications.
Kerth, JL., Bischops, A.C., Hagemeister, M. et al. (2025). Künstliche Intelligenz in der Gesundheitsvorsorge von Kindern und Jugendlichen – Anwendungsmöglichkeiten und Akzeptanz. Bundesgesundheitsbl 68, 907–914. https://doi.org/10.1007/s00103-025-04096-4
Künstliche Intelligenz in der Kinder- und Jugendmedizin
Rapid developments in the field of artificial intelligence (AI) are opening up numerous new diagnostic and therapeutic options in pediatric and adolescent medicine. At the same time, these developments raise a wide range of questions. In this article, we examine which factors are important for the acceptance of AI in pediatric and adolescent medicine and outline some ethical and legal questions. An interdisciplinary perspective and a careful handling of concerns and fears are essential in order to meet these challenges and to seize the opportunities that AI opens up.
Kerth, JL., Heinrichs, B., Bischops, A.C. et al. (2025). Künstliche Intelligenz in der Kinder- und Jugendmedizin. Monatsschr Kinderheilkd 173, 290–296. https://doi.org/10.1007/s00112-025-02139-3
Von der experimentellen Phase zur produktiven Anwendung
By now, a large number of approved medical devices incorporating artificial intelligence exist. It is time to take interim stock and to consider a possible upcoming phase of productive use of medical AI.
Caspers, J. et al. (2025). Von der experimentellen Phase zur produktiven Anwendung. Deutsches Ärzteblatt 122 (Sonderausgabe KI), 24–27. The full version is available here.
Four notions of autonomy. Pitfalls of conceptual pluralism in contemporary debates
The concept of autonomy is indispensable in the history of Western thought. At least, that is how it seems to us today. However, the notion has not always had the outstanding significance that we ascribe to it now, and its exact meaning has changed considerably over time. In this paper, we want to shed light on different understandings of autonomy and clearly distinguish them from each other. Our main aim is to contribute to conceptual clarity in (interdisciplinary) discourses and to point out possible pitfalls of conceptual pluralism.
Wagner, R., & Heinrichs, B. (2024). Four notions of autonomy. Pitfalls of conceptual pluralism in contemporary debates. Human-Machine Communication 9, 37–50. https://doi.org/10.30658/hmc.9.3
Artificial intelligence in the care of children and adolescents with chronic diseases: a systematic review
The integration of artificial intelligence (AI) and machine learning (ML) has shown potential for various applications in the medical field, particularly for diagnosing and managing chronic diseases among children and adolescents. This systematic review aims to comprehensively analyze and synthesize research on the use of AI for monitoring, guiding, and assisting pediatric patients with chronic diseases.
Kerth, JL., Hagemeister, M., Bischops, A.C. et al. (2025). Artificial intelligence in the care of children and adolescents with chronic diseases: a systematic review. Eur J Pediatr 184, 83. https://doi.org/10.1007/s00431-024-05846-3
Prediction and explainability in AI: Striking a new balance?
The debate regarding prediction and explainability in artificial intelligence (AI) centers around the trade-off between achieving high-performance, accurate models and the ability to understand and interpret the decision-making process of those models. In recent years, this debate has gained significant attention due to the increasing adoption of AI systems in various domains, including healthcare, finance, and criminal justice. While prediction and explainability are desirable goals in principle, the recent spread of high-accuracy yet opaque machine learning (ML) algorithms has highlighted the trade-off between the two, marking this debate as an interdisciplinary, interprofessional arena for negotiating expertise. There is no longer agreement about what the “default” balance of prediction and explainability should be, with various positions reflecting claims for professional jurisdiction. Overall, there appears to be a growing schism between the regulatory and ethics-based call for explainability as a condition for trustworthy AI, and how it is being designed, assimilated, and negotiated. The impetus for writing this commentary comes from recent suggestions that explainability is overrated, including the argument that explainability is not guaranteed in human healthcare experts either. To shed light on this debate, its premises, and its recent twists, we provide an overview of key arguments representing different frames, focusing on AI in healthcare.
Raz, A., Heinrichs, B., Avnoon, N., Eyal, G., & Inbar, Y. (2024). Prediction and explainability in AI: Striking a new balance? Big Data & Society, 11(1). https://doi.org/10.1177/20539517241235871
Artificial intelligence in child development monitoring: A systematic review on usage, outcomes and acceptance
Objectives: Recent advances in artificial intelligence (AI) offer promising opportunities for its use in pediatric healthcare. This is especially true for the early identification of developmental problems, where timely intervention is essential but developmental assessments are resource-intensive. AI carries potential as a valuable tool in the early detection of such developmental issues. In this systematic review, we aim to synthesize and evaluate the current literature on AI usage in monitoring child development, including possible clinical outcomes and the acceptability of such technologies by different stakeholders. Material and methods: The systematic review is based on a literature search comprising the databases PubMed, Cochrane Library, Scopus, Web of Science, Science Direct, PsycInfo, ACM and Google Scholar (time interval 1996–2022). All articles addressing AI usage in monitoring child development or describing respective clinical outcomes and opinions were included. Results: Out of 2814 identified articles, 71 were ultimately included. Of these, 70 reported on AI usage and one study dealt with users’ acceptance of AI. No article reported on potential clinical outcomes of AI applications. Publications peaked between 2020 and 2022. The majority of studies were from the US, China and India (n = 45) and mostly used pre-existing datasets such as electronic health records or speech and video recordings. The most frequently used AI methods were support vector machines and deep learning. Conclusion: A few well-proven AI applications in developmental monitoring exist. However, the majority have not been evaluated in clinical practice. The subdomains of cognitive, social and language development are particularly well represented. Another focus is on the early detection of autism. Potential clinical outcomes of AI usage and users’ acceptance have rarely been considered yet. While the increase in publications in recent years suggests a growing interest in AI implementation in child development monitoring, future research should focus on application in clinical practice and stakeholders’ needs.
Reinhart, L., Bischops, A.C., Kerth, JL. et al. (2024). Artificial intelligence in child development monitoring: A systematic review on usage, outcomes and acceptance. Intelligence-Based Medicine 9: 100134. https://doi.org/10.1016/j.ibmed.2024.100134
Künstliche Intelligenz in der Medizin. Ein interdisziplinärer Blick auf den Verordnungsentwurf der Europäischen Kommission
Applications of artificial intelligence (AI) are finding their way into our everyday lives, be it automated decisions in financial services, the use of facial recognition software by security authorities, or autonomous driving. In medicine, the focus is on optimizing diagnosis and therapy, the use of health apps of all kinds, the provision of instruments for personalized medicine, the possibilities of robot-assisted surgery, optimized hospital data management, and the development and provision of intelligent medical devices. But are these constellations actually examples of artificial intelligence at all? And what does “intelligence” mean, or which conditions must be fulfilled in order to speak of “intelligence”? These questions are answered very differently depending on context and disciplinary background, resulting in a remarkably wide-ranging and multifaceted discourse. What is undisputed, however, is the finding that the quality and quantity of applications operating under the label of AI raise numerous legal, ethical, social, and scientific questions that still await comprehensive treatment and satisfactory solutions.
Heinrichs, B., Karger, C., Heyl, K. et al. (2023). Künstliche Intelligenz in der Medizin. MedR 41, 259–264. https://doi.org/10.1007/s00350-023-6432-x
Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics
Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem is that, by framing the ethics of AI in terms of trustworthiness, we reinforce unjustified anthropocentric assumptions that stand in the way of clear analysis. Furthermore, even if we insist on a purely epistemic interpretation of the concept, according to which trustworthiness just means measurable reliability, it turns out that the analysis will, nevertheless, suffer from a subtle form of anthropocentrism. The paper goes on to develop the concept of strange error, which serves both to sharpen the initial diagnosis of the inadequacy of trustworthy AI and to articulate the novel epistemological situation created by the use of AI. The paper concludes with a discussion of how strange error puts pressure on standard practices of assessing moral culpability, particularly in the context of medicine.
Rathkopf, C., Heinrichs, B. (2024). Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Cambridge Quarterly of Healthcare Ethics 33, 333–345. https://doi.org/10.1017/S0963180122000688
AI, Suicide Prevention and the Limits of Beneficence
In this paper, we address the question of whether AI should be used for suicide prevention on social media data. We focus on algorithms that can identify persons with suicidal ideation based on their postings on social media platforms and investigate whether private companies like Facebook are justified in using them. To find out whether that is the case, we begin by providing two examples of AI-based means of suicide prevention on social media. Subsequently, we frame suicide prevention as an issue of beneficence, develop two fictional cases to explore the scope of the principle of beneficence, and apply the lessons learned to Facebook’s employment of AI for suicide prevention. We show that Facebook is neither acting under an obligation of beneficence nor acting meritoriously. This insight leads us to the general question of who is entitled to help. We conclude that private companies like Facebook can play an important role in suicide prevention if they comply with specific rules, which we derive from beneficence and autonomy as core principles of biomedical ethics. At the same time, public bodies have an obligation to create appropriate framework conditions for AI-based tools of suicide prevention. As an outlook, we depict how cooperation between public and private institutions can make an important contribution to combating suicide and, in this way, put the principle of beneficence into practice.
Halsband, A., Heinrichs, B. (2022). AI, Suicide Prevention and the Limits of Beneficence. Philos. Technol. 35, 103. https://doi.org/10.1007/s13347-022-00599-z
Ethical Implications of e-Health Applications in Early Preventive Healthcare
As a means of preventive medicine, early detection and prevention examinations can identify and treat possible health disorders or abnormalities from an early age onwards. However, pediatric examinations are often widely spaced, so that only snapshots of children’s and adolescents’ development are obtained. With e-health applications, parents and adolescents could record developmental parameters much more frequently and regularly and transmit the data directly for ongoing evaluation. AI technologies could be used to search for new and previously unknown patterns. Although e-health applications could improve preventive healthcare, there are serious concerns about the unlimited use of big data in medicine. Such concerns range from general skepticism about big data in medicine to specific challenges and risks in certain medical areas. In this paper, we focus on preventive healthcare in pediatrics and explore the ethical implications of e-health applications. Specifically, we address the opportunities and risks of app-based data collection and AI-based data evaluation for complementing established early detection and prevention examinations. To this end, we explore the principle of the best interest of the child. Furthermore, we argue that difficult trade-offs need to be made between group benefit on the one hand and individual autonomy and privacy on the other.
Stake, M., Heinrichs, B. (2022). Ethical Implications of e-Health Applications in Early Preventive Healthcare. Frontiers in Genetics 13: 902631. https://doi.org/10.3389/fgene.2022.902631
Can we read minds by imaging brains?
Will brain imaging technology soon enable neuroscientists to read minds? We cannot answer this question without some understanding of the state of the art in neuroimaging. But neither can we answer this question without some understanding of the concept invoked by the term “mind reading.” This article is an attempt to develop such understanding. Our analysis proceeds in two stages. In the first stage, we provide a categorical explication of mind reading. The categorical explication articulates empirical conditions that must be satisfied if mind reading is to be achieved. In the second stage, we develop a metric for judging the proficiency of mind reading experiments. The conception of mind reading that emerges helps to reconcile folk psychological judgments about what mind reading must involve with the constraints imposed by empirical strategies for achieving it.
Rathkopf, C., Heinrichs, J. H., & Heinrichs, B. (2022). Can we read minds by imaging brains? Philosophical Psychology, 36(2), 221–246. https://doi.org/10.1080/09515089.2022.2041590
Discrimination in the Age of AI
In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination, as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand why the general claim that AI/ADM aggravates discrimination is unwarranted. Finally, I argue that the use of AI/ADM can, in fact, increase issues of discrimination, but in a different way than most critics assume: it is due to its epistemic opacity that AI/ADM threatens to undermine our moral deliberation, which is essential for reaching a common understanding of what should count as discrimination. As a consequence, it turns out that algorithms may actually help to detect hidden forms of discrimination.
Heinrichs, B. (2022). Discrimination in the age of artificial intelligence. AI & Soc 37, 143–154. https://doi.org/10.1007/s00146-021-01192-2
A complete list of publications is available for download here: