How Moral Frameworks Apply to aDBS Technology
- meganjungers
- Feb 15, 2025
- 5 min read
Updated: Mar 31, 2025
Examining aDBS (adaptive deep brain stimulation) through the lenses of Utilitarianism, Deontology, and Virtue Ethics yields several invaluable considerations. Each moral framework can act as a guide for navigating whether a policy regulating a course of treatment or clinical practice is in alignment with your beliefs. Below, we lay out scenarios in which each framework can be applied to assess the morality of different situations relating to aDBS technology.
Moral Frameworks
First, some context on the moral frameworks discussed in this post. These are simplified versions, as the philosophical interpretations of each are extensive and vary considerably. When we discuss Utilitarianism, Deontology, and Virtue Ethics, we use the following operational definitions:
Utilitarianism: aims to bring about the most good for the most people. It takes into account the consequences of an action and how many people stand to benefit from it, as it derives from a Consequentialist framework [1]. To learn more about Utilitarianism, please visit this link.
Deontology: aims to fulfill an innate duty to do good to others and to uphold moral integrity and respect for persons, regardless of negative consequences. To learn more about Deontology, please visit this link.
Virtue Ethics: aims to use the traits and characteristics of individuals to determine whether an action is right or wrong, with a greater focus on unconscious decision-making [2]. To learn more about Virtue Ethics, please visit this link.
Privacy
When it comes to privacy concerns, consequences become the driving issue in determining the permissibility of an action. With these assessments, we can formulate policy as a vehicle for more impactful and valuable regulation in alignment with our moral priorities.
Sharing an individual's health data without consent can bring about harm in a number of ways. These include insurance and employment discrimination, as well as the emotional distress of knowing that sensitive data is accessible and could be used in an exploitative manner [1]. From a utilitarian perspective, the compounded possibility that several of these negative consequences could harm a large number of people, all as a result of failing to protect data, would undermine the aim of bringing about the most good for the most people [1].
In contrast, a deontological approach to privacy holds that a person is wronged by a lack of respect and by a failure of the duty to do good unto others [1]. Even if a patient's data is stolen and they are never made aware of the cyberattack, the patient's loss of control over their data is a failure to fulfill the moral obligation to respect their privacy [2]. In this sense, deontologists stress the importance of doing right by others, regardless of whether any harm ever materializes.
Similarly, a virtue ethicist would consider the characteristics of the person in charge of creating safeguards around health data privacy: if responsibility guides that person, patients can trust that their sensitive health data is protected, and there is significant value in maintaining this foundation of trust [2,3]. On this interpretation, virtue ethics would favor the virtue of being responsible, and thus trustworthy toward all, which virtue ethicists regard as a virtuous trait [3].
Taken together, each of the moral frameworks justifies action to preserve the privacy of patient health data. From these conclusions, we can argue that there is a moral incentive to develop policy that protects the health data of patients everywhere.
Data-Sharing in AI
A different question concerns the intentional sharing of patient data with AI models without the patient's knowledge. This is another dimension of privacy, but here the data contributes directly to a machine learning model, in the hope of making the algorithm more accurate and reliable in the future.
From a utilitarian perspective, contributing data to the betterment of a treatment that will benefit others supports the goal of supplying the greatest utility to the most people [3]. Because of the nature of AI models, having more data helps mitigate the influence of biases, as a larger sample is theoretically more representative [4]. An AI algorithm is only as good as the information it is trained on, so with more contributions of data, there is a better chance that the algorithm will have fewer implicit biases and will also inform future models. While this trades away some rights, the potential to improve treatment and move medicine forward justifies allowing data sharing for AI purposes.
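The link between sample size and representativeness can be illustrated with a quick simulation. The sketch below is a toy model, not drawn from the cited sources: the 10% subgroup share, the sample sizes, and the function name are all hypothetical choices made for illustration. It estimates how far an underrepresented subgroup's share in a random training sample tends to drift from its true population share as the sample grows:

```python
import random

random.seed(42)

# Hypothetical population: 10% of patients belong to an underrepresented subgroup.
POPULATION_SHARE = 0.10

def average_representation_gap(sample_size, trials=2000):
    """Average absolute gap between the subgroup's true population share
    and its observed share in a random sample of the given size."""
    total_gap = 0.0
    for _ in range(trials):
        hits = sum(1 for _ in range(sample_size) if random.random() < POPULATION_SHARE)
        total_gap += abs(hits / sample_size - POPULATION_SHARE)
    return total_gap / trials

for n in (50, 500, 5000):
    print(f"n={n:>5}: average gap from true 10% share = {average_representation_gap(n):.4f}")
```

As the sample grows, the average gap shrinks roughly in proportion to 1/√n, which is the statistical intuition behind the claim above. Note that this only addresses sampling error: if data collection itself is skewed, a larger sample will not correct that bias on its own.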
However, a deontological stance would not waver from protecting patient privacy. Again, respecting an individual's rights and preserving their moral integrity is invaluable, and it outweighs the moral goodness of improving a source of patient care. Above all, respect for persons is at the forefront of a deontological approach to data sharing.
Lastly, the virtue ethics argument involves weighing the character and motivations of the person making the decision, rather than the action that results. On this view, a person possessing the virtues of justice, honesty, and responsibility could justify either sharing information with the AI model or protecting it from the model. A just person might be inclined to contribute data so that fair representation in the dataset limits discrimination, particularly against vulnerable and underrepresented populations [3]. A responsible and honest person might instead be motivated to protect a patient from having their privacy infringed upon [3]. In either situation, the individual would be acting virtuously; it is left to the individual's interpretation which virtues most contribute to human flourishing [3].
Ultimately, there is no consensus among moral theories on the correct course of action when it comes to data sharing to advance medical AI models. Instead, the correct course of action is contingent on the interpretations and values of the people it serves. If a community feels strongly that one course of action is truly right and the other is not morally permissible, the community may work together to build policies that maintain accountability and instill protections. Particularly with AI policy, there may need to be continued discussion of priorities, of what a society wants its health system to look like, and of who is most vulnerable if the status quo remains. Overall, moral frameworks can help structure these ideas, but it is ultimately up to people, through their actions, to address the pressing issues of AI in healthcare.
Citations
Price II, W. Nicholson, and I. Glenn Cohen. 2019. "Privacy in the Age of Medical Big Data." Nature Medicine 25: 37–43. https://doi.org/10.1038/s41591-018-0272-7.
Hagendorff, Thilo. 2022. "A Virtue-Based Framework to Support Putting AI Ethics into Practice." Philosophy & Technology 35 (55). https://doi.org/10.1007/s13347-022-00553-z.
Bilal, Adil, Stephen Wingreen, and Ravishankar Sharma. 2020. "Virtue Ethics as a Solution to the Privacy Paradox and Trust in Emerging Technologies." In Proceedings of the 3rd International Conference on Information Science and Systems (ICISS '20), 224–228. New York: Association for Computing Machinery. https://doi.org/10.1145/3388176.3388196.
Agostini, Martino. 2024. "AI Utilitarianism vs. Libertarianism: An Ethical Dilemma." Medium, June 19, 2024. https://medium.com/@tarifabeach/ai-utilitarianism-vs-libertarianism-an-ethical-dilemma-54c4ac3df482.