
Why Does the US Have Guidance for AI Policy?

Updated: Mar 31

Disclosure: Policy analysis updated through Dec 2024.



In comparison to other nations, the United States has taken a more reserved approach to AI policy. Rather than establishing all-encompassing regulations that apply to every form of data, the US has pursued a "sectoral approach," following "cross-sectoral, non-binding principles."1 This approach is intentional in some respects, though it carries its own ethical concerns. Because AI and privacy policies are not placed under one jurisdiction, the power to implement and enforce regulations is left to the individual sectors and to state and local governing entities to govern as they see fit.1 While the US has AI guidance in many sectors, the lack of a cohesive regulation means that technologies with multiple applications may not fall squarely into any single sector and must therefore undergo review under several sets of guidelines. Neuromodulatory-AI technology is one such example.


The leading federal directive on AI in the United States is Executive Order (E.O.) 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.2 It is the only legally binding AI instrument at the federal level; all other national policies pertaining to AI in the United States fall under the categories of standards, guidance, and frameworks. The goal of the executive order was to create a government-wide effort to "guide responsible artificial intelligence (AI) development and deployment through federal leadership, regulation of industry, and engagement with international partners."2

Specifically, the order highlighted eight categories for attention:


  • Safety and security

  • Innovation and competition

  • Worker support

  • Consideration of AI bias and civil rights

  • Consumer protection

  • Privacy

  • Federal use of AI

  • International leadership


Given the obligations that follow from a directive like an executive order, a number of subsequent policies in the form of guidelines and frameworks were created across the country's governmental entities. The E.O. named 28 agencies to which the order was directed and established deliverables of human-centered requirements for governmental oversight of AI development and deployment in the eight categories above. The executive order was seen as a first step toward establishing other federal mandates as necessary, or adopting standards as regulation, should legal accountability be necessary.2


NIST AI Risk Management Framework


Most notably, NIST produced one of the most significant responses to the executive order. Complementing its AI Risk Management Framework, and in support of many of the goals articulated in E.O. 14110, NIST provides guidance on personally identifiable information in a dedicated companion document, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile.3 While not written exclusively for the medical device technology community, it contains items that apply directly to neuromodulatory device technology and sensitive data, and that may prove invaluable in the future.


For example:


"2.4. Data Privacy

 GAI systems raise several risks to privacy. GAI system training requires large volumes of data, which in some cases may include personal data. The use of personal data for GAI training raises risks to widely accepted privacy principles, including to transparency, individual participation (including consent), and purpose specification. For example, most model developers do not disclose specific data sources on which models were trained, limiting user awareness of whether personally identifiably information (PII) was trained on and, if so, how it was collected. Models may leak, generate, or correctly infer sensitive information about individuals. For example, during adversarial attacks, LLMs have revealed sensitive information (from the public domain) that was included in their training data. This problem has been referred to as data memorization, and may pose exacerbated privacy risks even for data present only in a small number of training samples. In addition to revealing sensitive information in GAI training data, GAI models may be able to correctly infer PII or sensitive data that was not in their training data nor disclosed by the user by stitching together information from disparate sources. These inferences can have negative impact on an individual even if the inferences are not accurate (e.g., confabulations), and especially if they reveal information that the individual considers sensitive or that is used to disadvantage or harm them. Beyond harms from information exposure (such as extortion or dignitary harm), wrong or inappropriate inferences of PII can contribute to downstream or secondary harmful impacts. For example, predictive inferences made by GAI models based on PII or protected attributes can contribute to adverse decisions, leading to representational or allocative harms to individuals or groups

Trustworthy AI Characteristics: Accountable and Transparent, Privacy Enhanced, Safe, Secure and Resilient" (NIST, 2024 p.7)
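
The memorization and PII-exposure risks described in the profile are commonly mitigated in practice with pre-training data screens. The following is a minimal sketch in Python of that general idea; the regex patterns, function names, and sample corpus are illustrative assumptions for demonstration and are not drawn from NIST AI 600-1 itself.

```python
import re

# Assumed regular expressions for a few common US PII formats
# (illustrative, not exhaustive, and not taken from the NIST profile).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b(?:\+1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def flag_pii(sample: str) -> dict:
    """Return any PII-like matches found in a single training sample."""
    hits = {}
    for name, pattern in PII_PATTERNS.items():
        matches = pattern.findall(sample)
        if matches:
            hits[name] = matches
    return hits

def screen_corpus(samples: list) -> list:
    """Flag samples that should be reviewed or redacted before training."""
    # The walrus operator avoids running each pattern scan twice per sample.
    return [(i, h) for i, s in enumerate(samples) if (h := flag_pii(s))]

if __name__ == "__main__":
    corpus = [
        "Patient reported improvement after the stimulation session.",
        "Contact Dr. Example at jane.doe@example.com or 555-867-5309.",
    ]
    for index, hits in screen_corpus(corpus):
        print(f"sample {index} flagged: {hits}")
```

In production, screens of this kind typically combine pattern matching with named-entity recognition and human review; the sketch only shows where such a check would sit in a training pipeline.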



The guidelines primarily call out risks to be aware of and suggest actions to take should a violation occur.3 Notably, though, should someone feel that the technology is not operating within data privacy standards or is not being used for its intended purpose, NIST's guidance is simply to disengage the technology.3

Beyond that, there is little more NIST can do within the statutory limits of the executive order, which merely called for guidelines rather than enforceable regulation. As such, the organization can only hope that its voluntary guidelines are followed in the development of AI-enabled medical device technology.


FDA AI SaMD Action Plan


The Food and Drug Administration additionally maintains its own review process for medical devices; however, when it comes to AI in these devices, the agency is only now establishing organizational best practices. To date it had relied on the International Organization for Standardization's compliance standards for AI-enabled medical device development, but with the adoption of the FDA AI SaMD Action Plan it is developing predetermined change control plans (PCCPs) and looking to limit exhaustive review processes for devices under development whenever a new AI regulation enters the scene.5
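
To make the PCCP concept concrete, here is a minimal, hypothetical sketch in Python of how an update to an AI-enabled device model might be checked against pre-authorized change bounds. The field names, change types, and thresholds are invented for illustration only and are not taken from FDA guidance.

```python
# Hypothetical sketch of the idea behind a predetermined change control plan
# (PCCP): a proposed model update is checked against change bounds that were
# authorized in advance, rather than triggering a full regulatory re-review.
# All field names and thresholds here are illustrative assumptions.

# Change bounds the manufacturer pre-specified and the regulator pre-authorized.
PCCP_BOUNDS = {
    "allowed_change_types": {"retraining_on_new_data", "threshold_tuning"},
    "min_sensitivity": 0.92,      # performance floor an update must still meet
    "min_specificity": 0.90,
    "intended_use_locked": True,  # changing intended use falls outside the plan
}

def within_pccp(update: dict) -> bool:
    """Return True if a proposed model update stays inside the pre-authorized plan."""
    if update["change_type"] not in PCCP_BOUNDS["allowed_change_types"]:
        return False
    if update["intended_use_changed"] and PCCP_BOUNDS["intended_use_locked"]:
        return False
    return (update["sensitivity"] >= PCCP_BOUNDS["min_sensitivity"]
            and update["specificity"] >= PCCP_BOUNDS["min_specificity"])

if __name__ == "__main__":
    proposed = {
        "change_type": "retraining_on_new_data",
        "intended_use_changed": False,
        "sensitivity": 0.94,
        "specificity": 0.91,
    }
    # True here: the update stays within bounds, so no full re-review is needed.
    print("within PCCP:", within_pccp(proposed))
```

The design point the sketch captures is that the review burden shifts from each individual update to the plan itself: the bounds are scrutinized once, up front, and subsequent changes that stay inside them proceed without an exhaustive new submission.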


Additionally, the FDA announced at the end of 2024 that it intended to craft US-specific guidance for managing risk across the life cycle of AI-enabled medical devices. This is a novel undertaking, and it gives the FDA the capacity to invest in post-market surveillance and retain more of a regulatory role in the AI-medical device field.4 Together, these two commitments illuminate the likely near-term regulatory direction for SaMD-AI in the US, which would mark progress toward protecting privacy and lowering some of the barriers in the regulatory approval process.


  1. OECD. n.d. "How Countries Are Implementing the OECD Principles for Trustworthy AI." https://oecd.ai/en/wonk/national-policies-2.

  2. Harris, Laurie A., and Chris Jaikaran. 2024. "Highlights of the 2023 Executive Order on Artificial Intelligence for Congress." Congressional Research Service Report R47843, April 3, 2024. https://www.congress.gov/crs-product/R47843.

  3. National Institute of Standards and Technology (NIST). 2024. NIST AI 600-1: Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile. Gaithersburg, MD: NIST. https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.600-1.pdf.

  4. U.S. Food and Drug Administration. 2025. "FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices." FDA News Release, January 6, 2025. https://www.fda.gov/news-events/press-announcements/fda-issues-comprehensive-draft-guidance-developers-artificial-intelligence-enabled-medical-devices.

  5. Reddy, Sandeep. 2024. "Global Harmonization of Artificial Intelligence-Enabled Software as a Medical Device Regulation: Addressing Challenges and Unifying Standards." Mayo Clinic Proceedings: Digital Health. https://doi.org/10.1016/j.mcpdig.2024.100191.




