Buyer Beware: Implementing AI in the US Healthcare Industry

As the use of AI explodes across the globe, the rapid implementation of various AI systems in the US healthcare industry is no exception. Every day, it seems, more healthcare providers, EMR vendors, health plans, and others promote their implementation of AI. Generative AI is being deployed by large EMR vendors and some large health systems to improve patient interaction, leveraging its ability to "chat" with patients via patient portals and other patient-facing systems.
As someone who has worked in healthcare for over 40 years, I am amazed at the rapid implementation of AI. I do wonder whether we fully understand its implications and its impact on day-to-day operations and on patient care. The delivery of healthcare in the US happens within a comprehensive and heavily regulated ecosystem. As AI becomes more prevalent in healthcare operations and delivery, it is fair to ask whether AI itself is regulated in the US. The answer is "no." The National AI Initiative Act of 2020 directed the National Institute of Standards and Technology (NIST) to develop the AI Risk Management Framework (AI RMF). NIST published the AI RMF in January 2023, along with several companion guides and tutorials, but compliance is voluntary. Congress has not passed any comprehensive legislation to regulate AI, unlike the EU, which is advancing the Artificial Intelligence Act, a risk-based framework for the adoption of AI among EU member states. The Trump and Biden administrations have each issued Executive Orders dealing with AI, but those orders only provide guidance to federal agencies. State legislatures are beginning to consider AI legislation, which is likely to create a patchwork approach to AI regulation.
None of this is terribly surprising, but it is concerning. Absent a clear regulatory framework, healthcare organizations that implement AI are left to conduct their own diligence and risk assessments. Even if the AI your organization has implemented does not directly "touch" patient care, there could be consequences should the AI malfunction. Do you really understand how the AI works? What about bias driven by the data sets used to "train" it? Is the AI "trustworthy"? These are not simple questions, but healthcare organizations need to be asking them about every AI system they implement. This can be tricky if your EMR vendor, or other vendors, are the ones putting AI into their products. We are in the early stages of the "AI revolution," and many of these issues will work themselves out as we move forward. For now, the mantra should be "buyer beware."