Health Care Viewpoints

As the first state law to regulate the results of Artificial Intelligence System (AI System) use, Colorado’s SB24-205, “Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the Act), has generated plenty of cross-industry interest, for good reason. In some ways similar to the risk-based approach taken by the European Union (EU) in the EU AI Act, the Act aims to regulate developers and deployers of AI Systems, which are defined by the Act as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

Preventing discrimination and bias in connection with the use of artificial intelligence (AI) in health care is among the principal current focuses of the U.S. Department of Health and Human Services (HHS) and was among the health care directives in the recent Biden Administration Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order). Consistent with these priorities, on April 26, 2024, the HHS Office for Civil Rights (OCR) and the Centers for Medicare & Medicaid Services (CMS) released an unpublished version of a new final rule under Section 1557 of the Affordable Care Act (ACA) that aims to broadly address inequity across health care but also requires certain actions by entities covered under Section 1557 with respect to their use of AI in clinical decision-making (Final Rule).

We have been writing about software as a medical device (SaMD) for years, tracking the Food and Drug Administration's (FDA) efforts to keep up with the fast-paced development of digital technology, such as launching the Digital Health Center of Excellence, implementing predetermined change control plans, and issuing various digital health guidances on device software functions, clinical decision support software, cybersecurity, and other topics. In anticipation of FDA’s Artificial Intelligence/Machine Learning (AI/ML) Medical Devices Workshop in October 2021, we posted a brief history of the agency’s regulatory oversight of software through the traditional medical device regulatory framework established in the 1970s, in which we highlighted the numerous challenges associated with such an approach. But now, with the rise of artificial intelligence and machine learning and the proliferation of AI/ML-enabled software throughout the health care industry, FDA is facing enormous challenges in using an outdated, procrustean regulatory framework to maintain standards of safety and quality for such software devices. It is becoming increasingly clear that innovation in the AI/ML and digital health technology space is advancing rapidly, as FDA Commissioner Robert Califf has emphasized in many recent public appearances, and that the traditional device framework is quickly becoming unworkable for such technologies.

As we reflect on the flurry of activity in the health care data privacy and security space in 2023 and look ahead to what will continue to be a busy 2024, we are seeing the early stages of federal agency movement to align the regulatory environment with modern health care delivery, cutting-edge technologies, and innovative data-sharing techniques. Some of this work has taken the form of federal agency guidance, for which health care organizations will be watching for additional updates, and there are also a handful of pending U.S. Department of Health and Human Services (HHS) Office for Civil Rights (OCR) proposals that call for substantial changes to the HIPAA Privacy Rule.

The Department of Health and Human Services (HHS) was tasked with formalizing and coordinating efforts to regulate artificial intelligence (AI) in health care under the November 2023 Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (AI EO) and has already begun its regulation of AI within certain certified health IT. HHS and the Office of the National Coordinator for Health Information Technology (ONC) recently published the Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing (HTI-1) Final Rule.

On October 30, 2023, the Biden Administration released and signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order) that articulates White House priorities and policies related to the use and development of artificial intelligence (AI) across different sectors, including health care.

The information age in which we live is reaching a new milestone with the development and ready access to conversational artificial intelligence based on advanced transformer algorithms, or AI chatbots, including their upcoming integration into multiple Internet search engines. This development creates exciting opportunities and potentially terrifying risks in the health care space. Inevitably, people will ask AI chatbot-enabled search engines for information on diseases, conditions, medicines, or medical devices and use the response in some way to make certain medical decisions. But what happens when the AI chatbot’s response is inaccurate or even provides advice that may lead to harm if the user follows it? Can AI chatbots be regulated by the U.S. Food and Drug Administration (FDA)? What are the liability implications if a user is harmed? We provide some initial thoughts on such legal issues in this post.

In the weeks leading up to FDA’s October 14, 2021 Transparency of AI/ML-Enabled Medical Devices Workshop (Workshop), we took a brief look at the history of FDA’s regulation of medical device software and the agency’s more recent efforts in regulating digital health. In this post, we will provide an overview of the topics discussed at the Workshop and our impressions of the agency’s likely next steps.
In our last post, we took a brief look back through history at FDA’s approach to regulating medical device software and found that there is little distinction from the agency’s approach to hardware devices. Recently, however, FDA has announced several digital health initiatives aimed at improving the agency’s resources and policies governing software and data systems (including its own internal data systems) and changing the way the agency handles pre-market reviews of and compliance activities for software as a medical device (SaMD) and SaMD manufacturers. In this post, we will review FDA’s digital health improvement highlights from the past few years and take a quick look at the agenda for the transparency of AI/ML-enabled medical devices workshop scheduled for October 14, 2021.
In anticipation of FDA’s virtual public workshop on transparency of artificial intelligence/machine learning (AI/ML)-enabled medical devices scheduled for October 14, 2021, we will be posting a series detailing the history behind FDA’s regulation of software and then reporting our impressions of FDA’s presentations and statements from various attending stakeholders following the meeting. In this part, we briefly summarize FDA’s traditional approach to regulating software and how software development quickly revealed the limitations of the original regulatory framework established in the 1976 Medical Device Amendments to the Federal Food, Drug, and Cosmetic Act (FD&C Act).
Artificial Intelligence is a growing part of our day-to-day life. And AI promises to improve our health care system. ML Strategies Vice President Christian Tomatsu Fjeld recently sat down with other experts for a panel discussion hosted by the San Francisco Business Times to discuss AI and some business and policy considerations across multiple industries. This viewpoint considers some of the impacts on health care specifically, and links out to the panel's discussion.
Regular readers of this blog know that we’re closely following the FDA’s proposed regulatory framework for software as a medical device (SaMD), known as precertification—Pre-Cert for short. Generally, Pre-Cert involves a premarket evaluation of a software developer’s culture of quality and organizational excellence and continual, real-time postmarket analyses to assure software meets the statutory standard of reasonable assurance of safety and effectiveness.
In this sixth post in our series on artificial intelligence in health care, Julie Korostoff highlights the importance of securing adequate data rights to commercialize an AI technology. The post addresses the contractual commitments that a developer of a health care AI tool should secure in order to have the data rights necessary for development and commercialization.
As our use of AI technology becomes more frequent, interconnected, and integral to daily life, the liability exposure to AI product designers and manufacturers continues to escalate. There are more potential liability risks, including product liability risks, in our current environment than ever. With AI technology embedded in interconnected software and hardware products, gone are the days where we can neatly separate data security and privacy from product liability exposure.
The rise of artificial intelligence (AI) developments over the last decade has had profound implications for the health care industry. From IBM’s Watson to lesser known innovations that have flown under the radar, such as clinical decision software and predictive analytics, these changes have infiltrated the field’s daily functions. Congress generally views AI with trepidation and fascination. We expect Congress to keep the subject at arm’s length until provoked to action.
Software developers are racing to develop health care products that leverage artificial intelligence (AI), including machine learning and deep learning. Examples include software that analyzes radiology images and pathology slides to help physicians diagnose disease, electronic health records software that automates routine tasks, and software that analyzes genetic information to support targeted treatment. The one thing that all of these products have in common is a need to interact, in some way, with real world medical data. However, this real world data can be protected by the Health Insurance Portability and Accountability Act of 1996 (HIPAA) as well as a patchwork of federal and state laws and regulations. Below we discuss the contexts in which developers may encounter these laws, as well as strategies to navigate related legal issues.
Artificial intelligence—AI—is the future of everything. But when patient health is on the line, can we trust algorithms to make decisions instead of patients or their health care providers? This post, the second in our blog series about AI in health care, explores FDA’s proposed regulatory model that is supposed to be better suited for AI (and similar technologies) while still protecting patients.
The Journal of the American Medical Association in its September 18, 2018, issue included four articles on deep learning and Artificial Intelligence (AI). In one of several viewpoint pieces, On the Prospects for a (Deep) Learning Health Care System, the author’s conclusions aptly describe why health care providers, entrepreneurs, investors, and even regulators are so enthusiastic about the use of AI in health care.
As in any technology area, it is important to consider patent protection early in the development of an AI-related invention. However, AI inventions raise a number of particular issues that, if not addressed fully or at the right time, could be fatal to securing U.S. patent protection that would otherwise be available to prevent others from making, using, selling, or importing the invention. This article identifies common pitfalls in getting a patent for AI inventions and provides insights on how to avoid them. These principles apply not only to AI-related inventions, but also to digital health inventions more broadly.