
The AI Search Engine Doctor Is Always In: What Are the Regulatory and Legal Implications?

The information age in which we live is reaching a new milestone with the development of, and ready access to, conversational artificial intelligence based on advanced transformer algorithms, or AI chatbots, including their upcoming integration into multiple Internet search engines. This development creates exciting opportunities and potentially terrifying risks in the health care space. Ever since the advent of the modern search engine, people all over the world have browsed the Internet for medical advice on symptoms, potential procedures or treatments (including prescription drugs), and the possible progression of specific diseases or conditions, and have found various pieces of relevant or irrelevant information from a multitude of websites. Integrating AI chatbots into search engines allows an algorithm to gather information from the web and convert it into a narrative, conversational response to a user’s specific question. Inevitably, people will ask AI chatbot-enabled search engines for information on diseases, conditions, medicines, or medical devices and use the responses in some way to make medical decisions. But what happens when the AI chatbot’s response is inaccurate or even provides advice that may lead to harm if the user follows it? Can AI chatbots be regulated by the U.S. Food and Drug Administration (FDA)? What are the liability implications if a user is harmed? We provide some initial thoughts on these legal issues in this post.

Is an AI Chatbot a Regulated Medical Device?

If an algorithm will be used in some way to provide medical advice, one might expect it to be regulated by FDA. To be sure, FDA does regulate artificial intelligence or machine learning-based medical device software, which we have discussed in previous posts. However, to be regulated as a medical device, an algorithm, software, or any other product must be intended for a medical or clinical use, namely to diagnose, cure, mitigate, treat, or prevent a disease or condition or to affect the structure or function of the body.

AI chatbots, at least those that have been developed so far, are not specifically created or intended to provide medical advice. The algorithm simply gathers information that it perceives as relevant to the question or conversational input from various online sources, including news articles, websites, and social media posts, as well as from conversations with humans, and transforms it into structured, conversational responses. Presumably, developers do not specifically train such algorithms to provide the most accurate data from reputable sources in response to medical queries or to respond in a certain way to such questions. Unless the developer, or a company leveraging the algorithm in its product, such as a search engine, promotes the AI chatbot as a source of advice on medical issues such as the diagnosis or treatment of a disease or condition, there is no discernible intent to market the chatbot for a medical purpose.

In 2021, FDA revised the definition of “intended use” in 21 CFR 801.4 to clarify that the requisite intent “may be shown by [legally responsible] persons’ expressions, the design or composition of the article, or by the circumstances surrounding the distribution of the article.” The definition states that objective intent to market a product for a medical purpose may be shown “by circumstances in which the article is, with the knowledge of such persons or their representatives, offered or used for a purpose for which it is neither labeled nor advertised,” but it also makes clear that a firm’s mere knowledge that its product is being used for an unapproved medical purpose is generally not, by itself, sufficient evidence of an intended use. Thus, even if the company marketing an AI chatbot knows the algorithm could be or even is being used to provide medical advice, the chatbot could not be regulated by FDA as a medical device unless the company is somehow promoting or encouraging such use.

Of course, there are circumstances in which an AI chatbot could be designed specifically for use in health care environments, and such algorithms may fall within FDA’s regulatory jurisdiction. For instance, a transformer algorithm that is specifically trained to sort through medical texts, scientific articles, and other medical information to provide interpretations of patient-specific data or clinical images would be regulated by FDA as a clinical decision support software device. However, an algorithm designed simply to aggregate and produce general background information on medical topics or to assist a doctor in creating clinical notes may not be classified as a medical device based on the specific exemptions created by the 21st Century Cures Act (see our previous posts on the Cures Act and CDS software here, here, and here).

How Could Medical Misinformation from a Chatbot Be Addressed?

As mentioned above, a transformer algorithm used in an AI chatbot simply scrapes information that the algorithm determines to be relevant to the topic of conversation from the Internet and from conversations with humans, condenses that information, and converts it into a conversational response. If a user asks the chatbot for information on a medical condition, the algorithm could incorporate information from reputable medical sources, scientific articles, and fringe alternative medicine websites into its answer. If a user asks about the uses of a specific drug product, the algorithm could create a response from the official FDA-approved labeling and medical texts, as well as from social media posts by people who have taken the drug, which may include descriptions of off-label uses. Any inclusion of unscientific or disreputable information on medical conditions or treatments, or of information on unconfirmed off-label uses of a drug product or medical device, in a chatbot response could lead to harm if the user takes action based on the response.

In part, the issue of potential misinformation stems from the degree to which the user may independently review the information provided by a search engine in response to a query. When an individual uses a conventional search engine, the user can decide which websites to review and can evaluate the content based on the source and the context, even though the search engine determines which results are most relevant. However, an AI chatbot aggregates information from online sources and provides a direct, narrative response. From the descriptions of certain AI chatbot-enabled search engines, it appears that some of them display references and links to the websites and other online resources the algorithms use to generate a response. However, it is possible that a user would not separately review the source websites and independently evaluate the information, making it more likely that the user would blindly trust the information provided.

Requiring AI chatbot-enabled search engines to display a disclaimer that warns about the potential unreliability of responses to health care queries and advises users to consult their doctors, or creating white lists or black lists of websites that may be used for responses to such queries, may help narrow the potential for harm. However, such solutions may be onerous to develop and integrate into the chatbot algorithm and would likely require specific legislative action to implement.

What Are the Possible Liabilities that Could Result from AI Chatbot Misinformation?

To date, several commentators have identified copyright infringement, defamation, and data protection as three of the most common sources of liability that could ensue from the use of AI chatbots.

Medical misinformation could give rise to negligence claims, but issues relating to duty—including who owes a cognizable duty and to whom that duty runs—the application of the “learned intermediary” doctrine, reasonable reliance, and causation would most likely muddy any such claim. In a strict product liability action, the failure to warn of a known or knowable risk may be a viable claim, but the “state of the art” defense, as well as general and specific causation issues, could create problems for any claimant.


Artificial intelligence and its applications are at the cutting edge of technology and are gradually seeping into and overlapping with goods and services in many industries. As usual, the law and regulatory agencies lag far behind the pace of innovation and must resort to applying ill-fitting laws and regulations to new technologies and products. AI chatbots likely will soon pervade connected devices and platforms, and curbing medical misinformation (and misinformation in general) will become an increasingly challenging problem. Based on the current state of the law, any claims against AI software developers relating to medical misinformation probably will be cases of first impression for the courts that consider them. We will keep track of these developing issues, so keep watching this space for updates.



Benjamin advises pharmaceutical, medical device, and biotech companies on the FDA regulatory process to identify the correct regulatory pathway, assisting with FDA communications and strategy.

Daniel J. Herling

Member / Co-chair, Product Liability Practice

Daniel J. Herling is a highly regarded product liability defense attorney at Mintz. He handles litigation and class actions involving consumer products, leveraging his deep knowledge of California's consumer protection regulations and laws.