ChatGPT, an AI chatbot developed by OpenAI, recently passed a graduate-level quantum computing exam – unsurprisingly, employers are keen to leverage its simple conversational interface and broad domain knowledge to boost productivity. And ChatGPT isn’t the only such tool in town – others, such as Google Bard and Microsoft Bing (which is also powered by OpenAI), promise similar results. Many tasks carried out by knowledge workers appear to be within the scope of these AI tools – drafting the email announcing a team reorganization, writing copy for a top-secret new product, and even creating complex code. Is this a good idea? It may be, but it is important to understand the potential risks, which are wide-ranging and may include the loss of intellectual property.
Consider this: Would you select a third-party consultant without considering their background and training? Would you be comfortable sending confidential business information to that consultant without an NDA? Would you assume liability for copyright infringement or plagiarism in the consultant’s work product without protections in place? Would you allow the consultant to use your confidential business information in their work for another client? These are the types of questions employers should ask themselves before using an AI tool like ChatGPT.
AI tools may use your data to further train the tool
AI tools like ChatGPT are natural language processing tools, meaning that they can understand and mimic human language. They usually operate in the form of a chatbot, where users provide prompts and the tool responds in kind. These “conversations” may themselves be used to train the AI tool – that is, the tool learns from the conversations and further improves itself. What makes these AI tools unique from prior chatbots is their scale (the number of trainable parameters), their adaptability across many different domains of knowledge, their optimization (by humans through a fine-tuning process), and their ease of use.
AI tools may have restrictions on type of use and require disclosure of that use
Data input into AI tools may risk violating privacy laws
As detailed above, many AI tools use inputs and outputs to further train their models. Employers should therefore carefully monitor the type of data being input into ChatGPT. Employees may submit sensitive business information, or data protected under privacy laws such as the California Consumer Privacy Act (CCPA) and/or the General Data Protection Regulation (GDPR), which the AI tool may then use for training. For example, OpenAI states: “Data submitted through non-API consumer services ChatGPT or DALL·E may be used to improve our models.” And because AI tools may use data to further train their models, there is a possibility that one user’s input data could be output verbatim to another user.
Data input into AI tools may risk loss of intellectual property protection
The disclosure of business information to an AI tool may render that information ineligible for intellectual property protection, such as trade secret or patent protection. These risks are present even if the AI tool does not use the user data as part of its training.
Trade secrets are protected by virtue of their secrecy. Thus, it is critical to take reasonable steps to protect the trade secrets from disclosure. This generally includes implementing measures such as clearly labeling trade secrets, training employees on confidentiality policies, and limiting access to secret information. Inputting a trade secret into an AI tool is akin to disclosing that trade secret to a third party – generally such disclosures should not be made without a confidentiality agreement in place. Because there may be no confidentiality guarantee with respect to data input into an AI tool, courts may conclude that the use of the AI tool resulted in public disclosure of the information and loss of trade secret protection. Having policies in place prohibiting the disclosure of sensitive business information to AI tools such as ChatGPT may help businesses protect their trade secrets.
Similar concerns arise with respect to patents. Although patented inventions are ultimately protected through their disclosure, the timing and manner of that disclosure play a critical role. As explained in a previous blog, a party’s prior public disclosure of an invention may cause an inadvertent surrender of patent rights. An off-the-cuff public disclosure may lead to frantic last-minute patent drafting and filing efforts to avoid such surrender, which may be unsuccessful. The disclosure of an invention to an AI tool may be considered one of these off-the-cuff public disclosures. And because the business is unaware that a public disclosure has occurred, it may not file the required patent application in time (in the U.S., generally within one year of the public disclosure) to protect its invention.
AI tools may be susceptible to risks commonly associated with information stored on the cloud
Because the user data is likely retained by AI tools, that data could be susceptible to security concerns such as a data breach due to bugs or malicious actors. Just last month, OpenAI CEO Sam Altman tweeted that ChatGPT had “a significant issue . . . due to a bug in an open source library.” As a result, a small percentage of users were able to see the titles of other users’ conversation histories.
Data output from AI tools may be subject to IP protections
Employers may open themselves up to risk by using content generated by an AI tool. As mentioned above, these AI tools are trained on a large dataset. Some of this information may already be protected by intellectual property laws, or may contain personally identifiable information. Thus, the output generated in response to a prompt may contain portions of potentially trademarked, copyrighted, or otherwise protected material. In addition, some of the output generated for one user may be the same as or similar to that generated for another user. Employers will face difficulties in determining whether generated content contains any already-protected material, and they may be liable to the actual IP owner for infringement. As an example, OpenAI was recently sued over Codex – an AI tool akin to ChatGPT but specifically for coding – for alleged copyright violations based on output code that is identical to sample code in a textbook on computer programming.
Data output from AI tools may not be protectable
The protectability of an AI tool’s generated material is questionable. The U.S. Copyright Office issued guidance providing that AI-generated material cannot be copyrighted. The Office further explained, however, that to the extent a human modified that material, those modified aspects could be protected. Similarly, with respect to patents, the Federal Circuit ruled last year in Thaler v. Vidal that AI systems are ineligible to be “inventors” because they are not human, and the Supreme Court recently denied Thaler’s petition for review of the Federal Circuit’s decision.
There is no guarantee that AI tools will present accurate information
Use of AI tools may subject businesses to other liabilities
Best practices for implementing AI tools