Mintz On Air: Practical Policies — Real vs. Robot: Holding AI Accountable for Trade Secret Theft
Follow us on: Apple Podcasts | Spotify
In the latest episode of the Mintz On Air: Practical Policies podcast, Member Jen Rubin is joined by Member Paul Huston for an unscripted conversation about the growing challenge of protecting trade secrets in an AI-driven workplace. This episode is part of a series of conversations designed to help employers navigate workplace changes and understand general legal considerations.
Together, Jen and Paul explore:
- How AI models could expose trade secrets – as well as use them
- Steps employers can and should take to create a culture of protection and responsibility through acceptable AI use
- Why banning AI isn’t realistic or desirable in the workplace – and how to balance adoption with risk mitigation
- The thorny question of accountability: Can a bot be held legally responsible for trade secret theft?
Listen for insights on prevention strategies and future litigation trends in the age of AI.
Practical Policies — Real vs. Robot: Holding AI Accountable for Trade Secret Theft Transcript
Jen Rubin (JR): Welcome to the Mintz On Air: Practical Policies podcast. Today's topic: Real vs. Robot: Holding AI Accountable for Trade Secret Theft. I'm Jen Rubin, a San Diego–based Member of Mintz's bicoastal Employment Practice, representing management, executives, and corporate boards. Thank you for joining our Mintz On Air podcast. If you have not tuned in to our previous podcasts and would like to access our content, please visit the Insights page at Mintz.com, or you can find us on Spotify. Today, I'm joined by my San Diego–based colleague, Mintz Member Paul Huston, who, like me, advises employers on all aspects of employment law – including helping them protect their own trade secrets and ensuring that businesses don't unwittingly receive trade secrets from others.
Today's podcast focuses on that inescapable topic: AI. But the angle we're taking today relates to steps employers can try – and I'm emphasizing the word try – to ensure that employees don’t unwittingly or intentionally allow AI to access and misappropriate trade secrets.
Thanks for joining Mintz On Air, Paul. I'm looking forward to exploring how we humans can attempt to rein in this all-consuming, seemingly omniscient force known as AI as it permeates our workplaces – and, apparently, takes over the world.
Paul Huston (PH): Thanks for having me, Jen. ChatGPT is telling me that a good response is as follows:
“Thanks for having me. It's an exciting and important conversation. AI is certainly powerful, but the key is remembering that it's a tool created and guided by humans. The challenge isn't about stopping AI – it's about shaping its development responsibly, setting clear guardrails, and ensuring it serves human values rather than replacing them. I'm looking forward to diving into how we can strike that balance.”
I’m not surprised to say that's actually a pretty decent response.
JR: In fact, it's so good that I'm sitting here thinking, why am I doing this podcast with you, my human colleague, Paul Huston? Maybe I should have just hosted ChatGPT. But here we are, human to human.
Why Protect Trade Secrets?
JR: Before we dive in, I have a few questions for you, Paul, to help set the stage. The first one is easy. The second one – and I'm warning you – is harder.
The first question is this: Why do employers need to protect their trade secrets? And what types of steps do they typically take to do so?
PH: That's a good question. The trouble with trade secrets is that if you don't protect them, you can lose them. Taking reasonable steps to protect a trade secret is actually part of the very definition of a trade secret.
Trade secrets are often the lifeblood of a business. They represent the unique processes, strategies, and innovations that give a company its competitive edge – and they make up a substantial portion of the assets of most companies listed on public stock exchanges.
If those secrets are exposed, competitors can replicate them, which can erode market position and profitability.
How AI Puts Trade Secrets at Risk
JR: Exposure is obviously an issue when it comes to trade secrets. How might AI expose trade secrets?
PH: AI tools – primarily large language models (LLMs) like ChatGPT – can expose company trade secrets in several ways if not used carefully.
The first and most obvious is a user inputting confidential data. For example, if an employee pastes proprietary code, product designs, strategic plans, or similar information into an LLM to get help, that data might be processed and stored by the AI system. Depending on the platform's policies, it may even be used to improve the model, which creates a risk of leakage.
The second scenario involves a lack of data isolation or the use of input for model training. Public AI tools often operate in what we call shared environments. If the tool doesn't guarantee strict data isolation, sensitive information that a user inputs can inadvertently influence the LLM’s responses to other users. For example, if another developer asks an LLM to solve a problem that your trade secret solves, the LLM may offer the solution it learned from your code. Some AI providers also use user inputs to train future versions of the model. If trade secrets are included, they can become part of the model's knowledge base and surface in unrelated queries from other users.
Another risk – and one that's less obvious but important to understand – is that LLMs and other AI tools can be exploited by hackers and other bad actors. They can craft prompts that trick the AI into revealing previously entered confidential information if the system isn't properly vetted and tested. There are also more nuanced risks, such as third-party integration. If the LLM is connected to apps or other APIs – application programming interfaces, which are small pieces of software that perform specific tasks like verifying an ID or fetching external information such as the weather – those integrated apps or APIs can create vulnerabilities. If the integration doesn't have proper security, data can flow to external systems unintentionally, and those systems can be subject to attack.
Layers of Protection: Practical Steps for Employers
JR: We have the concept of AI ingestion. We have the concept of a shared environment. We have the concept of integrating information into the systems themselves. We have a little bit of tricksterism, and then we have situations involving third parties. What can employers do to avoid some of these issues?
PH: I think the main way to approach protecting your IP and your trade secrets in this environment is the same as any other security step: a layered process. Any security model should be layered to avoid a single point of failure. The layers here should look familiar.
First, there are cybersecurity measures to prevent data breaches. This is your first line of defense. It includes vetting AI models and ensuring they won’t leak your data. Make sure you understand the parameters of how the model uses data, where it pulls from, and whether you can input information into a closed environment or whether it's in a shared environment. Just like using any tool, if you're going to put the weight of the company’s IP on it, you need to make sure it won’t crack under pressure. So, the first step is understanding the limitations and usage conditions of the tool you’re using.
The second layer is confidentiality agreements with employees and vendors. A lot of AI usage isn't just employees using LLMs to help them do their jobs; there's also AI use by vendors. Clients and customers the company interacts with may use AI and input company information into sensitive systems. Having contracts that govern those arrangements is very important.
Just like with any other trade secret protection step, the third layer is having access controls – limiting sensitive information to those who really, truly need it. You don't want your source code, IP, or marketing strategy for the next year just lying around on the system's network drive for anybody with login credentials to access. Keep these most important company resources in secured environments where they're only accessible to people who need them.
In addition to that, I’d say the fourth layer is employee training. Everyone needs to understand what qualifies as a trade secret and how to safeguard it. Employees should know what they’re working with – or may work with – that the company values highly. Training promotes vigilance in how information is used. Employees also need training on how to use AI in their work: which models to use and which have been vetted and approved by the company. The overall idea is that you really want to create a culture of protection with both legal and technical safeguards in place.
JR: That's really helpful to understand. Breaking it down that way makes it feel less overwhelming. We have stress testing, layers of protection, access controls, and steps to guard trade secrets. And, of course, you ended on something that we as employment lawyers hold near and dear: what are you doing for your employee training and education? Are you creating a culture where people understand these issues are of primary importance to the business? Thank you for summarizing that.
Balancing AI Adoption with Risk Mitigation and Policy
JR: So, we have that groundwork. Now I want to move on to the second question, which I warned you would be a little harder than the first: How do you prevent your employees from using AI in ways that could be damaging or harmful? And how do you prevent them from disclosing your trade secrets – or, just as bad, using someone else's trade secrets because AI has offered your employees that option?
PH: Yeah, this is really tough. It's where the technology – at the stage it's in now – and policy intersect. The reality is that AI tools can inadvertently expose sensitive data or pull in information that isn't properly licensed. So, you know, there's a risk of exposing your own information as well as accidentally integrating someone else's. It's a two-way risk.
Prevention starts with the policies and training that we talked about, right? Technical safeguards – such as restricting access to external AI tools or using vetted, enterprise-grade closed-loop AI solutions that keep data secure – are going to be critically important for companies going forward. And the number of companies that are not using AI is already pretty slim, and it's going to be almost nonexistent in the future.
It's really important to identify what sorts of AI tools are going to be able to meet the company's needs and to find a model that works with your sensitivity and confidentiality needs. So, I think it's important to note here that I do not think that the correct solution is preventing employees from using AI. At this stage, categorically turning down AI tools is putting your company at a huge competitive disadvantage.
For many industries, the reality right now is that this is Major League Baseball in 1998 and AI is steroids – you're either on them or you're not competing. You're left behind. The difference is nobody's coming down to shut down AI and keep good prompters out of the hall of fame. This is just the new normal. In a few years – or maybe even not that long – not using AI is going to be like refusing to use a cell phone or email. It might be fun and quirky, but it's completely unserious from a corporate standpoint. So, it's going to be critically important for companies to identify the AI tools they need – the ones that can help them succeed at what they do – and to vet them, making sure they operate in a secure, closed-loop system rather than a shared environment where information can be fed back to train the model or leaked to others.
At the end of the day, this is really built on a culture of responsibility between company management and employees, because there are so many points of failure, and you can never account for humans doing unpredictable things. That will always be the case. So, having safeguards in place and training people so they understand how to use the tools – and why it's important – is going to be your best bet to minimize the risk of loss here.
JR: You know, a lot of interesting issues come from this. Clearly, we have to accept that this is part of the landscape right now. It's part of our workplace. It's part of our world. Making sure that we've accepted that reality, I think, is an important step.
The Accountability Puzzle: Humans vs. Bots
JR: I want to turn to another question – what I think is a thorny one – about all of these things. And I completely accept this concept of responsibility and shared responsibility. And of course, there's the accountability piece. So, let's turn to that because that is the thorny issue I'm thinking of.
You're familiar with this, Paul – there's been a lot of litigation relating to misuse of AI, or alleged misuse of copyrighted information by various AI companies to train their LLMs. And there's been a new wave of litigation focused on bringing cases against AI companies for defamation by their AI models, right? Some of this AI-generated information turns out to be false. For example, what if an AI program defames an individual?
I'm thinking about a recent case involving conservative activist Robby Starbuck. He filed a lawsuit against Google, alleging that Google's AI tool fabricated facts about him – facts that were then repeated by other AI bots as the truth. I'm sure there are other examples out there, but really the question comes down to accountability.
We can agree that when an employee misappropriates trade secrets belonging to another party, several things can happen. Let's run through them before I ask you some questions, Paul. First, an employer could face liability in court. Second, an employer could take disciplinary action against a current employee for misusing trade secrets – whether through AI or not. Third, you can sue a former employee who you allege has used or misappropriated trade secrets.
But notice that in each of these scenarios, we're talking about taking action against a human – someone you can usually, though not always, find and hold accountable. So, here’s the thorny question: How do you act against a bot? Can a nonhuman creator hold or violate trade secrets? And how do you punish or discipline a bot?
PH: Yeah, this is a great way to frame the issue. When a human misappropriates trade secrets, you have a clear path forward: discipline, litigation, enforcement. With a bot, things are tricky because it's not a legal person. It can't own a trade secret. It can't violate trade secrets. It can't even necessarily misappropriate anything, to the extent misappropriation requires some sort of affirmative act or intent. It certainly can play a part in a human doing that.
It's funny to think about where the interaction between the human component and the AI component comes into play. As I was thinking about this topic, I was visualizing this dystopian future where an AI engine hallucinates a case or something in a legal brief, right? It's a case that doesn't exist or makes a statement about the law, right? You know, ChatGPT says:
“The law in California says X.” It totally doesn't. It's just absolutely wrong – which happens frequently. And then a lawyer doesn't check that, puts it into a brief, and opposing counsel gets it and says, “Really? That's the law?” and tries to check it with ChatGPT. And ChatGPT says, “Yeah, that's the law.” Okay. And then the judge gets these briefs. They both say, “This is the law,” and the clerk – maybe the clerk checks it with AI and says, “Yeah, that's the law.” Everybody's just relying on ChatGPT or an LLM to do their job. And it's wrong, but it's consistently wrong. It's art creating life instead of the other way around.
JR: Yeah, and what happens if all of a sudden that's our reality?
PH: Yeah, so the short answer is you can't sue the bot, right? But you might see liability for those who built it or used it without adequate controls or warnings. And I think that's really where the rubber meets the road on this issue. That's where you're looking at these errors – or even if not errors, right? It could be just the way the model is structured: leaking information, using confidential information, storing it, and learning a solution to a problem.
For example, the question is going to be: Was there adequate notice to make the user understand that the information being entered might not be safe? Or was it represented as safe when, in fact, it was not? This is where the governance and contractual protections around AI are becoming critical. And this is especially fascinating for me because it brings up some very old philosophical and theological thought experiments about who's responsible as between the creator and the creation. AI is different from any tool we've ever had before because its capabilities are so profound and expansive. These models can really surprise even their developers, and sometimes they act in ways that almost make it seem like they're making their own decisions. Still, I don't think I've seen anything that would lead me to believe that the traditional causation rules should be applied differently here.
The issue is: We're discussing reasonably foreseeable problems with AI platforms, right? We are foreseeing these problems right now. That's what we're doing as we talk about the places where these could happen. Some of them already have happened, but there are many that have not – and you can totally see them happening. That puts us right back into the world of torts and contracts that we've always known. People are going to be responsible for developing and using these tools in a way that avoids these issues. I don't think we're quite ready to start suing bots yet. But check back in a few years.
I think ultimately, in situations like this, accountability is going to hinge on a couple of factors: Did the provider have safeguards to prevent harmful outputs? Was the user warned about limitations or possible adverse outcomes? Were reasonable steps taken to correct errors once they were discovered? Did the user do anything to vet the information? These are the classic contract and tort law principles of causation, foreseeability, warnings, a duty to warn, a duty to read – all of these applied in this context. I think that gives you the legal path to addressing these issues when they come up.
Ultimately, it's a mix of regulation, contractual obligations, and technical solutions – things like watermarking AI-generated content and improving fact-checking. These models are constantly growing and developing, and they're only going to get sharper as we go forward. For now, companies using AI need strong governance and transparency to mitigate these risks. They have to have good policies in place, and they need to do their due diligence.
Working in a closed-loop system is going to be very important. And just so we're clear, that's to differentiate it from a shared environment – using an open model where there's free transfer of information between the user and the model. Closed-loop systems are the ones that guarantee confidentiality. It's sort of a one-way door: The information can come in, but it doesn't go out.
Users really need to understand how to use those systems, why they’re important, and that they're ultimately responsible for any AI-generated content they choose to use.
JR: It's interesting when you take this to a more metaphysical level: What is law, right? At its core, law is a framework of rules that helps people operate and regulate their behavior. It's fundamental to governance and to the social constructs that shape how we interact.
The question of responsibility – creator versus creation – introduces a Frankenstein-like concept: setting something loose in the world and then not being able to pull it back. The flip side is not even understanding what's real and what's not. We don't have time to pull all those things apart today, so let’s bring it back to where we started: trade secrets and the ease with which many of these assets, which are intangible in many respects, can be misappropriated, used, incorporated, generated, and shared.
Future Litigation and Prevention Strategies
JR: Businesses need clarity on how these assets will be treated and must ensure strong protections are in place. As always on this podcast, we return to practicality – accountability and the steps businesses can take. This is especially critical given recent federal discussions about deregulating AI. We won’t dive into that today, except to note that governance and uniform rules are essential moving forward.
So, when AI remembers or reproduces trade secret data, how do you prove it? How do you determine whether AI has ingested, used, or disclosed your trade secrets? And then I come back to – we touched on this briefly – how do you prevent this from happening so that you don't put yourself in a position of having to litigate this issue?
PH: Yeah. That's the cutting edge of future trade secret litigation right there. I think we're going to see a lot of cases in the next couple of years that involve AI. And really, it shouldn't happen if you're careful, because there are already tools in place that operate within a closed system. The tool brings information to you, but it doesn't take any of your information to train the model. It's not going to remember it, and your information won't be available anywhere outside a verified, certified closed-loop system. And if you're staying in that environment, a lot of the protections are already in place for you. But that tool being available doesn't stop an employee from using an open ChatGPT model on a phone or tablet when they're thinking about an issue outside the office. And that's where I think these problems are going to come up. Honestly, I would expect a lot of these to be discovered after the fact – after a competitive product is introduced by Company B, reverse-engineered by Company A, and people from Company A are looking at it and saying, “There's no way they came up with the exact same solution or code or whatever it is that we thought of for our product.”
They have to have copied it. And then there's a filing for an injunction or an allegation of trade secret misappropriation and an identification of the exact trade secret at issue. Discovery in that lawsuit may show that Company B used an AI model to generate the solution that Company A thought it had kept secret. Then Company A goes back, does an audit, and finds out that Steve, an intern, put the code into a public LLM to try to get help fixing a bug and inadvertently exposed the company's trade secret. That's a very bad day for Company A. And you can see secondary litigation in those cases, looking at the particular LLM's terms of use, warnings, and so forth – the issues we were discussing earlier. I can totally see that happening.
I can also see cases where this incredible access to information – at a depth and complexity we haven't seen since the internet became widely available and full of information – creates new challenges. I think this step, going from the internet to AI-powered tools, is probably even bigger than going from no internet to the internet. We may see these AI models coming up with solutions that somebody thought were secret – when in fact the model just arrived at the same answer independently. So, we might see a lot of that sort of dummy litigation as well, where somebody is surprised to find that something they thought of on their own is also being manufactured by another company, and it turns out that company simply came up with the same answer or the same solution to the problem.
I do think it's going to be very difficult to determine how and where leaks happened – which is why prevention is so intensely important. You really can't call something back once it's out. Once it's lost, it's usually lost, and you're left trying to pick up the pieces. So, prevention is the key. And with AI and the incredible power it has, prevention is going to be more important than ever.
JR: It's interesting. Prevention is the best way to avoid having to deal with the accountability question. You can solve for layers of protection – the culture, the training, the thoughtfulness about these closed-loop systems – and make sure nothing gets out through these open systems. That would help you avoid the accountability question because I'll tell you, Paul, I don't know how to answer the accountability question. It's really interesting from our perspective as employment lawyers because we always worry about our clients as employers. We worry about how you train employees and how you help employees understand culturally how important these issues are. But you want to avoid having to get to that accountability question because that's where the rubber meets the road. If you can't hold a bot accountable, I don't know where you go from there.
Wrap-up
JR: With that, I'm going to leave it there for our listeners – and thank you, Paul Huston, for joining me today, because this has been an absolutely fascinating conversation. I wish we had more practical advice for our listeners in dealing with these issues, but developments are happening so quickly. And of course, we at Mintz are trying to stay on top of these issues, and we have entire Practice Groups devoted to this. So, thank you for joining, and visit us at Mintz.com for more information and commentary.
You can also find our Practical Policies podcast on Spotify. I'm Jen Rubin. Thank you, Paul Huston, and thanks to our listeners.
PH: Thanks for having me.