Mintz On Air: Practical Policies — Real vs. Robot: AI’s Impact on the Workplace Life Cycle
AI is reshaping the employment life cycle so quickly that employers are racing to keep up. In this episode of the Mintz On Air: Practical Policies podcast, titled “Real vs. Robot: AI’s Impact on the Workplace Life Cycle,” Mintz Member Jen Rubin sits down with Associate Emma Follansbee to discuss how AI is transforming core employment systems and to offer advice for employers on how best to adapt their workplaces to AI developments.
Insights include:
- How AI is changing hiring and onboarding and tips for employers on building workplace protections for these changes
- The importance of modifying basic employee documents such as confidentiality agreements and job descriptions
- How to reshape monitoring and training on AI
- A reminder about the employer’s obligation to stay vigilant while adopting these new technologies
- AI’s impact on investigations and employee complaints
Listen for insights on how employers can adjust established practices to address AI’s growing influence across the employment life cycle.
Practical Policies — Real vs. Robot: AI’s Impact on the Workplace Life Cycle Transcript
Jen Rubin (JR): Welcome to the Mintz On Air: Practical Policies podcast. Today's topic: Real vs. Robot: AI’s Impact on the Workplace Life Cycle. I'm Jen Rubin, a Member of the Mintz Employment Group with the San Diego–based Bicoastal Employment Practice representing management, executives, and corporate boards. Thank you for joining our Mintz On Air podcast. If you have not tuned in to our previous podcasts and would like to access our content, please visit us at the Insights page at Mintz.com, or find us on Spotify.
Today I'm joined by my colleague, Mintz Associate Emma Follansbee, from our Boston office. Emma is an employment attorney who counsels clients on a wide variety of employment issues and litigates employment disputes before state and federal courts and administrative agencies. Her litigation practice includes restrictive covenant agreements, discrimination, sexual harassment, and retaliation claims. Emma also litigates wage and hour cases and counsels on wage and hour compliance.
Like many Mintz employment attorneys these days, Emma has spent considerable time advising clients on the impact of AI in the workplace, and that is the subject of our conversation today. Welcome, Emma, and thanks for being here.
Emma Follansbee (EF): Thank you so much for having me. I'm thrilled to be here.
JR: AI is turning out to be what I'm going to call “a boon and a bane” for human resources professionals and in-house counsel. These folks are wrestling with so many issues that AI has raised in the workplace, and frankly, Emma and I only have a limited amount of time, so we can't discuss all these issues on today's podcast. But if any of you listened to my prior podcast with my partner, Mintz Member Paul Huston, about AI's impact on protecting trade secrets in the workplace, you know that Paul and I identified some thorny issues that employers may not have previously considered — but really should — when it comes to AI in the workplace.
Building on that prior podcast, I thought I would focus our discussion today on AI impacts on the employment life cycle — hiring, working, terminating — always with an eye toward the practical. Not just because that's the title of this pod, but because it's our job as counselors to help clients identify issues before they happen and problem-solve in advance, if that is at all possible.
AI at the Hiring Stage
JR: Emma, the first topic I want to surface with you is one that comes up at the time of hiring, and it really has two parts.
Part one: we know how easily AI can impersonate and trick — and we'll get back to some of that later. But can you give our listeners some practical guidance on things they should consider adding to the pre-hire list to account for some of these AI issues?
EF: Yeah, that's such a good question, Jen. I think we're all experiencing — in our personal lives, at work, and in the media — this question of what can be real and what can be fake when it comes to AI.
The good news is that employers already have a lot of tools at their fingertips, but it's about being thoughtful about the tools you already have in place and how you can use them to focus more on sussing out whether there's AI trickery at issue.
Let me give you an example. Take something as simple as your offer letter. We often include language stating that everything a candidate represents during the hiring process — in their résumé, credentials, and experience — is accurate, and that by accepting the offer, they are not misrepresenting any facts.
We're increasingly seeing that this isn’t always the case with AI. It's now very easy for candidates to download a polished résumé, invent a work history, or provide information that may not be accurate. One practical step employers can take is to ensure that onboarding documents — such as offer letters — include provisions that protect the organization if they later discover false information generated or assisted by AI.
JR: What about any state or federal legislation that impacts how employers deploy AI in the onboarding process? Can you speak briefly about the impact that might have?
EF: Absolutely. This is a constantly changing landscape at both the federal level and state levels. We are already seeing states — and even cities — develop their own AI rules and regulations governing how AI is used in the hiring process.
For example, there is a New York City local law that governs how AI may be used and aims to prevent discrimination when employers rely on automated decision-making tools during hiring.
At the same time, we're seeing activity on the federal level. Recently, President Trump issued an executive order seeking to slow the flow of new state-level AI regulations to establish a more uniform federal regulatory scheme. We're likely to see challenges to that approach, and it has not stopped new rules from cropping up.
All of this makes compliance challenging, especially for multistate employers. A requirement may take effect in New York City, another in California, and yet another in Texas — each addressing completely different aspects of AI. They all touch on AI-related issues, and the rapid pace of change makes it difficult for employers to stay compliant.
JR: It's interesting because everything is changing so quickly, and at the same time, employers may be using AI tools that don't account for those rapid changes. In many cases, they're not consulting humans about how these tools should be deployed or how decisions made by AI need to be backed up, verified, and vetted by a human.
Toward that end — and relatedly — Emma, what do you think about job descriptions and proficiencies? We're still in the onboarding process, where employers are putting together job descriptions and advertising for open positions. How do you account for AI in those job descriptions and in the proficiencies employers are looking for?
EF: I think there are two aspects to that, Jen. The first is whether we understand what it means when a candidate says they are “proficient in AI” at the hiring stage. It's not enough to insert something into ChatGPT or another LLM system, get an answer back, and call yourself a proficient user of that product. We need to know whether employees have experience prompting, verifying, and sussing out false or incorrect information that an AI system might generate.
Because if AI is going to be used in your workplace, you want to know that employees are using it responsibly.
JR: It's very interesting to me, because you may have a situation where someone has learned to use AI to write résumés and job descriptions. It almost becomes a loop — where does the human insert themselves? I won’t go off on a tangent, but these issues raise more issues. It becomes one giant onion, at least to me, and I think probably to many people.
So, let me move on to another question I have related to onboarding.
Let's say you've set up your job application, you're advertising for the right type of position, and that might include AI proficiencies that you hope are being accurately represented. Let's assume a human has accounted for developments and changes in the laws and has reflected that in the systems being deployed. And let's go a step further and say that applicants are being notified that AI is being used — whether as part of an applicant tracking system or elsewhere in the onboarding process.
Are there things employers should be doing with respect to the onboarding documents themselves? For example, should they be thinking about offer letters, arbitration agreements, or restrictive covenants to the extent they're applicable? What should employers be doing at this point?
EF: Absolutely. You raise a good point about restrictive covenants. I'm also really interested in how AI is going to change the way employers think about contracts involving trade secrets, confidential information, and intellectual property. A few questions come up right away.
The first is: what happens — and how are you documenting it — when employees use AI to create information or materials? Who owns that? And how do you ensure your agreements make clear that whatever an employee creates, even if they use a separate system to create it, still belongs to the employer?
We haven’t really dealt with this before. It used to be the case that an employee walked in with their skill set and used it to create work on behalf of the employer. That's no longer the full picture. So how do we make sure employers are protected? And I know you touched on this with Paul in your last podcast, but it’s critically important that employees understand which products — whether AI or something else — they are permitted to use.
If an employee uses an AI product that isn’t a closed-loop system, your confidential information can easily end up outside the organization and used by others.
I want to go back to something you asked earlier and tie it together, which is the importance of employee training.
JR: Yes.
EF: If a new hire comes in and says they're proficient in AI and they used it at their last job, we, the employer, still need to train them on our systems, processes, and confidentiality expectations. That piece is important. AI isn’t going away, so we need to make sure employees know how we expect them to use it — and how they can't.
JR: Many employers think of confidentiality agreements and training as a given, right, Emma? It's not controversial, if you think about it, to ask someone to join your company and keep your information safe. Almost everyone does it. It's hard to find companies that don't have some sort of confidentiality agreement.
What's interesting now, to me, is that employers really do need to go back and look at those agreements — restrictive covenants, confidentiality agreements, training materials — and make sure they clearly communicate the importance of using AI properly. These things used to feel like a given. I don't think they're a given anymore. Employers really need to rethink them and double down.
AI’s Impact During Employment
JR: So, let's transition. Let's assume the employee is now hired. You’ve updated your documents — created with human judgment, not an AI tool — and the person has walked through the door. Let's talk about some of the issues that arise during employment where AI is having a significant impact.
EF: Some of the laws and frameworks we're seeing at the state level apply not only when an employee is hired but also when AI or other automated systems are used to help employers make any employment-related decision. That includes setting the terms of compensation, issuing discipline, making promotion decisions — whatever the scenario may be.
We’re watching a growing framework around how we use AI in these decisions. Employers need to be thoughtful and make sure they understand how the AI is being used. As you know, Jen, we’re already seeing litigation under current federal and state anti-discrimination laws. If an employer doesn't understand how AI factored into an employment decision, it becomes very difficult to defend that decision when an employee raises a complaint or inquiry — for example, about a performance improvement plan, discipline, termination, or any other employment action.
AI is touching everything. It's not going away, and the issue is only going to become more prominent. Employers should be in regular conversation with their counsel because the landscape is changing so quickly.
JR: Let's talk about employee complaints. As we know, complaints are a regular feature of the workplace. When humans work together, conflicts arise. How, if at all, does AI affect an internal workplace investigation?
EF: We're seeing that if we have access to these tools, employees do too. Employees can pop into ChatGPT or another AI system, explain a situation that has happened, and ask what the situation sounds like or what potential issues it raises. Employers are starting to receive complaints, letters, and demands that clearly read as if AI helped draft them.
Does that change employers’ obligations? Not necessarily. But employers still need to make sure they fully understand the facts and circumstances. In the example you gave — an investigation — employers still need to understand what the employee is saying on the ground and what other employees report. There's heightened vigilance when there's essentially a third party in the room — the AI — and we have to be aware of how it's influencing these situations.
JR: From the investigation standpoint, you can't overstate the value of sitting across from a person — watching body language, hearing tone of voice, and experiencing all of those nuances of human communication. Whether you're in HR or counsel conducting the investigation, that interpersonal interaction gives you information you can’t get from a written complaint.
So even if the complaint looks beautifully written, uses a lot of great buzzwords, and invokes all the different provisions of the employee handbook, you still, as an employer, have that obligation to investigate. And if it turns out that something was fabricated or embellished using AI, you're likely going to suss that out quickly once you talk with the person. It's much harder to “mock that up,” so to speak, in real time.
Here’s another question — still in the employment life cycle. What about using AI in a surveillance context? At the firm, for example, when we use Zoom we have to turn off any recording features. I personally don't like having that option available, but it seems so easy for people to record surreptitiously or have AI running in the background. What should employers consider?
EF: I’ve had the same experience. There are so many AI tools that will record conversations or take notes automatically. They seem almost innocuous, and helpful in a lot of ways. But employers have to pause and consider when and how these tools make sense, whether they affect the quality of work, and whether more surveillance in the workplace affects employee morale.
There’s also a significant legal landscape around the issue of recordings — including one-party versus all-party consent states. And there are broader privacy concerns emerging across jurisdictions. It's another one of those things employers should keep in mind when auditing where AI is being used, because this area can be a little quieter but have a huge impact. It’s easy to think “Sure, I'll hit the AI button on Zoom,” without considering whether it changes how people participate or whether they feel comfortable speaking freely.
AI at the Termination Stage
JR: That’s a great segue to the last part of the employment life cycle — termination. Terminations are inevitable; the question is why they occur and what the consequences are, questions you and I spend a good part of our professional lives asking.
I want to go back to what you mentioned earlier about employees using AI in the investigatory context. How do you see it being used on the other end — after termination?
EF: Yeah, I think it’s connected. We’re seeing this technology create more polished materials, which also means employers are receiving more polished post-employment demand letters and complaints — often full of legal buzzwords and structured as if written by outside counsel.
We're seeing this in courts too. Litigants can use AI to draft complaints that read as if an attorney has drafted them. There can be benefits to that — increased access and more capabilities for individuals to advocate for their own rights — but the downside is that these documents are not always accurate and can conflate issues. We're seeing these impacts more frequently, and it’s clear how AI is shaping the tone and tenor of communications employees send after leaving an organization.
JR: Are there systems or processes employers should change to account for receiving AI-generated demands? Or should employers handle them the same way they always have?
EF: Yes and no. Employers should absolutely continue taking these communications seriously, responding within appropriate timeframes, and treating them with the attention they deserve.
But there should also be heightened scrutiny. We need to understand what’s truly being asked or demanded. We're all learning to identify when something “sounds like AI” — certain phrasings, grammar, patterns, punctuation. From there, employers can assess whether the AI-generated nature of the document affects their response or next steps.
JR: The takeaway from this episode is that it's never a bad idea to take a fresh, hard look at each of your processes — hiring, training, internal investigations, and termination — with an appreciation that AI's growing use is affecting every stage of the employment life cycle.
Where it really matters is remembering that human judgment and nuance can’t be replicated by AI. So while it’s important to understand these programs, appreciate how they influence decisions, and have confidence in yourself as the person responsible — whether you’re an HR professional, in-house counsel, or someone else handling these issues — it’s equally important to go back to your own judgment. That means speaking with people directly and confirming that you have that human aspect in the process.
All of this connects to the concept of trust, which, as you know if you follow my podcast, is fundamental to employment relationships. And trust ultimately comes from human-to-human interaction.
Wrap-up
JR: Thank you, Emma Follansbee — this has been an interesting discussion. Talking about AI raises more questions than we can answer, but we're all learning as we go. I really appreciate you joining us today.
EF: Thank you very much for having me. It's my pleasure.
JR: Thank you to those who have tuned in to the Mintz On Air: Practical Policies podcast. Please feel free to visit us at Mintz.com for more content and commentary, or you can find us on Spotify. Thanks again.