How many times have you been a part of a technological change that didn’t take people into account? Artificial Intelligence (AI) implementation is no different; while there is so much hype around AI innovation, there is often less attention paid to the people side of the change – specifically, how to reduce the human risks along the way.  

Within the AI space, a term often used is “human in the loop”, which IBM defines as a system or process in which a human actively participates in the operation, supervision, or decision-making of an automated system. Keeping a human in the loop is crucial to ensuring that AI outputs are well vetted, reliable, and sensible, and it helps avoid errors and hallucinations.

 

While having a human in the loop brings important safeguards, it also introduces some unique risks. In this article, which draws on my new book Data Governance Change Management, I will take you through some of the biggest human risks of AI and how to mitigate them. You’ll walk away with a clearer picture of your first steps toward responsible and compliant AI.

 

So let’s start at the beginning.  

 

The Human Risks of AI and How to Address Them

 

TL;DR: AI has huge potential, but the biggest risks often come from the people side—not the technology itself. Employees may resist AI, misuse it, or get stuck in gray areas without clear accountability. To avoid these pitfalls, organizations need strong change management: involve employees early, set clear guardrails, and make sure humans stay accountable. Treat the human element with as much care as the tech, and AI becomes a trusted partner, rather than a threat.

 

Risk 1: Employee Resistance

This first risk is a common one, and one we’ve talked about for a long time, but it absolutely cannot be overlooked or glossed over. Your employees are people, and they are the backbone of your organization.

 

That said: Some employees fear that AI is going to steal their jobs now or in the future, while others may feel anxious about having to learn a new way of working. This resistance can be loud and vocal, or it can be subtle, showing up as slower adoption, a lack of active engagement, or quiet pushback. Employees who fear their own obsolescence may also withdraw, repeating the “quiet quitting” patterns of the early 2020s.

 

How to mitigate this risk: The key here is to bring employees into the anticipated change early, explaining that AI will strengthen, not replace, human work. Highlight the benefits of AI in reducing repetitive tasks, thereby freeing up time for value-added work.

 

Some practical ways to do this include:  

  • Transparent communication campaigns: Hold town halls, Q&A sessions, or even “myth-buster” style workshops where leaders can explain what AI will and won’t do.  
  • Upskilling opportunities: Provide training in AI literacy, data fluency, or role-specific AI use cases so employees feel prepared rather than left behind.  
  • Redesign jobs with a human focus: Reframe roles to emphasize the unique human contributions – such as creativity, empathy, and judgement – traits AI can’t replicate. For example, a claims processor might spend less time on paperwork and therefore have more time for customer advocacy.
  • Pilot programs with employee champions: Involve employees from across business lines in early AI pilots, allowing them to experience the tools and their limitations first-hand. Ask them to share their experiences with their peers; first-hand accounts are more persuasive than leadership talking points.

 

Keep in mind that waves of layoffs undermine the perception of AI within your organization. Instead of laying off employees, try to repurpose them into new departments and roles, and communicate that AI will not lead to the mass job loss they are hearing about across the industry. Helping employees upskill allows them to step into more creative and strategic work. And remember: Employees aren’t inherently “resisters”. Most want what’s best for themselves and the company; they just need clarity, reassurance, and support.

 

What about leadership?

Leaders who need to communicate information about AI will also have feelings about how they are personally affected by AI tools; after all, they aren’t just neutral messengers. CxOs are human, too. They’re often sandwiched between executive pressure to roll out AI and the very real fears or skepticism of their teams. If you want leaders to communicate and lead with confidence and authenticity, it’s important to support them both emotionally and practically. Here are some ways to take care of them:
 

  • Acknowledge their fears and uncertainty.  
  • Provide leaders with their own learning pathways so that they can ask “basic” questions without the fear of looking uninformed.  
  • Arm them with clear messaging and talking points. Don't ask the leaders to figure out this communication on their own.  
  • Normalize vulnerability in conversations.  

 

Risk 2: AI Misuse

When employees don’t fully understand AI tools, they may unintentionally misuse them. Examples include feeding sensitive data into tools or over-relying on their outputs. These missteps can lead to flawed decisions, ethical concerns, reputational damage, or even compliance breaches.

 

Beyond data security, misuse also shows up in subtle ways: using AI to cut corners rather than to enhance quality, leaning on biased outputs that reinforce inequality, or assuming that, because AI said it, it must be accurate. In high-stakes areas like healthcare, finance, or HR, these errors can have real consequences for people’s lives and livelihoods.

 

How to mitigate this risk: Remind employees that AI doesn’t know everything - it’s a tool. Leaders need to create clear guardrails around the use of AI, including:

 

  • Data practices: Establish clear dos and don’ts for what kind of information can be safely entered into AI systems (a minimal sketch of such a check appears after this list).
  • Verification standards: Encourage a “trust but verify” mindset, where employees are encouraged to fact-check AI outputs before using them in decisions.  
  • Use-case guidelines: Spell out where AI is helpful (such as in drafting, summarizing, brainstorming) and where it should never be the sole authority or, in some cases, used at all (such as legal advice, hiring decisions, medical guidance).  
  • Training and practice scenarios: Offer real-life examples of misuse in training sessions so employees can learn what not to do.  
  • Localized environments: Document organizational best practices for how employees can ensure data security, verify the answers they receive from AI, and make prudent decisions. Organizations may also consider enterprise versions of AI tools that provide a safer sandbox for experimentation, keep data secure, and allow administrators to set custom rules.
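
To make the data-practices and verification guardrails above more concrete, here is a minimal sketch, in Python, of the kind of pre-submission check an organization might place in front of an internal AI tool. The blocked categories, the patterns, and the send_to_ai_tool stub are illustrative assumptions for this article, not the configuration of any particular product.

    import re

    # Hypothetical sketch: screen a prompt for obviously sensitive data before it
    # reaches an AI tool. The categories and patterns are illustrative placeholders,
    # not a complete data-classification policy.
    BLOCKED_PATTERNS = {
        "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "US SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "payment card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def check_prompt(prompt: str) -> list[str]:
        """Return the blocked categories that appear in the prompt."""
        return [name for name, pattern in BLOCKED_PATTERNS.items() if pattern.search(prompt)]

    def submit_prompt(prompt: str) -> str:
        """Forward the prompt only if the guardrail check passes."""
        violations = check_prompt(prompt)
        if violations:
            # Tell the employee why, and point to the policy, instead of failing silently.
            return "Blocked: prompt appears to contain " + ", ".join(violations) + ". See the AI data practices guide."
        return send_to_ai_tool(prompt)

    def send_to_ai_tool(prompt: str) -> str:
        # Stand-in for the real enterprise AI endpoint; included so the sketch runs end to end.
        return "[AI response to: " + prompt + "]"

    print(submit_prompt("Summarize the main themes from our Q3 onboarding survey."))
    print(submit_prompt("Draft a note to jane.doe@example.com about her SSN 123-45-6789."))

The point of the sketch is the shape of the guardrail: a small, visible checkpoint that reflects the organization’s dos and don’ts and tells the employee why a prompt was blocked, rather than filtering silently.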

 

Misuse is usually not malicious, but accidental. The key to overcoming it is building awareness and responsibility.

 

Risk 3: Lack of Accountability

Employees may not always know where their responsibility ends and AI’s begins. If a tool generates an output, who owns the decision that follows? The AI, the employee, or the leader who approved the process? Without clarity, you end up with accountability gaps: situations where no one feels responsible, or worse, everyone assumes someone else is responsible. This creates dangerous gray areas and risky decision-making. For example, in a customer service interaction, an employee may follow an AI-generated response to a client question without validating it and later blame the system when something goes wrong. Another example: If AI recommends a course of action that is not in line with local regulations, and no one double-checks, the organization could face fines or reputational damage.

 

Fuzzy accountability lets employees hide behind the AI (“It wasn’t my fault”) or defer too much to the system, eroding trust and decision quality. Over time, this can lead to finger-pointing and poor outcomes for the business.

 

How to mitigate this risk: Assign clear ownership of tasks and decisions. Every AI-enabled process should have a named human accountable for decisions, outcomes, and oversight. The ability to explain a process (“explainability”) is critical to any data strategy and should be baked into the process irrespective of AI. Embedding explainability requirements into AI adoption means employees can trace how a decision was made later on, which also helps maintain trust and accountability.
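
As one way to picture what a named accountable human plus baked-in explainability can look like day to day, here is a minimal sketch of a hypothetical decision record that an AI-enabled process could log. The Python field names and the example values are assumptions made for illustration, not a standard schema.

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    # Hypothetical sketch of a decision record for an AI-assisted process. The aim is
    # simple: every AI-influenced decision names an accountable human and keeps enough
    # context that the decision can be explained later.
    @dataclass
    class AIDecisionRecord:
        process_name: str          # e.g. "customer refund triage"
        accountable_owner: str     # the named human who owns the outcome
        ai_tool: str               # which system produced the suggestion
        ai_output_summary: str     # what the AI recommended
        human_decision: str        # what the person actually decided
        rationale: str             # why the human accepted, adjusted, or rejected the suggestion
        timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    # Example usage: the employee, not the AI, owns the final call and records why.
    record = AIDecisionRecord(
        process_name="customer refund triage",
        accountable_owner="A. Rivera, Claims Team Lead",
        ai_tool="internal-claims-assistant",
        ai_output_summary="Recommended denying the refund, citing policy clause 4.2.",
        human_decision="Approved a partial refund instead.",
        rationale="Clause 4.2 does not apply to this policy year; confirmed with the policy team.",
    )
    print(record)

Whatever form such a record takes, the design goal is the same: every AI-influenced decision carries a named owner, the AI’s suggestion, the human’s final call, and the reasoning behind it, so the decision can be explained and trusted later.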

 

The big picture: AI has benefits, but only if humans are at the center

AI can bring enormous benefits, efficiencies, and advantages to businesses, but only if organizations treat the human element as seriously as the technology itself. By addressing resistance, misuse, and accountability from the outset, teams can build a culture where AI is seen not as a threat, but as a trusted partner in innovation.

 

Visit us at Chicken N Pickle on October 22

Aakriti Agrawal will be speaking at DI Squared’s event, “Is Your Data in a Pickle?” on October 22, 2025, in Glendale, Arizona. Sign up for your free spot today to learn more about AI risk management while networking over a great game of pickleball.