
By Kevin Hawkins
Brokerages, MLSs, and associations may not see it, but their agents are already using it. It's called Shadow AI: unapproved generative AI tools like ChatGPT, image and video generators, and other apps, most often free, that agents use to support their business without any oversight.
While today’s leadership is focusing on AI strategy, policy, and systems, Shadow AI is already active inside their organizations. And just because these tools are invisible to leadership doesn’t mean they’re harmless.
Shadow AI can expose client data, violate copyright, trigger fair housing violations, and compromise your brand’s trust.
What exactly is Shadow AI? Why is it dangerous? And what can real estate leaders do to address Shadow AI head-on?
What is Shadow AI, really?
Shadow AI occurs when agents use generative AI tools without their brokerage, MLS, or association's knowledge or approval. It's like Shadow IT, where employees install unapproved apps or software to get work done. But Shadow AI comes with higher stakes. These tools don't just process information; they learn from the data they're given.
In real estate, Shadow AI includes everything from writing and image tools to video and voice platforms. Agents are using ChatGPT, Claude, Midjourney, Gemini Pro, Copilot, and other AI tools to write property descriptions and create marketing content.
Agents are experimenting with ElevenLabs and Artlist for voiceovers and Suno for songs. They're editing videos with Descript and generating digital avatars with HeyGen. Unfortunately, most are using the free versions. None of these tools goes through a formal review. There's no data policy, no brand check, and no compliance layer.
Agents aren’t trying to break rules. They’re trying to move fast, look polished, and stand out. But the second these tools pull in client data, listing information, or personal details, the risk moves from theoretical to real.
Shadow AI isn’t something that’s coming soon. It’s already in use, and unless you’ve built visibility into your AI ecosystem, you just haven’t seen it yet.
Why Shadow AI is so pervasive in real estate
Agents aren’t waiting for broker-approved tools. They’re already using AI on their own terms, in their own way, and they don’t realize the risk. This mirrors what’s happening across other industries. A 2025 MIT study found that while only 40 percent of companies have formal AI subscriptions, employees at more than 90 percent of organizations are using generative AI in their daily work.
That same pattern has taken root in real estate, but with one important difference: agents aren’t employees. They’re independent contractors. They’re not wired to seek approval before trying a new tool. That’s part of the model and the mindset. For an agent, a free tech tool has always been a good thing.
Almost all the AI tools agents are using today have a free version. Agents see even the free versions as tools that save them time, make them look more polished, and improve their workflow, so they're going to use them. But they're doing it without oversight, without policy, and with little understanding of where the data goes or how that free AI tool may use it.
Shadow AI disconnect
Meanwhile, leadership is focused on the big-picture AI rollout: enterprise pilots, internal platform integration, AI policy, and strategy development. But the real adoption wave is happening from the bottom up. Agents aren’t asking for permission. They’re looking for faster ways to work, and Shadow AI is giving them what they want.
This creates a gap. On one side is formal investment in AI that often shows little or no impact on the bottom line. On the other, there is a growing Shadow AI economy where agents are boosting productivity with tools completely outside the organization’s visibility or control.
That’s the Shadow AI divide. And it’s getting wider.
The real risks of Shadow AI in real estate
Shadow AI does not mean that all use of free AI products is dangerous. But when tools operate outside your policies and systems, even basic tasks can create serious problems.
Again, Shadow AI introduces a different level of risk. These tools don't just perform functions. They take in information, process it, and in many cases, retain it. That makes every prompt a potential data exposure.
The biggest risks are data leakage and data accuracy. We know agents are entering client names, property addresses, listing details, and personal notes into tools that were never designed to protect sensitive real estate data. Most of these AI tools, and especially the free versions, don’t offer data processing agreements. They don’t provide audit trails. And they don’t tell you what happens to the information once it’s submitted.
There's also accuracy and compliance risk. Shadow AI use can lead to fair housing violations, copyright issues, or MLS rule conflicts. If listing and client data is not siloed, generative AI will layer in outside data, and the risk of hallucinations increases dramatically. Then there's operational risk. AI-generated content can be wrong, incomplete, or misleading (see Victor Lund's column about California's new AI laws and data deletion rules).
Without proper review, errors can pass through unnoticed. Property descriptions, CMAs, and listing summaries may include hallucinated data or incorrect pricing references. The problem isn’t just bad content. It’s the assumption that AI got it right.
The final risk is about trust. Consumers trust the agent. Agents trust the tools. Brokers, MLSs, and associations have a role to play in protecting that trust. If leadership has no visibility into the tools that agents are using, they can’t protect them or step in when something goes wrong.
How to address Shadow AI head-on
You can’t stop Shadow AI by pretending it isn’t happening. Agents turn to these tools for a reason. They’re fast, free, and effective. The solution isn’t restriction. It’s direction.
Start with visibility. Before you build any AI policy, find out what tools your agents are already using. Run a discovery audit. Talk to team leaders. Review workflows. You don’t need to punish Shadow AI use, but you do need to understand the scope.
Then create a policy that makes sense. Not one written by legal just to check a box, but one that agents can actually follow. Be clear about what's allowed, what's not, and what's still undecided. Most important, explain why. Agents are more likely to follow the rules when they understand the risk.
Next, provide alternatives. If you want to stop agents from using free AI tools, give them better ones. Your solutions need to be just as fast, just as useful, and built for real estate.
But tools alone won’t fix the problem. You also need the right systems behind them. That’s where Model Control Platforms come in. MCPs allow brokerages, MLSs, and associations to manage which models get access to what data. They set parameters for how prompts are handled, how outputs are filtered, and what gets stored. MCPs are not just another layer of tech. They are the new foundation for safe, scalable AI adoption.
At WAV Group, our CTO, David Gumpper, and founding partner, Victor Lund, are building MCP frameworks through Fluente. This offering is designed to help real estate organizations govern AI usage with confidence. The goal isn't to slow down innovation; it's to enable it with the right gates and guardrails, and performance that continues to "wow" your agents.
Shadow AI isn’t going away. But with the right approach, you can bring it out of the shadows and turn it into a competitive advantage.