Shadow AI is already in your business - here's why that's a problem

73% of businesses have employees using AI tools without approval. Here is what that means for your data security.

A 2025 Salesforce survey found that 73% of businesses have employees using AI tools that IT or management never approved. Your business is almost certainly one of them. The risk is not that your employees are being irresponsible. The risk is that nobody knows what data is flowing out of your organisation, or where it ends up.

This post explains what shadow AI actually looks like, why it matters for compliance and how to get control of it without shutting down productivity.

What shadow AI looks like

Shadow AI is not just ChatGPT. It includes AI features embedded in tools your team already uses:

  • Grammarly's AI writing suggestions
  • Google Workspace's AI drafting
  • Microsoft Copilot
  • Canva's AI design tools
  • Notion AI, Otter.ai and dozens of others

Your marketing team is using AI to generate content. Your sales team is using it to personalise outreach at scale. Your finance team is using it to analyse spreadsheets. Each interaction potentially involves customer data, financial information or proprietary business data flowing to external systems.

The data exposure risk

When an employee pastes a customer contract into ChatGPT to "summarise the key terms," that contract data is stored on OpenAI's servers. When a salesperson feeds CRM data into an AI tool to generate personalised emails, customer information leaves your controlled environment.

Many free-tier AI tools use submitted data for model training by default, and the opt-out is easy to miss. Your confidential business information could surface in responses to other users.

Under the Australian Privacy Act, you are responsible for the data your employees share with third parties, including AI tools. A data breach via an unapproved AI tool carries the same regulatory consequences as any other breach. The OAIC has made it clear that ignorance of employee behaviour is not a defence.

The compliance gap

Most Australian businesses do not have an AI usage policy. That means:

  • No guidelines on what data can be shared with AI tools
  • No approved list of AI applications
  • No oversight of AI-generated content in client-facing communications
  • No audit trail of AI interactions

For businesses in regulated industries (financial services, healthcare, legal), the gap is particularly dangerous. APRA, AHPRA and legal professional bodies are all developing AI governance expectations. Being caught without a framework when regulations arrive is a real compliance risk. Understanding where your business actually stands on AI readiness is the first step.

Four steps to get control

  1. Discovery. Map every AI tool in use across your organisation. Survey your team, check software inventories and review browser extensions. Most businesses find 10-20 AI tools in active use beyond the ones they know about.
  2. Classification. Categorise each tool by risk level based on the data it processes. Tools handling customer data or financial information are high risk. Tools used only for internal productivity (scheduling, formatting) are low risk. A rough sketch of this classification follows the list.
  3. Policy. Create a simple AI usage policy specifying which tools are approved, what data can be shared with each one and what approval is needed for new tools. Keep it practical. An overly restrictive policy will be ignored.
  4. Training. Brief your team on the policy, the reasons behind it and the specific risks of uncontrolled AI use. Most employees comply willingly when they understand the rationale.
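
To make the discovery and classification steps concrete, here is a minimal sketch in Python. It assumes a hypothetical survey export (ai_tool_survey.csv) with a 'tool' column and a semicolon-separated 'data_types' column, and the risk rules are illustrative, not an established standard: any tool that touches customer, financial, contract or health data is treated as high risk.

```python
import csv
from collections import defaultdict

# Illustrative categories only; match these to your own survey wording.
HIGH_RISK = {"customer data", "financial data", "contracts", "health data"}
MEDIUM_RISK = {"internal documents", "employee data"}

def risk_tier(data_types: set) -> str:
    """Classify a tool by the most sensitive data anyone reports feeding it."""
    if data_types & HIGH_RISK:
        return "high"
    if data_types & MEDIUM_RISK:
        return "medium"
    return "low"

def build_register(survey_csv: str) -> dict:
    """Aggregate survey rows into a per-tool register.

    Expects columns 'tool' and 'data_types', where 'data_types' is a
    semicolon-separated list, e.g. "customer data; contracts".
    """
    tools = defaultdict(set)
    with open(survey_csv, newline="") as f:
        for row in csv.DictReader(f):
            types = {t.strip().lower() for t in row["data_types"].split(";") if t.strip()}
            tools[row["tool"].strip()] |= types
    return {tool: {"data_types": sorted(types), "risk": risk_tier(types)}
            for tool, types in tools.items()}

if __name__ == "__main__":
    register = build_register("ai_tool_survey.csv")  # hypothetical export name
    order = {"high": 0, "medium": 1, "low": 2}
    for tool, info in sorted(register.items(), key=lambda kv: order[kv[1]["risk"]]):
        print(f"{info['risk']:>6}  {tool}: {', '.join(info['data_types'])}")
```

A spreadsheet does the same job. What matters is ending up with one list of tools, the most sensitive data each one touches and a risk tier the policy in step 3 can reference.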

The goal is not to ban AI. It is to channel it safely so your team stays productive without putting the business at risk. The same principle applies to building AI fluency at the leadership level: informed adoption beats reactive restriction.

Start here

Run a simple survey this week: ask every team member which AI tools they use and what data they put into them. The results will tell you exactly how exposed you are. From there, draft a one-page AI usage policy covering the essentials.
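
If it helps to see the shape of that one-pager, here is a hedged sketch of the essentials expressed as data, with hypothetical tool names and rules; plain prose in a shared document works just as well.

```python
# Illustrative AI usage policy skeleton; the tools and rules are hypothetical.
AI_USAGE_POLICY = {
    "approved_tools": {
        "Microsoft Copilot": {
            "allowed_data": ["internal documents", "meeting notes"],
            "forbidden_data": ["customer data", "financial data"],
        },
        "Grammarly": {
            "allowed_data": ["marketing copy", "internal documents"],
            "forbidden_data": ["customer data", "contracts"],
        },
    },
    # Anything not listed is unapproved until it goes through review.
    "new_tool_approval": "email the IT lead with the tool name and intended data use",
    "review_cycle_months": 6,
}

def is_allowed(tool: str, data_type: str) -> bool:
    """Return True only for an approved tool and an explicitly allowed data type."""
    entry = AI_USAGE_POLICY["approved_tools"].get(tool)
    return entry is not None and data_type in entry["allowed_data"]
```

Keeping it this small is the point: if the policy fits on a page, people will actually read it.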

If you want a structured approach, the AI Security Audit program maps your organisation's AI usage, assesses data exposure risks and builds a governance framework in four weeks. For a quick self-assessment, try the AI Readiness Scorecard.

Frequently Asked Questions

What is shadow AI?
Shadow AI refers to AI tools employees use without management or IT approval. It includes ChatGPT, Grammarly AI, Google Workspace AI, Microsoft Copilot, Canva AI and dozens of other tools with embedded AI features. A 2025 Salesforce survey found 73% of businesses have employees using unapproved AI tools.
What are the data risks of shadow AI?
When employees paste customer contracts, financial data or proprietary information into AI tools, that data is stored on external servers. Most free-tier AI tools use submitted data for model training, meaning your confidential information could surface in responses to other users. Under the Australian Privacy Act, you are responsible for this data exposure.
How do you get control of shadow AI without killing productivity?
Follow four steps: discovery (map every AI tool in use), classification (categorise by risk level based on data processed), policy (specify approved tools and data-sharing rules) and training (brief the team on rationale). The goal is not to ban AI but to channel it safely. An overly restrictive policy will simply be ignored.

About the Author

James Killick

Co-founder at Njin. Building AI-powered sales systems for B2B businesses.

Want to implement these strategies?

Talk to our AI about how we can help automate your sales process.
