Know your privacy responsibilities

If your business collects or holds personal information, there are rules you must follow for how you collect, use, store and disclose it. When you use AI in your business, these rules still apply – and they matter even more.
 
Key privacy rules for using AI are that you must:

  • store and protect information securely
  • make sure information is accurate before using it
  • be careful when disclosing information, particularly outside New Zealand.

Privacy checklist for AI tools

Before you use an AI tool in your business, ask yourself these questions: 

  • Do I know what kind of information I’m putting into the tool? For example, text, documents or customer messages.
  • Can I avoid sharing personal or sensitive information?
  • Is this a safe and trusted tool, and what protections does it have in place for my information?
  • Is there a paid version that offers better security for business use?
  • Can I change the tool settings so that the results aren't used to improve the tool? 
  • Do staff have a clear, simple process to follow if something goes wrong?
  • Who is my information shared with, where is it stored, and how long is it kept for?
  • If it’s kept, could I or the service provider ever be legally required to disclose it?
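The checklist above can also be run as a simple pre-use gate. This is an illustrative sketch only – the item wording and function names are hypothetical, not from any official tool:

```python
# Hypothetical pre-use privacy checklist for an AI tool.
# The items paraphrase the checklist above; the function flags
# anything not yet confirmed, so you know what to resolve first.

PRIVACY_CHECKLIST = [
    "I know what kind of information goes into the tool",
    "Personal or sensitive information is avoided or removed",
    "The tool is safe, trusted, and protects my information",
    "Tool settings stop my data being used to improve the tool",
    "Staff have a clear process to follow if something goes wrong",
    "I know who data is shared with, where it is stored, and for how long",
]

def unresolved(answers: dict) -> list:
    """Return checklist items that have not been confirmed with a 'yes'."""
    return [item for item in PRIVACY_CHECKLIST if not answers.get(item, False)]

# Example: everything confirmed except the training-data setting.
answers = {item: True for item in PRIVACY_CHECKLIST}
answers["Tool settings stop my data being used to improve the tool"] = False
print(unresolved(answers))
```

If the list it prints is empty, every item has been confirmed; otherwise it shows what still needs attention before staff use the tool.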

Keep a short list of approved AI tools for your staff to use and treat add‑ons as higher risk. It’s safer to stick to one or two tools with business accounts. Be careful with browser extensions, plugins, and connectors – they can access and share files, emails, or customer information.

Prioritise cyber security

Keeping your information and systems safe is important for business resilience. When you introduce new tools or systems – including AI – you introduce new risks, and it's important to make sure you’ve got strong cyber security measures in place.

To help protect your business from IT or cyber risks, you can:

  • turn on multi-factor authentication (MFA) for email, file storage, and AI tools
  • use strong passwords and a password manager
  • limit who can access sensitive folders and customer information  
  • keep business devices automatically updated 
  • create an incident plan for your business that covers who does what in a cyber-related emergency
  • consider specific cyber insurance to protect your business information and systems.

Transparent and responsible AI use

To ensure responsible AI use in your business, take a balanced approach. You should consider:

  • where AI can deliver the most value rather than automating all areas of your business 
  • what processes and guidance you’ll need to ensure accuracy
  • how you can be transparent with customers and staff when they’re interacting with AI
  • what the environmental impacts of using AI are.

Practical ways to embed AI ethically and responsibly include:

  • keeping humans in your review or quality assurance processes – particularly if the work affects customers, staff, pricing, eligibility, or finances
  • being transparent when a customer or staff member is interacting with AI – for example, an AI chatbot on your website, or a tool transcribing a meeting
  • using AI tools for drafts and suggestions, not final decisions or approvals
  • researching how you can offset energy usage related to AI tools in your business.

Understand data sovereignty

Data sovereignty is about whose laws and rules apply to your data.

It depends on:

  • who the data is about,
  • where it’s collected, and
  • where it’s stored or processed.

This matters because privacy and access rules can be very different between countries. Things can get complicated if your AI provider stores or processes data offshore or in cloud environments outside New Zealand.

If you’re a business in New Zealand, you should also respect Indigenous Data Sovereignty and, in particular, Māori Data Sovereignty principles.

At a high level, this means Māori data should be governed by Māori, and decisions about how Māori data is used, shared, or stored should involve the right Māori voices.

How to apply data sovereignty day to day

To follow data sovereignty principles in your day-to-day business you should ask:  

  • Where does my AI provider store and process data? – Check their data residency information carefully.
  • Am I handling personal information? – Be extra careful – New Zealand privacy laws place limits on sending personal information overseas.
  • Am I using a tool designed for business use? – Business plans often offer better security, clearer contracts, and stronger controls.
  • Do the contracts clearly protect my information? – Make sure there are clear controls around data retention, access, and audit rights.
  • Have I looked into Māori Data Sovereignty guidance? – Understand what it is and whether it applies to your business or use case.

There are also some New Zealand-specific triggers that mean you should stop and check before going further.
 
If personal information is stored or processed offshore:

  • treat this as higher risk
  • do extra checks on the supplier
  • use tighter security and privacy settings.

If the work involves:

  • Māori data
  • iwi, hapū, whānau, or Māori organisations
  • or could materially affect Māori customers or communities

Pause after a small pilot and get the right Māori governance in place before scaling up.

This should include clear agreement on:

  • the purpose and expected benefits
  • who can access, share, and keep the information (and for how long)
  • and how people can raise concerns or issues. 

What AI governance means for your business

AI governance is how you set your business up to use AI in a way that’s responsible, safe and aligned with your business goals.
 
It covers the systems, policies, and everyday processes that guide how AI is chosen, used, and checked.  
 
In practice, it means being clear with your team about:

  • when AI can be used
  • how it should be used
  • and when a human needs to step in

Good governance helps ensure AI is used fairly, transparently, and in a way your staff and customers can trust. This may sound a bit daunting or very formal – especially if you’ve got a small team.  
 
The AI Forum New Zealand provides practical tools and step‑by‑step guidance to help businesses of any size build trustworthy, future‑ready AI governance. 
 
Their resources can help you feel confident choosing tools, managing risk, and using AI in a way that supports your business and your customers. 

Big picture and day to day

A high‑level framework sets the overall direction for how AI fits into your business. But staff also need practical, easy‑to‑follow guidance they can use day to day.

You need both:

  • a shared view of why and where AI is used, and
  • clear guidance on how to use it safely in real situations.

Your approach should protect your business while still allowing people to experiment and learn – without putting data, people, or trust at risk.

To make governance stick, support it with regular training, realistic examples from your own work, and periodic reviews as tools and risks change.

What effective AI governance looks like

Set out the principles that guide your AI use. For example:

  • being open about when AI is used
  • staying accountable for decisions
  • protecting privacy
  • keeping humans involved where judgment matters 

Set clear requirements for the tools you choose. For example:

  • where data must be stored (such as New Zealand)
  • whether audits or oversight are possible
  • whether the tool is suitable for business use 

Help employees understand:

  • when AI is appropriate
  • how to use it safely
  • which tools are approved for different tasks 

Be clear about:

  • what information can and can’t be used
  • what should never be entered (for example, personal or sensitive information) 

Before using a tool, check:

  • where information is stored or processed
  • whether customer information is used to train the tool
  • how long information is kept
  • who can access it (including subcontractors)
  • how incidents are reported and managed 

Give staff a simple way to:

  • identify low, medium, or high‑risk uses 
  • understand what extra steps are needed as risk increases 

Have a clear process for raising concerns or questioning AI output. If something goes wrong:

  1. Stop the activity
  2. Contain the issue
  3. Check what was shared or generated
  4. Fix the process
  5. Tell anyone who needs to know 

Add a quick two‑step check before using AI for a new task:

  • Low risk? — you can go ahead.
  • Higher risk? — slow down. Use tighter data controls, get a review, and double‑check the tool or supplier first. 
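The two-step check above can be sketched as a small triage function. The risk triggers and wording below are illustrative assumptions drawn from this guide, not official thresholds:

```python
# Hypothetical two-step risk triage before using AI for a new task.
# The trigger names are illustrative flags a business might define itself.

HIGH_RISK_TRIGGERS = {
    "personal_information",  # personal or sensitive data involved
    "offshore_storage",      # data stored or processed outside New Zealand
    "maori_data",            # Māori data, or Māori communities affected
    "customer_impact",       # output affects customers, finances, or eligibility
}

def triage(task_flags: set) -> str:
    """Return 'go ahead' for low risk, otherwise the extra steps needed."""
    triggers = task_flags & HIGH_RISK_TRIGGERS
    if not triggers:
        return "Low risk: go ahead."
    return ("Higher risk (" + ", ".join(sorted(triggers)) + "): slow down - "
            "tighten data controls, get a review, and double-check "
            "the tool or supplier first.")

print(triage({"drafting"}))                      # no triggers matched
print(triage({"offshore_storage", "drafting"}))  # one trigger matched
```

A task with no matching triggers goes ahead; any match returns the extra steps to take first, so staff get a consistent answer rather than guessing.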


Understand AI agents and their risks

Some AI tools can act on your behalf — these are “agents”. Unlike tools that only draft or summarise text, agent tools can take actions across your systems, like sending emails, updating records, or moving files. This can save time, but it also introduces extra risks.

Risks to watch out for include:

  • Wider access: agents often need permissions across email, files, and business apps.
  • Hidden instructions: agents can act on unsafe prompts found in content (prompt injection).
  • Accidental sharing: data may be sent or uploaded to the wrong place.
  • Hard‑to‑undo changes: automated actions can affect customers, finances, or bookings.
  • Low visibility: some tools don’t provide clear logs of what happened.

Learn more about:

Getting started with Artificial Intelligence