Addressing systemic bias in AI: A leadership imperative in New Zealand

A simple guide for NZ leaders on governing AI and managing risk

AI now drives key decisions in your business: hiring, sales, and strategy. For New Zealand executives, the top priority is making sure this technology is fair. The biggest risk is algorithmic gender bias: computer systems that automatically repeat and amplify unfairness based on gender.

This isn’t a job for the IT team alone. It’s a top-level business risk that needs your direct leadership.

1. The real risks: Why bias costs money and talent

If you ignore AI bias, it hits your organisation in three major ways:

A. Talent and growth risk (hiring and promotions)

If your AI-driven screening tools are trained on historical hiring data, they don’t learn who is qualified; they learn who you used to hire.

  • The unfair outcome: The system might automatically filter out female or non-binary candidates for certain senior roles because the past successful candidates were overwhelmingly male (this famously happened with a major US retailer’s recruiting tool).
  • The problem: The AI uses seemingly harmless data points (like a name or a specific university) as proxies for gender, effectively rejecting highly talented people. Unfair AI means you are actively losing good people and compromising your diversity targets.

B. Data governance and privacy risk (Privacy Act 2020)

This covers the compliance risk created by how your data is sourced and used.

  • The problem: Your AI systems are only as reliable as the data they learn from. If you use a historical dataset for customer or employee profiling that is known to contain errors or is significantly incomplete for specific groups (e.g., poor data quality for women in non-traditional roles), the AI’s subsequent predictions are flawed.
  • The compliance breach: Using that demonstrably inaccurate or incomplete personal information to make a decision about a person risks breaching multiple principles of the Privacy Act 2020. Specifically, Principle 8 demands that information be accurate, complete, and not misleading before it is used.
  • The consequence: This failure in data governance is a direct path to intervention by the Privacy Commissioner, leading to potential enforcement action, which is a major, non-negotiable compliance risk for the organisation.

C. Service and financial risk (Human Rights Act 1993)

This is where systemic bias becomes an illegal act of discrimination in pricing and service delivery.

  • The proxy problem: Your AI won’t use a field called “Gender” to set an insurance premium or approve a service. Instead, it might use proxies like “part-time employment history” or “primary vehicle model”, which correlate strongly with gender.
  • The consequence: If the system offers a woman a less favourable price or a lower service tier based on these unfair proxies, it constitutes indirect discrimination based on sex in the provision of goods and services under the Human Rights Act. This leads to costly legal challenges and public reputational damage.

2. Take control: Simple rules for governing your AI

The best thing you can do for your organisation is to stop leaving AI governance to technical teams alone and to set clear, non-negotiable rules for how AI is used.

i. Create an AI policy: Define “Fair”

You need a simple, written policy that forces your teams to prove fairness.

  • Rule for audits: Demand regular, outside checks (audits) for all AI tools used in critical areas like hiring or lending. These audits must prove the system treats all groups equally.

  • Fixing mistakes: Your policy must guarantee that people have a clear way to challenge an AI decision and get a real person to review it.
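To make the audit rule concrete for your technical teams, here is a minimal sketch of one common audit check: comparing selection rates across gender groups and applying the "four-fifths" heuristic. The function names, the sample data, and the 80% threshold are all illustrative assumptions, not a legal standard.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the selection (pass) rate per group from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Flag whether each group's selection rate reaches 80% of the best group's.
    The 80% threshold is a common audit heuristic, not a legal requirement."""
    best = max(rates.values())
    return {g: r / best >= 0.8 for g, r in rates.items()}

# Hypothetical shortlisting outcomes: (gender, shortlisted?)
decisions = [
    ("female", True), ("female", False), ("female", False), ("female", False),
    ("male", True), ("male", True), ("male", False), ("male", False),
]
rates = selection_rates(decisions)   # female: 0.25, male: 0.50
print(four_fifths_check(rates))      # female falls below the 80% threshold
```

A check like this is deliberately simple: its value is that a non-technical leader can read the output and ask the right follow-up questions.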

ii. Manage your data like a risk

Since data is the foundation of AI, data errors become legal liabilities.

  • Check data for bias: Before using any data to train a new AI, your team must check it for unfair patterns or imbalances across gender groups.

  • Use less data: Only use the personal data that is absolutely necessary. The less sensitive data your AI sees, the less chance it has to use it to discriminate.
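As one illustration of the "check data for bias" rule, the sketch below reports each gender group's share of a training dataset and flags under-represented groups. The record structure, the `gender` field name, and the 20% threshold are hypothetical assumptions for the example.

```python
from collections import Counter

def representation_report(records, field="gender", threshold=0.2):
    """Report each group's share of the training data and flag groups whose
    share falls below `threshold`. Field name and threshold are illustrative."""
    counts = Counter(r[field] for r in records)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "under_represented": n / total < threshold}
        for group, n in counts.items()
    }

# Hypothetical historical hiring records, heavily skewed toward one group
records = [{"gender": "male"}] * 9 + [{"gender": "female"}] * 1
print(representation_report(records))  # female is flagged as under-represented
```

A report like this doesn't prove a dataset is fair, but it forces the imbalance question to be asked and answered before any model is trained on it.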

3. Lead the change: Fairness in people decisions

Use AI to make your people management fairer, not less fair.

A. Check internal mobility tools

If you use AI to decide who gets promoted, it must be audited to ensure it looks at objective skills and potential, not gender-based proxies like which departments traditionally got promoted in the past. Your goal is to use AI to remove human bias, not introduce new machine bias.

B. Train your leaders

It’s your job to make sure every team touching AI—from data scientists to managers—understands the legal and ethical risks. Mandate training on AI ethics and the requirements of the Human Rights Act. Make fairness a key part of every manager’s performance review.

In short, ethical AI governance is about leading with integrity. It’s the only way to make sure your technology builds a fairer, stronger business, instead of just repeating old mistakes faster.

Next steps for your executive team:

  • Review existing policies: Check your Privacy Act compliance for all data used in AI training, particularly focusing on consent and data minimisation.
  • Formalise a redress channel: Establish a clear, publicised internal process for employees to challenge promotion or performance decisions flagged as potentially AI-biased.
  • Prioritise audit budget: Allocate dedicated budget for independent, external ethical audits of your riskiest systems within the next six months.