Personality Rights Protection: New Zealand

“AI increases the risk of misuse of personal information, both in terms of the information that businesses are feeding into, and receiving from, AI platforms.” — Thomas Huthwaite

By Julius Melnitzer | March 5, 2026

With the advent of artificially generated images and viral content, the ubiquity of deepfakes and dupes has become a growing concern, especially for advocates of personality rights, also known as “publicity rights”.

Susceptible targets, particularly famous people, women and victims of bullying, are increasingly made the subject of pornographic deepfakes and fabricated nude images.

Unfortunately, the synthetic-media technologies that enable this behaviour appear to be developing faster than the law has been able to regulate them in many jurisdictions.

That’s probably no surprise: technology has always evolved faster than legislation or regulation. Existing laws were not designed to apply specifically to deepfakes, which are anonymous, instantaneous artefacts that can be distributed globally and are nearly impossible to trace. Judges are frequently left to apply old principles to new facts, which can lead to uncertain standards and inconsistent outcomes.

And because deepfakes engage multiple platforms, defy borders, and rarely give rise to identifiable defendants, they can create jurisdictional issues that undermine enforcement.

This two-part series of articles will examine the protection afforded to “personality rights” in two IPH jurisdictions, New Zealand and Canada.

This article, Part I, explains the situation in New Zealand.

New Zealand

“Unfortunately, New Zealand has no ‘misappropriation of personality’ tort,” says Thomas Huthwaite, a principal in the Wellington office of AJ Park, an IPH member firm. “While there are several different laws that operate around the fringes of ‘personality’, there is a good argument that we have a real gap in the law on this topic.”

What protections are available?

New Zealand relies on the common law doctrine of “passing off” and the provisions of the Fair Trading Act as its primary protection mechanisms against misappropriation of personality.

“Neither is well designed to protect personality rights,” Huthwaite says.

Indeed, passing off is often referred to as a ‘famous person’ tort. It imposes the heavy evidential burden of showing that one’s likeness carries sufficient goodwill, or is well-known enough, that the public would be misled into thinking the individual whose likeness appears endorses or approves a product, a threshold that is not realistically available to the average person.

For its part, the Fair Trading Act focuses on protecting consumers from misleading representations.

The Harmful Digital Communications Act 2015 (HDCA), intended to deter and mitigate harm caused to individuals by digital communications and provide them with efficient means of redress, may also come into play. Defying court orders made under the legislation is a criminal offence.

“The legislation may be somewhat helpful for victims of online misconduct, although its applicability to personality rights has not been well tested yet,” Huthwaite says.

Guidelines issued by Netsafe, the approved agency under the HDCA, do indicate, however, that the legislation covers “AI-generated or modified images if they cause or are likely to cause harm.”

The HDCA also criminalises posting intimate visual recordings without consent. But there’s a gap here.

“The definition of ‘intimate visual recording’ requires a record of a real-world event, and so is considered to apply only to genuine recordings, not deepfake videos,” Huthwaite says. “That means non-consensual intimate deepfake material will not fall foul of this specific offence provision, although its online posting might still be deemed contrary to other provisions of the Act.”

The Films, Videos and Publications Classification Act 1993 could also prove helpful to deepfake victims. Under the legislation, a publication is objectionable and illegal if it involves sex, horror, crime, cruelty, or violence in a way that is “injurious to the public good”. According to Netsafe, this covers AI-generated or AI-modified images.

How do privacy laws factor in?

Privacy laws may also be a consideration. While they relate primarily to the collection and storage of, and access to, personal information, and do not directly engage personality rights, there are some areas in which these laws become relevant.

“AI increases the risk of misuse of personal information, both in terms of the information that businesses are feeding into, and receiving from, AI platforms,” Huthwaite states. “Businesses should therefore take necessary precautions in the handling of sensitive, personal or otherwise protected information that may be produced or used by GenAI, including the scraping of, or learning from, data that could contain personal information.”

Furthermore, when AI is used to create synthetic biometric or deepfake profiles, it may involve the use of data relating to appearance, voice, fingerprints, and keystroke patterns, among other details.

“Such biometric data may be considered personal information under the Privacy Act,” Huthwaite states.

In 2025, the New Zealand Privacy Commissioner released the Biometric Processing Privacy Code for new biometric processing activities.

“The Code provides businesses with rules relating broadly to the collection, use, disclosure, access to, and retention of biometric information,” Huthwaite says.

Reform in the making?

Critics have decried the lack of legislation to deal with deepfakes specifically.

In response, a private member’s bill, the Deepfake Digital Harm and Exploitation Bill, is currently before Parliament. It proposes to amend the HDCA by expanding the definition of “intimate visual recording” to explicitly include images created, synthesised, or altered to show a person’s likeness without their consent.

Finally, the Privacy Amendment Act 2025, which comes into force on May 1, 2026, expands disclosure obligations when an agency collects personal information.

“The existing principle requires an agency to disclose to an individual and explain the reason for collection, amongst other things, when they collect information from that individual directly; the amendment will require the same when an agency collects information indirectly,” Huthwaite states. “This rule could have implications for AI developers and platforms who rely on data scraping or the transfer of personal information from one agency to another.”

What’s still missing are laws giving individuals clear intellectual property rights – the kind of rights that can be commercialised or tightly controlled, at the owner’s discretion – to their own images and voices. Whether and to what extent such laws will be enacted remains to be seen, but the current trajectory of the technology suggests that some regulation will be required.

Julius Melnitzer is a Toronto-based writer who focuses on law, legal affairs, and the business of law. Follow him on LegalWriter.net or email him at julius@legalwriter.net.
