Learn about the University’s Gen-AI Usage Standard
Using generative AI at work? Read through the University’s Gen-AI Usage Standard for safe and ethical use of AI.
The Standard ensures the safe, ethical, and legal use of Gen-AI tools and services, and is designed to protect both the users and the data they work with. It applies to everyone in the University community who uses Gen-AI tools for any University-related activity.
Did you know?
Whatever you include in your prompts or queries to a Gen-AI tool (be it text, data, or images) potentially exposes that information to future users; essentially, you are training the chatbot to ‘remember’ your data. Consequently, others can use the chatbot to elicit your information—or at least, contextually similar information—through prompts of their own.
How is this relevant to me?
Gen-AI usage is becoming ever more ubiquitous. As educators, you might use Gen-AI tools for various purposes, such as creating content or analysing data.
The Standard helps you use these tools responsibly, protecting the privacy of both your data and your students’ data. It also helps you understand the limitations and potential biases of these tools, leading to more informed and ethical use.
Assess data sensitivity before using it with Gen-AI
The document sets out specific standards for using Gen-AI tools.
For example, it requires you to assess any data according to the University’s data classification before you submit the data to a Gen-AI tool.
The University has four data classifications:
- public
- internal
- sensitive
- restricted
You should only use Gen-AI tools that are appropriate for the type of data you’re working with.
A very simple overview of adhering to the standard in practice might look like this:

When using Gen-AI
- Understand the Generative Artificial Intelligence Usage Standard before you use any generative AI tool and apply it in your work.
- Use Microsoft Copilot, Gemini, and NotebookLM tools when logged in with your University user ID for extra data protection.
- If you use AI-generated outputs in your work, you’re still responsible for their accuracy, tone and content. Review outputs carefully and use your judgement. It’s your name on it, not the AI’s. Do you think it’s right? Are the sources credible? Does it comply with policies, rules and regulations?
- Complete a Privacy Impact Assessment (PIA) before using a Gen-AI tool with anyone’s personal information.
- Complete your privacy module in Hono.
Other considerations
- The Standard also mandates that any content (including text, image, or video) created substantially by a Gen-AI tool be labelled or cited as such.
- It reminds us that before using Gen-AI tools, we should be aware of their limitations and the potential for bias in their results.
- The Standard also specifies that in case of Māori data usage with Gen-AI, users should first consult with the Office of the Pro-Vice Chancellor Māori.
Limitations
- Outdated information—their knowledge base is only current up to a point in time.
- Lack of personalisation—no ability to adapt responses to an individual or a learning style.
- No emotional intelligence—they cannot read emotional cues, so they offer no learning support or motivation.
- Inaccuracy and bias—they are not infallible and are prone to bias in the information they present. They can even generate fake references in an attempt to sound plausible, so it is imperative to cross-check their outputs against reliable sources.
- Lack of context—they don’t remember past interactions when starting a new conversation, so they cannot provide relevant responses over time.
Ethical considerations
- Privacy and data security—potential breach of sensitive information into the public domain.
- Misinformation and manipulation—they can be used to create fake content intended to cause harm.
- Accountability and responsibility—people are ultimately responsible for the content they generate, though this responsibility is often overlooked and is not easily assigned when things go wrong.
- Intellectual property—breaches of copyright are commonplace and raise questions around legality.
Environmental impact
- Like everything we do in the digital realm, these tools are not exempt from impacting our environment. Users should be aware, however, that generative AI tools require a particularly large amount of computational power.
Where can I learn more?
The Generative Artificial Intelligence Usage Standard refers to several other important documents, including the IT Acceptable Use Policy and the Data Classification Standard. These, alongside other listed key documents, provide additional guidelines on the responsible use of IT resources and data at the University.
Please familiarise yourself with the Standard. It is a roadmap for responsibly navigating the complex world of AI and crucial for maintaining ethical standards and legal compliance in our increasingly digital academic environment. By adhering to the Standard you will contribute to a safer, more ethical use of AI tools in our university.
Copilot, Gemini and NotebookLM
The University has acquired enterprise licenses for these three tools to provide a more secure, data-protected environment for our use.
Microsoft Copilot used to be called ‘Bing Chat Enterprise’ and was only available to businesses that paid extra for the corporate version of Microsoft 365. It is based on GPT-4 and DALL-E 3.
Staff and students can access MS Copilot when signed in with their UoA Microsoft account. Accessing these tools via your UoA account gives much greater certainty that your uploaded data—and its privacy, copyright, and IP—does not contribute to improving another business’s Large Language Model (LLM). Copilot can also delete uploaded data after 30 days.
Read the instructions on how to log in to Copilot.
In 2023, Google launched Gemini (formerly Bard) and NotebookLM. In July 2025, the University extended our Google enterprise license so that staff and students have extra data protection when using these tools while logged in with their University Google account. This means that our data is not used to train Google’s LLM, preventing sensitive information from becoming ‘discoverable’ by the public.
Read the instructions on how to log in to Gemini and NotebookLM.
See also…
Page updated 30/07/2025 (added Gemini and NotebookLM and expanded other considerations)