Learn about the University’s Gen-AI Usage Standard
Using generative AI at work? Read through the University’s Gen-AI Usage Standard for safe and ethical use of AI.
The Standard ensures the safe, ethical, and legal use of AI tools and services, and is designed to protect both the users and the data they work with. It applies to everyone in the University community who uses AI tools for any University-related activity.
Did you know?
Whatever you include in your prompts or queries to an AI tool (be it text, data, or images) potentially exposes that information to future users; essentially, you are training the chatbot to ‘remember’ your data. Others can then use the chatbot to elicit your information, or at least contextually similar information, through prompts of their own.
How is this relevant to me?
AI use is becoming increasingly common. As educators, you might use Gen-AI tools for various purposes, such as creating content or analysing data.
The Standard helps you use these tools responsibly and protects the privacy of both your data and your students’ data. It also helps you understand the limitations and potential biases of these tools, leading to more informed and ethical use.
Assess data sensitivity before using it with AI
The document sets out specific standards for using AI tools.
For example, it requires you to assess any data according to the University’s data classification before you submit the data to an AI tool.
The University has four data classifications:
- public
- internal
- sensitive
- restricted
You should only use AI tools that are appropriate for the type of data you’re working with.
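The decision above can be sketched in code. This is a purely illustrative Python sketch: the four classification names come from the Standard, but the mapping from classification to permitted tool type below is an assumption for demonstration, not official University policy.

```python
# Hypothetical helper illustrating the "assess data before submitting" step.
# The classification tiers are from the Standard; the rules below are
# illustrative assumptions only, not the University's actual policy.
CLASSIFICATIONS = {"public", "internal", "sensitive", "restricted"}

def may_submit(classification: str, tool_is_enterprise: bool) -> bool:
    """Return True if data of this classification may (illustratively) be
    submitted: public data to any tool, internal data only to a
    University-licensed (enterprise) tool, and sensitive or restricted
    data never without a case-by-case assessment, so default to False."""
    classification = classification.lower()
    if classification not in CLASSIFICATIONS:
        raise ValueError(f"Unknown classification: {classification}")
    if classification == "public":
        return True
    if classification == "internal":
        return tool_is_enterprise
    return False  # sensitive/restricted: consult the Standard first

print(may_submit("public", tool_is_enterprise=False))    # True
print(may_submit("internal", tool_is_enterprise=False))  # False
```

In practice the real assessment is a human judgement guided by the Data Classification Standard; the sketch only shows that the check happens before data ever reaches a tool.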
In practice, adhering to the Standard might look like this:
When using Gen-AI
- Understand the Generative Artificial Intelligence Usage Standard before you use any AI tool and apply it in your work.
- Use Microsoft Copilot, Gemini, and NotebookLM tools when logged in with your University user ID for extra data protection.
- If you use AI-generated outputs in your work then you’re still responsible for their accuracy, tone and content. Review outputs carefully and use your judgement. It’s your name on it, not the AI’s. Do you think it’s right? Are the sources credible? Does it comply with policies, rules and regulations?
- Complete a Privacy Impact Assessment (PIA) before using an AI tool with anyone’s personal information.
- Complete your privacy module in Hono.
Other considerations
- The Standard also requires that any content (including text, images, or video) created substantially by an AI tool be labelled or cited as such.
- It reminds us that before using AI tools, we should be aware of their limitations and the potential for bias in their results. The affordances and limitations of AI in academia are discussed in The use of generative AI tools in coursework.
- The Standard also specifies that, before using Māori data with Gen-AI tools, users should first consult the Office of the Pro-Vice Chancellor Māori.
Where can I learn more?
The Generative Artificial Intelligence Usage Standard refers to several other important documents, including the IT Acceptable Use Policy and the Data Classification Standard. These, alongside the other listed key documents, provide additional guidance on the responsible use of IT resources and data at the University.
Please familiarise yourself with the Standard. It is a roadmap for responsibly navigating the complex world of AI and crucial for maintaining ethical standards and legal compliance in our increasingly digital academic environment. By adhering to the Standard you will contribute to a safer, more ethical use of AI tools in our university.
Copilot, Gemini and NotebookLM
The University has acquired enterprise licenses for these three tools to provide a more secure, data-protected environment for our use.
Microsoft Copilot was formerly called ‘Bing Chat Enterprise’ and was only available to businesses that paid extra for the corporate version of Microsoft 365. It is based on GPT-4 and DALL-E 3.
Staff and students can access MS Copilot when signed in with their UoA Microsoft account. Accessing these tools via your UoA account gives far greater certainty that your uploaded data (including private, copyrighted, and IP-protected material) does not contribute to the improvement of another business’s Large Language Model (LLM). The service can also delete uploaded data after 30 days.
Read the instructions on how to log in to Copilot.
In 2023, Google launched Gemini (formerly Bard) and NotebookLM. In July 2025 the University extended our Google enterprise license so that staff and students have extra data protection when using these tools while logged in with their University Google account. This means that our data is not used to train Google’s LLMs, preventing sensitive information from becoming ‘discoverable’ by the public.
Read the instructions on how to log in to Gemini and NotebookLM.
See also…
Page updated 10/11/2025 (moved ‘affordances and limitations’ to another page)
