Learn about the University’s Gen-AI Usage Standard
Here we offer an overview of the Generative Artificial Intelligence Usage Standard, which can be read in full on the Staff Intranet.
The Standard ensures the safe, ethical, and legal use of Gen-AI tools and services, and is designed to protect both the users and the data they work with. It applies to everyone in the University community who uses Gen-AI tools for any university-related activity.
Did you know?
Whatever you include in your prompts or queries to a Gen-AI tool (text, data, or images) potentially exposes that information to future users; essentially, you are training the chatbot to ‘remember’ your data. Consequently, others can use the chatbot to elicit your information, or at least contextually similar information, through prompts of their own.
How is this relevant to me?
Gen-AI use is becoming increasingly widespread. As educators, you might use Gen-AI tools for various purposes, such as creating content or analysing data.
The Standard helps you use these tools responsibly and protect the privacy of both your own data and your students’ data. It also helps you understand the limitations and potential biases of these tools, leading to more informed and ethical use.
Assess data sensitivity before using it with Gen-AI
The document sets out specific standards for using Gen-AI tools.
For example, it requires you to assess any data according to the University’s data classification before you submit the data to a Gen-AI tool.
The University has four data classifications:
- public
- internal
- sensitive
- restricted
You should only use Gen-AI tools that are appropriate for the type of data you’re working with.
A simple overview of adhering to the Standard in practice might look like this:
| My input contains … | … so I can use … | … because |
| --- | --- | --- |
| Restricted data | Nothing! | Restricted information must be held at the University. AI chat services are cloud hosted, so they are not held at the University. |
| Internal or sensitive data | Copilot, when logged in with my University of Auckland account.* Look for the privacy symbol; if you don’t see this symbol on the page, your login was unsuccessful. Read the instructions on how to log in to Copilot. | We need this information to stay private. Once you have finished your session, your data will be erased and not used to train the AI. |
| Public data | Any AI chat tool. | The University doesn’t mind if AIs know this information or share it with others. It is freely available anyway. |
| My own personal data | Any AI chat tool I feel happy sharing my information with. | It is my data, so it is my choice who I share it with, and I take the risk. |
Or, put simply, you can use Copilot with Commercial Data Protection for all input except restricted data.
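If it helps to see the decision logic another way, here is a minimal, purely illustrative sketch in Python that simply re-encodes the table above. The function name, the dictionary, and the cautious fall-back for unknown classifications are assumptions made for this example; they are not part of the Standard itself.

```python
# Purely illustrative: re-encodes the decision table above.
# The function name and the fall-back behaviour are assumptions made
# for this example and are not part of the Standard itself.

PERMITTED_TOOLS = {
    "restricted": "Nothing! Restricted information must be held at the University.",
    "internal": "Copilot, logged in with your University of Auckland account.",
    "sensitive": "Copilot, logged in with your University of Auckland account.",
    "public": "Any AI chat tool.",
    "personal": "Any AI chat tool you are happy to share your own information with.",
}


def permitted_gen_ai_tool(classification: str) -> str:
    """Return the Gen-AI tool(s) the table permits for a data classification."""
    # Cautious default (assumed here, not specified by the Standard): if the
    # classification is unknown, treat the data as restricted until you have
    # checked the Data Classification Standard.
    return PERMITTED_TOOLS.get(classification.lower(), PERMITTED_TOOLS["restricted"])


print(permitted_gen_ai_tool("sensitive"))
# -> Copilot, logged in with your University of Auckland account.
```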
When using Gen-AI
- Understand the Generative Artificial Intelligence Usage Standard before you use any generative AI tool, and apply the Standard in your work.
- Use Copilot (logged in with your University User ID) for Generative AI queries.* This will ensure that University work remains private.
- If you use AI-generated outputs in your work then you’re still responsible for their accuracy, tone and content. Review outputs carefully and use your judgement. It’s your name on it, not the AI’s. Do you think it’s right? Are the sources credible? Does it comply with policies, rules and regulations?
- Complete a Privacy Impact Assessment (PIA) before using a Gen-AI tool with anyone’s personal information.
- Complete your privacy module in CareerTools.
Other considerations
The Standard also mandates that any content (including text, images, or video) created substantially by a Gen-AI tool be labelled as such.
It reminds us that before using Gen-AI tools, we should be aware of their limitations and the potential for bias in their results.
In addition, the Standard specifies that users should consult the Office of the Pro-Vice Chancellor Māori before using Māori data with Gen-AI tools.
Where can I learn more?
The Generative Artificial Intelligence Usage Standard refers to several other important documents, including the IT Acceptable Use Policy and the Data Classification Standard. These, alongside the other key documents it lists, provide additional guidelines on the responsible use of IT resources and data at the University.
Please familiarise yourself with the Standard. It is a roadmap for responsibly navigating the complex world of AI and crucial for maintaining ethical standards and legal compliance in our increasingly digital academic environment. By adhering to the Standard you will contribute to a safer, more ethical use of AI tools in our university.
* What is Copilot?
Copilot used to be called ‘Bing Chat Enterprise’ and was only available to businesses that paid extra for the corporate version of Microsoft 365. It is based on GPT-4 and DALL-E 3.
Staff and students can access Copilot when signed in with their UoA Microsoft account. Accessing the tool via your UoA account gives much greater certainty that your uploaded data, along with any associated privacy, copyright, or IP, does not contribute to improving another business’s Large Language Model (LLM). Uploaded data can also be deleted after 30 days.
You will notice that your interactions are capped at 30 responses per day.
- Read the instructions on how to log in to Copilot.
- Read our article called MS Copilot for staff and students: Essential things to know.
See also
Read the full Standard
The full Generative Artificial Intelligence Usage Standard is available on the Staff Intranet.
The use of generative AI tools in coursework
Information for teachers and examples for adapting teaching.