TeachWell Digital
Enabling excellence in learning and teaching

The use of generative AI tools in coursework

How do we talk to students about engaging critically and appropriately with AI tools in coursework?

Students will need AI skills for their future employment. While we can only surmise the long-term impact AI will have on industry, it has been suggested that AI may not replace a role, but a person with AI skills may replace one without them. However, learning only how to generate output from AI tools, without also understanding a subject and its application, or acquiring the critical thinking skills that underpin learning, does not equip our graduates to succeed in the workplace. It is therefore important to have an open discussion with your students about the affordances and limitations of AI.

Discussion prompts might include:

1. Supporting learning vs cognitive outsourcing

When used effectively, AI tools can enhance and support learning. Conversely, they can harm learning when used to replace tasks that were designed to develop understanding. Students should be guided on the appropriate use of AI and encouraged to be critical of its output. They should understand the purpose of learning and be discouraged from offloading intellectual work to AI.

2. Learning vs productivity

We have no effective ways to detect or prevent the use of AI tools in non-secure environments. Instead of focusing efforts on detection and prosecution, we should guide students on the implications of AI use: when these tools can legitimately increase productivity without being detrimental to learning, for example by automating tasks that students can already do.

3. Behavioural norms

We can encourage the behaviour we believe is most appropriate for the task, but we cannot legislate behavioural norms in conditions where we can neither detect nor enforce that behaviour. Given this framing, the two-lane approach to assessment is the most appropriate response.

4. Limitations

Before using AI tools, we should be aware of their limitations and the potential for bias in their results:

  • Outdated information: their knowledge base is only current up to a point in time.
  • Lacking personalisation: no ability to adapt responses to an individual's needs or preferred way of learning.
  • No emotional intelligence: an inability to understand emotional cues means they cannot provide learning support or motivation.
  • Inaccuracy and bias: they are not infallible and are prone to bias. They can even generate fake references in an attempt to sound plausible, so it is imperative for users to cross-verify output against reliable sources.
  • Lack of context: they do not remember past interactions when starting a new conversation, so they cannot provide relevant responses over time.

5. Ethical considerations

Users should be mindful of an AI tool's potential to cause harm:

  • Privacy and data security: sensitive information entered into these tools may be exposed or enter the public domain.
  • Misinformation and manipulation: they can be used to create fake content.
  • Accountability and responsibility: people are ultimately responsible for the content they generate, though this is often not considered and can be hard to determine when things go wrong.
  • Intellectual property: breaches of copyright are commonplace and raise questions of legality.

6. Environmental concerns

Like everything we do in the digital realm, AI tools have an environmental impact; users should be aware that training and running these models requires a large amount of computational power and energy.

Notes and guidance


  • Consider how AI is being used in the workplace within your discipline or industry, and how you will incorporate and teach this in your course or programme.
  • Clearly communicate to students your decisions and expectations about how AI tools may be used during your course. Use the templates on the academic honesty declaration page to create your own instructions to students on the use of generative AI.
  • Familiarise yourself with the two-lane approach to assessment: the University's recommendations designed to help teachers effectively manage assessments.
  • Read the University's guidelines for permitted use of software in assessment activities.
  • Consider the purpose of assessment, the ‘why’. Effective assessment design in the age of AI still focuses on principles of good assessment design.
  • Consider a ‘whole of programme’ approach: an arrangement of different assessment methods deliberately designed across the entire curriculum.
  • Decide what matters most in your discipline and what learning and skills students should graduate with. Check that the learning outcomes align with these skills, and use this to determine which assessments should be controlled and which could instead teach collaboration and human skills.
  • Consider assessments that value and mark the learning process rather than the end product. Encourage feedback literacy in students.1 Feedback literacy is based on social constructivist theory and focuses on students' learning and sense-making.
  • Danny Liu and Adam Bridgeman (University of Sydney) make assessment suggestions intended to have longevity even as AI advances; their article asks the reader to consider the humanness of teaching and learning.
  • Watch this three-part video series from Danny Liu and Benjamin Miller (University of Sydney), in which they introduce options for embracing AI when setting written and multimodal assessments.

Page updated 10/11/2025 (added 4, 5, & 6)

  1. Carless, D., & Boud, D. (2018). The development of student feedback literacy: Enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. https://doi.org/10.1080/02602938.2018.1463354