Two-lane approach to assessment
The two-lane approach offers a way of thinking about assessment and generative AI.
The two-lane approach adopted at the University of Auckland is designed to help teachers effectively manage assessments in response to the widespread availability and use of AI tools like MS Copilot, ChatGPT, Google Gemini and NotebookLM, to name a few. It helps us be more deliberate about the conditions under which AI will be used in assessment tasks, and offers clarity to students. It’s important to recognise that both lanes work in tandem. The two-lane approach supports good assessment practice by helping you:
- Strategically assess core knowledge and skills where authenticity of student work is essential
- Design authentic, varied tasks that require critical and responsible AI use
- Ensure assessments are inclusive, transparent, and adaptable to evolving AI capabilities
Always check with your Associate Dean Learning and Teaching or Programme Director as to what this means in your faculty.
Lanes 1 and 2 explained
Assessment policy and principles
There are a number of current policy settings that align with the two-lane approach. These include a greater emphasis on taking a programme approach to assessment, moving away from course-by-course assessment design, making use of contemporary digital modalities and increasing the amount of formative assessment.
1. Authentic assessment
2. Emphasis on formative assessment
3. Encouraging the use of digital modalities
The policy encourages assessment design and delivery that takes account of new technologies, including digital modalities for all stages of the assessment process. It emphasises the role of assessment in helping students build the ability to work ethically with technologies such as generative AI.
4. Whole of programme design
The policy advocates for a ‘whole-of-programme’ approach, ensuring assessment tasks are coherent at a programme level and aligned to the Graduate Profile. This approach has the potential to support an overall reduction in assessments and emphasises validation of students’ attainment at critical points across a programme. Where this approach is not practical, we encourage colleagues to explore opportunities within a more contained slice, for example: stage 1, a common core, or a major.
Notes and guidance
Academic Heads play an important role in ensuring staff and students in their departments/schools build capability with AI and can use it ethically and effectively. Programme leaders are encouraged to revise graduate capabilities for their major with consideration of AI. Course directors are encouraged to embed AI across courses in ways that help students to use AI tools ethically, critically and effectively. Staff are encouraged to develop their own capability with AI tools. It should be noted that UoA is committed to a ‘whole-of-programme’ approach to assessing capabilities, so a broader view of assessment consistent with the Assessment Policy and Procedures should be adopted. The following documents and notes may clarify the position adopted by the University.
Two-lane approach at University of Sydney
Our approach is informed by the position of the University of Sydney. A lot has been written about this approach, but the following is a good introduction:
University of Auckland position
- “We will adopt and embrace Artificial Intelligence confidently and ethically in ways that maximise value and benefit for our people, our institution, and our world.” — University Executive Committee, October 2024
The AI in Education Action Plan, endorsed in June 2025, sets the wider framework for this work. Its five action areas—policy, guidance, tools, professional learning, and research—are where the two-lane approach to assessment sits. As the Plan notes:
- “… the University seeks to improve student AI literacy, and provide access to a dynamic set of ‘AI’ skills…including …the ability to operate with AI in a safe, ethical and effective way.” (2.0)
- “… the University seeks to ensure that students in every discipline have the opportunity to learn about AI, and to be ethical users and creators of AI. As a strategic principle, AI is to be integrated into the curriculum.” (3.0)
Agency
In thinking about generative AI, we take the position that it has no agency, so the user who is prompting the AI tool is to be treated as the author. They are responsible for the work generated by the model. It is important that authors are aware of the limitations of AI and treat the output critically since they are responsible for that output.
Supporting learning vs cognitive outsourcing
While our graduates will need AI skills for their future employment, we must remember that these tools can also replace tasks that help develop understanding. Therefore it is prudent to discuss with students the potential risk to their education should they simply outsource their coursework to AI. Conversation starters on the affordances and limitations of AI in academia are provided at:
FAQs
Why not just call Lane 2 ‘open’ or ‘unsecured’?
Lane 2 assessments are not simply “unsecured” or “open-book” tasks. They are designed to align with the five action areas in the AI Education Action Plan, supporting students to develop the skills needed in a world where AI is ubiquitous. Lane 2 assessments are authentic, often mirroring real-world or disciplinary contexts where AI tools are already in use. Just as academic staff may use AI tools in teaching, research, and administration, students are expected to use them thoughtfully in Lane 2 assessments.
A better term for assessments that are not fully secured but not yet fully embracing AI might be “towards Lane 2”. In contrast, Lane 1 assessments (assessment of learning) are typically more controlled and may occur in less authentic settings (e.g. exam halls). The rise of generative AI makes securing these assessments more challenging and potentially more artificial.
Is Lane 1 just 'no AI' and Lane 2 'full AI'?
Not exactly. The distinction is not about the presence or absence of AI, but about assessment conditions and purpose.
- Lane 1 assessments are controlled environments used to verify attainment of learning outcomes. These may or may not involve AI, depending on the discipline and task.
- Lane 2 assessments are more open and formative, supporting students to learn with AI tools where appropriate.
For example, in architecture, generative AI is already used in industry for ideation, so it follows that architecture programmes should help students engage with these tools in Lane 2 and align learning outcomes to reflect this. However, a Lane 1 assessment might still involve a live, authentic task (e.g., a mock client meeting) without AI support.
Is Lane 1 just for tests and exams?
Not exactly. Lane 1 refers to controlled conditions, not to a specific assessment type. A broad range of assessment tasks may be appropriate under Lane 1 conditions, depending on the discipline, the purpose of the assessment, the learning outcomes to be evidenced, and the practicality of administration — e.g., viva voces, contemporaneous in-class assessments, and demonstrations of skills or performance.
Lane 2 provides an opportunity for colleagues to continue to think differently about assessment. We have seen innovative and creative responses in assessment design across the University: for example, authentic assessment that mirrors professional practice, and varied tasks that adapt to evolving AI capabilities and require critical and responsible AI use.
Who decides where secured assessments (Lane 1) should be placed?
Strategic placement of Lane 1 assessments should be coordinated at the faculty level, with input from:
- Programme directors who oversee curriculum coherence
- Associate Deans Learning and Teaching who ensure alignment with faculty assessment policy
This approach supports consistency, reduces duplication, and helps manage workload across programmes.
What are we really assessing in Lane 2?
Lane 2 assessments are primarily assessment for and as learning. The focus is on how students engage with tools, apply disciplinary knowledge, and develop evaluative judgement.
We are not assessing how “good” the AI is, but how well students:
- Select appropriate tools
- Use them effectively
- Critically evaluate outputs
This aligns with the University’s Graduate Profile, which emphasises critical thinking, digital capability, and ethical judgement.
How does Lane 2 ensure students still “use their brains”?
Using AI well requires cognitive effort. Students must:
- Understand the task
- Choose the right tools
- Interpret and refine outputs
- Justify their decisions
These are not passive processes. They demand disciplinary knowledge, critical thinking, and ethical awareness, all of which are tested in Lane 1 assessments.
Put simply: if students bypass learning in Lane 2, they will struggle in Lane 1.
Can course directors apply limitations to Lane 2 assessments?
For example: can course directors specify that AI use is prohibited, or that only certain AI tools are permitted?
As currently conceived, no. It is not possible to restrict AI use in assessments that are not undertaken in controlled environments. That is the point of Lane 2; it requires an acceptance that AI may be part of the assessment artefact.
From Liu & Bridgeman, University of Sydney — “It is also not possible to reliably or equitably detect that it has been used, either because a student has access to the latest technology, because they know how to change the raw output, or can afford to pay someone to do this for them. Any unenforceable restriction damages assessment validity so a scale or traffic light approach of telling students that they can only use AI for certain purposes, or use certain AI tools, is untenable. A clearer and more realistic approach is to consider the use of AI in Lane 2 assessments as a menu, where students can pick anything and it is our role as educators to guide them which options are more delectable (better for learning).”
Teachers should keep their messaging simple and clear, so that students are not left with a complex or vague picture of what is expected.
To what extent is academic integrity still applicable in a Lane 2 assessment?
With respect to academic integrity and AI use, this year the University of Sydney shifted its policy setting to assume the use of AI in ‘open’ or uncontrolled assessments — meaning that course coordinators will not be able to prohibit its use in such assessments. Note too that part of the appeal of the two-lane approach is its potential to shift our focus away from policing student misuse of AI towards enabling students to be ethical, discerning and productive users of these tools. It also responds to wider student concern about being falsely accused of inappropriate AI use. This is not to say that other aspects of academic integrity behaviour and breaches are no longer relevant, only that the act of using AI is, in and of itself, not academic misconduct.
What about students who cheat? Can AI detection software help?
The two-lane approach is built on the principle that using AI appropriately is not cheating.
Detection tools are improving, but they will always lag behind the latest generative models. They are also more likely to catch students who use AI poorly or who lack access to premium tools, raising equity concerns.
Instead of relying on detection, UoA’s approach focuses on:
- Assessment design that integrates AI use transparently
- Clear expectations for students
- Strategic use of Lane 1 assessments to assure learning outcomes
Do you have a question of your own?
Drop us a line and help us grow this series of FAQs.
Examples of Lane 2
Insights: Scaffolding academic writing through AI-powered formative feedback
Jet Tonogbanua uses an AI agent with his students to provide instant writing feedback, helping them to build confidence using AI (and in their own abilities), reflect on feedback, and take ownership of their writing.
Teaching Tip: Use Gen-AI to deepen learning and spark creativity
Dr Courtney Ruha (School of Chemical Sciences) helps students connect chemistry to real life through creative, collaborative group projects using generative AI.
Insights: Designing with AI – reimagining architectural education
Can Gen-AI tools enhance architectural students' design process and critical thinking skills? Read on to find out.
Support
- AI literacy self-paced sessions and facilitated workshops (as available) via the TeachWell Professional Learning Series.
- For tailored advice, book some time with a learning designer through the TeachWell Consult Service.
Page updated 05/11/2025 (additional FAQs)
- Notes and guidance provided courtesy of Professor Andrew Luxton-Reilly, Associate Dean Learning and Teaching, Faculty of Science. ↩