Free GenAI Risk Assessment Tool
Quickly generate a department-wide snapshot of academic integrity risk based on assessment structure and design using my free tool.
Sasha Nikolic, 04 June 2025
The academic integrity risks posed by Generative Artificial Intelligence (GenAI) are now widely recognised, and educational institutions are responding in various ways to manage them. These efforts range from minor assessment adjustments that enhance security to broader, systemic policy reforms.
I was the lead author of the publication Beyond Assessment Security: A Critical Policy Analysis of Four Alternative Strategies to Uphold Academic Integrity and Adopt the GenAI Transformation of Teaching and Learning for an Accredited Engineering Degree. This paper explores and critically evaluates a range of policy options to address the challenges introduced by GenAI.
One strategy proposed is to undertake a GenAI risk audit of the assessments used across units. This gives departments and faculties a structured way to quantify assessment integrity risk, reflect on current practices, and prioritise redesign efforts for the most vulnerable units, allowing a more manageable and phased transition toward integrating GenAI into teaching and learning. An example of its use:

I developed the audit tool and applied it within my school, where it was used to measure the scale of risk, prioritise subjects, and support the creation of a long-term redesign strategy. You can use Table 10 in the linked study to help consider the risks and assessment security options across different assessment types.
The tool is available for FREE
If you find it useful, I kindly ask that you let me know, as this helps me track its impact.
If you have any suggestions or would like a modification, please reach out to me.
While the tool primarily helps quantify risk, it also lets users visualise how changing a unit's passing grade can reduce the risk arising from the ratio of supervised to unsupervised assessments. I encourage experimenting with the simulation to explore these dynamics.
Within the published article, the potential to raise the passing threshold for a subject/unit from 50% to 60% is discussed. This creates an opportunity to extend flexibility in the ratio between supervised and unsupervised assessments. To encourage this discussion at your university, I present the following table:

The table assumes that a student could gain 75% of the marks for any unsupervised assessment task. With this in mind, under a 50/50 supervised/unsupervised split and a unit pass mark of 50%, the unsupervised component alone contributes 37.5 of the 100 available marks, so a student needs only 12.5 of the 50 supervised marks, i.e. 25%, in the secured assessment component. When the unit pass mark is increased to 60%, that requirement rises to 22.5 of 50 marks, or 45%, a more respectable figure. I encourage institutions to consider such an option for their strategy. Feel free to simulate such scenarios via my free tool.
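The arithmetic behind this scenario can be sketched in a few lines of Python. This is a hypothetical helper, not part of the published tool; it simply assumes a student earns a fixed fraction (75% by default) of the unsupervised component and computes the minimum score needed on the supervised component to pass.

```python
def required_supervised_score(pass_mark, supervised_weight, unsupervised_score=0.75):
    """Minimum fraction needed on the supervised component to pass the unit.

    All arguments are fractions in [0, 1]:
    pass_mark          - the unit's overall passing threshold (e.g. 0.50)
    supervised_weight  - share of total marks from supervised assessment
    unsupervised_score - fraction of unsupervised marks the student earns
    """
    unsupervised_weight = 1.0 - supervised_weight
    marks_from_unsupervised = unsupervised_weight * unsupervised_score
    needed = (pass_mark - marks_from_unsupervised) / supervised_weight
    # Clamp to the feasible range [0, 1].
    return max(0.0, min(1.0, needed))

# 50/50 split, 50% pass mark: 25% needed on the supervised component
print(required_supervised_score(0.50, 0.50))  # 0.25
# Same split, pass mark raised to 60%: 45% needed
print(required_supervised_score(0.60, 0.50))  # 0.45
```

Varying `supervised_weight` in this sketch mirrors the tool's simulation of different supervised/unsupervised ratios.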