Use of AI in Assessment Policy

1. Purpose

This policy sets out RADD Training Limited's (“RADD Training”) requirements for the ethical and appropriate use of artificial intelligence (AI) tools in learning and assessment.

The aims are to:

  • protect the integrity of assessment and certification
  • ensure fairness and consistency for all learners
  • promote independent learning and authentic evidence of competence
  • support responsible use of AI where it enhances learning without replacing the learner's own work

Where an awarding organisation has specific rules on AI use, those rules take precedence.

2. Scope

This policy applies to:

  • all learners/delegates
  • all trainers, assessors, invigilators, IQAs/IVs and staff
  • all assessment types, including assignments, projects, presentations, professional discussions, exams/tests and online assessments
  • all delivery modes (classroom, on-site, blended, distance and online)

3. Key principles

  • Learner accountability: learners remain fully responsible for the content they submit.
  • Transparency: any AI use must be declared.
  • Authenticity: evidence must reflect the learner's own knowledge, skills and understanding.
  • No unfair advantage: AI must not be used to gain an advantage over other learners.
  • Confidentiality: learners must not input confidential, personal or restricted assessment materials into AI tools.

4. Definitions

4.1 AI tools

AI tools include (but are not limited to) AI writing assistants, chatbots, summarisation tools, code generators, image generators and problem-solving apps.

4.2 AI-assisted vs AI-generated work

  • AI-assisted work: the learner creates the work and uses AI for limited support (e.g., grammar checking or structure suggestions).
  • AI-generated work: AI produces substantial content (e.g., full answers/sections) that the learner then submits.

5. Transparency and disclosure requirements

Learners must disclose any AI tools used in the preparation of assessment work.

5.1 What to disclose

The declaration must include:

  • the name of the AI tool(s) used
  • what the tool was used for (e.g., brainstorming, grammar checking)
  • which parts of the work were affected

5.2 Standard disclosure statement

Learners should include a statement such as:

“This work was developed with the assistance of [AI tool], used for [specific task(s)]. The final content and evidence submitted are my own.”

Where no AI was used, learners may be asked to confirm:

“No AI tools were used in the preparation of this submission.”

6. Permitted uses (unless the assessment brief states otherwise)

AI may be used for limited support activities that do not replace the learner's own thinking, such as:

  • spelling and grammar checking
  • improving readability and plain-English editing (without changing meaning)
  • translation support (where permitted)
  • brainstorming and idea generation
  • creating study aids (e.g., practice questions) for revision (not for submission)

7. Prohibited uses (unless explicitly authorised in writing)

AI must not be used to:

  • generate complete answers to assessment questions
  • produce full essays, reports, reflective accounts or assignment sections for submission
  • automatically complete exams, quizzes or closed-book tests
  • rewrite learner evidence in a way that misrepresents the learner's competence or understanding
  • fabricate references, legislation, sources, data, incidents or workplace evidence
  • create or alter evidence (including photos, logs, witness statements or documents) to mislead
  • share or upload confidential assessment materials, test questions, learner data or employer/client information into an AI tool

8. Assessment-specific instructions

Trainers/Assessors will communicate acceptable and unacceptable AI use for each assessment. This may be done by:

  • referencing this policy in the assessment brief, and/or
  • providing assessment-specific instructions (e.g., AI permitted for proofreading only)

If the assessment brief conflicts with this policy, the stricter requirement applies.

9. How RADD Training may check authenticity

To protect assessment integrity, RADD Training may use a range of methods, including:

  • questioning learners on submitted work (professional discussion)
  • comparing writing style and previous submissions
  • reviewing version history/drafts where available
  • using plagiarism-detection and/or AI-detection tools as indicators (not as sole proof)
  • applying internal quality assurance sampling

10. Suspected misuse, malpractice and consequences

Misuse of AI that undermines assessment integrity may be treated as malpractice and managed under RADD Training's Malpractice & Maladministration Policy and awarding organisation procedures.

Depending on severity and awarding organisation rules, outcomes may include:

  • requirement to revise and resubmit
  • mark reduction, assessment failure or disqualification
  • removal from the course/qualification
  • reporting to the awarding organisation

RADD Training will keep records of concerns, investigations and outcomes.

11. Support and guidance

RADD Training will provide guidance, where requested, on ethical AI use in the context of its courses.

Learners who are unsure whether a particular use is permitted must ask their Trainer/Assessor before submitting work.

12. Appeals

The procedure for appealing assessment decisions is available on the RADD Training website (Learner Appeals Policy).

13. Review and document control

Approved by: Chrisy McLeod – Division Director
Version: v2
Issue date: 23/02/2026
Last review: 23/02/2026
Next review: 22/02/2027