A Safety-First AI Workflow for Instructional Design

A Safety-First Framework for Using AI in Your Educational Project Workflow
Everyone in Learning and Development (L&D) is under pressure to produce more content, faster. Artificial Intelligence looks like the obvious answer: it promises to cut development time in half and quickly generate scenarios, quiz questions, and summaries. But there is a catch. Large Language Models (LLMs) like ChatGPT or Claude are not knowledge engines; they are prediction engines. They don’t care about the truth; they care about what the next word is most likely to be. In a creative writing class, that’s a feature. In compliance training, safety protocols, or technical onboarding, accuracy is mandatory.
When an AI “hallucinates” (that is, confidently states a falsehood), it creates chaos. If a learner follows an invented safety procedure, people get hurt. If a manager follows a fabricated HR policy, the company gets sued. This guide details a safety-first workflow: treat the AI not as an expert, but as an untrustworthy intern whose work must be checked line by line.
Why AI Helps, And Where It Breaks In L&D
AI is excellent at structural tasks. It can take messy text and extract the key points. It can rewrite passive voice into active voice. It can generate ten role-play ideas in seconds. But it fails when you ask it to be accurate without guardrails.
Failure Modes
- Phantom citations: You ask for research on adult learning. The AI gives you fully formatted APA citations for studies that do not exist.
- Context drift: You upload the 2019 policy. The AI uses it, ignoring the 2023 update you mentioned in the prompt, because the 2019 text was longer.
- The “average” trap: The AI is trained on the general internet. Ask it for a leadership course and it gives you generic advice that may conflict with your specific company culture.
- Bias amplification: Unless checked, AI often defaults to gendered language (e.g., doctors are “he,” assistants are “she”) based on historical training data.
“Safe AI” Workflow in 6 Steps
To use AI safely, you must change the way you prompt. Never ask open-ended questions like “Write a lesson on fire safety.” That’s gambling. Instead, use the “source pack” method.
Step 1: Define the Purpose and the “Don’t Do” List
Start with the end in mind. What is the specific objective of this build?
- Audience: Senior Sales Managers.
- Goal: Apply the new “Consultation Closure” matrix.
- Constraint: Do not use the generic marketing advice available online. Use our internal terminology only.
Step 2: Create a Source Pack (The Boundary)
This is the critical safety step. Collect the PDFs, transcripts, and slide decks that represent your ground truth.
- Clean the data: When uploading a transcript, remove the small talk first.
- Prompt strategy: Tell the AI explicitly: “Answer using ONLY the source text provided. If the answer is not in the text, say ‘I don’t know.’ Do not use outside information.”
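The source-pack pattern is easy to automate. The sketch below (hypothetical policy text and wording; the actual model call is omitted because vendor APIs differ) shows one way to assemble the guardrail, the source, and the question into a single prompt:

```python
# A minimal sketch of the "source pack" prompting pattern.
# SOURCE_PACK stands in for your cleaned transcripts or policy text;
# the guardrail wording is illustrative, not a vendor requirement.

SOURCE_PACK = """Policy 4.2 (2023): Employees may accept gifts
valued under $50 from vendors, once per calendar year."""

GUARDRAIL = (
    "Answer using ONLY the source text provided. "
    "If the answer is not in the text, say 'I don't know.' "
    "Do not use outside information."
)

def build_prompt(question: str, source: str = SOURCE_PACK) -> str:
    """Combine the guardrail, the source pack, and the question."""
    return (
        f"{GUARDRAIL}\n\n"
        f"--- SOURCE ---\n{source}\n--- END SOURCE ---\n\n"
        f"Question: {question}"
    )

print(build_prompt("What is the gift limit?"))
```

Keeping the guardrail in one constant means every prompt in the project enforces the same boundary, rather than relying on each author to remember it.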
Step 3: Generate Drafts, Not Final Content
Use the tool to build the skeleton, not the muscle.
- Ask for an outline based on the source pack.
- Ask for three different analogies to explain a complex term from the text.
- Ask it to condense the 10-page technical manual into a one-page job aid.
Step 4: Fact-Check Claims and Evidence
Before you fix the flow, you have to fix the facts.
- Traceability: If the AI makes a claim, can you find a sentence in your source pack that supports it?
- Numbers and dates: AI is notorious for mangling statistics and timelines. Check every number manually.
- Links: Click every URL. AI often produces dead or fake links.
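Link checking is tedious by hand, so it is worth scripting. This is a rough sketch (the sample draft text is made up): it pulls URLs out of a draft with a regex and, for a live check, would send a HEAD request to each one, flagging anything that errors or returns a 4xx/5xx status for manual review.

```python
# Extract URLs from AI-generated text and (optionally) verify them.
import re
import urllib.request
import urllib.error

URL_RE = re.compile(r"https?://[^\s)>\"']+")

def extract_urls(text: str) -> list[str]:
    """Find URLs, stripping trailing sentence punctuation."""
    return [u.rstrip(".,;:") for u in URL_RE.findall(text)]

def check_url(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL responds with a non-error status."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError):
        return False

draft = "See https://example.com/policy and https://example.com/404-page."
for url in extract_urls(draft):
    print(url)  # run check_url(url) here to verify each link live
```

Automated checks catch dead links, but not plausible-looking links that point to the wrong page, so you still need to click through and confirm relevance.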
Step 5: QA Instructional Design
Once the facts are clear, look at the science of learning.
- Cognitive burden: Has AI abandoned the wall of text? Break up.
- Bloom’s Taxonomy: Are the quizzes just testing the memory (low level), or are they testing the application (high level)? AI automatically adapts to memory questions because they are easy to generate.
- Tone: Does it sound like a robot? Add human warmth and compassion.
Step 6: Pilot and Iterate
Don’t launch company-wide. Send the module to five users. Watch them take it. If they stumble on an AI-generated explanation, it isn’t clear. Rewrite it yourself.
QA Checklist
Before you publish any AI-generated content, run through this six-part check.
Accuracy and Verifiability
- The “Ctrl+F” test: Can every factual claim be found in your original documents?
- Hallucination test: Ensure the AI has not invented any statistics, dates, or figures.
- Link verification: Click every link. Make sure each leads to a live, relevant page, not a dead end.
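A first-pass version of the “Ctrl+F” test can be automated: flag any draft sentence whose content words mostly fail to appear in the source pack. This is only a screening heuristic, not a substitute for reading, and the 0.7 overlap threshold below is an arbitrary starting point you would tune on your own material.

```python
# Flag draft sentences that are poorly grounded in the source pack.
import re

def content_words(text: str) -> set[str]:
    """Lowercase words longer than 3 characters (rough content words)."""
    return {w for w in re.findall(r"[a-z0-9]+", text.lower()) if len(w) > 3}

def untraceable_sentences(draft: str, source: str,
                          threshold: float = 0.7) -> list[str]:
    """Return sentences whose word overlap with the source is below threshold."""
    src_words = content_words(source)
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", draft.strip()):
        words = content_words(sentence)
        if not words:
            continue
        overlap = len(words & src_words) / len(words)
        if overlap < threshold:
            flagged.append(sentence)
    return flagged

source = "Employees must complete fire safety training annually."
draft = ("Employees must complete fire safety training annually. "
         "The moon landing occurred in 1969.")
print(untraceable_sentences(draft, source))
```

Anything this flags goes back to the human reviewer: either the AI invented it, or the source pack is missing a document that should have been included.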
Alignment with Objectives
- Fluff filter: Did the AI add “nice to know” history or background? If it doesn’t support the learning objective, delete it.
- Action oriented: Does the content teach the learner how to do the work, or just about the work?
- Audience fit: Is the complexity level right? (e.g., don’t explain “what a browser is” to application developers).
Assessment Quality
- Distractor check: In multiple-choice questions, are the wrong answers plausible? AI often writes obviously wrong distractors that make questions too easy.
- Answer key: Is the correct answer unambiguously correct based on your source material?
- Feedback: Did the AI generate useful feedback explaining why an answer is wrong?
Cognitive Load and Clarity
- Brevity: Are paragraphs short (3-4 sentences)? AI tends to be verbose.
- Active voice: Did the AI use passive voice (e.g., “The form must be signed”)? Change it to active (e.g., “Sign the form”).
- Formatting: Are lists used instead of dense blocks of text?
Accessibility Basics
- Alt text: If the AI suggested images, are the descriptions meaningful and useful for screen readers?
- Reading level: Is the language simple enough? (An eighth-grade reading level is a common rule of thumb for compliance content.)
- Contrast: If the AI generated slide layouts, is the text readable against the background?
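The reading-level check can be scripted with the Flesch-Kincaid grade formula, 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. The syllable counter below is a crude vowel-group heuristic, so treat the result as a screening signal, not an exact grade.

```python
# A quick Flesch-Kincaid grade-level screen for AI-generated drafts.
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as vowel groups, with a silent-e adjustment."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    count = len(groups)
    if word.lower().endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level of a non-empty text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)

print(round(fk_grade("The cat sat on the mat."), 2))
```

If a module scores well above grade 8, hand the dense passages back to the AI with a “rewrite at an eighth-grade reading level” prompt, then re-check.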
Tone, Inclusion, Policy, and Compliance
- Bias scan: Check pronouns and scenarios. Did the AI make the manager “he” and the assistant “she”?
- Voice: Does it sound robotic or human? Add warmth and empathy where needed.
- Safety and legal: Make sure no blanket promises are made (e.g., “Follow this, and you’ll never get hurt”) that could create liability.
Two Small Examples
Example A: SME Transcript
- Context: You have a 45-minute recording of a Product Manager explaining a new software feature.
- Bad way: You paste everything in and say, “Write a script.”
- Result: The AI includes the Product Manager’s complaints about the engineering team and misses a critical log-in step.
- A safe way:
- Clean: Manually remove the complaints from the transcript.
- Prompt: “Act as a technical writer. Based only on the attached transcript, create a step-by-step log-in checklist. Format it as a numbered list.”
- QA: Verify the steps against the actual software sandbox. You notice the AI missed “Click Save.” You add it yourself.
Example B: Question Generator
- Context: You need questions for a Code of Conduct course.
- Bad way: “Write 5 difficult questions about conduct.”
- Result: The AI asks philosophical questions like “What is the nature of reality?” that have nothing to do with your company policy.
- A safe way:
- Prompt: “Using the attached ‘Gifts and Strangers’ PDF, write 3 scenario-based questions. The learner must decide whether or not to accept the gift. For each correct answer, cite the specific paragraph from the PDF.”
- QA: You verify the citations and make sure the scenarios feel realistic, not cartoonish.
Rating: Did It Work?
Creation speed is a vanity metric. You need to measure effectiveness.
What You Can Track
- Question reliability: Look at the item statistics. If 100% of learners get Question 3 right, it’s too easy. If 0% get it right, the AI wrote a confusing question, or the content didn’t cover it.
- Confidence scores: Ask learners, “How confident are you in applying this skill?” If confidence is low, the AI-generated content may not be landing.
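The item-statistics check boils down to a difficulty index: the proportion of learners answering each question correctly. The response data below is made up for illustration, and the 0.95/0.2 cut-offs are common rules of thumb rather than fixed standards.

```python
# Compute each question's difficulty index (proportion correct).
# responses[learner] = list of 1 (correct) / 0 (incorrect) per question.
responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 0],
    [1, 1, 0],
]

def difficulty_indices(responses: list[list[int]]) -> list[float]:
    """Proportion of learners answering each question correctly."""
    n_learners = len(responses)
    return [sum(col) / n_learners for col in zip(*responses)]

for i, p in enumerate(difficulty_indices(responses), start=1):
    flag = "too easy" if p >= 0.95 else "review item" if p <= 0.2 else "ok"
    print(f"Q{i}: p={p:.2f} ({flag})")
```

Here Q1 (p = 1.00) is too easy and Q3 (p = 0.00) needs review: either the AI wrote a confusing question or the content never covered it.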
A/B testing
If you want to prove that this works, do an experiment.
- Group A: Takes the old, human-written legacy course.
- Group B: Takes the new AI-assisted (and human-validated) course.
Compare completion time and assessment scores. If Group B learns the same amount in half the time, your workflow is working.
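The comparison can be as simple as averaging each group’s completion time and quiz score. The numbers below are illustrative only; with a real pilot you would also want a significance test before claiming a win.

```python
# A back-of-the-envelope A/B comparison using made-up pilot data.
from statistics import mean

group_a = {"minutes": [42, 38, 45, 40], "scores": [78, 82, 75, 80]}  # legacy course
group_b = {"minutes": [21, 19, 24, 20], "scores": [79, 81, 77, 83]}  # AI-assisted course

def summarize(group: dict) -> tuple[float, float]:
    """Return (mean completion time, mean quiz score) for a group."""
    return mean(group["minutes"]), mean(group["scores"])

time_a, score_a = summarize(group_a)
time_b, score_b = summarize(group_b)
print(f"Legacy:      {time_a:.1f} min, score {score_a:.1f}")
print(f"AI-assisted: {time_b:.1f} min, score {score_b:.1f}")
# Comparable scores in roughly half the time = the workflow is working.
```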
Closing Thoughts
AI is a tool, like a calculator or spell checker. You wouldn’t publish a financial report without double-checking the numbers you fed the calculator. Don’t publish training without double-checking the AI’s output. The goal is not to let AI do the work. The goal is to let AI do the boring parts (summarizing, formatting, editing) so you can focus on the high-value work: strategy, context, and human interaction.



