While I love using AI, and have found that it saves me time and helps me think through things more clearly, I will be the first to say that it has a dark side, especially for students. AI has made it fast and easy to create convincing fake or sexualized content. Apps can take a photo of a fully clothed person and “undress” it, or swap one person’s face onto another’s image, and the results can be used to harass, humiliate, or extort students. As educators, we need to prepare our students to navigate the AI world safely with age-appropriate AI safety instruction for grades 7-12. The goal isn’t to scare them, but to equip them, and us, with clear norms, quick responses, and compassionate support.
The Dangers of AI
Here are just a few of the scenarios that are currently possible with AI tools:
- A ninth-grader’s yearbook photo is pulled from Instagram, “undressed” with AI, and posted to a group chat.
- A senior’s face is swapped into a sexual video that is then used in an extortion scheme demanding gift cards.
- A coach’s voice is cloned from a game livestream and used to prank-call a student with hateful remarks.
These AI-facilitated incidents, and others like them, can escalate quickly; students need to know how to prevent them and what to do if they happen.
Teach “Privacy by Default”
Many of us, including students, share too much information with the online world. And while it’s fun to post photos of our new car or our redecorated bedroom, it may not be the smartest thing to do. Instead, we must learn to practice privacy by default, especially when using AI.
- Remember that anything shared with a public AI (including the vast majority of tools being used today) is not private and can be seen by others or used by the AI company to train its model.
- Keep identifying details separate. Never combine a full name, school, team, and daily routine in posts or prompts.
- Never upload classmates’ images, voices, or schoolwork to public AIs without explicit permission: a clear “yes” from them for this specific upload. Silence or “they won’t mind” doesn’t count.
- Model safe prompting. Use placeholders (e.g., “Student A”), blur faces, and avoid uploading photos or videos that could identify someone.
Consent in Practice: Pause, Ask, Support, Report
As adults, we must emphasize that editing or sharing someone’s image or voice without permission is a violation of consent, of community norms, and possibly of the law. If students encounter harmful content, teach them: don’t forward it, support the targeted student, and report it immediately to a trusted adult. Use plain language, visuals, translated materials, captions, and alt-text so English language learners (ELLs) and students with disabilities understand both the risks and their rights.
Show Students Ways to Check Information
Model quick checks for students without turning the class into detectives.
- Run a reverse image search (Google Lens/Bing) to find the earliest appearance of a photo.
- In Google Images, click the camera icon, then upload the image or paste its URL; or right-click an image in Chrome and choose “Search with Google Lens.”
- Google will return results that may include visually similar images, websites where the image appears, and more information about the image. (For staff who like to script this, a minimal sketch for launching a reverse image search appears after this list.)
- Scan for obvious AI artifacts (warped text, mismatched lighting or reflections).
- Look for Content Credentials (C2PA) labels that attach verifiable provenance to media. Explain the limits: a label, when present, can help confirm an image’s origin, but most media carries no label, so its absence proves nothing either way.
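For staff who want to script the first check, here is a minimal Python sketch that opens a reverse image search in the default browser for an image that is already hosted at a public URL. The “searchbyimage” URL pattern is an assumption based on how Google’s reverse image search has commonly been invoked and may change or redirect to Lens; the manual camera-icon route above remains the most reliable path.

```python
# Minimal sketch: open a reverse image search for an image hosted at a public URL.
# Assumes Google's "searchbyimage" URL pattern, which may change or redirect to
# Google Lens; the manual camera-icon route in Google Images is the surest option.
import urllib.parse
import webbrowser

def reverse_image_search(image_url: str) -> None:
    """Open the default browser on a reverse image search for image_url."""
    query = urllib.parse.urlencode({"image_url": image_url})
    webbrowser.open(f"https://www.google.com/searchbyimage?{query}")

if __name__ == "__main__":
    # Hypothetical example URL; replace it with the image you want to trace.
    reverse_image_search("https://example.com/yearbook-photo.jpg")
```

Nothing sensitive is uploaded here; the script only sends the image’s existing public address to the search engine, which is the same thing the camera-icon workflow does when you paste a URL.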
What Adults Should Do If Harm Happens
Stay calm, believe the student, and preserve evidence (URLs, usernames, timestamps, screenshots). Report on-platform and through school channels (administrator or counselor). For sexual images of minors, whether real or AI-generated, use:
- NCMEC CyberTipline to report online exploitation.
- Take It Down to create a hash (a digital fingerprint generated on your own device, so the image itself is never uploaded; see the short sketch at the end of this section) and help participating platforms locate and remove the image(s).
Other resources can also help:
- For adults (e.g., recent graduates or staff), StopNCII.org offers a similar hashing process with participating companies.
- The Cyber Civil Rights Initiative operates a 24/7 Image Abuse Helpline (1-844-878-2274) for support and practical guidance.
- For “sextortion” threats, file a report with the FBI’s Internet Crime Complaint Center (IC3) and follow its safety guidance.
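If “create a hash” sounds mysterious to parents or colleagues, the short Python sketch below shows the general idea: a file can be reduced to a digital fingerprint without the file itself ever being shared. This is only a concept illustration; Take It Down and StopNCII generate hashes with their own tools (typically more robust, perceptual methods), not with the simple checksum shown here.

```python
# Concept illustration only: reduce a file to a digital fingerprint (a "hash").
# Take It Down and StopNCII use their own hashing tools; the point is simply that
# the fingerprint, not the image, is what gets submitted and matched.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of the file at `path`."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

if __name__ == "__main__":
    # Hypothetical file name; any local image works for the demonstration.
    print(fingerprint("example_photo.jpg"))
```

The same file always yields the same fingerprint, while even a tiny edit produces a completely different one, which is why the matching services rely on sturdier, purpose-built hashing rather than a plain checksum like this.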
AI Safety Lessons You Can Use Tomorrow
Open with a short, calm mini-lesson: “Three Red Flags Before You Upload,” followed by two anonymized scenarios. Give students a decision check—Would I post this if my principal and family were in the room? Does this include someone else’s identity without consent? How would I feel if a post like this about me was uploaded? Then have them rewrite a risky prompt into a safe, consent-respecting version. Close by revisiting your campus norms (ask first, anonymize always, never forward harm, report immediately). Keep in mind that safety instruction works best when it is steady, routine, and judgment-free. Pair boundaries with empathy; remind students that mistakes can be fixed, and help is available.
An Important Call to Action
Talk with others in your grade level or department, or across your campus, about what you can do to help students remain safe while using AI tools. Consider scheduling a meeting with parents to share this information with them. AI safety for grades 7-12 isn’t just a good idea. It’s vital to the safety of our teenage students.