How to Co-Write an AI Policy with Students (and Why You Should)
Insights into students' perspectives on AI policy and writing with AI
What would happen if we stopped telling students how to use AI and, instead, asked them to help us decide?
That’s the question I posed in the first week of my fully online summer writing class. I’ve taught variations of first-year writing for nearly a decade, but this time, I decided to try something new: instead of enforcing a top-down AI use policy, I invited my students to co-create one, something I was sincerely nervous about.
But it worked better than I imagined.
In a time of mounting anxiety around academic integrity, AI misuse, and student disengagement, this simple activity flipped the script. Not only did students take the assignment seriously, but they also articulated a thoughtful, nuanced understanding of how AI can support (or sabotage) their learning.
In this post, I’ll share exactly how I structured the assignment, what students said, the policy we built together, and why I believe this model is one of the most promising ways to cultivate critical digital literacy, ethical engagement, and student agency.
Why I Let My Students Write the Rules
This wasn’t just a warm-and-fuzzy attempt at classroom democracy. The decision to co-create our AI policy was rooted in three pedagogical principles:
Contract grading: Our course is built on Asao Inoue’s labor-based grading contract model that emphasizes effort, reflection, and growth—not perfection. This makes space for risk-taking and collaboration, rather than competition for points.
Critical digital literacy: As AI tools like ChatGPT become more integrated into students’ daily lives, we can’t afford to teach around them. We need to teach through them—focusing on ethics, impact, and the habits of mind that will outlast any one platform, much as critical media literacy scholars have been teaching for decades.
Community agreements over surveillance: Rather than treating students as cheaters-in-waiting, I wanted to approach policymaking as a collaborative act of trust, one that aligned with our course values of compassion, equity, curiosity, and care.
So I ditched the lecture about “AI misuse” and launched week one with an open invitation.
The Assignment: Co-Creating an AI Policy
Here’s what I posted on Canvas to kick things off:
“Imagine you’re in a home studio writing group—not a university classroom. You're not trying to earn a grade, but to grow as a thinker, writer, and community member. In this space, we use tools, learn from each other, and figure things out together. One of those tools might be AI—but what should that look like?”
Students were directed to post their thoughts on a shared Padlet board and reply to at least two peers. I gave them a range of response options: short paragraphs, guidelines, questions, do/don’t lists, etc.
They also had guiding prompts like:
What types of AI use feel ethical, helpful, or appropriate in a writing course?
When does AI help you learn—and when might it hinder your growth?
How can we ensure AI use doesn’t undermine our critical thinking skills?
Before they participated, students were required to read our full grading contract (based on Inoue’s fully accessible Grading Contract Template), which frames the classroom as a cooperative learning environment grounded in trust and mutual responsibility. This helped set the tone: their words would help shape real policy, not just check a box for participation.
What Students Said
Over the course of the first week, dozens of thoughtful, surprising, and even funny responses rolled in. Some paraphrases:
“AI can be a tool, not a crutch. If it does the hard part for you, you won’t grow.”
“It’s like using Grammarly but on steroids. I think it’s okay as long as you’re still learning from it.”
“What’s the point of taking a writing class if you’re not going to try writing?”
“AI is helpful for outlines and ideas, but not for actual voice.”
“We need to be transparent. If you used AI a lot, say so.”
Several students admitted they’d experimented with AI before but hadn’t felt comfortable asking what “counted” as cheating. Others expressed relief that we were even having this conversation. Across responses, a clear agreement emerged:
General Agreement:
AI can be a valuable support tool for learning when used intentionally and ethically. It should assist—not replace—your thinking, voice, or writing process.
The Policy We Wrote Together
Based on their responses, I compiled and shared our first version of a Class-Designed AI Policy. Here's a condensed version:
➤ Ethical Uses of AI
Brainstorming ideas or generating writing prompts
Example: Asking AI to help come up with hooks or thesis starters.
Clarifying confusing concepts or assignments
Example: Using AI to simplify a reading or explain a complex term.
Grammar and sentence-level revision
Example: Asking AI to rephrase an awkward sentence or check for run-ons.
Organizing ideas or creating outlines
Example: Getting suggestions for how to structure your argument.
Feedback and growth-oriented suggestions
Example: Asking AI to evaluate clarity or tone and suggest improvements.
➤ Inappropriate Uses of AI
Writing full essays or paragraphs
Answering quizzes or graded questions
Replacing your original voice
Using AI heavily without disclosing it
➤ Transparency Expectation
“If you use AI in a significant way, include a short note in your author’s memo or margin comments.”
Example: “I used ChatGPT to help brainstorm my topic and reword the intro paragraph.”
➤ Why This Matters
Students emphasized that learning requires effort. If AI does all the work, they lose out. Voice, growth, and originality were recurring values.
➤ Transparency and Opportunities for Revisions
At the end of our AI document, I added the following:
“You can find the full version of the discussion here (linked out to PDF exported Padlet, not included in this Substack for student privacy). If for some reason you feel your views were not represented in this official document, please feel free to reach out to me at [school email]. This policy was designed to represent a wide range of ideas while still maintaining a general consensus around AI and its uses in this particular class.”
Why This Works
While I half expected this activity to be nothing more than a “feel-good” exercise, I was delighted to see how deeply practical it was. Here’s why I’ll keep doing it:
1. It builds buy-in (I hope).
I don’t have any data to confirm this—yet—but I hope students are more likely to follow this AI policy because they helped write it.
2. It invites ethical reasoning.
Instead of memorizing rules, students practiced critical thinking, negotiation, and values clarification—essential skills for digital literacy.
3. It reframes “academic integrity.”
Rather than fear-based language or surveillance, this approach fosters responsibility and reflection. It sends the message: we trust you to think about your choices, not just follow commands.
4. It humanizes AI.
I did find that students were more curious—and less anxious—about AI tools after this conversation. We created space to explore, not just comply.
What I’d Change Next Time
Incorporate examples of gray areas: Students wanted to know more about “in-between” cases, like using AI to reword a single sentence or help with citations.
Bring in student voices from other classes or schools as comparison points (maybe via anonymous quotes or guest posts).
Track changes over time: I’d love to revisit the Padlet mid-semester to see if their views shift.
Try This in Your Own Classroom
Want to run your own version of this activity? Here’s a plug-and-play version:
Quick Start Guide:
Set the tone with a message about learning, trust, and shared responsibility.
Use a collaborative tool (Padlet or Google Doc) for students to post.
Offer flexible formats—lists or questions.
Provide guiding prompts (feel free to steal mine above).
Compile responses into a living policy and revisit it throughout the term.
Hyperlink the policy in your syllabus or LMS and give students credit for their work.
Let me know if you'd like a downloadable version of the template I used or a sample Padlet prompt. I’m happy to share!
Final Thoughts: Policy as Pedagogy
This week on LinkedIn I saw a post from Maurie Beasley, M.Ed that read:
AI didn’t kill deep reading.
We did. With:
- Packets instead of exploration
- Five-paragraph essays instead of ideas
- “Did you cite the author’s middle name?” instead of “What did this story mean to you?”
Now, students use AI to “skim” or write papers faster... and everyone’s surprised? (contain your shock)... Of course they are. We never taught them to read like thinkers. We taught them to pass.
Ultimately, this AI-policy activity wasn’t about AI. It was about values. When we invite students to co-create the boundaries of our classroom, we teach them something far more important than citation rules: that they are ethical agents and critical thinkers, capable of shaping the culture of learning we build together.
As AI evolves, our policies will, too. But what should remain constant is our commitment to shared dialogue, student voice, and the collective work of thinking through these tools—not just policing them.
I’d love to hear from you. Are you thinking about co-writing policies with students this fall? How are you approaching AI in your classroom?
Let’s keep the conversation going.
—
Bonus: Want a template for your syllabus or LMS? Drop a comment or reply and I’ll send you a copy of the full Padlet prompt + class policy document I used.