Trying to Avoid Defeatism
Acknowledging and Responding to A.I. in Schools
I taught my first Data Science class to high school students in the 2022-2023 school year. The course has always been designed to be multidisciplinary. Students complete coding projects, make presentations, do math worksheets, and write a research paper. I designed the research paper to be about Artificial Intelligence. I wanted students to think critically about the “what”, “how”, and “why” of A.I.
Little did I know, OpenAI would release ChatGPT’s first public model - built on GPT-3.5 - in the midst of that project on November 30, 2022. I don’t know if it was serendipity, irony, or something else entirely, but I will always remember the electricity that surrounded that moment. Students were excited, enamored, enthralled by this new technology. Teachers were fearful. Some of my colleagues sounded alarm bells reminiscent of the way calculators and cell phones were anathema to my own schoolteachers. I have a faint memory of a student asking me, “What if I wrote this paper using ChatGPT?” I laughed it off.
Fast forward to May 23, 2025, when my school hosted its first AI Day with events and speakers. I sat on a panel fielding questions from the student organizers of the event about how A.I. has changed my teaching practice (a list of questions I would later learn was written by - you guessed it - ChatGPT). At one point, I asked all of the adults in the room to shut their eyes before asking students to raise their hands if they had used ChatGPT to help them with a school assignment. Every hand went up. Then, I asked them to keep their hands up if they felt like any of the work they got help with crossed the line into cheating or academic dishonesty. About 90% of the hands stayed up.
A Weird Analogy
More and more, I’ve come to think about A.I. policies in schools in the same way that I think about Sex Education or Health. We know that kids are going to engage in risky behaviors, so they should be informed about the dangers, the laws, etc. Kids are using A.I. While there aren’t the same health implications as unprotected sex or illicit drug use, there are meaningful academic and developmental considerations that make A.I. education important, especially at this moment.
I think that all schools should have an explicit curriculum for how students can use A.I. safely and appropriately. Just like with Sex Ed, I don’t think that an “abstinence-only” approach will really work here. In other words, banning A.I. from student use is a surefire way to guarantee that students use A.I. poorly. The first step I took towards this curriculum was opening up a conversation with my data science class about the ways they have used A.I. and which uses may have crossed a line. We worked together to come up with a list of uses that students thought should be acceptable most of the time, which included items like these:
- Doing background research
- Making study guides, flashcards, and other preparation resources
- Getting alternative explanations for topics
- Summarizing or reorganizing notes taken in class
- Checking work for errors and clarity
- Improving the efficiency of coding solutions
I really tried to push students to be thoughtful about their responses here. I also used this activity as an opportunity to talk about ethics in general. We discussed questions like, “What if I have a teacher who ignores the acceptable use policy and simply bans A.I. when other teachers allow their students to use it within the policy?” This was a terrific opportunity to explicitly teach students about the idea of authority (i.e., that a teacher absolutely can operate outside of the acceptable use policy and ban A.I., and that student voice matters when giving that teacher feedback in respectful ways). I also pushed students to think about why a teacher might want to ban A.I. on their assignments. It doesn’t take a fully-formed adult to realize that a ban might actually be in students’ best interests. We are in school, after all, to develop our own brains.
Then, we came up with a list of unacceptable uses of A.I. for students and - crucially - for teachers. This op-ed from the New York Times illustrates the importance of consistent expectations around A.I. in the classroom. Just as I want to know how my students are accessing the technology, students also want to know how it might be shaping their learning materials or assessments. A big part of fostering a thriving learning community comes from making mutual agreements - we’re all on the same team, and we should all be able to commit to mutually agreeable policies around A.I. For teachers, a list of unacceptable uses of A.I. might include:
- Grading student work
- Writing part or all of a high-stakes assessment
- Using A.I. generated content without proofreading
Going back to the Sex Ed analogy, Health teachers make these kinds of mutual commitments regularly. Students are encouraged to ask questions, and teachers commit to answering those questions without shame. Students are expected to take their Health classes seriously and with maturity, and teachers commit to treating student concerns with similar weight. The risk that irresponsible A.I. use poses to student development and academic honesty may not be as pressing as drug use or risky sexual behaviors, but it certainly merits a careful and frank approach based on our united goals for long-term student success.
The optimist in me thinks that a revolutionary technology has the potential to push our classrooms in as many positive directions as negative ones. We just need to face the problem without stigma. We need to really understand HOW and WHY students use it, and we need to agree upon policies that move us towards our mutual learning goals. And we need to hold each other accountable to those policies.